text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Benchmarking Meaning Representations in Neural Semantic Parsing
Meaning representation is an important component of semantic parsing. Although researchers have designed a lot of meaning representations, recent work focuses on only a few of them. Thus, the impact of meaning representation on semantic parsing is less understood. Furthermore, existing work’s performance is often not comprehensively evaluated due to the lack of readily-available execution engines. Upon identifying these gaps, we propose U NIMER , a new unified benchmark on meaning representations, by integrating existing semantic parsing datasets, completing the missing logical forms, and implementing the missing execution engines. The resulting unified benchmark contains the complete enumeration of logical forms and execution engines over three datasets × four meaning representations. A thorough experimental study on U NIMER reveals that neural semantic parsing approaches exhibit notably different performance when they are trained to generate different meaning representations. Also, program alias and grammar rules heavily impact the performance of different meaning representations. Our benchmark, execution engines and implementation can be found on: https:
Meaning representation is an important component of semantic parsing. Although researchers have designed a lot of meaning representations, recent work focuses on only a few of them. Thus, the impact of meaning representation on semantic parsing is less understood. Furthermore, existing work's performance is often not comprehensively evaluated due to the lack of readily-available execution engines. Upon identifying these gaps, we propose UNIMER, a new unified benchmark on meaning representations, by integrating existing semantic parsing datasets, completing the missing logical forms, and implementing the missing execution engines. The resulting unified benchmark contains the complete enumeration of logical forms and execution engines over three datasets × four meaning representations. A thorough experimental study on UNIMER reveals that neural semantic parsing approaches exhibit notably different performance when they are trained to generate different meaning representations. Also, program alias and grammar rules heavily impact the performance of different meaning representations. Our benchmark, execution engines and implementation can be found on: https: //github.com/JasperGuo/Unimer.
Introduction
A remarkable vision of artificial intelligence is to enable human interactions with machines through natural language. Semantic parsing has emerged as a key technology for achieving this goal. In general, semantic parsing aims to transform a natural language utterance into a logic form, i.e., a formal, machine-interpretable meaning representation (MR) (Zelle and Mooney, 1996;Dahl et al., 1994). 1 Thanks to the recent development * Work done during an internship at Microsoft Research. 1 In this paper, we focus on grounded semantic parsing, where meaning representations are grounded to specific knowl- of neural networks techniques, significant improvements have been made in semantic parsing performance (Jia and Liang, 2016;Yin and Neubig, 2017;Dong and Lapata, 2018;Shaw et al., 2019). Despite the advancement in performance, we identify three important biases in existing work's evaluation methodology. First, although multiple MRs are proposed, most existing work is evaluated on only one or two of them, leading to less comprehensive or even unfair comparisons. Table 1 shows the state-of-the-art performance of semantic parsing on different dataset × MR combinations, where the rows are the MRs and the columns are the datasets. We can observe that while Lambda Calculus is intensively studied, the other MRs have not been sufficiently studied. This biased evaluation is partly caused by the absence of target logic forms in the missing cells. Second, existing work often compares the performance on different MRs directly (Sun et al., 2020;Shaw et al., 2019;Chen et al., 2020) without considering the confounding edge bases, instead of ungrounded semantic parsing. role that MR plays in the performance, 2 causing unfair comparisons and misleading conclusions. Third, a more comprehensive evaluation methodology would consider both the exact-match accuracy and the execution-match accuracy, because two logic forms can be semantically equivalent yet do not match precisely in their surface forms. However, as shown in Table 1, most existing work is only evaluated with the exact-match accuracy. This bias is potentially due to the fact that execution engines are not available in six out of the twelve dataset × MR combinations.
Upon identifying the three biases, in this paper, we propose UNIMER, a new unified benchmark, by unifying four publicly available MRs in three of the most popular semantic parsing datasets: Geo, ATIS and Jobs. First, for each natural language utterance in the three datasets, UNIMER provides annotated logical forms in four different MRs, including Prolog, Lambda Calculus, FunQL, and SQL. We identify that annotated logical forms in some MR × dataset combinations are missing. As a result, we complete the benchmark by semi-automatically translating logical forms from one MR to another. Second, we implement six missing execution engines for MRs so that the execution-match accuracy can be readily computed for all the dataset × MR combinations. Both the logical forms and their execution results are manually checked to ensure the correctness of annotations and execution engines.
After constructing UNIMER, to obtain a preliminary understanding on the impact of MRs on semantic parsing, we empirically study the performance of MRs on UNIMER by using two widely-used neural semantic parsing approaches (a seq2seq model (Dong and Lapata, 2016;Jia and Liang, 2016) and a grammar-based neural model (Yin and Neubig, 2017)), under the supervised learning setting.
In addition to the empirical study above, we further analyze the impact of two operations, i.e., program alias and grammar rules, to understand how they affect different MRs differently. First, Program alias. A semantically equivalent program may have many syntactically different forms. As a result, if the training and testing data have a difference in their syntactic distributions of logic forms, a naive maximum likelihood estimation can suffer from this difference because it fails to capture the semantic equivalence (Bunel et al., 2018). As different MRs have different degrees of syntactic difference, they suffer from this problem differently. Second, Grammar rules. Grammar-based neural models can guarantee that the generated program is syntactically correct (Yin and Neubig, 2017;Wang et al., 2020;Sun et al., 2020). For a given set of logical forms in an MR, there exist multiple sets of grammar rules to model them. We observe that when the grammar-based neural model is trained with different sets of grammar rules, it exhibits a notable performance discrepancy. This finding alias with the one made in traditional semantic parsers (Kate, 2008) that properly transforming grammar rules can lead to better performance of a traditional semantic parser.
In summary, this paper makes the following main contributions: • We propose UNIMER, a new unified benchmark on meaning representations, by integrating and completing semantic parsing datasets in three datasets × four MRs; we also implement six execution engines so that executionmatch accuracy can be evaluated in all cases; • We provide the baseline results for two widely used neural semantic parsing approaches on our benchmark, and we conduct an empirical study to understand the impact that program alias and grammar rule plays on the performance of neural semantic parsing;
Preliminaries
In this section, we provide a brief description of the MRs and neural semantic parsing approaches that we study in the paper.
Meaning Representations
We investigate four MRs in this paper, namely, Prolog, Lambda Calculus, FunQL, and SQL, because they are widely used in semantic parsing and we can obtain their corresponding labeled data in at least one semantic parsing domain. We regard Prolog, Lambda Calculus, and FunQL as domainspecific MRs, since the predicates defined in them are specific for a given domain. Consequently, the execution engines of domain-specific MRs need to be significantly customized for different domains, requiring plenty of manual efforts. In contrast, SQL is a domain-general MR for querying relational
Prolog has long been used to represent the meaning of natural language (Zelle and Mooney, 1996;Kate and Mooney, 2006). Prolog includes first-order logical forms, augmented with some higher-order predicates, e.g., most, to handle issues such as quantification and aggregation. Take the first logical form in Tables 2 as an example. The uppercase characters denote variables, and the predicates in the logical form specify the constraints between variables. In this case, character A denotes a variable, and it is required to be a flight, and the flight should depart tomorrow morning from Pittsburgh to Atlanta. The outer predicate answer indicates the variable whose binding is of interest. One major benefit of Prolog-style MRs is that they allow predicates to be introduced in the order where they are actually named in the utterance. For instance, the order of predicates in the logical form strictly follows their mentions in the natural language utterance. Lambda Calculus is a formal system to express computation. It can represent all first-order logic and it naturally supports higher-order functions. It represents the meanings of natural language with logical expressions that contain constants, quantifiers, logical connectors, and lambda abstract. These properties make it prevalent in semantic parsing. Consider the second logical form in Table 2. It defines an expression that takes an entity A as input and returns true if the entity satisfies the constraints defined in the expressions. Lambda Calculus can be typed, allowing type checking during generation and execution. FunQL, abbreviated for Functional Query Language, is a variable-free language (Kate et al., 2005). It abstracts away variables and encodes compositionality via its nested function-argument structure, making it easier to implement an efficient execution engine for FunQL. Concretely, unlike Prolog and Lambda Calculus, predicates in FunQL take a set of entities as input and return another set of entities that meet certain requirements. Considering the third logical form in Table 2, the predicate during day(period(morning)) returns a set of flights that depart in the morning. With this function-argument structure, FunQL can directly return the entities of interest. SQL is a popular relational database query language. Since it is domain-agnostic and has wellestablished execution engines, the subtask of semantic parsing, Text-to-SQL, has received a lot of interests. Compared with domain-specific MRs, SQL cannot encapsulate too much domain prior knowledge in its expressions. As shown in Table 2, to query flights that depart tomorrow, one needs to specify the concrete values of year, month, and day in the SQL query. However, these values are not explicitly mentioned in the utterance and may even change over time.
It is important to note that although these MRs are all expressive enough to represent all meanings in some domains, they are not equivalent in terms of their general expressiveness. For example, FunQL is less expressive than Lambda Calculus in general, partially due to the elimination of variables and quantifiers.
Neural Semantic Parsing Approaches
During the last few decades, researchers have proposed different approaches for semantic parsing. Most state-of-the-art approaches are based on neural models and formulate the semantic parsing problem as a sequence transduction problem. Due to the generality of sequence transduction, these ap- proaches can be trained to generate any MRs. In this work, without loss of generality, we benchmark MRs by evaluating the seq2seq model (Dong and Lapata, 2016;Jia and Liang, 2016) and the grammar-based model (Yin and Neubig, 2017) under the supervised learning setting. We select the two models because most neural approaches are designed based on them. Seq2Seq Model. Dong and Lapata (2016) and Jia and Liang (2016) formulated the semantic parsing problem as a neural machine translation problem and employed the sequence-to-sequence model (Sutskever et al., 2014) to solve it. As illustrated in Figure 1a, the encoder takes an utterance as input and outputs a distributed representation for each word in the utterance. A decoder then sequentially predicts words in the logical form. When augmented with the attention mechanism (Bahdanau et al., 2014;Luong et al., 2015), the decoder can better utilize the encoder's information to predict logical forms. Moreover, to address the problem caused by the long tail distribution of entities in logical forms, Jia and Liang (2016) proposed an attention-based copying mechanism. That is, at each time step, the decoder takes one of two types of actions, one to predict a word from the vocabulary of logical forms and the other to copy a word from the input utterance.
Grammar-based Model. By treating a logical form as a sequence of words, the seq2seq model cannot fully utilize the property that logical forms are well-formed and must conform to certain grammars of an MR. To bridge this gap, Yin and Neubig (2017) proposed a grammar-based decoder that outputs a sequence of grammar rules instead of words, as presented in Figure 1b. The decoded grammar rules can deterministically generate a valid abstract syntax tree (AST) of a logical form. In this way, the generated logical form is guaranteed to be syntactically correct. This property makes it widely used in a lot of code generation and semantic parsing tasks (Sun et al., 2020;Wang et al., 2020;Bogin et al., 2019). The grammar-based decoder can also be equipped with the attention-based copying mechanism to address the long-tail distribution problem.
Benchmark
To provide an infrastructure for exploring MRs, we construct UNIMER, a unified benchmark on MRs, based on existing semantic parsing datasets. Currently, UNIMER covers three domains, namely Geo, ATIS, and Job, each of which has been extensively studied in previous work and has annotated logical forms for at least two MRs. All natural language utterances in UNIMER are written in English.
Geo focuses on querying a database of U.S. geography with natural language. To solve the problem, Zelle and Mooney ( Since not all the four MRs that we introduce in Section 2.1 are used in these three domains, we semi-automatically translate logical forms in one MR into another. This effort enables researchers to explore MRs in more domains and make a fair comparison among them. Take the translation of Lambda Calculus to FunQL in ATIS as an example. We first design predicates for FunQL based on those defined in Lambda Calculus and implement an execution engine for FunQL. Then, we translate logical forms in Lambda Calculus to FunQL and compare the execution results to verify the correctness of the translation. In this process, we find that there is no ready-to-use Lambda Calculus execution engine for the three domains. Hence, we implement one for each domain. These engines, on the one hand, enable evaluations of semantic parsing approaches with both exact-match accuracy and execution-match accuracy. On the other hand, they enable exploration of weakly supervised semantic parsing with Lambda Calculus. In addition, we find some annotation mistakes in logical forms and several bugs in existing execution engines of Prolog and FunQL. By correcting the mistakes and fixing the bugs in the engines, we create a refined version of these datasets. Section A.1 in the supplementary material provides more details about the construction process.
We plan to cover more domains and more MRs in UNIMER. We have made UNIMER along with the execution engines publicly available. 3 We believe that UNIMER can provide fertile soil for exploring MRs and addressing challenges in semantic parsing.
Experimental Setup
Based on UNIMER, we take the first attempt to study the characteristics of different MRs and their impact on neural semantic parsing.
Experimental Design
Meaning Representation Comparison. To understand the impact of MRs on neural semantic parsing, we first experiment with the two neural approaches described in Section 2.2 on UNIMER, and we compare the resulting performance of different MRs with two metrics: exact-match accuracy (a logical form is regarded as correct if it is syntactically identical to the gold standard), 4 and execution-match accuracy (regarded as correct if a logical form's execution result is identical to that of the gold standard). 5 Program Alias. To explore the effect of program alias, we replace a different proportion of logical forms in a training set with their aliases (semantically equivalent but syntactically different logical forms), and we re-train the neural approaches to quantify its effect. To search for aliases of a logical form, we first derive multiple transformation rules for each MR. Then, we apply these rules to the logical form to get its aliases and randomly sample one. We compare the execution results of the resulting logical forms to ensure their equivalence in semantics. Table 3 presents three transformation rules for SQL. We provide a detailed explanation of transformation rules and examples for each MR in Section A.3 of the supplementary material. Grammar Rules. To understand the grammar rules' impact on grammar-based models, we provide two sets of grammar rules for each MR. Each set of rules can cover all the logical forms in the three domains. We compare the performance of models trained with different sets of rules. Specifically, Wong and Mooney (2006) and Wong and Mooney (2007) have induced a set of grammar rules for Prolog and FunQL in Geo. We directly use them in Geo and extend them to support logical forms in ATIS and Job. As for SQL, Bogin et al.
(2019) have induced a set of rules for SQL in the Spider benchmark, and we adapt it to support the SQL queries in the three domains that we study.
When it comes to Lambda Calculus, we use the one induced by Yin and Neubig (2018). For comparison, we also manually induce another set of grammar rules for the four MRs. Section A.4 in the supplementary material provides definitions of all the grammar rules.
Implementations
We implement each approach with the Al-lenNLP (Gardner et al., 2018) and PyTorch (Paszke et al., 2019) frameworks. To make a fair comparison, we tune the hyper-parameters of approaches for each MR on the development set or through cross-validation on the training set, with the NNI platform. 6 Due to the limited number of test data in each domain, we run each approach five times and take the average number. Section A.2 in the supplementary material provides the search space of hyper-parameters for each approach and the preprocessing procedures of logical forms. Multiple neural semantic parsing approaches (Dong and Lapata, 2016;Iyer et al., 2017;Rabinovich et al., 2017) adopt the data anonymization techniques to replace entities in utterances with placeholders. However, the techniques are usually ad-hoc and specific for domains and MRs, and they sometimes require manual efforts to resolve conflicts (Finegan-Dollak et al., 2018). Hence, we do not apply data anonymization to avoid bias. Table 4 presents our experimental results on UNIMER. Since we do not use data anonymization techniques, the performance is generally lower than that shown in Table 1 and Table 8, but the performance is on par with the numbers reported in ablation studies of previous work (Dong and Lapata, 2016;Jia and Liang, 2016;Finegan-Dollak et al., 2018). We can make the following three observations from the table.
Meaning Representation Comparison
First, neural approaches exhibit notably different performance when they are trained to generate different MRs. The difference can vary by as much as 20% in both exact-match and execution-match metrics. This finding tells us that an apple-to-apple comparison is extremely important when comparing two neural semantic parsing approaches. However, we notice that some papers (Sun et al., 2020; 6 https://github.com/microsoft/nni Second, domain-specific MRs (Prolog, Lambda Calculus, and FunQL) tend to outperform SQL (domain-general) by a large margin. For example, in Geo, the execution-match accuracy of FunQL is substantially higher than that of SQL in all approaches. This result is expected because a lot of domain knowledge is injected into domain-specific MRs. Consider the logical forms in Table 2. There is a predicate tomorrow in all three domainspecific MRs, and this predicate can directly align to the description in the utterance. However, one needs to explicitly express the concrete date values in the SQL query; this requirement can be a heavy burden for neural approaches, especially when the values will change over time. In addition, a recent study (Finegan-Dollak et al., 2018) in Text-to-SQL has shown that domain-specific MRs are more robust against generating never-seen logical forms than SQL, because their surface forms are much closer to natural language.
Third, among all the domain-specific MRs, FunQL tends to outperform the others in neural approaches. In Geo, FunQL outperforms the other MRs in both metrics by a large margin. In Job, the grammar-based (w/ copy) model trained with FunQL achieves the state-of-the-art performance. One possible reason is that FunQL is more compact than the other MRs, due to its elimination of variables and quantifiers. Figure 2 shows box plots about the number of grammars rules in the AST of a logical form. We can observe that while FunQL has almost the same number of grammar rules with the other MRs (Table 5), it has much fewer grammar rules involved in a logical form than the others on average. This statistic is crucial for neural semantic parsing approaches as it directly determines the number of decoding steps in decoders. A similar reason can be used to explain that the performance on SQL is lower than others. As Figure 2 shows, SQL has larger medians of the number of grammar rules, and it also has much more outliers than domain-specific MRs. It makes neural models more challenging to learn.
Interestingly, this finding contradicts the finding in CCG-based semantic parsing approaches (Kwiatkowksi et al., 2010), in which they show that Lambda Calculus outperforms FunQL in the Geo domain. The reason is that compared with Lambda Calculus, the deeply nested structure of FunQL makes it more challenging to learn a highquality CCG lexicon, which is crucial for CCG parsing. In contrast, neural approaches do not rely on a lexicon and directly learn a mapping between source and target languages. From the figure, we have two main observations. First, in both domains, as more logical forms are replaced, the performance of all MRs declines gradually. Among all the MRs, the performance of Prolog declines more seriously than the others in both domains. In other words, it suffers from the program alias problem more seriously. The trends of Lambda Calculus and FunQL in ATIS are impressive, as their performance decreases only slowly. Selecting an MR with the less effect of program alias could be a better choice when we need to develop a semantic parser for a new domain, because we can save many efforts in defining annotation protocols and checking consistency, which could be extremely tedious. Second, the exact-match accuracy declines more seriously than execution-match. Table 6 provides the relative declines in both exact-match and execution-match metrics when 25% of logical forms are replaced. We find that the exact-match accuracy declines more seriously than execution-match, indicating that under the effect of program alias, exact-match may not be suitable as it may massively underestimate the performance. At last, given a large number of semantically equivalent logical forms, it would be valuable to explore whether they can be leveraged to improve semantic parsing (Zhong et al., 2018). Table 7 presents the experimental results of the grammar-based (w/o copy) model trained with different sets of grammar rules. As the table shows, there is a notable performance discrepancy between different sets of rules. For example, in ATIS, we can observe 2.5% absolute improvement when the model is trained with G2 for Lambda Calculus. Moreover, G2 is not always better than G1. While the model trained with G2 for Prolog outperforms G1 in Geo, it lags behind G1 in ATIS. This observation motivates us to consider what factors contribute to the discrepancy. We had tried to explore the search space of logical forms defined by different grammar rules and the distribution drift between the AST of logical forms in the training and test set. However, the exploration results cannot consistently explain the performance discrepancy. As our important future work, we would explore whether or not the discrepancy is caused by better alignments between utterances and grammar rules. Intuitively, it would be easier for decoders to learn the set of grammar rules having better alignments with utterances.
Grammar Rules
We can learn from these results that similar to traditional semantic parsers, properly transforming grammar rules for MRs can also lead to better performance in neural approaches. Therefore, grammar rules should be considered as a very important hyper-parameter of grammar-based models, and it is recommended to mention the used grammar rules in research papers clearly. Extrinsic parser evaluation. Another line of research that is closely related to our work is extrinsic parser evaluation. Miyao et al. (2008) benchmarked different syntactic parsers and their representations, including dependency parsing, phrase structure parsing, and deep parsing, and evaluated their impact on an information extraction system. Oepen et al. (2017) provided a flexible infrastructure, including data and software, to estimate the relative utility of different types of dependency representations for a variety of downstream applications that rely on an analysis of grammatical structure of natural language. There has not been work on benchmarking MRs for grounded semantic parsing in neural approaches, to the best of our knowledge.
Weakly supervised semantic parsing. In this paper, we focus on supervised learning for semantic parsing, where each utterance has its corresponding logical form annotated. But the similar evaluation methodology could be applied to weakly supervised semantic parsing, which receives wide attention because parsers are only supervised with execution results and annotated logical forms are no longer required (Berant et al., 2013;Pasupat and Liang, 2015;Goldman et al., 2018;Liang et al., 2018;Mueller et al., 2019). We also notice that various MRs have been used in weakly supervised semantic parsing, and it would be valuable to explore the impact of MRs in such settings.
Conclusion
In this work, we propose UNIMER, a unified benchmark on meaning representations, based on established semantic parsing datasets; UNIMER covers three domains and four different meaning representations along with their execution engines. UNIMER allows researchers to comprehensively and fairly evaluate the performance of their approaches. Based on UNIMER, we conduct an empirical study to understand the characteristics of different meaning representations and their impact on neural semantic parsing. By open-sourcing our source code and benchmark, we believe that our work can facilitate the community to inform the design and development of next-generation MRs.
Implications. Our findings have clear implications for future work. First, according to our experimental results, FunQL tends to outperform Lambda Calculus and Prolog in neural semantic parsing. Additionally, FunQL is relatively robust against program alias. Hence, when developers need to design an MR for a new domain, FunQL is recommended to be the first choice. Second, to reduce program alias' negative effect on neural semantic parsing, developers should define a concrete protocol for annotating logical forms to ensure their consistency. Specifically, given an MR, developers should identify as many as possible sources where program alias can occur. Take SQL as an example. To express the argmax semantics, one can either use subquery or the OrderBy clause. 8 Having identified these sources, developers need to determine using which expression in what context, e.g., argmax is always expressed with subquery, and the unordered expressions in conjunctions are always sorted by characters. In Geo and Job, we use the standard copy mechanism, i.e., directly copying a source word to a logical form. In ATIS, following (Jia and Liang, 2016), we leverage an external lexicon to identify potential copy candidates, e.g., slc:ap can be identified as a potential entity for description "salt lake city airport" in utterance. When we copy a source word that is part of a phrase in the lexicon, we write the entity associated with that lexicon entry to a logical form.
Hyper-Parameters. For the seq2seq model, the embedding dimension of both source and target languages ranges over {100, 200}. We select a one-layer bi-directional LSTM as an encoder. The hidden dimension of the encoder ranges over {32, 64, 128, 256}. Similarly, a one-layer LSTM is selected as the decoder. Its hidden dimension is as same as the encoder. In terms of attention, we select bi-linear as the activation function, where the hidden dimension is 2 times that of the encoder. We employ dropout at training time with rate ranging over {0. Similarly, for the grammar-based model, a onelayer bi-directional LSTM is used as an encoder and another LSTM is employed as a decoder. The layers of the decoder is selected from {1, 2}. The hidden dimension of the encoder ranges over {64, 128, 256}. The hidden dimension of the decoder is 2 times that of the encoder. The hidden dimension of both the grammar rule and non-terminal is selected from {64, 128, 256}. We also employ dropout in the encoder and decoder at training time with rate selected from {0.1, 0.2, 0.3}. We select batch size from {16, 32, 48, 64}, and select learning rate from {0.001, 0.0025, 0.005, 0.01, 0.025, 0.05}. We use the Adam algorithm to update the parameters.
For both models, gradients are clipped at 5 to alleviate the exploding gradient problem, and early stopping is used to determine the number of epochs. We provide the detailed configurations of the NNI platform in our Github repository. Algorithm 2 presents the way we search for aliases for a logical form. Transformation rules can be categorized into two groups based on whether they are domain-specific. Considering the following two logical forms: (lambda A:e (exists B (and (flight B) (fare B A)))) (lambda A:e (exists B (and (flight B) (equals (fare B) A)))) they are semantically equivalent due to the multiple meaning definitions of fare. There are also domain-general transformation rules, e.g., permuting the expressions in the conjunction predicate: (lambda A:e (exists B (and (flight B) (fare B A)))) (lambda A:e (exists B (and (fare B A) (flight B)))) In this work, we primarily consider domaingeneral transformation rules and only when there is limited aliases found by domain-general rules, we use domain-specific rules. Table 9 presents the transformation rules we used in Geo domains. Rules in ATIS are similar. We provide examples below to illustrate the rules. | 6,876.8 | 2020-11-01T00:00:00.000 | [
"Computer Science"
] |
cAMP Inhibits Cell Migration by Interfering with Rac-induced Lamellipodium Formation*
Cell migration is critical for animal development and physiological as well as pathological responses. One important step during cell migration is the formation of lamellipodia at the leading edge of migrating cells. Here we report that the second messenger cAMP inhibits the migration of mouse embryonic fibroblast cells and mouse breast tumor cells. cAMP acts downstream of the small GTPase Rac and interferes with the formation of lamellipodia. Moreover, cAMP decreases the phosphorylation of the myosin light chain at the leading edge of cells and increases the phosphorylation of the vasodilator-stimulated phosphoprotein. Together with our previous report of a positive role of another second messenger, cGMP, in lamellipodium formation, our data indicate that cAMP and cGMP play opposite roles in modulating lamellipodium formation.
Cell migration is critical for animal development and physiological as well as pathological responses. One important step during cell migration is the formation of lamellipodia at the leading edge of migrating cells. Here we report that the second messenger cAMP inhibits the migration of mouse embryonic fibroblast cells and mouse breast tumor cells. cAMP acts downstream of the small GTPase Rac and interferes with the formation of lamellipodia. Moreover, cAMP decreases the phosphorylation of the myosin light chain at the leading edge of cells and increases the phosphorylation of the vasodilator-stimulated phosphoprotein. Together with our previous report of a positive role of another second messenger, cGMP, in lamellipodium formation, our data indicate that cAMP and cGMP play opposite roles in modulating lamellipodium formation.
Cell migration is a cellular event that is critical for various physiological processes and pathological responses such as embryonic development, angiogenesis, immune function and inflammation, axonal guidance and neural development, tissue repair, and tumor metastasis (1,2). Cell migration is a sequential and interrelated multistep process. It involves the formation of lamellipodia/membrane protrusions at the front edge, cycles of adhesion and detachment to the extracellular matrix, cell body contraction and translocation, and tail retraction. For efficient migration to occur, these activities need to be spatially and temporally coordinated through complex signaling events. A better understanding of the regulation of cell migration will lead to the development of novel therapeutics for human disease conditions such as tumor metastasis.
Lamellipodium formation is an important step during cell migration (3,4). The lamellipodium is a specialized subcellular structure at the front of a migrating cell. It is mainly a cytoskeletal actin projection. The tips of lamellipodia localize and harness actin polymerization for cell migration. Lamellipodia display characteristic highly active behavior. They spread forwards quickly with sometimes retracting, ruffling, or bubbling (3).
Cells migrate in response to specific external signals. This orchestrated movement is subjected to modulation. cAMP is a ubiquitous cellular second messenger and could regulate a wide range of cellular processes, including cell migration (5)(6)(7)(8)(9)(10). In Xenopus spinal neurons and rat sensory neurons, the ratio of cAMP to cGMP is important in axonal guidance (11,12). Although cAMP could modulate cell migration, the mechanism by which cAMP plays its role in regulating fibroblast and tumor cell migration is not clear.
Here we use both mouse embryonic fibroblasts (MEFs) 2 and mouse 4T1 breast tumor cells to study the modulation of cell migration by cAMP. We found that cAMP inhibits the migration of MEFs and 4T1 breast tumor cells by interfering with the formation of lamellipodia at the leading edge during cell migration.
In Vitro Wound-healing Cell Migration Assay-Cell migration assays were performed as described previously (13)(14)(15). Cells were allowed to form a confluent monolayer in a 24-well plate coated with gelatin before wounding. The wound was made by scraping a conventional pipette tip across the monolayer. Cell migration was induced by adding medium supplemented with 10% fetal bovine serum and 20 ng/ml PDGF (for MEF cells) or 10 M LPA (for 4T1 cells). For MEF cells, it typically took 8 -10 h for the wound to close. For 4T1 cells, it typically took 12-14 h for the wound to close. When the wound for the positive control closed, cells were fixed with 3.7% formaldehyde and stained with crystal violet staining solution.
Boyden Chamber Cell Migration Assay-MEF and 4T1 cells (5 ϫ 10 4 ) suspended in starvation medium were added to the upper chamber of an insert (6.5-mm diameter, 8-m pore size; BD Biosciences), and the insert was placed in a 24-well plate containing starvation medium with or without 10% fetal bovine serum abd 20 ng/ml PDGF (for MEF cells) or 10 M LPA (for 4T1 cells). When used, inhibitors were added to both chambers. Migration assays were carried out for 4 h, and cells were fixed with 3.7% formaldehyde. Cells were stained with crystal violet staining solution, and cells on the upper side * This work was supported, in whole or in part, by National Institutes of Health Grant AG23202. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. 1 To whom correspondence should be addressed. of the insert were removed with a cotton swab. Three randomly selected fields (ϫ10 objective) were photographed, and the migrated cells were counted. Immunofluorescence Microscopy-Preparation of samples for fluorescence microscopy was preformed as described previously (15,(17)(18)(19). Cells cultured on gelatin-coated glass coverslips were fixed with 3.7% formaldehyde in phosphatebuffered saline for 10 min at room temperature, permeabilized with 0.1% Triton X-100 for 5 min, and then washed three times with phosphate-buffered saline. To block nonspecific binding, the cells were incubated with a solution of phosphate-buffered saline containing 1% bovine serum albumin for 30 min and then incubated with primary antibody at appropriate dilutions (1:100 for rabbit anti-phospho-Ser-19 MLC antibody (Cell Signaling Technology), 1:2000 for anti-tubulin antibody, and 1:1000 for anti-VASP antibody (Cell Signaling Technology)) for 1 h. Alexa Fluor 488-conjugated phalloidin (Molecular Probes) was used to visualize F-actin. After incubation with primary antibody, cells were washed three times with phosphatebuffered saline and incubated with rhodamine-conjugated secondary antibody (Molecular Probes). The coverslips were then fixed on slides and imaged using a Zeiss fluorescence microscope.
Microtubule Organizing Center (MTOC) Reorientation-MTOC reorientation was analyzed as described previously (20) with modification. Cells were allowed to grow to confluence on a glass coverslip coated with gelatin. Serum was added immediately after wounding. MTOC reorientation was assessed 2 h after wounding by immunolabeling using anti-pericentrin antibody. Cells in which the MTOC was within the quadrant facing the wound were scored positive, and for each condition, at least 100 woundedge cells were examined.
Statistical Analysis-Data are expressed as the means Ϯ S.D. from
Increase of cAMP Plays an Inhibitory Role in the Migration of Fibroblast and Breast
Tumor Cells-During our investigation of the role of the second messenger cGMP in cell migration (20), we noticed that IBMX, a broad-spectrum phosphodiesterase inhibitor preventing the breakdown of cAMP and cGMP, inhibited serum-induced migration of MEF cells (Fig. 1A). Because we had shown that cGMP plays a positive role in serum-induced MEF cell migration (20), here we investigated a possible inhibitory role of cAMP in MEF cell migration. We used two complementary approaches to study serum-induced MEF cell migration in the presence or absence of forskolin, a direct activator of adenylyl cyclase leading to the increase of cellular cAMP (16). In the qualitative in vitro wound-healing assay, MEF cells were grown to confluence. A wound was made in the middle of the culture plate with a pipette tip. After ϳ10 h in the presence of serum, whereas control MEF cells migrated and covered the "wound," the addition of 50 M forskolin significantly inhibited serum-induced MEF cell migration (Fig. 1B). These results were confirmed with the quantitative Boyden chamber assays with 4 h of forskolin treatment (Fig. 1C). Treatment of MEF cells with forskolin resulted in a dosage-dependent inhibition of cell migration with an IC 50 of ϳ30 M (Fig. 1D). Furthermore, treatment of MEF cells with isoproterenol, which activates the endogenous G scoupled -adrenergic receptor and results in the elevation of cellular cAMP, also inhibited PDGF-induced cell migration (Fig. 1E). This inhibitory effect was abolished in G␣ s Ϫ/Ϫ cells. Forskolin was still able to inhibit PDGF-induced MEF cell migration in the absence of G␣ s (Fig. 1E). These data demonstrate that an increase of cellular cAMP could inhibit serum-induced MEF cell migration.
To investigate whether this inhibitory role of cAMP in cell migration is unique to MEF cells, we studied the effect of forskolin on the migration of highly invasive mouse 4T1 breast tumor cells. As shown by both the wound-healing assay ( Fig. 2A) and the Boyden chamber assay (Fig. 2B), the addition of 50 M forskolin inhibited serum-induced 4T1 cell migration. Similarly, treatment with IBMX decreased serum-induced 4T1 cell migration (Fig. 2A). These results show that cAMP has an inhibitory role in the migration of breast tumor cells in addition to MEF cells. Because serum contains various growth factors, we next studied whether cAMP inhibits cell migration induced by several growth factors known to have chemotactic function, including PDGF and LPA. PDGF efficiently induced the migration of MEF cells, and forskolin treatment inhibited PDGFinduced MEF cell migration (Fig. 1, B and C). Similarly, LPA increased the migration of the 4T1 breast tumor cells (Fig. 2, A and B). Forskolin treatment decreased LPA-induced 4T1 cell migration (Fig. 2, A and B). Although PDGF works on its receptor tyrosine kinase, LPA acts through its G protein-coupled receptor. Collectively, our data suggest that increases of cAMP play an inhibitory role in controlling MEF and 4T1 cell migration induced by various factors.
cAMP Acts Downstream of the Small GTPase Rac-To explore the molecular mechanism by which cAMP inhibits serum-induced cell migration, we first investigated at which stage cAMP acts. The Rho family small GTPase Rac plays an essential role in serum-induced cell migration (14,15,21). Expression (through retroviral infection) of dominant-negative Rac (Rac1(T14N)) in MEF cells or in 4T1 breast tumor cells decreased serum-induced migration of MEF cells or 4T1 cells, respectively (Fig. 3, A-D). These results indicate that Rac is required for serum-induced MEF and 4T1 cell migration. Furthermore, as shown in Fig. 3, E-H, Rac is also sufficient to induce MEF and 4T1 cell migration, as expression of constitutively active Rac (Rac1(G12V)) in MEF cells or 4T1 cells induced the migration of these cells. MAY 16, 2008 • VOLUME 283 • NUMBER 20
JOURNAL OF BIOLOGICAL CHEMISTRY 13801
To study whether cAMP acts upstream or downstream of Rac, we examined the effect of forskolin on the migration induced by constitutively active Rac. MEF cells were infected with retroviruses carrying constitutively active Rac1(G12V) or control retroviral vector. Cells were then treated with forskolin. As shown in Fig. 3 (E-H), forskolin decreased the migration of MEF cells or 4T1 cells induced by constitutively active Rac. These data are consistent with a model that cAMP plays its inhibitory role by acting downstream of (or on) Rac in the pathway of serum-induced cell migration.
cAMP Inhibits Lamellipodium Formation-To further investigate the molecular mechanism of the cAMP action, we studied the effect of cAMP on cellular events during cell migration. Rac is known to mediate serum-induced lamellipodium formation and focal adhesion formation and turnover during cell migration (1). We first examined the lamellipodium formation. Lamellipodia are membrane extensions at the front of migrating cells (3). This structure could be visualized by the staining of actin filaments at the leading edge. Two h after making the wound in the in vitro wound-healing assay, serum effectively induced the formation of lamellipodia with a distinct polarized actin cytoskeleton with strong membrane protrusions toward the leading edge (Fig. 4A). In contrast, forskolin treatment disrupted the polarized distribution of F-actin (Fig. 4A). Instead, forskolin-treated cells adopted an elongated morphology with a non-polarized F-actin meshwork. These data demonstrate that cAMP interferes with the lamellipodium formation.
We also examined the effect of cAMP on focal adhesion turnover and on the microtubule dynamics. We found that forskolin treatment had no effect on microtubule dynamics (Fig. 4, B and C) and no effect on focal adhesion turnover (data not shown). The MTOC reorientation and the formation of microtubule protrusions in the leading edge contribute to directional cell migration (1,22). As shown in Fig. 4 (B and C), in the presence of serum, the MTOC and the microtubule cytoskeleton reorganized to face the wound. Forskolin treatment did not affect the polarization of the MTOC or the protrusion of microtubules. These data suggest that cAMP inhibits cell migration by disrupting F-actin rather than microtubule dynamics within the leading edge of migrating cells.
PKA and pMLC Contribute to cAMP Inhibitory Function-To gain a biochemical understanding of the inhibitory function of cAMP in MEF cell migration, we tested the participation of several signaling components in this regulatory pathway. The best characterized direct effector of cAMP is PKA. We first tested whether PKA contributes to the cAMP inhibitory function in MEF cell migration. We examined the MEF cell migration in the presence or absence of 6-benzoyl-cAMP, a selective cAMP analogue that directly activates PKA (5). As shown in Fig. 5A, the addition of 6-benzoyl-cAMP inhibited the seruminduced migration of MEF cells, implying that activation of PKA is capable of inhibiting MEF cell migration. The involve-ment of PKA was further investigated using H-89, a specific PKA inhibitor. In MEF cells, the addition of H-89 stimulated serum-induced cell migration, consistent with the inhibitory role of PKA in controlling cell migration (Fig. 5A). Furthermore, H-89 reduced the inhibitory effect of forskolin on seruminduced MEF cell migration (Fig. 5B). Hence, the data from both the activation of PKA and the inhibition of PKA are consistent with a negative role of PKA in MEF cell migration.
Of the reported physiological substrates for PKA, one is MLC kinase (23). PKA can phosphorylate and decrease the activity of MLC kinase (23)(24)(25). MLC kinase phosphorylates the regulatory light chain of myosin ⌱⌱ and activates myosin ⌱⌱ (26). The essential role of myosin II in actin cytoskeletal rearrangement and cell migration has long been appreciated (27). Myosin ⌱⌱ regulates the retrograde flow of F-actin in the lamella and plays an essential role in F-actin-driven cell migration (28). Phosphorylated MLC (at Ser-19) appears to be strong in both the anterior and posterior regions of motile cells (15,29,30). Furthermore, phosphorylation of the myosin light chain is required for the anchorage of lamellipodia (3). To investigate the effect of cAMP/PKA activation on MLC phosphorylation in cell migration, we first examined whether forskolin could reduce the phosphorylation of MLC in MEF cells. As shown in Fig. 6A, forskolin treatment significantly reduced the level of phosphorylated MLC, whereas the amount of total MLC was not affected. Next, we examined the distribution of phosphorylated MLC in cells at the leading edge of a wound with an antibody recognizing phosphorylated MLC. As shown in Fig. 6B, in the absence of forskolin, strong staining of phosphorylated MLC was observed in the lamella of migrating cells (indicated by arrowheads). In contrast, activation of cAMP/PKA by forskolin disrupted the anterior staining of phosphorylated MLC. To further confirm that the cAMP effect is on the phosphorylation of MLC, we tested whether raising the phosphorylation of MLC (by inhibiting the MLC phosphatase) could attenuate the cAMP inhibitory effect. As shown in Fig. 6B, treatment of cells with low concentrations of calyculin A (0.5 nM), a specific inhibitor for MLC phosphatase (31), restored the accumulation of phosphorylated MLC at the leading edge of the cells (indicated by arrowheads). Moreover, calyculin treatment also attenuated the inhibitory effect of forskolin on serum-induced MEF cell migration as measured by the Boyden chamber assay (Fig. 6C) and the wound-healing assay (Fig. 6D). Taken together, our data indicate that forskolin decreases the phosphorylation of MLC at the leading edge and provide a possible biochemical mechanism for the inhibitory role of cAMP in cell migration. Another PKA substrate that is involved in lamellipodial dynamics is VASP (32,33). VASP proteins could be targeted to the leading edge by activated Rac in migrating fibroblasts (34). VASP phosphorylation by PKA diminishes VASP binding to F-actin and suppresses the actin-nucleating activity of VASP, leading to decreased actin polymerization (33). To investigate whether the cAMP inhibitory effect on cell migration is accompanied by increased phosphorylation of VASP, we examined the phosphorylation state of VASP with or without forskolin treatment. Phosphorylation of VASP causes an electrophoretic mobility shift in SDS-PAGE (35). As shown in Fig. 6E, forskolin treatment markedly increased the phosphorylation of VASP. Together with the effect of cAMP/PKA on the phosphorylation of MLC, cAMP/ PKA could have multiple targets in their effects on cell migration and lamellipodium formation.
Conclusion-In summary, cAMP inhibits the migration of MEFs and mouse 4T1 breast tumor cells. cAMP acts downstream of Rac. Furthermore, cAMP interferes with the formation of lamellipodia. cAMP decreases the phosphorylation of MLC and increases the phosphorylation of VASP. Myosin-based contractility is important for cell migration at the leading edge as well as at the trailing edge (15,29,30). In polarized migrating MEFs, there are two areas of phosphorylated MLC staining: leading edge staining and trailing tail staining (15). In growth factor-induced migration of MEFs, we have shown previously that Ca 2ϩ influx, through calmodulin and MLC kinase, increases the phosphorylation of MLC at the training tail and contributes to the trailing tail contraction; Ca 2ϩ influx has no effect on the phosphorylation of MLC at the leading edge (15). Here cAMP appears to decrease the phosphorylation of MLC at the front as well as the tail of migrating cells.
Although we examined the effect of cAMP/PKA on the phosphorylation of MLC and VASP, we did not intend to state that phosphorylations of MLC and VASP are the only or major mechanisms by which cAMP inhibits lamellipodium formation. cAMP/PKA can regulate the activity of other proteins involved in actin cytoskeletons. For example, another means by which cAMP/PKA could modulate the actin cytoskeletal reorganization is through the AKAPs (A kinase anchoring proteins) such as WAVE-1 and AKAP-Lbc because these proteins are known to regulate actin polymerization or Rho activity (36,37). PKA phosphorylation of integrin (such as ␣4) within the protrusion is critical for integrin-dependent cell migration (38). With the reported positive and negative effects of cAMP/PKA on the migration of different cell types, it is likely that the spatial- | 4,256 | 2008-05-16T00:00:00.000 | [
"Biology",
"Medicine"
] |
A Survey of Current Datasets for Vision and Language Research
Integrating vision and language has long been a dream in work on artificial intelligence (AI). In the past two years, we have witnessed an explosion of work that brings together vision and language from images to videos and beyond. The available corpora have played a crucial role in advancing this area of research. In this paper, we propose a set of quality metrics for evaluating and analyzing the vision&language datasets and categorize them accordingly. Our analyses show that the most recent datasets have been using more complex language and more abstract concepts, however, there are different strengths and weaknesses in each.
Introduction
Bringing together language and vision in one intelligent system has long been an ambition in AI research, beginning with SHRDLU as one of the first vision-language integration systems ( Winograd, 1972) and continuing with more recent attempts on conversational robots grounded in the visual world (Kollar et al., 2013;Cantrell et al., 2010;Matuszek et al., 2012;Kruijff et al., 2007;Roy et al., 2003).In the past few years, an influx of new, large vision & language corpora, alongside dramatic advances in vision research, has sparked renewed interest in connecting vision and language.Vision & language corpora now provide alignments between visual content that can be recognized with Computer Vision (CV) algorithms and language that can be understood and generated using Natural Language Processing techniques.
Fueled in part by the newly emerging data, research that blends techniques in vision and in language has increased at an incredible rate.In just the past year, recent work has proposed methods for image and video captioning (Fang et al., 2014;Donahue et al., 2014;Venugopalan et al., 2015), summarization (Kim et al., 2015), reference (Kazemzadeh et al., 2014), and question answering (Antol et al., 2015;Gao et al., 2015), to name just a few.The newly crafted large-scale vision & language datasets have played a crucial role in defining this research, serving as a foundation for training/testing and helping to set benchmarks for measuring system performance.
Crowdsourcing and large image collections such as those provided by Flickr1 have made it possible for researchers to propose methods for vision and language tasks alongside an accompanying dataset.However, as more and more datasets have emerged in this space, it has become unclear how different methods generalize beyond the datasets they are evaluated on, and what data may be useful for moving the field beyond a single task, towards solving larger AI problems.
In this paper, we take a step back to document this moment in time, making a record of the major available corpora that are driving the field.We provide a quantitative analysis of each of these corpora in order to understand the characteristics of each, and how they compare to one another.The quality of a dataset must be measured and compared to related datasets, as low quality data may distort an entire subfield.We propose a set of criteria for analyzing, evaluating and comparing the quality of vision & language datasets against each other.Knowing the details of a dataset compared to similar datasets allows researchers to define more precisely what task(s) they are trying to solve, and select the dataset(s) best suited to their goals, while being aware of the implications and biases the datasets could impose on a task.
We categorize the available datasets into three major classes and evaluate them against these cri-teria.The datasets we present here were chosen because they are all available to the community and cover the data that has been created to support the recent focus on image captioning work.More importantly, we provide an evolving website2 containing pointers and references to many more vision-to-language datasets, which we believe will be valuable in unifying the quickly expanding research tasks in language and vision.
Quality Criteria for Language & Vision Datasets
The quality of a dataset is highly dependent on the sampling and scraping techniques used early in the data collection process.However, the content of datasets can play a major role in narrowing the focus of the field.Datasets are affected by both reporting bias (Gordon and Durme, 2013), where the frequency with which people write about actions, events, or states does not directly reflect real-world frequencies of those phenomena; they are also affected by photographer's bias (Torralba and Efros, 2011), where photographs are somewhat predictable within a given domain.This suggests that new datasets may be useful towards the larger AI goal if provided alongside a set of quantitative metrics that show how they compare against similar corpora, as well as more general "background" corpora.Such metrics can be used as indicators of dataset bias and language richness.At a higher level, we argue that clearly defined metrics are necessary to provide quantitative measurements of how a new dataset compares to previous work.This helps clarify and benchmark how research is progressing towards a broader AI goal as more and more data comes into play.
In this section, we propose a set of such metrics that characterize vision & language datasets.We focus on methods to measure language quality that can be used across several corpora.We also briefly examine metrics for vision quality.We evaluate several recent datasets based on all proposed metrics in Section 4, with results reported in Tables 1, 2, and Figure 1.
Language Quality
We define the following criteria for evaluating the captions or instructions of the datasets: • Vocabulary Size (#vocab), the number of unique vocabulary words.
• Syntactic Complexity (Frazier, Yngve) measures the amount of embedding/branching in a sentence's syntax.We report mean Yngve (Yngve, 1960) and Frazier measurements (Frazier, 1985); each provides a different counting on the number of nodes in the phrase markers of syntactic trees.
• Part of Speech Distribution measures the distribution of nouns, verbs, adjectives, and other parts of speech.
• Abstract:Concrete Ratio (#Conc, #Abs, %Abs) indicates the range of visual and non-visual concepts the dataset covers.Abstract terms are ideas or concepts, such as 'love' or 'think' and concrete terms are all the objects or events that are mainly available to the senses.For this purpose, we use a list of most common abstract terms in English (Vanderwende et al., 2015), and define concrete terms as all other words except for a small set of function words.
• Average Sentence Length (Sent Len.) shows how rich and descriptive the sentences are.
• Perplexity provides a measure of data skew by measuring how expected sentences are from one corpus according to a model trained on another corpus.We analyze perplexity (Ppl) for each dataset against a 5-gram language model learned on a generic 30B words English dataset.We further analyze pair-wise perplexity of datasets against each other in Section 4.
Vision Quality
Our focus in this survey is mainly on language, however, the characteristics of images or videos and their corresponding annotations is as important in vision & language research.The quality of vision in a dataset can be characterized in part by the variety of visual subjects and scenes provided, as well as the richness of the annotations (e.g., segmentation using bounding boxes (BB) or visual dependencies between boxes).Moreover, a vision corpus can use abstract or real images (Abs/Real).
The Available Datasets
We group a representative set of available datasets based on their content.For a complete list of datasets and their descriptions, please refer to the supplementary website. 2
Captioned Images
Several recent vision & language datasets provide one or multiple captions per image.The captions of these datasets are either the original photo title and descriptions provided by online users (Ordonez et al., 2011;Thomee et al., 2015), or the captions generated by crowd workers for existing images.The former datasets tend to be larger in size and contain more contextual descriptions.
User-generated Captions
• SBU Captioned Photo Dataset (Ordonez et al., 2011) contains 1 million images with original user generated captions, collected in the wild by systematic querying of Flickr.This dataset is collected by querying Flickr for specific terms such as objects and actions and then filtered images with descriptions longer than certain mean length.
• Déjà Images Dataset (Chen et al., 2015) consists of 180K unique user-generated captions associated with 4M Flickr images, where one caption is aligned with multiple images.This dataset was collected by querying Flickr for 693 high frequency nouns, then further filtered to have at least one verb and be judged as "good" captions by workers on Amazon's Mechanical Turk (Turkers).
Crowd-sourced Captions
• UIUC Pascal Dataset (Farhadi et al., 2010) is probably one of the first datasets aligning images with captions.Pascal dataset contains 1,000 images with 5 sentences per image.
• Flickr 30K Images (Young et al., 2014) extends previous Flickr datasets (Rashtchian et al., 2010), and includes 158,915 crowd-sourced captions that describe 31,783 images of people involved in everyday activities and events.
• Microsoft COCO Dataset (MS COCO) (Lin et al., 2014) includes complex everyday scenes with common objects in naturally occurring contexts.Objects in the scene are labeled using per-instance segmentations.In total, this dataset contains photos of 91 basic object types with 2.5 million labeled instances in 328k images, each paired with 5 captions.This dataset gave rise to the CVPR 2015 image captioning challenge and is continuing to be a benchmark for comparing various aspects of vision and language research.• Abstract Scenes Dataset (Clipart) (Zitnick et al., 2013) was created with the goal of representing real-world scenes with clipart to study scene semantics isolated from object recognition and segmentation issues in image processing.This removes the burden of low-level vision tasks.This dataset contains 10,020 images of children playing outdoors associated with total 60,396 descriptions.
Captions of Densely Labeled Images
Existing caption datasets provide images paired with captions, but such brief image descriptions capture only a subset of the content in each image.Measuring the magnitude of the reporting bias inherent in such descriptions helps us to understand the discrepancy between what we can learn for the specific task of image captioning versus what we can learn more generally from the photographs people take.One dataset useful to this end provides image annotation for content selection: • Microsoft Research Dense Visual Annotation Corpus (Yatskar et al., 2014) provides a set of 500 images from the Flickr 8K dataset (Rashtchian et al., 2010) that are densely labeled with 100,000 textual labels, with bounding boxes and facets annotated for each object.This approximates "gold standard" visual recognition.
To get a rough estimate of the reporting bias in image captioning, we determined the percentage of top-level objects3 that are mentioned in the captions for this dataset out of all the objects that are annotated.Of the average 8.04 available top-level objects in the image, each of the captions only reports an average of 2.7 of these objects. 4A more detailed analysis of reporting bias is beyond the scope of this paper, but we found that many of the biases (e.g., people selection) found with abstract scenes (Zitnick et al., 2013) are also present with photos.
Video Description and Instruction
Video datasets aligned with descriptions (Chen et al., 2010;Rohrbach et al., 2012;Regneri et al., 2013;Naim et al., 2015;Malmaud et al., 2015) generally represent limited domains and small lexicons, which is due to the fact that video processing and understanding is a very compute-intensive task.Available datasets include: • Short Videos Described with Sentences (Yu and Siskind, 2013) .The descriptions are one sentence summaries about the actions or events in the video as described by Amazon Turkers.In this dataset, both paraphrase and bilingual alternatives are captured, hence, the dataset can be useful translation, paraphrasing, and video description purposes.
Beyond Visual Description
Recent work has demonstrated that n-gram language modeling paired with scene-level understanding of an image trained on large enough datasets can result in reasonable automatically generated captions (Fang et al., 2014;Donahue et al., 2014).Some works have proposed to step beyond description generation, towards deeper AI tasks such as question answering (Ren et al., 2015;Malinowski and Fritz, 2014).We present two of these attempts below: • Visual Madlibs Dataset (VML) (Yu et al., 2015) is a subset of 10,783 images from the MS COCO dataset which aims to go beyond describing which objects are in the image.For a given image, three Amazon Turkers were prompted to complete one of 12 fill-in-the-blank template questions, such as 'when I look at this picture, I feel -', selected automatically based on the image content.This dataset contains a total of 360,001 MadLib question and answers.
• Visual Question Answering (VQA) Dataset (Antol et al., 2015) is created for the task of openended VQA, where a system can be presented with an image and a free-form natural-language question (e.g., 'how many people are in the photo?'), and should be able to answer the question.This dataset contains both real images and abstract scenes, paired with questions and answers.Real images include 123,285 images from MS COCO dataset, and 10,000 clip-art abstract scenes, made up from 20 'paperdoll' human models with adjustable limbs and over 100 objects and 31 animals.Amazon Turkers were prompted to create 'interesting' questions, resulting in 215,150 questions and 430,920 answers.
• Toronto COCO-QA Dataset (CQA) (Ren et al., 2015) is also a visual question answering dataset, where the questions are automatically generated from image captions of MS COCO dataset.This dataset has a total of 123,287 images with 117,684 questions with one-word answer about objects, numbers, colors, or locations.
Analysis
We analyze the datasets introduced in Section 3 according to the metrics defined in Section 2, using the Stanford CoreNLP suite to acquire parses and part-of-speech tags (Manning et al., 2014).We also include the Brown corpus (Francis and Kucera, 1979;Marcus et al., 1999) as a reference point.We find evidence that the VQA dataset captures more abstract concepts than other datasets, with almost 20% of the words found in our abstract concept resource.The Deja corpus has the least number of abstract concepts, followed by COCO and VDC.This reflects differences in col- To make perplexities comparable, we used the same vocabulary frequency cutoff of 3.All models are 5-grams.
q q q q q q q q q q q Brown SBU Deja Pascal We include the POS tags from the balanced Brown corpus (Marcus et al., 1999) to contextualize any very shallow syntactic biases.We mapped all nouns to "N," all verbs to "V," all adjectives to "J" and all other POS tags to "O." lecting the various corpora: For example, the Deja corpus was collected to find specifically visual phrases that can be used to describe multiple images.This corpus also has the most syntactically simple phrases, as measured by both Frazier and Yngve; this is likely caused by the phrases needing to be general enough to capture multiple images.
The most syntactically complex sentences are found in the Flickr30K, COCO and CQA datasets.However, the CQA dataset suffers from a high perplexity against a background corpus relative to the other datasets, at odds with relatively short sentence lengths.This suggests that the automatic caption-to-question conversion may be creating unexpectedly complex sentences that are less reflective of general language usage.In contrast, the COCO and Flickr30K dataset's relatively high syntactic complexity is in line with their relatively high sentence length.
Table 2 illustrates further similarities between datasets, and a more fine-grained use of perplexity to measure the usefulness of a given training set for predicting words of a given test set.Some datasets such as COCO, Flickr30K, and Clipart are generally more useful as out-domain data compared to the QA datasets.Test sets for VQA and CQA are quite idiosyncratic and yield poor perplexity unless trained on in-domain data.As shown in Figure 1, the COCO dataset is balanced across POS tags most similarly to the balanced Brown corpus (Marcus et al., 1999).The Clipart dataset provides the highest proportion of verbs, which often correspond to actions/poses in vision research, while the Flickr30K corpus provides the most nouns, which often correspond to object/stuff categories in vision research.
We emphasize here that the distinction between a qualitatively good or bad dataset is task dependent.Therefore, all these metrics and the obtained results provide the researchers with an objective set of criteria so that they can make the decision whether a dataset is suitable to a particular task.
Conclusion
We detail the recent growth of vision & language corpora and compare and contrast several recently released large datasets.We argue that newly introduced corpora may measure how they compare to similar datasets by measuring perplexity, syntactic complexity, abstract:concrete word ratios, among other metrics.By leveraging such metrics and comparing across corpora, research can be sensitive to how datasets are biased in different directions, and define new corpora accordingly.
Figure 1 :
Figure1: Simplified part-of-speech distributions for the eight datasets.We include the POS tags from the balanced Brown corpus(Marcus et al., 1999) to contextualize any very shallow syntactic biases.We mapped all nouns to "N," all verbs to "V," all adjectives to "J" and all other POS tags to "O."
Table 1 :
Summary of statistics and quality metrics of a sample set of major datasets.For Brown, we report Frazier and Yngve scores on automatically acquired parses, but we also compute them for the 24K sentences with gold parses: in this setting, the mean Frazier score is 15.26 while the mean Yngve score is 58.48.
Table 2 :
Perplexities across corpora, where rows represent test sets (20k sentences) and columns training sets (remaining sentences). | 3,995.6 | 2015-06-23T00:00:00.000 | [
"Computer Science"
] |
Digital Twins’ Applications for Building Energy Efficiency: A Review
: Over the last few decades, energy efficiency has received increasing attention from the Architecture, Engineering, Construction and Operation (AECO) industry. Digital Twins have the potential to advance the Operation and Maintenance (O&M) phase in different application fields. With the increasing industry interest, there is a need to review the current status of research developments in Digital Twins for building energy efficiency. This paper aims to provide a comprehensive review of the applications of digital twins for building energy efficiency, analyze research trends and identify research gaps and potential future research directions. In this review, Sustainability and Energy and Buildings are among the most frequently cited sources of publications. Literature reviewed was classified into four different topics: topic 1. Optimization design; topic 2. Occupants’ comfort; topic 3. Building operation and maintenance; and topic 4. Energy consumption simulation.
Introduction
The Architecture, Engineering, Construction and Operations (AECO) sector is responsible for a large percentage of the world's energy consumption, which has a negative environmental impact on its day-to-day operations [1][2][3]. There has been a continuous increase in the contribution of buildings to global energy use, including both residential and commercial buildings, with estimations ranging from 20 to 40% [4]. Developing countries are likely to use more energy and, consequently, emit more greenhouse gases (GHG) as a result of economic growth [5]. The concept of energy efficiency refers to the ratio between the output of performance, service, good, or energy, and the input of energy, according to the European Parliament [6]. In other words, energy efficiency in building operations refers to the actual operational performance of various systems within a building. Currently, the AECO sector faces a great deal of pressure to reduce polluting emissions and to develop more energy-efficient methods of operation (materials, processes, equipment, buildings) [7,8].
Building Information Modelling (BIM) has been emerging as a potential solution to improve energy efficiency [9][10][11]. BIM is "an approach to design, construction, and facilities management, in which a digital representation of the building process is used to facilitate the exchange and interoperability of information in digital format" [12]. BIM constitutes an effective platform by which to depict high-quality information and integrate different platforms. BIM utilizes 3D, parametric and object-based models to create, store and use coordinated and compatible data throughout the life cycle of a facility [13]. Acting as a central resource for decision-makers, BIM has the ability to provide better documentation, improved collaboration and work flexibility, and updated information through the building life cycle [3,14]. Researchers focus on implementing BIM for different aspects, such as: sustainability [15][16][17]; strategy planning [13,18]; retrofit planning [19]; preventive maintenance planning [8,13,20,21]; building systems analysis [13,22,23]; commissioning processes [13,24]; and energy management [25,26].
Similarly, technological advances in recent decades have initiated the emergence of Digital Twins (DT), which are commonly thought of as the digital version of physical products. Singh et al. [27] suggest the following definition: "A DT is a dynamic and selfevolving digital/virtual model or simulation of a real-life subject or object (part, machine, process, human, etc.) representing the exact state of its physical twin at any given point of time via exchanging the real-time data as well as keeping the historical data. It is not just the DT which mimics its physical twin but any changes in the DT are mimicked by the physical twin too." In this sense, three main elements are required: a physical twin (a real-world entity), a digital twin (the digital representation that can mirror the physical twin in real time), and a linking mechanism, which allows the flow of data between the twins in both directions and in real time automatically [28]. These definitions and requirements apply for any product or entity. More specifically for the construction industry, Opoku et al. [29] define digital twins as a "real-time representation of the building or structure that is fully or partially completed and developed for the purpose of representing the status and character of the building or structure it mirrors." Thus, it allows the seamless synchronization and monitoring of energy systems via computerized and virtual world simulations based on data, information and consumer behavior [30].
Interest in DT and energy efficiency has been growing through the years [31] and has led to an increase in the yearly output of articles in the related domain. Deng et al. [32] present a review paper focused on identifying the emerging technologies that facilitate the evolution of BIM to DT in built environment applications. A total of 100 related papers including 23 review papers were selected and reviewed. This paper developed a five-level ladder taxonomy to reflect the evolution from BIM to DTs. The majority of past studies in the literature fall into Levels 2 and 3, which are BIM-supported simulations and BIM-IoT integration for built environment management. Teisserenc and Sepasgozar [33] also present a review paper, but with the aim to develop a technological framework for the integration of blockchain technology (BCT) with DT for projects of the BECOM industry 4.0. This model promotes ecosystems of trusted, decentralized, and sustainable DTs where BCT secures information sharing for the data value chain of projects. Marocco and Garofolo [34] also reviewed studies, but focusing on disruptive technologies for Facility Management (FM). Their findings revealed that a promising starting point for enhancing FM includes developing DT platforms by integrating BIM, cloud computing, and IoT. Casini [35] reviewed studies on extended reality technologies such as virtual reality, augmented reality, and mixed reality technologies and applications for smart building operation and maintenance. He argues that the future of O&M is represented by digital twin technology and concluded that the use of extended reality technologies in building and city management is demonstrating promising outcomes in terms of improving human performance in technical O&M tasks, understanding and managing the energy efficiency, comfort, and safety of buildings and infrastructures, and assisting in strategic decision-making. Opoku et al. [36] conducted a literature review focusing on the DT application in the construction industry. They analyzed 22 papers sorting the applications into the four phases of the life cycle of the objects in the construction industry: design and engineering, construction, operation and maintenance, and demolition and recovery. In this approach, energy efficiency features mainly in the operation and maintenance phase to inform decision-making through simulations and energy consumption optimization. In some cases, energy simulations were also present in the design and engineering phase. These authors identify energy simulation as one of six key applications of DT in the construction industry.
There is often confusion between BIM and DT, so it is important to clarify that BIM is a process of creating a 3D model extension of a real-world item, while a DT is designed to emulate the thing it represents [37]. DTs are "a digital representation of a unique active product (real device, object, machine, service, or intangible asset) or unique product service Energies 2022, 15, 7002 3 of 17 system (a system consisting of a product and a related service) that comprises its selected characteristics, properties, conditions, and behaviors through models, information, and data within a single or even across multiple life cycle phases" [38]. DT provides insights into the life and features of individual products and it is possible to optimize its sustainability. Moreover, it contributes to the improvement of future product generations, for example through measures such as the Ecological Footprint or the Life Cycle Assessment (LCA) [39]. Few specific use cases describe how DT can be used to optimize energy use or monitor it using the Internet of Things (IoT) [40,41].
As shown, previous review papers on the use of DT in the construction industry and its enabling technologies have been identified [32][33][34][35][36]. However, these papers only mention the potential of applying DT to building energy efficiency as one of many possibilities. None of the review papers analyze specifically the current state of implementing DT to building energy efficiency. On the other hand, published research papers include case studies and methods to enable DT for different types of buildings and at different scales [40][41][42]. Although the body of literature about the use of DT specifically for building energy efficiency has grown over the past few years, the current state of implementing DT for building energy efficiency has still not been addressed in the form of literature review. Therefore, this paper presents a comprehensive review of the current status and insights of digital twins' applications focused on building energy efficiency. This study contributes to the field, identifying the main uses and methods for applying DT to building energy efficiency while also identifying gaps in the literature and paving the directions for future research.
This paper is structured as follows: Section 2 focuses on the research methodology, Section 3 provides the results of the scientometric analysis, Section 4 presents a discussion of the results from the review and groups the reviewed articles into four main application fields, and Section 5 further discusses research gaps and future directions and the conclusion.
Methodology
The methodology applied to develop the current paper was a systematic literature review. A systematic literature review identifies, evaluates and interprets relevant research on a specific topic, issue or area [43]. This method is well-known as an effective way of identifying important recurring themes and useful for structuring the data, being used by most review papers [29,32]. Unlike traditional review research methods, it allows the researcher to obtain data about a phenomenon and summarize the existing evidence concerning the specific topic in a thorough and unbiased manner. For that, systematic reviews must be undertaken in accordance with a predefined search strategy. First, the research question must be defined. Therefore, this review wants to answer the following research question: "How can digital twins be applied to energy efficiency in buildings?" To answers it, a specific strategy was followed. The first stage comprised the search for publications regarding the topic. The second stage included the definitions and application of exclusion criteria, and finally, the third stage comprised the execution of a scientometric analysis followed by the synthesis of the publications. Figure 1 delineates the overall procedure of this methodology. Three databases were used for searching the publications, the well-known scientific meta-search engines (1) Scopus, (2) Web of Science and (3) MDPI.
Phase One: Search for Publications
First, three different search terms were defined to gather the most relevant information related to digital twins used to improve building energy efficiency. The Boolean operators OR and AND were used for the keyword-based search on title-abstract-keyword of each publication: "Digital twin" AND "Building" AND "Energy Efficiency" OR "Energy Performance". Considering the type of publications, the selected ones included published journal articles, book chapters, and conference papers, to give a thorough overview of extant research and guarantying a wide diversity of information sources. As a result, 87 publications
Phase One: Search for Publications
First, three different search terms were defined to gather the most relevant information related to digital twins used to improve building energy efficiency. The Boolean operators OR and AND were used for the keyword-based search on title-abstract-keyword of each publication: "Digital twin" AND "Building" AND "Energy Efficiency" OR "Energy Performance". Considering the type of publications, the selected ones included published journal articles, book chapters, and conference papers, to give a thorough overview of extant research and guarantying a wide diversity of information sources. As a result, 87 publications were gathered. The search results were saved in Scopus, Web of Science and MDPI and the publications were downloaded and imported to Mendeley reference manager.
Phase Two: Exclusion Criteria
Firstly, the duplicate publications found in common in the three databases were excluded, resulting in 60 publications to be analyzed. Secondly, all the titles and abstracts of the publications were carefully reviewed to select the studies relevant to the subject of the present paper. Through this reading, thirteen publications were excluded from the database. Publications that were not related to the use of digital twins for buildings and those duplicate publications were excluded. After reading the content, fourteen publications were removed.
Phase Three: Scientometric Analysis
Several kinds of measurable bibliometric data were examined using statistics-based methodologies, including the evolution of publications per year, and the number of citations. The metric data was exported from the databases to Microsoft Excel, then processed and graphed to aid in the interpretation of this type of data. Additionally, co-occurrence analysis on the databases' information was carried out using VOSviewer software, which consists of a visualization application that uses natural language processing methods and text mining techniques to help analyze networks. Moreover, citation links between articles or journals, cooperation ties between researchers, and co-occurrence interactions between scientific terms were explored using the software. These are the types of analyses that are
Phase Two: Exclusion Criteria
Firstly, the duplicate publications found in common in the three databases were excluded, resulting in 60 publications to be analyzed. Secondly, all the titles and abstracts of the publications were carefully reviewed to select the studies relevant to the subject of the present paper. Through this reading, thirteen publications were excluded from the database. Publications that were not related to the use of digital twins for buildings and those duplicate publications were excluded. After reading the content, fourteen publications were removed.
Phase Three: Scientometric Analysis
Several kinds of measurable bibliometric data were examined using statistics-based methodologies, including the evolution of publications per year, and the number of citations. The metric data was exported from the databases to Microsoft Excel, then processed and graphed to aid in the interpretation of this type of data. Additionally, co-occurrence analysis on the databases' information was carried out using VOSviewer software, which consists of a visualization application that uses natural language processing methods and text mining techniques to help analyze networks. Moreover, citation links between articles or journals, cooperation ties between researchers, and co-occurrence interactions between scientific terms were explored using the software. These are the types of analyses that are most closely related with the concept of scientometric analysis, with VOSviewer being frequently used in scientific research [44].
Phase Four: Synthesis of the Results
Finally, a careful analysis of the publications was conducted to synthetize the results. First, the main topics of information of the publications were identified totaling four main topics. This analysis and resulting groups of publications allowed a structured presentation of information. The state of the art for each of the four main topics was established by a rigorous literature review. The methods for creating digital twins for each publication were Energies 2022, 15, 7002 5 of 17 also identified. As a result, research gaps, findings, conclusions, and present shortcomings were found for each publication connected to each topic.
Study Characteristics
The evolution of publications per year and the respective types are shown in Figure 2. The number of publications on journal papers increased in 2021 and 2022. Therefore, the topic discussed in this paper is of current and high interest in the scientific community. From the total of 32 publications, four were review papers, which were already presented in the introduction section.
tion of information. The state of the art for each of the four main topics was established by a rigorous literature review. The methods for creating digital twins for each publication were also identified. As a result, research gaps, findings, conclusions, and present shortcomings were found for each publication connected to each topic.
Study Characteristics
The evolution of publications per year and the respective types are shown in Figure 2. The number of publications on journal papers increased in 2021 and 2022. Therefore, the topic discussed in this paper is of current and high interest in the scientific community. From the total of 32 publications, four were review papers, which were already presented in the introduction section. Figure 3 shows the distribution of publications on digital twins for energy efficiency by country. As shown, Italy has published the majority of the studies, accounting for 19.3% of total publications in this research field. Italy is followed by the United Sates with 12.9% of the publications, and by the United Kingdom, Mexico and Australia which each have 9.6% of the publications. Figure 3 shows the distribution of publications on digital twins for energy efficiency by country. As shown, Italy has published the majority of the studies, accounting for 19.3% of total publications in this research field. Italy is followed by the United Sates with 12.9% of the publications, and by the United Kingdom, Mexico and Australia which each have 9.6% of the publications.
The absolute number of citations for each publication was also analyzed. The ten most cited articles are presented in Table 1.
Regarding the source of publications, only the two highest forms of publications, journal articles and conference papers, were considered. A total of 14 journals were identified. Sustainability and Energy and Buildings were found to be the most frequent sources of publication for this topic. These journals and the conference papers resulted in 32 articles published between 2019 and 2022.
Keywords Co-Occurrence
For conducting this analysis, initially a normalization of ambiguous keywords was necessary. For example, keywords in plural ("buildings") were adjusted to singular ("building"), to avoid the creation of different clusters of keyword frequency. The keywords with a minimum co-occurrence of two are exhibited in a network map (Figure 4), created using the VOSviewer software after reading through the literature's keywords. The weight of nodes Figure 4 is represented by their size. The weight of a node and a word is proportional to their size. The strength of two nodes is determined by the distance between them. A stronger relationship is revealed by a shorter distance. The line that connects two keywords denotes that they appeared together. The thicker the line, the more likely they are to appear together. A cluster is formed by nodes of the same color. The absolute number of citations for each publication was also analyzed. The ten most cited articles are presented in Table 1. Regarding the source of publications, only the two highest forms of publications, journal articles and conference papers, were considered. A total of 14 journals were identified. Sustainability and Energy and Buildings were found to be the most frequent sources of publication for this topic. These journals and the conference papers resulted in 32 articles published between 2019 and 2022.
Keywords Co-Occurrence
For conducting this analysis, initially a normalization of ambiguous keywords was necessary. For example, keywords in plural ("buildings") were adjusted to singular ("building"), to avoid the creation of different clusters of keyword frequency. The keywords with a minimum co-occurrence of two are exhibited in a network map (Figure 4), created using the VOSviewer software after reading through the literature's keywords. The weight of nodes and words in Figure 4 is represented by their size. The weight of a node and a word is proportional to their size. The strength of two nodes is determined by the distance between them. A stronger relationship is revealed by a shorter distance. The line that connects two keywords denotes that they appeared together. The thicker the line, the more likely they are to appear together. A cluster is formed by nodes of the same color. As expected, the keywords "energy efficiency" and "digital twin" have the highest frequency. The network presented in Figure 4 shows six clusters of keywords. The As expected, the keywords "energy efficiency" and "digital twin" have the highest frequency. The network presented in Figure 4 shows six clusters of keywords. The keywords such as "internet of things" and "intelligent buildings" show strong links with "digital twin", which may indicate that energy efficiency could be improved by the effect that these technologies can have regarding the subjects that these keywords represent. This cluster also shows connection with the keyword "BIM" because BIM is considered the most used technology as a digital representation in a digital twin. The keyword "energy efficiency" displays strong links with other keywords forming another cluster: "artificial intelligence", "automation", "zero energy buildings" and "information management". As a result, these terms could refer to processes that improve a building's energy efficiency.
Another cluster is formed by the keywords "facility management", "maintenance" and "energy utilization", which reveals studies on DT focused on existing buildings that are working with the energy management of the building stock.
There is also a cluster formed by the keywords "architectural design", "building energy model" and "energy efficiency", which indicates studies on design optimization linked with energy simulation models.
Another cluster includes the links between "genetic algorithms", "artificial neural network" and "decision making". These secondary keywords represent relevant fields of study.
Publications on Digital Twins for Energy Efficiency
Key topics were found when analyzing the publications. Table 2 lists the publications related to each topic. Although DT for energy efficiency is mainly used during the use and maintenance of a building, some researchers investigated how to optimize the building design through the implementation of DT (Topic 1. Design optimization). During the use and maintenance of a building, two relevant approaches were considered: one is based on a user-centric approach and is focused on the comfort of occupants (Topic 2. Occupants' comfort); and the other is focused on the building performance and its maintenance (Topic 3. Building operation and maintenance). Finally, many researchers focused their studies on analyzing real building energy data collected through DT to simulate and forecast future situations (Topic 4. Energy consumption simulation).
Topic 1-Design Optimization
This topic brings together publications associated with the analysis of optimization of the architectural design to save energy with different approaches. Tariq et al. [45] developed a digital twin model of a solar chimney to maximize the number of air changes per hour. Artificial intelligence methods based on artificial neural network were used in the study to maximize energy efficiency by the calculation of the number of air changes, which also minimized environmental emissions. In another publication, Tariq et al. [46] also presented the study of a digital twin for a solar chimney, having adopted a multivariable integrated approach to the development of prediction of optimal and sub-optimal outcomes in a variety of external and internal influencing parameters on energy efficiency as well as environmental footprints in various climatic zones. The study correlated the results with various social, economic, energetic, political and environmental problems of the countries. The case study of the solar chimney provided a viable solution to the rising energy consumption in the case studies.
Zhao et al. [48] created a building energy model (BEM) for an existing building using laser scanning to identify and evaluate the feasibility of retrofitting schemes, based on the concept of nearly zero-energy buildings (nZEBs). The aim of the study was to use a scan-to-BIM-based digital twin to improve energy efficiency in buildings using clean energy strategies. DesignBuilder was used to simulate the power generation of solar photovoltaic Massafra et al. [47] also worked with the BEM model, but they focused on the proposition of a workflow that integrates Heritage Building Information Modeling (HBIM) and Building Performance Simulation (BPS) tools for the energy improvement in an Italian case study. The study proposed energy intervention measures, computed construction costs and predicted benefits during the intervention in terms of thermal demand. At the end, different intervention combinations were compared indicating the optimal solution for the energy improvement of the building concerning energy, economic and financial issues.
In their study, Lydon et al. [50] focused on the energy domain's modeling technique for supporting the construction of a DT for a multifunctional building element. The thermal design of a heating and cooling system combined with a lightweight roof structure was explored in this research. The design of the concrete roof structure was optimized to produce a low embodied energy construction element that was thermally activated to provide space conditioning from a renewable geothermal source. Also analyzing an HAVC system, Trancossi et al. [49] proposed the design of a revolutionary thermoelectric heating and cooling system for an energy-efficient container house. The goal of this study was to create a thermoelectric heat pump that used the junction box of solar modules and Peltier cells as heat sources, as well as the design and thermodynamic evaluation of such a heat pump. A revolutionary thermoelectric air conditioning system and its integration in a container house were demonstrated in this research.
The goal of the study of Kaewunruen et al. [51] was to define and visualize what an NZEB was, as well as to assess the costs and technical issues of solutions that meet the NZEB criteria and can be implemented in existing buildings. Using a case study, adjustments to the thermal characteristics increased the building's efficiency and resulted in a 6.76% reduction in energy demand. Further analysis simulated the potential energy production that might be derived from the use of solar photovoltaic technology that covered 60% of the roof space, as well as the associated expenses.
Focused on existing buildings, the study of Kaewunruen et al. [52] proposed to focus on elements that can help existing buildings function better and be more sustainable, aiming for achieving NZEB goals. The research focus was an existing townhouse in Washington, DC, to see how the NZEB concept may be used to retrofit or reconstruct the architecture of a structure. The study modeled an existing townhouse to assess the current condition and produce optional models for enhancing energy efficiency. This study presented three models, two solutions, and one alternate option for improving energy efficiency and lowering the carbon footprint.
Topic 2-Occupants' Comfort
This topic embraces publications that are focused on implementing digital twins for energy saving and comfort satisfaction.
Wang et al. [53] used a digital twin in intelligent buildings, using a deep learning approach, with the aim to evaluate residents' environmental satisfaction. The study emphasized the use of Data Fusion Algorithm in Wireless Sensor Networks (WSNs). Although the study focused on satisfaction, the researchers discussed the strategies of energy efficient building digital twins, where BIM is central. The authors mentioned that digital twins in buildings can be regarded as an expression of "BIM+", born to digital descriptions. The study of Zaballos et al. [56] also focused on environmental comfort and the use of wireless sensor networks. Their work proposed a smart campuses concept to investigate the integration of BIM tools with Internet of Things (IoT) for environmental monitoring and emotion detection systems in order to provide insights into the occupants' level of comfort. To improve energy efficiency, the comfort-monitoring system might also be employed to monitor physical characteristics of educational facilities.
Martínez et al. [54] proposed the use of a Smart Readiness Indicator for university buildings as a reference environment for energy efficiency and COVID-19 prevention mod-els. This metric measured a building's (or a building unit's) ability to adapt its overall performance to the needs of its occupants (while simultaneously improving energy efficiency) and to allow energy flexibility in the performance based on various parameters (such as CO 2 , temperature, humidity). This article proposes a "measure-analyze-decide and act" methodology to quantify the indicator from a holistic perspective. The DT would act as a virtual support to show available services in a unified and harmonized way to the university community.
Bayer et al. [57] described an approach for validating and calibrating a digital twin of a prefabricated multifunctional radiant heating façade element. Thermal simulation was used to assess two distinct control strategies for minimally invasive radiant heating systems in terms of energy efficiency and variation of the room temperature from a given fixed point. The room temperature was considered a relevant parameter for the characterization of the thermal comfort and therefore for the satisfaction of the tenants. By implementing measured boundary conditions, the validation was carried out by aligning simulation results with measured data. The minimum input flow temperature, the control method, and the thermal behavior can all be multiplied and used in the refurbishment process. Also focused on a room, [55] presented a hybrid methodology that incorporates physics-based and machine learning methodologies to create a digital twin. A case study for a digital twin of a single room is presented, and the preliminary cooling energy comparison between the physical test and the digital twin model was presented. The aim of the study was to create a hybrid digital twin model that uses the best features of physics and machine learning approaches to capture the dynamic behavior of the building's HVAC system for energy efficiency and occupant satisfaction.
Clausen et al. [58] presented a design and implementation of a framework for digital twins for buildings in which the controlled environments are represented as digital entities. In this study, digital twins are parametrized models that are integrated into a generic control algorithm that performs predictive control using data on weather forecasts, current and planned occupancy, as well as the current state of the controlled environment. The technique was shown in a case study of a university building, where a digital twin was utilized to manage heating and ventilation. Their experiments have shown that the suggested system may maintain comfort levels that are comparable to those maintained by existing control strategies performed by a commercial building management system while also allowing for the application of energy-saving strategies.
Zakharov et al. [59] provided a method for automating the management of a heat supply in a smart building with the aim to lower financial costs while maintaining a high level of thermal comfort. The authors analyzed the Internet of Things-based methods for heat supply process automation and proposed a method to get comprehensive data from temperature sensors. The analysis data allowed real-time monitoring of heat changes and the production of appropriate heat management solutions. The authors also created algorithms for classifying rooms based on the temperature mode characteristic. The proposed approach serves as the foundation for an intelligent data analysis system capable of temperature mode modeling and control of the building's heat supply process. The method also provided in-depth analysis and created a digital twin of the case study.
Topic 3-Building Operation and Maintenance
Regarding this topic, the publications are focused on maintenance and using DT to improve energy efficiency in buildings.
Vering et al. [60] used Product Lifecycle Management and Digital Twin Design for HVAC systems, firstly utilizing an energy recovery ventilation (ERV) simulation model in order to achieve high efficiency for the equipment. To forecast physical system behavior, they created a DT prototype of the ERV unit to test functionality and applicability by calibrating the model against physical twin measurement data. The method allows DT to generate predictions for several situations with a view to increasing the system efficiency. The potential of predictive maintenance with various routines regarding air filter replace-Energies 2022, 15, 7002 11 of 17 ment for the ventilation system was successfully demonstrated. The use of a DF for HVAC systems analysis increased the lifecycle efficiency in terms of both energy and total costs.
The study of Hosamo et al. [44] also focused on the use of DT for maintenance. They specifically used predictive maintenance strategies to predict the faults in the AHU units. Three aspects were required to implement a practical predictive maintenance program: (i) the collecting of large amounts of data from sensors such as temperature, pressure and air volume, which is critical to understanding how the equipment operates; (ii) a platform for implementing automated fault detection and diagnostics (AFDD) algorithms and determining how to optimize the maintenance system and forecast failures; and (iii) a BIM to avoid the use of traditional data transfer methods (2D models) and depict the findings in a 3D model.
Tan et al. [61] proposed a visualized operation and maintenance platform for a DT lighting system through the combination of computer vision and BIM. The research contributes to the intelligent decision-making on lighting control, which can result in reducing energy consumption and electricity costs.
Torres et al. [63] implemented an artificial neural network as a DT for several existing hotels in Mexico. The DT was then used to model different scenarios for partially replacing energy consuming devices in the hotels. The goal of this research was to obtain the scenario that best combined three indicators: energy use index, equivalent-CO 2 -emission index and the energy-cost index. The results aimed to better inform managers in their decision-making processes on the replacement of energy consuming devices in the existing hotels.
Blume et al. [62] outlined a method for developing a data-driven DT for technical building services (TBS) such as cooling towers (CT). In order to enhance operational strategies, this research analyzed the relationships between operational business and technological system. The comparison of various DM algorithms showed that they were all capable of accurately and quickly predicting key operational KPIs like cooling capacity and electricity demand. Accurate cooling capacity predictions offer important insights into the overall system performance and operating dependability, two factors that are essential for the entire production system.
Topic 4-Energy Consumption Simulation
The articles related to this topic are focused on the extensive use of data analysis for energy efficiency.
In the study of Ni et al. [64], a digitization framework for historic buildings is proposed. Advanced techniques such as the Internet of Things (IoT), cloud computing and AI were used to construct DTs for historic buildings. Through analytics of real-time and historical data for specified features, this study employed DT to protect, forecast and optimize building energy efficiency. The DT can accurately portray real-time functioning circumstances and predict future states of historic buildings based on continuously acquired sensing data. With trained AI models, the framework also supports maintenance in order to achieve energy efficiency optimization and long-term preservation.
Also using IoT, AI and machine learning, Agostinelli et al., in the studies [65] and [66], focused on the potential of digital-twin-based methods and approaches for achieving an intelligent optimization and automation system for residential district energy management. The use of integrated dynamic analytic algorithms enabled the evaluation of several energy efficiency intervention scenarios aimed at attaining virtuous energy management of a complex (16 eight-floor buildings) while maintaining current internal comfort and climatic conditions. Using BIM as-built models, IoT and AI, a smart-energy-grid management system was created, resulting in a large as-performed and up-to-date city digital twin. Furthermore, the paper explored the idea that the notion of DT is exceptionally transversal and applicable both to macroscopic and microscopic sizes (from district to apartment). The results of DT-based real-time monitoring can help bridge the gap between building energy performance (as simulated by energy diagnosis) and actual building performance. This was made feasible through data analysis, which enabled more sophisticated energy management techniques to be developed, as well as revealing ineffective user behaviors and rules.
HosseiniHaghighi et al. [68] also focused on an urban scale, developing a city digital twin in CityGML format. The study estimated the district's thermal demand in an urban building energy model (UBEM). Moreover, they evaluated an alternative scenario with the configuration of heat pumps and photovoltaic systems on individual buildings, demonstrating the potential of UBEM for retrofit decision-making, as well as the district's ability to plan net-zero actions. Similarly, Agostinelli et al. [70] adopted an urban scale approach to create a DT of the port area. They used the DT to develop energy efficient procedures for the port's operations. Furthermore, they ran simulations with data from open-source platforms about renewable energy systems (RESs). The aim of this research was to inform the decision-making process to integrate RESs in the port. The port was meant as a starting point to facilitate further investigation and implementation of the DT and the strategies adopted in the surrounding city area. This approach also used building energy models of the buildings integrated in a BIM model for the DT. Also on a city scale, Bass et al. [69] outlined a method for developing urban-scale building energy models, and illustrated the distribution of potential savings from energy efficient building systems. Several corporations, universities and national laboratories are working on urban-scale energy modeling, which will allow for the production of a digital twin of buildings for simulation and optimization of real-world, city-sized areas. A utility's top five use cases and nine monetization scenarios for a digital twin of buildings were reported in this study.
Francisco et al. [67] presented how the results of DT-based real-time monitoring can help bridge the gap between building energy performance (as simulated by energy diagnosis) and actual building performance. This was possible through data analysis, which enables more sophisticated energy management techniques to be developed, as well as revealing ineffective user behaviors and rules. While a building may be efficient overall, it may not be efficient during specific periods, and a building that is inefficient overall may be efficient at specific periods. Fluctuations in energy efficiency over time revealed whether a facility was regularly performing well, consistently underperforming, or whether there was a significant change in performance. This is an important distinction that can help decision makers decide whether to look into operational procedure changes or potential for more capital-intensive improvements. Daily efficiency indicators that were temporally divided and integrated into digital-twin-enabled energy management platforms could transform energy management across a portfolio of buildings.
Pignatta and Alibrandi [71] presented ongoing research to develop a risk-informed Digital Twin (RDT) for the decarbonization of the built environment. A smart building located in Australia was used to demonstrate the framework. The uncertainty quantification module of the DT assesses the probability distribution of the daily energy consumption, while the risk analysis module forecasts the annual lifecycle energy consumption.
Discussion
It is relevant to note that most of the studies that focused on design optimization had a retrofit approach considering the entire building or a specific building element to be adapted (e.g., solar chimney, HVAC system). That is, they implemented DT to simulate and inform retrofit scenarios that would be more energy efficient. Only a few studies focused on the initial design. However, even in these cases they were focused on the initial design of certain elements meant to be placed in existing buildings. This approach is consistent with the definitions of DT, which is meat to mimic an existing entity. However, a possible direction which is still to be explored refers to the integration of the original building design as a DT throughout the life cycle of the building. Conceptualizing the design to work as a DT for energy efficiency purposes from the beginning may not only affect the design process but also the final design itself. How it may be affected and potential benefits in terms of energy efficiency are still to be evaluated. The papers analyzed under topic 2 show that the use of a DT was useful in maintaining or improving appropriate levels of comfort for the occupants while also increasing the energy efficiency of the building or cluster of buildings. In this sense, the automation of some systems through the DT was a key enabler for increasing energy efficiency across the studies in this category. Similarly, IoT and wireless sensor networks seem to play a key role in continually updating the DT, allowing the optimization in energy efficiency while maintaining adequate comfort levels. As shown, most of these publications that consider occupants' comfort focus their studies on university environments. This is probably a reflection of both the convenience of investigating where the research is being conducted and the need of university campuses for more energy efficient solutions. However, equally extensive studies on other types of environments are needed. Such studies may provide insight into the specificities and difficulties faced in in other types of buildings (such as industrial, commercial centers, among others) regarding the interface between occupant comfort and energy efficiency. Challenges faced in other environments may also push the resolution of problems and the optimization of DT processes which could benefit its implementation focused on occupant comfort for any scenario.
The operation and maintenance (O&M) of buildings and infrastructure represents a strategic activity to ensure that they work as intended over time and to lower energy consumption and maintenance costs at building level. The studies presented in topic 3 show the potential of use of DT and predictive maintenance to forecast faults in building equipment and systems (HVAC and MEP systems). This requires a huge collection of data from sensors and methods using AI to automatically detect faults in order to then optimize the building maintenance. The integration of IoT, BIM and AI is becoming more widely used, and DT technology is what O&M will look like in the future. A possible direction in this research line are the challenges posed by the amount of data collection, that is, the creation of intelligent models with this data to provide facility managers the ability to decide and act to improve the operation and maintenance of buildings.
Notably, most of the publications on topic 4 present studies at a macro scale, that is, the development of a DT for an urban or city level. Rather than just allowing them to develop virtual models, DTs enable cities to perform simulations of new policies or infrastructure projects and preview their possible impacts before making decisions in the real world. Future smart cities will be shaped by urban digital twins, but they still have a long way to go. Large data sets that can be analyzed and processed by a variety of sophisticated algorithms and computer models would provide the foundation for these DTs. However, it would call for the use of the cloud as well as the IoT and sensors that gather data on the ground. Another future direction is challenging researchers to create algorithms to evaluate the environmental impact of a city and suggest green components for decarbonization of the built environment. Regarding the methodologies used for creating the DT, the analyzed papers provided different methodologies. Many studies used machine learning (ML) models. Several ML algorithms are available for the modeling step of a DT: supervised, predictive, unsupervised, or descriptive. Within the publications revised, supervised ML algorithms were frequently used. Supervised ML algorithms include regression approaches (e.g., linear, polynomial regression), classification approaches (e.g., support vector machines, decision trees,) or probabilistic algorithms (e.g., ANN, Naive Bayes) [62]. Artificial neural networks (ANN) was identified as the one most frequently applied in the revised literature [44][45][46]53,62], followed by support vector machines, decision trees [44,59] and Bayesian networks [65,71]. Other researchers utilized a hybrid approach (grey-box models), combining knowledge about the system (white-box models) and statistical information from the data (black-box models) [55].
The majority of the studies reviewed here also adopted BIM as the foundation for the DT, that is, the digital replica of the built asset [44,[47][48][49]53,56]. Within the publications that mentioned the specific software used, Autodesk Revit is the most frequent BIM platform for modeling the DT. Furthermore, it is often integrated with other applications from the same provider such as Green Building Studio, Insight and Dynamo, usually for individual building models [51,52,56]. For those cases which assessed more than one building simultaneously, building performance simulation using BEM models at building level [47,48] and UBEM models at urban level (city) combined with GIS dataset [65,68] were frequent strategies.
The majority of the papers also used IoT devices (e.g., sensors), using smart buildings that track data in real-time, as case studies [54][55][56]65]. Data were obtained from different kinds of sensors that measure various parameters of interest, such as CO 2 , lighting levels, temperature and humidity. Most of the case studies are residential buildings [48,49,52,65], followed by university buildings [56,59,67].
It is important to note that in most of the publications the DT was created for a specific goal. Therefore, the choice of method for its creation is directly linked to the kind of output expected from the DT. Furthermore, the availability of data, tools for data collection and modeling tools also seemed to influence the choice of method.
Conclusions and Future Trends
This paper presented an in-depth review of current digital twins' applications in the field of energy efficiency for buildings and, thus, contributes to its body of knowledge. A total of 32 articles published between 2019 and 2022 were identified and reviewed. This review analysis classified the literature into four different topics related to applications of digital twins for building energy efficiency: topic 1. Design optimization; topic 2. Occupants' comfort; topic 3. Building operation and maintenance; and topic 4. Energy consumption simulation. The relatively small number of publications found across the three databases, combined with how recently they were published, demonstrates that the use of digital twins in the field of buildings energy efficiency is still novel. Furthermore, as shown in Section 3, the increase of publications in the last two years demonstrates a recognition of its potential for building energy efficiency. Therefore, it is expected that further topics of application will emerge in the near future as well as further specialization and sub-division of the four topics already identified.
Many different AI methods based on artificial neural networks, algorithms and data analytics were investigated for optimizing building design, improving predictive maintenance and forecasting energy consumption. Through analytics of real-time and historical data, existing studies employ digital twins to protect, forecast and optimize building energy efficiency. The content analysis of this study, specific to DT applications in building energy efficiency, shows that BIM is the starting point of DT platforms which also integrate cloud computing and IoT technologies. This is in line with previous studies that also found most DT applications for the construction industry to be centered around BIM [36]. For the specific application reviewed in this study, we did not find DT to be synonymous to a BIM model as [36] had indicated. Only a few of the studies reviewed here used DT as synonymous to a BIM model.
The research trends demonstrate that there is an increasing interest in implementing DT in the use and maintenance of buildings, and one of the main research interest areas is maintenance management and simulation to improve energy efficiency. Considering that DTs are based on sensors to capture real-time data, developing classification and integration systems and data analysis were found to be the most challenging needs for the future. Future directions of all identified DTs for energy efficiency applications should focus on current gaps such as lack of data integration systems and complex decisionmaking processes.
The selection of the most suitable sensors to capture data for each application and the development of automatic means to transfer and process data from different databases are needed. Cloud computing and IoT are the basis of this change. Furthermore, data analysis, such as machine learning, allows the creation of algorithms to produce energy prediction models. Future research should also focus on improving these algorithms to make the decision-making processes more efficient, accurate and flexible. Another important research direction refers to improving data visualization for non-experts to facilitate understanding and interpretation of the data analysis, forecasts, etc.
The analyzed publications showed an array of different methods for implementing the DT both for building and city scales. DTs often have a specific purpose and heavily depend on the data and means available for collecting such data. Therefore, it would be relevant for future research to focus on the results yielded by DTs produced through different methods, comparing them. Establishing the potentials and shortcomings of each method specifically for this purpose would help inform future research and DT implementations. Furthermore, there is still a lack of consensus on what can and what cannot be considered a DT [27]. Researchers consider DTs differently. Some researchers do not include exchanging data in real-time between the twins in their studies. In some papers there is no differentiation between DT and BIM. Although not being the goal of this review paper, further analysis is needed to clearly outline what can be considered a DT in the construction industry, and for energy efficiency more specifically.
Conflicts of Interest:
The authors declare no conflict of interest. | 11,130 | 2022-09-23T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Dynamo Action of Jupiter's Zonal Winds
The new data delivered by NASA's Juno spacecraft significantly increase our understanding of Jupiter's internal dynamics. The gravity data constrain the depth of the zonal flows observed at cloud level and suggest that they slow down considerably at a depth of about $0.96\,r_J$, where $r_J$ is the mean radius at the one bar level. Juno's magnetometer reveals the planet's internal magnetic field. We combine the new zonal flow and magnetic field models with an updated electrical conductivity profile to assess the zonal wind induced dynamo action, concentrating on the outer part of Jupiter's molecular hydrogen region where the conductivity increases very rapidly with depth. Dynamo action remains quasi-stationary and can thus reasonably be estimated where the magnetic Reynolds number remains smaller than one, which is roughly the region above $0.96\,r_J$. We calculate that the locally induced radial magnetic field reaches rms values of about $10^{-6}\,$T in this region and may just be detectable by the Juno mission. Very localized dynamo action and a distinct pattern that reflects the zonal wind system increases the chance to disentangle this locally induced field from the background field. The estimates of the locally induced currents also allow calculating the zonal flow related Ohmic heating and associated entropy production. The respective quantities remain below new revised predictions for the total dissipative heating and total entropy production in Jupiter for any of the explored model combinations. Thus neither Ohmic heating nor entropy production offer additional constraints on the depth of the zonal winds.
Introduction
Two of the main objectives of NASA's Juno mission are to measure Jupiter's magnetic field with unprecedented resolution and to determine the depth of the fierce zonal winds observed in the planet's cloud layer. The first Juno-based internal magnetic field model JRM09 (Connerney et al. 2018) already provides the internal magnetic field up to spherical harmonic degree 10 and shows several interesting features that seem unique to Jupiter's dynamo (Moore et al. 2018). Better resolved models are expected as the mission continues.
Based on Juno gravity measurements (Iess et al. 2018), Kaspi et al. (2018) deduce that the speed of the equatorially antisymmetric zonal flow contributions must be significantly reduced at a depth of about 3000 km below the one bar level, which corresponds to a radius if 0.96 r J . Kong et al. (2018) come to roughly similar conclusions with a different inversion procedure, but they also point out that the solution is not unique. While the gravity data only allow constraining the equatorially antisymmetric winds, the results likely also extend to the symmetric contributions. New interior models (Guillot et al. 2018;Debras & Chabrier 2019) and also the width of the dominant equatorial jet (Gastine et al. 2014;Heimpel et al. 2016) both support the idea that the fast zonal winds are roughly confined to the outer 4% in radius.
The fast planetary rotation enforces geostrophic flow structures with minimal variation along the direction of the rotation axis. Geostrophic zonal winds are thus expected to reach right through the planet's gaseous envelope, and it remains unclear which mechanism limits their extend in Jupiter. The demixing of hydrogen and helium and the subsequent precipitation of helium deeper into the planet offers one possible explanation (Militzer et al. 2016). This process would have established a helium gradient that suppresses convection. In Jupiter, this stable helium-rain layer may start somewhere between 0.93 and 0.90 r J and perhaps extends down to 0.80 r J (Debras & Chabrier 2019). Note, however, that ab initio simulations by Schöttler & Redmer (2018) predict that the hydrogene/helium demixing may not even have started. Recent analysis of gravity measurements by the Cassini spacecraft suggest that Saturn's zonal winds may only reach down to about 0.85 r S (Iess et al. 2019;Galanti et al. 2019). Since the stably stratified layer is thought to start significantly deeper, at about 0.62 r S according to (Schöttler & Redmer 2018), it cannot be the reason for this limited depth extend of Saturn's zonal winds.
A second possibility to brake the zonal winds at depth are Lorentz forces. Lorentz forces are tied to dynamo action and thus to the electrical conductivity profile. Ab initio simulations for Jupiter suggest that ionization effects lead to a super-exponentially increase of the electrical conductivity in the outermost molecular gas envelope. We will refer to this layer as Jupiter's Steeply Decaying Conductivity Region (SDCR) in the following. At about 0.9 r J , hydrogen, the planet's main constituent, becomes metallic, and the conductivity increases much more smoothly with depth (French et al. 2012) (see panel a) of fig. 1). Though dynamo action and the potential braking of the zonal winds due to Lorentz forces are classically attributed to the metallic region, they may already become significant where the electrical conductivity reaches sizable levels in the SDCR.
Different dynamo-related arguments have been evoked to estimate the depth of the zonal winds without, however, directly addressing the role of the Lorentz forces. Liu et al. (2008) estimate that the Ohmic heating caused by zonal-wind related induction would exceed the total heat emitted from Jupiter's interior, should the winds reach deeper than 0.96 r J with undiminished speed. Ridley & Holme (2016) argue that the secular variation of the magnetic field over 30 years of pre-Juno observations is rather small and thus likely incompatible with an advection by undiminished zonal winds. They conclude that the winds cannot reach to depths where the magnetic Reynolds number exceeds one and more significant induction can be expected. This puts the maximum depth somewhere between 0.96 r J and 0.97 r J , as we will discuss below. A recent analysis by Moore et al. (2019) suggests that the observations over a 45 year time span including Juno data would by compatible with zonal wind velocities of 2.4 m/s at 0.95 r J , two orders of magnitude smaller than observed in the cloud layer.
Another interesting question is how much the dynamo action in the SDCR contributes to Jupiter's total magnetic field. Using a simplified mean-field approach, Cao & Stevenson (2017) predict that the radial component of the Locally Induced Field (LIF) may reach 1 % of the background field and could thus be detectable by the Juno magnetometer. Wicht et al. (2019) analyze the dynamo action in the SDCR of fully self-consistent numerical simulations that yield Jupiter-like magnetic fields. Because of the dominance of Ohmic diffusion, the dynamo dynamics becomes quasi-stationary in the SDCR of their simulations. A consequence is that the locally induced electric currents and field can be estimated with decent precision when flow, electrical conductivity profile, and the surface magnetic field are known. Refined information on all three ingredients has recently become available for Jupiter, allowing for a fresh look on the problem.
Here we use three different zonal flow models, two electrical conductivity models, and the new Juno-based magnetic field model JRM09 to predict the electric currents and magnetic fields produced in Jupiter's SDCR. In addition, we also derive new estimates for the total dissipative heating and related entropy production and explore whether either value is exceeded by the zonal-flow related Ohmic dissipation.
The article starts off with outlining the methods and introducing the used data in sect. 2. Sect. 3 discusses dissipative heating and entropy production in Jupiter. Estimates for dynamo action, Ohmic heating, and entropy production are then presented in sect. 4. Sect. 5 closes the article with a discussion and conclusion.
Estimating Dynamo Action
The ratio of inductive to diffusive effects in the induction equation, can be quantified by the magnetic Reynolds number where λ = 1/(µσ) is the magnetic diffusivity, with µ the magnetic permeability and σ the electrical conductivity. Angular brackets generally denote rms values at a given radius throughout the paper; thus U stands for θ being the colatitude and φ the longitude. The typical length scale D is hard to estimate, and the planetary radius is often used for simplicity. Where σ decreases steeply in the SDCR, however, the length scale is determined by the conductivity or magnetic diffusivity scale height and the modified magnetic Reynolds number should be used. Since D λ is small and λ decreases steeply with radius, most of the SDCR is characterized by a small magnetic Reynolds number Rm (1) < 1, and the magnetic field dynamics becomes quasi-stationary (Liu et al. 2008), obeying the simplified induction equation Here, j is the current density andB the strong background field produced by the dynamo acting deeper in the planet. The locally induced fieldB is given by Ampere's law: The steep σ profile dominates the radial dependence of j andB in the SDCR. The current density is thus dominated by the horizontal components, where radial gradients inB contribute (Liu et al. 2008;Wicht et al. 2018): Index H denotes the horizontal components; the radial current can be neglected in comparison. Along the same lines, the horizontal components of eqn. (6) can be approximated by wherer is the radial unit vector. Integration in radius yields the integral current density estimate introduced by Liu et al. (2008), which we identify with an upper index (I): (10) The square brackets with a lower index r J indicate that the expression should be evaluated at the outer boundary.
For a predominantly zonal flow, we can use the approximation where U φ is the zonal flow component andθ andφ are unit vectors in latitudinal and azimuthal direction, respectively. The integral estimates for the two horizontal current components are then given by Since the latitudinal length scale of the zonal winds is smaller than the azimuthal length scale of the magnetic field, we expect that the latitudinal component dominates. The integral estimate requires the knowledge of the surface currents. While the surface currents are certainly very small, the scaled version σ(r)/σ(r J ) j may remain significant. Liu et al. (2008) argue that neglecting the surface contribution at least provides a lower bound for the rms current density. Wicht et al. (2019) confirm that the dynamics indeed becomes quasi-stationary in the SDCR of Jupiter-like dynamo simulations where Rm (1) < 1 and show that j θ is indeed the dominant current component in the SDCR of their Jupiter-like dynamo simulations. They also report that the simplified Ohm's law for a fast moving conductor, provides a significantly better estimate than j (I) . We identify the respective current estimate with an upper index (O). The general Ohm law, also contains currents driven by the electric field, which reduces to E = −∇Φ in the quasi-stationary case, where Φ is the electric potential. In the SDCR, this contribution likely proves secondary because the potential differences remain small compared to the induction by fast zonal winds (Wicht et al. 2019).
As the electrical conductivity decreases in the SDCR, the magnetic field approaches a potential field with its characteristic radial dependence. We use this dependence to approximate the background field with where the index denotes the magnetic field contribution at spherical harmonic degree . This provides a decent approximation as long as the LIF remains a small contribution of the total field (Wicht et al. 2019).
Given a surface field model and an electrical conductivity profile, Ohm's law for a fast moving conductor and a predominantly zonal flow suggests When using this result to constrain the outer-boundary currents, the alternative integral estimates, eqn. (12) and eqn. (13), yield and respectively. A comparison of the estimates shows that j (I) and j (O) will remain very similar at shallow depths. When the flow decays very deeply with depth, however, the integral contributions in eqn. (18) and eqn. (19) will dominate below some radius and cause larger deviations, as we will see below.
Calculating the LIF requires to uncurl Ampere's law, which reduces to integrating eqn. (8) in the SDCR. When using j (O) , this yieldŝ Since the electrical conductivity profile rules the radial dependence, the integral can be approximated bŷ We have assumed here that the LIF vanishes at the outer boundary. For a dominantly azimuthal flow, the primary LIF component is also azimuthal: This suggests that the rms value scales with Rm (1) , assuming that the correlation between U φ andB r is of little relevance.
The radial LIF can be estimated based on the radial component on the quasi-stationary induction equation (1): When approximating Ohmic dissipation by λB r /D 2 λ , this yields:B which reduces toB for a predominantly zonal flow. This suggest that the rms radial LIF should roughly scale with the second modified magnetic Reynolds number Here D φ is the azimuthal length scale of the background field. Since D λ D φ , the radial LIF is much smaller than its horizontal counterpart (Wicht et al. 2019).
Data
The electric current and LIF estimates discussed above require a conductivity profile, a zonal flow model, and a surface magnetic field model. For the heating and entropy estimates that we will derived in sect. 3, we also need density, temperature, and thermal conductivity profiles. We adopt the interior model calculated by Nettelmann et al. (2012) and French et al. (2012), which is the only one proving all the required information. Note, however, that recent Juno gravity data suggest that Jupiter's interior may be more complex than anticipated in this model (Debras & Chabrier 2019).
Ab initio simulations of the electrical conductivity by French et al. (2012) provide 12 data points at different depths. Fig. 1 shows the values in the outer 20% of Jupiter's radius and the parametrization σ F (r) developed for our analysis. A linear branch, Figure 1: (a) Electrical conductivity profiles in the outer 20% of Jupiter's radius. The black line shows the parametrization σ F (r) of the ab initio simulation data points (black circles) by French et al. (2012). The dotted red line shows the profile published in Zaghoo & Collins (2018), while the solid red line shows the extension σ Z (r) used here. The profiles suggested by Liu et al. (2008) (green) and Nellis et al. (1999) covers the smoother inner part r < r m . An exponential branch, describes the steeper decay for r m < r < r e with b = 7.2. Matching radius r m = 0.89 r J and reference radius r r = 0.77 r J are chosen where ab initio data points have been provided.
A double-exponential branch, is required to capture the super-exponential decrease for r ≥ r e = 0.972 r J . The additional free parameter is c = 10, while σ e = σ(r e ) and The dotted red line in fig. 1 shows the conductivity model used to study dynamo action in Jupiter and Jupiterlike exoplanets by Zaghoo & Collins (2018). This is based on measurements which suggest a higher electrical conductivity in the metallic hydrogen phase than previous data. Unfortunately, Zaghoo & Collins (2018) do not discuss how the results were extrapolated to Jovian conditions. The solid red line in fig. 1 shows the respective parametrization σ Z (r) used for our analysis, which retraces the published curve and connects to previously published parametrizations (green and blue) at lower densities Liu et al. 2008). Note, however, that these parametrizations are based on data which may have been attributed to too low temperatures according to a recent analysis by Knudson et al. (2018).
Though model σ Z (r) is somewhat arbitrary, it serves to illustrate the impact of conductivity uncertainties in our study. Close to r J where conductivities remain insignificant, σ F is many orders of magnitude larger than σ Z . The ratio σ F /σ Z decreases with depth, reaching 10 2 around 0.97 r J and 10 around 0.96 r J . The two models finally cross at about 0.95 r J . At about 0.925 r J , the ratio reaches a minimum of 0.05 and then slowly increases with depth to 0.35 at 0.8 r J . Tab. 1 list values of both conductivity models for selected radii.
Panel b) of fig. 1 and selected values listed in tab. 1 demonstrate that the magnetic diffusivity scale heights D λ differ much less than the conductivities themselves. Electric currents, locally induced fields, and Ohmic heating depend linearly on σ but on different powers of D λ . The differences between the results for the two conductivity models is thus predominantly determined by σ and can easily be scaled from one to the other. Table 1: Rms flow velocities, electrical conductivities σ, magnetic diffusivities λ, diffusivity scale heights D λ , and magnetic Reynolds numbers Rm (1) at selected radii.
The three different zonal flow models explored here are illustrated in fig. 2. Tab. 1 lists rms values U φ at selected radii. All reproduce the observed zonal winds at r = r J (Porco et al. 2003;Vasavada & Showman 2005). We use running averages of the surface profiles with a window width of one degree and represent the result with 256 (nearly) evenly spaced latitudinal grid points for our calculations.
The three flow models differ at depth. The most simple one, U G , assumes geostrophy in each hemisphere, i.e. the flow depends only on the distance s = r sin θ to the rotation axis. Kaspi et al. (2018) describe the depth decay of the equatorially antisymmetric zonal flow with profiles constrained by the Juno gravity measurements. We apply their 'latitude independent' model version to the total zonal flow and refer to this model as U K . The rms amplitude of U K has decreased by one order of magnitude at about 0.95 r J and by two orders of magnitude around 0.925 r J .
We also consider the 'deep' model suggested by Kong et al. (2018), who assume an exponential depth decay and an additional linear dependence on the distance z = cos θ to the equatorial plane. Like for U K , our respective model U Z assumes that the depth and z dependencies, which were originally derived for the equatorially antisymmetric contributions, apply to the whole flow. The rms velocity in U Z decays smoother with depth than in U K , having decreased by one order of magnitude at about 0.935 r J and by two orders of magnitude at about 0.905 r J . Fig. 2 shows that U G and U K have discontinuities at the equatorial plane. These pose a problem when calculating the latitudinal zonal flow derivatives required for the integral estimate j (I) θ (see eqn. (18)). Formally, the derivative becomes infinite at the equator. Practically, however, the impact of the discontinuity depends on the model setup and on the methods used for calculating the derivatives. We tested the impact on rms current density estimates by comparing calculations covering all latitudes with counterparts where the derivatives were explicitly set to zero in a six-degree belt around the equator. Simple first order finite differences with 256 grid points at each radial level are generally used for calculating the derivative. For flow U Z , which has been constructed to avoid the discontinuity (Kong et al. 2018), the belt contributes not more than one percent to j (I) θ at any radius, which is less than the surface fraction it represents. For flow U K , the contribution is even smaller due to the faster decay of the flow amplitude. However, for U G the belt contributes 20% to the rms current for radii below 0.94 r J , which is a clear sign that the unphysical discontinuity causes problems. In order to be on the safe side, we will only consider flow model U Z in connection with estimate j (I) θ below. The radius where Rm (1) = 1, which we will refer to as r 1 in the following, roughly marks the point where the approximations discussed above break down (Wicht et al. 2019). Fig. 3a) illustrates the Rm (1) profiles that result from combining σ F and σ Z with rms values for the three zonal flow models. Tab. 1 compares values at some selected radii. These modified magnetic Reynolds numbers exceed unity between r 1 = 0.957 r J for the combination σ Z and U K and r 1 = 0.967 r J for σ F and U G . All r 1 values are listed in tab. 2.
Green lines in fig. 3 show Rm (1) profiles for a typical convective velocity of 10 cm/s suggested by scaling laws (e.g. see Duarte et al. 2018). Numerical simulations show that the velocity increases with radius, an effect not taken into account here. The comparison of the different Rm (1) profiles suggests that zonal flow related dynamo action should dominate at least in the outer 9% in radius.
For Jupiter's surface magnetic field we use the JRM09 model by Connerney et al. (2018), which provides information up to spherical harmonic degree = 10. The more recent model by Moore et al. (2018) is only slightly different. In order to check the impact of smaller scale contributions, we also tested the numerical model G14 by Gastine et al. (2014), which reproduces Jupiter's large scale field and provides harmonics up = 426. Since it turned out that the impact of the smaller scales is very marginal, the results are not shown here.
Dissipative Heating and Entropy
Production in Jupiter Liu et al. (2008) constrain the depth of the zonal winds by assuming that the related total Ohmic heating should not exceed the heat flux out of the planet. Unfortunately, this assumption is not correct, as we will show in the following. In order to arrive at more meaningful constraints, we start with reviewing some fundamental considerations.
In a quasi-stationary state, where flow and magnetic field are maintained by buoyancy and induction against dissipative losses, the conservation of energy simply states that the heat flux Q o = Q(r o ) through the outer boundary is the sum of the flux Q i = Q(r i ) through the inner boundary and the total internal heating H: Note that neither viscous nor Ohmic heating contribute to H. Since flow and magnetic field are maintained by the heat flux through the system, they cannot be counted as net heat sources (Hewitt et al. 1975;Braginsky & Roberts 1995). When furthermore also neglecting the effects of helium segregation, core erosion, or planetary shrinking as potential energy sources, the only remaining contribution is the slow secular cooling of the planet. The volumetric heat source is then given by where the tilde indicates the hydrostatic, adiabatic, background state (Braginsky & Roberts 1995). Assuming that convection maintains an adiabat at all times, ∂S /∂t remains homogeneous throughout the convective region and obeys (Jones 2014): Here, V dV denotes an integration over the whole convective volume. Note, however, that the thermal evolution could be more complex, should Jupiter indeed harbor stably stratified regions. In order to get a handle on dissipative heating, one has to consider the local heat equatioñ ρT ∂s ∂t where ϕ denotes the volumetric dissipative heat source, and k is the thermal conductivity. When assuming a steady state and adopting the anelastic approximation ∇ · (ρU) = 0, the integration over the shell between the inner boundary r i and radius r yields (37) The left hand side is the total flux through the boundary at r, i.e. the sum of the diffusive contribution and the advective contribution The right hand side of eqn. (37) reflects the influx through the lower boundary Q i plus three volumetric contributions: the slow secular cooling, the dissipative heating, and the adiabatic cooling. Writing the adiabatic cooling in terms of Q A yields the relation where D T = −T /(∂T /∂r) is the thermal scale height. Integrating eqn. (40) over the whole convective volume and using eqn. (33) reveals that the total dissipative heating Φ T is balanced by the total adiabatic cooling: The total adiabatic cooling is actually identical to the buoyancy power P that drives convection and thus the dynamo mechanism. Multiplying the buoyancy term in the Navier-Stokes equation with velocity and integrating over the convective volume to yield the total convective power input indeed gives the same expression (Braginsky & Roberts 1995). Eqn. (41) thus simply states that dissipation is balanced by the power input P to the system, a fact used in many scaling laws to establish how the rms magnetic field strength or rms velocity scale with P (Christensen & Aubert 2006;Christensen et al. 2009;Davidson 2013;Yadav et al. 2013). Eqn. (41) requires to know Q A at each radius. Since Q A itself depends on the distribution of dissipative heat sources, however, an additional condition is required. Assuming that Ohmic heating and adiabatic cooling not only cancel globally but, at least roughly, also at each radius offers a simple solution used in most scaling laws (though never stated explicitely). With the exception of thin thermal boundary layers, the heat flux is then dominated by the advective contribution, so that Adopting the interior model by Nettelmann et al. (2012) and French et al. (2012) and the observed flux Q o = 3.35×10 17 W from the planet's interior (Guillot & Gautier 2015) allows calculating h via eqn. (34). Because the inner core occupies only 10% in radius, Q i can be neglected. When, for example, assuming that h also describes the cooling of the rocky core, Q i is two orders of magnitude smaller than Q o .
Plugging eqn. (42) into eqn. (41) finally allows calculating the total dissipative heating: The result reveals that dissipative heating can in fact exceed the heat flux out of Jupiter's interior by a factor of 3.6. Gastine et al. (2014) came up with a power estimate that is about 50% smaller because they used a simplified formula provided by Christensen et al. (2009). Considering the entropy rather than the heat balance avoids the need to come up with an additional condition (Hewitt et al. 1975;Gubbins et al. 1979;Braginsky & Roberts 1995;Gubbins et al. 2003). Dividing the heat equation eqn. (36) by temperature and integrating over the convective volume yields the entropy budget where we have once more used the anelastic approximation ∇ · (ρU) = 0. When assuming that the temperature profile stays close to the adiabat, the total dissipative entropy production Θ can thus be approximated by: (45) An upper bound for the total dissipative heating can be derived when assuming thatT i is the highest temperature in the system (Hewitt et al. 1975;Currie & Browning 2017): Using once more the internal model by Nettelmann et al. (2012) puts the upper bound at 10 2 Q o for Jupiter, which is at least consistent with estimate (43).
When complementing the internal model with the thermal conductivity profile by French et al. (2012), we can quantify the different terms in Jupiter's entropy budget (45). Because of the strong temperature contrast between the outer boundary and the deeper convective region, the entropy flux through the outer boundary clearly dominates. The total dissipative entropy production is thus given by: The second largest term in eqn. (45), the entropy due to the secular cooling, is already two orders of magnitude smaller at 3.0×10 13 W/K. The two remaining terms, entropy flux through the inner boundary and the diffusive entropy flux down the adiabat, are only of order 10 11 W/K. Since the magnetic diffusivity is about 10 6 times larger than its viscous counterpart in planetary dynamo regions, Ohmic heating by far dominates. We can use the current density estimates to predict the Ohmic heating due to the zonal flows above radius r: The conditions provides a possible constraint for the depth of the zonal winds in Jupiter. The dissipative entropy production related to the Ohmic heating is given by This can be used for the alternative depth constraint 4 Dynamo Action in Jupiter's SDCR
Electric Currents and Locally Induced Field
We start with discussing the current estimates for the different zonal flow and conductivity model combinations. The radial LIF is between two and three orders of magnitude smaller than its horizontal counterpart. The rougher estimates (23) and (28), based on Rm (1) and Rm (2) respectively, provide values that are less than a factor two smaller and can thus safely be used for order of magnitude assessments. They correctly predict that the rms azimuthal LIF reaches the level of the background field at r 1 and also that the ratio of radial to azimuthal LIF is about Rm (2) /Rm (1) = D λ /r J . At r 1 , the rms radial LIF is thus roughly three orders of magnitude smaller than the background field or the horizontal LIF. Tab. 2 lists the relative rms radial LIF (column 7) at r 1 (column 3) for all σ and flow combinations when using j (O) . Wicht et al. (2019) demonstrate that the Ohm's-law based estimates not only provide good rms but also decent local values for their Jupiter-like dynamo simulations. Fig. 5 shows the radial surface field of JRM09 in panel a) and the radial LIF for σ F and U Z at r 1 in panel b). A very distinct pattern of localized field patches can be found where the fast zonal jets around the equator interact with the strong blue patch in the JRM09 model.
The zonal flow pattern remains recognizable in the LIF, as is clearly demonstrated in fig. 6, which compares zonal flow profiles in panel a) with the azimuthal rms of the radial LIF in panel b). Due to the flow geometry, the currents and LIF show a depth-dependent phase shift relative to the surface jets. The equatorial jet, which is so prominent at the surface, contributes very little to dynamo action, since it does not reach down to depths where the electrical conductivity is more significant. Fig. 7 compares spherical harmonic power spectra of the background radial field and the radial LIF. As already apparent from the map shown in fig. 5, the LIF is dominated by smaller scale contributions. The spherical harmonic degree spectrum results from the convolution of the complex latitudinal zonal flow structure with the background field. At r 1 , the dipole contribution in the LIF is about 10 −4 times smaller than the respective background field contribution. For degree = 10, the ratio has increased to 10 −2 . The spectrum peaks at = 12 but has also significant contributions from even higher degrees.
The spherical harmonic order spectrum, shown in panel b) of fig. 7, is very different. The action of the axisymmetric zonal flow onB r excites no additional harmonic orders so that the spectrum remains confined to m ≤ 10. The LIF spectrum is rather flat but has no axisymmetric contribution. At m = 10, the rms LIF amplitude reaches roughly 25 % of the background field.
The results for the conductivity model σ F presented so far can roughly be scaled to model σ Z by multiplying with the conductivity ratio σ Z /σ F . Around 0.97 r J , the LIF is two orders of magnitude weaker, and the difference decreases with depth, reaching about one order of magni-conduct. flow r 1 r J r 10 r J r Φ r J r Θ r J
Ohmic Heating and Entropy Constraint
We now use the electric current estimates to calculate Ohmic heating and entropy production. Panel c) of fig. 5 shows the map of Ohmic heat flux density q = r J r dr j 2 /σ at radius r 1 when using j (O) , σ F , and U Z . The currents induced by interaction between the fierce zonal jets close to the equator and the strong blue patch in JRM09 not only yield a highly localized LIF but also intense local heating. While the action of various other zonal jets reaches a lower level, the related pattern remains roughly recognizable in the form of thin heating bands. The azimuthal mean of q, shown in Panel c) of fig. 6, clearly illustrates the correlation between heating and the zonal jets. Like for the LIF, there is a depth-dependent phase shift between the observed surface zonal wind profile and the Ohmic heating pattern.
Panel a) of fig. 8 compares the Ohmic heating profiles Φ O (r) for the different zonal flow and electrical conductivity models. Because of the extremely low conductivity, heating remains negligible in outer two percent in radius. When using j (O) , the outermost radius where Φ O reaches the level of Φ T is r Φ = 0.950 r J for flow U G and both conductivity models. When using U K and σ F , the Ohmic heating always remains below Φ T . Results based on j (I) (not shown) are less sensitive to the differences between the three flow models at depth and are generally similar to the results for U G and j (O) .
The different r Φ values where Φ O = Φ T have been marked by vertical lines in fig. 8 and are listed in column 5 of tab. 2. All are located below the radii r 1 where Rm (1) = 1 for the respective model combinations (column 3) and thus in a region where the approximations employed here break down. The maximum Ohmic heating reached at r 1 remains nearly one order of magnitude below Φ T (see column 8 of tab. 2). Similar inferences hold for the entropy production shown in panel b) of fig. 8. The entropy condition is less strict than the power-based heat condition, and the radii r Θ where the different models exceed the threshold Θ T (column 6 of tab. 2) are somewhat deeper than respective r Φ values. The largest value of r Θ = 0.955 is found for the combination U G and σ Z . The combination of U K and σ F , on the other hand, yields the deepest value of r Θ = 0.929.
The exploration of numerical dynamo simulations by Wicht et al. (2019) suggest that j (O) may provide an acceptable estimate for a limited region below r 1 , at least down to where Rm (1) = 5. Column 4 of tab. 2 demonstrates that even the radius r 10 where Rm (1) = 10 lies deeper than r Φ for most flow and conductivity combinations. The only exceptions are the results for the geostrophic flow. This could indicated that strictly geostrophic flows would indeed violate the heating contraint. igure 7: Power spectra of rms radial field contributions (Mauersberger-Lowes) for JRM09, the downward contin-uedB r and the radial LIFB r at r 1 = 0.965 r J . (a) shows the spherical harmonic degree spectrum, while (b) shows the harmonic order spectrum. The LIF has been amplified with 10 3 . Flow U Z and conductivity σ F have been used. Figure 8: Profiles of (a) Ohmic heating and (b) entropy production in the layer above radius r for current estimate j (O) . In (a) the solid horizontal line shows the total convective power of 1.2×10 18 W, while the dotted horizontal line shows the heat flow of Q o = 3.35×10 17 W out of Jupiter's interior. In (b) the horizontal line indicates the total dissipative entropy production predicted by the entropy flux Θ T = Q o /T o = 2.0×10 15 W/K through the outer boundary. Vertical lines mark the radii r 1 where Rm (1) = 1 (see fig. 3).
Discussion and Conclusion
The dominance of Ohmic dissipation in the outer few percent of Jupiter's radius leads to simple quasi-stationary dynamo action. This can be exploited for estimating the electric currents and the Locally Induced Fields with surprisingly high quality (Wicht et al. 2019), once a conductivity profile, a surface magnetic field model, and flow model are given. Here we explored two conductivity profiles, used the new Juno-based JRM09 field model, and tested two zonal flow models suggested from inversions of Juno gravity measurements. A geostrophic zonal flow model was also considered as a third option.
The estimates roughly apply to the upper four percent in radius, or roughly 3000 km, where the modified magnetic Reynolds number Rm (1) is smaller than one. The radial LIF in this quasi-strationary dynamo region typically reaches rms values in the order of µT with peak values up to 15 µT. Could such a small contribution be measured by the Juno magnetometer? The instrument has been designed to provide a nominal vector accuracy of 1 in 10 4 . Since the surface field reaches peak values of about 2 mT, the LIF could indeed be detectable.
One would still have to separate the LIF from contributions produced deeper in the planet. What should help with this task, is the distinct pattern imprinted by the zonal flows, which also leads to a distinct magnetic spectrum. The LIF spectrum peaks at degree = 12 and has significant contributions at even higher degrees. At = 10, the largest degree provided by JRM09, the LIF amounts to about 1% of the background field, which seems smaller than the estimated JRM09 precision (Connerney et al. 2018). Updated future models, based on a larger number of Juno orbits, will provide smaller scale details and increase the chances of identifying the LIF. Another possibility is a dedicated analysis of measurements around the 'big blue spot' in JRM09, where inductions effects are particularly strong.
Our analysis of Jupiter's heat balance shows that Ohmic heating can significantly exceed the heat flux Q o out of the planet's interior. Using the interior model by Nettelmann et al. (2012) and French et al. (2012) suggests a total dissipative heating of Φ T = 3.58 Q o = 1.20×10 18 W.
It would be interesting to repeat this assessment for the newer Jupiter models that include stably stratified regions (Debras & Chabrier 2019). However, the most important constraint is the knowledge of Q o , and the somewhat different distribution of internal heat sources implied by the newer models can only have a limited effect.
While the total Ohmic heating remains typically one order of magnitude below Φ T , we find extreme lateral variations. Peak values in the Ohmic heating density reach 25 W/m 2 around the 'blue spot' in the JRM09, which is nearly five times larger than the mean heat flux density from Jupiter's interior. These peak values are reached at the bottom of the quasi-stationary region, i.e. at a depth of 3000 km. This is much deeper than any (current) remote instrument could reach for. For example MWR, the micro-wave instrument on Juno, hopes to detect temperature radiation from up to 1 kbar, which corresponds to a depth of about 600 km. However, the local heating may trigger convective plumes that rise to shallower depths and thus become detectable.
We also estimated the entropy flux out of Jupiter's interior to 2.0×10 15 W/K. The entropy produced by zonalwind related Ohmic heating in the quasi-stationary region does not exceed this value for any model combination. This means that neither Ohmic heating nor the entropy production offer any reliable constraint on the depth of the zonal winds.
Below the quasi-stationary region, electric fields become a significant contribution to Ohm's law, tend to oppose induction effects, and lead to weaker electric currents than predicted by our approximations. Wicht et al. (2019) demonstrate that the currents remain roughly constant below the depth where Rm (1) ≈ 5 in their numerical simulations. However, this may be different in Jupiter where the magnetic Reynolds numbers reach values orders of magnitude higher than in their computer models. Fig. 3 demonstrates that Rm (1) increases to a value of at least 10 3 at 0.90 r J . This is a consequence of the electrical conductivity profiles that easily overcompensate the depth-decrease in zonal flow velocities indicated by Juno gravity measurements. The zonal flows may thus actually play a larger role for dynamo action below than in the quasi-stationary region. While the gravity data convincingly show that the zonal winds must be significantly weaker below about 0.96 r J , they cannot uniquely constrain their structure or amplitude at this depth.
It has been speculated that the fast observed zonal winds may remain confined to a thin weather layer, where differential solar heating and also moist convection could significantly contribute to the dynamics (see for example Showman (2007) or Thomson & McIntyre (2016)). Kong et al. (2018) show that the gravity signal can then be explained by an independent zonal flow system that reaches down to about 0.7 r J with typical amplitudes of about 1 m/s and has larger latitudinal scales than the surface winds. The strongest local dynamo action happens towards the bottom of the quasi-stationary region where models U K and U Z reach velocities of about 10 m/s. The currents and magnetic fields induced by this alternative flow model should thus be roughly an order of magnitude weaker than for U K or U Z . Consequently, Ohmic heating and entropy production would be two orders of magnitude lower and play practically no role for the global power or entropy budgets.
Below 0.96 r J , full 3d numerical simulations would be required to model the zonal-wind related dynamo action. However, since they cannot be run at altogether realistic parameters and generally yield a much simpler zonal wind pattern, the results must be interpreted with care (Gastine et al. 2014;Jones 2014;Duarte et al. 2018;Dietrich & Jones 2018). These simulation suggest that even the weaker zonal winds at depth would significantly shear the large scale field produced by the deeper primary dynamo action. The resulting strong longitudinal (toroidal) flux bundles are converted into observable radial field by the small scale convective flows present in this region. The combined action of primary and secondary dynamo typically yields a radial surface field that is characterized by longitudinal banded structures and large scale patches with wavenumber one or two, resulting in a morphology is often reminiscent of the recent Juno-based field model JRM09 (Gastine et al. 2014;Duarte et al. 2018;Dietrich & Jones 2018). | 9,842 | 2019-06-21T00:00:00.000 | [
"Physics",
"Geology"
] |
Weakly-Supervised Defect Segmentation on Periodic Textures Using CycleGAN
The importance of an automated defect inspection system has been increasing in the manufacturing industries. Various products to be examined have periodic textures. Among image-based inspection systems, it is common that supervised defect segmentation requires a great number of defect images with their own region-level labels; however, it is difficult to prepare sufficient training data. Because most products are of normal quality, it is difficult to obtain images of product defects. Pixel-wise annotation for semantic segmentation tasks is an exhausting and time-consuming process. To solve these problems, we propose a weakly-supervised defect segmentation framework for defect images with periodic textures and a data augmentation process using generative adversarial networks. With only image-level labeling, the proposed segmentation framework translates a defect image into its defect-free version, called a golden template, using CycleGAN and then segments the defects by comparing the two images. The proposed augmentation process creates whole new synthetic defect images from real defect images to obtain sufficient data. Furthermore, synthetic non-defect images are generated even from real defect images through the augmentation process. The experimental results demonstrate that the proposed framework with data augmentation outperforms an existing weakly-supervised method and shows remarkable results comparable to those of supervised segmentation methods.
I. INTRODUCTION
Most manufacturing industries have aimed to provide their clientele with defect-free products to enhance their corporate competitiveness. In order to achieve this goal, product inspections are usually conducted at the final stage of the manufacturing process, and a large number of manufactured products have been examined by human inspectors. It is rare for the accuracy of the inspector to be uniform during long working hours because inspectors become tired as time elapses. Furthermore, because it is difficult for novices to adequately check the product quality initially, it is likely that The associate editor coordinating the review of this manuscript and approving it for publication was Li He . the inspection results will be unsatisfactory, and time will be required for training purposes. For these reasons, the demand for automated defect inspection has been increasing in the manufacturing industries. Especially, defect inspection is quite important in the semiconductor manufacturing process.
To satisfy the need, several automated inspection systems have been proposed. Image-based, thermography-based, and ultrasonic-based inspection systems, which are utilized for specific purposes, have been widely applied. Image-based systems perform inspection applying image processing and computer vision techniques on images. These systems focus on the appearance of the area where a defect exists on visible objects. Thermography-based systems examine defects by analyzing the thermal distribution of objects [1]- [4]. It is FIGURE 1. Examples of periodic texture image for inspection: (a) TFT-LCD [8], (b) wafer [25], and (c) fabric [26].
applied to objects whose defects are derived from certain thermal characteristics. Ultrasound-based systems that transmit ultrasonic waves into objects to detect flaws are mainly used when an inspection for the internal structures of the objects is needed [5]- [7].
An image-based inspection system is one of the most utilized approaches and has been widely applied to the inspection of products such as textiles, wafers, and thin film transistor liquid crystal displays (TFT-LCDs). As shown in Figure 1, such products have periodic patterns, respectively. In order to detect defects in periodic patterns, various methods utilizing image processing techniques have been introduced: template-based, filter-based, and statistical methods [8]. Template-based methods are simpler than other approaches, and are used to compare an input image with its defect-free shape, called a golden template [9]- [12]. This approach is only useful when a golden template for the input image can be obtained. Filter-based methods perform convolution with filter banks and detect a defective region by analyzing the response of the results [13]- [15]. In this approach, knowing the structure of both defective and defect-free regions helps to design appropriate filters. Statistical methods inspect defects based on the statistical difference between the defective and defect-free regions [16]- [18]. In order to discriminate between the two regions, an adequate number of sample images are needed. In recent years, convolutional neural networks (CNNs) have shown remarkable outcomes in various computer vision applications. In particular, several networks including FCN [19], SegNet [20], and Adap-Net [21] demonstrated a notable performance for semantic segmentation, and thus, deep learning-based approaches have been widely applied to defect inspections [22]- [24].
Although the field of image-based defect inspection has advanced, some challenging issues still remain: data insufficiency, data imbalance, and annotation cost, to name a few. The first issue is data insufficiency. Several datasets for general object detection and semantic segmentation have been publicly released such as PASCAL VOC [27], COCO [28], and Citiscapes [29]. Unlike general objects, it is difficult to obtain images that contain defects because most products are not faulty. It is common that only three or four occurrences per million units or events are allowed in the modern manufacturing industries employing Six Sigma methodology. Especially, it is more difficult to capture defects in semiconductors than other products since defects in semi-conductors are several nanometers in size. Furthermore, generating defective products for the purpose of inspection is even more difficult. In addition, in most cases, the inspection results are treated as strictly in-house and confidential. The second issue is data imbalance, which indicates that data of certain classes are lack or missed. In general cases of defect segmentation, imbalanced data are concerned with the kinds of defects; however, this is occasionally broadened to the existence of defects. Because inspection tasks mainly focus on defective products, it is possible that data of normal products will not exist. The last but not the least one is annotation cost. In the field of machine learning, model training often requires labels of data called the ground truth (GT), which are the correct answers to the data (e.g., class, bounding box, and segmentation mask) on the object of interest. In detail, the annotation required for a semantic segmentation task is tedious and time-consuming owing to the pixel-wise labeling.
In this paper, we propose an image-based defect segmentation framework for periodic texture images using GAN-based golden template generation and data augmentation process. The proposed framework generates the golden template of the input image and then segments defects in a pixel-wise manner using simple post-processing. The concept of this framework is inspired by the dissimilarities between the defective and normal regions. Moreover, synthetic defect and non-defect images are generated from a small number of real defect images through the proposed data augmentation process, the volume of which is sufficient to train a model. The process does not apply simple geometric transformations to existing images, such as scaling, translation, and rotation, but generates whole new images. The main contributions of this paper are: • We propose a framework using CycleGAN [30] for defect segmentation on periodic textures. We achieve a competitive performance of pixel-wise segmentation compared to supervised learning-based methods while using only class labels. The proposed framework lowers the burden of the annotation cost.
• We propose two data augmentation processes for generating synthetic defect and non-defect images, respectively. The proposed data augmentation process alleviates the data insufficiency and data imbalance problems described earlier. The rest of this paper is organized as follows. Section II introduces existing image-based defect segmentation methods. Section III describes the proposed defect segmentation framework and data augmentation process. Section IV presents our experimental settings and results. Finally, Section V provides some concluding remarks.
A. TRADITIONAL APPROACHES
As mentioned in the introduction section, various methods based on traditional image processing and computer VOLUME 8, 2020 vision techniques have been reported. Template-based methods utilize defect-free template images for comparison. Khalaj et al. [9] constructed the building block, the structure of repeated patterns, from direct patterned wafers. The repetition periods of the patterns were estimated along the horizontal and vertical directions in the input image. Xie and Guan [10] generated defect-free images using a simulated building block whose size is equal to the horizontal and vertical periods of repeating patterns for patterned wafer inspection. A golden template was built in a manner analogous to that of constructing a building block. Shankar and Zhong [11] introduced a template-based vision system to inspect semiconductor wafer surfaces. In this system, the mean square error (MSE) analysis between the reference circuit image and the test image was performed using a two-dimensional discrete cosine transform (DCT). A rule-based approach for semiconductor defect segmentation was reported in which the segmentation was performed based on the diagnostic rule, similarity rule, and logical rule using an error image [12]. The resulting error image is generated by matching the inspected die image and the golden master (GM) image.
Filter-based methods are based on various types of filters that output different responses in each of the defective and defect-free regions. Tsa and Wu [13] chose the best parameters of a Gabor filter based on the output response of convolution for each textured surface to deal with unseen defects in the given surface. They demonstrated the effectiveness of their method for both structural and statistical textures. Gabor filter-based supervised and unsupervised approaches for defect detection in textured materials were presented [15]. In the supervised scheme, the best representative Gabor filter was determined based on the filter-selection methodology. For unsupervised defect detection, a multichannel filtering scheme was used and an imaginary Gabor function (IGF) was employed to lower the computational time. Chan and Pang [14] analyzed the frequency spectrum to detect defects in the patterned fabric on the basis that the frequency spectrum will vary when the fabric image has defects. They utilized a fast Fourier transform (FFT) instead of a discrete Fourier transform (DFT) for computational efficiency.
In the statistical methods, defects can be discriminated against flawless regions based on the characteristics of the texture, such as the intensity distribution. Liu et al. [16] introduced an inline defect-defection (IDD) system for TFT-LCD inspection. After some pre-processing to obtain patches for classification, they classified whether a patch is defective or not using locally linear embedding (LLE) and support vector data description (SVDD). In order to solve the problems in SVDD, automatic target defect identification based on fuzzy support vector data description (F-SVDD) ensemble was reported [17]. A partitioning-entropy-based kernel fuzzy c-means (KFCM) algorithm was utilized for constructing F-SVDD ensemble. Yu and Lu [18] developed a wafer map defect detection and recognition method using local and nonlocal preserving projection (LNPP) and joint local and nonlocal linear discriminant analysis (JLNDA). They used several features for representing wafer maps: geometrical features, gray features, textual features, and projection features.
In particular, there were some studies where local binary patterns (LBP) variants were used to detect defective regions. Tajeripour and Fekri-Ershad [31] developed an approach for porosity detection in stone textures using one-dimensional local binary patterns (1DLBP). They divided a stone image into sub-windows and compared a feature vector of each window with that of the porosity-less image. A surface defect detection approach using noise-resistant color local binary patterns (NrCLBP) was presented [32]. They combined feature vectors that have different sizes of neighborhood radius in NrCLBP for multi-resolution analysis. Cao et al. [33] introduced a nickel foam surface defect detection method using multi-scale block local binary patterns (MB-LBP). They utilized a non-subsampled contourlet transform (NSCT) to extract multi-scale texture characteristics.
B. METHODS IN THE ERA OF DEEP NEURAL NETWORKS
After the success of AlexNet [34], several defect inspection methods using deep neural networks (DNNs) have been recently reported. According to the level of supervision for the training data, DNN-based methods can be categorized into three groups: supervised, weakly-supervised, and unsupervised learning.
Ouyang et al. [22] constructed a network called PPAL-CNN, which consists of seven layers for fabric defect detection. In order to localize fine defects and deal with data imbalance, they generated a defect probability map from an input image and utilized it as a dynamic activation layer (PPAL) instead of an activation function. Marino et al. [35] applied class activation mapping (CAM) [36] to potato defect classification and localization. CAM gives a network trained for classification tasks the ability to localize target objects in images by adding a convolutional layer and a global average pooling layer before the last fully-connected layer [36]. They employed several well-known networks such as AlexNet [34], VGGNet [37], and GoogLeNet [38] as backbone networks and modified these networks to extract the CAM results. A network called LEDNet, which is based on CAM for classifying and localizing defects in LED chip images, was presented [39]. In addition, data augmentation was performed randomly for the collected images using geometric transformation techniques including rotation, flipping, translating, noising, and blurring to improve the accuracy. Schlegl et al. [40] introduced AnoGAN to detect lesions in medical images with deep convolutional generative adversarial networks (DCGAN) [41]. They trained a model with only normal data, and thus, the trained network can represent the distribution of normal data. They then compared the input images and the images generated from the latent variable computed by the inverse operation of the generator. When an input image is normal, it is analogous to its generated image. By contrast, a visual difference exists between the input image and the generated image when the input image is anomalous.
Niu et al. [42] presented DefectGAN using CycleGAN [30] for weakly-supervised defect detection. This study is similar to our proposed framework in terms of the non-defect image generation using CycleGAN; however, the total loss function used in the study was inadequate to make a golden template well for periodic patterns. In order to deal with this problem, we utilized another term, identity mapping loss [30], to the total loss function and demonstrated the effectiveness of the additional term. Moreover, we performed data augmentation to solve the problems of data insufficiency, and generalization performance of the golden template generation is enhanced through our data augmentation scheme.
C. DATA AUGMENTATION FOR IMPROVING PERFORMANCE
The volume and diversity of data are crucial to data-driven approaches such as DNN-based methods. Various data augmentation approaches for improving performance in different tasks have been presented.
Budvytis et al. [43] investigated the effect of video data augmentation for semantic segmentation in driving environments. They increased the segmentation performance of different networks by performing label propagation from coarsely labeled frames to adjacent unlabeled ones. The effectiveness of data augmentation in image classification tasks was demonstrated [44]. Three augmentation approaches including traditional transformations, GANs, and the augmentation network were utilized in this study. Bowles et al. [45] improved segmentation accuracy in medical imaging with augmenting training data. They investigated the performance of segmentation networks trained with different amounts of synthetic data. GAN-based medical image augmentation was performed for liver lesion classification [46]. In this study, two GAN variants were employed to generate synthetic liver lesion images, and the classification performance was improved by using the generated synthetic images. Zhao et al. [47] synthesized labeled medical images for the segmentation task in magnetic resonance imaging (MRI) brain scans. They trained spatial and appearance transform models for generating synthetic images and labels.
III. PROPOSED FRAMEWORK A. OVERVIEW
Before describing the details of the defect segmentation in periodic texture images, we first describe the overall scheme. Flowcharts of the proposed data augmentation process and the defect segmentation framework are depicted in Figures 2 and 3, respectively.
As shown in Figure 2, the proposed data augmentation process includes the two subordinate procedures for synthetic image generation: defect and non-defect. By using DCGAN [41] and CycleGAN [30], we create synthetic defect images whose volume is sufficient for training a network. PatchMatch [48] and periodic spatial generative adversarial network (PSGAN) [49] are utilized to create synthetic non-defect images. The proposed data augmentation allows the golden template generator to produce more plausible results. This suggests that our data augmentation scheme improves the generalization performance of the golden template generation.
The proposed defect segmentation framework consists of the golden template generation and post-processing, as illustrated in Figure 3. The golden template generation is performed using CycleGAN, which makes a defect-free version of the input image. After the golden template, straightforward image processing techniques are applied for detecting defects. To make a golden template for periodic patterns, we employ another loss term, identity mapping loss [30]. Although the loss is often auxiliary in other CycleGAN applications, in this work at least, it is crucial to the golden template generation from the perspective that the periodicity of the pattern must remain.
B. SYNTHETIC DEFECT IMAGE GENERATION
Synthetic defect images are generated out of real defect images through DCGAN [41] and CycleGAN [30]. DCGAN is utilized to make synthetic defect images; however, whose resolution is too small to be used as training samples for golden template generation. In addition, it is inadequate to apply a naive scaling method to the images. To solve this problem, image-to-image translation using CycleGAN is carried out for super-resolution.
In the training phase, each of the two networks is trained for their own purposes. In the generating phase, synthetic defect images are created through the trained networks.
DCGAN consists of two adversarial modules, a generator G and a discriminator D, which are trained by min-max game with the loss function V (G, D): where x is the input data for a discriminator D, and z is a latent variable for a generator G. In this work, a generator that creates fake defect images indistinguishable from real ones is learned when given the actual defect images as training samples. From now on this network will be called as N FD . CycleGAN for super-resolution learns the two mapping functions G H : X L → X H , G L : X H → X L where X L is the low-resolution domain of the defect and X H is the high-resolution domain of the defect. The two adversarial discriminators, D X H and D X L , aim to distinguish the data of the domain and the data mapped from the other domain, respectively. Loss functions for the total loss are expressed as: Our total loss function used to train a model for superresolution is: where L GAN , L cyc , and L idt are the adversarial loss, cycle consistency loss, and identity mapping loss, respectively. λ cyc and λ idt are used to control the impact of L cyc and L idt , respectively. The network for super-resolution will be termed N SR hereafter. After the two networks are trained, the images produced from N FD trained to create fake defect images are fed into N SR trained for super-resolution. We use the resulting images of N SR as our synthetic defect images for training the golden template generator.
C. SYNTHETIC NON-DEFECT IMAGE GENERATION
To train CycleGAN for golden template generation, the two domains of the image are required. For this reason, Patch-Match [48] and PSGAN [49] are employed to perform synthetic non-defect image generation. In order to deal with a situation in which real non-defect images can not be acquired, factitious images such as defect-removed are generated using PatchMatch. Periodic textures are then synthesized from the defect-removed images using PSGAN.
In PatchMatch, a nearest-neighbor field (NNF) is initialized as patches, which are at the uniformly random offset f (x, y) across the whole image. Based on the patch distance D(v) between the patch at (x, y) in an image and the patch at (x, y) + v in the other image, the offset f (x, y) is propagated. On odd iterations, f (x, y) is changed into a value that minimizes {D(f (x, y)), D(f (x − 1, y)), D(f (x, y − 1))}. Furthermore, this propagation is performed in the opposite direction using f (x + 1, y) and f (x, y + 1) on even iterations.
After the propagation, the offset v 0 = f (x, y) is checked with different candidate offsets to avoid convergence to the local minima. A series of candidate offsets exponentially decrease as: where w is the maximum search radius, which is initially set to the maximum image dimension, α is the decaying parameter for reducing the search window sizes, and R i is a random value in [−1, 1] × [−1, 1]. This random search finishes when wα i is less than 1 pixel. PSGAN is based on the DCGAN architecture; however, it is composed solely of convolutional layers. In addition, the generator G is extended in a two-dimensional spatial domain to map a latent variable Z ∈ R L×M ×d to an image X ∈ R H ×W ×C . A latent variable Z consists of three sections: local independent part Z l , spatially global part Z g , and periodic part Z p . The channel dimension of Z , d, is the sum of the channel dimensions of the three parts d l , d g , d p . In accordance with the extension of the generator G, the discriminator D outputs a L × M field from an image X .
As the variation of the generator and discriminator in PSGAN, the standard GAN loss function is also altered as: where D ij (X ) is the discriminator at (i, j), 1 ≤ i ≤ L and 1 ≤ j ≤ M , for a local part X in an input image X . After a network for texture synthesis is trained, the enlarged synthesis results are randomly cropped. The resulting images of the random cropping are used as our synthetic non-defect images for the training of a golden template generator. Henceforth, the network for texture synthesis will be dubbed N TS .
D. GOLDEN TEMPLATE GENERATION
For golden template generation, CycleGAN learns the two mapping functions G N : X → Y and G D : Y → X between the defect domain X and the non-defect domain Y . In order to train a network for golden template generation, we used the same total loss function in Equation (5). The total loss function for a golden template generator is expressed as: where L GAN , L cyc , and L idt are calculated using Equations (2) to (4). After this section we will call the network for golden template generation as N GT . We employ the identity mapping loss [30] which allows a model to have the capability to preserve the color composition after translation as the additional term in the total loss function. Although the loss term is not commonly used in other applications, we found that it helps to preserve the periodicity of patterns. When the coefficient of the identity mapping loss is zero, flawless regions are slightly varied. This phenomenon occurs when N SR is trained without L idt in Equation (5). Only the defective region should be changed while the defect-free region is unaltered, which is our goal and the reason for using the identity mapping loss.
E. DEFECT SEGMENTATION
After the golden template of the input image is obtained, simple image processing techniques are applied for detecting defects. To measure the similarity between the input image and its golden template, the patch-wise sum of the absolute difference (pSAD) is calculated as: where (i, j) is a patch ranging from (i − W , j − H ) to (i + W , j + H ). I D and I G denote the input defect image and the golden template of the input image created by the golden template generator, respectively. In general, defective regions are quite different from their golden templates. On the contrary, defect-free regions are extremely similar to the templates. Therefore, the values in the pSAD results are usually larger in the defective region than in the flawless region. Defects are then segmented by applying hysteresis thresholding to the pSAD results.
IV. EXPERIMENTS A. IMPLEMENTATION DETAILS
We utilized several networks to verify our framework. In our experiments, all the network training and testing were performed on an AMD Ryzen 7 2700X CPU and an NVIDIA RTX 2080Ti GPU using CUDA 10.0.
Synthetic Defect Image Generation Firstly, we trained N FD on defect images of 64 × 64 resolution and employed the architecture introduced by Radford et al. [41]. The network was trained for 2500 epochs and the dimension of the latent variable was set to 100. Secondly, we trained N SR on low-resolution defect images and high-resolution defect images of 256×256 resolution with the architecture presented by Zhu et al. [30]. The 64 × 64 low-resolution images were resized to 256 × 256 for training. We trained the network for 200 epochs keeping the learning rate for the first 100 epochs and linearly decaying the initial rate to zero in the next 100 epochs. The coefficients λ cyc and λ idt in Equation (5) were 10 and 0.5, respectively.
Synthetic Non-defect Image Generation In order to obtain defect-removed images from real defect images, we performed PatchMatch. The defective region was located manually and re-drawn through the approximate nearest-neighbor algorithm. For texture synthesis, N TS was trained on images of 160 × 160 resolution with the architecture introduced by Bergmann et al. [49]. The network was learned for 100 epochs to generate fake texture images from the latent variables. For the channel dimension of the latent variables, we set the dimension of the three parts as d l = 10, d g = 0, and d p = 2. With the trained network, we produced synthesized texture images of 640 × 640 resolution and then randomly cropped the images to obtain small texture patches.
Golden Template Generation
To perform golden template generation, we trained N GT on defect images of 256 × 256 resolution and non-defect images of 256 × 256 resolution with the aforementioned architecture in synthetic defect image generation. In addition to random flip in training phase, random rotation with a maximum of ±5 • was performed to achieve a generalization performance of the representation. The coefficients λ cyc and λ idt in Equation (8) were both 10.
Defect Segmentation We chose the parameters of the patch size and thresholding values empirically. The pSAD results were calculated with a patch whose size ranges from 5 × 5 to 21 × 21. Hysteresis thresholding was performed using an upper bound T u and a lower bound T l . The upper bound T u and the lower bound T l were multiples of 0.02 with a constraint that T u should be between 0.5 and 0.98, and that T l should be lower than T u .
B. DATASET
We experimented with three periodic textures. One is the images of defects in semiconductor wafers, and the others are those of defects in textiles. Each of them has distinctive defects and periodic textures. Our Dataset In order to demonstrate our proposed framework, we experimented with our private dataset. This dataset is about defects occurring in semiconductors, the images of which were captured by scanning electron microscope (SEM). There are seven types of defects in the dataset, and the sample images and their descriptions of our dataset are shown in Table 1. The dataset contains 264 grayscale defect images of 480×480 resolution with periodic textures; however, there are no non-defect images.
We resized the original images to images of 256 × 256 resolution and used the resized images for the experiments. For the golden template generation and data augmentation process, 200 images were randomly selected as the training data. The performance on this dataset was evaluated with the others.
Because our dataset does not include non-defect images, we used some non-defect images acquired at different scales to construct the real subset. By utilizing the scale VOLUME 8, 2020 information in the defect and non-defect images, we resized the non-defect images to make the textures in the non-defect images similar to the patterns of the defect images. Then, we randomly cropped the resized non-defect images as the size of real defect images, as shown in Figure 4. Consequently, we obtained 200 non-defect images with the modification scheme. As shown in Table 2, the real subset includes 200 real defect and non-defect images. The real+syn subset contains 200 real defect and synthetic non-defect images. The syn subset consists of 10, 000 synthetic defect and non-defect images.
TILDA In addition, we utilized a public dataset, TILDA textile texture-database [50], to apply our framework to other periodic textures. This is a dataset of defects in textiles, and there are eight types of fabric. Among the textiles, we used the subsets, {C3R1, C3R3}, which have periodic structures. In the subcategories of the two subsets, we used {E1, E2, E3, E4} as target defects and {E0} as defect-free textures. Each subcategory contains 50 grayscale images of 768 × 512 resolution and the sample images of the subcategories in the two subsets are shown in Figures 5 and 6.
In order to be adapted for the proposed framework, the original non-defect images were divided into six patches of size 256 × 256 without overlap. For defect images, first, we manually labeled defects in a pixel-wise manner. Based on the annotation results, we set the smallest ROI which covers all the defective regions in the image. The ROI has a constraint that the center of the ROI is equal to that of defects and the width and height of ROI are multiples of 256. With this constraint, we divided the ROI into patches of size 256 × 256. Among the acquired patches, those which contain defects less than 100 pixels were discarded. As shown in Figure 7, the patches covered with the blue bounding boxes were obtained as defect images, whereas the regions in the red boxes were not used. Accordingly, we obtained 397 and 323 defect patches in the {C3R1, C3R3}, respectively. In order to train networks for the golden template generation and data augmentation process, 300 and 250 images were arbitrarily chosen. The other images were used as test data.
C. QUALITATIVE RESULTS OF THE PROPOSED FRAMEWORK
To verify the effectiveness of our data augmentation process and the identity mapping loss, we trained the network for golden template generation with the three subsets {real, real+syn, and syn} of our dataset. Through our data augmentation process, we generated synthetic defect images and synthetic non-defect images using our dataset, as shown in Figures 8 and 10. The resulting non-defect images seemed to be almost the same as the golden templates of the real defect images; however, the generated defect images did not look like completely the real ones. Nonetheless, we achieved remarkable results with them. With many synthetic images, the network for golden template generation could learn a robust mapping from various defective regions to defect-free textures.
We acquired the most reasonable results for the test images when the network for golden template generation was trained on the syn subset, as shown in Figure 9. The defective regions in the input image changed as normal, and it was difficult to distinguish where the original defective region had been. When the network was trained on the real+syn subset, the defect-free regions in the golden template were very similar to those in the input image. By contrast, the regions in the golden template where defects had existed were slightly different from the defect-free regions. As shown in Figure 11, our framework generated clean defect-free images from the input defective images. The post-processing for defect segmentation could be seemed to be superfluous; however, we dealt with the noises that are due to the great change of the intensity in a normal pixel. Despite the different intensity changes of the pixels in the defective region, we detected the region as one defect. With this step, we achieved better performance.
As shown in Figure 12, the identity mapping loss has a great effect on the golden template generation. The golden template generator trained without the identity mapping loss missed the periodicity of the textures, so that the difference values in the defect-free regions were as large as those in the defective regions. On the contrary, the generator trained with the loss represented the periodicity while removing defects in the images.
In order to analyze these results numerically, we obtained the average values of both the defective and defect-free regions in the absolute difference images with region-level labeling, as shown in Figures 13 and 14. Because the non-defect images in the real subset are quite different from the defect-free regions in the real defect images, the average value of the defect-free region was the largest among the three subsets. The network trained on the syn subset reduced the average value of the absolute difference in the defect-free region. This suggests that our data augmentation process made more realistic images. It was difficult to discriminate between the defective regions and the defect-free regions in the absolute difference images when the identity mapping loss was not utilized; however, the use of the loss made the gap between the average values of the two regions clear.
The very similar method, DefectGAN [42], does not include the identity mapping loss in the total loss function. The model trained without the loss created the textures shifted from those in the input image. This could make defect-free regions be segmented as defects. In addition, with our data augmentation process, the regions where defects existed in the input image could turn into the textures more similar to those of defect-free regions. For these reasons, our framework could be differentiated from DefectGAN for segmenting defects on periodic textures.
Additionally, we applied our framework to the TILDA dataset and adjusted the training scheme and hyperparameters of some networks for data augmentation of the TILDA dataset. Because directions of the pattern in the subsets are different, the two networks, N FD and N TS , for synthetic defect and non-defect image generation were trained by direction of pattern. We generated synthetic defect and non-defect images, the number of which was as same as the number of those of our dataset.
in Figures 16 and 18, our framework made decent golden templates for input defect images; however, there was a little vestige of the defective region in the generated golden template.
D. QUANTITATIVE EVALUATION OF THE PROPOSED FRAMEWORK
In order to demonstrate the competitiveness of our framework on defect segmentation, we compared our framework with the three methods: CAM [36], FCN [19], and AdapNet++ [51].
The two supervised semantic segmentation networks were trained on the real subset of our dataset. With the real defect images and their region-level labels, FCN and AdapNet++ learned the semantic context for 200 and 100 epochs, respectively. Our framework and CAM were trained on the real, real+syn, and syn subsets of our datasets with only image-level labels. They were trained for 200 epochs, except that our framework learned two mapping functions between the defect and non-defect domains on the syn subset for 50 epochs. In the training and testing phase of the CAM, several backbone networks were employed: DenseNet [52], ResNet [53], and SqueezeNet [54]. The segmentation masks of the CAM were obtained by applying the same thresholding scheme of our framework to the resulting heatmaps of the CAM. We selected the thresholding parameters to achieve the best performance of the CAM.
As shown in Figure 19, our framework produced remarkable segmentation results for various defects. These outcomes were comparable to those of the other two supervised methods. While the CAMs of the three backbone networks localized defects, our framework segmented the defective region more accurately. Specifically, our framework showed better segmentation results for the relatively smaller defects, compared with the three CAMs. These results could be due to the downsampling in the backbone networks of the CAM. The spatial resolution of the feature map was reduced through pooling layers or strided convolutions, and the resulting heatmap was enlarged to the original input size. We adopted the intersection over union (IoU) to evaluate the segmentation performance [27]. The IoU of the two regions, R p and R gt , is expressed as: and the mean IoU (mIoU) is calculated as: where area R p ∪ R gt denotes the union of the predicted segmentation mask and the ground truth, area R p ∩ R gt indicates their intersection, and N is the number of test data. In this work, the mIoU was measured with only the defective region. As shown in Table 4, the performance of our framework was enhanced with the proposed data augmentation process. Moreover, our framework outperformed the CAMs by a huge margin. Unfortunately, the performances of the two supervised segmentation methods were superior to those of our framework. However, given the qualitative segmentation results, it seemed that our framework achieved a decent performance of defect segmentation. Above all, our framework produced competitive segmentation results without pixel-wise labeling. In addition, we compared the inference time of the proposed framework with those of the two segmentation networks. As shown in Table 5, our framework showed the fastest inference time. In our framework, the inference time of the golden template generation and defect segmentation was about 16 and 2 ms, respectively.
E. DISCUSSION
There were a few limitations and failure cases of our framework, as shown in Figure 20.
Because pSAD was applied to deal with noises in the absolute difference images between the input image and its golden template, our framework produced less detailed results than the supervised methods. Hysteresis thresholding was employed to determine defective regions; however, there were the limitations that the number and the area of defects were heavily dependent on the two parameters of the thresholding. When a small foreign body was between line patterns, the segmentation result of our framework was wider than its actual region. When some line patterns were broken or bridged, a few defective regions were not detected because the intensities of the regions hardly changed.
TILDA dataset is more challenging than our semiconductor dataset because the textures of the two subsets are less strict and more complex than those of our dataset. Particularly, the directions of the patterns in the original images are different from those of one another. Furthermore, there are some indistinct defects that are indistinguishable from defect-free textures. The qualitative results of the golden template generation seemed to be decent with the naked eye; however, the segmentation results were not good. This was because the intensities in defective regions were similar to those in defect-free textures. The structures of the defects in the two subsets are more complex than that of the defects in our dataset. Detailed shapes of defects and surrounding textures could not be formed in the synthetic defect generation.
Particularly, the patterns in the C3R1 subset have both global and local periodicity. This means that the images in the C3R1 subset have check patterns globally and line patterns locally. Unfortunately, it was difficult to reproduce these distinctive textures in the synthetic defect images. For these reasons, the quantitative performance on the TILDA dataset was not good, as shown in Tables 6 and 7.
V. CONCLUSION
In this paper, we propose a weakly-supervised defect segmentation framework for periodic textures and a data augmentation process applicable to our framework. We generated a golden template from an input defect image and segmented the defective region by applying straightforward post-processing to the two images. Furthermore, we found that the identity mapping loss is crucial to the golden template generation of defect images with periodic textures. As a result, we localized the defects in a pixel-wise manner without region-level labeling. Through the proposed augmentation process, we created synthetic defect and non-defect images even from only real defect images. With the augmented data, the golden template generator made more plausible results, and the segmentation performance of our framework was enhanced. The proposed framework was qualitatively and quantitatively compared to other defect segmentation methods on periodic texture images with various defects. The experimental results suggest that the proposed framework outperformed the CAM-based method and showed results comparable to those of the supervised segmentation in strictly periodic textures.
In future work, we plan to simplify the whole proposed framework and develop the data augmentation process to make more realistic images. Since the difference of intensities showed a limitation for segmenting defective regions, we plan to develop more in-depth post-processing. Particularly, we will try to improve the quality of the golden template generation for loosely periodic textures and to segment the detailed structure of defects. University, in 1994, where he is currently a Professor with the Department of Electronic Engineering. His research interests include health monitoring using mobile devices, visual surveillance, virtual devices, machine vision systems, advanced driver assistance systems, and 3D vision systems in sports, as well as home appliances. | 9,447.2 | 2020-01-01T00:00:00.000 | [
"Materials Science",
"Computer Science"
] |
Electroweak properties of ρ-meson in the instant form of relativistic quantum mechanics
Charge and magnetic radii, magnetic and quadrupole moments of the ρ-meson are calculated in framework of the developed by authors version of the instant form of relativistic quantum mechanics with fixed number of particles (IF RQM). The calculations are performed with different wave functions of quarks in the ρ-meson and using the socalled modified impulse approximation (MIA). The electromagnetic characteristics of ρ-meson are obtained without fitting parameters. The value of the magnetic moment coincides with available experimental data: μρ = 2.1 ± 0.5 e/2Mρ.
Introduction
In recent years there has been significant progress in the experimental study of ρ-meson: measurement of the lepton decay constant from the process τ → ρν τ [1,2], extraction of the ρ-meson magnetic moment from processes γ * → 4π [3], measurement of the magnetic moment of decay ρ → πγ * [4], the expected measurement of the electromagnetic form factors of the ρ-meson from the reaction e + + e − → ρ + + ρ − [5].To the moment, there are a large number of calculations of the rho meson electroweak properties in different approaches (see, e.g., [6][7][8][9][10][11]).The theoretical calculations of the electroweak properties of the ρ-meson by different methods have important implications for understanding the processes in the transition region between nonperturbative and perturbative quark dynamics, for example.A fairly large amount of experimental information enables to estimate these approaches from the point of view of their ability to self-consistent description of these data.
In the present work the some electroweak characteristics of the ρ-meson (magnetic and quadrupole moments, the charge and magnetic mean-square radii, the lepton decay constant) are calculated in the framework of instant form of relativistic quantum mechanics (IF RQM)(see, e.g., review [12]).Developed by authors formulation of IF RQM is based on a direct realization of the Poincaré algebra on the set of the dynamic observables of the composite system and dates back to the work of P. Dirac [13].Distinctive features of our variant of IF RQM are the original procedure of construction of the electroweak currents of a composite system and new formulation of the impulse approximation -the so-called modified impulse approximation (MIA).Unlike conventional impulse approximation MIA is formulated in the terms of the reduced current matrix elements on Poincaré group (form factors) and does not lead to violation of Lorentz-covariance and conservation law for the currents of composite systems.
In this work the model parameters are fixed from the calculations of electroweak properties of pion [14] and from description of the lepton decay constant of ρ-meson [15].So, our calculations of the magnetic moment and the charge radius are performed without fitting parameters and give the good agreement with experimental data.
Electroweak characteristics of ρ-meson and numerical calculations
The electromagnetic current matrix element of the ρ-meson can be written in terms of conventional Sachs form factors for the system with the total angular momentum equal 1.To do this let us write the parameterization of the electromagnetic current matrix element in the Breit frame (see, e.g., [16]): Here G C , G Q , G M are the charge, quadrupole and magnetic form factors, respectively, M ρ is the ρmeson mass.The polarization vector in the Breit frame has the following form: The variables in ξ are total angular momentum projections.
In the Breit frame: Our approach gives following integral representation for the electromagnetic form factors of composite systems with total angular momentum equal 1 in MIA (see,e.g., [17]): Here g 0C (Q 2 ) , g 0Q (Q 2 ) , g 0M (Q 2 ) are so-called free two-particle charge, quadrupole and magnetic form factors, respectively, ϕ(s) is wave function of quarks in the ρ-meson in sense of RQM.
From physical point of view the free two-particle form factors describe the electromagnetic properties of system of the two non-interacting particle with discrete quantum numbers of the ρ-meson.These form factors are reduced matrix elements of the electromagnetic current operator of the free two-particle system on the Poincaré group.Generally speaking, these form factors are regular generalized functions (distributions) (see [12] for details).These generalized functions can be calculated by the methods of relativistic kinematics and have the form given in [17].Exactly the appearance of g 0C (Q 2 ), g 0Q (Q 2 ), g 0M (Q 2 ) in the integral representation for the electromagnetic form factor of the ρ-meson is the essence of MIA.
The wave functions in sense of IF RQM in (4) at J = S = 1 , l = 0 are defined by the following expressions (see, e.g., [12]): where M is mass of constituents quark.Normalization is given by the following condition: here ψ(k) is a model wave function.
The magnetic moment μ ρ and the quadrupole moment Q ρ of ρ meson were calculated using the relations given in [16]: The static limit in (4) gives the following relativistic expressions for moments: where κ q is the quark anomalous magnetic moment.The ρ-meson charge ( r 2 ρ C ) and magnetic ( r 2 ρ C ) radii are calculated from relation: The lepton decay constant of the ρ-meson, f ρ is defined by the following matrix element of the electroweak current (see, e.g., [18]): where P ρ is the meson three-momentum, m ρ = −1, 0, 1 is the spin projection, ξ μ (m ρ ) is the polarization vector that in the Breit frame has the form (2).
For the calculation of the ρ-meson lepton decay constant we used the method of construction of current matrix element which is nondiagonal with respect to the total angular momentum [15].Final expression for f ρ in approximation of four-fermion interaction in our approach has form [15]: For the description of the relative motion of quarks the following phenomenological wave functions are used: 1.A Gaussian or harmonic oscillator wave function (see, e.g., [19]): 2. A power-law wave function (see, e.g., [19]): For Sachs form factors of constituent quarks we used the following expressions: where e q is the quark charge.
For f q (Q 2 ) the form proposed in [20] is chosen: here r 2 q is the MSR of constituent quark.The choice of form factor of the constituent quark as ( 15), ( 16) due to the fact that the asymptotic of the electromagnetic pion form factor at Q 2 → ∞ obtained in our nonperturbative approach with (16) does coincide with that predicted by QCD including the pre-asymptotic factor (see, e.g., [21] for details).
Parameters in our calculation are fixed from description of electroweak properties of pion [14].Let us note that our RQM describes well the experimental data for the pion form factor including the recent points [22].Let us emphasize that the parameters used in our calculations were obtained from the fitting to the experimental data up to Q 2 0.26 GeV 2 [23].At that time the data for higher Q 2 was not correlated in different experiments and had significant uncertainties.The later data for pion form factor in JLab experiments up to Q 2 =2.45 GeV 2 were obtained with rather good accuracy.All experimental points obtained in JLab up to now agree very well with our prediction of 1998.
So, by analogy with pion calculations from [14] we use the following set of parameters: M=0.22 GeV for the constituent quark mass; the quark anomalous magnetic moments enter our expressions through the sum (κ u + κ d) and we take κ u + κ d = 0.0268 in natural units; for the quark MSR we use the relation r 2 q 0.3/M 2 .The parameters of the wave functions b in ( 13), (14) were fixed from the requirements of the description of experimental values of ρ-meson lepton decay constants 152 ± 8 MeV [1].The ρ-meson mass in our calculations is taken from [2] So, all parameters of our model are fixed.Results of numerical calculations are presented in table 1.Let us remark that all electroweak characteristics of the ρ-meson except the lepton decay constant are calculated without fitting parameters.
As it can be seen from table the calculated value of the magnetic moment is in agreement with experimental data.The values of r 2 ρ C , while not measured directly, are important for testing various conjectures about strongly interacting systems.One of the interesting related prediction was introduced as a consequence of the so-called Wu-Yang hypothesis [24] (see also [25]), though it is remarkable by itself.Namely, one may define the radius of a hadron either in terms of the electroweak interaction (the mean square charge radius, r 2 ch , calculated for the ρ meson in this paper) or in terms of the strong interaction (this radius, r 2 st , is defined by the slope of the cross section of Table 1.The ρ-meson electromagnetic moments and lepton decay constant obtained with the different model wave functions ( 13) -( 14).r 2 ρ C , r 2 ρ M , are charge and magnetic MSR, respectively, in fm 2 ; μ ρ is relativistic magnetic moment ( 8) in e/2M ρ ; Q ρ is quadrupole moment (9) in fm 2 .The parameters b in ( 13) and ( 14) are in GeV, f ρ is ρ-meson lepton decay constant in MeV.hadron-proton scattering).The conjecture [25], which may be derived from, though not necessary implies, the hypothesis of [24], is the equality of the two radii,
Wave functions
This remarkable equality has been verified experimentally with a great degree of accuracy for the proton, πand Kmesons.Even more demonstrative is figure 1, analogous to a figure from the paper [25], but presenting more recent data.We can see that the value of the ρ-meson charge radius obtained in this paper fits perfectly the conjecture (17).
Conclusions
In does not violate the Lorentz covariance and conservation law for the electroweak currents in contrast with conventional impulse approximation.Other distinctive feature of our work is that all calculations were made without fitting parameters.The results for magnetic moment of the ρ-meson are consistent with recent experimental data [3].The values of the charge radius satisfy the hypothesis of Wu-Yang that has been experimentally verified for a number of hadrons.One of the authors (VT) thanks S.Troitsky for interesting discussions.Authors (AK and RP) thank the Organizing committee for their kind invitation to XXIII International Baldin Seminar on High Energy Physics Problems and hospitality.This work was supported in part (AK and RP) by the Ministry of Education and Science of the Russian Federation (grant No. 1394, state task).
Figure 1 .
Figure 1.Relation between the strong-interaction hadronic radius r 2 st and the charge radius r 2 ch for light hadrons.Result for ρ-meson is obtained in our work with wave function (14) at n = 3. ) the present work the calculations of electroweak characteristics of the ρ-meson in the framework IF RQM are performed with different wave functions in modified impulse approximation (MIA).MIA | 2,535.4 | 2017-03-01T00:00:00.000 | [
"Physics"
] |
Towards leading-twist $T$-odd TMD gluon distributions
We present exploratory studies of the 3D proton tomography through polarized $T$-odd gluon TMDs at leading twist, obtained in a spectator-model framework. We embody in our approach a flexible parameterization for the spectator-mass spectral function, suited to catch both small- and moderate-$x$ effects. All these studies are relevant to unveil the gluon dynamics inside hadrons, which represents a core research line of studies at new-generation colliders, such as the Electron-Ion Collider, NICA-SPD, the High-Luminosity LHC, and the Forward Physics Facility.
Introduction
One of the ultimate goals of frontier researches in particle physics is unraveling the inner structure of nucleons in terms of the distribution of their constituents. The collinear factorization is a wellestablished formalism that has collected many successes since the advent of the parton model. A key role in the description of high-energy hadronic and lepto-hadronic collisions is played by the onedimensional parton distribution functions (PDFs). However, there are fundamental questions about the deep nature of strong interactions that are still open and whose answers go beyond the reach of a pure collinear description. As an example, unveiling the origin of proton mass and spin requires a viewpoint stretched to a three-dimensional, tomographic description, which is naturally provided by the so-called transverse-momentum-dependent (TMD) factorization.
A striking difference between TMD and collinear densities is represented by the gauge-link sensitivity. In particular, the fact that TMDs are sensitive to the transverse components of the gauge link makes them process dependent (see Refs. [13][14][15]). Quark TMDs depend on processes through the [+] and [−] staple links, which determine the direction of future-and past-pointing Wilson lines, respectively. The gluon TMDs have a more complicated gauge-link dependence, since they are sensitive on combinations of staple links. This fact leads to a more diversified kind of modified universality. Two major gluon gauge links emerge: the f -type and the d-type ones. They are also known in the context of small-x studies as Weiszäcker-Williams and dipole structures, respectively. The antisymmetric f abc QCD color structure is part of the f -type T -odd gluon-TMD correlator, whereas the symmetric d abc structure appears in the d-type T -odd one. This brings to a dependence of f -type gluon TMDs on the [±, ±] gauge-link combinations, while d-type gluon TMDs are characterized by the [±, ∓] ones. More intricate, box-loop gauge links appear in processes where multiple color exchanges connect both initial and final state states [16], thus leading however to a violation of the TMD factorization [17].
A spectator-model calculation of quark TMDs in the proton was done in Refs. [43,44]. A comprehensive framework was recently built [45] (see also Refs. [46][47][48][49]) for all the T -even gluon TMDs at twist-2 by defining an enhanced spectator model for the parent proton to effectively catch effects coming from the high energy resummation.
In this work we report a preliminary study on the T -odd gluon TMDs, the f -type Sivers and linearity functions, which are connected to relevant single-spin asymmetries arising from the distribution of unpolarized and linearly-polarized gluons inside a transversely polarized proton.
T-odd gluon TMDs in a spectator model
The spectator-model framework is based on a simple and intuitive assumption, namely that the incoming proton with mass M and four-momentum P emits a gluon having longitudinal fraction x, four-momentum p, and transverse momentum p T , and the remainders are effectively treated as an onshell spectator particle with mass M X and spin-1/2. The nucleon-gluon-spectator vertex is modeled as follows the τ 1 and τ 2 functions being dipolar form factors in p 2 T . A dipolar choice for the couplings is useful to remove gluon-propagator divergences, suppress large-p T effects which are beyond the reach of a pure TMD description, and dampen logarithmic singularities coming from p T -integrated distributions. All the unpolarized and polarized spectator-model T -even gluon TMDs at twist-2 in the proton were obtained in [45]. In that work the naive spectator-model approach was improved by allowing the spectator mass M X to spread over a continuous range of values via a flexible spectral function suited to capture both small-and moderate-x effects (see Eqs. (16) and (17) of Ref. [45]). The model parameters encoded in the definition of the spectral function and in the spectator-model correlator were determined through a simultaneous fit of the unpolarized and helicity gluon TMD densities, f g 1 and g g 1 , to the corresponding collinear PDF distributions obtained from NNPDF [50,51] at the initial scale Q 0 = 1.64 GeV. The size of the statistical uncertainty was assessed by means of the bootstrap method.
Since the tree-level approximation for the gluon correlator does not account for the gauge link, our T -even TMD distributions turn out to be process-independent. In order to generate T -odd structures in the gluon correlator, we need to go beyond the tree level and include its interference with a distinct channel. Similarly to the quark TMD case, we have considered the one-gluon exchange in eikonal approximation. This diagram corresponds to the truncation at first order of the whole gauge-link operator. The main effect of this procedure is that the obtained T -odd functions become sensitive to gauge links, and thus process dependent. For the given f -type gauge link, two Sivers TMDs ( f ⊥ 1T ) and two linearity TMDs (h 1 ) are obtained by suitably projecting the transverse part of the corresponding gluon correlator. For each pair, the two partners are connected by the following modified-universality relation In our preliminary analysis we have employed a simplified expression for the nucleon-gluon-spectator vertex, with the τ 2 form factor in Eq. (1) set to zero. For the sake of consistency, we have fitted the model parameters to NNPDF parametrizations by using the simplified expression for the vertex.
In upper panels of Fig. 1 we present the transverse-momentum dependence of the p T -weighted [+, +] Sivers function for two representative values of the longitudinal fraction, x = 10 −1 and x = 10 −3 , and at the initial scale Q 0 = 1.64 GeV. Corresponding results for the [+, +] linearity function are given in ower panels. By inspecting our plots, it emerges that both the distributions have a non-Gaussian pattern in p 2 T , with a large flattening tail at large p 2 T -values and a small but nonzero value when p 2 T → 0, which suggests that in this limit both TMDs diverge at most as 1/|p T |. At variance with the T -even unpolarized and the Boer-Mulders gluon functions (see Fig. (4) of Ref. [45]), the bulk of our f -type T -odd functions increases when x grows. This suggests that transverse single-spin asymmetries could be less manifest in the low-x regime. We remark, however, that our results could change even radically when the full-vertex calculation will become available.
Conclusions and prospects
We have enhanced our spectator-model framework by performing preliminary calculation of two f -type T -odd gluon TMDs: the Sivers and the linarity functions. The full calculation of all the Todd gluon TMDs, including the d-type ones is underway. They can serve as a useful guidance to shed light on gluon-TMD dynamics at new-generation particle colliders and experiments, such as the Electron-Ion Collider (EIC) [52], NICA-SPD [53], the High-Luminosity Large Hadron Collider (HL-LHC) [54], and the Forward Physics Facility (FPF) [55]. | 1,720.6 | 2022-01-25T00:00:00.000 | [
"Physics"
] |
APPLICATIONS OF CHEMICALLY SYNTHESIZED CUS: PBO ALLOYED THIN FILMS IN MULTILAYER SOLAR CELLS AND OPTOELECTRONICS
Http://www.ijetmr.com©International Journal of Engineering Technologies and Management Research [13] APPLICATIONS OF CHEMICALLY SYNTHESIZED CUS: PBO ALLOYED THIN FILMS IN MULTILAYER SOLAR CELLS AND OPTOELECTRONICS Joseph Ijeoma Onwuemeka, Ngozi Patricia Ebosie, Michael Chukwukadibia Anumaka, Margaret Chinyelu Enedoh 1 Department of Physics, Imo State University, Owerri, Imo State Nigeria Abstract: CuS: PbO, alloyed thin films were successfully deposited on glass substrates under the deposition condition of 40oC of NaOH solution, using two solution based methods: successive ionic layer adsorption and reaction (SILAR) and solution growth technique. The crystallographic studies were done using X-ray diffractometer (XRD) and scanning electron microscope (SEM). The deposited alloyed samples were annealed at 250oC and 1500C. using Master Chef Annealing Machine. Rutherford backscattering Spectroscopy (RBS) analysis confirmed the percentage of the elements of copper, lead, sulphur and oxygen in the alloyed thin films. The optical characterization was carried out using UV-1800 double beam spectrophotometer. Sample cp1 annealed at 250 oC has an optical transmittance of 27% -71% in the ultraviolet region, 71%-83% in the visible and 83%-88% in the near infrared regions of electromagnetic spectrum. The alloyed thin films of samples cp2 of CuS:PbO annealed at 150oC, show optical transmittance of 15%-61% in the ultraviolet region, 61%-59% in the visible, and becomes linear through the near-infrared regions of electromagnetic spectrum. The two samples, have equal direct wide band gap of 3.65±0.05eV. From the spectral qualities, these compounds alloyed thin films may found useful in passive layer in heat and cold mirror application, vulcanization in tyre production due its thermal stability, active multilayer in various types of solar cells, liquid crystal displays, flat panel displays for optoelectronic applications and gas censor applications.
Introduction
The increase in thin film researches is due to their extensive applications in the diverse fields of solar energy conversion, electronics, space science, optics, aircrafts and other industries. These investigations had led to numerous forms of active and passive components, piezoelectric devices, rectification and amplification, magnetic memories, superconducting films. Because of The sulphide and oxide are attractive semiconductor materials that exhibit strong size quantization effects due to the high dielectric constant and the small effective mass of electron and holes, suggesting that its band gap energy can be easily manipulated from the bulk value to a few electron volts by the changes in the material's size. Those materials have also been used in many fields such as infrared photography, diode laser, humidity and temperature sensors, and decorative and solar control coatings among other applications. Novel materials are needed for thin film solar energy conversion apart from the most extensively studied material [3] Copper sulphides (CuS) are important materials for applications in p-type semiconductors and optoelectronics. This find use in photo thermal conversion applications, photovoltaic applications, solar control coatings and other electronic devices fabrication of microelectronic devices, optical filters as well as in low temperature gas sensor applications. Special attention is now given to the study of copper sulphide thin films probably due to the discovery of heterojunction solar cell [4] [5] Metal oxide nanoparticles have attracted considerable attentions in the last decades. Among them copper oxide (CuO) based materials have various technological applications in solar energy conversion, ceramics, sensors, catalysis, batteries, solar cells, magnetic storage media, semiconductors, capacitors, diodes, and so forth [6] because of their novel mechanical, electronic, magnetic, and optical properties compared with those of conventional bulk materials. Generally, these materials indicate high optical transmission, low electrical resistivity and high transparency in the visible region of the electromagnetic spectrum. Recently, these materials have been intensively investigated as a potential candidate material for solar energy conversion, smart window, gas sensors, IR detector, photodiode, conducting electrode, anti-reflection coatings and liquid crystal display [7] [8].
Considering the factors of continued consumption for long run, it is possible to have sustainable energy by utilizing renewable energy sources particularly solar energy which is very available. Solar energy conversion is mainly classified as solar thermal energy and solar photovoltaic electricity.
The direct conversion of solar energy into electricity by photovoltaic (PV) solar cells was studied for the past 30years. Some countries are still far from making these sources cost effective. Photovoltaic solar energy conversion offers one of the few ways of producing electricity in urban areas which is free of various emissions and noise. World energy demands are met from a variety of energy sources both conventional and non-conventional.
In spite of substantial increase during the last many decades in the supply of commercial sources of energy such as coal, oil and gas, and wood still meet about half of our energy needs, particularly in the rural areas. Besides being inefficient in terms of end use, loss of green wood for fuel also results in adverse impact on the environment and pollution control. There exists a substantial potential in non-conventional source such as solar, wind and tidal energy etc.
Alloys
An alloy is a mixture of metals or a mixture of a metal and another element. Alloys are defined by a metallic bonding character. An alloy may be a solid solution of metal elements (a single phase) or a mixture of metallic phases (two or more solutions). An alloy is distinct from an impure metal in that, with an alloy, the added elements are well controlled to produce desirable properties, while impure metals such as wrought iron, are less controlled, but are often considered useful. Alloys are made by mixing two or more elements, at least one of which is a metal. This is usually called the primary metal or the base metal, and the name of this metal may also be the name of the alloy [9]. The other constituents may or may not be metals but, when mixed with the molten base, they will be soluble and dissolve into the mixture.
The mechanical properties of alloys will often be quite different from those of its individual constituents. Although the elements of an alloy usually must be soluble in the liquid state, they may not always be soluble in the solid state. If the metals remain soluble when solid, the alloy forms a solid solution, becoming a homogeneous structure consisting of identical crystals, called a phase. If as the mixture cools the constituents become insoluble, they may separate to form two or more different types of crystals, creating a heterogeneous microstructure of different phases, some with more of one constituent than the other phase has. However, in other alloys, the insoluble elements may not separate until after crystallization occurs. If cooled very quickly, they first crystallize as a homogeneous phase, but they are supersaturated with the secondary constituents. As time passes, the atoms of these supersaturated alloys can separate from the crystal lattice, becoming more stable, and form a second phase that serve to reinforce the crystals internally. Some alloys, such as electrum which is an alloy consisting of silver and gold, occur naturally [9].
The primary metal is called the base, the matrix, or the solvent. The secondary constituents are often called solutes. If there is a mixture of only two types of atoms (not counting impurities) such as a copper-nickel alloy, it is called a binary alloy. If there are three types of atoms forming the mixture, such as iron, nickel and chromium, then it is called a ternary alloy. An alloy with four constituents is a quaternary alloy, while a five-part alloy is termed a quinary alloy. In this respect, all the various forms of an alloy containing only two constituents, like iron and carbon, is called a binary system, while all of the alloy combinations possible with a ternary alloy, such as alloys of iron, carbon and chromium, and is called a ternary system [10] [12] In this present work, the synthesis and characterization of CuS:PbO alloyed thin films have been studied.
Reaction Mechanism
The synthesis of the alloyed thin films using SILAR method constituted: 4ml of 3M solution of ammonia used as complexing agent was measured with a syringe and added into separate beaker containing 0.2M solution of CuSO4.5H2O by dissolving 10g in 150cm 3 of water. CuSO4.5H2O produced blue gelatinuous precipitates when reacted with NH3, which dissolved in excess ammonia solution forming copper tetra-amine complex ion, [Cu (NH3)4] 2+ as given in equation (2.1). as given in Figure 2.1a, represented in equation (2.1).
De-ionized water was added up to 50ml and the solution was stirred vigorously in order to achieve uniformity in the mixture. The suitable pH value for this work is 9 for the alloyed thin films of CuS: PbO as detected by the piston pH meter.
CuS thin films were deposited on substrates in cycles; one cycle is completed by dipping the substrate first into the beaker containing the cationic precursor and then rinsed in a beaker of deionize water, shown in Figure 2.1b and immersed into the third beaker, containing the anionic precursor, shown in Figure 2.1c which, is 0.8M solution of 17.51g of thiourea (C2H4)CS after which the substrates were rinsed in de-ionized water, Figure 2.1d and this is repeated based on the number of chosen cycles [11]. This is given in equations (2.2). The parameters for SILAR deposition are depicted on Several bath compositions were employed, but the optimum result was achieved with the specification shown on
Composition and Thickness Characterization
It is often necessary to determine the elements and the thicknesses of the thin film samples. In this work, atomic compositions and the thicknesses of the samples were determined using 2.
Crystallographic Studies of The Deposited Samples
The XRD analysis was carried out using X-ray diffractometer modeled GBC Enhanced Mini Material Analyzer (EMMA). XRD pattern gives information relative to the nature and structure of the alloyed thin films of CuS: PbO, prepared at 40 0 C of sodium hydroxide solution. Figure 3.3 show x-ray diffraction of the above listed alloyed thin films. The XRD patterns show sharp and well defined peaks which indicates the crystalline nature of alloys of CuS: PbO . The crystallite sizes given in Table 3.3 are obtained using Debye-Scherer's equation [14].
where k is the shape factor (k= 0.9), D is the grain size or average crystallite size, λ is the wavelength of CuKα radiation used (λ = 1.54Å), β is the experimentally observed diffraction peak width at half maximum intensity (full width at half maximum FWHM) and θ is the Bragg's diffraction angle.
Microstructure of the Grown Samples
Microstructure of the thin films of CuS:PbO was determined using electron microscope Phenom Prox, Model number MVEO16477830 manufactured by Phenom World Eindhoven Netherland. The process of analysis is through back scattering electron imaging method.
Sample cp1, of CuS:PbO has minor cracks. It has incoherent surfaces which is due to synthesis conditions. It contains non-agglomerated morphology. The sample has rough texture and granular microstructures. This is shown in Figure 3
Optical Characterization
UV1-1800 series double beam spectrophotometer was used to study the optical properties of the deposited samples. The transmittance spectra of the two samples, have good transparency in the UV (15%-61%) for sample cp2 annealed at 150 0 C and good transmittance (27%-71%) for sample cp1 annealed at 250 0 C in the wavelength range (320nm-400nm). The high transmittance (60%-58%) for sample cp2 in the visible region (400nm-700nm), becomes almost linear through the near infrared region of electromagnetic spectrum. The transmittance of sample cp1 increases as wavelength increases up to the near infrared region of electromagnetic spectrum. In the nearinfrared region, sample cp1 has high transmittance (83%-88%) within the wavelength of 700nm-1080nm of the region of electromagnetic spectrum as shown in Fig.3.5. This makes these alloyed films good materials for UV filter [15]. It can also be used as optoelectronic material, It can also be a good material for cold and heat windows, dazzling coatings, solar thermal-energy conversion. The optical energy band gap is obtained in k space using equation (3.2) given as where α is the absorption coefficient, h is the Planck's constant, v is the frequency, Eg is the energy band gap and A is the constant which depends on the materials. The energy band gaps of samples B1 and B2 are obtained by extrapolating the linear portion of the plot (h) 2 against h at (αh) 2 = 0 as shown in Figure 3.6. A direct band gap value of 3.65 ± 0.05eV is obtained for both samples cp1 and cp2.The wide band gap obtained in this work makes the CuS:PbO a good material for the production of laser diodes and light emitting diodes (LEDs) [16]. It will also be useful in solar energy conversion, liquid crystal displays and flat panel displays for optoelectronic applications.
Conclusion
CuS:PbO, alloyed thin films were deposited on glass substrates using two solution based methods: successive ionic layer adsorption and reaction and solution growth technique at constant temperature of 40ºC of NaOH solution while other reactants were kept at room temperature of 20 o C. NH3 solution was used as complexing agent. The deposited samples were annealed between 250 o C and 150 o C, using Master Chef Annealing Machine. The alloyed thin films exhibited appreciable good transmittance from the ultraviolet region, through the visible to near infrared regions of electromagnetic spectrum. Other optical properties of the samples were determined using appropriate equations. Direct average energy band gap of 3.65±0.05eV was obtained for CuS:PbO alloyed thin films. The other properties investigated are absorbance, reflectance, optical conductivity, optical constants and absorption coefficient using their appropriate equations. These material alloy thin films prepared under this condition with wide energy band gap and high transparency in the visible region can be found useful in passive applications as dazzling coating, cold and heat windows, solar thermal-energy collector, selective absorbing layer and active solar cell applications, semiconductor materials, for optoelectronic applications, UV light emitting devices, laser diodes, sensors, and optical communications etc. Also, this material, has higher breakdown voltage, ability to sustain large electric field, low electronic noise, stable at higher temperature and high-power operation due to the simultaneous combination of the two separate binary compounds. Due to its thermal stability, it can be found useful in the area of tyre production(vulcanization). | 3,284.4 | 2020-03-23T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Investigation of bioluminescence-based assays for determination of kinetic parameters for the bifunctional Neisseria meningitidis serogroup W capsule polymerase
Objective Neisseria meningitidis is a Gram-negative bacterium that causes meningitis. N. meningitidis serogroup W (NmW) capsule polymerase synthesizes capsular polysaccharide of this serogroup. This enzyme could be a tool for meningococcal glycoconjugate vaccine development. Our long-term goal is to control activity of the NmW capsule polymerase for production of defined carbohydrates for vaccines. The enzyme lacks a simple, high-throughput activity assay. Here, we describe the use of high-throughput bioluminescence assays (CMP-Glo and UDP-Glo by Promega) to investigate NmW capsule polymerase activity. These assays detect free nucleotides produced during transfer of sugar from UDP-Galactose and CMP-Sialic Acid to an acceptor. Kinetic studies using NmW hydrolyzed polysaccharide (PS) acceptor are described as well as preliminary work with a sialic acid trimer (DP3) acceptor. Results In CMP-Glo kinetic studies, with constant donor (80 µM) and varied NmW hydrolyzed polysaccharide (0–2000 µg/mL), a Km of 629.2 ± 101.4 µg/mL and a Vmax of 0.8965 ± 0.05823 µM/min was obtained. Using UDP-Glo, Km and Vmax values of 13.84 ± 9.675 µM and 0.6205 ± 0.1331 µM/min were obtained with varied CMP-NeuNAc (0–80 µM) and constant acceptor (400 µg/mL) and UDP-Gal (80 µM). This is the first report of using bioluminescence assays for NmW kinetics. Supplementary Information The online version contains supplementary material available at 10.1186/s13104-021-05831-1.
Introduction
Neisseria meningitidis is a gram-negative bacterium that causes most cases of bacterial meningitis [1]. Of the 13 serogroups of the bacteria, there are six to which disease is attributed [1][2][3]. Each serogroup is defined by its capsular polysaccharides. The capsule polymerase enzymes, responsible for biosynthesis of capsular polysaccharides, from the six pathogenic serogroups (A, B, C, W, Y and X) have been characterized to varying degrees [4][5][6][7][8][9][10][11][12][13][14][15]. Our current focus is the Neisseria meningitidis serogroup W (NmW) capsule polymerase enzyme. This bifunctional enzyme transfers sialic acid and galactose from two nucleotide donor substrates (CMP-Neu5Ac and UDP-Gal) to an acceptor during synthesis of capsular polysaccharide. The long-term goal is to gain insight into the NmW capsule polymerase as a tool for controlled synthesis of carbohydrates for use in glycoconjugate vaccines [16]. Glycoconjugate vaccines with defined carbohydrate length and well-characterized attachment to carrier proteins will be key to maximizing vaccine efficacy [17].
There are no assays described in the current NmW capsule polymerase literature that allow for determination of the kinetic parameters of the enzyme in a simple Sheikhi Moghaddam et al. BMC Research Notes (2021) 14:417 high-throughput manner using unlabeled acceptors. At the time we began this work in 2018, the only report of kinetics was our previous work which used a continuous, absorbance-based assay with unlabeled acceptors [18]. In 2020, the Chen lab published elegant work using one-pot multienzyme synthesis (OPME) of chromophore-labelled oligosaccharides to determine kinetics in an HPLC-based assay [8]. In this work, we investigate commercially available bioluminescence-based assays (CMP-Glo and UDP-Glo) as tools to determine kinetics of the NmW capsule polymerase. We selected these kits for their high-throughput 96-well format, sensitivity and the ability to use unlabeled acceptors. These kits detect free UDP and CMP produced by glycosyl transfer [19][20][21] (Additional file 1: Fig. S1). UDP-Glo or CMP-Glo nucleotide detection reagents (NDR, each containing a proprietary luciferase) are added to sample on the 96 well plates to quench the glycosyltransferase reaction. Any free CMP or UDP present is converted to ATP and luminescence is produced and monitored. Thus, increased luminescence correlates with increased ATP which correlates with more glycosyltransferase activity. Here we describe our initial efforts to obtain kinetics using both kits and lessons learned from adapting these assays for the bifunctional NmW capsule polymerase.
Growth, expression, and purification of the Neisseria meningitidis serogroup W capsule polymerase
The enzyme was recombinantly overexpressed in E. coli KRX cells, purified and characterized according to a published procedure [18].
UDP-Glo bioluminescence assays
All studies performed with the UDP-Glo kit and reagents were done in a similar manner as described for CMP-Glo except as described. Specific modifications were the use of 1250 ng of NmW capsule polymerase in experiments to determine activity effects of removing one component and in time quenching experiments. In reactions using DP3 as an acceptor, 2 mM was used except in studies to determine optimal acceptor concentration (0-4 mM DP3 acceptor was used).
NmW capsule polymerase elongation of DMB-labelled hydrolyzed W polysaccharide
Hydrolyzed W capsular polysaccharide (10 mg/mL) was prepared as described previously [18]. The hydrolyzed NmW capsular polysaccharide was labeled with DMB according to a published procedure [22] to give a final concentration of 5 mg/mL hydrolyzed sugar. Activity was tested by reaction with NmW capsule polymerase with DMB-labelled hydrolyzed W polysaccharide (0.25 mg/ mL), DTT (2 mM) in the presence or absence of 2 mM CMP-NeuNAc and 2 mM UDP-Gal respectively in the same buffer used in other studies. Control reactions contained no enzyme. Fluorescence detection was checked by HPLC using previously described conditions [7,23] after 15 h incubation at 37 ºC. GraphPad Prism 8.0 was used to graph chromatogram results.
Results and discussion
The long-term goal of this research is to control activity of the NmW capsule polymerase for production of welldefined carbohydrates for glycoconjugate vaccines. This work describes the application of facile, high-throughput assay methods to advance this goal. Results described here are the culmination of 55 individual experiments in which two or three replicates were performed.
Differences in reactivity observed between UDP-Glo and CMP-Glo assays
In efforts to determine the optimal conditions to perform the enzyme reactions using these kits, a series of reactions (containing CMP-NeuNAc, UDP-Gal, DTT, hydrolyzed serogroup W polysaccharide acceptor, with or without enzyme) were performed in which the only component varied was the amount of enzyme. The results for the UDP-Glo assay (Additional file 2: Fig. S2A) show increase in activity as the amount of enzyme increases (max. with 1000 ng). The results for the CMP-Glo assay (Additional file 2: Fig. S2B), a measure of sialyltransferase activity, indicate a bell-shaped activity curve (max. with 50 ng). This was an unexpected result as it was assumed that the same enzyme concentration would be used for both assay kits. However, these results suggested that the activities of the two catalytic domains were not tightly correlated under these conditions. Nevertheless, the enzyme amount selected for further studies using the CMP-Glo assay was 50 ng. vs. 750 ng of enzyme to be used in the UDP-Glo assay because the luminescence output was comparable.
CMP-Glo kinetic assays using hydrolyzed serogroup W acceptor
With knowledge of how much serogroup W enzyme to use, both the optimal amount of nucleotide donors and the linearity of the enzyme reaction were investigated. The optimal amount of luminescence was obtained with 80 µM CMP-NeuNAc and UDP-Gal in both kits. In addition, the enzymatic reactions were found to be linear over 10 min (Additional file 3: Fig. S3). Initially, kinetic measurements used only the CMP-Glo assay. For all kinetic assays, one component (either UDP-Gal, CMP-NeuNAc or hydrolyzed acceptor) was varied while all other components were held constant. When both nucleotide donor sugars were constant (at 80 µM each) and the amount of hydrolyzed serogroup W acceptor was varied (0-2000 µg/mL), a K m value of 629.2 ± 101.4 µg/mL and a V max of 0.8965 ± 0.05823 µM/min (Fig. 1A) were obtained. In kinetic studies with varied CMP-NeuNAc (0-80 µM) and constant acceptor, a K m and V max values were obtained 13.84 ± 9.675 µM and 0.6205 ± 0.1331 µM/ min (Fig. 1B). This data set showed more variability as evidenced by the error bars and reduced R 2 value. The related standard curves show the data is reliable (R 2 > 0.95) (Fig. 1C, D).
Hydrolyzed serogroup W polysaccharide acceptor contains mostly sialylated material
Because of the continued variability in the data, further confirmation that the change in luminescence observed was enzyme-mediated was needed. A series of reactions in the absence of selected components was performed using both bioluminescence kits. Galactosyltransferase activity (as observed using UDP-Glo) was seen only in the presence of all components as expected ( Fig. 2A).
The results of monitoring sialyltransferase activity (using CMP-Glo) were unexpected. There was an enzymemediated increase in activity in the absence of UDP-Gal (Fig. 2B). To gain more understanding into this finding, the products of enzymatic elongation of DMBlabeled hydrolyzed acceptor by the serogroup W capsule polymerase was visualized by anion exchange HPLC-FD. The goal was to observe whether there was any change in the chromatogram in the presence of the capsule polymerase, acceptor and with no UDP-Gal present or with no CMP-NeuNAc. As reported previously by Romanow et al., there are signature peak retention time shifts observed in elongated fluorescent products [4]. Decreased retention time indicates addition of galactose (due to the decrease in polarity by addition of the neutral sugar) and increased retention time indicates addition of sialic acid. Our data suggests that the hydrolyzed acceptor being used was primarily galactosylated (Fig. 2C, D). In the absence of CMP-NeuNAc and the presence of UDP-Gal, there is only a shift of one peak, and this is towards decreased retention time. In contrast, when CMP-NeuNAc is included and UDP-Gal omitted, there is a shift of nearly all remaining peaks towards increased retention time suggesting sialylation. At this point, it was unclear whether there was preferential hydrolysis of the polysaccharide or whether there was preferential labeling during incubation with DMB [22]. The DMB dye will only label free reducing end sialic acids so this phenomenon may influence the products that are visualized. We transitioned to a well-defined oligosaccharide which is a known substrate of the enzyme: a trimer of α, 2-8 linked sialic acid [4,5].
CMP-Glo assay optimization with sialic acid trimer
While the UDP-Glo kit includes commercially available ultrapure UDP-Gal (essential in avoiding high background rates) there is no commercially available ultrapure CMP-NeuNAc. Our source of CMP-NeuNAc was the highest purity commercially available, [guaranteed 97% by Nacalai-Tesque and verified by HPLC analysis (not shown)] yet this seemingly small 3% impurity was having a large effect on results because of the sensitivity of the assay. To circumvent this, CMP-NeuNAc solutions were pre-treated with alkaline phosphatase (AP) (Additional file 4: Fig. S4A, Additional file 5). This enzyme removes phosphoryl groups from nucleotide mono-and diphosphates [24]. There were decreased levels of background CMP after this pre-treatment. DP3 trimer was also subjected to AP treatment with no change observed (Additional file 4: Fig. S4B). Despite this, there was still a considerable amount of unexplainable background luminescence (results not shown). The decision was made to focus solely on the UDP-Glo assay for subsequent studies with DP3 acceptor and continue AP pre-treatment of CMP-NeuNAc because better luminescence was observed (Additional file 6: Fig. S5A, B).
UDP-Glo assay optimization with sialic acid trimer
Similar optimization assays were performed with sialic acid trimer using the UDP-Glo system. The trends mirrored those observed with hydrolyzed acceptor. Namely, there was an increase in activity with increasing levels of enzyme present in the reaction (Fig. 3A).
The highest signal was seen with 4 mM DP3 as an acceptor and there was very little background observed in the control reactions (Fig. 3B). Results for the optimum amount of nucleotide donor sugar to use and the time course of the reaction were like our previous observations with hydrolyzed serogroup W sugar. The optimal luminescence was obtained with 80 µM CMP-NeuNAc and UDP-Gal (Fig. 3C) and the enzymatic reaction was found to be linear over 10 min (Additional file 7: Fig. S6).
Conclusions
The work described here represents the first literature report of kinetics of the NmW capsule polymerase using non-chromophore labelled acceptors and commercially available bioluminescence kits. Although NmW hydrolyzed capsular polysaccharide was not the ideal acceptor for continuation of this work, the lessons learned from these studies were essential and continue to inform our studies using well-defined acceptors. Future studies will focus on mutational approaches to control carbohydrate synthesis by the NmW capsule polymerase to develop well-defined glycoconjugate vaccines.
Limitations
The original acceptor used, NmW hydrolyzed sugar, was not a suitable acceptor to move forward with due to not being well-defined and being mostly sialylated material. When using a well-defined acceptor in the CMP-Glo assay, high background signals in control reactions were observed that were not remedied by AP treatment. Current efforts are focused on solely using the UDP-Glo assay for kinetic studies with the NmW capsule polymerase and DP3-based acceptors. | 2,884 | 2021-11-18T00:00:00.000 | [
"Chemistry",
"Biology",
"Medicine"
] |
Research on the mechanism of consumer participation in value co-creation by innovative enterprises: An evolutionary game analysis framework
The profound changes brought about by informatization and digitalization have given rise to the user-centered innovation concept, and value co-creation by enterprises has become an inevitable trend. It has become a pressing issue for scholars to analyze the mechanism of consumer participation in the value co-creation of innovative enterprises. In this paper, by establishing an evolutionary game model between consumers and innovative enterprises, we analyze in depth the mechanism of consumer participation in the value co-creation of innovative enterprises. The results show that the initial cooperation probability between consumers and innovative enterprises directly affects their strategic choices; the establishment of reward mechanisms makes consumers more inclined to choose active participation in value co-creation strategies; as the probability of non-cooperation between the two parties being reported increases, the probability of consumers and innovative enterprises choosing cooperation also increases. Studying the mechanism of consumer participation in the value co-creation of innovative enterprises has essential theoretical and practical significance for enterprises to achieve value creation, enhance competitiveness, and promote innovation. This study not only enriches and develops relevant theories but also provides guidance and support for the practice of enterprises, promoting sustainable development and successful co-creation.
Introduction
Under the background of globalization, digitalization, and social transformation, the business environment faced by enterprises has become increasingly complex and dynamic.On one hand, the development of information and digital technology has imposed stricter requirements on the development of enterprises.Traditional business models and modes can no longer meet the needs of the times.On the other hand, stakeholders' influence and power over enterprises, including consumers, employees, shareholders, government, social organizations, suppliers, and others, have been enhanced.Among stakeholders, consumers hold a pivotal position.With increased power and decision-making autonomy in purchasing, consumers are no longer passive participants but have become vital forces actively involved in and influencing enterprise decisions.
Currently, value co-creation can be broadly categorized into two main theories.One is the experiential value co-creation theory based on consumer experience, which suggests that through interactions with consumers, enterprises can create personalized experiences [1].The other is the service-dominant logic of value co-creation, which argues that value is co-created through the collaboration and interaction of multiple stakeholders, including employees, consumers, and enterprises.Stakeholders contribute their knowledge and skills to the value-creation process of enterprises, thus achieving value-creation [2,3].Both of these theories indicate the vital role consumers play in transitioning from a product-dominant logic to a consumer experience logic and a service-dominant logic.The relationship between consumers and enterprises is a co-creative one aimed at achieving value co-creation, stimulating product and service innovation, and ultimately gaining competitive advantage [4,5].Therefore, enterprises must actively collaborate with stakeholders, especially consumers, to engage in value co-creation and jointly tackle the challenges and opportunities brought about by societal changes.
Consumers are critical actors in the value-creation ecosystem.Consumers contribute significantly through effective interactions as co-creators of value for enterprises [1,6].The rise of internet services and social media has provided a favorable platform and channel for consumer participation in value creation for enterprises [7,8].Consumers can co-create value with enterprises through online and offline channels [9,10].Enterprises can improve their products and services online by considering consumer reviews and feedback [11].In offline channels, enterprises provide opportunities for consumers to experience their products and services.This experiential engagement positively influences consumer participation in value co-creation and enhances brand loyalty and satisfaction [12,13].Furthermore, enhancing enterprises' brand value also contributes to consumer participation in value co-creation [14].It is important to note that whether in online or offline channels, paying attention to differentiated consumer demands is essential for value co-creation by enterprises [15].
In value co-creation with consumers, enterprises must focus on guiding consumer psychology.Understanding consumer needs from dimensions such as psychological ownership, selfidentity, and spatial requirements can enhance consumer brand loyalty and ultimately improve the competitiveness of enterprises through value co-creation [16].
However, it is undeniable that many factors still hinder consumer participation in value cocreation with enterprises.On the one hand, the asymmetry and inapplicability of resources and information between consumers and enterprises lead to a lower willingness of consumers to participate in value co-creation [17,18].On the other hand, the lack of consumer expertise results in lower contribution capabilities in the value co-creation process [19].Additionally, consumer consumption inertia, perceived complexity, perceived risk, and perceived justice can also become critical barriers that hinder consumer participation in value co-creation [20].
Evolutionary game theory mainly studies the dynamic process of subject evolution over time, which is different from traditional game theory's complete rationality and static analysis.Based on bounded rationality, evolutionary game theory is a dynamic game theory.Due to its ability to help understand and explain the evolution and change of individual and group behavior, as well as the interaction and results of inter-group games, evolutionary game theory has been widely applied in fields such as biology, economics, management, and social behavior.
For instance, evolutionary game theory is used in biology to predict social behavior and other characteristics that influence individual interaction patterns and to analyze the social dominance hierarchy of group-living animals by combining evolutionary game theory with behavioral mechanisms [21].
In economics, scholars have utilized a combination of epidemiological models and behavioral dynamics concepts from evolutionary game theory to analyze the gradual weakening of compliance with economic shutdowns over time and the development of shield immunity during the COVID-19 pandemic [22].Additionally, researchers have employed evolutionary game theory models to analyze the issue of data openness in the digital economy, focusing on critical actors such as data providers, users, and regulatory agencies [23].
In management, scholars have used evolutionary game theory to study reputation management in the Internet of Vehicles (IoV) [24].Evolutionary game models have also been employed to analyze group decision-making in signed social networks, specifically examining the dynamics between selfish and collectivist agents [25].
In the field of social behavior, scholars have utilized reputation mechanisms and Markov process-based individual game transitions to describe changes in individual psychology [26].Evolutionary multigame models combined with dynamic complex networks have been employed to analyze and predict group decision-making behavior in interactive environments [27].Furthermore, the application of evolutionary game theory has been explored in constructing centralized exclusionary institutions as global exclusion models and analyzing their potential impacts on the replicator dynamics of public goods games [28].Additionally, numerous reviews have highlighted the wide-ranging applications of evolutionary game theory in both natural and social sciences [29].
Although the literature above analyzes consumer participation in value co-creation with enterprises, some limitations remain.Firstly, current research mainly relies on qualitative and case analysis methods to analyze the process of consumer participation in value co-creation with enterprises.Secondly, the above studies should have considered the issue of consumer strategic choices in the analysis of consumer participation in value co-creation with enterprises.Thirdly, current research has yet to analyze the path changes in consumer participation in value co-creation with enterprises.In comparison to existing literature on value co-creation in innovative enterprises, this study makes several contributions in the following aspects: 1. Currently, scholars mainly rely on case studies and qualitative descriptions to research value co-creation in innovative enterprises, with few scholars conducting in-depth analyses using empirical methods or mathematical models.This study takes consumers and innovative enterprises as the game participants.It conducts a comprehensive mathematical analysis of consumer participation in value co-creation with innovative enterprises by establishing an evolutionary game model.
2.
In analyzing the process of consumer participation in value co-creation with innovative enterprises, this study considers the issue of consumer strategic choices, which breaks the existing research paradigm of value co-creation in innovative enterprises.It expands the research boundaries and enriches the research content of value co-creation in innovative enterprises.
3. After using the evolutionary game model to analyze the process of consumer participation in value co-creation with innovative enterprises, this study further simulates and analyzes the evolutionary path of consumer participation in value co-creation with innovative enterprises using Matlab.It also investigates the impact of parameter changes on value co-creation in innovative enterprises.
The remaining structure of this article is arranged as follows: Part 2 analyzes the underlying mechanisms of consumer participation in value co-creation with innovative enterprises.Part 3 proposes research hypotheses for consumer participation in value co-creation with innovative enterprises.Part 4 establishes an evolutionary game model with consumers and innovative enterprises as the primary entities.Part 5 uses Matlab to simulate the evolutionary path of consumer participation in value co-creation with innovative enterprises.Part 6 presents the conclusions and discussions of this study.
Analysis of mechanisms
With the rapid development of information technology, represented by digital technology, innovative enterprises must continuously update their technology and business models to adapt to technological advancements.Engaging in value co-creation enables enterprises to leverage new technologies better and provide more advanced products and services to meet consumer needs.Consumer participation in value co-creation with innovative enterprises benefits both parties.On one hand, it enhances consumer trust and loyalty towards the enterprise.On the other hand, it provides the enterprise with diverse, innovative ideas and creativity.Consumers can contribute novel insights and perspectives, challenging the existing mindset of the enterprise and driving innovation and improvement in products and services.Consumer participation also helps enterprises better identify product quality issues and provides directions and opinions for improvement.From Fig 1, it can be observed that when consumers participate in the value co-creation process with innovative enterprises, they go through the following specific stages.The first stage is feedback on consumer needs.Through consumer communication and feedback, innovative enterprises can better understand market demands, make timely adjustments to their products or services, and enhance their competitiveness.The second stage is consumer involvement in innovation.As participants in the innovation process, consumers can provide new ideas and insights, fostering innovation and development within the enterprise.Innovative enterprises attract consumer participation through innovation and social activities, collaborating to create value.The third stage is consumer word-of-mouth promotion.As loyal users of the enterprise, consumers can promote the products and services to others through word- of-mouth, helping expand the market.The innovative enterprise earns consumer trust and support by delivering high-quality products and services, thus driving its growth.
Behind the mechanisms of consumer participation in value co-creation with innovative enterprises, the enterprise must establish an open, transparent, and interactive cultural atmosphere that encourages consumer involvement in innovation and value co-creation.The enterprise must also establish corresponding participation platforms and tools that enable consumers to interact and engage efficiently.By collaborating with consumers, the enterprise can better meet market demands, increase the success rate of innovation, and enhance market competitiveness.
Model Hypothesis
The involvement of consumers in co-creating value with innovative enterprises has been a topic of significant academic interest.As one of the most important stakeholders in this process, consumers are critical in co-creating value with innovative enterprises.In order to further analyze the underlying mechanisms of consumer participation in value co-creation with innovative enterprises, as well as the strategic choices made by both parties, this paper proposes the following hypotheses.
Hypothesis 1: In the value co-creation process of innovative enterprises, this study assumes the presence of only two game entities.On the one hand, there is the innovative enterprise, which serves as the primary carrier of value co-creation and acts as a central force in the value co-creation process.On the other hand, there are the consumers, who are the primary participants in value co-creation with the innovative enterprise and play a crucial role in enterprise value co-creation.
Hypothesis 2: Although innovative enterprises (E) and consumers (C) are essential participants in value co-creation, both parties are characterized by bounded rationality.Due to differences in cognitive abilities, information acquisition, time constraints, and experience, innovative enterprises (E) and consumers (C) cannot make optimal decisions based on unlimited information and analysis.Instead, they have to make decisions based on limited time and information.Moreover, the decision-making process of both parties is also influenced by factors such as emotions, biases, habits, and risk aversion when making choices within limited information.Therefore, innovative enterprises (E) and consumers (C) seek to maximize their interests in the game process.They also adjust their strategies based on changes in their benefits and the strategies of the other party in order to achieve maximum self-interest.
Hypothesis 3: To simplify the subsequent analysis, this study assumes that the innovative enterprise (E) has only two strategies in the game process.One strategy is active value cocreation, while the other is passive.Therefore, the strategy space of the innovative enterprise (E) can be represented as = (active value co-creation, passive value co-creation).Simultaneously, it is assumed that the consumers () have only two strategies.One strategy is active participation in value co-creation, while the other is passive participation.The strategy space of the consumers (C) can be represented as = (active participation in value co-creation, passive participation in value co-creation).Additionally, it is assumed that the innovative enterprise (E) has a probability of choosing the active value co-creation strategy as x and a probability of choosing the passive value co-creation strategy as 1 − x.Furthermore, it is assumed that the consumers (C) have a probability of choosing active participation in the value co-creation strategy as y and a probability of choosing passive participation in the value co-creation strategy as 1 − y.Both x and y are probabilistic variables and x,y 2 [0,1].Additionally, both variables are functions of time t.
Hypothesis 4: When the innovative enterprise chooses the active value co-creation strategy, it invests significant human, material, and financial resources-for example, research and development costs.Innovative enterprises need to invest resources in research and development to create new products or enhance the value of existing products.This may involve costs such as human resources for research and development teams, equipment, and technology inputs.Communication and coordination costs.When engaging in value co-creation with consumers, innovative entrepreneurs need to engage in more communication and coordination efforts.This may include communicating with consumers, gathering feedback, addressing questions, coordinating needs and expectations, etc.These communication and coordination efforts require time, workforce, and resource investments.Let us assume the cost of these activities as c E1 .Similarly, when the innovative enterprise chooses the passive value co-creation strategy, it incurs certain costs.For instance, if the innovative enterprise fails to listen to consumers' feedback and address their issues, it may negatively impact the enterprise's reputation.This may result in customer attrition due to product and service problems.Let us assume the cost of this scenario as c E2 .Innovative enterprises obtain additional benefits when they successfully engage in value co-creation.For instance, consumer loyalty increases.Innovative entrepreneurs can establish deeper relationships and enhance consumer loyalty through proactive interactions and consumer participation -product and service improvements.Innovative entrepreneurs better understand consumer needs and preferences by involving consumers in the innovation and improvement of products and services.The feedback and suggestions consumers provide during the cocreation process can help identify shortcomings in products and services and drive corresponding improvements.Let us assume the value of these additional benefits is denoted as R E1 .
Hypothesis 5: When consumers choose active participation in a value co-creation strategy, they incur certain costs.For example, time costs.Consumers actively participating in value co-creation strategies may require more time to engage in activities, provide feedback and suggestions, communicate and collaborate with the firm, etc.These time costs may consume consumers' leisure time or working hours-energy costs.Participating in value cocreation strategies requires consumers to invest more energy in thinking and providing valuable opinions, participating in discussions and feedback, and so on.This may require them to gain a deeper understanding of the product and the market, research and analyze relevant issues, and engage in continuous learning and reflection.Let us assume the cost of these activities as c C .
Moreover, when consumers actively participate in value co-creation strategies, they are rewarded with economic incentives.By providing valuable feedback, suggestions, and opinions, consumers actively contribute to improving products and services.In recognition of their participation, firms may provide economic incentives to these contributors, such as coupons, discounts, or gift cards.Let us assume the value of these incentives is denoted as ω.
Consumers may face certain losses when they passively participate in value co-creation strategies.For instance, consumers may need more personalized customized products or services.If consumers opt for passive participation, they may not be able to enjoy tailored products or services, instead having to settle for generic offerings that may only partially meet their needs.Consumers may also miss opportunities for interaction and engagement with innovative firms.By choosing passive participation, they may forego these opportunities for interaction and engagement, thereby missing the chance to contribute their creativity and insights.Additionally, consumers may miss out on opportunities for interaction and sharing with other consumers.In value co-creation, consumers can interact and communicate with each other through sharing experiences, recommending products, and more.This interaction and sharing can help consumers gain more information and knowledge, enabling them to make more informed decisions.By opting for passive participation, consumers may miss out on these opportunities for interaction and sharing, potentially needing more information and knowledge to make fully informed decisions.Let us assume the value of these losses is denoted as f.
The probability of both parties being detected as non-cooperative and reported is represented as η.Reporting non-cooperative behavior can prompt both parties in a game to reassess their actions and strengthen adherence to cooperative norms.Reporting can serve as a warning and deterrent, making participants realize the adverse consequences of non-cooperative behavior, guiding them to comply with cooperative rules, and ultimately achieving value cocreation.However, the success of reporting depends on several factors, such as the rigor of regulatory agencies, the number of reporters, and the effectiveness of the reporting system.Therefore, let us denote the probability of successful reporting as η.
Hypothesis 6: Even if consumers and innovative enterprises choose not to participate in the value co-creation game, both parties still receive fundamental benefits.Let us assume that the fundamental benefit for the innovative enterprise is denoted as R E , and the consumer's fundamental benefit is denoted as R C .
Table 1 shows the variable parameters and their meanings for consumers and innovative enterprises, with detailed content in Table 1.
The payoff matrix for the evolutionary game between consumers and innovative enterprises can be presented based on the above assumptions.For detailed information, please refer to Table 2.
Model establishment and solution
The expected payoffs of consumer and innovative enterprises can be obtained based on the game theory payoff matrix.Assuming the expected payoffs of consumers who actively or passively participate in value co-creation are u C1 and u C2 , respectively, the average expected payoff of consumers is The specific expressions of expected payoffs are shown in formulas ( 1)- (3).
Similarly, assuming that the innovative enterprise chooses to actively or passively participate in value co-creation and the expected payoffs are u E1 and u E2 , respectively, the average expected payoff of the innovative enterprise is The specific expressions of the payoffs are shown in formulas (4)- (6).
Based on the principle of Malthusian dynamic equations and formulas (1)-( 6), the replicator dynamic equations for consumers and innovative enterprises can be derived.The specific formulas are shown as (7) and (8).
Consumers(C) Actively participating in value co-creation y
Passively participating in value co-creation 1 − y
Stability analysis
To further analyze the stability of consumers and innovative firms at various equilibrium points, it is necessary to solve for the Jacobian matrix of the two-dimensional dynamic system.The Jacobian matrix is derived from the replicator dynamics equations of consumers and innovative firms, taking partial derivatives with concerning the replicator dynamics equations of consumers and innovative firms, as shown in Eq (9) in the following: J e ¼ @FðxÞ=@x @FðxÞ=@y @FðyÞ=@x @FðyÞ=@y Based on Eq (9) and the replicator dynamics equations of consumers and innovative firms, the Jacobian matrix of the game system can be calculated as shown in Eq (10).
To facilitate further analysis, it is necessary to calculate the determinant(detJ e ) and trace (trJ e ) of the aforementioned Jacobian matrix.The specific expressions are given by Eqs (11) and (12).
In order to calculate the eigenvalues of the Jacobian matrix, it is necessary to substitute the equilibrium points obtained from solving the replicator dynamic equations into the Jacobian matrix.The specific details are described in Table 3.
Due to the complexity and difficulty in determining the eigenvalues of equilibrium point E 5 (x*, y*) from its formula, this article only analyzes equilibrium points E 1 − E 4 .It determines the stability of the equilibrium point based on the signs of the determinant (detJ e ) and trace Table 3. Eigenvalues of the Jacobian matrix.
Equilibrium point
Eigenvalue (trJ e ).Since it is difficult to determine the sign of the eigenvalues, it is necessary to analyze different cases.The specific analysis is shown below.
0, the sign of the eigenvalues can be determined.Based on the sign of the eigenvalues, the sign of the determinant (detJ e ) and trace (trJ e ) can be determined, and the stability of the equilibrium point can be obtained.Please refer to Table 4 for specific details.
Scenario 2: When (1 0, the sign of the eigenvalues can be determined, and based on the sign of the eigenvalues, the sign of the determinant (detJ e ) and trace (trJ e ) can be determined, which allows for the determination of the stability of the equilibrium point.Please refer to Table 4 for specific details.
Scenario 3: 0, the sign of the eigenvalues can be determined, and based on the sign of the eigenvalues, the sign of the determinant (detJ e ) and trace (trJ e ) can be determined, which allows for the determination of the stability of the equilibrium point.Please refer to Table 4 for specific details.
Scenario 4: 0, the sign of the eigenvalues can be determined, and based on the sign of the eigenvalues, the sign of the determinant (detJ e ) and trace (trJ e ) can be determined, which allows for the determination of the stability of the equilibrium point.Please refer to Table 4 for specific details.
Table 4 shows that in scenario 1, When (1 > 0, the game system converges to E 4 (1,1).In this case, consumers are more inclined to choose active participation in the value co-creation strategy.At the same time, the innovative enterprise is also more inclined to choose the active value co-creation strategy.In scenario 2, the game system converges to E 3 (0,1) when ( 1 Consumers are inclined to choose active participation in the value co-creation strategy, while the innovative enterprise chooses the passive value co-creation strategy.In scenario 3, When (1 > 0, the game system converges to E 4 (1,1).In this case, consumers are more inclined to choose active participation in a value co-creation strategy.The innovative enterprise is also more inclined to choose the active value co-creation strategy.In scenario 4, the game system converges to E 3 (0,1) when (1 In this case, consumers are inclined to choose active participation in the value co-creation strategy, while the innovative enterprise is inclined to choose the passive value co-creation strategy.Please refer to Table 4 for more details.
In addition, it is necessary to explain why the equilibrium points do not converge to E 1 (0,0) and E 2 (1,0).Converting equilibrium points to a specific value requires fulfilling certain conditions: detJ e > 0 and trJ e < 0. Based on the previous assumptions and the eigenvalues of the Jacobian matrix in Table 3, it is impossible to determine the signs of (1 in the eigenvalues.Therefore, discussing the positive or negative nature of these two eigenvalues in different scenarios is necessary.In all four scenarios mentioned above, regardless of the signs of (1 (1,0) satisfies the conditions detJ e > 0 and trJ e < 0. Therefore, the equilibrium points will not converge to E 1 (0,0) and E 2 (1,0).The eigenvalues X and Y corresponding to point E 5 (x*, y*) are purely imaginary roots.According to the literature lemma and theorem [30,31], it is known that point E 5 (x*, y*) is a stable equilibrium point but not asymptotically stable.The system's trajectory will form a closed-loop motion around E 5 (x*, y*).Therefore, the equilibrium points of the system will not converge to E 5 (x*, y*).
Evolutionary simulation study
Section 4 uses an evolutionary game model to analyze the intrinsic mechanism of consumer participation in value co-creation with innovative enterprises.In this section, numerical simulation analysis of the above results is conducted using Matlab, and the effects of various parameter changes on the evolution path of both sides of the game are further analyzed.The specific analysis is shown below.
Simulation analysis of game results
(1) When (1 , the parameters are set as indicated in the second row of Table 5 0 hold, the parameters are set as indicated in the third row of Table 5 0 hold, the parameters are set as indicated in the fourth row of Table 5 0 hold, the parameters are set as indicated in the fifth row of Table 5.For specific details, please refer to Table 5. Under the parameter settings for Scenario 1, the game system will converge to point E 4 (1,1).At this point, consumers are more likely to choose a proactive value co-creation strategy, and innovation-oriented enterprises are also more likely to choose to engage in value co-creation proactively.See Fig 2 for more details.Under the parameter settings for Scenario 2, the game system will converge to point E 3 (0,1).At this point, consumers tend to choose a Under the parameter settings for Scenario 4, the game system will converge to point E 3 (0,1).At this point, consumers tend to choose a proactive value co-creation strategy.In contrast,
Simulation analysis of parameter variation
Numerical simulation analysis was performed using Matlab to analyze further the impact of parameter variations on the evolutionary paths of both players in the game.Please refer to the following sections for detailed information.can be seen that, under unchanged conditions, as consumers' initial cooperation probability increases, the convergence speed to the equilibrium point also accelerates.Detailed information can be found in Fig 7.
Why does an increase in the initial cooperation probability of innovative enterprises and consumers accelerate their convergence to equilibrium?The main reasons are as follows: On one hand, an increase in the initial cooperation probability of innovative enterprises and consumers can enhance the trust and closeness between consumers and enterprises, making it easier for consumers to accept the cooperation proposals and strategies of enterprises, thereby facilitating cooperation and accelerating the pace of coordination.On the other hand, it can also reduce the uncertainty for both parties involved in the cooperation, thereby decreasing the risks and obstacles associated with uncertainty and enabling both parties to adapt to cooperation more quickly, thus speeding up the convergence towards the equilibrium point.Therefore, an increase in the initial cooperation probability of innovative enterprises facilitates their convergence to the equilibrium point.
(2) Variation in loss and Reward Parameters can be observed that as consumer losses increase, the probability of choosing to participate actively in value co-creation also increases.Specifically, when consumer losses are six units, the probability of consumers participating actively in value cocreation is relatively low.However, as consumer losses increase, i.e., when consumer losses are 60 and 100 units, the probability of consumers actively participating in value co-creation increases.For consumers, bearing certain losses may stimulate their desire to participate actively in value co-creation.When consumers perceive that they may face losses, they will pay more attention to and value the outcomes and development of value co-creation.In order to avoid losses, they will actively participate and contribute their thoughts and opinions.Secondly, the perceived losses ignite a sense of responsibility and mission in consumers, motivating them to participate actively in value co-creation.This sense of responsibility and mission can drive consumers to participate and contribute more actively towards achieving common goals and maximizing benefits.Detailed information can be found in Fig 8. shows that the higher the rewards for consumers, the higher the probability of actively participating in value co-creation.Precisely, when rewards for consumers are five units, consumers' probability of actively participating in value co-creation is relatively low.However, as rewards for consumers increase, i.e., when rewards are 50 and 100 units, the probability of consumers actively participating in value co-creation increases.The possible reasons are: Firstly, rewards can stimulate consumer interest and motivation.When consumers are rewarded more for value co-creation, they become more motivated to participate actively and willing to invest significant effort and time.Secondly, rewards can enhance the effectiveness and value of consumer participation.When consumers participate in value co-creation, they can contribute by sharing their experiences, ideas, and perspectives to drive and refine the project.Detailed information can be found in Fig 9 .(3) Probability of Being Reported Changes shows that the higher the probability of innovative enterprises being reported for choosing a non-cooperative strategy, the higher the probability of them choosing to cooperate.Specifically, when the probability of innovative enterprises being reported is 0.3, the probability of them actively engaging in value co-creation is relatively low.However, as the probability of innovative enterprises being reported increases, i.e., when the probability is 0.6 and 0.9, the probability of them actively engaging in value co-creation also increases.Detailed information can be found in Fig 10.
Fig 11 depicts the scenario where consumers choosing a non-cooperative strategy are reported.Fig 11 shows that the higher the probability of consumers being reported for choosing a non-cooperative strategy, the higher the probability of them choosing to cooperate.Specifically, when the probability of consumers being reported is 0.3, the probability of them choosing to cooperate is relatively low.However, as the probability of consumers being reported increases, i.e., when the probability is 0.6 and 0.9, the probability of them choosing to cooperate also increases.Detailed information can be found in Fig 11.
Conclusion and discussion
Consumers play a significant role in value co-creation with innovative enterprises.A thorough analysis of the mechanisms behind consumer participation in value co-creation with innovative enterprises has essential theoretical and practical implications.Theoretically, studying the mechanisms of consumer participation in value co-creation helps deepen our understanding of consumer needs and values and uncover the influence and impact of consumers on value co-creation within enterprises.This contributes to enriching and expanding consumer value theory, offering essential guidance for enterprise value creation and innovation.Researching the mechanisms of consumer participation in value co-creation with innovative enterprises aligns with the core principles of open innovation theory.It helps reveal the roles and value of consumers in the innovation process, deepening our understanding and application of open innovation theory.By conducting in-depth research on the processes, methods, and effects of consumer participation in innovation, a practical foundation and empirical support can be provided for participatory innovation theories.
From a practical perspective, studying the mechanisms of consumer participation in value co-creation with innovative enterprises can enable businesses to be more market-oriented and better meet consumer demands.Through interaction and collaboration with consumers, enterprises can gain a more accurate understanding of market dynamics and enhance the competitiveness of their products and services.This, in turn, helps to stimulate innovation and entrepreneurial vitality.As innovation's leading actors and beneficiaries, consumers can provide valuable ideas and feedback, driving better innovation and entrepreneurial practices within enterprises.This can facilitate continuous improvement and optimization of products and services.Through consumer participation and feedback, enterprises can promptly identify and address issues, enhance product and service quality, and improve user experience and satisfaction.The mechanisms of consumer participation in value co-creation with innovative enterprises can also enhance enterprises' brand image and reputation.Through interaction and collaboration with consumers, enterprises can establish a strong brand image, make improvements and adjustments based on consumer needs and opinions, and enhance brand value and competitiveness.
This study focuses on the game between consumers and innovative enterprises.It provides an in-depth analysis of the underlying mechanisms of consumer participation in value co-creation with innovative enterprises by constructing an evolutionary game model.The following important conclusions are drawn based on the game analysis and systematic simulation analysis.Firstly, the initial cooperation probability between consumers and innovative enterprises directly affects the strategic choices of both parties.Specifically, as the initial cooperation probability between consumers and innovative enterprises increases, both parties choose cooperative strategies.Consumers are more inclined to actively participate in value co-creation strategies, while innovative enterprises are more inclined to engage in value co-creation strategies proactively.Secondly, establishing reward mechanisms makes consumers more inclined to choose to actively participate in value co-creation strategies.
Moreover, as the intensity of rewards increases, the probability of consumers choosing to actively participate in value co-creation strategies also increases.Thirdly, as the probability of both parties being reported for non-cooperation increases, the probability of consumers and innovative enterprises choosing cooperation also increases.In comparison to consumers, the impact of reporting on innovative enterprises is more significant.
Based on the above conclusions, this paper proposes the following strategies and recommendations: 1. Establish a trust relationship and enhance willingness to cooperate.To begin with, a consumer participation platform should be established.On the one hand, enterprises can create dedicated platforms for consumer communication, encouraging consumer involvement in decision-making and innovation activities through online surveys, discussion forums, and idea solicitation, among other methods.On the other hand, enterprises can establish platforms for knowledge and experience sharing, allowing consumers to share and exchange their innovative ideas and experiences.This will help build a learning and innovation community, promoting mutual learning and inspiration among consumers.Secondly, continuously cultivate consumer participation awareness and establish feedback mechanisms.Through relevant promotion and educational activities, enterprises can continuously cultivate consumer awareness and capabilities for innovation.For instance, consumer innovation training courses and relevant innovation tools and methods can be offered to encourage consumers to participate actively in innovation activities.Moreover, enterprises should establish effective feedback mechanisms to collect consumer feedback and opinions promptly.This can be achieved through customer service hotlines, social media interactions, product evaluations, and other means.Valuing and actively responding to consumer feedback will help enterprises improve their products and services, enhancing their competitiveness and customer satisfaction.
2. Establish reward and incentive mechanisms and actively engage in cooperation.Establishing reward and incentive mechanisms to encourage consumer participation in value co-creation with innovative enterprises is essential.For example, consumer innovation awards can be established to provide recognition and rewards to consumers who contribute significantly, stimulating more consumer involvement in innovation activities.Additionally, enterprises can collaborate with consumers on research and development projects, involving them in product design and feature development activities.Through collaborative R&D with consumers, enterprises can better understand consumer needs and enhance the market adaptability and competitiveness of their products.
3. Establish a reasonable reporting mechanism to urge consumers to co-create value with innovative enterprises.Firstly, strengthen monitoring and inspection of non-cooperative behavior, such as increasing the frequency and coverage of regulatory inspections to increase the likelihood of detecting non-cooperative behavior.Secondly, establish reward and punishment mechanisms.For example, rewarding individuals who discover and report non-cooperative behavior while simultaneously imposing penalties on reported non-cooperative behavior to increase the likelihood of being reported.An anonymous reporting mechanism can also be established to allow individuals unwilling to report formally to report anonymously, thereby increasing the likelihood of reporting non-cooperative behavior.
This paper focuses on the game between consumers and innovative enterprises.It establishes a bi-directional evolutionary game model to analyze the mechanisms of consumer participation in value co-creation with innovative enterprises.However, there are some limitations in exploring consumer participation in value co-creation with innovative enterprises.Firstly, the role of government in value co-creation with innovative enterprises is not considered.The government plays a crucial role in value co-creation with innovative enterprises by providing policy support, tax incentives, and other forms of assistance.Secondly, repeated games need to be considered when discussing consumer participation in value cocreation with innovative enterprises.Consumer participation in value co-creation with innovative enterprises is a progressive and repetitive game process.Therefore, in future research, the government could be included in the game model, and the issue of repeated games should be considered in analyzing the game process.
Fig 1 depicts the logic and mechanisms of consumer participation in value co-creation with innovative enterprises.
Fig 1 .
Fig 1. https://doi.org/10.1371/journal.pone.0297475.g001 proactive value co-creation strategy, while innovation-oriented enterprises tend to choose a passive value co-creation strategy.See Fig 3 for more details.Under the parameter settings for Scenario 3, the game system will converge to point E 4 (1,1).At this point, consumers are more likely to choose a proactive value co-creation strategy, and innovation-oriented enterprises are also more likely to choose to engage in value co-creation proactively.See Fig 4 for more details.
Fig 8
Fig 8 depicts the evolutionary path when consumer losses increase while keeping other conditions constant.From Fig 8, it can be observed that as consumer losses increase, the probability of choosing to participate actively in value co-creation also increases.Specifically, when consumer losses are six units, the probability of consumers participating actively in value cocreation is relatively low.However, as consumer losses increase, i.e., when consumer losses are 60 and 100 units, the probability of consumers actively participating in value co-creation increases.For consumers, bearing certain losses may stimulate their desire to participate actively in value co-creation.When consumers perceive that they may face losses, they will pay more attention to and value the outcomes and development of value co-creation.In order to avoid losses, they will actively participate and contribute their thoughts and opinions.Secondly, the perceived losses ignite a sense of responsibility and mission in consumers, motivating them to participate actively in value co-creation.This sense of responsibility and mission can drive consumers to participate and contribute more actively towards achieving common goals and maximizing benefits.Detailed information can be found inFig 8.
Fig 8 depicts the evolutionary path when consumer losses increase while keeping other conditions constant.From Fig 8, it can be observed that as consumer losses increase, the probability of choosing to participate actively in value co-creation also increases.Specifically, when consumer losses are six units, the probability of consumers participating actively in value cocreation is relatively low.However, as consumer losses increase, i.e., when consumer losses are 60 and 100 units, the probability of consumers actively participating in value co-creation increases.For consumers, bearing certain losses may stimulate their desire to participate actively in value co-creation.When consumers perceive that they may face losses, they will pay more attention to and value the outcomes and development of value co-creation.In order to avoid losses, they will actively participate and contribute their thoughts and opinions.Secondly, the perceived losses ignite a sense of responsibility and mission in consumers, motivating them to participate actively in value co-creation.This sense of responsibility and mission can drive consumers to participate and contribute more actively towards achieving common goals and maximizing benefits.Detailed information can be found inFig 8.
Fig 9
Fig 9 describes the evolutionary path when increasing consumer rewards while keeping other conditions constant.Fig9shows that the higher the rewards for consumers, the higher the probability of actively participating in value co-creation.Precisely, when rewards for consumers are five units, consumers' probability of actively participating in value co-creation is relatively low.However, as rewards for consumers increase, i.e., when rewards are 50 and 100 units, the probability of consumers actively participating in value co-creation increases.The possible reasons are: Firstly, rewards can stimulate consumer interest and motivation.When consumers are rewarded more for value co-creation, they become more motivated to participate actively and willing to invest significant effort and time.Secondly, rewards can enhance the effectiveness and value of consumer participation.When consumers participate in value co-creation, they can contribute by sharing their experiences, ideas, and perspectives to drive and refine the project.Detailed information can be found inFig 9.
Fig 10
Fig 10 depicts the scenario where innovative enterprises choosing a non-cooperative strategy are reported.Fig 10shows that the higher the probability of innovative enterprises being reported for choosing a non-cooperative strategy, the higher the probability of them choosing to cooperate.Specifically, when the probability of innovative enterprises being reported is 0.3, the probability of them actively engaging in value co-creation is relatively low.However, as the probability of innovative enterprises being reported increases, i.e., when the probability is 0.6 and 0.9, the probability of them actively engaging in value co-creation also increases.Detailed information can be found in Fig10.Fig11depicts the scenario where consumers choosing a non-cooperative strategy are reported.Fig11shows that the higher the probability of consumers being reported for choosing a non-cooperative strategy, the higher the probability of them choosing to cooperate.Specifically, when the probability of consumers being reported is 0.3, the probability of them choosing to cooperate is relatively low.However, as the probability of consumers being | 9,275 | 2024-05-15T00:00:00.000 | [
"Economics",
"Business",
"Computer Science"
] |
The Downside of Upkeep: Analysing Railway Infrastructure Maintenance Impact on Train Operations in Sweden
: Efficient and seamless railway operations depend on the systematic and well-coordinated maintenance of both rolling stock and infrastructure. However, track maintenance, or ‘trackwork’, can cause substantial delays if not properly aligned with train schedules. This study comprehensively investigates how trackwork influences train operations in Sweden. It involves an in-depth analysis of an extensive dataset comprising over 225,000 recorded instances of planned trackwork and approximately 32.5 million train passages throughout the year 2017. Multiple logistic and negative binomial regression models showed that train running time delay occurrence is higher in the sections with scheduled trackwork. Trains passing through trackwork are 1.43 times more likely to experience delays compared to trains that do not pass through scheduled trackwork. The likelihood of an opportunity for the train delay recovery passing the section with scheduled trackwork is reduced by 11%. Additionally, the frequency of train delay increase is 16% higher, and delayed recovery is 4% lower in relation to trackwork. With the number of trackwork set to increase over the coming years, these results bring attention to train scheduling and the performance of trackwork.
Introduction
Ensuring the reliability of railway operations is crucial, especially with an anticipated shift of more traffic to rail [1].With the increase in train traffic, the wear and tear on railway infrastructure components intensify, necessitating regular track maintenance [2].Trackwork refers to the maintenance or renewal of railway infrastructure components that require planned temporary capacity restrictions for the section on the track where the activity is taking place.Such limitations can include complete track closures, reduced speed limits, or switching to single-track operations [3,4].These restrictions might lead to train delays, thus often requiring adjustments in train schedules [4,5].To address this issue, substantial research has been conducted in the field of maintenance optimisation and train operations [6][7][8][9].
Punctuality is vital for the competitiveness of railway services, as delays can severely compromise the quality of service for both passenger and freight railway operations [10][11][12].Punctuality and delay refer to trains running either at or behind the scheduled arrival time.In Sweden, punctuality is assessed in terms of the percentage of trains arriving at the final destination within 5 min of the scheduled time.While punctuality is the metric that is most commonly used to evaluate the performance of railway operations, it is a result of delays that have occurred throughout the journey.Delay is a measurement (in minutes) of a negative deviation from the train timetable [12,13].Running time delay is measured as the time difference between the scheduled and actual train travel time between stations.Another important aspect linked to train punctuality is delay recovery, defined as a delay time reduction [14].
Railway Infrastructure Maintenance in Sweden
The Swedish Transport Administration, as an infrastructure manager, is responsible for the maintenance and renewals of railway infrastructure in Sweden [27].The maintenance is delegated to five main maintenance companies and over 1000 subcontractors, governed by 34 different contracts.In line with current regulations, the maintenance contractors must conduct operational planning and request railway capacity [28], initiating applications 12 weeks before the scheduled trackwork and finalising them at least four weeks in advance.A detailed description of this process can be found in [29].Once these applications are authorised, they are recorded in the track utilisation plan.If trackwork is not aligned with train schedules during the annual capacity allocation, it raises the likelihood of train disruptions [30].
Trackwork that is performed frequently to preserve the condition of the infrastructure usually lasts for less than 24 h.In Sweden, this regular maintenance is referred to as "basic maintenance", which includes inspections, snow removal, switch lubrication, maintenance at level crossings, signal repair, tamping of tracks, and turnouts [7].This paper focuses on basic infrastructure maintenance, which does not lead to prolonged track closures but implies certain operational restrictions for train traffic.
The trackwork schedule is documented in the track utilisation plan, a digital record of all maintenance activities kept by the Swedish Transport Administration.There is an absence of systematic digital records concerning the actual execution of the scheduled trackwork.While dispatchers do maintain logs of conducted trackwork, these records are traditionally consigned to logbooks and have not yet been systematically transcribed into a digital format.Therefore, the present study is predicated upon the data available from the scheduled trackwork as outlined in the track utilisation plan.
As highlighted by [31], the current Swedish train planning system lacks established guidelines governing single-track operations during maintenance activities.Consequently, there is a minimal expectation for timetables to be meticulously adjusted in line with scheduled trackwork.Moreover, given the substantial volume of trackwork, we do not expect operators to cancel a majority of trains.Nevertheless, during instances of extensive closures, operators possess the requisite capacity to either cancel or reroute trains as necessary.This scenario underscores the significance of analysing the impact of planned maintenance on train operations.
Study Objectives
This research focuses on assessing the impact of trackwork on train delays.It analyses Swedish data, including over 225,000 scheduled track maintenance events and approximately 32.6 million train passages throughout the country in 2017.This study is designed to answer the following research questions: (1) To what extent does trackwork influence the probability and frequency of train delays in Sweden?(2) How does the scheduled trackwork affect a delay recovery opportunity of the train?While this paper focuses on the Swedish railway system, we believe our findings apply in the European Union, as the railway capacity allocation process follows the same regulations [32].
Methodology
This section outlines our methodology to assess the impact of trackwork on train delays in Sweden, using two regression analyses: multiple logistic regression and negative binomial regression.The section begins by presenting the datasets obtained from the Swedish Transport Administration covering the Swedish railway network in 2017 [33].The data preparation process involves combining and structuring this data to make it suitable for regression analysis.Following this, we describe our use of multiple logistic regression to analyse the probability of train delays in relation to trackwork and other factors.Then, we explain the application of negative binomial regression to examine the frequency of these delays.Both methodologies are chosen for their effectiveness in handling the complex nature of our dataset and their relevance to railway operations analysis.
Overview of Data
The first dataset comprises the trackwork records from the track utilisation plan, detailing 225,507 instances of scheduled trackwork.Each record provides specific information about the scheduled time, location, and the restrictions imposed on train traffic due to maintenance.Our study focused on basic maintenance trackwork, which is characterised by the absence of full track closures and a duration of less than 24 hours.
In the track utilisation plan, locations of trackwork are identified by unique signal numbers situated along the track segments that span between two designated stations, marked as Ss and Se in Figure 1.Out of the 225,507 trackwork activities listed for 2017, we identified 3218 distinct track segments, which may include up to nine intermediary stations.Within these segments, the plan records a set of smaller trackwork that is performed at the same time in the same area.To streamline our dataset, we merged overlapping activities into single records, thereby eliminating duplication and simplifying the dataset for analysis.As a result, adjacent trackwork events, such as those depicted in Figure 1 as Ss.1-Sn.1 and Sn.2-Se.2, were combined into consolidated entries, labelled as trackwork 1-2 in the figure.The second dataset comprises the train punctuality data, extracted from the train plan 2017.This dataset provides information about the scheduled departure/arrival time and actual departure/arrival time to each station on the assigned train path, with a time precision of one minute.It includes specific details for each train route, such as a unique identification number, the type of train, and the type of track (whether single, double, or quadruple).In total, this dataset captures 32,591,482 train observations (Figure 2).Each recorded train passage is captured as a sequence of stations along its route, providing a precise geographical profile compared to the trackwork dataset (Figure 1).To integrate the datasets, we matched each unique journey in the punctuality records with corresponding track segments between the start (Ss) and end (Se) stations on the route.Given that trains traversed numerous segments or bypassed them entirely on their routes, 32.6 million recorded journeys throughout 3218 designated segments comprised roughly 27.2 million distinct train passages (Figure 2).Following this, we prepared the datasets for analysis with two regression models: multiple logistic regression and negative binomial regression.For the logistic regression, we defined two additional variables to capture both the presence and absence of train running time (runtime) delay increases, without altering the overall number of observations (Figure 2).In contrast, for the negative binomial regression, we aggregated the data based on a unique mix of train type, track type, trackwork, train entry status, daytime, and location.We then grouped the dataset with three new variables to quantify the counts of train running time delay increases, decreases, and instances where delays remained constant.
Train route
Table 1 shows a summary statistic of trackwork duration and train delay size.On average, the trackwork lasted for 181 min but had a large range and a standard deviation of 207 min.The running time delay was calculated as a difference between the scheduled and actual train running times between analysed stations.The measurements were conducted with a precision of up to 1 min.The mean value of the observed train running time delays is -0.15 min, and a standard deviation of 5.The range of delay times spans substantially, with the earliest arrivals recorded at minus 444 min, and the maximum value 1447 min.The analysed 27.2 million train passages have the following characteristics presented in Table 2.The count of the trains was evenly distributed over 12 months in the year 2017, with an average count of 2.3 million train passages per month.Table 2 shows the following characteristics of analysed train passages: train subtype, track type, running time delay, trackwork, train enter status, and day time.Each category of these variables is listed, along with the percentage of observations per category, and reports delay-increase observations within four thresholds (1-4 min, 5-9 min, ≥10 min, and ≥1 min).Notably, among all categories, freight trains most frequently faced increases in running time delays.In contrast, when passing the analysed section, commuter trains were less prone to such delay increases.Instead, these commuter trains predominantly experienced reductions in running time delays during the period of study.Scheduled trackwork overlapped with about 0.4% of the train passages, whereas 99.6% of the passages did not pass through scheduled trackwork.10% of the train passages were on quadruple-track, 52% on double-track, and 39% on single-track.Our sample was composed of 81% passenger trains and 19% freight trains.In total, 29% of the train passages in our sample were ahead of schedule entering the analysed track section, and 43% were behind schedule.Interestingly, trains that entered the section ahead of schedule often encountered a subsequent increase in running time delay.Finally, 86% of the passages occurred in the daytime and 14% at night.Night-time was defined (according to the labour act of Sweden [34]) as the period between 22.00 and 06.00.The total count of observations in the sample is 27,182,178.
Regression Modelling
In this study, we analyse how train running time delay and delay recovery (attributed to delay decrease) are associated with trackwork.The control factors are train type and subtype (passenger or freight train, with subtypes of each) and train entry status (early, late, on time) to the analysed track segment.Track type and day time are control variables for the trackwork relevant to this study's context.We develop two types of regression models: (i) Multiple logistic regression to explore the probability of train running time delay, and (ii) negative binomial regression to explore the frequency of train running time delay affected by the presence of scheduled trackwork.In addition to the main models, which account for more or equal to a 1 min train running time delay, we have also performed a sensitivity analysis regarding different running time train delay thresholds, accounting for delays of more than 5 or 10 min.
Table 3 provides a comprehensive statistical summary of the response variables used in both the logistic and negative binomial regression models.For the logistic regression model, we consider running time delay increases and decreases of at least one minute, with the observations totalling 27,182,634.Within this model, the average instance of delay increases of at least one minute is noted as 0.22, with a standard deviation of 0.42.The mean for delay decreases of the same threshold is 0.45, reflecting a higher frequency of delay decreases with a standard deviation of 0.50.The sensitivity of the model to more substantial delays is also examined, with thresholds at five and ten minutes, revealing lower average instances, signifying fewer occurrences of longer delays.The negative binomial regression model is employed for count data, chosen due to the over-dispersion present in the delay counts.The variables for this model are aggregated counts by trackwork, track type, train subtype, train enter status, and day time and location (Figure 2), with a total of 406,563 observations.The response variable running time delay increase/decrease count is a count variable representing the number of increased/decreased delays in the running time for each train passage in the studied track segment.The count of running time delay increases of at least one minute shows an average of 15 with a standard deviation of 44, indicating variability in delay occurrences.For running time delay decreases of one minute or more, the mean count is 30, with a higher standard deviation of 100, suggesting a wider spread in the data.Sensitivity analysis for this model includes delay increases at five-and ten-minute thresholds, with 142 and 86 instances, respectively, reflecting a marked decline in counts as the delay duration increases.
Multiple Logistic Regression
We use a multiple logistic regression model to analyse the effect of trackwork, along with other explanatory variables, on the train running time delay increases (1)/decreases (2).Logistic regression is commonly used to study functional relationships between a categorical dependent variable and one or more independent variables [35,36].The response variable for the first model captures the presence and absence of train running time delay increase while passing an analysed track segment, coded as 1 and 0 accordingly.In the second model, the response variable reports the presence and absence of train delay decrease in the same circumstances coded as 1 and 0 accordingly.The multiple regression model predicts the train running time delay increase/decrease (Y) occurrence by the explanatory (x i ) variables described in Table 2.The summary of this model is presented in the equation: where: • Y is the response variable capturing the presence or absence of the train running time delay increase (1 min) for the first model and of running time delay decrease (1 min) for the second model, given the predictor variables.The possible values are 0 or 1; • X 1 , X 2 , . . ., X 5 are the predictor variables in the model (trackwork, track type, train subtype, train enter status, and day time, respectively); • β 0 is the intercept term, and β 1 , β 2 , . . ., β 5 are the coefficients for each predictor variable.
The explanatory variable trackwork is a binary variable where 1 is assigned to cases where the train passage on the studied track segment overlaps with scheduled trackwork; otherwise, it is: 0. Track type, train type, train enter status, and night are categorical explanatory variables representing the track type, train subtype, whether the train is on time, early, or late, and whether the train operates at night, respectively.The time variable shows when the train passed the analysed line day (0) or night (1).Pearson's chi-squared test was used to check the independence of qualitative variables entering the regression model.The results show that all tested variables were independent.The selection variables chosen for this model were made by testing several logistic models.
For ease of interpretation, in line with multiple logistic regression coefficients, we computed the odds ratio (OR).OR is a measure of association between a given exposure in a logistic regression and an outcome Y: The OR, therefore, indicates how much more likely the event is to happen given a particular exposure (in this case, trackwork) compared to its absence.An OR greater than 1 suggests a higher likelihood of the event when the exposure is present, whereas an OR less than 1 indicates a reduced likelihood.This measure is particularly useful in logistic regression as it provides a clear and interpretable metric of the strength and direction of the association between predictors and the outcome variable.
Negative Binomial Regression
We employed two negative binomial regression models to analyse the relationship between the count of train running time delay increases (1)/decreases (2) and a set of explanatory variables.The regression coefficients were estimated using the glm.nb function in R (2023.06.2).The equation for the model is as follows: where: • E[Y|X] is the expected count of running time delay increase (1 min) for the first model and of running time delay decrease (1 min) for the second model given the predictor variables; • X 1 , X 2 , . . ., X 5 are the predictor variables in the model (trackwork, track type, train subtype, train enter status, and night, respectively); • β 0 is the intercept term, and β 1 , β 2 , . . ., β 5 are the coefficients for each predictor variable.• log(ε i ) is the natural logarithm of the exposure variable for observation.For ease of interpretation, in line with the coefficients obtained from the negative binomial regression, we computed the incidence rate ratio (IRR) by taking the exponent of the estimated coefficients, which is expressed as IRR = e β i .This allows us to directly interpret the proportional change in the count of running time delay increases or decreases associated with a one-unit change in the predictor variable, with all other variables held constant.
Results
This paper conducts a comprehensive analysis of how trackwork impacts train running time delays, utilising two distinct types of regression models.Firstly, the multiple logistic regression model elucidates the probabilities of train delays in relation to scheduled trackwork, taking into account other predictor variables.Secondly, the negative binomial regression model sheds light on the frequency of delay occurrences, specifically focusing on the correlation between the presence of scheduled trackwork and the delays experienced by trains traversing these segments.
Train Running Time Delay Increase
We employed multiple logistic regression and negative binomial regression models to examine the correlation between increases/decreases in train running time delays and trackwork (Table 3), adjusting for a set of categorical independent variables (Table 2).Our sensitivity analysis focused on understanding the impact of train delay thresholds of 5 and 10 min on this association.We assessed the statistical significance of each coefficient using the Wald chi-square test.Comprehensive summaries of these models can be found in Appendix A, Tables A1 and A2.
The multiple logistic regression analysis presented in Table 4 reports the probability of train running time delays, categorised into delays of ≥1 min, ≥5 min, and ≥10 min, in relation to scheduled trackwork and other operational factors.The regression coefficients are significant at the 0.1% level, except for the airport train type and the impact of trackwork on delays of at least 10 min.For delays of at least 1 min, the model reveals an increase in the likelihood of delay (OR = 1.43) when trackwork is scheduled.This effect diminishes slightly for delays of 5 min or more (OR = 1.37), and becomes non-significant for substantial delays of at least 10 min (OR = 1.04).Track and train type play a considerable role in predicting delays.Quadruple tracks demonstrate a decreased probability of short and moderate delays but an increased likelihood of longer delays (OR = 1.28).Conversely, single tracks and commuter trains consistently correlate with higher odds across all delay thresholds.The analysis also indicates that unspecified passenger and high-speed trains are less likely to experience significant delays.Notably, late departures and night-time operations do not emerge as significant predictors of delay.
The negative binomial regression analysis, summarised in Table 5, investigates the frequency of train running time delays at thresholds of ≥1 min, ≥5 min, and ≥10 min, considering other explanatory variables (track type, train type, train departure status, and day time).For delays ≥1 min, the presence of trackwork slightly increases the frequency of delays (IRR = 1.16).This effect is marginally more pronounced for delays ≥5 min (IRR = 1.20) but becomes non-significant for substantial delays of ≥10 min (IRR = 0.98).The track type shows a differential impact, with quadruple tracks slightly reducing the frequency of shorter delays (IRR = 0.95) but increasing for longer delays (IRR = 0.72).Single tracks and unspecified passenger train types tend to increase the frequency of delays across all thresholds.
The analysis reveals significant variability across different train types in influencing train running time delay increase occurrences.For instance, intercity and regional trains consistently show a decreased frequency of delays across all delay size thresholds for intercity trains for delays ≥1 min).In contrast, although airport trains have a non-significant impact on the shortest delays, they considerably increase the likelihood of longer delays.Departure status and time of day also contribute to delay frequencies, with late departures and night-time operations showing varying degrees of influence.All coefficients are significant at the 0.1% level except for those marked with 'ns' (not significant).
Train Running Time Delay Decrease
We utilised both multiple logistic regression and negative binomial regression models to explore the opportunity for train delay reduction whilst traversing segments with scheduled trackwork.Detailed summaries of these models are presented in Appendix A, specifically in Tables A3 and A4.The statistical significance of each coefficient was determined using the Wald chi-square test, providing a robust basis for our analysis.
The outcomes of the multiple logistic regression model are summarised in Table 6.All coefficients are significant at the 0.1% level.The model indicates that trackwork is associated with a slight decrease in the likelihood of delay reduction (OR = 0.89).There is a notable increase in the probability of delay reduction for the quadruple track type (OR = 1.24).Among all train types, commuter trains exhibit an increased probability of delay reduction (OR = 1.30).If the train departs late, it is more likely to reduce delays (OR = 1.96).The time of day shows a minimal impact, with night-time operations slightly less likely to reduce delays (OR = 0.95).
Table 7 presents the negative binomial regression model outcomes, examining the count of train running time delay decreases exceeding or equal to 1 min.The model indicates a slight reduction in the frequency of delay reductions in the presence of trackwork (IRR = 0.96).For track types, quadruple tracks correlate with a lower frequency of delay reduction (IRR = 0.78), while single tracks demonstrate a marginal increase (IRR = 1.05).In terms of train types, commuter trains are more likely to reduce delays (IRR = 1.15), in contrast to airport trains, which show a notable decrease (IRR = 0.37).Departure status is a significant predictor, with late departures more frequently reducing delays (IRR = 1.26) and similar trends observed for on-time departures (IRR = 1.18).The time of day does not have a statistically significant impact.All coefficients are significant at the 0.1% level except for those marked with 'ns' (not significant).
Discussion
In this paper, we have analysed the association between trackwork and train delays, employing two distinct types of regression models: multiple logistic regression and negative binomial regression.These models provide a comprehensive understanding of the impact of trackwork on train delays.The logistic regression model sheds light on the probability of delay occurrences, while the negative binomial regression offers insights into the frequency of these delays.
Our study concludes that trackwork is linked to an increased rate of delay occurrences and a higher probability of delay increase.Trains passing through sections with scheduled trackwork are 1.43 times more likely to experience an increase in running time delay (≥1 min).Simultaneously, there is a 16% increase in the expected count of instances where train delays increase by at least one minute, compared to scenarios without trackwork.Conversely, the opportunity for train delay recovery diminishes in the presence of trackwork.The frequency of delay reduction decreases by 4%, and the likelihood of a delay decrease is 11% lower than when there is no trackwork.The sensitivity analysis regarding the size of the delay reveals a more pronounced effect for delays between 1 and 10 min, while the impact of trackwork on delays exceeding 10 min is insignificant.This indicates that trackwork primarily contributes to smaller, more frequent delays.
Although the negative impact of scheduled trackwork on train punctuality is relatively minor, primarily causing smaller delays (1-10 min), it still affects the reliability of railway operations.This effect might be mitigated by providing sufficient time for the trackwork to be completed and ensuring on-time performance.One strategy for achieving this is through the use of "maintenance windows", which involve reserving capacity for trackwork in advance of the completion of the train timetable.This allows train paths to adapt to capacity restrictions ahead of time and avoid any negative impact on performance.However, it has been observed that this approach is not yet utilised to its full potential, and train operators may have difficulty adapting to the restrictions.Additionally, there may be uncertainty [37] in the trackwork schedule even close to the execution period, which can lead to changes in the schedule and difficulties for train operators to adapt, resulting in train cancellations.
The trackwork scheduling approach used in this study is consistent with the SERA directive [32], which is widely adopted in European Union member states.Therefore, the findings of this study have broad relevance and demonstrate the need for increased attention to be given to trackwork scheduling.
Conclusions
In this paper, we have investigated the extent to which scheduled trackwork is associated with the probabilities and frequencies of train delays.Based on 32.5 million train passages and 225,000 instances of planned trackwork throughout the year 2017, the paper presents two regression models: multiple logistic regression and negative binomial regression.
The results show that trackwork significantly increases the likelihood of train delays, with trains 1.43 times more likely to experience delays of at least 1 min in these conditions and a 16% increase in instances of delay increases.However, trackwork also reduces the opportunities for delay recovery, leading to a 4% decrease in the frequency of delay reductions and an 11% lower likelihood of delay decrease.The analysis particularly highlights that trackwork predominantly affects shorter delays (1-10 min), with negligible impact on longer delays exceeding 10 min.
Only a small share of trains overlap with scheduled trackwork.However, the absolute number is likely to increase as both the number of trains and trackwork increase.While this issue was not a major contributor to delays in 2017, we expect that it will grow significantly with time.While the analysis indicates a relatively modest impact of trackwork on train delays, the anticipated increase in trackwork activities over the coming years could potentially magnify this issue.Therefore, exploring improved scheduling and performance strategies for trackwork may contribute to minimising conflicts between trackwork and train passages, albeit with the current effect being marginal.This study serves as a preliminary insight into the dynamics between trackwork and train operations, suggesting a measured approach towards optimising trackwork scheduling to accommodate the evolving demands of the railway network.All coefficients are significant at the 0.1% level except for those marked with 'ns' (not significant).
Figure 1 .
Figure 1.Railway track segment where trackwork happens between stations S1 and Sn.
Figure 2 .
Figure 2. Data processing workflow for train punctuality and trackwork analysis.
Table 1 .
Statistical summary of trackwork duration and train delays length.
Table 2 .
Characteristics of the analysed sample of train passages.
Table 3 .
Statistical summary of the analysed response variables for logistic and negative binomial regression models.
All coefficients are significant at the 0.1% level except for those marked with '*' (significant at the 1% level) and 'ns' (not significant).
Table 5 .
Negative binomial regression model summary.Response variables: count of train running time delay increase ≥1 min, train running time delay increase ≥5 min and train running time delay increase ≥10 min.
Table 7 .
Negative binomial regression model summary.Response variable: count of train running time delay decrease ≥1 min.
Table A2 .
Multiple logistic regression model summary.Response variable: train running time delay increase ≥5 min (0; 1) and train running time delay increase ≥10 min (0; 1).All coefficients are significant at the 0.1% level except for those marked with '*' (significant at the 1% level) and 'ns' (not significant).
Table A3 .
Negative binomial regression model summary.Response variable: train running time delay increase/decrease count ≥1 min.All coefficients are significant at the 0.1% level except for those marked with 'ns' (not significant).
Table A4 .
Negative binomial regression model summary.Response variable: train running time delay increase ≥5 min count and train running time delay increase ≥10 min count. | 6,680.8 | 2023-12-22T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Detecting affine equivalences between certain types of parametric curves, in any dimension
Two curves are affinely equivalent if there exists an affine mapping transforming one of them onto the other. Thus, detecting affine equivalence comprises, as important particular cases, similarity, congruence and symmetry detection. In this paper we generalize previous results by the authors to provide an algorithm for computing the affine equivalences between two parametric curves of certain types, in any dimension. In more detail, the algorithm is valid for rational curves, and for parametric curves with non-rational but meromorphic components admitting a rational inverse. Unlike other algorithms already known for rational curves, the algorithm completely avoids polynomial system solving, and uses bivariate factoring, instead, as a fundamental tool. The algorithm has been implemented in the computer algebra system {\tt Maple}, and can be freely downloaded and used.
Introduction.
We say that two curves are affinely equivalent if one of them is the image of the other curve by means of an affine mapping.If the affine mapping preserves angles, then the two curves are similar, i.e. both correspond to the same shape and differ only in position and/or scaling.If the affine mapping preserves distances, the curves are congruent, so they differ only in position.Finally, if the two curves coincide, finding the self-congruences of the curve is equivalent to computing its symmetries.
Because of the nature of the problem, it has received some attention in the applied fields of Computer Aided Geometric Design, Pattern Recognition and Computer Vision.In the last years, the problem has been also addressed in the Computer Algebra field, and this paper follows this trend.Examples of papers where this question has been studied are [2,4,7]; for other related papers, the interested reader can check the bibliographies of [2,4,7].These papers address the problem for rational curves, i.e. parametric curves whose components are quotients of polynomial functions, and aim to the more general question of checking projective, and not just affine, equivalence.While [2,7] provide solutions for this problem, with different strategies, for curves in any dimension and using as a fundamental tool polynomial system solving, the paper [4] addresses the question only for space rational curves, but employing bivariate factoring as an alternative to solving polynomial systems; this leads to better timings and performance.To do this, in [4] two rational invariants, i.e. two functions rationally depending on the parametrizations to be studied which stay invariant under projective transformations, are found and used.
In this paper we generalize the ideas of [4] in three different ways.First, while the development of the invariants in [4] was more of an "art" than of a "craft", here we provide a complete algorithm to generate such invariants.Second, the technique is valid for curves in any dimension, and not just space curves, which was the case addressed in [4]: we provide an algorithm, that can be downloaded from [5], to generate the corresponding invariants for any dimension, that needs to be executed just once for each 2. Background, statement of the problem and required tools.
Background, auxiliary results and statement of the problem
Let us start by describing the kind of parametric curves we will work with.The key idea is that they must be parametric curves with rational inverses.However, we will make precise their structure so that we can algorithmically verify that this requirement is satisfied.
We need two ingredients to do this; the first ingredient is a meromorphic function ξ : we observe that Π is an invertible function over its image, which is the graph G ξ of the function ξ, Indeed, for (z, ω) ∈ C 2 , ω = ξ(z), we have Π −1 (z, ω) = z.The second ingredient is a rational mapping Φ : C 2 → C n .If we compose these two mappings, we get a new mapping which provides a parametrization p(z) = Φ(z, ξ(z)) of a curve C ⊂ C n , which is the image of G ξ under Φ.Notice that p is a vector function with meromorphic components.Of course if ξ is a rational function, p is just a rational parametrization.We will also assume that the curve defined by p is not contained in a hyperplane.
What we really need is the condition that the restriction Π| G ξ is birational, so that p has a rational inverse.If ξ is a rational function, what implies that p(z) is a rational parametrization, we will just assume that p(z) is proper, i.e. generically injective, in which case p −1 exists and is rational.So let us provide sufficient conditions to guarantee this also in the case when ξ is not a rational function.In that case, we will assume that Φ is a birational mapping.Recall that the cardinality of the fiber of Φ, which we denote by #(Φ), is the number of points in the preimage of Φ(q) with q ∈ C 2 a generic point.Then, the birationality of Φ is equivalent to #(Φ) = 1 (see for instance Proposition 7.16 in [6]), so we can check this condition by just picking a random point q, and computing the number of points in the preimage of Φ(q).Furthermore, we have the following lemma, inspired by [13,14].
where for i = 1, 2, where Φ −1 is not defined, V being the union of the sets defined by (B i • Φ)(x 1 , x 2 ) = 0 with i = 1, 2, and . Notice that V is an algebraic planar curve.
Corollary 2. Assume that Φ is a birational mapping, and ξ is a meromorphic, not algebraic function.Then p(z) is invertible over C = Φ(G ξ ), and the inverse p −1 has rational components.
Proof.Since by assumption #(Φ) = 1, Φ −1 exists and is rational.Next from Lemma 1 we have that #(Φ) is constant except perhaps for the points of an algebraic variety V ⊂ C 2 of dimension at most one.Since ξ is not an algebraic function G ξ is not an algebraic curve.Therefore, by the Identity Theorem (see Theorem 3.1.9in [8]) G ξ ∩ V is either finite, or infinite but without any accumulation point.Thus, for a generic point q ∈ G ξ we get that the cardinality of Φ(q) is 1.Therefore Φ −1 is well-defined for almost all points in Φ(G ξ ), and p Example 1.To illustrate the assumptions that we need, we provide now some examples of plane and space curves satisfying these requirements.
(2) Image of the graph of the exponential curve under an inversion.Let the curve C (see Fig. 1, middle) be parametrized by Here ξ(z) = e z and Φ(x, y) = which is an inversion from the origin, and therefore a birational mapping.Then, ξ(z) = e iz , and which is a birational mapping.
Even though for technical reasons we will assume that the parameter space of the curve C parametrized by Eq. ( 1) is C, we will mostly work with real curves, i.e. curves with infinitely many real points.Next, given two curves C 1 , C 2 ⊂ C n parametrized by p(z), q(z) as in Eq. ( 1), we say that C 1 , C 2 are affinely equivalent if there exists a mapping f : A is an n × n matrix (in general, over the complex), A non-singular, b ∈ C n , such that f (C 1 ) = C 2 ; furthermore, we say that f is an affine equivalence between C 1 , C 2 .We are interested in real affine equivalences, so in our case we will be mostly looking for with I the identity matrix, we say that f is a symmetry of C. Now we are ready to state the problem that we want to solve.Problem: Given two curves C 1 , C 2 ⊂ C n , not contained in hyperplanes, parametrized by mappings p(z), q(z) as in Eq. (1) with meromorphic components, admitting rational inverses p −1 , q −1 , compute the affine equivalences, if any, between C 1 , C 2 .
In order to solve this problem, we will make use of the following result, which corresponds to a similar result used in [4,7], adapted to our case.We recall here that a Möbius transformation is a transformation C n be two parametric curves defined by p(z), q(z) as in Eq. (1), where p(z), q(z) are mappings with meromorphic components, admitting rational inverses p −1 , q −1 .If f (x) = Ax + b is an affine equivalence between C 1 , C 2 then there exists a Möbius transformation φ(z) satisfying that i.e. making commutative the following diagram: Proof.Since q −1 exists, φ = q −1 • f • q is well-defined.Furthermore, since q −1 is rational, q −1 • f is also a rational function and therefore φ = (q −1 • f ) • p is meromorphic.Since p exists and is rational, • q is also meromorphic, so φ is a bi-meromorphic function, and therefore it must be a Möbius transformation (see Remark 2 in [1]).
Remark 1. Theorem 3 also works with meromorphic parametrizations p, q admitting global meromorphic inverses.However, guaranteeing the existence of a global meromorphic inverse is a really hard problem.This is the reason why we restrict ourselves to parametrizations, rational or not rational, where this condition is easy to check.Notice that the parametrizations we work with here have rational inverses, so certainly they have global meromorphic inverses.
Additional tools
In this subsection we recall two notions that we will be using later in the paper.The first one is the Schwartzian derivative: given a holomorphic function f : C → C, the Schwartzian derivative [12] The Schwartzian derivative of any Möbius transformation is identically zero.The following lemma is a consequence of this.
Lemma 4. Let ω := φ(z) a Möbius transformation, and let ω (k) denote the k-th derivative of ω with respect to k.For k ≥ 3, Proof.Since the Schwartzian derivative of a Möbius transformation is identically zero, we get that which corresponds to Eq. ( 4) for k = 3.Then the result follows by induction on k.
The second tool that we will need is Faà di Bruno's formula [9] for the derivatives of high order of a composite function.Given a vector function u := u(z) and a scalar function ω := ω(z), Faà di Bruno's formula provides the derivatives of order k ≥ 1 of the composite function u(ω) with respect to z.Although there are other formulations, we will use the expression where the B k,m are the incomplete (or partial) Bell polynomials [10], well-known in combinatorics, where the sum is taken over all sequences ℓ 1 , ℓ 2 , . . ., ℓ k+1−m of non-negative integers such that In particular, and we will assume the convention that B k,m = 0 when k < m.
Overall strategy and first step
We want to exploit Eq. ( 2) to first find the Möbius transformation φ, if any, and then derive f from φ.If we expand Eq. ( 2), we get Our overall strategy will consist of three steps, that we will refer to as steps (i), (ii), (iii): (i) Find initial invariants: we start by constructing certain functions I 1 , . . ., I n satisfying that I i (p) = I i (q • φ), which are rational in the sense that they are rational functions of p and its derivatives.Since by Eq. ( 9) we observe that q • φ is the image of p under an affine mapping f (x) = Ax + b, we say that I 1 , . . ., I n are affine invariants, i.e. functions depending on a parametrization (and its derivatives) that stay the same when an affine transformation is applied.
(ii) Find Möbius-commuting invariants: we say that a function F depending on a parametrization u = u(z) and its derivatives is Möbius-commuting if for any Möbius function we have The functions I i found in step (i) are not, in general, Möbius-commuting.Thus, in a second step we will compute Möbius-commuting functions F 1 , . . ., F n−1 from the I i , also rational.The F j not only satify that F j (p) = F j (q • φ) for j = 1, . . ., n − 1, but they also satisfy that F j (q • φ) = F j (q) • φ.In turn, for j = 1, . . ., n − 1 we have Notice that while we have n initial invariants I i , we have n − 1 Möbius-commuting invariants.
(iii) Compute φ using bivariate factoring, and derive f from φ: setting ω := φ(z), the equalities Then the Möbius function φ corresponds to a common factor of all the M j , and the affine equivalence itself, f (x) = Ax + b, follows from Eq. ( 9).
In this subsection we will present step (i); the remaining steps will be described in the next section.Also, in the rest of the paper we will use the notation [w 1 , • • • , w n ] for an n × n matrix whose columns are w 1 , . . ., w n ∈ C n , and The description of step (i) is analogous to Section 3.2 in [4].Thus, here we focus on the main ideas, and refer the interested reader to [4] for details and proofs.Going back to Eq. ( 2), let us write u := p(z), v := (q • φ)(z), so that Eq. ( 9) becomes simply Au + b = v.Repeatedly differentiating this equation with respect to z yields AD(u) = D(v) where i.e.D(u), D(v) are matrices whose columns consist of the first n derivatives of u, v with respect to z. Whenever p, q and therefore u, v are not contained in hyperplanes, D(u), D(v) are invertible [15].Thus, we can write A = D(v)(D(u)) −1 .Differentiating this equality with respect to z, and taking into account that A is a constant matrix, we get that Expanding the derivative in the left-hand side of the above equation we arrive at Denoting
Next let us define
Thus, A i (u) is the result of replacing n+1) .Finally, for i = 1, . . ., n, let which correspond to the entries of the last column of U ; notice that whenever u, v are not contained in hyperplanes ∆(u) is not identically zero [15], so the I i are well defined.
By Eq. ( 10), U, V are equal and therefore their last columns coincide.Thus, I i (u) = I i (v) for i = 1, . . ., n, i.e.I i (p) = I i (q • φ), which by Theorem 3 is a necessary condition for affine equivalence.The following result, analogous to Theorem 7 in [4] and which can be proved, using Theorem 3, in a similar way, shows that this condition is also sufficient.
Theorem 5. Let C 1 , C 2 ⊂ C n be two curves, not contained in a hyperplane, parametrized by mappings p, q with meromorphic components, admitting rational inverses.If C 1 , C 2 are affinely equivalent then there exists a Möbius transformation φ such that for i = 1, . . ., n.
In order to carry out step (ii), which will be addressed in the next section, we need an auxiliary invariant, I 0 , defined as The following lemma proves that I 0 lies in the differential field spanned by I 1 , . . ., I n .The proof of this lemma is provided in Appendix I.
Since, according to Lemma 6, I 0 is generated by I 1 , . . ., I n , the result in Theorem 5 also holds when we add I 0 to the list of the I i s.
Corollary 7. Let C 1 , C 2 ⊂ C n be two curves, not contained in a hyperplane, parametrized by mappings p, q with meromorphic components, admitting rational inverses.If C 1 , C 2 are affinely equivalent then there exists a Möbius transformation φ such that for i ∈ {0, 1 . . ., n}.
Second step (overview) and third step
The I i developed in the previous section are not Möbius-commuting, i.e.I i (q • φ) ̸ = I i (q) • φ; in other words, calling ω := φ(z), I i (q(ω)) ̸ = I i (q)(ω).For instance, in the case n = 3, expanding I i (q(ω)) for i = 1, 2, 3 we get that where ω ′ , ω ′′ are the first and second derivatives of ω = φ(z) with respect to z; to produce these equalities, we have taken into account the definition of I 1 , I 2 , I 3 as quotients of determinants, the Chain Rule, and the fact that, because of Eq. ( 4) in Lemma 4, the derivatives of ω of order higher than 3 can be written in terms of ω ′ , ω ′′ .However, by eliminating ω ′ , ω ′′ in Eq. ( 17), one can show that 36I 1 (q(ω)) + 6I 2 (q(ω))I 3 (q(ω)) + I 3 (q(ω)) 3 2 so that One can certainly manipulate Eq. ( 17) by hand to get rid of ω ′ , ω ′′ , reach Eq. ( 18), and therefore find the invariant in Eq. ( 19).However, we want to produce invariants like the one in Eq. ( 19) in an algorithmic fashion, and for any dimension: that is the task in step (ii).The rough idea, as in Eq. ( 17), is to get rid of the derivatives ω (k) , k = 1, 2, . . ., n + 2, in the system consisting of the expressions where ξ i is the result of expanding I i (q(ω)), with i = 0, 1, . . ., n.The process is involved, and will be detailed in Section 4, but as a final product of this process we get closed expressions for these invariants (see Theorem 16 in the next Section 4), that we denote F 1 , . . ., F n−1 .The generation of the Möbius-commuting invariants, for any dimension n, is implemented in [5], which can be freely downloaded, and can be done just once for each dimension n.In Table 1 we spell the invariants for low dimension, 2 ≤ n ≤ 4.
Table 1: Möbius-commuting invariants for low dimension
Next let us address step (iii).Let F j be a Möbius-commuting invariant, j ∈ {1, . . ., n − 1}.Since F j is a rational function of the I i , F j is also an affine invariant, i.e. from Theorem 5 we get that F j (p) = F j (q • φ).Therefore, in terms of the variables z and ω := φ(z), and taking into account that F j (q • φ) = F j (q) • φ, we deduce that F j (p)(z) = F j (q)(ω).Then we have the following result.Proposition 8. Let C 1 , C 2 ⊂ C n be two curves, not contained in a hyperplane, parametrized by mappings p, q with meromorphic components, admitting rational inverses.Then C 1 , C 2 are affinely equivalent if and only if there exists a Möbius transformation φ such that for j ∈ {1, 2, . . ., n − 1} with ω = φ(z), such that D(q Proof.(⇒) Let f be an affine equivalence between C 1 , C 2 .By Theorem 3, there exists a Möbius function φ such that f • p = q • φ.By Corollary 7 we have that I i (p)(z) = I i (q(ω)) for all i ∈ {0, . . ., n}.Since the F j are rational functions of the I i , I i (p)(z) = I i (q(ω)) yields F j (p)(z) = F i (q)(ω) for j ∈ {1, 2, . . ., n − 1}.Finally, writing f (x) = Ax + b, the condition f • p = q • φ implies that Ap(z) + b = q(φ(z)), so b = (q•φ−Ap)(z), which is a constant vector.Furthermore, by differentiating the condition Ap(z)+b = q(φ(z)) (see Subsection 3.1) we deduce that But this equality implies that Ap(z) + b, which is the image of C 1 under the affine mapping f (x) = Ax + b, and q(z), parametrize the same curve, namely C 2 .Thus, f (x) = Ax + b is an affine equivalence between C 1 and C 2 .
Algorithm and examples
To finally turn Proposition 8 into an algorithm, let M j (z, ω) be obtained by clearing denominators in F j (p)(z)−F j (q)(ω).We need to request that M j (z, ω) is not identically zero, which amounts to requiring that not all the F j are constant: this can happen, and an example is the circular helix p(z) = (cos(z), sin(z), z).If M j (z, ω) is not zero, then M j (z, ω) = 0 defines an analytic curve in the plane z, ω.Now if is a Möbius function satisfying Proposition 8, calling ω = φ(z) we get that all the points (z, ω) of the curve which is an irreducible analytic curve, are also points of the curve M j (z, ω).As a consequence of Study's Lemma (see Section 6.13 of [3]), H(z, ω) = ω(cz + d) − (az + d) must be a factor of M j (z, ω); we say that H(z, ω) = ω(cz + d) − (az + d) is a Möbius-like factor of M j (z, ω), and that the Möbius function φ in Eq. ( 21) is associated with H(z, ω).So we have the following theorem, which follows from Proposition 8.
Theorem 9. Let C 1 , C 2 ⊂ C n be two curves, not contained in a hyperplane, parametrized by mappings p, q with meromorphic components, admitting rational inverses and where not all the F j are constant.Then C 1 , C 2 are affinely equivalent if and only if there exists a Möbius-like factor H(z, ω) common to M j (z, ω), j = 1, . . ., n − 1 such that the corresponding associated Möbius function φ satisfies that: (1) Thus, we get the following procedure AffineEquivalences to find the affine equivalences between the curves C 1 , C 2 defined by p, q.
3:
if all the M j are identically zero then 4: return Failure: all the Möbius-commuting invariants are constant Compute the common factor L(x, z) of the M j (x, z).
7:
Let L be the list of Möbius-like factors of L(x, z) return The curves are not affinely equivalent In the affirmative case, return f (x) = Ax + b.
If p, q are rational, the M j (x, z) are rational and H(z, ω) is a factor of gcd(M 1 (z, ω), . . ., M n−1 (z, ω)).However, the computer algebra system Maple, where we implemented the procedure (see [5]), can compute H(z, ω) also in the case when p, q are not rational, but satisfy the hypotheses of the procedure.In this last case, we ask Maple to solve H(z, ω) for ω to find the Möbius functions.
Remark 2. Although Maple Help System is not too specific about this, in the case when the M j (x, z) are not rational the idea seems to be that Maple renames repeated non-rational expressions found in the M j (x, z) (e.g.cos(z), e z , etc.) to form rational functions, and then proceeds by applying the algorithm for the rational case.
In order to illustrate the performance of the procedure AffineEquivalences, we consider now two examples where we compute the affine equivalences between curves taken from Ex. 1, and the images of these curves under an affine mapping.These examples were computed with Maple and executed in a PC with a 3.60 GHz Intel Core i7 processor and 32 GB RAM, and are accessible in [5] as well. .
The curve q(z) corresponds to the first curve in Ex. 1, which is a catenary curve.After appying our algorithm, we find two factors H i , (z, ω), i = 1, 2, common to the M j , namely When solving for ω, we get infinitely many (complex) Möbius functions leading to infinitely many (complex) affine equivalences, which reveals that the H i (ω, z) contain Möbius-like factors.The affine equivalences can be classified in three classes f j (x) = A j x + b j , j ∈ {1, 2, 3}, with associated Möbius functions φ j (z): and where i 2 = −1.If we just consider real affine equivalences, we have three of them, which correspond to fixing k 1 = 0 for f 1 (x), k 2 = 0 for f 2 (x), k 2 = −1/2 for f 3 (x).The whole computation took 0.172 seconds.
The curve q(z) corresponds to the third curve in Ex. 1, which is a 3D spiral.After applying our algorithm, we find two Möbius-like factors H i (z, ω), i = 1, 2, common to the M j (z, ω), namely When solving for ω, we get two Möbius transformations φ 1 (z) = −2z and φ 2 (z) = 2z corresponding to the affine equivalences The whole computation took 0.032 seconds.
Example 4 (Rational curves in n-th dimension).Finally, in Table 4 we present the results of performance tests to compute affine equivalences between rational curves of various degrees, in different dimensions.The rational curves in the experiments were randomly generated, see also [5], with coefficients between −10 and 10.After generating the first curve, the second curve was obtained by applying an affine mapping f (x) = Ax + b to the first curve, where the matrix and the translation vector, for each dimension, are shown in Table 2; additionally, the resulting curve was reparametrized using a Möbius transformation φ(z) = 2z − 1.The timings to recover the affine equivalences are shown in Table 2: the rows of Table 2 correspond to dimensions from n = 2 to n = 6, and the columns, to degrees from d = 6 to d = 12.For degrees up to 10, we can compute the affine equivalences between the curves in less than a minute, for all the dimensions tested.
Justification of step (ii): computation of Möbius-commuting invariants
Step (ii) in the strategy presented at the beginning of Section 3.1 corresponds to the computation of what we called Möbius-commuting invariants.The deduction of these invariants is involved; so in order to develope our reasoning, we will distinguish three small substeps, that we present in separate subsections: (ii.1) Rewriting high order derivatives of ω and rewriting Faà di Bruno's formula.
Here we recall the notation ω = φ(z), and the notation ω ′ , ω ′′ , . . ., ω (k) for the derivatives of ω with respect to z. Recall also from Subsection 3.2 that the rough idea in step (ii) is to eliminate the derivatives of ω from the expressions resulting from expanding I i (q(ω)).In order to do that, first we will rewrite all the derivatives ω (k) in terms of just ω ′ ; we will do that in step (ii.1) with the help of the expansion of I n (q(ω)), for which we will make use of the tools introduced in Subsection 2.2 and, in particular, Faà di Bruno's formula.Then, in step (ii.2), we will compute an expansion form for I i (q(ω)) for i = 1, . . ., n − 1; this is the hardest part, where we will need to make use, again, of Faà di Bruno's formula, rewritten in an advantageous form in substep (ii.1), and some combinatorics.Finally, in step (ii.3), we will make use of I 0 (q(ω)), the auxiliary invariant introduced in Eq. ( 15) to finally eliminate ω ′ , the only derivative of ω left after substep (ii.1), and compute the Möbius-commuting invariants.
Substep (ii.1)
Our first step in order to eliminate the derivatives ω (k) is to write all of them in terms of ω ′ ; later, we will use this to rewrite Faà di Bruno's formula, introduced in Section 2.2, in an alternative form that will be useful in substep (ii.2).In order to do this, we will take advantage of the expansion of I n (q(ω)): in general, expanding I i (q(ω)) for i = 0, 1, . . ., n − 1 is messy and will be the hardest part, deferred for substep (ii.2), but expanding I n (q(ω)) is much more accesible.So let focus on this.From Eq. ( 13), we need to expand ∆(q(ω)) and A n (q(ω)).In both cases we will make use of Faà di Bruno's formula.First, from Eq. ( 12) From Faà di Bruno's formula, Eq. ( 7) and Eq. ( 8), we have and for k ≥ 2, where • k−1 is a linear combination of the derivatives of q up to order k −1, evaluated at ω.By expanding the determinant in Eq. ( 22) as a sum of determinants, we observe that all the determinants including terms of the • k−1 , k = 2, . . ., n, must be zero.Thus, we are left with one determinant, whose columns k = 1, 2, . . ., n are (ω ′ ) k q (k) (ω), and we get the following result. ∆q(ω).
Lemma 11.I n (q(ω)) = n(n + 1) 2 Notice that the formula for I n (q(ω)) in Lemma 11 is linear in ω ′′ .This allows us to write Furthermore, using Eq. ( 28) and invoking Lemma 4 in Section 2.2, we can write all the derivatives of ω in terms of just ω ′ , which was one of the goals of this substep.We formulate this as a corollary of Lemma 11.
Finally, let us use Corollary 12 to rewrite Faà di Bruno's formula.In order to do this, let us introduce the notation Thus, using Corollary 12 and the above notation, the Bell polynomial in Eq. ( 5) can be written as where only ω ′ is involved.Let be the Lah number L(k, m) (see for instance [10]), and let us denote Then we have the following result, proved in Appendix I.
Using Lemma 13 we can rewrite Faà di Bruno's formula as We will use this in the next substep.Additionally, notice that Bk,k = 1; also, we will assume the convention that Bk,m = 0 when k < m.
Substep (ii.2)
The goal of this substep is to expand the I i (q(ω)), for i = 1, . . ., n − 1, defined in Eq. ( 13).Since from Lemma 10 we already know ∆(q(ω)), we need to analyze, for i ̸ = n, By using Faà di Bruno's formula in Eq. ( 32), we know that each column, i.e. each derivative, in Eq. ( 33) is a linear combination of derivatives, where the number of terms is equal to the order of the derivative.Furthermore, we observe that for the first i − 1 columns, only the last term, i.e. the term involving the highest derivative, matters, since the other terms lead to vanishing determinants when expanding A i (q(ω)) as a sum of determinants.Thus, we can write Next let us express Eq.(34) as a sum of determinants, using Faà di Bruno's formula in Eq. (32).In order to do this, we consider the following two subsets: • P represents the set consisting of the permutations of {i, i + 1, . . ., n}.Notice that #{i, i + 1, . . ., n} = n − i + 1, where # denotes here the cardinal of a set.
(2) The power of Φ is the result of the sum Thus, when summing over the elements of P, the sum of the determinants in the expansion of A i (q(ω)) with ℓ 1 ̸ = n + 1 yields which we can also write as The underbraced expression corresponds exactly to the definition of an (n−i+1)×(n−i+1) determinant, whose j-th column consists of the values of Bn+1,n+1−j , Bn,n+1−j , . . ., Bi+1,n+1−j , where Bk,m = 0 when k < m, and Bk,k = 1.We call this determinant M n+1,i+1 , so We observe that for each i, where recall that i = 1, 2, . . ., n − 1, the degree of K i as a polynomial in ω ′ is n − i + 1.In particular, the degree of K 1 is n, and the degree of K n−1 is 2. Considering we notice that when collecting Eq. ( 53) for i = 1, . . ., n − 1, we get a linear system S in α 1 , α 2 , . . ., α n consisting of n unknowns and n − 1 equations.However, one can prove that for all i the coefficient of α 1 := ω ′ is always zero, i.e., K i does not depend on α 1 := ω ′ : indeed, from the formula in Eq. ( 53), for any i, the coefficient of ω ′ corresponds to the indices k = n − i and j = n − i, j = n − i + 1, which yields But this number is zero, as stated in the following lemma, which is proven in Appendix I.
Now we are almost ready to find our Möbius-commuting invariants.Because of Lemma 15, the system S is in fact a system with n unknowns, namely α 2 , . . ., α n .Furthermore, S is upper triangular.Using back substitution, we find the solution of S as where we assume that K n+1 = G n+1 = −1.Note that, for all k ∈ {2, 3, . . ., n}, α k is a rational function of the K j , G j whose numerator only depends on the K j and the denominator only depends on the G j .
Conclusion
We have presented an algorithm, generalizing the algorithm in [4], to compute the affine equivalences, if any, between two parametric curves in any dimension.Our strategy relies on bivariate factoring, and avoids polynomial system solving.The algorithm works for rational curves and also non-algebraic parametric curves with meromorphic components, admitting a rational inverse.We have implemented the algorithm in Maple, and evidence of its performance has been presented.
The algorithm works whenever not all the Möbius-commuting invariants are constant.This happens generically, but identifying the curves where this does not occur, as well as providing a solution to the problem for this special case, are questions that we pose here as open problems.
Additionally, in the case of non-algebraic curves, right now we need some hypotheses that are not always satisfied: for instance, planar curves like the cycloid, or the tractrix, or classical planar spirals, do not satisfy our hypotheses.However, we have observed that the algorithm seems to work also for many of those curves, which makes us think that our hypotheses could be relaxed.This requires more theoretical work regarding analytic curves.
It would be desirable to extend our ideas to the case of rational surfaces/hypersurfaces.This probably requires some extra hypotheses, e.g.non-existence of base points or special types of surfaces/hypersurfaces, that allow us to guess the type of transformation that we have in the parameter space: such transformation would play a role similar to the role played by Möbius transformations here.These are questions that we would like to address in the future.
Example 2 .
[2D catenary curves] Consider the curves C 1 and C 2 parametrized by
Example 3 .
[3D spirals] Consider the curves C 1 and C 2 parametrized by p(z)
Table 2 :
Affine mappings used in the examples
Table 3 :
CPU time in seconds for affine equivalences of random rational curves with various degrees in various dimensions | 8,365 | 2024-03-25T00:00:00.000 | [
"Mathematics"
] |
Ex vivo isolation, expansion and bioengineering of CCR7+CD95-/or CD62L+CD45RA+ tumor infiltrating lymphocytes from acute myeloid leukemia patients’ bone marrow
T cell based immunotherapies can be applicable to acute myeloid leukemia (AML). Therefore, the selection of optimal T cells, cell manufacturing, and therapeutic T cell engineering are essential for the development of effective adoptive T cell therapies for AML. Autologous tumor-infiltrating lymphocytes (TILs) have been in clinical trials to treat solid malignancies. Herein, we assessed whether TILs can be isolated from the bone marrow (BM) of AML patients, expanded ex vivo and utilized as a novel therapeutic strategy for AML. To this end, firstly we analyzed the immunophenotypes of a series of primary BM samples from AML patients (N = 10) by flow cytometry. We observed a variable amount of CD3+ TILs (range ∼2.3–∼32.6% of mononuclear cells) among BM samples. We then developed a novel protocol that produced a three-log ex vivo expansion of TILs isolated from AML patient BM (N = 10) and peripheral blood (PB) (N = 10), including from patients with a low number of CD3+ T cells, within 3, 4 weeks. Further, we identified previously described naïve T cells (CCR7+CD95-/or CD62L+CD45RA+) in AML BM and PB samples, which seemed to be required for a successful TILs ex vivo expansion. Finally, we showed that the expanded TILs could: (1) cause cytotoxicity to autologous AML blasts ex vivo (90.6% in control without T cell treatment vs. 1.89% in experimental groups with PB derived T cells and 1.77% in experimental groups with BM derived TILs, p < 0.01), (2) be genetically engineered to express CYP27B1 gene, and (3) infiltrate the BM and reside in close proximity to pre-injected autologous AML blasts of engrafted immunodeficiency mice. Altogether, these results provide a rationale for further studies of the therapeutic use of TILs in AML.
hematopoietic stem cell (HSCs) transplantation (Allo-HCT) have resulted in long-term remission [2][3][4] . However, treatment failure is common, as manifested by disease refractory and relapse [5] . One of the major difficulties in AML treatment is the inability to track and recognize antigens found on leukemia stem cells (LSCs), and eliminate those quiescent and chemotherapyresistant LSCs [6][7][8] . In addition, the treatment options for older, unfit patients are limited with molecular targeted therapies offering palliation but they do not lead to disease remission. Thus, the outcome for this specific population remains dismal, with a median survival of only 5 to 10 months. Therefore, a novel, effective therapy is an unmet need for AML patients with relapsed/refractory disease and elderly patients [ 9 , 10 ].
Cancer immunotherapy utilizes components of the immune system to eliminate cancer cells while sparing healthy cells [ 11 , 12 ]. Bioengineering T cells by generating Chimeric Antigen Receptor T cells (CAR-T) is a new approach to provide precise and personalized immunotherapy for each cancer patient [13] . Albeit effective in acute lymphocytic leukemia (ALL), the applicability of CAR-T to AML remains to be fully proven. Recently, engineered tumor infiltrating lymphocytes (TILs) or marrow-infiltrating lymphocytes (MILs) have received widespread attention because of their efficacy in treating metastatic solid tumors [14] and multiple myeloma [15] . In contrast to CAR-T therapies, TILs have the advantage of being able to respond to multiple signals from the tumor microenvironment and antigens on cancer cells [ 16 , 17 ]. However, problems including deficiency, dysfunction and exhaustion of TILs in the cancer microenvironment remain to be solved [ 18 , 19 ].
We hypothesize here that TILs can be found in the bone marrow (BM) of AML patients and that derived TILs combined with immune checkpoint blockage is a novel, therapeutic strategy for AML.
Materials and methods
The list of reagents including manufacturers and catalogues of antibodies and kits are found in the supplementary data ( Supplementary Table 1 ).
Human samples
AML BM samples ( Patients #1-10, Table 1 ) were obtained from Loma Linda University Cancer Center Biospecimen Laboratory (LLUCCBL). AML Peripheral Blood and BM samples ( Patients #11-20, Table 1 ) were obtained from the City of Hope National Medical Center (COHNMC). All donor patients signed an informed consent form. Sample acquisition was approved by the Institutional Review Boards at the LLUMC and the COHNMC in accordance with an assurance filed with and approved by the Department of Health and Human Services, and it met all requirements of the Declaration of Helsinki.
Mice NRG (OD-Rag1 null IL2rg null ) mice were purchased from the Jackson Laboratory (Bar Harbor, ME) and housed in a specific pathogen-free animal facility at Loma Linda University (LLU). All mice were used at the age of 8 weeks. All experiments were performed in compliance with an Institutional Animal Care and Use Protocol approved by LLU Animal Care and Use Committee.
Isolation of TILs from AML patient bone marrow samples
CD3 + T cells from bone marrow mononuclear cells (BMMNC) were separated by using CD3 microbeads (Miltenyi Biotech, Germany) and a MiniMACS TM Separator with an MS Column according to the manufacturer's protocol. Selected CD3 + T cells were considered AML TILs.
Ex-vivo expansion of high number TILs by a traditional T cell protocol
CD3 + TILs were isolated from AML BMMNC by the pull-down through CD3 microbeads and magnetic separation. The non-CD3 + cells (feeder cells) were pre-treated with 10 μg/mL mitomycin-C for 2 h to arrest cell proliferation. CD3 + TILs and feeder cells were co-cultured at 37 °C and 5% CO 2 in a RPMI 1640 culture medium containing 10% fetal bovine serum (FBS, HyClone), 100μg/ml penicillin/streptomycin, and Interleukin (IL-2) (1000 U/ml, Peprotech). Seeding cell density was 300 μl of 100,000 cells/ml in each well of 48-well-plates. For the maintenance of quickly expanded TILs, we performed a medium change every 2, 3 days, and split cells at the ratio of 1:4 when reaching 80 % confluent. TILs were stimulated with 30 ng/mL human anti-CD3 (OKT3, Biolegend). Around 10-14 days, we started to culture TILs in 12-well-plates or T25 flasks for expansion of large amounts before further analyses.
b/2) timeline of TIL expansion
Stage 1 (Naïve TILs) : The seeding cell density of CD3 + TILs was around 300 μl of 20,000 cells/ml in appropriate wells of 48-well-plates. Due to the low cell density of TILs, we added fresh media at a 1:1 ratio to each well every 2 days and mixed the cells/media. Based primarily on the growth of the TILs, we performed media change every 5-7 days and split cells at the ratio 1:3.
Stage 2 (Ready to grow): After 7 days, IL-7 (25ng/ml, Peprotech) and IL-15 (25ng/ml, Peprotech) were added to the media along with IL-2 (1000U/ml). Every patient BMMNC sample was different; however, we preferred to raise TILs in 48-well-plates for expansion to sufficient amounts during the beginning 10-14 days instead of in large wells and flasks.
Stage 3 (Quickly expand and differentiate into T effectors): After 10-14 days, TILs grew very fast. We performed media change every 2 days and split quickly expanded TILs at the ratio 1:4 to 1:8. Then, we expanded TILs in multiple 48 or 24 well-plates. Dynabeads® Human T-Activator CD3/CD28 was used once for re-stimulation of TILs.
Flow cytometry (FACS)
Expanded TILs were harvested and examined for the expression of cell surface biomarkers (CD) and intracellular proteins for T cells by multichromatic FACS. Briefly, about 1 × 10 4 ∼10 6 cells in 100 μl FACS buffer (PBS containing 1% FBS and 0.05% sodium azide) were stained with various fluorescence-conjugated antibodies specific for the desired cell surface proteins at 4 °C for 30 min. The surface-stained cells were then fixed and permeabilized using the appropriate reagents (e.g. the BD Pharmingen Cytofix/Cytoperm buffer) and stained with different fluorescence-conjugated antibodies specific for the desired intracellular proteins at 4 °C for 30 min in the permeabilizing buffer (e.g. the BD Perm/Wash buffer). Finally, the cells were washed twice in the permeabilizing buffer and twice in the FACS buffer before being analyzed on the BD FACSAria II. Data was analyzed using the FlowJo software (Treestar).
Cytotoxicity assay
We performed the cytotoxicity assays by co-culturing engineered TILs with primary AML blasts from the same patient (isolated by CD33microbeads pull-down) in 24-well plates. AML blasts from BMMNC were separated using APC anti-human CD33 antibody (Biolegend), anti-APC microbeads (Miltenyi Biotech, Germany), and a MiniMACS TM Separator with an MS Column. The ratio of autologous TILs to AML blasts were in the range of 5:1-10:1 according to a previous report [20] . After overnight incubation, cells were collected, stained, and processed for FACS assay of biomarkers including viability dyes (Invitrogen TM ) and CD33 according to manufacturers' protocols. Analyses and graphs will be generated using the GraphPad Prism software to evaluate significance.
Adoptive cell transplantation of engineered human AML cells and TILs in immune-deficient NRG mice
Ex vivo expanded TILs (2 × 10 6 cells/mouse) were pre-labeled to be red fluorescent with Qtracker TM 655 (Molecular Probes) and intravenously (IV) injected into NRG mice through the tail vein. To help the engraftment, 10 mg/kg of Azacitidine was intraperitoneally injected one day before the injection. TILs-engrafted mice were sacrificed at different time points. In another experiment, AML cells (non-CD3 + cells from BMMNC) were transduced with GFP lentivirus to generate GFP + AML cells. Fourteen days after transplantation of GFP + AML cells (1 × 10 6 cells/mouse), Qtracker TM 655 labeled TILs were IV injected into these AML NRG mice. The detailed protocol and plasmids for generating lentivirus and generating GFP + AML cells can be found in our previous report [21] . On day 10 after TILs' engraftment, mice were sacrificed for FACS analyses. Immunofluorescent histology was performed to visualize TILs and GFP + AML cells inside of the bone marrow.
Histology
Preparation of undecalcified frozen sections from bone tissues was performed according to our previous report [21] . Briefly, specimens were fixed in 4% paraformaldehyde, freeze-embedded with an embedding medium (SCEM), and frozen in pentane cooled with liquid nitrogen. The frozen specimen block was fixed to the cryostat and trimmed with a disposable blade. The block's surface was then covered with a pressure sensitive adhesive film (Cryofilm) and cut into 10 μm-thick frozen sections which were stored at -20 °C. The frozen sections were immunohistochemically stained and photographed for further analyses.
Confirmation of the presence of TILs in bone marrows of AML patients
Autologous TILs based therapies could be a novel therapeutic strategy for AML if we can (1) phenotypically identify TILs, (2) expand TILs ex vivo to sufficient numbers for clinical use, (3) demonstrate cytotoxic effect to autologous AML blasts and (4) bioengineer TILs to restore their Ag-specific cytotoxic functions. To this end, we firstly screened AML patient BMMNC ( Patients #1-10, Table 1 ). A different degree of CD3 + T cells infiltration could be detected in all the tested samples. Overall, we observed two groups of patients with low ("Low": upper FACS plots, Fig. 1 A; Patient #1-3, 7, Table 1 ) and high ("High": lower FACS plots, Fig. 1 A; Patient #4-6, 8-10, Table 1 ) numbers of CD3 + TILs (2.3% vs 32.6%, respectively, P < 0.05) ( Fig. 1 B ). This finding is interesting in that certain AML blast environment illicit a suppressed T cell response. Our screening data are consistent with a recent report that the CD3 TILs population are preserved in certain AML BM samples compared to health controls but about 50% of AML patients have a low T cell count in their BM. T cell-mediated cellular immunity is highly regulated by a system of checks and balances through a group of stimulatory and inhibitory proteins, including programmed death receptor 1 (PD-1) [22] . Further analyses revealed that 13% of CD3 + T cells were PD-1 + (arrows, Fig. 1 A, C ). These PD-1 + TILs are likely to have lost anti-leukemic activity against AML blasts [23] , which could be restored functionally by using PD-1 inhibitors [24] .
Ex vivo expansion of TILs from AML BMs using a modified protocol
Next, we examined the ex vivo expandability of AML TILs using our T cell culture system ( Supple. Fig. 1 A ). From the "High" group, we were able to obtain 0.5 to 2 × 10 6 CD3 + T cells/ml using CD3 microbeads. After magnetic separation, these cells ( Patient #10, Table 1 ) were cultured with supporting feeder cells in RPMI-1640 supplemented with IL-2. A 4-fold increase of the CD8 + CD3 + T cell population (red arrow) was obtained after the 5-day culture (30.4% on Day 5 vs 7.6% on Day 0, P < 0.01, Supple. Fig. 1 B-D ). In contrast, from the "Low" we found challenging to obtain a sufficient number of TILs. Thus, to expand these cells we utilized a modified ex vivo culture protocol ( Fig. 2 A ). Ten vials of AML BMMNC with low T cell numbers were used in this experiment ( Patients #11-20, Table 1 ). CD3 microbeads were applied to pull down the TILs from 1 ml of each BMMNC sample; 2 to 5 × 10 4 /ml CD3 + T cells were obtained and cultured with CD3/CD28 microbeads without feeder cells in RPMI-1640 supplementing them sequentially with IL-2, IL-7, and IL-15 (see experimental methods for details). At different time points, we collected cells and stained them for FACS analyses to determine their immunophenotypes. At the early stage (day 7), most CD3 + TILs were found to be CD4 + (87%, Fig. 2 B ), while also expressing PD-1 + (97.9%). At day 21, we found a three-log increase of CD3 + TIL populations (26795470 on day 21 vs 25600 on day 0, P < 0.01, Fig. 2 C ). Also at day 21, the percentage of CD4 + reduced to 33.2%, while CD8 + TILs increased to 55% with low expression of PD-1 (7.63%, Fig. 2 D ).
Bioengineering expanded TILs pharmaceutically and genetically ex vivo
We next investigated the possibility of pharmaceutically and genetically bioengineering expanded TILs ex vivo ( Supple. Fig. 2). The PD-1 pathway has received considerable attention because of its negative role during acute T cell activation and being a marker for T cell exhaustion [25] . We used nivolumab, an FDA-approved monoclonal antibody PD-1 inhibitor, to suppress PD-1 expression on TILs, as evidenced by FACS analyses (significantly reduced from 62.8% to 1.8%, Supple. Fig. 2 A-C ). Previously, we reported a new Vitamin D gene therapy to treat AML mice by overexpressing the CYP27B1 ectopic gene, which encodes the 1-alphahydroxylase to generate active Vitamin D in situ [21] . In this study, using a lentivirus system we also genetically engineered ex vivo expanded TILs, which were demonstrated to overexpress the CYP27B1 ectopic gene (arrow, Supple. Fig. 2 E ). In future studies, we will examine the anti-leukemia function of CYP27B1 + TILs and also explore whether TILs will be a potential cell vehicle candidate for gene therapies.
Investigate possible explanations for difficulties with expansion of CD3 + TILs in some AML patients
During the culture of TILs from 10 AML patient samples, one consistent aspect for TIL cultures ( n = 10) was that the proliferation status of TILs during the early stages (days 3-5) was a good predictor for whether TILs (representative images of early TIL clusters, Fig. 3 C ) could be expanded to clinical scale in later stages. We found that not every sample could generate TILs ex vivo ( Fig. 3 ). After CD3 microbead pull-down, CD3 + T cells were present in these BMMNC samples (patients #11, 12, 19), but they failed to expand ( Fig. 3 A ) and generate TIL clusters ( Fig. 3 C ). There was another sample (patient #13) which could generate small clusters ( Fig. 3 B ), but it grew relatively slow compared to those quickly expanding TILs (patients #14-18, 20, Fig. 3 C ). To investigate the mechanism underlying differential growth capabilities of TILs, we performed an immunophenotypic comparison of these AML BMMNC by using biomarkers for naïve T cells, including CD62L, CD45RA, CCR7, CD95 [26] . No significant difference of CD62L + CD45RA + naïve TILs was found between the no/slow growth BMMNC and the quick growth BMMNC(P = 0.38, Supple. Fig. 3). However, we found that there was a significant loss of CCR7 + CD95naïve T cell population (red arrow, Fig. 3 E, F ) in patients #11 and #12 (9.6fold decrease, 0.45% of no growth vs 4.31% of quick growth, P < 0.05, Fig. 3 D ). There were clear CCR7 + CD95-naïve T cell populations in patients #14, #15, and #16 (green arrow, Fig. 3 G, H, J ), part of which also expressed CD62L + CD45RA + naïve biomarkers. When comparing the detailed immunophenotypic pattern of patient #13 (slow growth) with patient # 16 (quick growth), we found that there was a 3-fold increase of CD62L + CD45RA + cells in patient #16 BMMNC versus patient #13 BMMNC in each compartment of CCR7 + or CD95 + subpopulations Where applicable, data are means ± SEM and were analyzed by Student t -test. * * P < 0.01. ( Fig. 3 I, J ). Our data suggest that to effectively expand TILs to a sufficient amount, the CCR7 + CD95-naïve T cell population in AML patient BM are needed to support the quick expansion ex vivo . To explore alternative sources of T cells for TILs therapy in patients with low BM TILs, we investigate whether their peripherally isolated T cells can be expanded by our novel culture system. We first found similar patterns of naïve T cells and differentiated T cells in the peripheral blood (PB) and BM samples of same patients ( Fig. 4 A, B ). Next, expansion culture experiments revealed similar growth patterns between PB and BM samples including no expansion of #11 and #12 PB samples ex vivo (data not shown).
In all, the presence of naïve T cells in the marrow of a subgroup of AML patients was found to be critical for sufficient expansion of TILs and further treatment development.
Functional characterization of ex vivo expanded TILs from AML patients using ex vivo cytotoxic assays and in vivo homing assays
To examine the function of ex vivo expanded TILs, we performed cytotoxic tests. CD33, a surface biomarker, is expressed on leukemia blasts from the majority of AML patients [27] . 0 4 -10 5 autologous CD33 + blasts with 2 × 10 5 -10 6 isolated and ex vivo expanded TILs (E: T ratio 10:1). After 18 h, cells were collected and stained for FACS analysis. We observed a significant decrease of viable CD33 + blast population in TIL treatment versus the control of no treatment group (90.6% vs. 1.89%; p < 0.01) ( Fig. 5 A, B ). TILs expanded from either PB or BM from the same patients were similarly effective against autologous blasts.
To investigate whether ex vivo expanded TILs will home to the BM and maintain their proliferation and functional capabilities in vivo , we performed a set of preliminary transplantation experiments. TILs were pre-labeled with Qtracker 655 and then intravenously injected to naïve immune-deficient mice (NRG) ( n = 5). On day 14, mice were sacrificed and examined for the location of transplanted TILs. Transplanted TILs were found in the BMs of NRG mice, and continued to express CD3 + ( Supple. Fig. 4 B ). We then engrafted AML blasts (GFP-labeled) in NRG ( n = 5) mice followed by infusion of TILs (Qtracker 655 labeled) (see experimental procedures, Suppl. Fig. 4 A ). On day 24, mice were sacrificed for histology. TILs were found in BMs next to with GFP-labeled AML blasts ( Supple. Fig. 4 C ).
Discussion
In this study, we demonstrated the presence of TILs in AML patients' bone marrow samples. We were able to characterize their phenotypic and functional features using immunophenotying and cytotoxic assay. To examine the ex vivo expandability of TILs for the possibility of autologous transplantation, we developed a novel ex vivo culture system to expand TILs from AML patient BM samples with low numbers of CD3 + T cells to clinical scales. Furthermore, we immunophenotypically determined that these TILs expressed either CCR7 + CD95-/or CD62L + CD45RA + , which are makers for naive T cells [28] . We have observed that some patients have high numbers of CD3 + TILs while others have low numbers of CD3 + TILs in their BM. The presence of naïve T cells is the hallmark of expandability of T cells, even in patients with low initial CD3 + TILs. Finally, we demonstrated that TILs can cause cytotoxicity to autologous blasts ex vivo , can be engineered to express desirable genes, and are able to migrate to BM after being transplanted into immunodeficiency mice in vivo . As a preliminary in vivo experiment, our results provided evidence that transplantation of expanded TILs is feasible and that we could track IV injected cells to the bone marrow. These ex vivo expanded TILs are likely to maintain their BM homing capability, proliferation and therapeutic capabilities in vivo . Our current preliminary in vivo data also suggested that primary TILs could be engineered to overexpress a desirable gene for therapeutic purpose as we previously showed ( Supple. Fig. 2 E ). Thus, TILs could be used a vehicle for gene therapy for autologous transplantation to treat AML. In future experiments, it would be important to compare homing, proliferating, cytotoxic and therapeutic capabilities between BM derived TILs with circulating TILs from peripheral blood (PB) ex vivo and in vivo transplantation studies. Our results suggest that BM derived TILs based cell therapies is a promising, novel therapeutic strategy for AML patients and should be further explored.
Significance of our current study for basic research of TILs and clinical application
The complexity of AML suggests that AML patients require personalized therapies to achieve long term remission [ 13 , 29 ]. Previous studies have demonstrated that availability of CD3 + TILs and high percentages of CD8 + TILs in situ were essential in preventing disease progression or relapse, and prolonging the survival in cancer patients [ 30 , 31 ]. Very little is known of whether TIL have a role in the evolution and treatment-response of AML patients and whether TIL-based approach can be used to elicit a therapeutic immune response. Thus, understanding how the immune system in AML BM interacts with malignant cells and the tumor microenvironment is likely to be critical for the development of successful immunotherapeutic strategies [32] . In this study, we showed that the presence of TILs in the BM from AML patients, albeit with a variable degree. Whether the level of BM TIL infiltration has any prognostic significance, however remains to be determined. Naive T cells are immature cells, commonly characterized by the surface expression of CD62L (L-selectin) and CCR7 (C-C Chemokine receptor type 7) [33] . In addition, serial adoptive transfers of a single CD62L + memory T cell, a subset of a naïve T cell, demonstrated its stemness including selfrenewal and multipotent capabilities in vivo [34] . Herein, we report that CCR7 + CD95-/or CD62L + CD45RA + naïve T cells exist in some AML BMs, which could be isolated and expanded by our modified ex vivo culture system ( n = 7/10). By using our novel TIL culture method, we could ex vivo expand TILs over a three log-fold, which would be useful for studying subsets of TILs and bioengineering them to fit potential clinical applications for AML ( Fig. 6 ). Our TIL protocol has incorporated co-stimulation from anti-CD3/CD28 microbeads supplemented with cytokines i.e., IL-7 and IL-15, which have been shown to increase the viability and induce expansion naïve T cells for sustainable expansion [35] . Interestingly, no or small subfraction of CCR7 + CD95-/or CD62L + CD45RA + were found in TILs that failed to expand ex vivo (Patients #11, 12, 19, Table 1 ). These results however will need to be confirmed in a larger cohort of AML patients and to genetic abnormalities, which would reveal the underlining mechanisms.
Remaining questions and potential engineering tools to reverse and rejuvenate exhausted TILs for clinical applications
We recognized that albeit provocative, our results raise additional questions. For example, how long the transplanted TILs will survive in AML BM and will they expand in the BM? Their efficacy in vivo after transplantation is unknown, although our preliminary in vivo transplantation experiments showed TILs homing to the BM and identifying AML cells within a short period of times. It needs also to be evaluated whether bioengineered TILs will retain their Ag-specific functions. Recently, several key transcriptional regulators (NR4A, TOX, etc.) have been discovered to induce the anergy and exhaustion of T cells and to affect the therapeutic efficacy of immunotherapies for solid tumors [36][37][38][39][40][41] . Another question is whether we could reverse the process of TIL exhaustion and enhance their proliferative and functional capabilities in vivo ( Fig. 6 ). In summary, our preliminary data provided proof of principle evidence that reprogrammable | 5,840.8 | 2021-11-11T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Powder Coating for Healthcare Aluminum Packaging
: Restrictive regulations concerning the toxicity of certain compounds and the use and disposal of solvents present in the liquid epoxy protection system have been analyzed in this work to evaluate powder coatings as an alternative in the protection of aerosol aluminum cans, which are employed in cosmetics and pharmaceutical product packaging. In this paper, the chemical resistance of polyester and mixed epoxy-polyester powder coatings is assessed, considering di ff erent aggressive environments employed in healthcare commercial products. The samples’ performances are also compared with the currently used liquid organic coatings. The pack test has been used to evaluate the protective system behavior in contact with both the liquid and the gaseous part of the cosmetic product. However, the visual observation, required by the test, enabled the highlight of only very evident degradation phenomena. The chemical resistance of the powder coatings has proved to be appropriate only for less aggressive environment, where the critical compounds are propellants, propane, butane and isobutane. When exposed to other environments containing alcohol, water and dimethyl ether, most samples have been susceptible to layer degradation phenomena. Polyester layers lose their corrosion protection properties. Epoxy systems, instead, result more performant than polyester resins, but they particularly su ff er from the contact with dimethyl ether.
Introduction
The aluminum aerosol cans in the cosmetics market are usually coated inside with organic coatings in order to protect the surface from the contents, and to avoid possible contaminations of their content. Clearly, healthcare materials contaminated with corrosion products change their aspects and their properties. This represents a critical aspect, as the aluminum aerosol cans on the market often contain several types of materials, with different chemical reactivity. It is therefore not easy to protect aluminum from these materials, especially considering the long storage times of commercial products, before and during their use by consumers. To avoid possible contaminations, the organic coatings used in this field must satisfy very restrictive requirements. In fact, these products (hair lacquers, deodorants, shaving foams) come into contact with delicate body parts (face, skin, hair, lips), and they should not be contaminated by organic compounds coming from coatings and by corrosion products. In addition, the product visual appearance is very important. The contamination of products takes place in various ways. Organic coatings components can be extracted by the contained matter, which would act, in this case, as a solvent. Coating compounds could interact with the substances present in the liquid by forming other unwanted compounds. Moreover, the organic layers may delaminate from the metallic substrate as a consequence of adhesion loss. The last phenomenon is harmful either because the fragments of detached protective layer can block the dispenser, or because the consumer may find parts of the organic matter in the product, and the aluminum can surface loses its corrosion
•
Low criticality products, which possess a combination of propane/butane/isopropane, or other kind of propellant; • Medium criticality products, containing propylene, including ethyl alcohol, or products containing ethyl alcohol and dimethyl ether (DME); • High criticality products, such as high-performance lacquers usually presenting a combination of water, ethyl alcohol and DME.
The three critical levels described above can be addressed by applying standard organic liquid paints. Low criticality products can in fact be used by protecting the container with epoxy coatings, while higher criticality products require the use of PAM (polyamide-imide) based coatings.
Powder coatings are trying to get inside this market, monopolized until just a few years ago by liquid organic coatings, achieving good results for the medium-low criticality products, but finding obstacles for more aggressive substances. The mixed epoxy-polyester powder resin is already on the market, replacing in some cases the standard liquid epoxy coatings, thus covering the low and mid-low critical market slice.
The two main aims for the development and study of the powder organic coating systems are: the improvement of the properties of the available epoxy-polyester resin to cover even more critical products (medium and high criticality), and the development of a resin for medium-low critical issues, without the presence of bisphenol A (BPA) which is contained in epoxy resins and suspected of being dangerous to humans [7,[11][12][13][14][15][16][17].
In this work, the behavior of two different types of resins for the protection of aluminum aerosol cans in the healthcare market were considered. Both types of resin were also modified by changing their formulation, increasing the crosslinking. The resistance properties of these four protection systems were compared to the behavior of a reference liquid epoxy resin, traditionally used on the market. These resins have been exposed to four different types of environments, representative of the most used commercial products, by means of the pack test, which allows to put in contact the inside product with the coating without loss of volatile components and gas. The samples degradation was evaluated by measurements of weight and hardness loss and infrared (FTIR) analysis. Finally, the corrosion protection properties of the coatings have been studied by electrochemical impedance spectroscopy (EIS) measurements.
Materials
In this work, two types of powder organic coatings are considered. The first is an epoxy-organic coating (labelled as E sample), already commercially used for medium-low critical products, and the second one is a polyester coating that is not on the market yet. Along with the standard epoxy coating, another organic coating with the same type of resin has been developed using a modified hardener with higher functionality in order to increase the crosslinking of the organic coatings, and therefore, in theory, a higher density and chemical resistance. The hardener, in fact, leads to an increase of the coating Tg value, as a consequence of a stronger crosslinking level. This coating is labelled as E-Mod. Standard polyester coating (P sample), which falls into the class of organic coatings not containing BPA, is not yet commercially available and is still at the preliminary stage of the production tests. Additionally, in this case the standard polyester coating has been modified with higher functionality in order to increase the crosslinking density (sample P-Mod). To have a comparison with the most commonly used liquid organic coating, the standard epoxy (Ref sample) has been also considered in pack test. Table 1 summarizes the different studied organic coatings samples with their characteristics and labels. The five types of coatings were supplied by Akzo Nobel Powder Coatings S.p.A. (Como-Italy). Typical aluminum alloy AA1050 (99.50% Al) aerosol cans obtained from a rolled coil, were used as substrate. Before deposition of a protective layer, the surface aluminum panel was degreased in acetone subjecting it to a 10 min ultrasound stirring step. The powder coating was applied with a spray gun followed by 20 min at 190 • C as curing process. The reference liquid organic layer was realized by spray deposition followed by a curing process at 230 • C for 7 min. The thickness of the obtained coatings was equal to 15-25 µm. Figure 1 shows the coating of sample E, taken as a reference, as it is representative of the different coatings under examination. In fact, all the samples possess a compact layer, homogeneous in thickness and without microscopic defects such as bubbles or porosity. Considering the bending necessity of the coated aluminum on the top for the construction of the complete aerosol can, as the first indication, the aluminum panel was bent using a cylindrical mandrel with a 3 mm bar following ISO 1519, without the presence of visible cracks or layer delamination. Finally, the samples (175 × 25 mm and 0.5 mm of thickness) for the different tests were obtained by mechanical cutting from the coated aerosol cans.
In addition, the deformation suitableness is connected with the hardness of the materials. Considering this aspect, the hardness of the coatings, before and after degradation tests, was collected as indicated in Section 2.2.
The most critical point in testing these coatings is to find an environment for the simulation of the service life of the protective layers in contact with the cosmetic products. Due to the complex products and the presence of volatile component, it is not possible to easily reproduce in laboratory the aggressive environments with the contemporaneous presence of liquid and gas phases with different chemical compounds. Industry companies recognize the pack test as a reliable procedure for the characterization of the coating resistance in operating environments, in case of protective layers used for aerosol cans under pressure. For the pack test conduction, in order to have different aggressive environments with specific chemical compositions, three of the most common hair lacquers and one spray deodorant present in the cosmetics market were chosen. In this way it was possible to cover all the representative critical classes and the various combinations of critical compounds for coatings. The used products are mentioned, but the quantities of the individual components are not specified. The chosen products are taken as a reference, as each manufacturer slightly changes the chemical concentrations, so it is impossible to find a product whose composition is perfectly equal to that of another product. The four environments, however, meet the requirements of the study presented in this work, as they are representative of the different levels of aggressiveness present in commercial products.
• Environment 3: Simultaneous presence of DME, water and ethanol (Taft Classic). This combination should be the most aggressive. • Environment 4 (obtained from deodorant): Presence of a mixture of propane-isopropane-butane, with low aggressiveness, and it also has three phases unlike other products, a solid, a liquid and a gaseous one (Nivea Invisible Black and White).
The hair lacquers (environments 1, 2 and 3) are mainly composed of three components: filmogenes, vinyl synthetic resins (such as vinyl acetate, vinyl-pyrrolidone or acrylates) that create a resistant film; solvents, which keep in solution the film-forming compounds and allow the spraying process; and propellants, for the product dispensing. There is also a portion of additives (preservatives, fragrances, surfactants, etc.), and for some products even a percentage of water, which is preferable to keep low, because it is desirable to dry the product once stored on the hair as quickly as possible [18][19][20]. Alcohols, predominantly ethanol and isopropanol, are the mainly used solvents, while dimethyl ether (DME), propane, butane and isobutene are widely exploited as propellants. DME is a water-soluble ether, easy to liquefy even at low pressures (such as in the aerosol can) and is also an excellent solvent as well as a propellant [20]. The various samples have been labeled with a letter, indicating the type of resin used in the coating, followed by a number, indicating the environment to which they are subjected, as shown in Table 2. For example, sample P_2 is made of a standard polyester resin that has been subjected to the test environment number 2. Table 2. Sample labels.
Coating Type Environment
Sample Label
Characterization
The pack test consists of an 85 mL cylindrical waterproof container (200 mm height × 40 mm diameter), shown in Figure 2, which allows the cosmetic products (in this case hair lacquers) to be injected inside. Here the coated samples are located and subsequently submitted to the substances they will find in use. A part of the sample is immersed in the liquid and the other part is in contact with a gas phase, as in commercially available cans. The liquid fraction inside the container is equal to 50% of the entire volume, as in many commercial products. This aspect is very important, as the two phases (liquid and gaseous), present in these products, interact differently with the sample. The peculiarity of the pack test is that it allows the contents of the aerosol can to be emptied in the test environment, keeping the pressure constant and avoiding the loss of gas and volatile substances. The vessel has been therefore placed in an oven at 55 • C for 8 days, to speed up the processes. Thus, the "aged" samples have been subjected to several characterization tests (hardness, adhesion, adsorption, electrochemical test) to evaluate the integrity of the coating and its environmental resistance. In contrast to behavior during service life, the samples that were coated and tested with the pack test present edges (the samples consist of coated aluminum strips cut from the can), and they are not as continuous as the material inside the typical industrial aerosol can. This fact must therefore be taken into account in the coating resistance evaluation, as the samples possess weak spots at the edges, where defects and delamination can occur. However, this test is accepted and widely used by the organic coating producers and cosmetic companies. The samples were characterized before and after the pack test to compare the behavior of different kinds of coatings as well as to evaluate any leakage of property produced by the exposure to various environments. Therefore, the property loss was evaluated following the pack test by measuring any weight differences, as well as hardness tests. The hardness property of the coatings was measured with the Buchholz indentation test following the EN ISO NF 2815-2003 standard, both in the part of the sample immersed in the liquid and in the part in contact with the gases, in order to assess which part of the product (gaseous or liquid) has a greater interaction with the coating. Different characterization tests were carried out, such as FTIR analysis using a Varian Excalibur 4100 instrument at 4000-400 cm −1 , and differential scanning calorimetry (DSC) analyses, performed by using a Mettler DSC30 calorimeter. The hardness measurements were carried out by means of an ARW Misure indenter, following the UNI-EN-ISO 2815 standard. Finally, the coating defectiveness were analyzed by optical stereomicroscope (Nikon SMZ25) observation. Considering the protection properties of the coatings, EIS measurements were carried out at 15 mV (peak-to peak) and 10 5 -10 −2 Hz with the potentiostat Parstat 2273 and software PowerSuit ZSimpWin. The cell setup was composed of a platinum counter electrode, and an Ag/AgCl reference electrode (+207 mV versus SHE) immersed in a 0.1M sodium sulphate solution.
Each type of coating was studied by subjecting three samples of the same type to each of the four Pack Test environments. Consequently, all subsequent characterization analyses were reproduced on three coatings per sample (see Table 2-Sample label column). Table 3 shows the behavior of the samples subjected to pack test in the different test environments, observed by optical microscope analysis. As for the epoxy coating samples, environments 1 and 3 are very harmful, leading to the complete delamination of the coatings. Environment 2 appears instead to be less aggressive, especially for sample E-Mod, which is free of defects.
Pack Test
Polyester coatings, instead, seem to better behave if exposed in environment 1, especially in the case of sample P, which however shows high defectiveness in environment 2.
Finally, for all 4 series of samples, environment 4 is harmless, as it does not cause defects in protective layers. The Ref sample, on the other hand, does not show macroscopic defects, even after exposure in aggressive environments.
As described by Table 3, it seems that epoxy coatings are less resistant than the polyester based ones. However, the latter degradation level (though not completely delaminated) is very high (loss of hardness, solid residue inside the lattice, loss of corrosion barrier properties, etc.), and most of these coatings are unprotected.
The changes made to improve coating E (epoxy-based) have deteriorated its chemical resistance; in fact, this sample presents delamination in most environments.
As already explained, the pack test shows some critical issues. The edges in the test samples represent weak points that are not present in the final packaging application. Delamination may be due in some cases to the presence of edges and defects caused by sample cutting, and may therefore start more easily from these areas. For example, in samples E, shown in Figure 3, the bubbles are formed mainly near the edges, confirming the previous hypothesis. One of these bubbles is visible on the right of the figure: it has very high dimensions, with the diameter that exceeds 1000 µm in length.
By analyzing the pack test's results, it is possible to identify the most responsible compounds for the loss of coating properties. Epoxy resins fail for environments 1 and 3, which contain DME. In fact, in addition to being a propellant, DME is a good solvent, and it will be found both in the liquid phase and in the gaseous one. Epoxy resins in particular suffer from contact with DME, because it is a similar compound presenting an epoxy ring. These considerations are not valid for the Ref sample, which is also epoxy based, as it has different cross-linking times and a more compact structure. In addition, it contains bisphenol-A (BPA), which the healthcare packaging industry would like to eliminate from their products. Polyester based resins undergo an accentuated degradation, which sometimes does not lead to delamination, for environments 2 and 3. There are the presence at the same time of both water and alcohol, a particularly critical blend for polyester resins. The weight variations of the samples were collected before and after the Pack Test, and checked again 30 days after the degradation test to study a possible evaporation of uptake compounds in organic coatings. During this period, the samples were stored at ambient temperature and atmosphere. Table 4 shows the results of the weight difference measurements carried out on samples immediately after the pack test, and then after 30 days. The selected samples did not show delamination or bubbles formation during the pack test. All organic coatings absorb a certain amount of product they come into contact with. Resins that do not undergo significant degradation have a modest weight increase, while delaminate resins absorb high amounts of solvents (and other compounds), about 10 times more than the other coatings. Therefore, there is a correspondence between absorption and failure of lattice. The weight variation measurements, made 30 days after the end of the pack test, show the tendency of mixed epoxy-polyester resins to have a weight gain while polyester resins show a weight loss. In both cases, the most degraded coatings have more weight variations. The weight loss of polyester samples is due to the fact that some parts of the protection layer are brought to solution, a symptom of a lattice coherence loss and a diffuse degradation of the protection layer. The increase of weight is instead due to the solid residue of the products used in the pack test; the various compounds penetrate and then remain within the coating lattice (for example the filmogenic part of the hair lacquers). In both cases, because these phenomena occur, the coating lattice loses consistency and lets other compounds enter into it, with consequent coating degradation. The Ref coating, on the other hand, shows no significant weight variations, with minimal contamination of the lacquer residues.
Hardness Measurements
The hardness property of the samples was evaluated using the Buchholz indentation technique, expressing the hardness value as 100/L, where L represents the average measured length of the grooves made during the measurement. The hardness of the coatings before the degradation tests is too high to be accurately measured with this technique, higher than 130-140 degrees Buchholz hardness. At the same time, the hardness test could not be performed on completely delaminated samples. Table 5 shows the results of the samples hardness tests, performed for those samples which got in contact with both gaseous and liquid environments. Some of the obtained results are outside the validity range of the standard, since the measured groove is too long in relation to the thickness of the coating. Therefore, some particularly low hardness values are not acceptable from the regulatory point of view, but are in any case reported and treated as reliable data, as they give an indication of the condition of the protective layer. It seems that the coatings suffer the most when immersed in the liquid of the environments 1 and 3, while in environment 2 there is a loss of hardness in both parts of the sample. On the contrary, the samples that have been subjected to environment 4 exhibit less hardness in the part in contact with the gases. This result is in line with expectations, in fact the solvents contained in the hair lacquers, which are the major responsible for the degradation of the resins, are in the liquid phase; ethyl alcohol, water and even a part of the DME present in the liquid phase can be found in environments 1, 2 and 3. The 4th environment does not have any particular solvents, but in the gas phase there are propellants (propane, butane and isobutane) that interact with the part of the sample in contact with them. Environment 2 also contains the propellants that are present in gas phase. In fact, there is a loss of hardness in particular in the sample part in contact with the gas. The various tests and coatings can be compared. Environment 2 reduces hardness in comparison with environment 1; environment 3 does not show enough undamaged coatings to be able to make concrete considerations; while environment 4 has little effect on coatings hardness change. Considering the type of resin, epoxy coatings maintain greater hardness than polyester: even for samples with delaminated zones or with bubbles, epoxy coatings still retain good hardness and have no weight leakage. Although the polyester-based samples withstand better in environments 2 and 3, as shown in Table 3, these coatings suffer instead from high weight variation (Table 4) and loss of hardness (Table 5), both symptomatic phenomena of a high degradation of the resin. These tests show that the various substances that come in contact with the coatings lead to a loss of coherence of the polymeric lattice and hence to a decrease in properties.
DSC Measurements
The differential scanning calorimetry (DSC) analysis were performed to measure the Tg of the various used organic coatings, showed in Table 6. Table 6. Tg values of the 4 samples series.
Sample
Tg ( • C) The measured Tg values show that for epoxy-polyester mixed resins there is a substantial increase in crosslinking with the hardener change (sample E-Mod), while for polyester resins there is only a few degrees' increase. Tg of about 170 • C represents a high value for epoxy-polyester coatings, which therefore exhibit a high degree of brittleness. This is a typical value for a coating, which needs to present at the same time a good chemical resistance and a sufficient ductility for the aluminum coated foils bending without crack nucleation.
To highlight the influence of the aggressive environment on the polymeric matter, FTIR analysis are carried out on the samples after the contact with an environment possessing an intermediate aggressiveness such as environment 2. Environment 3, in fact, results too much aggressive, leading to the total degradation of the polymer, while in environment 1 the interactions result very light, with minimal change in polymeric structures. Figure 4 shows the FTIR spectra of sample E-Mod_2 (a), sample E-Mod_2 in contact with the liquid phase during the Pack Test (b) and sample E-Mod_2 in gas (c). The measurements (b) and (c) were carried out 30 days after the end of the Pack Test. The image shows the spectra of only sample E-Mod, as they are the least performing. All the powder coatings, after exposure to the liquid environment of the Pack Test, show the loss of the anhydrides (1785 cm −1 ), which are part of the hardening agent. The anhydrides in contact with ethyl alcohol or water should in fact open and react by forming an ester group and a carboxylic acid; the greater intensity of the peak of these functional groups is also explained (2800-2600 cm −1 range) [21]. In the sample in contact with the liquid (b), a decrease in intensity of the peaks at 1500 and 1250 cm −1 is observed, associated to the C-O bonds, due to an interaction between polymer and alcohol and isobutene present in the liquid part of the environment 2 [22,23]. Following the pack test, all the samples present peaks associated with the solid residues of the used products (environments 1, 2 and 3). In particular, the compounds, present in these environments, absorb in the spectrum region between 1800 cm −1 and 1000 cm −1 , and it is precisely in these areas that the part of the sample in contact with the liquid phase presents the greatest modifications ( Figure 4). The Ref sample shows minimal contamination and modification of the FTIR spectrum, confirming the lower tendency to deterioration in contact with this type of products, as already observed in Table 4, with non-significant weight variations.
Electrochemical Impedance Spectroscopy
Impedance tests were performed in a 0.1 M sodium sulphate solution; all the results shown below were collected after a day of immersion. The chosen test solution is not very aggressive, as this type of analysis was carried out to simply point out the coating's damage, without influence on the degradation process. It was not possible to carry out these tests for all samples after the Pack Test, because it would not make sense to perform this type of analysis on delaminated or seriously compromised coatings. For example, Figure 5 shows the Bode impedance modulus spectra of sample E, before the pack test and after a 24 h of exposure in the different test environments. Before the pack test, the coating is practically free from defects, presenting very high protection properties, with a 10 11 Ohm cm 2 order of magnitude of impedance modules, typical of protective powder coatings. Instead, after the pack test, a decrease in the value of the impedance module measured at low frequencies (10 −2 Hz) is observed. For sample E, and in general for mixed epoxy-polyester (E-Mod) samples, the exposure in environments 1-2-3 produces a decrease of two to three orders of magnitude, due to the presence of alcohol and water. The exposure in environment 4 leads to a less severe degradation of the coating with a limited decrease of impedance modulus. However, in all cases, the impedance modulus at low frequencies remains higher than 10 6 Ohm cm 2 , indicating the permanence of protection properties [24][25][26]. The polyester resin (sample P) instead, and in general all the polyester-based samples, undergoes a degradation consistent with environments 1, 2 and 3, as shown in Figure 6.
The impedance decrease results are very high, lower than the protection threshold. Polyester coatings in fact particularly suffer from the contact with alcohol. For environment 4, which presents only propellant, the impedance module remains unchanged and very high, symptom of the low aggressiveness of this environment, as previously confirmed by Tables 3 and 4. Figure 7 reported the EIS Bode modulus spectra comparing the behavior of the four types of samples after being exposed in environment 2: it shows that polyester samples (P and P-Mod) have an extremely low value of |Z|, while E (epoxy-based) maintains a just acceptable behavior. Despite this, the modification of the epoxy resin leads to a decrease in corrosion resistance properties. Sample E-Mod, in fact, degrades and shows a very similar impedance module to that of the polyester resins. In the electrochemical impedance measurements, the test in environment 2 has proved to be critical for sample E-Mod, as seen in Figure 7, but this result was not found in other tests. The E-Mod_2 sample appeared to be intact and without visible defects after the pack test. However, it must be considered that the EIS measurements are more sensitive to the presence of defects in the organic layers, and therefore they are more representative of the true state of degradation of the coating. For example, at high frequencies there is the time constant relative to the organic coating. The difference in modulus observed is relative to the fact that, except for E_2, the coatings show low protection and therefore a very low coating resistance.
Conclusions
In this paper, the powder coatings were studied as alternative to solvent tradition organic coatings in the cosmetic matter packaging. The pack test allowed us to expose the different types of coatings in real environments, representative of the life in use of the products, both in the gaseous and liquid phase.
The coatings seem to suffer the most from the contact with the liquid phase of the test environment. A visual observation allows to highlight only very evident degradation phenomena, with delamination and blisters. To get an idea of the real drop in protection, it is necessary to use EIS measures that show also the formation of microscopic defects that drastically reduce the protective properties under the minimum required threshold. The chemical resistance of the powder coatings has proved to be appropriate only for the less aggressive environment, where the critical compounds are only propellants, propane, butane and isobutane. Following exposure to other environments, where alcohol, water and DME are present, most samples have been susceptible to layer delamination, blisters formation and degradation of protection properties.
Considering epoxy coatings, the collected data showed that these layers particularly suffer from the contact with dimethyl ether, which is a good solvent for this resin, since it is a compound similar to polyether, which form the epoxy resin lattice chains. Compared to epoxy coatings, polyester coatings showed that, when exposed in environments with different critical levels (1, 2 and 3), they lose their corrosion protection properties. This happens even to those coatings that, following the pack test, have no particular visible defects and seem more intact than the respective epoxy samples. The polyester coatings also suffer from a greater loss of hardness, a higher solvent absorption and weight variations, which are symptom of degradation and loss of coherence in the lattice, more pronounced than the epoxy ones. On the other hand, the reference liquid coating Ref does not undergo any appreciable degradation or interactions with the various compounds it comes into contact with. To conclude, this study on powder coatings for application in the field of cosmetics shows that there is still a great deal of distance between powder and liquid coatings. The nature of the resins that can be used with powder technology is still too limiting for now, and being able to create homogeneous, thick and defect-free coatings is not enough to overcome the problem of chemical affinity with the compounds these coatings come in contact with. | 7,068.6 | 2020-03-26T00:00:00.000 | [
"Materials Science"
] |
Peer-to-peer loan acceptance and default prediction with artificial intelligence
Logistic regression (LR) and support vector machine algorithms, together with linear and nonlinear deep neural networks (DNNs), are applied to lending data in order to replicate lender acceptance of loans and predict the likelihood of default of issued loans. A two-phase model is proposed; the first phase predicts loan rejection, while the second one predicts default risk for approved loans. LR was found to be the best performer for the first phase, with test set recall macro score of 77.4%. DNNs were applied to the second phase only, where they achieved best performance, with test set recall score of 72%, for defaults. This shows that artificial intelligence can improve current credit risk models reducing the default risk of issued loans by as much as 70%. The models were also applied to loans taken for small businesses alone. The first phase of the model performs significantly better when trained on the whole dataset. Instead, the second phase performs significantly better when trained on the small business subset. This suggests a potential discrepancy between how these loans are screened and how they should be analysed in terms of default prediction.
Comments to the Author(s)
A list of acronyms would be most useful, with the constituent words appearing in brackets next to each acronym.
Owing to its central role in the performed research, P2P should be defined at the very beginning of the manuscript.
Artificial neural networks belong to the computational (rather than artificial) intelligence paradigm.
The "significant step forward to applying Big Data and Artificial Intelligence techniques to P2P" is not made clear.
The review of the literature is not always pertinent to the main theme of the submission.
It is not clear why the two datasets have been used concurrently rather than independently. It is also not obvious how or why the results obtained can be transferred to other datasets.
Carrying on with the previous point: How uniform are the datasets over the kind of loan requested? Their discrepancies over parameters as well decision type should be clearly stated. Would it be to advantage to treat each category independently (in a "one against the others" fashion)?
Since imputation is below 10%, both the full and the reduced datasets could be used for training and testing, with the results compared and important derivations made.
Along the same lines, how uniform is the data over the kind of loan requested over the two datasets? Would it be to advantage to treat (consider, in terms of training/testing and results) each dataset independently, thus simplifying the problem as well as the implemented methods?
At present Section 2.b is quite descriptive, with the implemented procedure not being adequately detailed to allow its duplication by the interested reader.
Why have the specific ANN architectures been selected? Different neural network training criteria including • pertinent architectures as well as nodes per layer; • earlier termination of the training stage; • cross-validation on the dataset (e.g. five-, 10-and/or leave-one-out cross-validation, with the folds being created either at random, or following ordering of the patterns with -for instance, for five-fold cross-validation -the 1st , 6th, 11th, … pattern belonging to the first fold, the 2nd, 7th, 12th etc. to the second fold, and so forth to the fifth fold), so that each fold contains the same number of patterns which extend over the entire problem space, instead of just a "split between training and validation sets" should be investigated.
The authors should ensure that all methodologies, metrics, data handling techniques etc. are accompanied by the corresponding primary references.
Section 3 contains information that should be moved to the previous section.
If logistic regression is, indeed, as successful as stated in section 3.a(iii), there is no need for more complicated (especially non-parametric) methodologies, which can add redundant and distorting detail to the problem methodology/solution and are not directly/easily (or even at all) expressed in a direct/parametric fashion.
The authors should ensure that the dataset is stationary; in case this is not so, alternative methodologies and/or on-line (re)training should be implemented.
Class imbalance is not optimal for ANN training; the appropriate measures should be taken in order to avoid training (and, thus, also) testing bias.
For this kind of problems, on-line training would be advisable so as to ensure the appropriateness/capability of the ANN to handle the changing (in time) data characteristics.
There is a considerable distance between linear and deep neural networks. Why has not an alternative (in-between) ANN architecture also been tested?
In all cases, it should be ensured that the number of ANN free parameters (weights and biases) is smaller than the number of training patterns. The authors should check for overfitting in the DNNs used, especially as the datasets are small (in relation to the size of the ANN).
What distinguishes test from recall patterns? Has no validation set been used? Testing can, by no means, be implemented on data that has been used for training.
At times it is not clear whether the aim is to mimic human decision making or to optimise/maximise recall as well as prediction.
The choice of features for the first stage should be fully justified.
If properly constructed and trained, the ANNs should be able to accurately learn the training patterns, as well as predict the test data.
A list of acronyms would be most useful, with the constituent words appearing in brackets next to each acronym.
Owing to its central role in the performed research, P2P should be defined at the very beginning of the manuscript.
Artificial neural networks belong to the computational (rather than artificial) intelligence paradigm.
The "significant step forward to applying Big Data and Artificial Intelligence techniques to P2P" is not made clear.
The review of the literature is not always pertinent to the main theme of the submission.
It is not clear why the two datasets have been used concurrently rather than independently. It is also not obvious how or why the results obtained can be transferred to other datasets.
Carrying on with the previous point: How uniform are the datasets over the kind of loan requested? Their discrepancies over parameters as well decision type should be clearly stated. Would it be to advantage to treat each category independently (in a "one against the others" fashion)?
Since imputation is below 10%, both the full and the reduced datasets could be used for training and testing, with the results compared and important derivations made.
Along the same lines, how uniform is the data over the kind of loan requested over the two datasets? Would it be to advantage to treat (consider, in terms of training/testing and results) each dataset independently, thus simplifying the problem as well as the implemented methods? At present Section 2.b is quite descriptive, with the implemented procedure not being adequately detailed to allow its duplication by the interested reader.
Why have the specific ANN architectures been selected? Different neural network training criteria including • pertinent architectures as well as nodes per layer; •earlier termination of the training stage; • cross-validation on the dataset (e.g. five-, 10-and/or leave-one-out cross-validation, with the folds being created either at random, or following ordering of the patterns with -for instance, for five-fold cross-validation -the 1st , 6th, 11th, … pattern belonging to the first fold, the 2nd, 7th, 12th etc. to the second fold, and so forth to the fifth fold), so that each fold contains the same number of patterns which extend over the entire problem space, instead of just a "split between training and validation sets" should be investigated.
The authors should ensure that all methodologies, metrics, data handling techniques etc. are accompanied by the corresponding primary references.
Section 3 contains information that should be moved to the previous section.
If logistic regression is, indeed, as successful as stated in section 3.a(iii), there is no need for more complicated (especially non-parametric) methodologies, which can add redundant and distorting detail to the problem methodology/solution and are not directly/easily (or even at all) expressed in a direct/parametric fashion.
The authors should ensure that the dataset is stationary; in case this is not so, alternative methodologies and/or on-line (re)training should be implemented.
Class imbalance is not optimal for ANN training; the appropriate measures should be taken in order to avoid training (and, thus, also) testing bias.
For this kind of problems, on-line training would be advisable so as to ensure the appropriateness/capability of the ANN to handle the changing (in time) data characteristics.
There is a considerable distance between linear and deep neural networks. Why has not an alternative (in-between) ANN architecture also been tested?
In all cases, it should be ensured that the number of ANN free parameters (weights and biases) is smaller than the number of training patterns. The authors should check for overfitting in the DNNs used, especially as the datasets are small (in relation to the size of the ANN).
What distinguishes test from recall patterns? Has no validation set been used? Testing can, by no means, be implemented on data that has been used for training.
At times it is not clear whether the aim is to mimic human decision making or to optimise/maximise recall as well as prediction.
The choice of features for the first stage should be fully justified.
If properly constructed and trained, the ANNs should be able to accurately learn the training patterns, as well as predict the test data.
20-Dec-2019
Dear Mr Turiel, The editors assigned to your paper ("P2P Loan acceptance and default prediction with Artificial Intelligence") have now received comments from reviewers. We would like you to revise your paper in accordance with the referee and Associate Editor suggestions which can be found below (not including confidential reports to the Editor). Please note this decision does not guarantee eventual acceptance.
Please submit a copy of your revised paper before 12-Jan-2020. Please note that the revision deadline will expire at 00.00am on this date. If we do not hear from you within this time then it will be assumed that the paper has been withdrawn. In exceptional circumstances, extensions may be possible if agreed with the Editorial Office in advance. We do not allow multiple rounds of revision so we urge you to make every effort to fully address all of the comments at this stage. If deemed necessary by the Editors, your manuscript will be sent back to one or more of the original reviewers for assessment. If the original reviewers are not available, we may invite new reviewers.
To revise your manuscript, log into http://mc.manuscriptcentral.com/rsos and enter your Author Centre, where you will find your manuscript title listed under "Manuscripts with Decisions." Under "Actions," click on "Create a Revision." Your manuscript number has been appended to denote a revision. Revise your manuscript and upload a new version through your Author Centre.
When submitting your revised manuscript, you must respond to the comments made by the referees and upload a file "Response to Referees" in "Section 6 -File Upload". Please use this to document how you have responded to the comments, and the adjustments you have made. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response.
In addition to addressing all of the reviewers' and editor's comments please also ensure that your revised manuscript contains the following sections as appropriate before the reference list: • Ethics statement (if applicable) If your study uses humans or animals please include details of the ethical approval received, including the name of the committee that granted approval. For human studies please also detail whether informed consent was obtained. For field studies on animals please include details of all permissions, licences and/or approvals granted to carry out the fieldwork.
• Data accessibility It is a condition of publication that all supporting data are made available either as supplementary information or preferably in a suitable permanent repository. The data accessibility section should state where the article's supporting data can be accessed. This section should also include details, where possible of where to access other relevant research materials such as statistical tools, protocols, software etc can be accessed. If the data have been deposited in an external repository this section should list the database, accession number and link to the DOI for all data from the article that have been made publicly available. Data sets that have been deposited in an external repository and have a DOI should also be appropriately cited in the manuscript and included in the reference list.
If you wish to submit your supporting data or code to Dryad (http://datadryad.org/), or modify your current submission to dryad, please use the following link: http://datadryad.org/submit?journalID=RSOS&manu=RSOS-191649 • Competing interests Please declare any financial or non-financial competing interests, or state that you have no competing interests.
• Authors' contributions All submissions, other than those with a single author, must include an Authors' Contributions section which individually lists the specific contribution of each author. The list of Authors should meet all of the following criteria; 1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published.
All contributors who do not meet all of these criteria should be included in the acknowledgements.
We suggest the following format: AB carried out the molecular lab work, participated in data analysis, carried out sequence alignments, participated in the design of the study and drafted the manuscript; CD carried out the statistical analyses; EF collected field data; GH conceived of the study, designed the study, coordinated the study and helped draft the manuscript. All authors gave final approval for publication.
• Acknowledgements Please acknowledge anyone who contributed to the study but did not meet the authorship criteria.
• Funding statement Please list the source of funding for each author.
Once again, thank you for submitting your manuscript to Royal Society Open Science and I look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch.
Kind regards, Royal Society Open Science Editorial Office Royal Society Open Science<EMAIL_ADDRESS>on behalf of Prof Marta Kwiatkowska (Subject Editor<EMAIL_ADDRESS>Associate Editor's comments: The two reviewers have a number of suggestions and queries that should improve your manuscript -we would urge you to take their recommendations seriously, and be sure to not only include the requested updates, but provide an explanation of what changes have been made, or -even more importantly -if you choose not to include a change, explain why not. We'll look forward to receiving the revision in due course. Here are three points that shall be addressed to do at least the based model validation.
1.
Authors do a lot of manual tuning for obtained models. There is not enough information what was the evaluation scheme. I would expect that one evaluation of the data will be the out-of-time sample, external sample never saw by the model not never seen in the hyperparameter tuning.
2.
It is not clear if there was a nested CV performed or not. The grid search needs its own validation data, external from the out-of-sample validation data. The validation shall be described in larger details.
3.
Reporting AUC or Recall is far not enough. The literature is full of examples of overtrained blackboxes that are not validated. Authors shall do deeper post-hoc analysis of trained models with tools like Partial Dependency Profiles (https://pbiecek.github.io/PM_VEE/partialDependenceProfiles.html), Permutational feature importance (https://pbiecek.github.io/PM_VEE/featureImportance.html). Lots of packages for R and python to do this validation.
All proposed models (logistic regression, DNN and SVN) shall be xrayed with these tools, as it is very easy oto overfit and create an unfair model.
Reviewer: 2
Comments to the Author(s) A list of acronyms would be most useful, with the constituent words appearing in brackets next to each acronym.
Owing to its central role in the performed research, P2P should be defined at the very beginning of the manuscript.
Artificial neural networks belong to the computational (rather than artificial) intelligence paradigm.
The "significant step forward to applying Big Data and Artificial Intelligence techniques to P2P" is not made clear.
The review of the literature is not always pertinent to the main theme of the submission.
It is not clear why the two datasets have been used concurrently rather than independently. It is also not obvious how or why the results obtained can be transferred to other datasets.
Carrying on with the previous point: How uniform are the datasets over the kind of loan requested? Their discrepancies over parameters as well decision type should be clearly stated. Would it be to advantage to treat each category independently (in a "one against the others" fashion)?
Since imputation is below 10%, both the full and the reduced datasets could be used for training and testing, with the results compared and important derivations made.
Along the same lines, how uniform is the data over the kind of loan requested over the two datasets? Would it be to advantage to treat (consider, in terms of training/testing and results) each dataset independently, thus simplifying the problem as well as the implemented methods?
At present Section 2.b is quite descriptive, with the implemented procedure not being adequately detailed to allow its duplication by the interested reader.
Why have the specific ANN architectures been selected? Different neural network training criteria including • pertinent architectures as well as nodes per layer; • earlier termination of the training stage; • cross-validation on the dataset (e.g. five-, 10-and/or leave-one-out cross-validation, with the folds being created either at random, or following ordering of the patterns with -for instance, for five-fold cross-validation -the 1st , 6th, 11th, … pattern belonging to the first fold, the 2nd, 7th, 12th etc. to the second fold, and so forth to the fifth fold), so that each fold contains the same number of patterns which extend over the entire problem space, instead of just a "split between training and validation sets" should be investigated.
The authors should ensure that all methodologies, metrics, data handling techniques etc. are accompanied by the corresponding primary references.
Section 3 contains information that should be moved to the previous section.
If logistic regression is, indeed, as successful as stated in section 3.a(iii), there is no need for more complicated (especially non-parametric) methodologies, which can add redundant and distorting detail to the problem methodology/solution and are not directly/easily (or even at all) expressed in a direct/parametric fashion.
The authors should ensure that the dataset is stationary; in case this is not so, alternative methodologies and/or on-line (re)training should be implemented.
Class imbalance is not optimal for ANN training; the appropriate measures should be taken in order to avoid training (and, thus, also) testing bias.
For this kind of problems, on-line training would be advisable so as to ensure the appropriateness/capability of the ANN to handle the changing (in time) data characteristics.
There is a considerable distance between linear and deep neural networks. Why has not an alternative (in-between) ANN architecture also been tested?
In all cases, it should be ensured that the number of ANN free parameters (weights and biases) is smaller than the number of training patterns. The authors should check for overfitting in the DNNs used, especially as the datasets are small (in relation to the size of the ANN).
What distinguishes test from recall patterns? Has no validation set been used? Testing can, by no means, be implemented on data that has been used for training.
At times it is not clear whether the aim is to mimic human decision making or to optimise/maximise recall as well as prediction.
The choice of features for the first stage should be fully justified.
If properly constructed and trained, the ANNs should be able to accurately learn the training patterns, as well as predict the test data.
A list of acronyms would be most useful, with the constituent words appearing in brackets next to each acronym.
Owing to its central role in the performed research, P2P should be defined at the very beginning of the manuscript.
Artificial neural networks belong to the computational (rather than artificial) intelligence paradigm.
The "significant step forward to applying Big Data and Artificial Intelligence techniques to P2P" is not made clear.
The review of the literature is not always pertinent to the main theme of the submission.
It is not clear why the two datasets have been used concurrently rather than independently. It is also not obvious how or why the results obtained can be transferred to other datasets.
Carrying on with the previous point: How uniform are the datasets over the kind of loan requested? Their discrepancies over parameters as well decision type should be clearly stated. Would it be to advantage to treat each category independently (in a "one against the others" fashion)?
Since imputation is below 10%, both the full and the reduced datasets could be used for training and testing, with the results compared and important derivations made.
Along the same lines, how uniform is the data over the kind of loan requested over the two datasets? Would it be to advantage to treat (consider, in terms of training/testing and results) each dataset independently, thus simplifying the problem as well as the implemented methods?
At present Section 2.b is quite descriptive, with the implemented procedure not being adequately detailed to allow its duplication by the interested reader.
Why have the specific ANN architectures been selected? Different neural network training criteria including • pertinent architectures as well as nodes per layer; • earlier termination of the training stage; • cross-validation on the dataset (e.g. five-, 10-and/or leave-one-out cross-validation, with the folds being created either at random, or following ordering of the patterns with -for instance, for five-fold cross-validation -the 1st , 6th, 11th, … pattern belonging to the first fold, the 2nd, 7th, 12th etc. to the second fold, and so forth to the fifth fold), so that each fold contains the same number of patterns which extend over the entire problem space, instead of just a "split between training and validation sets" should be investigated.
The authors should ensure that all methodologies, metrics, data handling techniques etc. are accompanied by the corresponding primary references.
Section 3 contains information that should be moved to the previous section.
If logistic regression is, indeed, as successful as stated in section 3.a(iii), there is no need for more complicated (especially non-parametric) methodologies, which can add redundant and distorting detail to the problem methodology/solution and are not directly/easily (or even at all) expressed in a direct/parametric fashion.
The authors should ensure that the dataset is stationary; in case this is not so, alternative methodologies and/or on-line (re)training should be implemented.
Class imbalance is not optimal for ANN training; the appropriate measures should be taken in order to avoid training (and, thus, also) testing bias.
For this kind of problems, on-line training would be advisable so as to ensure the appropriateness/capability of the ANN to handle the changing (in time) data characteristics.
There is a considerable distance between linear and deep neural networks. Why has not an alternative (in-between) ANN architecture also been tested?
In all cases, it should be ensured that the number of ANN free parameters (weights and biases) is smaller than the number of training patterns. The authors should check for overfitting in the DNNs used, especially as the datasets are small (in relation to the size of the ANN).
What distinguishes test from recall patterns? Has no validation set been used? Testing can, by no means, be implemented on data that has been used for training.
At times it is not clear whether the aim is to mimic human decision making or to optimise/maximise recall as well as prediction.
The choice of features for the first stage should be fully justified.
If properly constructed and trained, the ANNs should be able to accurately learn the training patterns, as well as predict the test data.
Author's Response to Decision Letter for (RSOS-191649.R0)
See Appendix A.
Comments to the Author(s)
The manuscript has been improved to a satisfactory degree.
Decision letter (RSOS-191649.R1)
We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.
Dear Mr Turiel:
On behalf of the Editors, I am pleased to inform you that your Manuscript RSOS-191649.R1 entitled "P2P Loan acceptance and default prediction with Artificial Intelligence" has been accepted for publication in Royal Society Open Science subject to minor revision in accordance with the referee suggestions. Please find the referees' comments at the end of this email.
The reviewers and Subject Editor have recommended publication, but also suggest some minor revisions to your manuscript. Therefore, I invite you to respond to the comments and revise your manuscript.
• Ethics statement If your study uses humans or animals please include details of the ethical approval received, including the name of the committee that granted approval. For human studies please also detail whether informed consent was obtained. For field studies on animals please include details of all permissions, licences and/or approvals granted to carry out the fieldwork.
• Data accessibility It is a condition of publication that all supporting data are made available either as supplementary information or preferably in a suitable permanent repository. The data accessibility section should state where the article's supporting data can be accessed. This section should also include details, where possible of where to access other relevant research materials such as statistical tools, protocols, software etc can be accessed. If the data has been deposited in an external repository this section should list the database, accession number and link to the DOI for all data from the article that has been made publicly available. Data sets that have been deposited in an external repository and have a DOI should also be appropriately cited in the manuscript and included in the reference list.
If you wish to submit your supporting data or code to Dryad (http://datadryad.org/), or modify your current submission to dryad, please use the following link: http://datadryad.org/submit?journalID=RSOS&manu=RSOS-191649.R1 • Competing interests Please declare any financial or non-financial competing interests, or state that you have no competing interests.
• Authors' contributions All submissions, other than those with a single author, must include an Authors' Contributions section which individually lists the specific contribution of each author. The list of Authors should meet all of the following criteria; 1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published.
All contributors who do not meet all of these criteria should be included in the acknowledgements.
We suggest the following format: AB carried out the molecular lab work, participated in data analysis, carried out sequence alignments, participated in the design of the study and drafted the manuscript; CD carried out the statistical analyses; EF collected field data; GH conceived of the study, designed the study, coordinated the study and helped draft the manuscript. All authors gave final approval for publication.
• Acknowledgements Please acknowledge anyone who contributed to the study but did not meet the authorship criteria.
• Funding statement Please list the source of funding for each author.
Please note that we cannot publish your manuscript without these end statements included. We have included a screenshot example of the end statements for reference. If you feel that a given heading is not relevant to your paper, please nevertheless include the heading and explicitly state that it is not relevant to your work.
Because the schedule for publication is very tight, it is a condition of publication that you submit the revised version of your manuscript before 14-May-2020. Please note that the revision deadline will expire at 00.00am on this date. If you do not think you will be able to meet this date please let me know immediately.
To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre, where you will find your manuscript title listed under "Manuscripts with Decisions". Under "Actions," click on "Create a Revision." You will be unable to make your revisions on the originally submitted version of the manuscript. Instead, revise your manuscript and upload a new version through your Author Centre.
When submitting your revised manuscript, you will be able to respond to the comments made by the referees and upload a file "Response to Referees" in "Section 6 -File Upload". You can use this to document any changes you make to the original manuscript. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response to the referees.
When uploading your revised files please make sure that you have: 1) A text file of the manuscript (tex, txt, rtf, docx or doc), references, tables (including captions) and figure captions. Do not upload a PDF as your "Main Document". 2) A separate electronic file of each figure (EPS or print-quality PDF preferred (either format should be produced directly from original creation package), or original software format) 3) Included a 100 word media summary of your paper when requested at submission. Please ensure you have entered correct contact details (email, institution and telephone) in your user account 4) Included the raw data to support the claims made in your paper. You can either include your data as electronic supplementary material or upload to a repository and include the relevant doi within your manuscript 5) All supplementary materials accompanying an accepted article will be treated as in their final form. Note that the Royal Society will neither edit nor typeset supplementary material and it will be hosted as provided. Please ensure that the supplementary material includes the paper details where possible (authors, article title, journal name).
Supplementary files will be published alongside the paper on the journal website and posted on the online figshare repository (https://figshare.com). The heading and legend provided for each supplementary file during the submission process will be used to create the figshare page, so please ensure these are accurate and informative so that your files can be found in searches. Files on figshare will be made available approximately one week before the accompanying article so that the supplementary material can be attributed a unique DOI.
Once again, thank you for submitting your manuscript to Royal Society Open Science and I look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch.
Kind regards, Andrew Dunn Royal Society Open Science Editorial Office Royal Society Open Science<EMAIL_ADDRESS>on behalf of Prof Marta Kwiatkowska (Subject Editor<EMAIL_ADDRESS>Associate Editor Comments to Author: It appears a couple of minor queries are left to address, but otherwise the paper is ready for acceptance. Please provide a final revision incorporating these remaining changes.
Reviewer comments to Author: Reviewer: 2 Comments to the Author(s) The manuscript has been improved to a satisfactory degree. Decision letter (RSOS-191649.R2) We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.
Dear Mr Turiel,
It is a pleasure to accept your manuscript entitled "P2P Loan acceptance and default prediction with Artificial Intelligence" in its current form for publication in Royal Society Open Science.
Please ensure that you send to the editorial office an editable version of your accepted manuscript, and individual files for each figure and table included in your manuscript. You can send these in a zip folder if more convenient. Failure to provide these files may delay the processing of your proof. You may disregard this request if you have already provided these files to the editorial office.
You can expect to receive a proof of your article in the near future. Please contact the editorial office<EMAIL_ADDRESS>and the production office<EMAIL_ADDRESS>to let us know if you are likely to be away from e-mail contact --if you are going to be away, please nominate a co-author (if available) to manage the proofing process, and ensure they are copied into your email to the journal.
Due to rapid publication and an extremely tight schedule, if comments are not received, your paper may experience a delay in publication. Royal Society Open Science operates under a continuous publication model. Your article will be published straight into the next open issue and this will be the final version of the paper. As such, it can be cited immediately by other researchers. As the issue version of your paper will be the only version to be published I would advise you to check your proofs thoroughly as changes cannot be made once the paper is published.
Please see the Royal Society Publishing guidance on how you may share your accepted author manuscript at https://royalsociety.org/journals/ethics-policies/media-embargo/. As most balancing techniques are not optimal and there is no straightforward way to adjust with the chosen loss we downsample the data (i.e. the overrepresented non-default class). We have tried oversampling, but this caused overfitting to the repeated data points. We have now added this discussion to Section 2.b.ii. Future work may try this with bootstrapping to make the examples less similar.
*For this kind of problems, on-line training would be advisable so as to ensure the appropriateness/capability of the ANN to handle the changing (in time) data characteristics.
This may be again related to clarities in the dataset. We do not expect non-stationarites in this type of data. If in the default rates, this should not matter for a well-trained classifier In terms of online learning and re-training we do not have enough data yet/a long enough period yet but as the number of loans in growing super-linearly ( Figure 1) regular training should anyway give more importance to recent loans. We are hence unable to notice the difference yet. Due to imbalance (as many loans take time to default) we need a year gap between training data and the present where we would run the model.
There is a considerable distance between linear and deep neural networks. Why has not an alternative (inbetween) ANN architecture also been tested?
Other architectures are currently being tested and will be the subject of future work. The current work already analyses four families of models and delving into more would drive attention away from the focus of the paper on a first Deep Learning application to P2P lending and its discussion.
In all cases, it should be ensured that the number of ANN free parameters (weights and biases) is smaller than the number of training patterns. The authors should check for overfitting in the DNNs used, especially as the datasets are small (in relation to the size of the ANN). This is checked for in the explainability part Section 3.b where we open up the model and analyse features and more. As a side point, training parameters are in the order of 10 2-3 and data is in the 10 4-5 .
What distinguishes test from recall patterns? Has no validation set been used? Testing can, by no means, be implemented on data that has been used for training.
As explained above, we use cross validation for hyperparameter tuning and an out of sample test set to show results for the selected models. We have now outlined this more in detail in Section 2.b.ii and 3.a.v.
At times it is not clear whether the aim is to mimic human decision making or to optimise/maximise recall as well as prediction.
We have again clarified this by inserting the scheme in the Methods Section 2.b and better defined the difference between first and second phase. The first phase aims to mimic human decisions of acceptance/rejectance as they are its target label. The second phase aims to predict default risk based on factual default data.
The choice of features for the first stage should be fully justified.
We wish to clarify that there was very little choice as no features are excluded, but geographic ones as they are categorical features and bare no intrinsic meaning. These should be used, encoded with related information, in further work.
If properly constructed and trained, the ANNs should be able to accurately learn the training patterns, as well as predict the test data. This is indeed shown, as the model correctly interprets the features (explainability Section 3.b) and the selected models achieve high scores on out of sample test data in Section 3.a.v.
Response to minor revisions
We thanks the reviewers for the previous and current comments whicb have grealty contributed to improving our work.
Reviewer comments to Author: Reviewer: 2 Comments to the Author(s) The manuscript has been improved to a satisfactory degree. OK (no need for changes) Reviewer: 1 Comments to the Author(s) Minor things: -Instead of 'Partial Dependency' it should be 'Partial Dependence'. This has been modified throughout the paper -The OX axes in Figures 4 and 5 should be improved These have been improved (see figures) - Figures 6 and 7 should be complemented with line cart that show how the conditional average probability depends on the values of the variables. These have been added in the now Figures 7,9,11 -Links to the code should be in Chapter 6 and not in references Done (see chapter 6) -References are very incoherent, they need to be made more consistent. In particular, remove all links to the DOI. Links have been removed throughout references if not strictly necessary and reference format has been changed to improve readability. -The small inscriptions in Figure 2 are unreadable. It is worth to enlarge them. These have been enlarged and are now visible. | 8,848 | 2020-06-01T00:00:00.000 | [
"Computer Science"
] |
Typology Analyses and Strategic Stakeholders’ Mapping Using Network on Integrated Crops-Livestock Farming Systems
Stakeholders and their networks play prominent roles in developing the agricultural sector. For instance, the economic, social, and environmental indicators of farms are sustained by the involvement of stakeholders and other relevant parties. Therefore, exploring the importance and roles of actors has become strategic and vital to recognize. This research aims to determine the strategic stakeholders' typology and mapping using their network analyses on integrated crops-livestock farming systems in West New Guinea. The study was carried out in Manokwari using the focus group discussion on twenty various represented individuals, groups, and mass institutions. The queries discussed were based on background, resources delivery, inter-connectivity amongst actors, intervention, and innovation. The result showed that the stakeholders in mixed crop-livestock are dominated by individuals' that privately manage the farms officially in accordance with the laws. The result also showed that the farming systems in West New Guinea, experience real threats which need to be lowered to mitigate the turn-back effect. The top five shared resources are access, satisfaction, power, knowledge, and time allocation. These resources tend to stay longer to sustain the strong needs of the farms, which are dominated by positive similarity with varying ranges of correlation ranging from negative, neutral to positive. This is because the stakeholders are reluctant to deliver the intervention and innovation, therefore, those with low interest and power need to be promoted to high interest and power by using aids, guidance, and services from each actor in the mixed crop-livestock farms business.
result also showed that the farming systems in West New Guinea, experience real threats which need to be lowered to mitigate the turn-back effect. The top five shared resources are access, satisfaction, power, knowledge, and time allocation. These resources tend to stay longer to sustain the strong needs of the farms, which are dominated by positive similarity with varying ranges of correlation ranging from negative, neutral to positive. This is because the stakeholders are reluctant to deliver the intervention and innovation, therefore, those with low interest and power need to be promoted to high interest and power by using aids, guidance, and services from each actor in the mixed crop-livestock farms business.
INTRODUCTIONS
The crop-livestock sector is a mixed agricultural farming system recognized and run by many small-scale farmers worldwide. This type of farming is carried out by combining some commodities from crops and livestock. This system's trend is rapidly developed due to input efficiency, global climate changes, and consumer concerns, which are the goals of sustainable development. In line with consumers concern, people are now involved in determining products obtained from farms which are developed by involving relevant parties. Individuals, groups, and mass are involved in fulfilling and satisfying peoples' agricultural needs and consumers' preferences. In Europe and other Western countries, crops and livestock products are obtained from organic farms. This is due to the growing increase in consumers' concern about the production of healthy food without certain treatment. For instance, in some countries, caging animals in the compartment are forbidden by the animal welfare and right institution.
Similarly, the treatment of livestock with certain drugs and medicines is against some laws. This research question is based on the types of actors' involvement in crops-livestock farming, which qualifies them to play important rules to ensure the policy of right and welfare. It also aims to determine the ability of these institutions to represent consumers' interests and answer questions associated with people's and producers' concerns.
Policies ruled by the laws do not hamper consumers' interest by legalizing another, irrespective of the varying perceptions and constraints faced by mixed farming systems. According to Grimble & Wellard (1997), many stakeholders' publications are discussed without seeing and analyzing the background and back-bound of the actors. The analysis of actors and stakeholders' is qualitatively discussed by drawing diagrams, pictures, and connectivity lines. Furthermore, many analyses are carried out by quantitatively computing the pattern and relationship of the network. Muniesa (2015) stated that the shapes of actors in line with individual, group, and mass determine how actors have to be approached. Meanwhile, Hajjar et al. (2019) reported that law status and types of organization are the criteria of legality that play prominent roles, which also provides certainty and respect for involvement, besides trustworthiness. The roles of stakeholders and shareholders affect how the contribution is delivered in determining crop-livestock business beneficiary and production. This is explained in Iyai et al. (2016) study carried out in Manokwari, West Papua-Indonesia.
Mayulu & Sutrisno (2014) stated that understanding the background and backbound of the actors is of utmost importance. This is because the best fitted and appropriate actors play significant roles in promoting and sustaining cattle farming system particularly in Indonesia and specifically in West Papua. Iyai & Yaku (2015) reported several livestock farming systems in Manokwari, West Papua, where each is associated with a certain relationship and typical involvement of various interests. Therefore, it is urgently needed to deeply dig up the characteristics of the institutions and their performance in livestock development. Therefore, it is important to apply precise technical units of analyses to predict the relationships of related and relevant stakeholders in benefiting from the crop-livestock farming systems' economic and social objectives. Furthermore, stakeholders' or institutions' characteristics provide direction in executing and implementing programs that aid guidance and services in the near future.
One powerful social network analysis beside Gephi (Bastian et al., 2009), Netmap (Schiffer, 2007) and SmartPLS (Ringle et al., 2005), is Social Network Visualizer. Krupa et al. (2017) stated that the Social Network Analysis (SNA) is an adequate and appropriate software to compute network and relationship. Therefore, by mapping the stakeholders, institutions without power and interest, are identified, and they easily promote their roles, comprehensively. These multi-sectors of agriculture development need detailed positioning of the roles and responsibilities of the involved actors. Therefore, this study aims to portrait the typology of actors involved in the old traditional livelihood of crop-livestock farming systems in Manokwari, West Papua(Deny Anjelus Iyai et al., 2020).
METHOD Location and involved actors
This research was carried out in Manokwari, West Papua, with several organizations, groups, and individuals representing institutions, mass, and households. Relevant data were collected on the existing mixed crop-livestock farming business by seeking their consent over the phone and with an invitation letter. The focus group discussions and desk study from qualitative research (Moleong, 1991) were used to collect relevant data from research reports, policy documents, articles, daily newspapers, and magazines. This study is concerned with stakeholders and shareholders' roles in shaping and determining the development pattern of mixed croplivestock business in West Papua. Manokwari was set up and developed as one of the central developments of mixed crop-livestock farms in accordance with the Republic of Indonesia's national plans and by local livestock and veterinary provincial offices of West Papua province. All stakeholders were grouped into local citizens, government, finance institutions (banks), markets, private, and transportation.
Data collection
The collected data were related to organizational function and characteristics of the mixed crop-livestock business. These include shape, types, roles, effect, importance, and status of the organization. Data were also collected on the threats and turn-back effect towards mixed crop-livestock farming development. In determining the roles and presence of the stakeholders, the study also recorded the organization's resources, as well as the duration, continuity, power, and intervention.
Restaurants
Provide animal-based products for consumers.
Method of analyses
This research used the Social Network Visualizer (SocNetV) to analyze the power and flows of information amongst stakeholders. Kalamaras (2019) stated that SocNetV is a cross-platform, that is light and free of charged social-stakeholder related software in network analyses and visualization. The PCC matrix, similarity matrix (SM), power centrality (PC), and Hierarchical clustering (HCA) were used to visualize the graphs. The adjacency matrix of a social network, namely supplement no. 1 & 2, is a matrix where each element a(i,j) is equal to the weight of the arc from actor (node) i to j. When the actors are not connected, then a(i,j)=0. This computes the Cocitation matrix, C = A T * A. C, which is an n x n symmetric matrix where each element (i,j) is the number of actors that have outbound ties/links to both actors i and j. The diagonal elements, C ii , of the Cocitation matrix are equal to the number of inbound edges of i (in Degree). A key notion in SNA with a structural equivalence.
Figure 1. Mapping the involvement of actors amongst crop-livestock production systems
Mapping is used to map the relationships in a graph by creating classes or groups of actors with equivalent characteristics. One of the methods used to identify groups of structurally equivalent actors is to examine the relationships between them for similarity patterns. There are many methods used to measure the similarity or dissimilarity of actors in a network, with the Pearson Correlation Coefficient supported by SocNetV which enables it to create a pair-wise actor similarity/dissimilarity matrix. This method also actors to compute a pair-wise similarity matrix, where each element (i,j) is the ratio of tie (or distance) matches of i and j. In the case of Simple Matching, the similarity matrix depicts the ratios of exact matches of pairs of actors to others. When the element (i,j) equals 0.5, it means that actors i and j have the same available or unavailable ties that are present to other actors 50% of the time. These measures of similarity are particularly useful when ties are binary (not valued) or when it computes a correlation matrix, where the elements are the Pearson correlation coefficients between pairs of actors in terms of their tie profiles or distances (in, out or both). The Pearson product-moment correlation coefficient (PPMCC or PCC or Pearson's r) is a measure of the linear dependence/association between two variables X and Y. This correlation measure of similarity is particularly useful when ties are valued/weighted, thereby denoting strength, cost, or probability. According to Gil and Schmidt (1996), the Power Centrality (PC) is a generalized degree of centrality used for measurement. For each node u, this index sums its degree (weight 1), with the size of the 2nd-order neighborhood (weight 2), and size of the kth order neighborhood (weight k). Therefore, node u is more important compared to its immediate neighbors. This is followed by the nodes of the 2nd-order neighborhood, 3rd-order neighborhood, etc. For each node, the sum obtained is normalized by the total number of nodes in the same component, minus 1. This index can be calculated in both graphs and digraphs, however, it is usually best suited for undirected graphs. It can also be calculated in weighted graphs, although the weight of each edge (u,v) in E is always considered to be 1. Hierarchical clustering, also known as hierarchical cluster analysis (HCA) is a method used to build a hierarchy of clusters, based on their elements dissimilarity. In the SNA context, these clusters usually consist of network actors. This method takes the social network distance matrix as input and uses the Agglomerative "bottom-up" approach, where each actor starts in their cluster (Level 0). In each subsequent Level, with an ascending order of clustering hierarchy, clusters are merged into larger pairs until all actors end up in the same cluster. A measure of dissimilarity between sets of observations is used to determine the clusters to be combined at each level. This measure consists of a metric for the distance between actors such as the Manhattan distance and a linkage criterion, namely single-linkage clustering. This linkage criterion is essentially a definition of distance between clusters and differentiates between the different HCA methods. The result of Hierarchical Cluster Analysis is the clusters per level and a dendrogram. The concept of a clique, which is defined as a group of people that regularly and intensely interact with each other compared to others, is simple. This means that a group of people form a clique when connected to each other. A clique is also defined as the largest subgroup of actors in the social network that is directly connected to each other. In terms of graph theory, this notion is equivalent to the maximal sub-graph of the social network. The word maximal means that for each clique, the group of members is expanded to include many actors to prevent others' addition. A clique in Social Network Analysis essentially consists of several overlapping closed triads.
SocNetV applies the Bron-Kerbosch algorithm to determine all maximal cliques in an undirected or directed graph. This produces a census of all MAXIMAL cliques in the network and reports some useful statistics. The clique census report includes disaggregation by vertex and co-membership information. Information Centrality (IC) is an index suggested by Stephenson and Zalen (1989), which focuses on how information flows through many different paths. Unlike SC and BC, the IC metric uses all paths between actors weighted by the strength of tie and distance.
The IC' score is the standardized IC divided by the sumIC and can be seen as the proportion of total information flow controlled by each actor with the standard IC' values sum to unity, unlike most other centrality measures. This is because there is no known generalization of Stephenson & Zelen's theory for information centrality to directional relations. The index needs to be calculated only for undirected graphs and is more meaningful in weighted graphs/networks. Therefore, to compute this index, SocNetV drops all isolated nodes and symmetrizes the adjacency matrix even when the graph is a directed Algorithm (Wasserman & Khaterine, 1994). In order to calculate the IC index of each actor, a N x N matrix A from the symmetrized sociomatrix is created with: Aii=1+di, Aij=1 if (i,j)=0, and Aij=1−wij if (i,j)=wij. Furthermore, the inverse matrix of A is computed, for instance, C, using the LU decomposition. C can always be computed since the matrix A is always a diagonally strong, and invertible. Finally, IC is computed by the formula: ICi−1Cii+T−2⋅RN, where: T is the trace of matrix C (the sum of diagonal elements), and R is the sum of the elements of any row with a minimum value of IC.
The steps in running this SocNetV version 2.5 are shown in Figure 1. To analyze the intervention shared by organization, this study determined the intervention conducted by stakeholders. All data are collectively typed into a Microsoft Excel worksheet and tabled into the manuscript. Table 2 showed that the typology of organizations such as shapes, law status, types, and roles tends to affect threat and turn-back in establishing and delivering relationships and actions in mixed crop-livestock farming business. This portrait that mixed crop-livestock actors' development in West New Guinea was on the local and grassroots organization stage. National and International involved stakeholders are lagging behind for stimulating development. According to UNDP, the experience shared is similar to the West Papua and CIP-project in Wamena and Pegunungan Arfak, with no bargaining position used to determine the shapes and rate of crop-livestock development. The law of institutions determines the legality and power in the sounding policy of development, therefore, having access and trust for establishing cooperation and resources tends to induce the accelerated development of mixed crop-livestock farming business. The distinguishing status of stakeholders and shareholders enables easy-made and clear-contribution of delivering packages of the aids and services. This tends to lower the negative effect in the short-run, which enables actors to act with insurance. Direct threats are faced by many actors in crop-livestock farming system development. Therefore, serious action is needed to reduce the direct impact. There are various sources of threat, from animal health, wastes including livestock emission (Mariantonietta et al., 2017;Cardoso et al., 2016), forage management (Zanten et al., 2016) and price uncertainty (Asmarantaka et al., 2019). Therefore, Internal and external warning need to be addressed to avoid turn back effect. Table 3 shows the inventorying possibilities of offered resources needed as inputs to stimulate the development of the crop-livestock farming system and enhance farmer capacity, including its actors. Eleven components of resources are found, therefore, further policy and action are needed to arrange it for future establishment and prospects to achieve sustainable crop-livestock farming systems. Prolong period is used to show how serious stakeholders are in establishing livestock development, despite their sustainability with neutral and strong livestock development. Table 4 grouped actors with similar typology and characteristic, while figures 2, 3, and 4 are used to draw rich pictures and interpretations of the actor-network. There are also rich relationships and interlinked connectivity amongst actors. In figure 2 various linking actors with phenomenal attributes were created, showing the degree of mutual connectivity and analyzing the interlinked actors. The relationship between tables 2 and 3, as well as figures 2 and 3 enable the developing actors to be more precise in delivering resources and capacities to share aids and guidance needed for this service. Table 5 explores the computed relational actors and the network and interlinked actors, which consist of positive, neutral, and negative relationships. This means that a negative network needs the adaptation and adjustment of local conditions and targeted goals of crop-livestock development. Neutral relationship needs future intervention and innovation for driving the powers and interest in stimulating tangible roles and future actions. Table 6 investigated and recorded resources of further action that can be conducted. Policy, skills, and feed materials are the three top interventions used by actors. However, according to Table 6, the policy, space, and skill are the top three innovation programs, which means that the actors bring and deliver intervention based on these priorities. In general, the actors and donors convinced the receptors in promoting the development of mixed crop-livestock farming businesses in West New Guinea, Indonesia.
CONCLUSIONS
This study highlights the stakeholders in mixed crop-livestock, which are dominated by individual actors that privately manage the farms in accordance with law. These actors commonly act as stakeholders that are positively important and those that ruled the farms. The threats are real and need to be lowered as much as possible to mitigate the turn-back effect. The top five shared resources are access, satisfaction, power, knowledge, and time allocation. Those resources tend to stay longer to sustain the farms' strong needs with the relationship of actors positively dominated by similar correlations, which varies between negative, neutral to positive. However, this variation is due to the actors reluctant to deliver the intervention and innovation. Actors with low interest and low power need to be promoted to high interest and power by using aids, guidance, and services from each mixed crop-livestock farm business.
CONFLICT OF INTEREST
We certify that there is no conflict of interest with any financial, personal, or other relationships with other people or organization related to the material discussed in the manuscript. | 4,368 | 2020-12-17T00:00:00.000 | [
"Agricultural and Food Sciences",
"Environmental Science",
"Economics"
] |
Prediction of intrinsic topological superconductivity in Mn-doped GeTe monolayer from first-principles
The recent discovery of topological superconductors (TSCs) has sparked enormous interest. The realization of TSC requires a delicate tuning of multiple microscopic parameters, which remains a great challenge. Here, we develop a first-principles approach to quantify realistic conditions of TSC by solving self-consistently Bogoliubov-de Gennes equation based on a Wannier function construction of band structure, in presence of Rashba spin-orbit coupling, Zeeman splitting and electron-phonon coupling. We further demonstrate the power of this method by predicting the Mn-doped GeTe (Ge1-xMnxTe) monolayer—a well-known dilute magnetic semiconductor showing superconductivity under hole doping—to be a Class D TSC with Chern number of −1 and chiral Majorana edge modes. By constructing a first-principles phase diagram in the parameter space of temperature and Mn concentration, we propose the TSC phase can be induced at a lower-limit transition temperature of ~40 mK and the Mn concentration of x~0.015%. Our approach can be generally applied to TSCs with a phonon-mediated pairing, providing useful guidance for future experiments.
INTRODUCTION
The topological phase of superconductors (SC) has recently received intense research interest as the superconducting quasiparticles residing in the non-trivial gapless/zero-energy boundary states are considered a form of Majorana fermions. Majorana fermions are their own anti-particles 1 and obey the non-Abelian exchange statistics 2 , which can be utilized for topological quantum computation 3 . Topological superconductors (TSC) exhibit various exotic phenomena, including zero modes on the magnetic vortex 4 , "fractional" Josephson effect 5 , non-local correlation 6 , and thermal responses 7 . By now, the theoretical aspects of TSCs are reasonably well understood, but the experimental confirmation remains a great challenge due to the requirement of tuning multiple microscopic parameters like the Fermi level, magnetic field, temperature, etc. Hence, it is highly desirable to predict more TSCs and quantify experimental conditions to advance the field.
Unlike the successful first-principles prediction of electronic and topological materials, theoretical predictions of TSCs are challenging because of the uncertainty in the parameters used to construct Bogoliubov-de Gennes (BdG) Hamiltonian. Usually, only the pre-conditions of TSC, e.g., Rashba splitting 8 or topological properties [9][10][11][12] in the normal state of known SC, were analyzed using first-principles method, but not the topology of superconducting quasi-particles. Instead, effective models of TSC states are constructed with empirical parameters, at the best partially fit to the first-principles results 13 . Meanwhile, there is a parallel development beyond the mean-field approximation employing more realistic number-conserving approach 14,15 , which is yet to be made material specific. Moreover, conventional first-principles approaches that estimate the superconducting transition temperature (T c ) by employing the empirically McMillan's formula 16 or solving the Migdal-Eliashberg formula 17 cannot be applied to the cases involving spin-orbit coupling (SOC) and magnetism (internal or external). Therefore, more versatile and accurate methods to predict T c for SC as well as TSC are highly desirable.
In this article, we attempt to further extend first-principles calculations to the field of TSCs, by developing a versatile approach to quantify realistic conditions of TSC. We construct and solve self-consistently a material-specific first-principles BdG Hamiltonian, based on Wannier function (WF) construction of band structure, in presence of Rashba SOC, Zeeman splitting and electron-phonon coupling (EPC). Furthermore, we demonstrate the usefulness of this method by predicting the Mn-doped GeTe (Ge 1-x Mn x Te) monolayer to be a TSC by constructing a firstprinciples phase diagram in the parameter space of temperature and Mn concentration.
Generally, TSC materials can be classified as intrinsic or extrinsic, depending on the experimental conditions of realizing the non-trivial phase. Intrinsic TSCs exhibit inherently a nontrivial superconducting gap without the need of applying an external field or constructing a heterostructure. They may be pwave SCs with natural spin-triplet pairing 18,19 , such as Sr 2 RuO 4 20 , Cu/Sr/Nb-doped Bi 2 Se 3 21 and non-centrosymmetric SCs 22 , or swave SCs with an effective spin-triplet pairing resulting from helical spin-polarized states, such as the two-dimensional (2D) topological electronic states 23,24 , and 1D 25,26 and 2D Rashba electronic states [27][28][29] which belong to the so-called Class D TSC without time-reversal symmetry (TRS). Extrinsic TSCs employ the same physical mechanisms, but realization of their non-trivial properties requires applying external fields or constructing heterojunctions. To the best of our knowledge, all the known Class D TSCs formed by the s-wave superconductivity are 1 extrinsic, such as the semiconductor nanowire with strong SOC 30 , the ferromagnetic atomic chains 31 , the nanoscale magnetic islands 32 , the ferromagnet 33 , and the topological surface 34 and edge states 35 proximitized with conventional SCs with/without applying external magnetic field. Notably, the signature of TSCs observed by applying an external magnetic field in a superconducting material, e.g., FeTe 0.55 Se 0. 45 36 , epitaxial GeTe 37 and β-Bi 2 Pd thin film 38 indicates the possible existence of intrinsic Class D TSC without needing the external magnetic field, which will further enrich the physics of TSC, in the same perspective as from quantum Hall effect (with magnetic field) to anomalous quantum Hall effect (without).
Given the necessary conditions for realizing Class D TSCs with 2D Rashba electrons [27][28][29] , i.e., inversion symmetry breaking, Zeeman gap opening and superconductivity, the IV-VI compound GeTe with Mn doping, a dilute magnetic semiconductors with a ferromagnetic Curie temperatures T FM c up to~200 K for epitaxial layers on BaF 2 (111) substrate [39][40][41][42][43][44] , caught our attention. The superconductivity of GeTe with p-type doping due to Ge vacancy was confirmed as early as the 1960s 45,46 . It is also known as a ferroelectric material with rhombohedral layered, non-centrosymmetric structure below the ferroelectric Curie temperature of~700 K 47 . Recently, a gradual opening of Zeeman gap in the Rashba bands of GeTe with Mn doping was observed, attributed to the entanglement of ferromagnetic and ferroelectric order 48 . Also, a recent experiment has reported possible signatures of extrinsic TSC in GeTe film under external magnetic field 37 .
Specifically, we focus on the recent experimentally exfoliated GeTe monolayer 49 , which was predicted to be useful in optoelectronic devices and may be a type-II Ising superconductor upon slight hole doping 50,51 . We first show that GeTe monolayer inherits all the key characteristics of its bulk phase by using conventional first-principles calculation. Then, the firstprinciples BdG Hamiltonian was constructed via a WF scheme, through which we found that the GeTe monolayer with the hole concentration of~7.4 × 10 13 cm −2 becomes superconducting below~120 mK and the Ge 1-x Mn x Te monolayer is a Class D TSC with T c~4 0 mK characterized by a non-zero Chern number and chiral Majorana edge modes. A phase diagram of Ge 1-x Mn x Te is constructed by employing the developed first-principles approach to guide experimental detection of the predicted SC and TSC phase. Since both the exfoliated GeTe monolayer 49 and epitaxial Ge 1-x Mn x Te thin film already exist [39][40][41][42][43][44] , our prediction should be readily testable experimentally. Our approach provides a benchmark to make material-specific predictions of TSCs by using first-principles calculations.
RESULTS AND DISCUSSION
Crystal and electronic band structure The crystal structure of GeTe monolayer is shown in Fig. 1a, which is a (111) layer fragment of its bulk phase. Each Ge(Te) atom is bonded with three Te(Ge) atoms, forming a buckled honeycomb lattice. The in-plane lattice constant a and buckling height h was optimized to be~3.955 Å and~1.565 Å, respectively, in good agreement with previous report 50 . Due to the absence of inversion symmetry, a large Rashba splitting arises in the electronic band structure (Fig. 1b). The electronic states are doubly degenerate at the Γ and M points, forming the so-called Kramers pairs, while the degeneracy was lifted away from these time-reversal invariant points. For the four valence bands near the Fermi level of our interest, hereafter we name the lower (upper) two bands as the Rashba (Ising) bands for clarity, referring to their respective electronic spin-texture near the Γ point ( Supplementary Fig. 1).
To predict the TSC formed by 2D Rashba electrons [27][28][29] , we focus on the Rashba bands with a significant Rashba splitting coefficient α R = 0.66-0.76 eV Å. It is comparable with that of heavy metals Au(111) and Bi(111) surface 52,53 , but slightly smaller than that of bulk GeTe 54 . A strong Rashba effect is desirable for the electrons to overcome the suppressing effect of Zeeman field on superconductivity. Doping 0.1 holes per primitive cell, corresponding to a hole concentration of~7.4 × 10 13 cm −2 , will move the Fermi level (E F ) to the Dirac point formed by the Rashba splitting (Fig. 1b). The electronic density of states (DOS) at Fermi level, i.e., N F , is thus increased from 0 to~1.4 states/eV/primitive-cell, which stems mainly from the p-orbitals of Te and Ge atoms (Supplementary Fig. 2). Figure 1c shows the spin-texture on the Fermi Having demonstrated the Rashba spin splitting in the GeTe monolayer, we now discuss the second ingredient, the Zeeman gap. It has been reported that a Zeeman gap can be opened in the bulk Ge 1-x Mn x Te with a ferromagnetic order parallel to the (111) direction 48 , which is the easy magnetization direction for small x 55 . By reproducing the experimental results of bulk Ge 1-x Mn x Te based on the virtual crystal approximation (VCA) 56 , the spin state of Mn dopants was determined to be S = 5/2 (Supplementary Note 1). Consequently, the out-of-plane high-spin state (S = 5/2) of Mn dopants is adopted in Ge 1-x Mn x Te monolayer under VCA. As expected, the Zeeman gaps δ z of Rashba and Ising bands opened at Γ increase monotonically with the increasing Mn concentration (Fig. 1d), and can be fit by the equation of δ z = 250 × x and δ z = 1550 × x meV, respectively. The different slopes are the result of different out-of-plane spin magnitude of the electronic states near the Dirac point versus near the valance band maximum ( Supplementary Fig. 1c).
Superconductivity with and without TRS
We next discuss the phonon-mediated superconductivity of the 0.1-hole-doped GeTe monolayer. From the calculated phonon spectra (Fig. 2a), we first confirm its dynamical stability by the absence of imaginary frequency. For the acoustic branch with the lowest vibration frequency, Kohn anomalies can be seen at certain q-points around Γ, which is favorable for enhancing EPC. Then the EPC strength is evaluated based on the conventional firstprinciples approach (see Methods sections). The calculated EPC strength λ qv of a specific phonon mode v at the wavevector q with the frequency ω qv shows two significant features (Fig. 2a). On one hand, all phonon modes can couple with electrons. This is further confirmed by the comparison between the frequency ω dependent phonon DOS F(ω) and isotropic Eliashberg spectral function where α 2 is the average of electron-phonon interaction (Fig. 2b). Meanwhile, the cumulative EPC strength λ(ω) increases quickly to 1.13 at the frequency of ω~10 meV, which is about 81% of the total EPC constant λ = 1.39. This indicates that the EPC stems mainly from the acoustic modes. The convergence of EPC calculation has been carefully checked (Supplementary Note 2). On the other hand, only the vibration modes with a finite wave vector can couple with electrons. This is because for all the FS contours surrounding Γ (Fig. 1c), only a finite length of phonon wave vectors can connect the initial and final scattering states. In addition, both α 2 F(ω) and λ qv illustrate that the soft modes associated with the Kohn anomalies help to enhance the EPC strength 57 .
To estimate the superconducting transition temperature T c , we construct a material-specific BdG Hamiltonian H BdG ðkÞ in the momentum space by employing the electronic Hamiltonian H WFs k ð Þ: Here the H WFs k ð Þ is obtained by the Fourier transform of the realspace Hamiltonian H ðRÞ WFs , and the latter can be constructed from fitting the first-principles band structure of specific materials using WANNIER90 code under the basis of WFs 58 . Each WF with the orbital index i contains two spin-components, leading to 2@ total WFs. The chemical potential E F in H BdG ðkÞ is the Fermi level where the superconducting gap Δ condenses under the basis vector of φ BdG ¼ ðφ WFs ; φ y WFs Þ T . Only the intra-orbital spin-singlet pairing is considered in the H BdG ðkÞ following the previous theoretical proposals [27][28][29] . Then we formulate the superconducting gap equation into the following form: Here E l;k are eigenvalues of the so constructed H BdG ðkÞ. V, l, k B , and T represent material volume, band index of quasi-particles, Boltzmann constant and temperature, respectively. The intraorbital spin-singlet pairing in the form of Eq. 2 ensures i = j, i.e., Δ ii Δ. The absolute pairing strength g ii is usually identical for the bands with similar orbital character in one specific material 57 , which is calculated as g ii ¼ λ À μ Ã ð Þ=N F g with μ * representing effective Coulomb repulsion 59 . This gap equation enables us to solve the superconducting gap self-consistently at different temperatures. Only the quasi-particle states within one Debye temperature θ D around zero energy, i.e., E l;k θ D , are summed over in the k > 0 half of Brillouin zone (BZ) considering the particle-hole symmetry of H BdG ðkÞ. Details of constructing the BdG Hamiltonian H BdG k ð Þ and formulating the gap equation can be found in the Methods sections.
We emphasize that this method is not only different from the conventional method employing the McMillan's formula 16 or solving the anisotropic Migdal-Eliashberg formula 17 in estimating T c , but also extend the first-principles approach to calculate the topological invariant of superconducting gap and the critical magnetic field/doping-concentration of superconductivity (see below). We check the correctness of Eq. 3 by reducing it to the well-known gap equation for a single-band s-wave SC 60 . Here ε k are eigenvalues of normal electronic state; details are given in the Methods sections. Its reliability is further confirmed by reproducing superconductivity of three representative known SCs, i.e., bulk lead ( Supplementary Fig. 4) 61 , bulk GeTe ( Supplementary Fig. 3d) 46 , and MoS 2 monolayer ( Supplementary Fig. 5) For the 0.1-hole-doped GeTe monolayer, we assume the Debye temperature (~200 K) and Coulomb repulsion μ * to be same as that of bulk GeTe and extract the WFs using the p orbitals of Ge and Te. Also, we heuristically reduce the calculated total EPC constant λ from 1.39 to~0.76 by~45.5%, based on the benchmark of correlation effect in MoS 2 monolayer 66 . This should set a lower limit on EPC constant since the correlation effect of p-orbitals is usually weaker than that of d-orbitals. The resulting absolute pairing strength g (~0.4) is comparable to that of bulk GeTe (~0.49) 67 , which enables us to predict the superconducting gap Δ of the 0.1-hole-doped GeTe monolayer at different temperatures. From Fig. 2c, one can see the calculated Δ~18.6 μeV for both Rashba and Ising bands, which is gradually suppressed with the increasing temperature. The T c is around~120 mK, lower than that of GeTe film 46 . We anticipate that the predicted 2D superconductivity may be confirmed by growing GeTe monolayer on Si (111) wafers, as the epitaxial GeTe thin film was observed to be superconducting on this substrate 37 .
Next we simulate the superconductivity of Ge 1-x Mn x Te monolayer by adding an out-of-plane Zeeman energy B z in H WFs k ð Þ first: Here σ z is the Pauli matrix in spin space and the I @ @ is a @ @ identity matrix. Then the BdG Hamiltonian H z BdG k ð Þ can be reconstructed through the Eq. 1 and Eq. 2. The reliability of such treatment in simulating the SC without TRS is confirmed by reproducing the in-plane critical magnetic field of MoS 2 monolayer (Supplementary Note 3.2). 68 By diagonalizing the H z WFs k ð Þ with different B z in momentumspace, we obtain the Zeeman gap δ 0 z of Rashba and Ising bands opened at Γ point ( Supplementary Fig. 6a), which can be fit as δ 0 z = 0.122 × B z and δ 0 z = 2.0 × B z meV, respectively. Combining with the δ z fit to the first-principles results in Fig. 1d, one obtains the relationship between B z and Mn concentration, as B z = 2049 × x and B z = 775 × x meV for the Rashba and Ising bands, respectively. The self-consistently calculated T c ( Supplementary Fig. 6b) and Δ (Supplementary Fig. 6c) domenstrate that they both decrease gradually with the increasing B z due to the pairing breaking effect of magnetism. The superconductivity of Rashba (Ising) bands is fully superssed when B z > 0.35 (0.23) meV, indicating a critical Mn doping concentration of x c~0 .017% (0.03%) (Fig. 2d). This value of x c is two orders of magnitude smaller than that (2%) of Mn doped MgB 2 69 , which is reasonable since the T c of GeTe monolayer is lower than MgB 2 by similar magnitude.
Topological superconductivity and phase diagram To realize TSC formed by 2D Rashba electrons, model analysis proposes that the half of the Zeeman gap opened at the Dirac point of Rashba bands, i.e., δ z /2, should be larger than the superconducting gap Δ 27-29 . In the following, the first-principles approach has been extended to characterize the TSC phase based on a material-specific BdG Hamiltonian H z BdG k ð Þ. Specifically, we take Δ = 0.2 meV and B z = 7.5 meV with δ z~0 .9 meV to construct the H z BdG k ð Þ of Ge 1-x Mn x Te monolayer via Eq. 1, Eq. 2 and Eq. 4. The relatively large B z and Δ are used to show the topological non-triviality more clearly. The H z BdG k ð Þ is analogous to the singleparticle Hamiltonian of electrons with an energy gap mathematically. By diagonalizing H z BdG k ð Þ in the momentum space, we obtain the dispersion relation of superconducting quasi-particles (Fig. 3a). One can clearly see that the superconducting gap is indeed opened, where the topological invariant, i.e., first Chern number (N c ), is well-defined.
For 2D systems, the Chern number of l-th band is calculated by integrating the Berry curvature Ω l ðkÞ ¼ ∇ A l ðkÞ over the first BZ: where A l ðkÞ is Berry connection. The total Chern number N c can be obtained by summing up the Chern numbers of all the states below the superconducting gap, which is quantized to −1. The Berry curvature resides mainly at the Γ point associated with the Zeeman gap opening (Fig. 3b), similar to the band inversion in the quantum anomalous Hall systems. Here we should emphasize that N c does not physically correspond to a quantized Hall conductance because charge is not conserved in the BdG Hamiltonian 24 . Two chiral Majorana edge modes localized at two different edges clearly exist in the continuous superconducting gap due to the bulk-boundary correspondence (Fig. 3c and 3d). The propagation of chiral Majorana fermions could lead to same unitary transformation as that in braiding Majorana zero modes 70 , and the deterministic creation and braiding of chiral edge vortices in hybrid structures were elaborated 71 .
We finally construct a phase diagram of the 0.1-hole-doped Ge 1-x Mn x Te monolayer in Fig. 4, to help guide future experimental detection of the predicted TSC phase formed by the superconducting Rashba bands. At the zero-temperature limit, the SC phase of the Rashba bands will be preserved for x < x c = 0.017% and the TSC phase will arise when x > x min = 0.014%, where the pre-condition of δ z /2 > Δ can be met ( Supplementary Fig. 6d). At finite temperature, both the ferromagnetic and SC order should exist simultaneously for the formation of TSC phase. Referring to the ferromagnetic Curie temperatures T FM c of Ge 1-x Mn x Te that increases linearly with increasing Mn concentration up to x = 0.2 and can be fit by T FM c x ð Þ ¼ 333 x K ( Supplementary Fig. 7a) [39][40][41][42][43][44] , we estimate T FM c x ð Þ and xdependent T c will cross over at x 0 min ¼ 0:014% ¼ x min (Supplementary Fig. 7b), too. Consequently, the TSC phase can be X. Zhang et al. formed for x min < x < x c at the temperature where the SC order occurs. Given that the non-trivial phase can be readily realized in non-centrosymmetric SCs when p-wave component is stronger than s-wave 22 , our results indicate the TSC phase of Ge 1-x Mn x Te monolayer could be robust against parity mixing of Cooper pairs. We suggest preparing the desired Ge 1-x Mn x Te monolayer on BaF 2 (111) substrate 39-44 by molecular beam epitaxy since the growth is known to start in a 2D manner 41 . We anticipate that the chiral Majorana edge modes of Ge 1-x Mn x Te monolayer can be detected using Josephson effect 5 or charge transport 72 , and controlled by magnetic flux 73 . The effects of magnetic anisotropy and GeTe film thickness on the TSC phase are discussed in Supplementary Note 4 and Note 5.
Lastly, in addition to the monolayer Ge 1-x Mn x Te we demonstrated here, we suggest two more candidate materials for Class D TSC. Firstly, since the heterostructures of MnBi 2 Te 4 /Bi 2 Te 3 74,75 and Bi 2 Te 3 /NbSe 2 34,76 have already been fabricated, the MnBi 2 Te 4 /Bi 2 Te 3 /NbSe 2 hold high possibility to be synthesized. We demonstrate that this type of heterostructures with magnetized topological surface states are also a Class D TSC characterized with non-zero Chern number (Supplementary Note 6) 77 . Secondly, it was experimentally reported the desired Rashba-Zeeman splitting can be alternatively achieved by the magnetic order in the Si-terminated surface of HoRh 2 Si 2 78 . With a spin-singlet Cooper pairing being tunneled into this surface state by superconducting proximity effect, the Class D TSC will be readily emerging. By applying our developed first-principles BdG Hamiltonian approach, a complete phase diagram of these systems can be constructed in the near future.
Details of the first-principles calculations
The Vienna ab initio simulation pack 79,80 was utilized to calculate the electronic property of normal states based on the density-functional theory. The exchange-correlation of electrons was treated within the generalized gradient approximation in the form of Perdew-Burke-Ernzerhof 81 . The atomic structures of GeTe monolayer and thin film was set up by introducing a vacuum region of more than 15 Å to avoid the interactions between neighboring images. Structural relaxations and self-consistent calculations as well as the Zeeman gap calculations were performed on a uniform 30 × 30 × 1 (18 × 18 × 18) k-point sampling of the first BZ for monolayer (bulk) GeTe. The energy cutoff was set to 400 eV for plane-wave basis. The dipole correction was used to cancel the artificial electric field imposed by the periodic boundary condition of GeTe thin film.
The QUANTUM ESPRESSO package 82 was used to calculate the phonon spectra and EPC strength based on the density-functional perturbation theory 83 as well as fit the first-principles band structure by interfacing with the WANNIER90-2.1 code 58 . The Optimized Norm-Conserving Vanderbilt Pseudopotential 84 was employed and the kinetic energy cutoff was set to 100 Ry for wave functions. The hole doping was simulated by removing electrons from intrinsic GeTe monolayer and introducing the compensating jellium background to avoid divergence. The dynamic matrix and phonon frequency are computed on a 18 × 18 × 1 q-point mesh with a 18 × 18 × 1 k-point sampling, and a finer 36 × 36 × 1 k-point grid is used in the EPC calculations, where the DOS is converged (Supplementary Fig. 2b). Other q/k-point samplings (Supplementary Table 1) are also employed to check the convergence of EPC calculations. The phonon DOS F(ω) and the isotropic Eliashberg spectral function α 2 F(ω) as well as the cumulative frequency-dependent EPC strength λ(ω) are calculated using a 60 × 60 × 1 q-point sampling by means of the Fourier interpolation. Specifically, the q-and v-resolved EPC strength λ qv is given by: where N F is the electronic DOS at the Fermi level, W k is the weight of wavevector k, ε nk is the eigenvalue for electronic wavefunction ψ nk with band index n and wavevector k, ω qv is the frequency of a phonon branch v at wavevector q, h is the reduced Planck constant, and M 0 is the ionic mass. g mn;v k; q ð Þ represents the scattering amplitude between the electronic states with wavefunction ψ nk and ψ mkþq , induced by derivative ∂ qv Ξ À Á of the self-consistent potential associated with phonon ω qv . δ is the Dirac delta function. The frequency ω dependent isotropic Eliashberg spectral function α 2 F(ω) and the cumulative EPC strength λ(ω) are then calculated from: λðωÞ ¼ 2 Here W q is the weight of wavevector q. The total EPC constant λ equals to λ(ω max ) with ω max being the maximum of phonon frequency.
Constructing the BdG Hamiltonian
To perform first-principles prediction of TSC, the main challenge is to construct a BdG Hamiltonian of superconducting quasi-particles from the electronic Hamiltonian of normal state. Here we propose a strategy to overcome this obstacle by employing the WFs φ WFs ¼ Here i is the orbital index, and the total number of WFs is<EMAIL_ADDRESS>and τ x is the Pauli matrix in particle-hole space.
For the materials with external/internal magnetism, we first add a Zeeman term B in H WFs ðkÞ using the vector of Pauli matrix σ in spin space: Then the first-principles BdG Hamiltonian H B BdG k ð Þ without TRS can be constructed for a specific material by using the above procedure. We should emphasize that the above construction procedure is practically also applicable to materials with revelatively strong SOC. It is noted that the WFs, rather than the maximally localized WFs, are obtained by the WANNIER90 code without minimization procedure, so that the resulting WFs can be approximately seperated into up (majority) and down (minority) pseudospin orbitals in the presence of SOC. This treatment has been widely adopted in the WF-based methods for investigation of topological materials, such as WannierTools 85 . Here we develop another route to quantifying realistic conditions of TSC by solving self-consistently BdG equation based on WFs.
Formulating the gap equation
Under the basis of Ψ k ¼ c 1;k ; c 2;k ; Á Á Á ; c i;k ; Á Á Á ; c y 1;Àk ; c y 2;Àk ; Á Á Á ; c y i;Àk ; Á Á Á T , the multi-band Hamiltonian with s-wave pairing can be written as: H BdG ðkÞ ¼ hðkÞ ÀΔ Δ Àh à ðÀkÞ With the relation of ∂H ∂Δij ¼ À P k c y i;k c y j;Àk þ c j;Àk c i;k , we can derive the gap equation of Δ ij as: (12) Solving this gap equation self-consistently enables us to estimates the superconducting transition temperature T c and the critical magnetic field/ doping-concentration of specific materials based on the material-specific BdG Hamiltonian H BdG ðkÞ with/without TRS constructed from the state-ofart first-principles approach.
To confirm the above derivation, we apply the derived gap equation to the single-band Hamiltonian with s-wave pairing: Its eigenvalues are when the eigenvalues of normal electronic state satisfy ε k ¼ ε Àk . Substituting the four eigenvalues into the derived gap equation, we obtain the well-known gap equation of single-band s-wave superconductor: 60 1
DATA AVAILABILITY
All data needed to evaluate the conclusions of this paper are available within the paper and Supplementary Information. | 6,616.4 | 2020-04-10T00:00:00.000 | [
"Physics"
] |
THE ROLE OF LOCATION MEDIATION ON PURCHASE INTEREST TOWARDS CONSUMER PURCHASE DECISION IN RETAIL INDUSTRY
: Not all purchase interest will lead to a purchasing decision, one of the keys to success in the retail industry is through the role of location (captive market). This study aimed to explore the role of location (captive market) in mediating the linkage between purchase interest and purchase decision of minimarket consumers in Kediri City. The current study uses quantitative methods by collecting data accidentally through surveys of Indomaret and Alfamart minimarkets consumers in Kediri City. Using Hair et al.'s theory, the sample was determined to be 150 respondents. All data obtained were analyzed using path analysis via the SPSS and AMOS applications. The study results indicate that the role of location (captive market) is able to mediate the linkage between purchase interest and consumer purchase decision. Partially, purchase interest and location have a positive and significant impact on consumer purchase decision. Thus, these results enhance the understanding that location (captive market) plays an important role for purchase interest in the consumer purchase decision process, particularly in the context of the retail industry
Introduction
The retail industry, including the food retail sector, still seems to dominate the modern retail market.Based on data from of the United States Department of Agriculture (USDA) the July 2023 edition, Alfamart and Indomaret, which are classified as minimarkets, will still lead the modern retail store market in Indonesia until 2022 compared to larger modern retail stores.The report presents a comparison of the sales figures of the most popular modern retail stores in Indonesia.It is evident that Alfamart and Indomaret have sales that are significantly higher than their competitors.This is likely due to the fact that Alfamart and Indomaret operate in a high-demand captive market and have a strong emotional connection with their customers.It can even be seen as an indication of the high level of consumer loyalty that has been established towards their products and brand names.The strategic location of these minimarkets, their widespread presence throughout Indonesia, and the availability of ample parking space are potential advantages for Alfamart and Indomaret.Each consumer has different purchasing behaviors, consumer behavior demonstrates how consumers form their purchase decisions as a result of their sacrifices of time, money, and effort to obtain specific products or services (Schiffman & Kanuk, 2007).In this concept, purchase interest plays a crucial role in influencing consumer purchase decisions in minimarkets.When consumers express an interest in buying something and visit a minimarket, it is generally expected that they will make a purchase.However, there are instances where consumers who are interested in buying something and visit a minimarket do not always make a purchase (Nuraeni & Hadita, 2022).When consumers are in the process of making a purchase decision, their purchase interest plays a crucial role in determining whether they proceed with the purchase or not (Karimi et al., 2018).This suggests that purchase interest can vary depending on the consumer's relationship with the minimarket and their previous purchasing behavior (Amanah & Harahap, 2018).In this decision-making process, various factors come into play, and one of them is the location of the purchase.The location can act as a captive market and influence consumer considerations when they are interested in making a purchase (Hafizi & Ali, 2021).Furthermore, Multichannel marketing which involves using multiple channels to reach consumers, can enhance purchase interest.As purchase interest increases, consumers are more likely to make purchasing decisions, and the location of the minimarket can play a role in this process.In other side, the presence of a strong purchase interest can be a driving force behind the establishment of minimarkets in a particular area, as potential consumers are located in that area (Bakewell & Mitchell, 2003).Also, The convenience and accessibility of the location were found to significantly impact consumers' decision to make a purchase (Rachmawati et al., 2019).This suggests that when consumers have a high level of purchase interest, they are more likely to consider factors such as location when making their purchasing decisions.Consistently, current studies strive to provide significant practical and theoretical contributions and empirically present the linkage between purchase interest and consumer purchase decisions.The present research will contribute to the theoretical marketing literature by examining the mediating role of location between the dependent and independent variables.
Literature Review Location as a Captive Market for Minimarket
The location of a minimarket plays a crucial role in influencing consumer purchasing decisions.Several factors related to the location, such as accessibility, parking convenience, visibility, traffic conditions, and the presence of public facilities, can significantly impact consumer behavior (Sutanto & Keni, 2021;Baker et al., 2002;Xu & Hu, 2022).A minimarket that is easily accessible and has convenient parking facilities can enhance the convenience for consumers, making it easier for them to make purchases.When a minimarket is located in a convenient location with ample parking space, consumers are more likely to choose it over other options due to the ease of access and the convenience of parking their vehicles.This convenience factor can positively influence consumer satisfaction and loyalty towards the minimarket (Sutanto & Keni, 2021).The convenience of a minimarket's location can also act as a barrier for consumers to switch to other minimarkets.When a minimarket is situated in a convenient location that meets consumers' needs and preferences, they may develop a habit of shopping at that particular minimarket and become less likely to explore other options.This can create a sense of loyalty and attachment towards the minimarket, leading to repeat purchases and increased customer retention (Baker et al., 2002).Furthermore, a minimarket's location that is visible, has low traffic congestion, and offers public facilities can significantly impact consumer purchasing decisions.A visible location increases the exposure of the minimarket to potential customers, attracting their attention and increasing the likelihood of them choosing to shop there.Additionally, a minimarket located in an area with low traffic congestion provides a hassle-free shopping experience for consumers, making it more appealing and convenient.The presence of public facilities, such as restrooms or seating areas, can also enhance the overall shopping experience and contribute to consumer satisfaction and loyalty (Xu & Hu, 2022).Therefore, the indicators of location include visibility, heavy traffic, public facilities, access, and available market potential (Guswai, 2009).Consumers are more likely to choose a minimarket that offers these location-related advantages, and they may develop a habit of shopping at that particular minimarket, making it difficult for them to switch to other options.
Purchase Decision
Consumer behavior demonstrates how consumers form their purchase decisions as a result of their sacrifices of time, money, and effort to obtain specific products or services (Schiffman & Kanuk, 2007).Consumers consider factors such as product features, quality, price, and suitability to their needs and preferences when making a product selection The perceived quality of a product and its alignment with consumer preferences play a significant role in influencing purchase decisions (Pascucci et al., 2022).Consumers often have brand preferences based on factors such as brand reputation, perceived quality, brand loyalty, and brand image (Sen & Bhattacharya, 2001).Brand perception and associations can significantly impact consumer decision-making, as consumers tend to choose brands that align with their values and meet their expectations.Furthermore, consumers consider factors such as convenience, availability, reputation, and trustworthiness of the supplier when making their choice (Rachmawati et al., 2019;Wan et al., 2022).The location and accessibility of the supplier, as well as the availability of online and offline channels, can also influence consumer decisions (Wan et al., 2022).Consumers may consider factors such as bulk discounts, package deals, or the need to stock up on certain products when determining the quantity of their purchase.Price sensitivity and budget constraints can also play a role in determining the purchase quantity (Rosyid & Pratiwi, 2022).On the other hand, the timing of visits to a supplier or retailer is also a factor that can influence consumer purchase decisions.Consumers may consider factors such as sales promotions, seasonal discounts, or personal preferences when deciding when to make their purchase (Rosyid & Pratiwi, 2022).The timing of visits can be influenced by factors such as availability of time, convenience, and the desire to take advantage of specific offers or discounts.Also, the method of payment is another factor that can be measured in consumer purchase decisions.Consumers may consider factors such as convenience, security, and personal preferences when choosing a payment method.The availability of various payment options, such as cash, credit cards, mobile payments, or installment plans, can impact consumer decisions (Rachmawati et al., 2019).
Purchase Interest and Purchase Decision
When consumers are in the process of making a purchase decision, their purchase interest plays a crucial role in determining whether they proceed with the purchase or not (Karimi et al., 2018).The level of interest or intention to buy a product or service can strongly influence the final decision to make a purchase (Lee & Lin, 2005;Zheng et al., 2020;Yucha et al., 2022).This suggests that purchase interest can vary depending on the consumer's relationship with the minimarket and their previous purchasing behavior (Amanah & Harahap, 2018).Furthermore, the preferences and interests of consumers in terms of where they want to make their purchases can guide minimarket owners in selecting the most advantageous locations (Feldmann & Hamm, 2015;Stranieri et al., 2022).Consumer perceptions and preferences for local products can influence the decision-making process of minimarket owners when choosing a strategic location.If consumers have a strong preference for locally produced goods, minimarkets may opt to establish their stores in areas where local products arbe readily available and in high demand.This alignment with consumer preferences can attract a loyal customer base and contribute to the success of the minimarket (Feldmann & Hamm, 2015).Several indicators measuring purchase intention are: firstly, transactional interest, when consumers are interested in making purchases on a product.Secondly, referential interest, when consumers tend to want to provide references or recommend a product to other consumers.Thirdly, preferential interest, when consumers are interested in making a product the first choice in shopping activities.Fourthly, explorative interest, when consumers are interested in finding out more about a purchased product (Hui, 2017).
Method
This research is designed to a causal research design with quantitative approach, which tests the linkage between purchase interest and consumer purchase decision mediated by location.The study was conducted in Alfamart and Indomaret minimarket in Kediri city.The population consisted of Alfamart and Indomaret consumer in Kediri city, The current study uses quantitative methods by collecting data accidentally through surveys of Indomaret and Alfamart minimarkets consumers in Kediri City.Using Hair et al.'s theory, the sample was determined to be 150 respondents.Data collection was carried out by distributing questionnaires consisting of indicators that form the research variables, using a Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree).All data obtained were analyzed using path analysis via the SPSS and AMOS applications.Hypotesis will be applied for this research are: H1: Purchase interest has a direct effect on location H2: Location has a direct effect on purchase decision H3: Purchase interest has a direct effect on purchase decision H4: Purchase interest has an indirect effect on purchase decision through location mediation
Direct Effect
Based on the results of path analysis using SPSS, significant relationships were found between purchase interest (X) and location (Z), as well as between location (Z) and purchase decision (Y).However, a non-significant relationship was observed between purchase interest (X) and purchase decision (Y).These findings suggest that the indirect influence of purchase interest on purchase decision may potentially be better mediated through the factor of location, rather than a direct influence of purchase interest on purchase decision.The table demonstrates that the path coefficients for all three effects are positive, and the tstatistic values are above 1.97 for the effect of purchase interest on location, as well as location on purchase decision.However, the t-statistic value is below 1.97 for the effect of purchase interest on purchase decision.Similarly, the significance values are also below the threshold.Therefore, it can be concluded that the three hypotheses regarding direct effects are accepted, indicating that Hypotheses 1, 2, and 3 are accepted.The following conclusions can be drawn as the table above: 1) The purchase interest affects the location by 5,5%, while 94,5% is influenced by other variables.2) the purchase interest and location affect the purchase decision by 10,1%, while the remaining 89,9% is influenced by other variables.
Indirect Effect
The Sobel test is used to examine the indirect effect of purchase interest on purchase decision through location.The Sobel test statistic obtained was 2.32978752, which is greater than 1.97, and the probability value was 0.00990869, which is smaller than 0.05.These results indicate that location can serve as a strong mediating variable between purchase interest and purchase decision.Therefore, H4 is supported and accepted.
Based on the testing results using SPSS and the Sobel test, the research model can be illustrated in AMOS as depicted below.These findings given us a new understanding that purchase interest could not stand alone to effecting the consumer purchase decision in Alfamart and Indomaret minimarket in Kediri city.By location help, purchase interest will increse consumer purchase decision.Its true if consumers who are interested in buying something and visit a minimarket do not always make a purchase (Nuraeni & Hadita, 2022).The location can act as a captive market and influence consumer considerations when they are interested in making a purchase (Hafizi & Ali, 2021).When consumers of Alfamart and Indomaret minimarket have a high level of purchase interest, they are more likely to consider factors such as location when making their purchasing decisions.
Conclusions
The main findings of current research conclude that purchase interest has a direct effect on location, location has a direct effect on purchase decision, purchase interest has no a direct effect on purchase decision, and purchase interest has an indirect effect on purchase decision through location mediation in Alfamart and Indomaret minimarket in Kediri city.The current implication shown that as a form of captive market strategy of Alfamart and Indomaret minimarket in Kediri city, the convenience location plays an important role to influence consumer considerations in making a purchase.As a limitation, the wider research area and more respondents involvement should be applied in the future.
Figure 1 :
Figure 1: Retail Company with the Largest Sales Value in Indonesia Throughout 2022 Sources: United States Department of Agriculture (USDA) Juli 2023 | 3,297 | 2024-01-16T00:00:00.000 | [
"Business",
"Economics"
] |
Hadamard multiplexed fluorescence tomography
Depth-resolved three-dimensional (3D) reconstruction of fluorophore-tagged inclusions in fluorescence tomography (FT) poses a highly ill-conditioned problem as depth information must be extracted from boundary data. Due to the ill-posed nature of the FT inverse problem, noise and errors in the data can severely impair the accuracy of the 3D reconstructions. The signal-to-noise ratio (SNR) of the FT data strongly affects the quality of the reconstructions. Additionally, in FT scenarios where the fluorescent signal is weak, data acquisition requires lengthy integration times that result in excessive FT scan periods. Enhancing the SNR of FT data contributes to the robustness of the 3D reconstructions as well as the speed of FT scans. A major deciding factor in the SNR of the FT data is the power of the radiation illuminating the subject to excite the administered fluorescent reagents. In existing single-point illumination FT systems, the source power level is limited by the skin maximum radiation exposure levels. In this paper, we introduce and study the performance of a multiplexed fluorescence tomography system with orders-of-magnitude enhanced data SNR over existing systems. The proposed system allows for multi-point illumination of the subject without jeopardizing the information content of the FT measurements and results in highly robust reconstructions of fluorescent inclusions from noisy FT data. Improvements offered by the proposed system are validated by numerical and experimental studies. ©2014 Optical Society of America OCIS codes: (170.3880) Medical and biological imaging; (170.0110) Imaging systems; (170.6960) Tomography; (110.6955) Tomographic imaging. References and links 1. V. Ntziachristos, “Fluorescence molecular imaging,” Annu. Rev. Biomed. Eng. 8(1), 1–33 (2006). 2. V. Ntziachristos, C. Bremer, E. E. Graves, J. Ripoll, and R. Weissleder, “In vivo tomographic imaging of nearinfrared fluorescent probes,” Mol. Imaging 1(2), 82–88 (2002). 3. V. Ntziachristos, C. H. Tung, C. Bremer, and R. Weissleder, “Fluorescence molecular tomography resolves protease activity in vivo,” Nat. Med. 8(7), 757–761 (2002). 4. A. Corlu, R. Choe, T. Durduran, M. A. Rosen, M. Schweiger, S. R. Arridge, M. D. Schnall, and A. G. Yodh, “Three-dimensional in vivo fluorescence diffuse optical tomography of breast cancer in humans,” Opt. Express 15(11), 6696–6716 (2007). 5. S. C. Davis, H. Dehghani, J. Wang, S. Jiang, B. W. Pogue, and K. D. Paulsen, “Image-guided diffuse optical fluorescence tomography implemented with Laplacian-type regularization,” Opt. Express 15(7), 4066–4082 (2007). 6. P. Mohajerani, A. A. Eftekhar, J. Huang, and A. Adibi, “Optimal sparse solution for fluorescent diffuse optical tomography: theory and phantom experimental results,” Appl. Opt. 46(10), 1679–1685 (2007). 7. J. C. Baritaux, K. Hassler, and M. Unser, “An efficient numerical method for general Lp regularization in fluorescence molecular tomography,” IEEE Trans. Med. Imaging 29(4), 1075–1087 (2010). 8. D. Han, J. Tian, S. Zhu, J. Feng, C. Qin, B. Zhang, and X. Yang, “A fast reconstruction algorithm for fluorescence molecular tomography with sparsity regularization,” Opt. Express 18(8), 8630–8646 (2010). 9. A. Behrooz, H. M. Zhou, A. A. Eftekhar, and A. Adibi, “Total variation regularization for 3D reconstruction in fluorescence tomography: experimental phantom studies,” Appl. Opt. 51(34), 8216–8227 (2012). 10. D. Sliney and M. Wolbarsht, Safety with Lasers and Other Optical Sources, Plenum, New York (1980). 11. ANSI Standard Z136.1, American National Standard for the Safe Use of Lasers, American National Standards Institute, Inc., New York (2000). 12. A. Ishimaru, Wave Propagation and Scattering in Random Media (Academic Press, New York, 1978). #199881 $15.00 USD Received 21 Oct 2013; revised 10 Jan 2014; accepted 5 Feb 2014; published 18 Feb 2014 (C) 2014 OSA1 March 2014 | Vol. 5, No. 3 | DOI:10.1364/BOE.5.000763 | BIOMEDICAL OPTICS EXPRESS 763 13. S. R. Arridge and J. C. Hebden, “Optical imaging in medicine: II. Modelling and reconstruction,” Phys. Med. Biol. 42(5), 841–853 (1997). 14. D. A. Boas, D. H. Brooks, E. L. Miller, C. A. DiMarzio, M. Kilmer, R. J. Gaudette, and Q. Zhang, “Imaging the body with diffuse optical tomography,” IEEE Signal Process. Mag. 18(6), 57–75 (2001). 15. H. Jiang, “Frequency-domain fluorescent diffusion tomography: a finite-element-based algorithm and simulations,” Appl. Opt. 37(22), 5337–5343 (1998). 16. M. Harwit, and N. J. A. Sloane, Hadamard Transform Optics, Academic Press, New York (1979). 17. L. Streeter, G. R. Burling-Claridge, M. J. Cree, and R. Künnemeyer, “Optical full Hadamard matrix multiplexing and noise effects,” Appl. Opt. 48(11), 2078–2085 (2009). 18. R. A. DeVerse, R. M. Hammaker, and W. G. Fateley, “Hadamard transform Raman imagery with a digital micro-mirror array,” Vib. Spectrosc. 19(2), 177–186 (1999). 19. V. V. Fedorov, Theory of Optimal Experiments, Academic Press, New York (1972). 20. R. Gordon, R. Bender, and G. T. Herman, “Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and x-ray photography,” J. Theor. Biol. 29(3), 471–481 (1970). 21. A. Behrooz, C. Kuo, H. Xu, and B. W. Rice, “Adaptive row-action inverse solver for fast noise-robust 3D reconstructions in bioluminescence tomography: theory and dual-modality optical/CT in vivo studies,” J. Biomed. Opt. 18(7), 076010 (2013). 22. E. E. Graves, J. Ripoll, R. Weissleder, and V. Ntziachristos, “A submillimeter resolution fluorescence molecular imaging system for small animal imaging,” Med. Phys. 30(5), 901–911 (2003). 23. R. Cubeddu, A. Pifferi, P. Taroni, A. Torricelli, and G. Valentini, “A solid tissue phantom for photon migration studies,” Phys. Med. Biol. 42(10), 1971–1979 (1997). 24. S. T. Flock, S. L. Jacques, B. C. Wilson, W. M. Star, and M. J. van Gemert, “Optical properties of Intralipid: a phantom medium for light propagation studies,” Lasers Surg. Med. 12(5), 510–519 (1992). 25. V. Ntziachristos and R. Weissleder, “Experimental three-dimensional fluorescence reconstruction of diffuse media by use of a normalized born approximation,” Opt. Lett. 26(12), 893–895 (2001). 26. A. Joshi, W. Bangerth, and E. M. Sevick-Muraca, “Non-contact fluorescence optical tomography with scanning patterned illumination,” Opt. Express 14(14), 6516–6534 (2006). 27. A. Joshi, W. Bangerth, K. Hwang, J. C. Rasmussen, and E. M. Sevick-Muraca, “Fully adaptive FEM based fluorescence optical tomography from time-dependent measurements with area illumination and detection,” Med. Phys. 33(5), 1299–1310 (2006). 28. J. Dutta, S. Ahn, A. A. Joshi, and R. M. Leahy, “Illumination pattern optimization for fluorescence tomography: theory and simulation studies,” Phys. Med. Biol. 55(10), 2961–2982 (2010). 29. V. Venugopal, J. Chen, and X. Intes, “Development of an optical imaging platform for functional imaging of small animals using wide-field excitation,” Biomed. Opt. Express 1(1), 143–156 (2010). 30. D. J. Cuccia, F. Bevilacqua, A. J. Durkin, and B. J. Tromberg, “Modulated imaging: quantitative analysis and tomography of turbid media in the spatial-frequency domain,” Opt. Lett. 30(11), 1354–1356 (2005). 31. A. Mazhar, D. J. Cuccia, S. Gioux, A. J. Durkin, J. V. Frangioni, and B. J. Tromberg, “Structured illumination enhances resolution and contrast in thick tissue fluorescence imaging,” J. Biomed. Opt. 15(1), 010506 (2010). 32. N. Ducros, C. D’andrea, G. Valentini, T. Rudge, S. Arridge, and A. Bassi, “Full-wavelet approach for fluorescence diffuse optical tomography with structured illumination,” Opt. Lett. 35(21), 3676–3678 (2010). 33. C. D’Andrea, N. Ducros, A. Bassi, S. Arridge, and G. Valentini, “Fast 3D optical reconstruction in turbid media using spatially modulated light,” Biomed. Opt. Express 1(2), 471–481 (2010). 34. S. Bélanger, M. Abran, X. Intes, C. Casanova, and F. Lesage, “Real-time diffuse optical tomography based on structured illumination,” J. Biomed. Opt. 15(1), 016006 (2010). 35. S. D. Konecky, A. Mazhar, D. Cuccia, A. J. Durkin, J. C. Schotland, and B. J. Tromberg, “Quantitative optical tomography of sub-surface heterogeneities using spatially modulated structured light,” Opt. Express 17(17), 14780–14790 (2009). 36. V. Venugopal and X. Intes, “Adaptive wide-field optical tomography,” J. Biomed. Opt. 18(3), 036006 (2013).
Introduction
Fluorescence tomography (FT) aims at in vivo 3D localization and quantification of fluorescent contrast agents distributed in biological tissue [1,2].Fluorescent agents are used for in vivo tagging and tracking of inclusions or molecules of interest such as cancer lesions, test drugs, and protein expressions in small animals and human subjects [3,4].In FT, the subject is illuminated at a sequence of points on the skin by visible or near infrared (NIR) radiation from a laser or light emitting diode (LED).Photons from the illumination source diffuse through the tissue and excite the exogenously administered fluorescent agents that, in turn, emit visible or NIR fluorescent light at wavelengths higher than the excitation [1].The fluorescent photons are collected and their intensity measured by optical detectors at various points on the surface of the subject.These surface intensity measurements are used in an inversion algorithm to reconstruct the 3D distribution of fluorescence in tissue.The 3D reconstruction of fluorescence distribution is a highly ill-posed problem as depth information must be extracted from diffuse boundary data.Consequently, noise and errors in the FT data and modeling can produce significant artifacts in the 3D reconstructions.FT inverse solvers utilize regularization techniques to provide robustness and stability against noise and errors [5][6][7][8][9].However, as the level of noise and error contamination rises, the quality of regularized reconstructions deteriorates.Depending on the dynamics and nature of the inversion algorithms, different types of artifacts and errors arise in the 3D reconstructions when FT data are considerably noisy [9].
Improvements in the modeling and conditioning of the FT inverse problem can be of great help in enhancing the accuracy of the reconstructions.However, these improvements are not available in many FT scenarios, e.g., imaging of optically heterogeneous regions of small animals.In such cases, the level of modeling errors can be remarkably high and, depending on the bulk and shape of the animal, the associated inverse problem can often be extremely illconditioned.Alternatively, enhancing the signal-to-noise ratio (SNR) in the FT data can strongly contribute to the quality of the reconstructions and reduce noise-induced artifacts, irrespective of the inversion algorithm being used.The SNR of FT data is determined by several factors including the sensitivity of the data-acquisition system, e.g., charge-coupled device (CCD) camera, the absorption of the turbid medium, the quantum yield and absorption cross section of the administered fluorescent agent, and the radiative power of the illumination source.Existing FT systems are equipped with extremely costly, ultra-sensitive, cooled CCD cameras to guarantee a high data-acquisition SNR.Disadvantages of ultrasensitive CCD systems are two-fold; they are only available at high costs, and they require lengthy integration times for extremely low-noise data acquisition, which result in extremely lengthy FT scan times.Absorption of the tissue sample being imaged and the properties of the administered fluorescent dye vary from experiment to experiment and cannot be controlled in FT systems.
The power of the light source used for illumination of the turbid medium and excitation of the administered fluorescent agents can be increased to raise the SNR of the FT data.However, the power of the illumination source must not exceed the threshold beyond which human skin and biological tissue get injured by the source radiation [10].Therefore, for existing single-point illumination FT systems, the power entering the medium is bounded by the skin maximum permissible exposure (MPE) in the visible and NIR range (~2 mW/mm 2 ) [11].In this work, a multi-point illumination FT configuration is presented that allows for orders-of-magnitude increase in the source radiation entering the subject compared to existing single-point illumination FT systems.Since instead of single-point illumination, multiple points are illuminated simultaneously, more power can enter the subject without causing radiation injury.The only trade-off is that as the number of simultaneously illuminated points increases, the number of uncorrelated or minimally correlated measurements obtainable by changing the illumination points decreases.As an example, if all the sources were activated simultaneously, a tremendously high level of radiation would enter the subject and excite the administered fluorophores.However, the acquired data would not possess the same level of information obtainable through a series of single-point illumination measurements.As a result, a trade-off exists between the average number of simultaneously illuminated points per measurement, and the information content of the data acquired in the corresponding series of measurements.This trade-off can be optimized by applying the Hadamard transform to the illumination patterns in FT measurements.
In this paper, we introduce a multiplexed multi-point illumination architecture governed by the Hadamard transform to replace the existing single-point illumination architecture in FT for the purpose of increasing the FT data SNR and hence the robustness of the 3D reconstructions.We perform numerical studies to show the improvements offered by Hadamard-multiplexed FT over existing single-point illumination FT.Moreover, we present experimental results using a Hadamard-multiplexed FT system, which was developed in house in its entirety for this work, to demonstrate the advantages of multiplexed illumination over single-point illumination in FT.
Fluorescence tomography
The propagation of light in highly scattering turbid media such as tissue is modeled by the radiative transport equation (RTE) [12].It has been shown that a first-order approximation to RTE reduces the computational complexity and numerical burden of modeling while maintaining a relatively high level of accuracy sufficient for optical imaging purposes [13].The first-order approximation, which is broadly used by the optical tomography community, results in a first-order partial differential equation (PDE) called the diffusion equation formulated as where (r) Φ and q(r) represent the average light intensity and the illumination source flux at location r, respectively.Furthermore, a μ and D represent the absorption coefficient and the diffusion coefficient, respectively.In FT, the propagation of excitation photons and fluorescent photons can be described by a pair of coupled diffusion equations as below exc a exc exc where exc (r) Φ is the average intensity of excitation photons at location r; exc q (r) is the power density of the excitation source used for illumination of the tissue at location r (as a result, exc q (r) is zero inside the tissue and non-zero at the boundary source locations); em (r) Φ is the average intensity of the fluorescent light at location r, η is the dimensionless quantum yield of the fluorescent dye; fl μ is the per-molar fluorescent absorption coefficient of the fluorophores at the excitation wavelength, and c(r) is the molar concentration of the fluorescent dye at location r.In FT, the goal is to use the model in Eqs. ( 2) and (3) to estimate the fluorescence distribution c(r) by varying exc q (r) (through changing the location of the illumination source) and measuring exc (r) Φ and em (r) Φ on the boundary of the tissue while D(r) , a μ (r) , η , and fl μ are either known a priori or determined using diffuse optical tomography (DOT) measurements [14].
Since analytical solutions are not available for the coupled PDEs formulated in Eqs. ( 2) and (3), numerical techniques such as the finite element method (FEM) must be used to discretize the coupled PDEs and numerically solve for the corresponding Green's functions [15].The most common approach is the Galerkin formulation of the FEM where the volume of the turbid medium is discretized by a 3D tetrahedral mesh and spatial functions defined over the medium are transformed to discrete vectors defined over the voxels of the 3D mesh.As a result, Eqs. ( 2) and (3) are transformed to discrete matrix equations using which a linear relationship can be established between the fluorescence distribution vector (x) and the boundary measurements of fluorescent photons (y) through a system matrix (M) that depends on the geometry and optical properties of the tissue [15]:
Hadamard multiplexing for FT
The Hadamard transform is a linear transform used for SNR enhancement in multi-source or multi-input measurement systems [16][17][18].Instead of performing measurements with one source active at a time, measurements are acquired while multiple sources are active simultaneously.As a result, the radiation power entering the subject in a Hadamard multiplexed FT scan is increased in proportion to the number of sources that are on during the scan.The increase in the illumination power boosts the SNR of the data images acquired during the FT scan.As discussed in Section 1, a trade-off exists between the information content of the acquired data and the average number of sources active during each measurement.The theory of Hadamard transform provides the optimal multiplexing scheme for the enhancement of the SNR of multi-source measurements without lowering the information content of the measurements [16].This optimal (0, 1)-weighing scheme is encoded in the Hadamard S-matrix that is a square matrix with entries that are either 0 or 1.The S-matrix codes offer the highest increase in SNR (or highest number of simultaneously active sources) while maintaining non-singularity and independence between successive measurements.From a theoretical perspective, the S-matrix coded measurements are Aoptimal and D-optimal [19].The S-matrix is constructed such that each column (or row) has the maximum possible number of 1's while the matrix maintains full rank and remains nonsingular.Each column of the S-matrix is used as a multiplexing code for a corresponding measurement.The 1's and 0's in each column encode the sources that should be on and the sources that should be off, respectively, in the corresponding measurement.The number of columns of the S-matrix represents the number of FT measurements with distinct source distributions.As an example, a Hadamard S-matrix of size seven-by-seven is shown below: The multiplexing scheme encoded in the first column of the S-matrix in Eq. ( 5) stipulates sources numbered 1, 2, 3, and 5 to be on and sources numbered 4, 6, and 7 to be off in the first measurement.The matrix has seven columns so a total of seven measurements can be obtained.Moreover, in the measurements obtained using this multiplexing scheme, an average of four sources are active during each measurement making the data SNR orders greater than that of single-source measurements.In general, in a measurement system with N sources, the Hadamard-multiplexing scheme increases the SNR by a factor of N approximately (more accurate for large N) [16].Therefore, Hadamard multiplexing is extremely advantageous in measurement systems with a high number of sources.
In Fig. 1, a graphical comparison of conventional single-point illumination FT architecture and Hadamard-multiplexed FT architecture is presented in which the multiplexing scheme of the S-matrix in Eq. ( 5) is applied to an FT system with seven sources that illuminate a slabshaped turbid medium with two cylindrical fluorescent inclusions.As shown in Fig. 1, the radiative power entering the slab and exciting the fluorescent rods is considerably higher in the case of Hadamard-multiplexed FT.Meanwhile, Hadamard multiplexing does not require complex changes to the configuration of the conventional FT system.The only requirement is that the system must be modified so that simultaneous illumination of multiple points is possible.The linear model of FT, as formulated in Eq. ( 1), is modified with Hadamard multiplexing as y = WMx, (6) where W is the multiplexing matrix constructed from the Hadamard S-matrix entries as follows populating all of its diagonal entries.Here, d n and s n represent the number of FT detectors and sources, respectively.Hence, the matrix W in Eq. ( 7) is a square matrix with a size of d s n n -byd s n n .Also, the system matrix in the case of Hadamard-multiplexed FT becomes WM.Therefore, as predicted by the theory of Hadamard transform, the statistics of the noise in the FT data remain unchanged under multiplexing while the noiseless data vector, Mx , is amplified by W. This results in a boost in the FT data SNR.
Numerical studies
To study the performance of Hadamard-multiplexed FT, we apply it to a 2D numerical study as shown in Fig. 2. A rectangular turbid medium of dimensions 60 mm by 80 mm with scattering and absorption coefficients of 1 mm −1 and 0.01 mm −1 , respectively, houses two circular fluorophore inclusions and is illuminated by a varying number of equally spaced sources distributed on its boundary.The emitted fluorescent signal is collected on the boundary of the turbid medium by 23 equally spaced detectors.The propagation of the excitation and fluorescent photons in the 2D turbid medium is simulated by the FEM where a triangular mesh with 32305 nodes and 64000 elements is used to discretize the medium.To study the effect of Hadamard multiplexing on the quality of FT reconstructions, the simulated data is contaminated with various levels of additive white noise, which models the combined effects of read-out, dark-current, and shot noise, resulting in SNRs of 60, 50, 40, 30, 20, 10, and 0 dB in the single-point illumination configuration.Also, FT data are simulated for a varying number of sources increased in increments of 4 resulting in five FT configurations with 7, 11, 15, 19, and 23 sources illuminating the medium, as shown in rows labeled (i) of Figs.2(a), 2(b), 2(c), 2(d), and 2(e), respectively.The results from these varying source configurations can reveal the effects of the number of illuminating sources on the advantages offered by the Hadamard-multiplexed FT architecture.
The reconstructions are performed by multi-level scheme algebraic reconstruction technique (MLS-ART) which is a fast commonly used inverse solver that does not require optimal parameter selection [20,21] unlike regularized least-squares techniques [5,6,9].Hence, the reconstruction algorithm (including its parameters) is the same for all data SNRs and source configurations (the relaxation parameter of MLS-ART is set to 1 in all cases).The rows labeled (ii) in Fig. 2 show the reconstructions by MLS-ART from conventional singlepoint illumination FT data, and the rows labeled (iii) in Fig. 2 show the reconstructions by MLS-ART from Hadamard-multiplexed FT data.For high SNRs (60-40 dB), the reconstructions from single-point illumination data and Hadamard-multiplexed data are similar and possess high accuracy.As the data SNR decreases below 40 dB, the reconstructions from single-point illumination data become inaccurate and contaminated with noise-induced impulses.However, the reconstructions from Hadamard-multiplexed data remain considerably accurate down to a data SNR of ~10 dB.Furthermore, as shown in the results presented in Fig. 2, the denoising power of Hadamard-multiplexed FT increases as the number of sources illuminating the medium increases.The reconstructions from Hadamardmultiplexed data presented in row (iii) of Fig. 2(e) (corresponding to the study with 23 sources) possess higher noise-robustness compared to those in row (iii) of Fig. 2(b) (corresponding to the study with 11 sources).To compare the results presented in Fig. 2 quantitatively, the relative mean-square error (MSE) corresponding to each reconstruction is plotted in Fig. 3.This error is defined as where x represents the actual ground-truth fluorescent distribution vector, and x represents the reconstructed fluorescent distribution.As shown in Fig. 3, ε values higher than 5 are not included within the limits of the graph.Figure 3 clearly demonstrates that Hadamard multiplexing becomes considerably advantageous to single-point illumination as the data SNR decreases, and the number of sources increases.This advantage is expected based on the theoretical discussions presented in Sections 1 and 2.2.For high-SNR cases (>40 dB), the relative errors for both architectures are the same.However, for low-SNR studies, the error in the Hadamard-multiplexed cases remains below 1 for SNRs down to 10 dB, whereas in the single-point cases the relative errors grow larger than one for SNRs around or below 30 dB.It must be noted that we have used low number of sources to demonstrate the effect of Hadamard multiplexing on FT reconstructions.In practice, the number of FT sources is considerably higher resulting in remarkably higher practical advantage for Hadamard-multiplexed FT.
Experimental studies
In existing single-point illumination FT systems, the grid of source points is scanned sequentially by an optical fiber mounted on a translation stage [22].In Hadamard-multiplexed FT, a different scheme must be used for multi-point illumination.In this work, we developed a simple non-contact illumination configuration that allows for simultaneously flooding light on multiple points in the source grid.As presented in Fig. 4, after collimation, the visible or NIR radiation passes through a masked lenslet array.The mask grid blocks lenslets corresponding to the 0's of the Hadamard S-matrix while allowing the radiation to pass through lenslets corresponding to the 1's.The FT image acquisition configuration of this system is similar to existing non-contact FT systems where the excitation trans-illumination and fluorescent emission are imaged to a CCD by a lens and separated using a motorized filter wheel.The experimental studies were carried out using an in-house developed Hadamardmultiplexed phantom-based FT system as shown in Fig. 5.The collimated light beam of a 20-mW 635-nm He:Ne continuous-wave laser passes through an engineered diffuser and an opening that functions as an aperture to limit the beam waist arriving at the lenslet array.The 9-by-7 lenslet array focuses the light beam onto a grid of 63 points with a vertical pitch of 3 mm and horizontal pitch of 4 mm.The liquid phantom vessel is placed at the focal plane of the lenslet array so that its focal grid functions as a multi-point illumination pattern.As presented in Figs.To compare the performance of the Hadamard-multiplexed FT architecture with existing single-point illumination systems, the phantom experiments were repeated with single-point illumination architecture by replacing the Hadamard coded masks with single-element masks to keep the per-point radiative illumination power constant between experiments.3D reconstructions were performed on both sets of experimental studies by MLS-ART (with 10 full iterations through the system of equations and a relaxation parameter of 1) on a tetrahedral mesh discretizing the phantom volume with 132,325 nodes and 634,149 voxels.The results are presented in Fig. 6.The row labeled (i) in Fig. 6 shows the double-tube configuration of the phantom-based FT experiment.The reconstructions are presented in rows labeled (ii) and (iii) of Fig. 6.Columns labeled (a), (b), and (c) in Fig. 6 correspond to inclusion depths of 3, 6, and 9 mm, respectively.Reconstructions from single-point illumination FT are presented in the row labeled (ii), and those from Hadamard multiplexed FT in the row labeled (iii) in Fig. 6.As expected, the quality of the reconstructions deteriorates as the depth of the inclusions increases.While reconstructions of shallow inclusions (3 mm) from both single-illumination and multiplexed data have a reasonable level of accuracy as shown in column (a) of Fig. 6, the advantage of Hadamard multiplexed FT in enhancing robustness becomes evident as the depth of inclusions increases as presented in columns (b) and (c) of Fig. 6.Similar to the numerical studies, it can be observed that Hadamard multiplexing adds considerable robustness to 3D reconstructions particularly for deeper inclusions as the data will be more noise-sensitive and the reconstructions more prone to noise-induced errors.To quantitatively verify the robustness offered by Hadamard multiplexing, the relative MSEs associated with the 3D reconstructions presented in Fig. 6 are plotted versus inclusion depth in Fig. 7.The errors in the reconstructions from single-point illumination and multiplexed data for the 3 mm inclusion depth are very close as shown in Fig. 7.The MSE increases with inclusion depth for both illumination architectures.However, the increase in the reconstruction error associated with multiplexed architecture is significantly lower than the single-point illumination architecture especially for the 9-mm deep inclusions.The quantitative results presented in Fig. 7 further validate the observed improvements offered by Hadamard multiplexing in the 3D reconstructions of Fig. 6.
Discussion and conclusions
In this work, we introduced a multiplexing scheme built upon Hadamard S-matrix codes to replace and improve the existing single-point illumination architecture in FT with multi-point illumination for the purpose of increasing the SNR and throughput in FT systems and reducing the required tomographic scan times.The high cost of wide-band tunable highpower light sources and per-area illumination power limitations in in vivo optical imaging pose considerable challenges for developing high-throughput high-SNR FT systems.Hadamard multiplexing allows us to overcome these challenges without over-complicating the architecture of the FT system or significant cost increase.Hadamard multiplexing provides an optimal trade-off between the throughput (SNR) and information content of a set of FT measurements.As discussed in Section 1, single-illumination FT measurements provide high information content because of their spatially disjoint sensitivity maps while suffering from low-throughput making them attractive to cases involving thin or low absorptive tissues.Hadamard multiplexed FT offers an optimal trade-off where without significantly jeopardizing the information content of the measurements, a boost in the measurement SNR and throughput is obtained.
As shown in Figs. 2 and 6, the 2D and 3D FT reconstructions indicate that for low-noise FT scenarios with shallow inclusions, the performance of single-point illumination architecture is not significantly different from Hadamard-multiplexed architecture.Due to changes in the system matrix, its condition number, and singular values, along with changes in the experimental setup, the reconstructions from single-point illumination data in both numerical and phantom studies differ from reconstructions from Hadamard-multiplexed data, even for low-noise FT scenarios, as presented in Figs. 2 and 6.The difference between the two, however, becomes more significant as the noise level and depth of the inclusions increase.The data from deeper inclusions is more diffuse and hence more sensitive to and affected by noise contamination.The accuracy of reconstructions of shallow sources from low-noise data is high and of the same order of magnitude for both architectures as shown in the reconstruction error plots of Fig. 3 and Fig. 7.This shows that though the condition of system matrix is affected by Hadamard multiplexing, the reconstruction accuracy is very little jeopardized if any.This is in part due to the low condition number of the Hadamard Smatrices.The condition numbers corresponding to S-matrices of sizes 7, 15, 23, 31, and 63 are 2.82, 4, 4.89, 5.65, and 8, respectively.These condition numbers are significantly low compared to the typical condition numbers of the system matrix in FT, which can range from around 10 10 to above 10 20 depending on the geometry and optical properties of the turbid medium.As a result, when multiplied by the multiplexing matrix, W, as formulated in Eqs. ( 6) and ( 7), the condition number and singular values of the FT system matrix M do not change significantly.Hence, the FT reconstruction accuracy is negligibly impaired by Hadamard multiplexing.
The advantage of Hadamard multiplexed FT becomes evident as the data noise level, inclusion depth, and number of FT sources increase.This can be observed in the comparative trend of 2D and 3D reconstructions and their relative errors in Figs. 2, 3, 6, and 7.In numerical studies, as the FT data SNR decreases to ~30 dB and below, the reconstructions from single-point illumination data completely lose their accuracy and become dominated by artifacts.Meanwhile, reconstructions from Hadamard-multiplexed data preserve their accuracy down to a noise level equivalent to ~10 dB single-point data SNR.In phantom studies, as the depth of the two fluorescent rods increases, the corresponding FT data becomes more diffuse, and hence the 3D reconstruction become more ill-posed and noise-sensitive.As a result, though the noise characteristics of the CCD remain approximately the same (darkcurrent, read-out, and image noise), artifact contamination in the reconstructions increases with depth.Hadamard multiplexing offers improved robustness in reconstructing the rods at 9 mm depth over single-point illumination architecture as shown in Figs. 6 and 7.
Consequently, Hadamard multiplexed FT can enhance the performance of FT systems especially when suffering from limited illumination power or in imaging scenarios dealing with highly absorbing organs, such as the liver or the lungs in small animals.In this work, full Hadamard S-matrix multiplexing was proposed and studied for FT systems.Nevertheless, partial Hadamard multiplexing of the FT illumination architecture can also offer benefits over existing systems.In partial multiplexing, the illumination grid points are divided into groups (e.g., each grid line forms a group of 5 points), and the S-matrix multiplexing is applied to these groups instead of individual illumination points.In FT systems with limited flexibility over modification of the illumination geometry and optics such as commercial FT systems that use translation stage-based illumination raster scan, partial Hadamard multiplexing can be used to boost the throughput by simply adding one source (and one stage) per each line of the illumination grid.Depending on the degree of partial multiplexing (the total number of multiplexed entities or groups), the data SNR and system throughput can be improved over single-point illumination systems.In other FT systems where optical fiber bundles are used for raster scanning the illumination points, full S-matrix multiplexing can be implemented by simply re-programming the illumination sequence of light sources coupled to the fibers.
Modifications to existing FT system architecture required for full or partial Hadamard multiplexing do not add considerable complexity or cost to these systems, unlike recently explored surface illumination FT architectures [26][27][28][29].While structured illumination FT systems can offer advantages over single-point illumination FT, they require more complex hardware that can pose challenges and complications for arbitrary non-flat subject geometries [30][31][32][33][34][35][36].The advantage of Hadamard multiplexed FT is that it offers high robustness and high-throughput wide-field illumination similar to structured-illumination FT without the complex hardware requirements.Hadamard multiplexed FT only requires simple modifications to single-point illumination FT.In FT systems that use fiber bundles for subject illumination, Hadamard multiplexing can be realized by modifying the illumination sequence of the source fibers.In scanning source FT systems, multiplexing can be realized either by a masked lenslet array configuration as presented in this work, or by adding extra source fibers to the system.Compared to structured-illumination FT systems where spatial light modulators (SLM) add significant complexity, cost, and power loss and are only optimal for flat or slab subject geometries, Hadamard multiplexing offers lower cost, complexity, and versatility.As a result, Hadamard multiplexed FT can offer improvements over existing FT systems.
In conclusion, it was shown that Hadamard multiplexed FT provides a versatile solution for improving the SNR, throughput, robustness, and speed of FT systems.The Hadamardmultiplexed FT architecture enhances the accuracy and robustness of FT reconstructions in low-SNR scenarios especially when the number of sources used for illumination is sufficiently high.Additionally, Hadamard multiplexing does not harm the quality of the reconstructions for high-SNR FT scenarios.It was shown that Hadamard multiplexed FT can be realized using hardware whose complexity and cost are not higher than existing singlepoint illumination FT architectures.These characteristics of Hadamard multiplexed FT, as demonstrated in this work, make it advantageous over existing FT systems.
Fig. 1 .
Fig. 1.In a conventional FT system depicted in row (a), in each measurement one source illuminates the box-shaped turbid medium housing two fluorescent rods.In Hadamardmultiplexed FT, depicted in row (b), multiple sources (four out of seven) illuminate the medium in each measurement.A total of seven measurements are performed in each configuration.The S-matrix Hadamard encodings are based on the S-matrix formulated in Eq. (5).
a square diagonal matrix of size d n -byd n with an scalar denoted by ij S
Fig. 3 .
Fig. 3.The relative mean-square errors (ε) of the MLS-ART reconstructions presented in Fig. 2 versus the data SNR.
Fig. 4 .
Fig. 4. Schematic of the non-contact Hadamard-multiplexed FT system.Visible or NIR radiation from a laser source is collimated and directed onto a lenslet array with an S-matrix mask mounted on it.The phantom is placed at the focal plane of the lenslet array.The nonmasked lenslets form multi-point Hadamard S-matrix illumination patterns on the phantom.The radiation diffuses through the liquid phantom, excites the fluorescent inclusions (two rods) whose emission is imaged to a cooled CCD camera by an objective lens.
5(b) and 5(c), S-matrix masks are mounted on the lenslet array to create Hadamard-coded multi-point illumination patterns on the phantom.Given the number of source locations, 63 Hadamard codes are used sequentially for the multiplexed FT scan.The liquid phantom used in the experimental studies is a water-based mixture of Intralipid-1% and India ink with scattering and absorption coefficients of 0.8 mm −1 and 0.05 mm −1 [23, 24].The mixture is poured into a rectangular vessel with transparent plexiglass sides and dimensions of 120 mm by 90 mm by 14 mm.The fluorescent dye used in the phantom experiments is a 100 µM dimethyl sulfoxide (DMSO)-based solution of Oxazine 750 Perchlorate whose emission peaks around 700 nm when excited at 635 nm.Two capillary glass tubes with an inner diameter of 1 mm are partially filled with the fluorescent dye to form a pair of fluorescent cylinders with 1 mm diameter and 10 mm height.The capillary tubes are made of relatively thin glass (thickness of around 100 microns).Hence, the error in the light diffusion model from the thin glass is negligible considering the dimensions of the slab phantom.The dye-filled tubes are suspended in the center of the liquid phantom by an optical post mounted on a translation stage for accurate positioning as depicted in Fig. 5(a).Using the translation stage, the dye-filled tubes are positioned at depths of 3 mm, 6 mm, and 9 mm from the front surface of the phantom vessel facing the camera.The trans-illumination and fluorescent emission are imaged from the front side of the phantom to a cooled CCD camera (SBIG ® ST-10E) through a motorized filter wheel for separate acquisition of transillumination and emission images.The image acquisition is performed at a field of view (FOV) of 12 degrees with a binning factor of 4 and average exposure time of 15 sec⁄image.The CCD camera is cooled down to around −10 °C to minimize the thermal noise.Darkframe images (with laser off) are acquired in each measurement and subtracted from data images to correct for read-out noise, stray light effects, and other unwanted signals.Born normalization is performed on the acquired data images to facilitate quantification in the 3D reconstructions [25].#199881 -$15.00USD Received 21 Oct 2013; revised 10 Jan 2014; accepted 5 Feb 2014; published 18 Feb 2014 (C) 2014 OSA1 March 2014 | Vol. 5, No. 3 | DOI:10.1364/BOE.5.000763 | BIOMEDICAL OPTICS EXPRESS 772
Fig. 5 .
Fig. 5.The phantom-based Hadamard-multiplexed FT system: a) Picture of the experimental system.b) A Hadamard S-matrix mask mounted on a lenslet array is illuminated with collimated beam of laser radiation.c) The S-matrix mask produces the desired excitation source pattern on the phantom surface.
Fig. 6 .
Fig. 6.Phantom-based experimental results: (i) the double-tube configuration of the fluorescent inclusions in the slab-shaped phantom.3D reconstructions are performed by MLS-ART on (ii) conventional single-point illumination phantom FT data, and (iii) Hadamard-multiplexed FT data, where the depth of the pair of fluorescent tubes is (a) 3 mm, (b) 6 mm, and (c) 9 mm from the phantom surface facing the camera. | 8,687 | 2014-03-01T00:00:00.000 | [
"Mathematics"
] |
Development of a fluorescence-based method for the rapid determination of Zika virus polymerase activity and the screening of antiviral drugs
Zika virus (ZIKV) is an emerging pathogen that has been associated with large numbers of cases of severe neurologic disease, including Guillain-Barré syndrome and microcephaly. Despite its recent establishment as a serious global public health concern there are no licensed therapeutics to control this virus. Accordingly, there is an urgent need to develop methods for the high-throughput screening of antiviral agents. We describe here a fluorescence-based method to monitor the real-time polymerization activity of Zika virus RNA-dependent RNA polymerase (RdRp). By using homopolymeric RNA template molecules, de novo RNA synthesis can be detected with a fluorescent dye, which permits the specific quantification and kinetics of double-strand RNA formation. ZIKV RdRp activity detected using this fluorescence-based assay positively correlated with traditional assays measuring the incorporation of radiolabeled nucleotides. We also validated this method as a suitable assay for the identification of ZIKV inhibitors targeting the viral polymerase using known broad-spectrum inhibitors. The assay was also successfully adapted to detect RNA polymerization activity by different RdRps, illustrated here using purified RdRps from hepatitis C virus and foot-and-mouth disease virus. The potential of fluorescence-based approaches for the enzymatic characterization of viral polymerases, as well as for high-throughput screening of antiviral drugs, are discussed.
pair (Supplementary
). The amplification conditions used were as follows: 98 °C (3 min), 30 cycles of 98 °C (15 s), 51 °C (20 s) and 72 °C (4 min) each, and 10 min of elongation at 72 °C. The RdRp domain was amplified from a pcDNA vector containing the full-length NS5 gene. PCR amplification was similar to that described above but using the specific primers NS5 short_pET16_Fw and NS5_pET16_rv (Supplementary Table S1). After purification, the vector and insert were mixed in the presence of 2 × Gibson Assembly Master Mix and the assembly reaction was carried out following the recommendations of the manufacturer. The assembled product was transformed into E. coli BL21(DE3)-pRIL cells. After plasmid extraction from three independent bacterial colonies, nucleotide sequencing determined that two DNA samples contained the correct construct. The resulting plasmid pET16a-ZIKV-NS5RdRp encodes for ZIKV RdRp fused to an HHHHHHHHHHSSGHIEG amino acid tract in its N-terminus that is used for affinity purification using HisPur TM Ni-NTA resin The predicted molecular weight of this protein is 75 kDa.
A catalytically inactive enzyme was prepared by site-directed mutagenesis of the pET16a-ZIKV-NS5RdRp plasmid, encoding for substitutions D665N and D666N in the active site, which affects two catalytic Asp residues. The amplification reagents were the same as above with primers NS5_GNN_Fw and NS5_GNN_rv (Supplementary Table S1) and pET16a-ZIKV-NS5RdRp as template. PCR reaction conditions used were 98 °C (3 min), 30 cycles of 98 °C (15 s), 54 °C (20 s) and 72 °C (4 min), followed by an elongation step of 10 min at 72 °C. The resulting expression plasmid was termed pET16a-ZIKV-NS5RdRp-GNN.
The plasmid for the expression of HCV NS5B polymerase was prepared using the Gibson assembly method as described above. Briefly, the pET28a vector backbone was amplified by PCR using primers pET28a_Fw and pET28a_rv (Supplementary Table S1). An insert containing the HCV polymerase sequence was obtained by PCR amplification of plasmid Jc1FLAG2(p7-nsGluc2A) 37 , using primers NS5B_HCV_Fw and NS5B_Δ21_HCV_rv (Supplementary Table S1). Both vector and insert were assembled as described above. The resulting plasmid, termed pET28-HCV NS5bΔ21, encodes for HCV NS5b polymerase lacking the most C-terminal 21 amino acids and containing a C-terminal His-tag (LEHHHHHH). The predicted molecular weight of this recombinant protein is 65 kDa. For the construction of pET28-HCV NS5bΔ21-GNN (expressing a catalytically-inactive RdRp) we used Gibson assembly and plasmid Jc1FLAG2(p7-nsGluc2a)/GNN 37 to generate the insert. All the constructs were analyzed by sequencing to confirm the presence of the expected insert and the absence of undesired mutations.
Expression and purification of viral polymerases. For the expression of ZIKV NS5 RdRp (hereafter referred to as ZIKV RdRp), E. coli cells were transformed by electroporation with pET16a-ZIKV-NS5RdRp. Single kanamycin and chloramphenicol resistant colonies were cultured overnight in 10 mL of LB in the presence of antibiotics at 37 °C. Each culture was then inoculated into 200 mL of LB with antibiotics and incubated at 37 °C. When an optical density at 600 nm of 0.7 was reached, 500 μM IPTG, 50 μM MgCl 2 and 50 μM ZnCl 2 were added to the culture, which was incubated at 30 °C for 4 additional hours. Cells were then pelleted by centrifugation at 5,000 rpm for 15 min at 4 °C and stored at −80 °C until further use.
The bacterial pellets recovered from 200 mL cultures were resuspended in 20 mL of lysis buffer [50 mM Tris HCl, pH 8.0, 300 mM NaCl, 400 mM ammonium acetate, 4 mM MgCl 2 , 10%, glycerol, 10 mM imidazole, and 0.1% (v/v) Tween 20] and sonicated on ice for 6 cycles of 20 s alternating with 5 cycles of 10 s. Cell debris was pelleted at 11,000 rpm for 30 min at 4 °C and the supernatant mixed with 800 μL of Ni-NTA resin previously equilibrated with 20 volumes of lysis buffer without ammonium acetate (BWE buffer). The lysate was incubated with the resin in batch method with gentle mixing during 1 h at 4 °C. The unbound fraction was then removed by decantation and the resin was then loaded onto a column and extensively washed with 20 × column volumes of BWE buffer and 20 × column volumes of BWE buffer containing 25 mM imidazole. The resin was further washed with increasing concentrations of imidazole (successive one-column volumes of BWE buffer containing 50, 60, 70, 80, 90, 100 and 125 mM imidazole). Finally, the His-tagged protein was eluted in 400 μL of BWE buffer containing 400 mM imidazole. The sample was dialyzed for 3 hours at 4 °C against 200 volumes of dialysis buffer [50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 5 mM MgCl 2 , 10%, glycerol, 1 mM DTT and 0.05% (v/v) Tween 20]. Samples obtained from different purification batches were pooled, quantified, aliquoted and stored at −80 °C until further use. Expression and purification of the recombinant ZIKV NS5RdRp-GNN (with D665N and D666N substitutions) was carried out following the same protocol. Likewise, the expression and purification of HCV NS5bΔ21 and NS5bΔ21-GNN polymerases was carried out following the same protocol described for ZIKV NS5. The protocol for the expression and purification of FMDV 3D polymerases has been described previously 35,36 . Fluorescence-based activity assay for ZIKV RdRp. For the detection of RNA synthesis by ZIKV RdRp, we established a real-time assay based on the fluorescent dye SYTO 9, which binds dsRNA but not ssRNA template molecules. The fluorescence emitted was recorded in real-time using a Fluostar Optima fluorimeter (BMG Labtech) using excitation and emission filters at 485 and 520 nm, respectively. The assay records the synthesis of dsRNA in a reaction using a poly-U molecule as a template and ATP as the nucleotide substrate. This technique has been adapted from methods previously documented for the detection of DNA synthesis 38 .
Reactions were performed in individual wells of black 96-well flat-bottom plates. The standard reaction contained 50 mM Tris-HCl, pH 7.5, 2.5 mM MnCl 2 , 500 μM ATP, 20 μg/mL poly-U, 0.1 mg/mL BSA and 0.25 μM SYTO 9 (50 μM stock solution in TE buffer pH 7.5). The assay was initiated by the addition of 250 nM ZIKV RdRp and the fluorescence was recorded over 30 min at 30 °C.
Variations on this assay, for example, different concentrations of reagents and/or the presence of additional compounds, are specifically indicated in each corresponding section. For graphical representation, background fluorescence obtained at time point 0 was subtracted from each value.
To determine K m and V max constants for ZIKV RdRp binding to poly-U ssRNA, standard reactions were carried out in increasing concentrations of the template (0.5-50 μg/mL) in the presence of ATP at 500 μM. The kinetic parameters for ATP were obtained from assays in the presence of increasing concentrations of this nucleotide (200-2250 μM) and using 3 μg/mL of poly-U.
IC 50 values were obtained from standard reactions carried out in the presence of 3 μg/mL poly-U and 1500 μM ATP, and increasing concentrations of each inhibitor.
End-point fluorometric reactions were performed in black 96-well black-flat bottom plates at 30 °C in the presence of the same reagents as described above, but in the absence of dye. The reactions were quenched at 60 min by adding in 25 mM EDTA to the samples. Either SYTO9 or SYBR Green II dye was then added to the sample (0.25 μM or 1x, respectively) and the mix reaction was incubated at room temperature for 5 min to allow the stabilization of RNA-dye complexes and fluorescence emission. To determine background fluorescence levels, a negative control was assayed in parallel, where the reaction was quenched before adding ZIKV RdRp. The quenched control reaction was incubated for 1 h at 30 °C, and then 0.25 μM SYTO9 or 1 × SYBR Green II, respectively, was added to the sample and fluorescence recorded as described above.
Data analysis.
Fluorometric results were expressed as mean ± SD. Statistical significance was analyzed by two-way ANOVA using GraphPad Prism, version 7, as specified in the figure legends. K m determinations were obtained by plotting the velocity of the reaction as a function of nucleotide or ssRNA template concentrations using nonlinear regression. IC 50 values were obtained by fitting the velocity data to a four-parameter logistic equation. Kinetic parameters and IC 50 values were calculated using Sigmaplot, version 11. Z' factor was calculated according to Zhang et al. 39 where "c+" is the activity obtained in a standard assay and "c−" is the nonspecific activity obtained in a control performed in the absence of MnCl 2 .
Results
Purification and biochemical characterization of recombinant ZIKV RdRp. The RdRp domain of ZIKV NS5 and a catalytic inactive mutant (GNN) were purified as described in Methods. Recombinant proteins were ≥95% pure as judged by PAGE analysis and Coomassie brillant blue R-250 staining (Fig. 1A).
The overexpression and biochemical characterization of ZIKV RdRp under different experimental conditions has been previously published [40][41][42] . For the preliminary evaluation of ZIKV NS5 RdRp domain activity in vitro, we adapted a polymerization assay based on the detection of radioactive nucleotides incorporated by the polymerase. This method makes use of a homopolymeric ssRNA as a template in the absence of any primer, since it has been previously demonstrated that ZIKV NS5 can initiate RNA synthesis de novo 42 . The reactions were performed in the presence of radioactive-labeled nucleotides, and polymerization products were resolved by PAGE. RNA synthesis in the absence of primer was observed both in the presence of poly-U and [α-32 P]ATP as template and nucleotide substrates (Fig. 1B), and in the presence of poly-C and [α-32 P]GTP (Fig. S1). We observed polymerization activity de novo in the presence of Mn 2+ but not Mg 2+ , in agreement with a previous observation 41 (Fig. 1B). The same reaction in the presence of the catalytically-inactive mutant GNN showed no detectable signal (Fig. 1B, lanes 7 to 9).
Previous studies suggested that, under certain circumstances, flaviviral polymerases can catalyze the terminal transference of nucleotides to RNA. However, this transferase activity has never been reported for ZIKV RdRp 42 . To rule out the possibility that the incorporation of nucleotides detected in our assay was due to the terminal transference of nucleotides and not to de novo RNA synthesis (as we expect), we performed the same assay but in the presence of radioactive nucleotides that were less competent for viral RNA synthesis: [α-32 P]GTP to poly-U and [α-32 P]ATP to poly-C. As shown in Fig. S2B, no elongation was detected under these conditions, supporting the notion that the activity detected was due to de novo RNA synthesis. Thus, these results show that both homopolymeric templates, poly-C and poly-U, can be used by ZIKV to initiate RNA replication, as has been previously documented 42 .
Detection of ZIKV RdRp polymerization activity by fluorometric assays in real time. Based on the above results, we next sought to detect polymerization activity using a fluorescence-detection method. For this aim, we attemped to establish an assay to quantify RNA synthesis activity as the relative increase in fluorescence emitted by SYBR Green II dye after binding to dsRNA. This procedure was adapted from methods previously described to detect dsDNA synthesis by the human primase-polymerase PrimPol 38 . We anticipated that binding of this intercalating agent to dsRNA generated by ZIKV RdRp polymerization activity would lead to an increase in the emitted fluorescence.
Preliminary real-time assays, involving the addition of SYBR Green II to the sample before initiating the reaction, showed an undetectable (using poly-C) or barely detectable (using poly-U) increase in fluorescence. In contrast to real-time experiments, we found significant increases in polymerase activity in an end-point experiment where the dye was added after the reaction was completed (Fig. S1). Previous studies have documented that an excess of SYBR Green I, chemically related to SYBR Green II, can inhibit other polymerase activities, such as those of Taq polymerase 43 or human PrimPol 38 . Our resulted suggested that SYBR Green II acts as an inhibitor of ZIKV RdRp activity. Thus, we decided to test other fluorescent dyes for the real-time detection of newly synthesized dsRNA. It has been reported that SYTO 9 dye shows lower interference on polymerization assays when binding to dsDNA 44,45 . In contrast to the assays with SYBR Green II, we found that both end-point and real-time polymerization assays resulted in similar increases in fluorescence when using poly-U as template (Fig. S1, compare A with D). The relative increase in emitted fluorescence (ratio between the values obtained after a 60 min reaction and the background value observed at time 0) was similar using both approaches. This result suggested that SYTO 9 does not inhibit ZIKV RdRp, and thus can be used for real-time detection of activity. We also (2019) 9:5397 | https://doi.org/10.1038/s41598-019-41998-1 www.nature.com/scientificreports www.nature.com/scientificreports/ www.nature.com/scientificreports www.nature.com/scientificreports/ observed high reproducibility among different experimental samples, as reflected in the modest standard error values in different experiments (Fig. S1D). According to our assay, dsRNA synthesis was linear up to 60 min, and then reached the maximum accumulation of product at 150-180 min ( Fig. 2A). ZIKV RdRp activity is dependent on the presence of Mn 2+ in agreement with our data in radioactivity-based assays. The disruption of the catalytic site (RdRp GNN) also led to complete loss of polymerase activity (Fig. 2B). Likewise, terminal transferase activity was not detected in assays using poly-U as template and either GTP or UTP as substrate.
To further investigate the possible use of SYTO 9 dye to detect activity in the presence of other templates, we used poly-C. However, we detected increases in fluorescence only in end-point reactions, when SYTO 9 was added after the recation was complete, and not in a continuous reading assay when it was added before initiating the reaction (Fig. S1B). These results suggest that poly-C is not a suitable substrate for real-time assays.
Optimization of the fluorescence-based assay. To improve the detection of RNA synthesis, we examined how changes in the concentration of reagents (i.e., NaCl, DTT, MnCl 2 and enzyme) affected RdRp activity. The presence or absence of DTT and NaCl in the assay had little effect on RNA synthesis, which was only slightly impeded at high concentrations (Fig. S3A,B). As expected, no increase in fluorescence was detected when using MgCl 2 (0 to 20 mM), whereas maximum activity was recorded with 2.5 mM MnCl 2 (Fig. S3C). The increase in the velocity of reaction correlated linearly with increases in RdRp concentration along the 10-250 nM range. The maximum velocity was reached with 750 nM RdRp in the assay (Fig. S3D). From these assays we obtained a K m for poly-U of 3.3 ± 0.5 μg/mL (~31 nM) and a K m for ATP of 561 ± 38 μM (Fig. S4).
Fluorescence-based activity can be inhibited by broad-range antiviral compounds.
We hypothesized that this fluorescence-based method could be exploited for the development of high-throughput screening methods to identify polymerase inhibitors. To test this, we used several broad-spectrum nucleoside and non-nucleoside polymerase inhibitors. Addition of polymerase NNI heparin 46,47 to the reaction completely abrogated fluorescence-associated activity (Fig. 3A). To further confirm the sensitivity of our assay to inhibitors, we tested two nucleoside analogs: cordycepin 5′-triphosphate (3′dATP; a chain terminator analog of ATP [48][49][50] and ribavirin 5′-triphosphate (RTP; a purine analog that inhibits but does not terminate RNA elongation during viral replication [51][52][53][54]. Both compounds reduced the polymerase activities (Fig. 3A). We calculated the IC 50 values of these compounds (Fig. S3): as expected, the most potent inhibitor was the NNI heparin (IC 50 = 81 ± 21 nM), followed by the NAIs, 3′dATP (54 ± 7 μM) and RTP (946 ± 46 μM). To confirm that the decrease in fluorescence was linked to the inhibition of RNA synthesis, we repeated these experiments using radioactive-labeled nucleotides. We found a reduced polymerization activity in the presence of inhibitors (3′dATP, RTP and heparin) that correlated with the aforementioned IC 50 values (Fig. 3B). Similar inhibitory activities were observed when poly-C or poly-U were used as template molecules in radioactive-based activity assays (Fig. S2).
The robustness and suitability of this assay as a prospective, high-throughput method to screen polymerase antiviral compounds was examined by calculating the Z′ value, a standard statistical measure to evaluate the quality for high-throughput platforms 39 . The relative activity of both positive and negative controls was calculated as the average value obtained from 8 independent experiments (see Methods). Each experiment was carried out in triplicate and on independent days. Relative activity values were determined as the velocity of polymerization www.nature.com/scientificreports www.nature.com/scientificreports/ recorded during the first 10 min of the reaction. The mean Z' value obtained was 0.62, which according to published standards, qualifies our method as an excellent assay for high-throughput screening application 39 .
Fluorescence-based activity assay can be adapted to monitor different viral RdRps. To investigate whether the assay can be adapted to other viral polymerases, we used two unrelated RdRps from FMDV (3Dpol) and HCV (recombinant NS5B) (Fig. 4A). We found that recombinant HCV polymerase can synthesize RNA de novo using radiolabeling (Fig. 4B) and the fluorescence-based approach (Fig. 4C), which is in agreement with the mechanism of genome replication for this virus 52,53 . Again, an increase in fluorescence was only observed with a catalytically-active HCV RdRp (NS5bΔ21) but not an inactive mutant, and the activity was dependent on the presence of Mn 2+ (Fig. 4C). We also found that FMDV 3D polymerase can catalyze RNA synthesis in vitro, in an assay primed by the viral protein-primer VPg. It has been previously demonstrated that FMDV 3D catalyzes the addition of a uridine-monophosphate residue to Tyr3 in VPg. Once uridylylated, VPg can act as a competent primer to initiate viral genome replication 55,56 . Vpg protein-primed polymerization in vitro can be achieved using a polyadenylic acid as template, UTP as substrate, Mn 2+ as catalytic metal and the VPg1 peptide synthetically produced 35 . Real-time experiments showed an increase of fluorescence as a function of time when using active 3D, whereas no activity was monitored in the presence of a catalytically-inactive protein, or in the absence of Mn 2+ or VPg1 (Fig. 4D). Overall, our data show that this 96-well format assay can be exploited to characterize different viral RdRps in vitro, as it permits real-time monitoring of replication, allowing the characterization of small-molecule libraries in a cost-effective and rapid manner.
Discussion
There is an urgent need to develop new treatments for ZIKV infection and to control its rapid geographical spread. Different approaches for the discovery of potential small-molecule inhibitors include the screening of chemical libraries, molecular modeling and virtual screening 29,57-59 . Although promising developments in this direction have been achieved (reviewed in 32 ), there are as yet no antiviral agents licensed against ZIKV at the www.nature.com/scientificreports www.nature.com/scientificreports/ clinical level. Owing to significant differences in the mechanisms of replication between cellular DNA and viral RNA genomes, the latter involving the synthesis of RNA molecules templated by RNA, RdRps are attractive targets for the development of specific antiviral treatments. The use of antivirals against non-RdRp viral polymerases, such as human immunodeficiency virus and hepatitis B virus reverse transcriptases, and herpes virus DNA polymerase, supports the suitability of this group of enzymes as therapeutic targets 60 . Accordingly, the development of a fast and reproducible method for the screening of compounds with anti-ZIKV properties is a promising advance.
Several methods for high-throughput drug screening against virus RdPs have been described and validated [61][62][63][64][65][66] . However, there are several practical limitations to these approaches, such as the requirement for radioactive substances, entailing additional biosafety measures 63 , or an arduous experimental setup 65,66 when compared with fluorescence-based methods 61,62,64 . Indeed, an advantage of fluorescence-based methods over traditional approaches is the absence of radioactive compounds, which facilitates their broad use in different laboratory settings without specific facilities or training requirements. An additional advantage of our strategy is that it allows the time-resolving determination of the polymerase activity, which we believe gives further insight into the mechanisms of replication and inhibition.
There have been previous attempts to develop high-throughput activity assays to identify drug inhibitors specifically against ZIKV RdRp. These methods include the use of either radioactive nucleotides 41 , or costly fluorescent-labeled RNA substrates 40 . Our method has the advantages of being a more economically affordable alternative, as it uses inexpensive homopolymeric RNA as a template substrate instead of labeled synthetic heteropolymeric RNA. Our approach would also allow for assay scale-up to high-throughput formats (e.g., 96 or 384-well formats, automation, etc.), for the rapid testing of small-molecule compound libraries 61,62,64 . www.nature.com/scientificreports www.nature.com/scientificreports/ Fluorometric measurements of polymerase activity in the absence/presence of three representative inhibitors (heparin, 3′ATP and RTP) revealed a positive correlation with the results using a traditional assay based on radiolabeled nucleotides, further validating the fluorescence-based approach for the screening of antiviral compounds. The possibility of visualizing dsRNA synthesis in real-time increases the sensitivity of the assay, allowing an accurate determination of replication kinetics in the presence or absence of the drug and the predicted affinity constants of the compound tested. This method also permits the characterization of inhibitory molecules in vitro, which is of use in the identification of prospective antivirals. As part of our studies we have shown that RTP elicits a mild inhibition on ZIKV RdRp polymerization in vitro. We posit that the observed inhibition might be a consequence of reduced polymerase efficacy to elongate molecules where an RTP residue is incorporated [52][53][54]67 . RTP is not a chain terminator and its incorporation into the viral RNA has been linked to an increase in transition mutation rates in vitro and in cell culture for different RNA viruses, including ZIKV [68][69][70][71] .
Under different experimental conditions, including the use of different concentrations of SYBR Green II and SYTO 9, we found that poly-C RNA was not a suitable substrate for the detection of polymerase activity in real-time. Conversely, both poly-A and poly-U homopolymers were effective as template molecules in real-time assays. We hypothesize that the inhibition of polymerase activity is produced by the interaction of RdRp with the dye during dsRNA synthesis. It is possible that poly-G synthesis as a result of replicating poly-C can lead to non-canonical G-quadruplex structures 72 , which in the presence of a fluorophore can further increase their inherently elevated stability 44 . Thus, these RNA-dye complexes might impede the effective elongation by the viral RdRp.
In conclusion, we have demonstrated that the procedures developed here can be easily adapted to measure polymerization activity of several viral RdRps, strengthening our method as a universal procedure for the development of high-throughput tools to characterize viral polymerases (e.g., enzymology, polymerase variants of interest) and to screen small-molecule libraries to identify antiviral drugs. In particular, we believe that this platform can be a useful tool for the development of therapeutics against ZIKV and other flaviviruses, which are currently unavailable. | 5,632.2 | 2019-04-01T00:00:00.000 | [
"Biology"
] |
The Comprehensive Study of Electrical Faults in PV Arrays
The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency, and safety in PV systems and, if not detected, may not only reduce power generation and accelerated system aging but also threaten the availability of the whole system. Due to the current-limiting nature and nonlinear output characteristics of PV arrays, faults in PV arraysmay not be detected. In this paper, all possible faults that happen in the PV system have been classified and six common faults (shading condition, open-circuit fault, degradation fault, line-to-line fault, bypass diode fault, and bridging fault) have been implemented in 7.5 KW PV farm. Based on the simulation results, both normal operational curves and fault curves have been compared.
Introduction
Renewable energy is energy which can be obtained from natural resources that can be constantly replenished like water, wind, sun rays, and so forth [1].It is necessary to achieve more sustainable energy system.Among the most important renewable sources and most widely used across the globe is the solar energy [2,3].PV markets are growing fast because of their advantages such as long life of PV panel, installation in different geographical conditions such as impassible areas and mountains, usability on mobile hosts, easy maintenance, off-grid installing, and ability to connect to utility grid which have depicted a bright future for the use of photovoltaic system in the world [4,5].On the other hand, the rapid growth rate is mainly due to the need for alternatives to fossil fuel-based electricity generation, concerns over the global environment, reduced photovoltaics costs, and interests in distributed energy sources to improve power system reliability [6,7].The efficiencies of inverters which convert the direct current generated by the modules into alternate current are already close to maximum about 99 percent [8].Therefore, no significant gains are possible from improving inverter efficiency.Alternately, PV array outputs can be increased by improving the efficiency of the PV modules.The last surveys have showed that the median values of efficiency of different modules technologies such as GaAs (thin film), crystalline silicon, and Si (Amorphous) were close to 28.8, 25.6, and 10.2 percent, respectively [9,10].Improving efficiencies through better materials is an important field [11].Another way to improve PV array output is to ensure that the array operates in optimal output conditions at all times.PV arrays once installed are expected to operate with minimal human intervention.PV arrays perform below optimum output power levels due to faults in modules, wiring, inverter, and so forth.Most of these faults remain undetected for long periods of time resulting in loss of power.Technicians sent to locate and fix the faults within an array need to take time consuming field measurements.So enquiries for lower cost and high efficiency-devices motivate the researchers to increase the reliability of PV system.
Fault analysis in the solar PV arrays is a fundamental task to eliminate any kind of dangerous and undesirable situations arising in the operation of PV array due to the presence of faults.They must be detected and cleared off rapidly.Without proper fault detection, noncleared faults in PV arrays not only cause power losses but also might lead to safety issues and fire hazards [12].Photovoltaic systems are subjected to different sort of failures; thus, before starting monitoring system and fault diagnosis methods, it is necessary to identify what kind of failures can be found in the real system.The first step in this challenge is recognition and classification of all possible electrical faults in PV arrays.The fault detection methods for the PV system are classified in the visual (discoloration, browning, surface soiling, and delamination), thermal (thermal extraordinary heating), and electrical (dark/illuminated - curve measurement, transmittance line diagnosis, and RF measurement).Using electrical signatures is more advantageous and promising for the monitoring and diagnostic systems [13,14].This characteristic of electrical methods offers helpful data in diagnosing a PV cell's health.Furthermore, the - and - curves analyses are fundamental tool to understand the fault scenarios among PV strings and the impact of these fault in basic output parameters such as open-circuit voltage ( oc ), short-circuit current ( sc ), maximum power point voltage ( mpp ), and maximum power point current ( mpp ) in - and - curves.
In this paper in Section 1 the basics of PV modules model as electrical components are described.In Section 2 challenges to fault analysis in PV arrays are expressed.In Section 3 we introduce comprehensive classification of electrical faults in a PV system.Finally, based on the circuit-based simulation model, various types of faults will be developed by changing conditions or inputs in the simulation, and the - and - characteristics of faulted and clean PV array have been compared for each type of fault.
Background
The issue of modeling of PV arrays under electrical faults has been largely investigated in the literature and gets some certain results.A survey of state-of-the-art of ground, lineto-line, and arc fault detection is presented in [15].
In [16] Chao et al. developed a circuit-based simulation model of a photovoltaic panel using the PSIM software.A 3 kW PV array system was established using extended correlation function to identify the different fault types of the PV system.In [17] Takashima et al. used earth capacitance measurement to locate faults in PV module arrays.Furthermore, in another study [14], they experimentally studied earth capacitance measurement and Time-Domain Reflectometry (TDR) to detect degradation (increase in series resistance between the modules) and the fault position in the string.In [18] Yagi et al. developed a diagnostic technology for PV systems based on statistically analyzed data to detect shading effect and inverter failure in PV arrays.In [19] unique fault evolution in a PV array during night-to-day transition and effect of a maximum power point tracker on fault current have been discussed.
In [20] Yamada et al. conducted simulations for PV modules on the reflection loss using the optical performance of a four-layer encapsulation.In [21] Nguyen studied impact of varying position, different levels of solar irradiation, and the performance of bypass diode under nonuniform irradiation levels.In [22] Firth et al. developed novel analysis techniques to identify four types of faults: sustained zero efficiency faults; brief zero efficiency faults; shading; and nonzero efficiency nonshading faults.Three independent applications to measure the effects of soiling have been suggested by Hammond et al. in [23].In [16,24] only power versus voltage (-) characteristics is simulated for a few types of faults in a PV array.In addition, [25] discusses the MPPT reliability of a PV array under partial shading rather than faults in the array.
In none of previous studies mentioned above comprehensive classification of electrical faults scenarios in the PV system and the impact of all possible faults on the - and - curves have been performed properly.
Modeling and Simulation of PV Modules
3.1.Models for Solar Cells.Because of the nonlinear - characteristics of solar cells, it is not appropriate to simply model them as a constant voltage source or a constant current source.The electrical performance of a photovoltaic cell can be approximated by the equivalent circuit shown in Figures 1(a) and 1(b).The one-diode model and the double-diode model are most commonly used to describe the electrical behaviors of solar cells [26,27].
In this paper we adopt the one-diode model for solar cells in simulation, because the one-diode model has several advantages over the double-diode model such as enough accuracy for steady-state and fault analysis for PV modules in system level, data available for the most PV modules in market, and rapid responses in simulation environment [28].
Based on the properties of - semiconductors and onediode model, the - characteristics of a PV panel with cells are characterized using the following equation: The dependence of the photocurrent on the irradiance () and cell temperature () can be described by the following empirical equation [11,26,27]: The reverse saturation current varies with solar cell surface temperature () [11,26,27].It can be described by Depending on the semiconductor material used for PV modules, may have different values.Usually is approximate 1.12 eV for crystalline silicon, 1.03 eV for copper indium diselenide (CIS), 1.7 eV for amorphous silicon, and 1.5 eV for cadmium Telluride (CdTe) under room temperature [11,26,27].
Modeling Algorithm.
In real working conditions, solar cells packaged in the same module usually have almost the same irradiance conditions.For these reasons, assume that all the solar cells in each PV module have identical characteristics and working conditions.Thus, a PV module can be viewed as a basic unit consisting of identical solar cells.Therefore, modeling and simulation of PV modules become key steps for PV system normal and fault analysis.A bypass diode is usually connected in parallel across multiple cells to improve operation of solar system under nonuniform condition.
According to the one-diode model of PV modules in Figure 2, by using voltage PV , , and as input parameters, the modeling algorithm solves equations to find the mathematical solution for and feeds the solution to a controlled current source in Figure 2. Figures 3(a
Challenges to Fault Analysis
Only PV array has been considered as source of electrical fault in this paper.According to National Electrical Code Standard [29], fuses blow when the fault currents that flow through them become greater than at least 1.56 times their rated short-circuit current.However, because of the nonlinear - characteristics, the current-limiting nature of PV arrays, high fault impedances, low irradiance conditions, PV grounding schemes, or MPPT of PV inverters faults in PV arrays may not be cleared [30].But because of some factors such as environmental conditions (varying irradiance level and temperature), PV array configurations and fault locations, aging, hot-spot, mismatch faults unique to PV technology, and MPPT effect, fault analysis would be more complicated and conventional protection devices may not be able to clear faults correctly.Since PV array normal operation can be affected by the presence of faults that reduce power output and cause potential damage to the array, so analysis of the - curves for describing the effect of the faults that occur in PV arrays is very important.
Typical Faults
Since some of the electrical faults, such as mismatches, occur in all arrays at all times, they result in available DC power from the array being significantly below predicted levels.Table 1 shows the most common types of fault in a PV system.
Curves and Interpretation
A typical solar PV array with 6 × 5 PV modules (rated at 7.5 kW) is simulated, which consists of 6 modules in series per string and 5 strings in parallel.MATLAB/Simulink models of PV array (Figure 3) under electrical faults are developed to study the performance of the faulted PV array.According to Table 1, the most frequent faults are major catastrophic failures in PV arrays which are ground faults, line-to-line faults, and arc faults [15].This research studies six common fault types from Table 1 in 12 cases and compared the results with the normal condition.The characteristics of the PV panel with different types of faults are shown in Figures 4-9.
Partial Shading (F1).
The shading patterns can be very complicated due to no uniform insolation.Two identical PV arrays are used for comparison.One PV array with an arbitrary shading pattern is divided into two groups.In Case 1 the half of string one has been shaded with irradiance density = 800 W/m 2 and in Case 2 shaded modules receive two different insulations, = 500 and 800 W/m 2 .- and - characteristics of these two cases are illustrated in Figure 4 for fault analysis.Under partial shading conditions, the shortcircuit current for two cases remains identical, while the open-circuit voltage slightly decreases with the increase in the number of shaded modules.- curves of all shaded groups have multiple steps, while - curves of shaded groups are characterized by multiple peaks, whose number is equal to the number of solar insolation levels received by string, respectively.The results indicate that the higher number of shaded solar modules is the lower value of power output and the position of maximum power point does not depend on location of modules under shadow.The surface temperature of solar cell is assumed to remain 298 ∘ K.
Line-to-Line Fault in a PV Array under STC (F9).
As shown in Figure 5, line-to-line faults could happen inside PV arrays and potentially may involve large fault current or dc arcs.This research focuses on line-to-line faults, which are defined as an accidental short-circuiting between two points in the array with different potentials.In the following simulations, two cases are studied, a line-to-line fault with 2 modules (Case 3) and a line-to-line fault with 4 modules (Case 4).When a line-to-line fault occurs, the - curve of the faulted PV string will change accordingly.Since the faulted string has 4 number of modules less, it will have an opencircuit voltage reduced by 4x oc .But the short-circuit current remains the same as other normal strings at sc .
F2
Various irradiance intensity during the day Soiling [23] F3 The bird droppings and dirt on the surface of a PV module Snow covering and hot spot [33,34] F4 The worst temperatures depending on the geographical location and different weather conditions
Earth fault
Upper ground fault [35,36] F5 An unintentional path to ground with zero fault impedance occurs between the last two modules at PV string Lower ground fault [35,36] F6 An unintentional path to ground with zero fault impedance occurs between the 2nd and the 3rd two modules at PV string with large backfeed current
Arc fault
Series arc fault [37,38] F7 An arc fault due to discontinuity in any of the current carrying conductors resulting from solder disjoint, cell damage, corrosion of connectors, rodent damage, abrasion from different sources Parallel arc fault [37,38] F8 Insulation breakdown in current carrying conductors Line-to-line faults [19,35] F9 An accidental short-circuit between two points in a string with different potentials Bypass diode faults [39] F10 Short-circuit in case of incorrect connection Degradation faults [40] F11 Yellowing and browning, delamination, bubbles in the solar module, cracks in cells, defects in antireflective coating and delamination over cells and interconnections lead to degradation and increasing of the internal series resistance Bridging fault [19,39] F12 Low-resistance connection between two points of different potential in string of module or cabling Open-circuit fault [41] F13 Physical breakdown of panel-panel cables or joints, objects falling on PV panels, and loose termination of cables, plugging and unplugging connectors at junction boxes MPPT faults [42,43] F14 Problem in MPPT charge controllers Cabling faults [44] F15 -
Ac
Inverter faults [45] F16 Failure of each component of inverter such as IGBTs, capacitors, and drive circuitry can result in inverter failure Sudden natural disasters [46] F17 Total blackout due to Lightning, storm, and so forth
Bypass Diode Fault in a PV Array under STC (F10).
Assume in Case 5 that one bypass diode is conducted or shorted and in Case 6 two bypass diodes are shorted.- and - characteristics of these two cases are shown in Figure 6.
Even if only one full module is shorted by bypass diode, the maximum power and oc of the PV array drops significantly and short-circuit current remains the same as other normal strings.
Degradation Fault in a PV Array under STC (F11).
The reason for power degradation could be the increase in the series resistance between the modules due to decreased adherence of contacts or corrosion caused by water vapor.In this case, two different resistance values are considered.Group one (Case 7) has small resistance 1 = 1 ohm and another group (Case 8) has larger resistance with 2 = 2 ohm.This PV array with resistance is compared with the normal PV array, shown in Figure 7.Although the open voltage and short current do not change much under these different conditions, the maximum power point is reduced due to increase in resistance.Therefore, an increase of the internal series resistance can result in degradation of the peak power.
Bridging Fault in a PV Array under STC (F12).
In simulations, bridging fault or line-to-line faults with zero fault impedance are solid faults that occur immediately.In Figure 8, there is a bridging fault with one-module level difference between String 1 and String 2, which lead to unbalanced currents among PV array defined as bridging fault with small voltage difference (Case 9).Bridging fault with large voltage difference is a line-to-line fault with three-module level difference between String 1 and String 2 expressed as Case 10.Bridging faults usually involve reduced array voltage ( oc ) but have much small reduction in array current ( sc ).The fault with larger voltage difference between two fault points will lead to larger reduction in oc and mpp and mpp .
Open-Circuit Fault in a PV Array under STC (F13
).An open-circuit fault is an accidental disconnection at a normal current-carrying conductor.In this section, assume in Cases 11 and 12 that PV arrays have a disconnection problem in one string and two strings, respectively, and then the - and - characteristics have been compared with array without any disconnection under normal condition, as shown in Figure 9.The open voltage of these cases remains almost the same, while the short current and maximum power decrease linearly with the increase in the number of disconnected strings.
Conclusions
In this paper, a comprehensive definition of faults in DC side of PV system based on location and structure is presented.The performance of a typical PV array has been investigated under typical fault conditions such as shading condition, open-circuit fault, degradation fault, line-to-line fault, bypass diode fault, and bridging fault.To better visualize the PV data under normal and fault conditions, the - and - characteristics of the array have been evaluated.The off-line method used in this research can distinguish many types of different faults but cannot detect the location of the fault within the PV array.It would be useful to develop special MPPT schemes to track the maximum peak under these conditions and further methods capable of determining these locations.Band gap energy of the material (eV).
2 JournalFigure 1 :
Figure 1: Equivalent circuits for (a) the one-diode model and (b) the double-diode model.
Figure 2 :
Figure 2: The numerical one-diode model for a PV module.
) and 3(b) show the model for PV modules in MATLAB/Simulink.Using the widely used one-diode model for each individual solar panel, this paper builds simulation PV array (7.5 kW) in MATLAB/Simulink consisting of 6 × 5 PV panels that is capable of studying faults among panels.The related parameters of each PV panel under STC ( = 1000 W/m 2 and = 25 ∘ C) are mpp = 250 W, mpp = 31 V, mpp = 8.07 A, = 20, oc = 37.92 V, and sc = 8.62 A and = −0.33%/∘ C. As shown in Figure 3(b), panels connected in parallel increase the current and those connected in series provide greater output voltages.
Figure 3 :
Figure 3: (a) Equivalent circuit of a solar and (b) a schematic diagram of a PV farm system with 6 × 5 modules.
Figure 6 :Figure 7 :
Figure 6: The PV array configuration for bypass diode short circuit.
Figure 8 :
Figure 8: The PV array configuration for bridge fault.
Figure 9 :
Figure 9: The PV array configuration for string disconnection.
Table 1 :
Classification and definition of faults in PV system based on location and structures. | 4,498.2 | 2016-12-01T00:00:00.000 | [
"Engineering"
] |
Navigation and ionosphere characterization using high-frequency signals: Models and solution concepts
A navigation concept is being developed that relies on passive one-way ranging using pseudorange and beat carrier-phase measurements of high-frequency (HF) beacon signals that travel along non-line-of-sight paths via ionosphere refraction. This concept is being developed as an alternative to GNSS positioning services. If the set of signals that reaches a user receiver has sufficient geometric diversity, then the position of that receiver can be determined uniquely. Ionospheric modeling uncertainty causes errors in the deduced user position, but these errors are compensated by estimating corrections to a parametric model of the ionosphere as part of the navigation solution. A batch filter estimates the user position and corrections to an a priori ionosphere model. In simulated case studies involving significant errors in the a priori ionospheric parameters, the total positioning error is on the order of tens of meters in the horizontal plane and on the order of meters vertically.
INTRODUCTION
Satellite-based navigation vulnerability to electronic jamming and spoofing is of immense concern for developers of both civil and defense systems. A broad search for alternative methods has been conducted and widely discussed in the navigation literature in the past decade. Highfrequency (HF) signals propagating in the atmosphere have been used for communications and over-the-horizon radar. Signals with frequencies in the range 2-10 MHz can bounce successively off the ionosphere and the Earth to arrive at a receiver along a non-line-of-sight (NLOS) path. Such signals have been proposed for geolocation purposes, as in Huang and Reinisch (2006). The present study represents a further effort to examine the potential use of such signals as an alternative radio navigation method. Given perfect knowledge of the electron density distribution in the ionosphere and of the number of bounces This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2020 The Authors. NAVIGATION published by Wiley Periodicals LLC on behalf of Institute of Navigation. between a transmitter and a receiver, it is possible to develop a model of the measured pseudorange, also known in the literature as range-equivalent group delay, which is the difference between a signal's reception and transmission times multiplied by the speed of light. The pseudorange depends on the unknown user receiver location and the receiver's unknown clock offset. Given four or more such pseudoranges from four or more independent transmitters with an appropriately diverse geometry, it should be possible to solve for the unknown user position and clock offset, similar to GPS.
The problem with such an approach is that the ionosphere's HF signal propagation/refraction/reflection properties are highly uncertain due to the variability of its three-dimensional electron density distribution. The approach of Huang and Reinisch (2006) and Zaalov et al. (2017) is to use ionosonde data (Juras, 1985) in order to refine a local electron density model. This local model is then used to estimate the unknown location of a transmitter. Fridman et al. (2006) suggests an alternative procedure for local electron density modeling using GPS data inversion. Nickisch et al. (2016) further expands the work of Fridman et al. (2006) to assimilate HF near-vertical propagation time delay, HF Doppler shift, and HF angle-ofarrival measurements in ionosphere 3D electron density characterization. Performance is then evaluated based on a comparison between computed and measured angle-ofarrival values, where the reference measured values are provided by highly accurate equipment.
The present study, however, seeks to estimate simultaneously the location of an unknown receiver, its clock offset, and corrections to relevant portions of the electron density distribution. The approach taken for the estimation problem, model, and solution method involves several components. They are 1) a nominal ionosphere model, 2) estimated corrections to that model, 3) raytracing calculations for the paths of the HF signals and their observables, and 4) model inversion calculations to estimate the user receiver position, the user clock offset, and corrections to the ionosphere model. These model inversion calculations are carried out using a modified nonlinear batch least-squares solution technique.
Given a limited number of transmitters and a limited number of observables, a key question for such an approach concerns ionosphere parameter observability and the extent to which position accuracy can be attained. It has been demonstrated in Baumgarten & Psiaki (2017) that it is possible to combine a priori information for a parameterized model of ionosphere electron density with measured pseudoranges in order to arrive at a reasonable result. The work described in Baumgarten & Psiaki (2017) demonstrated feasibility for the joint positioning/ionosphere-characterization approach based on a simplified thin-sheet ionosphere model for propagating signals. The present paper describes a follow-up study that utilizes a 3D ionosphere electron density variation model, raytracing calculations that rely on advanced signal propagation models, and an enhanced batch-filtering algorithm. Its fundamental input data are the measured pseudoranges between an array of transmitters at known locations and the user receiver. A second type of measurement, introduced to this study, is the beat carrier phase. It counts carrier cycles over an arbitrary time interval and differences the resulting count with the expected nominal count for the transmitted signal waveform (Misra & Enge, 2011). The ionosphere model utilized in both phases of this study is a Chapman vertical profile whose three parameters vary with latitude and longitude.
The current study makes three contributions to the area of radio navigation based on bouncing HF signals. First, it develops a model of the pseudorange and beat carrier-phase measurements of multi-hop HF signal paths from known beacon transmitter locations to an unknown user receiver location. This model includes techniques for solving its nonlinear bounce conditions and for computing first-partial derivative sensitivities of the bounces and of the pseudorange and beat carrier-phase measurements with respect to the unknown user location and the unknown ionosphere parameters. Second, this study develops a batch nonlinear least-squares estimation algorithm for determining the unknown user receiver position, user receiver clock offset, and ionospheric parameter corrections. This algorithm incorporates a priori information about the ionosphere parameters in order to compensate for the lack of strict simultaneous observability of the location, clock offset, and ionosphere parameters. Third, the potential performance of the proposed HF navigation scheme is evaluated in a preliminary manner using data from a truth-model simulation.
The remainder of this paper is divided into six sections. Section II covers the physical and mathematical models of HF signals propagating in the ionosphere. It describes the fundamentals of the raytracing computations that this study uses to model HF signal paths. Section III presents models for the Earth and the ionosphere, including the parameterized Chapman vertical profile model. Section IV covers definitions and derivation of bounce points and their equations, ray-hops, and multi-hop ray paths. It also discusses the two measurement models. Section V develops the batch filter that estimates the quantities of interest. It starts by formulating the batch-filtering problem, and it outlines a modified Gauss-Newton method that solves the problem. Section VI presents a preliminary evaluation of the batch filter's performance and this method's potential accuracy. Section VII summarizes this study's developments.
High frequency signals and signal propagation
This study considers transmitted RF signals with carriers in the range 2 MHz-10 MHz. Transmitted signals are assumed to maintain a constant carrier frequency or to utilize a smoothed stepping pattern for altering their carrier frequencies. The latter type of signal can be useful for navigation and ionosphere correction if its beat carrier phase is well defined at the transmitter and accurately measured at the receiver. Beat carrier-phase measurements are made after a given frequency step is complete and the signal is oscillating with a new, constant frequency. These measurements are particularly useful because the signal traverses a perturbed ray path due to the frequency change and therefore probes more of the ionosphere.
The presumed ability to measure pseudorange depends on the signal having been modulated by a binary phaseshift keying (BPSK) pseudo-random code or some similar spread-spectrum modulation. The resulting accuracy for this sort of ranging in terms of measurement noise 1 sigma is about 1 kilometer assuming a signal bandwidth of 100 KHz. Carrier-phase measurements are assumed to be derived using a stable internal oscillator and a phase-lock loop so that the expected accuracy for a rangeequivalent beat carrier-phase measurement is 1 meter based on extrapolation of the fraction-of-a-cycle phase accuracy that can be resolved for L-band signals using standard signal-processing techniques.
The wave propagation mechanism that is considered in this study relies on ionospheric refraction that bends skyward-propagating radio waves back towards the Earth in a way that resembles reflection. This effect can occur for signals in the frequency range of up to 40 MHz (Hysell, 2018). HF signals traveling in the ionosphere are characterized not only by a curved trajectory shape, but also by the frequency-and path-dependent propagation speeds of their BPSK modulated code and carrier phase. Propagation speed dependence on wave frequency is known as dispersive wave propagation and is typical of propagation in a plasma (Cummer, 2000;Ishimaru, 2017;Juras, 1985). Further details about the behavior of electromagnetic waves as they traverse through plasma are provided in (Baumgarten, 2018).
Electromagnetic wave propagation in the birefringent, inhomogeneous, and lossy ionosphere obeys the Appleton-Hartree formula (Yeh & Liu, 1972): where ≡ 2 2 ; ≡ Ω ; = sin ( ) ; = cos ( ) , and where n is the index of refraction, θ is the angle between the magnetic field B and the wave vector k, ω is the wave frequency, ω p is the plasma frequency, and Ω e is the electron gyrofrequency. The latter is dependent on Earth's magnetic field. The two solutions for the refractive index correspond to the ordinary (O) mode and the extraor-dinary (X) mode that have different polarization and, consequently, different ray paths and measured observables (Baumgarten, 2018;Seybold, 2005). This formula makes it straightforward to obtain the conditions for which waves propagate, i.e., the conditions on Y, X, and θ for which n 2 > 0.
Raytracing
Raytracing calculations lie at the core of this study. The ability to accurately model signal trajectories in the ionosphere is essential to the modeling of the pseudorange and beat carrier-phase observables. Calculations are based on a numerical solution to the wave equations through propagation of Hamilton's equations that apply for an RF signal traversing in an ionized medium (Bennett et al., 2004). The fundamental set of raytracing equations is provided by Jones & Stephenson (1975) in the form of nonlinear ordinary differential equations that can be written as H is the Hamiltonian, and the independent variable P'≡ct g is the range-equivalent group delay parameter that takes the value P' 0 = 0 at the beginning of the trajectory and P' f at its end. The same Hamiltonian can be used to develop a differential equation for the range-equivalent carrier phase P = ϕ/k 0 with ϕ being the carrier phase in radians. This differential equation takes the form: where k 0 = ω/c is the free-space wave number with ω being the transmission frequency. Note that the last two equations would have taken a somewhat different form had the position been given in geographic coordinates as in Nickisch (1988). Jones and Stephenson (1975), which combines the work of Lighthill (1965) and Haselgrove (1954), gives several Hamiltonians that can be used in Equation (3). They are generally based on the Appleton-Hartree formula. Alternative Hamiltonian formulations are presented in . The latter are the Hamiltonians that are utilized with this study. The first Hamiltonian, which is used where the electron density is relatively small, is given by where p is a vector of parameters that characterizes the ionosphere electron density profile, and n AH is the lossy Appleton-Hartree index of the refraction of Yeh & Liu (1972). A second Hamiltonian is used near a reflection point/spitze. This second Hamiltonian, given in Baumgarten (2018), is used because it does not experience any singularities in this vicinity. A state space system of equations is defined for the unknown wave-front position and wave vector. It is based on the two expression of Equation (3) and takes the form: Thus, the state vector consists of the three Cartesian coordinates of the propagating wave front's position, r w , and the three components of its wave vector, k. For practical reasons, the state vector that is used with the current numerical implementation is defined in the normalized form: Normalization of the first term by P' f and of the second term by k 0 results in a non-dimensional state vector, whose derivative with respect to the nondimensionalized independent variable τ≡P'/P' f is given by This nonlinear differential equation can be numerically propagated from the initial point at τ = 0 to the final point at τ = 1. The terminal value P' f is unknown and must be determined as part of a two-point boundary value problem solution. Further details about the raytracing two-point boundary value problem numerical solution technique are given in Section III of Psiaki (2019).
EARTH AND IONOSPHERE MODELS
Models of the Earth and the ionosphere are used to define the physical environment for the propagating signals. These models have been chosen because they satisfy the need for a reasonably realistic representation of physical phenomena and the need to limit the complexity of the models and the resulting computational effort.
Earth geometry and magnetic field models
The Earth is modeled as a closed, continuous and smooth surface that is known as the WGS-84 ellipsoid (National Geospatial Intelligence Agency, n.d. where r 1 , r 2 , and r 3 are the Earth-fixed Cartesian coordinates of the surface point in meters. This approach for modeling the Earth has been chosen for its relative simplicity and the fact that it does not rely on availability of additional data. A more realistic method for modeling the shape of the Earth would use biquintic surfaces, approximating an existing digital representation of the Earth such as a digital terrain model (DTM) or digital elevation map (DEM).
Raytracing computations for propagating HF signals require knowledge of the Earth's magnetic flux vector field at any desired location. This study uses the 11 th generation model for the International Geomagnetic Reference Field (IGRF), known as IGRF-11. Its magnetic flux is modeled as the gradient of a time-varying spatial potential function. Additional information about this model can be found in International Association of Geomagnetism and Aeronomy (2019). It should be noted that an updated model for the Earth's magnetic flux vector field that is based on a 2020 epoch, IGRF-13, is now available.
The ionosphere model
A three-parameter Chapman beta vertical profile is used to model the location-dependent electron density distribution of the ionosphere. This model regards the F I G U R E 1 A map for the 424 nodes for an example latitude/longitude bi-quintic spline [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com and www.ion.org] ionosphere as a medium with an altitude-dependent electron density, whose altitude density distribution is characterized by parameters that vary with latitude and longitude. Under certain assumptions for radiation, geometry, and chemistry, the Chapman profile can be regarded as the exact solution for an ionosphere density profile (Stankov et al., 2003). For a given date and time, electron density is given by where ϕ(r), λ(r), and h alt (r) are, respectively, the latitude, longitude, and altitude above the WGS-84 ellipsoid of the ECEF position r. N e (r) is the electron density at this position given in units of electrons/m 3 . The quantity h max [ϕ(r),λ(r)] is the altitude of the maximum electron density of the Chapman profile. The quantity VTEC[ϕ(r),λ(r)] is the vertical total electron contentthe integral of the electron density along a vertical path.
The quantity h sf [ϕ(r),λ(r)] is the Chapman profile's altitude scale height. These last three functions depend on the date and the time of day in addition to latitude and longitude, but this dependence is not explicitly noted for the sake of convenience. The N e (r) function is assumed to be slowly time varying so that it can be assumed to be constant over the short interval of an HF radio wave propagation from a transmitter beacon to a user receiver.
Note that one could switch to using N e,max [ϕ(r),λ(r)] as one of the three Chapman profile parameters in place of VTEC[ϕ(r),λ(r)]. Such a switch might be appealing because the top-side contributions to VTEC do not affect the bottom-side refraction that matters to the present developments. In practice, such a switch is unlikely to change the underlying performance of this concept.
The latitude/longitude dependencies of the three Chapman profile parameter functions h max [ϕ(r),λ(r)], VTEC[ϕ(r),λ(r)], and h sf [ϕ(r),λ(r)] are modeled using a special bi-quintic spline. It works with data that are defined at spline nodes. Spline nodes are placed at predefined latitudes and longitudes with subsets of nodes grouped into common small circles of constant latitude. Along each small circle, the spline nodes can have any desired longitude spacing, and this spacing can vary. There is no need to align node longitudes on one small circle with node longitudes on adjacent small circles. The latitude spacing between small circles is also free to vary. Figure 1 illustrates the placement of spline nodes, where each node is identified by a unique number, starting at 1 for the node that is located at the South Pole and ending at 424 for the node that is located at the North Pole. The set of spline nodes that is used in this study has been defined in a way that gives a sufficient density of nodes over North America, the region in which simulation and analysis cases are planned to be considered.
The latitude/longitude variations of the three Chapman vertical profile parameters are modeled using bi-quintic splines as described in Baumgarten (2018) and . Three sets of parameters are stored for each grid node. Each set consists of the natural logarithm of a Chapman parameter's value and eight of its partial derivatives with respect to latitude ϕ and longitude λ. Thus, a vector of 3 × 9 = 27 parameters, − , is associated with the i th node as follows: Note that special shortened forms of this parameter vector apply at the North and South Poles. These special forms have only 18 elements rather than 27 elements.
The natural logarithms of the three Chapman profile parameters are modeled rather than the parameters themselves as an ad hoc means of ensuring that the final parameter values will be positive. The spline models of their natural logarithms can take on any real values. The parameters themselves will be positive after their splined natural logarithms have been input to exponential calculations.
Given the latitude ϕ 0 and the longitude λ 0 of a point at which one wants to compute electron density (and possibly various of its partial derivatives), the spline calculations use the nearest four bi-quintic spline nodes that lie northwest, northeast, southwest, and southeast of (ϕ 0 ,λ 0 ). The details of the spline evaluation procedure are given in Baumgarten (2018) and .
It is important to note that the simplistic Chapman model ignores the possibility of distinct D and E layers, including a sporadic E layer. This level of simplification may produce unsatisfactory results if working with real daytime data, but it is reasonable to use a Chapman profile at this stage of simulation-and analysis-based study of the proposed system's potential accuracy.
The ionosphere parameters variability matrix
Let p denote a stacked vector of all 424 p i vectors. This means that a given vector p stores information that describes the entire electron density distribution near the Earth at a particular time. Understandably, p vectors that describe electron density distributions at different times take different values. While the values that the various terms of p take may vary significantly with time, the manner at which they vary reflects the fact that those values represent physical phenomena. For instance, at a specific time, values for a particular Chapman parameter at neighboring spline nodes, that are relatively near, are expected not to differ by much, while values that are measured for one of its derivatives at a single spline node, but at different yet close times, are similarly expected to be close.
The batch-filtering algorithm that is used in this study requires a model for the likely correlations between the various terms of p. This model should effectively embody the manner at which these terms vary in time and the interdependencies between them. One method for generating such a model is computing an empirical covariance matrix through the processing of parameterized models of the ionosphere that have been computed for a large series of times. The International Reference Ionosphere (IRI) model was used to compute the best-fit Chapman parameter values four times a day throughout the calendar year 2009. See Bilitza (2001) and Reinisch and Bilitza (2004) on current IRI modeling and Bilitza (2011) on further improvement efforts. See Psiaki et al. (2015) on the Chapman parameter fitting procedure. The resulting 365 × 4 = 1,460 parameter vectors were used to compute a covariance matrix for p, i.e., for the natural logarithm and natural logarithm partial derivatives of all three Chapman model parameters at the spline nodes. This matrix is defined to be the ionospheric parameters' variability matrix, designated M 0 throughout this paper. It characterizes the likely variability of the p vector over a year, and it contains information about the correlations between elements of p. It should be noted that M 0 , which has 11,430 rows and 11,430 columns, contains much more information than is required when considering a typical HF navigation problem. The matrix M is an N p xN p covariance matrix that was constructed from the M 0 matrix by retaining only the rows and columns that are associated with the set of applicable spline nodes, i.e., spline nodes that define grid cells through which propagating rays travel. The value N p is the number of ionosphere parameters that apply for a particular problem setup -typically in the order of hundreds.
Definitions
In the scope of this study, it is assumed that waves are perfectly reflected from the Earth's surface in a specular F I G U R E 2 The ray-path definitions and notation manner at bounce points. This simplification of the quasiisotropic nature of ground reflections has been chosen for its relative simplicity in the multi-hop path analysis. This model will most likely need to change when dealing with real HF data to characterize ground reflections' sensitivity to surface conditions and polarization. The position of the k th bounce point in Cartesian coordinates is denoted by η k . The unit vector that is perpendicular to the Earth's surface at bounce point k is called the bounce point normal vector and is denoted by u k . The ray-path direction from which a signal approaches bounce point k is v f,k . The direction of the reflected signal at bounce point k is v 0,k . The curved signal trajectory between the transmitter and the first bounce point, between two sequential bounce points, or between the final bounce point and the receiver is termed a ray hop. The k th ray hop is denoted s k . An ordered sequence of ray hops that starts at a transmitter and ends at the location of the receiver, r R , constitutes a ray path. Figure 2 illustrates these definitions, showing three sequential bounce points, the receiver location, the ray hops connecting them, and other terms. The associated vector p̑j consists of all ionosphere parameters that apply in the vicinity of the j th ray path that is illustrated in the figure. All P' x,y and P x,y terms refer, respectively, to rangeequivalent group delays and beat carrier phases that will be considered in a later discussion.
Bounce-point equations
where ⌢ is a stacked vector of all η k bounce-point locations of that ray path, and u k is the outward unit vector normal to the Earth's surface at the k th bounce point (Baumgarten, 2018).
Each Type-C equation constrains the normal vector to the Earth at the bounce point to bisect the angle between the incoming and reflected ray hops. It can be written in the form: Finally, the set of three equations that defines the k th bounce point of a given ray path can be written in the following shorthand form: ) .
Recognizing that a signal's trajectory within a single ray hop, and in particular its directional vectors v 0 and v f , depends on the location of the hop's start and end points and on the values taken by the ionosphere parameters that apply in the vicinity of that hop, Equation (14) can be rewritten as , where the formulation in Equation (15) implicitly uses the ray-tracing calculations for each of the single hops in order to compute the corresponding v 0,k-1 and v f,k directions from the given bounce-point locations defined from components of ⌢ and the transmitter location for the initial hop or the receiver location r R for the final hop. Both formulations of Equations (14) and (15) for the set of three equations are used in this study. The first is used with most batch-filtering Gauss-Newton process-related calculations and the second with the ray-path solver that is described in this section. Additional details are available in Baumgarten (2018).
Single-hop calculations
Given the signal trajectory's known start and end locations for a single hop, and given a set of applicable ionosphere parameters, one can determine the ray-hop trajectory by determining the initial state X 0 of the raytracing differential equation in Equation (8) that applies at the beginning of the hop's trajectory and the total signal range-equivalent group delay P' f for which the signal ultimately arrives at the known end location. This two-point boundary value problem (TPBVP) and an iterative solution method are thoroughly discussed in Psiaki (2019). Baumgarten (2018) puts the discussion in Psiaki (2019) in the context of the current study. It shows how the problem is solved based on the principle of a zero-valued Hamiltonian, which is kept fixed throughout the signal propagation along its trajectory. It additionally describes how the problem is solved using a Newton method and outlines the derivation of the sensitivity matrices that are required for the computation of each Newton step within this process. The Newton's method solution uses ray-path sensitivity calculation of partial derivatives with respect to the initial unknown wave vector and with respect to the unknown total range-equivalent group delay. Related calculations can be used to compute the partial derivative sensitivity of the resulting ray path with respect to changes in the initial or final bounce points and with respect to changes in the ionosphere parameters that affect the single hop. These latter sensitivities are not needed in order to solve a given single ray hop, but they are needed by the solution procedure for a multi-hop ray path and by the partial derivative sensitivity calculations of the batch filter that uses ray-path solutions to model HF pseudorange and beat carrier-phase observables.
Multiple-hop calculations
The work that is presented in this subsection utilizes the single-hop method of Psiaki (2019) as reviewed in the previous subsection. In the following discussion, single-hop calculations are extended to multiple-hop calculations that determine the bounce points' locations, the group delays, the range-equivalent beat carrier phases, and these quantities' partial derivative sensitivities to inputs. Determination of the ray path for an HF signal that is traversing from a transmitter beacon to a receiver involves the solution of coupled, nonlinear equations that define the physical characteristics of its trajectory. Given the locations of the receiver and the transmitter, the number of ray-path hops connecting them, and a model for the ionosphere, the objective is to solve for the position of all of the ground bounce points in ⌢ in a way that satisfies the governing reflection equations while also solving for the set of single hops that properly connect the bounce points. Ultimately, bounce-points solutions in ⌢ depend on the receiver position r R and the relevant ionosphere parameters p̑j.
An algorithm for determining this nonlinear function ⌢ (r R ,p̑j) has been developed based on the implicit equations that define it. It is called a ray-path solver. The raypath solver assumes fixed known locations for the signal's start and end points, fixed ionosphere parameters, and a known number of sequenced ray hops that constitute the ray path. The ray-path solver's standard outputs are the locations of the bounce points. Auxiliary outputs include the ray-traced single hops' trajectories between each pair of bounce points along with the pseudorange and beat carrier-phase increments along each single hop. If required by the batch filter, associated computations can determine the partial derivatives of bounce-point locations, total group delays, and beat carrier-phase increments with respect to the location of the ray path's end point location r R and with respect to the ionosphere parameters in p̑j The multi-hop solution is obtained using Newton's iterative method to solve through minimization of with respect to The iterative solution procedure uses linearization about a current guess to compute a solution increment. It takes a step along the resulting search direction with a step-length scaling in the ⌢ space that is chosen to ensure that the Equation (17) and where D denotes the total derivative operator. The solution for Δη̑j ,guess , which constitutes the Gauss-Newton step (or correction vector), is computed by matrix inversion. For bounce point k, the required set of sensitivity matrices that are included in the leftmost term of Equation (18) is obtained through computation of the total derivativẽ=̃, , +̃0 , 0, +̃, wherẽis the subset of the elements of ⌢ , consisting of the three equations that apply at bounce point η k . η l is the l th bounce point of that ray path where l takes the values k-1, k, and k+1. Computations of ∂̃/∂v 0,k , ∂̃/∂v f,k, and ∂̃/∂η k are analytical and therefore immediate as noted earlier. However, computations of ∂v f,k /∂η k-1 , ∂v f,k /∂η k , ∂v 0,k /∂η k, and ∂v 0,k /∂η k+1 are implemented as auxiliaries of numerical raytracing as described in Baumgarten (2018) and Psiaki (2019). An initial guess for ⌢ is generated using several methods that are described in Baumgarten (2018). The possible methods include the use of the simplified raypath solver of Baumgarten and Psiaki (2017), the use of a latitude/longitude/altitude-dependent thin-shell ionosphere model, and the use of a constant-altitude thin-shell ionosphere model.
Ray paths' feasibility and solution uniqueness
Every ray path is evaluated for physical feasibility. Physical feasibility concerns the question of whether there exists a ray path between the given transmitter and receiver locations with the given number of hops at the given carrier frequency and for a given set of ionospheric parameters. For simulated test cases, feasibility is evaluated by trying to compute a raytracing solution using the true ionosphere model. The answer for the feasibility question is often not straightforward as it consists of the questions of whether a solution for a given set of inputs exists and, if so, whether it can be found with the ray-path solver. In the absence of an ability to distinguish between a negative answer for the two questions, a failure in obtaining a solution for ⌢ during this phase of assessment is generally regarded as an indication that the path is physically infeasible for the given inputs.
Ray-path solution uniqueness is a second matter of concern. Multiple solutions are theoretically possible if the cost function has multiple minima that are zero as demonstrated in Australian Government (2016). In the early work that utilized a simplified ray-path model (Baumgarten & Psiaki, 2017), a given set of inputs that included transmitter and receiver position, an ionosphere model, and the number of ray hops sometimes yielded more than one possible solution. Such observations have not been made so far with the full, raytracing-based model of the present paper. The current study, therefore, does not consider the possibility of having more than one solution for ⌢ .
Measurement models
In the context of this study, range measurements are based on signal propagation time measurements and wave-phase measurements. Errors in processing the measured data may arise from 1) errors in the modeled signal propagation paths, 2) clock synchronization errors, and 3) signalprocessing related errors. The first two types of error sources are addressed through proper modeling of these error sources in the batch-filtering-based algorithm. The third error source is generalized as a noise term in the following measurement equations. A typical signal runs from the transmitter, traverses through the ionosphere in a refraction-based curved trajectory, bounces off of Earth, and eventually arrives at the receiver. Let ρ g,j = P' f,m(j) be the true total range-equivalent group delay of the j th ray path, which equals the true signal propagation time multiplied by the speed of light c. Let y g,j be the measured range-equivalent group delay of that ray path, which equals the speed of light multiplied by the difference between the measured reception time according to the erroneous receiver clock and the true transmission time according to a calibrated beacon transmitter clock. Let δ be the receiver clock's offset and let x g = [r R T ,cδ] T denote the vector of unknown receiver position components and the range-equivalent clock offset. Then, the j th group delay measurement equation can be written as where the computed functionsh g,j andh g,j both model the true range-equivalent group delay of the j th ray path. ν g,j is a zero-mean measurement noise term that embodies the effect of signal-processing related errors. The measurement model in Equation (20) applies for a total of N measured pseudoranges in a given navigation/ionosphere-correction problem. For convenience in batch estimation, this model is stacked into an N-dimensional vector equation model of all the measurements. Let p̑equal the union of all p̑j vectors applying for all N ray paths. The stacked measurement model vector equation takes the form: with measurement error covariance matrix R g typically a diagonal matrix. Note that it is possible for two or more of the N ray paths to originate from the same transmitter location. In that case, either the signal transmission frequency, the number of hops, or both must be different in order for the measurements to provide independent information. The second type of measurement used in this study, beat carrier phase, is based on a comparison between measured changes in the received signal's phase and changes in the phase of a receiver-generated nominal replica signal. In effect, the beat carrier phase is the negative of the time integral of the received carrier Doppler shift (Bennett, 1967). This measurement involves an unknown bias term that originates from its integral nature, i.e., it is an unknown integration constant. Let ρ c,j = P f,m(j) be the total true range-equivalent beat carrier phase of the j th ray path, and let y c,j be the measured range-equivalent beat carrier phase of that ray path. Recall that P is computed by integrating the differential equation in Equation (4). Let λ w,j be the corresponding signal's wavelength, and let β i(j) be an unknown bias term in units of carrier cycles. Then, the j th beat carrier-phase measurement equation can be writ- where x c = [r T ,cδ,β T ] T and where the computed functions ℎ c,j andh c,j both model the ionosphere-refraction-induced range-equivalent carrier-phase change of the j th ray path, ρ c,j . The vector β consists of all unknown bias terms that apply for all N beat carrier-phase measurements. The integer function i(j) in the index of β maps ray-path indices j to indices of their corresponding terms in β. The ability to map a common bias term to multiple beat carrierphase measurements is needed because the beat carrierphase data are not useful unless multiple measurements are made with a common bias. This can be accomplished by transmitting a signal with a frequency time history that follows a smoothed staircase pattern with a known continuous phase time history at the transmitter. Coherent reception of this signal with a PLL that tracks phase followed by differencing of the measured phase time history from the known transmitted phase time history results in a set of beat carrier-phase measurements that have a common bias in terms of cycles, a common transmitter location, and a common number of hops, but a different transmission frequency. If the beat carrier-phase measurement data are used from a set of two or more times when the signal is temporarily staying at a constant frequency, but with a different constant frequency for each of those times, then the model in Equation (22) applies for more than one value of j that map to an identical bias index i(j). It is assumed that a given transmitter's smoothed stair-stepping frequency transmission time history will occur over a relatively short time window, perhaps just 10 msec, so that the receiver location, the receiver clock offset, and the ionosphere model can be assumed to remain constant during that window.
As with group delay measurements, carrier-phase measurement equations are stacked into an N-dimensional vector measurement model equation that takes the form: where Λ is an NxN β matrix and N β is the dimension of β. The random measurement noise vector, ν c , is characterized by its mean (assumed to be zero) and its covariance matrix R c .
Finally, both types of measurement vector, the first for the range-equivalent group delays and the second for the range-equivalent beat carrier phases, can be stacked into a single 2N-dimensional measurement vector. The same can be done with the vector functions h g and h c and with the noise vectors ν g and ν c : Note that x g ⊂x c so that h is conveniently defined as a function of x c . The resulting measurement model takes the form:
Measurement model sensitivity matrices
Gradient-based nonlinear estimation algorithms, such as batch least-squares, require partial derivatives of the measurement model with respect to the unknown estimated quantities. These sensitivities must be computed at a succession of improved guesses of the optimal estimates of the unknowns. In the present context, the required partial derivatives are those of each h j measurement model function with respect to the elements of the unknown x and pv ectors. Derivatives with respect to the elements of r and pȓ equire special calculations. The sensitivity of the j th range measurement to the input variables r R and p̑j is The column vector g̑j is a stacked vector consisting of the m three-term g k function vectors associated with the m bounce points of ray path j. Similarly, v̑0 ,j , v̑f ,j , and ȗj are stacked column vectors for the m three-term vectors of the approaching signals, reflected signals, and bouncepoint normal vectors, respectively.
BATCH ESTIMATION OF RECEIVER POSITION, RECEIVER CLOCK OFFSET, AND IONOSPHERE PARAMETERS
A batch filter has been developed. It estimates x c and p by minimizing a cost function that includes weighted squared differences between the measurements and their modeled values and between the estimated p elements and their a priori estimates.
Batch-filter problem definition
In the general case, the batch-filtering problem seeks the values that jointly minimize the cost function: where y is the 2Nx1 stacked vector of the N measured pseudoranges and N measured range-equivalent beat carrier phases for the given N ray paths. R is the square, symmetric, 2N-by-2N, positive-definite measurement error covariance matrix. p̅ is the a priori estimate of the ionosphere parameter vector, and M is the square, symmetric, positive definite covariance matrix that models the uncertainty in the a priori ionosphere parameter vector p̅ . The elements of p consist of ionosphere parameters, which apply in the vicinity of the unknown, true signal ray paths. ζ is a positive scaling parameter that effectively re-scales the inverse of the a priori ionosphere parameter error covariance matrix. Its role is described in more detail in Baumgarten (2018). The batch least-squares cost function of Equation (28) does not include a priori values of the elements of x c with penalties for differences between those values and the estimated x c . This means that no prior knowledge about receiver position, receiver clock offset, or beat carrierphase biases is assumed.
The minimizing solution to this estimation problem is equivalent to the optimal least-squares solution to the following over-determined system of nonlinear equations: where R −1/2 and M −1/2 are the inverses of the Choleskyfactor square roots of, respectively, the matrices R and M, and where ν 1 is a zero-mean, identity-covariance Gaussian random error vector whose norm squared is minimized by the batch solution. Baumgarten (2018) provides additional details about this nonlinear least-squares problem. In some cases, it is desirable to solve for the unknown ionosphere model (and, potentially, for the unknown beat carrier-phase biases) while the receiver location and clock offset are assumed known and fixed (or, at least, closely monitored and corrected). In such cases, the optimization problem takes the general form: If beat carrier-phase measurements are not processed, then the problem reduces to estimation of ionosphere parameters only.
A modified Gauss-Newton solution algorithm
The Gauss-Newton method has been used to solve this estimation problem by finding the minimum of the cost function in Equation (28). This method is described in Gill et al. (1995) and Nocedal and Wright (2006). It is additionally discussed in the context of convex function optimization through nonlinear programing in Bertsekas (1997). Adap-tations to this method have been made in order to address some special characteristics of the present problem.
Each iteration starts with guesses of the optimizing values of x c and p. The Gauss-Newton method linearizes Equation (29) about these guessed values. Next, it solves the resulting over-determined linear least-squares problem to get candidates for improved solution guesses of x c and p. Finally, it searches along the line in [x c ;p] space from the old guess to the candidate new guess in order to find a new guess that reduces the cost J 1 (x c ,p).
Linearization of Equation (29) about the current guess for the unknown x c and p, x c,guess and p guess , takes the form This over-determined system of equations is solved through a short series of operations. The details of these procedures are presented in Baumgarten (2018). The sequence of procedures is repeated iteratively until the cost function is minimized.
The method used in this study deviates from the classic Gauss-Newton method in two respects. First, it uses an approach that allows the sets of considered measurements to change during the iterative process. This feature, which requires modifications to the way cost function reduction is approached, has been developed in order to deal with occasional failures in ray-path solving attempts due to a poor estimate for the location of the receiver and the ionospheric parameters, to difficulties in the numerical raytracing computation for one or more of the ray path's ray hops, or to the physical non-feasibility of the ray path. Regardless of the cause, the particular measurements that failed to be computable in the filter's model are temporarily excluded from the set of measurements that are considered. A second feature that is used with the batch filer is a measurement rejection mechanism. These are common practice in sensor-based systems due to the potential of significant, un-modeled measurement errors that affect sensor readings. In the context of this study, excessive measurement errors are handled with likelihood tests that are designed to detect and reject outliers as bad data.
Recognizing the challenges posed to the first-order Gauss-Newton method when it starts with a guess that is far from the receiver's true location, the algorithm distinguishes between two cases. In the nominal case that has been described above, the current position guess is assumed to be close to the solution. In this case, the algorithm will consider variations in the three components of the ECEF representation of the receiver's location r R , variations in the range-equivalent receiver clock offset cδ, variations in the carrier-phase measurement biases β, and variations in the ionosphere parameters at all bi-quintic spline nodes that affect the ray paths. If the current position guess is suspected of being far from the final solution, however, then only group-delay measurements will be processed. In addition, the algorithm will only consider variations in the receiver position's latitude and longitude and in its clock bias. Variations of altitude and of ionospheric model parameters are excluded in the calculation of the Gauss-Newton step. Additional details about this far-fromthe-solution case are described in Baumgarten (2018).
PRELIMINARY RESULTS
Assessment of the proposed navigation system's effectiveness evaluates the batch filter's performance in terms of positioning accuracy and its ability to estimate corrections to an erroneous a priori model of the ionosphere. A limited assessment has been performed through analyzing several test cases that differ in the number of ground stations and their placement, the number of ray paths, and the difference between true and a priori ionospheric models.
Truth-model simulation
A truth-model simulation has been developed for algorithm validation and assessment and for solution accuracy analysis. The simulation enables testing of different combinations of ground station arrays, ionosphere error models, and other parameters. Computation for an N e (r) electron density truth model utilizes a Chapman profile that is fit to an IRI model for a particular time (Psiaki et al., 2015). A similar procedure is used to generate an a priori estimate of the ionosphere parameter vector for use in the batch filter's cost function. The values that characterize this estimate may deviate substantially from the values that characterize the truth-model ionosphere parameters. This would be the case of an inaccurate a priori ionosphere model. The simulation uses truth values of the x and p vectors in the vector pseudorange and beat carrier-phase measurement models of Equations (21) and (23) to generate measurement values that are input directly into the main batch-filtering algorithm.
Solution convergence
A key event in the execution of the main solver algorithm is identifying solution convergence. This is performed using a step magnitude criterion that is applied to the correction vector generated at every iteration of the Gauss-Newton solver. In theory, the Gauss-Newton method with line search is guaranteed to converge to a local minimum, but the minimum is not guaranteed to be global. Testing experience indicates that convergence to a local minimum that is different from the global minimum occurs rarely, if ever. Therefore, for all simulation-generated test cases, the algorithm seems to be insensitive to the initial guess and the corresponding magnitude of the initial error and is nearly guaranteed to converge to its global minimum. Validity of the latter statement is further explored and demonstrated in Baumgarten (2018).
A posteriori position and ionosphere model accuracy
A test case that considers a setup with 32 varying-frequency signals transmitted from 11 ground stations has been studied. In this test case, parameterized ionosphere models for the true ionosphere and the a priori ionosphere that is input to the batch filter are similar. This is the case where a relatively accurate model for electron density in the vicinity of the transmitting stations and the receiver is available, possibly due to the use of one of the ionospherecharacterization methods mentioned in the introduction. Position accuracy for this test case is within 30 meters horizontal and 2 meters vertical 90% of the time, where the mean error is within meters from the receiver's true location. These results imply that, with a sufficient number of signals received and dual group-delay/beat-carrier-phase measurement processing, the achieved accuracy is adequate for the purpose of navigation and guidance for many significant applications.
A second test case, that considers only 17 transmitted signals, exhibited somewhat inferior position accuracy with errors that are roughly three times larger horizontally. Vertical accuracy, however, remained in the order of meters. In a third test case, that is characterized by a less accurate a priori model for the ionosphere, the mean error rose from several meters to 60 meters, resulting in total errors up to a hundred meters.
With respect to the filter's ability to apply corrections to an erroneous a priori ionosphere model, the algorithm has proven successful in reducing errors in the ionosphere model parameters. As one might expect, smaller errors have been observed near the locations where ray paths travel through the ionosphere. A posteriori ionosphere models are improved for all test cases in comparison to their a priori counterparts, evident in smaller a posteriori errors computed for the three Chapman profile parameters in the vicinity of the true ray paths.
Additional details on the studied test cases are available in Baumgarten (2018) and Baumgarten and Psiaki (2019).
Further performance assessment
A thorough study of performance for the developed batch filter is reported in Baumgarten (2018) and Baumgarten and Psiaki (2019) and may be published in a future journal article. It includes an analysis of a series of test cases that differ in the sets of parameters that define them. These parameters include the following: type of available measurements, number of ground stations and their placement, number of ray paths, ray-path geometry, the number of hops for each ray path, signal frequencies, true vs. a priori ionospheric models, receiver clock error, and the true location of the receiver. Analyses of results for these test cases explore the positioning sensitivity to scenario parameters. They also explore the extent to which errors in an a priori parameterized ionosphere model can be reduced. Additional analyses study other aspects of the filter's performance. These include filter convergence characteristics, scenario setup feasibility, and algorithm robustness.
SUMMARY AND CONCLUSION
An algorithm has been developed that utilizes groupdelay/pseudorange and beat carrier-phase measurements from HF signals propagating in the ionosphere to solve a combined positioning/ionosphere-corrections problem. These HF signals are transmitted from stationary groundbased beacons at known locations. They propagate to an over-the-horizon user receiver at an unknown location via refraction-induced bounces off of the ionosphere and, possibly, intervening reflections off of the Earth's surface.
A navigation filter estimates user position, user clock error, beat carrier-phase measurement biases, and corrections to parameters that characterize the ionosphere's three-dimensional electron density profile. The nonlinear batch least-squares estimation problem is solved using a modified Gauss-Newton method. This method has a high rate of achieving successful convergence to the optimal value of the underlying cost function.
This paper presents the main assets that have been developed in this study: physical models for the Earth and the ionosphere, a model for signals propagating in the ionosphere, two different signal measurement models, and a batch filter that estimates the user location and corrections to an a priori parameterized model of the ionosphere.
A limited investigation of system performance has been carried out using a truth-model simulation. Simulated test cases that consider different combinations of problem characteristics have been studied. Results indicate feasibility for the combined HF navigation/ionosphere-correction concept. It has been shown that, with sufficient availability of received signals, navigation grade accuracy for positioning may be achievable. Furthermore, a posteriori ionosphere model estimates are consistently improved for these cases in comparison to their a priori counterparts. | 11,989.6 | 2021-03-17T00:00:00.000 | [
"Physics"
] |
Mean-Field Limits for Entropic Multi-Population Dynamical Systems
The well-posedness of a multi-population dynamical system with an entropy regularization and its convergence to a suitable mean-field approximation are proved, under a general set of assumptions. Under further assumptions on the evolution of the labels, the case of different time scales between the agents’ locations and labels dynamics is considered. The limit system couples a mean-field-type evolution in the space of positions and an instantaneous optimization of the payoff functional in the space of labels.
Introduction
Overview of the topic. After being introduced in statistical physics by Kac [22] and then by McKean [27] to describe the collisions between particles in a gas, the mean-field approximation has become a powerful tool to analyze the asymptotic In the later contribution [28], the well-posedness theory as well as the mean-field approximation of the above system have been inserted in a more general framework which is suitable for a broader range of applications. In this setting, the velocity v of each agent is also depending on the behavior of the other ones, and the replicator dynamics for the strategies has been replaced by a more general vector field T , that is for i = 1, . . . , N, t ∈ (0, T ], (1.1) where Λ N t = N j=1 δ(x j t , σ j t ) ∈ P(R d × P(U )) is a distribution of agents with strategies at time t. The interpretation, given in [28], of these types of systems has a wider scope than the one of game theory: the interacting agents are assumed to belong to a number of different species, or populations, and therefore, more in general, we deal with labels i instead of (mixed) strategies σ i . This point of view can be used to distinguish informed agents steering pedestrians, to highlight the influence of few key investors in the stock market, or to recognize leaders from followers in opinion formation models. Throughout this work, we will adopt this perspective. Under a rather general set of assumptions on v and T (which, in particular, encompass the case of the replicator dynamics), it has been shown in [28] that the empirical measures Λ N t associated with system (1.1) converge to a probability measure on the state space, which solves the continuity equation where b Λ t is the vector field which drives the state in system (1.1). In [7], a further research direction has been explored. There, the replicator equation is slightly modified adding an entropy regularization H, see (1.3) below. Besides providing a mean-field theory for such systems, the authors discuss the fast reaction limit scenario, modeling situations in which the strategy (or label) switching of particles in the systems is actually happening at a faster time scale than that of the agents' dynamics. This leads us to the purpose of our paper. Contribution of the Present Work. In the present paper, we complement the abstract framework of [28] by adding an entropy regularization and we analyze its effects on the dynamics from an abstract point of view. We fix a reference probability measure η ∈ P(U ) and we consider only diffuse probability densities with respect to η. We Then we analyze the system ⎧ ⎨ i = 1, . . . , N, t ∈ (0, T ], (1.4) where i t denotes the label of the i-th agent, ε > 0 is a small parameter which modulates the intensity of the entropy functional, and λ ≥ 1 takes into account the possible time scale difference between the positions and labels dynamics. In the particular case where T Λ is the operator of the replicator dynamics, this is exactly the system considered in [7]. The motivation for this regularization has already been discussed in [7]: it serves to avoid degeneracy of the labels (see [7,Example 2.1] for a precise discussion) and allows for faster reactions to changes in the environment. We also refer to [16] for an earlier contribution on entropic regularizations in a game-theoretical setting.
From the mathematical point of view, the state space for the labels becomes now P(U )∩L p (U, η) for some p > 1. As non-degeneracy is a desirable feature also for the wider setting considered in [28], our first goal is then to establish a well-posedness theory in a similar spirit for system (1.4). As it happened in [28], a crucial point is giving a suitable set of assumptions on the dynamics which allows one to rely on the stability estimates for ODE's in convex subsets of Banach spaces developed in [9, Section I.3, Theorem 1.4, Corollary 1.1] and recalled in Theorem 2.1 below. In particular, a sufficient set of assumptions on the operator T which complies with this setting is given at the beginning of Sect. 3, see (T1)-(T3). It slightly adapts and, to some extent, simplifies the assumptions on [28], since here we are only considering the case of diffuse measures, and comprises both the case of the replicator dynamics and some models of leader-follower interactions with label switching modeled by reversible Markov chains [2] (see Remark 3.1).
The well-posedness of the particle model is proved in Theorem 3.3 as a consequence of the estimates in Proposition 3.2. The convergence to a mean-field limit is discussed in the subsequent Sect. 4. In Sect. 5, instead, we focus on the special case of replicator-type models and revisit the results of [7] from an abstract and more general point of view, which may also account for further modeling possibilities.
More precisely, we assume that the operator T takes the form for x ∈ R d and ∈ P(U ) ∩ L p (U, η), and where μ is the marginal of Λ in R d . In (1.5), ∂ ξ denotes the derivative of F with respect to its second variable. As we discuss in Remark 5.1, for a proper choice of F μ , the above setting encompasses the case of undisclosed replicator dynamics. By undisclosed it is meant that the players are not aware of their opponents' strategies. This is exactly the case dealt with in [7]; see [7,Remark 2.9] for the difficulties connected to the fast reaction limit in the general case. We stress, however, that (1.5) has a more flexible structure than the case-study of the replicator dynamics. For instance, as we discuss again in Remark 5.1, it allows one to consider pay-offs depending also on how often a strategy is played, penalizing choices that become predictable by other players. 5), we perform the fast reaction limit λ → +∞. This corresponds to a reasonable modeling assumption, that the label dynamics takes place at a much faster rate that the spatial dynamics. In Theorem 5.12 we prove the convergence of system (1.4)-(1.5) to a Newton-like system of the forṁ where * i t optimizes the functional for fixed x and μ. We stress that, differently from [7], we do not need to explicitly compute the minimizer as it was done in the special case of the replicator dynamics. We remark that a crucial assumption for our proofs in Sect. 5 is convexity of the function F with respect to and actually our proofs are guided by the heuristic intuition that, for fixed x and μ, the label equation in (1.4)-(1.5) is the formal gradient flow of (1.6) with respect to the spherical Hellinger distance of probability measures [24] (see also [2]). However, we provide explicit computations which do not resort to this gradient flow structure.
Outlook. The present paper provides the well-posedness theory and the mean-field approximation for multi-population agent-based systems with an entropic regularization on the labels. We remark that such a regularization in the trajectories prevents concentration in the space of labels. An analogous role could be played by diffusive terms in the space of positions, whose effects we plan to address in future contributions. We also provide an abstract structure on the evolution of the labels to perform fast reaction limits, which in particular contains the special case of [7]. On the one hand, the assumption that one agent is not fully aware of the label distribution of the other ones (the so-called undisclosed setting we consider here) is realistic in many applications. On the other hand, it would be interesting to single out the right assumptions to overcome this restriction while performing the fast reaction limite, for instance allowing one to consider F depending on the whole Λ, and not only on the marginal μ, in (1.5).
Overview of the Paper. In Sect. 2, we present our notation, recall some tools of functional analysis and measure theory, and outline the basic settings of the problem. In Sect. 3, we present the general assumptions and we study the entropic dynamical system (1.4), proving its well-posedness. In Sect. 4, we prove the mean-field limit of (1.4) to a continuity equation such as (1.2). In Sect. 5, we obtain the fast reaction limit of system (1.4), together with the explicit rate of convergence in terms of the parameter λ.
Basic Notation
If (X , d X ) is a metric space we denote by P(X ) the space of probability measures on X . The notation P c (X ) will be used for probability measures on X having compact support. We denote by C 0 (X ) the space of continuous functions vanishing at the boundary of X , and by C b (X ) the space of bounded continuous functions. Whenever X = R d , d ≥ 1, it remains understood that it is endowed with the Euclidean norm (and induced distance), which shall be simply denoted by |·|. For a Lipschitz function f : X → R we denote by its Lipschitz constant. The notations Lip(X ) and Lip b (X ) will be used for the spaces of Lipschitz and bounded Lipschitz function on X , respectively. Both are normed spaces with the norm f := f ∞ + Lip(f ), where · ∞ is the supremum norm.
In a complete and separable metric space (X , d X ), we shall use the Kantorovich-Rubinstein distance W 1 in the class of P(X ), defined as (2.1) or, equivalently (thanks to the Kantorovich duality), as involving couplings Π of μ and ν. It can be proved that the infimum is actually attained. Notice that W 1 (μ, ν) is finite if μ and ν belong to the space and that (P 1 (X ), W 1 ) is complete if (X , d X ) is complete. For a probability measure μ ∈ P(X ), if X is also a Banach space, we define the first moment m 1 (μ) as So that, the finiteness of the integral above is equivalent to μ ∈ P 1 (X ), whenever the distance d X is induced by the norm · X . Let μ ∈ P(X ) and f : X → Z a μ-measurable function be given. The pushforward measure f # μ ∈ P(Z) is defined by f # μ(B) = μ(f −1 (B)) for any Borel set B ⊂ Z. It also holds the change of variables formula whenever either one of the integrals is well defined.
For E being a Banach space, the notation C 1 b (E) will be used to denote the subspace C b (E) of functions having bounded continuous Fréchet differential at each point. The notation Dφ(·) will be used to denote the Fréchet differential. In the Vol. 91 (2023) Mean-Field Limits for Entropic Multi-Population 181 case of a function φ : [0, T ] × E → R, the symbol ∂ t will be used to denote partial differentiation with respect to t, while D will only stand for the differentiation with respect to the variables in E.
Functional Setting
The space of labels (U, d) will be assumed to be a compact metric space. Consider the Borel σ-algebra B on U induced by the metric d and let us fix a probability measure η ∈ P(U ) which we can assume, without loss of generality, to have full support, i.e., spt(η) = U . Notice that the measure space (U, B, η) is σ-finite and separable. For p ∈ [1, +∞], we consider the space L p (U, η), which is a separable Banach space. Given r and R such that 0 ≤ r < 1 < R ≤ +∞, we introduce the set of probability densities with respect to η, having lower bound r and upper bound R: notice that C 0,∞ is the set of L p -regular probability densities with respect to η. Since η(U ) = 1, the inclusion L p (U, η) ⊂ L 1 (U, η) holds for all p ∈ [1, +∞] and therefore the sets C r,R are closed with respect to the L p -norm. Thus, when equipped with the L p -norm, the sets C r,R are separable. 1 Finally, notice that C r,R are also convex and their interiors are empty. The state variable of our system is y : The component x ∈ R d describes the location of an agent in space, whereas the component ∈ C 0,∞ describes the distribution of labels of the agent. A probability distribution Ψ ∈ P(Y ) denotes a distribution of agents with labels. To outline the functional setting for the dynamics, we define Y := R d × L p (U, η) and the norm · Y by Since Y ⊂ Y , we equip Y with the · Y norm. For a given > 0, we denote by B the closed ball of radius in R d and by B Y the closed ball of radius in Y , namely, The Banach space structure of Y allows us to define the first moment m 1 (Ψ) for a probability measure Ψ ∈ P(Y ) as so that the space P 1 (Y ) defined in (2.2) can be equivalently characterized as Whenever we fix r and R in (2.3), we set Y r,R := R d × C r,R and we modify the notation above accordingly.
We conclude this section by recalling the following existence result for ODEs of convex subsets of Banach spaces, which is stated in [ Then for every c ∈ C there exists a unique curve c :
Well-Posedness of the Entropic System
In this section, we study the well-posedness of the ε-regularized entropic system (1.4); for convenience, in this section, we fix λ = 1. We start by listing the assumptions on the velocity field y → v Ψ (y) and on the transfer map y → R ε Ψ (y) := T Ψ (y) + εH( ). We assume that the velocity field v Ψ : Y → R d satisfies the following conditions: (v2) for every > 0, there exists L v, > 0 such that for every y ∈ B Y , and for every be an operator such that (T1) T Ψ (y) has zero mean for every (y, Ψ) ∈ Y × P 1 (Y ) : Mean-Field Limits for Entropic Multi-Population 183 (T2) for every > 0 there exists L T , > 0 such that for every (y 1 , and a constant C T > 0 such that for every (y, Ψ) ∈ Y r,R × P 1 (Y ) (for some 0 < r < 1 < R < +∞), for η-almost every u ∈ U . Finally, the entropy functional H : C 0,∞ → L 0 (U, η) that we consider is defined by where I( ) is the negative entropy of the probability density , namely We notice that, for every r, R ∈ (0, +∞) and every ∈ C r,R , we have that Remark 3.1. We remark that assumptions (v1)-(v3) already appeared in [1,2,28] and in [3,7] in a stronger form and are rather typical in the study of ODE systems. Conditions (T1)-(T3), instead, are slightly different from the usual hypotheses on the operator T Ψ introduced in [28, Section 3]. In particular, (T3) involves a pointwise condition on T Ψ (y), which is crucial to show existence and uniqueness of solutions to the N -particles system (3.30) below. The role played by such assumption is that of guaranteeing a pointwise control on the strategy (u), ensuring a bound from above and from below away from 0. For more details, we refer to the proof of Proposition 3.2.
Here, we report two fundamental examples that fall into our theoretical framework. The first one is the replicator dynamics (see also [3,7]). If Ψ ∈ P(Y ) stands for the distribution of players with mixed strategies ∈ C 0,∞ , the pay-off that a player in position x gets playing the strategy u ∈ U against all the other players writes and the corresponding operator T is In [28, Proposition 5.8] sufficient conditions on J are provided, that imply conditions (T1) and (T2). If J is bounded in R d × U × R d × U , then T also satisfies (T3). The second example stems from population dynamics and models a leaderfollower interactions (see [28,Sections 4 and 5]). We assume that U = {1, . . . , H} for some H ∈ N denotes the set of possible labels within a population. Given a distribution Ψ ∈ P(Y ) of agents with labels ∈ L p (U, η), for h = k ∈ U we denote by α hk (x, Ψ) ≥ 0 the rate of change from label h to label k and set Since η is supported on the whole of U , we may identify ∈ L p (U, η) with the vector ( 1 , . . . , H ). Hence, the operator T Ψ is defined by where the matrix Q(x, Ψ) writes as Suitable assumptions on α kh that ensure (T1) and (T2) are given in [28, Proposition 5.1]. Once again, if α kh are bounded, we have (T3) as well thanks to the precise structure (3.2): in particular, the positivity of α kh for every k = h is crucial to estimate T Ψ (y)(u) − in terms of the sole (u).
satisfies the following properties: (1) for every > 0, there exists L ε, > 0 such that for every Ψ ∈ P(B Y ε ), and for every y 1 , there exists θ ε > 0 such that for every > 0 and for every y ∈ B Y ε and for every Ψ ∈ P(B Y ε ) Vol. 91 (2023) Mean-Field Limits for Entropic Multi-Population 185 Proof. The proof is divided into three steps.
Step 1 (boundedness of H). We start by proving that H(C r,R ) ⊂ L ∞ (U, η) for every r, R ∈ (0, +∞) with r < 1 < R, which in turn implies that for every ∈ (0, +∞), Thus, using the convexity of the function t → t log(t) in (0, +∞) we get Since is a probability density it is straightforward to check that Therefore, To simplify the notation, we define so that inequality (3.8) reads Moreover, by Jensen's inequality we have that Since ∈ C r,R and (3.10) and (3.11) hold, we deduce that Since H( ) has zero mean and (T1) holds true, we have that Step 2 (Lipschitz continuity of H we may estimate for every 1 , 2 ∈ C r,R and every u ∈ U Thus, there holds 14) where we have used that η ∈ P(U ).
we only have to find θ ε such that for every Ψ ∈ P(B Y ε ) and every y = (x, ) ∈ B Y ε , In view of (3.13), we already know that for any θ ε > 0 Hence, we have to show that upper and lower bounds of C ε are preserved for a suitable choice of θ ε independent of y ∈ B Y ε and of Ψ ∈ P(B Y ε ). The precise θ ε will be specified along the proof. Let y ∈ B Y ε and Ψ ∈ P(B Y ε ). We start by imposing that for η-a.e. u ∈ U Using (T3) and (3.10) we get that (3.21) Because of (3.16) we have that Inequalities (3.21) and (3.22) imply that there exists R ε < R ε such that If (u) ≤ R ε , by (T3) and by (3.12) we estimate It follows from (3.24) that there exists θ 1 ε ∈ (0, +∞) such that for every θ ε ∈ (0, In fact, using (T3) and (3.11) If (u) ∈ 4 3 r ε , R ε , by monotonicity of ω we continue in the previous inequality with From inequality (3.27) we infer the existence of θ 2 ε ∈ (0, θ 1 ε ] (depending only on r ε and R ε ) such that for every θ ε ∈ (0, θ 2 ε ] it holds If (u) ∈ r ε , 4 3 r ε , instead, by (T3) and by the choice of r ε in (3.15), we estimate 29) which concludes the proof of (3.26) for θ ε ∈ (0, θ 2 ε ]. Combining (3.19), (3.20), and (3.26), we conclude that for every θ ε ∈ (0, θ 2 ε ], for every Ψ ∈ P(B Y ε ), and every y = (x, ) ∈ B Y ε , (3.18) holds. Notice, in particular, that θ ε is independent of . From now on, whenever a choice of r ε and R ε is made according to Proposition 3.2, the corresponding space Y r ε ,R ε will be denoted by Y ε . Moreover, for any N ∈ N, we will denote by Y N ε := (Y ε ) N the cartesian product of N copies of Y ε . Finally, we will consistently use the notation b ε Ψ for the velocity field introduced in (3.3).
As a consequence of Theorem 2.1 and Proposition 3.2, we obtain the following theorem.
; let ε > 0 and let r ε , R ε be as in Proposition 3.2. Then for any choice of initial conditionsȳ = (ȳ 1 , . . . ,ȳ N ) ∈ Y N ε , the system Proof. We let y := (y 1 , . . . , y N ) ∈ Y N ε ⊂ Y N , whose norm we define as and we consider the associated empirical measure Λ N : Then the Cauchy problem (3.30) can be written as In order to apply Theorem 2.1 to the system above, we first notice that assumption (ii) is automatically satisfied since the system is autonomous. To see that the other assumptions are satisfied too, we fix a ball B Therefore, by triangle inequality, (3.4), and (3.5), we obtain the estimate To see that also assumption (iv) of Theorem 2.1 holds, we apply (3.6), upon noticing that Existence and uniqueness of the solution to system (3.30) follow now from Theorem 2.1. Finally, because of (3.6), we have that Taking the supremum over i = 1, . . . , N in the left-hand side and applying Grönwall's Lemma, we conclude that which is (3.31).
We state here a second existence and uniqueness result, which will be useful in the next section.
has a unique solution.
Proof. The result follows by a direct application of Theorems 2.1 and 3.2, as this time the field b ε Λ t is fixed. In view of the previous result, the following definition is justified. Definition 3.5. Let ε > 0, let r ε , R ε be as in Proposition 3.2, let > 0, and let Λ ∈ C([0, T ]; (P 1 (Y ε ); W 1 )) be such that Λ t ∈ P(B Y ε ) for every t ∈ [0, T ]. We define the transition map Y Λ (t, s,ȳ) associated with the ODE (3.32) as where t → y t is the unique solution to (3.32) where we have replaced the initial condition by y s =ȳ.
Mean-Field Limit
In this section we aim at passing to the mean-field limit as N → ∞ in system (3.30).
Along the whole section, we fix ε > 0, r ε ∈ (0, 1), and R ε ∈ (1, +∞) as in Theorem 3.3. As it is customary in the study mean-field limits of particles systems, we look at the limit of the empirical measure assumptions on the initial conditions, the sequence of curves t → Λ N t converges to a curve Λ ∈ C([0, T ]; (P 1 (Y ε ); W 1 )) solution to the continuity equation We start by recalling the definition of Eulerian solution to (4.1).
The main result of this section is an existence and uniqueness result of Eulerian solutions to (4.1) and its characterization as the mean-field limit of the particles system (3.30).
Theorem 4.2. Let > 0 andΛ ∈ P(B Y ε ) be a given initial datum. Then, the following facts hold: (1) there exists a unique Eulerian solution
then the corresponding sequence of empirical measures Λ N t associated to the system (3.30) with initial dataȳ i N fulfill lim Before proving existence of an Eulerian solution, we briefly discuss its uniqueness. This result is a consequence of the following superposition principle (see [28,Theorem 3.11] and [3, Theorem 5.2]).
Theorem 4.3. (Superposition principle) Let (E, · E ) be a separable Banach space, let b : (0, T ) × E → E be a Borel vector field, and let μ ∈ C([0, T ]; P(E)) be such that
If μ is a solution to the continuity equation The following uniqueness result holds.
Proof. Uniqueness of Λ follows from Theorems 4.3 and 3.3. Indeed, we notice that by continuity of t → Λ t there exists finite which is precisely (4.3). Since L p (U, η) is a separable Banach space, we may apply Theorem 4.3 and deduce that there exists η ∈ P(C([0, T ]; Y ) concentrated on solutions to the Cauchy problem (4.4) and such that Λ t = (ev t ) # η for t ∈ [0, T ]. AsΛ ∈ P 1 (Y ε ), Theorem 3.3 implies that for any initial condition y 0 ∈ spt(Λ) system (4.4) admits a unique solution. This yields the uniqueness of Λ.
In order to prove existence of a Eulerian solution Λ to (4.1), we need to pass through the notion of Lagrangian solution, which we recall below (see also [10,Definition 3.3]). Definition 4.5. LetΛ ∈ P 1 (Y ε ) be a given initial datum. We say that Λ ∈ C 0 ([0, T ]; (P 1 (Y ε ); W 1 )) is a Lagrangian solution to (4.1) with initial datumΛ if it satisfies where Y Λ (t, s,ȳ) are the transition maps associated with the ODE (3.32).
Remark 4.6.
Recalling the definition of push-forward measure, it can be directly proven that Lagrangian solutions are also Eulerian solutions.
We first need the following lemma.
Vol. 91 (2023) Mean-Field Limits for Entropic Proof. It suffices to show that there exists ∈ (0, +∞) such that We first observe that by definition of Lagrangian solutions and the fact thatΛ ∈ P(B Y ε δ ), we immediately have Arguing as in Theorem 3.3, by definition of the transition map, by (3.6), and by (4.7), for every y ∈ B Y ε δ we have that By Grönwall inequality we deduce that (4.6) holds true with = (δ+M ε T )e 2M ε T .
We are now in a position to prove Theorem 4.2.
Proof of Theorem 4.2. The structure of the proof follows step by step that of [28,Theorem 3.5] (see also [3,Theorem 4.1]). We report it here briefly for the reader convenience, underlying the use of different function spaces. In particular, we notice that closed and bounded subsets of L p (U, η) are not compact, which does not allow us to apply Ascoli-Arzelà Theorem in combination to Theorem 3.3 to obtain a mean-field limit result. The proof goes through a finite-dimensional approximation and involves three steps.
Step 2: Existence and approximation of Lagrangian solutions. We fix a sequence of atomic measuresΛ N ∈ P(B Y ε δ ) such that Such a sequence can be constructed as follows: letȳ i (z) ∈ Y ε be independent and identically distributed with lawΛ, so that the random measuresΛ N := 1 N N i=1 δȳi (z) almost surely converge in P 1 (Y ε ) toΛ. Then, choose a realization z such that this convergence takes place. By Theorem 3.3, there exists unique the solution to system (3.30) with initial conditionȳ = (ȳ 1 , . . . ,ȳ N ) and let Λ N t be the associated empirical measures. As Λ N t are also Lagrangian solutions to (4.1) with initial con-ditionΛ N , (4.8) provides a constant C := C(ε, δ, T ) such that for every t ∈ [0, T ] and every N, ) is a Cauchy sequence, and there exists Λ ∈ C([0, T ]; (P 1 (B Y ε ), W 1 )) such that Λ N t converges to Λ t with respect to the Wasserstein distance W 1 , uniformly in t ∈ [0, T ]. Moreover, arguing as in the proof of (4.6), we may find¯ ≥ such that Y Λ (t, 0,ȳ) ∈ B Y ε for every t ∈ [0, T ] and everyȳ ∈ B Y ε δ . In view of (3.4) and (3.5) we obtain that Step 3: Uniqueness and conclusion. Uniqueness of Lagrangian solutions, given the initial datum, follows now from (4.8
Fast Reaction Limit for Undisclosed Replicator-Type Dynamics
The aim of this section is to address the case in which the dynamics for the labels runs at a much faster time scale than the dynamics for the agents' positions. In this case, introducing the fast time scale τ = λ t, with λ 1, system (3.30) takes the form Note that, for ε > 0 and 0 < r ε < 1 < R ε < +∞ as in Proposition 3.2, the wellposedness of (5.1) is still guaranteed by Theorem 3.3 (see Proposition 5.3). We focus on the behavior of system (5.1) as λ → +∞, thus we are interested in the case of instantaneous adjustment of the strategies. From now on, for Ψ ∈ P 1 (Y ε ) we denote ν := π # Ψ, where π : Y ε → R d is the canonical projection over R d . If Λ N , Λ are curves with values in P 1 (Y ε ), the symbols μ N and μ will instead indicate the curves of measures μ N t , μ t , obtained as push-forward of Λ N t and Λ t for t ∈ [0, T ] through π. We assume that the strategies dynamics is of replicator type, i.e., we suppose that in the second equation in (5.1) the operator T Ψ takes the form for a map F : , +∞] satisfying the following properties: (F1) for every > 0, every ν ∈ P(B ), every x ∈ B , and every ∈ C ε , the map u → F ν (x, (u), u) is η-integrable; (F2) for every > 0, every ν ∈ P(B ), every x ∈ B , and every u ∈ U , the map g (ν,x,u) : (0, +∞) → R defined as g (ν,x,u) (ξ) := F ν (x, ξ, u) is convex, is differentiable, and its derivative g (ν,x,u) is Lipschitz continuous in (0, +∞), uniformly with respect of (ν, x, u) ∈ P(B ) × B × U ; (F3) there exists C F > 0 such that for every > 0, every ν ∈ P(B ), every x ∈ B , every ξ ∈ (0, +∞), and every u ∈ U are Lipschitz continuous in P 1 (B ) × B uniformly with respect to u ∈ U and ξ ∈ (0, +∞). Namely, there exists Γ > 0 such that for every ξ ∈ (0, +∞), every x 1 , x 2 ∈ B , every ν 1 , ν 2 ∈ P(B ), and every u ∈ U (F5) for every > 0, every ν ∈ P(B ), every ξ ∈ (0, +∞), and every u ∈ U , the map F ν (·, ξ, u) is differentiable in R d .
The following proposition provides a set of conditions under which assumptions (F1)-(F5) are satisfied for integral functionals.
As we did in Sect. 3, from now on we fix ε > 0 and 0 < r ε < 1 < R ε < +∞ as in Proposition 3.2 (or, equivalently, as in Proposition 5.3). We recall that we set Our goal is to prove the convergence, as λ → +∞, of system (5.1) to a suitable system of agents with labels, where such labels are defined as minima of some particular functionals. In Proposition 5.7 we introduce the prototype for these functionals and present some of its properties. Before stating Proposition 5.7, we recall the definition of Fréchet differentiability on C ε (see, e.g., [3, Appendix A.1]).
Definition 5.5. (Fréchet differentiability) Let us set E
Remark 5.6. Notice that the linear operator L in Definition 5.5 is not uniquely determined on E C ε , while it is unique on the cone E := R + (C ε − ). For this reason, we will always use the notation DF( ) to denote the operator L.
As a consequence of Proposition 5.7 we have the following corollary.
and let G be defined as in (5.4). Then, for every > 0, every ν ∈ P(B Y ε ), every x ∈ B , and every 1 ≤ p < +∞, there exists a unique solution x,ν to the minimum problem min Moreover, there exists β ε > 0 and A ε, > 0 such that for every x, x 1 , x 2 ∈ B , every ν, ν 1 , ν 2 ∈ P(B ), and every ∈ C ε 10) 11) Proof. The existence and uniqueness to the minimum problem is a direct consequence of the strong and uniform convexity of G ν (x, ·) and of the convexity of C ε . Then, by the minimality of x,ν and by the local strong convexity of t → t log t, there exists β ε > 0 such that for every L 2 (U,η) , which proves (5.9).
As intermediate step towards the main result of this section we have the following lemma, where we estimate the behavior, as λ → +∞, of the labels i t in system (5.1). For later use, we introduce here the map Δ : R d × P 1 (R d ) → C ε defined as Δ(x, ν) := argmin ∈C ε G ν (x, ).
Step 1. We first show that the player's' locations x i t are bounded in R d independently of λ, N , and t. Indeed, using (v3) and recalling that m 1 (Λ N t ) ≤ max i=1,...,N y i t Y and that i t ∈ C ε , we have that for every i = 1, . . . , N Thanks to (ii) of Lemma 5.9, we may continue in (5.27) with λ,t − t (L p (U,η)) N ≤ ω ε,δ Combining (v1), (v2), and inequality (5.28), we further estimate | 8,349.6 | 2022-10-03T00:00:00.000 | [
"Mathematics"
] |
Effects of Overused Top-hammer Drilling Bits
The drill bits are the foremost common consumables in the mining industry but an essential part of the rock excavation process. The management of the bit wear directly influences the drilling quality and the productivity of the mine but often overlooked as a common consumable part. The study aims to analyze the effects of overused top-hammer drilling bits to various type of bit failure modes. 341 drill bits samples (ST68-102mm) were visually investigated to check the status of overused and failures. The button chipped (BC) type of failure occurs most frequently among all other types of bit failures. Subsequently, a positive correlation between the number of grinding rounds and bit failures were found. In addition, a cost analysis was conducted to demonstrate the adverse effects of drilling with overused bits. The results explicitly show the cost loss of using overused bits as the cost per metre (CPM) of 75% flat bits calculated to 3.1 AUD per metre while the CPM stays at 1.6 AUD per metre for using rock bits with 30% flat buttons.
Introduction
In rock drilling, the wear on drill bit button significantly affects its service life and machine operating cost [1]. Thus, continuous failure analysis and performance evaluation of the drill bit is crucially required to reduce the operation cost. Few pieces of research have been conducted on the tungsten carbide (WC/ Co) button failures modes and button wear characteristics. Most studies have focused on microscopy level of WC/Co button failure analysis. For instance, Swick, et al. [2] conducted experiments using microscopy methods on WC/Co button worn surfaces to reveal the bit wear characteristics. The drilling experiments were conducted using a rotary-percussive rock drill with Sandvik Coromant 33mm button bit to three different types of rock, i.e., granite, dolerite and diorite from the Boddington Mine, Western Australia. Rapid tool wear was observed with the granite sample experiencing micro and macro spalling on the bit buttons. Contrast, only micro spalling was monitored on both dolerite and diorite samples of which demonstrated less bit wear than the bit applied to the granite sample. The experiment discovered the critical dependence of drilling efficiency to the scale of rock spalling. Gupta, et al. [3] introduced different wear modes from experiments on the bit scanning electron microscopy (SEM). The research introduced a high qualitative wear classification system through accurate microscopy observations providing a close-range view of button bits figures. The study aims to analyze the effects of overused drill bits to various type of bit failure modes through statistical analysis. In following Section 2 demonstrates the effects of overused drill bit failures, Section 3 explains common bit failure modes, Section 4, describes the statistical analysis of bit failures from MINE-A, and Section 5 demonstrates an example of an annual cost of bit failures of mines with different production capacities. Section 6 has discussions and conclusions of the study.
The Effects of Overused Bits to Drill Bit Failures
Drill bit failure is governed by various conditions. The influencing factors of the bit failure can be broadly classified into manufactures, end users, and rock types. Especially, rock properties must be clearly understood to evaluate the performance of drilling and the wear of drilling tools as the rock tool wear is significantly dependent upon the rock type [4]. The hard rock consists with high silica and quartzite generates extremely high pressures that increased the chance of the bit button removal rate, regional failures, and crushes of the tungsten carbide (WC/Co) bit surfaces [5]. Another critical factor to drill bit failures is the overusing. The mining industry often refers it to 'overboard' which is an industrial terminology of overused bits. Industry classifies overboard bits if the diameter of the flat face of the bit button is larger than one-third of the original button diameter. Figure 1 is an example of an overboard ST68-102mm drill bit from MMTC (Mitsubishi Material Trading Corporation) shows flat worn tungsten carbide buttons. The overboard bit tends to cause cracked buttons which leads to tremendous adverse effects on the tool efficiency and triggering drill hole deviations. In other words, the rate of button damage increases when a bit is overused. Furthermore, the overboard bit significantly drops the penetration rate. When a wear flat of a button is equivalent to one-third of the button diameter, the penetration rate will be dropped by 5%. Further use of the overboard bit to two third of the wear flat will drop its penetration rate by 30% [6]. A common industrial practice to extend the drill bit service life is the bit button grinding. The industrial rule of thumb suggests having around 10 times of bit button grinding for the bit sizes around 100 mm. Furthermore, the performance of the entire drilling operation can be significantly enhanced by proper bit button grindings. The grinding of bit buttons naturally causes the tungsten carbide material loss [5]. However, the ideal shape of the button by removing sources of stress concentration should be consistently maintained to protect the WC/Co buttons from catastrophic fractures. Button sheared-off with body level. c) Button sheared-off below body level. d) Cracked carbide. e) Lost button. f) Failure at Skirt.
Common Button Bit Failure Modes
The study focuses the visible classification of each failure mode that occurred during the drilling operation. Thus, knowledge of bit failure modes is critical. In this section, six common bit failure modes and the main causative factors will be introduced. The main reason for the button chipped (BC) failure is the overboard. If a bit is overused, the overboard causes to create micro-fractures on the WC/Co buttons that are likely developed to further cracks. Figure 2 (a) shows a typical primary breakage on top of the WC/Co button. In excessive condition, trailing edges with more than two chipping progress on the same button crossing through the bottom line of the button. Prior to the button chipped via brittle fracture, the WC/Co undergoes a plastic deformation at an overused carbide component with high-stress concentrations [7]. The button chipped failure can be prevented with a regular inspection of bits to grind the wounded WC/Co surface to remove micro-cracks if necessary. The button chipping phenomenon can also be reduced using a bit with a softer grade WC/Co or increasing rotation speed while drilling [8].
The failure mode button sheared-off with body level (SOW) usually shows a clear flat sheared surface on a button as shown in Figure 2 (b). This failure normally left trailing edges and mainly occurred due to the overuse of bits with poor operational skills. The WC/Co button is often sheared off when encountered with unexpected metallic materials (i.e., rock bolts or cable bolts) during the operation. The failure mode sheared-off button below body level (SOB) has similar features with the SOW failure mode as shown in Figure 2 (c). The failure mode can occur due to the incorrect size correlation between a button and a buttonhole which can be acknowledged as a manufacturing error. The main causative factor of cracked tungsten carbide (CC) failure in Figure 2 (d) is the bit overusing. As the fracturing progressions to the BC failure, the bit overusing generally creates micro fractures in WC/Co and weakens the WC/Co material. Visible cracks arise from these micro fractures after excessive use of bits and a fine abrasion mechanism is generated. After the material resistance to thermal fatigue is exceeded, small cracks start to grow through the cobalt phase into the tungsten phase. WC/Co grains start to get fragmented into debris and create material. The reptile skin pattern becomes visible once WC/Co starts to fracture and rock debris is pressed into cracks. Subsequently, entire WC/Co grains will be removed by abrasion, which also affects the cobalt phase as well [9]. The lost buttons (LB) failure mode (Figure 2 (e)) rarely occurs as the failure is not caused by the mechanical impacts during the drilling operation. The main cause of the failure mode is the free hammering of the bit in the air. The free hammering generates massive dynamic shock impacts that propagate back and heat up all steel parts of a drill rig. Especially when a bit is free hammered in a borehole, the probability of the failure of gauge buttons is increased as they are easily impacted by the borehole wall. Another main cause of the lost button's failure is improper soldering of the buttons in a bit base steel which can be considered as a manufacturing error [10].
The failure at skirt (FS) mode seldom occurs during the drilling operation which is mainly caused when a bit is excessively used in extraordinary situations. For instance, if an excessive rotational speed is applied, a stuck bit gets heated along the thread inside the steel body which can cause the skirt failure. In addition, incorrect collaring practices could be another reason but material fatigue from excessive hammering is the most common cause of the failure. On the other hand, material failure could be acknowledged as a manufacturing error [10]. In the majority bit failure modes, the regular bit inspection is the most effective method to reduce bit failures. Through the regular inspection, proper grinding intervals of the bit for the given geological condition can be determined to prevent possible bit failures and productivity loss.
Data Analysis and Discussion
Over a period of 4 months, 341 drill bit samples (ST68-102mm-MMTC) had been collated. The number of used bits were regularly sent to a bit grinding service center from MINE-A, Kalgoorlie, Australia. Prior to the bit grinding process, each bit had been visually inspected to examine the status of overboard and failures. Table 1
Effects of bit grindings to failure modes
It is an apparent theory to propose that the more often a bit is ground, the more likely the bit will fail. This can be proved via calculating the percentage of failure modes in each round of grindings (NG) which is demonstrated in Figure 3 including the number of samples in each grinding round. As can be seen from Figure 3, the number of samples is steadily increased to the 3 rd grinding round and rapidly drops after 70 in the 4 th grinding. Given that the bits had been consistently collated from one mine, one can expect that the number of recyclable bits was gradually reduced after 3 rd grinding round. In other words, bits were gradually discarded due to the accumulated damages through three rounds of grinding and reuse. Except for the LB failure mode (less than 2 or no occurrence), the percentage of bit failure modes is gradually increased up to 3rd grinding round. The percentage of BC and SOW type failure modes is steadily increased up to 7 th grinding round while the percentage of CC and SOB type failure modes is fluctuated between 5th and 7 th grinding round due to the lack of samples. The BC, SOW, SOB, and CC can be recognized as the major bit failure modes as they have frequently appeared through the data collection. The relation between the number of bit grinding round and the percentage of bit failure can be analyzed via the linear regression analysis as demonstrated in Figure 4. The analysis was conducted excluding the 7 th grinding data of SOB and the 5 th to 7 th grinding data of CC as they show abnormal trends due to small number of data sets. As a result, the grinding round and the percentage of bit failure shows a very high correlation with the correlation determination (R2) of 0.86. This result proves the higher number of bit grindings will cause a higher frequency of bit failures.
Comparison between normal bits and overboard bits
As shown in section 4.1, the overusing bit significantly increases the bit failure frequencies. This section attempts to compare the failure frequencies between the normal bits (NB) and the overboard drill bits (OB). The number of the overboard bits were 169 while 172 samples were classified as the normal bit as the wear flat of buttons was less than 30% of the button diameter. The comparison has been conducted using the average percentage of bit failure of 1 st to 7 th grinding rounds in each failure mode. As a result, the average percentage of failures for normal and overboard bits were compared in Figure 5. The overboard bits significantly increase the chance of bit failures. As shown in Figure 5, the av-erage percentage of bit failure of the overboard bit in BC, CC and SOW types of failures increases to 23.53%, 23.53%, and 16.91% respectively. The results are 1.68, 1.52, and 2.56 times higher than the average percentage of normal bit failures. Furthermore, the overboard bits significantly increase the average percentage of SOB type bit failure to 26.47% which is 4.5 times greater than the average percentage of the normal bit failure (5.88%).
Cost Analysis on the Drill Bits
In this section, a cost analysis has been conducted to identify how much the operating cost for drill bits might be saved annually. The comparison is conducted between the normal bits with 30% size of the flat wear and the overboard bits with 50% and 75% size of the flat wear. According to the company factsheet, MINE-A has an annual production of 1.8 million tonnes per year from its underground operation. The other necessary parameters are assumed for further calculation as below. The drilling distance of a new drill bit is assumed as 40m per bit and 30m for the overboard drill bit regardless of the number of grinding round (NG). For instance, drill bits with 1 NG and 7 NG have the same 30m travel distance. From this assumption, the drilling distance of one normal drill bit (less than 30% of WC/Co wear) can be calculated to 250m as 40m + (30m × 7 times of grinding). In the same way, the drilling distance of 50% and 75% overboard bits can be calculated as 190m and 130m respectively. A specific drill bit, ST68-102mm by MMTC, has a diameter of 102mm steel body matrix. The range of the drill bit price is generally from $400 to $ 500 Australian Dollar (AUD). In this analysis, the drill bit cost has been assumed as $400 AUD. Top-hammer drilling bits are used in underground production with hard rock geological conditions. This analysis as-sumed the overall density of 2.7 t/m 3 in the MINE-A. In order to calculate the annual operating cost of the drill bit, Cost per metre (CPM) is required which are listed in Table 2 with respect to the size of WC/Co wear flat. The CPM rate is $1.6 per metre for the normal bit and $2.1 and $3.1 per metre for 50% and 75% overboard bit respectively. The drilling length per tonne is assumed as 5.48 t/m 11. Since the MINE-A has an annual production of 1.8 million tonnes, the annual operating cost on the drill bit can be calculated as $525,547 AUD per year when the rate of bit wear is managed within the range of normal bit (1,800,000 (tonne/ year) × 1.6 ($/m) × (1/5.48) (m/tonne)). In order to compare the cost increments with different production rate, 1 and 2.5 million tonnes per year have been analysed as demonstrated in Figure 6. With a production rate of 1 million tonnes per year, the operational cost loss of employing 75% overboard bits is $0.28 million AUD/year comparing with using the normal bits with less than 30% WC/Co button wear. Furthermore, the operational costs increase as the production rate increases. The cost increments of 75% overboard bits in comparison of the normal bits of 1.8Mt and 2.5Mt are $0.49 million AUD and $0.68 million AUD respectively. Given that the overboard drill bits cause higher frequencies of drill bit failures, the operating cost will be further increased.
Conclusion
The drill bits are one of the most common consumables but an essential part of the rock excavation process. The maintenance of the drill bit has a direct influence on not only the quality of the drilling but also the efficiency of the operation. The drilling operation in the mining industry often left to contractors and the maintenance of the drill bits is often overlooked. The study aims to analyze the effects of overboard (overused) bits to various failure modes in top-hammer drill bits. 102mm drill bit (ST68-102mm by MMTC) was used in underground production at 'MINE-A' and 341 drill bits failure data were collated over the four months at a bit grinding centre at Kalgoorlie, WA, Australia. The collected bits were visually inspected and Button chipped (BC) type failure was revealed as the most frequent failure mode among six other failure modes, i.e., Sheared-off with body level (SOW), Shearedoff below body level (SOB), Cracked carbide (CC), Lost buttons (LB), and Failure at skirt (FS). The relation between the numbers of bit grinding round and the percentage of bit failure had been analyzed using linear regression analysis applying data samples of four major bit failure modes (BC, SOW, SOB, and CC). The result shows a very strong correlation between the grinding round and the percentage of bit failure with the correlation determination (R2) of 0.86. Subsequently, the average percentage of bit failure of the normal bits and the overboard drill bit is compared. The comparison demonstrates that the overboard bits significantly increase the possibility of BC and CC type bit failures approximately 2.5 times greater than the normal bits. Furthermore, the overboard bits significantly affect the SOB type bit failure with 4.5 times higher chance of bit failure than the normal bits. Lastly, a cost analysis of utilizing a different percentage of WC/Co button wear is conducted employing the cost per metre (CPM) rates and annual operating costs. CPM of the normal bits (the bit button wear flat is less than 30% of the original button diameter) is calculated as $1.6 per metre while CPM of 75% overboard bits is calculated as $3.1 per metre. The annual operating cost increases as the production increases. The cost increments of 30% and 75% WC/Co button wear in the annual production of 1.0 Mt, 1.8 Mt and 2.5 Mt are calculated as $0.28M, $0.49M, and $0.68M respectively. The results explicitly show the cost loss of using overboard bits. Given that the increasing bit failure percentage of overboard bits compared with normal bits, the cost loss would be significantly increased when bit wear is improperly managed. | 4,285.4 | 2019-09-25T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
LPI Optimization Framework for Target Tracking in Radar Network Architectures Using Information-Theoretic Criteria
Widely distributed radar network architectures can provide significant performance improvement for target detection and localization. For a fixed radar network, the achievable target detection performance may go beyond a predetermined threshold with full transmitted power allocation, which is extremely vulnerable in modern electronic warfare. In this paper, we study the problem of low probability of intercept (LPI) design for radar network and propose two novel LPI optimization schemes based on information-theoretic criteria. For a predefined threshold of target detection, Schleher intercept factor is minimized by optimizing transmission power allocation among netted radars in the network. Due to the lack of analytical closed-form expression for receiver operation characteristics (ROC), we employ two information-theoretic criteria, namely, Bhattacharyya distance and J-divergence as the metrics for target detection performance. The resulting nonconvex and nonlinear LPI optimization problems associated with different information-theoretic criteria are cast under a unified framework, and the nonlinear programming based genetic algorithm (NPGA) is used to tackle the optimization problems in the framework. Numerical simulations demonstrate that our proposed LPI strategies are effective in enhancing the LPI performance for radar network.
Introduction
Radar network architecture, which is often called as distributed multiple-input multiple-output (MIMO) radar, has been recently put forward and is becoming an inevitable trend for future radar system design [1][2][3].The performance of radar network heavily depends on optimal power allocation and transmission waveform design, so enhanced improvements on target detection and information extraction would be realized by spatial and signal diversities.
Currently, system design for target detection and information extraction performance improvement has been a long-term research topic in the distributed radar network literature.In [4], Fishler et al. propose the distributed MIMO radar concept and analyze the target detection performance for distributed MIMO radar.Yang and Blum in [5] study the target identification and classification for MIMO radar employing mutual information (MI) and the minimum mean-square error (MMSE) criteria.The authors in [6] investigate the problem of code design to improve the detection performance of multistatic radar in the presence of clutter.Niu et al. propose localization and tracking approaches for noncoherent MIMO radar, which provides significant performance enhancement over traditional phased array radar [7].
Power allocation problem in radar network architecture has been attracting contentiously growing attention, and some of the noteworthy publications include [8][9][10][11][12][13][14].The work of [8] investigates the scheduling and power allocation problem in cognitive radar network for multiple-target tracking, in which an optimization criterion is proposed to find a suitable subset of antennas and optimal transmitted power allocation.Godrich et al. in [9][10][11] address the power allocation strategies for target localization in distributed multipleradar configurations and propose some performance driven resource allocation schemes.In [12], the authors investigate target threatening level based optimal power allocation for LPI radar network, where two effective algorithms are 2 International Journal of Antennas and Propagation proposed to enhance the LPI performance for radar network.Furthermore, in [13,14], several optimal power allocation algorithms for distributed MIMO radars with heterogeneous propagation losses are presented to enhance target detection and information extraction performance.However, up to now, the low probability of intercept (LPI) optimization for radar network architecture is still an open problem, which is playing an increasingly important role in modern electronic warfare [1,[15][16][17][18].Therefore, it is an urgent task to investigate the LPI optimization problem in radar network.This paper will extend the results in [6] and propose two novel LPI optimization algorithms based on informationtheoretic criteria for radar network architecture.Our purpose is to minimize Schleher intercept factor by optimizing transmission power allocation among netted radars for a predefined threshold of target detection.Due to the lack of analytical closed-form expression for receiver operation characteristics (ROC), we employ two information-theoretic criteria including Bhattacharyya distance and J-divergence as the metrics for target detection performance.As demonstrated later, the proposed algorithms can provide significant LPI performance improvement for radar network.To the best of the authors' knowledge, no literature discussing the information-theoretic criteria based LPI optimization for radar network architecture was conducted prior to this work.
The remainder of this paper is organized as follows.Section 2 provides the radar network system model and binary hypothesis test.We first derive Schleher intercept factor for radar network in Section 3 and formulate the problems of information-theoretic criteria based LPI optimization, where the resulting nonconvex and nonlinear LPI optimization problems associated with different informationtheoretic criteria are cast under a unified framework and solved through the nonlinear programming based genetic algorithm (NPGA).Numerical examples are provided in Section 4. Finally, conclusion remarks are drawn in Section 5.
Radar Network SNR Equation.
We consider a radar network architecture with transmitters and receivers, which can be broken down into × transmitter-receiver pairs each with a bistatic component contributing to the entirety of the radar network signal-to-noise ratio (SNR) [1].Depicted in Figure 1 is an example of 4 × 4 radar network.All the radars have acquired and are tracking the target with their directional antenna beams.The netted radars 1, 2, 3, and 4 transmit orthogonal waveforms (as solid lines) but receive and process all these echoes that are reflected from the target (as dotted lines) and send the estimates to one of the radars in the network for data fusion with data link.
For the radar network here, orthogonal polyphase codes are employed in the system, which have a large main lobeto-side lobe ratio.These codes have a more complicated signal structure making it more difficult to be intercepted and detected by a hostile intercept receiver.It is also supposed that the network system has a common precise knowledge of space and time.The radar network SNR can be calculated by summing up the SNR of each transmitreceive pair as [1] follows: where the is the th transmitter power, is the th transmitting antenna gain, is the th receiving antenna gain, is the radar cross-section (RCS) of the target for the th transmitter and th receiver, is the th transmitted wavelength, is Boltzmann's constant, is the receiving system noise temperature at the th receiver, is the bandwidth of the matched filter for the th transmitted waveform, is the noise factor for the th receiver, is the system loss between the th transmitter and th receiver, is the distance from the th transmitter to the target, and is the distance from the target to the th receiver.
Radar Network Signal Model.
According to the discussions in [14], the path gain contains the target reflection coefficient and the propagation loss factor .Based on the central limit theorem, ∼ CN(0, ), where denotes the target reflection gain between radar and radar .The propagation loss factor is a function of radar antenna gain and waveform propagation distance, which is expressed as follows: It is supposed that the transmitted waveform of the th netted radar is √ (), and then the collected signals at the th receiver from a single point target can be written as follows: where ∫ | ()| 2 = 1, represents the time delay, () denotes the noise at receiver , and the Doppler effect is negligible.At the th receiver, the received signal is matched filtered by time response * (−), and the output signal can be expressed as follows: where ñ () = ∫ ()⋅ * (−) and ∫ ()⋅ * (+) = 0 for ̸ = .The discrete time signal for the th receiver can be rewritten as follows: where is the output of the matched filter at the receiver sampled at , = ñ ( ), and ∼ CN(0, ).As mentioned before, we assume that all the netted radars have acquired and are tracking the target with their directional radar beams, and they transmit orthogonal waveforms while receiving and processing all these echoes that are reflected from the target.In this way, we can obtain .
Binary Hypothesis
Test.With all the received signals, the target detection for radar network system leads to a binary hypothesis testing problem: where 1 ≤ ≤ , 1 ≤ ≤ .The likelihood ratio test can be formulated as follows: As introduced in [14], the underlying detection problem can be equivalently rewritten as follows: Then, we have the optimal detector as follows: where denotes the detection threshold.
Problem Formulation
In this section, we aim to obtain the optimal LPI performance for radar network architecture by judiciously designing the transmission power allocation among netted radars in the network.We first derive Schleher intercept factor for radar network system and then formulate the LPI optimization problems based on information-theoretic criteria.For a predefined threshold of target detection, Schleher intercept factor is minimized by optimizing transmission power allocation among netted radars.It is indicated in [6] that the analytical closed-form expression for ROC does not exist.
As such, we resort to information-theoretic criteria, namely, Bhattacharyya distance and J-divergence.In what follows, the corresponding LPI optimization problems associated with different information-theoretic criteria are cast under a unified framework and can be solved conveniently through NPGA.
Schleher Intercept Factor for Radar Network.
For radar network, it is supposed that all signals can be separately distinguished at every netted radar node.Assuming that every transmitter-receiver combination in the network can be the same and 2 net ≜ ⋅ , in which case the radar network SNR equation ( 1) can be rewritten as follows (see Appendix A): where is the total transmitting power of radar network system.Note that, when = = 1, we can obtain the monostatic case where mon is the distance between the monostatic radar and the target, while, for intercept receiver, the SNR equation is where SNR int is the SNR at the interceptor signal processor input, is the gain of the radar's transmitting antenna in the direction of the intercept receiver, is the gain of the intercept receiver's antenna, is the interceptor noise factor, is the bandwidth of the interceptor, int is the range from radar network to the intercept receiver, and refers to the losses from the radar antenna to the receiver.For simplicity, we assume that the intercept receiver is carried by the target.
As such, the interceptor detects the radar emission from the main lobe; that is, = .Herein, Schleher intercept factor is employed to evaluate LPI performance for radar network.The definition of Schleher intercept factor can be calculated as follows: where rad is the detection range of radar and int is the intercept range of intercept receiver, as illustrated in Figure 2.
Based on the definition of Schleher intercept factor, if > 1, radar can be detected by the interceptor, while if ≤ 1, radar can detect the target and the interceptor cannot detect the radar.Therefore, radar can meet LPI performance when ≤ 1.Moreover, minimization of Schleher intercept factor leads to better LPI performance for radar network architecture.
With the derivation of Schleher intercept factor in Appendix B, it can be observed that, for a predefined target detection performance, the closer the distance between radar system and target is, the less power the radar system needs to transmit on guarantee of target detection performance.For simplicity, the maximum intercept factor max mon is normalized to be 1 when the monostatic radar transmits the maximal power max tot , and SNR net = SNR mon .Therefore, when the transmission power is , the intercept factor for radar network system can be simplified as follows: where mon is the Schleher intercept factor for monostatic radar.From (16), one can see that Schleher intercept factor net is reduced with the increase of the number of radar receivers and the decrease of the total transmission power in the network system.
Bhattacharyya Distance Based LPI Optimization
Scheme.It is introduced in [6] that Bhattacharyya distance ( 0 , 1 ) measures the distance between two probability density functions (pdf) 0 and 1 .The Bhattacharyya distance provides an upper bound on the probability of false alarm fa and at the same time yields a lower bound on the probability of detection .Consider two multivariate Gaussian distributions 0 and 1 , 0 ∼ CN(0, 0 ), and 1 ∼ CN(0, 1 ); the Bhattacharyya distance ( 0 , 1 ) can be obtained as [6] ( 0 , Let [( | 0 ), ( | 1 )] represent Bhattacharyya distance between 0 and 1 , where ( | 0 ) and ( | 1 ) are the pdfs of r under hypotheses 0 and 1 .For the binary hypothesis testing problem, we have that Based on the discussion in [6], maximization of the Bhattacharyya distance minimizes the upper bound on fa while it maximizes the lower bound on .As expressed in (18), the Bhattacharyya distance derived here can be applied to evaluate the target detection performance of radar network as a function of different parameters, such as the transmitting power of each netted radar and the number of netted radars in the network.Intuitively, the greater the Bhattacharyya distance between the two distributions of the binary hypothesis testing problem, the better the capability of radar network system to detect the target, which would make the network system more vulnerable in modern electronic warfare.Therefore, the Bhattacharyya distance can provide guidance to the problem of LPI optimization for radar network architecture.
Here, we focus on the LPI optimization problem for radar network architecture, where Schleher intercept factor is minimized by optimizing transmission power allocation among netted radars in the network for a predetermined Bhattacharyya distance threshold, such that the LPI performance is met on the guarantee of target detection performance.Eventually, the underlying LPI optimization problem can be formulated as follows: min where ⇀ P t = [ 1 , 2 , . . ., ] is the transmitting power of radar network, th is the Bhattacharyya distance threshold for target detection, max tot is the maximum total transmission power of radar network, and max (for all ) is the maximum transmission power of the corresponding netted radar node.
J-Divergence
Based LPI Optimization Scheme.The Jdivergence ( 0 , 1 ) is another metric to measure the distance between two pdfs 0 and 1 .It is defined as follows: where (⋅) is the Kullback-Leibler divergence.It is shown in [19] that, for any fixed value of fa , and, for any fixed value of , we can obtain With the derivation in [6], we have that Consequently, the corresponding LPI optimization problem can be expressed as follows: where th is the J-divergence threshold for target detection.
The Unified Framework
where net ∈ { net , net } and th is the corresponding threshold for target detection.
In this paper, we utilize the nonlinear programming based genetic algorithm (NPGA) to seek the optimal solutions to the resulting nonconvex, nonlinear, and constrained problem (25).The NPGA has a good performance on the convergence speed, and it improves the searching performance of ordinary genetic algorithm.
The NPGA procedure is illustrated in Figure 3, where the population initialization module is utilized to initialize the population according to the resulting problem, while the calculating fitness value module is to calculate the fitness values of individuals in the population.Selection, crossover, and mutation are employed to seek the optimal solution, where is a constant.If the evolution is 's multiples, we can use NP approach to accelerate the convergence speed.
So far, we have completed the derivation of Schleher intercept factor for radar network architecture and the information-theoretic criteria based LPI optimization schemes.In what follows, some numerical simulations are provided to confirm the effectiveness of our presented LPI optimization algorithms for radar network architecture.
Numerical Simulations
In this section, we provide several numerical simulations to examine the performance of the proposed LPI optimization algorithms as (19) and (24).Throughout this section, we assume that max tot = ∑ =1 = 24 KW, = = 30 dB, = 10 −10 , and = 1.The SNR is set to be 13 dB.The traditional monostatic radar can detect the target whose RCS is 0.05 m 2 in the distance MAX = 106.1 km by transmitting the maximum power max tot = 24 KW, where the intercept factor is normalized to be 1 for simplicity.Carlo trials.We can observe in Figures 4 and 6 that as Schleher intercept factor increases from net = 0 to net = 2 the achievable Bhattacharyya distance and logarithmic Jdivergence are increased.This is due to the fact that as the intercept factor increases more transmission power would be allocated, which makes the achievable Bhattacharyya distance and logarithmic J-divergence increase correspondingly as theoretically proved in ( 18) and ( 23).Furthermore, it can be seen from Figures 4 and 6 that, with the same target detection threshold, Schleher intercept factor can be significantly reduced as the number of transmitters and receivers in the network system increases.Therefore, increasing the number of netted radars can effectively improve the LPI performance for radar network.This confirms the LPI benefits of the radar network architecture with more netted radars.As shown in Figures 5 and 7, we illustrate the Bhattacharyya distance and logarithmic J-divergence versus Schleher intercept factor for different target scattering intensity with = = 4, respectively.It is depicted that as the target scattering intensity increases from = 1 to = 10 the achievable Bhattacharyya distance and logarithmic J-divergence are significantly increased.This is because the radar network system can detect the target with large scattering intensity easily with high and low fa .
Target Tracking with LPI Optimization.
In this subsection, we consider a 4 × 4 radar network system ( = = 4) in the simulation, and it is widely deployed in modern battlefield.The target detection threshold th can be calculated in the condition that the transmission power of each radar is 6 KW in the distance 150 km between the radar network and the target, which is the minimum value of the basic performance requirement for target detection.As mentioned before, it is supposed that the intercept receiver is carried by the target.It is depicted in Figure 8 that the netted radars in the network are spatially distributed in the surveillance area at the initial time = 0.
We track a single target by utilizing particle filtering (PF) method, where 5000 particles are used to estimate the target state.Figure 9 shows one realization of the target trajectory for 50 s, and the tracking interval is chosen to be 1 s.With the radar network configuration in Figure 8 and the target tracking scenario in Figure 9, we can obtain the distances changing curve between the netted radars and the target in the tracking process as depicted in Figure 10.Without loss of generality, we set 1 as the distributed data fusion center and capitalize the weighted average approach to obtain the estimated target state.
International Journal of Antennas and Propagation To obtain the optimal transmission power allocation of radar network, we utilize NPGA to solve (19) and (24).Let the population size be 100, let the crossover probability be 0.6, and let the mutation probability be 0.01.The population evolves 10 generations.Figure 11 shows the transmitting power of netted radars utilizing Bhattacharyya distance based LPI optimization in the tracking process, while Figure 12 depicts the J-divergence based case.Before = 36 s, netted radars 2, 3, and 4 are selected to track the target, which are the ones closest to the target, while netted radar 1 is selected instead of radar 2 after = 36 s, which is because netted radars 1, 2, and 3 have the best channel conditions in the network.From Figures 11 and 12, we can see that the transmission power allocation is determined by the locations of single target relative to the netted radars and their propagation losses.To be specific, in the LPI optimization process, more transmitting power is allocated to the radar nodes that are located closer to the target; this is due to the fact that they suffer less propagation losses.Figure 13 demonstrates the advantage of our proposed optimization problems based on information-theoretic criteria.The traditional monostatic radar transmits 24 KW constantly, while the ordinary radar network has a constant sum of transmitted power 24 KW and each radar node transmits uniform power.One can see that Schleher intercept factor for radar network employing the information-theoretic criteria based LPI optimization strategies is strictly smaller than that of traditional monostatic radar and ordinary radar network across the whole region, which further shows the LPI enhancement by exploiting our presented LPI optimization schemes in radar network to defend against passive intercept receiver.Moreover, it can be seen in Figure 13 that, in terms of the same system constraints and fundamental quantity, Bhattacharyya distance based LPI optimization is asymptotically equivalent to the J-divergence based case.4-13, we can deduce the following conclusions for radar network architecture.
Discussion. According to Figures
(1) From Figures 4 to 7, we can observe that as the predefined threshold of target detection increases more transmission power would be allocated for radar network to meet the detection performance, while the intercept factor is increased subsequently, which is vulnerable in electronic warfare.In other words, there exists a tradeoff between LPI and target detection performance in radar network system, and the LPI performance would be sacrificed with target detection consideration.
(2) In the numerical simulations, we observe that the proposed optimization schemes ( 19) and ( 24) can be employed to enhance the LPI performance for radar network.Based on the netted radars' spatial distribution with respect to the target, we can improve the LPI performance by optimizing transmission power allocation among netted radars.As indicated in Figures 11 and 12, netted radars with better channel conditions are favorable over others.In addition, it can be observed that exploiting our proposed algorithms can effectively improve the LPI performance of radar network to defend against intercept receiver, and Bhattacharyya distance based LPI optimization algorithm is asymptotically equivalent to the J-divergence based case under the same system constraints and fundamental quantity.
Conclusions
In this paper, we investigated the problem of LPI design in radar network architecture, where two LPI optimization schemes based on information-theoretic criteria have been proposed.The NPGA was employed to tackle the highly nonconvex and nonlinear optimization problems.Simulations have demonstrated that our proposed strategies are effective and valuable to improve the LPI performance for radar network, and it is indicated that these two optimization problems are asymptotically equivalent to each other under the same system constraints.Note that only single target was considered in this paper.Nevertheless, it is convenient to be extended to multiple targets scenario, and the conclusions obtained in this study suggest that similar LPI benefits would be obtained for the multiple targets case.Future work will look into the adaptive threshold design of target detection performance in radar network architectures.
Appendices
A.
Assume that every transmitter-receiver combination in the network can be the same and
B.
According to (15), we can derive the intercept factor for radar network as where max mon is Schleher intercept factor corresponding to the maximal transmitting power max tot .
Figure 1 :
Figure 1: Example of an LPI radar network.
Figure 2 :
Figure 2: The geometry of radar, target, and interceptor.
Figure 4 :
Figure 4: Bhattacharyya distance versus Schleher intercept factor for different radar network architectures.
4. 1 . 6 4 × 4
LPI Performance Analysis.Figures 4 and 6 show the Bhattacharyya distance and logarithmic J-divergence versus Schleher intercept factor for different radar network architectures, respectively, which are conducted 10 Monte radar network with R g = 1 4 × 4 radar network with R g = 5 4 × 4 radar network with R g = 10
Figure 8 :Figure 9 :
Figure 8: The radar network system configuration in two dimensions.
Figure 10 :
Figure 10: The distances between the netted radars and the target.
Figure 11 :
Figure 11: The transmitting power of netted radars utilizing Bhattacharyya distance based LPI optimization in the tracking process.
Figure 12 :Figure 13 :
Figure 12: The transmitting power of netted radars utilizing Jdivergence based LPI optimization in the tracking process.
2net ≜ ⋅ , where the radar network SNR (1) can be written as follows:SNR net = rad 2 (4) 3 .(A.2)Assume that the sum of the effective radiated power (ERP) from all the radars in the network is equivalent to that of monostatic radar; that is,=1 = , (A.3)where and are the transmitting power and transmitting antenna gain of the monostatic radar, respectively.For = (for all ), we can rewrite (A.1) as follows:SNR net = rad 4 net.
2 int ⋅ SNR net rad ⋅ ⋅ SNR 2When SNR net = SNR mon , we can readily obtain the relationship between the intercept factors for radar network net and for the monostatic case mon : max tot is the maximal power of the monostatic radar and MAX is the corresponding maximal detection range.Then, we can obtain | 5,705 | 2014-07-06T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Evaluating regression and probabilistic methods for ECG-based electrolyte prediction
Imbalances in electrolyte concentrations can have severe consequences, but accurate and accessible measurements could improve patient outcomes. The current measurement method based on blood tests is accurate but invasive and time-consuming and is often unavailable for example in remote locations or an ambulance setting. In this paper, we explore the use of deep neural networks (DNNs) for regression tasks to accurately predict continuous electrolyte concentrations from electrocardiograms (ECGs), a quick and widely adopted tool. We analyze our DNN models on a novel dataset of over 290,000 ECGs across four major electrolytes and compare their performance with traditional machine learning models. For improved understanding, we also study the full spectrum from continuous predictions to a binary classification of extreme concentration levels. Finally, we investigate probabilistic regression approaches and explore uncertainty estimates for enhanced clinical usefulness. Our results show that DNNs outperform traditional models but model performance varies significantly across different electrolytes. While discretization leads to good classification performance, it does not address the original problem of continuous concentration level prediction. Probabilistic regression has practical potential, but our uncertainty estimates are not perfectly calibrated. Our study is therefore a first step towards developing an accurate and reliable ECG-based method for electrolyte concentration level prediction—a method with high potential impact within multiple clinical scenarios.
• Lin et al. [27] use 66 321 ECG recordings from 40 180 patients and related potassium concentration in a time frame of ±60 minutes.
• Galloway et al. [26] use 2 835 059 ECG recordings from 787 661 patients and related potassium concentrations.The authors develop their model on 60 %(= 449 380) of the patients.All ECGs were recorded within 4 hours before potassium measurements.
• Kwon et al. [17] have 92 140 patients, whereof 48 356 patients were used for model development with 83 449 ECGs.The study considered potassium, sodium and calcium within ±30 minutes of ECG recordings.
We analysed our datasets in more detail to observe possible causes of errors or shortcuts for our model.In Figure S-1 we show histograms of age, recording year and the time difference between ECG recording and blood measurement.In Figure S-2 we show the distribution of electrolyte concentrations for all four electrolytes, which shows a Normal distribution for all electrolytes except for creatinine which is skewed towards large values.In order to validate our inclusion filter of ±60 minutes, we analyze the concentration of electrolytes vs the time difference and observe no clear change of concentration value over time.A similar analysis is done for age and sex.Here, we observe that older patients tend to have more extreme electrolyte concentration values for all four electrolytes.
A.3 Pre-processing
For the high-pass filter to remove the baseline (trends and low frequencies), we use an elliptic filter with a cut-off frequency of 0.8 Hz and an attenuation of 40 dB which is applied to the forward and reverse direction to avoid phase distortions.We additionally include a notch filter after observing that some ECGs are distorted by power line noise.The notch filter removes the 50 Hz with a quality factor of 30.Also, this filter is applied to the forward and reverse directions for the same reason.We use the pre-processing from the public library github.com/antonior92/ecg-preprocessing.
For the traditional machine learning methods, which we compare in "Results", "Deep Direct Regression", we further apply Principal Components Analysis (PCA) to reduce the dimensionality of the data.Here, we first concatenate all leads to get a 1D signal of length leads • samples = 8 • 4096 = 32 768.Then we fit PCA on our train dataset.We choose the number of principal components based on the eigenvalues in Figure S-3.We see that the eigenvalues decrease fast and start to converge between 200 and 300, which is why we choose to use 256 components.[10], and later also in Lima et al. [13], which also provides a public GitHub repository: https://github.com/antonior92/ecg-age-prediction.We adjust the last linear layer of the model for the different tasks, for example, different number of outputs for classification.
Our ResNet backbone from [10] consists of one convolutional layer followed by four residual blocks.The convolutional layer and each residual block modify the sequence length by {4096, 1024, 256, 64, 16} and the filter size by {64, 128, 196, 256, 320}.We use a kernel size of 17 and a dropout rate of 0.5.
B.2 Hyperparameters
We use the default training hyperparameters from the original network architecture repository.The only deviation is the number of epochs which we reduced from 70 to 30, since this is sufficient for our datasets to converge.The exact hyperparameters are listed in Table S-1.
C Additional Results
Below we present additional results.First, we have a detailed performance table (more detailed than Table 3) for all electrolytes in Table S-2 for the random test set and in Table S-3 for the temporal test set.No significant difference in performance between the test sets is observed which shows that our model is robust to shift and trends over time.
Second, we list more results for classification and ordinal regression.In
Figure S- 1 .
Figure S-1.Histogram of metadata age (top left), recording year (top right) and minutes difference between ECG recording and blood measurement (bottom) for our four datasets.
Figure S-3.Eigenvalues of PCA components fit on train set.We show the first 512 eigenvalues of possible 8 • 4096 = 32 768 ones.We choose to reduce the dimensionality of our signal to 256 as it covers most information according to this figure.
We use a modified ResNet which was first developed in Ribeiro et al.
Figure S-4 we show the MAE for potassium and calcium which complements Figure 4 that shows the Macro ROC. Figure S-5 complements the electrolytes by showing the Macro ROC and MAE for the other electrolytes (creatinine and sodium).Third, we show additional results for probabilistic regression.Figure S-6 gives the calibration plot for potassium.The tables in TableS-4 and TableS-5 contain numeric details about the sparsification plot for more uncertainties, and the correlation between MSE and the variance to quantify the uncertainty calibration.TableS-6 lists the results of the OOD experiments.While the experiments for the SNR are expected (larger MAE and uncertainties for lower SNR), the results for masking are not as clear.While the MAE still increases, notably, especially the epistemic ensemble uncertainty decreases.This means that there is less variance in the mean predictions between the different ensemble members.Finally, Figure S-7, Figure S-8 and Figure S-9 yield the results for the remaining electrolytes that were previously shown for potassium alone.
Figure S-4.Classification (C) and Ordinal regression (O) MAE: Similar to Figure 4, we plot the MAE against the number of classes.The dashed line is the MAE of the corresponding deep direct regression model.
Classification (C) and Ordinal (O) regression: Same plot as Figure 4 and Figure S-4 but for creatinine and sodium (here we only used 4 seeds for shown mean and sd).
Figure S- 6 .
Figure S-6.plot, potassium: Top row and bottom left: calibration plots as standard deviation vs. absolute error (to have the same units) for different uncertainties.Colours indicate frequency by a fitted Gaussian kernel density estimate.A perfectly calibrated model would follow the diagonal.Bottom right: sparsification plot with more results than in the main paper.
Table S - 2 .
Regression performance on the random test dataset: Table shows metrics for different electrolytes of the regression models from "Results", "Deep Direct Regression".Target variance refers to the variance of the dataset and therefore yields a worst case MSE performance (since a model with that MSE just predicts the mean of the dataset).
Table S -
3. Regression performance on the temporal test dataset:Table shows metrics for different electrolytes of the regression models from "Results", "Deep Direct Regression".Target variance has same meaning as in Table S-2.
Table S - 4 .
Sparsification against MAE: Numbers in columns show different levels of sparsification (in per cent), and the corresponding row shows MAE values.This table gives the numeric values of the bottom right plot of Figure S-6.
Table S -
5. Correlation between MSE and Variance: we correlate the MSE with the variance from different uncertainties.A correlation of 1 would indicate perfect calibration.
Table S -
6.OOD experiments: This is an extended table from Table5.SNR X refers to OOD experiments with varying SNR; Mask X refers to OOD experiments where X per cent of the data is masked. | 1,932 | 2024-07-03T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
ILLUMINATION OF A PLANET BY A BLACK-HOLE MOON AS A TECHNOLOGICAL SIGNATURE
,
INTRODUCTION
In this paper, I consider the possibility that an advanced civilization would choose to manufacture a furnace that returns clean energy from the matter fueling it, with a efficiency of mass to energy of nearly 100%, two orders of magnitude above the most efficient nuclear fuel.Once the fuel enters the furnace, it gets consumed and disappears from view.This enclosure of this ideal furnace is the event horizon of a mini black hole.
Advanced civilizations could satisfy their energy needs by processing matter through an accretion disk around a mini black hole that orbits their planet like a moon.
The main technological challenge in producing a mini black hole involves the enormous mass density required to make it.If it is possible to manufacture a mini black hole and keep it as a luminous moon around the planet, then this artificial furnace could replace a star in illuminating and warming a rogue planet that is otherwise frozen and uninhabitable.Rogue (free-floating) planets without a host star to warm them up were recently discovered by gravitational microlensing (Mróz et al. 2020;Sumi et al. 2023;Mróz et al. 2024;Kunimoto et al. 2024;Rektsini & Batista 2024).
For the past half century, cosmologists conjectured that mini black holes might have been produced in the infant Universe, when the radiation energy density was high enough (Carr & Hawking 1974;Carr & Green 2024).It is possible that the dark matter is made of primordial black holes in the mass range ∼ 10 17 -10 22 g (Carr & Kuhnel 2021;Green 2024).Here, we consider a different possibility that a sufficiently advanced technological civilization might have been able to trap a primordial black hole or manufacture a mini black hole in order to satisfy its energy needs.
Stephen Hawking realized in 1974 that a mini black hole would shine on its own, even without an external supply of fuel (Hawking 1974).The associated Hawking radiation is brighter for smaller black holes, causing them to evaporate over a short time.Given the Hawking relations, we identify below the optimal black hole mass for providing the solar energy flux on an Earth-size planet.For simplicity, we focus the discussion on non-spinning (Schwarzschild) black holes.
DESIRED PARAMETERS OF THE BLACK HOLE
To obtain specific numbers, consider a mini black hole that circles a rocky planet like the Earth at an altitude of ∼ 1.5 × 10 3 km, about a quarter of the Earth's radius.This is commonly called a Low Earth Orbit for artificial satellites and was chosen here to obtain modest energy and mass requirements.Such a black hole would supply the energy flux of 1.4 × 10 6 ergs s −1 cm −2 that the Earth is currently receiving from the Sun if its luminosity is L • ∼ 4 × 10 23 ergs s −1 = 10 −10 L ⊙ .
This moon would illuminate the ground under it periodically over an orbital time of about ∼ 90 minutes.The duration of the luminous period scales with the orbital radius of the moon to the 1.5 power.
In terms of fundamental constants, the Hawking luminosity is given by, . (1) The required energy flux at this low-altitude orbit can be supplied by Hawking radiation from a mini black hole with a mass of M • = 0.96 × 10 11 g (L • /10 −10 L ⊙ ) −1/2 , which is equivalent to the mass of an asteroid with a 40-meter diameter.The Hawking evaporation time for such a black hole is, (2) In order to maintain the operation of the furnace for a period longer than a year, it is necessary to supply it with a modest accretion rate of, (3) so as to keep its mass constant.This mass supply resembles the deposition of logs in a wood-burning fireplace.The civilization could automate the process by steadily releasing material from a companion satellite, orbiting in the vicinity of the black hole and feeding its accretion disk in a steady state to compensate for its Hawking radiation loss.If the feeding ever stops, the ∼ 10 11 g black hole would evaporate and disappear within t • ∼ 1.5 years.In the regime where the black hole accretion rate balances the Hawking mass loss rate, the accretion luminosity would make a small fractional correction to the Hawking luminosity, of order the radiative efficiency with typical values 10%.The Hawking temperature of the mini black hole is given by, with the peak of the Hawking radiation emitted in γ-rays with an energy of E γ = 415 (M • /10 11 g) −1 GeV.These photons would be reprocessed by matter in the surrounding accretion disk, as well as the planet's atmosphere and rocky surface, into low-energy radiation and heat that could supply the energy needs of the host civilization.
The Klein-Nishina cross-section for scattering of the emergent radiation on electrons is ∼ 10 −30 cm 2 (E γ /415 GeV) −1 or equivalently ∼ 10 −6 (E γ /415 GeV) −1 of the Thomson cross-section.As a result, the emergent luminosity of ∼ 10 −10 L ⊙ is comparable to the effective Eddington limit for infalling matter onto a ∼ 10 11 g black hole.This numerical coincidence allows matter to accrete into the black hole in response to its attractive gravity despite the repulsive radiative force from the Hawking radiation.
Loeb
The technology to produce a mini black hole of this mass must reach a mass density of, (5) near its event horizon.Whether such a technological feat was accomplished by an advanced civilization in the Milky-Way galaxy remains to be seen.Gamma-ray and infrared telescopes could search for an anomalous gamma-ray moon occulted every 1-2 hours by a warm, infrared-emitting planet.
DETECTION AS A TECHNO-SIGNATURE
If we ever detect a rogue rocky planet which is illuminated by a gamma-ray moon with no stellar-mass companion, we would need to consider the possibility that the source was created (or trapped as a primordial black hole) by a highly-advanced technological civilization.There is no better marker of technological innovation than creating a furnace out of spacetime curvature in the form of a mini black hole.
Currently, there are a large number of startup companies aiming to make compact fusion reactors.A mini black hole would be far more efficient and environmentally friendly. | 1,464.4 | 2024-08-16T00:00:00.000 | [
"Physics",
"Engineering"
] |
Effect of End-winding on Electromagnetic Performance of Fractional Slot and Vernier PM Machines with Different Slot/pole Number Combinations and Winding Configurations
In this paper, the effects of end-winding on electromagnetic performance of surface mounted permanent magnet machines (SPMMs) (including vernier machines) with different slot/pole number combinations and winding configurations are analyzed and compared. By using genetic algorithm based on finite element analysis, SPMMs with different coil pitches are optimized for maximum average torque under fixed copper loss or fixed copper and iron losses, with/without considering the influence of end-winding. The effects of coil pitch and stack length on torque and torque density are investigated, and the optimal coil pitch for each slot/pole number combination is obtained. The efficiencies, inductances, and power factors of optimized SPMMs are also compared. The torques of optimized SPMMs considering iron loss decrease, especially at high speed and larger rotor pole number. It is found that the end-winding has significant effect on torque, torque density, winding inductance, and power factor etc. Compared with the fractional slot concentrated winding SPMMs with the same lamination stack length but slot number higher than pole number, the integer-slot distributed winding SPMMs with pole number higher than slot number have higher torques due to field modulation effect, but lower torque densities due to longer axial end-winding lengths.
I. INTRODUCTION
Owing to high torque density and efficiency, permanent magnet (PM) machines have been extensively employed in various applications, e.g. aerospace, domestic appliance, electric and hybrid electric vehicles, wind power generation, etc. [1]- [4]. Amongst different types of PM machines, radialfield rotor PM machines are the most common topology in terms of electromagnetic performance, manufacturability, and cost, etc. For radial-field rotor PM machines, PMs could be located either on the surface or the interior of the rotor. To date, the surface-mounted PM machines (SPMMs) are widely used due to simple structure since the PMs are mounted on the rotor surface and adjacent to the airgap. Meanwhile, SPMMs are also more preferred in ultra high-speed applications although a rotor sleeve is required to withstand the centrifugal force [3] [5].
From perspective of winding configurations, the SPMMs can be configured by either overlapping distributed windings (DW) or overlapping/non-overlapping concentrated windings (CW). On the one hand, integral slot (IS) SPMMs, having integral slot numbers per pole per phase, could achieve high winding factors by using the DW configurations, i.e. they could achieve the largest fundamental harmonic winding factor for maximum average output torque. Since an ISDW SPMM has more sinusoidal winding magnetomotive force (MMF) waveform with fewer higher order harmonics, it is beneficial for reducing iron and PM eddy current losses [6] [7]. On the other hand, fractional slot (FS) PMMs which have fractional slot numbers per pole per phase have been extensively investigated so far. In comparison to ISDW SPMMs, FSCW SPMMs have the advantages of high power and torque densities, high efficiency, lower torque ripple, and enhanced flux weakening capability [1], [7]. Meanwhile, it has shorter endwinding and lower copper usage. However, FSCW SPMMs result in abundant sub-harmonic field contents, which lead to higher iron and PM losses [8], [9].
Furthermore, to tradeoff between the winding factor and the end-winding length, SPMMs with different coil pitches have also been investigated [10]- [13]. In [10], windings with two slot coil pitches are used for PMMs to eliminate and/or reduce undesirable space harmonics resulting from nonoverlapping FSCW. [11] and [12] investigate the feasible slot/pole number combinations of SPMMs with two slotpitch. The influences of windings with different coil pitches for flux reversal PMMs are compared in [13]. Meanwhile, [14] compares the small-size 6-slot/2-pole high-speed SPMMs with one, two, and three coil-pitch windings and underpins that the two coil-pitch winding is the promising candidate for high-speed PMMs.
Besides, vernier PM machines (VPMMs) are one of the FSCW SPMMs with special slot/pole number combinations, typically with pole number higher than slot number. VPMMs have been gaining more attention recently due to their high torque as a result of field modulation and magnetic gearing effect [15]- [18]. In [19], the FS VPMM having two slot-pitch coils is developed to improve the power factor and achieve a compromise between axial end-winding length and torque capacity. [20] proposes a general instantaneous torque equation of VPMM based on a 12-slot/22-pole ISDW VPMM, which further compares with a 12-slot/10-pole FSCW SPMM in terms of torque and torque density. In [21], the variations of the optimized geometric parameters for VPMMs with different pole ratios and winding pole numbers are presented. The torque production mechanism of VPMMs with different pole ratios and winding pole numbers is investigated and analyzed in [22] [23], but the influences of end-winding on torque density, efficiency, and power factor are not analyzed. Likewise, it has been demonstrated in [24] that the ISDW VPMMs have higher torque per machine volume than the FSCW VPMMs, while the volume of end-winding is also not considered.
To date, there are few papers systematically evaluating the electromagnetic performances of SPMMs considering the influences of end-winding in terms of machine optimization, machine volume, end-winding flux leakage, and power factor simultaneously. Furthermore, in some applications with limited space, torque density is one of the most critical machine design parameters, which is strongly affected by the axial end-winding length. To address this issue, based on the common 12-slot ISDW and FSCW SPMMs (including VPMMs), this paper comprehensively evaluates the influences of different slot/pole number combinations and winding configurations, as well as iron loss at different speeds, on their electromagnetic performances, with particular concerns on the effects of end-winding on torque, torque density, efficiency, inductance, and power factor. Overall, this paper attempts to present a complete procedure from various machine optimization scenarios to electromagnetic performance analysis and provides a comprehensive and informative reference regarding end-winding effects on SPMMs.
This paper is organized as follows. The machine topologies, corresponding electromotive force (EMF) phasor diagrams, and winding connections of the six SPMMs with 12-slot and different pole numbers, i.e. 12-slot/8-pole (12s/8p), 12slot/10-pole (12s/10p), and 12-slot/14-pole (12s/14p) FSCW SPMMs, and 12-slot/4-pole (12s/4p), 12-slot/20-pole (12s/20p), and 12-slot/22-pole (12s/22p) ISDW SPMMs, are presented in Section II. Then, the six machines are optimized for maximum average torque with the same active lamination stack length with different coil pitches under fixed 40 W copper loss neglecting end-winding by using genetic algorithm (GA) in two-dimensional finite element analysis (FEA) in Section Ⅲ. The end-winding is taken into consideration during the optimization to determine optimal coil pitches in Section IV and the electromagnetic performances are compared. The effect of stack length on torque and torque density of SPMMs is analyzed in Section V. The SPMMs with optimal coil pitches are further optimized under different speeds with fixed 40 W copper and iron losses in Section Ⅵ. The efficiencies are compared in Section Ⅶ while the power factors and inductances are compared in Section Ⅷ. This paper is concluded in Section Ⅸ.
II. MACHINE TOPOLOGIES
FSCW and ISDW can be implemented in SPMMs. Typical examples of cross-sections of the SPMMs with different slot/pole number combinations and winding configurations are illustrated in FIGURE 1. The SPMMs with different coil pitches, i.e. 1, 2, and 3 slot pitches, will be investigated in Section III.
The number of slots per pole per phase (q) is an integer in ISDW machines while a fraction in FSCW machines according to (1). 2 s N q= mp (1) where N s is the slot number, p is the number of pole pairs, and m is the phase number. It should be noted that for 12s/20p and 12s/22p, the number of pole pairs in (1) are 2 and 1, respectively, which are the numbers of pole pairs of the armature windings. Therefore, for 12s/8p, 12s/10p, and 12s/14p FSCW SPMMs, the corresponding q equals 1/2, 2/5, and 2/7, respectively. For 12s/4p, 12s/20p, and 12s/22p ISDW SPMMs, the corresponding q equals 1, 1, and 2, respectively.
The star-of-slot theory [25] can be used to obtain the winding layout by the phasors of coil EMFs. The coil EMF phasors of the six SPMMs are shown in FIGURE 2. For 12s/8p, the coils of one phase locate in separated slots. For 12s/10p, the phase consists of two adjacent coils with different directions and two opposite coils while the phase sequence for 12s/14p swaps compared with that for 12s/10p. The coil locations of 12s/4p and 12s/20p are the same because their pole numbers of stator armature windings are 4. For 12s/22p, the phase consists of two adjacent coils with same direction and two opposite coils.
III. OPTIMAL COIL PITCH FOR MAXIMUM TORQUE
The typical winding connections for the six machines have been presented in Section II. Furthermore, the winding connections for machines can be varied with different coil pitches, leading to different winding factors. In this Section, the six SPMMs are optimized for maximum average torque with the same active lamination stack length and different coil pitches to investigate the influence of coil pitch and winding factor on the performance of SPMMs by using GA in FEA.
During the optimization, the stator outer diameter (d so ), the lamination stack length (l fe ), the air-gap length (δ), the shaft diameter (d sh ), the PM volume(v pm ), and the pole arc (α p ) are fixed, as listed in Table Ⅰ, while the stator inner diameter (d si ), the thickness of stator yoke (h y ), the width of stator tooth (w t ), and the stator slot opening (b so ) will be globally optimized under fixed 40 W copper loss in the active part of winding by using GA in FEA. The thickness of the PM (h pm ) is determined by the stator inner diameter since the volume of PMs is fixed during the optimization.
The dimensional variables during the optimization are shown in FIGURE 3. Windings with different coil pitches are illustrated in FIGURE 4. When coil pitch equals 1, the coils of one phase are concentrated and wound on one tooth, and thus, there is no overlapping end-winding part. Otherwise, there may be overlapping parts when coil pitch is larger than 1. It is obvious that the end-winding of non-overlapping FSCW is much shorter than that of overlapping winding. Taking 12s/4p SPMM as an example, windings with different coil pitches are shown in FIGURE 5. When coil pitch equals 3, ISDW are implemented as shown in FIGURE 1 (a). The torques of all optimized six SPMMs with different coil pitches are shown in FIGURE 6 (a) and the corresponding winding factors are shown in FIGURE 6 (b). When the SPMMs are optimized for maximum average torque under fixed 40 W copper loss neglecting end-winding, the optimized average torques almost show the same trend with winding factors.
For 12s/4p SPMM, the optimal coil pitch is 3 due to its highest winding factor. For 12s/8p/10p/14p SPMMs, the optimal coil pitch is 1. For 12s/20p and 12s/22p SPMMs which are vernier machines, the optimal coil pitch is 3 and 6, respectively, due to their highest winding factors and field modulation effect [18]. FIGURE 6 (c) shows the ratio between average torque and winding factor. The ratio increases significantly with pole number because the magnetic gearing effect increases with the pole number [18], as shown in FIGURE 7, where T U is the torque component produced by the principle of the conventional electrical machine while T M is the torque component produced by the principle of the magnetic gearing effect. Meanwhile, the ratio of torque produced by the magnetic gearing effect to the total torque is defined by: The optimal torques and coil pitches for all SPMMs are shown in FIGURE 8. According to the optimization results, the optimal coil pitches for 12s/8p, 12s/10p, and 12s/14p SPMMs for maximum average torque are 1 due to the largest winding factor. For 12s/4p, 12s/20p, and 12s/22p SPMMs, the optimal coil pitches are 3, 3, and 6, respectively.
IV. GLOBAL OPTIMIZATION CONSIDERING END-WINDING A. OPTIMAL COIL PITCH CONSIDERING END-WINDING
The coil pitch not only affects winding factor, but also leads to different end-winding lengths. The end-winding length of ISDW is much longer than that of FSCW, and thus, for fair comparison, the copper loss of end-winding should be taken into consideration during optimization. The end-winding will increase phase resistance and axial length of the machine, which will decrease the torque density. In this Section, the SPMMs are optimized for maximum average torque under fixed 40 W copper loss considering end-winding. Other constrains are the same as Section Ⅲ . The modelling of endwinding and calculation of end-winding length are shown in Appendix A, and the experimental results of optimized machines are presented in Appendix B. At first, the six machines with different coil pitches are optimized for maximum average torque under fixed 40 W copper loss considering end-winding. The optimization results are shown in FIGURE 9. Compared with FIGURE 6 (a), the optimized average torques considering end-winding decrease, especially for overlapping windings. When coil pitch equals 1, the end-winding length of FSCW is relatively short, and thus, the torque decreases slightly. The torques reduce more obviously as the coil pitches increase. According to the optimization results, the optimal coil pitches for the six SPMMs with fixed 40 W copper loss and 50 mm stack length are selected. For 12s/4p SPMM, the optimal coil pitch for maximum average torque under fixed 40 W copper loss considering end-winding is 3, due to its highest winding factor even though it has the longest end-winding. When coil pitch is 2, the end-winding length will be shorter, but the lower winding factor (0.866) cannot guarantee higher torque. For 12s/8p, 12s/10p, and 12s/14p SPMMs, the optimal coil pitch is 1, due to their shorter end-winding length and high winding factor. For 12s/20p SPM vernier machine, the optimal coil pitch is 3, due to its highest winding factor and field modulation effect. For 12s/22p SPMs, when coil pitch equals 4, 5, and 6, the winding factors are 0.837, 0.933, and 0.966, respectively. The corresponding average torques are 5.40 Nm, 5.70 Nm, and 5.69 Nm. Thus, the optimal can be either 5 or 6. In this paper, 6 is selected for 12s/22p.
The optimized average torques of all SPMMs with respective optimal coil pitch considering end-winding and those neglecting end-winding are compared in FIGURE 10.
B. COMPARISON OF OPTIMIZATION RESULTS
The optimized parameters of all six machines with the same active lamination stack length under fixed 40 W copper loss considering end-winding with optimal coil pitch are shown in Table Ⅱ compares the optimized average torques, torque densities, and dimensional parameters. The optimized average torques under 40 W copper loss considering endwinding increase with pole number. However, for ISDW vernier machines, the axial length of end-winding is much longer than FSCW SPMMs, the axial end-winding length and torque density are compared in FIGURE 14 (b). The torque densities of 12s/8p, 12s/10p, and 12s/14p FSCW SPMMs are larger than those of 12s/20p and 12s/22p vernier machines, which have relatively longer axial end-windings even though vernier machines can generate higher torque for the same lamination stack length. The split ratio almost increases with pole number, while the PM thickness is inversely proportional to the split ratio as the PM volumes of all six SPMMs are fixed during optimization. For 12s/8p, 12s/10p, and 12s/14p FSCW SPMMs, the yoke thickness and tooth width decrease with the pole numbers. Therefore, the slot area and current amplitude increase with pole numbers, and the torque increases with pole numbers under fixed copper loss. The yoke thickness of 12s/22p vernier machine is larger than 12s/20p vernier machine. When the vernier machines operate on on-load condition, the armature reactions will affect the magnetic field distribution. For 12s/22p vernier machine, the stator pole number is 2 while the stator pole number is 4 for 12s/20p vernier machine, and thus, the yoke thickness of 12s/22p needs to be larger than that of 12s/20p vernier machine. The optimal slot opening increases with the rotor pole numbers except for 12s/4p SPMM. The ratio of tooth tip width and PM pole pitch increases when the pole number smaller than the slot number, while decreasing when the pole number larger than the slot number, as shown in FIGURE 14 (d). 12s/22p VPMM due to more significant magnetic saturation. In contrast, FSCW SPMMs, especially 12s/10p and 12s/14p SPMMs, have better torque output capacity at heavy load conditions.
V. EFFECT OF AXIAL LENGTH
As mentioned in the previous section, due to the axial endwinding length differences between ISDW and FSCW, the torque densities of 12s/20p and 12s/22p ISDW vernier machines are lower than those of 12s/8p, 12s/10p, and 12s/14p FSCW SPMMs even they have higher average torque. The effect of the stack length on the torque and torque density of SPMMs with different slot/pole number combinations will be investigated in this Section.
The torques and torque densities of the optimized SPMMs in Section Ⅳ with different lamination stack lengths are shown in FIGURE 16. As can be seen, the average torque increases with lamination stack length and rotor pole number. The axial endwinding length has adverse effect on torque density. Because the axial end-winding length of FSCW is relatively short compared with stack length, the torque density of FSCW SPMMs decreases with the increase of lamination stack length under fixed 40 W copper loss. However, the end-winding length of ISDW is relatively long when the lamination stack length of the machine is short, and thus, the increase of lamination stack length will eliminate the adverse effect of axial end-winding length on torque density. Therefore, the torque densities of 12s/4p, 12s/20p, and 12s/22p ISDW SPMMs will increase first while increasing the stack length with fixed 40 W copper loss. While further increasing the lamination stack length, the torque density will decrease. The torque densities of 12s/8p, 12s/10p, and 12s/14p FSCW SPMMs are higher than those of 12s/20p and 12s/22p ISDW vernier machines when the lamination stack length is short. While increasing the lamination stack length, the torque density of vernier machines will be higher.
In general, the torque densities of ISDW vernier machines are lower compared with FSCW SPMMs due to long axial end-winding length. However, the vernier machines with long lamination stack lengths can maintain high average torque and torque density simultaneously.
VI. MACHINE OPTIMIZATION CONSIDERING IRON LOSS
Iron loss varies with machine topologies and operating conditions. In some cases, higher iron loss leads to high temperature rise due to higher frequency, which should be taken into consideration during optimization [14].
The iron loss density is computed as [26]: where B m is the amplitude of the AC flux component, f is the frequency, K h is the hysteresis core loss coefficient, K ec is the eddy-current core loss coefficient, and K ex is the excess core loss coefficient. The iron loss is calculated by FEA [27], where the coefficients are 109.91W/m 3 , 0.42W/m 3 , and 4.94W/m 3 , respectively. The iron losses in the optimized SPMMs under different speeds are compared in FIGURE 17. The iron loss increases with the rotor pole number and speed. The iron losses of 12s/20p and 12s/22p vernier machines are much larger than other SPMMs. Hence, in this Section, the iron loss will be considered in the design optimization for fair comparison, especially for high speed SPMMs and SPMMs with large pole number. The six SPMMs are optimized for maximum average torque under fixed 40 W copper and iron losses accounting for end windings. As the iron loss increases with rotor rotation speed, the SPMMs are optimized under two different specific speeds to illustrate the influence of iron loss in the design optimization.
During the optimization, the fixed parameters are the same as the optimization in previous sections as shown in Table Ⅰ. Parametric analysis in FEA is used to calculate the electromagnetic performance of all SPMMs with the same active lamination stack length and with different stator inner diameter (d si ), thickness of stator yoke (h y ), width of stator tooth (w t ), slot opening (b so ), and phase current. According to the calculation, the optimal SPMMs with maximum torque are selected from those results with 40 W or less copper and iron losses.
A. LOW SPEED OPTIMIZATION (400 R/MIN)
The optimized parameters of the SPMMs optimized under fixed 40 W copper and iron losses accounting for end windings at 400 r/min are shown in Table Ⅲ. FIGURE 18 shows the optimization results compared with SPMMs optimized under 40 W copper loss. As copper loss plays a dominant role in lower speed condition in FIGURE 18 (e), the optimized dimensional parameters considering iron loss during the optimization in lower speed condition (400 r/min) almost unchanged compared with those optimized under fixed 40 W copper loss, such as split ratio and yoke thickness. The optimized torque and phase current decrease slightly as the total copper loss decreases slightly.
B. HIGH SPEED OPTIMIZATION (2000 R/MIN)
The optimized parameters of the SPMMs optimized under fixed 40 W copper and iron losses accounting for end windings and 2000 r/min are shown in Table Ⅳ. FIGURE 19 shows the optimization results between 2000 r/min and 400 r/min. When the speed increases, the iron loss increases, and the copper loss is no longer the dominant loss especially when the pole number increases, as shown in FIGURE 19 (f). Therefore, the phase current, torque, and torque density decrease. FIGURE 19 (b) shows that the torque reduction ratio between lower speed and higher speed increases with the pole number because iron loss increases with the pole number under the same speed. In general, vernier machines have higher torque output capacity [15]- [20]. However, the optimized average torque of 12s/20p vernier machine under 40 W copper and iron losses is lower than that of 12s/10p and 12s/14p SPMMs due to its higher iron loss. The optimized dimensional parameters considering iron loss during the optimization in higher speed condition (2000 r/min) almost unchanged compared with those optimized results in lower speed condition (400 r/min), such as split ratio and yoke thickness.
A. SPMMS OPTIMIZED WITH 40 W COPPER LOSS
As shown in the previous sections, the average torque and iron loss increase with rotor pole number. The PM loss also varies with rotor pole number as shown in FIGURE 20. Therefore, the efficiencies of the optimized machines are compared in this Section.
The efficiency η is calculated by where P out is the output power, P Cu is the copper loss, P S is the stator iron loss, P R is the rotor iron loss, and P PM is the PM loss.
B. SPMMS OPTIMIZED WITH 40 W COPPER AND IRON LOSSES
As the parameters of the SPMMs optimized with fixed 40 W copper and iron losses at lower speed condition are almost the same with those of under 40 W copper loss, the efficiencies of SPMMs optimized with fixed 40 W copper and iron losses at higher speed condition are shown in FIGURE 23. The efficiencies of SPMMs optimized considering iron loss are almost the same with those of optimized with fixed 40 W copper loss.
A. INDUCTANCE
Winding inductances have significant effect on the electromagnetic performances of SPMMs. The phase inductance includes synchronous inductance, harmonic leakage inductance, slot leakage inductance, and end-leakage inductance [28] [29]. The differences of end-winding length and disposition between FSCW and ISDW lead to different end-winding leakage inductances. However, end-leakage inductance cannot be calculated in 2D FEA, and thus, the simplified calculation method for end-winding leakage inductance in [29] is used. (5) where n cond is the number of conductors per slot, n coils is the number of coils per phase, K w1 is the winding factor, and l end,avg is the average end-turn length.
The d-axis inductances (L d ) of optimized SPMMs with fixed 40 W copper loss and 40 W copper and iron losses at 2000 r/min are similar, as shown in FIGURE 25 (a). The comparison between d-axis inductances of SPMMs optimized with fixed 40 W copper loss neglecting and considering endleakage inductance is also shown in FIGURE 25 (b). L d of ISDW SPMMs is much higher than those of FSCW SPMMS. As ISDW SPMMs have longer end-winding and overlapping part, the end-winding leakage inductance cannot be ignored for accurate calculation.
B. POWER FACTOR
The power factor is calculated by (6) where E 1 is the no-load back EMF, X q is the q-axis inductive reactance, and I q is the q-axis current. In SPMMs, the d-axis and q-axis inductances (and reactances) are similar.
As the power factor increases with the decrease of the qaxis current, the q-axis currents of SPMMs optimized with fixed 40 W copper and iron losses at 2000 r/min are adjusted to the same values of currents of SPMMs optimized with fixed 40 W copper loss for fair comparison. The power factors neglecting end-winding leakage inductance of SPMMs optimized with fixed 40 W copper and iron losses at higher speed condition are shown in It can be seen from (6) that the q-axis inductance has large impact on the power factor. FIGURE 27 compares the power factors of SPMMs optimized with fixed 40 W copper loss considering end-winding or not. The end-winding leakage inductance will decrease the power factor, especially for 12s/20p and 12s/22p ISDW VPMMs.
IX. CONCLUSION
In this paper, three FSCW and three ISDW SPMMs with 12-slots but different slot/pole number combinations are optimized and compared. The effects of coil pitch and endwinding on torque, torque density, efficiency, winding inductance, and power factor of SPMMs with different slot/pole number combinations are investigated.
These machines are optimized with the same axial lamination stack length for maximum average torque under fixed 40 W loss for various slot/pole number combinations and different coil pitches, and under various scenarios, i.e. without/with considering end-winding length and/or iron loss, as well as different lamination stack lengths. The optimal coil pitch and winding factor for each slot/pole number combination have been determined for maximum average torque.
It shows that the optimized average torques almost show the same trend with winding factors. The end-winding will decrease the optimized average torque, torque density, and power factor, especially for ISDW SPMMs. As copper loss plays a dominant role at lower speed condition, the optimization results considering iron loss at lower speed condition are almost unchanged. However, the optimized torques reduce obviously while considering iron loss at higher speed, especially for SPMMs with higher rotor pole number. Considering iron loss in machine optimization helps to increase efficiency. In general, the torque densities and power factors of ISDW vernier machines are lower compared with FSCW SPMMs due to long axial end-winding length. However, with long lamination stack length, the vernier machines can maintain high average torque and torque density simultaneously.
Currently, the investigations are being extended to the consequent pole PM machines with different slot/pole number combinations and winding configurations, and the results will be presented in a future paper.
A. MODELLING OF END-WINDING
FSCW and ISDW have different end-winding structures. In general, the length of end-winding of FSCW is much shorter than that of ISDW. Since end-winding structure is quite complicated and difficult to model accurately, there are several simplified models to calculate the length of endwinding [14], [29], [31]- [36]. The model in [33], [35] uses several serially connected straight lines to model the endwinding in FIGURE 28 (a), which is often used for large AC machines. In most cases, the end-winding is assumed to be circular or semi-circular in FIGURE 28 (b) [29], [34], especially in FSCW machines. In [32], [36], quarter-circles and straight lines are used to model the end-winding in FIGURE 28 (c), whose shape is much closer to the stack. For double-layer FSCW, the coil center locates in the quadrant of the slot as shown in FIGURE 29, while for single layer ISDW, the coil center locates in the center of the slot.
In this paper, two quadrants, two end-winding extensions, and an arc are used to model the half turn of the end-winding, which is shown in FIGURE 28 (c). The corresponding lengths of winding per turn can be calculated by where n is the number of the overlapping part, 1 for 12s/20p and 2 for 12s/22p.
B. EXPERIMENT VERIFICATION OF FEA RESULTS
In this Section, 12s/14p FSCW and 12s/22p ISDW SPMMs are prototyped and tested [17]. The main dimensional parameters of the prototypes are given in Table Ⅴ. The prototypes and test rig are shown in FIGURE 31. The back EMFs and output torques are tested to validate the results calculated by FEA, as shown in FIGUREs 32 33, respectively. The static torques of the prototypes are measured [37] by applying DC current to phase A in series connection to the parallel phase B and phase C (I a =−2I b =−2I c ). The static torques within 0-180 electric degrees of the prototypes are measured by rotating the stator housings over 180 electric degrees with fixed steps. The measured results are given and compared with the 2D FEA results in FIGURE 33. The back EMFs and torques calculated by FEA show good agreements with the measured results. | 6,792.8 | 2022-01-01T00:00:00.000 | [
"Engineering"
] |
Sequence-to-Nuggets: Nested Entity Mention Detection via Anchor-Region Networks
Sequential labeling-based NER approaches restrict each word belonging to at most one entity mention, which will face a serious problem when recognizing nested entity mentions. In this paper, we propose to resolve this problem by modeling and leveraging the head-driven phrase structures of entity mentions, i.e., although a mention can nest other mentions, they will not share the same head word. Specifically, we propose Anchor-Region Networks (ARNs), a sequence-to-nuggets architecture for nested mention detection. ARNs first identify anchor words (i.e., possible head words) of all mentions, and then recognize the mention boundaries for each anchor word by exploiting regular phrase structures. Furthermore, we also design Bag Loss, an objective function which can train ARNs in an end-to-end manner without using any anchor word annotation. Experiments show that ARNs achieve the state-of-the-art performance on three standard nested entity mention detection benchmarks.
Introduction
Named entity recognition (NER), or more generally entity mention detection 1 , aims to identify text spans pertaining to specific entity types such as Person, Organization and Location. NER is a fundamental task of information extraction which enables many downstream NLP applications, such as relation extraction (GuoDong et al., 2005;Mintz et al., 2009), event extraction (Ji and Grishman, 2008;Li et al., 2013) and machine reading comprehension (Rajpurkar et al., 2016;. Previous approaches (Zhou and Su, 2002;Chieu and Ng, 2002;Bender et al., 2003;Settles, 2004;Figure 1: An example of nested entity mentions. Due to the nested structure, "the","department","of" and "education" belong to both PER and ORG mentions. Lample et al., 2016) commonly regard NER as a sequential labeling task, which generate label sequence for each sentence by assigning one label to each token. These approaches commonly restrict each token belonging to at most one entity mention and, unfortunately, will face a serious problem when recognizing nested entity mentions, where one token may belong to multiple mentions. For example in Figure 1, an Organization entity mention "the department of education" is nested in another Person entity mention "the minister of the department of education". Nested entity mentions are very common. For instance, in the well-known ACE2005 and RichERE datasets, more than 20% of entity mentions are nested in other mentions. Therefore, it is critical to consider nested mentions for real-world applications and downstream tasks.
In this paper, we propose a sequence-to-nuggets approach, named as Anchor-Region Networks (ARNs), which can effectively detect all entity mentions by modeling and exploiting the headdriven phrase structures (Pollard and Sag, 1994;Collins, 2003) of them. ARNs originate from two observations. First, although an entity mention can nest other mentions, they will not share the same head word. And the head word of a mention can provide strong semantic evidence for its entity type (Choi et al., 2018). For example in Figure 1, although the ORG mention is nested in the PER mention, they have different head words "department" and "minister" respectively, and these head words strongly indicate their corresponding entity types to be ORG and PER. Second, entity men- Figure 2: The overall architecture of ARNs. Here "minister" and "department" are detected anchor words for two mentions respectively.
tions mostly have regular phrase structures. For the two mentions in Figure 1, they share the same "DET NN of NP" structure, where the NN after DET are their head words. Based on above observations, entity mentions can be naturally detected in a sequence-to-nuggets manner by 1) identifying the head words of all mentions in a sentence; and 2) recognizing entire mention nuggets centered at detected head words by exploiting regular phrase structures of entity mentions.
To this end, we propose ARNs, a new neural network-based approach for nested mention detection. Figure 2 shows the architecture of ARNs. First, ARNs employs an anchor detector network to identify whether each word is a head word of an entity mention, and we refer the detected words as anchor words. After that, a region recognizer network is used to determine the mention boundaries centering at each anchor word. By effectively capturing head-driven phrase structures of entity mentions, the proposed ARNs can naturally address the nested mention problem because different mentions have different anchor words, and different anchor words correspond to different mention nuggets.
Furthermore, because the majority of NER datasets are not annotated with head words, they cannot be directly used to train our anchor detector. To address this issue, we propose Bag Loss, an objective function which can be used to train ARNs in an end-to-end manner without any anchor word annotation. Specifically, our Bag Loss is based on at-least-one assumption, i.e., each mention should have at least one anchor word, and the anchor word should strongly indicate its entity type. Based on this assumption, Bag Loss can automatically select the best anchor word within each mention during training, according to the association between words and the entity type of the mention. For example, given an ORG training instance "the department of education", Bag Loss will select "department" as the anchor word of this mention based on its tight correlation with type ORG. While other words in the mention, such as "the" and "of", will not be regarded as anchor words, because of their weak association with ORG type.
We conducted experiments on three standard nested entity mention detection benchmarks, including ACE2005, GENIA and TAC-KBP2017 datasets. Experiments show that ARNs can effectively detect nested entity mentions and achieve the state-of-the-art performance on all above three datasets. For better reproduction, we openly release the entire project at github.com/ sanmusunrise/ARNs.
Generally, our main contributions are: • We propose a new neural network architecture named as Anchor-Region Networks. By effectively modeling and leveraging the headdriven phrase structures of entity mentions, ARNs can naturally handle the nested mention detection problem and achieve the stateof-the-art performance on three benchmarks. To the best of our knowledge, this is the first work which attempts to exploit the headdriven phrase structures for nested NER. • We design an objective function, named as Bag Loss. By exploiting the association between words and entity types, Bag Loss can effectively learn ARNs in an end-to-end manner, without using any anchor word annotation. • Head-driven phrase structures are widely spread in natural language. This paper proposes an effective neural network-based solution for exploiting this structure, which can potentially benefit many NLP tasks, such as semantic role labeling (Zhou and Xu, 2015;He et al., 2017) and event extraction (Chen et al., 2015;Lin et al., 2018).
Related Work
Nested mention detection requires to identify all entity mentions in texts, rather than only outmost mentions in conventional NER. This raises a critical issue to traditional sequential labeling models because they can only assign one label to each token. To address this issue, mainly two kinds of methods have been proposed.
Region-based approaches detect mentions by identifying over subsequences of a sentence respectively, and nested mentions can be detected because they correspond to different subsequences. For this, Finkel and Manning (2009) regarded nodes of parsing trees as candidate subsequences. Recently, Xu et al. (2017) and Sohrab and Miwa (2018) tried to directly classify over all subsequences of a sentence. Besides, proposed a transition-based method to construct nested mentions via a sequence of specially designed actions. Generally, these approaches are straightforward for nested mention detection, but mostly with high computational cost as they need to classify over almost all sentence subsequences.
Schema-based approaches address nested mentions by designing more expressive tagging schemas, rather than changing tagging units. One representative direction is hypergraph-based methods (Lu and Roth, 2015;Katiyar and Cardie, 2018;, where hypergraphbased tags are used to ensure nested mentions can be recovered from word-level tags. Besides, Muis and Lu (2017) developed a gap-based tagging schema to capture nested structures. However, these schemas should be designed very carefully to prevent spurious structures and structural ambiguity . But more expressive, unambiguous schemas will inevitably lead to higher time complexity during both training and decoding.
Different from previous methods, this paper proposes a new architecture to address nested mention detection. Compared with region-based approaches, our ARNs detect mentions by exploiting head-driven phrase structures, rather than exhaustive classifying over subsequences. Therefore ARNs can significantly reduce the size of candidate mentions and lead to much lower time complexity. Compared with schema-based approaches, ARNs can naturally address nested mentions since different mentions will have different anchor words. There is no need to design complex tagging schemas, no spurious structures and no structural ambiguity.
Furthermore, we also propose Bag Loss, which can train ARNs in an end-to-end manner without any anchor word annotation. The design of Bag Loss is partially inspired by multi-instance learning (MIL) (Zhou and Zhang, 2007;Zhou et al., 2009;Surdeanu et al., 2012), but with a different target. MIL aims to predict a unified label of a bag of instances, while Bag Loss is proposed to train ARNs whose anchor detector is required to predict the label of each instance. Therefore previous MIL methods are not suitable for training ARNs.
Anchor-Region Networks for Nested Entity Mention Detection
Given a sentence, Anchor-Region Networks detect all entity mentions in a two-step paradigm. First, an anchor detector network identifies anchor words and classifies them into their corresponding entity types. After that, a region recognizer network is applied to recognize the entire mention nugget centering at each anchor word. In this way, ARNs can effectively model and exploit head-driven phrase structures of entity mentions: the anchor detector for recognizing possible head words and the region recognizer for capturing phrase structures. These two modules are jointly trained using the proposed Bag Loss, which learns ARNs in an end-to-end manner without using any anchor word annotation. This section will describe the architecture of ARNs. And Bag Loss will be introduced in the next section.
Anchor Detector
An anchor detector is a word-wise classifier, which identifies whether a word is an anchor word of an entity mention of specific types. For the example in Figure 1, the anchor detector should identify that "minister" is an anchor word of a PER mention and "department" is an anchor word of an ORG mention. Formally, given a sentence x 1 , x 2 , ..., x n , all words are first mapped to a sequence of word representations x 1 , x 2 , ..., x n where x i is a combination of word embedding, part-of-speech embedding and character-based representation of word x i following Lample et al. (2016). Then we obtain a context-aware representation h A i of each word x i using a bidirectional LSTM layer: The learned representation h A i is then fed into a multi-layer perceptron(MLP) classifier, which computes the scores O A i of the word x i being an anchor word of specific entity types (or NIL if this word is not an anchor word): where O A i ∈ R |C| and |C| is the number of entity types plus one NIL class. Finally a softmax layer is used to normalize O A i to probabilities: is the probability of word x i being an anchor word of class c j . Note that because different mentions will not share the same anchor word, the anchor detector can naturally solve nested mention detection problem by recognizing different anchor words for different mentions.
Region Recognizer
Given an anchor word, ARNs will determine its exact mention nugget using a region recognizer network. For the example in Figure 1, the region recognizer will recognize that "the minister of the department of education" is the mention nugget for anchor word "minister" and "the department of education" is the mention nugget for anchor word "department". Inspired by the recent success of pointer networks (Vinyals et al., 2015;Wang and Jiang, 2016), this paper designs a pointer-based architecture to recognize the mention boundaries centering at an anchor word. That is, our region recognizer will detect the mention nugget "the department of education" for anchor word "department" by recognizing "the" to be the left boundary and "education" to be the right boundary.
Similar to the anchor detector, a bidirectional LSTM layer is first applied to obtain the contextaware representation h R i of word x i . For recognizing mention boundaries, local features commonly play essential roles. For instance, a noun before a verb is an informative boundary indicator for entity mentions. To capture such local features, we further introduce a convolutional layer upon h R i : where h R i−k:i+k is the concatenation of vectors from h R i−k to h R i+k , W and b are the convolutional kernel and the bias term respectively. k is the (one-side) window size of convolutional layer. Finally, for each anchor word x i , we compute its left mention boundary score L ij and right mention boundary score R ij at word x j by In the above two equations, the first term within the tanh function computes the score of word x j serving as the left/right boundary of a mention centering at word x i . And the second term models the possibility of word x j itself serving as the boundary universally. After that, we select the best left boundary word x j and best right boundary word x k for anchor word x i , and the nugget {x j , ..., x i , ..., x k } will be a recognized mention.
Model Learning with Bag Loss
This section describes how to train ARNs using existing NER datasets. The main challenge here is that current NER corpus are not annotated with anchor words of entity mentions, and therefore they cannot be directly used to train the anchor detector. To address this problem, we propose Bag Loss, an objective function which can effectively learn ARNs in an end-to-end manner, without using any anchor word annotation. Intuitively, one naive solution is to regard all words in a mention as its anchor words. However, this naive solution will inevitably result in two severe problems. First, a word may belong to different mentions when nested mentions exist. Therefore this naive solution will lead to ambiguous and noisy anchor words. For the example in Figure 1, it is unreasonable to annotate the word "department" as an anchor word of both PER and ORG mentions, because it has little association to PER type although the PER mention also contains it. Second, many words in a mention are just function words, which are not associated with its entity type. For example, words "the","of" and "education" in "the department of education" are not associated with its type ORG. Therefore annotating them as anchor words of the ORG mention will introduce remarkable noise.
To resolve the first problem, we observe that a word can only be the anchor word of the innermost mention containing it. This is because a mention nested in another mention can be regarded as a replaceable component, and changing it will not affect the structure of outer mentions. For the case in Figure 1, if we replace the nested mention "the department of education" by other ORG mention(e.g., changing it to "State"), the type of the Figure 3: An illustration of bags. B i represents the bag where word x i is in. This sentence forms five bags, two of which correspond to two entity mentions and three of which correspond to NIL.
outer mention will not change. Therefore, words in a nested mention should not be regarded as the anchor word of outer mentions, and therefore a word can only be assigned as the anchor word of the innermost mention containing it.
To address the second problem, we design Bag Loss based on the at-least-one assumption, i.e., for each mention at least one word should be regarded as its anchor word. Specifically, we refer to all words belonging to the same innermost mention as a bag. And the type of the bag is the type of that innermost mention. For example, in Figure 3,{the, minister, of} will form a PER bag, and {the, department, of education} will form an ORG bag. Besides, each word not covered by any mention will form a one-word bag with NIL type. So there are three NIL bags in Figure 3, including {convened}, {a} and {meeting}.
Given a bag, Bag Loss will make sure that at least one word in each bag will be selected as its anchor word, and be assigned to the bag type. While other words in that bag will be classified into either the bag type or NIL. Bag Loss selects anchor words according to their associations with the bag type. That is, only words highly related to the bag type (e.g., "department" in "the department of education") will be trained towards the bag type, and other irrelevant words (e.g., "the" and "of" in the above example) will be trained towards NIL. Bag Loss based End-to-End Learning. For ARNs, each training instance is a tuple x = (x i , x j , x k , c i ), where x j , ..., x k is an entity mention with left boundary x j and right boundary x k . c j is its entity type and word x i is a word in this mention's bag 2 . For each instance, Bag loss considers two situations: 1) If x i is its anchor word, the loss will be the sum of the anchor detector loss (i.e., the loss of correctly classifying x i into its bag type c i ) and the region recognizer loss (i.e., the loss of correctly recognizing the mention boundary x j and x k ); 2) If x i is not its anchor word, the loss will be only the anchor detector loss (i.e., correctly classifying x i into NIL). The final loss for this instance is a weighted sum of the loss of these two situations, where the weight are determined using the association between word x i and the bag type c i compared with other words in the same bag. Formally, Bag Loss is written as: where − log P (c i |x i ) is the anchor detector loss.
is the loss for the region recognizer measuring how preciously the region recognizer can identify the boundaries centered at anchor word x i . We define L lef t (x i ; θ) using max-margin loss: where γ is a hyper-parameter representing the margin, and L right (x i ; θ) is similarly defined.
Besides, ω i in Equation (6) measures the correlation between word x i and the bag type c i . Compared with other words in the same bag, a word x i should have larger w i if it has a tighter association with the bag type. Therefore, ω i can be naturally defined as: where B i denotes the bag x i belonging to, i.e., all words that share the same innermost mention with x i . α is a hyper-parameter controlling how likely a word will be regarded as an anchor word rather than regarded as NIL. α = 0 means that all words are annotated with the bag type. And α → +∞ means that Bag Loss will only choose the word with highest P (c i |x i ) as anchor word, while all other words in the same bag will be regarded as NIL. Consequently, Bag Loss guarantees that at least one anchor word (the one with highest P (c i |x i ), and its corresponding w i will be 1.0) will be selected for each bag. For other words that are not associated with the type (the ones with low P (c i |x i )), Bag Loss can make it to automatically learn towards NIL during training.
Experimental Settings
We conducted experiments on three standard English entity mention detection benchmarks with nested mentions: ACE2005, GENIA and TAC-KBP2017 (KBP2017) datasets. For ACE2005 and GENIA, we used the same setup as previous work (Ju et al., 2018;Katiyar and Cardie, 2018 (Pennington et al., 2014) vectors 3 . Hyper-parameters are tuned on the development sets 4 apart from α in Equation (8), which will be further discussed in Section 5.4.
Baselines
We compare ARNs with following baselines 5 : • Conventional CRF models, including LSTM-CRF (Lample et al., 2016) and Multi-CRF. LSTM-CRF is a classical baseline for NER, which doesn't consider nested mentions so only outmost mentions are used for training. Multi-CRF is similar to LSTM-CRF but learns one model for each entity type, and thus is able to recognize nested mentions if they have different types. • Region-based methods, including FOFE (Xu et al., 2017), Cascaded-CRF (Ju et al., 2018) and a transition model (refered as Transition) proposed by . FOFE directly classifies over all sub-sequences of a sentence and thus all potential mentions can be considered. Cascaded-CRF uses several stacked CRF layers to recognize nested mentions at different levels. Transition constructs nested mentions through a sequence of actions. • Hypergraph-based methods, including the LSTM-Hypergraph (LH) model (Katiyar and Cardie, 2018) and the Segmental Hypergraph (SH) by . LH used an LSTM model to learn features and then decode them into a hypergraph. SH further considered the transition between labels to alleviate labeling ambiguity, which is the state-of-the-art in both ACE2005 and GENIA 6 datasets. Besides, we also compared the performance of ARNs with the best system in TAC-KBP 2017 Evaluation (Ji et al., 2017). The same as all previous studies, models are evaluated using microaveraged Precision(P), Recall(R) and F1-score. To balance time complexity and performance, proposed to restrict the maximum length of mentions to 6, which covers more than 95% mentions. So we also compared to baselines where the maximum length of mention is restricted or unrestricted. Besides, we also compared the decoding time complexity of different methods. Table 1 shows the overall results on ACE2005, GENIA and KBP2017 datasets. From this table, we can see that:
Overall Results
1) Nested mentions have a significant influence on NER performance and are required to be specially treated. Compared with LSTM-CRF and Multi-CRF baselines, all other methods dealing with nested mentions achieved significant F1-score improvements. So it is critical to take nested mentions into consideration for real-world applications and downstream tasks. (Xu et al., 2017) 76 2) Our Anchor-Region Networks can effectively resolve the nested mention detection problem, and achieved the state-of-the-art performance in all three datasets. On ACE2005 and GENIA, ARNs achieved the state-of-the-art performance on both the restricted and the unrestricted mention length settings. On KBP2017, ARNs outperform the top-1 system in the 2017 Evaluation by a large margin. This verifies the effectiveness of our new architecture.
3) By modeling and exploiting head-driven phrase structure of entity mentions, ARNs reduce the computational cost significantly. ARNs only detect nuggets centering at detected anchor words. Note that for each sentence, the number of potential anchor words k is significantly smaller than the sentence length n. Therefore the computational cost of our region recognizer is significantly lower than that of traditional regionbased methods which perform classification on all sub-sequences, as well as hypergraph-based methods which introduced structural dependencies between labels to prevent structural ambiguity . Furthermore, ARNs are highly parallelizable if we replace the BiLSTM context encoder with other parallelizable context encoder architecture (e.g., Transformer (Vaswani et al., 2017)).
Effects of Bag Loss
In this section, we investigate effects of Bag Loss by varying the values of hyper-parameter α in Equation (8) KBP2017 datasets when α varies. We can see that: 1) Bag Loss is effective for anchor word selection during training. In Figure 4, setting α to 0 significantly undermines the performance. Note that setting α to 0 is the same as ablating Bag Loss, i.e., the model will treat all words in the same innermost mention as anchor words. This result further verifies the necessity of Bag Loss. That is, because not all words in a mention are related to its type, it will introduce remarkable noise by regarding all words in mentions as anchor words.
2) Bag Loss is not sensitive to α when it is larger than a threshold. In Figure 4, our systems achieve nearly the same performance when α > 0.8. We find that this is because our model can predict anchor word in a very sharp probability distribution, so slight change of α does not make a big difference. Therefore, in all our experiments we empirically set α = 1 without special declaration. This also verified that Bag Loss can discover head-driven phrase structure steadily without using anchor word annotations.
Further Discussion on Bag Loss and Marginalization-based Loss
One possible alternative solution for Bag Loss is to regard the anchor word as a hidden variable, and obtain the likelihood of each mention by marginalizing over all words in the mention nugget with For P (x i , c), if we assume that the prior for each word being the anchor word is equal, it can be refactorized by P (xi, c) = P (c|xi)P (xi) ∝ P (c|xi).
However, we find that this approach does not work well in practice. This may because that, as we mentioned above, the prior probability of each word being the anchor word should not be equal. Words with highly semantic relatedness to the types are more likely to be the anchor word. Furthermore, this marginalization-based training object can only guarantee that words being regarded as the anchor words are trained towards the mention type, but will not encourage the other irrelevant words in the mention to be trained towards NIL. Therefore, compared with Bag Loss, the marginalization-based solution can not achieve the promising results for ARNs training.
Analysis on Anchor Words
To analyze the detected anchor words, Table 2 shows the most common anchor words for all entity types. Besides, words that frequently appear in a mention but being recognized as NIL are also presented. We can see that the top-10 anchor Figure 5: A representative error case of ARNs, where the right boundary of the PER mention is misclassified. Braces above the sentence indicate the output of ARNs, and brackets in the sentence represent the golden annotation. We find that the majority of errors occur because of the long-term dependencies stemming from postpositive attributive and attributive clauses.
words of each type are very convincing: all these words are strong indicators of their entity types. Besides, we can see that frequent NIL words in entity mentions are commonly function words, which play significant role in the structure of mention nuggets (e.g., "the" and "a" often indicates the start of an entity mention) but have little semantic association with entity types. This supports our motivation and further verifies the effectiveness of Bag Loss for anchor word selection.
Error Analysis
This section conducts error analysis on ARNs. Table 3 shows the performance gap between the anchor detector and the entire ARNs. We can see that there is still a significant performance gap from the anchor detector to entire ARNs. That is, there exist a number of mentions whose anchor words are correctly detected by the anchor detector but their boundaries are mistakenly recognized by the region recognizer. To investigate the reason behind this above performance gap, we analyze these cases and find that most of these errors stem from the existence of postpositive attributive and attributive clauses. Figure 5 shows an error case stemming from postpositive attributive. These cases are quite difficult for neural networks because long-term dependencies between clauses need to be carefully considered. One strategy to handle these cases is to introduce syntactic knowledge, which we leave as future work for improving ARNs.
Conclusions and Future Work
This paper proposes Anchor-Region networks, a sequence-to-nuggets architecture which can naturally detect nested entity mentions by modeling and exploiting head-driven phrase structures of entity mentions. Specifically, an anchor detector is first used to detect the anchor words of entity mentions and then a region recognizer is designed to recognize the mention boundaries centering at each anchor word. Furthermore, we also propose Bag Loss to train ARNs in an end-to-end manner without using any anchor word annotation. Experiments show that ARNs achieve the state-of-theart performance on all three benchmarks. As the head-driven structures are widely spread in natural language, the solution proposed in this paper can also be used for modeling and exploiting this structure in many other NLP tasks, such as semantic role labeling and event extraction. | 6,598.2 | 2019-06-10T00:00:00.000 | [
"Computer Science"
] |
Direct measurement of stationary objects’ dimensions with Michelson type incremental laser interferometer
The paper deals with a design of an original length measuring equipment for a measurement by Michelson-type laser interferometer. Laser measuring operate on the principle of relative movement of two components, where one component is fitted with an interference unit and the other one with a reflector. Such system is capable of measuring the length of displacement post motion, however, it is unable to measure the length of a stationary object. Functionality of the measuring equipment is ensured by a measuring board and a ruler. At a relatively low additional cost, the measuring configuration is capable of measuring the lengths of motionless objects achieving the precision of laser. The paper describes the measurement methodology verified in an experiment. Expanding verification tests of the device may facilitate possible commercial production of such auxiliary device for the Michelson-type laser interferometer, which would make the symposium motto “The future glimmers long before it comes to be” sound especially true.
Introduction
The laser interferometer is structurally adapted to measure the positioning accuracy of coordinate systems. The coordinate system comprises a moving part in the form of a guideway in either of the axes. The positions are programmable and the laser interferometer can verify the degree to which the programmed value has been reached [1]. Should the coordinate system lack its own measuring ruler, even then the movement of the moving part can be measured accurately.
The solution is based on a two-carrier measuring equipment for interferometer components. The carriers have a fixed arm with a three-point contact for the rulers, a spot for fitting the interferometer and the reflector, respectively. The carriers can move freely along the ruler and their function is to limit the contact of the arm with the object measured. At the end of both arms, co-linear contact sensors are placed, facing each other. All other movements related to handling the object in preparation for the measurement are performed by the object. This ensures high accuracy and reliability of measurement.
To verify the measurement methodology, a granite surface plate and a precise knife straight edge were used. The carriers and the arms were made of balsa wood. A high accuracy digital sensor was subject to verification as it served as a contact sensor.
Length measurement is principally done in many ways, ranging from utilizing mechanical systems to laser metering. The authors are not aware of any commercial application using the proposed principle IOP Publishing doi:10.1088/1742-6596/1379/1/012065 2 of measurement linked to Michelson-type laser interferometer. The experiments below confirm that this system could expand the portfolio of measuring products produced and sold by laser interferometer manufacturers and sellers, respectively.
Theory
It is commonly known that there are 21 sources of kinematic inaccuracy accompanying spatial movement of a body in three axes [2][3][4]. In linear motion, this is reduced to 9 sources of kinematic errors. When measurement is done with the arms, the primary source of errors, in accordance with the Abbe principle, is the measuring arms' length. Therefore, the most accurate part must be the guide ruler and the flatness of the base surface. The most beneficial arrangement based on this principle is when the arms are non-reconfigurable. All other moves required to measure the dimension must be done by the object. These moves are another source of errors, but they are second-order errors. These are mostly cosine errors. The nature of the measuring contact is a force which acts not only on the object but also on the end of the measuring arm. This force must be minimized to avoid arm deformation [5]. Deformation propensity increases with the increase of the arm length. On the other hand, the arm length determines the range in which the object can be measured. Ideally, a proximity sensor should be used. However, there is a risk that such sensors will not determine a clear distance from the surface due to roughness and type of the material. A compromise solution is to use a special contact sensor with a small and precisely defined contact force. For the methodology proposed, a high sensitivity sensor should be used. Linearity requirements are smaller, because repeated measurements must always be made to the same load of the sensor. However, this last requirement gives rise to special demands on positioning accuracy of the interferometer component carrier [6].
Naturally, thermomechanical sources of errors may also occur. In this respect, the proposed measurement methodology poses an advantage, because the measurement starts with the system reset. If the whole system is thermally stabilized, this error source is also minimized by resetting the system before the measurement, and the error increases only with the increased time of measurement duration. This even has an advantage over conventional CMM systems, where the temperature compensation needs to be ensured by means of a complex system design.
Measurement procedure
The first step in the measurement is resetting the measuring system. Measuring contact sensor heads at the end of the measuring arms are aligned co-linearly ( Figure 1). Resetting is done when both sensors touch by gently bringing the interferometer to the reflector. The laser system is reset and the values of the contact sensors, the accuracy of which corresponds to the accuracy required, are recorded.
The second step is the separation move of the arms, which creates space for the object to be measured in. The object is inserted into this space so that the desired dimension is measured exactly in the axis of the contact sensor.
The third step is moving the reflector arm to touch the object. At this step, the value of the contact sensor recorded at reset has to be obtained.
The fourth step is the same as the third step but the contact is achieved with the interferometer arm. When the correct contact is achieved, the laser display shows the dimension measured.
When determining a different size, the system does not need to be reset. All that needs to be done is moving the arms away from the object, and after setting a new position or inserting a new object, the procedure can be repeated from step three. If there is considerable thermal expansion between the measurements, it is advisable the system be reset before starting a new measurement.
The accuracy and reliability of the measurement can be increased by eliminating random error resulting from dynamic motion of the arms. Position verification in this case can be performed by a set of repeated micro-arm movements in the alternating direction around the contact point. The most frequent value obtained by the measurements is considered to be the correct value.
Experiments to verify contact impact during measuring
The most important part of drafting the measurement methodology is to verify the contact of the measuring arm with the object measured. This task gives rise to a lot of possible influences, which will definitely affect the design of the measuring equipment's correct rendering. Problems arise in precision positioning, in setting the arm stiffness and the in selecting the correct contact sensor.
Arrangement of the contact-measuring experiment
The highly accurate digital GT2-P12KL sensor from Keyence was used to verify the contact force. The sensor accuracy is 1 μm and its resolution is 0.1 μm. The XL80 laser from Renishaw was used as a laser system. The laser system precision is ± 0.5 μm/m. A 500 mm, DIN 874/00 knife straight edge was used to guide the carriers. The system is set on a granite measuring slab 5000x400x100 mm. Figure 2 describes the experiment arrangement. The principal arrangement is shown in figure 3. The length of the arm from the laser beam axis is 180 mm. The arm carriers have a three-point contact with the measuring board and a two-point contact with the knife ruler. Two-sided arm was used to balance the carrier stability. To eliminate the backlash in the motion along the ruler, weight on a rope transmission was applied.
Contact counterbalance spring and measuring the contact stiffness
To minimize the contact force, the sensor was counterbalanced by a preloaded spring. The magnitude of contact force dropped from about 0.14 N to 0.007 N. The principle of negative spring preload allows for a significant reduction in the contact force ( Figure 4).
Repeated contact measurements
Repeated contact measurements enable to determine the accuracy of the measurement under the given conditions. Figure 5 shows 32 repeated measurements. The biggest problem was encountered in positioning the carrier arm. Manual position tuning brings considerable scatter of measured positions to the system. For practical purposes, it is worthwhile to use a motor drive in this respect.
Based on the evaluation of deviations in the logs from two positions, the scattering of the sensor contact is about ± 9.5 μm (the confidence interval is in the 4 sigma range). This is insufficient for precise measurement. The accuracy of the contact sensor is considerably higher. The measurement error is affected mostly by dynamic phenomena occurring during changes in position. The magnitude of the contact force acting on the carrier arm is assumed to have less influence. Motorically driven micromotions can greatly improve accuracy. This has been proven by preliminary experiments measuring the contact of the carrier with the object of measurement. The contact was repeated by the contact force relief by means of motoric drive. The contact force was generated by the weight through rope transmission. The results were significantly better than those achieved by manual positioning. Figure 6 shows the results of repeated measurement of the contact of the carrier with the measured object. Only values that were repeated several times were selected to determine the actual dimension. In this case, precision is determined in the order of a tenth of a micrometer.
Design of the future device
Device design is based on experimental experience. The first proposal's idea was that the measuring arms would be adjusted during measurement. Such intervention, however, introduces a first-order error into the measurement. Therefore, the best length measurement philosophy is that the arms are not reconfigured and that all the adjustments needed to measure the size are carried out by the measured object. In this case, moving the object causes a second order error. In this respect, the measuring configuration consists of two separate parts. The first part consists of a measuring ruler with the laser interferometer carrier and the arms fitted with the contact sensors. The second part is a measuring jig handling the measured object. The measuring jig ensures that the correct position is achieved relative to the measuring arms, that the object is removed from the measuring zone after a change in the measured dimension or before the system is reset. If a rotary table is located on the measuring jig, this is an advantage. Measuring surface plate is not required for measurement purposes. Precise guideway of the carriers can be ensured by two parallel knife edge. This part requires the most accurate execution. Any deviation from linearity causes angular rotation of the measuring arm, which results in the greatest contribution to the measurement error. For this reason, it is suitable for the arms to be as short as possible. On the other hand, by shortening the arms, the measure range is limited. The compromise solution is to use IOP Publishing doi:10.1088/1742-6596/1379/1/012065 6 exchange arms of different lengths. The guideways should be in horizontal position, which is properly achieved by a three-point arrangement.
To ensure accuracy, it is also necessary to deal with the axial alignment of contact sensors. The sensors should terminate in a sphere shape. Co-ordination could be set up with a measuring instrument. A better solution is to use the laser interferometer itself.
The contact sensors usually have a high contact force. Proximity sensors are highly sensitive to the object changing the environment and the position. Therefore, it is suitable to produce a combined sensor. The sensor measuring head operates on the proximity principle. For example, the capacitive sensor achieves very high accuracy [7]. The end of the head is the contact part, which has a small stroke, very low stiffness to the axis of measurement.
Conclusion
The laser interferometer measuring equipment does not replace conventional measuring systems. The system can only be used where a Michelson laser interferometer is already available. On the other hand, the measuring equipment expands the possibilities of its use. The measurement methodology itself is progressively designed to eliminate the effect of the thermal expansion of the measuring system. This achieves a very high accuracy of measurement. The method provides the ability to measure the length dimensions of static objects at a given level of accuracy. But it is a less accurate method than the direct measurement of moving objects by laser interferometer. | 2,963.8 | 2019-11-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Cooperative Recognition of Internationally Disseminated Ceftriaxone-Resistant Neisseria gonorrhoeae Strain
Ceftriaxone remains a first-line treatment for patients infected by Neisseria gonorrhoeae in most settings. We investigated the possible spread of a ceftriaxone-resistant FC428 N. gonorrhoeae clone in Japan after recent isolation of similar strains in Denmark (GK124) and Canada (47707). We report 2 instances of the FC428 clone in Australia in heterosexual men traveling from Asia. Our bioinformatic analyses included core single-nucleotide variation phylogeny and in silico molecular typing; phylogenetic analysis showed close genetic relatedness among all 5 isolates. Results showed multilocus sequence type 1903; N. gonorrhoeae sequence typing for antimicrobial resistance (NG-STAR) 233; and harboring of mosaic penA allele encoding alterations A311V and T483S (penA-60.001), associated with ceftriaxone resistance. Our results provide further evidence of international transmission of ceftriaxone-resistant N. gonorrhoeae. We recommend increasing awareness of international spread of this drug-resistant strain, strengthening surveillance to include identifying treatment failures and contacts, and strengthening international sharing of data.
Ceftriaxone remains a first-line treatment for patients infected by Neisseria gonorrhoeae in most settings. We investigated the possible spread of a ceftriaxone-resistant FC428 N. gonorrhoeae clone in Japan after recent isolation of similar strains in Denmark (GK124) and Canada (47707). We report 2 instances of the FC428 clone in Australia in heterosexual men traveling from Asia. Our bioinformatic analyses included core single-nucleotide variation phylogeny and in silico molecular typing; phylogenetic analysis showed close genetic relatedness among all 5 isolates. Results showed multilocus sequence type 1903; N. gonorrhoeae sequence typing for antimicrobial resistance (NG-STAR) 233; and harboring of mosaic penA allele encoding alterations A311V and T483S (penA-60.001), associated with ceftriaxone resistance. Our results provide further evidence of international transmission of ceftriaxone-resistant N. gonorrhoeae. We recommend increasing awareness of international spread of this drugresistant strain, strengthening surveillance to include identifying treatment failures and contacts, and strengthening international sharing of data. C eftriaxone is among the last remaining recommended therapies for treating Neisseria gonorrhoeae infections and is used in many countries around the world as part of a dual therapy with azithromycin. Cephalosporin resistance in N. gonorrhoeae has been associated with modifications of the penA gene, which encodes penicillin-binding protein 2 (PBP2), a target for β-lactam antimicrobial drugs (1). During 2009-2015, several ceftriaxone-resistant (MIC 0.5-4 mg/L) N. gonorrhoeae strains were reported: in 2009, H041 in Japan (2); in 2010, F89 in France (3); in 2011, F89 in Spain (4); in 2013, A8806 in Australia (5); in 2014, GU140106 in Japan (6); and in 2015, FC428 and FC460 in Japan (7). However, until 2017, all of these strains were considered to have occurred sporadically because, except for limited transmission of F89 among persons in France and Spain during 2010-2011, there had been no reports of sustained transmission of these strains identified nationally or internationally. In 2017, this changed, substantiated by independent reports from Canada (8) and Denmark (9) of gonococcal isolates that had substantive similarity to the previously described FC428 strain in Japan.
The first reported case of the FC428 ceftriaxone-resistant N. gonorrhoeae strain was in Japan during January 2015 in a heterosexual man in his twenties who had urethritis (7). The FC428 isolate was resistant to ceftriaxone (MIC 0.5 mg/L), cefixime (MIC 1 mg/L), and ciprofloxacin (MIC >32 mg/L); susceptible to spectinomycin (MIC 8 mg/L) and azithromycin (MIC 0.25 mg/L); and, unlike all previously described ceftriaxone-resistant strains, a penicillinase-producing N. gonorrhoeae (PPNG; MIC ≥32 mg/L) bacterium. The patient was treated successfully with a single dose of spectinomycin 2 g intramuscularly (IM); however, a second isolate with an identical susceptibility profile (FC460) was subsequently cultured from the same patient 3 months later, suggesting reinfection by a separate contact.
In Canada, during January 2017, a gonococcal isolate (47707) (8) of similar susceptibility to the first reported case (including ceftriaxone-resistant MIC 1 mg/L and PPNG; Table 1 [10]) was isolated from a sample collected from a 23-year-old woman. This patient had no history of travel, but her male partner, who had been treated empirically and had no culture results available, reported sexual contact during travel in China and Thailand during the fall of 2016. She was successfully treated with combination therapy of a single dose each of cefixime (800 mg orally) and azithromycin (1 g orally) and an additional dose 13 days later of azithromycin (2 g orally). The strain from Denmark (GK124) was also isolated in January 2017, had a similar susceptibility profile to FC428, and was obtained from a heterosexual man in his twenties who had reported unprotected sexual contact with women from Denmark, China, and Australia (9). The patient was successfully treated with single doses of ceftriaxone (0.5 g IM) and azithromycin (2 g orally). Here, we report additional FC-428-like cases among persons in Australia, providing further evidence of the sustained international transmission of a ceftriaxoneresistant N. gonorrhoeae strain.
Methods
We confirmed N. gonorrhoeae isolates by using matrixassisted laser desorption/ionization time-of-flight mass spectrometry (Bruker Daltonics, Melbourne, Victoria, Australia; bioMérieux, Brisbane, Queensland, Australia). We determined antimicrobial susceptibilities of N. gonorrhoeae to ceftriaxone, penicillin, tetracycline, azithromycin, gentamicin, and ciprofloxacin by using Etest (bio-Mérieux) and spectinomycin by using the agar dilution method (11). We interpreted MIC on the basis of interpretive criteria from the Clinical and Laboratory and Standards Institute (12): penicillin resistance (MIC ≥2.0 mg/L); tetracycline resistance (MIC ≥2.0 mg/L); ciprofloxacin resistance (MIC ≥1.0 mg/L); and spectinomycin resistance (MIC ≥128.0 mg/L). Because the Clinical and Laboratory Standards Institute does not have an azithromycin breakpoint, and ceftriaxone breakpoints only state susceptibility (≤0.25 mg/L), we used the European Committee on Antimicrobial Susceptibility Testing (13) breakpoints for ceftriaxone resistance (MIC>0.12 mg/L) and azithromycin resistance (MIC>0.5 mg/L). β-lactamase production was analyzed by using nitrocefin (Thermo-Fisher Scientific, Melbourne, Victoria, Australia). We subcultured isolates on GC agar base with Vitox Supplement (Thermo-Fisher Scientific) and incubated for 24 h at 35°C in a 5% CO 2 atmosphere with or without antimicrobial drugs and stored in Tryptone (Thermo-Fisher Scientific) soya broth with 10% glyercol at -80°C.
Genomic Analyses
We put each isolate from Japan and Australia through DNA extraction, library preparation, and sequencing (Illumina, San Diego, CA, USA). From the strains from Japan, FC428 and FC460, we extracted DNA samples with the DNeasy Blood & Tissue Kit (QIAGEN, Tokyo, Japan). We created multiplexed libraries with Nextera XT DNA sample prep kit (Illumina) and generated paired-end 300-bp indexed reads on the Illumina MiSeq platform (Illumina) yielding 6,121,575 reads/genome and genome coverage of 845× for FC428 and 1,272,909 reads/genome and genome coverage of 845× for FC460. To analyze the strains from Australia, A7536 and A7846, we extracted DNA on the QIAsymphony SP (QIAGEN) by using the DSP DNA Mini Kit (QIAGEN). We prepared the libraries according to manufacturer instructions for the Nextera XT library preparation kit (Illumina) and sequenced on the NextSeq 500 (Illumina) by using the NextSeq 500 Mid Output V2 kit (Illumina). Sequencing generated 6,763,774 reads and genome coverage of 361× for A7536 and 3,672,072 reads and genome coverage of 202× for A7846.
We then provided sequencing data to the Canadian National Microbiology Laboratory, where bioinformatic analyses were performed as previously described (14). Quality reads were assembled by using SPAdes (15) (http://bioinf. spbau.ru/spades) and annotated with Prokka (16) (https:// github.com/tseemann/prokka), and produced an average of 86 contigs per isolate, an average contig length of 26,276 nt, and an average N50 length of 68,884 nt. Quality metrics for whole-genome sequencing (WGS) are shown in online Technical Appendix Table 1 (https://wwwnc.cdc.gov/EID/ article/24/4/17-1873-Techapp1.pdf). A core single-nucleotide variation (SNV) phylogeny was created by mapping reads to FA1090 (GenBank accession no. NC_002946.2) by using a custom Galaxy SNVPhyl workflow (17). Repetitive and highly recombinant regions with >2 SNVs per 500 nt were removed from the analysis. The percentage of valid and included positions in the core genome was 97.6%; 567 sites were used to generate the phylogeny. We used a metaalignment of informative core SNV positions to create a maximum-likelihood phylogenetic tree for A7536, A7846, FC428, FC460, and 47707 ( Figure). The H041, F89, and A8806 ceftriaxone-resistant strains (available in the World Health Organization [WHO] reference panel as WHO-X, WHO-Y, and WHO-Z, respectively) (10) were included for comparison. WGS read data for A7536, A7846, FC428, and FC460 are available under BioProject PRJNA416507, and previously reported 47707 was submitted under Bio-Project PRJNA415047 (8) .
Case Histories and Isolate Details
The first documented case-patient in Australia was a man in his forties who was visiting from the Philippines. He went to a sexual health clinic in Adelaide in April 2017 reporting urethral discharge and dysuria. He reported recent heterosexual contact with multiple female sex workers in Cambodia and the Philippines; it was unclear where the infection was acquired. An N. gonorrhoeae isolate (A7846) of similar susceptibility to FC428 (showing the characteristic ceftriaxone resistance and PPNG; Table 1) was cultured. The patient was treated with a 1-time dose combination therapy of ceftriaxone (500 mg IM) and azithromycin (1 g orally). A test result 7 days after treatment was negative for N. gonorrhoeae.
A second case-patient in Australia was a man visiting from China. He was in his early 40s and described symptoms of urethral discharge and dysuria to a general practitioner in Sydney in August 2017. He reported heterosexual contact in China, but none in Australia. An isolate (A7536) of similar susceptibility to FC428 (ceftriaxone-resistant and PPNG; Table 1) was cultured. The patient was treated with a 1-time dose combination therapy of ceftriaxone (500 mg IM) and azithromycin (1 g orally); he returned to China shortly thereafter. Attending physicians advised him to return to follow up for test of cure and to trace contacts, but follow-up was not confirmed.
Discussion
The recent reports of the N. gonorrhoeae FC428 clonal strain in Denmark, Canada, and now Australia provide new evidence that there is sustained international transmission of a ceftriaxone-resistant N. gonorrhoeae strain. This strain appears to have been circulating globally for >2 years. Thus, it is highly likely this strain is prevalent elsewhere, possibly in Asia, but undetected. There are serious gaps in N. gonorrhoeae antimicrobial resistance surveillance worldwide (21), and we estimate that samples from as few as 0.1% of the estimated 80 million cases of N. gonorrhoeae reported globally each year (22) are tested for antimicrobial resistance. Therefore, there are many opportunities for such strains to avoid detection.
Fortunately, the ceftriaxone MICs of the FC428 clonal strain remain lower than the H041 strain from Japan (MIC 2 mg/L) (2), and further, the FC428 strain does not exhibit resistance to azithromycin (Table 1). Therefore, treatment failure is arguably less likely against FC428 infections than in H041 and F89 infections, particularly when using ceftriaxone and azithromycin dual therapy; treatment failure was not observed in our study. Nevertheless, previous pharmacodynamic analyses indicate that ceftriaxone MICs of 0.5-1.0 mg/L can result in treatment failures with ceftriaxone 250 mg monotherapy and even (albeit to a lesser extent) when 1.0 g doses are used (23). As such, a dissemination of the FC428 clone could offset dual therapy guidelines because azithromycin resistance is being increasingly reported (24,25). The cases of N. gonorrhoeae described here and the circumstances under which these analyses took place are also a timely reminder of the need for international collaboration in addressing the overall N. gonorrhoeae problem and highlight the benefits of rapid access to genomic data by using electronic communications. In fact, in the absence of WGS data, it would have been very difficult to identify the links between these isolates. Not only have we been able to use these tools to readily identify the problem but we also arguably achieved identification in a sufficiently timely manner as to enable countries to put in place interventions that can limit further the spread of this strain, including intensifying follow-up and contact tracing.
Differences in extraction and sequencing procedures among the 3 countries could introduce variations in DNA concentrations that might affect the quality of the sequencing, such as number of reads and depth of coverage. This limitation was minimized because downstream processing of the data, such as assembly and reference mapping software algorithms, standardizes input data before detailed analyses of the genomes are conducted. Laboratory and epidemiologic findings are critical for surveillance that closely tracks the dissemination and emergence of epidemic antimicrobial-resistant strains and for rapid recognition and implementation of control measures to limit the expansion of clones through sexual networks. We recommend that health departments in all countries be made aware of this spreading resistant strain and strengthen N. gonorrhoeae antimicrobial-resistance monitoring, including treatment failure identification, adequate follow-up and contact tracing of cases, and STI prevention programs.
In conclusion, international collaboration based on WGS typing methods revealed the dissemination of a ceftriaxone-resistant N. gonorrhoeae in Japan, Canada, and Australia. Sustained transmission spanning 2 years suggests unidentified cases are likely present in other locations. These findings warrant the intensification of surveillance strategies and establishment of collaborations with other countries to monitor spread and inform national and global policies and actions. | 2,987.6 | 2018-04-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Adoption of principle-based IFRS and intercompany comparability of operating performance Adoption of principle-basedIFRS
Purpose – The study aims to investigate whether the adoption of IFRS could ensure ultimate intercompany comparability of operating performance in terms of uniformity in the application of accounting methods and reporting style. Design/methodology/approach – Using content analysis on 125 annual financial statements of 25 companiesfromfiveindustrieslistedontheDhakaStockExchangeinBangladesh,thisstudyreportsthatonlythesoleadoptionandapplicationofprinciplebasedIFRScannotensureultimateintercompanycomparabilityoffinancialreports. Findings – ThefindingsdocumentthattheadoptionofIFRScannotensuretheapplicationofsameaccounting methods as well as way of presentations which is a precondition of greater comparability of operating performance of competitive firms. The methodological and reporting direction through local regulatory agencies alongside maximum compliance with principle based IFRS can enhance intercompany comparability of financial reports in the same industry. Originality/value – This study tries to manifest that sole adoption cum implementation of IFRS could not ensure ultimate intercompany comparability of operating performance within the same industry and urges to conduct further research to find out the ways to do so.
Introduction
Why do retail investors ignore accounting information? Blankespoor et al. (2019) concluded that the cost of monitoring and acquiring accounting information demotivates retail investors using accounting disclosures in stock trading decisions. Although the cost of acquisition and processing of accounting information could avert traders using available accounting disclosures (Bhattacharya, 2001), sometimes they are reluctant to utilize all available information (Malmendier and Shanthikumar, 2007). In the hazy environment, users need to spend much time and efforts on acquisition, processing and analysis of accounting disclosures (Francis and Schipper, 1999;Ely and Waymire, 1999;Hope, 2003). Investors sometimes also depend on nonaccounting information while they doubt higher uncertainty on accounting disclosure (Amir and Lev, 1996). Financial statements having greater comparability enhance value relevance in stock trading and facilitate investors to gather and analyze information at a lower cost (De Franco et al., 2011;Kim et al., 2013). Sunder (2002) exposed that the financial reporting quality of firms depends to a greater extent on two essential variables, namely consistency of accounting methods and comparability of financial reports. The greater comparability of accounting information by increasing the quality of financial reporting can reduce the cost of evaluating alternative investment opportunities (Barth, 2013).
Adoption and compliance with International Financial Reporting Standards (IFRS) and Generally Accepted Accounting Principles (GAAP) in a country theoretically enhances the quality of financial reporting as well as enlarges the uniform accounting practices across the world. Li et al. (2017) concluded that adoption of IFRS enhances the capability of accounting information to predict future earnings and cash flows. But Capkun et al. (2016) concluded that greater flexibility having unclear guidance of implementation in IFRS initiated increased earning management in financial reports. Donelson et al. (2012) concluded that the rulesbased accounting standards have less chance to litigation for malpractice because the rulesbased accounting standards require more detailed guidelines, scope expectations and significant volume of application guidance (Nelson, 2003;Schipper, 2003 andDi Piazza et al., 2006), which makes accounting standard more precise and minimizes the chance of applying professional judgments. While Schipper (2003) concluded principles-based accounting standards might enhance the companies' exposure to litigation. Harris and Muller (1999) examined the value relevance of financial reporting under IFRS and US GAAP and found that nonUS companies that employed IFRS in financial reporting with a reconciliation to US GAAP evidenced more value-relevant in the market. Barth et al. (2012) concluded value relevance and comparability of accounting reports were higher for the firms that adopted IFRS mandatorily than US GAAP-based accounting. Zarova et al. (2014) concluded that accounting harmonization aimed to set degrees of variation in accounting practices to reduce differences in financial reporting among different nations having different economic background. It is also claimed that IFRS has lost its overall international character and espoused country specific environments where it has been applied (Nobes and Parker, 2012). In Europe, Br€ uggemann et al. (2013) reported that the mandatory adoption of IFRS enhanced superior comparability of financial reporting at the international level but reduced it at the domestic economies.
Until 2012, Bangladesh adopted 29 IASs and 13 IFRSs but the quality of financial reporting is still claimed far behind to make it lucrative to its intended users. Although compliance with the prescribed principles-based IFRSs can enhance the comparability of financial reporting, two companies with thesame nature are not compelled to follow thesame accounting methodsfor their accounting practices. Consequently, these methodological diversities theoretically distort intercompany comparability of operating performance presented through accounting reports. Thisstudyaimstoinvestigatetowhatextentprinciples-basedIFRSscanguaranteeintercompany comparability of operating performance ensuring intracompany and intercompany consistency in the application of accounting methods in financial reporting. To execute our research goal, this paper is divided into five sections. In the introduction, the aim of this paper is disclosed. In the literature review section, the linkage between adoption of and compliance with IFRSs and quality of financial reports has been tried to establish and then it is turned to comparability of accounting reports. The methodologysection articulates the ways how the aim of this paper is achieved.In the findings section, we have tried to show level of intracompany consistency in the application of accounting methods and then intercompany comparability of operating performance in terms of the application of same accounting methods in financial reporting. Finally, conclusion section summarizes our findings and limitation of this study.
This study finds that sole adoption of IFRS can not ensure intercompany comparability of operating performance of financial reports because of methodological diversities in financial reporting. And, the findings will stimulate stock traders to create continuous pressure on the reporting organizations as well as on the regulatory bodies to formulate favorable relevant policies and accounting guidelines to resolve this critical issue to facilitate investors in stock trading.
Literature review
The literature shows each and every country has its own imposed specified rules, regulations and techniques in preparing financial reports (De Franco et al., 2011). However, the adoption and implementation of international accounting standards has been increasing day by day due to its impacts on improved quality of financial reporting. Peña and Franco (2017) concluded that the adopting IFRS in the UK and France reported significant improvement in quality of financial reporting in the UK but not in France. This is consistent with that of Kim et al. (2012) except that mandatory adoption of IFRS increased the complexity of audit work and the cost of audit fees. Lourenço et al. (2015) concluded that adoption of IFRS generally had a positive impact on quality financial reporting, the capital market, analysts' ability to predict, comparability and information usages; however, the intensity of these impacts were subject to some factors including country's enforcement level and nature of companies. Bassemir and Novotny-Farkas (2018) reported increased earning quality in the firms adopting IFRSs in Germany along with disclosing significantly more information in their financial reports and tending to voluntarily publish their financial reports on their corporate website. Turki et al. (2016) found improved information contents of earnings after the IFRS mandatory adoption and the improvement was reflected in reduced cost of capital as well as in error and dispersion of financial analysts' forecasting. Jermakowicz (2004) concluded that adoption of IFRS in Belgium dramatically changed external reporting activities of the companies and increased the comparability and levels of transparency of financial reporting. Brochet et al. (2013) exposed that adoption of IFRS decreased information asymmetries and increased the firm-level comparability in financial reporting in the UK Wang (2011) also revealed that IFRS adoption increased cross-country information transfer by improving its comparability. The adoption of unified financial reporting standards should enhance superior comparability and transparency of financial reports and reduce information asymmetry among the stakeholders by improving information quality (Thorell and Whittington, 1994). The comparability of financial information of a firm with its industry peers helps retail investors better assess its competitive benefits or drawbacks for specific investments (Ozkan et al., 2012;Young and Zeng, 2015). In a study over 17 European countries, Yip and Young (2012) concluded that adoption of IFRS increased accounting comparability in terms of similarity of accounting practices, degree of information transfer and earning and book value reporting. The higher accounting comparability helps reduce information asymmetry and helps investors analyze firm-specific information to evaluate alternative investment opportunities (Peterson et al., 2015); increases the efficiency of capital allocation (Durnev et al., 2003(Durnev et al., , 2004; benefits both public debt markets and (Kim et al., 2013) private loan markets (Fang et al., 2016). Schiebel (2007) studied the value relevance of accounting information under IFRS and German GAAP and eventually concluded that German GAAP has more value relevance than IFRS.
The contrary evidence is also reported in the literature. For example, Ahmed et al. (2013) found declined quality of financial reporting due to mandatory adoption of IFRS in a country. As the coexistence of both IFRS and local accounting standards in a country adversely affects the comparability, local accounting standards need to be essentially adjusted to make them compatible with IFRSs (Callao et al., 2007). Lin et al. (2019) concluded that adoption of IFRS could not enhance the comparability of accounting information significantly. But Armstrong et al. (2010) concluded that the adoption of IFRS increased comparability and quality of financial reporting which ultimately materialized the investors' perceived net benefits in Europe. Some research studies revealed that adoption of IFRS enhanced value relevance of accounting information (Bartov et al., 2005;Harris and Muller, 1999;Horton and Serafeim, 2006), while others evidenced that the same could worsen value relevance (Lin and Chen, 2005;Schiebel, 2007).
Adoption of principle-based IFRS
The upper echelons theory articulated how individual factors and team practices affect the executive decision-making (Nielsen, 2010) and also mixtures theories like agency theory and positive accounting theory. A variety of theoretical viewpoints along with the upper echelons theory explained how the top management's demographic diversity influences financial reporting quality and discretionary accounting choices. The upper echelons theory identified six observable characteristics (age, functional background, other career experiences, formal education, socio-economic status and financial position) that contribute to the individual personal background or leadership experience that distinguishes each other (Hambrick and Mason, 1984), and the current accounting research also shows how managerial expertise and leadership role have important explanatory power for accounting choices and outcomes (Bamber et al., 2010;Ge et al., 2011). However, both positive and negative relationship of TMT demographic diversity with financial reporting quality are evident in the literature (Bamber et al., 2010;and Ge et al., 2011;Steccolini, 2004;Fawzi et al., 2001). Watts and Zimmerman (1986) developed the positive accounting theory (PAT) to explain and predict firm's choices on accounting practices. The theory hypothesizes that accounting choices may be determined by managers who want to influence reported earnings and capital structure in imperfect markets. Watts and Zimmerman (1986) further noted that the companies having higher information asymmetry among managers and external investors are more conservative in financial reporting. This conservatism is also influenced by the managers' specific characteristics (CFOs) and GAAPs (Ge et al., 2011). The information asymmetry resulting from the agency theory can also describe the financial reporting quality (Jensen and Meckling, 1976). Sweeney (1994) exposed managerial motivation to adapt income increasing accounting policies to inflate reported net income of the firm.
The above literature studies expose that adoption of IFRS to a greater extent enhances the quality of financial reporting and overall comparability of financial reports but assurance for ultimate intercompany comparability of financial reports is in question. Furthermore, it also shows managerial discretion plays a significant role in financial reporting although the local rules and regulations limit the application of managerial discretion in this regard. So, the proposition of this study is adoption of IFRS and compliance with it could confirm intercompany comparability of operating performance of the reporting organizations by ensuring consistency in the application of similar accounting methods. In Bangladesh, the Company Act -1994 mainly describes the basic requirement of the contents of financial statements. ICAB (Institute of Chartered Accountants of Bangladesh) plays the key role of adoption and implementation of IFRSs here. ICAB adopts IFRS with little modification and publishes it as BFRS (Bangladesh Financial Reporting Standards) The Company Act does not make it mandatory to comply with IFRSs but requires the financial statements to be audited by the member of ICAB. And, the members of ICAB ensure compliance with BFRS in financial reporting as per ICAB requirements [1]. This study is designed to investigate whether the adoption of principle based IFRS could ensure extreme intercompany comparability of operating performance of various companies ensuring the application of same accounting principles and methods from company to company within the same industry to make financial reports more lucrative to its intended users.
Methodology
This is a qualitative study and uses content analysis approach to achieve our research objective. No quantitative analysis is included in this paper. The content analysis is done on 125 annual financial statements extracted from 25 companies selected from five industries listed in the Dhaka Stock Exchange (DSE), Bangladesh, for five years from 2013 to 2017. Using a judgmental sampling technique including the market reputation and availability of information, five industries are initially selected from the DSE. The reason behind selecting Bangladesh as empirical setting is that Bangladesh is one of the fast growing economies in the world and accounting practices here is getting more importance day by day to disclose authentic financial performance of various companies to make it trust worthy to the users of accounting information. Bangladesh is now considered as the role model for other developing countries, and the findings of this study can help developing countries to upgrade their accounting practices. From each of the selected industries, five companies are selected resulting in 125 financial statements. The literature review results in five key variables, namely "Reporting period (RP) ", "method of depreciations (MODs)", "inventory valuation method (IVM)", "assets valuation method (AVM)" and "steps in reporting (SOP)". Table 2 is prepared to check intercompany comparability of financial reports. In preparing summary Table 1, we use "√"and "3" to record intercompany consistency and inconsistency, respectively. Summary Table 2 reports intercompany comparison in the same RP among selected organizations under the same industry. To record similarities in selected five dimensions (such as RP MOD and so on) among the companies within the same industry, we use "√" otherwise "3". Finally, we have presented the overall position regarding consistency and comparability of operating performance in corporate financial reporting in Bangladesh.
Findings
Bangladesh adopted 29 IASs and 13 IFRSs with some modifications until 2012. Every organization is compelled to follow "Bangladesh Accounting Standard" adopted from IASs and IFRS along with other local reporting guidelines in financial reporting. The function tables report that of all the statements, 121 statements are reported for annual basis; two statements are reported for six months and remaining two are reported for 18 months. For inventory valuation, various methods such as LC, WAC, FIFO, etc. are used by our sampled organizations. To depreciate their real assets, both the straight line (SL) and the diminishing balance (DB) methods are followed by selected companies. While presenting assets on balance sheet, most of the companies use historical cost. However, only one company uses market value for the same. Only few companies maintain similarities in various steps of presenting (SOP) relevant information in annual reports, but most of the organizations (80% of total) are inconsistent in this regard.
Intracompany consistency in financial reporting
Consistency in the application of various accounting methods is essential to avoid earning manipulation and to ensure intracompany comparability of financial reports time. Table 1 reports the overall status of intracompany periodic consistency in selected five dimensions. More specifically, in cement industry, two companies are inconsistent in reporting period, and one company is inconsistent in the method of depreciation. However, in terms of IVM and AVM, they are consistent, whereas three companies are inconsistent in terms of SOP. In consumer product industry, dimensions except SOP are consistent in implementation of corporate financial reporting and only Gememi Sea Food is consistent in SOP. In engineering industry, no company has shown consistency in the SOP in annual reports. For rest of the dimensions, all the companies are consistent except BSRM in reporting period. In fuel and power industry, only three companies are inconsistent in the SOP. However, in pharmaceutical industry, no company is consistent in SOP and in addition; Beximco Pharma shows inconsistency in reporting period. Overall, in terms of MOD, IVM, CS and AVM, all the companies, except Crown Cement (CC), show consistency in financial reporting because CC shows inconsistency in methods of depreciation. Of the 25 companies, only four companies are inconsistent in reporting period. Only five companies are consistent in the way of presentation, whereas 20 companies (80% of our total sample size) are inconsistent in presenting their all relevant information in corporate financial reports. Overall, our findings report significant level of inconsistency in most of the steps in preparing annual reports through our study period.
Industry
Name of the company 4.2 Intercompany comparability of financial performance Comparability of financial reports is one of the major qualitative characteristics in corporate financial reporting; especially for the current and potential investors. Although principle based IFRS provides flexibilities in the application of various depreciation and assets valuation methods, the application of same accounting methods for the same type of assets across the companies can ensure utmost intercompany comparability of financial performance within the same industry. It is observed that, other things remaining constant, differences in the application of accounting methods may show differences in corporate operating performance. Our thorough investigation on five function tables compiles the check list of intercompany comparability of financial reporting in summary Table 2.
4.2.1 Status of the reporting period. In 2013, 2014 and 2017, all the companies report their financial performance over 12-month period. In cement industry, one company (Confidence ) in 2015 and another one (Premier Cement) in 2016 report their financial performance semiannually. In pharmaceutical industry, Beximco Pharma reports its financial reports in 2015 over 18 months, in engineering industry, BSRM reports its financial reports in 2016 over 18 months. For rest of the periods, all the companies publish their financial reports annually. Our overall findings show very low level of disparity in reporting period.
4.2.2 Status of the application of method of depreciation. The application of methods of depreciation differs vastly among the companies. In cement industry, three companies use SL method, one company uses LFHB and another one uses RB method in 2013 but from 2014 two companies follow RB method, two companies follow SL method and one company follows 3 3 3 3 3 Inventory valuation method 3 3 3 3 3 Asset valuation method 3 3 3 3 3 Steps of presentation 3 3 3 3 3 Source(s): Based on function tables Table 2. Intercompany comparability of financial reports within the same industry Adoption of principle-based IFRS LFHB method of depreciation. In consumer goods industry, two companies use SL method and another three companies use RB method. But, all the companies in engineering industry and power and oil industry follow the RB and SL method, respectively. In pharmaceutical industry, all the companies follow SL method except Beximco which follows RB method. Although intercompany comparison in engineering and power and oil industry is possible in terms of method of depreciation, diversities in the application of methods of depreciation makes it problematic in another three industries. 4.2.3 Status of the application method of inventory valuation. The wide varieties in the application of IVMs have been found among the industries except engineering industry. In cement industry, three companies follow LC (least cost method) and two companies use weighted average cost method (WAC). In consumer goods industry, three companies use WAC method and two companies use LC method. In power and oil industry, one company uses LC method, three companies use WAC method and another one uses FIFO method. Two companies, in pharmaceutical industry, follow both LC and FIFO methods, two companies follow WAC method and the one company follows FIFO method. However, all the companies in engineering industry follow WAC method for inventory valuation. So, in terms of IVM , the intercompany comparability of operating performance becomes very complicated except engineering industry.
4.2.4 Status of assets presentation. All the companies, except ACME in pharmaceutical industry, use historical cost (HC) to report their assets on the balance sheet. HC is accordance with the GAAP and IFRS. Only ACME limited, out of 25 selected companies, uses the market value to report various assets on the balance sheet. So, in terms of assets presentation, our sampled companies show maximum similarities in presenting their various assets which are used as denominator to measure the operating performance of a particular firm.
4.2.5 Status of the steps of presentation. The steps of presentation (SOP) denote the various headings to disclose financial and nonfinancial items in annual reports to comply with full disclosure principle. The SOP reports very informative information required to analyze the operating performance of the companies. Our investigation on annual reports shows no uniformity in presentation sequences throughout the selected companies during our study period. Rather, very few companies are consistent in this regard and most of the companies are inconsistent presenting relevant information in the annual report. This causes wastage of much time of the stakeholders to collect and process relevant information making comparison among the companies.
Adoption of IFRS enhances the quality of financial reporting reducing information asymmetry (Houqe, 2018;Lourenço et al., 2015;Brochet et al., 2013) and has significant both positive (Isaboke and Chen, 2019;Barth et al., 2012;Bartov et al., 2005;Horton and Serafeim, 2006) and negative (Lin and Chen, 2005;Schiebel, 2007) impact on value relevance in the stock market. IFRS definitely enhances the overall comparability of accounting information (Brochet et al., 2013;Wang, 2011;Jermakowicz, 2004) although negative impact (Ahmed et al., 2013;Callao et al., 2007) is also found in literature review. In this study, it is clearly evidenced that adoption of IFRS could guarantee consistency in the application of accounting methods within the company and enhances intracompany comparability to a greater extent. However, the adoption of IFRS and compliance with it cannot guarantee the application of same accounting method for the same accounting issue across the companies within the same industry. If other things (revenues and expenses) remain same of the two companies, because of disparities in the application of accounting methods for deprecation, inventory valuation and so on can distort intercompany comparability of their operation performance. Consequently, IFRS could not ensure ultimate intercompany comparability within the same industry because of diversities in the application of various accounting methods among the companies.
Conclusion
The existing literature discloses that adoption of and compliance with IFRS definitely enriches the quality of financial reporting, its value relevance in the stock markets and even enhances cross-country usages of financial information. However, the effectiveness of IFRS in a county to some extent depends on her enforcement capabilities by regulatory agencies (Lourenço et al., 2015). The contradictory outcomes are also found regarding the comparability of financial reports because of adoption of IFRS. If both IFRS and local reporting standard remain functional without synchronization between them, the overall comparability of financial reports become worsen (Callao et al., 2007). Our findings conclude that the adoption of IFRS in Bangladesh enhances the intracompany comparability to a greater extent but fails to achieve intercompany comparability of operating performance because of methodological diversities in financial reporting. We propose that the strict methodological specifications to neutralize managerial discretion relating to different accounting issues (such as asset valuation, assets presentation, reporting period and timeliness, depreciation, etc.) through local regulatory bodies as well as the proper adoption of and compliance with IFRS are essential to uphold supreme intercompany comparability of operating performance of various companies within the same industry and to make financial reports lucrative to its intended users. In this study, it is assumed that all the selected companies ensure compliance with IFRSs as a statutory requirement of ICAB. Moreover, although the quantitative analysis is absent in this study, the findings of this study is still crucial in accounting research to urge designing future studies to find out the ways of making financial reports more compatible for intercompany comparability of operating performance. | 5,577.6 | 2020-07-22T00:00:00.000 | [
"Business",
"Economics"
] |
ANALYSIS OF MODERN METHODS AND MEANS OF ELECTRONIC INTELLIGENCE FOR SPECIAL PURPOSES FOR MONITORING THREATENING STATIONARY AND MOBILE OBJECTS
Electronic methods and means of reconnaissance are a set of methods and organizational structures for conducting intelligence activities using electronic equipment and radio-technical devices (systems). The development of modern element base and computing facilities allows us to miniaturize modern facilities, introducing into them previously inaccessible algorithms and methods for processing the information received. This allows realSCIENTIFIC COLLECTION «INTERCONF» | No 81 251 time monitoring of potentially dangerous (threatening) stationary and mobile objects, promptly responding to emerging terrorist threats and other dangerous phenomena. On the paper briefly discusses the main modern methods and means of electronic intelligence for special purposes,
of the modern developments of special-purpose electronic intelligence tools are aimed at monitoring threatening (potentially dangerous) stationary and mobile objects. One of the reasons is a rather serious terrorist threat that poses a challenge not only to individual countries, but also to entire regions. Thus, terrorist groups attempted to create their own state in the Middle East and Africa, carried out actions in many parts of the world. In this regard, the work devoted to the study of methods and means of detecting and preventing possible threats is of interest.
Literature review. In general, the methods and means of electronic intelligence can be reduced to active, passive, active-passive (combined). The main advantage of active methods is the ability to adjust, within certain limits, the capabilities and structure of the emitted signals, the predictability of the expected response of the signal reflected from the object, deliberately predictable methods and processing algorithms. However, in modern conditions this leads to the rapid opening of intelligence assets, and, as a rule, to intensive counteraction. The main advantage of passive methods is the possibility of covert observation of objects of interest, the possibility of long-term accumulation of statistical information and, as a result, theoretically high enough secrecy, noise immunity and information content.
However, the main disadvantage is the a priori unknown structure of signals emitted by objects, the dependence of the information received on the radiation properties of the object, a larger number of equipment and computing facilities involved for processing signals in the possible radiation range of the object. Active-passive methods allow combining the advantages of each method and leveling their disadvantages. Their essence about the general form is as follows. There is a certain number of electronic means combined into a single system. part of the funds works for radiation and reception, part only for receiving signals. In this case, the structure and the intended methods of signal processing are known. The secrecy and security of the objects of such a system lies in the "flickering" mode of operation of the emitting devices and their quasi-chaotic radiation, with a constant change of location during the period of "silence". The reconnaissance object, even determining the position of the emitting means, does not have time to quickly react and neutralize the threat that has arisen for it. However, the use of active-passive methods impose rather stringent requirements on the means of communication, topographic reference and orientation, methods of monitoring and predicting the technical state of the system's components . load reduction or undermining cryptographic systems.
Using secretive methods, is a fairly reliable and effective means. Used by special services of almost all states to obtain the necessary intelligence information. Basically, the work is carried out in passive mode, but options for obtaining information using narrow-controlled radiation (for example, laser), which is in some cases a demuscating factor. A distinctive feature of radio repairs is its sufficient subjectivity, which consists in the need for a critical assessment of the data obtained. This is primarily due to the fact that information issued by an intelligence facility can be intentionally distorted, which causes the need for its multiple recheck.
To solve the problems of radio engineering intelligence to determine the structure of the signal received from the object and the coordinates of the radiation source, the methods of spectral analysis, as well as triangulate, difference-distance One of the specific problems of radio engineering intelligence when determining the location of objects is the high dependence of the accuracy of the estimates obtained from the distance between the reception points (system bases) and the accuracy of its measurement, which requires increased attention when solving top acceptance and orientation tasks. As shown in the sources given in the analysis of the literature, to achieve acceptable results of the assessment of the coordinates of the radiation source, the accuracy of measuring the bases should be an order of magnitude more accurate accuracy of measuring the primary coordinates.
Radar intelligence, being one of the oldest types of electronic intelligence, is at the moment one of the most informative. This is mainly due to: a sufficiently long history of the development of the theoretical school; -"more direct" methods for obtaining coordinate information (measurement errors are proportional to, in contrast to the methods of radiotechnical intelligence, Exploration is carried out in a completely autonomous or semi-autonomous mode.
The disadvantage is the impossibility of performing the functions of natural or deliberately supplied optically opaque noise.
Conclusions.
1. The modern development of the element base and computing facilities allows the implementation of most modern methods and algorithms for obtaining information by means of electronic intelligence for special purposes for monitoring threatening stationary and mobile objects. situation. Elimination of this drawback is possible by using systems using data from radio electronic (radio, radiotechnical, radar) and optoelectronic intelligences. 14. Рощупкин | 1,286.6 | 2021-10-23T00:00:00.000 | [
"Computer Science"
] |
Time-Universal Data Compression †
: Nowadays, a variety of data-compressors (or archivers) is available, each of which has its merits, and it is impossible to single out the best ones. Thus, one faces the problem of choosing the best method to compress a given file, and this problem is more important the larger is the file. It seems natural to try all the compressors and then choose the one that gives the shortest compressed file, then transfer (or store) the index number of the best compressor (it requires log m bits, if m is the number of compressors available) and the compressed file. The only problem is the time, which essentially increases due to the need to compress the file m times (in order to find the best compressor). We suggest a method of data compression whose performance is close to optimal, but for which the extra time needed is relatively small: the ratio of this extra time and the total time of calculation can be limited, in an asymptotic manner, by an arbitrary positive constant. In short, the main idea of the suggested approach is as follows: in order to find the best, try all the data compressors, but, when doing so, use for compression only a small part of the file. Then apply the best data compressors to the whole file. Note that there are many situations where it may be necessary to find the best data compressor out of a given set. In such a case, it is often done by comparing compressors empirically. One of the goals of this work is to turn such a selection process into a part of the data compression method, automating and optimizing it.
Introduction
Nowadays lossless data compressors, or archivers, are widely used in systems of information transmission and storage. Modern data compressors are based on the results of the theory of source coding, as well as on the experience and intuition of their developers. Among the theoretical results, we note, first of all, such deep concepts as entropy, information, and methods of source coding discovered by Shannon [1]. The next important step was done by Fitingoff [2] and Kolmogorov [3], who described the first universal code, as well as Krichevsky who described the first such a code with minimal redundancy [4]. Now practically used data compressors are based on the PPM universal code [5] (which is used along with the arithmetic code [6]), the Lempel-Ziv (LZ) compression methods [7], the Burrows-Wheeler transform [8] (which is used along with the book-stack (or MTF) code [9][10][11]), the class of grammar-based codes [12,13] and some others [14][15][16]. All these codes are universal. This means that, asymptotically, the length of the compressed file goes to the smallest possible value (i.e., the Shannon entropy per letter), if the compressed sequence is generated by a stationary source.
In particular, the universality of practically used codes means that we cannot compare their performance theoretically, because all of them have the same limit ratio of compression. On the other hand, the experiments show that the performance of different data compressors depends on a compressed file and it is impossible to single out one of the best or even remove the worst ones. Thus, there is no theoretical or experimental way to select the best data compressors for practical use. Hence, if someone is going to compress a file, he should first select the appropriate data compressor, preferably giving the best compression. The following obvious two-step method can be applied: first, try all available compressors and choose the one that gives the shortest compressed file. Then place a byte representation of its number and the compressed file. When decoding, the decoder first reads the number of the selected data compressor, and then decodes the rest of the file with the selected data compressor. An obvious drawback of this approach is the need to spend a lot of time in order to first compress the file by all the compressors.
In this paper we show that there exists a method that encodes the file with the (close to) optimal compressor, but uses a relatively small extra time. In short, the main idea of the suggested approach is as follows: in order to find the best, try all the compressors, but, when doing it, use for compression only a small part of the file. Then apply the best data compressor for the compression of the whole file. Based on experiments and some theoretical considerations, we can say that under certain conditions this procedure is quite effective. That is why we call such methods "time-universal." It is important to note that the problems of data compression and time series prediction are very close mathematically (see, for example, [17]). That is why the proposed approach can be directly applied to time series forecasting.
To the best of our knowledge, the suggested approach to data compression is new, but the idea to organize the computation of several algorithms in such a way that any of them worked at certain intervals of time, and their course depends on intermediate results, is widely used in the theory of algorithms, randomness testing and artificial intelligence; see [18][19][20][21].
The Statement of the Problem and Preliminary Example
Let there be a set of data compressors F = {ϕ 1 , ϕ 2 , ...} and x 1 x 2 ... be a sequence of letters from a finite alphabet A, whose initial part x 1 ...x n should be compressed by some ϕ ∈ F. Let v i be the time spent on encoding one letter by the data compressor ϕ i and suppose that all v i are upper-bounded by a certain constant v max , i.e. sup i=1,2,..., v i ≤ v max . (It is possible that v i is unknown beforehand.) The considered task is to find a data compressor from F which compresses x 1 ...x n in such a way that the total time spent for all calculations and compressions does not exceed T(1 + δ) for some δ > 0. Note that T = v max n is the minimum time that must be reserved for compression and δT is the additional time that can be used to find the good compressor (among ϕ 1 , ϕ 2 , ...). It is important to note that we can estimate δ without knowing the speeds v 1 , v 2 , ....
If the number of data compressors F is finite, say, {ϕ 1 , ϕ 2 , ..., ϕ m }, m ≥ 2, and one chooses ϕ k to compress the file x 1 x 2 ...x n , he can use the following two step procedure: encode the file as < k > ϕ k (x 1 x 2 ...x n ), where < k > is log m -bit binary presentation of k. (The decoder first reads log m bits and finds k, then it finds x 1 x 2 ...x n decoding ϕ k (x 1 x 2 ...x n ).) Now our goal is to generalize this approach for the case of infinite F = {ϕ 1 , ϕ 2 , . . . }. For this purpose we take a probability distribution ω = ω 1 , ω 2 , ... such that all ω i > 0. The following is an example of such a distribution: Clearly, it is a probability distribution, because ω k = 1/k − 1/(k + 1). Now we should take into account the length of a codeword which presents the number k, because those lengths must be different for different k. So, we should find such ϕ k that the value is close to minimal. As earlier, the first part − log ω k is used for encoding number k (codes achieving this are well-known, e.g., [22].) The decoder first finds k and then x 1 x 2 ...x n using the decoder corresponding to ϕ k . Based on this consideration, we give the following Definition 1. We call any method that encodes a sequence x 1 x 2 ...x n , n ≥ 1, x i ∈ A, by the binary word of the length − log ω j + |ϕ j (x 1 x 2 ...x n )| for some ϕ j ∈ F, a time-adaptive code and denote it byΦ δ compr . The output of Φ δ compr is the following word: where < ω i > is − log ω i -bit word that encodes i, whereas the time of encoding is not grater than T(1 + δ) (here T = v max n). If for a time-adaptive codeΦ δ compr the following equation is valid Comment 1 It will be convenient to reckon that the whole sequence is compressed not letter-by-letter, but by sub-words, each of which, say, a few kilobytes in length. More formally, let, as before, there be a sequence x 1 x 2 . . . , where x i , i = 1, 2, ... are sub-words whose length (say, L) can be a few kilobytes. In this case x i ∈ {0, 1} 8L . Comment 2 Here and below we did not take into account the time required for the calculation of log ω i and some other auxiliary calculations. If in a certain situation this time is not negligible, it is possible to reduceT in advance by the required value.
This description and the following discussion are fairly formal, so we give a brief preliminary example of a time-adaptive code. To do this, we took 22 data compressors from [23] and 14 files of different lengths. For each file we applied the following three-step scheme: first we took 1% of the file and sequentially compressed it with all the data compressors. Then we selected the three best compressors, took 5% of the file, and sequentially compressed it with the three compressors selected. Finally, we selected the best of these compressors and compressed the file with this compressor. Thus, the total extra time is limited to 22 × 0.01 + 3 × 0.05 = 0.37, i.e. δ ≤ 0.37. Table 1 contains the obtained data. Table 2 shows that the larger the file, the better the compression. The following table gives some insight into the effect of the extra time. Here we used the same three-step scheme, but the size of the parts was 2% and 10% for the first step and the second, respectively, while the extra time was 0.74. From the tables it can be seen that the performance of the considered scheme increases significantly when the additional time increases. It worth noting, that if one applied all 22 data compressors to the whole file, the extra time would be 21 instead of 0.74.
Theoretical Consideration
Suppose that there is a file x 1 x 2 ...x n and data compressors ϕ 1 , ..., ϕ m , n ≥ 1, m ≥ 1. Let, as before, v i be the time spent on encoding one letter by the data compressor ϕ i , and The goal is to find the data compressor ϕ j , j = 1, ..., m, that compresses the file x 1 x 2 ...x n in the best way in timeT.
Apparently, the following two-step method is the simplest.
Step 3. Calculate s = arg min i=1,...,m |ϕ i (x 1 ...x r )| Step 4. Compress the whole file x 1 x 2 ...x n by ϕ s and compose the codeword s ϕ s (x 1 ...x n ), where s is log m -bit word with the presentation of s.
It will be shown that even this simple method is time universal. On the other hand, there are a lot of quite reasonable approaches to build the time-adaptive codes. For example, it could be natural to try a three step procedure, which was considered in the previous part (see Tables 1 and 2), as well as many other versions. Probably, it could be useful to use multidimensional optimization approaches, such as machine learning, so-called deep learning, etc. That is why, we consider only some general conditions needed for time-universality.
Let us give some needed definitions. Suppose, a time-adaptive data-compressorΦ is applied to x = x 1 ...x t . For any ϕ i we define τ i (t) = max{r : ϕ i (x 1 ...x r ) was calculated, when extra time δ T was exhausted}.
(iii) for any t the methodΦ(x 1 ...x t ) uses such a compressor ϕ s for which, for any i ThenΦ(x 1 ...x n ) is time universal, that is A proof is given in the Appendix A, but here we give some informal comments. First, note that property (i) means that any data compressor will participate in the competition to find the best one. Second, if the sequence x 1 x 2 ... is generated by a stationary source and all ϕ i are universal codes, then the property (iii) is valid with probability 1 (See, for example, [22]). Hence, this theorem is valid for this case. Besides, note that this this theorem is valid for methods described earlier.
Experiments
We conducted several experiments to evaluate the effectiveness of the proposed approach in practice. For this purpose we took 20 data compressor from the "squeeze chart (lossless data compression benchmarks)", http://www.squeezechart.com/index.html and files from this site http: //corpus.canterbury.ac.nz/descriptions/, and http://tolstoy.ru/creativity/90-volume-collection-of-theworks/ (Information about their size is given in the tables below). It is worth noting, that we do not change the collection of the data compressors and the files during experiments. The results are presented in the following tables, where the expression "worst/best" means the ratio of the longest length of the compressed file and the shortest one (for different data compressors). More formally, worst/best = max i,j=1,...,20 (|ϕ i |/|ϕ j |). The expression "chosen/best" is a similar value for a chosen data compressor and the best one. The value "chosen/best" is the frequency of occurrence of the event "the best compressor was selected". Table 3 shows the results of the two-step method, where we took 3% in the first step. Thus, the total extra time is limited to 20 × 0.03 = 0.6, i.e., δ ≤ 0.6. Table 3. Two-step compression. Extra-time δ = 20 × 0.03 = 0.6. Here ratio "chosen best" means a proportion of cases in which the best method was chosen. Table 4 shows the effect of the extra time δ on the efficiency of the method (In this case we took 5% in the first step). Table 4. Two-step compression. Extra-time δ = 20 × 0.05 = 1. Table 5 contains information about the three step method. Here we took 3% in the first step and then took five data compressors with the best performance. Then, in the second step, we tested those five data compressors taking 5% from the file. Hence, the extra time equals 20 × 0.03 + 5 × 0.05 = 0.85. Table 5. Three-step compression. Extra-time δ = 20 × 0.03 + 5 × 0.05 = 0.85. Table 6 gives an example of four step method. Here we took 1% in the first step and then took five data compressors with the best performance. Then, in the second step, we tested those five data compressors taking 2% from each file. Basing on the obtained data, we chose three best and tested them on 5% parts. At last, the best of them was used for compression of the whole file. Hence, the extra time equals 20 × 0.01 + 5 × 0.02 +3 × 0.05 = 0.45. Table 6. Four-step compression. Extra-time 20 × 0.01 + 5 × 0.02 +3 × 0.05 = 0.45. If we compare Table 6 and Table 3, we can see that the performance of the four step method is better than two step method, where the extra time is significantly less for the four step method. The same is valid for the considered example of the three step method.
Length of File (byte) Number of Files Ratio "Chosen Best" Average "Worst/Best" Average "Chosen/Best
We can see that the three-and four-step methods make sense because they make it possible to reduce the additional time while maintaining the better quality of the method. Also, we can make another important conclusion. All tables show that the method is more efficient for large files. Indeed, the ratio "chosen/best" and the average value "chosen/best" decreases where the file lengths increases. Moreover, the average value "worst/best" increases where the file lengths increases.
The Time-Universal Code for Stationary Ergodic Sources
In this section we describe a time-universal code for stationary sources. It is based on optimal universal codes for Markov chains, developed by Krichevsky [4,24] and the twice-universal code [25]. Denote by M i , i = 1, 2, ... the set of Markov chains with memory (connectivity) i, and let M 0 be the set of Bernoulli sources. For stationary ergodic µ and an integer r we denote by h r (µ) the r-order entropy (per letter) and let h ∞ (µ) be the limit entropy; see for definitions [22].
Krichevsky [4,24] described the codes ψ 0 , ψ 1 , ... which are asymptotically optimal for M 0 , M 1 , ..., correspondingly. If the sequence x 1 x 2 ...x n , x i ∈ A, is generated by a source µ ∈ M i , the following inequalities are valid almost surely (a.s.): where t grows. (Here C is a constant.) The length of a codeword of the twice-universal code ρ is defined as the following "mixture": (It is well-known in information theory [22] that there exists a code with such codeword lengths, because ∑ x 1 ...x t ∈A t 2 −|ρ(x 1 ...x t )| = 1.) This code is called twice-universal because for any M i , i = 0, 1, ..., and µ ∈ M i the equality (8) is valid (with different C). Besides, for any stationary ergodic source µ a.s.
Let us estimate the time of calculations necessary when using ρ. First, note that it suffices to sum a finite number of terms in (9), because all the terms 2 −|ψ i (x 1 ...x t )| are equal for i ≥ t. On the other hand, the number of different terms grows, where t → ∞ and, hence, the encoder should calculate 2 −|ψ i (x 1 ...x t )| for growing number i's. It is known [24] that the time spent on coding one letter is close for different codes Hence, the time spent for encoding one letter by the code ρ grows to infinity, when t grows. The described below time-universal code Ψ δ has the same asymptotic performance, but the time spent for encoding one letter is a constant.
Step 2. Find such a j that Step 3. Calculate the codeword ψ j (x 1 ...x t ) and output where < j > is the − log ω j+1 -bit codeword of j. The decoding is obvious.
.. be a sequence generated by a stationary source and the code Ψ δ be applied. Then this code is time-universal, i.e., a.s.
Conflicts of Interest:
The author declares no conflict of interest.
Proof of Theorem 2. It is known in Information Theory [22] that h r (µ) ≥ h r+1 (µ) ≥ h ∞ (µ) for any r and (by definition) lim r→∞ h r (µ) = h ∞ (µ). Let > 0 and r be such an integer that h r − h ∞ < . From (11) we can see that there exists such t 1 that m(t) ≥ r if t ≥ t 1 . Taking into account (8) and (11), we can see that there exists t 2 for which a.s. ||ψ r (x 1 ...x t )|/t − h r (µ)| < if t > t 2 . From the description of Ψ δ (the step 3) we can see that there exists such t 3 > max{t 1 , t 2 } for which a.s. | 4,698.4 | 2019-05-29T00:00:00.000 | [
"Computer Science"
] |
Generic framework for industrial 4.0 applications based on internet of things
Use your smartphone to scan this QR code and download this article ABSTRACT The Internet of Things (IoTs) is a network of interconnected devices, transportations, home appliances, and other devices. They are functionally embedded in electronics, software, sensors, actuators, and connectivity that allows them to connect and exchange information. On the basis of the IoT concept, implementations are gradually being proposed in a range of areas, ranging from smart house, smart office and smart agriculture. In this research paper, a generic framework for smart monitoring applications based on the IoTs network is proposed. In this framework, low-powered sensor nodes are based on the micro:bit platform, providing a multiple footprints for different sensor connections. The wireless capability on micro:bit provides a complete solution to deploy the system in such places that wire is impractical to draw. The data is wirelessly gathered by a basestation node that is powered by Android Things operating systemprovided by Google. This operating system is based on the Android platform for smart devices and Internet of Things products. The approach to this framework indicates a low cost and minimum setup and especially amenable for applications control. To support many applications with minimummodifications, the framework is designed for easy expansion by supporting popular serial connection ports, including the Universal Asynchronous Receiver/Transmitter and Serial Peripheral Interface. With these connections, on one line data bus, several sensors can be added to match the different application requirements. In this paper, our platform is validated for an automatic water monitoring in aquaculture based on the temperature, pH and dissolved oxygen sensory data. Through our framework, the data is uploaded to a cloud for remote monitoring and providing alarms for users whenever the data is out of a predefined safe domain.
INTRODUCTION
The Internet Of Things (IoTs) is the key point in the development of Industry 4.0 which is characterized by the generation of device connected network. They can be mobile phones, transportation, home appliances and up-to-date wearable embedded with sensors and activators connected to the Internet so that these objects can exchange data with each other 1 . Things will be provided with the unique identifiers (UIDs) and with the ability to transfer data over a network without requiring human-to-human or human-to-computer interactions 2 . Technology research firm Gartner estimates that 6.4 billion wireless devices will be used globally in 2019, more than 30 percent from 2018. Gartner also estimates that the figure will increase by more than threefold, to about 20 billion by 2020. The IoTs network especially smart controlling appliances have continuously developed competitively on wide range of fields from home control 3 , parking lot guidance 4 , healthcare system 5 , and military surveillance 6 . The fundamental similarity between these ap-plications is the combination of small sensor nodes using low-power sensing devices, a micro-controller embedded in the system, and a transceiver connected in wireless protocol. They are randomly deployed, to cover the physical area of the application 7 . The purpose of the embedded micro-controller is processing the collected data from the sensors which has been designed to produce a number of measureable changes such as temperature, moisture, pressure and humidity in physical environments. The wireless transceiver gives a medium for the transmission of information derived from the sensors to the base station or by inter-communication between several nodes. Finally, the gathered information at the base station can be uploaded to a cloud server for remote monitoring. The advantages of smart monitoring applications based on IoTs network compared to traditional approach can be summarized as follows: • The system is easily deployed, especially in remote areas, where wire connection is im- practical to draw. The wireless communication of sensor nodes allows a quick deployment of the application, without the need of complex infrastructure 8 . Moreover, the latest developments in micro-electro-mechanical systems (MEMS) technology, wireless radio transceivers and digital electronics have made modest, lowperforming and multi-purpose sensor nodes small in size and efficient for processing and wireless communication 9 . Therefore, a sensor node can support a long system lifetime, which can be up to two years without the battery replacement or maintenance. • Sensory data is updated frequently. According to the Quality of Service (QoS) of the application, the sensory data are able to upload regularly to the server, keeping the system up to date. Moreover, Sensory data may follow a certain pattern and can be expected for some time 10 .
In spite of these issues, a prediction mechanism can be introduced for forecasts. Leveraging predicted data, the sink node decides the usage of forecast data, the coverage and influences of possible events and the creation of these events. This feature especially provides an interest to monitoring applications, where threads can be predicted and handled as soon as possible. • Different low-powered sensors are available to support a wide range of monitoring applications. In the recent years, wireless sensor networks have reached a wide range of applications and devices with various specifications and characteristics 11 .
In this paper, we present an overview of potential monitoring applications based on the IoTs that is utilized from Wireless Sensor Networks (WSNs). Beside a lot of opportunities of these applications, challenges to widely deploy them are also presented. Moreover, a generic platform based on Android Things operating system is also proposed. This platform is well adapted to different applications by easily changing the sensors. In this paper, this platform is deployed to monitor the quality of water in aquaculture environment. The contributions of the paper are listed below: • An overview of smart monitoring applications: Opportunities and Challenges. • A generic framework for smart monitoring applications based on micro: bit MCU and Android Things base station node. • Implementation automatic water monitoring in aquaculture, providing the temperature, pH and dissolved oxygen sensory data.
The rest of this paper is organized as follows. An overview of monitoring applications based on the IoTs network is presented in Section II, followed by their challenges in Section III. In Section IV, a generic IoT platform based on Android Things operation system is proposed and validated in agriculture water monitoring in Section V. Finally, the paper ends with conclusions.
MONITORING APPLICATIONS BASED ON IOT
IoTs has presented a promising opportunity to develop efficient real-time systems and applications using wireless technology and sensor products. An overview of an IoTs architecture for monitoring applications is depicted in Figure 1. It includes sensor nodes, gateways, a server and a smart-phone application. A sensor node normally is a microcontroller-based system that can sense data, which are application-dependent, in real time and that is low energy consumption for long life working. A sensor node transfers the sensory data to the nearest IoT Gateway using wireless technologies such as WiFi, LoRa and Bluetooth Low energy. A gateway is a processor-based system that runs an operating system such as Linux or Android. It can communicate with sensor nodes to obtain sensory data and send the sensory data to a server via WiFi or 2G/3G/4G/LTE mobile communication. A server processes the sensory data and generates useful information to the third party applications through a security interface. The third party applications can render the useful information to the user through a smartphone or a web page. Following the architecture in Figure 1, a wide spectrum of IoT technologies have been developed and implemented over the last year in a number of fields for instance: home automation agriculture [12][13][14] , food production, environmental monitoring, security surveillance and others 15 . Monitoring applications, such as smart healthcare monitoring, intelligent transport system, environmental tracking system, is one of the most active domains.
Smart Healthcare Monitoring
The real-time healthcare monitoring via connected sensors can save lives in events of a medical emergency such as heart attacks, diabetes, and asthma attacks. The IoTs is improving the healthcare services by enabling real-time alerting, tracking, and monitoring to activate hands-on treatments, better accuracy, apt intervention by doctors and improve complete patient care delivery results. Instead of directly monitoring in hospitals, patients can be monitored regularly even at home using smart devices that provide health status information. Moreover, in order to track the condition in real time using a smart medical system internet -connected, sensors can capture medical and other appropriate health data. and then, transfer collected information to a physician. Finally, medical IoT devices capture crucial data and send it to doctors for real-time surveillance, while notifying people about critical factors through smartphone apps and other related devices.
Intelligent Transport System
An intelligent transportation system (ITS) is an advanced and IoTs-enabled application which focuses on achieving transportation quality by avoiding traffic jams and issues 16 . ITS helps citizens be better informed about traffic, local convenience real-time running information and Available seat that eliminates travel time and guarantees the safety and amenity.
There are a few sub systems that belong to an ITS such as Traffic monitoring system, smart parking system, public transport management, and electronic toll collection system. We briefly describes these systems in the following.
Traffic monitoring system
One of the reason for traffic congestion is fixed and long red light delays. So, controlling the traffic light in the intersections and optimizing the green light period is necessary. By interconnecting and fetching data from the intersections via cameras, traffic lights can be synchronously gathered in order to diverge the traffic at the particular conjunctions. The Artificial Intelligence and Machine Learning are also taken into image processing operation to identify the signalized points and make the controller control the traffic light timing, ensure the smooth traffic flow 17 .
Smart parking system
A practical application for inner-city and outer-city in busy developing and developed countries which provides citizens the information and location of the nearest parking lots. The users can reserve the parking area for their vehicles, or even pays the annual parking fee via the supporting applications and electronic wallet (e-wallet).
Public transport management
The information of public transportation such as locations, velocities, arrival time and routes are provided for the user by the mobile application and electronic boards at the stations. The purpose of this solution is to manage properly the transportation's activities and its owners. Besides, the mobile application not only informs the information of each type of public vehicles, but also guides the user to make the suitable selection for their travel based on the construction of sensing system information.
Electronic toll collection system
Electronic toll collection (ETC) system achieves the goals of reducing the toll booth, expands the area for vehicles, especially on high ways. Furthermore, the process of license identification and payment due to each type of vehicle can be saved by automatic plate recognition and simultaneous toll payment calculation. The user can use e-wallet to pay the fee by scanning the QR Code. In the future, the more ETC system is set up, the more spacious the road is.
Environment Tracking System
Transport emission seems to be the main factor that causes air pollution in big cities around the world because it emits the large amount of Particulate Matter (PM) such as Volatile Organic Compounds (VOC) included NOx, CO and SOx. These pollutants harm to human health, atmosphere and also climate. Being formed by process of incomplete combustion, pollutants such as PM and BTEX (Benzene, Toluene, Ethyl, Xylene) are concerned as pollutants that must be controlled and prevented its effect from affecting to human health according to the report of United
MONITORING APPLICATION CHALLENGES Energy Consumption
Energy optimization is a critical issue for monitoring applications, which normally requires a long-system lifetime. When a large number of sensor nodes are deployed to cover the monitor areas, battery maintenance or replacement becomes a burden. Obviously, there are two different approaches to overcome this issue. Firstly, there has been a variety of strategies to scale back the consuming engergy such as using nanowatt wake-up radio receivers 19 and adequate MAC protocols scheduling implementation 20 . Despite the improval in system operation period, the small battery capacity used as storage devices still cripples it. Secondly, a new paradigm for designing sensor nodes is mentioned. In order to consolidate, or even eliminate batteries, environmental energy sources have been integrated. Thanks to advancements in the field of energy harvest, eternal environmental energy can be harvested and fully autonomous WSNs can be built. A large range is provided for the use of WSNs, for example solar photovoltaics 21 , thermoelectric thermal energy 22 and wind generators for airflow power 23 , which are inexpensive, compact and power-rich harvesters.
Adaptive and Autonomous
Event ambient energy such as solar or wind can be scavenged as long as possible, the sensor nodes have to cope with the energy fluctuations from these sources. For instance, solar energy can be reduced significantly in a rainy day compared to a sunny day and wind energy is a kind of unpredictable source 24 . Therefore, a sensor node must be adaptive to its operations, to reach an ideal state, named Energy Neutral Operation (ENO) 25 . In this state, the overall energy expended is equivalent to energy harvested over a long period of time. This approach will have potentially everlasting life (until the hardware is out of date).
The most common solution is to control power transfer 26 as well as using duty-cycling with a shifting wake-up interval, apart from dynamic voltage and frequency scaling 27 . The solution directly affects the MAC protocol, which is the key consumed sources of the WSN node 20 . In fact, environmental behaviors in PM policies should be taken into consideration.
While fluorescent light provides practically continuous power with rare interruptions in hospitals or heat from industrial equipment, solar or indoor light energies are often occasionally absent, accompanied by energy intervals. The PM must propose plans for the reservation of harvested energy before they are available to guarantee continuous operations.
Wireless Data Collection
Data collection is another issue in monitoring applications based on IoTs. In order to cover such a large area, sensor nodes have to forward their data through many intermediate nodes since the range of the wireless communications is limited (e.g. 30m with radio frequency at 2.4GHz). In order to achieve an efficient data collection at a local base station node, an optimized routing or scheduling must be required otherwise, an intermediate node can become a bottle neck if it has to forward data from many nodes. Moreover, the network topology of the network can be regularly changed due to mobility nodes (e.g. monitoring the bus in a smart city), cause a big burden to rescheduling the whole network. However, the scheduling cannot be performed at each sensor nodes due to the limited resource of memory, computation and energy of a low-power and lowcost device. Currently, Software Defined Wireless Sensor Network (SDWSN) architectures offer significant promise to implement complex scheduling algorithms 28 . In SDWSN, the scheduling adaptations are shifted from sensor nodes to the base station, which has more computational and energy resources (typically a base station has a direct power supply).
Quality of Service
As many presumed applications exist in WSNs, their QoS requirements may vary tremendously. For instance, a failure to identify or collect wrong or incorrect information about a physical occurrence may emerge from several causes in applications involving occurrence identification and target surveillance. The location where the incident occurs cannot be protected by active sensors because of deployment and network maintenance. Intuitive, coverage or number of active sensors can be described as QoS measurement parameters in WSNs. However, focusing on the network QoS, following factors are required to characterize 29 : • End-to-end: end-to-end or non-end-to-end performance.
• Criticality: mission critical or non-mission critical.
Among these factors, the end-to-end delay is the most important in monitoring applications as it has directly impact on the system data up-to-date. This factor is normally concerned with the data collection issue in the previous sub-section. When the scheduling algorithm is not optimized, it takes a long time to forward a packet from a node away from the base station. Concurrently, the energy available in the node also has a significant effect on the QoS. If the energy is not adequate, the node cannot perform further to satisfy the QoS while following the state of the ENO 25 .
GENERIC IOT PLATFORM BASED ON ANDROID THINGS
Bach Khoa University -VNU -HCM (HCMUT), specifically in Computer Science and Engineering Department, provides study programs for training and developing popular applications on the Internet such as IoTs application development, etc. Many laboratories have invested equipment to serve students' uses, support research, research and development. This article wants to introduce a research that the group has been analysing. The topic is aimed to build a system to monitor the status of each environment type and river area, and it will accurately record the measured data and display visually. That helps managers monitor and have timely solutions when there are erratic changes. This system is designed for two-way communication, which is how to send information from sensor nodes to the central station (gateway) and to control signals from the central station to the sensor nodes variable or from user (controlled via web or mobile application). The system will be a network of multiple sensor nodes that measure environmental values and deliver data to the gateway as Figure 2. Also, the system also aims at other criteria such as low cost, high stability, convenience, easy to install, easy to repair. In general, the system consists of several main components: • Sensor node: using micro:bit circuit, combined with sensors that measure water environment data and use radio waves for wireless data transmission.
• Central station (base station -gateway): using Raspberry Pi 3 circuit with operating system embedded Android Things and also using radio waves to communicate with sensor nodes. Concurrently, the central station will be connected to the internet with an Ethernet cable or Wifi to connect to the server.
• Servers and applications on mobile devices: using ThingSpeak or MQTT (Remote queue message transmission) for the server; Phone application with Android platform. Sensor nodes are based on micro:bit circuit platform, an embedded system based on ARM hardware designed by BBC for use in computer education in the UK. With its small size and integrated motion detection technology, compass and Bluetooth, micro:bit can help quickly deploy an Android based sensor application. The sensor node is designed to connect with many popular sensor standards on the market such as Vernier, DFRobot or other 4-20mA industry standard sensors. However, in this paper, we mainly focus on building a generic IoT platform that utilizes micro:bit boards 30 as a controller in a sensor node and Raspberry Pi 3 running Android Things operating system 31 . Please note that the micro:bit board is a micro-controller based board that can obtain the sensory data in realtime. In this paper, we do not focus on security and fault tolerance because these features are normally implemented using software. As for the security feature, one can easily apply Public Key Infrastructure (PKI) 32 on Android Things with a PKI supported server in Figure 4. As for the fault tolerance feature, the number of each sensor can be triplicated as well as there is handshaking among sensor nodes and the gateway. All these techniques can be readily implemented in our proposed platform. Meanwhile, the central station is designed to be compatible with Android Things operating system, a very up-to-date operating system issued by Google for universal Internet applications. This operating system is based on the Android platform for smart devices and Internet of Things products (IoTs). This operating system still has SDK, Android Studio, Google Play Services or Google Cloud Platform developers, etc. Android Things is a platform-based operating system that allows smart devices to handle complex tasks instead of relying on some servers, which means that Android Things will fit into large devices and more functions.
Although Android Things and Raspberry PI are proposed for embedded IoTs device, ADC pins are not supported. The lack of this feature prevents deploying the system in a wide range of applications, whose sensors normally support ADC outputs. To overcome this issue, a master Analog to Digital Converter (ADC) chip, named ADS1118, is added. Based on SPI bus of the chip, our system can support up to 40 different sensors. Therefore, with this feature, our gateway can be used as a sensor in a sparse network, where the number of nodes is less than 5 33 . As it is shown in 34 , it is really a high cost if each sensor node and the gateway are equipped with a module for wireless communications. Therefore, to save this cost, the gateway need to plays the roles of a sensors by sensing data directly from the environment. The system now is no longer required the sensor nodes (see Figure 1) and only has the gateway, sending its data to cloud, without the need of sensor nodes. Moreover, multiple wireless standards are supported in our gateway, including from short range such as Zigbee and Wifi to very long range such as LoRa or 3G. Especially, the keywords for long range (LoRa) communications is a trend for applications in a smart city, such as Intelligent Transport System or Public Transport Management (presented in Section II). The reason is that these applications require mobility networks (e.g. moving car in the city), which a burden for routing algorithms. Therefore, a long range communication is proposed to mitigate this issue since each node in the network can communicate directly with the gateway station, without the need of routing. Therefore, a driver for LoRa communication based on SX1278 chip is implemented in our system. With this driver, popular LoRa modules can be easily integrated in our system. Finally, multiple power supply sources can be used for powering the gateway: a permanent source from grid power line or power extracted from solar cells. Popular output connections such as VGA, DVI or HDMI are also supported by Android Things for visualization the data on a wide screen.
EXPERIMENTAL RESULTS
A prototype -a proposed system is applied into a water for validating the operations of the monitoring application. We use sensors from DFRobot, which helps collect water information such as the temperature, pH and dissolved oxygen (DO). An image of the prototype sensor node is shown in Figure 3. In the case of other applications presented in Section II, appropriate sensors will be used. Sensory data is sent every 30 seconds to the gateway, which is already equipped with a 3G USB in order to upload data to a cloud server. Firstly, the average power consumption of the sensor node and the gateway are 0.07W and 6W, respectively. While the sensor node is very low power, the gateway consumes nearly 100 times higher than a sensor. The main consumed energy source in the gateway is the 3G connection, which is around 2.5W, and the monitor screen, which is 1.4W in average. The power consumption measurements provide a study to choose a solar panel, to prolong the system lifetime for a longterm monitoring application. Secondly, the sensory data is plotted in website for real-time monitoring and is presented in Figure 5. As it can be seen, the temperature is very stable while there are some fluctuations of both pH and dissolved oxygen (DO) values. We found these variations on the measured values of pH and DO when there are some small waves on the water surface. However, considering the average values, our system provides a good accuracy compared to a multi-meter from LeadTec Asia company 35
CONCLUSION
IoTs has opened a novel opportunity for the proliferation of monitoring applications based on sensor nodes. In this paper, an overview of the most interested applications such as smart healthcare monitoring, intelligent transport system and environmental tracking system is presented. Beside of that, the main challenges of these applications, including the energy, wireless data collection, autonomous operations of a node and network QoS, are also discussed. Following the architecture of IoT system, a generic platform based on Android Things is also proposed. This platform is well-adapted for agriculture water monitoring, which provides sensory data concerning the temperature, pH and dissolved oxygen. These sensors are easily replaced to satisfy the requirements of an application. By adding the sensor extension, our gateway provide a complete solution to deploy the system, even in a sparse network with few nodes. In this case, no more sensors or transceivers are required as the gateway will directly collect data by itself. Future works will focus on analyzing the energy consumption of the system and adding additional power from solar cells.
ACKNOWLEDGMENT
This research is funded by Ho Chi Minh City University of Technology (VNU-HCM), under grant number To-KHMT-2019-09.
CONFLICTS OF INTEREST
Song Ngan Pham Le, Trong Nhan Le and Huu Nguyen Nguyen Tran declare that they have no conflict of interest.
HUMAN/ANIMAL RIGHTS
This article does not contain any studies with human or animal subjects performed by the any of the authors.
AUTHOR CONTRIBUTIONS
Song Ngan Pham Le contributes on the system implementations and validations, Trong Nhan Le and Huu Nguyen Nguyen Tran contribute on the related approaches and propose a generic architecture for the platform. Moreover, Huu Nguyen Nguyen Tran also | 5,892.6 | 2020-11-09T00:00:00.000 | [
"Computer Science"
] |
Using computer-vision and machine learning to automate facial coding of positive and negative affect intensity
Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative affect intensity. This is likely, in part, because achieving inter-rater reliability for facial action and affect intensity ratings is painstaking and labor-intensive. We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants, which show strong correspondences to positive and negative affect intensity ratings obtained from highly trained coders. Our results show that CVML can both (1) determine the importance of different facial actions that human coders use to derive positive and negative affective ratings when combined with interpretable machine learning methods, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases. Further, we show that CVML can be applied to individual human judges to infer which facial actions they use to generate perceptual emotion ratings from facial expressions.
Introduction
The ability to effectively communicate emotion is essential for adaptive human function. Of all the ways that we communicate emotion, facial expressions are among the most flexible-their universality allows us to rapidly convey information to people of different ages, cultures, and languages. Further, facial expressions signal complex action tendencies including threat and cooperative intent [1][2][3]. Unsurprisingly, the ability to produce and recognize facial expressions of emotion is of interest to researchers throughout the social and behavioral sciences.
Facial expressions can be interpreted using either message-or sign-based approaches [4]. Message-based approaches describe the meaning conveyed by a facial expression (e.g., happiness), whereas sign-based approaches describe observable facial actions that embody/comprise PLOS fundamental to the recognition of all other perceived emotions. Therefore, determining the extent to which specific patterns of AUs map to positive and negative affect is important for building on and testing contemporary models of emotion production and recognition. Comprehensive follow-up investigations have been difficult to pursue, in part, because facial EMG can only detect a very limited number of AUs simultaneously, and manual alternatives are both labor-and time-intensive and require highly skilled annotators. Indeed, FACS training requires an average of 50-100 hours, and minutes of video can take expert coders multiple hours to rate reliably [21]. These characteristics limit sample sizes, reduce feasibility of replication efforts, and discourage researchers from coding facial expressions. Instead, researchers tend to rely on measures of emotional responding that are not observable in social interactions (e.g., heart rate variability). Recently, automated computer-vision and machine learning (CVML) based approaches have emerged that make it possible to scale AU annotation to larger numbers of participants (e.g., [22][23][24]) thus making follow-up studies more feasible. In fact, inter-disciplinary applications of CVML have allowed researchers to automatically identify pain severity (e.g., [25]), depressive states (e.g., [26]), and discrete emotions from facial expressions (e.g., [27]).
Work using CVML to detect valence intensity from facial expressions is ongoing (see [28]). In fact, there are annual competitions held to develop CVML models that best characterize dimensional features of emotions such as valence and arousal (e.g., [29]). Currently, basic emotions can be coded automatically with accuracy comparable to human coders, but valence intensity models show lower concurrent validity. For example, state-of-the-art CVML models show correlations between human-and computer-coded valence ranging from r = .60-.71 [30,31]. While impressive, there are two limitations that have impeded the use of CVML to make inferences on positive and negative affect intensity. Below, we outline each of these limitations and offer our solutions.
First, CVML models are often constructed using difficult to interpret machine learning models that detect valence directly from frame-by-frame video input without intermediately capturing AUs. Therefore, it is both unclear if: (1) successful valence detection depends on prior detection of specific AUs, and (2) machine learning can provide useful insights into how people interpret specific facial actions. In the current study, we show that CVML can be used to both identify well known relationships between AUs and perceived positive and negative affect intensity in addition to revealing novel relationships.
Second, how valence intensity is represented-and therefore measured-varies substantially across studies. For example, some previous CVML models of valence intensity have been developed from relatively small samples or on continuously collected valence ratings (human ratings collected in real-time using dials or joysticks), while others are developed based on static images. It is unclear if such models generalize to other research settings where participants' emotional expressions to evocative stimuli are coded within discrete, trial-by-trial time intervals (e.g., [32]). Indeed, contemporary work using CVML has shifted from evaluating facial expressions in controlled laboratory settings toward accurately capturing continuous facial expressions of emotion "in the wild", which is a much more difficult task (e.g., [30,33]). However, given the highly contextual nature of facial expression recognition [20], controlled laboratory settings are ideal for identifying AUs that are specific to perceived core affective processes such as positive and negative affect. Further, most valence-detecting CVML models assume a unidimensional valence continuum as opposed to separable continua for positive and negative affect-to our knowledge, there are few opensource datasets used in CVML research that characterize valence as multi-dimensional (see [34]), and very little work has been done with CVML to separate positive and negative affect (cf. [35]). Notably, positive and negative affect can vary independently and have different predictive values [10,15,36], suggesting that CVML models designed to account for each dimension separately may be most beneficial for behavioral science applications.
Using a well-validated method of emotion induction and both computer-vision measurement of discrete facial actions and continuous measures of positive and negative affect intensity, we (1) identified specific correspondences between perceived emotion intensity and discrete facial AUs, and (2) developed a reliable, valid, and efficient method of automatically measuring the separable dimensions of positive and negative affect intensity. Based on previous work on subjective valence intensity using facial EMG, we hypothesized that CVML would identify AUs 12 and 4 as of the most important AUs for positive and negative affect intensity, respectively. Additionally, we hypothesized that the effects of AUs 12 and 4 on positive and negative affect intensity would depend on the activation of other AUs, and that these interactions could be probed with interpretable machine learning methods. Importantly, data used to train and validate our CVML models were collected from a commonly-used psychological task and contained 4,648 video-recorded, evoked facial expressions from 125 human subjects across multiple task instructions. Our findings shed light on the mechanisms of valence recognition from facial expressions and point the way to novel research applications of large-scale emotional facial expression coding.
Participants
Video recordings and human coder data were collected as part of a larger study [32]. The current study included 125 participants (84 females), ages 18-35 years. All participants gave informed consent prior to the study, and the study protocol (#2011B0071) was approved by The Ohio State Behavioral and Social Sciences Institutional Review Board. Self-reported ethnicities of participants were as follows: Caucasian (n = 96), East Asian (n = 14), African-American (n = 5), Latino (n = 3), South Asian (n = 3), and unspecified (n = 4). Note that we tested for racial/ethnic differences in valence coding accuracy, and using Bayesian comparisons we found evidence favoring no differences in accuracy between groups (see Supporting Information).
Measures
Emotion-evoking task. We used an emotion-evoking task, depicted in Fig 1, that has been used in several previous studies to elicit facial expressions of emotion across multiple task instructions [32,37]. Participants viewed 42 positive and negative images selected from the International Affective Picture System (IAPS) to balance valence and arousal. Selections were based on previously reported college-student norms [38]. Images were presented in 6 blocks of 7 trials each, whereby each block consisted of all positive or all negative images. For each block, participants were asked to either enhance, react normally, or suppress their naturally evoked emotional expressions to the images. These instructions effectively increased variability in facial expressions within participants. Further, effortful enhancement and suppression of facial expressions is common across many real-world social situations where specific emotional expressions are expected to reach desired outcomes. Given known individual differences in suppression and enhancement of facial expressions [32,37], we expected that these task instructions would allow us to create a more generalizable CVML model than with no instructions at all. Block order was randomized across participants. Instructions were given so that each valence was paired once with each condition. All images were presented for 10 s, with 4 s between each image presentation. Participants' reactions to each image were video-recorded with a 1080p computer webcam (Logitech HD C270). Due to experimenter error, 1 participant's videos were not recorded correctly, and 7 participants were shown only 41 recordings, resulting in 6,293 usable recordings. Among these, 3 were corrupted and could not be viewed. Thus, 6,290 10-s recordings were potentially available.
In each of the 3 blocks containing positive and negative image content, participants were asked to either enhance, react normally, or suppress their emotional expressions, so that each valence type (i.e., positive or negative) was paired once with each task instruction (enhance, react normally, suppress). All images were selected from the International Affective Picture System [38]. Participants' reactions to the images were video recorded and their facial expressions were subsequently rated for positive and negative emotion intensity by a team of trained coders. The same recordings were then analyzed by FACET, a computer vision tool which automatically identifies facial Action Units (AUs). Note that the individual in this figure is of the first author. The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details.
Manual coding procedure. A team of three trained human coders, unaware of participants' task instructions, independently viewed and rated each 10-s recording for both positive and negative emotion intensity. Presentation of recordings was randomized for each coder. Ratings were collected on a 7-point Likert scale ranging from 1 (no emotion) to 7 (extreme emotion), where positive and negative affect were coded independently following each presentation. Coders completed an initial training phase during which they rated recordings of preselected non-study cases and discussed specific facial features that influenced their decisions (see the Supporting Information for the coding guide). The goal of this training was to ensure that all coders could reliably agree on emotion intensity ratings. In addition, coders participated in once-monthly meetings throughout the coding process to ensure reliability and reduce drift. Agreement between coders across all usable recordings (6,290 recordings) was high, with intraclass correlation coefficients (ICCs(3); [39]) of .88 and .94 for positive and negative ratings, respectively. The ICC(3) measure reported above indicates absolute agreement of the average human-coder rating within each condition (enhance, react normally, suppress) for each of the 150 participants in the original study [32]. To prepare data for CVML analysis, we performed an additional quality check to screen out videos in which participants' faces were off-camera or covered. Any recording in which a participant's face was covered, obscured, or off-camera for 1 s or more was removed from analysis. If 50% or more of a participant's recordings were excluded, we excluded all of his/her recordings to ensure that we had enough within-subject data to use for within-subject model performance analyses. This resulted in a total of 4,648 usable recordings across 125 participants. With over 4,000 individually-coded recordings, our sample size is in the typical range for machine learning applications [40]. Automated coding procedure. We then analyzed each of the 4,648 recordings with FACET [24]. FACET is a computer-vision tool that automatically detects 20 FACS-based AUs (see S1 Table for descriptions and depictions of FACET-detected AUs). While there are no published validation studies of FACET's AU detection accuracy to our knowledge, there are many studies validating the Computer Expression Recognition Toolbox (CERT), which is FACET's opensource predecessor [41]. Validation studies of CERT show that it can discriminate between 18 different AUs with high accuracy rates (e.g., average 2AFC = 80-90%, [41]). Further, FACET has shown better than human accuracy in detecting basic emotions across multiple datasets (e.g., > 95%, [24]), which strongly relies on accurately capturing the AUs that describe each basic emotion category. Note that FACET was recently purchased by Apple Inc. and is no longer available to the public. However, there are other commercial software options available for automated AU detection including Noldus's FaceReader, Affectiva's AFF-DEX, and the opensource OpenFace package, each of which have been validated in previous studies [22][23][24]. Importantly, the methodology we use in the current study is not specific to FACET and any of the above software tools could be utilized to replicate our analyses. FACET outputs values for each AU indicating the algorithm's confidence in the AU being present. Confidence values are output at a rate of 30 Hz, resulting in a time-series of confidence values for each AU being present with each frame of a video-recording. Each point in the time-series is a continuous number ranging from about -16 to 16, whereby more positive and more negative numbers indicate increased and decreased probability of the presence of a given AU, respectively. We refer to this sequence of numbers as an AU evidence time-series.
Each AU evidence time-series was converted to a point estimate by taking the area under the curve (AUC) of the given time-series and dividing the AUC by the total length of time that a face was detected throughout the clip. This creates a normalized measure that does not render biased weights to clips of varying quality (e.g., clips in which participants' faces are occasionally not detected). Point-estimates computed this way represent the expected probability that a participant expressed a given AU across time. We used the AU evidence time-series point estimates as predictor (independent) variables to train a machine learning model to predict human valence intensity ratings. It took FACET less than 3 days to extract AU evidence time-series data from all recordings (running on a standard 8-core desktop computer). Note that we did not use a baseline correction for each subject, which would require human annotation of a neutral facial expression segment for each participant. Therefore, the models reported here may be applied to novel facial recordings with no human judgment.
In addition to raw AU scores, FACET computes scores for positive and negative affect which reflect the probability that a facial expression is of either positive or negative affect. Although these scores reflect presence of positive or negative affect rather than intensity, we report them alongside our results to emphasize the added predictive validity achieved by our method. We used the same preprocessing steps for FACET's positive and negative affect scores as for the AUs (i.e. we computed the normalized AUC values for each recording).
Machine learning procedure
Fig 2 depicts the machine learning procedure. We trained a random forest (RF) model to predict human-coded valence ratings from the AU evidence time-series point estimates described Machine learning procedure. The goal of our first analysis was to determine whether or not CVML could perform similarly to humans in rating facial expressions of emotion. For each AU evidence time-series, we computed the normalized (i.e., divided by the total time that FACET detected a face) Area Under the Curve (AUC), which captures the probability that a given AU is present over time. All AUC values (20 total) were entered as predictors into the random above (see Supporting Information for details on training). RFs are constructed by generating multiple decision trees and averaging predictions of all trees together. We chose the RF model because (1) it can automatically capture interactions between independent variables, and we know that humans use multiple AUs simultaneously when evaluating facial expressions; (2) the importance of each independent variable can be easily extracted from the RF to make inferences regarding which AUs human coders attended to while rating valence intensity (analogous to interpreting beta weights from a multiple regression; [40]); and (3) RFs have previously shown robust representations of the mapping from facial features (e.g., AUs) to discrete emotions and valence intensity [42,43]. We additionally tested regularized regression models including the least absolute shrinkage and selection operator (LASSO), ridge regression, and elastic-net, but these linear models did not adequately capture the human ratings. Further, we tested a Deep Neural Network model that performed similarly to the reported RF results (see Supporting Information for model comparison), and due to its ease of use and interpretation we decided to only report the RF model results in the main text .Given high agreement among coders and a large literature showing that aggregating continuous ratings from multiple, independent coders leads to reliable estimates despite item-level noise (i.e., ratings for each recording; see [44]), we used the average of all coders' ratings for each recording as the outcome (dependent) variable to train the RF.
The RF model contains 2 tuning parameters, namely: (1) ntrees-the number of decision trees used in the forest, and (2) mtry-the number of predictors to sample from at each decision node (i.e., "split") in a tree. A grid search over ntrees 2{100, 200, 300,. . .,1000} showed that out-of-bag prediction accuracy converged by 500 trees for both positive and negative datasets (not reported). A grid search over mtry 2{1, 2, 3,. . .,20} revealed negligible differences in outof-bag prediction accuracy for values ranging from 5 to 20. Because RFs do not over-fit the data with an increasing number of trees [40], we set ntrees = 500 for models presented in all reported analyses to ensure convergence. Because initial grid searches over mtry failed to improve the model, we set mtry heuristically [40] as mtry = p/3, where p represents the number of predictors (i.e., 1 for each AU) in an n × p matrix (n = number of cases) used to train the model. We fit the RF model using the easyml R package [45], which provides a wrapper function for the randomForest R package [46]. All R codes and de-identified data (i.e. FACET output and human coder ratings) used for model fitting along with the trained RF models are available on our lab GitHub, which allow for replication of all analyses and figures (https:// github.com/CCS-Lab/Haines_CVML_2018).
Correspondence between human coders and model predictions. Model performance refers to how similar the model-and human-generated valence intensity rating are. To assess model performance, we split the 4,648 recordings into training (n = 3,060; 65.8%) and test (n = 1,588; 34.2%) sets, trained the model on the training set (see the Supporting Information for details), and then made predictions on the unseen test set to assess how well the RF predicted valence intensity ratings on new data. The data were split randomly with respect to participants so that the training and test data contained 66% and 34% of each participant's recordings, respectively. This separation ensured that training was conducted with all participants, thus creating a more generalizable final model. We fit a separate RF model to positive and negative human ratings. To see if the way we split the training and test data influenced our forest (RF) model to predict the average coder rating for each recording. To test how similar the model ratings were to human ratings, we separated the data into training (3,060 recordings) and test (1,588 recordings) sets. We fit the RF to the training set and made predictions on the unseen test set. Model performance was assessed by comparing the Pearson and intraclass correlations between computer-and human-generated ratings in the test sets. [47,48]. We used Pearson correlations and ICC coefficients to check model performance on training-and test-sets. Pearson correlations measure the amount of variance in human ratings captured by the model, whereas ICCs measure absolute agreement between human-and model-predicted ratings at the item level (i.e., per recording). Therefore, high correlations and ICCs indicate the model is capturing a large amount of variance in human coder ratings and generating ratings using a similar scale as human coders, respectively. We used McGraw and Wong's ICC(1), as opposed to other ICC methods [39], because we were interested in absolute agreement across all clips, regardless of condition/participant. One-way models were used to compute ICCs in all cases. In general, ICCs between .81 and 1.00 are considered "almost perfect" (i.e., excellent) and ICCs between .61 and .80 are considered "substantial" (i.e., good; [49]). We used regression-based approaches and performance measures as opposed to classification-based alternatives (e.g., F1 scores on models trained to classify intensity ratings) because the averaged coder ratings across recordings resembled continuous, real numbers more so than ordinal, categorical intensity scores. Additionally, regression-based models are commonly used in developing models that predict valence and/or arousal intensity. We also checked model performance using a different folding scheme for separating training and test sets which ensured that participants' recordings were not shared across splits. This analysis revealed negligible differences in prediction accuracy for positive ratings and a decrease in accuracy for negative ratings, which suggests that more training data may be necessary to capture negative as opposed to positive affect intensity (see Supporting Information).
Importance of AUs for positive and negative affect. To identify the specific AUs that human coders were influenced most by when making affective ratings, we fit the RF model to the entire dataset (all 4,648 recordings) without splitting into training and test sets. We used this method to identify independent variables that were robust across all samples [47,48]. After fitting the RF models, the importance of each independent variable was estimated using partial dependence [50], a measure of the expected standard deviation in the outcome variable (e.g., positive or negative affect intensity) as a function of a given predictor variable (e.g., AU12) averaged across all other predictor variables (e.g., all AUs except AU12). In fact, in special cases, the absolute values of the multiple regression beta weights are equivalent to the corresponding partial dependence metric [50], which makes partial dependence a useful metric for assessing the importance of predictors when using "black-box" methods such as RFs. Crucially, and unlike other methods of measuring variable importance, partial dependence can also be used to probe both directionality and interaction effects when plotted as a function of the model predictors [50].
To determine if CVML could adequately capture the relative importance of AUs for each individual coder, we also fit the RF to each coder's ratings independently. We used randomization tests to determine the minimum number of ratings necessary to accurately infer which AUs the coders attended to while generating emotion ratings. For each of the 3 coders, we performed the following steps: (1) randomly sample n recordings rated by coder i, (2) fit the RF model to the subset of n recordings/ratings according to the model fitting procedures outlined above, (3) compute the ICC(2) of the extracted RF feature importances (i.e., partial dependence) between the subsampled model and the model fit to all recordings/ratings from coder i, and (4) iterate steps 1-3 thirty times for each value of n (note that different subsets of n recordings/ratings were selected for each of these thirty iterations). We varied n 2 {10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100, 105, 115, 125, 135, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950, 1000, 1200, 1400, 1600, 1800, 2000, 2500, 3000}. Table 1 shows correlations between the model-predicted and the average of the human coders' ratings per recording across both training and test sets. Overall, the RF showed good to excellent performance across both training and test sets for positive and negative ratings. Notably, these results were supported by both the Pearson correlations and the ICCs, suggesting that the RF produced ratings that not only captured variance in, but also showed high agreement with, human ratings. Sensitivity analyses (see Fig 3) indicated that model performance was robust across different training and test splits of the data. These results suggest that variance in human-coded valence intensity can be captured by the presence of discrete AUs.
Model performance within participants
We also checked model performance for each of the 125 participants by computing correlations between human-and model-generated ratings for each participant separately (Fig 4). Although the RF model performed well for many participants in the positive (median r = .91, ICC(1) = .80) and negative (median r = .73, ICC(1) = .51) affect test sets, 5 participants within the positive and 7 participants within the negative affect test-set yielded negative correlations between human-and computer-generated emotion ratings (Fig 4). Further analyses of withinparticipant model performance revealed significant positive associations between within-subject variance in model-predicted ratings and within-participant prediction accuracy (all rs � .54, ps < .001; see S2A Fig). We found the same relation between human-assigned ratings and within-participant variance (see S2B Fig). This suggests that the RF model was more accurate in predicting human-rated emotion if participants expressed a wider range of emotional intensity.
Importance of AUs across task instructions
To identify which facial expressions human coders may have used to generate positive and negative emotion ratings, we examined the importance of all AUs in predicting human emotion ratings (Fig 5). Note that importance values for the RF do not indicate directional effects, but instead reflect relative importance of a given AU in predicting human-coded positive/negative affect intensity. The RF identified AUs 12 (lip corner pull), 6 (cheek raiser), and 25 (lips part) as three of the five most important AUs for predicting positive emotion. In contrast to positive ratings, relative importance values for AUs of negative ratings were distributed more evenly across AUs, a trend which was also found when the RF was fit individually to each coder (see Coder-specific AU importance measures below). Notably, the importance of AUs for positive and negative emotion ratings were largely independent. In fact, when the ICC (3) is computed by treating positive and negative importance weights for each AU as averaged ratings from two "coders", the ICC(3) is negative and non-significant (ICC(3) = -.48, p = .80), which would only be expected if different facial expressions were important for the coders to rate positive versus negative valence. Lastly, the RF identified stronger interactive effects between AUs for positive relative to negative affect intensity (Fig 5). Specifically, interactions between AUs 12 � 18 and 2 � 12 together accounted for~25% of the interactive effects for positive affect, which is exceedingly high given the 190 possible 2-way interactions. Conversely, interactions between AUs for negative affect intensity were more uniformly important, apart from the interaction between AUs 4 � 5. These differences in interactions between positive and negative affect may be partially attributable to the larger number of possible AU combinations that can indicate negative rather than positive affect. The partial dependence analysis measures revealed that the main effects of the 5 most important AUs were in the expected directions for both positive and negative affect intensity ratings (Fig 6). Specifically, AUs 12, 6, and 25 were positively related to increased positive affect intensity, while AUs 4, 5, 9, and 10 were positively related to increased negative affect intensity. Intriguingly, we found that AU18 was negatively related to increased positive affect intensity, which may be attributed to either its masking effects on AU12 or its relation anger. Indeed, the largest interaction for positive affect was between AUs 12 and 18, where high presence scores for AU12 in combination with low presence scores for AU18 predicted high positive affect intensity. For negative affect intensity, we found an interaction between AUs 1 and 5 such that negative affect was most intense when AU5 had high presence scores while AU1 had low presence scores, despite both AUs showing independent, positive relationships with increased negative affect. We found a similar relationship between AUs 5 and 9, which revealed that negative affect was strongest when AUs 5 and 9 had high and low presence scores, respectively. These finding may be attributable to AUs 5 relationships to fear, surprise, and arousal, of which arousal is often used as an indicator of more intense emotion by human judges (e.g, [51]).
Sensitivity of AUs to task instructions
To determine if task instructions (enhance, react normally, suppress) affected model performance or our interpretation of which AUs map onto positive and negative affect, we fit the RF model to all recordings from each condition separately and then compared model performance and AU importance scores across conditions. Table 2 shows correlations between human-and computer-generated valence ratings within the different conditions, and summary statistics for AU evidence scores within each condition are provided in S2 for negative ratings, correlations were highest in the enhance condition, followed by the react normally and suppress conditions. Of note, all correlations between human-and computergenerated ratings were lower when data were separated by condition compared to when condition was ignored (cf., Table 2 to Table 1). This suggests the lower number of recordings included in the training samples may be partially responsible for lower model performance, but also that CVML performs best when trained on a wider range of emotional intensity. Indeed, our supplementary analyses showed that when participants had lower variance in affect intensity (determined by either human or model ratings), the correspondence between human and model ratings tended to be lower as well (see S2 Fig). This finding suggests that lower model performance in the Suppression condition may be due to limited variation in human ratings for the model to predict. Despite only moderate correlations for negative ratings in these conditions, relative importance values for AUs across conditions showed minimal differences (Fig 7). In fact, ICCs between AU importance values across conditions were excellent for both positive and negative ratings (Fig 7). Taken with our supplementary analysis of variation in human ratings and model performance, these results suggest that the task instructions did not strongly influence the interpretation of important AUs for detecting positive and negative affect intensity across coders. Decoding facial expressions of emotion
Coder-specific AU importance measures
All three coders showed similarly-ordered importance profiles, indicating that they attended to similar AUs while generating emotion ratings (S3 Fig). Agreement between all three individual coders' importance profiles supported this claim-non-normalized ICC(3)s were high for both positive (ICC(3) = 0.93) and negative (ICC(3) = 0.90) importance profiles. The randomization test revealed how many recordings were necessary to adequately estimate the relative importance of AUs for each individual coder. For positive ratings, ICC(2)s for all 3 coders reached 0.75 (regarded as "excellent" agreement; see 39) after approximately 60 recordings/ratings. For negative ratings, ICC(2)s for all 3 coders reached 0.75 after approximately 150 recordings/ratings (see S4 Fig). Because the recordings in our task were 10 s long and coders rated positive/negative emotion intensity after each recording, the task used in the current study could be condensed to about 150 recordings (<30 minutes) and still reveal coder-specific AU importance measures with good accuracy. Future studies may be able to shorten the task even further by testing shorter video recordings (i.e., less than 10 s per recording).
Discussion
Our study offers strong evidence that people use discrete AUs to make wholistic judgments regarding positive and negative affect intensity from facial expressions, indicating that patterns of discrete AUs reliably represent dimensions of facial expressions of emotion (analogous to how specific patterns of AUs map to the basic emotions). Our CVML analysis identified AU12, AU6, and AU25 as especially important features for positive affect intensity ratings. Together, these AUs represent the core components of a genuine smile [52]. Note that AU12 and AU6 interact to signify a Duchenne smile, which can indicate genuine happiness [8], and previous research demonstrates that accurate observer-coded enjoyment ratings rely on AU6 [53]. Additionally, the five most important AUs we identified for negative affect intensity map on to those found in negative, discrete emotions such as fear and anger (AUs 4 and 5), disgust (AU9), and sadness (AU4). While AU12 and AU4 have been implicated in positive and negative affect for some time (e.g., [9]), this is the first study of its kind to determine the relative importance of these and other AUs in determining positive and negative affect intensity. Importantly, the strong correspondence that we found between specific sets of AUs and positive and negative valence intensity suggests that contemporary models of constructed emotion may be further tested against basic emotion theories in experimental settings. For example, future studies may investigate the time course of facial expression detection, where basic versus constructed emotion theories make differential predictions on whether basic emotional categories versus emotional dimensions are recognized more accurately and/or rapidly. Together, the AUs that we identified for positive and negative affect are consistent with prior studies suggesting that positive and negative facial expressions occupy separate dimensions [15,54]. Notably, the AUs accounting for the majority of the variance in positive affect had no overlap with those for negative affect, evidenced by near-zero ICCs, indicating that our human coders used distinct patterns of facial expressions to evaluate positive versus negative intensity ratings. The existence of distinct patterns of AUs which represent positive and negative affect intensity explains paradoxical findings that facial expressions can be simultaneously evaluated as both positive and negative (e.g., happily-disgusted; [10]). Importantly, prior studies have shown that automated facial expression recognition tools such as FACET sometimes fail to recognize blended expressions as accurately as human observers do, which is in part human observers rely strongly on affective valence whereas tools such as FACET rely on morphological features when making classifying expressions (e.g., AUs; [55]). Our results suggest that this inherent limitation of automated tools can potentially be overcome if morphological features are used to train models to predict valence intensity, which may then allow CVML to make better distinctions between prototypical and blended facial expressions. Further, our supplementary results suggest that the use of CVML to determine the relative importance of AUs for positive and negative affect recognition within individual coders is a potentially important avenue for future research. While the current study only determined relative AU importance for three trained coders (see S3 and S4 Figs), future studies may collect emotion ratings from larger, naïve groups of participants and perform similar analyses to assess for potential individual differences.
Our results also provide support for the use of CVML as a valid, efficient alternative to human coders, and with further validation we expect CVML to expand the possibilities of future facial expression research in the social and behavioral sciences. For example, adoption of automatic facial coding tools will allow researchers to more easily incorporate facial expressions into models of human decision making. Decades of research show clear links between facial expressions of emotion and cognitive processes in aggregate (see [56,57]), yet the dynamics between cognitive mechanisms and facial expressions are poorly understood in part due to difficulties accompanying manual coding. In fact, we are currently using computational modeling to explore cognition-expression relationships with the aid of CVML [58], which would be infeasible with manual coding of facial expressions. For example, in the current study it took less than three days to automatically extract AUs from 4,648 video recordings and train ML models to generate valence intensity ratings (using a standard desktop computer). In stark contrast, it took six months for three undergraduate human coders to be recruited, trained, and then code affect intensity across our 125 subjects-FACS coding would have taken much longer, rendering the scale of this project infeasible.
Models used in this study predicted positive emotion intensity with greater accuracy than negative emotion intensity, which may be due to the number of discrete facial actions associated with negative compared to positive emotional expressions. To support this claim, we found that importance scores for negative, but not positive, emotion ratings were spread across many different AUs and showed more variation across task instructions (Figs 5 and 7). This suggests that a wider range of facial expressions were used by coders when generating negative rather than positive emotion ratings. Future studies might address this with CVML models that can detect more than the 20 AUs used here. Additionally, our results suggest that negative affect intensity requires more training data for CVML than positive affect, as evidenced by large discrepancies in model performance between our CVML model that ignored the task instructions compared to those that we fit to data from each task instruction separately. Future studies might address this by devoting more time to collecting and coding negative, rather than positive, affective facial expressions.
Our interpretation of the computer-vision coded AUs in this study is potentially limited because we did not compare reliability of AU detection between FACET and human FACS experts. Additionally, FACET only detects 20 of the approximately 33 AUs described by FACS, so it is possible that there were other important AUs to which the human coders attended when generating valence ratings that we were unable to capture. However, our models showed excellent prediction accuracy on new data (i.e., capturing~80% of the variance in human ratings of positive affect intensity), and we identified theoretically meaningful patterns of AUs for positive and negative emotion intensity that are consistent with prior studies (e.g., components of the Duchenne smile). Crucially, of the AUs that were identified as important for positive and negative affect intensity, our interpretable machine learning analyses revealed that each AU had main and interactive effects that were in the theoretically predicted directions (e.g., AU12 and AU4 predicting increased positive and negative affect intensity, respectively). It is unlikely that we would achieve these results if FACET did not reliably detect similar, important AUs which represented the intensity of positive and negative facial expressions produced by our 125 participants. Further, because FACET is intended for commercial use, it has been trained on a large number of participants across a variety of different genders, ages, and ethnicities, which is likely why our model generalized well across ethnicities despite our predominantly Caucasian sample (see Supporting Information). Finally, as computer vision advances, we expect that more AUs will be easier to detect. CVML provides a scalable method that can be re-applied to previously collected facial expression recordings as technology progresses. Our interpretation of the relative importance of AUs for perceptual ratings of positive and negative affect intensity is clearly limited by our relatively low number of coders. However, the strong correspondence we found between human-and model-predicted affect intensity is made stronger by the number of subjects and recordings per subject used to train our models, and our supplementary analyses showed that our design may be expanded to larger numbers of "coders" (i.e. participants) with a substantially reduced number of recordings to empirically probe coder-specific AU importance measures for positive and negative affect intensity recognition (see S4 Fig).
Although this study investigated positive and negative affect, our method could easily be extended to identify facial actions that are associated with other emotional constructs (e.g., arousal). The ability to identify specific AUs responsible for facial expression recognition has implications for various areas within the social and behavioral sciences. Opportunities may be particularly pronounced for psychopathology research, where deficits and/or biases in recognizing facial expressions of emotion are associated with a number of psychiatric disorders, including autism, alcoholism, and depression [59][60][61]. CVML provides a framework through which both normal and abnormal emotion recognition can be studied efficiently and mechanistically, which could lead to rapid and cost-efficient markers of emotion recognition in psychopathology [62].
Supporting information S1 Fig. Sensitivity of model performance to different training scheme. Test set performance for the RF model fit using 1,000 training/test splits where separate participants were used to train and test the model. Note that performance for positive affect intensity-but not negative affect intensity-is indistinguishable from results reported in the main text (c.f. Fig 3), suggesting that models of negative affect intensity may require a more diverse set of training data (i.e. more participants) compared to positive affect intensity. 4) and the logarithm of within-participant human rating standard deviation (SD). Human-rated SDs were computed as the logarithm of the SD of human coders' ratings across a given participants' recordings. Cases with zero variance in human ratings (i.e., all ratings were "1") are excluded from this analysis. Correlations and the number of participants included in each comparison are superimposed on their respective graphs. All correlations are significant (ps < 0.001). (B) Pearson's Correlations between within-participant model performance (see Fig 4) and the logarithm of within-participant computer rating standard deviation. Computer-rated SDs were computed in the same way as human-rated SDs, but the model estimates were used in place of the true human ratings. All correlations are significant (ps < 0.001). (EPS) S3 Fig. Coder-specific AU importance measures. Partial dependence scores (not normalized to show relative differences) extracted from the RF model fit separately to each coder. Coders all show similarly ordered importance profiles, suggesting that they attended to similar facial expressions while generating emotion ratings. Note that positive importance estimates are distributed across fewer predictors (i.e., AUs 6, 12, and 18), whereas negative importance estimates are more spread out throughout all predictors. Agreement between all three individual coders' importance profiles was high, with ICC(3)s of .93 and .90 for positive and negative ratings, respectively. (EPS) S4 Fig. Number of recordings necessary to accurately estimate AU importance. Grid searches over the number of recordings/ratings necessary to achieve reliable estimates of AU importances for each valence-coder pair (coders appear in the same order as in S3 Fig). Reliability is indexed by the ICC(2) between AU importance profiles (i.e. partial dependence) extracted from the model fit to all the recordings that coders rated versus the model fit to subsets of recordings that they rated. Note that the ICC(2) assumes that importance estimates are "average" units (similar to ICC(3)s in Fig 6). The RF model was fit to each sample of size n along the x-axis, AU importance profiles were extracted from the model, and ICC(2)s were then calculated between the given sample and full-data AU importance profile scores. We iterated this procedure 20 times within each different sample size to estimate the variation in estimates across recordings. Shading reflects the 2 standard errors from the mean ICC within each sample across all 30 iterations. The red-dashed line indicates an ICC(2) of .75, which is considered "excellent". For positive ratings, the ICC(2) reached .75 after~60 recordings/ratings for each coder. For negative ratings, all coders reached an ICC (2) | 9,905 | 2018-11-04T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
Assessing the film-substrate interaction in germania films on reconstructed Au(111)
Purely amorphous germania bilayer films are grown on a reconstructed Au(111) surface. The presence of the film affects the native configuration of the Au soliton walls, as observed with scanning tunneling microscopy. They partly avoid the film islands, and partly penetrate under film patches. This behavior indicates a weaker filmsubstrate interaction than the one reported for other oxide films on reconstructed Au(111). Moreover, this new system highlights the impact of the metal support on the structure of ultrathin films of germania: With decreasing film-substrate interaction the amorphous phase is promoted. Density functional theory calculations confirm and rationalize the experimental observations. This work provides a useful generalization of the relationship between film structure and adhesion energy.
The Au(111) reconstructed surface has been widely used in surface science adsorption studies. The main reason for that is its combination of interesting properties ruled by its chemical inertness, high electronegativity, and large reconstruction. Concerning the latter, the surface reconstructs with a (22 × √ 3) periodicity with the presence of an additional gold atom in the [110] direction of the topmost layer [1][2][3][4]. This leads to two stacking domains, the hexagonal close packed (hcp) and the face centered cubic (fcc), which rotate periodically 120 • forming its well-known "herringbone" pattern. This surface termination serves as a nonreactive template for molecules and nanostructures [5]. The soliton walls that separate both domains are typically seen with a scanning tunneling microscope (STM) as bright parallel paired rows, as shown in Fig. 2(a) [6,7].
Here we report the successful preparation and characterization of germania bilayer films on a reconstructed Au(111) (22 × √ 3) surface. The strength of the film-substrate interaction is qualitatively estimated from the change of the herringbone reconstruction observed in STM and quantitatively determined by density functional theory (DFT) calculations.
The preparation procedure of the films follows the steps reported previously for germania films supported on Ru(0001) and Pt(111) [39][40][41]. The Au(111) single crystal surface is cleaned by several cycles of sputtering and annealing at 820 K for 15 min. The cleaning process stops when a clear herringbone reconstruction is observed with the STM [see Fig. 2(a)]. Next, germanium is evaporated from a graphite crucible using an electron-beam evaporator. The deposition and the subsequent annealing step are carried out in 2 × 10 −6 mbar pressure of oxygen. Remarkably, well-defined films are obtained at annealing temperatures (∼580 K) much lower than the one employed to grow germania films on Ru(0001) and on Pt(111) (∼820 K) [40,41]. The amount evaporated onto the surface is inferred from previous experiments and based on the STM image features [40,41].
A germania bilayer film supported on Au(111) is shown in Fig. 1. The film covers the total scanning area by 75%, while the other 25% [black in Figs. 1(a) and 1(b)] expose the gold surface, resulting in a coverage of 1.5 monolayers. The STM image in Fig. 1(c) has been taken in a small scanning area on one bilayer terrace. The film grows atomically flat and forms a number of different ring sizes that are color-coded accordingly in Fig. 1(c). A ring-size distribution from 4-to 8-membered rings is observed. One can note in Fig. 1(c) that the presence of three 6-membered rings sharing the same vertex is a preferred triplet combination. Additionally, in Fig. 1(b) areas with agglomerations of 6-membered rings can be observed. This is also observed in amorphous silica bilayer films supported on Ru(0001) [42]. However, the (6,6,6) triplet combination is very scarce in amorphous germania bilayer films on Pt(111), whose crystalline phase is formed by 8-and 5-membered rings [40]. In any case, due to the close similarity of the present film with this latter, we assume that the germania film on Au(111) grows in the same fashion, displaying a bilayer configuration, as we rationalize below.
In Fig. 2(b) we show another film prepared by depositing half of the amount of Ge (and keeping the other preparation conditions unchanged) with respect to the preparation of the film shown in Fig. 1. The area covered by the film is half of the one discussed above, suggesting once again the formation of a bilayer film. The film of coverage 0.6 monolayers [ Fig. 2 is characterized by small islands that are connected to each other by narrow stripes.
There are many similarities between the currently presented germania films on Au(111) and the previously reported silica films on Pt(111) [43], that are described next. In a range of coverages up to 2 monolayers, only bilayer films have been observed so far. Even after evaporation of small amounts of Si on Pt(111) and of Ge on Au(111) no monolayers are formed. Moreover, in both cases, the bilayer films are amorphous [43]. Silica bilayer films on Pt(111) of low coverage also exhibit islands bridged by narrow stripes. Unlike the germania films, the silica stripes follow the main crystallographic orientations of the Pt(111).
We have observed a bias dependency of the apparent height of the germania bilayer with respect to the bare gold. At a bias of 200 mV we measure a thickness of the film of ∼0.20 nm and increases to ∼0.35 nm at 1.5 V. In both cases the apparent heights are lower than the expected geometrical thickness for a germania bilayer (∼0.48 nm plus ∼0.30 nm of the interfacial distance). A similar underestimation of the STM measured thickness of the film has been observed in silica bilayer films on Pt(111) [43]. In both systems this difference is assigned to electronic effects typically observed in metal supported oxides [32,44].
The contrast of the STM image in Fig. 2(b) is tuned with the adaptive nonlinear color mapping mode available in the scanning probe microscopy data visualizer Gwyddion [45] so that one can simultaneously observe features of the bilayer and of the Au(111) reconstruction. Interestingly, by comparison with the pristine herringbone Au(111) [ Fig. 2(a)], one can see that the gold reconstruction is disturbed by the presence of the film [ Fig. 2(b)]. The herringbone reconstruction no longer has the long-range order and periodicity as in the bare substrate and the soliton walls form more complex patterns. However, some small areas between the film islands keep the same herringbone configuration as in the clean Au(111). The rotational angle of the domains is still 120 • , as is shown with markers in Figs. 2(a) and 2(b). In addition, Fig. 2(c) exhibits STM profile lines taken along the black line in Fig. 2(a), and the red line in Fig. 2(b). The z-profile lines are perpendicular to the direction of the soliton walls. The agreement in the distance of ∼6.3 nm between the soliton walls in both systems manifests that the distance between double rows is unaltered.
STM images of different germania bilayer film preparations are depicted in Fig. 3. The STM images have been locally equalized in order to better visualize the Au (111) reconstruction. In Fig. 3(a) the herringbone reconstruction surrounding a germania bilayer island is affected in such a way that the soliton walls avoid the unreconstructed Au(111) surface underneath the film. Similar behavior has been observed for annealed islands of fullerenes and for MoO 3 monolayer films on the same substrate [21,35]. However, it is also noted that some soliton walls continue under the bilayer film, as it is shown in the area enclosed by the black circles in Figs. 3(b) and 3(c). Thus, the interaction between the film and the Au(111) support is such that the herringbone reconstruction is partly lifted and partly remains under the film.
Further, we have investigated these film characteristics using DFT calculations [46,47]. However, given that the (22 × √ 3) periodicity typically observed in reconstructed Au(111) leads to unfeasible calculations, we hereby adopt a modeling strategy discussed in a previous paper [38], namely, we first relax the Au bulk lattice constant and cut a five-layers-thick slab along the (111) surface; subsequently, the ionic position of the three upper layers is relaxed, keeping the two bottom ones fixed at bulk lattice positions. Finally, a model for the hcp region is created by inducing a 3% compression of the surface interatomic distances while keeping the interlayer distance fixed. We adopt a PBE+D2 functional, yielding a lattice parameter of Au equal to 0.412 nm (close to the x-ray diffraction value of 0.408 nm [48]), corresponding to a surface interatomic distance of 0.291 nm. Upon 3% compression, the interatomic distance reduces to 0.282 nm. It is worth noting that the two surface regions display slightly different work functions, 5.13 and 5.24 eV for the fcc and hcp models, respectively, which compares well with the literature values [49]. An intrinsic limit of this approach, consisting of separate simulations of the fcc and hcp domains, is that the ridges characterizing the herringbone reconstruction are not explicitly included in the models. It is therefore not possible to directly observe their lifting as seen in the experiments, while considerations regarding the nature and the strength of the film-substrate interactions can be done by comparison to similar computational studies. More computational details and the relaxed atomic coordinates for the fcc and hcp systems can be found in the Supplemental Material [50].
In the previously studied cases of hexagonal germania bilayer deposition on Ru(0001) and Pt(111), a simple coincidence of a (1 × 1) film cell on a (2 × 2) substrate cell led to an acceptable strain [40,41]. This is not the case for GeO 2 /Au, where such a coincidence displays a tensile strain as large as 6% and 3% for the fcc and hcp Au(111) regions, respectively. We therefore create models displaying a convenient moiré pattern in order to accommodate the germania film on the gold substrate with a reasonably small lattice mismatch. For the fcc domain a (3 × 3) germania cell is put on a ( √ 31 × √ 31) Au supercell with a rotation of 9 • (depicted in Fig. 4). For the hcp domain, a (4 × 4) GeO 2 cell is put on a ( √ 61 × √ 61) Au supercell with a rotation of 26 • .
The calculation results using this approach can be found in Table I, where the strain, the adhesion energy (E ad ), the average interface distance (d), the amount of electronic charge transferred to the film for surface unit (Q), and the work function change with respect to the bare metal surface ( ) are reported for both the fcc and hcp regions of Au(111). Additionally, for comparison, the table shows the adhesion properties for germania and silica bilayer films on Ru(0001) and Pt(111), NaCl bilayer films on Au(111), and MgO bilayer films on Ag(100). The phases hex and 558 correspond to the formation of a network of 6-membered rings or a combination of 5-and 8-membered rings [40]. All the data have been obtained using the same computational approach, PBE+D2 . The adhesion energy is defined as [40] where E is the total electronic energy of the X A n /M supported film, the freestanding film, and the metal support, respectively. X = Ge, Si, Na, or Mg; A = O or Cl; n = 1 or 2; M = Ru, Pt, Au, or Ag, and S is the supercell area. The adhesion energy on the fcc domain is very small (−1.54 eV/nm 2 ). On the hcp domain, a slightly larger value is reported (−1.75 eV/nm 2 ). Both values are significantly smaller than the adhesion energy calculated for the hexagonal germania bilayer films on Pt(111) and Ru(0001) (−2.20 eV/nm 2 and −6.78 eV/nm 2 , respectively). Moreover, the mean interlayer distance is larger for the hexagonal film on Au(111) (0.301 nm), than on Pt(111) (0.288 nm) and on Ru(0001) (0.217 nm). The mentioned calculated adhesion parameters are in line with the fact that germania supported on Au(111) forms pure amorphous bilayer films, while on Ru(0001) and Pt(111) the observed structures are influenced by the presence of the metal support [40,41]. Moreover, a bilayer film of MgO, a typical ionic oxide, is bound three times stronger to the Ag(001) substrate [51] than the germania film to the Au(111). Notably, the system NaCl/Au(111) [51], an example of an ideal physisorbed film, presents adhesion properties similar to GeO 2 /Au(111). Additionally, similar values are observed for the hexagonal film of SiO 2 /Pt(111), in agreement with the structural similarities, discussed above.
These experimental and DFT results contribute to the understanding of the role of the metal support in the oxide film structure. For instance, ultrathin films of silica (a material structurally analogous to germania) are significantly altered by the nature of the metal support. More specifically, the oxygen adsorption energy of the metal substrate may determine the structure of the silica film [43]. On high oxygen affinity metal substrates, such as Mo, silica forms only chemically bonded monolayer films [52]; on inert metals, such as Pd and Pt, the film-substrate interaction is weaker and forms amorphous decoupled (or crystalline and nonconmensurate) bilayer films [43,[53][54][55]; and on intermediate metal supports, such as Ru, exhibits both types of behaviors [56][57][58]. Jhang et al. has shown by doping the silica films that the structure of the film is also heavily affected by the strain and charge transfer with the metal substrate [53]. Interestingly, silica bilayer films interact only through van der Waals forces with the metal substrate, as was determined by infrared reflection absorption spectroscopy and DFT calculations [58]. In fact, the weak coupling to the substrate is responsible for the randomly oriented unit blocks of SiO 4 that give rise to amorphous networks. In our recently reported ultrathin germania films on Ru(0001) and Pt(111), we observe a similar correlation between the atomic structure of the film and the metal support to the one in the silica films [39][40][41]. While on Ru(0001) germania forms very stable hexagonal monolayer films [39] and ill-defined bilayer films [40], on Pt(111) forms a wide range of structures: monolayer films, crystalline and amorphous bilayer films [41], and a zigzag-line phase (comparable to the silica one [59]). On Au(111), a chemically inert surface, only amorphous bilayer films are observed, as expected for a film weakly coupled to the support. To summarize, in contrast to earlier observations for oxide films on reconstructed Au(111) [32][33][34][35], several soliton walls penetrate below the germania bilayer film patches. This behavior indicates a weaker film-substrate interaction than the one observed for other oxide films. In other words, the changes in the soliton wall behavior of the Au(111) reconstruction upon film coverage yield a qualitative measure for the interaction strength. For the present ultrathin GeO 2 /Au(111) system, the consequence of the observed weak film-substrate interaction is an amorphous germania film growth. Our experiments and the theoretical modeling with DFT calculations highlight quantitatively the impact of the metal support on the oxide film structure concerning strain, adhesion energy, charge transfer, and, most importantly in the present context, crystalline versus amorphous growth. This project has received funding from the European Research Council (ERC) under the European Unions Horizon 2020 Research and Innovation Program (Grant Agreement No. 669179). S.T. and G.P. thank the Italian MIUR for financial support through the PRIN 2017 Project MULTI-e and CINECA for granting access to supercomputing resources via the ISCRA initiative. F.S. acknowledges the Alexander von Humboldt Foundation, MPG partner group program, FAPERJ, and CNPq grants. | 3,575.6 | 2019-12-15T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Current transport and electroluminescence mechanisms in thin SiO 2 films containing Si nanocluster-sensitized erbium ions
containing Si nanocluster-sensitized erbium ions O. Jambois, Y. Berencen, K. Hijazi, M. Wojdak, A. J. Kenyon, F. Gourbilleau, R. Rizk, and B. Garrido Dept. Electrònica, MIND-IN2UB, Universitat de Barcelona, Martí i Fanquès 1, 08028 Barcelona, CAT, Spain CIMAP, UMR CNRS 6252, 6 Boulevard Marchal Juin, 14050 CAEN, France Department of Electronic and Electrical Engineering, University College London, Torrington Place, London WC1E 7JE, United Kingdom
I. INTRODUCTION
There are various methods to obtain electroluminescence ͑EL͒ from silicon-based devices, but there is also a need for optimization of devices and better understanding of underlying photogeneration and current transport processes.For the EL signal from silicon-rich silica thin films, the emission is usually attributed to silicon nanocrystals 1,2 or nanoclusters 3,4 ͑Si-ncls͒ dispersed in the SiO 2 matrix.In order to maximize potential applications, other approaches, including doping stoichiometric silica with rare earth ions, have been studied. 5,6In most of the cases, the mechanism of conductivity was identified to be Fowler-Nordheim tunneling, and the emission was attributed to impact excitation of the ions by hot electrons.Different strategies are proposed at the device level in order to improve EL efficiency in metal oxide semiconductor ͑MOS͒ structure, such as the insertion of a nitride layer to control the energy of injected electrons, 7 or the design of plasmon-enhanced MOS devices. 80][11] However, the role of the Si-ncls in the EL of Er and conduction mechanisms is not clear.It is agreed that the presence of Si-ncls introduces more efficient conduction mechanisms, including variable range hopping, 12 direct tunneling ͑DT͒, trap-assisted tunneling, Poole-Frenkel ͑PF͒ conduction, 13 or space charged limited current ͑SCLC͒. 11Si nanoparticles favor the injection of charge carriers, improve device lifetime, 14 reduce the population of hot electrons, 15 and consequently reduce impact excitation of Er. 10,16 It has also been argued that other mechanisms of EL can be introduced, including energy transfer from electrically excited Si-ncls to erbium ions. 13In general, different processes can be dominant depending on material composition, film thickness, or voltage regimes.An understanding of injection and transport of carriers in erbium-doped silicon-rich silica ͑SiO x :Er͒ is a prerequisite to the design of efficient devices.
We have tested a series of MOS structures that enabled us to study the carrier transport and EL from Er 3+ ions as a function of both silicon and erbium contents.The goal of the present paper is to determine the influence of the Si-ncls on Er EL, on the conductivity of the layers, and on the power efficiency of the device.Finally, our measurements have enabled us to develop a model for the charge transport and EL mechanisms.
II. EXPERIMENTS
The layers were grown on p-type B-doped Si substrates by magnetron cosputtering of three confocal cathodes, SiO 2 , Er 2 O 3 , and Si, under a pure Ar plasma.The rf power applied to each cathode permits the control of the film composition, i.e., the incorporation of Si and Er in the thin layer.More details on the deposition process can be found elsewhere. 17n total, three different SiO x : Er layers were fabricated, with thicknesses around 30 nm.Thickness was determined by spectroscopic ellipsometry measurements.The layers were annealed at 900 °C for 1 h in nitrogen.
X-ray photoelectron spectroscopy ͑XPS͒ spectra were measured with a Perkin-Elmer PHI-5500 instrument using Al K␣ radiation.XPS depth profiles were obtained by measuring the spectra after sputtering the samples to different thicknesses with an Ar + ion beam at 4 keV.The time-offlight secondary ion mass spectrometry ͑TOF-SIMS͒ analyses were performed using a TOF-SIMS IV ͑ION-TOF, Munster, Germany͒, equipped with a Bi primary ion source, a Cs/ O 2 electron impact dual source column, and a low-energy electron flood gun ͑for charge compensation of insulating samples͒.
Photoluminescence ͑PL͒ measurements were performed with a DPSS laser emitting at 473 nm, a Bentham M300 single grating monochromator, and a NIR-sensitive Hamamatsu photomultiplier ͑R5509-72͒.
The electrical contacts on the back side of the wafers were deposited by successive evaporation of chromium and gold.The top contacts were 60 nm sputtered indium tin oxide ͑ITO͒.Different areas from 1.56ϫ 10 −2 to 1 mm 2 were used.To test the transparency of ITO, a series of test samples on glass was prepared, and they showed more than 90% transmission at 1.55 m.Current-voltage characteristics were performed with an Agilent B1500A semiconductor parameter analyzer.EL measurements were performed with a Hamamatsu G8605 photodiode cooled to −30 °C and a long pass filter with cut-on at 1.4 m to integrate the light coming from the band at 1.55 m or an Oriel MS257 monochromator to spectrally resolve the light.The electrical excitation was continuous, not pulsed.
A. Structural characterization and PL properties of the layers
XPS measurements were performed to characterize the Si excess.To determine the concentration of Er that is below the resolution of XPS, the sensitive TOF-SIMS technique was used and calibrated with Rutherford backscattering spectroscopy on another SiO x : Er reference sample.The results for Si excess and Er concentration are given in Table I.
The PL of the four layers was characterized before depositing the electrodes.For each Si-rich layer, a luminescence band at 1.54 m is observed when pumped at 473 nm, which is clearly attributed to the 4 I 13/2 -4 I 15/2 transition in the internal 4f shell of the Er 3+ ions.A typical PL spectrum can be seen in Fig. 1.As Er is not pumped at a wavelength corresponding to a resonant transition, this suggests that Er ions are indirectly excited by the Si-ncls thanks to energy transfer, as proposed in literature. 18,19In the inset of Fig. 1, the PL intensity of the three Si-rich samples has been normalized to the layer thickness and the Er concentration.Different intensities are found, but no clear dependence on Si excess or Er concentration can be inferred.These differences in PL intensities are attributed either to different fractions of optically active Er in the layers, or a different fraction of Er ions coupled to Si-ncls.The fact that the layer with the higher Er content has the lower PL intensity suggests also that for this layer there is some Er agglomeration.
B. Conduction and EL properties
In Fig. 2, we present a typical J-V characteristic obtained for one device in both polarities.By convention, positive voltage to the p-type substrate corresponds to forward polarization.It can be seen that the current increases by several orders of magnitude over the range of voltages used here, which is typical of strong insulators.At lower voltages, the J-V curves are symmetric for both polarities, which suggests that the current is limited by the bulk of the active layer, and not by the electrode. 20However, note that this is not true at higher voltages, as a saturation of the current can be observed in reverse polarization.This can be attributed to the lower density of electrons in the p-type substrate in inversion that can be injected in the dielectric so the injection becomes interface limited.
In the inset of Fig. 3, we present an EL spectrum showing an EL band at 1.55 m, which is clearly attributable to the Er ions in the active layer.Note that the EL spectrum appears broader than the PL spectrum as for EL the slits of the monochromator were opened to the maximum to detect as much light as possible.This leads to a higher signal but lower resolution and a broadening of the spectrum.This kind of spectrum was obtained for samples that contain excess Si ͑C426, C446, and C439͒ when in forward bias.As can be seen in Fig. 3, the EL intensity increases with the applied voltage.Under negative no EL was observed.It is also worth mentioning that sample C422, which has no Si excess, does not show any EL at 1.55 m for any polarization.
In order to understand the conduction mechanisms, we present in Fig. 4͑a͒ the current density-electric field ͑J-E͒ characteristics of the four samples.The electric field, given by the voltage divided by the active layer thickness, has been increased as much as possible before seeing breakdown of the device.In Fig. 4͑a͒, we can see a strong variation of the current with applied field, in particular for the Si-rich samples ͑C426, C439, and C446͒.Moreover, the current is strongly enhanced when a Si excess is introduced in the silicon oxide, as the current is increased by four or five orders of magnitude.This shows that the transport of charges occurs through the Si nanoparticles.This is also supported by the increase in conductivity with Si excess.If we compare the current densities to the ones given by other groups on similar materials made by sputtering, 11 the values found in the present study are much lower.This could suggest that the layer we have fabricated contains fewer matrix defects, leading to a lower conductivity.
Different models are known to describe current transport in silicon oxides through defects-for example, phononassisted tunneling, DT, field assisted thermal ionization, or SCLC. 20,21The first two correspond to tunnel escape of charges from trap to trap.At higher electric fields, transport is generally well described by the Fowler-Nordheim mechanism, which corresponds to injection of charges from the electrode into the conduction band of the dielectric by tunneling through a triangular potential barrier.In our case, the data for the Si-rich samples are best described by the PF model, that is to say, the field-enhanced thermal emission of carriers from trapping sites in the insulator.This law is generally described by the following equation: where kT is the thermal energy, e is the electron charge, 0 is the vacuum permittivity, and r is the relative permittivity of the film. 20From the fit, the value of r can be deduced.From the effective medium theory, the value of r can vary between 4 and 12, corresponding to the permittivities of SiO 2 and of pure Si, respectively.Figure 4͑b͒ shows the J-E curve in the PF representation, i.e., the logarithm of the ratio J / E versus the square root of E. A straight line in a large range of J / E can be found for the Si-rich samples, giving values of permittivity consistent with the effective medium theory.For samples C426, C446, and C439, values of permittivity of 6.6, 6.9, and 7.9 were found.The increasing value of the permittivity with the Si excess is further evidence of the role of the Si-ncls in charge transport.The dominance of this mechanism in these kinds of layers is in agreement with what has previously been reported, 13,14,21 but contradicts other results, 11 in which a SCLC-type mechanism and much higher current density current even at low electric field were reported.We attribute this difference to the lower matrix defect density in the samples presented here than in Ref. 11, consistent with the lower currents we measure.It was also intended to fit the I-V characteristics with the Fowler-Nordheim law discussed above.In principle this kind of mechanism can lead to the injection of hot electrons in the active layer.This process is well known to be dominant for silicon oxides or silicon oxides with Si-ncls at high electric fields, when the films contain a much lower content of matrix defects.For example, we reported that layers of thermal SiO 2 implanted with Si and annealed at high temperature show good agreement with Fowler-Nordheim injection, and the EL mechanism of excitation of the Si-ncls was attributed to impact ionization. 22This is also the case reported by Nazarov et al., 10 who observed a Fowler-Nordheim-type injec-FIG.3. EL power vs voltage of sample C439.The inset shows an EL spectrum 4. ͑Color online͒ J-E characteristics of the layers ͑a͒ in log scale and ͑b͒ in PF representation.The legend that appears in ͑a͒ applies for both pictures.
tion in SiO x : Er layers that have been fabricated by sequential implantation of Si and Er in a SiO 2 layer made by thermal oxidation of a Si substrate.In this study, the higher density of defects due to the preparation method seems to favor the injection of charges inside the dielectric, preventing hot electron injection, when the electric field is not too high.In the case of sample C422 ͑no silicon excess͒, different behavior is observed.Although the fit with the PF model seems reasonable, a permittivity of about 20 is found.This suggests that a mechanism of thermal ionization assisted by electric field could occur, but with some differences.In fact, a better fit with the SCLC mechanism could be found.This law is generally well fitted by a power dependence of I with V.In the ideal case, the exponent is equal to 2, as can be analytically found if we consider a free of traps or with shallow traps situated at a constant energy below the conduction band.In the case of a distribution of deeper states, higher values of exponent can be found. 23In our case, an exponent of 7 has been found, which suggests that the defects present in the layers that assist conduction show a large energy distribution; this is understandable as the medium is essentially amorphous.
The power efficiency of EL is defined as the ratio between the power of the emitted light and the input electrical power.This value has been estimated for the three Si-rich layers by carefully calibrating the EL setup.We obtained power efficiencies of 2.3ϫ 10 −4 %, 1.2ϫ 10 −3 %, and 1.1 ϫ 10 −4 %, for samples C426, C446, and C439, respectively.There is no clear tendency that appears with Si excess or Er concentration.This is because the system is governed by several parameters, including the Si-ncl density and the fraction of coupled Er.In fact, the EL results are contrary to the PL results of Fig. 1; i.e., the sample with the highest PL intensity has the lowest EL power efficiency.Actually, PL and EL do not necessarily coincide as, on one hand, for optical pumping the luminescent centers have to be well isolated to ensure the best confinement, whereas for electrical pumping the luminescent centers have to be close to one another to allow charge transport. 11,24Moreover, even if the layers can be optimized by PL in terms of Er coupled to Si-ncls, we have to deal with different paths for the flow of charges, which can be matrix defects, Si-ncls not coupled to Er ions, or Si-ncls that are coupled to the Er ions.Among these three, only the last one allows excitation of Er, and the samples have to be prepared in order to favor this mechanism.The low matrix defect density has allowed us to obtain a power efficiency of 1.2ϫ 10 −3 % for the best result, corresponding to an external quantum efficiency of 0.03%.In general, authors only report external quantum efficiency, which is the upper limit for power efficiency, 25 but power efficiency values are the one required for application purposes.To our knowledge, the value of power efficiency we report is the highest reported in Si-ncl sensitized Er ion systems.One expects a significant increase in this number for optimized material that is currently the object of intense effort.
C. Excitation mechanisms
Having clearly demonstrated the role of Si-ncls in the enhancement of the conductivity of the stoichiometric layer, we have studied the mechanism of Er excitation.We can imagine three scenarios that may occur in those layers leading to excitation of the Er ions: ͑i͒ impact ionization of Er directly, so the Si-ncls act only as paths for conduction; ͑ii͒ impact ionization of the Si-ncls and transfer of the excitonic energy to the nearest Er; or ͑iii͒ injection of electron and hole in the same Si-ncls, formation of an exciton, and transfer to the closest Er.In the case of impact ionization, the electron needs to reach a high kinetic energy.When it is high enough, the collision of this electron with a Si-ncl allows the creation of an exciton confined inside the Si-ncls, which can transfer its energy to the Er ions.Also, the hot electron can excite Er directly by collision.If the electron does not collide with the scattering centers ͑Si-ncls, Er, and matrix defects͒, it will impact the Si substrate.Thus, if hot electrons can be created by field ionization, radiative emission of the Si substrate due to impact ionization should be observable.In Fig. 5, an EL spectrum has been measured from 1 to 1.7 m, to observe luminescence of Er at 1.55 m, and that from the Si substrate at around 1.15 m.The spectrum is measured at a bias of 22 V, and the device has never previously been subjected to voltages larger than 22 V.By increasing the electric field the EL intensity at 1.55 m increases, as has already been seen in Fig. 3.However, when the applied voltage becomes higher than 35 V, the EL at 1.55 m drops suddenly, and a new band appears at 1.15 m.At the same time, the current shows also a noticeable increase ͑not shown͒.When reducing the bias to 22 V, the spectrum remains changed across the whole measured range, as can be seen in Fig. 5; the new band at 1.15 m remains, and the EL of Er is lower.
In Fig. 5, before applying voltages higher than 35 V, no luminescence from the Si substrate is observed, suggesting that hot electrons are not injected in the active layer.This is supported by the fact that no Fowler-Nordheim injection was observed.Instead, the charges are trapped by the matrix traps ͑Si-ncls or matrix defects͒ with a high probability because of their high density.If the electrons increase their kinetic energy ͑become hot͒ due to the electric field, they would thermalize very quickly.This suggests that the light emitted from Er does not come from impact ionization either of Er or of the Si-ncls.It appears more likely that in these layers we see bipolar injection of electrons from the ITO electrode and of holes from the accumulation region in the Si FIG. 5. ͑Color online͒ EL spectrum from 1000 to 1700 nm at 22 V before applying 35 V ͑full squares͒ and after applying 35 V ͑empty diamonds͒.substrate into the Si-ncls, and that a transfer of the excitonic energy to the Er ions occurs.The fact that no EL is observed in reverse polarization suggests that holes are necessary to obtain Er EL.Moreover, another indication of the indirect excitation of Er is the fact that the SiO 2 : Er layer does not show any EL.If we now consider the regime where very high electric fields are applied to the device-close to the layer breakdown-additional traps can be generated by the applied field.These new traps constitute a new channel for the conduction of charges, leading to a higher current, and some of the charges can reach the conduction band of the matrix.As the mobility is much higher there, electrons can remain hot and reach the Si substrate.This explains the appearance of a luminescence band at 1.15 m.Finally, when coming back to 22 V, the defects generated by the high electric field are still there; hence it is still possible to generate hot electrons and the EL band due to the Si substrate remains.The decrease in the intensity the EL band could be due either to a lower fraction of charges that flow through the traps inside the gap, or to a nonradiative interaction between excitons and hot electrons ͑Auger͒.
From all the above results, we propose a model describing the conduction of charges in our sample for different regimes of electric field, and explaining the different EL bands emitted from the device.We schematize in Fig. 6͑a͒ the mechanism in an energy-space diagram at low voltage.
The active layer in the center is sandwiched between the ITO semimetallic electrode and the p-type Si substrate.The high density of traps characterizing the films ͑Si-ncls or matrix defects͒ allows conduction of charges with a reduced mobility compared to an electron in the conduction band, and hence a PF mechanism occurs.At low voltages in forward bias, electrons are injected from the ITO electrode, and holes from the Si substrate.EL from Er can be observed: we symbolize it by the twisted arrow in the center of the active layer.When a higher voltage is applied, the electric field is close to breakdown ͓see Fig. 6͑b͔͒, and it is likely that defects are created inside the layer.The charges follow these new channels, and are injected into the conduction band of the silicon oxide matrix.There the electrons can become hot, as they suffer less scattering than in the pseudoconduction band created in the gap of the oxide.They are thus able to impact with the substrate, and this explains the luminescence of the substrate that appears at high fields, as observed in Fig. 5, as well as the observed increase in the current intensity.Finally, when coming back to lower field ͓see Fig. 6͑c͔͒, the defects created by the high electric field are still there, and charges can still be injected into the conduction band of the matrix.This leads to a new situation in which a new luminescence band appears at 1.15 m.As a fraction of electrons are deviated from their original trajectory, less EL at 1.55 m is obtained.
IV. CONCLUSIONS
In summary, we have studied the conduction and EL mechanisms in SiO x : Er layers of different compositions, as well as their power efficiency.The presence of Si-ncls leads to an increase in the current of four to five orders of magnitude.The conduction mechanism is found to be dominated by field assisted thermal ionization ͑PF͒ from Si-ncls to Sincls.At lower voltages, no evidence of hot electron injection has been seen.The experiments allow us to attribute the erbium EL to a transfer of the excitonic energy of the Si-ncls to Er ions, and to discount impact ionization of either the Si-ncls or Er ions.A power efficiency above 10 −3 % has been found for the best device.A model is proposed to explain the different regimes of luminescence with the applied voltage that occurs in these devices.
TABLE I .
Results for Si excess and Er concentration of the four monitor layers. | 5,091.8 | 2009-09-24T00:00:00.000 | [
"Physics",
"Materials Science"
] |
The Effect of Nonionic Surfactants on the Kinetics of Methane Hydrate Formation in Multiphase System
: Gas hydrate inhibitors have proven to be the most feasible approach to controlling hydrate formation in flow assurance operational facilities. Due to the unsatisfactory performance of the traditional inhibitors, novel effective inhibitors are needed to replace the existing ones for safe operations within constrained budgets. This work presents experimental and modeling studies on the effects of nonionic surfactants as kinetic hydrate inhibitors. The kinetic methane hydrate inhibition impact of Tween-20, Tween-40, Tween-80, Span-20, Span-40, and Span-80 solutions was tested in a 1:1 mixture of a water and oil multiphase system at a concentration of 1.0% ( v / v ) and 2.0% ( v / v ), using a high-pressure autoclave cell at 8.70 MPa and 274.15 K. The results showed that Tween-80 effectively delays the hydrate nucleation time at 2.5% ( v / v ) by 868.1% compared to the blank sample. Tween-80 is more effective than PVP (a commercial kinetic hydrate inhibitor) in delaying the hydrate nucleation time. The adopted models could predict the methane hydrate induction time and rate of hydrate formation in an acceptable range with an APE of less than 6%. The findings in this study are useful for safely transporting hydrocarbons in multiphase oil systems with fewer hydrate plug threats.
Introduction
Gas hydrates are crystalline inclusion compounds that consist of small guest molecules encapsulated in hydrogen-bonded water cages under suitable thermodynamic conditions [1]. The guest molecules are typically gases such as methane, ethane, and CO 2 , or liquids such as tetrahydrofuran and cyclopentane [2]. The packing of the guest to the cage size ratio of gas hydrates determines its crystalline structure either in a cubic structure I, cubic structure II, or a hexagonal hydrate structure [3], as shown in Figure 1. Gas hydrate formation is a promising technology for gas separation and transportation, water desalination, and refrigeration [4][5][6][7]. However, the formation of gas hydrates in pipelines is one of the major flow assurance problems in the oil and gas industry. Gas hydrate formation may block the production facilities and eventually hinder natural gas production [8,9]. The progressive field development of oil and gas explorations into deep Gas hydrate formation is a promising technology for gas separation and transportation, water desalination, and refrigeration [4][5][6][7]. However, the formation of gas hydrates in pipelines is one of the major flow assurance problems in the oil and gas industry. Gas hydrate formation may block the production facilities and eventually hinder natural gas production [8,9]. The progressive field development of oil and gas explorations into deep water exposes production pipelines to hostile operating environments, which are prone to gas hydrate formation. In oil-dominated systems, the oil phase complicates hydrates' plug formation in pipelines, hence the need to prevent such hydrate plugs for safe production. This necessitates investigations to understand the role of inhibitors in order to prevent gas hydrate formation in gas/oil/water systems.
To prevent and manage hydrate formation, four methods, namely, heating, dehydration, chemical injection, and depressurization, can be used. However, due to its economic and technical feasibility, chemical injection is extensively used. The chemical additives for hydrate inhibition are classified into two main groups, namely, thermodynamic hydrate inhibitors (THIs) and low-dosage hydrate inhibitors (LDHIs). The LDHIs are further divided into kinetic hydrate inhibitors (KHIs) and anti-agglomerates (AAs). THIs such as methanol and glycol shift the hydrate-liquid-vapor curve (HLVE) to a lower temperature and a higher pressure zone [1,10]. THIs have some disadvantages related to their volatility, environmental issues, and economical concerns since their use in high concentrations (~50 wt.%) is needed for them to be effective. Therefore, LDHIs were introduced to retard the onset of hydrate formation by delaying the nucleation (KHIs) or preventing the agglomeration of formed small hydrate crystals (AAs). LDHIs are used at low concentrations (0.1-2 wt.%) such as in polyvinylpyrrolidone (PVP), polyvinyl caprolactam (PVCap), and cationic surfactants.
Surfactants are surface-active substances that play a crucial and diverse role in gas hydrate-related applications. Cationic surfactants such as cetyltrimethylammonium bromide (CTAB) can act as effective anti-agglomerates due to their ability to prevent the agglomeration of the hydrate crystals into bigger particles [11]. These surfactants adsorb onto the hydrate crystals' surface to keep them suspended in a transportable slurry [12]. Contrarily, anionic surfactants such as sodium dodecyl sulfate (SDS) promote hydrate formation by reducing the adhesive forces between hydrate crystals allowing for a larger mass and faster hydrate growth [13]. Non-ionic surfactants such as Span and Tween are used as emulsifiers for many applications. They offer many advantages over ionic surfactants including increased stability, formulating flexibility, and biodegradability. Additionally, being non-ionic, these surfactants are widely compatible and stable in many fluid systems including freshwater, saline water, mild acids, and alkaline, and do not react with ionic compounds and charged substances [14]. These benefits of non-ionic surfactants have encouraged their potential application in hydrate-related studies.
Ganji et al. [15] studied the effects of different surfactants on the methane hydrate formation rate, stability, and storage capacity. The study revealed that the presence of surfactants reduces the gas storage capacity less than the maximum theoretical value for the Structure I hydrate. This implies that surfactants act as anti-agglomerates by preventing hydrate crystals from agglomerating [15]. Pan et al. [16] claimed that Span and Tween series can affect the methane gas hydrate in an emulsion system of 40% water and 60% diesel. The surfactants under study were Span-20, Span-80, Tween-20, and Tween-80. The results show that the surfactants promoted the growth of hydrates in diesel emulsion systems, shortening the hydrate reaction time while improving the gas storage density. In 2003, Sun et al. [17] studied the effect of SDS and a non-ionic surfactant, dodecyl polysaccharide glycoside (DPG), on the methane gas hydrate formation rate and storage capacity. DPG could increase the methane hydrate formation rate and improve the gas storage capacity (79 volume gas/volume hydrate). However, the effect of SDS in this context is still superior compared to DPG (146 volume gas/volume hydrate). The study also concluded that the induction time of hydrate formation is accelerated with the presence of cyclopentane (CP) but the hydrate storage capacity is reduced. According to Zhang et al. [18] in 2008, Tween surfactants have a great promotive effect on the hydrate formation rate because the surfactants can facilitate the dissolving of the gas molecules in the aqueous phase, enhancing the mass transfer of the gas molecules from a bulk phase to form a hydrate. The dissolution of gas is more profound when the surfactant concentration exceeds the critical micelle concentration in the system.
It can be concluded from the reviewed literature that most of the surfactant applications on hydrates are focused on kinetic-promotive effects and anti-agglomerate abilities. This implies that these surfactants could work as inhibitors or promoters depending on the aim of the application and the concentrations at which they are used. Aside the hydrate-promotive and anti-agglomeration tests of surfactants, their KHI potentials are rarely reported in multiphase systems. In addition, the hydrate formation behavior modeling in the presence of surfactants in multiphase systems is underreported and needs to be evaluated to provide data for correctively mitigating hydrate plugs in process flow assurance facilities.
In this work, the effect of six nonionic surfactants operating as kinetic methane hydrate inhibitors was studied in a multiphase system (gas + water + oil) using a non-hydrate former drilling fluid-based oil to represent the oil phase. The induction time, rate of gas consumption, and amount of gas consumed were measured. The effects of the temperature and inhibitor load were studied. A new modelling study was conducted to predict the induction time of hydrate formation in multiphase systems using the classical nucleation theory. Additionally, an empirical correlation was developed based on Englezos' model to predict the rate of hydrate formation. The model considered the variation in the inhibitor concentration and the operating temperature.
Material
The nonionic surfactants used in this study were Tween-20, Tween-40, Tween-80, Span-20, Span-40, and Span-80. The surfactants were purchased from Sigma Aldrich and used without further purification. The details of the chemicals including the HLB (hydrophilichydrophobic balance) and purities of the surfactants are summarized in Table 1. Drilling fluid base oil (MG3), supplied by PETRONAS Sdn Bhd, was used to mimic the oil phase in the multiphase flow pipelines. The oil phase composition was selected cautiously to exclude oil compositions that could form hydrates and complicate the data analysis. The deionized water used to prepare the samples was obtained from Ultra-Pure Water System. The surfactant concentrations in the sample ranged from 1.0-3.0% (v/v) in the water phase (or oil phase).
Experimental Apparatus and Procedure
A 650 mL high-pressure stainless-steel stirred tank reactor (STR) (Figure 2) was used to conduct the experiments in this work. The reactor was designed with thermocouples with an accuracy of ±0.5 K to measure the temperatures from 253.15 K-523.15 K and pressure up to 20 MPa. More details on the experimental setup can be found in our previous work [19].
Experimental Apparatus and Procedure
A 650 mL high-pressure stainless-steel stirred tank reactor (STR) (Figure 2) was used to conduct the experiments in this work. The reactor was designed with thermocouples with an accuracy of ±0.5 K to measure the temperatures from 253.15 K-523.15 K and pressure up to 20 MPa. More details on the experimental setup can be found in our previous work [19]. The desired amount of surfactant was added to a 1:1 mixture of oil and distilled water at 298.15 K. The mixture was subjected to vigorous magnetic stirring for at least 4 h to prepare the emulsion. To start each experimental test, 100 mL of the desired prepared sample was loaded into the reactor; then, the system was placed on the vacuum to remove any excess air inside of the reactor. After purging the system, the methane hydrate phase behavior and kinetic-testing procedures were used to evaluate the systems.
Methane Hydrate Phase Equilibrium Measurement
The hydrate dissociation temperature as an equilibrium point was measured using the T-cycle method at 8.80 MPa. The measurements were initiated by reducing the system's temperature to 271.15 K and maintaining said temperature to allow hydrate formation to occur. After the hydrates were formed, a stepwise heating method was applied at a heating rate of 0.25 K/h to determine dissociation conditions. A slow heating rate was used to accurately detect the hydrate dissociation temperature.
Kinetics of Methane Hydrate Formation
For the kinetic experiments, the reactor was cooled down to the initial temperature of 287.15 K, which is about 2 K higher than the hydrate equilibrium temperature. Methane was then compressed into the reactor up to equilibrate at 8.8 MPa with the stirrer turned The desired amount of surfactant was added to a 1:1 mixture of oil and distilled water at 298.15 K. The mixture was subjected to vigorous magnetic stirring for at least 4 h to prepare the emulsion. To start each experimental test, 100 mL of the desired prepared sample was loaded into the reactor; then, the system was placed on the vacuum to remove any excess air inside of the reactor. After purging the system, the methane hydrate phase behavior and kinetic-testing procedures were used to evaluate the systems.
Methane Hydrate Phase Equilibrium Measurement
The hydrate dissociation temperature as an equilibrium point was measured using the T-cycle method at 8.80 MPa. The measurements were initiated by reducing the system's temperature to 271.15 K and maintaining said temperature to allow hydrate formation to occur. After the hydrates were formed, a stepwise heating method was applied at a heating rate of 0.25 K/h to determine dissociation conditions. A slow heating rate was used to accurately detect the hydrate dissociation temperature.
Kinetics of Methane Hydrate Formation
For the kinetic experiments, the reactor was cooled down to the initial temperature of 287.15 K, which is about 2 K higher than the hydrate equilibrium temperature. Methane was then compressed into the reactor up to equilibrate at 8.8 MPa with the stirrer turned on. The system was then cooled to the desired temperature (274.15-277.15 K) with continuous stirring at 600 rpm during the cooling period. This was performed to initiate the hydrate formation process. The hydrate formation was indicated by a sudden pressure drop and temperature increase in the reactor. When the system attained a constant pressure for 2 to 3 h, the hydrate formation was completed and the experiment was terminated. The changes in pressure and temperature of the system were recorded constantly every 10 s using a data acquisition system.
Induction Time
The induction time of the CH 4 hydrate formation in the multiphase systems was determined as the time taken to detect the onset of hydrate crystal formation. In practice, the measurement of the hydrate formation process is very stochastic even under the same conditions. Therefore, all experiments in this study were repeated at least three times to obtain the mean value for reporting purposes. This was detected at the point when a sudden pressure drop and temperature increase were observed during the experiment. The induction time is always compared to a reference sample, which is the chemical-free sample. In the event of a surfactant increasing the induction time, it acts as an inhibitor, whereas when it is decreased, it is considered a promoter. To evaluate the performance of different surfactants, Relative Inhibition Power (RIP) was calculated according to Equation (1) [20]. The positive value indicates an inhibitive behavior while a negative value indicates that the surfactant is behaving as a hydrate promoter.
Amount of CH 4 Consumption and Gas Uptake
The amount of CH 4 consumed to achieve the maximum degree of hydrate formation is an important parameter to elevate gas hydrate technology to the industrial scale. The gas consumption in this work was calculated using Equation (2). It is assumed that there were no water volume changes during hydrate formation.
where P and T are the system pressure and temperature, respectively; V is the gas phase volume; R is the universal gas constant; Z is the compressibility factor, which is calculated using the Peng-Robinson Equation of State; subscript 0 stands for the start time of the experiment; and t stands for the conditions at time t.
Initial Rate of the CH 4 Consumption
The initial rate of gas consumption is the main parameter for gas hydrate applications. It represents the rate of CH 4 hydrate formation, and was calculated as in Equation (3): where n i−1 i and n i+1 i are the mole numbers of gas in the gas phase at time intervals t i−1 and t i+1 , respectively.
Water to Hydrate Conversion
The fraction of the mole number of water molecules in the hydrates phase per mole of the initial solution is called water to hydrate conversion, and was calculated using Equation (4):
Kinetics Models Theories
Modeling the hydrate formation's nucleation time and formation rate in a multiphase system provides the necessary information for field engineers to efficiently detect any risk of the formation of a hydrate plug. It also allows for the initiation of appropriate actions to mitigate hydrate plug formation in such systems. In this study, the classical nucleation theory and the Englezos rate model [21] were used to describe the hydrates' onset time and the formation rate behavior of the studied systems.
Classical Nucleation Theory (CNT) for Induction Time Prediction
Generally, hydrate formation occurs in two steps: nucleation and growth. The induction time represents the nucleation time and can be predicted if the nucleation rate is known. The nucleation rate, J, is referred to as the number of nuclei formed per unit volume and time (m −3 ·s −1 ) and can be quantitatively predicted by the classical nucleation theory. The nucleation rate in the classical nucleation theory is generally expressed by the following equation (Equation (5)) [22]: where Ns is the number of nucleation sites, Z is the Zeldovich factor, j is the rate of molecules attaching to the nucleus, ∆G * is the Gibbs free energy barrier for the formation of a hydrate's critical nucleus size, k is the Boltzmann constant (1.380649 × 10 −23 J·K −1 ), and T is the absolute temperature in Kelvin. The equation consists of two parts: the first of which is the dynamic part represented by Zj term, which reflects the nucleus growth rate. The second part of the equation N s exp(−∆G * /kT) represents the instantaneous number of critical hydrate nuclei reaching the free energy barrier. The Gibbs free energy for nucleation of the spherical nucleus, which involves the volume term and the surface term, is derived from the following expression (Equation (6)) [23]: where υ is the molecular volume, S is level of supersaturation, and σ is the surface tension between the solid and liquid phases (0.026) [24].
To improve the induction time prediction via CNT, special attention must be paid to the S value. S is the ratio between P and P 0 ; however, it was reported that S is more dependent on the final pressure P 0 than the experimental pressure at a given time P. Since P 0 is mainly affected by subcooling temperature and the presence of surfactants, a set of experiments was performed at four different temperatures (274.15-277.15 K) at the experimental pressure of 8.80 MPa. The P 0 values for each tested system were measured and correlated with temperature. A linear correlation was found with R 2 value of 0.9854, and it can be expressed through the following first-order equation (Equation (7)): where T is the experimental temperature. Equation (7) was optimized for each system and used to determine S in Equation (6). The nucleation rate was then estimated by substituting the resulting parameters in Equation (5). Importantly, since the oil used in this work does not participate in hydrate formations, its effect on the hydrate nucleation and formation rate was neglected in the models.
Prediction of Rate of Hydrate formation
The model proposed by Englezos et al. [21] for the kinetics of methane and ethane hydrates was adopted in this study to predict the methane hydrate formation rate in the oil system. The model is based on the theories of crystallization and mass transfer at a gas-liquid interface. In this model, the driving force for hydrate formation is defined as the difference between the fugacity of the dissolved gas (at operating temperature and pressure) and the equilibrium fugacity at the experimental temperature and at equilibrium Colloids Interfaces 2022, 6, 48 7 of 17 pressure. The growth rate for a hydrate particle with an interfacial area A p is expressed in Equation (8): where n represents the moles of gas consumed in hydrate formation and K is the rate constant that accounts for resistances associated and can be calculated using Equation (9) 1 where k r is the intrinsic rate constant for hydrate particle growth reaction and k d is the mass transfer coefficient around the particle. The rate of growth per particle is expressed by Equation (10): where K app is the apparent rate constant that incorporates the interfacial area. The initial rate of hydrate formation for each of the surfactant concentrations was calculated from the slope of a chart plotting for mole gas consumed vs. time. Subsequently, the apparent rate constant, K app , was generated by multiplying the experimental rate, K exp , with fugacity difference at each interval. Next, the average of the gas rate constant, K aver , was calculated by averaging each K app at the respective interval. Using the K aver , an empirical equation was developed using MATLAB to correlate the rate of hydrate formation with temperature and concentration. The generated empirical equation has a fast and reliable mathematical formula to interpolate between experimental points. The modified Englezos model was established to represent the CH 4 hydration formation rate with the tested systems after deriving an empirical correlation from the rate constant, concentration, and temperature, as shown in Equation (11): where a 0 , a 1 , and a 2 are the correlation coefficients. The absolute percentage error APE% of the predictions was calculated to evaluate the accuracy of the model using Equation (12): where r exp and r pre are the experimental and predicted gas formation rate.
Hydrate Phase Equilibrium
The hydrate phase behavior effect of any chemical is a prerequisite to its potential application in oil and gas production and delivery systems. The methane phase behavior impact by the nonionic surfactants in this study was tested in the pressure range of 4.85-8.73 MPa. This range of pressure was chosen to include the pressure used for kinetic experiments, taking into account economic aspects and the avoidance of high pressures. In addition, a blank sample (deionized water) was also tested to compare and evaluate the phase behavior effect of the surfactants on methane hydrates. In addition, the blank sample result was compared with the predictions of the CSMGem software and the literature data to check the reliability and validity of the equipment and method. The CSMGem software works based on the incorporation of the new hydrate and aqueous phase models into a multi-phase Gibbs energy minimization model. The results accorded well with the predicted and literature data as illustrated in Figure 3. Figure 3 presents the phase behavior of the surfactants at 2.0% (v/v) in 50% water cut at 8.70 MPa (approximately similar to the kinetics testing experimental pressure). The results revealed that the nonionic surfactants have a negligible effect on the methane hydrate phase boundary condition. The findings agree with the literature, where surfactants are mostly known to have a negligible effect on the hydrate phase boundary condition [25,26].
phase behavior effect of the surfactants on methane hydrates. In addition, the blank sample result was compared with the predictions of the CSMGem software and the literature data to check the reliability and validity of the equipment and method. The CSMGem software works based on the incorporation of the new hydrate and aqueous phase models into a multi-phase Gibbs energy minimization model. The results accorded well with the predicted and literature data as illustrated in Figure 3. Figure 3 presents the phase behavior of the surfactants at 2.0% (v/v) in 50% water cut at 8.70 MPa (approximately similar to the kinetics testing experimental pressure). The results revealed that the nonionic surfactants have a negligible effect on the methane hydrate phase boundary condition. The findings agree with the literature, where surfactants are mostly known to have a negligible effect on the hydrate phase boundary condition [25,26]. Figure 3. HLVE curve of methane for pure water (CSMGem software, study [27], and this work), 0.03 wt.% SDS, and 2.0% (v/v) surfactants.
The results further indicate that the molecular size and chemical structure of the surfactants slightly disrupt the activity of water in the hydrate formation region [28]. In addition, as it is well known that changing the type of guest molecules and hydrate structure results in a significant change in phase boundaries, it was concluded that the studied surfactants do not participate or occupy water cavities or alter the methane hydrate structure. Moreover, the low miscibility concentration of SPAN nonionic surfactant in the water phase further accounts for their negligible impact on the methane hydrate phase boundaries conditions. The results in Figure 3 further confirm that the oil in the systems does not participate in hydrate formation in any form. [27], and this work), 0.03 wt.% SDS, and 2.0% (v/v) surfactants.
The results further indicate that the molecular size and chemical structure of the surfactants slightly disrupt the activity of water in the hydrate formation region [28]. In addition, as it is well known that changing the type of guest molecules and hydrate structure results in a significant change in phase boundaries, it was concluded that the studied surfactants do not participate or occupy water cavities or alter the methane hydrate structure. Moreover, the low miscibility concentration of SPAN nonionic surfactant in the water phase further accounts for their negligible impact on the methane hydrate phase boundaries conditions. The results in Figure 3 further confirm that the oil in the systems does not participate in hydrate formation in any form.
Induction Time Measurement
The induction times of methane hydrate formations are detected when the pressure drops drastically following an increase in temperature due to the exothermic nature of hydrate crystallization. However, in this work, a unique and distinctive temperature peak was observed as the temperature increased to a much higher magnitude compared to the conventional temperature profile, as shown in Figure 4. This phenomenon could be attributed to the low heat capacity of oil compared the to pure water system. The induction times of methane hydrate formations are detected when the pressure drops drastically following an increase in temperature due to the exothermic nature of hydrate crystallization. However, in this work, a unique and distinctive temperature peak was observed as the temperature increased to a much higher magnitude compared to the conventional temperature profile, as shown in Figure 4. This phenomenon could be attributed to the low heat capacity of oil compared the to pure water system. The promotional effect at lower concentrations may be attributed to the reduction in surface tension. Span is able to reduce the surface tension more than Tweens as reported in [29]. To be exact, Tweens are ethoxylated Spans. Due to ethoxylation, the Tweens are water-soluble as opposed to the oil-soluble Spans. That is why higher concentrations may interact to a greater extent with water molecules and disturb hydrate formation. The Relative Inhibition Power (RIP) was calculated to evaluate the effectiveness of different sur- The promotional effect at lower concentrations may be attributed to the reduction in surface tension. Span is able to reduce the surface tension more than Tweens as reported in [29]. To be exact, Tweens are ethoxylated Spans. Due to ethoxylation, the Tweens are water-soluble as opposed to the oil-soluble Spans. That is why higher concentrations may interact to a greater extent with water molecules and disturb hydrate formation. The Relative Inhibition Power (RIP) was calculated to evaluate the effectiveness of different surfactants as kinetic methane hydrate inhibitors in multiphase systems. A positive value indicates an inhibitive behavior while a negative value indicates that the surfactant is behaving as a hydrate promoter. Figure 6 reveals that Tween-80 is the most effective inhibitor as it enhances methane inhibition power up to 134%. The promotional effect at lower concentrations may be attributed to the reduction in surface tension. Span is able to reduce the surface tension more than Tweens as reported in [29]. To be exact, Tweens are ethoxylated Spans. Due to ethoxylation, the Tweens are water-soluble as opposed to the oil-soluble Spans. That is why higher concentrations may interact to a greater extent with water molecules and disturb hydrate formation. The Relative Inhibition Power (RIP) was calculated to evaluate the effectiveness of different surfactants as kinetic methane hydrate inhibitors in multiphase systems. A positive value indicates an inhibitive behavior while a negative value indicates that the surfactant is behaving as a hydrate promoter. Figure 6 reveals that Tween-80 is the most effective inhibitor as it enhances methane inhibition power up to 134%. In the Tween series surfactant, the ethoxylation of the sorbitan molecule enhances its hydrophilicity so that it preferentially dissolves in water as opposed to dissolving in an oil phase (a non-polar phase). In a water-oil-CH 4 multiphase fluid system, the Tween molecules orientate themselves at the fluid interfaces and tend to produce an oil-in-water emulsion system. Some of the CH 4 dissolves in the oil phase, which is encapsulated as droplets within the continuous water phase. Consequentially, this limited the availability of the CH 4 molecules to be in direct contact with water molecules to form gas hydrates, thereby delaying the induction time. On the contrary, the SPAN series surfactant with a lower HLB value has a tendency to produce a water-in-oil emulsion. The oil, being the continuous phase, allows more CH 4 to be dissolved within it, thereby increasing the surface area for the water-CH 4 contact to form a gas hydrate, and thus increasing the rate of hydrate formation. When the SPAN concentration increased, excessive SPAN molecules would absorb and form a multilayered structure at the oil-water interface, providing the steric and hindering effect that enabled the CH 4 to diffuse across the interface boundary layer to form a hydrate with the encapsulated water droplets
Initial Rate of the Gas Hydrate Consumption
The initial rate of gas hydrate formation is related to the rate of gas being consumed during the initial period of gas hydrate formation. The rate can be inferred from the slope of the plot of gas consumption vs. time.
The experiments were conducted at an initial pressure of 8.80 MPa and a temperature of 277.15 K with concentrations of surfactants 1% (v/v) and 2% (v/v). For comparison, a blank sample composed of a water and oil mixture (1:1) was tested. Figure 7 illustrates the effect of non-ionic surfactants on the rate of CH 4 hydrate formation in a multiphase system. The SPAN and Tween surfactants promote the initial rate of gas hydrate formation. However, SPAN-20 and SPAN-40 demonstrate the highest rates of this promotional effect. The initial rates are three times higher than the baseline without any surfactant. In comparison, the initial rates are relatively slower for Span-80, Tween-20, Tween-40, and Tween-80.
The initial rate of gas hydrate formation is related to the rate of gas being consumed during the initial period of gas hydrate formation. The rate can be inferred from the slope of the plot of gas consumption vs. time.
The experiments were conducted at an initial pressure of 8.80 MPa and a temperature of 277.15 K with concentrations of surfactants 1% (v/v) and 2% (v/v). For comparison, a blank sample composed of a water and oil mixture (1:1) was tested. Figure 7 illustrates the effect of non-ionic surfactants on the rate of CH4 hydrate formation in a multiphase system. The SPAN and Tween surfactants promote the initial rate of gas hydrate formation. However, SPAN-20 and SPAN-40 demonstrate the highest rates of this promotional effect. The initial rates are three times higher than the baseline without any surfactant. In comparison, the initial rates are relatively slower for Span-80, Tween-20, Tween-40, and Tween-80. The initiate rate is accelerated by the surfactant once the gas hydrate has overcome the gas hydrate nucleation energy, and the hydrate nucleus will continue to increase in The initiate rate is accelerated by the surfactant once the gas hydrate has overcome the gas hydrate nucleation energy, and the hydrate nucleus will continue to increase in size. This is due to the presence of a surfactant, which reduces the interfacial tension and expands the gas-water contact area. As a significant number of methane molecules are available in the system, the hydrate encapsulation process could occur quickly in both types of non-ionic surfactants and accelerate the rate of gas formation. As Span-40 is in a solid form, at a low temperature, it could be separated from the emulsion as a solid phase. Thus, it provided additional nucleation sites and enhanced the hydrate formation rate.
Degree of Gas Consumption and Water to Hydrate Conversion
The degree of methane consumption was calculated from the material balance. The final extent of gas consumption was measured when the system pressure reached a steady state and remained at a plateau for one hour. As the hydrate formation plugs the pipeline, the surfactant could be used to manage and control the risk of hydrate formation. One such method is to delay the hydrate nucleation process by adding kinetic hydrate inhibitors such as the Tween surfactant, as discussed in the induction time results. Another method is to allow for a fast and small degree of hydrate formation in a transportable slurry form within the hydrocarbon phase. Therefore, it is important to study the effect of chemical additives on the amount of gas consumption, which represents the number of hydrates formed.
It can be observed from Figure 8 that Tween surfactants show approximately the same gas consumption activity as in the blank system in the absence of any surfactants. Systems with Span-20 and Span-80 have reduced gas consumption activity. It was also observed that there was a slight impact of the surfactant concentration on the final degree of CH 4 consumption. The final amount of gas consumption is normally controlled by the available amount of water, the equilibrium pressure, and the mass transfer resistance. The experimental temperature in this work was set at 277.15 K, which corresponds to an equilibrium pressure of 3.88 MPa as calculated by the CSMGem software. However, the final pressure in all the experiments does not drop below 7.40 MPa, which indicates that there is no thermodynamic limitation in this case. On the other hand, the water to hydrate conversion results confirm the availability of excessive water to form more hydrates. The maximum number of moles of water used in this work is 2.78 moles, which are supposed to consume 0.483 moles of CH 4 if an ideal conversion rate has been achieved for structure I. Therefore, it can be concluded that due to the low solubility of CH 4 in the water, mass transfer resistance dominated the system. Moreover, the non-ionic surfactants form an emulsion but do not significantly assist in incorporating more gas molecules in the liquid phase. This can be indicated by the lack of significant increase in CH 4 consumption compared to water. For the Span family, the degree of CH 4 consumption decreased significantly compared to water. The lowest amount was observed for Span-20, while the highest was observed for Span-40. The measured water to hydrate conversions in the oil and water system wi without surfactants were listed in Table 2. The hydration number was assumed to it is quite challenging to fully occupy all water cavities and reach the optimal hyd number of a Structure-I hydrate (5.75). The results show that Span-20 and Span-80 the amount of water being converted to methane hydrates. Moreover, Tween-80 the highest conversion rate compared to the other samples. As mentioned earlie respect to gas consumption, the Span forms a water-in-oil emulsion and disturbs the molecules agglomeration to form a larger mass. The measured water to hydrate conversions in the oil and water system with and without surfactants were listed in Table 2. The hydration number was assumed to be 6 as it is quite challenging to fully occupy all water cavities and reach the optimal hydration number of a Structure-I hydrate (5.75). The results show that Span-20 and Span-80 reduce the amount of water being converted to methane hydrates. Moreover, Tween-80 shows the highest conversion rate compared to the other samples. As mentioned earlier with respect to gas consumption, the Span forms a water-in-oil emulsion and disturbs the water molecules agglomeration to form a larger mass. Based on induction time measurements, Tween-80 has been demonstrated as the best KHI amongst all the non-ionic surfactants under study. As such, it was selected and prepared at 1.0, 1.5, 2.0, 2.5, and 3.0% (v/v) for further evaluation to compare its performance with the commercial KHI product. The 0.5 wt.% interval amongst the concentrations was maintained to ensure visible and accurate trends. Usually, convergent concentrations (concentrations intervals below 0.5 wt.%) are not useful in kinetic studies of hydrate formation because of their random nature and tendency to often yield inconsistent data measurement trends. The samples were tested at an initial pressure of 8.80 MPa at a constant temperature of 277.15 K.
The concentration effects of Tween-80 on the induction time, the rate of hydrate formation, and the degree of CH 4 consumption are plotted in Figures 9 and 10. Figure 9 shows that the optimum concentration at which Tween-80 significantly delays hydrate formation is 2.5 vol.%. The RIP at this concentration was reported at 868.10% higher than the blank sample. This finding could be a breakthrough in hydrate kinetic inhibition, not only due to its effectiveness but also due to its biodegradability and economic considerations. However, further analytical studies and molecular dynamic simulations are needed to describe the mechanism(s) provoking this phenomenon of a high induction time for Tween-80 in this study. Figure 9 shows that the optimum concentration at which Tween-80 significantly delays hydrate formation is 2.5 vol.%. The RIP at this concentration was reported at 868.10% higher than the blank sample. This finding could be a breakthrough in hydrate kinetic inhibition, not only due to its effectiveness but also due to its biodegradability and economic considerations. However, further analytical studies and molecular dynamic simulations are needed to describe the mechanism(s) provoking this phenomenon of a high induction time for Tween-80 in this study. In Figure 10, the rate of hydrate formation was slightly decreased with the increasing concentration of Tween-80. The correlation between the concentration and the rate is almost linear, which could be attributed to the quantity of the surfactant molecules in the system. Saturating the system with excessive surfactant molecules has decreased the hydrate formation rates. However, all concentrations have enhanced the rate of formation compared to the blank sample. Moreover, the degree of CH 4 consumed is slightly higher than the blank sample.
Comparison of Tween 80 with PVP
Although commercial KHIs such as PVP and PVCap have been proven to be efficient KHIs, researchers and stakeholders are still in search of better inhibitors that are efficient at higher subcooling temperatures and that are more environmentally friendly. Therefore, the performance of Tween-80 in this work was compared to PVP, as presented in Table 3. It can be observed that PVP exhibits an inhibitory behavior at all the studied concentrations as its RIP values are 155.9, 220.3, and 203.4% at 1.0, 2.0, and 2.5% (v/v), respectively. The maximum RIP for PVP was found at 2.0% (v/v), which is higher than Tween-80 at 2.0% (v/v). However, using PVP at a high concentration causes solubility issues that could lead to a poor hydrate inhibition effect as reported in [30]. Tween-80 at 2.5% (v/v) with RIP of 868.1% has outperformed PVP. PVP is superior to Tween-80 for lowering the CH 4 hydrate formation rate and total gas consumption. This is due to the surface-tension-reductive property of the Tween-80 surfactants. However, both PVP and Tween-80 have hydrophobic and hydrophilic groups as it is known that the KHIs are adsorbed on the hydrate crystal's surface through hydrogen bonding. The presence of multiple OH and O groups in the Tween-80 structure forms stronger hydrogen bonds with the water molecules, which strengthen its adsorption on the hydrate crystal's surface. In contrast, the steric hindrance effect of the hydrophobic groups in the PVP molecules inhibits and delays the nucleation and growth of hydrates at all concentrations.
Induction Time Prediction Using CNT
The induction time of methane hydrate formation in 2.5% (v/v) of Tween-80 was predicted by the CNT. This could offer an efficient tool for predicting the hydrate formation onset time in pipelines according to the operating temperature and pressure conditions. Due to the slight changes in the equilibrium pressure in accordance with the temperature, a first order equation was fitted to the experimental data. The nucleation rate was calculated using Equation (5). The onset of hydrate formation was detected when the nucleation rate reached zero, which indicates that there is no change in the number of nuclei over time and that the nucleation process has been completed. Table 4 presents the experimental and the CNT-predicted data. The maximum average absolute error (APE) was found to be 5.70%, which is quite accurate for such a stochastic kinetic phenomenon. This demonstrates the applicability of using the CNT model to predict the methane hydrate formation rate in the presence of a nonionic surfactant, Tween-80. The same model was successfully used to predict methane hydrate formation in drilling mud. The apparent rate constant K app of CH 4 hydrate formation in the multiphase system in the presence of 1.0-3.0% (v/v) Tween-80 was calculated at the temperature range of 274.15-277.15 K. The empirical correlation representing the relationship between the apparent rate constant, concentration, and temperature is established as follows (Equation (13)): Hence, the modified Englezos model is expressed in Equation (14).
The predicted rates of hydrate formation are presented in Table 5 along with their APEs. The rate predictions from the model agreed with the experimental data. The prediction errors are in the range of 0.38% to 4.93%. The errors are relatively acceptable for kinetic data compared to other reported studies in the literature [31]. The apparent rate constant is useful for comparing the growth rate of hydrate crystals. Therefore, in such a predictive model, the rate constant would help to obtain information on the rate of hydrate formation within the studied range as the operating pressure is known during oil and gas processing. It is worth mentioning that the presence of surfactants has a strong effect on the rate of hydrate formation due to their surface tension reduction property. However, they have a negligible effect on the hydrate phase equilibrium pressure and/or the equilibrium fugacity, f eq , at a given temperature. Therefore, the driving force, f g − f eq , was changed only due to the different operating temperatures.
Conclusions
In this study, nonionic surfactants were investigated as kinetic methane hydrate inhibitors in a multiphase system at 8.70 MPa and a temperature range of 274.15-277.15 K. The results showed that the Span and Tween series have a negligible effect on methane hydrate phase boundary conditions. Among the studied nonionic surfactants, Tween-80 profoundly delayed the methane hydrate nucleation time. However, the presence of surfactants promoted the rate of hydrate formation. The SPAN family significantly reduced the methane consumption and water to hydrate conversion rates, while the Tween family exhibited a negligible effect on the methane consumed. The optimal methane hydrate inhibition concentration of Tween-80 was 2.5% (v/v), which outperforms PVP (a commercial KHI inhibitor) in terms of induction time. However, PVP inhibits and delays the rate of hydrate formation, and its degree of gas consumption is greater than Tween-80. Generally, conventional kinetic hydrate inhibitors such as PVP are known to plug pipelines or promote hydrate formation post-induction time. Therefore, for hydrate prevention purposes, the use of induction time as the main kinetic performance indicator is recommended, thus supporting the noticeable induction time performance of Tween-80 over PVP in this work. The classical nucleation theory predicted the induction time in the presence of Tween-80 with a maximum error of 5.70%. In addition, the modified Englozes model has successfully predicted the rate of hydrate formation as a function of temperature and concentration with a maximum error of 4.93%. | 9,919 | 2022-09-16T00:00:00.000 | [
"Materials Science"
] |
Reply on RC1
The manuscript aims to determine the orogeny-scale (northern Apennine) erosion pattern derived from multiple thermochronometers, and to reconstruct erosion rate variation with space and time. The authors process a large data set of already published apatite fission track and apatite (U/Th)/He data that are accompanied by new detrital AFT data from 7 catchments (modern river sand). Erosion rates have been calculated for each samples using AGE2EDOT code and by applying different values of geothermal gradients.
The paper is well written and in general clear to read. The new data are of high quality. The obtained erosion rates data set is particularly interesting and they worth alone to be published. The application of a kinematic model and an interesting discussion made this paper perfectly suitable to be published in Solid Earth, with a only minor corrections.
The reviewer provides an accurate summary of our study and we appreciate the positive feedback.
My main criticism is focused on the mechanism invoked to explain decreasing in erosion rate along the Ligurian side (the retrowedge of Apennine orogenic wedge). The change in trajectory in the retrowedge seems the first order raison to explain a decrease in erosion rate with time. I feel that the depth of this variation can have a strong impact in the change in erosion rate with time. This variation in trajectory should occur between AFT closure depth and the AHe closure depth. Therefore the closure depth for AFT and AHe systems should deeply control the erosion rate pattern in the retrowedge. In the text it not very clear how closure depth is calculated line 237). Moreover, I am wondering to see the impact of different closure depths in modeling results. This is an important point, and we agree that the methodology for calculating the closure depths should be included. We address this question in the "Kinematic Model" Section of the reviewer's comments.
Regional pattern of several data set (i.e. Ro, fg. 2, thermochronological ages in inset map of fig. 9) shows a clear variation along strike. In the manuscript this along strike variation is never discussed, although has been interpreted in literature as a first-order tectonic control on erosion and exhumation. I would like to know the raison and conditions to apply the same kinematic model of the entire of Apennine wedge.
We briefly explain the pattern of vitrinite reflectance across the orogen and have added additional text to explain the pattern along strike of the orogen.
"Ro values also decrease along strike of the orogen from NW to SE (Fig. 2), illustrating that maximum burial depths also decrease towards the SE. This pattern was in turn interpreted to reflect the shape of the Ligurian Unit as a wedge that thinned towards the east (Zattin et al., 2002), and thus resulted in shallower burial depths for the underlying Cenozoic Foredeep deposits. " We agree that the pattern of vitrinite and cooling ages reflects a first-order tectonic control related to rollback of the Adriatic slab and retreat of the hinge (Thomson et al., 2010) The timing and rate of rollback and hinge rate vary across the orogen, but the fundamental mechanism is the same. In our model, it is not possible to input a spatially variable slab rollback rate, although we do allow for a range of slab rollback rates that is consistent with estimates in our study area. Providing a single rate of rollback may be a model limitation, so we provide some additional text in the discussion to clarify these points.
"The acceleration of exhumation may be related to a change in the timing or rate of slab rollback, which has varied along strike and across the orogen (Faccenna et al., 2014;Rosenbaum and Piana Agostinetti, 2015) and is a first-order tectonic control on exhumation and erosion (Thomson et al., 2010). We allow for a range of rollback rates that are consistent with rates for the field area, although the kinematic model is not able to resolve variability in rollback rates in either space or time."
Line 100 to 102. Variation of Ro is clear to follow also a NW-SE gradient.
The reviewer brings up a good point. We do not discuss the along-strike variability in Ro from the Gottero to the Val D'Arno swaths, although it is clear that 1) Ro values decrease from NW to SE, but that there is also less variability in Ro values on the Adriatic side from NW to SE. We have added extra text to bring up these points.
"Ro values also decrease along strike of the orogen from NW to SE (Fig. 2), illustrating that maximum burial depths also decrease towards the SE. This pattern was in turn interpreted to reflect the shape of the Ligurian Unit as a wedge that thinned towards the east (Zattin et al., 2002), and thus resulted in shallower burial depths for the underlying Cenozoic Foredeep deposits." In the erosion rate result section, I found some difficulties to read the text following figures 7 and 8. Figure 8 is described before figure 7. To be fair, I do not understand the meaning of figure 7 and what information the authors want to explain. It could be useful to add the geographic orientation, i.e NW to SE or NE to SW The purpose of Figure 7 is to illustrate the along-strike differences in ages (related to the reviewer's comment above) and erosion rates for the Adriatic (Figure 7b,d) and Ligurian (Figure 7a,c) sides. This figure is the only example that illustrates the patterns of cooling ages and erosion rates from this orientation, whereas all other figures illustrate data along transects perpendicular to the strike of the orogen. However, we think that this figure is perhaps difficult to read because we have combined all thermochronometers in each panel. To make the figure easier to read, we have created two panels per row, one with the AFT data and one with the AHe data.
We are also more explicit on the difference between Figures 7 and 8 in the text when we introduce the erosion rate results: "Here, we present the erosion rate results for the Adriatic and Ligurian sides, given the two different methods used for constraining the final geothermal gradient, and by illustrating the data with two perspectives: (1) along a profile oriented parallel to the orogen strike (Fig. 7, location shown in Fig. 3) and (2) along swath profiles oriented perpendicular to the orogen strike (Fig. 8)."
Kinematic model.
In this section I suggest to add some lines to describe the code and the environment of modeling. Line 237: It is not clear how the closure depth are chosen.
To calculate the closure depths shown on line 237, we used closure temperatures from the literature: AHe = 70°C (Farley, 2000), AFT = 110°C , (Wagner and Van den Haute, 1992), and ZHe = 180°C (Farley, 2000). These temperatures were converted to a closure depth by assuming a geothermal gradient of 25°C/km. We think that this approach is likely too simplistic, given the constraints we have on final geothermal gradients derived from heat flow maps. Instead, we now use an average final geothermal gradient of 36.4 °C /km for all sample locations in our field area, calculated from the G F_heatflow estimates. Using a higher geothermal gradient will produce shallower isotherms. Given the temperatures listed above for each thermochronometer, this produces closure depths of 1.9 km (AHe), 3.0 km (AFT), and 4.9 km (ZHe). We added the following text to explain our calculations and procedure: "Closure depths were calculated using the closure temperature for each thermochronometer, divided by a spatially and temporally constant geothermal gradient. Closure temperatures are given as: AHe = 70°C (Farley, 2000), AFT = 110°C , (Wagner and Van den Haute, 1992), and ZHe = 180°C (Farley, 2000). Excluding the Alpi Apuane samples, we used the full set of unique sample locations in our field area (Tables 1-4) to calculate an average G f_heatflow = 36.4 °C/km and closure depths for the ZHe (4.9 km), AFT (3.0 km), and AHe (1.9 km) thermochronometers."
At line 372: the best -fit between what data? For large audience could be useful a short description how this model works. Moreover it is not very clear why the authors show this run.
We agree that the term "best-fit" could be clearer for the reader. We add text to both the Methods section and Results section to better describe the objectives of the model and how we found the "best-fit" model results.
Lines 238-242 "We used a range of kinematic and thermal parameters applicable to the Northern Apennines to characterize a kinematic model that aims to: (1) model the path of rock particles from accretion into the wedge to their erosion at the surface, (2) calculate uplift and horizontal rock velocities across the wedge, (3) predict reset cooling ages for AHe, AFT, and ZHe thermochronometers, and (4) calculate maximum burial depths across the model. Here, we describe the model geometry and the kinematic and thermal parameters used to constrain the model."
Lines 383-391
"To construct the best-fit model, our goal is to reproduce a realistic pattern of reset and non-reset thermochronometer ages across the orogen and uplift rates consistent with modern uplift estimates from geodetic releveling (D'Anastasio et al., 2006) for the prowedge (0.5 ̶ 1 km/My) and retrowedge . To this end, we adjust the slab rollback rate within the acceptable range for our field area (6 ̶ 11 km/Ma), and the AHe erosion rates within the range of values calculated from G F_heat flow (0.17 ̶ 1.9 km/My) ( We are not entirely clear about which run the reviewer is referring to. We include the full outputs for both the SCR and VER scenarios. We hope that the above explanation has clarified the reasons for which we included each scenario in our results.
Erosion rate pattern: 459, it could be interesting to specify what kind of tectonic control could be responsible for local high exhumation rate for the Apuane Alps, and to add a reference.
We have added the following text to specify in more detail the tectonic control on the Alpi Apuane, and add the reference from Molli et al. (2018): "An exception to this general pattern may be the Alpi Apuane massif, which represents a structural culmination exposing a deep section and where high exhumation rates from the latest Miocene to the Present likely reflect post-orogenic processes of crustal thinning (Fellin et al., 2007;Molli et al., 2018)." Thank you for pointing this out. This reference is an error and should in fact before Figure 10. We have fixed this mistake. In reference to the scale, we have kept the horizontal scale the same, but have enlarged the vertical scale only so that we can more clearly see the pattern of material motion in the wedge for the depths relevant to the lowtemperature thermochronometers (ZHe, AFT, AHe) that we included in the study . Fig 9. To make the reading easier, it could be better to move the inset map within the panel 9b.
We agree that the position of the inset makes the figure more difficult to read, so we have moved the inset out of the figure panels and place it above as panel (a). The other panels have been relabeled accordingly and adjusted in the text. | 2,637.6 | 2021-11-02T00:00:00.000 | [
"Geology"
] |
Epitope-Based Immunoinformatics Approach on Nucleocapsid Protein of Severe Acute Respiratory Syndrome-Coronavirus-2
With an increasing fatality rate, severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2) has emerged as a promising threat to human health worldwide. Recently, the World Health Organization (WHO) has announced the infectious disease caused by SARS-CoV-2, which is known as coronavirus disease-2019 (COVID-2019), as a global pandemic. Additionally, the positive cases are still following an upward trend worldwide and as a corollary, there is a need for a potential vaccine to impede the progression of the disease. Lately, it has been documented that the nucleocapsid (N) protein of SARS-CoV-2 is responsible for viral replication and interferes with host immune responses. We comparatively analyzed the sequences of N protein of SARS-CoV-2 for the identification of core attributes and analyzed the ancestry through phylogenetic analysis. Subsequently, we predicted the most immunogenic epitope for the T-cell and B-cell. Importantly, our investigation mainly focused on major histocompatibility complex (MHC) class I potential peptides and NTASWFTAL interacted with most human leukocyte antigen (HLA) that are encoded by MHC class I molecules. Further, molecular docking analysis unveiled that NTASWFTAL possessed a greater affinity towards HLA and also available in a greater range of the population. Our study provides a consolidated base for vaccine design and we hope that this computational analysis will pave the way for designing novel vaccine candidates.
Introduction
The present world has witnessed the outbreak of many life-threatening human pathogens including Ebola, Chikungunya, Zika, severe acute respiratory syndrome coronavirus (SARS-CoV), and Middle East respiratory syndrome coronavirus (MERS-CoV) in the 21st century. More recently in late December 2019, a cluster of pneumonia cases was reported in the city of Wuhan, Hubei province, China, which was of unknown cause. Later it was confirmed that these pneumonia cases were due to a novel coronavirus named SARS-CoV-2 (previously named as 2019-nCoV) and the disease condition of this virus is referred to as COVID-19 [1][2][3]. On 11 March, 2020, the World Health Organization (WHO) assessed that COVID-19 can be characterized as a pandemic. The current COVID-19 pandemic is a global concern and is spreading at an alarming rate and as of 26 October, 2020, more than 43.2 million cases along with over 1.16 million deaths have been reported globally [4].
As COVID-19 is mainly a respiratory disease, in most cases it might affect the lungs only. The primary mode of infection is human-to-human transmission through close contact, which occurs via spraying droplets from the infected individual through their cough or sneeze. The symptoms of this coronavirus can be mild to moderate or severe including, fever, cough, and shortness of breath or pneumonia. Respiratory, hepatic and neurological complications can be seen in case of severe cases that can lead to death. It seems that the severity and fatality rate of COVID-19 is milder than that of SARS and MERS. Although diarrhea was presented in about 20-25% of patients with SARS and MERS, intestinal symptoms were rarely reported in patients with COVID-19 [5][6][7]. Multi-organ failure, especially in elderly people and people with underlying health conditions, such as hypertension, cardiovascular disease and diabetes, are exhibiting a higher mortality rate in COVID- 19. Interestingly, SARS-CoV-2 has 82% similarity with the original SARS-CoV virus attributed to the outbreak in 2003 [8]. A mature SARS-CoV-2 virus generally has a polyprotein (the open reading frame 1a and 1b, Orf1ab), four structural proteins such as envelope (E) protein; membrane (M) protein; nucleocapsid (N) protein; spike (S) protein and five accessory proteins (Orf3a, Orf6, Orf7a, Orf8 and Orf10), and, particularly, SARS-CoV-2 encodes an additional glycoprotein having acetyl esterase and hemagglutination (HE) attributes, which identified it distinct to its two predecessors [9]. The functions of accessory proteins may include signal inhibition, apoptosis induction and cell cycle arrest [10]. The S protein on the surface of the viral particle enables the infection of host cells by binding to the host cell receptor angiotensin-converting enzyme 2 (ACE2), utilizing the S-protein's receptor-binding domain (RBD).
The N protein binds to the RNA genome of the COVID-19 and creates a shell or capsid around the enclosed nucleic acid. The N protein is involved in viral RNA synthesis and folding, which interacts with the viral membrane protein during viral assembly and affects host cell responses including cell cycle and translation. An epitope-based peptide vaccine has been raised in this aspect. The core mechanism of the peptide vaccine is based on the chemical method to synthesize the recognized B-cell and T-cell epitopes that can induce specific immune responses and are immune-dominant. T-cell epitopes are short peptide fragments (8-20 amino acids) while the B-cell epitopes can be proteins [11,12].
Once a mutated virus infects the host cells by escaping the antibodies, it then relies upon the T-cell mediated immunity to fight against the virus. Viral proteins are processed into short peptides inside the infected cells and then loaded onto major histocompatibility complexes (MHC) proteins. After that, the MHC-peptide complexes are presented on the infected cell surface for recognition by specific T-cells. Activated CD8 + T-cells then recognize the infected cells and clear them. T-cell immunity also depends strictly on the MHC-peptide complexes, which are similar to the antigen-antibody association. MHC proteins are encoded by human leukocyte antigen (HLA), which is located among the most genetically variable regions on the human genome. Each HLA allele can only present a certain set of peptides that can be presented on the infected cell surface and recognized by T-cells are called T-cell epitopes. For a vaccine, it is essential to identify T-cell epitopes that originate from conserved regions of the virus T cell responses against the S and N proteins have been reported to be the most dominant and long-lasting [13].
To develop effective diagnostic tests and vaccine, the identification of B-cell and T-cell epitopes for SARS-CoV-2 proteins are critical especially for structural N and S proteins. Both humoral immunity and cellular immunity provided by B-cell antibodies and T-cells respectively are essential for effective vaccines [14,15]. Although humans may mount an antibody response against viruses normally, only neutralizing antibodies can block the entry of viruses into human cells completely [16]. Antibody binding site's location on a viral protein strongly affects the body's ability to produce neutralizing antibodies [17]. It is important to understand whether SARS-CoV-2 has potential antibody binding sites (B-cell epitopes) near their interacting surface with its known human entry receptor, ACE2. Besides neutralizing antibodies, human bodies also depend on cytotoxic CD8 + T-cells and helper CD4 + T-cells to clear viruses completely from the body. For antiviral T-cell responses, presentation of viral peptides by human MHC class I and class II is essential [18]. MHC-I analysis includes common alleles for HLA-A, HLA-B and HLA-C. Multiple investigations have indicated that antibodies generated against the N protein of SARS-CoV are a highly immunogenic and abundantly expressed protein during infection [19].
Our group is targeting for immunoinformatics-based vaccine design using bioinformatics and immunoinformatics tools by utilizing different protein sequences of SARS-CoV-2. Recently, we have already established potential B and T-cell epitopes with a greater candidacy profile using the S protein of SARS-CoV-2 [20]. Moreover, other published work also utilized the S protein of SARS-CoV-2 for epitope-based vaccine design [21]. The purpose of our present study is to promote the designing of a vaccine against COVID-19 using in silico methods, considering SARS-CoV-2 N protein. The reason for focusing particularly on the epitopes in the N structural proteins is due to their dominant and long-lasting immune response, which was reported against SARS-CoV previously [22]. Besides, it has been reported that the N protein of many viruses are highly conserved and immunogenic, which expressed extensively in the course of infection [23]. Particularly, it has been reported recently that the N protein and E protein of SARS-CoV-2 are most evolutionarily conserved [24,25]. For the identified T-cell epitopes, we incorporated the information on the associated MHC alleles so that we can provide a list of epitopes that seek to maximize population coverage globally. Therefore, we designed an epitope-based peptide vaccine through utilizing the SARS-CoV-2 N protein ( Figure 1) to potentially narrow down the search for potent targets against SARS-CoV-2 using the computational approach with an expectation that the wet laboratory research will validate our result.
Sequence Retrieval and Analysis
We retrieved the SARS-CoV-2 N protein sequence from the NCBI database (Accession No.: QIC53221.1). Then we performed BLASTp using NCBI-BLAST for the N protein of SARS-CoV-2.
We searched for a total of 100 homologs with >60% identical sequences. Multiple sequence alignment (MSA) was then performed to find out the conservancy among the target proteins.
(Supplementary Data 1), and a phylogenetic tree was constructed to analyze the evolutionary divergence amongst them ( Figure S1). From the results of the MSA analysis, it has been confirmed that the protein sequences have a close relationship.
Antigenic Protein Prediction
The most potent antigenic protein of SARS-CoV-2 N protein was predicted by VaxiJen v2.0, which is based on the auto-cross covariance transformation of protein sequences into uniform vectors of principal amino acid properties. The VaxiJen tool mainly encompasses the physicochemical properties of the protein sequence [26]. The overall antigen prediction score was 0.5002 (probable antigen) at a 0.4 threshold value.
Toxicity Prediction
Prediction of the toxicity of peptides before considering them, as epitopes are very important for saving both time and to make it cost effective. The toxicity of the selected peptide sequences was assessed using the ToxinPred web server. ToxinPred is a unique tool, which is based on support vector machine (SVM) in predicting toxicity of peptides and several physicochemical properties, including hydrophilicity, hydrophobicity, charge and molecular weight. The results from the ToxinPred tool showed that all of our probable epitopes were found non-toxic (Table 1).
Protein Structure Prediction and Validation
The secondary structure of the SARS-CoV-2 N protein was predicted using the self-optimized prediction method with alignment (SOPMA), an online server, During prediction, the SOPMA server can be able to locate almost all of the stretches with the regular structure, which investigate the recognition of folding pattern in an efficient way [27]. The secondary structure of a protein describes mainly the α-helix, β-sheets and random coil. SARS-CoV-2 N protein has 419 residues ( Figure 2A), of which 89 residues were remained in the α-helix, 70 residues were from the extended strand, 29 residues were observed in the β-sheets, and 219 residues were remained as random coil ( Figure 2B,C). For 3D structure, we built a model using the Robetta online server. The Robetta server predicts the tertiary structure of a given protein from the inputted genomic data. The Robetta server utilizes a fully automated implementation of the Rosetta software package for the inference of the structural information of the protein [28]. In the current experiment, the Robetta server predicted five models for the SARS-CoV-2 N protein, which were validated using PROCHECK and PROSA-Z score. From the result of the validation, it has been observed that Model 4 predicted by the Robetta server have possessed 88.4% amino acid residues in the Rama favored region and delineated Z-score of −7.24, which depicted the model as a good quality model ( Figure 2D,E). Although the Z-score for model 1 was shown −7.42, it possessed less amino acid residues in the Rama favored region ( Figure S2). In addition, we analyzed the Ramachandran plot statistics and Z-score for the crystal structure of SARS-Cov-2 N protein (Resolution: 2.70 Å). The results showed that the Rama favored region for the crystal structure of SARS-CoV-2 N protein was 88.1% and Z-score was −5.06, which was less compared to the model structure ( Figure S2). Hence, model 4 could be used for further analysis.
CD8 + T-Cell Epitope Identification
The NetCTL 1.2 server was utilized for the prediction of T-cell epitopes. The number of T-cell epitopes depended on the length of the sequence. Further the predicted epitopes with strong binding affinities were subjected to several immune filters in order to screen out the best possible epitopes, including conservation among the protein sequences included in the study, should be immunogenic, should be non-allergic and importantly should not overlap with any human proteins. Based on high combinatorial and MHC binding, the top eight epitopes were predicted by the NetCTL server from the selected protein sequence that was selected for further analysis. Using the MHC-I binding prediction tool, which is based on stabilized matrix method (SMM), we selected those MHC-I alleles for which the epitopes showed the highest affinity (half maximal inhibitory concentration, IC 50 < 200 nm).
Proteasomes play an important role in cleaving the peptide bond, resulting in the conversion of protein into the peptide. The peptide molecules that are homogeneous to class I MHC molecules and the peptide-MHC molecule after the proteasomal cleavage were presented as T-helper cells after the transportation into the cell membrane. The total score of each epitope-HLA interaction was taken into consideration and higher processing efficiency was meant by obtaining a higher score. The epitope NTASWFTAL interacted with most of the MHC-I alleles including, HLA-A*68:02, HLA-C*16:01, HLA-C*03:03, HLA-C*03:04, HLA-C*12:03, HLA-A*02:06, HLA-C*03:02, HLA-A*26:01 and HLA-C*14:02 (Table 2). Moreover, the MHC-NP prediction tool was used to find the highest probable score of our predicted epitope NTASWFTAL, with a score of 1.11, for HLA-A*68:02. Furthermore, all the predicted epitopes had a maximum identity for conservancy hit and 100% maximum identity was found (Table 2). Additionally, the I-pMHC immunogenicity prediction analysis of the epitope NTASWFTAL was found 0.22775 (Table 2). Table 2. The potential CD8 + T-cell epitopes along with their interacting MHC class I alleles and total processing score, epitopes conservancy_hits and pMHC-I immunogenicity score.
Population Coverage
Population coverage analysis is crucial in determining a peptide sequence as vaccine candidates. Accordingly, epitope-based vaccines can be designed to maximize the population coverage and minimizing the complexity regarding the variability of the population coverage observed in different ethnic groups. In the current study, the cumulative amount of the population coverage was obtained for the predicted epitope NTASWFTAL. Results from the population coverage demonstrated that with 57.16% coverage, East Asia found the highest coverage region. The results of the population coverage were shown in Table 3 and Figures S3-S6.
Allergenicity Assessment
The AllerTop server was used for the identification of the allergic reaction caused by a vaccine in an individual that might be harmful or life-threatening. The AllerTop server predicts allergenicity based on several factors, including, amino acid descriptors, accounting for residue hydrophobicity, size, abundance, helix-and β-strand forming propensities and a machine learning approach, namely the k nearest neighbors (kNN) method was implemented to classify allergens and non-allergens [29]. The allergenicity of the selected epitope was calculated using the AllerTop tool and predicted as a probable non-allergen.
Molecular Docking Analysis for HLA and Epitope Interaction
Molecular docking analysis is used for the prediction of a ligand-receptor interaction. The advancement in computational biology techniques in the last few decades have allowed for further development in molecular docking algorithms for determining the flexibility of a protein and currently, molecular docking is considered as widespread tools used in computational biology techniques. In this study, the verification of the interaction between the HLA molecules and our predicted potential epitope was done by molecular docking simulation using AutoDock Vina in PyRx 0.8 software. Among all the MHC class I alleles, only HLA-A*68:02 had a maximum probable score for our most potent epitope NTASWFTAL. Therefore, we carried out the molecular docking study using HLA-A*68:02 (PDB ID: 4I48). The 3D structure of the predicted epitope, NTASWFTAL and HLA-A*68:02 molecules are represented in Figure 3. We found that our predicted epitope NTASWFTAL interacted with HLA-A*68:02 with strong binding affinities of -9.4 kcal/mol ( Table 4). The selected epitope interacted with Arg6, Ser4, Ser2 and Asp30 residues of chain-A and Lys59, Asp60, Ser58 and Gly30 of chain-B through hydrogen bonding (H-bond), whereas Lys7 residue of chain-B form bonds as a result of sharing electrons (which may happen as a result of charge distribution; Figure 4). Further, for the validation of the docking study, we performed molecular docking analysis between HLA-A*68:02 and the 9-mer peptide bound with the crystal structure of HLA-A*68:02, where the peptide was considered as a positive control. Conversely, the molecular docking analysis between the positive control and HLA-A*68:02 showed less binding affinities than the predicted epitope, where the positive control exhibited a docking score of −8.2 kcal/mol (Table 4). Although the positive control formed six hydrogen bonds, the formed hydrogen bond was less than NTASWFTAL ( Figure 5). In addition, a salt bridge was formed between the positive control and Asp29 residue from A chain of HLA-A*68:02. Table 4. Results of the molecular docking analysis amongst HLA-A*68:02 and the predicted epitope, NTASWFTAL, and 9-mer peptide from envelope glycoprotein gp160 from HIV type 1 (positive control).
B-Cell Epitope Prediction
B-cell epitopes play an important role in the development of epitope-based vaccine and allergic research. A dominant linear B-cell epitope can be used in the autoimmune diseases as the target of neutralizing antibody responses [30]. In addition, they are able to induce an antibody that cross reacts with the parent protein. In this study, using the amino acid scale-based method, we predicted the B-cell epitope identification. Different analysis methods were used for the prediction of the continuous B-cell epitope. The results of the B-cell predictions were shown in Tables 5-7, Tables S1 and S2 Firstly, BepiPred linear epitope prediction was used, which is regarded as the best single method for predicting linear B-cell epitopes using a Hidden Markov model. The findings from the BepiPred linear epitope prediction showed maximum score of 2.416 and a minimum score of −0.001, where the average scores were displayed as 0.813 (Table S1).
The β-turns were predicted by the Chaus and Fasman β-turn prediction method. The maximum score was found for the amino residues 2-8 ( Figure 6) and the minimum score was attributed for amino acid residues 218-224 ( Figure 6).
For antigenicity prediction, the Kolaskar and Tongaonkar antigenicity prediction methods were implied. The method evaluates the antigenicity based on the physicochemical properties of amino acids and their abundances in experimentally known epitopes. The average antigenic propensity of our SARS-CoV-2 N protein was 0.988 with a maximum of 1.197 and a minimum of 0.874 (Figure 7). In addition, the average flexibility of 1.035 and a minimum of 0.874 were predicted by the Karplus and Schulz flexibility prediction method. The residues from 238 to 244 were found to be the most flexible with the highest score of 1.161. The Parker hydrophilicity prediction tool predicts the hydrophilicity of the SARS-CoV-2 N protein with an average score of 2.80, a minimum of 0.874 and the region from amino acid residues 77-83 have shown the maximum score, where the maximum value was 7.006 ( Figure 7).
For predicting the surface ability, this study included the Emini surface accessibility prediction method. The average surface accessibility was 1.0 and a minimum 0.050 ( Figure 6).
Discussion
As of yet, it has been reported that the reproduction rate of SARS-CoV-2 is greater than SARS and MERS and the symptoms of the COVID-19 infection include fever with more than 38 • C body temperature along with alveolar edema, leading to difficulty in breathing, whereas mild symptoms perhaps not engender a high fever [31]. Surprisingly, with a high fatality rate, the severity of the infection was found to be more than the infection caused by both SARS and MERS, with multiple organ damage, which was reported not long ago [32].
At present, researchers are examining repurposed compounds from other viral infections to treat SARS-CoV-2. For example, both lopinavir and ritonavir are HIV protease inhibitors but in a lopinavir-ritonavir clinical trial report, the treatment benefit derived was dubious [33]. From recovering patients, several convalescent immunoglobulins are derived, which is currently investigated as a potential treatment for the disease [34]. As there have been no approved treatments for COVID-19 that exists until now, but remdesivir has been used in some emergency cases and evidence also showed that convalescent plasma could be used as treatment without severe adverse effects [34,35]. These treatments are the best hope for striving to keep the mortality rate low before vaccines become widely available.
Despite many potential challenges, vaccine development is a crucial factor in modern biotechnology as vaccines are the most important prerequisites for defending the burden of diseases over the world [36].
With the divulgement of sequence-based technology in genomics and proteomics, enough pieces of information are available regarding different eukaryotic and prokaryotic organisms including viruses. Therefore, utilizing various bioinformatics tools, it is possible to design peptide-based vaccines through comprehensibly studying the epitopes and several studies suggested epitope-based vaccines against different diseases including dengue, chikungunya, Saint Louis encephalitis virus [37][38][39]. Although epitope-dependent vaccine design is quite familiar, little research works are done in the case of SARS-CoV-2. Being an RNA virus, SARS-CoV-2 is different from the DNA virus and the rate of mutation is higher than the DNA viruses and according to various research, it can be assumed that the mutations might occur in the N protein [40]. Recently, N proteins of SARS-CoV-2 are regarded as a primary target for vaccine development as its function includes viral replication and directly associated with the infection process, as a consequence related to the pathogenesis of COVID-19 [41]. Previous research works have already established that N proteins of several viruses and SARS are considered as a potential target for the development of vaccines [42][43][44][45]. Moreover, we already mentioned the detrimental role of SARS-CoV-2 in host-cell responses. This aspect led us to conduct in silico experiments for designing a peptide-based vaccine against the novel SARS-CoV-2.
Earlier, it has been thought that vaccine development primarily relies on B-cell immunity, but recent discovery unveiled that T-cell epitopes are more propitious as a result of a more long-lasting immune response mediated by CD8 + T-cells and due to the antigenic drift, by which an antibody is not able to respond against an antibody [46]. In this study, focusing on MHC class I potential peptide epitopes, we predicted T-cell and B-cell epitopes, which were able to show immune responses in various ways. Many characteristics including antigenicity, toxicity need to take into consideration for identifying a protein sequence-based epitope into a vaccine candidate and the predicted eight epitopes fulfilled the entire criterion. Toxicity analysis is regarded as an important parameter during design of a peptide sequence into a vaccine candidate. For instance, melittin, a major peptide of bee venom, is a promising candidate for cancer therapy, but due to its toxicity, its applicability has met with critical challenges [47]. In the current study, only five potent epitopes have been predicted from the NetCTL 1.2 server and the epitopes were further taken for the progressive analysis. Besides, all peptides except SSPDDQIGY were able to interact with the MHC class I alleles, and NTASWFTAL interacted with the most MHC class I alleles. Amongst them, HLA-A*68:02 possessed the highest probable score. Further, the conservancy of the epitopes, which was predicted by the IEDB conservancy analysis tool delineated that all of our predicted epitopes had the maximum identity of 100%. Apart from this, a computational study unraveled that the targeted epitope NTASWFTAL showed conservancy along with several epitopes from SARS-CoV-2 [48]. Previously, NTASWFTAL has been used in order to determine the ability to elicit the SARS-CoV immune response [49]. Furthermore, a previous study has already demonstrated that NTASWFTAL interacted with most of the HLA supertypes, including, HLA- [50]. The amino sequence GLPNNTASWFTALTQHGK of SARS-COV-2 N protein also demonstrated the characteristics of the B-cell epitope, which includes the targeted epitope NTASWFTAL [51]. Therefore, we took the epitope NTASWFTAL for further analysis due to its maximum interaction with MHC class I alleles and the highest conservancy.
Generally, allergy is considered as an overreaction of the immune system to a previously captured, harmless, normal protein in nature. True allergic reactions to vaccines are rare; however, their identification is crucial because they can be detrimental to the body [52]. Occasionally, the vaccine itself causes hypersensitivity due to the toxoids present in it. Hence, allergenicity is regarded as one of the most noteworthy obstacles in vaccine development. Importantly, T-cells not CD4 + T-cells are involved in an allergic reaction and an allergic reaction is stimulated by type 2 T helper cell along with immunoglobulin E [53]. In this experiment, we assessed the allergenicity using AllerTop 2.0, which is well recognized for its high sensitivity, and able to identify structurally diverse allergens in comparison with the known allergens. AllerTop predicted our selected epitope as non-allergen.
It has been proposed that the T-cell epitopes bind with the MHC molecules and MHC class I molecules generally presented short peptides that are 8-11 amino acid long, whereas MHC class II molecules present longer peptides with 13-17 amino acid residues [54]. In this experiment, we determined the binding (presence of the antigen on the surface) affinity of the predicted epitope using molecular docking analysis and demonstrated that NTASWFTAL interacted with HLA-A*68:02 and found a binding affinity of −9.4 kcal/mol, which depicted a greater interaction with the epitope and the HLA molecule as the more negative energy implied to more binding affinity [55]. In addition, our predicted epitope delineated greater binding affinities to HLA-A*68:02 than its native ligand. Importantly, a study from Zhang reported the highest binding affinity of NTASWFTAL towards the HLA-A2/A0201-restricted T-cell epitopes [56]. The results from the molecular docking studies in the current study also revealed that epitope NTASWFTAL formed H-bond with both chain-A and chain-B of the HLA molecule and attractive charges were also responsible for the binding.
Another factor that is considered as the most prominent one during the process of vaccine development is population coverage, as the distribution of HLA varies according to ethnicity and geographical region. Although after implementation of several clinical studies, genetic variability on a global scale could have an effect on the significant application of the vaccine candidates in humans [57]. Our experiment showed that the epitope NTASWFTAL covered almost all regions of the world, where the highest coverage was observed in East Asia, where COVID-19 was first reported. Interestingly, our findings indicated that our predicted epitope specifically binds with the widespread HLA molecules and the vaccine will be easily employed.
Importantly, the accurate prediction of T-cell epitopes along with B-cell epitopes is a crucial challenge for the immunoinformatics study and it is unlikely that different HLAs are expressed at different frequencies amongst the ethnic groups. However, substantial research in several in silico markers including the matrix-based profile, and regular expressions in the immunoinformatics study provide a cogent way for prediction of several immunobiological phenomena, for instance, the prediction of subcellular localization (SCL) of a protein is identified by several computational tools. Similarly, T-cell epitope identification has been undergone by implying numerous computational methods and various research areas, including cancer therapy and other infections, T-cell epitope identification is presently apparent [58][59][60]. Additionally, experimental methods established for the calculation of the binding interaction between MHC molecules and an antigenic protein is complicated and time-consuming. Hence, several computational tools have been introduced concerning simulation of the experimental methods, and the methods of the MHC binder prediction are based on motifs, quantitative matrices (QMs), ab initio prediction, machine-learning techniques, DiscoTope, etc. [61]. Several algorithms including PePSSI (peptide-MHC prediction of structure through solvated interfaces) and PREDEP (prediction of MHC class I epitopes) are implemented for the structural prediction and side-chain orientation of the binding proteins. In the current study, the prediction of MHC-I binding with T-cell antigenic peptides from the SARS-CoV-2 N protein sequences was done through the SMM algorithm, which incorporates proteasomal cleavage, TAP transport and MHC class I affinity into the final output and recent studies suggested that SMM is more established than other algorithms such as EpiJen and MAPPP [62][63][64].
Recently, other research works have suggested vaccine design from antigenic protein sequences of SARS-CoV-2 through utilizing in silico immunoinformatics-based methodologies. A study from Lee et al. reported a comprehensive list of antigenic peptides for vaccine development against SARS-CoV-2 [65]. However, the findings of the research work represented that the N protein patterns retained from SARS-CoV-2 were unable to interact with HLA alleles. Several other studies also delineated the high binding affinity of predicted epitopes towards HLA-A*24:02 and HLA-A*02:01 alleles respectively [66,67]. Conversely, in the current research work, our predicted epitope NTASWFTAL exhibited greater affinity towards HLA-A*68:02, predicted by NetCTL 1.2 server. Besides, molecular docking simulation unveiled the greater interaction between the predicted epitope and HLA-A*68:02 molecules. Moreover, our current study is in alignment with previous research work, which depicted peptide-based sequence against the S protein of the human coronavirus [36]. However, we cannot rule out the role of MHC class II peptides during the design of the epitope-based vaccine, as it plays a phenomenal role in humoral immunity through helping B-cells.
In addition, the B-cell epitope provides a strong immune response without causing any adverse effects. Generally, B-cell epitopes are either linear (continuous) or conformational (non-continuous) [68]. Importantly, flexible regions are observed in several crucial parts of a protein, including binding sites, catalytic sites, proteolytic cleavage susceptible sites, allosteric sites and most importantly the antigenic part of a protein sequence. Flexibility analysis is one of the major concerns for the identification of the surface residues forming a protein, which is further demonstrated as potential continuous epitopes [69]. For vaccine development, it would be crucial for predicting the antigenic region. In addition, hydrophilic amino acid residues are major determinants of antigenic features of a protein sequence, as the point highest hydrophilicity is located in or adjacent to an antigenic portion of the protein [70]. In this experiment, we also calculated the linear B-cell epitope prediction. It has been documented that peptide vaccines that are able to demonstrate immune responses against foreign particles contain peptides that are comprised of linear B cell epitopes [71]. B cell epitopes carry specific antigens that bind to the B lymphocytes, as a result they are recognized as potential antigenic determinants and are crucial for vaccine design [72]. In addition, B cell epitopes elicited a stronger immune response, but no side effects were observed. Recently, Grifoni et al. predicted B cell epitopes by utilizing the structural proteins of SARS-CoV and SARS-CoV-2 [73]. The Grifoni study predicted the identity of three peptide sequences from 42-62, 153-172 and 355-401 amino acid residues having an identity ≥ 90% [73]. In the current experiment, by using several tools from IEDB database, we predicted several B-cell epitopes from the SARS-CoV-2 N protein. As a consequence, our study predicted several B cell epitopes that were in line with those identified by Grifoni et al. (Table S2). Additionally, one of the predicted B-cell epitope from amino acid residues 154-166 was in agreement with the study from Amrun et al. (Table S2) [74]. Moreover, several studies have reported the characterization of B-cell epitopes from the N protein of many viruses from humans and animals [23,[75][76][77].
Recently, immunoinformatics-aided vaccine design has received experimental validation, which targeted multi-epitope protein clusters from Mycobacterium tuberculosis that interacted with HLA class I and II molecules and their prediction was experimentally validated through in vitro studies [78]. On the other hand, our study was more specific than some similar studies, for example, a study from Khan et al. had selected MHC-I alleles for which the epitopes representing higher affinity (IC 50 < 500 nm), but in our study, we showed that epitopes for MHC I alleles showing higher affinity (IC 50 < 200 nm), as peptides with minimum IC 50 values, exhibited greater inhibition [79,80]. In addition, we assessed immunogenicity, allergenicity and toxicity of the selected epitopes. Moreover, B-cell epitopes can pave the way for experimental epitope mapping and also crucial concerning the interpretation of results from several experiments, including ELISA, radioimmunoassay and Western blotting.
Of course, we understood that this research work does not claim to be exhaustive and all-inclusive as it is true that in silico works have its advantages and limitations. However, recently immunoinformatics is regarded as a new branch of computational biology techniques and is effective in the quest of new immunotherapeutics, amalgamating bioinformatics techniques to figure out several unique problems of vaccinology and immunology [81]. Epitope prediction can be regarded as a high parameter in immunoinformatics investigation, and immunoinformatics calculations are considered as the high frontier to develop effective vaccines true of the practical value. However, the experimental validations of the underlying approaches are required to establish a predicted epitope into a vaccine candidate. The accuracy of the predicted computational analysis should be corroborated by more accessible and robust laboratory experiments.
Protein Sequence Retrieval
The SARS-CoV-2 N protein sequence was extracted from the NCBI (National Center for Biotechnology Information) (Bethesda, MD, USA) protein database (Accession no.: QIC53221.1, GI: 1811294683) in the FASTA format.
Sequence Analysis
The understanding of the features, function, structure and evaluation is mainly based on the process of sequence analysis, which depicts the process of subjecting DNA, RNA or peptide sequences to wide ranges of analytical methods. We employed NCBI BLAST (Basic Local Alignment Search Tool) [82] that screens homologous sequences from its database and selects those sequences that are more similar to our SARS-CoV-2 N protein; we also performed multiple sequence alignment (MSA) using the ClustalW (Conway Institute, UCD, Dublin, Ireland) web server with default settings, and a phylogenetic tree was assembled using MEGA6 software [82][83][84].
Protein Antigenicity and Toxicity Prediction
To determine the potent antigenic protein of the SARS-CoV-2 N protein, we used the online server VaxiJen v2.0, with a default threshold value [85]. All the antigenic proteins of SARS-CoV-2 N protein with their respective scores were obtained then sorted in Notepad++. A single antigenic protein with maximum antigenicity scores was selected for further evaluation. The toxicity of epitopes was analyzed using the ToxinPred web server [86].
Protein Secondary and Tertiary Structure Prediction
The secondary structure of the SARS-CoV-2 N protein was predicted by using the SOPMA tool (Institute of Biology and Protein Chemistry, Lyon, France), which correctly predicts 69.5% of amino acids for a three-state description of the secondary structure (α-helix, β-sheet and coil) in a whole database [27]. Additionally, we predicted the 3D structure of the protein using Robetta (University of Washington, Seattle, WA, USA) server, which provides automated tools for prediction and analysis of the tertiary structure of the protein [28]. The model was validated using PROCHECK and PROSA web servers [87,88]. In addition, the 3D crystal structure of SARS-CoV-2 N protein (PDB ID: 6M3M) was downloaded from the Protein Data Bank (PDB) database for comparing the modeled 3D structure of the SARS-CoV-2 N protein.
CD8 + T-Cell Epitope Prediction
For the de novo prediction of the T-cell epitope, NetCTL 1.2 server (DTU Health Tech, Kongens Lyngby, Denmark) was used in this experiment, using a 0.95 threshold to maintain the sensitivity and specificity of 0.90 and 0.95, respectively. The tool expands the prediction for 12 MHC-I supertypes and integrates the prediction of peptide MHC-I binding and proteasomal C-terminal cleavage with TAP transport efficiency. These predictions were performed by an artificial neural network, weighted TAP transport efficiency matrix and a combined algorithm for MHC-I binding and proteasomal cleavage efficiency was then used to determine the overall scores and translated into sensitivity/specificity. Based on this overall score, five best peptides (epitopes) were selected for further evaluation.
For the prediction of peptides binding to MHC-I, we used a tool from the Immune Epitope Database (IEDB) (National Institute of Allergy and Infectious Diseases, Bethesda, MD, USA) and calculate IC 50 values for peptides binding to specific MHC-I molecules [89]. For the binding analysis, all the frequently used alleles were selected with a word length of nine residues and binding affinity <200 nm for further analysis. Another tool (named as MHC-NP) provided by the IEDB server was used to assess the probability that a given peptide was naturally processed and bound to a given MHC molecule [90].
Epitope Conservancy and Immunogenicity Prediction
The degree of similarity between the epitope and the target (i.e., given) sequence was elucidated by epitope conservancy. This property of the epitope gave us the promise of its availability in a range of different strains. Hence for the analysis of the epitope conservancy, the web-based tool from IEDB analysis resources was used [91]. Immunogenicity prediction can uncover the degree of influence (or efficiency) of the respective epitope to produce an immunogenic response. The T-cell class I pMHC immunogenicity predictor at IEDB, which uses amino acid properties as well as their position within the peptide to predict the immunogenicity of a class I peptide MHC (pMHC) complex [92].
Prediction of Population Coverage and Allergenicity Assessment
The population coverage tool from IEDB was applied to determine the population coverage for every single epitope by selecting HLA alleles of the corresponding epitope.
Allergenicity of the predicted epitope was calculated using AllerTop v2.0 (Medical University, Sofia, Bulgaria) [29], which is an alignment-free server, used for in silico based allergenicity prediction of a protein-based on its physiochemical properties.
Epitope Model Generation
The 3D structures of the selected epitopes were predicted by PEP-FOLD, a web-based server [93]. For each sequence, the server predicted five probable structures. The energy of each structure was determined by SWISS-PDB VIEWER and the structure with the lowest energy was chosen for further analysis [94].
Retrieval of the HLA Allele Molecule
The three-dimensional structure of the HLA-A*68:02 (PDB ID: 4I48) was retrieved from Protein Data Bank (RCSB-PDB).
Molecular Docking Analysis
Molecular docking analysis was performed using AutoDock vina (Scripps Research, La Jolla, CA, USA) in PyRx 0.8, by considering the HLA-A*68:02 molecule as the receptor protein and identified epitopes as the ligand molecule [95]. Firstly, we used the protein preparation wizard of UCSF Chimera (Version 1.11.2) to prepare the protein for docking analysis by deleting the attached ligand, adding hydrogens and Gasteiger-Marsili charges [96,97]. The prepared file was then added to the AutoDock wizard of PyRx 0.8 and converted into the pdbqt format. The energy form of the ligand was minimized and converted to the pdbqt format by OpenBabel [98]. The parameters used for the docking simulation were set to the default. The size of the grid box in AutoDock Vina was kept at 50.183 Å × 50.183 Å × 50.183 Å respectively, for X, Y and Z-axis. AutoDock Vina was implemented via the shell script offered by AutoDock Vina developers [99]. Docking results were observed by the negative score in kcal/mol, as the binding affinity of ligands are depicted in negative energies [100,101]. In addition, for validation of the docking approach, we selected 9-mer peptide from the envelope glycoprotein gp160 from human immunodeficiency virus (HIV) type 1 attached with the crystal structure of HLA-A*68:02 as a positive control and performed molecular docking analysis using the aforementioned similar parameters.
B-Cell Epitope Identification
The prediction of B-cell epitopes was performed to find the potential antigen that assures humoral immunity. To detect the B-cell epitope, various tools from IEDB were used to identify the B-cell antigenicity, together with the Emini surface accessibility prediction, Kolaskar and Tongaonkar antigenicity scale, Karplus and Schulz flexibility prediction and Bepipred linear epitope prediction analysis and since antigenic parts of a protein belonging to the beta-turn regions, the Chou and Fasman beta-turn prediction tool was also used [102][103][104][105][106][107].
Conclusions
The advancement in immunoinformatics has now emerged as a potential field for the prediction of epitope-based vaccines. As viruses can delineate both T-cell and humoral immunity, our predicted epitope might suggest enhancing the immunity against SARS-CoV-2. The assumption is based on the basic principles of immunity, which confers the attachment of the virus with the host cell, evoking immune responses and transfers the information to a broad spectrum of T cells and B cells. Our investigated epitopes mimicked the interaction to CD8 + cells antigen presentation using computational approaches. However, our study was an introductory design to predict epitope-based vaccine against SARS-CoV-2 and we hope that this predicted epitope would assist the further laboratory analysis for designing and predicting novel candidates against COVID-19.
Supplementary Materials: The following are available online. Data S1: Multiple sequence alignment of SARS-CoV-2 nucleocapsid protein; Figure S1: Evolutionary divergence analysis of available N proteins of different strains; results are represented in a phylogenetic tree; Figure S2 Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,343.6 | 2020-06-22T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Towards Integration of Domain Knowledge-Guided Feature Engineering and Deep Feature Learning in Surface Electromyography-Based Hand Movement Recognition
As a machine-learning-driven decision-making problem, the surface electromyography (sEMG)-based hand movement recognition is one of the key issues in robust control of noninvasive neural interfaces such as myoelectric prosthesis and rehabilitation robot. Despite the recent success in sEMG-based hand movement recognition using end-to-end deep feature learning technologies based on deep learning models, the performance of today's sEMG-based hand movement recognition system is still limited by the noisy, random, and nonstationary nature of sEMG signals and researchers have come up with a number of methods that improve sEMG-based hand movement via feature engineering. Aiming at achieving higher sEMG-based hand movement recognition accuracies while enabling a trade-off between performance and computational complexity, this study proposed a progressive fusion network (PFNet) framework, which improves sEMG-based hand movement recognition via integration of domain knowledge-guided feature engineering and deep feature learning. In particular, it learns high-level feature representations from raw sEMG signals and engineered time-frequency domain features via a feature learning network and a domain knowledge network, respectively, and then employs a 3-stage progressive fusion strategy to progressively fuse the two networks together and obtain the final decisions. Extensive experiments were conducted on five sEMG datasets to evaluate our proposed PFNet, and the experimental results showed that the proposed PFNet could achieve the average hand movement recognition accuracies of 87.8%, 85.4%, 68.3%, 71.7%, and 90.3% on the five datasets, respectively, which outperformed those achieved by the state of the arts.
Introduction
As a precise and noninvasive way of decoding user's intention of hand movements, the surface electromyography (sEMG)-based hand movement recognition has been extensively investigated in the area of rehabilitation engineering [1,2] and human-computer interaction [3,4]. Having realized that one of the key issues of sEMG-based hand movement recognition is a machine-learning-driven decision-making problem of classifying sequences of sEMG signals, many efforts have been made in improving sEMGbased hand movement recognition by designing more representative features [5], developing more sophisticated machine-learning models [6], and increasing the number of sensors [7].
From the perspective of machine learning, existing sEMG-based hand movement recognition approaches can be broadly categorized into (1) methods based on feature engineering and (2) methods based on feature learning [8].
e former refers to methods based on conventional shallow learning models and handcrafted time domain (TD), frequency domain (FD), or time-frequency domain (TFD) features, and the latter refers to methods based on end-toend deep learning models that can learn representative highlevel features from raw sEMG signals without relying on any engineered feature.
Over the past five years, feature learning approaches based on end-to-end deep learning models such as convolutional neural networks (CNNs) [9] and recurrent neural networks (RNNs) [10] have been widely studied in sEMGbased hand movement recognition. On the other hand, due to the noisy, random, and nonstationary nature of sEMG, researchers have also realized that achieving robust sEMGbased hand movement recognition accuracy remains a challenging issue for end-to-end deep learning models. For example, one of the early studies in this field revealed that the average hand movement recognition accuracy achieved by the end-to-end CNN model was significantly lower than that achieved by conventional shallow learning models such as random forests and support vector machine (SVM) on the large-scale noninvasive adaptive prosthetics (NinaPro) database [11]. Later studies on this database [12,13] presented more promising results achieved by the fine-tuned and manually optimized end-to-end deep learning models, which outperformed shallow learning models.
Compared with feature learning approaches, the hand movement recognition performance of conventional feature engineering approaches is largely dependent on the selection and extraction of features, which is usually done manually based on the domain knowledge accumulated through a vast quantity of experiments and evaluations in the field. Such heuristically accumulated domain knowledge is often thought to be useful in enhancing deep learning-based myoelectric pattern recognition [14].
us, a number of recent studies in this field have tried to extract and evaluate multiple engineered features as the input of their deep learning models. For example, Millar et al. [15] extracted a set of 11 TD features from sEMG signals for hand movement recognition using a long short-term memory (LSTM) model and achieved an averaged recognition accuracy of 99.8% in classifying a series of functional grasps on 2 diametric objects. Cheng et al. [16] extracted two TD features and one FD feature from sEMG signals and constructed them into the multi-sEMG feature image for hand movement recognition using a CNN model, and they achieved an averaged recognition accuracy of 82.5% in classifying 52 hand movements over 27 subjects. Allard et al. [17] evaluated different input modalities of a CNN model with transfer learning architecture and found that short-time Fourier transformbased spectrograms and continuous wavelet transform (CWT) features outperformed raw sEMG signals in classifying 7 hand movements over 17 subjects. Shen et al. [18] extracted FD and TFD features from sEMG signals, represented them by images, and used them for stacking ensemble CNN-based hand movement recognition, and they achieved an averaged recognition accuracy of 72.1% in classifying 40 hand movements over 10 subjects. Our previous study [14] extracted three sets of features from sEMG signals, constructed them into multi-view representations of sEMG signals for hand movement recognition, and achieved an averaged recognition accuracy of 83.7% in classifying 50 hand movements over 40 subjects.
To sum up, existing deep learning approaches for sEMG-based hand movement recognition can be categorized into end-to-end deep learning approaches and non-end-to-end deep learning approaches considering their input. Although the existing non-end-to-end deep learning approaches improved the sEMG-based hand movement recognition performance using engineered features instead of raw sEMG signals as their input, they to a considerable extent ignored the feature learning capability of deep learning models. In other words, their hand movement recognition performance was highly dependent on the selection of engineered features, which is usually based on domain knowledge or offline experimental results on a small set of data. Moreover, for methods that employed multiple engineered features as the input of deep learning models [14,18], the feature engineering process required additional computational time and resources, which limited their use in real-time systems.
erefore, in this study, we propose a progressive fusion network (PFNet), which aims at improving sEMG-based hand movement recognition via progressive integration of domain knowledge-guided feature engineering and CNN-based deep feature learning. In particular, the proposed PFNet architecture is composed of three parts, namely the feature learning network, the domain knowledge network, and the progressive fusion module. e feature learning network and the domain knowledge network learn high-level feature representations from raw sEMG signals and engineered features, respectively, and the two networks are progressively integrated together via a 3-stage process in the progressive fusion module. e major contributions of the proposed PFNet architecture are twofold: (1) We built up two independent neural networks, namely the feature learning network and the domain knowledge network, to separately learn discriminative high-level feature representations from raw sEMG signals and the wavelet packet-based TFD features that have been proven to be effective for sEMG-based hand movement recognition in early studies; thus, the hand movement recognition performance can be improved with the help of both deep feature learning and heuristically accumulated domain knowledge.
(2) We employed a 3-stage process to progressive integrated domain knowledge-guided feature engineering and deep feature learning in sEMG-based hand movement recognition. In particular, featurelevel fusion was performed at first to fuse the highlevel feature representations learned at two different depths of the feature learning network and the domain knowledge network together via two subnetworks, and then, the output decisions of the two subnetworks were fused together through decisionlevel fusion. Such a 3-stage integration strategy is believed to be capable of learning more diverse highlevel feature representations, which is helpful for improving the hand movement recognition performance.
2 Computational Intelligence and Neuroscience e experimental results on five datasets not only proved the effectiveness of integration of domain knowledge-guided feature engineering and deep feature learning in sEMGbased hand movement recognition, but also indicated that our approach outperformed other state-of-the-art methods.
Datasets and Data
Preprocessing. Experiments in this study were carried out on 5 subdatasets of the NinaPro repository [19], which provides publicly available multichannel sEMG signals recorded from intact subjects and trans-radial amputees. Table 1 presents brief information of the sEMG datasets used in this study, and detailed descriptions are as follows: e first subdataset of NinaPro (denoted as NinaP-roDB1) provides 10-channel sEMG signals collected from 53 hand movements performed by 27 healthy subjects. e hand movements in NinaProDB1 were categorized into 12 finger movements (denoted as Exercise A), 17 wrist movements and hand postures (denoted as Exercise B), 23 grasping and functional movements (denoted as Exercise C), and the rest movement, and each hand movement was repeated 10 times (i.e., 10 trials per hand movement) [20]. As most of the existing studies on this NinaProDB1 excluded the rest movement from their experiments [10,12,14,22], in our experiments we also excluded the rest movement for the convenience of performance comparison. e second subdataset of NinaPro (denoted as NinaP-roDB2) provides 12-channel sEMG signals collected from 50 hand movements performed by 40 healthy subjects. e hand movements in NinaProDB2 were categorized into 17 wrist movements and hand postures (i.e., as same as Exercise B in NinaProDB1), 23 grasping and functional movements (i.e., as same as Exercise C in NinaProDB1), 9 force patterns (denoted as Exercise D), and the rest movement, and each hand movement was repeated 6 times (i.e., 6 trials per hand movement) [20]. e third subdataset of NinaPro (denoted as NinaP-roDB3) provides 12-channel sEMG signals collected from 50 hand movements performed by 11 trans-radial amputee subjects. e hand movements in NinaProDB3 are exactly the same as those in NinaProDB2, and each hand movement was repeated 6 times (i.e., 6 trials per hand movement) [20]. According to Atzori et al. [20], during the data recording process of NinaProDB3 three trans-radial amputee subjects interrupted the experiment before its end due to fatigue or pain, and two trans-radial amputee subjects used only 10 electrodes to collect sEMG signals due to insufficient space. e data from these subjects were omitted in our experiments to ensure that the number of hand movement repetitions, as well as the number of sEMG channels for each subject, was the same. e fourth subdataset of NinaPro (denoted as NinaP-roDB4) provides 12-channel sEMG signals collected from 53 hand movements performed by 10 healthy subjects. e hand movements in NinaProDB4 are exactly the same as those in NinaProDB1, and each hand movement was repeated 6 times (i.e., 6 trials per hand movement) [21]. After checking the data, we found that two subjects (i.e., subject 4 and subject 6) did not complete all hand movements, and their data were omitted in our experiments. e fifth subdataset of NinaPro (denoted as NinaP-roDB5) provides 16-channel sEMG signals collected from 53 hand movements performed by 10 healthy subjects. e hand movements in NinaProDB5 are exactly the same as those in NinaProDB1, and each hand movement was repeated 6 times (i.e., 6 trials per hand movement) [21]. A subset of 41 hand movements were classified in our experiments, and the specifications of the selected hand movements can be found in [21]. e sEMG signals in NinaProDB1 were recorded by Otto Bock 13E200-50 electrodes at a sampling rate of 100 Hz, the sEMG signals in NinaProDB2 and DB3 were recorded by Delsys Trigno Wireless electrodes at a sampling rate of 2k Hz, and the sEMG signals in NinaProDB4 were recorded by Cometa Wave Plus Wireless sEMG system at a sampling rate of 2k Hz [20,21]. Because of the memory limitation, we downsampled the sEMG signals in NinaProDB2-NinaP-roDB4 from 2k Hz to 100 Hz. e same experimental configuration was also adopted in [14]. e raw sEMG signals in each dataset were segmented by sliding windows. As early studies [23,24] have indicated that the maximum allowable time delay of real-time myoelectric control systems is 300 ms, and for all experiments in this study, we employed sliding windows that were no longer than 200 ms to segment raw sEMG signals. Detailed information of the sliding window lengths and steps used in this study will be presented in the results and discussion section of this study.
Domain Knowledge-Guided Feature Engineering and
Feature Augmentation. Discrete wavelet transform (DWT) is a time-frequency analysis approach that iteratively decomposes the original discrete time series into wavelet coefficients in multiresolution sub-bands via a set of half-band filters that are established based on a pair of orthogonal wavelet basis functions [25]. As shown in Figure 1(a), at the first wavelet level, a half-band low-pass filter and a half-band high-pass filter decompose the original signals X into two sequences of coefficients in the lower resolution space, namely the scaling coefficients C A , which are the approximate representation of X, and wavelet coefficients C D , which are the detailed representation of X, respectively. Such process is iteratively repeated on the decomposed scaling coefficients at the subsequent wavelet levels, resulting in a two-channel tree structure that subsamples the signals by 2 at each node. e discrete wavelet packet transform (DWPT) is an extension of DWT, in which not only scaling coefficients but also wavelet coefficients are decomposed into two sequences of coefficients in the lower resolution space at each wavelet level. As shown in Figure 1(b), when the wavelet level k � 3, the outputs of DWPT are composed of a total of 2 3 � 8 sequences of DWPT coefficients (DWPTCs), which can be regarded as the multiresolution representation of original signals X in 8 sub-bands. e DWPT has been widely used in sEMG-based hand movement recognition as a feature engineering technique for the extraction of TFD features. Conventional shallow learning methods usually extract statistic features, such as energy, average value, standard deviation, skewness, and kurtosis. Conventional shallow learning methods usually extract statistic features, such as energy, average value, standard deviation, skewness, and kurtosis from DWPTCs as the input of their classifiers [26,27], while most of the state of the arts adopt the strategy of using images generated from DWPTCs in all sub-bands to form the input of deep neural networks [14,18]. In our previous study [14], a total of 11 engineered features and feature sets were evaluated as the input of a CNN model for sEMG-based hand movement recognition, and the results showed that the hand movement recognition accuracy achieved by DWPTCs on different datasets outperformed all other features and feature sets.
Based on the aforementioned domain knowledge, the DWPTCs were extracted from raw sEMG signals in this study to generate the input images of the domain knowledge network. e DWPT hyperparameters used in this study are exactly the same as those used in our previous study [14]. In particular, we used the Daubechies 1 wavelet basis function, and the wavelet level k was set to log 2 N , where N is the length of input signals (i.e., length of the sliding window). For each sEMG channel, the resulting 2 k DWPTC sequences in all sub-bands were concatenated together to form a DWPTC vector, and the DWPTC vectors from all sEMG channels were stacked into a DWPTC image.
Two DWPTC images extracted from each sliding window were further augmented by the algorithm proposed by Jiang and Yin [28]. Such feature augmentation strategy, which was also adopted in our previous study [14], enables every sEMG channel to have a chance to be adjacent to every other channel via channel reorganization, thus providing additional spatial correlations between nonadjacent sEMG channels for the deep learning model. Suppose the DWPTC image extracted from each frame sliding window has a shape of D × C, where C is the number of sEMG channels, the D × C DWPTC image was reorganized into an D × M image after feature augmentation. When C � 10, we have M � 50, and when C � 12, we have M � 72. Figure 2 demonstrates the architecture of our proposed PFNet, which consists of a feature learning network, a domain knowledge network, and the progressive fusion module. Suppose N-frame sliding windows are used to segment C-channel sEMG signals, the
Proposed PFNet Architecture.
Input signals x Computational Intelligence and Neuroscience input images of feature learning network are N × C sEMG images, which are formed by stacking C-channel raw sEMG signals together, and the input images of the domain knowledge network are D × M reorganized DWPTC images, which has been discussed in the previous subsection.
Feature Learning
Network. e feature learning network performs feature learning on raw sEMG signals, and it is composed of two convolutional layers with 3 × 3 filters, two locally connected layers with 11 filters, and one fully connected layer with 512 hidden units. e number of output feature maps of every neural network layer in the feature learning network was set to 64. e feature learning network shares the same architecture with the first four neural network layers of GengNet [12], which showed promising sEMG-based hand movement recognition performance in existing studies [12][13][14].
Domain Knowledge Network.
e domain knowledge network learns high-level feature representations from reorganized DWPTC images. e network architecture of domain knowledge network is slightly different from the feature learning network, and it is composed of one convolutional layer with 1 × 1 filters, one convolutional layer with 2 × 2 filters, two locally connected layers with 1 × 1 filters, and one fully connected layer with 1024 hidden units. Computational Intelligence and Neuroscience 5 e number of output feature maps of every neural network layer in the domain knowledge network was also set to 64.
Progressive Fusion Module.
Conventional fusion methods for dealing with feature vectors obtained from multiple sources can be categorized into feature-level fusion and decision-level fusion, and the former concatenates the feature vectors and feeds the resulting feature vector into the classifier, while the latter builds up independent classifiers for feature vector from each data source and then fuses the decisions together to form the final decisions [29].
In this study, we proposed the progressive fusion module as shown in Figure 3 for fusion of feature learning network and domain knowledge network, which is able to obtain more diverse high-level feature representations via a 3-stage fusion process. Suppose F f 4 and F d 4 denote the flattened feature maps learned by the 4th neural network layers (i.e., the 2nd locally connected layers) of feature learning network and domain knowledge network, respectively, and F f 5 and F d 5 denote the feature vectors learned by the 5th neural network layers (i.e., the 1st fully connected layers) of feature learning network and domain knowledge network, respectively, the 3-stage fusion process can be formulated as follows.
1st-stage fusion (feature-level fusion): 2nd-stage fusion (feature-level fusion): 3rd-stage fusion (decision-level fusion): Here, ‖ denotes the concatenation operation and ⊕ denotes the element-wise summation operation, H i (i � 1, 2) are two subnetworks for feature-level fusion of high-level features learned at two different depths of feature learning network and domain knowledge network, and θ i and y i refer to their parameters and output decisions, respectively. As shown in equation (3), the output decisions of two subnetworks H 1 and H 2 , which are in the form of softmax scores, are finally summed up at the 3rd-stage fusion to obtain the final decision (classification result). For a more distinct view of the two subnetworks for feature-level fusion in the 3-stage progressive fusion process, we marked the 1st and 2nd subnetworks (i.e., H 1 and H 2 ) with blue and red lines, respectively, in Figure 3.
Neural Network Configurations and Hyperparameter
Settings. We applied batch normalization [30] to each neural network layer of the PFNet to reduce the internal covariate shift and rectified linear unit (ReLU) activation function [31] after each neural network layer to fasten the training process. As shown in Figure 3, we also applied dropout regularization [32] after five neural network layers (i.e., the 2nd locally connected layers and the 1st fully connected layers of the feature learning network and the domain knowledge network, as well as the 1st fully connected layer of the 1st subnetwork H 1 ) to avoid overfitting.
To prevent overfitting, for all experiments in this study we employed a pre-training strategy that has been widely used in sEMG-based hand movement recognition systems [10,[12][13][14]33]. In particular, during each experiment, we firstly pre-trained a model using all available training data and then used the pre-trained model as the initial model in each fold of the validation. e pre-training and training were based on stochastic gradient descent (SGD) algorithm with batch size of 1000, and the number of training epochs was set to 28. To improve convergence, we also applied a learning rate decay strategy [34], which initialized the learning rate at 0.1 and divided it by 10 at the 16th and 24th epochs, respectively. For layers with dropout regularization, the dropout rate was set to 0.5 during pre-training and set to 0.65 during training.
Evaluation Metrics.
For the convenience of performance comparison, the evaluation metrics used in this study were the same as those used in existing studies on the NinaPro dataset [10,12,14,20,22,33,35]. In particular, we followed the intra-subject classification schemes proposed by the author of NinaPro dataset [20,21], which used sEMG signals from approximately 2/3 of the hand movement repetitions performed by each subject as the training set and sEMG signals from the remaining hand movement repetitions performed by the same subject as the test set. e final hand movement recognition accuracy on each dataset is obtained by averaging the achieved accuracies over all subjects. e selection of training and test set on different subdatasets of NinaPro can be described as follows: NinaProDB1: the sEMG signals from the 1st, 3rd, 4th, 6th, 7th, 8th, and 9th repetitions of all hand movements are used as the training set, while the sEMG signals from the 2nd, 5th, and 10th repetitions of all hand movements constitute the test set. NinaProDB2, NinaProDB3, NinaProDB4, and NinaProDB5: the sEMG signals from the 1st, 3rd, 4th, and 6th repetitions of all hand movements are used as the training set, while the sEMG signals from the 2nd and 5th repetitions of all hand movements constitute the test set.
Computational Time and Efficiency.
All experiments in this study were performed offline with MXNet [36] on a NVIDIA GeForce GTX 1080 Ti GPU. In our experiments, the hardware factors that affected the computational time and training speed include not only GPU utilization percentage, but also the network throughput, as all of the offline experimental data (i.e., sEMG signals) are stored on a network-attached storage (NAS) device; thus, it is hard to estimate the computational time of our proposed PFNet for 6 Computational Intelligence and Neuroscience sEMG-based hand movement recognition in real-world scenarios. Even so, we calculated the approximate computational time and efficiency for training, which are as follows. e training of each fold (i.e., each subject) of intrasubject evaluation took approximately 23-30 minutes on NinaProDB1, 11-17 minutes on NinaProDB2, 18-20 minutes on NinaProDB3, 37-39 minutes on NinaProDB4, and 3-4 minutes on NinaProDB5, and the training speed on NinaProDB1, NinaProDB2, NinaProDB3, NinaProDB4, and NinaProDB5 was approximately 3500 samples per second, 3300 samples per second, 6400 samples per second, 3300 samples per second, and 3500 samples per second, respectively.
Ablation Studies on the Proposed Method.
In machine learning, "ablation studies" usually refer to a procedure to evaluate certain parts of the deep neural network, where the other parts of the deep neural network are removed from the evaluation. In this study, we conducted two ablation studies on the proposed PFNet to verify its effectiveness, which can be described as follows: (1) Ablation Study 1: a performance comparison among the proposed PFNet, PFNet without the domain knowledge network and its input (denoted as FLonly), and PFNet without the feature learning network and its input (denoted as DKonly), to verify the effectiveness of integration of domain knowledge-guided feature engineering and deep feature learning in sEMG-based hand movement recognition. e neural network architectures of FLonly and DKonly are illustrated in Figures 4(a) and 4(b), respectively.
(2) Ablation Study 2: a performance comparison among different approaches for fusion of feature learning network and domain knowledge network, including the proposed progressive fusion module, the decision-level (i.e., score) fusion approach, and two feature-level fusion approaches.
For all experiments in these ablation studies, the sliding window length was set to 200 ms, and the window step was set to 10 ms except for experiments on NinaProDB5, in Computational Intelligence and Neuroscience which we followed the experimental configuration used by Pizzolato et al. [21] and our previous study [14] that set the window step to 100 ms. Figure 5 demonstrates the average hand movement recognition accuracies achieved by FLonly, DKonly, and our proposed PFNet. e experimental results showed that our proposed PFNet outperformed both FLonly and DKonly on all datasets (i.e., NinaProDB1-NinaProDB5). In particular, the average hand movement recognition accuracies achieved by our proposed PFNet were 87.8 ± 4.2%, 85.4 ± 5.1%, 68.3 ± 9.2%, 71.7 ± 7.4%, and 90.3 ± 3.2% on NinaProDB1, NinaProDB2, NinaProDB3, NinaProDB4, and NinaP-roDB5, respectively, which were much higher than those achieved by the FLonly architecture (i.e., 84.0 ± 5.2%, 80.8 ± 5.7%, 48.6 ± 8.0%, 69.9 ± 7.9%, and 72.7 ± 4.1% on NinaProDB1, NinaProDB2, NinaProDB3, NinaProDB4, and NinaProDB5, respectively). Compared with FLonly, the average hand movement recognition accuracies achieved by the DKonly architecture were much closer to, but also significantly outperformed by those achieved by the proposed PFNet, which were 87.4 ± 4.2%, 85.1 ± 5.2%, 66.6 ± 9.4%, 71.2 ± 7.5%, and 89.6 ± 3.7% on NinaProDB1, NinaProDB2, NinaProDB3, NinaProDB4, and NinaP-roDB5, respectively. e experimental results in Ablation Study 1 showed that the integration of domain knowledge-guided feature engineering and deep feature learning is an effective way of improving sEMG-based hand movement recognition. Although the increase in input data may increase computational complexity, the computational time and training speed presented in Section 3.1 are still acceptable for realworld sEMG-based hand movement recognition systems. Moreover, compared with other deep learning methods that relied only on domain knowledge-guided feature engineering [14,18], the integration of domain knowledgeguided feature engineering and deep feature learning achieves the balance between hand movement recognition performance and computational complexity, which is meaningful for real-time application scenarios.
In Ablation Study 2, we carried out a performance comparison among different methods for fusion of feature learning network and domain knowledge network, including our proposed progressive fusion module, the decisionlevel (i.e., score) fusion approach (as illustrated in Figure 4(c)), a feature-level fusion approach (denoted as stage 1 feature-level fusion, as illustrated in Figure 4(d)) that is equivalent to PFNet without stage 2 fusion and stage 3 fusion, and a feature-level fusion approach (denoted as stage 2 feature-level fusion, as illustrated in Figure 4(e)) that is equivalent to PFNet without stage 1 fusion and stage 3 fusion. For the decision-level fusion approach, the number of hidden units of 2nd fully connected layer in both feature learning network and domain knowledge network was set to 512, which is exactly the same as the number of hidden units of the second last fully connected layers in the 1st and 2nd subnetworks. Figure 6 demonstrates the average hand movement recognition accuracies achieved by decision-level fusion, stage 1 feature-level fusion, stage 2 feature-level fusion, and our proposed PFNet. According to the experimental results, the 3-stage progressive fusion was able to achieve higher sEMG-based hand movement recognition accuracies than the conventional single-stage feature-level fusion (e.g., stage 1 feature-level fusion and stage 2 feature-level fusion) approaches or decision-level fusion approach. However, we also found that the performance gap between the proposed progressive fusion module and conventional fusion approaches was not significant. For example, stage 1 feature-
Comparison with the State of the Arts.
We also compared the average hand movement recognition accuracies achieved by the proposed PFNet with those achieved by the state of the arts. For a fair performance comparison, we only considered the state of the arts that used the same intra-subject classification schemes as described in Section 2.4, and we evaluated the hand movement recognition accuracies achieved with sliding windows of 50 ms, 100 ms, 150 ms, and 200 ms. Window step settings were the same as those used in the ablation studies, except for experiments on NinaProDB5 with 50 ms, 100 ms, and 150 ms sliding windows, in which we set the window step to 10 ms. Table 2 presents the hand movement recognition accuracies achieved by our proposed PFNet and the state of the arts on NinaProDB1, NinaProDB2, NinaProDB3, NinaP-roDB4, and NinaProDB5. According to the experimental results, our proposed PFNet achieved higher hand movement recognition accuracies than all the state-of-the-art deep learning methods [10-14, 16, 18, 22, 37, 38] and shallow learning methods [20,21] listed in Table 2 on NinaProDB2, NinaProDB3, NinaProDB4, and NinaProDB5. On NinaProDB1, our proposed PFNet was outperformed by MV-CNN, which was proposed in our previous study [14]. On the other hand, it should be mentioned that MV-CNN is a multi-view deep learning method that used three highdimensional feature sets as its input, and the performance gap between PFNet and MV-CNN was insignificant on NinaProDB1.
ese results indicate that our proposed PFNet framework can effectively improve sEMG-based hand movement recognition with the help of both feature learning and domain knowledge-guided feature engineering.
Conclusion
Aiming at improving sEMG-based hand movement recognition, this study proposed a progressive fusion network (PFNet) framework, which learns high-level feature representations from raw sEMG signals and discrete wavelet packet transform coefficients (DWPTCs) via a feature learning network and a domain knowledge network, respectively, and then employs a progressive fusion module to fuse the two networks together via a 3-stage process and obtain the final decisions.
Ablation studies were conducted on five open-source sEMG datasets (i.e., NinaProDB1-NinaProDB5), and the experimental results proved the effectiveness of integration of domain knowledge-guided feature engineering and deep feature learning in sEMG-based hand movement recognition, as well as the effectiveness of the proposed progressive fusion module.
Moreover, we also carried out performance comparison with the state of the arts on NinaProDB1-NinaProDB5. e experimental results showed that the proposed PFNet could Computational Intelligence and Neuroscience achieve the average hand movement recognition accuracies of 87.8 ± 4.2%, 85.4 ± 5..1%, 68.3 ± 9.2%, 71.7 ± 7.4%, and 90.3 ± 3.2% on NinaProDB1, NinaProDB2, NinaProDB3, NinaProDB4, and NinaProDB5, respectively, which outperformed those achieved by the state-of-the-art methods on most of the evaluated datasets. Compared with our recently proposed method that used multiple engineered feature sets as its input [14], our proposed PFNet could achieve higher or almost the same hand movement recognition accuracies with only one type of engineered feature. Future improvement of the proposed PFNet framework will focus on simplification of the deep neural network architecture while maintaining its performance, as real-time sEMG-based hand movement recognition systems usually required a more lightweight machine-learning model with fewer parameters and less computational complexity.
Data Availability
e sEMG signals supporting the findings of this study are from the NinaPro dataset, which is publicly available at ninapro.hevs.ch. e papers describing the NinaPro dataset are cited at relevant places within the text as references [20,21]. e processed data and trained deep neural networks used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare that there are no conflicts of interest regarding the publication of this study. | 7,310.8 | 2021-12-29T00:00:00.000 | [
"Computer Science"
] |
Invariant universality for projective planes
We continue the work of [1, 2, 3] by analyzing the equivalence relation of bi-embeddability on various classes of countable planes, most notably the class of countable non-Desarguesian projective planes. We use constructions of the second author to show that these equivalence relations are invariantly universal, in the sense of [3], and thus in particular complete analytic. We also introduce a new kind of Borel reducibility relation for standard Borel G-spaces, which requires the preservation of stabilizers, and explain its connection with the notion of full embeddings commonly considered in category theory.
Introduction Definition 1.
A plane is a system of points and lines satisfying: (A) every pair of distinct points determines a unique line; (B) every pair of distinct lines intersects in at most one point; (C) every line contains at least two points; (D) there exist at least three non-collinear points.A plane is projective if in addition: (B') every pair of lines intersects in exactly one point.A plane is simple if except for a finite number of points every point is incident with at most two non-trivial lines (i.e.lines containing more than two points).
The class of simple planes and the class of (non-Desarguesian) projective planes are first-order classes, and so we can regard them as standard Borel spaces, and use invariant descriptive set theory to analyze the complexity of analytic equivalence relations defined on them.We recall that a binary relation R defined on a standard Borel space X is called analytic (or Σ 1 1 ), if it is an analytic subset of the product space X × X, i.e., it is the projection of a Borel set B ⊆ Y × X × X, for some Polish space Y .
The main tool to compare equivalence relations is what is known as Borel reducibility.If E and F are two equivalence relations on the standard Borel spaces X and Y , we say that E Borel reduces to F (and write E ≤ B F ) if there is a Borel map f : X → Y witnessing that x E y ⇐⇒ f (x) F f (y), for every x, y ∈ X.We can take the statement "E Borel reduces to F " as a formal way of saying that E is not more complicated than F , as any set of complete invariants for F includes a set of complete invariants for E. When E ≤ B F and F ≤ B E, the complexity of E and F is considered the same, and we say that E and F are Borel bi-reducible (in symbols, E ∼ B F ).
In [9] the authors proved that the bi-embeddability relation ≡ Gr on countable graphs is a complete analytic equivalence relation.That is, ≡ Gr is a ≤ B -maximum among all analytic equivalence relations.It follows that ≡ Gr is strictly more complicated than any isomorphism relation between countable structures, and so it can be argued that the problem of classifying countable graphs up to bi-embeddability is highly intractable.
In [4] the authors proved that the bi-embeddability relation on countable graphs is analytic complete in a very strong sense: every analytic equivalence relation is Borel bi-reducible with the restriction of ≡ Gr to some L ω1ω -subclass of the standard Borel space of countable graphs.Such property reappeared thereafter in [3], where it was considered in a more general framework and called invariant universalitythe definition given in [3] is stated for all analytic equivalence relations (not only for those defined on spaces of countable structures).
Next, the work of [3] was continued by the first author of this paper et al., who proved invariant universality for the bi-embeddibility relation on several L ω1ωclasses, which include countable groups (cf.[2, Theorem 3.5]), and countable fields of fixed characteristic p = 2 (cf.[1,Theorem 5.12]).The main technique used in [1] and [2] require to have a Borel reduction from the bi-embeddability relation between graphs to the bi-embeddability relation on the class under consideration, and the possibility to explicitly describe the automorphism group of each structure in the image of the reduction.
In [13] the second author proved the Borel completeness of both the class of simple planes and the class of non-Desarguesian projective planes.That is, the isomorphism relation on both of those classes of planes is a ≤ B -maximum for all orbit equivalence relations arising from a Borel action of S ∞ , the Polish group of permutations on N. In each case he defined a Borel reduction from the isomorphism relation between countable graphs to the isomorphism relation between the class under consideration.Furthermore, his constructions have the remarkable additional property of preserving automorphism groups.As we point down in the last section this feature is common to many categorical construction which give a full embedding between two L ω1ω -class, and can be adapted to define a Borel reduction between the isomorphism relations defined on the corresponding standard Borel spaces.
Our aim in this paper is twofold: • To study the bi-embeddability relation on the classes of countable planes previously considered in [13], with the stipulation that the bi-embeddability relation between planes coincide with the bi-embeddability relation between the corresponding geometric lattices.• To develop some generalities on the kind of stabilizer preserving Borel reduction (or SPB reduction for short) mentioned above.
Concerning the first aim, we use the main constructions of [13] to prove: Theorem 2. The bi-embeddability relation ≡ pl between countable simple planes is invariantly universal.
Theorem 3. The bi-embeddability relation ≡ ppl between countable non-Desarguesian projective planes is invariantly universal.Consequently, the bi-embeddability relation in the class of countable non-Desarguesian projective planes is strictly more complicated than isomorphism.In fact, we get that ≡ ppl is a complete analytic equivalence relation in the sense of [9, Definition 1.2].It follows that we cannot classify the class of countable non-Desarguesian projective planes up to bi-embeddability in any reasonable way: neither in terms of Ulm-type invariants, nor in terms of orbits of Polish group actions.
Concerning the second aim, we point out how in some cases SPB reductions can be obtained from the existing literature in category theory and list a couple of open questions.
Invariant Universality
Following [9] we consider Borel reducibility between quasi-orders, i.e., reflexive and transitive binary relations.Definition 6.Let Q and R be quasi-orders on the standard Borel spaces X and Y . • In this case we say that f is a Borel reduction from Q to R (in symbols, In particular, when Q and R are equivalence relations, one obtains the usual notion of Borel reducibility previously mentioned in the introduction.When Q is an analytic quasi-order on X and A is a Borel subset of X, we can regard A as a standard Borel space with its relative standard Borel structure and the quasi-order on A obtained by the restriction of Q.We shall denote by Q ↾ A the restriction of Q over A.
We now recall the main definitions from [3, Definition 1.1].
Definition 7. Let Q be a Σ 1 1 quasi-order on some standard Borel space X and let E be a Σ 1 1 equivalence subrelation of Q.We say that (Q, E) is invariantly universal if for every Σ 1 1 quasi-order P there is a Borel subset A ⊆ X which is E-invariant and such that P ∼ B Q ↾ A. Definition 8. Let F be a Σ 1 1 equivalence relation on some standard Borel space X and let E be a Σ 1 1 equivalence subrelation of F .We say that (F, E) is invariantly universal if for every Σ 1 1 equivalence relation D there is a Borel subset A ⊆ X which is E-invariant and such that D ∼ B F ↾ A.
Notice that if (F, E) is invariantly universal, then F is in particular a complete analytic equivalence relation in the sense of [9, Definition 1.2].Moreover, our interest for quasi-orders is easily explained: if (Q, E) is an invariantly universal quasi-order and E Q is the equivalence relation generated by Q, then (E Q , E) is an invariantly universal equivalence relation.
Throughout this paper we will make use of the following notation.
Notation 9. Let X be a standard Borel space of countable structures.
(i) We denote by ⊑ X (or simply ⊑) the embeddability relation on X.
(ii) We denote by ∼ =X (or simply ∼ =) the isomorphism relation on X.
(iii) We denote by ≡ X (or simply ≡) the bi-embeddability relation on X.
(iv) We say that the quasi-order The following fact is an immediate consequence of López-Escobar theorem (cf.[8, Theorem 16.8]) and gives a further insight of the phenomenon of invariantly universality on spaces of countable structures.Fact 10.If X is a standard Borel space of countable structures, and F is a Σ 1 1 equivalence relation on X, then F is invariantly universal if and only if every Σ 1 1 equivalence relation is Borel bi-reducible with the restriction of F to some L ω1ω subclass of X.
We now present a sufficient condition for invariant universality.Let X Gr be the standard Borel space of countable graphs.First we abstract the following fact from [3, Section 3].Fact 11.There is a Borel subset X ⊆ X Gr such that the following hold: (i) the equality and isomorphism relations restricted to X, denoted respectively by = X and ∼ =X, coincide; (ii) each graph in X is rigid; that is, it has no non-trivial automorphism; (iii) for every Σ 1 1 quasi-order P on 2 N , there exists an injective Borel reduction α → T α from P to ⊑ X .Notation 12.We denote by S ∞ the Polish group of permutations on N, and by Subg(S ∞ ) the standard Borel space of closed subgroup of S ∞ , endowed with the Effros-Borel structure (see [8,Section 12.C]).Now we recall the following fact, which is a particular case of [3, Theorem 4.2].
Fact 13.Let X be a standard Borel space of countable structures.Then the relation ⊑ X is an invariantly universal quasi-order provided that the following conditions hold: (I) there is a Borel map f : X → X such that: We stress: Remark 14.Since every graph in X is rigid, whenever the reduction f witnessing (I) of Fact 13 further preserves the automorphisms groups, condition (II) is automatically satisfied.Proof of Theorem 3. Argue as in the proof of Theorem 2 using Theorem 26.
Proof of Corollary 5.The statement follows from Theorem 3 and Fact 10.
SPB reductions
In this section we will denote by G a Polish group and by X and Y two standard Borel spaces.If a : G × X → X is a Borel action of G on X, we shall denote by E a the orbit equivalence relation arising from a (i.e., xE a y ⇐⇒ ∃g ∈ G (a(g, x) = y)).The stabilizer of any point x ∈ X is the subgroup We stress the following: Items (1)-( 2) of both Theorem 25 and Theorem 26 can be briefly reformulated as follows.
Theorem 29 ( [13]).The following SPB reductions hold: We highlight the following fact which follows directly from Fact 13 and Remark 14, and exhibits how Theorem 29 can be used to prove Theorem 2 and Theorem 3.
Fact 30.Let X be a standard Borel space of countable structures.Then the relation ⊑ X is an invariantly universal quasi-order provided that there is a Borel map f : X → X such that: Some examples of SPB reductions directly follow from the existence of full embeddings between categories.In category theory there has been quite a lot of work concerning the complexity of different categories by means of (categorical) embeddings 3 .Several classical examples of categorical embedding typically concern categories whose objects are algebraic structures of a fixed type, and whose 3 The categorical notion of embedding should not be confused with the one of embedding between structures.morphisms are the respective homomorphisms (or embedding) between those structures.A comprehensive reference for this kind of results is the book [14].One of the strongest notion of (categorical) embedding that has been considered in the literature is the one of full embedding, an injective functor which further induces a bijection between the morphisms in the domain category and the morphisms in the target category.An example of full embedding is given by the constructions of the second author, that we previously mentioned in the statements of Theorem 25 and Theorem 26.E.g., the map Γ → P * Γ can be redefined for the category of all graphs, regardless of their cardinality (and in fact this is the setting of [13]), to prove the following: Theorem 32 (essentially [13]).There exists a full embedding from the category of graphs together with graph embeddings into the category of non-Desarguesian projective planes together with planes embeddings (recall Convention 23).
Our interest in full embeddings is easily explained.First, notice that any L ω1ωclass C can be regarded as a category -the morphisms of C are the usual embeddings between the structures which C is formed by.Then, next proposition explains how certain full embedding induce a Borel reduction.
Proposition 33.Let C and D be two L ω1ω -classes so that we can consider the corresponding standard Borel spaces X C and X D .Suppose that F is a full embedding from C into D such that (i) F preserves countability; (ii) F can be realized as a Borel function from X C to X D ; i.e., there is a Borel function f : X C → X D such that for every x ∈ X D , f (x) ∼ = F (x).Then, the isomorphism relation ∼ =C SPB reduces to ∼ =D.
Proof.Since F is full, for every x, y, the sets of isomorphisms between x and y and their images, respectively denoted by Iso(x, y) and Iso(F (x), F (y)), are isomorphic via the map Iso(x, y) → Iso(F (x), F (y)) : h → F (h).In particular, for every x ∈ X C , the map: is a bijection, indeed it is a group isomorphism.Now let f : X C → X D be a Borel function as in (ii).Since every F (x) is isomorphic to f (x), we have that for every x ∈ X C , Aut(x) ∼ = Aut(F (x)).
We add one more comment to Proposition 33.Following the approach of [12], we can regard the subcategories of C and D formed by X C and X D , respectively, together the isomorphism maps as analytic groupoids.The SPB reduction we get is in particular a functorial reduction (see [12,Definition 2.8.1]).
The following full embeddings between categories are well-known in the literature.When not specified, we consider categories with respect to embeddings homomrphisms as morphisms.
Fact 34.There is a full embedding from the category of graphs into any of the following categories.
One can check that for any of the aforementioned categorical embeddings, items (i)-(ii) of Proposition 33 are satisfied, thus we obtain the following.
Proposition 35. The isomorphism relation between countable graphs ∼ =Gr SPB reduces to any of the following isomorphism relation
• the isomorphism relation between countable partial orders ∼ =PO; • the isomorphism relation between countable semigroups ∼ =Smg ; • the isomorphism relation between countable unital rings ∼ =Rng 1 .
We conclude this section with a few more thoughts about SPB reductions.If the L ω1ω -elementary classes X and Y are Borel complete, then the isomorphism relations ∼ =X and ∼ =Y are necessarily Borel bi-reducible, but they need not be SPB bi-reducible.E.g., the isomorphism relation between countable graphs ∼ =Gr does not SPB reduce to isomorphism between countable groups ∼ =Gp, because every infinite countable group have nontrivial automorphisms.
Let ∼ =Tr be the isomorphism relation between countable trees (i.e., connected acyclic graphs) and ∼ =LO be the isomorphism relation between countable linear orders.Although ∼ =Tr and ∼ =LO have been known to be Borel complete, they are not equivalent to ∼ =Gr up to faithful Borel reducibility (cf.[6,Theorem 4.5]).It is then natural to ask the following questions.
Corollary 4 .
Every Σ 1 1 equivalence relation is Borel bi-reducible with the bi-embeddability relation restricted to some L ω1ω -subclass of countable simple planes.Corollary 5. Every Σ 1 1 equivalence relation is Borel bi-reducible with the bi-embeddability relation restricted to some L ω1ω -subclass of countable non-Desarguesian projective planes.
Proof of Theorem 2 .
Consider the restriction of the map Γ → P Γ on X from Fact 11.By items (1) and (3) of Theorem 25 the map Γ → P Γ simultaneously reduces ⊑ X to ⊑ pl and ∼ =X to ∼ =pl.Condition (II) of Fact 13 follows by Theorem 25(2) and Remark 14.The statement now follows from Fact 13.Proof of Corollary 4. The statement follows from Theorem 2 and Fact 10.
Definition 31 .
If C and D are categories, a full embedding F from C into D is a functor F : C → D such that • F is injective on the objects of C; • for every a, b the map Hom C (a, b) → Hom D (F (a), F (b)) : f → F (f ) is a bijection. | 4,012.4 | 2018-01-30T00:00:00.000 | [
"Mathematics"
] |
Exploring the Influence of Resource Management on Learning Strategies in the Learning of Foreign Languages
Language learning strategies (LLS) are essential tools utilized by individuals to facilitate the process of acquiring proficiency in a foreign or second language. These strategies encompass a wide range of cognitive and metacognitive techniques. Understanding the role and effective implementation of these strategies is pivotal in promoting successful language acquisition and fostering learner autonomy. This study aims to identify students' perceptions of their deployment of resource management, metacognitive self-regulation, and cognitive strategies, and to evaluate whether there is a relationship between resource management and metacognitive self-regulation and cognitive strategies. This study used a quantitative research method by employing a questionnaire 5 Likert-scale survey and is adapted from Wenden and Rubin (1987). A total of 118 students from a public university in Malaysia participated in this study. The questionnaire has 4 sections with 41 items consisting of demographic profile, cognitive components, metacognitive self-regulation and resource management. The data were then transferred to SPSS 26 applying the mean and the Pearson correlation test. According to the research, students seeking help is one the most among the components of resource management, followed by environmental management and effort management. In terms of metacognitive self-regulation, students favour employing a variety of approaches, such as reviewing difficult topics, and actively attempting to assess their comprehension of curriculum content. Students have a significant tendency for relating
Background of Study
The significance of studying a foreign language has been mentioned in the Malaysian Education Blueprint 2015-2025 (Jabatan Pendidikan Tinggi, n.d.), which has also been acknowledged as one of the key components in realizing the country's vision of being a fully developed nation.A foreign language is defined as a language spoken by people who speak other than their native language (Boon et al., 2021).Many universities have considered foreign language courses, such as Mandarin, Arabic, French, Korean, and Japanese, a graduation requirement.(Singh et al., 2021).Foreign language proficiency is systematically related with meaningful and productive participation in politics, security, global trade, and education (Zubairi & Sarudin, 2009).It is possible to assert that individuals who can speak multiple languages are better able to compete in the international marketplace and thrive in wide, multiethnic work environments (Singh et al., 2021).Persistently, educational leaders emphasize on the necessity of increasing foreign language proficiency among students nowadays (Christian et al;2005).The field of foreign language education has shifted away from teacher-focused instructional learning and toward learner-centered learning that is centred on the learner's characteristics.Students must be self-directed learners and apply certain strategies and learning styles in order to attain language learning objectives.(Ayu, 2018, Wahyudin & Rido, 2020;Lestari & Wahyudin, 2020).Language learning strategies (LLS) emerged in the 1970s, when research was focused on reflecting the characteristics of a successful language student and the differences in learning success (Lestari & Wahyudin, 2020).There are many learning strategies that have been produced by many scholars and researchers such as Wenden & Rubin (1987), Oxford (1990).Learning techniques are intended to help instructors improve the effectiveness of the language learning process (Lestari & Fatimah, 2020).Thus, it is essential to conduct study on reviewing learning strategies by integrating one strategy with others to provide even more effective and better results for students.
Statement of Problem
Every learning process necessitates the adoption of a method or strategy to achieve its primary objective.Important components of the learning process include the "what" and "how" of learning tools.However, while acquiring a language, humans employ a variety of strategies, some of which are effective and others which are ineffective (Hardan, 2013).The study of LLS has been a subject of research for over three decades, as it has been observed that foreign language learners employ and utilize a diverse range of LLS with high frequency and effectiveness during the process of acquiring a new language (Mila & Gutierrez-Mangado, 2019;Habok, Kong, Ragchaa, & Magyar, 2021).Previous studies have shown that strategies play an important part in the process of learning a language as well as the success of the process (Lai, Saab, & Admiraal, 2022).It has also been noted by studies that these strategies make it possible for students to obtain more responsibility for their own learning and success and that LLS is believed to be one of the most important elements accounting for individual variances in language acquisition (Qasimnejad & Hemmati, 2014).Extensive scholarly inquiry into language learning processes has resulted in the identification and classification of diverse strategies (Seng, Mustafa, Halim, Rahmat, & Amali, 2023).Current research focuses on self-regulated learning which can be differentiated into three main categories which are cognitive, metacognitive, and resource-management strategies ( Biwer, et al., 2021).Cognitive strategies encompass a range of tactics and procedures employed to facilitate the processing and storage of information in an efficient and effective manner (Yang, Zeng, & Xu, 2021), while the term metacognition refers to the capacity of individuals to possess knowledge about their cognitive processes, actively monitor these processes as they occur, exert control over them, and make necessary adjustments in order to optimize the learning experience (Mitsea & Drigas, 2019).Biwer, et al. (2021) and Yusri, Rahimi, and Halim (2011) mentioned the four components of resource management strategies are time and study management, effort regulation, peer learning, and assistance seeking.Many studies have been done to investigate the LLS in learning foreign languages.In a study conducted by Ahmad (2020), the objective was to investigate the level of awareness and utilization of cognitive and metacognitive reading strategies among Omani EFL students from various academic disciplines.The findings indicated that the participants exhibited a preference for cognitive strategies while demonstrating a reluctance to employ metacognitive strategies due to perceived difficulties associated with their implementation.On the other, studies about how foreign-language students at the undergraduate level learn languages showed that metacognition is the most used approach (Lestari & Wahyudin, 2020;Alqarni, 2023;Masitoh, Arifa, Ifawati, & Sholihah, 2023).In their research, Zubir et al. (2023) found that the students studied in a good setting, worked hard on their studies, and asked for help when they needed it.Similarly, a lot of research has been done on how learning strategies affect academic performance.Effort regulation and time management strategies were found to have the strongest relationship between reported strategy use and academic success (Waldeyer, et al., 2022).Given the significant gap between studies on the influence of resource management on learning strategies in the learning of foreign languages, particularly in the learning of Arabic as a foreign language, it is necessary to investigate students' awareness and the relationship between these strategies.
Objective of the Study and Research Questions
This study is done to explore perception of learners on their use of learning strategies.Specifically, this study is done to answer the following questions; • How do learners perceive resource management strategies in their learning?
• How do learners perceive metacognitive self-regulation strategies in their learning?
• How do learners perceive the use of cognitive strategies in their learning?
• Is there a relationship between resource management with metacognition selfregulation and cognitive strategies?
Language Learning Strategies
The awareness of LLS started and developed in 1970's (Adan, D. A., & Hashim, H., 2021, Kosimova, A., 2022& Hardan, A. A., 2013).Learning strategies are defined differently based on what kind strategies that had been used and the way it was used by the language learner.Language strategies as specific actions to enhance language learning effectively in an enjoyable, easier, faster, more enjoyable and more self-directed way, (Oxford, 1990).Zubbir et.al., (2023), as mentioned by Thomas et. al., (2021), LLS can be defined as a conceptualization that can be compared between factors while focusing on the language strategies method used.
Learning strategies for foreign language Learning foreign language as a second or third language for non-native speakers is quite challenging.Generally, engaging LLS help to assist learners to enhance and improve the language performance in language acquisition (Adan, D. A., & Hashim, H., 2021).This also, on the other hand, is applicable to third languages, specifically Arabic language.Xuan et al, (2020) stated that, being aware of the LLS choice helps to utilize and significantly improve learners' performance in Arabic language, instead of solely depending on the learning environment.Thus, suitable learning strategies for learning foreign languages are needed to enhance the language learning process and motivate the students.
Past Studies on the Use of Learning Strategies
In Malaysia, learning foreign languages has become a new trend in the language field as it was offered in educational institutions starting from primary school until university level.Generally, English is the second language for Malaysians while Arabic, Mandarin, French and Korean and other foreign languages are regarded as third languages in Malaysia.Since these languages are taught widely in multiple levels of educational institutions, there have been many past studies on learning strategies and motivation to learn foreign languages.The following are previous studies on learning strategies in language learning.Calafato (2020), studied the motivation, metacognition and autonomy in learning Arabic Language in Scandinavian.A total of 96 university students were involved in the study which take place in Norway, Sweden and Denmark.The results indicates that there are significant statistical difference among the student's motivation in learning Arabic language.Moreover, the results also revealed the gender difference also found in the student's self-regulations.In line with this study objective, understanding the factors of student's motivation helps to provide them with suitable tools and learning strategies in order to enhance and sustain the learning process (Calafato, 2020).Adan, D. A., & Hashim, H. (2021) studied LLS used by art school English Second Language (ESL) learners.The respondents consists of 77 pupils from 7 to 17 years old with different talents in art.Using the Strategy Inventory Language Learning (SILL), the data were collected by questionnaire distribution to the respondents.The result shows that the most employed LLS are Metacognitive strategies while the least usage strategies are compensation strategies.Moreover, LLS is considered as an important item in language learning that effectively helps language learners to improve their proficiency in language acquisition.In the realm of learning strategies, Zaini et al. (2023) conducted a study titled 'Exploring the Relationship between Learning Strategies Used in Language Learning.This quantitative study undertaken among undergraduate students from universities in Malaysia aims to investigate the impact of students' learning strategies on language learning, with a particular focus on exploring motivation factors.Using a 5-point Likert scale survey based on Wenden and Rubin's (1987) learning strategies, 129 participants responded to the survey to reveal the variables.
The survey comprises four sections: Section A covers demographic information, Section B contains 19 cognitive component items, Section C includes 11 metacognitive self-regulation items, and Section D addresses 11 items related to resource management.The findings indicate that metacognitive self-regulation positively enhances learning by enabling individuals to adapt their learning strategies, identify areas for improvement in understanding, set study goals, and encourage persistence and seeking help from peers when facing challenges in understanding the material.Zubbir et al., (2023) investigated the use of learning strategies through reciprocal determinism in learning Japanese language.144 undergraduate students learning Japanese as a third language were involved in the quantitative survey.The merge of Bandura's (1986) reciprocal determinism and Wenden & Ruben (1987) were divided into four sections in the survey.The instruments consist of 41 items, divided into three sections and analyzed using SPSS Frequency Statistics.The findings revealed that the students generally claimed they practiced saying repetitively to themselves, memorized the key words to be reminded of the key concepts and studied the learned class materials.This study also shows that Metacognitive self-regulation is the most positive learning strategies used by the students that align with study conducted by Zaini et al., (2023).
A study was conducted by Seng et al. (2023) to investigate the learning strategies used by 132 undergraduate students in learning French as the third language at one of the public universities.This quantitative study survey instrument consists of three (3) sections, demographic profile, direct learning strategies and indirect learning strategies analyzed using SPSS.As for the result, the highest mean score (M=3.8)most employed by rehearsal items and the critical thinking strategy has the least mean score (M=3.5) is the lowest direct learning strategy used by the students.In contrast with the finding from Zaini et. al, (2023) and Zubbir et. al, (2023), this research findings revealed that help seeking strategy in direct learning strategies scored the highest mean whereas the metacognitive self-regulation has the lowest mean score.On the other hand, this study proves that there are strong relationships between direct and indirect strategies in foreign language learning.It is also suggested to determine if gender factors could influence the choices of learning strategies (Seng et al, 2023).
The majority of the studies, Adan, D. A., & Hashim, H. (2021), Zaini et al. (2023) & Zubir et al., (2023) shows that metacognitive self-regulation learning strategies is the most employed by the students in learning foreign language.Differ from these studies, the study from Seng et al. (2023) shows the contrast results which help seeking in LLS scored the highest usage strategies while metacognitive scored the lowest mean score.
Conceptual Framework
Learners depend on different factors of motivation to make them pursue the learning task further.This motivation pushes the learners to be satisfied with the learning task (Rahmat, 2021).Motivate learners display confidence in learning.Figure 1 shows the conceptual framework of the study.This study is rooted from Wenden and Rubin's (1987) learning strategies and the strategies are resource management, metacognitive self-regulation and cognitive components.This study explores the relationship between resource management with metacognitive self-regulation and cognitive components.
Cognitive Components
The cognitive components can be defined as students actively processing information and structuring the information into memory; it also allows students to analyse information and connect it to existing cognitive structures.McKeachie et al. (1986) developed a unique cognitive model that included rehearsal, elaboration, and organisation.Rehearsal: A strategy that students use to retain material through repetition, such as reciting the topic loudly, reproducing the material, taking selective verbatim notes, and highlighting the most essential sections of the subject (Weinstein & Mayer, 1986).Elaboration: A process that students use to create internal connections between what they are learning and past knowledge.Activities include paraphrasing, summarising, generating analogies, generative note-taking, and question responding (McKeachie et al., 1986).Organization: Organization is the process through which students organise and connect the knowledge obtained in the learning environment.The procedure entails picking the key idea through outlining, networking, and diagramming the data (McKeachie et al., 1986).
Metacognitive Self-Regulation
The metacognitive component focuses on the skills students use to organise their learning techniques, monitor their current learning, and assess their knowledge in a range of subject areas.It is a highly effective method for enhancing self-regulation since it encourages students to assess their knowledge.Regulating process including altering reading speed, rereading, reviewing, or test-taking (Filcher&Miller, 2000) Resource Management Resource management tactics are utilised by many students as a learning strategy.It is a plan that considers the quantity and quality of work involvement.It consists of three components: environment management, effort management, and help-seeking.This technique focuses on establishing well-defined goals and planning the course to achieve the greatest results (McKeachie et al., 1986).Environment management: the development of a learning-friendly environment.Mckeachie et al. (1986) noted that the aspect of the environment is just as essential as the student's awareness that the space is designated for studying.Therefore, students are advised to identify and designate a distinct, peaceful, and organised study place.Effort management: The process whereby a student employs strategies such as attribution to effort, mood, self-talk, persistence, and self-reinforcement (McKeachie et al., 1986).Help-seeking: A strategy that is also known as support of others, and students must learn to harness this support by soliciting assistance from fellow students and the instructor.Eastmond (1995) confirmed the significance of student-instructor connection when students contacted their teachers while completing the course assignment.
METHODOLOGY
This quantitative study is done to explore motivation factors for learning among undergraduates.A purposive sample of 182 participants responded to the survey.The instrument used is a 5 Likert-scale survey and is rooted from Wenden and Rubin (1987) Table 2 shows the reliability of the survey.The analysis shows a Cronbach alpha of .965,thus, revealing a good reliability of the instrument chosen/used.Data was gathered online using a Google form.SPSS version 26 was then used to analyse the data.To answer the four research questions, the analysed data is presented in the form of percentages and mean scores.
FINDINGS
Findings for Demographic Profile Q1 Gender Figure 2-Percentage for Gender Figure 2 illustrates the gender distribution of the study respondents.Most of the respondents (65%) were male, while only a minority (35%) were female.This distribution sheds light on the gender demographics of the sample, emphasizing the predominant presence of males in the study population.
Q2 Age Regarding the highest academic level among the students, Figure 4 depicts that 67% possess a degree, followed by 24% with a diploma, 4% with a Matriculation/Foundation background, and another 4% holding STAM qualifications, thus highlighting the diversity of educational backgrounds within the sample.Despite their academic level, most students are still pursuing undergraduate courses, possibly due to a misunderstanding about their prior academic accomplishments.
Q4 Course: Introductory Arabic showed students did make good use of their study time for the courses and make sure to keep up with the weekly readings and assignments for the courses.This is followed by items RMCEMQ1 (M=3.7)shows that respondents usually study in a place where they can concentrate on their course work.The item that received the lowest score is RMCEMQ3I (M=3.5), which is 'I have a regular place set aside for studying.'This indicates that most of the respondents do not have a designated specific and consistent location where they regularly engage in their study or learning activities.
(b)Effort Management (4 items) RMCEMQ 4Even when course materials are dull and uninteresting, I manage to keep working until I finish.
Figure 9-Mean for Help-Seeking Figure 9 portrays the means for Help-Seeking which is one of the components in resource management strategies.Only two items have appeared in this variable.Students strongly agreed that when they cannot understand the material in a course, they will ask another student in the class for help (M=4.1).They will also try to identify students in the class whom they can ask for help if necessary (M=4.0).
Findings for Metacognitive Self-Regulation
This section presents findings to answer research question 2-How do learners perceive metacognitive self-regulation strategies in their learning?a. Metacognitive Self-Regulation (11 items) 1 MSSRQ1During class time, I often miss important points because I am thinking of other things.
3 2 MSSRQ 2When reading for the courses, I make up questions to help focus my reading.RMCHSQ 2I try to identify students in the classes whom I can ask for help if necessary.
9
MSSRQ 9When studying for the courses in this program I try to determine which concepts I do not understand well.
3.6
10 MSSRQ 10When I study for the courses, I set goals for myself in order to direct my activities in each study period.
3.6
11 MSSRQ 11If I get confused taking notes in classes, I make sure I sort it out afterwards.
3.6
Figure 10-Mean for Metacognitive Self-Regulation Figure 10 indicates the mean score for 11 items related to Metacognitive Self-regulation strategies with the highest mean (M=3.8) which shows most of the students agreed that when they become confused about something they are reading for the classes, they will go back and try to figure it out.Similarly, most of the students agreed with these 3 items which have the same mean score of 3.6 stating that when they are studying for the courses in this program they try to determine which concepts they do not understand well, and the second one is when they study for the courses, they set goals for themselves in order to direct their activities in each study period, and the last one is if they get confused taking notes in classes, but they make sure they sort it out afterward.In contradiction, most of the students do not agree during class time, they often miss important points because they are thinking of other things, as shown by the lowest mean score (M=3).
Findings for Cognitive Strategies
This section presents findings to answer research question 3-How do learners perceive the use of cognitive strategies in their learning?Specifically in learning the Arabic language.There are 19 items for cognitive strategies in language learning that were divided into four subcomponents, a) Rehearsal, b) Organization, c) Elaboration, and d) Critical Thinking.
a. Rehearsal (4 items) Figure 11-Mean for Rehearsal Figure 11 showed the mean score for cognitive strategies first components (rehearsal) used by the students in learning Arabic language.The highest mean score for rehearsal items is (M= 3.9) in which the majority of the students memorize key words to remember the important concept LSCCRQ 3 in learning Arabic language.While the other three (3) rehearsal items shared the same mean score (M= 3.6).Majority of students get prepared for the class by practicing saying the material (LSCCRQ 1) and reading course reading including class notes (LSCCRQ 2) over and over.Other than that, one of the other strategies used was make lists of important items and memorize the lists for the courses (LSCCRQ 4) 3.6 3.6 3.9 3.6 3.45 3.5 3.55 3.6 3.65 3.7 3.75 3.8 3.85 3.9 3.95 LSCCRQ1When I study for the classes, I practice saying the material to myself over and over.
LSCCRQ 2When studying for the courses, I read my class notes and the course readings over and over again.
LSCCRQ 3I memorize key words to remind me of important concepts in this class.
LSCCRQ 4I make lists of important items for the courses and memorize the lists.
b. Organization (4 items) Figure 12-Mean for Organization The results for the second component in cognitive strategies (organization) indicated that the highest mean score was (M= 3.9) for LSCCOQ 2. It shows that the most frequent strategy used by the students in learning Arabic language was to go through the readings and class notes to find the important ideas.It is followed by LSCCOQ 4 mean score (M= 3.8) where the students make outlines for the important concepts from the notes given.While, for item LSCCOQ 1 mean score (M= 3.6) the students organize the thoughts using outline materials from reading.The least mean score for organization in cognitive strategies was (M= 3.2) for item LSCCOQ 3 which students seldom use simple charts, diagrams or tables in organizing the course materials for learning Arabic language. 3.6 3.9 3.2 3.8 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 LSCCOQ1 When I study the readings for the courses in the program, I outline the material to help me organize my thoughts.
LSCCOQ 2When I study for the courses, I go through the readings and my class notes and try to find the most important ideas.
LSCCOQ 3I make simple charts, diagrams, or tables to help me organize course materials in this program.
LSCCOQ 4When I study for the courses, I go over my class notes and make an outline of important concepts.
c. Elaboration (6 items) Figure 13-Mean for Elaboration Figure 13 illustrates that the students agreed for all items.The highest mean score was recorded for LSCCEQ 3 (M= 3.9) in which students relate the material to the general knowledge while reading for the course.It is followed by LSCCEQ 1 and LSCCEQ 5 sharing the same mean score (M= 3.8).On the other hand, LSCCEQ 2 and LSCCEQ6 both items' mean score was (M= 3.6).Whereas the lowest mean score for the elaboration items for cognitive strategies components was recorded by LSCCEQ 4 (M= 3.5), the strategy used was to write brief summaries for the main ideas from reading and class notes.LSCCEQ 5I try to understand the material in the classes by making connections between the readings and the concepts from the lectures.
LSCCEQ 6I try to apply ideas from course readings in other class activities such as lecture and discussion.
d. Critical Thinking (5 items) Figure 14-Mean for Critical Thinking The results for Figure 14 indicated that the most used cognitive strategies for critical components were LSCCCTQ 1 and LSCCCTQ 4 where both items mean score were the highest (M= 3.6).The students often questioning things that they hear or read in the course to find it convincing and play around with their own ideas that relate to the course.Meanwhile, the other three (3) critical thinking components shared the same mean score (M= 3.5) for LSCCCTQ 2, LSCCCTQ 3 and LSCCCTQ 5.Among the average strategies used by the students were, deciding good supporting evidence from theory, interpretation or conclusion, using course materials as a starting point in developing their own ideas and finding or thinking possible alternatives from assertion or conclusions.
Findings for Relationship between resource management with metacognition self-regulation and cognitive strategies
This section presents findings to answer research question 4-Is there a relationship between resource management with metacognition self-regulation and cognitive strategies?To determine if there is a significant association in the mean scores between metacognitive, effort regulation, cognitive, social and affective strategies data is anlaysed using SPSS for correlations.Results are presented separately in table 3, 4, and 5 below.LSCCCTQ 2When a theory, interpretation, or conclusion is presented in classes or in the readings, I try to decide if there is good supporting evidence.
LSCCCTQ 3I treat the course materials as a starting point and try to develop my own ideas about it.
LSCCCTQ 4I try to play around with ideas of my own related to what I am learning in the courses.
LSCCCTQ 5Whenever I read or hear an assertion or conclusion in the classes, I think about possible alternatives.
Table 3-Correlation between Resource Management and Metacognitive Self-Regulation
Table 3 shows there is an association between resource management and metacognitive selfregulation.Correlation analysis shows that there is a high significant association between resource management and metacognitive self-regulation (r=.785**) and (p=.000).According to Jackson (2015), coefficient is significant at the .05level and positive correlation is measured on a 0.1 to 1.0 scale.Weak positive correlation would be in the range of 0.1 to 0.3, moderate positive correlation from 0.3 to 0.5, and strong positive correlation from 0.5 to 1.0.This means that there is also a strong positive relationship between resource management and metacognitive self-regulation.
Table 4-Correlation between Metacognitive Self-Regulation and Cognitive Components
Table 4 shows there is an association between metacognitive self-regulation and cognitive components.Correlation analysis shows that there is a high significant association between metacognitive self-regulation and cognitive components (r=.797**) and (p=.000).According to Jackson (2015), coefficient is significant at the .05level and positive correlation is measured on a 0.1 to 1.0 scale.Weak positive correlation would be in the range of 0.1 to 0.3, moderate positive correlation from 0.3 to 0.5, and strong positive correlation from 0.5 to 1.0.This means that there is also a strong positive relationship between metacognitive self-regulation and cognitive components.
Table 5-Correlation between Cognitive Components and Resource Management
Table 5 shows there is an association between cognitive components and resource management.Correlation analysis shows that there is a high significant association between cognitive components and resource management (r=.718**) and (p=.000).According to Jackson (2015), coefficient is significant at the .05level and positive correlation is measured on a 0.1 to 1.0 scale.Weak positive correlation would be in the range of 0.1 to 0.3, moderate positive correlation from 0.3 to 0.5, and strong positive correlation from 0.5 to 1.0.This means that there is also a strong positive relationship between cognitive components and resource management.
Summary of Findings and Discussions
The findings of this survey revealed that among the components of resource management, students choose to seek help the most, followed by environmental management and effort management.Results indicate a persistent desire to seek assistance from classmates in the same course and programme, as well as a proactive effort to identify possible sources of support inside the class.In addition, students indicate a high level of agreement with consistently attending classes, and they are able to finish their assignments despite finding the course material dull and uninteresting.The results for Metacognitive Self-Regulation students show that, on average, students use a variety of tactics to improve their cognitive control and learning processes.Notably, students engage in activities such as revisiting challenging concepts and they actively attempt to determine their understanding of course concepts, set goals for their study activities, and clarify any confusion that arises, either during or after class.In line with the findings of Teng et al. (2021), the use of metacognitive methods demonstrates a proactive approach to learning and a readiness to adapt learning strategies to course requirements.According to the findings, students are most interested in practising elaboration strategies, followed by rehearsal, organisation, and critical thinking.As per the data, students show a strong tendency to relate course materials to prior knowledge, and they frequently prioritise discovering essential ideas from course materials and adopting memorization techniques such as memorising key words.Students also make an attempt to understand the content by connecting readings and lecture concepts, and they regularly integrate information from many sources, such as lectures, readings, and discussion.It is consistent with the findings of Zaini et al. (2023) and Mustajab Ahmed (2020), in which students employed these strategies to improve their academic performance and overall study results.The finding also revealed that there is positive inter-correlation between resource management, metacognitive self-regulation, and cognitive components.This finding sheds light on the dynamic relationship that exists between these strategies, highlighting the necessity of both of these factors working together to create academic achievement and learning outcomes.Students who demonstrate higher levels of self-regulatory metacognitive abilities are also more likely to exhibit higher levels of cognitive abilities, including critical thinking, problem-solving, and knowledge application.The results are aligned with the findings of Mohammadi et al. (2022), who discovered a high and positive correlation between academic well-being and the adoption of cognitive and metacognitive strategies by college students.
Pedagogical Implications
The findings of this study have substantial educational implications.First, it empowers students by allowing them to recognise and choose the most effective learning tactics depending on personal needs and preferences.The study highlights the necessity of giving students the ability to actively implement these selected techniques, ensuring that they are not merely aware of them but can also employ them effectively in their academic pursuits.The research assists students in sustaining ably enhancing their academic performance, establishing a culture of lifelong learning and educational environment development.Teachers can immediately implement the suggested learning strategies in their classrooms, giving students concrete examples and direction.This implementation can help students comprehend and experience these ideas in action, hence allowing a deeper understanding of efficient learning strategies.Teachers are able to identify and recommend the most effective learning strategies for student learning if they understand the different requirements and preferences of their students.In conclusion, these outcomes highlight the responsibility of teachers as facilitators of successful learning practices and advocates for tailored education, ultimately leading to increased student results and engagement.Suggestions for future research Future study in the subject of learning strategies should focus on a number of essential areas.First, additional research into the efficacy of diverse learning strategies, especially in various educational contexts and for various subjects or disciplines.Second, as online learning continues to grow in popularity, there is an urgent need for study examining the adaptation and optimization of these strategies across digital contexts, including their incorporation into e-learning platforms and virtual classrooms.Thirdly, instructors' viewpoints and awareness of language strategies can provide useful insights into educational procedures and how educators can better serve language learners if they are explored in depth.Exploring the complex relationship between learning techniques and student motivation could provide alternative methods for enhancing students' intrinsic motivation to learn, thus encouraging a more comprehensive and effective educational experience.
Contribution of study
The issue of learning techniques is explored in this paper, and including resource management, cognitive components, and metacognitive self-regulation.Additionally, the influence between these strategies has been examined, providing information and data that aids students in selecting the best strategies for learning a foreign language.This research also prepares scholars, academics, and teachers with deeper understanding, knowledge, and explanation of learning strategies in the teaching of foreign languages.
Figure 1 -
Figure 1-Conceptual Framework of the Study-The Influence of resource management on learning strategies
Figure 5 -
Figure 5-Percentage for Course-Introductory Arabic become confused about something I am reading for the classes, I go back and try to figure it out.change the way I study in order to fit any course requirements and the instructors' teaching style.
think through a topic and decide what I am supposed to learn from it rather than just reading it over when studying for the courses in this program.understand the material in a course, I ask another student in the class for help.
, I often miss important points because I am thinking of… MSSRQ 2When reading for the courses, I make up questions to help focus my… MSSRQ 3When I become confused about something I am reading for the classes, I… MSSRQ 4If course readings are difficult to understand, I change the way I read the… MSSRQ 5Before I study new course material thoroughly, I often skim it to see… MSSRQ 6I ask myself questions to make sure I understand the material I have… MSSRQ7I try to change the way I study in order to fit any course requirements and … MSSRQ8I try to think through a topic and decide what I am supposed to learn from… MSSRQ 9When studying for the courses in this program I try to determine which… MSSRQ 10When I study for the courses, I set goals for myself in order to direct my… MSSRQ 11If I get confused taking notes in classes, I make sure I sort it out afterwards.
study for the courses in this program, I pull together information from different sources, such as lectures, readings, and discussions.LSCCEQ 2I try to relate ideas in one subject to those in other courses whenever possible LSCCEQ 3When reading for the courses, I try to relate the material to what I already know.LSCCEQ 4When I study for the courses in this program, I write brief summaries of the main ideas from the readings and my class notes.
myself questioning things I hear or read in the courses to decide if I find them convincing.
Table 2 -
to reveal the variables in table 1 below.The survey has 4 sections.Section A has items on demographic profile.Section B has 19 items on cognitive components.Section C has 11 items on metacognitive self-regulation and section D has 11 items on resource management.Reliability of Survey | 7,953.4 | 2023-11-07T00:00:00.000 | [
"Education",
"Linguistics"
] |
Yang-Baxter deformations of the AdS5 × T1,1 superstring and their backgrounds
We consider three-parameter Yang-Baxter deformations of the AdS5× T1,1 superstring for abelian r-matrices which are solutions of the classical Yang-Baxter equation. We find the NSNS fields of two new backgrounds which are dual to the dipole deformed Klebanov-Witten gauge theory and to the nonrelativistic Klebanov-Witten gauge theory with Schrödinger symmetry.
Introduction
The AdS/CFT correspondence conjectures that certain gauge theories have a dual description in terms of string theories. The first case of the AdS/CFT correspondence states that N = 4 supersymmetric Yang-Mills theory on a four-dimensional flat spacetime is dual to type IIB superstring theory propagating in AdS 5 × S 5 [1]. One of the most important features of the AdS/CFT correspondence is its integrability which in the string theory side is associated to the existence of a Lax connection ensuring the existence of an infinite number of conserved charges. In the case of AdS 5 × S 5 superstring, the theory is described by a σ-model on the supercoset PSU(2,2|4) SO(1,4)×SO(5) [2] and the Z 4 -grading of the psu(2, 2|4) superalgebra is an essential ingredient to get a Lax connection [3]. The same happens for the AdS 4 × CP 3 superstrings [4], partially described by the supercoset UOSp(2,2|6) SO(1,3)×U(3) [5,6], which also has Z 4 -grading and is integrable [5].
Another way to get integrable theories is to start with an integrable model and then deformed it in such a way that integrability is preserved. This is accomplished by introducing r-matrices that satisfy the Yang-Baxter equation [7]. When applied to the AdS 5 × S 5 case [8,9] the superstring will propagate on what is called a η-deformed background which is not a solution of the standard type IIB supergravity equations [10,11], leading to the proposal of generalized supergravities [12,13].
Deformations based on r-matrices that satisfy the classical Yang-Baxter equation (CYBE) can also be considered [14]. When applied to superstrings in AdS 5 × S 5 [15][16][17][18][19] The Klebanov-Witten gauge theory is obtained by putting N D3-branes on the singularity of M 1,4 × Y 6 , where M 1,4 is the four-dimensional Minkowski space and Y 6 a Ricci flat Calabi-Yau cone C(X 5 ) with base X 5 [50]. Near the horizon the geometry becomes AdS 5 × X 5 , where X 5 is a compact Sasaki-Einstein manifold, i.e., an odd-dimensional Riemannian manifold such that its cone C(X 5 ) is a Calabi-Yau flat manifold [56]. Taking X 5 as T 1,1 , 5 only 1/4 of the supersymmetries are preserved so that we have N = 1 supersymmetry in four dimensions. The superpotential has a SU(2) × SU(2) × U(1) symmetry, with U(1) being part of the R-symmetry that gives the N = 1 supersymmetry, and SU(2) × SU(2) being a flavor symmetry which is not included in the N = 1 superconformal group in four dimensions PSU(2, 2|1) [58][59][60]. Thus, the full isometry group is PSU(2, 2|1)×SU(2)×SU (2). The bosonic part of the superalgebra g = psu(2, 2|1) on which we construct the σ-model is su(2, 4) ⊗ u(1). The generators of psu(2, 2|1) can be written as supermatrices which are formed by blocks that correspond to bosonic (diagonal) and fermionic (anti-diagonal) generators, The isometry group of AdS 5 × T 1,1 is given by the coset which is not the bosonic part of any supercoset [61,62]. Besides that, the coset for T 1,1 does not lead to the standard Sasaki-Einstein metric for T 1,1 . This happens because neither the bosonic subalgebra su(2) ⊗ u(1) nor the isometry group (2.2) captures the full isometries of the theory. All this can be overcome by extending the coset (2.2) to [54] AdS where the U(1) R now appears as part of the global symmetries and a second U(1) was added in order to preserve the number of parameters that describe the space. Thus, in terms of this extended Z 2 -graded algebra, the symmetric coset for AdS 5 × T 1,1 is taken as (2.4) The supermatrix has the block structure
JHEP02(2021)126
where the dashed lines split the algebras corresponding to the subspaces AdS 5 and T 1,1 , while the solid lines split the M 8×8 and M 1×1 bosonic blocks.
The algebra for the global symmetry of the AdS 5 space is and Str (K m K n ) = η mn , m, n = 0, 1, 2, 3, 4.
The T 1,1 space can be written as the coset in We also have Str (K m K n ) = − 1 3 δ mn , m, n = 5, . . . , 9. (2.20) The where T 1 generates the original U(1) in (2.2). An appropriate coset representative is then The coset representative that will allow use for AdS 4 × T 1,1 is then The projector P 2 on g (2) can be defined as Applied to A = g −1 dg, the Maurer-Cartan one-form, we get Then, we can compute the AdS 5 × T 1,1 metric from and where (θ 1 , φ 1 ) and (θ 2 , φ 2 ) parametrize the two spheres of T 1,1 and 0 ≤ φ 3 ≤ 2π.
The metric (2.30) was first obtained in [63] and describes the basis of a six-dimensional cone. It can be understood as the intersection of a cone and a sphere in C 4 such that its topology is S 2 × S 3 , and that the metric is a U(1) bundle over S 2 × S 2 . Besides that, SO(4) ∼ = SU(2) × SU(2) acts transitively on S 2 × S 3 and U(1) leaves each point of it fixed so that T 1,1 is described by the coset (SU(2) × SU(2)) /U(1).
Yang-Baxter deformed backgrounds
In this section we present some r-matrices satisfying the CYBE and build the corresponding deformed background identifying its gravity dual. As mentioned before, the background can be deformed partially by choosing generators on each subspace. The bosonic Yang-Baxter deformed action is [8] where A = g −1 dg ∈ g, γ αβ is the worldsheet metric and ε αβ is the Levi-Civita symbol. P 2 was defined in (2.24) and the deformed current one-form is where η is the deformation parameter. The dressed R operator R g is defined as
JHEP02(2021)126
Moreover, we can compute P 2 (J) in (3.1) by defining the action of P 2 as The coefficients j m can be calculated from where the matrix components C n m are those of The matrix Λ has components defined as Then, from (3.1), we can read off the metric and the B-field as [49] The three-parameter β-deformed of T 1,1 was obtained in [54] by a Yang-Baxter deformation and in [55] by a TsT transformation in perfect agreement. In this case the r-matrix was In the following subsections we will introduce two more r-matrices and the corresponding deformations they produce.
Dipole deformed Klebanov-Witten theory
Let us first consider an Abelian r-matrix like where X 3 , Y 3 and M are the Cartan generators of su(2)⊕su(2)⊕u(1) and µ i , i = 1, 2, 3, are the deformation parameters. 6 In this case (3.11) combines generators of both subspaces, which will lead to a deformation of the entire AdS 5 × T 1,1 background. The nonzero components of Λ n m in (3.7) are (3.12) 6 The deformation parameter η can always be absorbed in the r-matrix such that it is present in the µi's.
JHEP02(2021)126
It is worth mentioning that the choice of generators in (3.11) is dictated by the place where we want put the two-tori from the TsT perspective. In the present case we have one coordinate in AdS 5 and a combination of the U(1)'s in T 1,1 . The resulting metric (3.16) has deformations along the x 3 -direction in AdS 5 and along the angles φ 1 , φ 2 and φ 3 in T 1,1 .
Nonrelativistic Klebanov-Witten theory
In order to construct this deformation we must write the AdS 5 space in light-cone coordinates. Thus, the coset representative is now while for the T 1,1 we keep the same form as in (2.22). The AdS 5 metric is then while the T 1,1 metric is given by (2.30).
Let us now consider the r-matrix (3.11) with p 2 replaced by p − , 7 where X 3 , Y 3 and M are Cartan generators of the algebra. Taking the same steps as in the previous case we find that the nonzero components of Λ n m are
22)
7 In this case we identify x− ∼ x− + 2πr − , such that p − = i∂x − can be interpreted as the number operator p − = N/r − . Moreover, if we consider x+ to be the time then p + is the energy [64].
JHEP02(2021)126
while the nonzero elements of C n m are now where The deformed metric is then where now (3.27) The first two terms in (3.25) is the metric of a Schrödinger spacetime. 8 The choice of generators in (3.21) is very similar to the one in (3.11). Now, however, the two-tori defined by the TsT transformation takes the x − coordinate and a combination of the internal U(1)'s in T 1,1 and does not introduce any noncommutativity in the dual field theory. The metric (3.25) coincide with the Sch 5 × T 1,1 obtained in [47] for µ 1 = n 1 /2, µ 2 = n 2 /2 and µ 3 = −n 3 , where n i (i = 1, 2, 3) are the deformation parameters.
Conclusions
In this paper we have derived the metric and the B-field for the gravity duals of the dipole-deformed and the nonrelativistic Klebanov-Witten theory as Yang-Baxter deformations. We made use of an extended coset description of AdS 5 × T 1,1 which simplified the computation of the undeformed background and its deformation. We considered two abelian r-matrices with three-parameter satisfying the classical Yang-Baxter equation. The first r-matrix was composed by a momentum generator in AdS and a combination of the three U(1)'s generators of the internal space which lead to the gravity dual of the dipoledeformed Klebanov-Witten theory which should be obtained by TsT transformation of the AdS 5 × T 1,1 background. In second case we have also a momentum operator in AdS and a combination of the three U(1) generators in T 1,1 . It produced the Sch 5 × T 1,1 background which, having Schrödinger symmetry, corresponds to the nonrelativistic Klebanov-Witten theory [45]. The next step is to compute the RR fields of the deformed backgrounds. To get them we have to consider the fermionic sector as in [19]. The fact that we have not included the fermionic sector of the supercoset does not mean that we are unable to check the supergravity equations for the new backgrounds. Since the r-matrices that we used in the bosonic background are abelian they satisfy trivially the unimodularity condition, which is a sufficient for the background to satisfy the supergravity equations [12,13,33].
Another interesting case which deserves further study is the dual of the dipole deformation of N = 1 SU(N ) × SU(N ) Yang-Mills theory as well as its nonrelativistic limits. 9 The dynamical z factor is the exponent in the power of the radial direction in the z −2z dx 2 + term. To have Schrödinger symmetry we must have z = 2. The relativistic symmetry corresponds to z = 1. 10 The harmonic function is denoted in general as Φ , where 1, 2 are labels for the SU(2)'s, and r and q are U(1) charges [58,66]. 11 In [47], the harmonic function Φ is defined as the non-negative length square of the Killing vector K on T 1,1 , Φ = K 2 = gijK i K j with i, j = 1, 2, 3, where K = (µ1∂ φ 1 , µ2∂ φ 2 , µ3∂ φ 3 ).
JHEP02(2021)126
A A basis for so(2, 4) algebra Let us choose the following representation for γ µ | 2,844.8 | 2021-02-01T00:00:00.000 | [
"Physics"
] |
Causing Global Warming
Do I cause global warming, climate change and their related harms when I go for a leisure drive with my gas-guzzling car? The current verdict seems to be that I do not; the emissions produced by my drive are much too insignificant to make a difference for the occurrence of global warming and its related harms. I argue that our verdict on this issue depends on what we mean by ‘causation’. If we for instance assume a simple counterfactual analysis of causation according to which ‘C causes E’ means ‘if C had not occurred, E would not have occurred’, we must conclude that a single drive does not cause global warming. However, this analysis of causation is well-known for giving counterintuitive results in some important cases. If we instead adopt Lewis’s (2000) analysis of causation, it turns out that it is indeterminate whether I cause global warming (etc.) when I go for a single drive. Still, in contexts where we seek to control or understand global warming, there is a pressure to adopt a more fragile view of this event. When we adopt such a view, it turns out that a single drive does cause global warming (etc.). This means that we cannot like Sinnott-Armstrong (2005) and Kingston and Sinnott-Armstrong (2018) reject the idea that I should refrain from going for a leisure drive simply because such a drive does not cause global warming.
that there is more to say about the argument that a leisure drive does not cause global warming and its related harms. This is how Sinnott-Armstrong describes the example of going for a leisure drive: JOYGUZZLING: Some people drive to their jobs or to the store because they have no other reasonable way to work and eat. I want to avoid issues about whether these goals justify driving, so I will focus on a case where nothing so important is gained. I will consider driving for fun on a beautiful Sunday afternoon. My drive is not necessary to cure depression or calm aggressive impulses. All that is gained is pleasure: Ah, the feel of wind in your hair! The views! How spectacular! Of course, you could drive a fuelefficient hybrid car. But fuel-efficient cars have less 'get up and go.' So let us consider a gas-guzzling sport utility vehicle. Ah, the feeling of power! The excitement! […] Do we have a moral obligation not to drive in such circumstances? (Sinnott-Armstrong 2005: 295-96) Sinnott-Armstrong's (2005) main thesis is that there is no moral obligation to refrain from going for such a drive. To establish this thesis, he considers quite a few general moral principles that might seem to support this thesis, and argues that upon closer inspection, none of these principles support the claim that there is such an obligation. He continues this pursuit in the later paper co-written with Kingston, which is an extension of Sinnott-Armstrong's previous paper.
Here, I will concentrate on the idea that there might be a moral obligation to refrain from joyguzzling since doing so causes other people climate change related harm. On this topic, Sinnott-Armstrong suggests that the following principle might seem to ground an obligation to refrain from joyguzzling: THE HARM PRINCIPLE: We have a moral obligation not to perform an act that causes harm to others. 1 (Sinnott-Armstrong 2005: 297) This principle, he argues, does not entail that we have a moral obligation to refrain from joyguzzling since such an act does not cause harm to others. As he puts it: Bthe point is simply that my individual joyride does not cause global warming, climate change, or any of their resulting harms^ (Sinnott-Armstrong 2005: 299). Kingston and Sinnott-Armstrong (2018) repeat this claim. This is the claim I dispute in this paper. I do not consider whether we have a moral obligation to refrain from going for a leisure drive, but simply whether such a drive causes global warming, climate change or their related harms. Still, if my arguments in this paper are sound, Sinnott-Armstrong & Kingston have not succeeded in ruling out that we might have a moral obligation not to joyguzzle.
I do not cause climate change related harm, Sinnott-Armstrong argues in the original paper, for the following reasons: (I) my act is neither necessary nor sufficient for climate change related harm to occur; emissions of greenhouse gases are perfectly fine in small quantities; they do not cause harm, not even imperceptible harm, and (II) there is no reason to single out my leisure drive out of all the other background conditions and identify it as a cause of climate change related harm. In the more recent paper, Kingston and Sinnott-Armstrong elaborate the first argument and argue (Ia) that individual drives do not cause climate change induced harm since climate change is a result of global warming, and global warming is an emerging phenomenon; and (Ib) that the only difference (if any) an individual leisure drive makes is that climate change related harm would occur a fraction of a second earlier than it otherwise would. I will argue that (I) and (II) most likely are mistaken. Sinnott-Armstrong (2005) does not spell out what he means by causation, and neither do Kingston and Sinnott-Armstrong (2018). This makes it difficult to assess the claim that a single leisure drive does not cause global warming, climate change or their resulting harms. However, they implicitly make use of two different ideas about causation: a simple counterfactual analysis of causation (or a But-For requirement of causation, as it is often called within legal philosophy) and a distinction between salient causes and background conditions. The simple counterfactual analysis of causation is most prominent in (I), while the distinction between salient causes and background conditions is apparent in (II). The simple counterfactual analysis, which is close to the analysis of causation David Lewis (1973) proposes, can be specified as follows: SIMPLE: C causes E if and only if it is the case that if C had not occurred, E would not have occurred. 2 If we assume SIMPLE, (I) could be restated along these lines: a single leisure drive causes global warming if and only if global warming would not have occurred had I refrained from going for this drive; this is obviously false, and therefore a single leisure drive does not cause global warming. The same reasoning would apply to the questions of whether a leisure drive causes climate change or climate change related harm.
Kingston & Sinnott-Armstrong are not the only ones who have advanced this argument. For instance, Jamieson illustratively claims that BJoyriding in my '57 Chevy will not in itself change the climate, nor will my refraining from driving stabilize the climate, though it might make me late for Sierra Club meetings^(2007: 167). Since there are no act consequentialist grounds for why we should refrain from going for a leisure drive, he argues, all act consequentialists should become virtue theorists, at least on matters concerning global warming. 3 In addition, Christopher Kutz argues that an individual difference principle would not entail that I am morally responsible for environmental harm. The individual difference principle essentially states that BI am accountable for a harm only if what I have done made a difference to that harm's occurrence^ (Kutz 2000: 116). This principle clearly entails that I am not accountable for global warming if I go for a leisure drive; global warming would occur 2 Lewis (1973) argues that causation is the ancestral of counterfactual dependence, where counterfactual dependence is analysed along the lines of SIMPLE. I will explain what taking causation to be the ancestral of counterfactual dependence means later on in the paper. 3 Jamieson elaborates his argument in Reason in a dark time (2014). On the topic of green virtue ethics in relation to Sinnott-Armstrong's (2005) arguments, see also Hourdequin (2010) and (Sandler 2010). I must say, however, that I cannot see why a virtue theory necessarily would entail that I there is a moral obligation to refrain from going for a leisure drive. A proper phronimosi.e. a person who appreciates what is morally relevant in the situation, who correctly weighs the different reasons in the light of how to best arrive at eudaimonia, and who acts in accordance with thiswould certainly realise that if there is no causal connection between a leisure drive and global warming, considerations concerning global warming should not have any bearing on whether to go for a leisure dive or not. If the phronimos would refrain from going for a leisure drive, this must be because there are better ways for her to spend her time. (Kingston and Sinnott-Armstrong 2018, advances The point is something like this: in order for an individual leisure drive to cause climate change related harm, it must first cause global warming (a raise in the average temperature of the world), which in turn must cause climate change (additional or more severe storms, floods etc.), which in turn must cause harm. 5 To give an outline of the paper, I will begin by arguing that (I) is mistaken since it relies on SIMPLE, a principle that is known to give counterintuitive results in cases of overdetermination and pre-emption (I do this in section one). Further, in section two through four, I will consider a few alternative strategies that might seem to entail that a single drive causes global warming, but that are not fully successful in establishing this upon closer scrutiny. In section two, I argue that an appeal to expected utility would not entail that we have a moral obligation to refrain from joyguzzling. As long as we stay with assuming SIMPLE, the expected climate change related utility of a single drive is zero (disregarding butterfly effects). In this, I side with Kingston & Sinnott-Armstrong. In section three, I will consider appeals to group causation. I will argue that even if we grant that the group of people driving do cause global warming, it is far from clear what implications this has for the individual driver. In addition, it is far from clear that the group of drivers causes global warming (etc.) given SIMPLE. In section four, I will consider Kingston & Sinnott-Armstrong's argument for why a NESS-condition for causation would not entail that a single leisure drive causes global warming. I argue that while their argument is mistaken, their conclusion is correct.
In section five, I will suggest that we should assume Lewis' elaborated counterfactual analysis of causation, which roughly states that an event C causes another event E when differences in how, when and if C occurs make enough of a difference for how, when and if E occurs. One reason for suggesting this is that this analysis can handle most problematic cases of pre-emption and overdetermination. However, the elaborated counterfactual analysis does not determinately entail that a single drive causes global warming. Instead, it entails that it is indeterminate whether such a drive causes global warming. It is indeterminate since it is unclear when a single drive makes enough of a difference for how and when global warming occurs. I also consider some circumstances under which we have reasons to think that also minuscule differences in how and when global warming occurs matters.
In section six, I will argue that Lewis' elaborated counterfactual analysis straightforwardly refutes the argument (Ib). If a single drive does make a small difference for when global warming occurs, it is indeterminate whether this drive causes global warming (since it is indeterminate whether this difference is large enough). I also argue that (Ia) most likely is mistaken on physical grounds. The ability of greenhouse gases to absorb and re-emit photons is not an emergent property.
In section seven, I will consider the possibility that even if a single drive causes global warming and climate change (given Lewis' elaborate analysis), and even if global warming and climate change in turn cause climate change related harm, it is not obvious that a single drive will cause climate change related harm. Still, I argue, that since causation on Lewis' account is transitive, a single drive does cause climate change related harm in such circumstances. Moreover, it is far from sure that the emissions from a single drive never make some tiny difference for how, when and if climate change related harm occurs.
Finally, in section eight, I will argue that (II) is mistaken, or at least inadequately argued for. THE HARM PRINCIPLE could either refer to salient causes (as Sinnott-Armstrong 2005 presume), or to causes in a non-discriminatory, broad sense. If the latter is correct, the saliency of a cause is irrelevant for the matter at hand. Because of this possibility, Sinnott-Armstrong cannot conclude that there is no moral obligation to refrain from joyguzzling since joyguzzling is not a salient cause of global warming. It could however turn out that the former is correct. Still, there is some ground for doubting the criteria for saliency that Sinnott-Armstrong (2005) uses, and therefore he cannot use these criteria for showing that a leisure drive is not one of many salient causes of global warming and its related harms.
Before we proceed, I should say that apart from SIMPLE, NESS and Lewis' elaborated analysis, there are other candidates for being the most plausible account of causation, such as those proposed by Hitchcock (2001), Schaffer (2003Schaffer ( , 2005 and Woodward (2003). Due to limitations of space, I will refrain from discussing whether these competing accounts of causation would entail that a single leisure drive causes global warming (etc.). I will also refrain from evaluating whether Lewis' elaborated analysis of causation is superior to its rivals (apart from SIMPLE and NESS). It seems to me, however, that Lewis' account fruitfully can be applied to the question of whether a single leisure drive causes global warming. At least, it is far better suited for analysing this question than SIMPLE.
1 Questioning Simple SIMPLE is well known for giving counter-intuitive results in cases of pre-emption and (symmetric) overdetermination (cf. Hart and Honoré 1985;Lewis 2000;Wright 1985). Consider for instance the following examples, which are instances of pre-emption: SHOOTING AND POISONING: D shoots and kills P just as P was about to drink a cup of tea that was poisoned by C. (Wright 1985(Wright : 1775 and BOTTLE SHATTERING: Billy and Suzy throw rocks at a bottle. Suzy throws first, or maybe she throws harder. Her rock arrives first. The bottle shatters. When Billy's rock gets to where the bottle used to be, there is nothing there but flying shards of glass. Without Suzy's throw, the impact of Billy's rock on the intact bottle would have been one of the final steps in the causal chain from Billy's throw to the shattering of the bottle. But, thanks to Suzy's preempting throw, that impact never happens. (Lewis 2000: 184) If we assume SIMPLE, D did not cause P's death since P's death would have occurred whether or not D had shot P, and Suzy did not shatter the bottle since the bottle would have shattered whether or not Suzy had thrown a rock at the bottle, and these are of course problematic results.
Considering how SIMPLE could accommodate cases of pre-emption despite first appearances, Lewis (1986) suggests that we could consider events in a more fine-grained manner. In a case like SHOOTING AND POISONING we could say that P's dying by being shot is not the same event as P's dying by being poisoned. Here, this strategy seems to give the desired result; D causes the event P's dying be being shot since that event would not have occurred if D had not taken the shot.
Still, the strategy of taking events to be more fine-grained seems mistaken in some cases. As Lewis argues in his later paper BCausation as Influence^ (2000), the event of the bottle shattering is the same event regardless of whose stone hits the bottle. Global warming might be an even better illustration of this. Most would agree that global warming would be the same event with or without the emissions a single leisure drive produces. In the end, the problem is that in ordinary language it is indeterminate when one event turns into another. As Lewis puts it: How much delay or change do we think it takes to replace an event by an altogether different event, and not just by a different version of the same event? An urgent question, if we want to analyze causation in terms of the dependence of whether one event occurs on whether another event occurs. Yet once we attend to the question, we surely see that it has no determinate answer. We have not made up our minds; and if we do presuppose sometimes one answer and sometimes another, we are entirely within our linguistic rights. (Lewis 2000: 186) Since SIMPLE gives problematic results in some important cases of pre-emption and overdetermination, we should not use this principle to evaluate alleged causes of global warming and climate change. Arguably, these events are of the problematic kind.
Further, that SIMPLE entails that Suzy did not cause the bottle to shatter is usually not taken as a sign that she in fact did not cause the bottle to shatter. Rather, this is taken as a sign that there is something wrong with SIMPLE. Likewise, it seems that if SIMPLE entails that a leisure drive does not cause global warming, we should at least not straightforwardly take this as a sign that such a drive does not cause global warming (as Kingston & Sinnott-Armstrong do This question could be repeated for global warming and for climate change related harm. To appreciate the problem this question hints at, one must realize that the claim that a single leisure drive does not cause climate change, if correct, applies to any emission of greenhouse gases in roughly the same quantity as a single leisure drive. In the causal evaluation, it does not matter that the car ride is just for fun, or that it occurs on a Sunday, and it does not matter that the emission comes from a car. Considering Hiller's question, one might wonder if there are any individual emissions of greenhouse gases that would be large enough to be necessary and sufficient for climate change to occur. Maybe the emissions of some huge coal-fired power plant could be said to cause (some part of) a certain climate impact. Even so, such huge emissions could only account for a portion of the totality of climate change. In addition, if events are measured in short enough intervals; even the biggest industry would emit only small quantities of greenhouse gases per event, and it is unlikely that such small quantities make a difference for the occurrence of any climate impact.
So, if we accept Sinnott-Armstrong's arguments, a great deal of climate change and of global warming seems to be uncaused. Now, Sinnott-Armstrong does not think that global warming is uncaused and he is not a climate sceptic. For instance, he asserts Ba significant amount of global warming is due to human activities. The main culprit is fossil fuels^ (Sinnott-Armstrong 2005: 294). His position is rather that the sum of all human greenhouse gas emissions causes global warming, while no individual emission does. Hiller uses this claim to argue that^if we assume that the sum total of AGCC [anthropogenic global climate change] is causing harm [as Sinnott-Armstrong does], then we can show that any individual act that contributes to AGCC also causes an expected harm^ (Hiller 2011: 355). The idea is that if all acts taken together cause huge harm, on average each act causes some harm.
The reason why Hiller appeals to expected harm rather than just harm is that it is possible that some slight increases in the amount of greenhouse gases in the atmosphere are beneficial, some slight increases might make no difference at all, and some slight increases might make a huge difference for the worse. However, he argues, since no one knows exactly which effect his or her particular drive will have, we should instead consider the expected utility of such a drive to be what is morally relevant. Hiller then goes on to calculate the expected utility of a single drive, proceeding from Nolt's (2011) estimation that an American's lifetime greenhouse gases emitting activities on average cause serious harm to one or two people. Using simple arithmetic, we can calculate, Hiller argues, that Bgoing on a Sunday drive is the moral equivalent of ruining someone's afternoon^(2011: 357, italics as in the original).
However, Hiller is wrong in thinking that if many acts together cause great harm, each act causes some harm (on average). If he stays with assuming something like SIMPLE, he cannot argue this way. If we assume SIMPLE, a single drive has no chance of causing global warming (etc.). Global warming would occur whether or not I went for a leisure drive. Further, it does not seem to help us at this point to adopt a fine-graining strategy. It is doubtful that my leisure drive would make any storm, flood or drought turn into a different event. Therefore, the expected global warming related utility of such a drive is zero (or close to it, considering the possibility that my drive might have some butterfly effect). While Hiller succeeds in explaining why it is unreasonable to think that a single drive makes no tiny causal contribution to global warming, he fails to show that a leisure drive has a negative expected utility.
Shelly Kagan (2011) advances another type of argument for the expected utility approach. He argues that all collective harm cases 7 are triggering cases, that is cases where Bit is indeed true for most acts that it makes no difference whether or not I do it, but for some act-the triggering act-it makes all the difference in the world^(Kagan 2011: 119). Therefore, he argues, even though my act most likely will cause no harm at all, there is a risk that it might trigger great harm. This is why I should refrain from, for instance, joyguzzling. However, it seems that just as adding one single grain of sand to a collection of such grains has no probability of turning this collection into a heap, a single leisure drive has no probability of triggering global warming, climate change and their related harms. 8 This is also pointed out by Kingston and Sinnott-Armstrong (2018).
Group Causation
Hiller's argument for the expected utility approach relies on an appeal to group causation. It relies on the idea that had all of us refrained from driving cars running on fossil fuel, climate change related harm would not have occurred, or at least been greatly diminished. Together, we are causing climate change related harm. This idea can be traced at least back to Derek Parfit (1984), who argues that a set of acts harms someone if it is the smallest set such that if 7 Collective harm cases are cases where no individual act makes any difference for the occurrence of the outcome, but where there is a bad result when enough people perform some kind of act. 8 For a thoroughgoing critique of Kagan's (2011) argument, see Nefsky (2012). none of the acts had been performed, this person would not have been harmed. 9 Anne Schwenkenbecher is a more contemporary advocate of this approach. She writes: The argument from aggregation, namely that our individual actions are potentially harmful to others not by themselves, but because they are part of a set of similar actions which together cause harm, delivers very strong reasons in favour of individual emission reductions. (Schwenkenbecher 2014: 176) Kingston and Sinnott-Armstrong (2018) question the idea that an individual act is wrong whenever it belongs to a set of acts that jointly cause harm, and I think rightly so. To advance their argument, they consider: STABBING CAESAR: In ancient Rome, 23 senators stabbed Caesar. Suppose that no single stab was sufficient to kill him, it took at least 10 or 20 stabs to kill him, and he was still alive until minutes after the final stab. As he lay dying, Caesar reportedly said, 'Et tu, Brute.' Brutus could have replied, 'My stabbing was not necessary to kill you, because, without me, the 22 others still would have stabbed you, so you still would have died. So my act did not make any difference to your life.' 10 (Kingston and Sinnott-Armstrong 2018: 172) The reply that Kingston and Sinnott-Armstrong imagine that Brutus could have given Caesar illustrates the idea that even though the group of senators causes Caesar's death, no individual senator does. (Again, it is clear that their argument presupposes something like SIMPLE).
In order for the bad outcome that the group causes to reflect badly on a participating individual, there must be something that connects the individual to the group outcome. Kingston & Sinnott-Armstrong agree that each senator is both causally and morally responsible for Caesar's death, but they suggest that this is so only because the senators conspired to kill him (they performed a collective action) and because each senator intended to kill Caesar. 11 These factors, they argue Bcan explain why each Senator's act is seen as causing harm and as morally wrong^ (Kingston and Sinnott-Armstrong 2018: 172). There is something that connects the harmful group act to each participating individual. However, they continue, none of these conditions are satisfied in JOYGUZZLING: usually, someone who goes for a leisure drive does neither collaborate with others to bring about global warming and its related harms, nor intend global warming (etc.) to occur. In order for Schwenkenbecher's argument to hold, she must show that there is something that connects the group outcome in cases like JOYGUZZLING to each individual driver.
Kingston & Sinnott-Armstrong's idea that factors such as intending and conspiring to do harm might render a non-cause a cause is problematic. First, if the senators conspired to kill Caesar, it is not necessarily the case that no senator caused Caesar's death. In fact, SIMPLE can straightforwardly explain why most senators are causally responsible for Caesar's death: one 9 For a discussion, see e.g. Eggleston (2000); Gruzalski (1986); Jackson (1997); Parfit (1986); Petersson (2004);Shrader-Frechette (1987). 10 The example is inaccurate historywise (cf. Suetonius 2003 [121]). For instance, had Brutus not agreed to stab Caesar, there would most likely have been no murder. Since Brutus was a direct male-line descendent of the man that overthrew the roman monarchy and established the republic, his participation was vital for giving political legitimacy to the enterprise. Still, if we consider the example as it is presented, Kingston & Sinnott-Armstrong's point should be clear. 11 They also suggest a few additional factors. These additional factors do not matter for the argument here. senator might have hinted at the idea of killing Caesar, another might have stated it more boldly, a third might have suggested a general plan, a fourth might have proposed that they should stab him instead of poisoning him, a fifth might have suggested that they should do it on the Ides of March instead of during summer, etc. If this is how things transpired, most of the senators made a difference for the occurrence of Caesar's death. Second, as I will explain in more detail in section eight, if we assume that Caesar's death was truly overdeterminedthat is, if we assume that no senator made a difference for the occurrence of Caesar's death, neither during planning nor during the execution of the planit is problematic to assume that the senators' intentions to cause harm turn their otherwise ineffectual actions into causes of Caesar's death.
Further, Kingston and Sinnott-Armstrong (as well as Schwenkenbecher, Kutz, Hiller etc.) are wrong in thinking that the group of drivers causes global warming (assuming SIMPLE). SIMPLE states that all of us who drive cars running on fossil fuel cause global warming if and only if global warming would not occur had not all of us driven cars running on fossil fuel. However, had one driver refrained from going for a leisure drive, global warming would still occur. The same goes if ten or a hundred drivers would have refrained from going for any single drive. If you think that SIMPLE entails that all drives cause global warming, you probably think that had none of these drives occurred, or had enough of them not occurred, global warming had not occurred. However, it is not clear that this is the relevant contrast. Until we have established the relevant contrast, we must conclude that it is indeterminate whether the group of all drives causes global warming (assuming SIMPLE).
The same argument could be repeated for climate change and its resulting harms. In fact, a similar argument could be made for showing that the group of senators did not cause Caesar's death. 12 Had one senator at the last minute been prevented from participating in the murder, a group consisting of 22 senators had still stabbed Caesar, whereby his death had occurred anyway. Therefore, had not the group consisting of 23 senators done what they did, Caesar would have died anyway.
Again, we should not conclude from this that the set of drives does not cause global warming (or that the senators did not cause Caesar's death), but rather that there is something problematic about SIMPLE. What we need is an analysis of causation that helps us to establish which acts that belong to the relevant set.
The NESS-Condition of Causation
There are competing accounts of causation one could assume instead of SIMPLE. For instance, Braham and Van Hees (2012) argue (partly in reply to Sinnott-Armstrong 2005) that the morally relevant notion of causation is a NESS-condition: NESS: A condition C was a cause of a consequence E if and only if it was necessary for the sufficiency of a set of existing antecedent conditions that was sufficient for the occurrence of E. (Braham and Van Hees 2012;Wright 1985Wright , 2013 Kingston and Sinnott-Armstrong object to the idea that NESS would entail that a single leisure drive causes climate change (and its related harms), and they do so in relation to STABBING CAESAR. Their comments on NESS are very brief. Considering Brutus' imagined reply to Caesar (that his stab did not make any difference), they write: This reply would have been accurate, and some views of causation will see Brutus' stabbing as not a cause of Caesar's death. These include accounts in which acts must be themselves necessary or necessary parts of sufficient sets in order to be causal. (Kingston and Sinnott-Armstrong 2018: 172, my emphasis) It seems that Kingston & Sinnott-Armstrong have misunderstood NESS. Of course Brutus' stabbing was a necessary part of a set of existing antecedent conditions sufficient for Caesars' death. One major reason behind thinking that NESS gives a better account of causation than SIMPLE is exactly that NESS often give the right answer in cases of overdetermination. To take a straightforward example: If two shooters simultaneously shot me, and if both shots were sufficient to kill me, then SIMPLE would entail that neither shooter caused my death (since neither shot was necessary for bringing about my death), 13 but NESS would entail that both shooters caused my death (each shot is necessary for the sufficiency of the set of actually obtaining antecedent conditions containing that shot but not the other). Similarly, in STABBING CAESAR, NESS entails that all senators caused Caesar's death.
One might think that our ignorance of how many stabs that were minimally sufficient to kill Caesar prevents us from determining whether Brutus's stab was necessary for the sufficiency of a particular set. However, this ignorance is irrelevant. For all possible answers to the question 'How many stabs were sufficient to kill Caesar?', Brutus's stab is necessary for the sufficiency of some sets of existing antecedent conditions. If say fifteen stabs (but not fourteen) were sufficient to cause Caesar's death, then Brutus' stab was necessary for the sufficiency of quite a few actually obtaining sets consisting of fifteen stabs, and it suffices that his stab was necessary for the sufficiency of one such set for it to be true that Brutus caused Caesar's death.
If it instead turns out that twenty (but not nineteen) stabs were sufficient to cause Caesar's death, Brutus's stab would be necessary for the sufficiency to cause Caesar's death of quite a few of the sets that consist of twenty stabs.
Still, even though Kingston & Sinnott-Armstrong are mistaken about what NESS entails in STABBING CAESAR, their conclusion that NESS does not entail that a single drive causes global warming is correct. One single drive is not necessary for the sufficiency of any set of antecedent conditions for global warming. Our concept of global warming is too coarsegrained for this to be the case. To say that a single drive satisfy the NESS-condition for causing global warming is like saying that adding a single grain of sand to a collection of grains of sand optimally arranged was necessary for the sufficiency of turning this collection into a heap. It would be analogous to saying that without this addition, the collection would not be sufficiently large to amount to a heap. 14 For NESS to entail that adding one grain of sand to a collection of grains of sand optimally arranged might cause this collection to become a heap, there must be some threshold such that if the collection contains n grains of sand, this collection is not a heap, but when the collection contains n + 1 grains of sand, it is a heap. However, there is no such threshold. Likewise, in joyguzzling, there is no threshold such that n 13 Simple would entail this given that we do not think that my death by being shot by two shots is an altogether different event than my death by being shot by one shot. 14 This relates to Kagan's (2011) mistaken argument that there must be a threshold in all collective harm cases.
drives are not sufficient to cause global warming, but n + 1 drives are. This contrasts to STABBING CAESAR, where one stab (we do not know which) does make a difference for Caesar's death. Somewhere, there is a threshold in this case.
Assuming an Elaborated Counterfactual Account of Causation
As another alternative to assuming SIMPLE, we could assume an elaborated counterfactual account of causation. Following Lewis (2000), let us tentatively assume that C influences E if and only if it is the case that if C had not occurred at all, or had occurred at a different time from the time that it actually did occur, or in a manner different from the manner in which in actually did occur, then E would not have occurred at all, or would have occurred at a time different from the time that it actually did occur, or occurred in a manner different from the manner in which it actually did occur. For short: ELABORATED: C influences E if and only if how, when or whether C occurs makes a difference for how, when or whether E occurs.
The reason why we assume ELABORATED tentatively is that this analysis requires an adjustment in order to fully capture Lewis's account. If the influence C exerts on E is minuscule, Lewis argues, we might be justified in neglecting this influence and conclude that C does not influence E enough to count as a cause. I will soon get back to this issue. Also following Lewis (2000), let us further assume that causation is the ancestral, or transitive closure, of influence: C causes E if and only if there is a causal chain leading from C to E, where a causal chain is a sequence of causal influences.
This means that if C influences D, and D influences E, then C causes E, and this is true even if C does not influence E. In the most trivial case where C influences E directly, C does of course cause E. The transitivity of causation has implications for the question of whether a single drive causes climate change related harm. I will return to this issue in section seven.
If we abandon SIMPLE in favour of ELABORATED, we must conclude that a single drive influences global warming, and therefore that it causes global warming. The emissions produced by such a drive influence how and possibly when global warming occurs; global warming will be the same event, but it will come in a slightly different versionit will occur in a slightly different manner or at a slightly different timegiven the addition of these extra CO 2 molecules.
Still, it seems wanting to say that infinitesimal differences in the manner in which global warming occurs (which and how many CO 2 molecules that re-emits photons towards the earth, for example), or the minuscule difference in the timing of global warming (that it occurs a fraction of fraction of a second earlier) would turn one version of this event into another. The event of global warming is not that fragile, to use Lewis' words. Lewis (2000) considers a similar problem in relation to BOTTLE SHATTERING, and admits that our tentative version of ELABORATED might be too promiscuous in assigning causes. He writes: By the law of universal gravitation, a distant planet makes some minute difference to the trajectory of Suzy's rock, thereby making a tiny difference to the shattering of the bottle.
So by adopting the fragility strategy, in whichever form, we open the gate to a flood of spurious causes. (Lewis 2000: 188) This problem becomes especially acute when we realise that also the gravitational force of Billy's rock might influence the bottle shattering in some minute way, whereby we get the undesired result that Billy's throw in fact caused the bottle shattering.
To counter this problem, Lewis argues that while Billy's throw might exert some tiny influence on the bottle shattering, Suzy's throw influences the bottle shattering to a much greater extent. If Suzy's rock would have been a little heavier, or if she would have thrown the rock a little sooner, the bottle shattering would have changed correspondingly; but if we would similarly alter Billy's throw, the bottle shattering would be (almost) unchanged. On this basis, he argues that if the influence of one event is minuscule enough compared to the influence of some other event, we are entitled to neglect it. Lewis again: Well, these differences made by spurious causes are negligible; so, surely, we are entitled to neglect them. Just as it is right to say that a box contains nothing when, strictly speaking, it contains a little dust, so likewise we are within our linguistic rights to say that Billy's throw made no difference to the shattering when, strictly speaking, its gravitational effects made an imperceptibly minute difference. (Lewis 2000: 189) For this reason, a more accurate and elaborated version of ELABORATED is: ELABORATED*: C influences E if and only if how, when or whether C occurs makes enough of a difference for how, when or whether E occurs.
If we assume ELABORATED*, it seems that a single leisure drive might not cause global warming after all. The differences in global warming produced by one single drive are not significant enough to turn one version of global warming into another. There are substantially larger emissions than those produced by a single drive, for instance those made by large coal-fired powerplants (given that we do not individuate these emissions events too much). Why would not such an emission count as a cause of global warming, while the emissions from a single drive would not? Similarly, if we consider my decision to go for a single drive, the effects such a decision has for global warming (assuming that my decision in fact makes me go for such a drive) makes much less of a difference for global warming than a governmental decision to allow for offshore oil drilling (given that such a decision results in actual drilling and, in the longer run, lower fuel prices and more emissions of greenhouse gases from burning fossil fuels). It seems that considerations such as these might explain a common position with respect to global warming: what I do as an individual does not cause global warming, 15 but that governmental decisions and the decisions of most large companies do. As we have seen, this is a view that Kingston and Sinnott-Armstrong subscribe to (albeit for a mistaken reason, see section 3).
Still, in some circumstances we might have reasons to consider even minuscule differences in how and when an effect occurs. Lewis again: [I]f for some strange reason we did attend to these negligible differences, would we not then put ourselves in an unusual context where it was right, not wrong, to count all the things that make negligible differences as joint causes of the effect? (Lewis 2000: 189) The fundamental question then becomes whether we are in a context where we have reasons to consider also minuscule differences in how and when global warming occurs. One problem for settling the issue whether a single drive causes global warming is that the only guidance Lewis gives to decide this is that an event does not count as a cause of a certain outcome if it influences this outcome to a much lesser extent than some other event. However, when it comes to global warming, there is no natural place to draw the line between the emissions that are too small to count as causes, and those that are large enough to do so. In JOYGUZZLING, there are not only two potential causes as in BOTTLE SHATTERING, one which greatly influences the outcome and one which only influence the outcome to a minuscule extent, but a whole range of potential causes of different sizes. Single drives could be longer or shorter, there are companies emitting less and companies emitting more. The decisions of smaller governments have less of an impact on global warming than the decisions of larger ones, etc. We could choose to draw the line at one particular level, saying that all smaller emissions do not count as causes and that all the larger ones do, but wherever we draw the line, there will be some arbitrariness about the choice. This means that it is indeterminate whether a single drive causes global warming, and this indeterminacy is due to conceptual inexactness. Our concept of global warming does not univocally set the level of specification of how and when this event occurs, and given the circumstances there is no natural way to specify this level. ELABORATED* paired together with the guidance Lewis provides for deciding when a potential cause makes enough of a difference do not help us settling the matter.
This might not be the end of the matter. Apart from the considerations Lewis suggests to be relevant for causal evaluations, other considerations have bearing on the matter. 16 There are at least two further reasons in favour of thinking that a single drive causes global warming. First, if we do not consider a single drive to cause global warming, we get an explanatory deficit. Hiller's question re-emerges. If emissions on the level of a single drive do not cause global warming, then what does? Are we dealing with an uncaused effect? If we only consider emissions on the scale of those produced by coal-fired coal powerplants to cause global warming, a substantial amount of global warming would be left unexplained.
Second, whether an emission on the level of a single drive is too small to count as a cause of global warming (climate change, etc.) or not might be a normative issue. It might turn out that a vital part of the best way to thwart global warming (climate change, etc.) would be if each individual did his or her best to decrease his or her emissions of greenhouse gases. In that case, we have a normative reason to consider global warming to be a fragile event; that is, we have a normative reason to think that also minuscule differences in how and when global warming occurs makes this event to come in a different version. 17 Considering another of Sinnott-Armstrong's arguments might elucidate these points. He makes the following comparison between going for a leisure drive and pouring a quart of water in a river: 16 I am grateful to two anonymous reviewers for pressing me on this matter. 17 Note that the idea is not that we should think that a single drive causes global warming even though it in fact does not. Rather, the idea is that we might have reasons to consider global warming to be a fragile event, something which entails that a single drive do cause global warming. FLOOD: Global warming is more like a river that is going to flood downstream because of torrential rains. I pour a quart into the river upstream (maybe just because I do not want to carry it.) My act of pouring the quart into the river is not a cause of the flood. Analogously, my act of driving for fun is not a cause of global warming. (Sinnott-Armstrong 2005: 299) Sinnott-Armstrong is however wrong in assuming that these cases are analogous. To begin with, in FLOOD there are only two potential causes: the torrential rain and my pouring a quart of water into the river. Therefore, we can safely conclude that my act of pouring the quart of water into the flood did not cause the flood if we assume ELABORATED* and follow Lewis' guidance. The torrential rains influence the downstream flood to a much higher degree than my pouring quart of water into the flood. Had the torrential rains occurred at a different time, in a different manner or not at all, the flood would have changed correspondingly. However, had I refrained from pouring a quart of water into the river, or poured it at a different time or in a different manner, the flood would have occurred in more or less the same manner and at the same time. Therefore, we are entitled to neglect my act of pouring water; this act is analogous to Billy's throw. However, in in JOYGUZZLING, there are loads of potential causes with varying degree of influence.
Secondly, in FLOOD, the explanatory deficit if we disregard the influence of my pouring water into the river is negligible. However, in JOYGUZZLING, if we disregard each individual drive as not being a cause of global warming, it seems that we cannot fully explain what caused this phenomenon.
Thirdly, if we seek to prevent future floods from occurring, it suffices to concentrate on how to prevent future torrential rains to cause floods. Since the small amount of water I contributed to the flood is negligible, we are allowed to disregard this contribution. Things would have been different if the flood was caused by billions of people each pouring a quart of water into the river. In such a case, we would have a reason to consider also minor contributions. Likewise, if we seek to prevent as many climate change related harms as possible, it might not suffice to concentrate on events that influence these harms to a great extent (such as emissions from huge coal-fired powerplants and some political decisions). Instead, we might also have reasons to consider more minor emissions, such as those produced by a single drive.
I will leave it unsaid whether these considerations give us decisive reason to conclude that a single drive in fact do cause global warming. Instead, I will rest with the conclusion that if we assume ELABORATED* and follow the guidance Lewis gives for deciding when a potential cause makes enough of a difference to count as a cause, it is indeterminate whether a single drive causes global warming. Some might balk at this conclusion. Towards the end of the paper, I will offer some considerations that hopefully make it more palatable.
Timing and Emergence
One of Kingston and Sinnott-Armstrong's arguments for thinking that my single drive does not cause climate change related harm is that the only difference (if any) this drive makes is that harm would occur a fraction of a second earlier than it otherwise would (an argument I earlier called Ib). A similar argument has previously been advanced by Maltais (2013). Assuming ELABORATED, and granting that my drive would affect the timing of harm's occurrence, we must conclude that my drive actually influences climate change related harm. This way, assuming ELABORATED can quite straightforwardly refute one of their arguments in favour of thinking that a leisure drive definitely does not cause climate change related harm. Yet, considering that the influence my drive exerts on climate change related harm might be negligible, we should say that my drive definitely makes some difference in the timing of harm's occurrence, but that it is indeterminate whether this minuscule difference is negligible or not, and therefore it is indeterminate whether my drive causes climate change related harm. In other words, if we assume ELABORATED*, it is indeterminate whether my drive causes global warming.
Still, Kingston and Sinnott-Armstrong (2018) (2005) does not consider climate change to be an uncaused phenomenon; he considers a significant amount of global warming to be due to human activities. However, Hiller argues, it seems a bit odd to claim that a single car ride does not cause global warming, but that global warming is due to human activities.
If individual drives do not make any difference in AGCC, but everyone's driving does, then everyone's driving would have to be some odd emergent entity that is not reducible to individual acts of driving. The existence of emergent properties is controversial in metaphysics, as is the claim that emergent properties can cause any effects. But at least our opponents have not shown that global warming is not emergent. As a result, they have not shown joyguzzling is causal. (Kingston and Sinnott-Armstrong 2018: 176).
To advance the thesis that global warming and climate change are emergent phenomena, they propose two arguments. First, they use an analogy concerning emergent properties of oil molecules. Following Wimsatt (2007), they argue that emergent properties can be contrasted to aggregative properties, such as mass. One molecule of oil has mass, but lacks properties like sliminess and colour. If the property of greenhouses gases needed for them to contribute to global warming is an emergent one, they would have a case. However, the relevant property of greenhouse gases is not emergent in the way the sliminess of oil is. Even though an individual molecule of CO 2 does not on its own cause additional floods, draughts etc. (except possibly for butterfly effect cases), it can absorb a photon and re-emit it; therefore, it hason its ownthe ability required for hindering heat radiation from escaping earth. This property, which is an essential property of greenhouse gases, is aggregative. Second, they argue that just like the emergent properties of oil, the amount of global warming the CO 2 molecules in the atmosphere cause Bdo not depend simply on the number of molecules, but also on their arrangement, structure, or organization^ (Kingston and Sinnott-Armstrong 2018: 175). They continue, When the molecules are arranged properly in the atmosphere, the group causes climate change (as well as individual storms), but the same molecules would not cause climate change (or individual storms) if they were re-arranged into a thin sheet only one molecule thick far from the earth's surface. In this case, any photon absorbed and reemitted by a particular molecule would most likely be released at one of the many angles that would see it miss the earth, rather than back towards it as typically happens when the molecules are arranged thickly nearer the earth. (Kingston and Sinnott-Armstrong 2018: 176) It is true that the arrangement of the molecules matters for the degree of global warming, but this does not make the relevant property of greenhouses gases emergent. Individual oil molecules lack the property of being slimy because this property is something that arises in the interaction between the oil molecules. Unlike an oil molecule which lack the property of being slimy, each molecule of CO 2 has the ability to absorb and re-emit photons.
Further, you might think that even if individual CO 2 -molecules have the ability to absorb and re-emit photons, they lack the ability to re-emit them towards earth. 18 However, given the vast amount of time the molecule will be in the atmosphere, it is overwhelmingly likely that it will re-emit quite a few photons back towards earth, photons that otherwise would have escaped earth. The only difference in the case where the molecules are spread out in a thin layer far from earth is that the re-emitted photon has less of a chance to be re-emitted towards earth, hence the lower degree of global warming. But still, given the time most CO 2 -molecules stay in the atmosphere, and given how many such molecules a single drive emits, there is no real possibility that no molecules from a single drive would re-emit photons towards earth.
In some places, Kingston and Sinnott-Armstrong say that individual molecules of oil lack the property of being slimy because we cannot feel them. For instance, they write: BAn individual molecule is not slimy in the least. We cannot feel any individual molecule at all, so it cannot feel like slime.^(2018: 175). However, whether we can perceive a property cannot be the relevant consideration here. What is relevant is whether an individual molecule has the property relevant for being able to cause the outcome in question.
Finally, and perhaps most importantly, given ELABORATED it suffices that the greenhouse gases produced by my leisure drive re-emit some photons back to earth in order for global warming to occur in a slightly different manner. Global warming would come in another version if I joyguzzle. The photons re-emitted towards earth would be taken up by some matter on earth, and thereby make the molecules of this matter move slightly faster, something which constitutes raising the temperature of this matter. This way, my drive influences the event of global warming. Therefore, going for a leisure drive causes global warming. But, once again, if we instead assume ELABORATED*, we might or might not have reasons to think that the influence my leisure drive exerts on global warming is sufficient enough for my drive to count as a cause, and therefore we must conclude that it is indeterminate whether my drive causes global warming. 18 I am thankful to an anonymous reviewer for making elaborate this issue.
You could understand Kingston & Sinnott-Armstrong's argument from emergence in a different way. In some places, it seems that they are not taking the analogy between the sliminess of oil and the greenhouse gases' ability to cause global warming literally, but rather to illustrate that just as properties can be emergent, so can events. For instance, they write: BJust as individual molecules of oil do not cause parts of sensations of sliminess (or yellowish color), so individual molecules of greenhouse gas do not cause parts of dangerous climate impacts^ (Kingston and Sinnott-Armstrong 2018: 175). On this view, global warming as well as individual climate impacts are emerging events, and they are emerging relative to individual emissions of greenhouse gases since no particular emission causes these events, while the set of emissions does. If this is the idea, this argument basically amounts to reiterate that global warming and climate impacts are collective harm problems, albeit doing so in a more illustrative way. However, as I already have argued, the idea that a single drive causes neither global warming nor climate change relies on the problematic assumption that something like SIMPLE is true. Moreover, as I also have argued, if we assume SIMPLE, it is far from sure that we can conclude that the set of all actual emissions do cause global warming.
I should add that there might be properties of greenhouse gases that are essential for their ability to cause global warming that are emergent, properties that I have failed to discuss here. If this would turn out to be the case, Kingston and Sinnott-Armstrong would be correct after all. However, as I think my argument shows, they have yet to show that this is the case.
The Counterfactual Analysis, Climate Change, and Transitivity
There is a case to be made for thinking that a single drive does not influence climate change related harm. If a single drive does not influence climate change related harm, Kington and Sinnott-Armstrong's claim that THE HARM PRINCIPLE does not entail that we have a moral obligation to refrain from joyguzzling comes out true. Consider the following example: BURST SEAWALL: A seawall bursts due sea level rising. If I had not gone for a leisure drive, this event would have happened in a slightly different manner; it might have happened a fraction of a second earlier, the initial crack in the sea wall might have been at a slightly different place due to a minuscule difference in the currents of the water pushing against the sea wall, or the water rushing through the burst seawall might have done so in an ever so slightly different manner.
If this were the case, my leisure drive would be a cause of the burst seawall and of the resulting flood if we assume ELABORATED.
BURST SEAWALL CONTINUED: Say that this flood harms you; you get swept away in the flood, bumping into several objects and get severely injured. Imagine further that the harm you experience would be exactly the same whether or not I had refrained from going for a leisure drive; you would have bumped into exactly the same objects, contracting the same injuries etc. We could further assume that there is no difference in imperceptible harm as well; your neurons are firing at exactly the same rate and manner in both cases.
If this is the case, neither ELABORATED nor ELABORATED* seems to entail that my leisure drive causes you harm; my leisure drive does not influence the event of you being harmed at all.
Sinnott-Armstrong might have something similar in mind, albeit in relation to SIMPLE, when he argues: There is nothing bad about global warming or climate change in itself if no people (or animals) are harmed. But there is no individual person or animal who will be worse off if I drive than if I do not drive my gas-guzzler just for fun. Global warming and climate change occur on such a massive scale that my individual driving makes no difference to the welfare of anyone. (Sinnott-Armstrong 2005: 301).
However, according to Lewis, causation is the transitive closure, or ancestral, of causal influence. This means that if C influences D, and D influences E, then C causes E; and this is true even if C does not influence E. To illustrate this idea, Lewis considers the following Frankfurt-style case (cf. Frankfurt 1969): NEUROSCIENTIST: The neuroscientist knows just how she wants Jones to behave. She hopes that Jones, left to himself, will behave just as she wants. By reading his brain, she can predict what he will do if left to himself. She predicts that he will do just what she desires, so she does nothing else. But if instead she had read that he would stray from the desired path, she would have taken control and manipulated his brain directly so as to produce the desired behavior. (Lewis 2000: 12) Lewis assumes that the right conclusion in NEUROSCIENTIST must be that Jones's initial brain state (C) caused his subsequent behaviour (E), but this is not the result we get if we apply ELABORATED*; his behaviour will be exactly the same even if there is a change in how, when or whether his initial brain state occurs. The remedy to this problem is to look at the transitive ancestral of counterfactual dependence: causation; and to distinguish an intermediate event (D) consisting of the combination of neuroscientist's decision not to intervene and Jones' brain state at that time. Here, it is clear that Jones' initial brain state (C) influenced (D). A difference in Jones' initial brain state (C) would have resulted in a difference in the neuroscientist's decision (which partly constitutes D). Further, (D) caused (E): had for instance the neuroscientist decided to change Jones' brain state in some way, (E) would have turned out different. This way, we have a chain of influences going from (C) to (E), explaining why (C) caused (E) in this case.
If causation is the transitive closure of causal influence, this means that if my leisure drive influences the breaking of the sea wall, and the breaking of the sea wall in turn influences your well-being in a negative way, I have caused you harm. Likewise if we take this in several steps; if I influence global warming, which in turn influences the sea level rising, which in turn influences the breaking of the sea wall, which in turn harms you, I have caused you harm.
Still, there is the question of whether my leisure drive really makes enough of a difference for the breaking of the sea wall in order to count as a cause. As before, if we assume ELABORATED, it definitely does, and so we must conclude that my leisure drive causes you harm in BURST SEAWALL; but if we instead assume ELABORATED*, it is indeterminate whether my leisure drive makes enough of a difference for the breaking of the sea wall, and therefore we must instead conclude that it is indeterminate whether my leisure drive causes you harm.
You could of course deny that counterfactual causation always is transitive; you might agree with writers such as Douglas Ehring (1987) and Ned Hall (2000Hall ( , 2004 that there are counterexamples to the transitivity of counterfactual causation. (For a defence of the transitivity of causation, see for instance Paul 2000;Maslen 2004, and of course Lewis 2000). Still, even if it would turn out that causation is not always transitive, causal transitivity might hold in BURST SEAWALL. As a final point, if you still do not think that causation is transitive in cases like BURST SEAWALL, and therefore conclude that my leisure drive does not cause you harm in this case, you might still conclude that my leisure drive might cause other people harm. After all, quite a few instances of climate change related harm might be directly influenced by my leisure drive. Think for instance of cases where my leisure drive causes minuscule differences in how and when a sea wall breaks, and when these minuscule differences in turn makes a minuscule difference for how and when some people are harmed. In these cases, because I go for a leisure drive, some people are harmed in a slightly different way or at a slightly different time. If this is the case, we do not have to assume that causation is transitive in order to conclude that I cause harm when I go for a leisure drive (if we assume ELABORATED), or that it is indeterminate whether I cause harm (if we assume ELABORATED*).
Salient Causes and Background Conditions
We can now turn to Sinnott-Armstrong's second argument for thinking that a leisure drive does not cause climate change: the argument that joyguzzling does not cause climate change since there are no special reasons to pick out that act out of all the other background conditions and identify it as a cause (the argument I earlier called II). As a contrast to the JOYGUZZLING case, he considers a case where five persons are pushing a car with a person locked inside off a cliff. In order for them to succeed in this endeavour, it is sufficient that three persons help pushing. However, I join them in pushing. Here, he assumes that Bmy act of pushing is a cause (or part of the cause) of the harm to the passenger^(Sinnott-Armstrong 2005: 297), and this is so even though my act is neither sufficient nor necessary for the outcome. BWhy^, he continues, BBecause I intend to cause harm to the passenger, and because my act is unusual.^1 9 These factors, he continues, are lacking in the JOYGUZZLING case; people who go for a leisure drive does not do so intending to bring about climate change related harm, and such acts are not unusual. Note that we must go back to assuming something like SIMPLE in order to fully appreciate Sinnott-Armstrong's argument. If we would assume ELABORATED or NESS instead, it seems obvious that my pushing the car causes harm to the person locked inside; the car might for instance go over the cliff at an earlier time given that I help pushing. Sinnott-Armstrong illustrates the claim that the rarity of an act might provide us with a reason to identify it as a cause even though it is not necessary for bringing about the outcome with the following example: MATCH LIGHTING: For a match to light up, we need to strike it so as to create friction. There also has to be oxygen. We do not call the oxygen the cause of the fire, since oxygen is usually present. Instead, we say that the friction causes the match to light, since it is unusual for that friction to occur. It happens only once in the life of each match. (Sinnott-Armstrong 2005: 298) However, it seems a bit odd to use the match lighting example to illustrate how the rarity of an event might turn something that otherwise is a non-cause into a cause. After all, match lighting is not a case where the resulting event is overdetermined. If friction had not occurred, the match had not been lighted. Therefore, this example cannot be used to show that the rarity of a condition serves as a reason to identify this condition as a cause in a case where the effect is overdetermined.
There is another view in the vicinity of the one that Sinnott-Armstrong (2005) suggests. Hart and Honoré (1985) suggest that a cause stand out as salient if it is abnormal (i.e. rare) or if it consists in a voluntary human action (for instance an act performed with the intention to harm). Still, on their account, a cause can only stand out as salient if it is a cause to begin with. That a cause is abnormal or intentional does not turn a non-cause into a cause, as Sinnott-Armstrong assumes it does. 20 Instead, to deal with cases of overdetermination, Hart & Honoré claims that a in order for an event to be a cause of an outcome, it must either be necessary or sufficient for the occurrence of this outcome.
They briefly elaborate what it means for a cause to be sufficient for an outcome to occur, and they end up holding an analysis that is very close to NESS. Therefore, on their account, the person that joins others in pushing a car over a cliff would be causing harm to the passenger trapped inside on their account, not since he intentionally does this, but since his pushing is a necessary part of a sufficient condition for harming the passengers. In contrast to, say, gravity, his pushing is also a salient cause of the harm done to the passenger trapped inside the car since he is pushing with the intent to harm the passenger (this goes for each of those who helps pushing the car).
The contrast between Hart & Honoré's view and Sinnott-Armstrong's view helps us see a problem with (II): it confuses two different senses of causation. In one sensethe narrow sensea cause is one (or a few) condition(s) that stands out as salient. In MATCH LIGHTING, the friction might fit this description. As Lewis puts it: We sometimes single out one among all the causes of some event and call it "the" cause, as if there were no others.
[…] We may select the abnormal or extraordinary causes, or those under human control, or those we deem good or bad, or just those we want to talk about. (Lewis 1973: 558-59) In another sensethe broad senseeach condition is a cause, salient or not, that bring about the effect. On this view, both friction and the presence of oxygen count as causes of the match lighting. There are also other conditions that would count as causes, such as the fact that the match is made of flammable matter, and that one end of the match is coated with a material that can be ignited by frictional heat. This is the sense of causation that Lewis aims at analysing, both in his (1973) BCausation^and in his (2000) BCausation as Influence^, and this is also the sense of causation the NESS-condition gives an analysis of.
This raises the question of whether THE HARM PRINCIPLE refers to salient causes or to causes in the broad sense. If it refers to causation in the broad sense, it might entail that we have a moral obligation not to go for a leisure drive even if Sinnott-Armstrong (2005) would be correct in claiming that we have no reason to identify this drive as a salient cause of global warming. It would for instance entail this if we assume that ELABORATED correctly describes causation in the relevant sense. It would also entail this if we assume that ELABORATED* correctly describes 20 Things get a bit more complicated if we understand THE HARM PRINCIPLE as instead referring to salient causation. If we want to evaluate whether this principle entails that I have a moral obligation to refrain from going for a leisure drive because it causes climate change related harm, we need some criteria for deciding whether such a drive counts a salient cause of this harm. One problem is that the criteria that and Hart & Honoré suggest and Sinnott-Armstrong makes use of (i.e. intentionality and abnormality) are not entirely reliable. As Lipton argues, B[i]f I tell my young son that he dropped his food on the floor because he was not paying attention to the task at hand, I do not imply that this is an unusual state of affairs^ (Lipton 1992: 134). We could add that nor would he imply that his son dropped the food on the floor intentionally, but still the son's behaviour is the salient cause for why the food end up on the floor. For another example, reckless drivers are common and they do not intend harm, still such drivers might be the salient causes of harm. Moreover, if we accept these criteria, we must also accept that there might be some agents going for a leisure drive that actually cause climate change; namely all those who intend climate change to occur. Admittedly, there are probably not many such agents, but there might be some malicious agents that are tired of humanity and that want them to suffer, or some agents living in the arctic areas of the world aiming for a warmer climate where they live. It seems however that if these agents are causing climate change, they do so regardless of what they intend to do. Their intentions might have implications for their moral responsibility (i.e. whether they are blameworthy or praiseworthy), but not for their causal ditto. This should make us hesitant to use the suggested criteria.
It might turn out that there are no fixed criteria governing whether a cause stands out as salient. Instead, this issue might be fundamentally perspective dependent. For instance, when a house is burning, the fire fighters might see the presence of oxygen, heat and flammable material as the salient causes of the fire, and thereby aim at extinguishing the fire by reducing the access to one or more of these elements. Conversely, the police at the scene might see the arsonist's lighting of the fire as the salient cause. If saliency is fundamentally perspective dependent in this way, it would seem odd to hold that THE HARM PRINCIPLE refers to salient causation. Why would the question of whether I have a moral obligation not to harm someone be perspective dependent? What would this even mean? That I only have a moral obligation not to harm someone if there will be no one from whose perspective my act will stand out as the salient cause of harm?
As one final consideration, it might be the case that we should understand the intentionality criterion as referring to the content of the agent's mental state when acting rather than to the reason because of which the agent is acting. In that case, this criterion would entail that there is a moral obligation not to knowingly cause harm even if this harm is merely foreseen and therefore also that I have a moral obligation not to go for a leisure drive (given that I am aware that doing this contributes to global warming and its related harms). In order to settle this question, we first have to settle the question of whether the doctrine of double effect is accurate, and that is too big of an issue for this paper.
Conclusion
Kingston and Sinnott-Armstrong's conclusion that a single leisure drive does not cause global warming and climate change relies on the assumption that something like a simple counterfactual analysis of causation is correct. However, this analysis is known to give problematic results in cases of overdetermination and pre-emption. Since global warming, climate change and their related harms are typical examples of these kinds, we should hesitate to assume this analysis of causation in this context. If we instead assume a more elaborated analysis of causation, like the one that Lewis (2000) suggests, it turns out that it is indeterminate whether a single leisure drive causes global warming. This indeterminacy is partly due to that there is no natural way to distinguish the emissions of greenhouse gases that have large enough influence on global warming to count as causes from those that have not. Since there is no natural way to make this distinction, cases like JOYGUZZLING are in fact importantly different from cases like SHOOTING AND POISONING and BOTTLE SHATTERING.
If we go beyond the guidelines Lewis suggests to decide whether a potential cause has large enough of an influence, we might find reasons to consider also minuscule differences in how and when the outcome occurs to matter. If we are in a context where we wish to explain global warming, or where we wish to control it, we have reasons to consider also small emissions of greenhouse gases, such as a those produced by a single drive, to exert enough of an influence to count as causes. In such a context, we might have reasons to consider global warming to be a very fragile event.
Moreover, if a single drive does not directly influence climate change induced harm (if it does not influence how, when or whether the firing of neurons of any victim of climate impacts occurs), a single drive might still be said to cause harm. Since a single drive influences global warming, and global warming influences climate change, and climate change in turn influences climate change induced harm, a single drive causes this harm if causation is transitive in this case.
Finally, Sinnott-Armstrong's (& Kingston's) argument that a single drive cannot cause global warming since there are no special reasons to pick out that act out of all the other background conditions and identify it as a salient cause of global warming is mistaken, or at least inadequately argued for. It is mistaken if THE HARM PRINCIPLE refers to causes in the broad sense since. In that case, the saliency of a cause would not matter for whether I have moral obligation not to cause harm. If THE HARM PRINCIPLE instead refers to salient causes, we need some reliable criteria to distinguish salient causes from mere background conditions. Without such criteria, the moral obligations THE HARM PRINCIPLE entails becomes implausibly dependent on people's perspective on why harm occurred. However, the criteria that Sinnott-Armstrong uses are susceptible to counterexamples. Some might be dissatisfied by the conclusion that it is indeterminate whether a single drive cause global warming if we assume ELABORATE*, so I want end this paper by offering some considerations that hopefully might render this conclusion more acceptable.
First, there might be semantic issues lurking here. You might think that it is absurd to say that a single drive causes global warming since saying so implies that such a drive is the only cause of global warming. This linguistic intuition might spill over, and make you think that it is likewise absurd to say that it is indeterminate whether a single drive causes global warming; you might think that it cannot be indeterminate whether a single drive causes global warming because there clearly are other causes of global warming. However, if you think this, you are probably confusing causes with salient causes. Saying that a single drive causes global warming does not imply that this is the only cause. Nor does it imply that such a drive is a salient cause.
You might also think that it is absurd to say that a single drive causes global warming (or that it is indeterminate whether this is so) since you think that the notion of causation is intimately connected to something like SIMPLE. Still, if you think so, it seems that you must accept strange conclusions in cases of pre-emption and overdetermination. For instance, if two shooters simultaneously shot me, and if both shots were sufficient to kill me, you would have to conclude that neither shooter caused my death.
One way to get rid of these misleading semantic intuitions would be to use the notion of contribution instead of causation in some cases. 21 This seems fitting in JOYGUZZLING ('a single drive contributes to global warming' or 'it is indeterminate whether a single drive contributes to global warming'), but perhaps less so in BOTTLE SHATTERING ('Suzy's throw contributed to the bottle shattering'). I must leave the question of when it is accurate to use 'contribution' instead of 'causation' for another paper. 22 Second, you might be dissatisfied with the conclusion since you want to be able to say that a single leisure drive do cause global warming, and thereby draw the conclusion that there is a moral obligation not to go for a leisure drive. Still, even if we cannot determinately say that a single leisure drive causes global warming, Kingston & Sinnott-Armstrong are likewise wrong in claiming that a leisure drive definitely does not cause global warming (as well as climate change and their related harms). Therefore, they have not really succeeded in showing that there is no moral obligation to refrain from going for a leisure drive. We can at least say that it is undecided whether a single leisure drive causes global warming, climate change and their related harms.
On this note, we might also speculate about what THE HARM PRINCIPLE would entail in cases where it is indeterminate whether I cause harm. One could specify it so it only entails that there is a moral obligation to refrain from performing some action when it is true that this action causes harm to someone. On this interpretation, THE HARM PRINCIPLE would not entail that there is a moral obligation to refrain from joyguzzling (at least not unless we have additional reasons for thinking that even minuscule emissions of greenhouse gases are large enough to count as causes of global warming). However, it does not seem farfetched to assume that we should specify THE HARM PRINCIPLE differently. We should rather specify it to state that that there is a moral obligation to refrain from performing acts when it is true that this act will cause harm, or when it is indeterminate that this act will cause harm. After all, it seems that we at least in some cases have moral obligations not to perform acts that might cause harm. | 18,666.2 | 2019-04-01T00:00:00.000 | [
"Environmental Science",
"Philosophy"
] |
An experimental study on tolerance to hypoxia in tardigrades
Introduction: Tardigrades are small aquatic invertebrates with well documented tolerance to several environmental stresses, including desiccation, low temperature, and radiation, and an ability to survive long periods in a cryptobiotic state under arrested metabolism. Many tardigrade populations live in habitats where temporary exposure to hypoxia is expected, e.g., benthic layers or substrates that regularly undergo desiccation, but tolerance to hypoxia has so far not been thoroughly investigated in tardigrades. Method: We studied the response to exposure for hypoxia (<1 ppm) during 1–24 h in two tardigrade species, Richtersius cf. coronifer and Hypsibius exemplaris. The animals were exposed to hypoxia in their hydrated active state. Results: Survival was high in both species after the shortest exposures to hypoxia but tended to decline with longer exposures, with almost complete failure to recover after 24 h in hypoxia. R. cf. coronifer tended to be more tolerant than H. exemplaris. When oxygen level was gradually reduced from 8 to 1 ppm, behavioral responses in terms of irregular body movements were first observed at 3–4 ppm. Discussion: The study shows that both limno-terrestrial and freshwater tardigrades are able to recover after exposure to severe hypoxia, but only exposure for relatively short periods of time. It also indicates that tardigrade species have different sensitivity and response patterns to exposure to hypoxia. These results will hopefully encourage more studies on how tardigrades are affected by and respond to hypoxic conditions.
Introduction
Tardigrades are micro-metazoans inhabiting a wide range of environments and micro-habitats around the world, from the poles to the deep sea, and including both permanently aquatic conditions and terrestrial habitats that for shorter or longer periods are deprived of moisture (Nelson et al., 2018).The phylum Tardigrada currently includes more than 1,400 species (Degma and Guidetti, 2023).Despite a large diversity in habitat choice all tardigrades are essentially aquatic animals and need to be surrounded by water to be active.Nevertheless, tardigrades living in terrestrial habitats have evolved adaptations to survive a complete deprivation of body water, a state at which metabolism necessarily ceases and the animal enters an ametabolic state of cryptobiosis (e.g., Keilin, 1959;Wright et al., 1992;Wright, 2001;Møbjerg et al., 2011).Cryptobiosis cannot only be induced by desiccation (anhydrobiosis), but also by cold (cryobiosis), osmotic pressure (osmobiosis), and oxygen deficiency (anoxybiosis) (Keilin, 1959).Among these categories, anhydrobiosis (e.g., Crowe, 1971;Rebecchi et al., 2007;Welnicz et al., 2011;Schill and Hengherr, 2018;Arakawa, 2022) is by far the most studied phenomenon followed by cryobiosis (e.g., Ramløv and Westh, 1992;Guidetti et al., 2011;Hengherr and Schill, 2018;Møbjerg et al., 2022) and osmobiosis (e.g., Halberg et al., 2009;Heidemann et al., 2016;Emdee et al., 2023).Reports of tolerance to low oxygen conditions (hypoxia) in tardigrades are very scarce and mainly restricted to anecdotal observations describing that tardigrades may enter an asphyctic state in response to oxygen deficiency and may survive in this state for a few days (e.g., Ramazzotti and Maucci, 1983;Nelson et al., 2015).Crowe and Higgins (1967) reported that the ability of desiccated (anhydrobiotic) tardigrades to rehydrate and resume activity tended to decline with oxygen levels below 5 ppm.Also, two species of marine tardigrades (Dipodarctus subterraneus and Tanarctus ramazzottii) were reported from an outlet area in the Black Sea at a depth of 88-250 m (Kharkevych and Sergeeva, 2013), where oxygen levels at 88-122 m were estimated at 0.12-0.17ppm (mg/L), suggesting that these populations of tardigrades live under more or less anoxic conditions.Several other invertebrate metazoa have been found in this area of permanent anoxia, with Nematoda, Harpacticoida and Polychaeta being the most abundant (Sergeeva and Mazlumyan, 2015).Some of the adaptations of deep-sea meiobenthos are reviewed in Zeppilli et al. (2018), but the adaptations of tardigrades in these environments are unknown.In contrast to anhydrobiosis and osmobiosis, which is connected to dehydration and contraction of the body into a "tunstate" (Møbjerg and Neves, 2021), the response to hypoxia in tardigrades is immobilisation and inflation of the body with the eight legs protruding from the body trunk.This behavioral response has been interpreted as resulting from a lack of osmoregulatory control (Nelson et al., 2015).
So far, no specific studies evaluating the tolerance to hypoxia in hydrated tardigrades have been reported, and there are also no studies evaluating if the metabolism of tardigrades comes to a halt when oxygen are depleted and the animals enter the asphyctic state.The existence of a cryptobiotic state sensu stricto induced by low oxygen levels under hydrated conditions has also been challenged (Wright et al., 1992;Clegg, 2001) and remains to be verified.
Given that the ability to enter cryptobiosis in response to various environmental agents represent adaptations for survival, species adapted to different environmental conditional are also expected to exhibit different levels of tolerance.In line with this, previous studies have found inter-specific differences among tardigrade species in tolerance to desiccation (Wright, 1989;Jönsson et al., 2001;Rebecchi et al., 2006), cold (Hengherr et al., 2009), and osmoregulation (Møbjerg et al., 2011).Such differences may also be expected with respect to tolerance to hypoxia and the ability to enter anoxybiosis, but no such comparative data are available.
Here we present the first experimental study on tolerance to hypoxia in tardigrades, evaluating the response to exposure to hypoxia for time periods up to 24 h in two different species of tardigrades.
Specimens of H. exemplaris were obtained from a population cultured in the lab, fed on a diet of chlorella algae and kept at 15 °C.This population originates from a British strain that was previously named Hypsibius dujardini but has been redescribed as H. exemplaris (Gąsiorek et al., 2018).The habitat of the original collection was the benthic layer of a pond.R. cf.coronifer was extracted from moss growing on carbonated rock at Ölands Alvar in south-eastern Sweden (see description of habitat in Jönsson et al., 2001) using the Baermann funnel method (Czerneková et al., 2018).The funnels were set up using deionized water (EASYpure ® RF, m. 07033, Barnstead/Thermolyne, Dubuque, IA, United States) in the day before the experiment and left for 12-13 h after which the extracted animals were left to acclimatize in new water for 2 h.
For both species only normally moving adult animals of medium and large size were selected for use in the experiment.In total, 630 specimens of each tardigrade species were used in the main experiment, including hypoxia-exposed specimens + controls (see Section 2.3.1), and 50 specimens in the experiment with increased hypoxia (see Section 2.3.2).The body size of individual specimens was not measured in this study but the mean size of adult R. cf.coronifer has been reported as 645 μm (Czernekova and Jönsson, 2016) and of H. exemplaris as 232 μm (Gąsiorek et al., 2018).Both species consist of females with parthenogenetic reproduction.
Method for creating a hypoxic environment
Based on the evaluation of methods for removing dissolved oxygen from water by Butler et al. (1994) we used high purity nitrogen gas (Nitrogen Instruments 5.0, ≥99,999%) to create hypoxic conditions.Deionized water (see Section 2.1) was used in all experiments.The nitrogen flow was set to 65 mL/min and was chosen to create a steady but not to vigorous purging.The flow was measured with a Porter Instrument B-125-20 flowmeter.To facilitate the spread of nitrogen in the water, an aquarium diffusion stone was attached to the end of the tube.The lowest concentrations of dissolved oxygen obtained in this study was 0.2-0.3ppm after purging with nitrogen, which is in line with the study by Butler et al. (1994).Dissolved oxygen in water was measured using a HACH HQ40d multimeter with a LDO101 probe (HACH, Colorado, United States).
Exposure to hypoxia for different time periods
The general design of this experiment was to expose hydrated active tardigrades to hypoxia during different time periods (1, 6, 12, 18, and 24 h) and evaluate the conditions of the animals immediately after the exposures and approximately 10 h after exposure.
In each exposure trial, seven replicate samples were used, each of them with 10 animals, contained in 100 mL Duran ® laboratory glass bottles (Schott, Mainz, Germany) filled with 80 mL of deionized water.The seven samples exposed to hypoxia were connected serially to the nitrogen gas cylinder via plastic tubes, and the bottles were sealed with airtight rubber stoppers with double tubing allowing inflow and outflow of nitrogen gas (see Supplementary Figure S3).
In all except the 1 h exposure trial seven control samples were used, each with 10 animals.It was considered unnecessary to use controls in the 1 h trial due to the short time span.The control samples were not exposed to hypoxia, but otherwise kept under similar conditions.Tardigrades were transferred to the bottles prior to the start of the introduction of the nitrogen gas, using a Pasteur pipette.For each exposure time period tested, a new set of tardigrades were used.The same experiments were performed for R. cf.coronifer and H. exemplaris, but in separate trials.
Since the animals were placed in the bottles before the nitrogen flow was initiated, the effective exposure time at the lowest oxygen level in all trials is shorter than the reported exposure times, and relatively more so in the short-time than in the long-time exposures.The rate at which oxygen level declined in our experiment was not measured, but preexperimental tests showed that oxygen levels in the lowest range had been reached after approx.25 min.Butler et al. (1994) showed that rate of deoxygenation using the same method (but with larger water volume and higher gas flow) is more rapid at the beginning and slows down as the water gets saturated with nitrogen.The animals in our study therefore may well have experienced low enough oxygen levels to enter an asphyctic state within 10-15 min after the start (see Result Section 3.4).By allowing a gradual change in the oxygen conditions, both towards low oxygen in the initial phase of the trial and towards restored oxygen after the trial (see below), the animals were allowed to make physiological adjustments in response to gradually changed oxygen conditions.
The oxygen levels in each of the 14 bottles (7 treatments +7 controls) were measured prior to each experiment, and when the experiment had been running for the set amount of time the nitrogen flow was stopped and oxygen levels were again measured.The oxygen level of the deionized water before any treatment was on average 8.43 ppm (SD = 0.10; estimated as the mean value based on the means for all experiments trials).The mean (SD) level of oxygen at the end of the 1, 6, 12, 18, and 24 h trials was 0.66 (0.11), 0.42 (0.090), 0.66 (0.037), 0.44 (0.10), and 0.52 (0.097) ppm for H. exemplaris, respectively, and 0.75 (0.090), 0.38 (0.064), 0.47 (0.14), 0.23 (0.044), and 0.31 (0.091) ppm for R. cf.coronifer.The experiment was performed in a laboratory with natural daylight and a temperature of 20-22 °C.
For ease of observation, the animals from each bottle were directly transferred together with the 80 mL water used in the experiment to a 100 mL plastic cup (Kebolab AB, 40 mm high, 65 mm upper diam.)without cap after the hypoxia treatment.Thus, no new water was added but instead the low oxygen water was allowed to reoxygenate naturally from oxygen in the air.The water surface diameter in the cup with 80 mL water was 62 mm and the water depth 30 mm.
Exposure to increased levels of hypoxia
To investigate how reduction of oxygen level affected the tardigrades and at what level of hypoxia effects on the animals appeared we performed a separate experiment.We used the same two species and five replicate samples with 10 animals for each species.The samples were kept in 100 mL plastic cups (see Section 2.3.1)filled with 80 mL deionized water, and the replicate samples were treated one by one (thus not in parallel).No controls were used in this experiment.
At the start, the oxygen level in each replicate cup were measured, and oxygen level was then reduced by purging nitrogen gas using a tube from the nitrogen tank and a diffusion stone, as described in Section 2.2.Oxygen level was measured continuously, and tardigrade behavior was recorded stepwise at every 1 ppm, beginning at 8 ppm and with the final recording at 1 ppm.
Rate of natural reoxygenation
To document the rate of reoxygenation after the samples were transferred from the hypoxic conditions to the cup with exposure to open air in the laboratory, a separate test was performed.Seven 100 mL plastic cups (see Section 2.3.1) were filled with 80 mL of deionized water, and nitrogen gas was then purged to lower the oxygen level to 0.3 ppm.The seven cups were then left without cover and allowed to reoxygenate from the surrounding air and the oxygen levels were measured every 0.5 h for 3 h.
Recording of animal activity
After the hypoxia exposure and transfer of animals to 100 mL plastic cup (see Section 2.3.1) the behavior of individual animals was recorded twice; immediately and after 10 h.Observations were made with an Olympus SZX9 stereo microscope.We considered 10 h as likely sufficient for asphyctic animal to recover and return to a state with regular movements based on personal experience with tardigrades exposed to hypoxia and recovering within half an hour when provided oxygenated water.However, knowledge on how recovery time may vary with exposure to different levels of hypoxia are currently lacking.We originally classified tardigrade behavior as "Regular movement," "Irregular movement," "Hypoxic" and "Dead".The animals were classified as having regular movement when they were moving with normal unimpeded leg movements.Irregular movements represent animals with irregular or slow leg movements.Animals identified as hypoxic were in an immobile asphyctic state with inflated and stretched out bodies (interpreted as a response to oxygen deficiency), while animals identified as dead were also usually stretched out but with body content disintegrated.Although we initially tried to distinguish between these two immobile categories, and dead animals are usually easy to identify from their disintegrated body contents, the status of apparently asphyctic animals is more problematic with respect to the living status.In the analyses we therefore put together animals classified as hypoxic and dead into the same category and used three behavioral categories: regular movement (RM), irregular movement (IM), and no movement (NM).Figure 1 shows the appearance of tardigrades with regular movements and tardigrades in an asphyctic state with no movements induced by low oxygen.
Statistical analyses
Statistical analyses were made using IBM SPSS Statistics (v.24).The distribution of data was evaluated from histograms and Q-Q plots using unstandardized residuals from a univariate GLM analysis, with exposure as independent variable and behavioral response categories as dependent variables.Since residuals were found to be non-normally distributed, nonparametric tests were used in all analyses.For analyses of differences between treatment groups the Kruskal-Wallis Analysis of Variance test within the Nonparametric tests/Independent samples module of SPSS was used.This module also provided pairwise comparisons between groups, based on Dunn´s post hoc test.Treatment groups were considered statistically different when p < 0.05.The standard uncorrected p-values were used as the base for interpreting the results, but in pairwise post hoc tests the Bonferroni-adjusted values are also presented for comparison.The adjusted p-values were not used as the Bonferroni correction has been criticized to warp results, and while decreasing the risk of type I error, it also increases the risk of type II errors (Perneger, 1998;Armstrong, 2014).
For control samples, no statistical differences were found among controls for the different time categories (within the three behavioral groups), neither for R. cf.coronifer or H. exemplaris, and within each species the control data were therefore pooled.
Richtersius cf. coronifer
The proportion of animals of R. cf.coronifer in the three behavioral categories after exposures to hypoxic conditions for different periods of time is shown in Figure 2 (for original data see Supplementary Table S1).In the first check directly after the exposure, 100% of the tardigrades were in a no movement (NM) state after 6 h, 12 h, 18 h, and 24 h exposure (Figure 2A).In contrast, after 1 h exposure animals in all three behavioral categories were observed.For the regular movement (RM) and NM variables, all exposure categories differed significantly from the control group (Supplementary Tables S2.1, S2.3), with lower proportions of animals in the RM group, and higher proportions in the NM category.No significant differences were found among the different exposure groups, but the p-values for comparisons of the NM category between the 1h and 6-24 h exposures were close to the significance level (Supplementary Table S2.3).For the irregular movement (IM) variable, the 1 h exposure group had significantly higher proportion of animals compared to all other exposure groups and to controls (Supplementary Table S2.2).In conclusion, 1 h exposure to reduced oxygen had mixed effect on the animals while 6-24 h exposure made all specimens immobile.At the second check 10 h after exposure, the proportion of animals in the RM category tended to increase for all exposure groups, and in the 6 h and 12 h groups the proportion with normal movements was 97% and 87%, respectively, and did not differ statistically compared to the control group (Figure 2B; Supplementary Table S2.1).The proportion of RM animals in the 1 h, 18 h, and 24 h groups remained lower than the controls.There were also significantly higher proportions in the 6 h and 12 h groups compared to 18 h and 24 h, while the 1 h group did not differ significantly from 12 h, 18 h or 24 h.With the exception of the 1 h exposure, the proportion of NM animals tended to increase with longer exposure time, with the 24 h group having significantly higher values than the control, 6 h and 12 h groups (Supplementary Table S2.3).The 1 h and 18 h exposure groups showed a similar pattern that deviated from the other exposure groups, with considerable proportions of animals in the RM, IM, and NM groups.
Hypsibius exemplaris
Figure 3 and Supplementary Tables S2.4-S2.6 show the results for H. exemplaris.At the check directly after exposure, animals in a state of RM were observed in the 1 h, 6 h, and 12 h exposure groups, but with significantly lower proportions compared to the controls (Supplementary Table S2.4).The higher proportion of animals with RM at the 12 h exposure is also statistically different compared to the 18 h and 24 h categories, but not compared to the 1 h and 6 h categories.
For the NM category there were significant differences between the control group and all other exposure groups, with a higher proportion of tardigrades in an NM state in the exposure groups.The proportion of animals in the NM state was significantly lower in the 1 h group than in the 6 h and 24 h groups, while no statistical differences were found compared to the 12 h and 18 h exposure groups (Supplementary Table S2.6).Animals in the IM category were observed in all exposure groups, but with a significantly higher proportion in the 1 h group.
At the 10 h check, there were still no animals returning to a RM state after the 18 and 24 h exposures, while the proportion of animals in this state had increased in the 1 h, 6 h, and 12 h groups, but remaining significantly lower than the controls (Supplementary Table S2.4).In the 1 h and 6 h exposure groups, around 60% of the animals had returned to regular movements.The proportion of animals in the IM state did not show any dramatic change between the two checks but tended to be slightly higher at the 10 h check.The proportion of animals in the NM state tended to increase with increased exposure time, and in the 12-24 h group, most animals were in a state of no movements (Figure 3).
Comparison of responses to low oxygen in the two species
Figure 4 and Table 1 shows a comparison between the two species in their behavioral responses RM, IM, and NM to hypoxia, for the observations directly after exposure and after 10 h.At the first check the overall pattern was relatively similar for the two species, with most specimens for all exposure groups being in the NM group, thus strongly affected by the hypoxic conditions (Figures 4A, C, E). H. exemplaris had significantly higher proportion of RM animals in the 12 h exposure group, while R. cf.coronifer had higher proportions of NM animals at the 1 h exposure but lower proportions at the 12 h exposure.
At the observations 10 h after exposure R. cf.coronifer had significantly higher proportions of animals in the RM state at the 6 h, 12 h, and 18 h exposures than H. exemplaris, suggesting that the latter species had a lower rate of recovery from hypoxia (Figures 4B, D, F).For the NM state the pattern was opposite, with significantly higher proportions of H. exemplaris in the 6 h, 12 h, and 18 h exposure groups.In the 24 h exposure all or almost all animals in both species were found in the NM state.
Effects of exposure to increased levels of hypoxia
Figure 5 shows that reduced levels of oxygen started to have an impact on the behavior of R. cf.coronifer at 3 ppm of oxygen, with some animals showing irregular movements, while in H. exemplaris such impact was first observed at 4 ppm.The proportion of animals with affected behavior increased with further reductions in oxygen levels, and more animals of H. exemplaris were affected than in R. cf.coronifer.No specimens of H. exemplaris entered an NM state, while some specimens of R. cf.coronifer did so, at the 1-2 ppm level.
The rate of reoxygenation
Figure 6 shows the rate of natural reoxygenation from the air after reducing oxygen level to on average 0.3 ppm.30 min of exposure to air resulted in an increase in oxygen level to about 1 ppm, and after 60 min the oxygen levels had increased to about 2 ppm.At the last estimates the oxygen level had increased to 4.3 ppm, and the rate of reoxygenation declined to about 0.5 ppm per 30 min.Fitting a curve for the rate of reoxygenation and extrapolating from the curve function predicted that complete reoxygenation (8 ppm) would be reached after approx.400 min (6.7 h) (y 2 = a+bt 1.5 ; y = oxygen level (ppm), t = time (min), a = 0.08756, b = 0.00791, r 2 = 0.998, TableCurve 2D v. 5.01).
Discussion
The results of this study show that tardigrades can recover activity after almost complete loss of free oxygen in their environment.However, in both R. cf.coronifer and H. exemplaris recovery generally declined with increased exposure time, with no or very few specimens recovering after 24 h exposure to hypoxia.In R. cf.coronifer a very high proportion of the animals returned to normal activity after 6 h and 12 h under hypoxic conditions.The fact that all of these animals were immobile directly after the exposure show that they had responded to the hypoxic conditions and entered the asphyctic state but were able to recover.The much lower proportion of full recovery after 1 h exposure is hard to explain, but many specimens in this exposure group were still active but showing irregular movements.
We did not distinguish between animals in a viable but asphyctic state and dead animals, and whether animals that were observed in an immobile state 10 h after the exposure were all dead, or if some of them were still viable but required more time to recover, remains to be evaluated in future studies.The data on reoxygenation showed that oxygen level reached 4 ppm within 3 h, a level at which little effect was seen on behavior in the experiment with sequential reduction of oxygen, and full reoxygenation was projected to be reached within 7 h.The immobile animals at the 10 h observation post-exposure therefore would have been exposed to about 7 h of oxygen conditions >4 ppm.However, increased time of recovery after long exposure to anoxia has indeed been reported in both Caenorhabditis elegans (Van Voorhies and Ward, 2000) and in embryos of Artemia franciscana (Clegg, 1997), and the relationship between exposure time to hypoxia and recovery time deserves investigation also in tardigrades.Positive correlations between recovery time and level of stress have previously been reported in tardigrades for exposure to desiccation [rate of desiccation, Horikawa and Higashi (2004); time in dry state; Crowe and Higgins (1967); Rebecchi et al. (2009)], probably representing periods of repair of cellular damage induced by the stress (Jönsson, 2003).
Both the exposure to different time periods of hypoxia and the response to gradually reduced oxygen levels indicate that H. exemplaris is slightly more sensitive to hypoxia than R. cf.coronifer.Additional studies should be made to confirm this indication of an interspecific difference, and also consider the possible evolutionary, ecological and physiological background to such difference.H. exemplaris is considered a freshwater tardigrade, and according to Nelson et al. (2015) certain aquatic tardigrade species can survive up to 3 days in the asphyctic state.Our study suggests much more limited tolerance under very low oxygen conditions, but it is possible that less severe hypoxic levels than used in our study allow better recovery of asphyctic tardigrades.Since the animals in our study started to respond to hypoxia already at 3-4 ppm, it would be interesting to investigate in future studies how the interaction between level of hypoxia and time of exposure affects the pattern of activity response and survival.
In comparison to the two other invertebrate phyla with many species showing cryptobiotic capability-nematodes and rotifers-the observed tolerance to hypoxia in R. cf.coronifer and H. exemplaris is relatively modest.Van Voorhies and Ward (2000) reported almost no mortality in adult C. elegans exposed to anoxia up to 48 h, 50% survival after 96 h, and no survival after 144 h.In another study by Kitazume et al. (2018) a large interspecific difference in tolerance to hypoxia was reported in four nematode species, with one species (Bursaphelenchus xylophilus) showing >90% survival of adults after exposure to oxygen levels at < 0.01 ppm for 96 h.In rotifers, Ricci (2017) reported that 30%
FIGURE 6
Reoxygenation over time in ambient laboratory conditions of 7 replicate samples of deionized water without tardigrades after reduction of oxygen by nitrogen purging.The first bar ("Ref") represents the natural oxygen level before nitrogen purging (mean = 7.9 ppm, SD = 0.1), while the second bar ("0") represents the oxygen level achieved after nitrogen purging for 40 min (mean = 0.3 ppm, SD = 0.05).
of the species Macrotrachela quadricornifera could survive in an anoxic environment for 6 days.The tolerance to hypoxia in tardigrades, nematodes and rotifers is however far behind embryos of the brine shrimp, A. franciscana, which has been reported to survive 4 years of continuous anoxic conditions in a hydrated state with assumed completely arrested metabolism (Clegg, 1997).
Since very few studies on hypoxia in tardigrades have been reported, the physiological mechanisms allowing tardigrades to survive under severe hypoxic conditions, and the factor(s) limiting the time that tardigrades can survive hypoxia, are unknown.An important question related to this is whether the animals enter an ametabolic state (anoxybiosis) in response to hypoxia, or are able to maintain a reduced metabolism under very low oxygen conditions.If metabolism is arrested, energetic constraints are unlikely to limit the time that animals can stay under hypoxic conditions and still recover.Instead, accumulation of damage on cellular components may then be the cause of the inability to recover after prolonged exposure to hypoxia.The relatively short time of exposure close to anoxic conditions that the tardigrade species investigated in this study were able to recover from could indicate that they do not tolerate a complete arrest of metabolism.Clearly, studies on the metabolic status of tardigrades exposed to hypoxia, and how the cells and tissues in these animals are affected by hypoxia, including "-omics" responses to hypoxia, are needed.
In many animals, an activation of endogenous antioxidant defenses connected with exposure to hypoxia has been documented, interpreted as a preparation for oxidative stress (POS) (Hermes-Lima et al., 1998;Hermes-Lima et al., 2015).Contrary to what was earlier believed, the hypoxic state or the exit from hypoxia may give rise to increased reactive oxygen species (ROS), counteracted by an increased antioxidant activity.The antioxidant defense system is considered to have a central role in the tolerance of tardigrades and other cryptobiotic invertebrates to desiccation and radiation (Rebecchi, 2013;Jönsson, 2019;Giovannini et al., 2022), and analyses of ROS generation and antioxidant responses connected with exposure to hypoxia in tardigrades would be of great interest.Also, the presence of hypoxia-inducible factor (HIF) genes (Graham and Presnell, 2017) are of interest for understanding hypoxia tolerance in tardigrades.Genes of the HIF family are highly conserved across metazoans, but loss of major HIF pathways has been documented within the crustacean group, also including species tolerating anoxia (Graham and Barreto, 2020).In several species of tardigrades (Ramazzottius varieornatus, H. exemplaris, Paramacrobiotus sp.TYO, Echiniscus testudo), representing both taxonomic classes (Heterotardigrada, Eutardigrada), loss of one of the major HIF transcriptional regulators, the HIF-1α pathway, has been reported (Hashimoto et al., 2016;Yoshida et al., 2017;Hara et al., 2021;Murai et al., 2021).This suggests that alternative mechanisms and genetic pathways for responses to hypoxic stress have evolved in tardigrades.Studies comparing antioxidant responses to exposure of desiccation, radiation and hypoxia would contribute to evaluation of the cross-tolerance hypothesis, suggesting a common defense mechanism behind tolerance to several environmental stresses in cryptobiotic invertebrates (Ryabova et al., 2017;Jönsson, 2019).
To the extent that resistance to hypoxia represents an adaptation to the environmental conditions experienced in the natural habitats of the two species, the present results do not suggest that R. cf.coronifer and H. exemplaris are naturally exposed to long-term hypoxia.R. cf.coronifer lives in mosses regularly exposed to desiccation, which may lead to temporary oxygen deficiency when the animal is captured in small water pockets in the process of dehydration of the moss.The duration of this condition will likely be short (minutes or hours), especially in the dry Alvar habitat where the population of R. cf.coronifer used in this study lives.H. exemplaris on the other hand lives in more permanently wet freshwater habitats which rarely dry up (Gąsiorek et al., 2018), but where hypoxic/anoxic levels may arise in the benthic layer due to high decomposition activity and low water circulation.In the context of hypoxia adaptations in tardigrades, the report by Kharkevych and Sergeeva (2013) of marine tardigrades living under permanent anoxic conditions in the Black Sea is of great interest, and more studies on the natural environmental conditions, hypoxia tolerance pattern, and metabolic system of these populations would be very valuable.A report by Kristensen and Hallas (1980) on two marine tardigrades of the genus Echiniscoides from Greenland inhabiting barnacle shells is also of interest.The animals were kept in a closed vial with decomposing barnacles for 6 months, presumably under anoxic conditions, and became active after aeration of the water.Future comparative studies on tolerance to hypoxia in tardigrade populations living in terrestrial, freshwater and marine ecosystems may reveal if there are general differences in tolerance, reflecting evolutionary adaptations to current environmental conditions or phylogenetic associations more related to the evolutionary history of different lineages.
FIGURE 1
FIGURE 1 Richtersius cf.coronifer with (A) normal movements under normal oxygen levels in water, and (B) during hypoxia with inflated (asphyctic) bodies under low oxygen.Photo: K.I.Jönsson.
FIGURE 4
FIGURE 4Comparisons between the two species R. cf.coronifer and H. exemplaris of the proportions of animals in each of the time exposure categories with respect to regular movement, irregular movement, and no movement.Panels (A,C,E) shows the results from the first check directly after exposure, while panels (B,D,F) shows the results 10 h after the exposure.Error bars represent 1 standard error from 7 replicate samples, each with 10 individual tardigrades.Note: this figure is based on the same data as in Figures2, 3.
FIGURE 5
FIGURE 5Behavioral responses to reductions in oxygen level, in terms of proportions of tardigrades recorded in the three behavioral categories regular movement (RM), irregular movement (IM), and no movement (NM).Panel (A) shows the results for R. cf.coronifer and panel (B) for H. exemplaris.Error bars represent 1 standard error from 5 replicate samples, each with 10 individual tardigrades.
TABLE 1 p
-values in statistical comparisons (Kruskal-Wallis analysis) between H. exemplaris and R. cf.coronifer for the three behavioral responses regular, irregular and no movement, at the observations directly after exposure and 10 h after exposure.Values in bold indicate a significant difference (p < 0.05) between the species for a behavioral response and for a specific exposure category.Statistical data for the tests are given below the table.Degrees of freedom = 1 in all comparisons. | 7,523.6 | 2023-09-05T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
The Direct Anti-Virulence but Not Bactericidal Activity of Human Neutrophil Elastase against Moraxella catarrhalis
Neutrophil elastase (NE) contributes to innate antibacterial defense at both the intracellular (phagocytosis) and extracellular (degranulation, NETosis) levels. Moraxella catarrhalis, a human respiratory pathogen, can exist in an inflammatory milieu which contains NE. No data are available on the action of NE against M. catarrhalis or on the counteraction of NE-dependent host defenses by this pathogen. Using time-kill assays we found that bacteria are able to survive and replicate in the presence of NE. Transmission electron microscopy and flow cytometry studies with NE-treated bacteria revealed that while NE admittedly destabilizes the outer membrane leaflet, it does not cause cytoplasmic membrane rupture, suggesting that the enzyme does not target components that are essential for cell integrity. Using LC-MS/MS spectroscopy we determined that NE cleaved at least three virulent surface proteins in outer membrane vesicles (OMVs) of M. catarrhalis, including OMP CD, McaP, and TbpA. The cleavage of OMP CD contributes to the significant decrease in resistance to serum complement in the complement-resistant strain Mc6. The cleavage of McaP did not cause any sensitization to erythromycin nor did NE disturb its drug action. Identifying NE as a novel but subtle anti-virulence agent together with its extracellularly not-efficient bactericidal activity against M. catarrhalis may facilitate the pathogen’s existence in the airways under inflammation.
Introduction
Neutrophils are the pivotal cellular components of innate defense that rapidly accumulate at the site of infection. To kill bacterial or fungal pathogens, they use both oxidative and non-oxidative mechanisms during phagocytosis, neutrophil extracellular trap formation (NETosis), and degranulation (exocytosis) of the pre-formed mediators from cytoplasmic granules. Neutrophil elastase (NE) is a serine protease stored in azurophilic granules and is engaged in host defense against Gram-negative but not Gram-positive bacteria [1]. NE plays a multifaceted role in protecting against bacterial infections. The direct antibacterial actions involve general phagosome-dependent and additional non-oxidative bactericidal mechanisms comprising the cleavage of selected outer membrane proteins and as a result membrane destabilization [2,3]. Apart from its antimicrobial proteolytic function, in indirect immunomodulatory mechanism of action, NE promotes cytokines, i.e., TNF-α, MIP-2, and IL-6 expression, which contributes to host anti-Pseudomonas aeruginosa defense [4]. Likewise, endogenous elastase at the site of infection/inflammation can process inactive proforms of mammal cathelicidins into active antibacterial peptides [5] as well as synthetic antimicrobial peptides (AMPs) into pharmacologically active peptide D-BMAP18 (a COPD exacerbations or bronchiectasis [26,27], may explain their ability to adapt and thrive in its presence.
Neutrophil elastase at Concentration Representative for Pathological Conditions of Respiratory Tract Does Not Exert Direct Killing against M. catarrhalis
The direct killing activity of NE against Gram-negative bacteria such as Klebsiella pneumoniae [1], Escherichia coli [2], and P. aeruginosa [3] was demonstrated previously. To investigate the bactericidal impact of NE on M. catarrhalis, a series of experiments was conducted in vitro using concentrations of NE usually higher than that found in chronic inflammation of the lower respiratory tract. The enzymatic activity of each batch of NE was checked by reaction with a specific fluorogenic substrate MeoSuc-Ala-Ala-Pro-Val-AMC. Assessment of killing ability was performed with or without 2 µM NE using time-kill assays from 0 to 4 h. As illustrated on survival plots, M. catarrhalis 6 (Mc6) at various cfu/mL was not killed by NE over the 4 h of incubation ( Figure 1). The lack of lytic and permeabilization-inducing properties of the enzyme against M. catarrhalis was confirmed further using transmission electron microscopy (TEM) observations and flow cytometric measurements. As shown in Figure 2A, TEM images demonstrated some degree of NE-dependent disorganization of lipooligosascharide (LOS) structures, with a characteristic radial morphology and no visible distortion of the inner cell membrane or other deformations of the outer membrane of Mc6 cells. In contrast, the bacteria treated with EDTA (positive control) showed distorted structural integrity of the envelope accompanied by destabilization (disintegration) of both bacterial membranes. The results of undisturbed by NE membrane integrity were confirmed by incubation of bacteria with a dye propidium iodide (PI), that is impermeable to viable cells but intercalates nucleic acids in damaged cells. As shown in Figure 2B,C, for bacteria treated with NE as well as for negative control, the fluorescence intensity was on the similarly very low level (PI-negative cells) and the survival rate was comparable. In contrast, treatment of bacteria with EDTA, which causes an increase in the permeability of cell envelopes, caused cell damage rates, as reflected in both the high percentage of PI-positive cells and the lethal effect seen in the spots. It can be concluded therefore that NE probably does not cleave surface proteins and thus does not facilitate the damage of inner cell membrane of M. catarrhalis, which would be considered a lethal effect. concentrations exceeding those found in pathological conditions, which are common in COPD exacerbations or bronchiectasis [26,27], may explain their ability to adapt and thrive in its presence.
Neutrophil elastase at Concentration Representative for Pathological Conditions of Respiratory Tract Does Not Exert Direct Killing against M. catarrhalis
The direct killing activity of NE against Gram-negative bacteria such as Klebsiella pneumoniae [1], Escherichia coli [2], and P. aeruginosa [3] was demonstrated previously. To investigate the bactericidal impact of NE on M. catarrhalis, a series of experiments was conducted in vitro using concentrations of NE usually higher than that found in chronic inflammation of the lower respiratory tract. The enzymatic activity of each batch of NE was checked by reaction with a specific fluorogenic substrate MeoSuc-Ala-Ala-Pro-Val-AMC. Assessment of killing ability was performed with or without 2 µM NE using timekill assays from 0 to 4 h. As illustrated on survival plots, M. catarrhalis 6 (Mc6) at various cfu/mL was not killed by NE over the 4 h of incubation ( Figure 1). The lack of lytic and permeabilization-inducing properties of the enzyme against M. catarrhalis was confirmed further using transmission electron microscopy (TEM) observations and flow cytometric measurements. As shown in Figure 2A, TEM images demonstrated some degree of NEdependent disorganization of lipooligosascharide (LOS) structures, with a characteristic radial morphology and no visible distortion of the inner cell membrane or other deformations of the outer membrane of Mc6 cells. In contrast, the bacteria treated with EDTA (positive control) showed distorted structural integrity of the envelope accompanied by destabilization (disintegration) of both bacterial membranes. The results of undisturbed by NE membrane integrity were confirmed by incubation of bacteria with a dye propidium iodide (PI), that is impermeable to viable cells but intercalates nucleic acids in damaged cells. As shown in Figure 2B,C, for bacteria treated with NE as well as for negative control, the fluorescence intensity was on the similarly very low level (PI-negative cells) and the survival rate was comparable. In contrast, treatment of bacteria with EDTA, which causes an increase in the permeability of cell envelopes, caused cell damage rates, as reflected in both the high percentage of PI-positive cells and the lethal effect seen in the spots. It can be concluded therefore that NE probably does not cleave surface proteins and thus does not facilitate the damage of inner cell membrane of M. catarrhalis, which would be considered a lethal effect. min. Data are expressed as mean cfu/mL ± SD from at least two independent experiments performed in triplicate. HiNE-heat inactivated NE (95 °C, 15 min).
Neutrophil Elastase Degrades Pivotal Outer Membrane Proteins (Virulence Factors) of M. catarrhalis
The observed lack of bactericidal activity of NE against Mc6 raises the question of whether, and if so which, surface membrane proteins of Mc6 are susceptible to NE action, and what the other biological consequences are. In addition, given the ability of Mc6 outer membrane vesicles (OMVs) to cause degranulation of PMNs [28], they may contribute to the increased concentration of NE in the immediate vicinity of the bacteria. Treating OMVs with NE showed that at least three key outer membrane proteins (OMPs) of M. catarrhalis were cleaved by NE following 1 h incubation at 37 • C ( Figure 3). These proteins, corresponding to gel bands of~50,~70,~100, and further analyzed by LC-MS/MS mass spectrometry (Table 1), were identified as transferrin-binding protein TbpA (120 kDa), involved in iron uptake from transferrin [29], and two bacterial adhesins: OMP CD (46 kDa), involved in adhesion and complement resistance [30], and McaP (62 kDa), involved in adhesion and lipolytic activity [31]. These findings indicate that NE exerts a potent proteolytic activity towards three M. catarrhalis OMPs which are associated with the virulence of this bacterium. The importance of cleavage of two proteins, namely OMP CD and McaP, were further studied.
the increased concentration of NE in the immediate vicinity of the bacteria. Treating OMVs with NE showed that at least three key outer membrane proteins (OMPs) of M. catarrhalis were cleaved by NE following 1 h incubation at 37 °C ( Figure 3). These proteins, corresponding to gel bands of ~50, ~70, ~100, and further analyzed by LC-MS/MS mass spectrometry (Table 1), were identified as transferrin-binding protein TbpA (120 kDa), involved in iron uptake from transferrin [29], and two bacterial adhesins: OMP CD (46 kDa), involved in adhesion and complement resistance [30], and McaP (62 kDa), involved in adhesion and lipolytic activity [31]. These findings indicate that NE exerts a potent proteolytic activity towards three M. catarrhalis OMPs which are associated with the virulence of this bacterium. The importance of cleavage of two proteins, namely OMP CD and McaP, were further studied.
Having documented the proteolytic activity of NE against OMP CD and McaP, we posed two research hypotheses. The first hypothesis was that enzymatic degradation of the OMP CD protein, which confers partial complement resistance to human serum, would result in increased bacterial sensitivity to complement. The second hypothesis was that digestion of McaP, a protein displaying esterase activity against macrolide antibiotics [32], would sensitize the bacteria to an exemplary antibiotic of this group, erythromycin. The verification of both hypotheses required demonstrating that the proteolytic activity of NE favors the reduction of resistance of the wild-type M. catarrhalis strain to the aforementioned compounds. For these experiments, isogenic mutants of strain Mc6, which were devoid of the OMPs, namely ∆ompCD and ∆mcaP mutant strains, were used as positive controls ( Figure S1). Having documented the proteolytic activity of NE against OMP CD and McaP, we posed two research hypotheses. The first hypothesis was that enzymatic degradation of the OMP CD protein, which confers partial complement resistance to human serum, would result in increased bacterial sensitivity to complement. The second hypothesis was that digestion of McaP, a protein displaying esterase activity against macrolide antibiotics [32], would sensitize the bacteria to an exemplary antibiotic of this group, erythromycin. The verification of both hypotheses required demonstrating that the proteolytic activity of NE favors the reduction of resistance of the wild-type M. catarrhalis strain to the aforementioned compounds. For these experiments, isogenic mutants of strain Mc6, which were devoid of the OMPs, namely ∆ompCD and ∆mcaP mutant strains, were used as positive controls ( Figure S1). In preliminary experiments, we confirmed that the complement-resistant wild-type (WT) M. catarrhalis Mc6 and its isogenic ∆ompCD mutant strain showed different sensitivities to complement in active normal human serum (NHS) while growing comparably in the presence of heat-inactivated NHS (HiNHS,). Specifically, ∆ompCD showed a significant decrease in viability in the presence of 25% NHS in comparison to 25% HiNHS. In contrast, the WT strain even grew in the presence of 75% of NHS ( Figure S2).
To assess the contribution of NE-mediated proteolytic degradation of the surface OMP CD protein to the complement-associated bactericidal actions of NHS, the M. catarrhalis WT was incubated for 4 h with 2 µM NE and a subsequent additional 2 h incubation was performed with either NHS or HiNHS. As shown in Figure 4A, the WT strain subjected to proteolytic degradation by NE became significantly more susceptible to complement action by NHS in comparison to the intact bacteria. At the same time, no reduction in the survival of enzymatically digested Mc6 was observed in the HiNHS control serum. These finding indicates that NE is able to degrade OMP CD in the outer membrane of intact M. catarrhalis, resulting in its greater sensitivity to NHS complement.
Activation of Terminal Complement SC5b-9 Component
Next, knowing that NE can degrade OMP CD, we decided to verify the potential of NE-treated OMPs to activate the complement system and that the proteolytic action of NE did not interfere with complement activity. Using an ELISA assay, the quantity of the soluble terminal membrane attack complex SC5b-9 was measured as an indicator of complement activation. As shown in Figure 4B, the OMP CD protein-rich OMVs from M. catarrhalis WT strain were potent activators of complement cascade. However, although the absence of OMP CD protein in OMVs from the ∆ompCD mutant strain significantly attenuated SC5b-9 formation ( Figure 4B), the enzymatic digestion of OMVs with clinicallyrelevant concentrations of NE did not affect the activation of complement compared with non-digested OMVs ( Figure 4C). This finding shows that NE treatment of OMVs does not affect complement activation. Bacteria from log phase were enzymatically digested with 2 µM NE (1 h, 37 °C) before 4 h bactericidal assays were performed. Bacteria untreated with NE and incubated in reaction buffer for 1 h at 37 °C were used as controls. Data are expressed as mean cfu/mL ± SD from two independent experiments performed in triplicate. Statistical analysis was performed by Wald-Wolfowi test (* p < 0.05); HiNHS-heat inactivated NHS. (B) Human serum complement activation in 90% NHS by OMVs from Mc6 WT and its isogenic mutant following 30 min. incubation at 37 °C as determined by ELISA. Data were analyzed using sera from three volunteers (O1-O3) and are expressed as mean SC5b-9 ± SD from two replicates for each serum. Statistical analysis was performed by T test for independent variables (* p < 0.005). (C) Activation of human serum complement in 90% NHS by OMVs Mc6 WTe previously cleaved by NE (1 h, 37 °C). Data are expressed as mean SC5b-9 ± SD from two replicates for pooled serum.
Degradation of McaP by Neutrophil Elastase Does Not Sensitize M. catarrhalis to Erythromycin Action
McaP, a conserved autotransporter has adhesive properties and mediates adherence to human epithelial cells [31,32]. This protein also displays esterase activity, [32], which determines one of the mechanisms of macrolide resistance [33].
Initially, examining the sensitivity of Mc6 WT to erythromycin (macrolide antibiotic), we showed, that for ~2.5-5 × 10 5 cfu/mL, the minimum inhibitory concentration (MIC) and the minimum bactericidal concentration (MBC) were, respectively, 0.125 µg/mL and 0.5 µg/mL. As expected, the isogenic ΔmcaP Mc6 mutant strain was significantly more sensitive to this antibiotic. In time-kill assays, the lethal effect of supra-MICs concentration of erythromycin occurred 1 h post-incubation. Applying the same concentrations of antibiotic to the WT strain only produced a bacteriostatic effect ( Figure S3 incubation at 37 • C as determined by ELISA. Data were analyzed using sera from three volunteers (O1-O3) and are expressed as mean SC5b-9 ± SD from two replicates for each serum. Statistical analysis was performed by T test for independent variables (* p < 0.005). (C) Activation of human serum complement in 90% NHS by OMVs Mc6 WTe previously cleaved by NE (1 h, 37 • C). Data are expressed as mean SC5b-9 ± SD from two replicates for pooled serum.
Degradation of McaP by Neutrophil Elastase Does Not Sensitize M. catarrhalis to Erythromycin Action
McaP, a conserved autotransporter has adhesive properties and mediates adherence to human epithelial cells [31,32]. This protein also displays esterase activity, [32], which determines one of the mechanisms of macrolide resistance [33].
Initially, examining the sensitivity of Mc6 WT to erythromycin (macrolide antibiotic), we showed, that for~2.5-5 × 10 5 cfu/mL, the minimum inhibitory concentration (MIC) and the minimum bactericidal concentration (MBC) were, respectively, 0.125 µg/mL and 0.5 µg/mL. As expected, the isogenic ∆mcaP Mc6 mutant strain was significantly more sensitive to this antibiotic. In time-kill assays, the lethal effect of supra-MICs concentration of erythromycin occurred 1 h post-incubation. Applying the same concentrations of antibiotic to the WT strain only produced a bacteriostatic effect ( Figure S3) (Figure 3), no sensitization of this pre-exposed Mc6 bacterium to erythromycin was observed, despite extended incubation times ( Figure 5). Nevertheless, besides the lack of enhanced erythromycin activity after NE treatment, it is worth adding that the presence of NE in the environment does not disturb its bacteriostatic drug action.
M. catarrhalis Is a Potent Inducer of Neutrophil Elastase Release
Given the ability of M. catarrhalis to cause degranulation of PMNs, they may contribute to the increased concentration of this proteolytic enzyme in the immediate vicinity of the bacteria. Previously, we have shown that OMVs released by M. catarrhalis are potent inducers of NE release from PMNs neutrophils [28]. Here, we documented the differences in magnitude of NE release due to PMN degranulation in response to bacteria either opsonized or not by human serum (opsonic versus non-opsonic manner). As shown in Figure 6, in opsonic conditions, NE release was 5.7-to 11-fold higher in comparison to the unstimulated control, depending on the blood donor. In non-opsonic conditions, this increase was noticeably lower, from 1.2 to maximally 3.3-fold. These results indicated that the opsonized Mc6 induced the mean~4-fold ± 0.52 SD higher increase in elastase release comparing to non-opsonized bacteria. By inducing the release of NE under a variety of immune conditions, M. catarrhalis contributes to enhancing the inflammatory environment, which facilitates its persisting. (Figure 3), no sensitization of this pre-exposed Mc6 bacterium to erythromycin was observed, despite extended incubation times ( Figure 5). Nevertheless, besides the lack of enhanced erythromycin activity after NE treatment, it is worth adding that the presence of NE in the environment does not disturb its bacteriostatic drug action.
M. catarrhalis Is a Potent Inducer of Neutrophil Elastase Release
Given the ability of M. catarrhalis to cause degranulation of PMNs, they may contribute to the increased concentration of this proteolytic enzyme in the immediate vicinity of the bacteria. Previously, we have shown that OMVs released by M. catarrhalis are potent inducers of NE release from PMNs neutrophils [28]. Here, we documented the differences in magnitude of NE release due to PMN degranulation in response to bacteria either opsonized or not by human serum (opsonic versus non-opsonic manner). As shown in Figure 6, in opsonic conditions, NE release was 5.7-to 11-fold higher in comparison to the unstimulated control, depending on the blood donor. In non-opsonic conditions, this increase was noticeably lower, from 1.2 to maximally 3.3-fold. These results indicated that the opsonized Mc6 induced the mean ~4-fold ± 0.52 SD higher increase in elastase release comparing to non-opsonized bacteria. By inducing the release of NE under a variety of immune conditions, M. catarrhalis contributes to enhancing the inflammatory environment, which facilitates its persisting. (Figure 3), no sensitization of this pre-exposed Mc6 bacterium to erythromycin was observed, despite extended incubation times ( Figure 5). Nevertheless, besides the lack of enhanced erythromycin activity after NE treatment, it is worth adding that the presence of NE in the environment does not disturb its bacteriostatic drug action.
M. catarrhalis Is a Potent Inducer of Neutrophil Elastase Release
Given the ability of M. catarrhalis to cause degranulation of PMNs, they may contribute to the increased concentration of this proteolytic enzyme in the immediate vicinity of the bacteria. Previously, we have shown that OMVs released by M. catarrhalis are potent inducers of NE release from PMNs neutrophils [28]. Here, we documented the differences in magnitude of NE release due to PMN degranulation in response to bacteria either opsonized or not by human serum (opsonic versus non-opsonic manner). As shown in Figure 6, in opsonic conditions, NE release was 5.7-to 11-fold higher in comparison to the unstimulated control, depending on the blood donor. In non-opsonic conditions, this increase was noticeably lower, from 1.2 to maximally 3.3-fold. These results indicated that the opsonized Mc6 induced the mean ~4-fold ± 0.52 SD higher increase in elastase release comparing to non-opsonized bacteria. By inducing the release of NE under a variety of immune conditions, M. catarrhalis contributes to enhancing the inflammatory environment, which facilitates its persisting. Human neutrophil samples at~1 × 10 7 cell/mL from four volunteers S1-S4 were primed with cytochalasin D (5 µg/mL), incubated for 30 min with bacteria preincubated with 10% pooled human sera (opsonic) or not (non-opsonic), and assayed for the release of elastase. Activity of enzyme was determined using fluorogenic substrate, MeoSuc-Ala-Ala-Pro-Val-AMC, and the increase in fluorescence was measured using Varioskan™ Flash Multimode Reader, Thermo Scientific, Vantaa, Finland) at Ex. λ = 370 nm and Em. λ = 445 nm. Data are expressed as fold increase of RLU/s in comparison to non-stimulated control for each individual supernatant after degranulation.
Discussion
Excessive neutrophilic inflammation accompanied by the impaired function of neutrophils is a hallmark of lower respiratory tract infections and chronic pulmonary diseases, including acute respiratory distress syndrome COPD or neutrophilic asthma [14,18,34]. For example, in COPD, patients' neutrophils are described as aberrant due to abnormal degranulation, phagocytosis, high ROS generation, and NET formation [35]. Furthermore, it was shown that NETs are more abundant in sputum from patients with severe COPD and are associated with more frequent exacerbations as well as loss of microbiota diversity and Haemophilus species dysbiosis [36]. There is increasing evidence that an overwhelming NET response correlates with poor outcome also in other lung-related diseases such as bacterial pneumonia, cystic fibrosis, or influenza. The detrimental effects that NETosis causes, such as destruction of epithelial and endothelial cells, vessel occlusion, or additional neutrophil recruitment and activation [37] are partly responsible for this. Likewise, neutrophils from patients with neutrophilic asthma display enhanced migration but diminished phagocytic efficiency compared with healthy controls [38].
In the case of bacterial exacerbation of inflammatory disorders, bacteria are exposed to granule contents containing elastase during neutrophil degranulation or NET-osis. As was documented in this paper, the exposure of M. catarrhalis to NE at 2 µM (60 µg/mL) did not result in the loss of their cocci-like morphology, inner-membrane damage or increased permeability. For other Gram-negative bacteria, using the same or even lower concentrations of the enzyme, a nonoxidative mechanism of bactericidal action of NE involving degradation of pivotal OMPs that facilitate osmotic lysis has been proposed for OmpA E. coli [2] and OprF P. aeruginosa [3]. Our observed lack of bacterial death in M. catarrhalis after exposure to NE used in concentrations exceeding those documented for lower airways in bronchoalveolar lavage fluid (BALF) in COPD exacerbations or bronchiectasis [26,27] indicates that, in these conditions, potential degradation of any of the highly expressed surface proteins by NE is not sufficient to either destroy cell wall integrity or to locally attenuate wall thickness, which would facilitate osmotic lysis. The lack of, or reduced sensitivity to, NE may be another strategy that allows the bacterium to survive extracellularly in its presence, although we cannot exclude the possibility that bactericidal activity could be observed in phagolysosomal compartments inside neutrophils, where the concentration of this enzyme should be much higher. To date, other defense strategies of M. catarrhalis, which allow the bacteria to overcome the inflammatory conditions, have also been documented. For example, M. catarrhalis evaded neutrophil oxidative stress responses via induction of less ROS and reduced NETosis in differentiated HL-60 neutrophils [39].
Although we did not observe a direct elastase-dependent bacteriolytic effect or innermembrane perturbation using 2 µM of enzyme, we have shown for the first time that neutrophilic elastase caused the proteolytic degradation of three important outer-membrane proteins of this bacterium, i.e., OMP CD, McaP, and TbpA, as was determined by LC-MS/MS analyses. The functional significance of this phenomenon for two of the proteins mentioned, namely OMP CD and McaP, has been further explored. OMP CD is a highly conserved and abundantly expressed M. catarrhalis surface protein identified as a target of serum IgG antibodies to surface epitopes in the majority of adults with COPD who cleared this pathogen as well as mucosal IgA in COPD patients [40,41]. OMP CD is recognized intensively by cross-reactive intraspecies antibodies from mice sera and human sera in healthy children and those with otitis media [42,43]. Functionally, this protein is involved in complement resistance [32]. Initially, using a constructed isogenic ∆ompCD mutant of Mc6 defective in expression of OMP CD as an internal control, we confirmed the involvement of this surface protein in complement resistance by showing that bacteria lacking OMP CD die in the presence of the complement cascade in contrast to the wild-type strain. Interestingly, although the absence of OMP CD protein in OMVs significantly attenuates the activation of the terminal SC5b-9 complement complex, enzymatic digestion of OMVs with clinically relevant concentrations of NE (2 µM) does not inhibit the formation of the aforementioned complex as compared to OMVs that are not digested by NE. These results imply that the proteolytic activity of NE against OMVs did not interfere with complement activation. The finding that NE-dependent proteolytic degradation of OMP CD sensitizes complement-resistant M. catarrhalis Mc6 to bactericidal action of complement, contributing to the significant decrease of this resistance, is important and a new observation. It indicates that NE exerts direct anti-virulence activity against M. catarrhalis, making it more susceptible to the action of this humoral innate mechanism.
Unlike many Gram-negative bacteria which are resistant to macrolide antibiotics, most strains of M. catarrhalis are sensitive to these hydrophobic compounds, including erythromycin [44]. However, the presence of OMPs with esterase activity may potentially contribute to macrolide resistance. It has been previously demonstrated that a lack of McaP expression abolishes the esterase activity of isogenic M. catarrhalis O35E mutant and considerably decreases its adherence to several human cell lines [32]. The esterase activity of McaP against erythromycin should result in its degradation. In the absence of McaP, this bactericidal activity of the antibiotic is expected to be enabled. However, when analyzing the consequences of NE-dependent digestion of McaP, we did not show any greater sensitization to erythromycin beyond inhibition of bacterial growth in its presence, as we observed for bacteria incubated in the presence of antibiotic alone. The observed lack of bactericidal effect of erythromycin in the presence of surface McaP partially digested by elastase can be explained by the sufficiently high expression level of McaP protein on bacterial cells, thereby retaining esterase activity against erythromycin despite the action of NE. Alternatively, the proteolytic action of NE did not cleave the site of McaP responsible for the esterase activity. Other OMPs encoding genes, including uspA2 and uspA2H, also are reported to be engaged in macrolide resistance in M. catarrhalis. Furthermore, macrolideresistant isolates exhibited enhanced adhesion when compared with macrolide-susceptible isolates, indicating they were more pathogenic [45].
Overall, despite the fact that our studies did not reveal any direct bactericidal action of NE against M. catarrhalis, we showed a new beneficial indirect role for this enzyme in the innate immune response against this bacterium. It involves the decrease of the resistance of M. catarrhalis to human serum complement.
Bacteria and their OMVs can induce neutrophil granule exocytosis [28,46]. Moreover, bacterial pathogens can manipulate neutrophil degranulation and by inhibiting, dysregulating, or inducing excessive neutrophil degranulation, bacteria can skew the protective effects of neutrophil degranulation in a way that ultimately benefits the pathogen and worsens disease [47]. This virulence strategy is used by Shigella flexnerii, which utilizes antimicrobial proteins released by degranulation to increase adhesion efficiency followed by hyperinvasion into epithelial cells [48]. Since the OMVs released can disseminate over significant distances, OMV-dependent degranulation of PMNs may be another virulence mechanism that triggers cellular exocytosis away from the bacteria. This could both delay the direct contact between the pathogen and PMNs and disarm PMNs by protecting bacteria from the anti-virulent effects of elastase. The protective role of vesicles against the deleterious effects of released neutrophil granule components has so far been demonstrated for several Gram-negative pathogens. For example, Porphyromonas gingivalis deploys OMVs decorated with gingipains for a neutrophil-deceptive strategy to degrade released external MPO and LL-37, creating a favorable inflammatory niche as well as avoiding killing [49]. We have previously demonstrated that neutrophils stimulated by M. catarrhalis OMVs released both azurophilic and secondary granules and that these OMVs caused cell death of respiratory epithelial cells [28]. In the present work, we found that antibody-opsonized bacteria induced significantly stronger NE release than non-opsonized bacteria, suggesting that under a variety of immune conditions M. catarrhalis contributes to enhancing the inflammatory niche in which it can persist.
Importantly, the level of free elastase in the lungs of severe COPD patients is significantly higher than in healthy individuals [50]. This enzyme is recognized also as a valuable biomarker for distinguishing the bacterial exacerbation in patients in COPD [27]. Thus, the bacteria-or OMV-dependent elastase release, trapping, and finally utilization, together with PMN depletion and exhaustion, may facilitate the adaptation of M. catarrhalis in countering lower respiratory tract defense.
In conclusion, the ability of M. catarrhalis to provoke neutrophil elastase release which does not seem to be effective as a bactericidal agent against these bacteria, at least in extracellular inflammatory milieu, as well as the identification and characterization of novel NE proteolytic targets within M. catarrhalis OMPs, broaden our understanding of how these bacteria contribute to enhancing the inflammation in which they persist and counteract host defense mechanisms.
Microbial Strains and Growth Condition
M. catarrhalis Mc6, described previously [43], and its study isogenic mutants generated in this study were used. WT strain was grown on Columbia agar with 5% sheep blood, BHI agar plates, or BHI broth. Mutants were grown on BHI supplemented with 20 µg/mL of kanamycin. Strains were cultivated at 37 • C.
PMNs Isolation
Polymorphonuclear (PMNs) cell fraction enriched in neutrophils was isolated as describe previously [28]. Briefly, heparinized blood from healthy volunteers, aged 20-45 years, was mixed in a 1:1 ratio with 2% dextran w/v dextran in PBS buffer, pH 7.4, and incubated for 30-40 min at RT for erythrocytes sedimentation. The 3-6 mL of PMN-rich plasma collected was carefully transferred to a discontinuous Percoll gradient (61% and 76% in 0.9% NaCl) and centrifuged (320× g/10 min, RT). After centrifugation, the PMNs fraction between both Percoll layers was collected in a sterile falcon tube and washed twice by centrifugation (320× g/10 min, RT) with erythrocyte lysis buffer (150 mM NH4Cl; 10 mM KHCO3; 0.3 mM EDTA; pH 7.4). Finally, the cells were resuspended in HBSS. Isolated neutrophils were assessed for viability with the trypan blue exclusion assay.
Neutrophil Elastase Activity Measurement
Neutrophil elastase (NE) activity was determined by measuring the cleavage of the fluorogenic NE substrate, MeoSuc-Ala-Ala-Pro-Val-AMC, dissolved in Hank's Balanced Salt Solution (HBSS) reaction buffer at pH 7.5, containing 0.1% (w/v) HEPES, 10% (v/v) DMSO, and 150 mM NaCl, as determined previously [28,51]. The working substrate concentration that gave the linear relationship (increase in fluorescence) was 100 µM as determined in preliminary calibration curve experiments with various concentrations of elastase. Cell-free suparnatants after degranulation were added to the substrate in a 1 : 1 ratio in a volume of 50 µL each and were immediately measured using the 96-well flatbottom black microplate (NUNC). The cleavage rate of the substrate measured for 30 min at 37 • C as the increase in fluorescence was monitored spectrofluorometrically (Varioskan™ Flash Multimode Reader, Thermo Scientific) at excitation wave λ = 370 nm and emission λ = 445 nm.
Flow Cytometry Analysis
To measure the activity of NE in permeabilization of bacterial membranes, the method described previously was used [52]. Briefly, 18 h Mc6 was recultivated in BHI until early-log phase (OD 600 = 0.25). The bacteria were washed with PBS (pH 7.4), resuspended in PBS-1% BHI (w/v) and diluted to~2-4 × 10 6 cfu/mL. The cells (100 µL) supplemented with NE, EDTA (positive control), or buffer (negative control) were incubated at 37 • C up to 4 h (thermoblock) and then treated with 6 µM PI for 15 min at room temperature. The samples were suspended in 250 µL PBS, diluted at least 10× in PBS and analyzed with GUAVA EasyCyte flow cytometer (Merck) by measurement of 5000 events on red fluorescence channel. Data were analyzed using GUAVA EasyCyte software (guavaSoft 3.3) Tests were performed in 3 independent biological replicates.
In Gel Trypsin Digestion and Peptide Identification by LC MS/MS Analysis
Gels were rinsed with HPLC-grade water. Excised bands were destained in 100 µL of 100 mM ammonium bicarbonate/acetonitrile solution for 30 minutes at room temperature and then washed with 500 µL of neat acetonitrile. Gel pieces were then covered with 10 ng/µL porcine trypsin solution in 10 mM ammonium bicarbonate/10% (v/v) acetonitrile and incubated for 2 h on ice followed by overnight incubation at 37 • C. After digestion, samples were centrifuged and supernatant aliquots were withdrawn and stored at −20 • C until LC MS/MS analysis. The analyses were performed on an Ion Trap LC/MS/MS spectrometer (Agilent Technologies, Santa Clara, CA USA). The resulting peptide mass fingerprints and LC MS/MS fragmentation spectra were identified using the MASCOT (http://www.matrixscience.com) and BLAST engines [53] searching M. catarrhalis protein databases.
Time-Kill Assay for WT and Isogenic Mutants
For the time kill assay from 0 to 4 h, overnight cultures of Mc6 WT, ∆ompCD, or ∆mcaP were recultivated until early log-phase (OD 600 = 0.25-0.3) in relevant media. The bacteria were diluted in 1% (w/v) BHI-PBS to obtain~2-4 × 10 6 cfu/mL and incubated in the presence of NE, NHS, HiNHS, or erythromycin in a final volume of 100 µL for 0, 1, and 4 h at 37 • C. At each time point suspensions were 10-fold serially diluted with and 10 µL aliquots were plated in triplicate on BHI agar plates or alternatively in spots. The plates were incubated overnight at 37 • C, and cfu/mL were calculated.
Bactericidal Activity of Serum Complement or Erythromycin against NE-Treated Bacteria
For enzymatic digestion, the early-log phase bacteria at~5 × 10 5 cfu/mL were mixed with 2 µM NE in an elastase HBSS buffer containing 0.1% HEPES, 10% DMSO, 150 mM NaCl, 1% BHI (pH 7.5) in a final volume of 100 µL. The not enzymatically treated bacteria were used as control samples. Bacteria were incubated for 4 h at 37 • C in water bath.
Next. the NE-treated as well as NE-non-treated bacteria were divided into equal volumes and used in bactericidal tests with 25% or 50% normal human serum (NHS) as well as 4 or 6 µg/mL of erythromycin (E). Simultaneously, the incubation of bacteria in the presence of heat-inactivated serum (HiNHS) and appropriate diluent was included as negative and positive controls, respectively. To assess the bactericidal effect at 0, 60, 120, and 240 min of the experiment, 10 µL each of bacterial suspensions incubated in a water bath at 37 • C were 10-fold serially diluted and then 10 µL aliquots were plated in triplicate on BHI agar plates. The plates were incubated overnight at 37 • C. The colony counts and cfu/mL were calculated next day.
Complement Complex SC5b-9 Activation
Briefly, 10 µL of OMVs in veronal buffer (pH 7.4) to obtain final vesicle protein concentrations of 20 µg/mL were added to 90 µL of NHS. The negative control was NHS with the addition of 10 µL of veronal buffer. The samples were incubated for 30 min at 37 • C, diluted in the range 100-2000×, and concentration of soluble SC5b-9 was determined by ELISA kit (MicroVue SC5b-9 Plus, Quidel; Athens, OH, USA) according to the manufacturer's instructions. The absorbance at λ = 450 nm was read using Varioskan™ LUX multimode microplate reader (Thermo Scientifc, Vantaa, Finland).
Outer Membrane Vesicles Isolation
Outer membrane vesicle (OMV) isolation was performed as we reported previously [43]. Briefly, the 18 h pre-culture of M. catarrhalis 6 (Mc6) was diluted 50 × in 500 mL brain-heart infusion (BHI) media and incubated at 37 • C for 16-18 h with orbital shaking (150 rpm). The culture was centrifuged at 8000 rpm for 15 min at 4 • C. The supernatant was collected and passed through 0,22 µm pore size filter vacuum pump (Merck, Millipore). The filtrate was concentrated using 50 kDa vivaspin centrifugal concentrators (Amicon ultra, Merck Millipore, Cork, Ireland) at 5000× g for 30 min at 4 • C. The concentrated supernatant was thereafter ultracentrifuged overnight (100,000× g, at 4 • C) using Beckman Coulter Optima ultracentrifuge (model L-90K, Palo Alto, CA, USA). The pellet containing OMVs was re-suspend in 500 µL of sterile PBS buffer (pH 7.4), aliquoted, and stored in −20 • C. The sterility of OMVs was confirmed on BHI agar. The protein concentration in OMV preparation was measured using Qubit fluorometer (Life Technologies Corporation, Carlsbad, CA, USA), and the quality of OMVs preparation was confirmed in 12% SDS-PAGE stained with GelCode blue stain reagent.
Outer Membrane Protein Isolation
Outer membrane proteins (OMPs) were isolated with zwitterionic detergent Zwittergent 3-14 according to our method described in [42]. Briefly, the bacteria from 200 mL of culture were suspended in 5 mL of 1 M sodium acetate buffer containing 1 mM βmercaptoethanol, pH 4.0). To this suspension, a 45 mL volume of a solution of 0.5 M CaCl 2 containing 5% Zwittergent was added and stirred for 1 h at room temperature. The nucleic acids were precipitated by adding 12.5 mL of cold absolute ethanol and subsequently centrifuging the solution (17,000× g, 10 min., 4 • C). The pellet was discarded and the proteins remaining in the supernatant were precipitated by adding 187 mL of cold ethanol and collected by centrifugation (17,000× g, 20 min., 4 • C). The pellet was air dried and then resuspended in 10 mL of Z buffer (0.05% Zwittergent, 50 mM Tris, 10 mM EDTA; pH 8.0). This mixture was stirred for 1 h at room temperature and centrifuged at 12,000× g for 10 min. at 4 • C, and the soluble fraction containing OMPs was retained. The OMPs were divided into aliquots and stored at −80 • C. The quantity and quality of OMPs preparation was confirmed using Bradford reagent and 12% SDS-PAGE stained with GelCode blue stain reagent, respectively.
TEM
Briefly, 18 h cell culture of M. catarrhalis in BHI was centrifuged and rinsed in PBS. The pellet was fixed in 1 mL of cacodylate buffer (0.2 M sodium cacodylate, 0.2 M HCl, pH 7.4) supplemented with 2.5% glutaraldehyde and incubated 8-10 h at room temperature (RT). The suspension was rinsed by centrifugation (3000× g, 10 min., RT) several times with cacodylate buffer. The resultant pellet was postfixed in cacodylate buffer containing 1% OsO 4 for 2 h at RT and rinsed. The samples were subsequently dehydrated in a series of ethanol concentrations and embedded in Epon 812. Thin sections were cut with an ultramicrotome (Reichert-Jung) equipped with a diamond knife and stained with 2% uranyl acetate and lead citrate. The samples were then visualized with a TEM (TESLA BS 540, Brno, Czech Republic) operated at 80 kV.
Statistical Analysis
The data were expressed as the mean ± SD and analyzed for the significant difference using the Statistica (version 13.3) software (StatSoft, Krakow, Poland). Differences were considered statistically significant if p < 0.05.
Eric Lafontaine from University of Georgia, USA, for sharing the plasmids pJTmcaP and pJTmcaPnpkan; Ryszard Adamski and Marek Chmielewski from Faculty of Biological Sciences, University of Wroclaw, Poland, for TEM sample preparation and TEM images; Donata Wawrzycka, Faculty of Biological Sciences, University of Wroclaw, Poland for design of primers; and Hanna Walkowicz for her assistance with microbiological assays. This work is a part of the Ph.D. thesis by J.R.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,198 | 2023-04-01T00:00:00.000 | [
"Biology"
] |
Cross-Correlations between Energy and Emissions Markets : New Evidence from Fractal and Multifractal Analysis
We supply a new perspective to describe and understand the behavior of cross-correlations between energy and emissions markets. Namely, we investigate cross-correlations between oil and gas (Oil-Gas), oil and CO 2 (Oil-CO 2 ), and gas and CO 2 (Gas-CO 2 ) based on fractal and multifractal analysis. We focus our study on returns of the oil, gas, and CO 2 during the period of April 22, 2005– April 30, 2013. In the empirical analysis, by using the detrended cross-correlation analysis (DCCA) method, we find that crosscorrelations for Oil-Gas, Oil-CO 2 , and Gas-CO 2 obey a power-law and are weakly persistent.Then, we adopt the method of DCCA cross-correlation coefficient to quantify cross-correlations between energy and emissionsmarkets.The results show that their crosscorrelations are diverse at different time scales. Next, based on themultifractal DCCAmethod, we find that cross-correlatedmarkets have the nonlinear and multifractal nature and that the multifractality strength for three cross-correlated markets is arranged in the order of Gas-CO 2 >Oil-Gas>Oil-CO 2 . Finally, by employing the rolling windows method, which can be used to investigate time-varying cross-correlation scaling exponents, we analyze short-term and long-term market dynamics and find that the recent global financial crisis has a notable influence on short-term and long-term market dynamics.
Introduction
It seems a common sense that high or low energy prices (e.g., oil and gas) are conducive to an increase or a decrease of CO 2 prices [1].For instance, Kanen [2] found that Brent crude oil prices are the main driver of natural gas prices, power prices, and CO 2 prices.Alberola et al. [3] further identified that oil and gas prices are the main CO 2 prices drivers by using a standard GARCH (1, 1) model.Fezzi and Bunn [4] investigated interrelationships among electricity, gas, and CO 2 prices in the UK.They revealed that gas prices affect CO 2 prices and the reaction of CO 2 prices to a shock on gas prices is significant in short term.Mansanet-Bataller et al. [5] examined correlations in CO 2 prices, energy, and weather.They obtained a similar conclusion that the major factors in the determination of CO 2 prices are the oil and gas which are the most emission intensive energy sources.However, they also reported that extreme temperatures affect CO 2 prices.This finding suggests that there are some other factors influencing CO 2 prices.Ham et al. [6] studied the relationship of return volatilities between oil and CO 2 .They found that the relationship is complex and presents a feature of asymmetry and instability.By using models of the vector autoregressive (VAR) and the dynamic conditional correlation MGARCH (DCC-MGARCH), Chevallier et al. [1] analyzed timevarying cross-correlations in oil, gas, and CO 2 prices.The results showed that cross-correlations are dynamic; for example, time-varying correlations are in the range of [−0.05, 0.05] between oil and CO 2 and [−0.2, 0.2] between gas and CO 2 .Based on the results obtained by Chevallier et al. [1] and Ham et al. [6], we can preliminary deduce that there is not a simple linear relationship or cross-correlation between energy and emissions markets.In other words, cross-correlations in oil, gas, and CO 2 prices may be nonlinear and dynamic.
The fractal and multifractal scaling behavior was widely reported in many financial time series from complex financial systems [7][8][9].At the same time, many scholars confirmed that the fractal and multifractal behavior has been a "stylized 2 Mathematical Problems in Engineering fact" in energy markets, such as the crude oil market [10] and the electricity market [11].The detrended fluctuation analysis (DFA), which was proposed by Peng et al. [12], is a popular fractal analysis method [13].Kantelhardt et al. [14] extended the DFA approach to a multifractal detrended fluctuation analysis (MF-DFA) method, which can be used to analyze the multifractality in nonstationary time series.Based on the DFA method, Podobnik and Stanley [15] developed a new method in the fractal analysis to study power-law cross-correlations between two synchronized time series, which is called detrended cross-correlation analysis (DCCA).Since then, the DCCA method is widely applied to examine cross-correlations between the financial entities [16,17].As an example, Podobnik et al. [16] investigated 14, 981 daily observations of the Standard and Poor's 500 Index from 1950 to 2009.By using the DCCA approach, they displayed the power-law cross-correlations between price and volume volatilities.Based on the DCCA method, Wang and Xie [17] analyzed cross-correlations between Renminbi and four major currencies (i.e., USD, EUR, JPY, and KRW) in the currency basket of the Renminbi exchange rate and found that cross-correlations are weakly persistent.In order to quantify the level of cross-correlation between two synchronous time series, Zebende [18] proposed a novel detrended crosscorrelation coefficient, namely, the DCCA cross-correlation coefficient DCCA (), which is based on the DFA and the DCCA methods.The DCCA cross-correlation coefficient is also widely used to quantify the level of cross-correlations in financial markets; for example, see [19,20].To detect the multifractal feature of cross-correlations between two synchronous time series, Zhou [21] extended the DCCA and MF-DFA to the method of multifractal DCCA (MF-DCCA).Besides, Kristoufek [22] further generalized the method and proposed the method of multifractal height cross-correlation analysis.Recently, the MF-DCCA approach has become a powerful technical tool to analyze the multifractality property of cross-correlations in financial markets [23,24].As for the crude oil market, Wang et al. [23] employed the MF-DCCA method to study cross-correlations between the West Texas Intermediate (WTI) crude oil spot and futures return series.They found that cross-correlations are strongly multifractal for small time scales, while for large time scales crosscorrelations are nearly monofractal.In addition, Wang and Xie [24] investigated cross-correlations between the WTI crude oil market and the US stock market by using the MF-DCCA method and showed that the cross-correlated behavior between the two markets is nonlinear and multifractal.
In this paper, we aim to analyze cross-correlations between energy and emissions markets from the perspective of fractal and multifractal analysis.In practical terms, we focus our attention on cross-correlations in oil, gas, and CO 2 prices, namely, cross-correlations between oil and gas, oil and CO 2 , and gas and CO 2 .That is to say, as for the energy market, we choose Brent crude oil prices and Henry Hub natural gas prices as research objects.The CO 2 prices are obtained from the European carbon emissions trading market that is the biggest emissions trading market at present.In the empirical analysis, to begin with, we make a preliminary analysis of the three returns of oil, gas, and CO 2 in the period of April 2005-April 2013.Next, we employ the DCCA and the DCCA cross-correlation coefficient to analyze power-law cross-correlations between energy and emissions markets.Then, based on the MF-DCCA, we investigate the multifractal behavior of cross-correlations.Finally, by using the rolling windows method, we examine time-varying crosscorrelation scaling exponents, which can detect dynamics of cross-correlations.
The rest of the paper is organized as follows.We describe methodologies of the DCCA, the DCCA cross-correlation coefficient, and the MF-DCCA in Section 2. In Section 3, we show the empirical data and make a preliminary analysis.The main empirical results and analysis are presented in Section 4. Finally, we draw some conclusions in Section 5.
Methodology
2.1.DCCA Method.The DCCA method, which is used to investigate power-law cross-correlations between two different simultaneously recorded time series, was proposed by Podobnik and Stanley [15].Supposing that there are two time series (e.g., returns) { } and { } with the equal length , where = 1, 2, . . ., , the DCCA method can be introduced as follows [15,19,25].
Step 2. We divide two sequences {()} and {()} into = int(/) nonoverlapping intervals V.The length of each interval is .Considering that the length is often not an integral multiple of the time scale , a short part at the end of each sequence may be left [14,24].In order not to form surplus, the same procedure is reduplicated from the opposite end of each sequence in (1).After that, we can obtain 2 intervals altogether.Following Kantelhardt et al. [14] and Wang et al. [19], in our study, we set 10 ≤ ≤ /4.
Step 4. The detrended covariance fluctuation function 2 DCCA () can be calculated by averaging over all intervals [24]; that is, If the time series { } is identical to { }, the DCCA method reduces to the DFA method.In practical terms, DCCA () reduces to the detrended variance DFA () described in the DFA method [12,26]; that is, Step 5.By analyzing the log-log plots of DCCA () versus , we can obtain the scaling behavior of the fluctuation function [17].If the two time series { } and { } are power-law crosscorrelated, then where is a cross-correlation scaling exponent and is also known as an extension of Hurst exponent such that DFA () ∝ in the case of the DFA method, which can be estimated by the slope of log-log plots DCCA () versus through ordinary least squares (OLS) [17].Generally, the values of show the type of cross-correlations between the two time series.Three cases of can be summarized as follows [17] (i) If > 0.5, the cross-correlations between the two time series are persistent (positive); namely, an increase (a decrease) in one time series is likely to be followed by an increase (a decrease) in the other time series.(ii) If < 0.5, the cross-correlations between the two time series are antipersistent (negative), which is an opposite situation to the case (i).At this point, the direction of both time series is reversed.For instance, if there is an increase of one time series, it is likely to be followed by a decrease of the other time series.(iii) When = 0.5, the two time series are not crosscorrelated; that is, there are no correlations between the two time series [17].
DCCA Cross-Correlation Coefficient.
The DCCA crosscorrelation coefficient, an extension of the DFA and the DCCA methods, was proposed by Zebende [18].This coefficient is developed to quantify the level of cross-correlation between two synchronized time series.For each time scale , the DCCA cross-correlation coefficient is defined as the ratio between the detrended covariance function 2 DCCA () of ( 4) and the multiplier of two detrended variance functions DFA () of ( 5) [18,20,27]; that is, where DCCA () is a dimensionless quantity ranging from −1 to 1 [27].At this point, it is important to note that when we calculate values of the DCCA cross-correlation coefficient for time scales in (7), the detrended covariance 2 (V, ) in ( 4), that is, ( 2) and ( 3) should be calculated as For details, see [25].Similar to the classical correlation coefficient, for each time scale , a value of DCCA () = 1 or DCCA () = −1 means that two series are perfectly cross-correlated or anticross-correlated, while a value of DCCA () = 0 suggests that there is no cross-correlation between two time series [17].A prominent advantage of DCCA () is that it can quantify the level of cross-correlations between two different but synchronous time series at different time scales [17].[21] proposed the MF-DCCA method, which is a generalization of the DCCA method, to examine the multifractal behavior of power-law crosscorrelations between two simultaneously recorded time series.The procedure of the MF-DCCA method consists of five steps; its first three steps are identical to the DCCA procedure.Here, we only present the last two steps.
MF-DCCA Method. Zhou
Step 4. The th order detrended covariance fluctuation function () is obtained by averaging over all intervals [21,24,28]; that is, for any real value ̸ = 0 and Step 5. Similar to the 5th step in the DCCA method, we can determine the scaling behavior of the fluctuation function for each .If the two time series { } and { } are power-law crosscorrelated, then where ℎ () is denoted as the generalized cross-correlation scaling exponent.When = 2, the conventional DCCA is retrieved and ℎ (2) is equivalent to the exponent in (6).If = for any , the MF-DCCA method is identical to the MF-DFA method [29] and ℎ () reduces to ℎ () or ℎ () which is called the generalized autocorrelation scaling exponent (or Hurst exponent).If the value of ℎ () is dependent on , that is, the ℎ () is a function of , the crosscorrelations between the two time series are multifractal; otherwise, the cross-correlations are monofractal [24].
In order to characterize multifractality of crosscorrelations between two time series, we hereby introduce the singularity strength (or Hölder exponent) and the singularity (or multifractal) spectrum (), which are defined by [24,28] where ℎ () represents the derivative of ℎ () for .Following Wang and Xie [24], we set the range of varying from −10 to 10 with a step of one.
Data and Preliminary Analysis
In our study, we investigate three time series of oil, gas, and CO 2 daily closing prices from April 22, 2005 Let denote the daily closing price on day .In this paper, we focus our study on the daily logarithmic return on day , which is defined as = ln( / −1 ).Thus, each return series contains 2060 observations.The absolute return | | is called volatility. Figure 1 shows the graphical representation of prices and returns of oil, gas, and CO 2 .For each of the prices, as shown in Figure 1, there is a sharp decrease from the peak to the valley which occurs in the period between July 2008 and March 2009.One possible interpretation of this phenomenon is that the above-mentioned period may be the worst phrase of the global recession during the US subprime mortgage crisis.We organize descriptive statistics of the three returns in Table 1.The mean values of the three returns are almost close to zero.The values of the standard deviation are ranged in the order of gas > CO 2 > oil, which suggests that volatilities of gas and CO 2 are higher than that of oil.For each return series, the Jarque-Bera statistic rejects the null hypothesis of the normal distribution at the 1% significance level.We can also find that values of skewness and kurtosis are not equal to zero and larger than three, respectively, which implies that the three returns are fat-tailed.The fat-tailed phenomena can be evidenced by results of normal Q-Q plots of the three returns in Figure 2 because each of the normal Q-Q plots is arced or "S" shaped.
In financial time series [30], many fat-tailed distributions have the power-law decay in the tail of the probability distribution.A lot of previous works confirmed that the "inverse cubic power-law" is found in financial markets; for instance, see [16,23,24,31].Recently, Podobnik et al. [16] proposed a new power-law estimation to investigate the fattailed distribution for financial time series.They developed an estimator of the average return interval ave (q), which is defined as follows [24]: on average, there is one volatility above threshold after each time interval ave (); then The values of ave () for varying can be calculated by ( 12) and then the estimate for can be obtained by the following relationship: According to Wang and Xie [24], we set thresholds varying from 2 to 7 with a fixed step of 0.25, where is the standard deviation of each volatility series.Then, we calculate the value of ave () for each and estimate the exponent by (13).Finally, we display the log-log plots of ave () versus threshold in Figure 3.There is a power-law relationship with Podobnik's tail exponents = 3.1634, = 3.5185, and = 3.0735 for oil, gas, and CO 2 , respectively.The three estimated Podobnik's tail exponents are close to three, which is in line with the "inverse cubic power-law" and further indicates that fat-tailed distributions widely exist in energy and emissions markets.
Empirical Results and Analysis
4.1.Cross-Correlations Analysis.Based on the DCCA method, in this subsection, we first estimate cross-correlation scaling exponents to quantitatively study cross-correlations between oil and gas, oil and CO 2 , and gas and CO 2 .For the sake of simplicity, we denote the three pairs of crosscorrelations as Oil-Gas, Oil-CO 2 , and Gas-CO 2 .The log-log plots of the detrended covariance fluctuation function DCCA () versus time scale are drawn in Figure 4. From Figure 4, we can find that all the circle points are in a linear arrangement.Thus we can employ the OLS to estimate slopes of regression lines (see the solid lines in Figure 4), that is, cross-correlation scaling exponents, and also present the three estimated exponents in Figure 4.It can be observed that all the three cross-correlation scaling exponents are larger than 0.5 but very close to 0.5.This finding suggests that cross-correlations between oil and gas, oil and CO 2 , and gas and CO 2 are weakly persistent (positive).By comparing the three scaling exponents, one can see that the largest scaling exponent belongs to Oil-CO 2 not Oil-Gas, which is different from the results reached by Chevallier et al. [1].However, as suggested by Wang and Xie [17], when the compared scaling exponents are very similar, the DCCA method can only be used to analyze the type of cross-correlation which is either persistent or antipersistent.Hence, we then employ the DCCA cross-correlation coefficient DCCA () to quantify the level of cross-correlations at different time scales.In Figure 5, we show plots of the DCCA cross-correlation coefficient DCCA () versus time scale for Oil-Gas, Oil-CO 2 , and Gas-CO 2 .As shown in Figure 5, one can see that DCCA cross-correlation coefficient series vary with time scales.Interestingly, when the time scale is 10 < < 100, the three coefficients for Oil-Gas, Oil-CO 2 , and Gas-CO 2 are relatively stable and are arranged in the order of Oil-Gas > Oil-CO 2 > Gas-CO 2 .However, for 100 < < /4, each coefficient presents a rising trend as time scales increase; and the order of the three coefficients is changed; that is, the position of Oil-Gas and Oil-CO 2 is swapped.From the aforesaid analysis, we can obtain some inspirations as follows: (i) for Oil-Gas, Oil-CO 2 , and Gas-CO 2 , cross-correlations are diverse at different time scales, which indicates that the traditional linear correlation coefficient cannot accurately capture the diversity of crosscorrelations between energy and emissions markets; (ii) for small time scales, the strength or level of cross-correlations is arranged in the order of Oil-Gas > Oil-CO 2 > Gas-CO 2 , which suggests that cross-correlations in the internal energy markets are stronger than those of the cross markets (i.e., cross-correlations between energy and emissions markets); and (iii) unlike case (ii), for larger time scales, the strength of cross-correlations has changed, which are arranged in the order of Oil-CO 2 > Oil-Gas > Gas-CO 2 .At this point, crosscorrelations between the cross markets (but only for Oil-CO 2 ) are stronger than those of the internal energy markets.
Multifractal Detrended Cross-Correlations Analysis.
In this subsection, we adopt the MF-DCCA method to investigate the nonlinear and multifractal behavior of crosscorrelations between the energy and emissions markets, that Oil (h xx (q)) Oil and gas (h xy (q)) Gas (h yy (q)) (h xx (q) + h yy (q))/2 Oil (h xx (q)) Oil and CO 2 (h xy (q)) CO 2 (h yy (q)) (h xx (q) + h yy (q))/2 is, cross-correlations between oil and gas, oil and CO 2 , and gas and CO 2 .We show relationships between the crosscorrelation scaling exponent ℎ () (i.e., curves with circle symbols) and for Oil-Gas, Oil-CO 2 , and Gas-CO 2 in Figures 6, 7, and 8, respectively.At the same time, we also calculate autocorrelation scaling exponents ℎ () and ℎ () for the single market by the method of MF-DFA.For instance, in Figure 6, the ℎ () (i.e., the curve with triangle symbols) and ℎ () (i.e., the curve with box symbols) stand for autocorrelation scaling exponents for oil and gas, respectively.According to the multifractal analysis theory, if the scaling exponent ℎ() is dependent on , that is, the value of ℎ() varies with different , auto-correlations or crosscorrelations are multifractal; otherwise they are monofractal [24].As drawn in Figures 6-8, it can be found that, for Gas (h xx (q)) Gas and CO 2 (h xy (q)) CO 2 (h yy (q)) (h xx (q) + h yy (q))/2 different , the cross-correlation scaling exponent ℎ () is different.That is to say, each ℎ () is a nonlinear function with respect to , which implies that cross-correlations between oil and gas, oil and CO 2 , and gas and CO 2 exhibit a strong multifractal character.By analyzing the relationship between ℎ () (or ℎ ()) and , we come to a conclusion that individual markets (i.e., gas, oil, or CO 2 ) also have an evident multifractal nature.
In general, as proposed by Zhou [21], for two time series generated by a binomial measure from the -model, there is a relationship among ℎ (), ℎ (), and ℎ (), which is descried by Here, we denote the expression (i.e., (ℎ () + ℎ ())/2) in the right side of ( 14) as the average scaling exponent.In order to verify whether the above-said equation fails or not in this empirical study, we also calculate average scaling exponents for Oil-Gas, Oil-CO 2 , and Gas-CO 2 , and show their results (i.e., the curves with diamond symbols) in Figures 6, 7, and 8, respectively.As depicted in the three figures, one can observe that, for < 0, the cross-correlation scaling exponent ℎ () is less than the average scaling exponent (ℎ () + ℎ ())/2 and greater than the average scaling exponent for > 0. From this, the general relationship (i.e., ( 14)) reported by Zhou [21] is not confirmed by our empirical result based on the analysis of energy and emissions markets.In addition, a similar result was obtained by Wang and Xie [24] who studied the crosscorrelations between the WTI crude oil market and the US stock market.This unexpected phenomenon may be due to the existence of some unknown external events and the noise trading which synchronously influence the cross-correlated behavior of the two investigated markets.
In order to better quantify the multifractality for the two markers, we further investigate the multifractal strength Oil Oil and gas Gas by analyzing the multifractal spectra.To begin with it, based on (11), we obtain results of the multifractal spectra between the two markets and show the corresponding graphs in Figure 9.It is generally known that if the multifractal spectrum presents as a point, it is monofractal; otherwise, it is multifractal [24].From Figure 9, we can find that all the curves for the multifractal spectra in the two markets do not show as a point.These results once again imply that the multifractality exists not only in the energy market (i.e., oil and gas) and the emissions market (i.e., CO 2 ) but also in cross-correlated markets (i.e., Oil-Gas, Oil-CO 2 , and Gas-CO 2 ).Then, to examine the multifractal strength (or multifractality degree), we introduce the measure [24,32] where Δ stands for the width of the multifractal spectra ().The empirical results of multifractality degree are represented in Table 2.By comparing the results in Table 2 or Figure 9, we find that multifractal strengths are arranged in the order of CO 2 > Oil > Gas-CO 2 > Gas > Oil-Gas > Oil-CO 2 .From this, we can draw some conclusions as follows, which may just respond to the conjectures in Section 1. (i) Whether individual markets or cross-correlated markets, they have the nonlinear and multifractal feature.Besides, except for Gas-CO 2 , the multifractal strengths for individual markets are larger than those of cross-correlated markets.(ii) The return series of CO 2 has the largest multifractal degree, which suggests that CO 2 prices have a strong multifractal feature and thus its pricing mechanism is complex and may be affected by many other external factors (e.g., weather).(iii) The multifractality exists in Gas-CO 2 and Oil-CO 2 , which indicates that an increase or a decrease of CO 2 prices is not a simple feedback to a rise or a fall of the energy prices, especially gas prices.(iv) The nonlinear and multifractal behavior shows that either the separately analyzed (energy and emissions) markets or cross-correlated markets violate the random walk process; and some traditional linear bivariate models (e.g., the VAR model) may not be appropriate to detect correlations or interrelationships between the two investigated markets.So it will be an important and interesting work to develop a class of nonlinear cross-correlations models that can capture the multifractal nature [33].
Rolling Windows Analysis.
To uncover the dynamic evaluation of cross-correlations between the two markets, we employ the method of rolling windows to analyze timevarying cross-correlation scaling exponents .The rolling windows method is also known as the local Hurst (scaling) exponent [24].For the detailed procedure of the rolling windows method (or the local Hurst exponent), see [32].Many scholars discussed the selection of the window size, which is a difficult issue in the rolling windows analysis because the local Hurst exponent at a given time is dependent on the window size [24]; namely, different window sizes may generate different time-varying scaling exponents.Wang and Xie [24] summarized the choice of the window size and proposed that one can choose different window sizes for different purposes.On the one hand, for a small window size such as a year, the evolution of Hurst (scaling) exponents is fierce.Thus, one can choose a small window size to examine affections of exogenous events (e.g., seasonal factors and financial crisis) on short-range market dynamics.On the other hand, for a large window size, such as Tabak and Cajueiro [10] set the window size as four years, the evolution of Hurst (scaling) exponents is smooth and stable.So to investigate the major trend (e.g., market efficiency) of long-range market dynamics, one should select a large window size.In this study, we consider two window sizes (i.e., small and large window sizes) to study dynamics of crosscorrelations between the two analyzed markets.In practical terms, the small and large window sizes are fixed to 250 and 1000 trading days, respectively, which are roughly equal to one and four trading years.The step length of window is set as a single trading day for both cases.Therefore, for the window sizes of 250 and 1000 trading days, there is a total of 1811 and 1061 windows, respectively.To analyze short-term market dynamics (i.e., the window size is set to 250 trading days), we present time-varying cross-correlation exponents for Oil-Gas, Oil-CO 2 , and Gas-CO 2 in Figures 10, 11, and 12, respectively, while to analyze long-term market dynamics (i.e., the window size is set to 1000 trading days), we show their results in Figures 13, 14, and 15.For each figure, the time in -axis represents the period of each analyzed window, that is, dates of the beginning and the last day in each analyzed window [17].By comparing the former three figures (i.e., Figures 10-12) with the latter three ones (i.e., , we can find that dynamics of cross-correlation scaling exponents of the latter are relatively smoother than those of the former, which is just as expected. From Figure 10, we can observe that cross-correlation scaling exponents for Oil-Gas vary in the range of [0.38, 0.58].For the whole period, except for the period during the US subprime mortgage crisis, time-varying scaling exponents are in the interval [0.45, 0.55] and its curve (or tendency) fluctuates above and below 0.5.These results imply that the dynamic performance of cross-correlations between oil and gas is relatively stable, except for the impact by the 2008 financial crisis.An interesting finding is that about 1138/1811 ≈ 61.48% of scaling exponents are less than 0.5; that is, more than half of cross-correlations between oil and gas are antipersistent.One possible explanation to this phenomenon is that the oil and gas are the substitute goods because an increase (a decrease) in one product's sales (prices) will reduce the potential sales of another product.As shown in Figure 11, it can be seen that crosscorrelation scaling exponents between oil and CO 2 have a trend of cyclical fluctuation.The fluctuation trend shows a slow decrease firstly and then a rapid increase.Similar to Oil-Gas, during the US subprime mortgage crisis, the cross-correlation scaling exponents for Oil-CO 2 exhibit a dramatic fluctuation and most of them are less than 0.5.More than half (about 53%) of cross-correlation scaling exponents are greater than 0.5, which suggests that Oil-CO 2 has a long-range positive cross-correlation.As for Gas-CO 2 , in Figure 12, one can find that cross-correlation scaling exponents show high volatilities over time and no law can be found in them.In addition, as plotted in Figure 12, it can be observed that most of cross-correlation scaling exponents are larger than 0.5.That is, during most of the period, gas and CO 2 are positively cross-correlated.Before the period or time window, for which the beginning date is April 2008, in Figure 13, we find that the cross-correlation scaling exponents are in the range [0.49, 0.54] which are very close to 0.5.This finding implies that, from a long-term point of view, the cross-correlated market (i.e., the crude oil market and natural gas market) is a weakly efficient market.However, after the US subprime mortgage crisis, exponents for Oil-Gas experience a large change, which are all smaller than 0.5.From this, we can draw a conclusion that the 2008 financial crisis has a longrun influence on the cross-correlated market.Interestingly, from Figure 14, one can see that most of cross-correlation scaling exponents for Oil-CO 2 are greater than 0.5, which indicates that cross-correlations between oil and CO 2 are persistent.This result confirms that crude oil prices are the main driver of CO 2 prices from a long-term perspective.As shown in Figure 15, it can be observed that the trend of cross-correlation scaling exponents for Gas-CO 2 is similar to that of Oil-Gas.Similar to Oil-Gas, the financial crisis has a marked impact on the Oil-CO 2 and Gas-CO 2 which leads to dramatic changes on cross-correlation scaling exponents.
Conclusion
In this study, we focus our study on cross-correlations between energy and emissions markets from a perspective of fractal and multifractal analysis.Namely, we take a fresh look at cross-correlations between oil and gas, oil and CO 2 , and gas, and CO 2 .We choose returns of the oil, gas and CO 2 during the period of April 22, 2005-April 30, 2013 as the research sample.In the empirical process, we first use the methods of the DCCA and the DCCA cross-correlation coefficient to examine power-law cross-correlations and the level of cross-correlations, respectively.Then, we employ the MF-DCCA approach to analyze the multifractal behavior of cross-correlations and quantify multifractal strengths of individual and cross-correlated markets.Finally, by using the rolling windows method, from short-term and long-term perspectives, we investigate time-varying cross-correlation scaling exponents, which can capture dynamics of crosscorrelations.
The basic findings in our study can be summarized as follows.(i) On the basis of the analysis of descriptive statistics and the Podobnik's tail exponent, we find that the three returns of oil, gas, and CO 2 are fat-tailed and obey the "inverse cubic power-law." (ii) By employing the DCCA method, we find that cross-correlations between oil and gas, oil and CO 2 , and gas and CO 2 are weakly persistent.(iii) The cross-correlation coefficients for Oil-Gas, Oil-CO 2 , and Gas-CO 2 are different at different time scales.(iv) The nonlinear and multifractal nature is also found in individual and crosscorrelated markets.For cross-correlated markets, Gas-CO 2 has the largest multifractality degree.
In addition, we investigate short-term and long-term market dynamics of cross-correlations and come to some results as follows.On the one hand, for short-term market dynamics, (i) cross-correlation scaling exponents for Oil-Gas, Oil-CO 2 , and Gas-CO 2 show a drastic fluctuation; (ii) oil and CO 2 and gas and CO 2 are positively cross-correlated for most of the period; and (iii) the dynamic performance of the three cross-correlation scaling exponents is notably affected by the global financial crisis.On the other hand, for long-term market dynamics, (iv) the three cross-correlated markets are also influenced by the financial crisis; (v) for the wholly analyzed period, except for the period after the financial crisis, cross-correlated markets for Oil-Gas and Gas-CO 2 exhibit as a weakly efficient market.Oil-CO 2 has a longrange positive cross-correlation.
Through this empirical analysis, we supply a new perspective to describe and understand cross-correlations between energy and emissions markets.From fractal and multifractal analysis, we obtain some new results, such as positive power-law cross-correlations, different DCCA crosscorrelation coefficients at different time scales, the nonlinear and multifractal cross-correlated behavior, and short-term and long-term market dynamics, which are new insights in energy and emissions markets, especially in the field of energy economics.At the same time, our results also can be added as a new factor or view to consider for banking, finance sectors, and so forth.As an extension of this study, an urgent and interesting future work is needed to design a new kind of cross-correlation models that can capture the nonlinear and multifractal nature.
Figure 1 :
Figure 1: Prices (a) and returns (b) of oil, gas, and CO 2 .
Figure 3 :Figure 4 :
Figure 3: Log-log plots of the mean return interval ave () versus threshold (in units of ) for oil (a), gas (b), and CO 2 (c).
Figure 6 :
Figure 6: The relationship between ℎ() and for oil and gas.
Figure 7 :
Figure 7: The relationship between ℎ() and for oil and CO 2 .
Figure 8 :
Figure 8: The relationship between ℎ() and for gas and CO 2 .
Figure 9 :
Figure9: Multifractal spectra between energy and emissions markets.(a), (b), and (c) exhibit the multifractal relationships between the multifractal spectrum () and the singularity strength for oil and gas, oil and CO 2 , and gas and CO 2 , respectively.In each panel, it also shows the relationship between () and for the corresponding individual market.
13 Figure 10 :
Figure 10: Time-varying cross-correlation scaling exponents for Oil-Gas.The window size is set to 250 trading days.
Figure 13 :
Figure 13: Time-varying cross-correlation scaling exponents for Oil-Gas.The window size is set to 1000 trading days.
Figure 15 :
Figure 15: Time-varying cross-correlation scaling exponents for Gas-CO 2 .The window size is set to 1000 trading days.
Table 1 :
Descriptive statistics of returns of oil, gas, and CO 2 .
Figure 11: Time-varying cross-correlation scaling exponents for Oil-CO 2 .The window size is set to 250 trading days.
Figure 12: Time-varying cross-correlation scaling exponents for Gas-CO 2 .The window size is set to 250 trading days.
Figure 14: Time-varying cross-correlation scaling exponents for Oil-CO 2 .The window size is set to 1000 trading days. | 8,044.4 | 2014-01-09T00:00:00.000 | [
"Physics"
] |
The Origin of Conductive-Pulse Sensing Inside a Nanopore and the Role of Electro-Hydrodynamics
Despite the highly negatively charged backbone of DNA, electroosmotic ow (EOF) within a nanopore can lead to DNA travelling opposite to electrophoretic force (EPF) at low ionic strengths. However, EOF pumping and its role in producing current-enhancing events is ambiguous due to the complicated interactions between nanopore walls, DNA grooves, ion mobility, and counterion clouds. Here, we discuss how current-enhancing DNA events could be the result of a ux imbalance between anions and cations. The contributing factors for driving a ux imbalance within a nanopore include pore size, voltage bias, and type of alkali chloride electrolyte. Once the mechanism behind conductive events is established, the physics of transducing a DNA translocation into an electrical signal can be further exploited for improving DNA sequencing and, more broadly, bio-sensing. LiCl-lled nanopores - ux low a higher ux under pressure biased uid ow compared to other cations (K + and Cs + ). The same pore nS in 10 mM KCl) was used for all measurements to reduce variability due to different pore sizes.
Introduction
Since their rst use as a biosensor, solid-state nanopores continue to explore new biophysical phenomena and have cemented their place in history as invaluable real-time, single-molecule, electrical read-out platforms. Although the translocation of new biological entities is now a routine practice in labs across the world, the high electrolyte concentrations in which experiments are performed is rather unchanged since nanopores were rst utilized in 1996 1 . The popularity associated with high electrolyte solution is largely due to the high signal-to-noise ratio (SNR) and reliable generation of resistive pulses stemming from DNA transiently blocking ions (typically potassium and chloride). The physical principles in which DNA modulates the ow of ionic current within a nanopore has been studied extensively 2-4 . However, the resistive nature of events is not consistent across all DNA translocation experiments 1,5,6 . In 2004, Chang et. al., reported on current-enhancing events wherein the DNA-occupied pore conducts more ions than when the pore is empty 7 . Therefore, events can be categorized as a current-reducing event (i.e. resistive event, RE), or a current-enhancing event (i.e. conductive event, CE). The question, "Why does ionic current increase during transient DNA occupancy of a nanopore?", remains unanswered and warrants further investigation.
As electrolyte concentration decreases, CEs are often observed in both planar membrane nanopores as well as conical nanopipettes, suggesting that CEs are not pore geometry speci c [8][9][10][11][12][13][14][15][16] . It is also at this regime where EOF strengthens, sometimes leading to the translocation of molecules opposite of the EPF (i.e. negative DNA traveling towards the negatively biased electrode). Although EOF and CEs often coincide, it is important to note that they are not mechanistically linked. Reports of CEs occurring in nanopores where EOF is reduced to allow EPF-driven events also produced CEs 9 . Despite the large number of experiments describing CEs, the origins of CEs in the presence of low ionic strength has been elusive.
It has been generally accepted that CEs stemming from low ionic strength conditions occur because the introduction of additional DNA counterions (i.e. K + ) within the nanopore is greater than the number of ions within the empty pore 7 . Once electrolyte concentration decreases below roughly 0.02 M, mostly counterions are present within the pore, which explains the current enhancement 17,18 . Interestingly, at approximately 0.4 M, counterions are thought to precisely compensate for the DNA occupied regions of the pore and yields no current modulation 19 . A second hypothesis has been that frictional forces (i.e. ionic friction with the grooves on DNA) are in uential in generating CEs 3,9 . Although these hypotheses can predict the well-known crossover point in which events transition from resistive to conductive (via decreasing salt concentrations), the cation-speci c, voltage-speci c, and pore size-speci c dependence of CEs have not been studied and should provide con rming evidence for one of these hypotheses 6,19 . One recent experiment in particular con icts with these hypotheses and may lead to a third potential theory; namely that current enhancement is not only a low salt phenomena and can also be observed at high, asymmetric salt conditions 20 . Above 1 M KCl, counterions should contribute very little to current modulation. Since a cohesive theory for the nature of conducting events is still elusive, we studied the transport of DNA within a nanopore using various monovalent salts.
Herein, we characterize EOF-driven events (anti-electrophoretic, or anti-EPF) with Lambda DNA (λ-DNA) and neutral polymers (i.e. polyethylene glycol, PEG) using quartz nanopores. Interestingly, we found that current enhancements can be observed using PEG; casting doubt on counterions being the dominant explanation for CEs. Furthermore, DNA CEs are extremely cation-, pore size-, and voltage-speci c and may be the result of an imbalance of ionic uxes. We will discuss the electrokinetic and hydrodynamic phenomena that affect event shape such as counterion cloud, ion mobility, pore size, and electrolyte composition. This report elucidates some of the fundamental prerequisites for observing CEs when DNA translocates a nanopore and paves the way for harnessing CE mechanisms for DNA sequencing and biophysical discoveries.
Results
While most nanopore-based, single-molecule sensing is performed using planar membranes, which have a well-de ned pore length (i.e. membrane thickness), nanopipettes have a gradual taper length (Fig. 1a) that increases the sensing region of the device 21 . We fabricate nanopipettes by laser pulling glass nanocapillaries, producing two identical quartz nanopores. With this technique, we can achieve <10 nm inner pore diameters, Figure 1a. This process is fast, inexpensive, and does not require a clean-room environment 22 .
Current-voltage (I/V) analysis reveals that the conductance of the pores (Fig. 1b) varies between 0.58 and 5.35 nS; as well as the presence of ionic current recti cation 23 . These conductance values are consistent with pore diameters between 5 (± 0.5) and 48 (± 4) nm, respectively. Speci cally, the relationship between pore conductance (G) and inner diameter (d i ) allows us to estimate the size of the aperture 24,25 : (1) Where is the length of the conical pore (taper length), is the measured conductivity of the buffer, and is the diameter of the capillary at the beginning of the conical taper. The initial, inner capillary diameter is constant in our experiments (0.7 mm) and the buffer conductance depends on concentration and alkali chloride used. The taper length was measured using transmission electron microscopy and I/V analysis is used to measure the pore conductance. Pore sizes are also occasionally con rmed using transmission electron microscopy.
After retrieving I/V information, translocation experiments with λ-DNA at 500 pM were performed. When EPF dominates, the capture volume outside the nanopore assumes a nearly spherical shape surrounding the pore's oriface [26][27][28][29][30] . As ionic strength decreases, EOF can dominate as the primary means for DNA entering the pore. According to the EOF streamlines, the capture volume adopts a shape con ned along the sides of the pore 31 . There also lies a crossover concentration point in which EOF reverses direction, where EOF is generated along the glass surface and radiates away from the pore aperature 31 .
Finite element analysis was performed to determine the uid ow rate at different voltages (Fig. 1c). As the applied voltage decreases from 0 mV, the mean uid velocity increases into the glass pore. The same is true for positive voltages, however, the uid ow direction switches ( ow reversal) from towards the pore at negative voltages to away from the pore at positive voltages. Notably, these uid velocities can in uence DNA dwell time inside the pore and have been described using hydrodynamic drag 32,33 .
DNA proceeds to diffuse around the solution until it enters the EOF capture volume, where it is then transported through the pore. This method of translocation is further illustrated (Fig. 1d) with KCl as the electrolyte and K + ions responsible for the movement of water carrying λ-DNA. Under these conditions, it is possible to see differing DNA con gurations: linear, partially folded, and fully folded (Fig. 1d). Reports of different DNA con gurations have been witnessed using high ionic strength conditions and with both planar nanopores [34][35][36][37] and nanocapillaries 24 . The ability to discriminate folding states using DNA CEs does not directly help uncover the nature of CEs, but it is important to recognize the existence and understand the effects of having various DNA con gurations upon translocation.
To show that this nding is not limited to low ionic strength phenomenon, we employed salt concentration gradients as previously described 20 . As shown in Figure 1e, our experimental set-up involved having a solution of 1 M KCl + λ-DNA inside the nanopore with 4 M KCl outside. With an applied voltage of -600 mV, λ-DNA was driven outside the pore through EPF, resulting in CEs. An additional salt gradient was implemented ( Fig. 1f) with 4 M KCl inside the nanopore and 1 M KCl + λ-DNA outside. In this situation, EPF drove λ-DNA to translocate through the pore, again resulting in CEs. Based on these results, a working hypothesis was made that the existence of CEs stem from a ux imbalance between anions and cations. This is notably different than ion selectivity which is typically a characteristic of the pore itself. Rather, ux imbalances can be generated through externally applied conditions and parameters.
EOF pumping of water into the pore, for example, can change the relative uxes of ions. Since the electric eld acts equally on both chloride and potassium ions, the net movement of water only provides a moving frame of reference which favors one ion over another. Nevertheless, total ionic current is constant regardless of EOF velocity. For the data shown where CEs are observed ( Fig. 1 d-f) we speculate that there is a net ux that favors potassium ions. Figure 1g illustrates how K + ions are pumped into the pore under low ionic strength (EOF; Fig. 1d) and concentration gradient conditions (EPF; Fig. 1e).
Validation using nite element methods was undertaken to further explain the potential impact that EOF has on DNA sensing and the unique capture dynamics of EOF-driven events. A 20 nm pore was modeled Realizing that the capture volume in EOF-driven translocations surrounds the outer walls of the nanopipette, we chose to expand and shrink the capture volume via a depth-dependent study to witness any changes in event frequency (Fig. 2c). By submerging varying lengths of the taper length inside the salt solution containing λ-DNA, the capture volume is controlled ( Supplementary Fig. 1). The nanopore was suspended at 0, 0.26, 0.53, 1.1, and 4.0 mm below the electrolyte solution surface containing DNA. For exact measurements, the nanopore was suspended from a micrometer. Translocations were obtained for voltages between -100 and -1000 mV, in increments of 100 mV. Recording at -600 mV yielded the most consistent translocations without clogging the pore. Events were recorded at -600 mV and the I/V relationship yielded a 2.53 nS pore. Capture rate was calculated at each depth. As nanopore depth increases, capture volume also increases, leading to higher event frequency with larger depth values. As more of the nanopore is exposed to the λ-DNA solution, the capture volume enlarges, leading to an increase in event frequency.
In order to understand how electro-hydrodynamics in uences ionic ux, particularly at low salt conditions, three monovalent salts were modelled by altering the cation diffusion coe cient and electrophoretic mobility ( Fig. 2d and e). Although the pore's total ionic ux was not altered signi cantly by EOF since K + ux increased and Clux decreased by the same amount, EOF does signi cantly impact the ux imbalance between cation and anion. This nding was particularly noteworthy since CEs have been observed at high asymmetric salt conditions which would also change a pore's ionic ux imbalance.
These results predict that a ux imbalance in favor of Cltransport leads to resistive events and a ux imbalance in favor of K + leads to conductive events. This is based on the experimental results that the 10mM KCl electrolyte always produces CEs. In Figure 2d, anion-dominant ux only occurs with small pore sizes, 20 nm and less, and an applied negative voltage between -300 and -400 mV. It is important to note that no events could be recorded at these conditions to nd out whether resistive events are observed. For a nanopore suspended in LiCl, we observed more opportunities for the pore to be Clselective, which we predict will result in REs upon translocation of λ-DNA. As the pore increases in size or an increasingly negative voltage is applied, the pore can become cation selective, which we speculate can give rise to CEs.
Although the ionic diffusion coe cients and electrophoretic mobilities encapsulate basic transport properties, all the while being utilized as variables in the nite element simulations, they neglect the geometric size of the ions and therefore the packing density/strength on oppositely charged surfaces. In order to understand the link between electro-hydrodynamics and Debye layer screening of the quartz surface charge, streaming current measurements were used as a proxy for cation mobility within the diffuse ion layer. Contrary to EOF, where mobile ions drag uid, streaming currents measure the uid's ability to drag along ions co-axial to the uid motion 38 . A pressure bias was used to generate a streaming current and the resulting data can be seen in Figure 2f. Negative pressures generate a ow into the nanopore and in the same direction as EOF in our experiments. We see that larger pressures create larger streaming currents. Interestingly, LiCl has signi cantly higher streaming currents compared KCl and CsCl at negatively biased pressures. Overall, these results indicate LiCl-lled nanopores can be Clux dominant at low voltages, and secondly, Li + has a higher ux under pressure biased uid ow compared to other cations (K + and Cs + ). The same pore (1.30 nS in 10 mM KCl) was used for all measurements to reduce variability due to different pore sizes.
DNA and Neutral Polymers in Potassium Chloride
Under high ionic strength conditions, pores with a diameter slightly larger than the analyte molecule yield greater SNR values when compared to larger pores 19 . Because of this, we were motivated to explore SNR values under low ionic strength conditions. A typical conductive DNA event can be seen in Figure 3a (bottom) in 10 mM KCl. Potassium chloride was chosen as the electrolyte because it is most frequently utilized in nanopore research due to similar ion mobilities of anions and cations. We incorporated differently sized pore diameters to witness any effect that pore size may have on event shape and size. The depth of each nanopore was kept consistent for all recordings as well as the voltage (-600 mV).
As λ-DNA translocates through the nanopore, we witness a current-enhancing event. For all SNR calculations, we omitted all con gurations except for linear DNA translocations (Fig. 3b). DNA has the ability to translocate linearly, folded 8, 24 , or in knots 34 , in which the latter two increase the current change.
To ensure DNA con guration had no effect on SNR, we applied only linearly translocating DNA to our calculation.
We witness an increase in SNR starting at 2.00 nS and saturating around 3.00 nS. To determine whether the current enhancement or the noise of the signal is the major contributor to the increase in SNR, we acquired the median current change of all events and the root mean square (RMS) noise of a data segment lacking events. We witness that the RMS noise maintaining values of 15 ± 7 pA, whereas the current change increases from 30 to 140 pA as pore size increases. For the left side of the graph, we witness a sharp increase in the SNR as the pore size decreases. This can be explained by a decrease in the noise associated with smaller pores. As seen in Supplementary Figure 2, the RMS noise is extremely low (< 10 pA) whereas the median current change is approximately 100 pA. Therefore, the higher SNR values for smaller pores stem from lower noise. On the right side of the graph, we speculate that the rise in SNR (and current enhancement) is a result of greater EOF pumping as a function of pore size ( Supplementary Fig. 3). Owing to larger uid velocities, the ux imbalance highly favors potassium rather than chloride.
The common hypothesis that DNA counterions are the sole mechanism of CEs led us to explore PEG under low ionic strength conditions with an applied negative voltage 39 . PEG 20,000 was diluted to 15% (w/w) in 10 mM KCl and voltage was applied from -100 to -1000 mV, in increments of 100 mV. Interestingly, PEG events could be observed at an extremely small pore size (0.43 nS); a pore size regime that we could not observe DNA events. Since EOF decreases with smaller pore sizes and EPF increases, we believe DNA energetically could not overcome the barrier at the pore entrance for translocations to occur. Since PEG is neutral, we were able to observe EOF-driven events at very small pore sizes ( Supplementary Fig. 4). The results indicated that smaller pore sizes resulted in CEs whereas larger pore translocations yielded REs (Fig. 3c and d). SNR calculations showed that the smaller pore diameter yielded higher SNR values in comparison to larger pore diameters. In both pores, the median current change was 71 ± 1 pA; whereas the RMS noise increased from 7 to 18 pA as the pore size increased from 4 to 25 nm in diameter, respectively. Based on these results, the nature of the event (CEs versus REs) seems un-coupled from the analyte counterions (or lack thereof, in the case of PEG) but rather linked to the pore size and/or voltage in which translocations occur. Although the analyte counterions do not seem to play a signi cant role in generating CEs, extremely small, negatively charged pores may be more likely to generate CEs due to their cation selectivity. It is also not fully understood how transient or long-term interactions of PEG with the charged glass surface far from the pore would impact EOF pumping.
Previous reports have used PEG to lessen or neutralize EOF 9 and therefore could be impacting the pore's ux imbalance via interactions with the nanopipette's conical taper.
Voltage Dependence with Lithium Chloride
Lithium chloride was chosen as an electrolyte because it has been previously shown to "slowdown" DNA translocations under high ionic strength conditions 40 . This can be attributed to Li + having a smaller atomic radius than K + and therefore, Li + binds to DNA stronger than K + 40 . Additionally, LiCl had a signi cantly higher streaming current (Fig. 2f) compared to both KCl and CsCl. Finite element simulations indicated a voltage and pore size dependence for ux imbalance that was within the voltage range: -400 to -1000 mV, where events are typically observed. The nanopore containing 10 mM LiCl was inserted inside a solution containing 10 mM LiCl + λ-DNA and current changes were recorded at various voltages (Fig. 4a). The same series of steps were repeated to calculate the SNR at each voltage.
Using the same pore (1.20 nS), we witnessed the crossover point that is independent of salt concentration, which is something not previously observed. At voltages of -300 and -500 mV, λ-DNA translocations resulted in REs and at voltages of -700 and -900 mV, DNA translocations resulted in CEs, as shown in Fig. 4b. Interestingly, at an applied voltage of -600 mV the event current shape assumes both a resistive and conductive spike ( Supplementary Fig. 5). For this pore, we see an increase in the amplitude of the REs as the voltage applied is reduced to -600 mV. Less than -600 mV (i.e. more negative), the CE amplitude continues to increase as the voltage decreases to -900 mV. The events recorded at -900 mV and -500 mV yielded higher SNR values in comparison to -700 mV and -300 mV, respectively ( Fig. 4c and d). Supplementary Figure 6 shows how the median current change is the main contributor to the SNR uctuation, the RMS noise for each voltage remains relatively constant. The transition from REs to CEs can be understood by the pore being anion selective at low voltages and cation selective at higher voltages. As the applied voltages increase in negativity, the change in current switches to a CE. The biphasic nature of the events at the transitional voltages (-500 mV and -700 mV) suggests that there may be two mechanisms of current modulation (hydrodynamic ow and pore occupancy) that can occur when the DNA molecule is near or entering the pore. DNA entering the ow eld of the pore during EOF pumping may cause current modulations that occur immediately prior to translocation.
Another comparison was done using two pores with inner diameters of 33 ± 3 nm. One pore contained 10 mM KCl and was suspended in 10 mM KCl + λ-DNA while the other contained 10 mM LiCl and was suspended in 10 mM LiCl + λ-DNA. Both had an applied voltage of -600 mV and we witnessed CEs for the pore containing KCl and REs for LiCl (Fig. 4e). At -600 mV with the aforementioned pore size, nite element simulations predicted that the nanopipette is cation selective in KCl and anion selective in LiCl, which may be a possible explanation for the event types observed. We also note that KCl and LiCl have similar event durations at these low salt conditions, however, KCl has a much larger variation in the degree of current-modulation (in the case of KCl: current enhancement). The current-reductions observed for LiCl are much more tightly clustered together compared to KCl CEs. The source of the variability observed in KCl CEs is still not fully understood and requires further investigation. The data seems to suggest that CEs are more variable regardless of the cation. The LiCl events in Fig. 4b, for example, show a much greater degree of scatter for CEs compared to REs.
Alkali Chloride Dependence on Event Characteristics
Recently, CsCl was shown to have an advantage over KCl in respect to sequencing using solid-state nanopores 11 . This publication used CsCl because it disrupts the hydrogen bonding between guanines, therefore denaturing the G-quadruplex into single-stranded structures. Although we are not working with ssDNA, we aimed to compare KCl event properties with another alkali metal chloride that holds promise in the nanopore community. Therefore, we performed experiments using 10 mM CsCl inserted into 10 mM CsCl + λ-DNA. The typical current trace and event signature is displayed in Fig. 5a.
Similar to KCl, we do not see a voltage dependence on event shape with CsCl, which is not surprising considering that K + and Cs + have nearly the same diffusion coe cient 41 . For con rmation, a pore with a conductance of 1.47 nS (14 ± 2 nm diameter) was used with λ-DNA. Under low ionic strength conditions, we applied voltages of -300 mV, -400 mV, -500 mV, and -1000 mV to witness any transition in event shape (Fig. 5b). All voltages resulted in CEs, which was predicted based on nite element analysis under the assumption that cation selective conditions yield CEs. Simulation results for CsCl can be found in the Supplemental Information, but were nearly identical due to the diffusion coe cients for KCl and CsCl being 2.02 × 10 -5 and 2.00 × 10 -5 cm 2 /s, respectively 41 .
To explore the difference that alkali chloride type has on event capture rate, we fabricated three pores with inner diameters 35 ± 4 nm to be used with λ-DNA at -400 mV. We calculated capture rate by methods previously described 28 to yield capture rates for each electrolyte used. Experimentally, we saw that λ-DNA in LiCl resulted in the highest frequency of events, followed by KCl, then CsCl (Fig. 5c). COMSOL was used to describe how alkali chloride type and pore size affected EOP pump velocity (Fig. 5d). Based on the conductance values of each pore, we believe some of the differences observed in the capture frequency are related to the size of the pore which strongly impacts the EOF pump velocity since smaller pores yield higher intra-pore electric elds. Based on this rationale, the CsCl experiments yielded a lower capture frequency due to the larger pore size. The extremely high capture e ciency observed in LiCl experiments may be due to the higher charge screening of the DNA backbone. A reduction of DNA charge will reduce the energetic barrier to move anti-EPF. The reduction in EPF is cohesive with the idea that DNA translocations in LiCl generate longer event durations at high salt conditions 40 . Lastly, we calculated the SNR of each electrolyte (Fig. 5e). We witness an increase in SNR starting with the lowest (CsCl) to the highest (KCl). In this scenario, translocations in LiCl resulted in the lowest RMS noise and median current change: 10 and 69 pA, respectively. KCl and CsCl both resulted with median current changes of 116 ± 2 pA. However, the major difference between these two lied within CsCl having more noise, resulting in a lower SNR.
How a ux imbalance yields CEs has yet to be addressed. The working hypothesis currently is that stored charges can accumulate at the nanopipette tip effectively acting as a capacitor in series with the highly resistive nanopore. Since the voltage at the extreme ends of the uidic reservoirs are clamped, charge build-up (i.e. potassium) tends to generate a voltage that, in turn, lowers the effective voltage at the pore. We speculate that a DNA-occupied pore transiently stops EOF pumping and thereby lowers the stored charge inside the nanopore and that the capacitor discharges current proportional to the blocked EOF. Finite element methods demonstrate the accumulation of charge inside the glass pore (Fig. 6a). The increase in stored charge with applied voltage is a characteristic trait of an ionic capacitor. Upon solving for the effective capacitance, we obtain a value of 4 ×10 -17 Farads. The timescale of charging and discharging accumulated charge is also fast (3-5 µs to reach steady state space charge density; Fig. 6c).
Ionic-generated potentials are typically named according to the principle in which they are generated. For example, diffusion potentials, streaming potentials, and exclusion potentials 42 . Nevertheless, charge separation is a commonality of these potentials as well as our capacitor model which ultimately could generate voltage and current transients. Data thus far supports the hypothesis that a ux imbalance plays an important role in the generation of CEs. The existence of CEs with PEG (e.g. using a 0.43 nS pore) further demonstrated that charged analytes are not a pre-requisite for CEs, but may indeed have an important role depending on the pore size. For example, a 2.63 nS pore lled with 10mM KCl produced CEs when DNA was the analyte, and REs for PEG at the same conditions. We speculate that the analyte and its concentration in the reservoir can transiently impact a pore's ux imbalance via translocation, or indirectly via interactions with the glass surface (i.e. outside the pore). For example, adsorbed molecules on the glass surface will hinder EOF pumping velocities and therefore the ux imbalance. Nevertheless, the evidence here demonstrates the importance of the pore's charged surface, voltage-bias, and associated electro-hydrodynamics in generating CEs.
Outlook
In this study, we described multiple electro-hydrodynamic effects that in uence EOF-driven DNA translocations under low ionic strength conditions. We have found that EOF can be used in various alkali chlorides and be used to translocate (un)charged molecules. Con rmation that EOF capture volume resides along the sides of the tip aperture and directs ow inward has been shown. The resulting current enhancement or reduction dependence on pore size can be explained by a pore's ux imbalance. Secondly, we discovered a crossover point, independent of salt concentration and speci c to LiCl, by scanning the applied voltage from -300 mV to -900 mV. We show that changing the electrolyte in uences the event shape, SNR values, and event frequency. Finally, by utilizing salt gradients to generate a ux imbalance, extremely high signal to noise ratios were achieved. Such information is valuable in the pursuit of using solid-state nanopores to sequence polynucleotides and as a diagnostic test.
Methods
Nanopore fabrication began with quartz capillaries (Sutter Instrument Co.) of 7.5 cm in length, 1.00 mm in outer diameter, and 0.70 mm in inner diameter. Capillaries were plasma cleaned for ve minutes prior to laser assisted machine pulling to remove any surface contaminations. After, quartz capillaries were placed within the P-2000 laser puller (Sutter Instrument Co.) machine where a CO 2 laser heated the center of the capillary while the ends were pulled away from each other. A one-line protocol was used: (1) HEAT: 630; FIL: 4; VEL: 61; DEL: 145; PUL: between 135 and 195. This resulted in two identical, conical nanopores. The heat duration was approximately 4.5 s.
Electrodes were constructed using silver wire dipped in bleach for 30 minutes and then rinsed. Nanopores were then back lled with either 10 mM KCl, LiCl, or CsCl. An optical microscope was used to inspect the nanopores at this stage for any irregularities. Once the nanopore had been inspected, it was secured in our Axopatch set-up. Electrodes were then placed inside the pore and the solution containing λ-DNA. The Axopatch 200B patch-clamp ampli er (Molecular Devices, USA) was used in voltage clamp mode to measure the ionic current changes. The gain was optimized prior to each experiment and the signal was ltered with a low-pass lter at 5 kHz. Data analysis for DNA translocations and folding were performed using a custom MATLAB code.
For the microscopy experiments, λ-DNA with a stock concentration of 500 µgml − 1 was purchased from New England Biolabs. Dilutions were performed in either 10 mM KCl to create a 500 pM concentration of λ-DNA. Afterwards, λ-DNA was incubated with YOYO-1 (Molecular Probes) for 30 minutes. Videos and images were captured using a 60X water-immersion objective.
COMSOL Multiphysics was used for modelling nanopipette geometries that were based on SEM and TEM images acquired from the same pipette pulling protocols that were used in sensing experiments. A 2D axisymmetric model was employed to reduce computational resources required. Once the geometries were created in COMSOL, the physics that were utilized included: laminar ow, transport of diluted species, and electrostatics. The electrostatics boundary condition for the glass was set at a surface charge density of -1 × 10 − 2 C/m 2 . In order to model electroosmotic ow, a volume force on the uid was set to the space charge density of the ions in solution multiplied by the electric eld vectors (r and z vectors). Diffusion coe cients and mobility values were obtained from Lee et. al. 41 . All models were tested with different solvers, solving conditions, and reservoir sizes to ensure accuracy of results.
Supplementary Files
This is a list of supplementary les associated with this preprint. Click to download. SITheOriginofConductivePulseSensing.pdf | 7,280 | 2020-09-25T00:00:00.000 | [
"Chemistry",
"Physics",
"Biology",
"Materials Science"
] |
Machine Translation in Foreign Language Learning Classroom-Learners’ Indiscriminate Use or Instructors’ Discriminate Stance
The use of machine translation (MT) tools in language learning classroom is now omnipresent, which raises a dilemma for instructors because of two issues, language proficiency and academic integrity, caused by that fact. However, with the unstoppable development and irresistible use of MT in language learning, rather than entangling with using it or banning it, it is more significant to figure out why learners turn to MT in spite of the prohibition from their instructors and how can instructors guide learners to use it appropriately. Consequently, this paper reviews articles with regard to the reason why learners turn to MT, the practical use of MT in learners’ writing, and some pedagogical solutions for making peace with MT in language learning classroom respectively. Implications can be garnered like that a course for learners of how to use MT tools properly should be included in the curriculum design, and simultaneously, the holistic understanding of these overwhelmingly fast-developed technology tools for instructors should be a part of teachers’ self-development, since instructors without knowledge said technology tools can not fully motivate language learners and implement the pedagogical solutions offered.
Introduction
First proposed in 1949 in a memorandum from Warren Weaver, a British crystallographer, the idea of machine translation (MT), which was initially for the use of war-time cryptography techniques, statistical analysis, Shannon's information theory, and the exploration of the underlying logic and universal features of language (Hutchins & Sommers, 1992), is now widespread availability not only to professional translators to dispose some tedious and repetitive source texts like commercial and business transaction, legal documentation, industrial patents and so forth, but also to students for their foreign language assignments. Concerns and debates regarding the use of MT in language learning follows the improvement of the capabilities of MT. Instructors explicitly dissuade students' use of MT with the anxiety of academic honesty violations and increased dependency on MT, while students surreptitiously consult online translators because of its rapidity, simplicity of results and amelioration of accuracy (Guenette, 2013). Actually, the advancement, pervasiveness and versatility of MT are irresistible nowadays. Policies that turn MT into a taboo stand directly in opposition to the gist of aiding a student become "a 21 st Century skilled learner" recommended by the American Council on the Teaching of Foreign Language (ACTFL) (ACTFL, 2011). Therefore, studies on why learners use MT programs and how they interact with them (White & Heidrich, 2013), what instructors can do to equip their students to use the MT in an educationally and interculturally respectful manner (Ducar & Schocket, 2018) and practical MT use in learners' writing (Garcí a & Pena, 2011) seem to be more significant for us to confront the challenge of learners' indiscriminate use of MT.
Machine Translation
Impressive progress has been witnessed in the realm of Machine Translation (MT), since its first debut in Warren Weaver's memorandum. Nowadays, MT tools are widely used available for many users or domains, but the wrong translation productions are intolerable, so that the human corrector are introduced to the process of post-editing. Aiming to diminish the human effort of post-editing to produce better translations, interactive machine translation (IMT) (Barrachina et al., 2009;Casacuberta et al., 2009;Foster et al.,1997) emerged as one of the most attractive strategies to solve the problem. In the development of MT, researchers have found that reordering errors and lexical and syntactic ambiguity are barriers that affect the quality of final translation. To address these obstacles, a large number of MT approaches have been developed over the years, among which the use of methodologies based on linguistics has resulted in the family of Rule-Based Machine Translation (RBMT) (Moussallem et al., 2018). However, the drawback, reliance on manually crafted rules, has prevented the easy development of new translation modules for different language. Then, a standard Phrase-Based Statistical Machine Translation (BP-SMT) system (Koehn et al., 2003b) and Example-Based Machine Translation (EBMT) were developed to deal with the scalability issues in RBMT (Moussallem et al., 2018). In recent years, neural machine translation (NMT), a novel corpus-based technology, emerged as the state-of-the-art in MT, in which the translations are generated solely by neural networks (Kazemi et al., 2017). The research and development of MT will never stop, and is unstoppable, indicating that the use of MT in every domains including language learning is also irrevocable.
Machine Translation in Foreign Language Learning and Teaching
With the popularization of personal computers and technological devices, the past 70 years saw the notable progress in the realm of machine translation after originally being proposed in Weaver's memorandum. In many previous studies on MT and language learning (Garcí a & Pena, 2011;Correa, 2011Correa, , 2014Kazemzadeh & Kashani, 2014), researchers found that a majority of students have admitted that they had consulted online translators when doing their foreign language (FL) assignments although their instructors explicitly asked them not to. Since the development of MT and the use of it in language learning are irreversible, there is no need to intertwine with using it or banning it. Rather, it is beneficial for us to figure out why we use it and how to use it effectively.
With the overwhelmingly fast development of MT technology, researchers have been aware of the use of MT in language learning. Several studies focusing on MT in language learning have been conducted, among which the study having been carried out by Ignacio Garcí a and Marí a Isabel Pena (2011), rather than inquiring to the MT as a language learning tool concentrating on its use by advanced learners like many other research, have paid attention to its use by beginners and early intermediate learners. With the purpose of figuring out whether "MT can be considered a suitable activity for developing writing skills in L2 of beginner/early intermediate language learners"(p.473), tests involving students of Spanish as participants were done at the School of Humanities and Languages in the University of Western Sydney. Two groups of participants were engaged with this test, the first group with 9 participants (six female, three male) being beginners as Level 1, while seven participants (four female, three male) being early intermediate level as Level 2 at the second group. In order to "discover whether students would communicate better and learn more if they wrote directly in Spanish or with the help of an MT draft"(p.474), two groups of participants were required to respond to a prompt that was at the same notional difficulty with 50 words at Level 1 and 100 at Level 2 using email communication. Two tasks, the first, a common task for both level, and the second, level-specific, were timed 15 minutes each. The tests, in which participants were asked to respond to one of the prompts directly in L2 and to the other in English first, using the MT Tradukka interface, a free application released in 2009, were screen-recorded by using BB FlashBack Pro 2.7.3 with regard to the cursor movements and the keyboard log. Having completed the test, two questions were raised for participants to respond "(1) Do you think you did better writing directly in Spanish or using machine translation?" aiming to "check whether there was correspondence between the perceptions of participants and our findings about the writing as a product", "(2) What do you think about using machine translation for writing into Spanish: would it help you express yourself better in Spanish? Would it help or hinder the process of learning Spanish?" (p.476) which was aimed at finding out the participants' thought of the activity, and figuring out "the possible correlations between the perceptions of participants and our findings about the writing as a process" (p.476). Conclusions can be drawn like that the findings of this study showed that MT do help beginner learners in their writing not only with writing more words but also with less effort. However, one thing needs to be conceded is that although MT assists the writing process, students' more exposure to target language may not happen this way. Several participants expressed their concerns of depending on MT despite the fact that MT aids them "to write faster with fewer mistakes" (p.485).
Kelsey D. White and Emily Heidrich (2013), from university of Wisconsin-Madison, conducted a research to figure out the reason of using MT from two perspectives, strategies before, during and after translation and beliefs about interactions with MT, with three research questions. Eighteen participants, intermediate learners of German chosen from one intact class, were involved in this study in which both quantitative and qualitative methods were employed. In the beginning questionnaire, demographic and background information were collected, such as English as their L1, German as FL, using web-based machine translation (WBMT) for an average of 27.7% of FL assignments and so forth. In the following translation task, students were required to write a paragraph in English describing a picture without knowing that their writing would be used in the following translation task. Having completed the writing task, students were informed that their composition would be translated into German using WBMT, but they were offered the option to edit their work so as to make it more appropriate for accurate translation. A further option to edit WBMT output was also offered to students. This study ended up with a final questionnaire about students' feeling about their interaction with this WBMT tool. Following, semi-scripted interviews involving 5 participants of the 18 were conducted face-to-face to further discuss their beliefs about WBMT and strategies used during the tasks with WBMT. Conclusion was drawn like that both students' beliefs and strategies are closely interconnected. The reason why many students continue to use WBMT in spite of its prohibition is that the use of WBMT can ensure them with linguistic accuracy at the cost of sophistication, complexity and expressing their own voice, which was indicated in the pre-task questionnaire survey.
In the article Machine translation and the L2 classroom: Pedagogical solutions for making peace with Google translate published in 2018, Cynthia Ducar and Deborah Houk Schocket analyzed the strengths and limitations of Google Translation (GT), a MT tool widely used, and proposed many valuable pedagogical suggestions. First launched in 2006, GT has made great progress not only in translating texts instantaneously, but also in listening, speaking and reading, which made it multifaceted. GT excels at conjugating verbs which enables lower-lever students to generate complex verb tenses that have not yet been studied, spelling and translating high-frequency idioms. However, pragmatical inaccuracy and intercultural ignorance when translating are the limitations that GT fails to break. When it comes to the pedagogical implications, many invaluable proposals were put forward by authors. It is essential to clearly and repeatedly inform students who grew up in a digital environment that "inputting data into GT and reproducing those results patently violates the code of academic conduct" (p.788). Additionally, motivating learners by "prioritizing learner-centered instruction; offering a challenging, content-infused curriculum; engaging students in project-and community-based learning" (p.789) can promote their awareness of language learning and involve them more in language learning so that their tendency to depend on MT may decrease. In order to expose learners to the pitfalls of MT, some pedagogical activities can be designed for students, such as, proposed by author, "translating a popular song from English into the target language and then comparing students' version with GT's" (p.789), which can make it clear evidence that translations are not simply "substituting words" and not rarely "verbatim reproduction of the original text" (p.789). In 2017, a notion of technology-facilitated language learning put forward in ACTFL's Statement on the Role of Technology in Language Learning was that "standard-based, instructor-designed, learner-centered, and aimed at developing proficiency in the target language through interactive, meaningful, and cognitively engaging learning experiences" (ACTFL, 2017a). What can be inferred from is that in the age of information and technology, advanced tools for language learning is not merely GT, Microsoft Word, WordReference (WR), a powerful online dictionary, and Linguee, a Web site that combines a dictionary with a search engine, can provide alternatives for instructors and learners to choose. Also, corpora databases can definitely not be neglected. Learners' proficiency development can not be achieved in vocabulary, expressions, grammar and syntax respectively without the engagement with texts and contexts. Some critical and social reading tools like InsertLearning and eComma can, to some extent, provide solutions to this problem by offering "appropriately leveled authentic materials on high-interest topics" (p.791). When instructors design classroom pedagogies to help students appropriately consult MT, there are several key points summarized by authors that they must bear in mind, including "(1) evaluate students' own knowledge of the available and emerging tools, (2) directly teach learners how to use appropriate technology responsibly, (3) review their beliefs about students' use of supportive technologies, (4) familiarize themselves with their institution's policies on academic honesty, and (5) decide how they intend to act and react when such policies are violated" (p.793), which are all significant and imperative for the future implementation of language teaching and learning.
Implications
When it comes to whether MT is helpful to language learning, research mentioned above have confirmed that it did make contribution to language learning more or less. What we can do is to leverage the use of MT in language learning, since MT now is considered as the fifth macro-skill to complete the other four, speaking, listening, reading and writing. As Krawer (1995) demonstrates that it is not the software itself, but the user of that software who determine the utility of machine translation systems. So, it is never overemphasized the importance of guiding learners how to use MT appropriately (White & Heidrich, 2013;Ducar & Schocket, 2018;Williams, 2006;Correa, 2014;Garcí a & Pena, 2011). A course related to teaching students to use MT tools in a responsible way should be included in the curriculum design across the upper elementary, secondary, and postsecondary spectrum with a beginning questionnaire used to estimate students' own knowledge of the available and novel technology tools for language learning in their first lesson, so that instructors can promote, rather than circumvent, students' progress to more sophisticated language proficiency with the help of MT (Ducar & Schocket, 2018). As aforementioned, the use of MT tools like GT can be mitigated by fully introducing the other technology tools assisting language learning, such as Microsoft Word, WR, Linguee, InsertLearning and eComma, and so forth, according to the specific pedagogical needs. Since instructors' attitudes and beliefs play a vital role in the use of MT tools in language learning, it is necessary to evaluate educators' own knowledge about these accessible and emerging technology tools, because when guiding students to use them appropriately, instructors without understanding said technology tools can not truly leverage the advantages of these tools and help students to achieve their further proficiency in language learning. That is to say, the most urgent thing to do is not to focus on how to instruct learners to properly use MT tools but to figure out what instructors' stances on and knowledge about MT tools used in language learning.
Actually, in the research conducted by Garcí a and Pena (2011), language learners have showed their concerns that although MT helps them to communicate better or with less effort, there is also the risk of making them dependent on MT and becoming lazy. For the students who are truly dedicated themselves to acquiring a foreign language, they can be aware that the use of MT can not lead them to what they want, so they will use it cautiously and accordingly. But there are still many students to consult MT tools when doing their FL assignment. One reason may be that the assignment appointed by instructors seems so overwhelmingly daunting (Ducar & Schocket, 2018) that they can only turn to MT. Therefore, the assignments should target the appropriate level of production as recommended in ACTFL Can-Do-Statements (ACTFL, 2017b) and evaluations and assessments can be shifted to focus on important content, meaningful communication and linguistic and cultural growth from grammatical accuracy alone. Also, clear instructions made by educators before starting the assignment can make it easier for learners to do. The other reason may be related to the motivation of learners. Helping learners autonomously communicate and further their proficiency seems more crucial than simply finish their assignments. It is beneficial to advocate the notion that learning foreign language as a 21 st -century skill and personal goals is a valuable investment in their own language development, because success in careers across a range of domains attributes to the well-developed skills in language (Strauss, 2017). Moreover, the information that graduates majoring in Foreign languages, Literatures, and Linguistics were the fewest underemployed in 2016 (Newton, 2018) can ultimately motivate our learners.
Conclusion
There is no doubt that the emergence and development of MT and its pervasiveness in the language learning classroom raises concerns about both language proficiency and academic honesty for instructors. Since the use of MT in language learning do further learners' language proficiency, which was revealed in many researches related to the use of MT in language learning, paying attentions to guiding learners appropriately to use MT seems more significant to the future pedagogy. Before that, the evaluation of instructors' stances on the use of MT in language learning classroom and their own knowledge about technological tools for language learning is a top priority. Instructors without understanding of these fast-developed technology tools may not be able to guide their students well. When we advocate that a course of using MT tools in language learning classroom should be included in the curriculum design, the learning of how to use technology tools for instructors should be a part of teachers' self-development simultaneously. With a holistic understanding of these MT tools, instructors can well expose learners to the pitfalls of MT tools, so that learners will not use them indiscriminately. However, no matter how fast the technology develop, students' language proficiency can not achieve merely with the help of MT. When there is a challenge, we need to embrace it, but not circumvent it. | 4,205.6 | 2020-10-15T00:00:00.000 | [
"Education",
"Linguistics",
"Computer Science"
] |
Bone in vivo: Surface mapping technique
Bone surface mapping technique is proposed on the bases of two kinds of uniqueness of bone in vivo, (i) magnitude of the principal moments of inertia, (ii) the direction cosines of principal axes of inertia relative to inertia reference frame. We choose the principal axes of inertia as the bone coordinate system axes. The geographical marks such as the prime meridian of the bone in vivo are defined and methods such as tomographic reconstruction and boundary development are employed so that the surface of bone in vivo can be mapped. Experimental results show that the surface mapping technique can both reflect the shape and help study the surface changes of bone in vivo. The prospect of such research into the surface shape and changing laws of organ, tissue or cell will be promising.
Introduction
The shape of bone is the adaptive result of bone in the mechanical and physiological environment [1][2][3][4]. A map, the visual representation of our real world symbol model, can reveal not only the spatial structure properties of an object but also the changes in time series [5][6][7]. The mapping of bone surface, therefore, is used as an approach to study the adaptability of bone morphology. Mapping technique has become an even more powerful and useful method to do scientific research. Mapping and flattening techniques have also been widely used in medical research [8][9][10][11][12][13]. They are both concerned with the development methods. The flattening technique develops the three-dimensional object to a two-dimensional one [14] while the mapping technique plays an essential role in interpreting the surface structure of an object [15]. The advancements and improvements of three-dimensional imaging of bone in vivo [16][17][18] have brought better data collection methods of bone surface, but the mapping techniques have not been systematically explored with satisfactory results.
To map the bone surface, some geographic marks and geographic coordinates such as the bone surface prime meridian, the equator line or the contour should be identified [19]. They should be defined on the bone's coordinate system. In order to study the bone's changes caused by the external factors, it is fundamental to set up a bone coordinate system when mapping the bone surface. In our study, we have set up a coordinate system with uniqueness on the principal axes of inertia of the bone in vivo. The coordinate system is thus employed to determine the prime meridian, and the average radius of the tomographic boundary is employed to determine the contour to map the bone surface. This means that the bone surface mapping technique could present an alternate approach to study the bone's morphology.
Standardized coordinate system of bone
The CT image of bone in vivo can generate the principal moments of inertia and the direction cosines [20]. Suppose the moment of inertia of the bone's tissue relative to its center of mass is a constant; then the magnitude of the bone's principal moments of inertia will be determined by its shape and mass distribution. The inertia tensor suggests that we can always find a group of coordinate systems where three products of inertia will be nil at the same time [21,22]. The magnitudes of these three principal moments of inertia could come out with three results: (i) all three are equal; (ii) two out of three are equal and (iii) each one is different from the other. When the object is homogeneous, in the first case, it is a sphere; in the second, an ellipsoid, a cube, a cylinder or a rectangular. When the bone's location and orientation relative to inertia reference frame (i.e. the coordinate system is defined by the CT coordinate system) are fixed, in the first case, there are numerous principal axes of inertia, while in the second case, the orientation of one principal axis can be determined, but not the other two. Therefore, in the first and second case, the principal moments of inertia have nothing to do with their principal axes of inertia. In the third case, however, the direction cosines of the bone's principal axis of inertia relative to inertia reference frame are unique, which means a one-to-one corresponding relation between the direction cosines of the bone's principal axes of inertia and its shape.
If the bone is defined as a collection of elements of volume ∆V , the element's position can be represented by (x oi , y oi , z oi ) (where o is located in the center of mass ), then the magnitude of the principal moments of inertia can be represented respectively by: where ρ is the density, ∆V = ∆x∆y∆z, ∆x and ∆y are the pixel sizes and ∆z the layer distances of CT images.
Let the angular displacements of the bone that take turns to rotate around axes x, y, z be α, β, γ. According to Equations 1, we can set up the following equation: (2) Next, differentiate Eq. 2, and let which will generate the following equation: It is apparent that when and ONLY when I x = I y = I z will Eq. 3 have a set of solutions.
x i y i ρ i ∆V are three products of inertia of inertia tensor. Within the range of [0, π], according to Eqs. 2 and 3, the limited rotations can always turn three products of inertia into zero at the same time.
The bone's shape is asymmetrical and its structure is anisotropic [23][24][25][26][27]. Eq. 3 exposes the direction cosines of principal axes of inertia relative to the inertia reference frame of the bone characterization are unique. As a result, the coordinate system set upon the principal moment of inertia can not only depict the position and orientation of the bone, but also verify the bone surface shape and its changes when making a quantitative analysis. Eqs. 2 and 3 also suggest that we can set up a coordinate system whose coordinate origin is located arbitrarily at the center of mass of the bone. After limited rotations, the coordinate axes is positioned on the principal axes of inertia.
Mapping of the Bone surface
CT scanning simplifies the bone as a collection of elements of volume ∆V (different densities). When performing an isotropic scanning (where pixel size must be the same as the layer distance), the volume of ∆V is a constant. The position of ∆V relative to the center of mass differs from one another. When the result of Eq. 3 is replaced for that in Eq. 2 accordingly and when the bone's coordinate system is positioned on the principal axes of inertia, the rotation will change the original CT image. It is necessary, then, to reconstruct the new image, which can be performed by the following equation: where (x oi , y oi , z oi ) stands for the position of ∆V after rotation, (x i , y i , z i ) for that of the reconstructed tomogram and trunc() for a function that truncates a number to an integer by removing the fractional part of the number. To keep the CT images isotropic, ∆d x = ∆d y = ∆d z is defined in Eq. 4, and its pixal size and layer distance are kept the same of those of the original image. According to Eq. 4, when keeping the ∆V to be in a cube, the new CT image after rotation remains to be closed and continuous.
CT scanning divides the bone surface into a collection of the tomographic image boundaries. In this way, the mapping of the bone surface has become an issue to develop the tomographic boundary, making it necessary to detect and draw the tomographic boundary. Before a new CT scanning, the equipment is reset, i.e. the gray value of the air is set as zero. The scanned tomographic images of bone are processed by Eq. 4, and their boundaries are drawn by the following equation: where z is the number of sequence of tomogram, (x, y) z the position of ∆V relative to the tomographic center of mass and ρ(x, y) z the density of (x, y) z .
When mapping the bone surface, the bone will be "cut" -from a cylindrical surface to a rectangle, or a rhombus. However it is cut, its ultimate area would be the same. But when it is a rectangle, there is only one. How to cut the bone into a rectangle? The two principal axes of inertia (the minimal and maximal principal moment of inertia) form a plane. On this plane, the bone surface boundary is called the prime meridian, which is used as the surface cutting line to develop the bone surface. The following equation will make it happen: where (x i z , y i z ) is the position of the ∆V at the tomographic boundary relative to the tomographic center of mass, (x c z , y c z ) the position of tomographic center of mass relative to the inertia reference frame and i the sequence of ∆V of the boundary after being cut.
Eq. 6 sequences the ∆V s at the tomographic boundary which has been cut. The average radium of the tomographic boundary perpendicular to the minimal (maximal) principal moment of inertia is defined as the "sea level", which is used as a datum line so that the tomographic boundary can be developed by the following equation: where z shares the same definition of that in Eq. 5, and (x i z , y i z ) have the same definitions as those in Eq. 6. Eqs. 4-7 have developed the closed surface into an open three-dimensional curved one with the properties of a contour. The three-dimensional curved map of the bone surface can be further developed into a two-dimensional plane. We can, however, translate the bone surface into a plane with the help of the following equation: where p(x, y) presents the position of ∆V in a plane whose surface has been flattened, and z stands for the value of contour on (i, j) in bone surface mapping. Eq. 8 suggests that the mapping of the bone surface actually serves as a simulation of the bone surface structure. It is a space model of an image symbol to represent the bone surface. It shares the consistency with the real body of the bone surface structure.
Experiment
Prior to our study, the Ethic Committee of Guangzhou Institute of Physical Education has proved our study and the participant has provided fully informed consent to participate in this study by signing a written consent form. From January 2008 to August 2009, we followed the track of Guangdong Provincial Youth Team of Male Wrestlers by using a 64 slice scanner (Brilliance 64, Philips Medical Systems). Excluding team members who left the team halfway and those with injuries, in January 2008 and August 2009, we were able to scan and collect the data of a 25-year-old wrestler's sesamoid bones beneath the head of the first metatarsal bone of both feet. See Table 1 for information of the sesamoid bones.
Let's make a mapping analysis to the right foot's external sesamoid beneath the head of the first metatarsal bone. First, rotate the first-time scanned sesamoid bone around axis x from 0 to 180 degrees. According to Eq. 2, the variations of (I y − I z ) α , (I x − I z ) β and (I x − I y ) γ are shown in Fig. 1, indicating that within the range of rotation from 0 to 180 degrees, an extremum exists in Eq. 2. We can get the result of α = 157.73 when the extremum is calculated by Eq. 3A. After the sesamoid bone rotates around axis x at 157.63 degree, it then rotates around axis y. The calculation by Eq. 3B generates the result of β = 8.92. When the sesamoid bone rotates around axis y at 8.92 degree, the calculation of Eq. 3C generates the result of γ = 172.07. When the coordinate system of the sesamoid bone rotate around axes xyz at 157.63/8.92/172.07 degree respectively, the axes of of the sesamoid bone coincide with those of the principal moments of inertia.
It's shown that the morphologically asymmetric and heterogeneously distributed bone has the uniqueness of direction cosines of their principal moments of inertia. An arbitrary coordinate system set upon the bone's center of mass can make every coordinate axis coincide with the principal moment of inertia by coordinate transformation, i.e. the proposed principal axis's coordinate system has its uniqueness. This method can be applied to both the homogeneous asymmetric geometry and the heterogeneous asymmetric geometry.
The principal axes of sesamoid bone is set up by Eqs. 2 and 3; the tomography of sesamoid bone is reconstructed after it is rotated by Eq. 4; the boundary of the sesamoid bone is drawn by Eq. 5. When the prime meridian is determined by the principal axes, the sesamoid bone surface is developed by Eq. 6 and the mapping of the sesamoid bone surface is accomplished by Eq. 7. See Fig. 2. Figs. 2A and 2B indicate that the mapping technique can better illustrate the bone surface properties. Fig. 2C suggests that the mapping technique can be used as a quantitative method to study the changes of an object's shape. Using the bone surface mapping, Eq. 8 can flatten the bone surface as a plane. See Fig. 3.
It can be concluded that the professional training has caused adaptative changes of the sesamoid bone's shape. The structural changes can be depicted by the density and distribution while the shape and its changes can be analyzed by the bone surface mapping.
Conclusion
The generalized Papoulis theorem [29] elucidates that the bone surface shape keeps its geometric invariance, such as rotation, translation or dimension change [30,31]. This ensures the consistency of the CT scanning results of different postures of bone in vivo when its isotropy is ascertained. The uniqueness of the relative consistency of the inertia reference system of principal moments of inertia on direction cosines provide evidence to the bone surface mapping technique. The characters such as the geometric invariance and the uniqueness of the coordinate system of the principal moments of inertia have enabled the bone surface mapping technique to depict the bone's external morphological characters. This can advance the research of the morphological mechanisms. The experiment of the bone in vivo signifies that the bone mapping technique adds another research method and supplements the analytical method of the bone's three-dimensional imaging technique.
We can understand the world through a map. When the bone surface mapping technique reveals its surface information through a "map", the activities of our life evolve continuously on this map. We anticipate that this mapping technique will be widely used in related disciplines. A closed bone surface means that when it is continuous, there is no boundary. But when the surface is cut by the prime meridian, a boundary emerges. So when smoothing the surface, cloud computing method is adopted [28] to ensure the integrity of the object's shape. | 3,508.4 | 2010-10-04T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Case-Base Reasoning (CBR) and Density Based Spatial Clustering Application with Noise (DBSCAN)-based Indexing in Medical Expert Systems
Case-based Reasoning (CBR) has been widely applied in the medical expert systems. CBR has computational time constraints if there are too many old cases on the case base. Cluster analysis can be used as an indexing method to speed up searching in the case retrieval process. This paper propose retrieval method using Density Based Spatial Clustering Application with Noise (DBSCAN) for indexing and cosine similarity for the relevant cluster searching process. Three medical test data, that are malnutrition disease data, heart disease data and thyroid disease data, are used to measure the performance of the proposed method. Comparative tests conducted between DBSCAN and Selforganizing maps (SOM) for the indexing method, as well as between Manhattan distance similarity, Euclidean distance similarity and Minkowski distance similarity for calculating the similarity of cases. The result of testing on malnutrition and heart disease data shows that CBR with cluster-indexing has better accuracy and shorter processing time than non-indexing CBR. In the case of thyroid disease, CBR with cluster-indexing has a better average retrieval time, but the accuracy of non-indexing CBR is better than cluster indexing CBR. Compared to SOM algorithm, DBSCAN algorithm produces better accuracy and faster process to perform clustering and retrieval. Meanwhile, of the three methods of similarity, the Minkowski distance method produces the highest accuracy at the threshold ≥ 90.
Introduction
Expert system is a part of artificial intelligence that has been developed widely to help diagnose of diseases. The method commonly used in expert systems is rulebased reasoning, or case-based reasoning [1]. Case-based reasoning (CBR) methods have been widely applied in the medical field [2] - [6], due to the ability of CBR to work like an expert by retrieval of previous cases to solve new cases according to the given diagnosis [7]. The more old cases stored in the case base, the CBR system will be smarter in finding solutions for a given case. Problems with computation time and memory space requirements become a challenge especially when too many old cases exist on the case base. That is because the system must calculate the value of the similarity of new cases with all the old cases on the case base. A solution that can be used to shorter computational time is by finding solution that does not need to involve all data on the case base, but sufficient with some of the closest cases, so that the indexing process is needed [8].
Research focusing on the indexing process in CBR has been carried out with various methods, such as Fuzzy algorithm [9], back propagation classification algorithm [10], K-means clustering algorithm [11], and Local Triangular Kernel-Based Clustering (LTKC) algorithm [12]. K-means algorithm needs data of number of clusters that will be formed, because the assumption of the number of clusters determined at the beginning does not necessarily produce an optimal cluster. This method also has a low tolerance for data that contains noise and outliers. The back propagation and LTKC training process require quite long time because they have to try the training parameters one by one to get the best cluster. Clustering can group data sets that are not labeled into several data clusters based on similarity and dissimilarity [13]. Basically these Vol. 5 algorithms work by grouping cases based on the specified features. When the retrieval process is carried out on the CBR, searching for similarity values can be conducted to cases that have the same index as new cases. Clustering algorithm can describe the patterns and tendencies contained in data groups. Each group represented by the value of the center of the cluster (cluster centroid). Cluster center enables measurement of similarity between new data and all cluster centers so it can determine the most similar data groups. The proposed clustering method uses Self-Organizing Maps (SOM) compared to Density Based Spatial Clustering Application with Noise (DBSCAN). SOM is an artificial neural network-based learning algorithm that is good in exploration and visualization of highdimensional data [14]. The training process on the SOM algorithm does not require supervision, the SOM network will learn without having a target in advance [15]. This is different from some artificial neural network methods such as back propagation which requires a target during the learning process. Density-based clustering methods such as DBSCAN have the characteristics of clusters with high density surrounded by clusters that have with low density. DBSCAN has advantages such as: being able to handle large amounts of data in short time, having tolerance to data containing noise and outliers, being able to recognize irregular shapes, being able to handle high dimensional data, and unnecessary to know the number of clusters to be formed [16] [17].
Each clustering algorithm requires testing to determine the quality of the clustering results. The validation of the results of clustering in this study was performed by evaluating the results of the clustering algorithm based on the structure that has been determined in the data set using Davies-Bouldin index and Silhouette index [18]. The process of looking of similarity between new cases and old cases in this study uses the nearest neighbor retrieval technique, by calculating the value of similarity or closeness between new cases and old cases. Three methods were used and compared, that are manhattan distance similarity, euclidean distance similarity and minkowski distance similarity.
Method a. Knowledge Acquisition
This study used case data of medical record of patients with severe malnutrition at RSUP Dr. Sardjito Yogyakarta [3]. The malnutrition disease data consists of 90 data sets divided into 70 data as training data and 20 data as test data. The second case data is the medical record of patients with heart disease in the Medical Record Installation of RSUP Dr. Sardjito Yogyakarta [6]. The heart disease case data consists of 135 data sets divided into 115 data as training data and 20 data as test data. The third data is the diagnosis data on suspected thyroid disease from the Garvan Institute. The thyroid disease case data consists of 1428 data sets divided into 1000 data as training data and 428 data as test data.
b. Case Representation
The case representation used the frame model. Cases are represented as collections of features that characterize cases and solutions for handling these cases. Weighting of features is important to determine the level of significance of the feature to the disease. The weighting of each feature for each case is performed by an expert. If there are new cases, the weighting of disease features is divided into two categories, that are No and Yes. The value for each category is 0 for no symptoms and 1 for symptoms. After the old cases in the case base are clustered, the old case data is represented again by adding new knowledge derived from cluster center. Table 1 is a representation of cases of malnutrition in children under five who added new knowledge derived from the value of the cluster center.
c. Indexing
The indexing method in this system used clustering method, i.e Density Based Spatial Clustering Application with Noise (DBSCAN) compared to Self-Organizing Maps (SOM). DBSCAN or SOM is used to group old case data into groups based on similarity and dissimilarity, so in each group contains similar data.
1) Data Normalization
The data normalization used the Min Max Normalization method. Normalization features include age, TSH, T3, TT4, and T4U since they have significant vulnerability. Min Max Normalization requires Minimum and Maximum age features. For example the age feature of malnutrition cases is a minimum value of 0 months and a maximum value of 60 months, and the age feature of a heart case minimum value is 0 years and the maximum value is 100 years. Equation 1 is the Min Max Normalization formula.
(1) Self-Organizing Map (SOM) algorithm or often referred as Kohonen Artificial Neural Network is one of the topology of Unsupervised Artificial Neural Network (Unsupervised ANN) in which the training process does not require supervision (target output). The clustering design using the SOM method is shown in the flowchart of Figure 1 [15]. Explanation of the flowchart diagram of Figure 1 is as follows: a) Initializing the weights of each feature in the case base (ix) as input from SOM, number of clusters (k), initial weight (wi), and maximum iteration as SOM parameters. b) Determine the learning rate (η) and decrease learning rate (α). c) For each case base (xi) calculate the euclidean distance (Dj) to all initial weights of SOM (wij) using equation (2). After knowing the euclidean distance to each weight, look for the index that has the smallest value.
(2) d) Each wij weight within the radius of Dj neighborhood, the weight is updated by equation (3).
old new (4) f ) As long as the maximum number of iterations has not been reached, repeat steps c through e. g) Output clustering using the SOM method is a clustered case database and new weights are used as cluster center values.
3) Density Based Spatial Clustering Application with Noise (DBSCAN)
Density Based Spatial Clustering Application with Noise (DBSCAN) is one of the density-based clustering algorithms. The DBSCAN algorithm works by expanding high density regions into clusters and placing irregular clusters in the spatial database as noise. The clustering design using the DBSCAN method is shown in the flow chart of Figure 2 [16]. DBSCAN has 2 parameters, that are Eps or ε psilon (maximum radius of the neighborhood) and MinPts (minimum number of points in the Epsneighborhood of a point).
Explanation of the flow diagram of Figure 2 is as follows: a) Initializing the weights of each feature in the case base as DBSCAN input, the maximum radius of the neighborhood (Eps) and the minimum number of points in the Eps-neighborhood of a point (MinPts) as a DBSCAN parameter. b) Specify one data as a random starting point (p). c) For each case data in the case base, calculate the value of ε psilon or all distances that are density reachable to p using equation (5).
d) If the amount of case data that meets ε psilon is more than MinPts, then p is a core point and one cluster is formed. e) If there is no case data that is density reachable to p or the amount of case data that meets Eps is less than MinPts, then p is Noise. f ) Repeat steps c through e until all cases of case data base are processed. g) Calculate the cluster center value (cluster centroid) using the average value for each cluster group. h) The output of the case database is clustered and the average value is used as the cluster center value.
c. Cluster Evaluation
The evaluation methods used in this system are the silhouette index and the Davies-Bouldin index methods. These methods are used to test the quality of the results of clustering. These methods are cluster validation methods that combines cohesion and separation methods. To calculate the value of silhouette index and Davies-Bouldin index, the distance between data is acquired by using the euclidean distance formula.
1) Silhoutte index
Silhoutte index was used to measure the quality and strength of a cluster, how well an object is placed in a cluster. The step of calculating the silhoutte index value starts with calculating the average distance from object i to all objects in a cluster. The calculation will produce an average value called ai. Next, calculate the average distance from object i to objects in other clusters. Of all the average distances, take the smallest value, the value Where s (i) is a Silhouette index value, a (i) is the average distance between point i and all points in A (the cluster where point i is located), b (i) is the average distance between point i to all points in clusters other than A. The silhoutte index value can vary between -1 to 1. The clustering result is good if the silhoutte index value is positive (ai <bi) and ai approaches 0, so that the maximum silhoutte index value is 1.
2) Davies-Bouldin index (DB index)
Davies-Bouldin Index has characteristics in validating clusters based on the calculation of quantity and derived features of the datas et. DB index value is calculated using equation (7) [18].
(7)
Where DB is Davies-bouldin value, c is the number of clusters, d (xi) and d (xj) case data in clusters i and clusters j, d (ci, cj) is the distance between clusters ci and cj. The smaller value of Davies Bouldin Index shows that the cluster configuration scheme is optimal and the cluster quality is getting better.
d. Retrieve and Reuse
CBR systems built with cluster-indexing can provide additional knowledge derived from previous cases. This knowledge is acquired from cluster center values generated from cluster analysis and added as a representation on a case base. After the case is represented by adding knowledge to the cluster center value, the case is then stored in a database. Figure 3 shows the architecture of the CBR system architecture with cluster-indexing.
If there are new cases, the system initializes the symptoms experienced by the patient and represents them as new cases. The system will search for the most relevant clusters by calculating the similarity of symptoms of new cases to the cluster center values. Similarity calculation is performed by comparing the euclidean distance between new cases with the cluster center value using the Cosine Coefficient method. After obtaining an index or cluster that is relevant to the new case, then a calculation is performed to find the similarity value between the new case and the cases in the case base that are in the same cluster. The threshold value of similarity are 0.7, 0.8, and 0.9 which means that if the highest similarity is greater than the threshold and close to 1, so this indicates that the new case has the exact same resemblance to the old case then the solution from the source case will be given to the user (reuse ). If the similarity value decreases or is below the threshold, then the case will be stored in the database as a revision case which later the case under the threshold will be adjusted from the solution of the previous cases by the expert (revise). The new case is then saved to the case base by considering the cluster center value to become new knowledge (retain).
1) Determination of the Closest Cluster
During the process of finding a solution for a case, the CBR system will search for clusters that are most relevant to the new case by calculating the similarity of the symptoms of the old case with the cluster center value. Similarity calculation performed by comparing distances using the cosine coefficient method [19]. If given 2 vectors X and Y, then the similarity value can be found by equation (8): where "‹ ›" denotes the multiplication of vectors X and Y, and "| X || Y |" denotes the norm for each vector. For vectors with non-negative elements, the cosine similarity value always lies between 0 and 1, where 1 indicates the two vectors are really the same, and 0 indicates the opposite.
The retrieval process used the nearest neighbor method. Nearest neighbor works by calculating the value of similarity, that is, the closeness between new cases and old cases based on matching weights of a number of existing features. There are two types of similarity measurements that are local similarity and global similarity [6]. Local similarity is a measurement of proximity at the feature level, whereas global similarity is a measurement of proximity at the object level (case).
Local similarity used in this study can be divided into two types, which are numerical and symbolic. The features included in the symbolic type are the symptom features and risk factors, while the numerical features are the sex and age features. Numerical data is calculated using equation (9) ) ( Note: f (Si, Ti) is the similarity of the i-feature of the old case or source case (S) with the new case or target case (T), Si is the value of the i-feature of the old case (source case), Ti is the i-feature value of the new case (target case), fmax is the maximum value of the i-feature on the case base and fmin is the minimum value of the i-feature on the case base. Meanwhile, symbolic data will be calculated using equation (10).
Note: f (Si, Ti) is the i-th feature similarity of the S (source) and T (target) cases, Si is the i-th value feature of the old (source) case and Ti is the i-th value feature of the new case (target). Global similarity was used to calculate the similarity between new cases and cases on the case base. The methods to calculate global similarity in this study are Manhattan distance similarity in equation (11), euclidean distance similarity in equation (12), and minkowski distance similarity in equation (12) [20]. fi (Si, Ti) is the similarity of the i-th feature of the old case and the new case, the similarity of the i-th feature of the source case and target case, n is the number of features in each case, i is the individual feature, between 1 s / dn, wi is the weight given to the i-th feature, and r is the minkowski factor (positive integer). The value of r is equal to 2 for euclidean distance and equal to 3 for minkowski distance similarity.
e. CBR System Testing
Testing is performed by applying new cases, which are 20 data as test data for cases of malnutrition and heart disease and 428 data as test data for thyroid disease cases. The results of the system are then compared with the data contained in the medical record data. System accuracy is calculated by comparing the number of correct diagnosis with the amount of test data. The accuracy in this study is acquired by comparing the number of correct decision results and the amount of test data in accordance with equation (13). (13) Note: ki is the i-th decision (ki is 1 if the decision is right and 0 is if the decision is wrong), n is the amount of test data.
Results and Discussion a. Case Base Clustering Process
The process of clustering of old cases on a case base used the SOM and DBSCAN clustering algorithms. The SOM method requires three parameters, which are number of clusters, maximum iteration, and learning rate. While the DBSCAN method requires two parameters, that are minimum points and epsilon. The parameter value is optimal if it produces the minimum Davies-Bouldin index value and the highest silhoutte coefficient and accuracy. The optimal parameter determination process carried out by clustering each case base data set using several combinations of parameters. Then each combination of these parameters is used to calculate the accuracy of the CBR retrieval process. Table 2 shows the SOM parameters and Table 3 shows the DBSCAN parameters.
The results of clustering with the SOM method depend on the initial weight given and the number of neurons in the output layer. Meanwhile, in DBSCAN the greater the minPts value, the more noise will be, this also affects the quality of the cluster. Therefore, determining the εpsilon and minPts values at the beginning of the clustering process is very important. The quality of clustering results for the SOM and DBSCAN methods can be seen from the Davies-Bouldin index value, Silhoutte index and accuracy. The smaller the Davies-Bouldin index value, shows that the cluster parameters are optimal and the better the cluster quality. Meanwhile, for the Silhoutte index value getting closer to 1 shows that each case data is in the right cluster and there is no overlapping classes. Accuracy is determined by comparing the system diagnosis results and the actual diagnosis without applying a threshold value. The accuracy values of each trial are compared and the highest value for each data set is searched on the case base. The highest accuracy is used as the optimal clustering parameter.
b. System Capability Analysis
The process of analyzing the ability of the system is divided into three scenarios. The first scenario is the diagnosis of the system using CBR non-indexing, the second scenario is the diagnosis of the CBR system with indexing using the SOM algorithm and the third scenario is the diagnosis of the CBR system with indexing using the DBSCAN algorithm. The searching process of relevant clusters with CBR cluster-indexing used the cosine similarity method and the similarity calculation process for all three scenarios used the Manhattan distance similarity method, euclidean distance similarity and minkowski distance similarity. Testing is performed by applying new cases, which are 20 data as test data for cases of malnutrition and heart disease and 428 data as test data for thyroid disease cases. Then the amount of the correct data is calculated, and the accuracy is determined according to the threshold and the average retrieval time for each similarity method. Based on the 3 testing scenarios, there are differences in the results of each scenario, as seen in Table 4 for cases of malnutrition, Table 5 for cases of heart disease and table 6 for cases of thyroid disease. The testing results of the three scenarios shows that the best accuracy and retrieval time at the threshold ≥ 90 for malnutrition disease data, acquired using the Minkowski distance method which is implemented on the CBR with indexing using the DBSCAN method. The accuracy is 100% with an average retrieval time of 0.02305 seconds. Research [3] with the same case data, reached the best accuracy of 85% with a threshold ≥ 0.75. So in the case of malnutrition, CBR with indexing using the DBSCAN method can improve accuracy. The best accuracy and retrieval time value at threshold ≥ 80 of heart disease data acquired using the Minkowski distance method implemented on CBR with DBSCAN indexing. The accuracy is 100% with an average retrieval time of 0.0421 seconds. This accuracy is as good as research [6] which produces 100% accuracy at threshold ≥ 80. The best retrieval time for thyroid disease data at threshold ≥ 90 is acquired using CBR with DBSCAN indexing of 0.107 seconds. This value is better than research [12] with an average retrieval time of 0.3045 seconds. But for accuracy calculation, CBR non-indexing is able to guess 392 correct data from 428 test data and produce an accuracy of 91.56%. Whereas CBR with cluster-indexing is able to guess 389 data from 428 test data and produces an accuracy of 90.89% which is implemented with the DBSCAN algorithm. This accuracy is smaller than the accuracy produced by research [12] using the Minkowski distance method with an accuracy of 92.52%.
In CBR with cluster-indexing the number of clusters greatly influences the retrieval time. Because the increasing number of clusters will make the cluster size of each cluster being relatively reduced. The retrieval time of the old case matching process will also be reduced, as the number of clusters decreases. On the other hand the time to search for relevant clusters will also increase in the process of finding the cluster center along with the increasing number of clusters. The number of clusters in the SOM algorithm is determined based on the number of output neurons while the initial weighting of the initial neurons is determined randomly. In DBSCAN, the larger value of εpsilon the wider scope of the cluster. While too small εpsilon will produce a large number of clusters and the distance of objects are very close each other. Likewise, too large minPts will produce a lot of noise. This will affect the accuracy of the CBR system with cluster-indexing.
Non-indexing CBR always provides the highest similarity value as a solution. The solution is required by comparing new cases with all cases on a case base. If the CBR non-indexing finds cases with the same similarity value, the cases are sorted by the earliest calculation process and the top case is taken to be a solution. The diagnosis with the highest similarity is not always the same as the diagnosis given by experts. This is because the similarity method does not consider the level of confidence in the new cases. For the next research, it is necessary to add the level of expert confidence in diagnosing the disease since the different features that exist in a particular case.
Conclusion
The results of clustering with SOM algorithm are depend on the initiation of the initial weight given to the cluster and the number of neurons in the output layer. Initial weight initiation in the SOM algorithm is generated randomly so it is possible to obtain different clustering results for the same parameters. Likewise with the DBSCAN algorithm, the results of clustering are depend on the value of εpsilon and minPts specified at the beginning. Therefore, a proper method is needed to determine the most appropriate parameters for the SOM and DBSCAN algorithms in order to produce the best cluster.
In the case of malnutrition and heart disease data testing, CBR with cluster-indexing has better accuracy and shorter processing time than non-indexing CBR. Whereas in the case of thyroid disease the accuracy of non-indexing CBR is better than non-indexing CBR, even though CBR with cluter-indexing has a better average retrieval time. Cluster-indexing method with DBSCAN algorithm has a better accuracy, faster processing and retrieval time than SOM. Whereas, of the three similarity methods, the Minkowski distance method produced the highest accuracy at the threshold of ≥ 90. Further research needs to consider the level of confidence in the new case and the level of expert confidence of a case in calculating the value of similarity due to differences in features that exist in a particular case. | 6,014.4 | 2019-12-29T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Research on Music Wireless Control Based on Motion Tracking Sensor and Internet of Things
With the continuous development of society and rapid economic growth, intelligent music control technology has received more and more attention. At the same time, real-time motion tracking technology has also been developed more and more in the fields of virtual reality and human-machine control. This article is dedicated to developing a wireless music control system based on gesture tracking sensors. First, in the data collection part, an infrared sensor module based on the Internet of Things is used to automatically detect whether someone is approaching. When detecting that someone is approaching, the motion tracking sensor module captures and detects gestures and counts them through a counter. Then, the IoT data transmission module sends the acquired gesture information from the sending end to the receiving end. Finally, the particle swarm algorithm performs algorithmic intelligent processing and judgment on the transmitted data to realize wireless control of background music. After software and hardware debugging, a wireless music control model based on motion tracking was finally successfully established. The system has undergone a complete test, and the test results show that the system has strong stability. Users can easily control music equipment and achieve high accuracy of music control information.
I. INTRODUCTION
In recent years, with the development of lighting technology, control technology, and communication technology, various environmental lighting projects have developed rapidly, and the implementation of outdoor music and light performance projects will become a highlight of the city's night scene [1], [2]. At present, most city night scenes are mainly based on LED lighting [3]. The construction of large-scale outdoor music and light shows will become a jewel in the night scene [4]. The melody of music is more vivid and colorful. In order to further strengthen the construction of urban night scenes, increase the dynamics of lighting and sound effects to match the colorful lights [5], [6].
At present, there are mainly three protocols for data transmission of music and lighting control networks: DMX512 protocol (improved version of DMX512-A protocol), ACN The associate editor coordinating the review of this manuscript and approving it for publication was Yuan Tian . protocol and Art-Net protocol [7], [8]. The ACN protocol organized by the world-famous ESTA (Entertainment Services & Technology Association) is a dimming network protocol that represents the North American industry [9], [10]. The ACN protocol clearly states this: ACN is an advanced control network standard designed to provide next-generation lighting control network data transmission [11], [12]. ACN is going to complete more work including the DMX512 protocol. ACN will unify the lighting control network, allowing a single network to transmit more different types of dimming and other related data, and can connect dimming equipment from different manufacturers [13], [14]. The ACN agreement is not limited to the field of lighting, and is expected to apply to sound control and stage machinery. It can be applied to any network that supports the TCP/IP protocol, and the most common one is usually the Ethernet network [15]. In order to ensure that the music lighting control is foolproof, in addition to choosing mature technology, stable and reliable dimming products, scientific and reasonable control system design is also the key [16], [17]. As the command center of the music lighting system, the reliability of the music lighting control system directly affects the lighting effects of the application [18]. In recent years, major professional lighting manufacturers have designed and implemented several solutions in order to solve control problems. Strand Lighting, headquartered in Los Angeles, is one of the world's largest manufacturers of film and television stage lighting products. It has leading technology and has developed Serve and Show Net Configuration software and Show Net network systems [19].
Human motion capture systems are widely used in fields such as remote sensing control, athlete training, film production, and disease diagnosis. The movements of the human body can be regarded as a series of complex and regular movements. In order to complete a movement, each complete movement can be decomposed into the movements of individual limbs [20]. The movements of each limb are independent and mutually independent. limit. The various movements of the human body are composed of multiple degrees of freedom, and their complexity makes the computer simulation of limb movements still have many difficulties and challenges.
With the continuous development of society and rapid economic growth, intelligent music control technology has received more and more attention. At the same time, real-time motion tracking technology has also been developed more and more in the fields of virtual reality and human-machine control. This article is dedicated to developing a wireless music control system based on gesture tracking sensors. The second part of this article describes an overview of motion tracking sensors and related technologies of the Internet of Things. On this basis, the third section introduces the wireless music control based on motion tracking sensors and the wireless music control architecture based on the Internet of Things. The fourth section provides experimental results to verify the effectiveness of the proposed music wireless control plan.
II. RELATED WORK A. MOTION TRACKING SENSOR
Human posture recognition usually has two types: visionbased human posture recognition and sensor-based human posture recognition [21], [22]. Vision-based human gesture recognition technology started relatively early and the theory is relatively mature. It mainly uses algorithms such as support vector machines and hidden Marko's, and the recognition success rate or algorithm efficiency is relatively ideal [23]. Compared with vision-based gesture recognition methods, sensor-based gesture recognition has obvious advantages: it is not restricted by external conditions such as light, angle, and obstacles. Vision-based human gesture recognition methods are more dependent on the external environment, and motion data can only be obtained in an environment with sufficient light. If you are in a high temperature, smoke or vibration environment, you cannot use vision-based human gesture recognition methods. Users can do their habitual movements to obtain data; and the sensor used to capture the human body posture has the characteristics of small size and high sensitivity, and can be placed anywhere on the body for the user to carry [24], [25].
The wireless human posture recognition system is mainly composed of TMX4903 gesture tracking sensor, Arduino UNO R3 hardware development board, MPU6050, NRF24L01, power supply, etc. It mainly completes the recognition of human posture actions and the wireless transmission of recognition results [26], [27]. The hardware design block diagram of the wireless human body gesture recognition system is shown in Figure 1. It mainly includes the following parts. If the single-chip microcomputer does not receive the data, the counting module and the gesture sensing module are closed. If the microcontroller receives the data, it judges the gesture through the algorithm of gesture recognition.
The main functions realized by the program of the wireless human posture recognition system include collecting human posture data, calculating posture angle, human posture recognition, and wireless transmission of recognition results [28], [29]. The system is mainly divided into three parts: data acquisition node, relay node, server, and the software design block diagram of the wireless human body gesture recognition system is shown in Figure 2. Uplink data communication is mainly to send the collected sensor information to the server after filtering the relay node. The process of data communication is divided into two types, uplink data communication and downlink data communication [30]. Downlink data communication is mainly sent by the server to the relay node within a certain period of time.
The relay node forwards it to the data collection node, and the data collection node changes the sampling time according to the data. The data collection node uses the protocol to drive the three-axis accelerometer and the three-axis gyroscope, and sends the collected sensor data information to the relay node through the IoT module [31], [32]. The data collection node asynchronously monitors the synchronization information sent by the relay node at the same time to adjust the sampling time. The relay node is mainly composed of an Internet of Things module and a main control unit. The Internet of Things module is responsible for communicating with the server to form a sensor network, and the Bluetooth module is responsible for communicating with the data collection node [33], [34]. The server first establishes a 3D stick model of the human body, drives the human body model to move by receiving the sensor data information uploaded by the data collection node, and sends synchronization information regularly to correct the sampling time of the data collection node.
B. INTERNET OF THINGS TECHNOLOGY
The Internet of Things is a hot spot that has emerged in recent years. It can be said that it is an extension and derivative of the development of the Internet, and it is revolutionary for the Internet. Its important principle is to use a variety of information sensing equipment [35], [36]. For example, we often see radio frequency identification (RFID) technology, global positioning system, infrared sensors, laser scanners, gas sensors, etc. to effectively collect relevant information and data [37]. That is to say, all related items are connected to the Internet to facilitate people's identification, management and control [38]. At the same time, the collected sound, light, heat, electricity and other related information will be transmitted in the form of data information and interconnected with the Internet, effectively realizing the relationship between things and people, and things and things. The architecture design of the Internet of Things for the music wireless control system is shown in Figure 3.
The music wireless control system in this article consists of two parts: control center and monitoring station.
The hardware part of the monitoring center is relatively simple, requiring only a PC and a server [39], [40]. The PC is mainly used to install and manage the software of the system, and the server is used to store the transmitted data. According to the different functions, the hardware part of the monitoring station can be roughly divided into the following modules: embedded module, wireless sensor network module, wireless communication technology module and gesture tracking sensor module.
III. THE PROPOSED SCHEME
The second part of this article describes an overview of motion tracking sensors and related technologies of the Internet of Things. On this basis, the third section introduces the wireless music control based on motion tracking sensors and the wireless music control architecture based on the Internet of Things.
A. MOTION TRACKING SENSOR STRUCTURE DESIGN
The wireless music control system based on gesture tracking sensor includes three modules: information collection, data transmission, and data processing. The data acquisition module uses the infrared sensor module to periodically detect whether someone is approaching. When someone approaches the infrared sensor, the sensor senses the human body and transmits the signal to the Arduino UNOR3 microcontroller. It also starts counting at the same time. When the count reaches 30 seconds, if the TMx4903 sensor module senses a gesture, it will transmit data to the microcontroller. If the single-chip microcomputer does not receive the data, the counting module and the gesture sensing module are closed. If the microcontroller receives the data, it judges the gesture through the algorithm of gesture recognition. And according to the program's setting of the music control command represented by the gesture, different data is generated. Flow chart of music wireless control system is shown in Figure 4.
When the gesture is a command for music playback, the IoT signal transmitter will send playback data. When the gesture is a command for music pause, the IoT signal transmitter will send pause data. When the gesture is a command for music switching and upward switching, the IoT signal transmitter will send switching data. When the gesture is a command to switch music and switch down, the IoT signal sending end will send data. When the gesture is a volume adjustment and volume reduction command, the IoT signal sending end will send volume data. After receiving the 8bit data sent by the sending end, the signal receiving end of the Internet of Things sends the data to the single-chip computer on the music player side, and the single-chip computer at the music player side realizes the control of music playback.
B. SENSOR NETWORK OPTIMIZATION BASED ON PARTICLE SWARM ALGORITHM
In industrial production and scientific research, the problem of NP difficulty is often encountered. Intelligent algorithms can effectively solve high-dimensional, non-linear, VOLUME 9, 2021 and discontinuous problems. Traditional methods may have astronomical calculation times when solving NP-hard problems. In fact, the sensor network positioning problem is also NP-hard. Therefore, intelligent algorithms used in sensor network positioning can improve the accuracy of network processes. Particle swarm optimization algorithm is an optimization algorithm that simulates birds looking for food. The mark definition in the formula is shown in Table 1.
In biological populations, individuals and groups are closely connected, and individuals often cooperate with each other and exchange information to achieve their goals.
The particle swarm optimization algorithm is inspired by the predation behavior of birds, and it is found that the algorithm can be used to solve complex nonlinear problems in industrial production and research and to perform multi-objective optimization problems. All solutions to an optimization problem are all potential solutions in the feasible region.
First initialize random coordinates in the feasibility area.
The fitness value of each particle (estimated coordinate) can be calculated by the fitness function. According to the size of the fitness value, it can be judged whether the particle (estimated coordinate) meets the accuracy requirement.
Initialize the parameters and randomly generate a certain number of particles (estimated coordinates) in the feasible region.
Calculate the fitness value (precision value) of each particle (estimated coordinate) according to the fitness value function, and record the optimal value pets and the global optimal value.
Compare the current value of the estimated particle (estimated coordinate) l with the current optimal value of the particle (estimated coordinate) l.
Update each particle (estimated coordinates) according to formulas (3) and (4), and then return to the previous step.
Until the conditions are met, output the global extreme value (the highest precision value of all estimated coordinates) and the corresponding particles (estimated coordinates) and exit the loop.
The state estimation in target tracking is actually to estimate the current and future motion state of the target, including position, velocity, acceleration, etc., through a certain estimation method from a series of received measurement values. According to different application environments, the selected tracking algorithm is different, and the tracking accuracy obtained is also different.
IV. PERFORMANCE TEST A. MUSIC WIRELESS CONTROL TEST ENVIRONMENT
Through the motion tracking sensor node energy consumption model in the above chapter, we can know that the clustering structure has a great influence on the communication energy of wireless sensor networks. The energy consumption of communication nodes will directly determine the survival and applicability of wireless sensor networks. Therefore, in this chapter, we will carry out environmental testing of music wireless control system based on motion tracking and the Internet of Things.
B. MUSIC WIRELESS CONTROL SIMULATION
In practical applications, the motion of the moving target is nonlinear, so its state equation can be simulated as nonlinear. For the filtering of nonlinear models, the three algorithms introduced earlier in this chapter are generally popular, namely, extended Kalman filter, unscented Kalman filter and particle swarm algorithm. In order to study the tracking accuracy and error of the tracking algorithm based on the wireless sensor network, MATLAB is used to simulate these three filtering algorithms, and the results are analyzed. It can be seen that the state value calculated by the extended Kalman filter algorithm is very different from the actual state in the first 15 seconds. Although the tracking effect is better, it is extremely unstable. Of course, this has a lot to do with the selected state equation. The state equation produces frequent peaks, which increases the difficulty of tracking. It can be seen that the tracking effect of the unscented Kalman filter algorithm is acceptable, but the tracking results of part of the time are quite different from the real state, especially when there are a lot of rapid turns, the error value is relatively large.
The particle swarm algorithm has tracked the true state very well most of the time. Although the phenomenon of frequent loss of targets in a short period of time appeared in the early stage, the tracking effect is getting better and better with the increase of prior particles. The state values and real estate values calculated by the three algorithms of extended Kalman filter, unscented Kalman filter and particle swarm algorithm within 50s are shown in Figure 5 and Figure 6. Figure 6 shows the positioning error of each unknown node of the three algorithms when the communication radius is 30m, the number of unknown nodes is 50, and the number of anchor nodes is 20. It can be seen from Figure 6 that both the extended Kalman filter and the unscented Kalman filter have large error fluctuations and deviation data. The particle swarm algorithm has small fluctuations and the positioning error of 62% of the nodes is within 5m, but the vibration of the early error is stronger, the error situation in the middle and late stages is greatly alleviated, and the tracking process is well completed. If the single-chip microcomputer does not receive the data, the counting module and the gesture sensing module are closed. If the microcontroller receives the data, it judges the gesture through the algorithm of gesture recognition. And according to the program's setting of the music control command represented by the gesture, different data is generated. It can be seen that the improved algorithm in this paper not only reduces the positioning error, but also has better stability.
From the results of the motion tracking sensor node error test in the above chapters, we can know that the particle swarm algorithm proposed in this paper can improve the accuracy and efficiency of the motion tracking sensor. For the filtering of nonlinear models, the three algorithms introduced earlier in this chapter are generally popular, namely, extended Kalman filter, unscented Kalman filter and particle swarm algorithm. And different simulation environments will also have an impact on the effect of music wireless control, so we will carry out music wireless control simulation based on different environments to verify the applicability of the program in different environments. Figures 7-10 show the command category effects of the music wireless control system in different environments. The particle swarm optimization algorithm is inspired by the predation behavior of birds, and it is found that the algorithm can be used to solve complex nonlinear problems in industrial production and research and to perform multi-objective optimization problems.
Through the simulation test, we found that the system has achieved good simulation test results in different simulation environments. With the increase of data samples, wireless control commands of different types of music have been improved. The motion tracking signals collected above have been simply thinned and compressed. The state estimation in target tracking is actually to estimate the current and future motion state of the target, including position, velocity, acceleration, etc., through a certain estimation method from a series of received measurement values. The following will reconstruct the motion tracking signal. Since the motion tracking signal is a one-dimensional signal, try to use a more effective algorithm for one-dimensional signal recovery when choosing a reconstruction algorithm. In this study, the OMP signal reconstruction algorithm was selected. The algorithm first determines the position of the non-zero elements in the sparse signal according to the strength of the correlation between the measured value Y and the measurement matrix R, and then obtains the value of the non-zero elements by solving the least square problem. The music data reconstruction effect based on the test data sets I and II is shown in Figure 11 and Figure 12. The wireless music control system based on gesture tracking sensor includes three modules: information collection, data transmission, and data processing. The data acquisition module uses the infrared sensor module to periodically detect whether someone is approaching.
Through the simulation test, we found that the system has achieved good simulation test results in different ways. The music data reconstruction effects based on test data sets I and II can meet the needs of practical applications. The algorithm first determines the position of the non-zero elements in the sparse signal according to the strength of the correlation between the measured value Y and the measurement matrix R, and then obtains the value of the non-zero elements by solving the least square problem. It's just that the vibration of the error in the early stage is stronger, the error situation in the middle and late stage is greatly alleviated, and the tracking process is completed well. It can be seen that the improved algorithm in this paper not only reduces the positioning error, but also has better stability.
V. CONCLUSION
Real-time motion tracking technology has also been developed more and more in the fields of virtual reality and humanmachine control. This article is dedicated to developing a wireless music control system based on gesture tracking sensors and Internet of Things technology. Up to now, the research on tracking technology of moving targets based on wireless sensor technology is still an active field. Because particle swarm has more prominent advantages in the tracking of non-linear motion of the target, it is very important to study particle swarm algorithm. After debugging the software and hardware, this article successfully realized the music wireless control system based on motion tracking, and the system was tested completely. The test shows that the system is stable and the music control information is accurate, and users can easily control the music equipment. In the future, we will devote ourselves to the further research and development of existing music wireless control technology. | 5,101.6 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
ON THE HIGH PERFORMANCE COMPUTING FOR MOTIF DISCOVERY IN DNA SEQUENCES.
Several used in by sensitivity analysis on of computational cost can be reduced. Thus, the proposed approach can be used for the motif discovery effectively and efficiently.
In bioinformatics, one of the most important research problems is the Motif discovery in DNA sequences. The algorithm having accuracy and speed has always been the goal of research in bioinformatics, for solving this problem. Therefore, the idea of this research study is to modify the random projection algorithm to be implemented using high performance computing technique (i.e., the R package pbdMPI). The steps that are needed to achieve this objective are the main focus of this study, i.e. preprocessing data, splitting data according to number of batches, modifying and implementing random projection in the pbdMPI package, and then aggregating the results. To validate this approach, some experiments have been conducted. Several benchmarking data were used in this study by sensitivity analysis on number of cores and batches. Experimental results show that computational cost can be reduced. Thus, the proposed approach can be used for the motif discovery effectively and efficiently.
Issues in motifdiscovery can be categorizedinto 3 types, namely Simple Motif Search (SMS), Edit distance based (EMS), and Planted Motif Search (PMS) [3]. The purpose of SMS is to find all the motifs from lengths 1 to the specified length in all sequences of [4] while the purpose of the EMS is tofind all the motifs on the desired number of sequences [5]. PMS aims to find the motive that appears in every sequence that exists [6].
In PMS, there are two important input parameters: the desired length of motif symbolized by l and the number of mismatches denoted by d [7]. For example, there are three DNA sequences, as follows: S 1 = ATTGCTGA, S 2 = GCATTGAA, and S 3 = CATGCTTG. With l = 4 and d = 1, we obtain the following repetitive motifs: ATTG and TTGC. It can be seen that PMS is included in the NP-Hard problem, so that if this algorithm is run to look for all possible motives that appear in all sequences, then the time spent will be exponential [1]. 881 the random position determined by k (k-mers) values [1,9]. RP represents that mutations can occurany where so the projection is done randomly. Even though many algorithms have been introduced, since PMS is NP-hard problems, an implementation of the algorithms into parallel computing is necessary to be done. Therefore, this research is aimed to design and implement RP for dealing with PMS in parallel computing in the R programming language. The R programming language [10] is chosen since it has become the de-facto standard for statistics, data analysis, and visualization. Now a days, there are many algorithms, collected in software libraries/packages, that have been implemented and saved in the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/. In this repository, one of packages in R used for high performance computing and big data analysis is pbdMPI [11] that is used in this research.
In the literature, we found some relevant articles discussing implementations of motif discovery in parallel computing. For example, in Clemente & Adorna's study [12], random Projection algorithm was developed in the concept of GPU (GraphicProcessingUnits). Each processor will be directed into threads that work within the device or GPU. Meanwhile, the sequential process will be executed on the host or CPU. TEIRESIAS has been introduced to improve the speed on finding maximal pattern [13]. An enhancement of the PMSPRUNE algorithm has been proposed with two additional features: neighbor generation on a demand basis and omitting the duplicate neighbor checking [14]. Furthermore, there are some different approaches for dealing with patterns matching in various fields. For instance, multiple patterns matching methods was introduced for large multi-pattern matching [15]. Improving the scanning mode of Square Non-symmetry and Antipacking Model (SNAM) for binary-image is obtained by proposing the new neighbor-finding algorithm [16].
The rest of the paper is organized as follows: first, the global procedure of this research is presented in Section 2. In Section 3, a main contribution, which is a modification and implementation of parallel random projection by using the pbdMPI package, is discussed. To validate and analyze the proposel computational model, we conduct some experiments in Section 4 and some analysis in Section 5. Finally, we conclude the research in Section 6. Figure 1 shows the research design done in this study. It can be seen that first, we perform some preparation, such as identifying problems, research objectives, and literature study. These activities have been presented in the previous section. Then, we present a main contribution of this research, which is designing and implementing parallel random projection with the R high performance computing (i.e., thepbdMPIpackage). This part will be explained in the next section. After that, we conduct some experiments and their analysis of the results. Drawing some conclusión is presented in the end. Parallel Random Projection with the pbdMPI package:-Basically, the computational model proposed in this research can be seen in Figure 2. First, after reading and converting the input data from the .falsa file, we perform a modification of random projection by utilizing the R high perfomance computing (i.e., the pbd MPI package), called parallel random projection with pbdMPI. Detailed explanation regarding the proposed approach can be seen in Figure 3. The results of this model is all motifs, their starting indices, and computational costs. Figure 3, it can be seen that besides supplying some parameters related to the RP algorithm, we need to input the number of cores and batches. Since the R programming language needs to load data into random access memory (RAM), we need to define the number of batches so that each batch just takes less than 20% of total memory capacity. Furthermore, actually Step 1 to 3 and Step 6 to 8 are the same as the RP algorithm on the standalone mode. However, from Step 4 to 5 the tasks are conducted in parallel computing by using pbdMPI commands. An important part of these steps is a rule to divide the sequence into numbers of batches. Moreover, the rule should prevent all possible motif including the sequence even though it has been splitted into several batches. So, in this case we implement the following equations:
Research Method:-
where and are starting and ending indices for cutting the batch of . and are the length of sequence, number of batches, and length of pattern, respectively. It should be noted that the starting index starts from = 2. For example, given the sequence S = CAGTGACGTAATCA, and the length of pattern is 3. So, according to (1) and (2), we obtain the following batches: S 1 = CAGT; S 2 =GTGACG; and S 3 =CGTAATCA. By following how the algorithm random projection generates k-mers, we obtain the following k-mers on all batches that 883 are the same as k-mers on the sequence (without splitting into batches): CAG; AGT;GTG; TGA; GAC; ACG; CGT; GTA; TAA; AAT; ATC; and TCA. It means that even though the sequence has been splited and processed by different cores, the results of RP and parallel random projection are the same.
Experimental Study:-Data Gathering:-
The data used in this study obtained from research in [17]. To download the data can be through the site of University of Washington Computer Science & Engineering on page http://bio.cs.washington.edu/research/download. In total, there are 52 data sets of DNA sequences derived from four species, 6 of which are derived from the Drosophila melanogaster sequence, 26 data derived from human sequences, 12 data derived from rat sequences and 8 other data derived from the Saccharomyces cerevisiae sequence. In each data file there are several sequences that number between 1 to 35 sequences. Then, every sequence that resides on the file has a variable length ranging from 500 to 3000 base pairs.
In this case, we only consider to use four datasets as follows: the dm01r.fasta and dm05r.fasta files that are DNA sequences of Drosophila melanogaster, then hm01r.fasta derived from the human sequence, and muso4r.fasta which is the rat DNA sequence as the input data. The dm01r.fasta file contains 4 DNA sequences with the total length of sequence is 6000, while the dm05r.fasta file consists of 3 DNA sequence with the length of 7500. The hm01r.fasta and mus04r.fasta files have the DNA sequence length of 36000 and 7000, respectively.
Experimental Design:-
In this study we conduct two simulations: standalone and parallel computing (i.e., multicore) modes. Each group will use all data as mentioned previously: dm01r.fasta, dm05r.fasta, hm01r.fasta, and muso4r.fasta. Furthermore, in accordance with the algorithm, some parameters should be assigned, as follows: the length of motif and mismatches
Results and Analysis:-
Since the limited space, in this section we illustrate the results and their analysis for a particular dataset only. For example, on the stand alone mode, a comparison of the number of motifs found according to m, θ, and (l, d) on the 884 dm01 r data set is shown in Figure 4. It can be seen that the higher numbers of mismatch makes the higher of numbers of motifs. Furthermore, on the stand alone mode, we can compare the computational cost with length of DNA sequence on the different (l, d) and θ as shown in Figure 5. It is obvious that the longer length of DNA sequence takes the higher computation cost. It should be noted that these lengths also represent the datasets used in the experiments, such as the dm01r has the length of 6000. On the parallel computing mode, Figure 6 shows that the comparison between the computational costs and numbers of cores when we used the dm01r dataset on (l,d) = (6,2), θ = 3, and b = 10. It can be seen that the proposed model has been successful since in general speaking the computation cost can be reduced by adding the number of cores. It means that the computational time on stand alone needs four times longer than using 2 cores. Moreover, the standalone mode took more than ten times compared with parallel computing using 3 cores (i.e., 2.52 seconds). Using 6 cores, the computation can be faster around 34 times compared with the standalone mode. So, now it is obvious that the proposed model is much faster than the standalone mode. We also compared computational time gained from experimental results on the previous research [1] even though there are different data on the file dm01r and mus04r. The number of DNA sequences contained in the file dm01r is 4 with the length of 1500 for each sequence while in the research [1] the dataset contains 5 DNA sequences. In the file mus04r the number of DNA sequences used in this experiment is 7 sequences with the length of each sequence is 1000 while only 6 sequences were used by the previous research. The comparison can be seen in Table 1. It can be seen that all experiments conducted in this research are faster than the study in [1]. Computational cost (s)
Conclusion:-
The main contributions of this research are as follows 1. To propose the computational model for modifying the random projection algorithm, called parallel random projection, for dealing with planted motif search by utilizing the R high performance computing (i.e., the pbdMPIpackage) and 2. To implement the proposed model and then valídate it for finding motifs on DNA sequences. According to the experiments, we can state that the proposed model is able to reduce the computational cost significantly. Moreover, a comparison with the previous study has been done, and it is shown that the proposal produced better results in the term of computational cost.
In the future, we have a plan to improve the model by using Big Data platform, such as by using the programming model of Map Reduceon Apache Hadoop [18] and Resilient Distributed Datasetson Apache Spark [19]. Moreover, the different toools for utilizing parallel computing, e.g., the foreachpackage [20], can be used as the study in [21]. Different tasks in the related research to bioinformatics can be applied to test the proposed model as well, such as prediction on cáncer [22], kidney disease [23], and sleep disorder [24]. | 2,846.2 | 2018-07-31T00:00:00.000 | [
"Computer Science"
] |
Intraguild predator drives forest edge avoidance of a mesopredator
Interactions between top predators and mesopredators of the same guild often result in habitat segregation restricting interactions to shared habitat edges. Although negative edge effects are recognized as important spatial patterns in the ecology of fragmented landscapes, the underlying mechanisms of predator–prey interactions resulting in negative edge effects remain unknown. To disentangle top-down effects of intraguild predators and bottom-up effects of shared resources on mesopredator spatial distribution, we recorded the occurrence of tawny owls Strix aluco in forests and their prey, the little owl Athene noctua in adjacent open areas over 2 yr across 687 km2 in Southern Germany. We developed a new, asymmetrical dynamic two-species occupancy model investigating spatial interactions while accounting for imperfect detection. Little owl occupancy was strongly reduced within 150 m of forests, but only in the presence of tawny owls. Analysis of over 30 000 telemetry locations of 275 little owls showed that little owls strongly avoided areas closer than 150 m from the forest during range use. These results suggest that the negative edge effect is due to forest edge avoidance rather than direct predation. Potential confounding mechanisms such as food depletion or habitat avoidance at forest edges can be ruled out. Thus, top-down effects caused by avoidance of intraguild top predators shape the spatial distribution of mesopredators such as the little owl. While habitat complexity mitigates multitrophic interactions within habitats, it is expected to reinforce multitrophic interactions between habitats, potentially leading to the suppression of mesopredators from suitable habitats.
Introduction
The spatial structure of species communities is affected by food webs whose predator-prey interactions may act by direct lethal predation or by nonlethal risk effects based on anti-predator behavior (Creel andChristianson 2008, Cresswell et al. 2010). Nonlethal effects include chang-es in the spatial behavior of prey such as avoidance of areas of high predation risk (Lima and Dill 1990, Heithaus and Dill 2006, Cresswell et al. 2010). Since predation risk varies according to landscape topology, habitat composition, and the abundance of specific predators, prey species constantly adapt their behavior to a "landscape of fear" (Brown et al. 1999, Laundré et al. 2001).
MICHEL ET AL.
Perceived predation risk can shape the spatial behavior of prey at different levels: home-range selection (Fontaine and Martin 2006), habitat use (Willems and Hill 2009), dispersal movements (Otsuki and Yano 2014) and thus, distribution and dynamics of prey animals throughout their lives (Cresswell 2008). Intraguild predator-prey interactions, i.e., interactions between a top predator and a mesopredator sharing the same food resources (Polis et al. 1989) are intensified by mutual competition for food. In the absence of avoidance behavior, encounter rates of mesopredators and their intraguild predator at shared foraging sites of high food availability are expected to exceed those of predator and prey with completely distinct diets, resulting in elevated predation risk in intraguild systems (Morris 2005). Compared to simple predator-prey interactions, intraguild predators additionally profit from exclusion of their intraguild prey from shared food patches by reduced depletion (Polis and Holt 1992). Life-history theory predicts that in prey strategies to minimize predation should evolve, for example exploitation of alternative food sources or use of distinct habitats (Korpimäki 1987), depending on the densities of both predator and prey (Heithaus 2001). As a result of increased encounter rates and predation pressure, this should particularly apply to intraguild systems. However, hitherto investigations of the consequences of predator-prey interactions on range use were rarely based on intraguild systems.
Negative effects of interspecific competition and predation may be reduced by temporal segregation (Fedriani et al. 2000), by small scale behavioral avoidance (Swanson et al. 2014), or by complete habitat segregation (Schoener 1974, Thiollay 1993, all of which reduce the encounter rates between the two species. Structured habitats can further reduce encounter rates and create refuges for prey, thereby mitigating the effect of intraguild predators on prey populations (Janssen et al. 2007, Thompson andGese 2007). Interactions between habitat segregated intraguild predators and their prey are limited to shared habitat edges. Nonetheless, in fragmented landscapes, the amount of edge habitat is considerable and interactions at habitat edges may be important determinants of mesopredator spatial behavior. Although intraguild predation is recognized as an important factor shaping range use of me-sopredators (Ritchie andJohnson 2009, Swanson et al. 2014), spatial patterns of mesopredators at shared habitat edges remain unknown. Furthermore, it remains unclear if reduced occupancy or prey density near habitat edges is due to direct predation, due to edge avoidance in response to perceived predation risk, or both (Suhonen et al. 1994, Lima 2009, Fonderflick et al. 2013. Behavioral studies are needed to differentiate between the two mechanisms (Lima and Valone 1991).
Our study aims to close this gap by investigating the interaction between the little owl Athene noctua living in open habitat and its intraguild predator, the tawny owl Strix aluco inhabiting adjacent forests (Redpath 1995, Van Nieuwenhuyse et al. 2008. While tawny owls often forage at the forest edge, little owls avoid forests (e.g., Lack 1946, Zabala et al. 2006. We examine three alternative hypotheses explaining this observed forest avoidance: (1) the "avoidance hypothesis" suggests active avoidance of forest edges in response to perceived predation threat (Fontaine and Martin 2006); (2) the "predation hypothesis" assumes predation close to the forest resulting in apparent forest avoidance (Suhonen et al. 1994); and (3) the "resource hypothesis" attributes the avoidance to the lack of important resources such as food or suitable hunting grounds near the edge (Ries and Sisk 2004). The "resource hypothesis" predicts that both occupancy and individual range use of little owls correspond to the distribution of resources. Thus, inconsistency between range use or occupancy patterns and resource distribution would provide evidence against it. While both the "avoidance hypothesis" and the "predation hypothesis" predict that little owls occupy territories further away from forests inhabited by tawny owls than from forests without tawny owls, only the "avoidance hypothesis" predicts behavioral avoidance during night-to-night range use. In contrast, under the "predation hypothesis" little owls should use their range according to resource availability, whereby individuals foraging close to the forest are predated. Accordingly, increased predation rates at sites close to forests are predicted. To test these predictions, we first developed a novel, asymmetrical dynamic two-species occupancy model based on presence-absence data (an extension of the models of Waddle et al. 2010 andMacKenzie et al. 2003). Second, we analyzed v www.esajournals.org data of individual spatial behavior and survival of little owls from a 4-yr telemetry study. Third, we investigated the availability of the main little owl food and of the preferred foraging habitats in relation to the distance to the forest edge. Our results give insights into predator avoidance strategies at shared habitat edges and their consequences for range use and distribution of intraguild prey.
Study species and study area
The little owl is a small nocturnal owl species of open habitats (Van Nieuwenhuyse et al. 2008). It is a mesopredator feeding on small rodents (mainly Microtus spp.), insects, earthworms, and birds (Juillard 1984). Particularly in open areas, where tawny owls frequently prey upon Microtus spp. (Petty 1999), the diets of little owls and tawny owls overlap considerably. Due to its small size, the little owl is susceptible to predation from several larger species, and there is a lot of evidence for little owl predation by tawny owls (Mikkola 1976, Schönn et al. 1991. Besides the eagle owl (Bubo bubo), which is rare in our study area, the tawny owl is considered as the second most important predator of the little owl (Van Nieuwenhuyse et al. 2008).
Our study was carried out in Southern Germany (District of Ludwigsburg, Baden-Württemberg, 48 °53′43″ N, 9 °11′45″ E). The study area with a surface of 687 km 2 is composed of a mosaic of forests (25%), human settlements (17%) and farmland (58%). The agricultural landscape is dominated by fields of intensive agriculture, interspersed with pastures, meadows, orchards, and vineyards (Bock et al. 2013). The little owl subpopulation within our study area currently consists of roughly 220 breeding pairs (H. Keil, unpublished data), mostly breeding in artificial nest boxes, which include a protection against martens. While the little owls breeding in nest boxes are being closely monitored, an unknown number of pairs breeds in natural nests within tree cavities every year.
Field methods Playback procedure
A survey of little owls and tawny owls was conducted in February-March 2012 and 2013 using call playbacks. An overview and details about the selection of the 156 playback sites are given in Fig. A1 (see Appendix A). Each playback site was visited three times using one of three different call sequences of each species (see Appendix A for detailed methods). Since the weather conditions can affect the detection probability, the occurrence of precipitation, wind, cloudiness, and the amount of background noise were recorded (variables are defined in Table A1, Appendix A). This approach resulted in a data set consisting of encounter histories of both species over three visits per year.
Radio tracking
To investigate the range use and direct avian predation of little owls in relation to the distance to the forest edge, point location data of little owls collected in a telemetry study from summer 2009 until summer 2013 were analyzed (Bock et al. 2013). Little owls were equipped with very high frequency (VHF) transmitters of own construction (Naef-Daenzer et al. 2005) weighing 6.9-7.2 g (corresponding to 4-5% of a bird's body mass), with an operational range of up to 40 km in the field and an expected life span of 400 d. For details about tagging procedures, see Bock et al. (2013). During 2-4 visits per week, each bird was located twice at an interval of 5 min by homing in using a 3-element Yagi antenna and a handheld receiver (Kenward 2001). Only night-time locations were considered, amounting to a total of 30 721 locations of 275 little owls (65 females, 58 males, and 152 juveniles).
Remains of depredated individuals were usually found shortly after death, allowing us to distinguish between mammalian and avian predation (Bock et al. 2013). In many cases, it was impossible to ascertain, which avian predator was responsible for the predation. Data of 167 little owls with known fate from 1 yr to the next were available for the investigation of mortality rates due to avian predation. Since several birds were followed over multiple years, these data originate from 120 individual adults (63 females, 57 males).
Food abundance
The range use of little owls is expected to vary according to the abundance of food March 2016 v Volume 7(3) v Article e01229 4 v www.esajournals.org resources. Although little owls have a broad prey spectrum, small mammals generally comprise the largest part of their biomass intake (e.g., Šálek et al. 2010). Therefore, we quantitated the number of field signs (i.e., runways and holes) of common voles (Microtus spp.) along transects with a width of 0.5 m and a length of 5 m as a proxy for food abundance (Giraudoux et al. 1995, Apolloni 2013. This proxy correlates well with live-trappings (Lambin et al. 2000).
Spatial variables
The distance of each playback site to the closest forest patch (area ≥ 2 500 m 2 ) was measured in Google Earth (Version 7.1.2.2041, © Google 2013) with an accuracy of 10 m. Points within the forest were assigned negative values corresponding to the distance to the forest edge. Since Central European little owls are often associated with orchards (Gottschalk et al. 2011) and their breeding success correlates with distance to human habitations (Tomé et al. 2004), distances of each playback site to the closest orchard (≥6 fruit trees), and to the closest village (≥6 houses) were extracted.
To compare the habitat compositions at different distances from the forest and to test whether little owls preferentially use areas at larger distances from the forest, the study area was split into areas of similar distance from the forest. Distance buffers (0-50 m, 50-100 m, …, 450-500 m, >500 m) were created around forest areas extracted from a land use raster of Baden-Württemberg (adapted from Gottschalk et al. 2011) using ArcGIS 10.0 (ESRI, Redlands, California, USA). Within each distance buffer, the relative proportion of three habitat types important for little owls (arable fields, orchards, and meadows) was calculated. Since range use of breeding little owls depends on the distance to the nest or roost site (Sunde et al. 2014), the availability of areas at different forest distances and their use were assessed separately for ten distance classes from the little owl nest (see Appendix A, Fig. A2 for details).
Statistical analyses Occupancy model
We developed a dynamic two-species occupancy model to analyze the presence-absence data of both owl species. Three visits at each playback site allowed quantitation of the detection probability. Our model (developed with the help of M. Kéry) accounts for the asymmetrical relationship between predator and prey, extending the parameterization developed by Waddle et al. (2010) to a multiseason model (MacKenzie et al. 2003), thereby creating an asymmetrical dynamic two-species occupancy model. We used colonization (γ; i.e., the rate at which previously unoccupied sites were occupied in the following year) and persistence (φ; i.e., the rate of sites occupied in both years) to model the differences in occupancy (ψ) between year t and year t + 1: (1) Initial occupancy of tawny owls was given by (2) where cov i are the different site-specific spatial distance variables described above (i.e., distance to forest, orchard and village). To avoid numerical overflow (Kéry and Schaub 2012), distance variables were standardized (see Appendix A). Detection probability (p) of tawny owls as well as φ and γ were modelled in an analogous way. Weather and noise variables entered the detection probability model as visit-specific covariates (cov ij in Eqs 2 and 3). In addition, the little owl detection model included tawny owl occupancy: (3) The initial occupancy by little owls was modelled as a function of tawny owl presence, site-specific habitat covariates and an interaction between the two: (4) Finally, little owl dynamics were modelled depending on tawny owl occupancy: The symbols + or − represent the presence or absence of tawny owls, respectively. All models were written in the BUGS language and run in the software JAGS (Plummer 2003) controlled by the package R2jags (Su and Yajima 2012) in R Version 3.0.2. (R Core Team 2012). To reach convergence, the models were run for 1 000 000 iterations with a burn in of 100 000, a thinning parameter of 10, and 3 chains. As priors for intercepts and parameters, we used a uniform distribution from −10 to 10, for the dynamic parameters of the little owls a uniform distribution from 0 to 1. Covariates were sequentially removed from the model if the 95% credible interval of the posterior distribution included 0. Goodness of fit of the final model was assessed using predictive model checking (for the predictive model check see Appendix B, the data and code to run the final model are given as Data S1).
Range use
Small scale behavior of little owls near forest edges might provide insight into the mechanism of edge avoidance. Within each distance class from the nest (see Appendix A, Fig. A2), Manly's resource selection ratio W i , the ratio of used and available habitat was calculated using the package adehabitatHS in R (Manly et al. 2002, Calenge 2006). This analysis relates the proportion of locations within each distance buffer from the forest (proportion used) to the proportion of area belonging to the according distance buffer (proportion available).
Avian predation and vole density
Reduced occupancy or range use near forest edges might be caused by direct predation of little owls or low food abundance. Therefore, we investigated if little owls nesting close to the forest were at a higher risk of being killed by avian predators. In four cases, tawny owls were calling repeatedly near the site of recovery of the remains or transmitter, strongly suggesting predation by tawny owls. Since this low sample size did not allow complex modelling, we compared the distance of the nests of these little owls to the rest using a two-sided t-test. Including the data of little owls killed by an unknown avian predator, a generalized linear mixed model (GLMM) with binomial error structure and logit link function was used to relate the occurrence of avian predation to the distance to the forest. Forest distance was logtransformed to improve convergence. Since many individuals were observed over several years, the individual identity was included as a random factor. The distance to the forest edge, sex, and the estimated occurrence of tawny owls nearby (extracted from the occupancy model) were included as fixed factors. To test whether a potential edge effect was due to reduced food abundance in the vicinity of the forest, we added a binary factor (distance <150 m = 1, n = 159; >150 m from the forest = 0, n = 3656) to a well-established model investigating which factors affect the frequency of vole signs (Apolloni 2013). This binomial GLMM includes the habitat type (arable field, grassland, orchard, and buffer zone) as a fixed factor and the sampling surface as a random factor. Both GLMMs were fit in R using function glmer in package lme4 (Bates et al. 2014).
Detection probability
Precipitation and cloudiness did not affect the detection probability of either owl species. Thus, these factors were removed from the final model. The presence of wind reduced the detection probability of tawny owls in 2012, but not in 2013 (Table 1, Appendix C, Fig. C1). Detection of little owls was not affected by wind. High background noise reduced tawny owl detection in 2013 and little owl detection in both years (Table 1, Appendix C, Fig. C1). In 81% of the MCMC-simulations, little owl detection was lower in the presence than in the absence of tawny owls (Table 1, Appendix C, Fig. C1).
Occupancy pattern
Both the occupancy probability and the yearto-year persistence of tawny owls declined with increasing distance of a playback site to the closest forest patch (Table 1). Tawny owl persistence increased with distance from the closest village, whereas their occupancy and colonization rates were not affected ( Table 1). The colonization rate of previously unoccupied sites by tawny owls was higher inside the forest or near its edge than at greater distances (Table 1). In summary, these results confirm the close association of tawny owls with forest habitats.
Little owl occupancy was neither related to the distance to the closest orchard nor to the distance to the closest village. Thus, both covariates were removed from the final model. There was a positive correlation between the presence of little owls and the distance to the forest. However, this relationship only occurred in the presence of tawny owls (Fig. 1, Table 1). Persistence and colonization rate of little owls were higher in the absence of tawny owls in 88% and 78% of the MCMC-simulations, respectively (Table 1).
Potential underlying mechanisms Range use: behavioral avoidance
Areas close to the nest were strongly preferred: 33.3% of all locations (n = 12 408) were situated within 50 m of an individual's nest. Due to the high abundance of locations in this small area, the forest avoidance pattern was not as clear as at larger distances (Appendix C, Table C1). The preference index revealed that beyond 50 m from the nest, areas within 150 m of the forest were avoided, while areas farther than 150 m from the forest were used according to availability or were even preferred (Fig. 2). The distance from the nest affected the strength of the avoidance: areas within 50 m of the forest edge were more strongly avoided when located far from (>100 m) than close to the nest (<100 m; Appendix C, Table C1). Thus, the distance between nest and forest was an important factor modulating forest avoidance.
Direct predation
Low little owl occupancy in areas close to forests might be due to increased predation rates of little owls settling there. Of the 167 birds observed over the course of a year, 21 birds were killed by an avian predator. Nests of the four little owls most likely killed by tawny owls were located significantly closer to the forest than those of the other 163 little owls (mean distance ± SE: 255 ± 54 m vs. 522 ± 41 m; two-sided t-test: t = −3.944, df = 4.046, P = 0.017). When including the data of little owls killed by an unknown avian predator, the occurrence of avian predation was not significantly related to the distance of the nest to the forest (Table 2). Thus, little owls living close to the forest were not more susceptible to avian predation than those living at larger distances. The occurrence of tawny owls did not affect the probability of little owl mortality due to avian predation, either (Table 2).
Vole density and habitat composition: food availability
Irrespective of the intraguild predator, differential vole abundance as well as the habitat composition near the forest might affect the range use of little owls. When controlling for habitat type, the occurrence of voles did not differ significantly between areas within 150 m of the forest and areas farther away (estimate = 0.882, CI = −1.755-3.485, χ 2 = 0.565, P = 0.453). However, vole abundance was shown to be higher in grassland and orchards than in arable fields (Apolloni 2013). Across our study area, the relative proportion of meadows close to the forest was twice as high as the proportion at greater distances (<150 m: 36.5%, >150 m: 17.8%). In contrast, the relative proportion of arable fields far from the forest exceeded the proportion near the forest by half (<150 m: 44.9%, >150 m: 67.1%). The abundance of orchards was similar (<150 m: 18.6%, >150 m: 15.1%, see Appendix C, Fig. C2). These results indicate an environment of higher food abundance near the forest.
Discussion
By applying different methods, we found distinct spatial patterns in a habitat-segregated intraguild predator-prey system. First, territory occupancy of the mesopredator showed a strong negative edge effect: the presence of the mesopredator rapidly decreased near forest edges in the presence but not in the absence of the top predator. Second, movement behavior of the mesopredator showed a strong negative edge effect as well: mesopredator individuals avoided movements into areas near forest edges. Third, the availability of preferred food resources was not reduced near forest edges. In combination, our results support the "avoidance hypothesis": the intraguild mesopredator actively avoids the use of suitable habitats shared with a habitat segregated top predator, although these habitats would comprise preferred prey.
Edge avoidance might arise due to confounding factors such as differences in habitat composition or resource availability at habitat edges, possibly due to food depletion around habitat edges as a consequence of exploitative competition (Schoener 1983). However, there was no evidence for this "resource hypothesis": preferred habitat types with high vole abundance (Šálek et al. 2010(Šálek et al. , Apolloni 2013 were more frequent within the avoided area than further from the forest, supporting the two remaining hypotheses. Since accessibility is not expected to differ between the same habitats at different distances from the forest it is unlikely that food availability is confounded by its accessibility. The large-scale distribution of the mesopredator and its individual movement behavior showed the same edge effect. Assuming the same underlying mechanism in range use and settlement decisions, the predator-induced edge effect likely results from predator avoidance behavior by the mesopredator ("avoidance hypothesis") and not from direct predation ("predation hypothesis"). The "avoidance hypothesis" is also supported by the finding that direct predation of the mesopredator was not increased at forest edges. However, we have to keep in mind that mesopredators are part of a complex multitrophic system including more than one predator. In our study system, additional intraguild top predators prey on little owls far from forest edges (e.g., common buzzard Buteo buteo, barn owl Tyto alba: Penteriani andFaivre 1997, Zuberogoitia et al. 2008), potentially blurring the effect of direct predation by the tawny owl. Mesopredators need to adapt their avoidance strategies to the type, distribution, and density of different intraguild predators: habitat segregation and large scale avoidance is only possible if there are gaps in the distribution of the top predator, or if the mesopredator can resort to a habitat which is not used by the predator (Treinys et al. 2011, Swanson et al. 2014). In the absence of such predator-free areas, the mesopredator needs to apply avoidance strategies on a small temporal or spatial scale to avoid suppression (Swanson et al. 2014). Little owls reduce their activity or move to shelter to avoid predation by barn owls co-occurring within the same habitat (Zuberogoitia et al. 2008). Here, we show that little owls reduce predation risk from tawny owls through forest edge avoidance. Thus, vertebrate mesopredators not only vary in their response to the same top predator, our results suggest that a single mesopredator applies different strategies to avoid different top predators, depending on the extent of habitat segregation.
Avoidance of favored, food-rich habitats near the forest edge attests to the trade-off between costs and benefits of using edge habitat (Cresswell 2008). Our results suggest that the costs of using these areas exceed the benefits in our study area. As a result, home-ranges containing many forest edges are low in quality. The cost-benefit function of occupying habitats of different quality is expected to be density dependent (Bollinger andSwitzer 2002, van Beest et al. 2014). As intraspecific competition increases, edge-sensitive animals are forced to use suboptimal habitats v www.esajournals.org MICHEL ET AL.
near edges (Huhta et al. 1999). Thus, whether occupancy patterns result from direct or indirect predation effects will depend on the density of both mesopredators and top predators. Within our study area, mesopredator density is low (~0.55 breeding pairs per km 2 : H. Keil, unpublished data, compared to a mean density ± 1 SD of 1.84 ± 5.25 breeding pairs per km 2 across 69 western European studies: Génot and Van Nieuwenhuyse 2002) indicating that densitydependent effects are not strong enough to interfere with habitat selection. We suggest that predator-induced edge effects change from nonlethal avoidance to lethal predation with increasing mesopredator density, and that interactions and avoidance behavior act in larger areas with increasing top predator density (St-Pierre et al. 2006).
Recent research on carnivores suggests that bottom-up effects (i.e., the density of the shared prey) determine the range use of top predators, whereas the range use of mesopredators depends on the trade-off between predation risk and food availability (Fedriani et al. 2000, Heithaus 2001, Thompson and Gese 2007, Wilson et al. 2010, Kozlowski et al. 2012. Therefore, edge avoidance by habitat-segregated mesopredators likely depends on the relationship between predation risk and the distance to habitats used by top predators (Cresswell et al. 2010). The little owl, which shows a woodpecker-like flight of little maneuverability, is expected to depend on minimizing the encounter rate rather than escaping an attack. In contrast, species with more notable escape abilities are expected to use high quality habitat patches shared with the top predator despite the linked predation risk. Instead of minimizing potential encounters with a predator, they are expected to adapt their flight initiation distance to the perceived predation risk and the distance to shelter.
Habitat complexity moderates the strength of top-down effects by reducing encounter rates, by providing refuges and by improving the escape ability of prey (Janssen et al. 2007, Wirsing et al. 2010. Thus, habitat complexity promotes coexistence of intraguild predators and their prey living in the same habitat (Finke andDenno 2002, Janssen et al. 2007). In contrast to other studies, the top predator and mesopredator in our study system use distinct habitats and mainly interact at the edges in-between. Since landscape complexity affects the distribution and length of habitat edges, intraguild predator-prey interactions at habitat edges become a key issue at the landscape scale, particularly in the light of ongoing habitat fragmentation (Haddad et al. 2015). We show that the mesopredator avoids suitable habitat along forest edges. Thus, landscape features such as size, edge-area ratio and habitat fragmentation of mesopredator habitat patches determine the impact of the intraguild predator on mesopredator populations. In contrast to the mitigating effect of habitat complexity on multitrophic interactions within habitats (Hartman et al. 2014), increasing landscape complexity is expected to reinforce multitrophic interactions between habitats by creating edge habitat, potentially completely excluding mesopredators from suitable habitats.
Top predator induced suppression of mesopredators at habitat edges may relax the predation pressure on lower trophic levels. However, this release effect is expected to be stronger in traditional predator-prey interactions than in intraguild systems, because predation pressure by intraguild predators persists. Similar to the well-investigated "mesopredator release" (Soulé et al. 1988, Crooks andSoulé 1999), where the top predator is suppressed, the trophic cascades to lower trophic levels in areas of suppressed intraguild mesopredators might be complex. Further studies are necessary to elucidate whether reduced predation pressure as a result of local mesopredator suppression leads to prey release or whether the intraguild predator compensates for the reduced predation pressure.
For our study, we developed an asymmetrical, dynamic two-species occupancy model. Occupancy modeling has several advantages over analyses of home-range use based on tracking data. First, repeated assessment of occurrence at regular temporal and spatial intervals is a cost-efficient method to gather data across a large area and multiple species. The models can be extended to include additional species at different levels of food webs, integrating simultaneous information on predator and prey species. Second, it is possible to investigate change rates from 1 yr to the next and their dependence on interspecific interactions or habitat features. Third, telemetry is often limited to individuals breeding in accessible nest boxes, v www.esajournals.org
MICHEL ET AL.
whereas occupancy models based on responses to playbacks do not have this constraint. However, occupancy modeling provides no information about the mechanisms responsible for the observed patterns (Waddle et al. 2010). Therefore, we suggest that future studies should combine large scale occupancy modeling with the analysis of individual behavioral data to gain deeper insights into the mechanisms shaping the spatial patterns at different trophic levels of food webs. | 7,039.2 | 2016-03-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
The Development of Community-Based Ecotourism in Border Area of Sambas Regency
Community-based ecotourism is a strategy done to develop an area through the improvement of the tourism sector by offering natural tourism sites that preserving the environment and involving the community skill in managing the tourism activities. This study aimed to identify the development of the community-based ecotourism in the border area of Sambas Regency. There were 30 key participants involved in this study. A questionnaire consisted of four main criteria (environmental, economy, social, and facility) employed in this study which then analyzed by the AHP technique. Statistical analysis showed that the economy criteria was having the highest weight with a total of 0.535, followed by the environmental, facility, and social criteria with a total of 0.287, 0.105, and 0.074 respectively. There were three alternatives of community-based ecotourism analyzed to improve the community income: beach, turtle, and mangrove ecotourism. The statistical analysis showed that beach ecotourism gained the highest score to be placed as an alternative priority (0.593).
INTRODUCTION
Today, most people spent their leisure time by visiting various tourism sites. (Naisbitt & Jhon, 1994) stated that in the year of 2000 the tourism industry would highly contribute to the national income. A regulation issued by the National Minister of Home Affairs (Regulation No. 33 year 2009) about the Local Ecotourism Development Guideline stated that ecotourism as a local top economic sector hasn't fully developed, even though it has a significant role in preserving the environment and cultural aspect in an area.
Community-based ecotourism was the initial step taken to improve the ecotourism site by actively involving the community on its management and development. This type of ecotourism mainly aimed to create community wellness. Parallel with a study done by (Qomariah, 2009), community-based ecotourism defined as a type of educative ecotourism and actively involving the community in planning, decision making, and sharing the economic profit. The profit-shared by the community done based on the agreement made between the tourism site management parties and the community.
West Borneo has abundances of natural beauty that potentially supporting the local economy security and community wellness if it well managed. Paloh district is a coastal area located in the Sambas Regency and a district in West Borneo that directly terrestrial and aquatically situated in the border area between Indonesia and Serawak, Malaysia. The tourism sites provided by Paloh district usually characterized by the uniqueness of its nature and ecotourism that highly rich in biological diversity (beaches, turtle species). Paloh District also known as "heaven in the tail of Borneo" or "Surga di Ekor Borneo" in Bahasa Indonesia.
Community-based ecotourism is a concept utilized to develop ecotourism which consisted of the plan, implementation, and the management system involving the benefit that would be accepted by the community. Ecotourism is also providing a wide range of job vacancies for the community around the site such as microindustry workers, souvenir sellers, transportation providers, homestay owners, guides, etc. This community-based ecotourism has positively affected the preservation of the nature and local culture, improving the income, reducing the community poverty, and improving the site's infrastructures which could increase the ecotourism activities in the end. The ecotourism sites in the border area of Sambas Regency were offering unique and attractive landscapes. Unfortunately, these sites did not fully optimized as ecotourism sites, hence it could not contribute significantly to preserve the environment and the wellness of the community. Therefore, this study done to identify the priority alternative in developing community based-ecotourism in the border area of Sambas Regency. SOCA: Jurnal Sosial Ekonomi Pertanian https://doi.org/10.24843/SOCA.2020.v14.i03 Sambas Regency located in the northern part of West Borneo Province. Sambas regency occupied as much as 6,395.70 km2 or covered 4.36% from West Borneo Province. This study conducted in Paloh District that consisted of three villages: Tanah Hitam, Sebubus, and Temajuk. Paloh District was chosen as the study location due to some reasons: 1). Their ecotourism offered the natural beauty of nature, uniqueness, and biological diversity such as ecosystem, culture, beach, turtle, and mangrove 2). Low community income, 3). Located on the border area of Indonesia -Malaysia, and 4). There were three main local tribes domiciled in this district (Melayu, Dayak, and China).
To select the study participant, a purposive sampling technique which was defined as a sampling technique used to choose participants based on the researcher criteria, was applied in this study (Muhammad, 2009). The number of participants in this study was determined by the statement stated by (Gujarati, 2007) which explained that the minimum number of participants in a study was thirty participants. The participants were the key informants (Sambas Regency Tourism Board, Development Planning Agency, WWF member, and community figure/leader) and main informant (local people and visitors). This study conducted in three villages in Paloh District, Sambas Regency, they are Tanah Hitam, Sebubus, and Temajuk.
The Analytic Hierarchy Process (AHP) was used to identify the ecotourism developmental goal in improving community income in the border area of Sambas Regency. The criteria used were environmental, economy, social, and facility.
Participant's Characteristic
The data obtained from 30 key participants spread in all ecotourism sites in the border area. The participants were Sambas Regency Tourism Board, Development Planning Agency, WWF member, community figure/leader, local people, and visitors. The participant's characteristics observed in this study were gender, participant's age, education level, marriage status, occupation, and income.
The majority of participants were male (66.67%). This could be due to the higher level of knowledge about ecotourism in the border area of the male participant. The participant also dominated by the age of 15-22 years of age (50%). Participants in this range of age usually more productive in visiting various ecotourism sites in the border area. On the education level characteristic, the most participant was graduated from senior high school (43.33%). The education level was the most important characteristic because it contributed a social mission in improving the awareness of the environment preservation and impact of a miss-planning on the implementation of a program. Higher levels of knowledge produced a higher level of productivity, therefore a higher level of knowledge also closely related to the higher level of community income.
On the marriage status characteristic, most participants were already married (60%). The participants who mostly involved were the key informant, not the visitors. According to the occupation characteristic, 12 participants was working as a private entrepreneur, 40% participant were working as fisherman, farmer, souvenir seller, and run a small-scale business. Sometimes some participants were also working as jellyfish fisherman.
The Determination of Pairwise-Weight Comparison Between Criteria
The initial step in this study was determining the weight for each criteria used: environmental, economy, social, and facility. The determination of weigh in each criteria done by filling the pairwise comparison matrix. The result revealed that the economy criteria had the highest weight with a total of 0.535. This happened due to the significant role of the tourism sector in the national economic growth and the positive impact on the local community. Ecotourism development were also absorbing more workers that finally could reduce the number of unemployment community and improve their income and wellness. The community used this opportunity to sell the traditional culinary and souvenir around the site in Paloh District. The ecosystem obtained the highest weight (0.750) due to the reciprocal correlation between the living thing and the environment inside it. A study was done by (Sabahan & Evita, 2017) that aimed to know the landscape planning on community-based ecotourism in the coastal area of West Borneo revealed that Paloh District had unique and attractive coastal ecotourism sites. The role of local and national government were required to create resilience ecotourism site. Training sessions to improve community awareness about their role in managing the ecotourism site was urgently needed.
The Determination of Pairwise-Weight Comparison on Enviromental Criteria
Ecosystem Sub-Criteria: Paloh District a beach/coastal area with specific characteristics of beach ecotourism, the transition between the land and sea ecosystem. The beach in Paloh District also consisted of rich biological diversity, especially mangrove and turtle. Studied from the land use aspect, the most beach ecotourism site was still in the natural state. Generally, this area consisted of sand, stones, mangrove, and turtle which ascend to the beach every night.
Pollution Sub-Criteria: Criteria that showed the water quality studied from the environmental aspect that related with the local community was the water purity. Results from the observation showed that the purity of the water affected by the content of mud. Ecotourism sites located near the big river estuaries usually contained more mud (Tanah Hitam and Selimpai Beach). The low river current brought the mud to the beach. The situation was a little bit different on the Belacan, Camar, and Tanjung Datok beach. These beaches dominated by white sand and stones, hence the mud could reach the seafloor. The micro-business obtained the highest weight with a total of 0.749. (Yulianda, 2011) defined ecotourism as a tour to tourism sites that preserving the nature and building the wellness of the community around the site. According to this definition, the ecotourism was highly promoting the small scale-business such as micro small medium enterprise and preserving the environmental/social-cultural.
The Determination of Pairwise-Weight Comparison on Economy Criteria
Micro-Business Sub-Criteria: Cooperation between the local community was the main subject requires to control the resources in the ecotourism sites. There were various types of micro-business in the study location, such as food, souvenir, water-taxi, guide, and homestay-business. This micro-business was effective in improving their income.
Original Local Government Revenue Sub-Criteria: The local stakeholder (Sambas Regency Development Planning Agency) was the key stakeholder that majorly contributed in taking decisions related to the development of ecotourism in Paloh District. The Sambas Tourism Board expected to make proper policies about community participation in the development of ecotourism in the Sambas Regency. The ecotourism sites in Sambas Regency did not optimally developed due to the lack of funds and infrastructures. If the ecotourism sites managed well, the profit obtained could increase the original local government revenue. The human resource quality had the highest weight with a total of 0.667. Local community must be actively involved as the main subject in the ecotourism development and the government must be built a good partnership with the local community. Local communities had a complete understanding of the ecotourism in their environment. They were also capable of providing a proper solution to improve the ecotourism sites. According to a study done by (Pratiwi & Pinasti, 2017), all local community (community figure/leader, youth organization (Karang Taruna), tourism awareness group, and other local community) could contribute to the ecotourism management. The impact of tourism activities revealed by the change of some cultural elements such as the community knowledge development, the emersion of the new occupation, the introduction of language diversities, technological advancements, the improvement of hospitality levels, increasing cooperation, and the emergence of horizontal conflicts between local communities.
The Determination of Pairwise-Weight Comparison on Social Criteria
Human Resource Quality: Human resources was an important factor in developing ecotourism sites. Improving the human resources and skills in the ecotourism sites was a basic requirement in a community-based ecotourism development (Priono, 2012). Local community participation in ecotourism development was the community involvement on the emotional, psychological, and physical aspects to build their responsibility in improving the community wellness in Paloh District. The comprehensive understanding of the local community about their ecotourism sites given valuable information for the ecotourism development.
Community Culture: Culture was a tradition resulted from the transferred information from generation to generation through written and oral ways, without this tradition, the community culture could become extinct. Study by (Sugiyarto & Amaruli, 2018) entitled "Cultural and Local Genius-Based Ecotourism Development" stated that cultural destination and local geniuses in tourism development were the part of human creativity that has economic value. Antar ajong culture was a sambasmalay community local tradition in Paloh District. This cultural tradition done to assure a good harvesting period and avoid the damage of the harvested product because of the pest attack. Antar ajong was also seasonal cultural tourism that periodically showed in Lestari Beach to attract tourists.
Determination of Pairwise Weight Comparison on the Facility Criteria
The highest weight in these criteria obtained by the infrastructure sub-criteria with a total of 0.594. (Calderon & Serven, 2014) in their study entitled "The Effects of Infrastructure Development on Growth and Income Distribution" also stated that the improvement of infrastructure was a significant part to build an area rapidly. Infrastructures undeniably impacted the development of certain aspects of the national growth. The limited amount of infrastructure provided in the ecotourism site in Paloh District highly contributed to the slow tourism activities in those border areas. Infrastructure Sub-Criteria: infrastructure was an important part to develop ecotourism site. The physical elements such as bridges, clean water, telecommunication network, and electricity were the basic infrastructure required to improve the ecotourism site.
Accessibility Sub-Criteria: most villages in Paloh District already have good local access roads and could be reached by the four-wheel transportation. Temajuk Village was the only one village whose main road made of yellow soil. The village located 45 km away from the center of Paloh District. This type of soil was relatively slippery in the rainy season and very dusty in the dry season Supporting Facillity: Electricity could be properly used by six villages in Paloh District. Temajuk Village was the only village that used electricity produced by a diesel power electric generator. This diesel generator only operated in the night (17.00-05.00) to save the fuel used to operate the generator. The fuel must be well utilized because of the remote location of the village and difficulty in gaining the fuel. Temajuk village also the only village that didn't get proper clean water from the municipal waterworks. https://doi.org/10.24843/SOCA.2020.v14.i03.p10 The beach ecotourism prioritized in this criteria (0.674). The development of the beach ecotourism done to utilize the coastal ecosystem in improving the community economic status. Therefore, the beauty of the ecosystem diversity in the Paloh District must be well preserved. In the pollution sub-criteria, the highest weight obtained by the mangrove ecotourism (0.559). The mangrove ecotourism in Sebulus and Temajuk Village, Paloh District haven't well managed by the local community and stakeholder. The supporting facilities (ex. trash bin) didn't properly provided around the ecotourism site. This condition made the sites seemed less attractive to be visited. This study showed that the type of ecotourism prioritized in Paloh District, West Borneo was the beach ecotourism (0,614). The beach ecotourism in Paloh District spread in three villages: Tanah Hitam, Sebubus, and Temajuk. Interview done on some key informants revealed that most visitors were spending their time on the beach due to the natural beauty offered by the beach and the "heaven in the tail of Borneo" nickname given to the beaches in Paloh district. The local community positively responded to the ecotourism site development. Their enthusiasm showed through the opening of some small food businesses operated by the local communities around the ecotourism site. Ecotourism development was significant in maintaining national economic security because of the short duration demanded to increase the foreign exchange in a country. Paloh District in Sambas Regency was an area that has many attractive ecotourism sites. There were some beach ecotourism sites offered in Paloh District, but the most attractive beach ecotourism sites were located in Temajuk Village. Human resource quality was the next factor that important to anticipate the negative impact of ecotourism development. The local community was the key stakeholder and the party who were accepting the direct impact of ecotourism development. Reviewed from the social-subcriteria, the turtle ecotourism was being prioritized (0.547) due to the low level of knowledge of the importance of turtle preservation effort. Results revealed that the community cultural alternative prioritized the beach ecotourism (0.672). The community stated that antar ajong culture was a unique charm that attracts the tourist to visit the beach in Paloh district. This culture performed by the local community on the beach, especially in Lestari Beach that located in Tanah Hitam and Temajuk Village. https://doi.org/10.24843/SOCA.2020.v14.i03.p10 To access beaches in three villages in Paloh District (Tanah Hitam Village, Sebubus Village, and Temajuk Village), the community or tourist used land and water transportation. The duration needed to access the beaches using land transportation (car or motorcycle) was 5-6 hours due to the inadequate local road (still made of soil). This calculation was the final result of the ecotourism development selection toward the improvement of community income in the border area. The beach ecotourism obtained the highest weight (0.593) followed by the turtle ecotourism and mangrove ecotourism with a total weight of 0.260 and 0.147, respectively. Result also showed that the priority factor was the economy factor with the total weight of 0.535. Some small-scale food businesses were actively operating around the ecotourism sites that finally contributed to the local community income. The number of local or international tourists in the beach ecotourism site also remained constant during the weekend or national holiday.
CONCLUSION
According to the result, we concluded the factors that need to be considered to improve ecotourism sites in the Sambas Regency border area were the environment, economy, accessibility, and facility factor. Statistical analysis showed that the economic factor was having the highest weight with a total of 0.535, followed by the environmental, facility, and social factors with a total of 0.287, 0.105, and 0.074 respectively. Some ecotourism sites such as beach, turtle, and mangrove ecotourism also potentially developed as alternative tourism sites. Results showed that beach ecotourism ranked in the first place, followed by turtle ecotourism and mangrove ecotourism with a total weight of 0.593, 0.260, and 0.147 respectively.
RECOMMENDATION
Based on the result and conclusion, we suggest the improvement of infrastructures in the ecotourism sites. Some roads to access the ecotourism already well built, but some ecotourism site's accesses were still made from soil. The condition of these site's accesses made the visitors and community couldn't visit the ecotourism sites during the rainy season. The gazebos, toilets, places to rinse, and mosques also must be built in the ecotourism sites. During the study, the visitor who desired to use those facilities must go to some restaurants or local people home. An improvement on the promotional activities also must be addressed. Promotional activity could be done using pamphlets media that consisted of complete information about certain ecotourism sites or social media platforms. This promotional activity would disseminate the unique characteristic of the ecotourism sites and bring a significant improvement on the community income and wellness. Synergy between the local, national, and community must be well maintained to improve the quality of tourism sites offered in the national or international level. | 4,344.2 | 2020-06-26T00:00:00.000 | [
"Economics"
] |
Stress-Driven Evolution on Mismatched Ca 2 Co 2 O 5 Oxide Material: From Geometry to the Electronic States
The geometrical structures, phase stabilities, electron energy band structures, electron density of states, and atom recombination together with the electron conduction behaviors of the sandwiched Ca 2 Co 2 O 5 with external stress of 1GPa are intensively studied by the density functional theory method. The studying results show that the symmetry remains undisturbed; the strain to the stress response is anisotropic. The strain of microarchitecture induced by external stress is also anisotropic. There is stronger covalent binding between Co and O. The binding between Co and O within CdI 2 like CoO 2 is very much even covalent, and it is weakened under external stress. But the covalent Co-O binding within the rock salt like CaCoO layer is enhanced. The Ca-O binding strength is insensitive to external stress. An energy gap of 0.1 eV below Fermi level for the spin-up electron band disappears, and the two energy gaps are narrowed for the spin-down electron bands. The p orbital electrons form primarily the bands below Fermi level and the d orbital electrons form primarily the bands above Fermi level. The transitions from p orbital electrons to d orbital electrons produce the conduction. The CdI 2 like CoO 2 layer has been enhanced in terms of participating in the conduction properties with external stress of 1GPa, and the capability of Co is enhanced while the capability of O is decreased.
Introduction
e multioxide framework materials with complicated layered crystal structures such as NaCoO, CaCoO, and BiSrCoO are very much diverse in physical properties, as well as the related sensitivity to structure, spintronics, topology, preparation procedures, and so on [1][2][3][4][5][6] e family of transitional metal Co-based CaCoO oxide materials which show similar valuable properties are the research focus in recent years. For example, the Ca 2 Co 2 O 5 and Ca 3 Co 4 O 9 type layered oxide materials exhibit especially complex crystal structure, spin topology, preparation variety, and anisotropic transport phenomena [2,3]. ey are similarly composed of rock salts like the CaCoO layer and CdI 2 like the CoO 2 layer that are stacked along c axis with the Sandwich framed crystal structure. e Ca 2 Co 2 O 5 crystalline oxide material was first discovered and reported in terms of its unique sandwiched structure by Vidyasagar et al. in 1984 [4]. Its anisotropic semiconductor conduction and positive temperature-dependent thermopower of 100 μV·K −1 at 100 K were then demonstrated by Funahashi et al. [2,3].
e Ca 3 Co 4 O 9 crystalline oxide material was discovered and reported in terms of its sandwiched structure and anisotropic transport by Shikano and Funahashi in 2003 [5]. e sensitivity of physical properties to preparation procedures was also investigated during the past years. For instance, the grain alignment together with conduction is very dependent on the external stress of preparation. e polycrystalline materials of sandwiched CaCoO oxide have been more widely studied in contrast to their single-crystal materials for the sake of preparation cost, fabrication easiness, product scale, etc. In addition, they have been intensively studied experimentally in terms of the transport properties for the intrinsic as well as the regulated materials in recent years [7][8][9][10][11]. In order to recover the performance of single-crystal materials, some fabrication methods have been adopted. For instance, isostatic pressing is one of these ways. In this way, stress ranging from tens of MPas to several GPas is applied to the crystalline bulk materials when preparing. e resulting bulk materials should then be consolidated and regulated with regard to their density and grain alignment in order to get the bulk texture. We have also reported the stress-dependent transport properties of this sandwiched CaCoO crystalline oxide material. e grain alignment and the transport property thereafter can be regulated by external stress ranging from 30 MPa to 500 MPa [9,12]. e fundamental background physical properties are determined by the geometry structure as well as the electronic states thereafter. e evolution of geometry and electronic states with external stress merits investigation. Unfortunately, a theoretical study in this sandwiched CaCoO crystalline oxide material is moderately lacking. e transitional metal Cobalt has several d orbital electrons where varieties of spin alignments can be configured. We have demonstrated and reported that the antiferromagnetic aligned Ca 2 Co 2 O 5 crystalline oxide material is most stable among the antiferromagnetic phase and ferromagnetic phase [13]. In the present work, the geometrical structures, microarchitectures, stabilities, electron energy band structures, the electron density of states, and species recombination together with the electron conduction properties of the sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa are intensively studied via the density functional theory (DFT) calculational and analyzing method for the first time to our knowledge.
Computational Methods and Details
e sandwiched Ca 2 Co 2 O 5 crystalline oxide material is composed of rock salts like the CaCoO sublayer and CdI 2 like the CoO 2 sublayer along c axis with space group of P1M1. e rock salt like CaCoO and CdI 2 like CoO 2 has the same lattice parameters along b and c axis; their sublattices are mismatched along a axis. e cell angles α, β, and c are 90°, 90°, and 98.13°, the a, b, and c of the cell are 4.56Å, 9.66Å, and 10.84Å, respectively [2][3][4].
e schematical crystal structure of this sandwiched Ca 2 Co 2 O 5 crystalline oxide material and the projections onto several planes are shown in Figures 1 and 2. e present study was carried out based on the platform which is implemented in the Serial Total Energy Package (CASTEP, Cerius2, Molecular Simulation, Inc.) code within the DFT framework [12,14]. is packaged code is established within the DFT framework which has been successfully applied within the areas of solid states and material sciences for several years [12][13][14]. e DFT framework has been verified to be one of the most accurate strategies for the solutions of the electronic eigenvalues of solid states [15]. In this work, the deep valance electrons together with the atomic core were treated as Coulombic cores, and the Coulomb interactions of valance electrons with their cores of Ca, Co, and O were herein described by Vanderbilt pseudopotential function. e wave functions of electrons were represented by plane wave functions. e configurations of valence electrons for Ca(3s 2 3p 6 4s 2 ), Co(3d 7 4s 2 ), and O(2s 2 2p 4 ) were selected. e generalized gradient approximation (GGA) scheme and revised Perdew-Burke-Ernzerhof (RPBE) function within the scheme were used to describe the exchange-correlation relation between these electrons. e Hubbard energy revision of 2.5 eV was used to represent the on-site Coulomb effect of Co d electrons. e antiferromagnetic aligned Ca 2 Co 2 O 5 has previously been verified to be most thermally stable among their ferromagnetic phase and antiferromagnetic phase [13]. For the antiferromagnetic phase, the spin state of Co d within the CoO 2 layer was set as contrary as that of Co d within the CaCoO layer for the unpaired electron. In addition, the computational result of the magnetism is well in consistent with the initial settings of the antiferromagnetic phase. In the ground state total energy calculational process, the convergence tolerance of displacement during the self-consistent calculations was set as 0.0005Å, and the maximum force tolerance was set as 5 × 10 −6 eV/atom. e cutoff energy for Table 1 shows the lattice parameters, total energy E t , formation enthalpy E f , and magnetism of sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa, these values for the counterpart Ca 2 Co 2 O 5 are also provided, and the ratios of these parameters are also deduced for comparison. e α, β, and c keep invariant. e initial lattice symmetry type is not influenced, and the space group remains undisturbed within the externally applied stress range. It can be seen from Table 1 that a, b, c, and cell volume exhibit decreasing trends. is is corresponding to the positive shrinkage expansion of this type of material under external stress. However, it is worth noting that the ratios of a, b, and c between counterpart cells and that under external stress are distinctively different. For example, the ratio of a is 0.997, and the ratio of b and c is 0.998 and 0.999, respectively, corresponding to the disproportionate ratio of the volume of 0.994. It is an indication that the strain induced by external stress is very much anisotropic. Specifically, it can be seen that the strain to the stress response of geometry is sensitive along a direction and insensitive along b c direction for the sandwiched Ca 2 Co 2 O 5 crystalline oxide material. In addition, this is another indication that the bindings within the sandwiched Ca 2 Co 2 O 5 crystalline oxide material are very much different, in terms of the binding nature, binding type, binding length, and binding strength.
Results and Discussion
It can be seen from Table 1 that the counterpart sandwiched Ca 2 Co 2 O 5 crystalline oxide material without external stress has a total energy of −12556.6 eV for the cell, while the total energy of −12556.5 eV for the cell under external stress is slightly larger than it. Although the difference is negligible, it indicates that the counterpart sandwiched Ca 2 Co 2 O 5 crystalline oxide material without external stress is more thermally stable. In order to verify the stability of the sandwiched Ca 2 Co 2 O 5 crystalline oxide material under different external stress, the formation enthalpy is applied. e formation enthalpy ΔH MN of materials with specific molecular equation x M y N can be deduced by where E t is the total energy of the molecular, E M and E N are the averaged energy of element M and N, and x M and y N are the quantity of element M and N of the molecular equation. e thermal stability should be higher for material with a smaller formation enthalpy value and it is easier to be formed. It can be seen from Table 1 that the counterpart sandwiched Ca 2 Co 2 O 5 crystalline oxide material has a formation enthalpy of −6.221 eV, which is slightly lower than that under external stress of 1 GPa with −6.220 eV. e counterpart sandwiched Ca 2 Co 2 O 5 crystalline oxide material should be easier to be formed according to the formation enthalpy values. is is in agreement with the total energy that has been discussed above. It is convinced herein that the total energy as well as the formation enthalpy can be applied Advances in Condensed Matter Physics 3 jointly for analyzing the thermal stability of materials; it turns out to be reasonably reliable. e Ca-O binding strength is insensitive to external stress and strain within the applied range. Figure 4 shows the full energy range spin electron band structures of the sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa, the Fermi energy level is relatively set to be 0 eV, and other energy levels are determined thereafter by comparing with Fermi energy level. Figure 5 shows the full energy range spin electron density of states of the sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa. It is true that the bands are anisotropic, especially for a band near the Fermi energy level. It can be seen that the spin-up valence electrons of Ca 2 Co 2 O 5 form five bands within the whole energy range; they locate near −38.5 eV, −19.5 eV, −17 eV, Fermi energy level, and 5 eV. e spin-down valence electrons of Ca 2 Co 2 O 5 form six bands within the whole energy range, and the new band near 1.5 eV can be detected. It can also be observed from Figure 4 that the deep valence bands far from the Fermi level are heavier and the conduction bands are lighter for both of the spin-up and spin-down electrons [13,14]. ere is an obvious band concentration near −19.5 eV, and a strong interaction between electrons can be observed near −2.5 eV, as shown in Figure 5. Figure 6 shows the spin electron band structures near the Fermi energy level of the sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa. Figure 7 shows the spin electron density of states near the Fermi energy level of the sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa. For the spin-up electron band structure, a band valley locates at 4.0485 eV and a band peak locates at 1.9680 eV, and there is an indirect energy gap of 2.0805 eV. It has been investigated within our former study that the spin-up band has an energy gap of 2 eV above Fermi level and an energy gap of 0.1 eV below Fermi level for the counterpart Ca 2 Co 2 O 5 crystalline oxide material. e energy gap of 0.1 eV below Fermi level disappears for the Ca 2 Co 2 O 5 crystalline oxide material under external stress of 1 GPa. For the spin-down electron band structure, the band valleys It can be seen from Figure 7 that the density of states below Fermi energy level is largely contributed by p orbital electrons and the density of states above Fermi energy level is largely contributed by d state electrons. e p orbital electrons form primarily the bands below Fermi energy level, and the d state electrons form primarily the bands above Fermi energy level. In addition, it can be said that the transitions from p orbital electrons to d orbital electrons should produce the conduction process, and they should be responsible for the electron heat capacity part for this kind of Ca 2 Co 2 O 5 crystalline oxide material. Figure 8 shows the detailed density of states of the rock salt like CaCoO and CdI 2 like CoO 2 layer near Fermi energy level of the sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa. Figure 9 shows the density of state values on the Fermi energy level for the sandwiched Ca 2 Co 2 O 5 , the rock salt like CaCoO, and CdI 2 like CoO 2 layer, as well as species that form these layers. e values marked with number 1 are for the sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa, and the values marked with number 0 are for the intrinsic sandwiched Ca 2 Co 2 O 5 crystalline oxide material with no 31. e proportion of density of states for this layer is decreased from 84% down to 82%. e electronic properties of metallic solids are determined by electrons near Fermi energy; the capability of determining the transport properties is reduced for this layer. It can also be seen that the total density of states value below Fermi energy of this layer is largely composed by p orbital electrons, and the total density of state value above Fermi energy of this layer is largely composed by d orbital electrons. It can be seen that the total density of state value for CdI 2 like CoO 2 layer at Fermi level is 0.5093; it contributes 18% to the total density of state value of 2.8836. Nevertheless, for the intrinsic counterpart Ca 2 Co 2 O 5 , the total density of state value for this same layer at Fermi level is 0.83; it contributes 16% to the total density of state value of 5.31. e proportion of density of states for this layer is increased from 16% to 18%. It is seen that this layer has been enhanced in determining the electronic properties. It can also be seen that the total density of states value below Fermi energy of this layer is largely composed by p orbital electrons; the total density of state value above Fermi energy of this layer is largely composed by d orbital electrons. Advances in Condensed Matter Physics Figure 10 shows the detailed density of states of Ca, Co, and O within the rock salt like CaCoO near Fermi energy level of the sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa. It is observed that the total density of state values of Ca, Co, and O at Fermi level is 0.0197, 0.565, and 1.7465; they contribute 0.8%, 24%, and 75.2% to the total density of state value of 2.3742 of this layer. Nevertheless, for the counterpart Ca 2 Co 2 O 5 , the total density of state values of Ca, Co, and O at the Fermi level is 0.09, 2.08, and 2.30; they contribute 2%, 47%, and 51% to the total density of state value. It can be indicated that the capability of contributing to electronic properties of Ca and Co is decreased; the capability of contributing to electronic properties of O is enhanced. It can be concluded from Figures 9 and 10 that the total density of states of the CaCoO layer at the Fermi energy level is mainly composed of the orbital electrons of Co d and O p. is is the same as the counterpart Ca 2 Co 2 O 5 . It is inferred that these two kinds of orbital electrons within this layer contribute to the conduction process. Figure 11 shows the detailed density of states of Co and O within the CdI 2 like CoO 2 layer near Fermi energy level of the sandwiched Ca 2 Co 2 O 5 crystalline oxide material with external stress of 1 GPa. It is observed that the total density of state values of Co and O at the Fermi level is 0.1854 and 0.3237; they contribute 36% and 64% to the total density of state value of 0.5093 of this layer. However, for the counterpart Ca 2 Co 2 O 5 , the total density of state values of Co and same as the counterpart Ca 2 Co 2 O 5 . It is inferred that these two kinds of orbital electrons within this layer contribute to the conduction process.
Conclusions
In conclusion, the geometrical structures, microarchitectures, phase stabilities, electron energy band structures, electron density of states, species recombination, and the electron conduction properties of the sandwiched Ca 2 Co 2 O 5 with external stress of 1 GPa are intensively studied within the framework of density functional theory calculational and analyzing method. e symmetry type is not influenced, and the space group remains undisturbed.
e strain-to-stress response of geometry is sensitive along a direction; it is insensitive along the c direction. e strain induced by external stress of microarchitecture is anisotropic, indicating the different binding characteristics. e distances between Ca and O are larger than those between Co and O in common, and there is stronger covalent binding for the Co and O. e bindings between Co and O within CdI 2 like CoO 2 are very much covalent than those between Co and O within the rock salt like CaCoO layer. e covalent Co-O binding within the rock salt like CaCoO layer is enhanced; nevertheless, the covalent Co-O binding within the CdI 2 like CoO 2 layer is weakened under the external stress. e Ca-O binding strength is insensitive to external stress. e intrinsic sandwiched Ca 2 Co 2 O 5 is more stable. An energy gap of 0.1 eV below Fermi level for spin-up electron band disappears, and the two energy gaps are decreased to 1.1089 eV and 0.6047 eV for the spin-down electron bands, respectively. e p orbital electrons form largely the bands below Fermi energy level and the d state electrons form largely the bands above Fermi energy level. e transitions from p orbital electrons to d orbital electrons produce the conduction process. e CdI 2 like CoO 2 layer has been enhanced in terms of involving the transport properties with external stress of 1 GPa. Nevertheless, the rock salt like the CaCoO layer exhibits contrary characteristics. For the CdI 2 like CoO 2 layer, the capability of contributing to transport properties for Co is enhanced, but the capability of contributing to transport properties for O is decreased. For the rock salt like the CaCoO layer, the capability of contributing to transport properties for Ca and Co is decreased; the capability of contributing to transport properties for O is enhanced.
Data Availability
e data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare no conflicts of interest. | 4,866.4 | 2021-05-11T00:00:00.000 | [
"Physics"
] |
On the plerionic rectangular supernova remnants of static progenitors
Pulsar wind nebulae are a possible final stage of the circumstellar evolution of massive stars, where a fast rotating, magnetised neutron star produces a powerful wind that interacts with the supernova ejecta. The shape of these so called plerionic supernova remnants is influenced by the distribution of circumstellar matter at the time of the explosion, itself impacted by the magnetic field of the ambient medium responsible for the expansion of the circumstellar bubble of the progenitor star. To understand the effects of magnetization on the circumstellar medium and resulting pulsar nebulae, we conduct 2D magnetohydrodynamical simulations. Our models explore the impact of the interstellar medium magnetic field on the morphology of a supernova remnant and pulsar wind nebula that develop in the circumstellar medium of massive star progenitor in the warm phase of the Milky Ways interstellar medium. Our simulations reveal that the jet like structures formed on both sides perpendicularly to the equatorial plane of the pulsar, creating complex radio synthetic synchrotron emissions. This morphology is characterized by a rectangular like remnant, which is typical of the circumstellar medium of massive stars in a magnetized medium, along with the appearance of a spinning top structure within the projected rectangle. We suggest that this mechanism may be partially responsible for the complex morphologies observed in pulsar wind nebulae that do not conform to the typical torus, jet or bow shock, tail shapes observed in most cases.
Pulsars represent the final evolutionary phase of massive stars that do not directly collapse into black holes.Understanding the physics of a pulsar and its interaction with the surrounding medium requires knowledge of various physical processes, including high-energy phe-nomena, fluid dynamics, general relativity, and nuclear physics, see, e.g., Weber (1999); Bucciantini (2011); Steiner et al. (2005); Lasky (2015); Pnigouras & Kokkotas (2015, 2016); Pnigouras (2019).Pulsars have very powerful magnetospheres with strong magnetic fields on the order of kilogauss (kG), which play a crucial role in the evolution see e.g., Mestel et al. (1985).The rotating magnetospheres extract energy from the pulsar and generate a powerful wind, see, e.g., Pétri (2022).The interaction of the pulsar wind with the ambient medium produces the so-called pulsar nebulae, which can be located inside or outside the supernova remnant of the progenitor star, depending on whether the supernova explosion had kicked off the pulsar.Various observations have triggered investigations into such phenomena, see, amongst others, the studies of Pavan et al. (2016); Kargaltsev et al. (2017); de Vries & Romani (2020); Igoshev (2020); de Vries et al. (2021).
A particular class of supernova remnants containing a pulsar exhibit a succession of structured shocks powered by the pulsar's magnetic wind, producing multi-wavelength polarized non-thermal emission.Examples of such plerions include the Crab Nebula (Hester 2008), as well as the Geminga pulsar residing within the Vela supernova remnant (Bock et al. 1998;Popov et al. 2019).Additionally, one can observe youthful supernova remnants hosting both a pulsar and a pulsar-wind nebula, such as B0540-693 (Williams et al. 2008) and G11.2-0.3 (Borkowski et al. 2016).
The modelling of PWN has been a long-standing challenge for several reasons.First, the physics involved in PWN is inherently complex, involving the interaction between the pulsar's relativistic wind and the surrounding medium.This requires a multi-disciplinary approach.Second, the environment in which the pulsar wind is launched is often structured, as it depends on the supernova remnant's properties, the progenitor star's circumstellar medium, or the interstellar medium in which the pulsar resides.The properties of the surrounding medium can significantly affect the dynamics and emission of the PWN.These factors together make the modelling of PWN a complex and multi-faceted problem, requiring sophisticated theoretical models and numerical simulations to understand the physics at play fully.The Crab Nebula stands out as a prominent example of a plerion.Extensive research, both theoretical (Kennel & Coroniti 1984;Coroniti 1990;Begelman & Li 1992;Begelman 1998) and numerical, has been dedicated to studying PWN like the Crab Nebula.These investigations encompass relativistic axisymmetric 2D simulations (e.g., Komissarov & Lyubarsky 2003, 2004;Komissarov 2006;Del Zanna et al. 2006;Camus et al. 2009;Komissarov & Lyutikov 2011;Olmi et al. 2014) as well as relativistic 3D simulations (e.g., Mignone et al. 2013;Porth et al. 2014;Olmi et al. 2016).
Moreover, the Crab Nebula is pivotal in advancing our understanding of pulsar physics and their interactions with supernova remnants.Notably, the dynamics and morphology of pulsar wind nebulae experience significant transformations as they expand within a supernova remnant.This influence can be even more pronounced when the pulsar receives a kick during a supernova explosion, as observed in the case of PWN CTB 87 (Matheson et al. 2013).Extensive studies have been conducted on the interaction between PWN and supernova remnants, focusing on no-moving pulsars in 1D and 2D scenarios (van der Swaluw et al. 2001;Blondin & Chevalier 2017), including a mock complex surrounding the neutron star (van der Swaluw 2003; Blondin et al. 2001).
These studies have also been extended to moving pulsars inside the supernova ejecta, revealing the development of strong bow-shocks between the pulsar wind and supernova remnant (van der Swaluw et al. 2003(van der Swaluw et al. , 2004;;Temim et al. 2017;Kolb et al. 2017;Temim et al. 2022) and the study was extended using relativistic MHD (Bucciantini et al. 2004).In some cases, the strong interaction between the PWN and the reverse shock of the supernova remnant can result in compression.This interaction phase is referred to as reverberation, during and after which the morphology, spectrum, and dynamics of PWN could undergo significant changes, see the recent studies by Torres & Lin (2018); Bandiera et al. (2020Bandiera et al. ( , 2023a,b),b).
In later phases, as the moving pul sar leaves the supernova remnant and begins to interact with the interstellar medium, a bow shock nebula forms around the runaway pulsar.This intriguing phenomenon has been extensively studied analytically (Bucciantini & Bandiera 2001) and through numerical simulations in two dimensions with relativistic considerations (Bucciantini 2018).Furthermore, research into this phenomenon has delved into three-dimensional simulations, encompassing non-relativistic pulsar winds (Toropina et al. 2019) and relativistic pulsar winds (Barkov et al. 2019a(Barkov et al. ,b, 2020;;Olmi & Bucciantini 2019).Despite the high complexity of these simulations and the numerous questions they leave unanswered (Olmi & Bucciantini 2023), all of these studies still neglect the effects of the circumstellar medium of the defunct star in which the supernova remnant and the PWN expand during the initial phases.
The circumstellar medium of a defunct star is formed through the interaction between the star's wind and luminosity with the sur-rounding interstellar medium (ISM).The shape and properties of the circumstellar medium depend on various factors, including the evolution of the star, such as its mass, age, and stage of evolution, as well as the characteristics of the surrounding ISM, such as density, temperature, and magnetic field van Marle et al. (2015a); Meyer et al. (2022a).In the context of massive stars, the circumstellar medium undergoes successive structural changes.During its early life, it forms an accretion disc (Liu et al. 2020;Meyer et al. 2022b;Elbakyan et al. 2023;Burns et al. 2023).In its main sequence phase, it expands into a wind bubble (Weaver et al. 1977;Gull & Sofia 1979;Wilkin 1996a).Later on, it evolves into expanding shells (Stock & Barlow 2010;Cox et al. 2012;Decin 2012;Decin et al. 2012).If a supernova explosion occurs, it leaves behind an expanding remnant shell (Aschenbach & Leahy 1999;Yusef-Zadeh et al. 2003;Katsuda et al. 2018;Arias et al. 2019;Chiotellis et al. 2019;Derlopa et al. 2019).
Once the pulsar emits a relativistic and powerful wind, it initially interacts with the surrounding supernova ejecta (Cox et al. 1999;Sun et al. 1999;Crawford et al. 2001;Olmi & Bucciantini 2023).As the PWN passes through the supernova ejecta, it subsequently interacts with the circumstellar medium of the defunct star.The distribution of ejecta, stellar wind, and ISM gas acts as a matrix that channels the expansion of the pulsar wind (Kolb et al. 2017;Temim et al. 2022).This is particularly important when the supernova progenitor is a runaway star, as the bow shock created by its surrounding stellar wind can influence the subsequent evolution and emission of the supernova ejecta and the PWN (Meyer & Meliani 2022).
This study aims to investigate how a magnetised ambient medium influences the dynamics, morphologies, and emission properties of PWN with static massive star progenitors.The multi-dimensional magnetohydrodynamical (MHD) simulations conducted by van Marle et al. (2015b) have revealed that the circumstellar medium of high-mass stars is significantly influenced by the organized magnetic field of its ambient medium.This finding has profound implications on the understanding of stellar wind bubbles around massive stars, as previously studied by Freyer et al. (2003); Dwarkadas (2005); Freyer et al. (2006); Dwarkadas (2007).The presence of a magnetic field can cause expanding stellar wind bubbles to become elongated and adopt an oblong morphology along the direction of the magnetic field lines.Our previous work (Meyer et al. 2022a) has shown that such asymmetric pre-supernova environments can result in a peculiar reflection of the supernova shock wave, forming rectangular-shaped remnants like Puppis A. In this study, we further investigate the effects of the reflection of the supernova blastwave in asymmetric, magnetized wind bubbles that are generated by a static, rotating star in the warm phase of the Galactic plane and how it may impact the evolution of plerionic pulsar wind nebulae.
The paper is structured as follows.In Section 2, we present the modelling methods used in this study.This includes the description of the numerical simulations of pulsar wind nebulae, which are detailed in Section 3. We then discuss the outcomes of our study in Section 4, and present our conclusions in Section 5.
METHOD
In this section, we will provide a comprehensive review of the numerical setup used in this study to generate models of PWN from static massive stars.We will summarize the initial conditions, including both the initial and boundary conditions, in the following paragraphs.Additionally, we will describe the numerical methods employed in the simulations.
Initial conditions and boundaries conditions
This paper presents models that simulate the interaction between a star's wind and ejecta at all phases of its evolution with the warm ISM in the Milky Way galaxy.The total number density of the ISM is taken to be ISM = 0.79 cm −3 , while the magnetic field of the ISM is uniform and structured, with a strength of ISM = 7 G.In these models, we assume that the ionized gas has a temperature of 8000 K (table 1).The ambient medium is in equilibrium between the photoheating provided by the reionizing gas around the star, as described in Osterbrock & Bochkarev (1989) and Hummer (1994), and the radiative losses from optically-thin cooling processes, as outlined in Wolfire et al. (2003).The cooling law used in this study is based on the work of Wiersma et al. (2009), which is suitable for a solar metallicity environment (Asplund et al. 2009).The cooling law accounts for hydrogen and helium as the primary coolants at temperatures < 10 6 K, and various metals' emission lines at temperatures ⩾ 10 6 K.The cooling curve is further enhanced with [Oiii] , 5007 collisionally excited forbidden lines, as described in Asplund et al. (2009) and Henney et al. (2009).This paper presents a model that captures the evolution of the circumstellar medium surrounding a static massive star with an initial mass of 35 M ⊙ at the zero-age main sequence.The star is considered to be rotating with an angular velocity ratio of Ω ★ /Ω K = 0.1, where Ω ★ represents the star's initial angular frequency and Ω K is its equatorial Keplerian angular velocity.Consequently, the equatorial velocity of the star can be expressed as, rot () = Ω ★ () ★ (). (1) Here, ★ () denotes the stellar radius, and the time-dependence signifies the variation in surface properties throughout the star's entire lifespan.The model tracks the complete evolution of the circumstellar medium surrounding the static star, ranging from the onset of the zero-age main sequence to the pre-supernova phase.This comprehensive approach encompasses various stages, including the main sequence, red supergiant, and final Wolf-Rayet phase.
Regarding the stellar wind throughout the evolution phase, we assume that the stellar wind maintains spherical symmetry throughout the entire lifespan of the supernova progenitor, with the axis of rotation of the rotating star aligned with the axis of symmetry of the domain.To determine the wind's characteristics, we use the onedimensional stellar evolution model provided by the geneva library, as described in Ekström et al. (2012) 1 .Specifically, we extract the mass-loss rate () and the effective temperature eff () of the star at each stage of evolution from this database, and derive from it the wind density, In this equation, represents the radial distance from the star, and () corresponds to the mass loss rate of the star at time .
1 https://www.unige.ch/sciences/astro/evolution/en/database/syclist/ The terminal velocity of the stellar wind, denoted as w (), is calculated based on the escape velocity esc ().The escape velocity depends on the star's effective temperature eff and is determined using the conversion law, where represents the gravitational constant, and () is a the normalisation factor introduced by Eldridge et al. (2006).
We adopt the time-dependent evolution of the surface magnetic field ★ of the supernova progenitor as derived in Meyer et al. (2023) where the magnetic field strength at the surface of the star are scaled to that of the Sun, as described in Scherer et al. (2020); Herbst et al. (2020);Baalmann et al. (2020Baalmann et al. ( , 2021)); Meyer et al. (2021b).Specifically, we assume a magnetic field strength at star surface of ★ = 500 G during the main-sequence phase (Fossati et al. 2015;Castro et al. 2015;Przybilla et al. 2016;Castro et al. 2017), to a Betelgeuse-like field of ★ = 0.2 G for the red supergiant phase (Vlemmings et al. 2002(Vlemmings et al. , 2005;;Kervella et al. 2018) and ★ = 100 G during the Wolf-Rayet phase (Meyer 2021).Concerning the stellar magnetic field structure, we utilize a Parker spiral made of a radial component, and a toroidal component, respectively, with, being the latitude-dependent surface velocity of the rotating massive star (Parker 1958;Weber & Davis 1967;Pogorelov & Semenov 1997;Pogorelov & Matsuda 2000;Chevalier & Luo 1994;Rozyczka & Franco 1996).
At the end of a star's evolution, it enters the supernova phase, during which we model the expanding supernova ejecta as a spherically symmetric distribution within a radius of max .The ejecta has a total energy of SN = 10 51 erg and a mass of SN , which takes into account the star's mass loss throughout its entire evolution until the immediate pre-supernova time psn , as well as the mass NS of the neutron star that forms at the centre.Specifically, we set, with psn and NS = 1.4 M ⊙ (Das et al. 2022).
In our study, we adopt a density and velocity profile for the freely expanding supernova ejecta based on the work by Truelove & McKee (1999).This profile consists of two distinct regions (Bandiera et al. 2021).The first region is a uniform density core extending from 0 to core , where core represents the core radius.In this region, the density decreases with time following a power-law relationship of −3 , where denotes the time after the explosion.The second region is the outer edge, extending from core to max , where max corresponds to the maximum radius.In this region, the density decreases steeply with radius, following a power-law relationship of ∝ − , with the exponent set to 11.Additionally, the density in the outer edge region decreases with time as − (3+) .These density profiles can be expressed as follows: and, respectively.These density profiles are commonly used for corecollapse supernovae (Chevalier 1982).
For the velocity, we utilize a homologous radial profile for the supernova ejecta, given by = /, across all regions from 0 to max at time max .The characteristics of the supernova ejecta profile are computed following the methodology outlined in Truelove & McKee (1999) and Whalen et al. (2008).
The velocity at the core radius, denoted as core ( core ), is determined as, where sn represents the total energy of the supernova ejecta and sn is the total mass of the ejecta.This equation ensures conservation of both mass and energy in the supernova ejecta.The maximum speed, denoted as max , is set to: This choice of max maintains the conservation of total mass and energy in the supernova ejecta (van Veelen et al. 2009).
As the supernova ejecta are expelled, we set a radial pulsar's wind that emanates from the centre, as described by Meyer et al. (2022a).This wind has a total power that is assumed to evolve over time, , according to the following equation: with o is the initial spin-down of the pulsar, defined as, where the initial spin period of the pulsar is set to o = 0.3 s, and its time derivative is set to o = 10 −17 s s −1 .The braking index is assumed to be = 3, which corresponds to magnetic dipole spindown, as outlined in Pacini (1967).Furthermore, we assume that the pulsar's wind maintains a constant speed of psw = 10 −2 , where denotes the speed of light in vacuum.It's important to acknowledge that this speed is significantly lower than the realistic pulsar wind speeds, which can reach , corresponding to a Lorentz factor of 10 6 (as demonstrated in Kennel & Coroniti 1984).This decision to employ a reduced pulsar wind speed can lead to noticeable alterations in the properties of its termination shock.These changes encompass compression rates, speeds, and, subsequently, shock positions and influence the development of associated instabilities.It is crucial to emphasize that our paper's primary objective is to replicate the overall evolution of the PWN accurately.This evolution is predominantly governed by the wind's momentum flux (Wilkin 1996b).In terms of magnetization, we have opted for a low value of = 10 −3 in this study, a choice in line with descriptions found in Rees & Gunn (1974), Kennel & Coroniti (1984), Slane (2017), Begelman & Li (1992) and Torres et al. (2014).This magnetization value implies that a significant portion of the magnetic field is converted into kinetic energy.It is worth noting that recent multi-dimensional simulations have demonstrated that larger magnetization values, such as = 0.01 in 2D (e.g., Komissarov & Lyubarsky 2003, 2004;Del Zanna et al. 2004, 2006) and even > 1 in 3D (Porth et al. 2014;Barkov et al. 2019a), can accurately reproduce the features of termination shocks of PWN.
However, it is essential to recognize that the value of pulsar wind magnetization remains a topic of debate, as it significantly influences PWN termination shock strength and, consequently, particle acceleration.Moreover, the magnetization of the equatorial wind zone may decrease, leading to lower magnetization values due to the annihilation of equatorial wind magnetic stripes (Coroniti 2017).By selecting such a low magnetization, as in Bucciantini et al. (2004), the Pulsar Wind Nebula tends to expand more in the equatorial plane, resulting in a stronger termination shock.
Furthermore, Komissarov and Lyubarsky in 2003-2004and Del Zanna et al. 2004, showed that the properties of the inner nebula can only be recovered with 2D simulations if the injected magnetization is larger than 0.01.
The magnetic field is assumed to have only a toroidal component.The total kinetic energy, magnetic field strength, and kinetic energy are functions of the radial distance and polar angle , as described in Komissarov & Lyubarsky (2004).
Our choice of a spherically symmetric supernova explosion allows us to assume that the neutron star is at rest at the location of the explosion and neglects any potential kick velocity resulting from asymmetries in the explosion.
Numerical methods
To investigate the evolution of the PWN within the circumstellar medium of its static progenitor star that is surrounded by a magnetized external medium, we follow the strategy we used in Meyer et al. (2015Meyer et al. ( , 2020) ) and that we extended after to PWN in Meyer & Meliani (2022).The magneto-hydrodynamical simulations are conducted in a 2.5-dimensional, axisymmetric cylindrical coordinate system.The simulation box extends over the range [; max ] × [ min ; max ] and is discretized using a uniform grid of R × z cells.Consequently, the spatial resolution is consistent along both directions, with each grid cell having a size of Δ = max / R .We employ two different spatial resolutions throughout the evolutionary process.During the progenitor star wind phases, the circumstellar medium is resolved using a grid resolution of R = 2000 and z = 4000 cells.The stellar wind is implemented as an internal boundary condition within a sphere centred at the origin of the computational domain, with a radius of 20Δ, following the standard procedure outlined in Comerón & Kaper (1998).
At the immediate pre-supernova stage, we remap the solution for the circumstellar medium onto a finer grid with R = 3000 and z = 6000 cells.The supernova ejecta is confined within a central sphere of radius max , as described in section 2.1.Simultaneously, the pulsar wind is imposed within a sphere of radius ns wind = 20Δ, also detailed in section 2.1.Due to our choice of an asymmetric coordinate system, we are compelled to align the directions of the pulsar spin axis and the symmetry axis of the computational domain to be the same.
In this paper, we study the evolution of the circumstellar medium influenced by the magnetized wind emitted by a massive star with a mass of 35 M ⊙ in two distinct types of external medium: the magnetized and unmagnetized warm phases of the Galactic plane in the Milky Way.We refer to these models as Run-35-HD-0.79-PWNand Run-35-MHD-0.79-PWN.In the magnetised external medium case, the adopted strength of the background magnetic field is set to that measured in the spiral arms of the Galaxy, with an average strength of ISM = 7 G (see Draine 2011).The main parameters utilized in the two cases investigated in this paper are provided in Table 1.For a more comprehensive description of the model and the implemented strategy, please refer to Meyer et al. (2023) and Meyer & Meliani (2022), where detailed explanations can be found.
The numerical simulations are conducted using the pluto code (Mignone et al. 2007(Mignone et al. , 2012;;Vaidya et al. 2018)2 and we solve the following set of equations, and with the gas density , velocity , momentum = and magnetic field , as well as the the total pressure and the energy of the gas The sound speed of the medium reads where the adiabatic index is = 5/3.Last, radiative cooling by optically-thin processes and photo-heating are included into the equations via the term Φ(, ), with the gas temperature , accounting for the prescriptions of Meyer et al. (2014).Regarding the cooling/heating processes of the gas, we assume the gas to be optically thin throughout the entire progenitor's life.After this point, with the launch of the pulsar wind, the cooling and heating mechanisms are disabled.We employ a Godunov-type numerical scheme with the Harten-Lax-van Leer approximate Riemann solver (HLL) and utilize the 8-waves magnetic field formulation (Powell 1997).For time integration, a third-order Runge-Kutta scheme is employed, with the time step controlled by the Courant-Friedrichs-Lewy (CFL) number.The numerical simulations are performed at the North-German Supercomputing Alliance (HLRN3 ) using the LISE cluster in Berlin, which is equipped with Cray XC40/30 processors.
RESULTS
In this section, we will analyze the results of the evolution of the PWN within the supernova remnant and circumstellar medium of the progenitor star in both the unmagnetized and magnetized cases.Our focus will be on investigating the influence of the magnetic field of the progenitor star and the external medium on the shape and dynamics of the PWN.
Model with unmagnetised ISM
In Figure 1, the density contour is shown for the unmagnetized case Run-35-HD-0.79-PWN(left panels) and the magnetized case Run-35-MHD-0.79-PWN(right panels) at different evolution times, from to top to the bottom.The density contour is represented in logarithmic scale in cm −3 units.In both cases, the red contour marks indicate the region of the plerionic supernova remnants where the contribution of the pulsar wind reaches 50 times of the number density.
In Figure 1a (top-left), we present the pre-supernova circumstellar medium.At this stage, it forms a large-scale quasi-spherical stellar bubble (Weaver et al. 1977), and its spherical forward shock extends to distances of approximately 90 pc.Throughout the star's evolution, the stellar wind interacts strongly with the ambient medium.Each phase of evolution contributes to the formation of successive shock structures, which appear in order from the farthest to the nearest region to the star.The thick and dense shell located farthest from the star, with a radial extent of ⩾ 50 pc, is the result of the interaction between the stellar wind and the ISM, and it occurs mainly during the main-sequence phase (Freyer et al. 2003;Dwarkadas 2005;Freyer et al. 2006;Dwarkadas 2007).In the central region, within a radius of less than 20 pc, a low-density cavity is formed due to the continuous outflow of the free stellar wind during the Wolf-Rayet phase.This cavity is surrounded by successive dense shells resulting from the interactions between the Wolf-Rayet wind and the slower wind from the preceding red-giant phase.The first shell, extending to approximately 35 pc, is dense and exhibits unstable behaviour.Subsequently, a second, less dense shell is formed due to the interaction between the red-giant wind and the main-sequence wind.Additionally, the main-sequence wind interacts with the surrounding ambient medium, forming an external dense shell that is limited by the contact discontinuity surface.
It is worth noting that the contact discontinuity, which marks the interface between the wind and the ISM, exhibits a slightly aspherical morphology, particularly in the region close to the symmetry axis.This aspherical shape is influenced by the presence of the magnetic field and the star's rotation.The variations between the bubbles depicted in Fig. 1a and Fig. 1 of Meyer et al. (2022a) highlight this effect.Furthermore, the grid's proximity to the near-polar axis amplifies this asymmetry.Moving on to Fig. 1c (middle-left), we can observe the supernova remnant at 25 kyr after the explosion.The expanding shock wave from the supernova remnant propagates outward, sweeping up and pushing away all the previously formed dense shells associated with the successive stellar winds.As the shock wave reaches the contact discontinuity surface between the main-sequence stellar wind and the ISM, it interacts with this surface, causing reflection, as described in Meyer et al. (2015Meyer et al. ( , 2021a)); Meyer & Meliani (2022).This interaction and reverberation contribute to the observed structure and morphology of the supernova remnant.
After the supernova explosion, a pulsar wind with high initial mechanical luminosity, psr,0 = 10 38 ergs −1 , is launched.However, this luminosity decreases over time according to Eq. 21 of Pacini (1967).This pulsar wind interacts with the dense supernova ejecta (van der Swaluw et al. 2004), resulting in the formation of a complex structure as described in Meyer & Meliani (2022) for a runaway progenitor star with a zero-age main-mass of 20 M ⊙ .Within this structure, the central region of the plerion is occupied by the freely-expanding pulsar wind.Surrounding the central region, a shell of shocked pulsar wind is formed, resulting from the interaction of the pulsar wind with the expanding supernova remnant.A pulsar wind termination shock is formed at the interface between the unperturbed pulsar wind and the shocked pulsar wind.The outermost region of the pulsar wind nebula behind the termination shock contains the contact discontinuity.This contact discontinuity marks the interface between the supernova ejecta and the shocked pulsar wind (depicted by the red contour in Fig. 1).Beyond the contact discontinuity, a transmitted pulsar wind forward shock propagates through the still unshocked supernova ejecta and further travels into the surrounding medium.
The pulsar wind contact discontinuity undergoes expansion to larger radii due to the fast rotation of the magnetized neutron star.This expansion leads to the characteristic shape with an equatorial torus and an elongated polar jet, as found by Komissarov & Lyubarsky (2004); Del Zanna et al. (2006); Porth et al. (2014); Olmi et al. (2016).However, it's important to note that due to limitations in the numerical scheme applied to the 2D symmetry axis, the jet along the polar axis may appear more elongated than it would in a full 3D simulation.Nevertheless, despite these limitations, the general behaviors of the PWN remains accurate.This shape can be observed at a later time, specifically 45 kyr after the explosion, as shown in Fig. 1e.As the contact discontinuity surface expands, it encounters Rayleigh-Taylor instabilities due to the significant differences in density and velocity between the pulsar wind and the supernova ejecta.These instabilities are further amplified by the reverberation of the reverse shock from the supernova ejecta, as illustrated in Fig. 1e.
Model with magnetized ISM
During the main-sequence phase of a massive star, the influence of the ISM magnetic field becomes particularly significant.During this phase, the interaction between the stellar wind and the magnetized ISM carves out a large-scale circumstellar wind bubble.This wind bubble plays a crucial role in shaping the propagation of the supernova forward shock.Additionally, the wind bubble's presence influences the pulsar wind's dynamics, further highlighting the interplay between the stellar wind, the ISM magnetic field, and the subsequent evolution of the system.We will describe it in detail in the following.In Fig. 1b (top-right), we can observe the circumstellar medium surrounding the massive star in the presence of a magnetized ISM, as represented in the model Run-35-MHD-0.79-PWN.The black arrows indicate the magnetic field lines of the ISM, which are initially aligned with the polar axis.The overall structure of the circumstellar medium in the presence of the magnetized ISM remains similar to the unmagnetized model (Run-35-HD-0.79-PWN,Fig. 1a).However, the morphology of the shocked shells within the low-density cavity, up to the contact discontinuity between the shocked stellar wind and the shocked ISM, appears to be more elongated along the polar axis due to the influence of the ISM magnetic field.
Indeed, as the expanding stellar bubble interacts with the magnetized ISM, it compresses the magnetic field lines, increasing magnetic pressure and tension along the polar axis.This phenomenon has been extensively studied and described in detail in van Marle et al. (2015b).During the last evolution phase, when the Wolf-Rayet wind material reaches the main-sequence termination shock, it undergoes reflection near the equator.This anisotropic reflection causes a change in the direction of propagation of the shocked material, resulting in the loss of the initially spherical shape of the shocked shell from the Wolf-Rayet wind.The interaction with the magnetized ISM further influences the shape and dynamics of the shocked shell, leading to the observed rectangular morphology of the resulting supernova ejecta.Furthermore, as the expanding supernova blast wave propagates within the elongated cavity (as shown in the left panel of Fig. 1), it interacts with the reflected dense shells resulting from the Wolf-Rayet wind and the elongated contact discontinuity.These interactions lead to anisotropic reverberation at the contact discontinuity of the supernova ejecta.As a result, the shape of the supernova ejecta becomes rectangular, reflecting the influence of the asymmetric interactions with the elongated structure induced by the magnetized circumstellar medium.This mechanism is specifically described within the context of the remnant Puppis A in Meyer et al. (2022a).
In Fig. 1d and f, the influence of the magnetized ISM on the shaping of the PWN can be observed.The ISM magnetic field, which plays a significant role in determining the morphology of the circumstellar medium and supernova blastwave, also affects the confinement and shape of the pulsar wind.Under the influence of the ISM magnetic field, the reflected and the supernova blastwave adopts a rectangular morphology along the direction perpendicular to the magnetic field.This happens because the ram pressure of the supernova ejecta is directed towards the polar axis, causing compression and confinement of the pulsar wind in that direction.In contrast, in the direction parallel to the magnetic field, the pressure exerted by the supernova ejecta is lower, resulting in a more extended shape of the PWN.This interplay between the magnetic field of the ISM, the reverse shock, and the pulsar wind contributes to the complex and asymmetric morphology observed in the PWN, as depicted in Fig. 1d and f.Indeed, the presence of a magnetized ISM influences the propagation of the PWN, resulting in distinct behavior compared to an unmagnetized ISM.In the magnetized ISM model (Run-35-MHD-0.79-PWN), the expansion of the PWN is less pronounced in the equatorial plane compared to the hydrodynamical simulation (Run-35-HD-0.79-PWN),as illustrated in Fig. 1c and d.As time progresses, at a later evolution time of 45 kyr as depicted in Fig. 1f, the pulsar wind continues to be channeled along the direction of the ISM's magnetic field, leading to the formation of a stretched PWN.The presence of the ISM magnetic field affects the dynamics of the PWN and leads to enhanced instabilities at the termination shock of the pulsar wind.These instabilities, which arise from the interaction between the pulsar wind and the magnetized ISM, are more pronounced in the magnetized ISM model (Run-35-MHD-0.79-PWN)compared to the hydrodynamical simulation (Run-35-HD-0.79-PWN).
Our models provide compelling evidence that the morphology of the PWN inside a subsequent supernova, when the progenitor massive static star is located in the Galactic plane, is strongly influenced by the distribution of the magnetic field in the ambient medium.The contrasting evolution and instabilities observed in the magnetized and unmagnetized cases emphasize the significant role played by the interstellar medium's magnetic field in shaping the dynamics and morphology of the PWN.These findings underscore the importance of considering the magnetic field effects when studying the evolution of PWN and their interaction with the surrounding environment.
DISCUSSION
In this section, we will discuss the applications and limitations, of our model.We will also examine the non-thermal characteristics of the simulated pulsar wind nebulae and compare our findings to existing observational data.By doing so, we aim to provide a comprehensive analysis of our model's strengths and weaknesses and assess its compatibility with the observed properties of pulsar wind nebulae.
Model limitations
Let us first consider four aspects central to the model.First, the simulations conducted in this study are two-dimensional, assuming axisymmetry and not accounting for variations in the supernova progenitor or the pulsar's spin.While this approach offers computational efficiency and valuable insights, it's essential to acknowledge that a fully three-dimensional treatment is not only important to capture the realistic properties of the ISM but also crucial for a comprehensive understanding of the pulsar wind nebula and the supernova remnant.A 3D model would better represent the complex interactions of the PWN and supernova remnant with the surrounding medium, including the realistic behaviour of magnetic fields.Moreover, the magnetization of the pulsar wind is a fundamental parameter that plays a significant role in the evolution of the PWN and supernova remnant.While we have considered a weak magnetization of the pulsar wind in this study, it's essential to discuss its implications thoroughly.State-of-the-art simulations in both 2D and 3D have shown that the strength and longitudinal variation of magnetization are subjects of debate (Coroniti 2017;Olmi et al. 2016).Future investigations will explore the influence of higher magnetization on the evolution of the PWN in its interaction with supernova remnant and circumstellar medium.
Furthermore, we acknowledge that our modelling of the pulsar wind involves simplified assumptions.A more realistic modelling approach should also involve a better physical description of the wind properties, including its relativistic speed and composition.Addressing these aspects will be crucial in future research for a more comprehensive understanding of the system's dynamics and morphology.Another aspect to consider is the absence of pulsar motion in the simulations.Incorporating pulsar motion would introduce additional complexities and offer a more realistic representation of the interaction between the pulsar wind and the surrounding medium.Furthermore, accounting for the oblique rotation of the pulsar's magnetic axis would allow for a more accurate reproduction of the observed characteristics of the PWN.These are important considerations for future research.The chosen two-dimensional setup and static pulsar position provide valuable insights into general behaviour and trends.However, future investigations can explore the impact of three-dimensional effects, pulsar motion, higher magnetization, and improved modelling of the pulsar wind to obtain a more comprehensive characterization of the system's dynamics and morphology.
Non-thermal emission
To enhance the comparison between our MHD models of the PWN embedded in an elongated circumstellar medium and the available observational data, we performed radiative transfer calculations to gen-erate synthetic images that accurately capture the non-thermal emissions, particularly synchrotron emissions in the radio band.These calculations were specifically carried out at the different evolution stages of the PWN that were previously discussed.The synchrotron radio emission was calculated by considering a non-thermal electron spectrum described by the expression, where here, represents the gas number density, is the spectral index and denotes the energy of the non-thermal electrons in the postshock region of the advancing blast wave.The emission coefficient is given by: being the observed frequency and ⊥ the component of the magnetic field perpendicular to the observer's line-of-sight.Intensity maps were obtained by performing the projection given by, where obs denote the inclination angle of the remnant with respect to the sky plane.These calculations were conducted using the radiative transfer code RADMC-3D4 , and the methodology described in detail by Meyer et al. (2022a).Note that since the investigated numerical simulations are non-relativistic, they do not account for the beaming effect, and this issue will be addressed in our upcoming work.
Figure 2 illustrates the normalized emission maps representing our numerical simulations, specifically Run-35-HD-0.79-PWN(left-hand panels) and Run-35-MHD-0.79-PWN(right-hand panels), showcasing the non-thermal synchrotron emissions in the radio waveband.The top panels correspond to 25kyr, while the bottom panels depict a time of 45 kyr.The intensity is plotted assuming an observer angle ( obs ) of 45 • , representing the angle between the plane of the sky and the plane of symmetry of the supernova remnant.Figure 2a displays the pulsar wind nebula at the age of 25 kyr within an unmagnetized ISM.As highlighted in Meyer & Meliani (2022), no trace of the circumstellar medium is visible in the emission maps because of the absence of the ISM magnetic field.Indeed, the emission map focuses on the pulsar wind and its associated nebula.The image reveals an ovoidal shape, with slightly brighter regions observed at the polar and dimmer regions in the equatorial plane.This brightness variation can be attributed to the toroidal component of the pulsar wind, which applies lateral pressure on the pulsar wind material, causing it to be displaced sideways in the equatorial plane.
At a later evolution time, with a pulsar age of 45 kyr, the radio synchrotron map of the PWN in an unmagnetized ISM is shown in Fig. 2c.The PWN exhibits a jet-torus-like shape, with brighter regions observed at the polar zones.These bright regions result from the strong interaction between the pulsar wind and the supernova ejecta along the pulsar's rotational axis.On the other hand, in the equatorial plane, the strong pulsar wind, driven by the centrifugal force and toroidal magnetic field pressure (Komissarov & Lyubarsky 2004), extends outward.The gas is more diluted in this region, which explains why the equatorial plane is not the brightest region in the hydrodynamical plerion model Run-35-HD-0.79-PWN.In the case of a magnetized ISM, significant changes are observed in the synthetic radio image.The corresponding image is shown at 25 kyr in Fig. 2b.It reveals the presence of two bright arcs parallel to the direction of the ISM magnetic field.These arcs, observed in our axisymmetric setup and aligned with the pulsar's rotation axis, are formed as a result of the interaction between the supernova ejecta and the contact discontinuity between the stellar wind and the magnetized ISM within the elongated cavity (Meyer et al. 2022a).The influence of the ISM magnetic field plays a crucial role in shaping these arcs, ultimately forming a PWN enclosed within a rectangular supernova remnant.
Fig. 2d depicts the older remnant within a magnetized ambient medium, showcasing characteristics of both a supernova shock wave that has interacted with the cavity's border and the growing pulsar wind nebula inside it.The presence of the pulsar wind prevents the reverberation of the supernova shock wave towards the centre of the explosion, as described in Meyer et al. (2022a), resulting in an empty region near the rotating neutron star.The overall morphology of the plerionic remnant still exhibits features of a rectangularly reflected supernova shock wave, with the pulsar wind distributed as an elongated structure.The brightest regions are observed as two polar spots located beyond the termination shock of the pulsar wind.
Comparison with observations
The models presented in this study focus on the evolution of the circumstellar medium surrounding static high-mass stellar objects that eventually undergo supernova explosions, leaving behind a static pulsar.We aim to investigate the formation of elongated pulsar wind nebulae, similar to those observed in Igoshev (2020).It is important to note that these elongated PWN, where the leptonic wind is channelled into the cavity created by the stellar wind shaped by the organized ISM magnetic field, should not be confused with the long tails observed behind the bow shocks of runaway pulsars (e.g., Bucciantini 2002Bucciantini , 2018;;De Luca et al. 2013;Barkov et al. 2019a).
The class of torus/jet-like pulsar wind nebulae, as classified in the catalogue based on Chandra X-ray data, provides strong support for the conclusions drawn from our model.These objects naturally exhibit both an equatorial structure and a jet/counter-jet system, as observed in studies such as Kargaltsev & Pavlov (2010); Kargaltsev et al. (2012) and references therein.Notable examples include the famous Crab nebula with its twisted double jet (Mignone et al. 2013) and the Vela supernova remnant.Magneto-hydrodynamical models have successfully reproduced such structures without considering the stellar wind or supernova ejecta as initial conditions, as demonstrated in Klingler et al. (2014).
The influence of the environment on the morphology of pulsar wind tails/jets has been demonstrated in cases such as the Geminga pulsar wind nebula, which exhibits two curved antennae representing its jets/counter-jet that bend under the influence of the bow shock formed due to the interaction between the fast pulsar motion and the surrounding medium (Posselt et al. 2017).Similar effects have been observed in the case of B0355+54 (Klingler et al. 2014).We propose that the pre-supernova environment plays a similar role, and further modelling efforts are highly desirable, as discussed in Meyer & Meliani (2022).The peculiar morphology of certain pulsar wind nebulae, which cannot be classified as either torus/jet-like objects or bow shock/tail systems, may result from their interaction with a particularly complex surrounding medium.This medium could be shaped by the asymmetric stellar wind during the evolved phases of the progenitor's pre-supernova life, which influences the forward shock of the ejecta and causes aspherical propagation (Velázquez et al. 2023;Villagran et al. 2023).
CONCLUSION
This paper presents a study on the modelling of PWN in core-collapse supernova remnants associated with static massive stars in the warm phase of a magnetized spiral arm of the Milky Way.By utilizing 2.5-dimensional simulations, we demonstrate that the reflection of the supernova blast wave against the elongated contact discontinuity between the stellar wind and magnetise ISM of the magnetically elongated stellar wind cavity in the progenitor's circumstellar medium has a significant impact on the morphology of the resulting PWN.This phenomenon might be responsible for forming rectangular supernova remnants, such as Puppis A, as described in Meyer et al. (2022a).The reverberation of the shock wave leads to the compression of the pulsar wind and imposes a preferred expansion direction perpendicular to the plane of the pulsar's spin.As a result, the PWN within the rectangular supernova remnant becomes elongated rather than adopt-ing the jet-torus-like shape typically observed in previous studies, as described by Komissarov & Lyubarsky (2004).
The radio synchrotron emission maps of plerionic supernova remnants exhibit a complex morphology that evolves over time.Initially, the morphology is characterized by a young, growing, ovoidal PWN combined with the rectangular shape produced by the interaction between the supernova ejecta and the walls of the unshocked stellar wind cavity of the progenitor star.This interaction gives rise to the rectangular appearance observed in Puppis A, as discussed in Meyer et al. (2022a).As time progresses, the influence of the ISM magnetic field becomes more prominent in shaping the remnant's morphology.The channelling effect of the pulsar wind into the elongated circumstellar wind cavity of the progenitor extends along the pulsar's rotation axis.Instabilities at the interface between the pulsar wind and the ejecta result in a knotty nebula, manifesting as bright spots within the plerion.The irregular shapes observed in many pulsar wind nebulae may indicate the complex nature of the surrounding environment, influenced by both the distribution of material in the ambient medium and the stellar wind history of the supernova progenitor.In this complex environment, the interaction between the supernova ejecta and the pulsar wind gives rise to observed irregular morphologies.
Figure 1 .
Figure 1.Number density fields in our magneto-hydrodynamical simulation of the pulsar wind nebula forming in the supernova remnant of a static 35 M ⊙ star rotating with Ω ★ /Ω K = 0.1 in an unmagnetised (left) and magnetised (right) ISM.The red contours highlight the region with a 50% contribution of pulsar wind material, i.e. the contact discontinuity.The streamlines in the right-hand side of panels b,d,f mark the ISM magnetic field lines.
Figure 2 .
Figure 2. Normalised radio synchrotron emission map of the plerionic supernova remnants with an inclination angle of obs = 45 • between the observer's line of sight and the nebula's symmetry axis.The left-hand panels correspond to the hydrodynamical model (Run-35-HD-0.79-PWN),and right-hand panel to the model with magnetised ISM (Run-35-MHD-0.79-PWN).The top figures show the remnants at time 25 kyr and the bottom figures display them at time 45 kyr.
Table 1 .
List of models in this study.All simulations assume a rotating static massive star of mass ★ at solar metallicity, in a medium of number density ISM and organised magnetic field strength ISM .The initial rotation rate of the central massive star is Ω ★ /Ω K = 0.1. | 10,651.2 | 2023-11-12T00:00:00.000 | [
"Physics"
] |
Experimental and Numerical Investigations of High-Speed Projectile Impacts on 7075-T651 Aluminum Plates
Simulation of the material failure under high strain rate conditions is one of the most difficult problems in the finite element analyses, and many researchers have tried to understand and reproduce dynamic material fracture. In this study, we investigate a failure criterion that minimizes the mesh dependency at high strain rates and incorporates the criterion into the Johnson-Cook constitutive relationship by developing a user-defined material model. Impact tests were performed using a gas-gun system in order to investigate the response of the 7075-T651 aluminum plate in high-speed collision. On the other hand, numerical simulations are carried out by considering various element sizes and the relationship between element size and failure strain is inversely obtained using numerical results. By accommodating the relationship into the damage model and implementing in the user-defined material model, mesh dependency is significantly reduced, and sufficient accuracy is achieved with alleviated computational cost than the existing damage model. This study suggests an element size-dependent damage criterion that is applicable for impact simulation and it is expected that the criterion is useful to obtain accurate impact responses with a small computational cost.
Introduction
An impact is defined as a mechanical process that involves the collision of two or more bodies. The relevant engineering has a wide range of applications, such as the safety assessment of buildings and nuclear reactor vessels, the assessment of the crashworthiness of vehicles, the protection of cargo and barriers, and the design of military vehicles and armor systems. As opposed to static or conventional dynamic loading, forces created by collisions are exerted and removed in an extremely short time duration. Penetration is described as the entrance of an object into a target body without passing through the body, resulting in the embedment of the striker and the formation of a crater, whereas perforation is defined as the complete escape of the target after impact.
Numerous experimental and numerical studies on impact phenomenon have been conducted by many researchers. Backman and Goldsmith studied a comprehensive survey of the mechanics of penetration of projectiles into targets [1]. An empirical formula for determining projectile penetration into steel barriers is proposed and a method for determining the ballistic limit for penetrating a target from penetration depth is presented [2]. Johnson et al. conducted research on the quasi-static piercing of metal plates [3,4]. Corbett et al. researched the penetration and perforation of plates and cylinders by free-flying projectiles traveling at sub-ordnance velocities [5]. A numerical analysis of the ballistic perforation of an impactor through steel plates was performed by Lee et al. using the peridynamic the relationship between the element size and failure strain is applied to the damage model, and we verify the efficiency and accuracy of the damage model.
Johnson-Cook Material Model
The aluminum plate is constructed using the simplified Johnson-Cook model. The Johnson-Cook material model represents the constitutive relationship for metals and is widely used to describe the dynamic behavior of the materials, such as impact and penetration. The advantage of this material model is that it is relatively easy to determine the material constants [27]. In addition, the Johnson-Cook model has been applied to various commercial finite element analysis software because of its low computational cost due to the simple form [28]. The flow stress σ y of the model is expressed as: where a, b, c, n, and m are the user-defined parameters, ε p is the effective plastic strain, . ε * is the effective plastic strain rate, and T * is defined as T * = (T − T room )/(T melt − T room ) [29]. The parameter m is ignored in the simplified Johnson-Cook model. If excessive deformation occurs during the finite element analysis, the analysis becomes unstable, or the computational cost significantly increases. To solve this problem, a technique of removing the elements with excessive deformation has been used, and the failure strain is used as one of the criteria for the removal. The failure strain ε f in the Johnson-Cook model is expressed by: where D i are constants and σ * is the triaxiality of stress.
Element Dependent Failure Strain
During the collision, the kinetic energy of the projectile is transferred to the strain energy of the target, and the projectile proceeds with the residual kinetic energy in the perforation failure mode. Therefore, the factors influencing the strain energy transfer to the target affect the residual velocity of the projectile. The failure strain affects the strain energy transferred. The reason is that the strain energy accumulates until the strain of the element reaches the failure strain. Also, the size of the element affects the volume of the removed elements and strain energy of the target. In order to achieve a high accuracy for the simulation, sufficiently small elements and a substantial computational cost are essential. However, there is a limitation on the applicable element size according to the simulation scale and, therefore, the appropriate value of the failure strain should be determined by the element size. In this study, we propose a failure strain criterion incorporating the impact velocity and element size (element dependent failure strain: EDFS) to accurately evaluate the impact response regardless of changes in element size as: where e is Euler's number, v * is a dimensionless parameter of the initial velocity, and h * is a dimensionless parameter of the element size. The initial velocity v i and the element size h are normalized by the reference velocity v ref and the reference size h ref , respectively, as v * = v ref /v i , and h * = h/h ref .
User Defined Material Model (UMAT) for LS-DYNA
The material model for impact simulation has been implemented into LS-DYNA, a commercially explicit dynamic finite element code. LS-DYNA has a UMAT option where a user can implement a new material model as a subroutine. The Johnson-Cook constitutive relationship is implemented in UMAT for LS-DYNA in order to incorporate the damage criterion with the Johnson-Cook constitutive relationship. At each time step, the equation of motions for the dynamic system are calculated at the integration points of each element. The strain increment is determined by the calculated displacement of the node and is used as the variable for the UMAT subroutine. The stress increment at each time step is calculated using the stress update algorithm according to the strain increment. In this process, the internal variables of the constitutive model are also updated.
Experimental Setup
A gas-gun system located at Korea Institute of Civil Engineering and Building Technology (KICT), shown in Figure 1a, was utilized for the high-speed impact test. The gas-gun is comprised of a high-pressure chamber that can pressurize nitrogen gas to the working pressure 2000 psi and a 40 mm diameter gun-barrel. The gas-gun system is fabricated to a vacuum chamber that has the inner diameter 1000 mm and length 1485 mm, and the jig frame to fix an aluminum target is located in the chamber. The aluminum plate was installed to the steel frame set in the vacuum chamber as shown in Figure 1b, and the projectiles fired through the gas-gun by releasing gas pressure momentarily. The material properties of the 7075-T651 aluminum are summarized in Table 1. The gas-gun is capable of propelling a 200 g projectile at speed up to 400 m/s. The projectile set consists of a warhead made of steel and a sabot made of polycarbonate, as shown in Figure 1c. The steel warhead has a diameter of 36 mm and a thickness of 15 mm and weighs 120 g. The sabot has a groove in the shape of a cylinder on the front face to mount the warhead, and is made of 40 mm in diameter and 80 mm in length so that it can be fired through the barrel. In order to minimize the influence of various variables of geometric shape, such as nose and projectile shapes, the cylindrical projectile, which is the simplest form, is used for the impact tests. Each aluminum target plate was installed to the jig frame in the vacuum chamber, and the movement of the projectile propelled from the gas-gun was recorded through the side window using a high-speed camera (Phantom V711, Vision Research, NJ, USA). The videos were recorded at 20,000 fps with a resolution of 1088 by 400 in grayscale. step is calculated using the stress update algorithm according to the strain increment. In this process, the internal variables of the constitutive model are also updated.
Experimental Setup
A gas-gun system located at Korea Institute of Civil Engineering and Building Technology (KICT), shown in Figure 1a, was utilized for the high-speed impact test. The gas-gun is comprised of a high-pressure chamber that can pressurize nitrogen gas to the working pressure 2000 psi and a 40 mm diameter gun-barrel. The gas-gun system is fabricated to a vacuum chamber that has the inner diameter 1000 mm and length 1485 mm, and the jig frame to fix an aluminum target is located in the chamber. The aluminum plate was installed to the steel frame set in the vacuum chamber as shown in Figure 1b, and the projectiles fired through the gas-gun by releasing gas pressure momentarily. The material properties of the 7075-T651 aluminum are summarized in Table 1. The gas-gun is capable of propelling a 200 g projectile at speed up to 400 m/s. The projectile set consists of a warhead made of steel and a sabot made of polycarbonate, as shown in Figure 1c. The steel warhead has a diameter of 36 mm and a thickness of 15 mm and weighs 120 g. The sabot has a groove in the shape of a cylinder on the front face to mount the warhead, and is made of 40 mm in diameter and 80 mm in length so that it can be fired through the barrel. In order to minimize the influence of various variables of geometric shape, such as nose and projectile shapes, the cylindrical projectile, which is the simplest form, is used for the impact tests. Each aluminum target plate was installed to the jig frame in the vacuum chamber, and the movement of the projectile propelled from the gas-gun was recorded through the side window using a high-speed camera (Phantom V711, Vision Research, NJ, USA). The videos were recorded at 20,000 fps with a resolution of 1088 by 400 in grayscale.
Impact Tests on 7075-T651 Aluminum Plates
Impact tests using the gas-gun system were carried out on 7075-T651 aluminum plates of 400-mm width, 400-mm height, and 5-or 10-mm thicknesses, respectively. We performed 5 impact tests for each target thickness with different impact velocities. Initial and residual velocities are measured by observing the travel distance of the projectile per each frame using images captured from the high-speed camera. The initial velocity was measured as 152.2 m/s when the projectile was propelled with the gas pressure of 120 psi, and by increasing the gas pressure to 1500 psi, the initial velocity increased to 372.8 m/s. Experimental results are fitted to a model suggested by Lambert [30] to represent the residual velocity of the projectile as a function of impact velocity, as shown in Figure 2. The ballistic limits are obtained as 132.7 and 194.2 m/s for 5 and 10 mm thickness plates, respectively. The images after perforation captured by the high-speed camera are shown in Figures 3 and 4, and the measured initial and residual velocities are summarized in Tables 2 and 3. Videos of the impact tests are provided as Supplementary materials of Video S1-S8.
Impact Tests on 7075-T651 Aluminum Plates
Impact tests using the gas-gun system were carried out on 7075-T651 aluminum plates of 400mm width, 400-mm height, and 5-or 10-mm thicknesses, respectively. We performed 5 impact tests for each target thickness with different impact velocities. Initial and residual velocities are measured by observing the travel distance of the projectile per each frame using images captured from the highspeed camera. The initial velocity was measured as 152.2 m/s when the projectile was propelled with the gas pressure of 120 psi, and by increasing the gas pressure to 1500 psi, the initial velocity increased to 372.8 m/s. Experimental results are fitted to a model suggested by Lambert [30] to represent the residual velocity of the projectile as a function of impact velocity, as shown in Figure 2. The ballistic limits are obtained as 132.7 and 194.2 m/s for 5 and 10 mm thickness plates, respectively. The images after perforation captured by the high-speed camera are shown in Figures 3 and 4, and the measured initial and residual velocities are summarized in Tables 2 and 3. Videos of the impact tests are provided as Supplementary materials of Video S1-S8.
Impact Tests on 7075-T651 Aluminum Plates
Impact tests using the gas-gun system were carried out on 7075-T651 aluminum plates of 400mm width, 400-mm height, and 5-or 10-mm thicknesses, respectively. We performed 5 impact tests for each target thickness with different impact velocities. Initial and residual velocities are measured by observing the travel distance of the projectile per each frame using images captured from the highspeed camera. The initial velocity was measured as 152.2 m/s when the projectile was propelled with the gas pressure of 120 psi, and by increasing the gas pressure to 1500 psi, the initial velocity increased to 372.8 m/s. Experimental results are fitted to a model suggested by Lambert [30] to represent the residual velocity of the projectile as a function of impact velocity, as shown in Figure 2. The ballistic limits are obtained as 132.7 and 194.2 m/s for 5 and 10 mm thickness plates, respectively. The images after perforation captured by the high-speed camera are shown in Figures 3 and 4, and the measured initial and residual velocities are summarized in Tables 2 and 3. Videos of the impact tests are provided as Supplementary materials of Video S1-S8. The polycarbonate sabot was destroyed at the front surface of the target during the collision, and only the steel warhead breaks through the target and proceeds toward the rear side in the case of perforation. The perforation failure modes were observed at the impact velocity of equal or more than 163.2 m/s for 5-mm thickness and 200.9 m/s for 10-mm thickness, and the penetration mode occurs at the lowest velocity for each thickness (127.0 m/s for 5-mm thickness and 150.6 m/s for 10mm thickness).
Numerical Model and Impact Simulations
In order to investigate the impact tests numerically, finite element simulations are carried out. With a nonlinear finite element program, LS-DYNA, projectiles and target plates are discretized with 8-node solid elements. The plastic-kinematic model and the simplified Johnson-Cook model [29,31] are used as material models for the cylindrical projectile and targets, respectively. Because projectiles had little deformation during the collision, the projectile is constructed by a simple plastic material model which has a Young's modulus of 200 GPa, Poisson's ratio of 0.3, and yield stress of 710 MPa. The material constants in Equation (1) are listed in Table 4 by referring to the results in [32]. The projectile set consists of a steel warhead and a polycarbonate sabot in the test, whereas only the steel warhead is modeled for the numerical simulation. To make the projectile mass quantity identical to the experiment, the total mass of the projectile set is given to the numerical model of the steel warhead by modifying the mass density. The polycarbonate sabot was destroyed at the front surface of the target during the collision, and only the steel warhead breaks through the target and proceeds toward the rear side in the case of perforation. The perforation failure modes were observed at the impact velocity of equal or more than 163.2 m/s for 5 mm thickness and 200.9 m/s for 10 mm thickness, and the penetration mode occurs at the lowest velocity for each thickness (127.0 m/s for 5 mm thickness and 150.6 m/s for 10 mm thickness).
Numerical Model and Impact Simulations
In order to investigate the impact tests numerically, finite element simulations are carried out. With a nonlinear finite element program, LS-DYNA, projectiles and target plates are discretized with 8-node solid elements. The plastic-kinematic model and the simplified Johnson-Cook model [29,31] are used as material models for the cylindrical projectile and targets, respectively. Because projectiles had little deformation during the collision, the projectile is constructed by a simple plastic material model which has a Young's modulus of 200 GPa, Poisson's ratio of 0.3, and yield stress of 710 MPa. The material constants in Equation (1) are listed in Table 4 by referring to the results in [32]. The projectile set consists of a steel warhead and a polycarbonate sabot in the test, whereas only the steel warhead is modeled for the numerical simulation. To make the projectile mass quantity identical to the experiment, the total mass of the projectile set is given to the numerical model of the steel warhead by modifying the mass density. We use the 8-node solid elements for all of the numerical models with the constant stress solid element formulation. The projectile is discretized with 2592 solid elements, and the number of elements for the target plate model is summarized in Tables 5 and 6. The targets are discretized with different sizes of elements to investigate the effect of the element size on impact simulations. To alleviate the computational cost, the inner part of the target (Figure 5b) is modeled with fine elements and is attached to the outer part ( Figure 5c) discretized with coarse elements (element size is 1.25 mm). The locations of the nodes in the inner and outer parts are not identical because the element size of the inner part varies. Therefore, the surface to surface contact condition [33] is used to attach the inner part to the outer part regardless of the nodal position. The element size of the inner part is considered as 0.31, 0.50, 0.63, 0.83, and 1.25 mm to include 4 to 16 elements in the thickness direction for the 5 mm-thick plates and 8 to 32 elements for 10 mm-thick plates. A total of 40 numerical models are constructed considering two thicknesses of the plates, four impact velocities for each thickness, and five element sizes. In order to describe the boundary condition of the plate mounted in the frame jig, all nodes in the area where the plate and the jig meet are constrained. The area is 2 cm from the outer edge of the plate and is shown in Figure 5a. The required time step is determined by the material properties and the size of the element and is constantly updated during the simulation as the material deforms [33]. All simulations are set to use the value of 0.9 times the required time step for stable analysis. We use the 8-node solid elements for all of the numerical models with the constant stress solid element formulation. The projectile is discretized with 2592 solid elements, and the number of elements for the target plate model is summarized in Tables 5 and 6. The targets are discretized with different sizes of elements to investigate the effect of the element size on impact simulations. To alleviate the computational cost, the inner part of the target (Figure 5b) is modeled with fine elements and is attached to the outer part ( Figure 5c) discretized with coarse elements (element size is 1.25 mm). The locations of the nodes in the inner and outer parts are not identical because the element size of the inner part varies. Therefore, the surface to surface contact condition [33] is used to attach the inner part to the outer part regardless of the nodal position. The element size of the inner part is considered as 0.31, 0.50, 0.63, 0.83, and 1.25 mm to include 4 to 16 elements in the thickness direction for the 5 mm-thick plates and 8 to 32 elements for 10 mm-thick plates. A total of 40 numerical models are constructed considering two thicknesses of the plates, four impact velocities for each thickness, and five element sizes. In order to describe the boundary condition of the plate mounted in the frame jig, all nodes in the area where the plate and the jig meet are constrained. The area is 2 cm from the outer edge of the plate and is shown in Figure 5a. The required time step is determined by the material properties and the size of the element and is constantly updated during the simulation as the material deforms [33]. All simulations are set to use the value of 0.9 times the required time step for stable analysis. Figure 6 shows the result of numerical analysis where the projectile collides with a 10 mm thick aluminum plate at an initial velocity of 200.9 m/s, and the simulation takes 1080 s with the element size of 0.63 mm. The physical behavior of the perforation process is represented by numerical simulations. In the first step, the overall bend of the plate occurs as the projectile pushes the target in the direction of impact. As the mass in front of the projectile is accelerated by the projectile and the elements near the shear area are damaged, the elements beyond the critical failure strain are removed, and the formed plug is separated from the target. As the projectile penetrates the target, the elements of the plate are eroded in a circular shape.
Failure Strain Value for Residual Velocity
The values of the failure strain are obtained through iterative simulations so that the residual velocities are identical with the test results. The changes in the failure strain depending on element size, impact velocity, and plate thicknesses are summarized in Figure 7. From the variation of the failure strain, it is found that the failure strain is inversely proportional to the element size and impact velocity. The value of failure strain for simulating accurate residual velocity gradually decreases as the element size, and the impact velocity increases and is sensitive to small element and low impact velocity. However, the failure strain hardly changes even when the element size varies if the impact velocity is higher than 250 m/s. In the comparison of the failure strains between the 5 mm and 10 mmthick targets with similar impact velocities shown in Figure 8, the failure strain varies similarly regardless of the thickness of the plate. The failure strain significantly decreases by increasing the element size at relatively low velocities, as shown in Figure 8a, and the failure strain decreases as the impact velocity increases. The change of the failure strain is relatively sensitive if the element is very small, which implies that the failure strain should be carefully handled when using small elements.
Failure Strain Value for Residual Velocity
The values of the failure strain are obtained through iterative simulations so that the residual velocities are identical with the test results. The changes in the failure strain depending on element size, impact velocity, and plate thicknesses are summarized in Figure 7. From the variation of the failure strain, it is found that the failure strain is inversely proportional to the element size and impact velocity. The value of failure strain for simulating accurate residual velocity gradually decreases as the element size, and the impact velocity increases and is sensitive to small element and low impact velocity. However, the failure strain hardly changes even when the element size varies if the impact velocity is higher than 250 m/s. In the comparison of the failure strains between the 5 mm and 10 mm-thick targets with similar impact velocities shown in Figure 8, the failure strain varies similarly regardless of the thickness of the plate. The failure strain significantly decreases by increasing the element size at relatively low velocities, as shown in Figure 8a, and the failure strain decreases as the impact velocity increases. The change of the failure strain is relatively sensitive if the element is very small, which implies that the failure strain should be carefully handled when using small elements.
Failure Strain Value for Residual Velocity
The values of the failure strain are obtained through iterative simulations so that the residual velocities are identical with the test results. The changes in the failure strain depending on element size, impact velocity, and plate thicknesses are summarized in Figure 7. From the variation of the failure strain, it is found that the failure strain is inversely proportional to the element size and impact velocity. The value of failure strain for simulating accurate residual velocity gradually decreases as the element size, and the impact velocity increases and is sensitive to small element and low impact velocity. However, the failure strain hardly changes even when the element size varies if the impact velocity is higher than 250 m/s. In the comparison of the failure strains between the 5 mm and 10 mmthick targets with similar impact velocities shown in Figure 8, the failure strain varies similarly regardless of the thickness of the plate. The failure strain significantly decreases by increasing the element size at relatively low velocities, as shown in Figure 8a, and the failure strain decreases as the impact velocity increases. The change of the failure strain is relatively sensitive if the element is very small, which implies that the failure strain should be carefully handled when using small elements. Figure 9 represents the comparison between failure strain values from iterative simulations and Equation (3). The reference velocity in Equation (3) is determined as 110.0 m/s that minimizes the error for all eight cases, and the reference element size ℎ is determined as the unit length in mm scale (1 mm). The curves from Equation (3) adequately express the tendency of the failure strain depending on the element size and impact velocity variation. The analytical and numerical values are in good agreement in all eight cases considering both thicknesses and impact velocities. The difference between the analytical value and the numerical value is relatively large because the change of the failure strain caused by the change of the element size is more sensitive in the case of low impact velocity. Although there are slight differences in the low-velocity region, the analytical and numerical values tend to agree well at different thicknesses and various collision velocities.
Implementation of EDFS in Johnson-Cook Constitutive Model
We implement the constitutive relationship of the Johnson-Cook material model in the UMAT. The material failure criterion, EDFS, is defined in the UMAT subroutine during the impact simulation by taking the element size and the impact velocity as input variables. In order to verify the UMAT subroutine, tension and impact simulations are performed for comparison with existing Johnson-Cook model. As a result of comparing the stress-strain curves of the tensile simulation, the Johnson-Cook model is simulated to be perfectly matched with UMAT as shown in Figure 10. In the impact Figure 9 represents the comparison between failure strain values from iterative simulations and Equation (3). The reference velocity in Equation (3) is determined as 110.0 m/s that minimizes the error for all eight cases, and the reference element size ℎ is determined as the unit length in mm scale (1 mm). The curves from Equation (3) adequately express the tendency of the failure strain depending on the element size and impact velocity variation. The analytical and numerical values are in good agreement in all eight cases considering both thicknesses and impact velocities. The difference between the analytical value and the numerical value is relatively large because the change of the failure strain caused by the change of the element size is more sensitive in the case of low impact velocity. Although there are slight differences in the low-velocity region, the analytical and numerical values tend to agree well at different thicknesses and various collision velocities.
Implementation of EDFS in Johnson-Cook Constitutive Model
We implement the constitutive relationship of the Johnson-Cook material model in the UMAT. The material failure criterion, EDFS, is defined in the UMAT subroutine during the impact simulation by taking the element size and the impact velocity as input variables. In order to verify the UMAT subroutine, tension and impact simulations are performed for comparison with existing Johnson-Cook model. As a result of comparing the stress-strain curves of the tensile simulation, the Johnson-Cook model is simulated to be perfectly matched with UMAT as shown in Figure 10. In the impact
Implementation of EDFS in Johnson-Cook Constitutive Model
We implement the constitutive relationship of the Johnson-Cook material model in the UMAT. The material failure criterion, EDFS, is defined in the UMAT subroutine during the impact simulation by taking the element size and the impact velocity as input variables. In order to verify the UMAT subroutine, tension and impact simulations are performed for comparison with existing Johnson-Cook model. As a result of comparing the stress-strain curves of the tensile simulation, the Johnson-Cook model is simulated to be perfectly matched with UMAT as shown in Figure 10.
Comparative Studies with Johnson-Cook Damage Model
We verify the effectiveness and efficiency of the failure criteria, EDFS, through the comparative study between EDFS and Johnson-Cook damage model. Impact simulations are carried out for two thicknesses of aluminum plate and four impact velocities per each thickness, and the accuracy of the solution and computational efficiency are investigated. The parameters for the Johnson-Cook damage model are determined as = 0.096, = 0.049, = 3.465, = 0.016, and = 1.099 by referring to the previous study [32]. The value of in EDFS is 110 m/s, which is determined from the parametric study in Section 4.3.
The residual velocities of the impact simulations for 5 mm aluminum plates using the EDFS and Johnson-Cook damage model are summarized in Figures 11 and 12. Impact simulations using both the Johnson-Cook model and EDFS predict the residual velocity over the entire range. However, the error of the residual velocity from the Johnson-Cook model increases as the element size increases, and the velocity is predicted to be lower than the impact tests. In the case of low impact velocity ( = 163.2 m/s), the error of the EDFS model is relatively large compared to the Johnson-Cook model. The reason is that when the impact velocity is not significantly higher than the ballistic limit, the residual velocity varies sensitively with the failure strain. In the other cases except for the low impact velocity, the EDFS model predicts the residual velocity much more accurately than the Johnson-Cook model, and the effect of element size is also much smaller.
Comparative Studies with Johnson-Cook Damage Model
We verify the effectiveness and efficiency of the failure criteria, EDFS, through the comparative study between EDFS and Johnson-Cook damage model. Impact simulations are carried out for two thicknesses of aluminum plate and four impact velocities per each thickness, and the accuracy of the solution and computational efficiency are investigated. The parameters for the Johnson-Cook damage model are determined as D 1 = 0.096, D 2 = 0.049, D 3 = 3.465, D 4 = 0.016, and D 5 = 1.099 by referring to the previous study [32]. The value of v ref in EDFS is 110 m/s, which is determined from the parametric study in Section 4.3.
The residual velocities of the impact simulations for 5 mm aluminum plates using the EDFS and Johnson-Cook damage model are summarized in Figures 11 and 12. Impact simulations using both the Johnson-Cook model and EDFS predict the residual velocity over the entire range. However, the error of the residual velocity from the Johnson-Cook model increases as the element size increases, and the velocity is predicted to be lower than the impact tests. In the case of low impact velocity (v i = 163.2 m/s), the error of the EDFS model is relatively large compared to the Johnson-Cook model. The reason is that when the impact velocity is not significantly higher than the ballistic limit, the residual velocity varies sensitively with the failure strain. In the other cases except for the low impact velocity, the EDFS model predicts the residual velocity much more accurately than the Johnson-Cook model, and the effect of element size is also much smaller.
Comparative Studies with Johnson-Cook Damage Model
We verify the effectiveness and efficiency of the failure criteria, EDFS, through the comparative study between EDFS and Johnson-Cook damage model. Impact simulations are carried out for two thicknesses of aluminum plate and four impact velocities per each thickness, and the accuracy of the solution and computational efficiency are investigated. The parameters for the Johnson-Cook damage model are determined as = 0.096, = 0.049, = 3.465, = 0.016, and = 1.099 by referring to the previous study [32]. The value of in EDFS is 110 m/s, which is determined from the parametric study in Section 4.3.
The residual velocities of the impact simulations for 5 mm aluminum plates using the EDFS and Johnson-Cook damage model are summarized in Figures 11 and 12. Impact simulations using both the Johnson-Cook model and EDFS predict the residual velocity over the entire range. However, the error of the residual velocity from the Johnson-Cook model increases as the element size increases, and the velocity is predicted to be lower than the impact tests. In the case of low impact velocity ( = 163.2 m/s), the error of the EDFS model is relatively large compared to the Johnson-Cook model. The reason is that when the impact velocity is not significantly higher than the ballistic limit, the residual velocity varies sensitively with the failure strain. In the other cases except for the low impact velocity, the EDFS model predicts the residual velocity much more accurately than the Johnson-Cook model, and the effect of element size is also much smaller. (c) (d) Figure 11. Comparison between impact test and numerical analysis at an initial velocity of (a) 163. Figures 13 and 14 represents the comparison between EDFS and Johnson-Cook model for 10 mm thickness plates. The residual velocity from Johnson-Cook model shows significant differences with the impact test at low impact velocity ( = 200.9 m/s), and even the failure mode is contradictory with the test. As the impact velocity increases, the accuracy of the Johnson-Cook model improves, but the residual velocities are underestimated in the overall range, and the element size dependency remains. On the other hand, the numerical results of EDFS are hardly affected by the element size in the impact simulations for the 10 mm thick plates. EDFS predicts the residual velocity more precisely than the Johnson-Cook model over the entire range and very accurately predicts the residual velocity, especially in the high impact velocity range. In summary, the EDFS model predicts the relative accuracy of the accurate residual velocity regardless of the element size, while the Johnson-Cook damage model tends to underestimate the residual velocity and shows element size dependent results. Figures 13 and 14 represents the comparison between EDFS and Johnson-Cook model for 10 mm thickness plates. The residual velocity from Johnson-Cook model shows significant differences with the impact test at low impact velocity ( = 200.9 m/s), and even the failure mode is contradictory with the test. As the impact velocity increases, the accuracy of the Johnson-Cook model improves, but the residual velocities are underestimated in the overall range, and the element size dependency remains. On the other hand, the numerical results of EDFS are hardly affected by the element size in the impact simulations for the 10 mm thick plates. EDFS predicts the residual velocity more precisely than the Johnson-Cook model over the entire range and very accurately predicts the residual velocity, especially in the high impact velocity range. In summary, the EDFS model predicts the relative accuracy of the accurate residual velocity regardless of the element size, while the Johnson-Cook damage model tends to underestimate the residual velocity and shows element size dependent results. On the other hand, the numerical results of EDFS are hardly affected by the element size in the impact simulations for the 10 mm thick plates. EDFS predicts the residual velocity more precisely than the Johnson-Cook model over the entire range and very accurately predicts the residual velocity, especially in the high impact velocity range. In summary, the EDFS model predicts the relative accuracy of the accurate residual velocity regardless of the element size, while the Johnson-Cook damage model tends to underestimate the residual velocity and shows element size dependent results.
One of the impact cases is selected to evaluate the effectiveness of the Johnson-Cook model and EDFS by comparing the computation time and accuracy with element size changes as shown in Figure 15. The Johnson-Cook model accurately predicts the residual velocity of a collision experiment when the element size is small enough, but the error increases significantly compared to the EDFS when the element size exceeds 0.6 mm. The Johnson-Cook model requires the element size of smaller than 0.6 mm in order to have an error of smaller than 5%, but the EDFS model ensures sufficient accuracy regardless of the element size. Thus, using the EDFS model, numerical analyses are performed much more efficiently while ensuring accuracy using large size of elements. To predict the residual velocity in error by less than 5%, the Johnson-Cook model takes 6798 s in wall-clock time solved in parallel with 4 threads by using the element size of 0.5 mm. On the other hand, the EDFS model predicts the residual velocity in error by less than 5% using only the calculation time of 270 s solved in parallel with 4 threads using the 1.25 mm of the element size. A workstation which has Dual Intel ®Xeon(R) CPU E5-2687W v2 @ 3.40 GHz of 32 threads and 64 GB memory was used to perform parallel processing. Using the EDFS, it is possible to predict the residual velocity at the same accuracy with only 4% of the computational cost compared to the Johnson-Cook damage model. Thus, numerical results using the Johnson-Cook model requires a sufficiently small element size to ensure accuracy, but the computational cost increases dramatically. If the EDFS model is used, it is possible to predict the accurate residual velocity even with a small computational cost. One of the impact cases is selected to evaluate the effectiveness of the Johnson-Cook model and EDFS by comparing the computation time and accuracy with element size changes as shown in Figure 15. The Johnson-Cook model accurately predicts the residual velocity of a collision experiment when the element size is small enough, but the error increases significantly compared to the EDFS when the element size exceeds 0.6 mm. The Johnson-Cook model requires the element size of smaller than 0.6 mm in order to have an error of smaller than 5%, but the EDFS model ensures sufficient One of the impact cases is selected to evaluate the effectiveness of the Johnson-Cook model and EDFS by comparing the computation time and accuracy with element size changes as shown in Figure 15. The Johnson-Cook model accurately predicts the residual velocity of a collision experiment when the element size is small enough, but the error increases significantly compared to the EDFS when the element size exceeds 0.6 mm. The Johnson-Cook model requires the element size of smaller than 0.6 mm in order to have an error of smaller than 5%, but the EDFS model ensures sufficient
Summary and Conclusions
In the finite element analysis, the accuracy of the numerical solution depends on the element size, and significant computational cost is essential to attain sufficient accuracy. For efficient and accurate impact simulation, we propose an enhanced damage criterion. This criterion can alleviate computational costs in impact simulations while providing more accurate results than existing damage criteria, and we verify the criterion by comparing the numerical results with the impact tests. Also, the damage criterion is combined with the Johnson-Cook constitutive relationship and implemented in UMAT of LS-DYNA for the usability of the damage criterion. Using a gas-gun system, impact tests were carried out to investigate the impact response of the 7075-T651 aluminum plate, and the test results are used as the reference data to verify the numerical model and damage criterion. Numerical models are constructed using various element sizes in order to evaluate the effect of the element size on the impact simulation. The correlations among the failure strain, the impact velocity, and the element size are inversely obtained from numerical simulations by comparing the residual velocities with the test results. It is found that the failure strain varies inversely with the element size and impact velocity. In particular, the sensitivity of the failure strain to the impact velocity significantly increases as the element size decreases and, therefore, the failure strain should be carefully determined when a small element size is used. By applying the characteristic of the failure strain, which depends on the impact velocity and element size, we have introduced an element-size dependent failure strain (EDFS) and the results show good agreement with experimental results regardless of the element size. To import EDFS in the subroutine of the material model, the Johnson-Cook constitutive relationship is implemented in UMAT and the numerical results using UMAT are in good agreement with the existing Johnson-Cook material model. Then, EDFS is imported in UMAT to implement a material model that combines EDFS and Johnson-Cook configuration relationships. When using the Johnson-Cook damage model, the accurate solution is obtained if we use a sufficiently small element size, but the computational cost exponentially increases accordingly. The application of the EDFS allows the calculation time to be reduced significantly because the numerical results are in good agreement with the results of experiments even if the large element size is used. Using the damage criterion presented in this study, efficient simulations can be carried out, ending up with a high accuracy as obtained without very fine discretization.
Summary and Conclusions
In the finite element analysis, the accuracy of the numerical solution depends on the element size, and significant computational cost is essential to attain sufficient accuracy. For efficient and accurate impact simulation, we propose an enhanced damage criterion. This criterion can alleviate computational costs in impact simulations while providing more accurate results than existing damage criteria, and we verify the criterion by comparing the numerical results with the impact tests. Also, the damage criterion is combined with the Johnson-Cook constitutive relationship and implemented in UMAT of LS-DYNA for the usability of the damage criterion. Using a gas-gun system, impact tests were carried out to investigate the impact response of the 7075-T651 aluminum plate, and the test results are used as the reference data to verify the numerical model and damage criterion. Numerical models are constructed using various element sizes in order to evaluate the effect of the element size on the impact simulation. The correlations among the failure strain, the impact velocity, and the element size are inversely obtained from numerical simulations by comparing the residual velocities with the test results. It is found that the failure strain varies inversely with the element size and impact velocity. In particular, the sensitivity of the failure strain to the impact velocity significantly increases as the element size decreases and, therefore, the failure strain should be carefully determined when a small element size is used. By applying the characteristic of the failure strain, which depends on the impact velocity and element size, we have introduced an element-size dependent failure strain (EDFS) and the results show good agreement with experimental results regardless of the element size. To import EDFS in the subroutine of the material model, the Johnson-Cook constitutive relationship is implemented in UMAT and the numerical results using UMAT are in good agreement with the existing Johnson-Cook material model. Then, EDFS is imported in UMAT to implement a material model that combines EDFS and Johnson-Cook configuration relationships. When using the Johnson-Cook damage model, the accurate solution is obtained if we use a sufficiently small element size, but the computational cost exponentially increases accordingly. The application of the EDFS allows the calculation time to be reduced significantly because the numerical results are in good agreement with the results of experiments even if the large element size is used. Using the damage criterion presented in this study, efficient simulations can be carried out, ending up with a high accuracy as obtained without very fine discretization. | 10,298.4 | 2019-08-26T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Magnetic field dependence of the neutral pion longitudinal screening mass in the linear sigma model with quarks
We use the Linear Sigma Model with quarks to study the magnetic field-induced modifications on the longitudinal screening mass for the neutral pion at one-loop level. The effects of the magnetic field are introduced into the self-energy which contains the contributions from all the model particles. We find that to obtain a reasonable description for the behavior with the field strength, the magnetic field dependence of the particle masses need to be accounted for. We also find that the couplings need to decrease fast enough with the field strength to then reach constant and smaller values as compared to their vacuum ones. The results illustrate the need to treat the magnetic corrections to the particle masses and couplings in a self-consistent manner, accounting for the back reaction of the field effects for the magnetic field dependence of the rest of the particle species and couplings in the model.
We use the linear sigma model with quarks to study the magnetic-field-induced modifications on the longitudinal screening mass for the neutral pion at one-loop level.The effects of the magnetic field are introduced into the self-energy, which contains the contributions from all the model particles.We find that, to obtain a reasonable description for the behavior with the field strength, we need to account for the magnetic field dependence of the particle masses.We also find that the couplings need to decrease fast enough with the field strength to then reach constant and smaller values as compared to their vacuum ones.The results illustrate the need to treat the magnetic corrections to the particle masses and couplings in a self-consistent manner, accounting for the backreaction of the field effects for the magnetic field dependence of the rest of the particle species and couplings in the model.
I. INTRODUCTION
In recent years, it has become clear that electromagnetic fields provide a powerful probe to explore the properties of the QCD vacuum.When the energy associated with the field strength is larger than Λ QCD , the field can prove the hadron structure and help reveal the dynamics associated with confinement and chiral symmetry breaking.For example, at zero temperature, magnetic fields catalyze the breaking of chiral symmetry, producing a stronger light quark-antiquark condensate [1].However, for nonvanishing temperature, magnetic fields inhibit the condensate formation and reduce the critical temperature for chiral symmetry restoration, giving rise to inverse magnetic catalysis (IMC) [2][3][4][5][6][7][8][9][10][11][12][13][14][15].This property has motivated an intense activity aimed to search for the influence of magnetic fields on hadron dynamics .Since the dynamics of chiral symmetry breaking is dominated by pions, the lightest of all quark-antiquark bound states, it then becomes important to explore how the pion mass is affected by the presence of magnetic fields.
Recall that, for a Lorentz-invariant system, the mass corresponds to the rest energy of a given particle, which can then be obtained from the pole of the propagator when the particle three-momentum ⃗ q is taken to zero.This is dubbed the "pole mass".Notice that if, instead, the zeroth component of the particle four-momentum q 0 is taken first to zero, we obtain the "screening mass".The screening mass squared can be identified as the negative of the particle magnitude of its three-momentum squared.In a system where Lorentz symmetry is unbroken, the pole and screening masses coincide.However, when Lorentz symmetry is broken, as is, for example, the case of a system at finite temperature, the above described limiting procedures do not yield the same values.Explicitly, if f (q 0 , |⃗ q|; T ) represents the thermal medium response function that contributes to the particle dispersion relation, the limits f (q 0 , 0; T ) and f (0, |⃗ q|; T ) do not commute.Pole and screening masses do not coincide.The name screening mass stems from the analysis in linear response theory when studying the influence of static external fields on a thermal medium.Because of the static nature of the external field, its screening within the medium is controlled by the system's response function in the limit f (0, |⃗ q|; T ).The inverse of the screening mass corresponds to the screening or Debye length.
When the system is immersed in a magnetic field, the breaking of Lorentz symmetry happens in the spatial directions, giving rise to distinct dispersion properties for particles moving in the transverse or the longitudinal directions with respect to the field orientation.Thus, in addition to studying the magnetic-field-induced modifications of the pole mass, one can also study the corresponding longitudinal and transverse screening masses.At T = 0 the longitudinal screening mass is equal to the pole mass.This degeneracy is lifted when T ̸ = 0.Although most of the studies have concentrated in the magnetic properties of the pion pole mass [68][69][70][71][72][73][74][75][76][77][78][79][80], more recently, an interesting relation between the magnetic behavior of screening masses and condensates, and thus between IMC and screening masses, has been obtained in Ref. [77].Motivated by this finding, the authors of Ref. [81] used a lattice QCD (LQCD) setup to assess the importance of the "sea" versus the "valence" quarks' contribution for the temperature and magnetic dependence of the pion screening mass.For the lowest temperature studied, the screening mass seems to behave as a monotonically decreasing function of the field strength, up to |eB| ∼ 2.5 GeV 2 .Unfortunately, no attempt to distinguish between longitudinal and transverse screening masses was made.The transverse and longitudinal pion masses at finite temperature and magnetic field strength were also studied in Ref. [82] using a two-flavor Nambu-Jona-Lasinio (NJL) model in the random phase approximation.The authors focused on addressing possible mishaps of previous calculations [83,84].Their results indicate opposite behaviors for the transverse and longitudinal screening masses as functions of the magnetic field strength for T = 0; whereas the former decreases, the latter slightly increases.
Since it is important to check that in a magnetic background the pole and the longitudinal screening mass are equal when calculated within the framework of a given effective model at T = 0, in this work we use the linear sigma model with quarks (LSMq) to study the pion longitudinal screening mass as a function of a magnetic field of arbitrary strength for vanishing temperature.We argue that to extract a reliable behavior of this mass as a function of the field strength, the magnetic field dependence of the couplings, as well as of the quark, pion, and σ pole masses, need to be accounted for and that, in this sense, the complete solution of the particle mass dependence on the magnetic field needs to be treated selfconsistently within a given model.We find that a rapid decrease of the model couplings with the field strength is needed for the longitudinal screening mass to follow the LQCD profile as a function of the magnetic field.This procedure is consistent with previous calculations of the magnetic field dependence of the pion pole mass within the same model, where it was also found that a rapid reduction of the couplings with the field strength is needed to describe the magnetic field behavior of the pion pole mass [70,71].Since the LSMq provides a general framework to study quark-meson systems under the influence of magnetic fields, the setup can also be extended to address the properties of the directional sound velocities, a subject that is studied in Ref. [81], or the connection between screening masses and inverse magnetic catalysis, a subject that is emphasized in Ref. [82].These studies require, as a previous step, the implementation of an adequate formulation of the magnetic field effects at zero temperature, the subject that we explore in the present work.The work is organized as follows: In Sec.II, we introduce the linear sigma model with quarks.In Sec.III, we make a quick survey of the way magnetic field effects are introduced into the propagators of charged bosons and fermions, which we hereby describe in terms of the Schwinger proper time formalism.In Sec.IV, we com-pute the Feynman diagrams that contribute to the neutral pion self-energy.In Sec.V, we compute the magnetic corrections to the neutral pion screening masses, showing that the behavior strongly depends on the magnetic field dependence of masses and couplings.We finally summarize and conclude in Sec VI.We reserve for the Appendix the explicit calculation details of the one-loop magnetic filed corrections to the neutral pion self-energy.
II. LINEAR SIGMA MODEL WITH QUARKS
The LSMq is an effective model that describes the lowenergy regime of QCD, incorporating the spontaneous breaking of chiral symmetry.The Lagrangian for the LSMq can be written as Pions are described by an isospin triplet, ⃗ π = (π 1 , π 2 , π 3 ).Two species of quarks are represented by an SU (2) isospin doublet ψ.The σ scalar is included by means of an isospin singlet.Also, λ is the boson self-coupling and g is the fermion-boson coupling.a 2 > 0 is the mass parameter.
To allow for spontaneous symmetry breaking, we let the σ field develop a vacuum expectation value v, where the charged pion fields can be expressed as and the interaction Lagrangian is defined as In order to include a finite vacuum pion mass m 0 , one adds an explicit symmetry breaking term in the Lagrangian of Eq. ( 2) such that As can be seen from Eqs. ( 2) and (4), there are new terms that depend on v and all fields develop dynamical masses, Using Eqs. ( 2) and (5), the tree-level potential is given by This potential develops a minimum, called the vacuum expectation value of the σ field, namely, Therefore, the masses evaluated at v 0 are Finally, an external magnetic field, uniform in space and constant in time, can be included in the model introducing a covariant derivative in the Lagrangian density, Eq. ( 2), namely, where A µ is the vector potential corresponding to an external magnetic field directed along the ẑ axis coupled to a particle with charge e.In the symmetric gauge, this is given by and couples only to the charged pions and to the quarks.Notice that, in order to consider the propagation of charged particles, one can resort to introducing Schwinger propagators, which can be expressed either in terms of their proper time representation or as a sum over Landau levels.For completeness of the presentation, we now proceed to briefly discuss the properties of these propagators.
III. MAGNETIC FIELD-DEPENDENT BOSON AND FERMION PROPAGATORS
To consider the propagation of charged particles within a magnetized background, we use Schwinger's proper where Φ f (x, x ′ ) is the Schwinger phase given by and q f is the charge of a quark with flavor f .Φ f (x, x ′ ) corresponds to the translationally noninvariant and gauge-dependent part of the propagator.On the other hand, S f (x − x ′ ) is translationally and gauge invariant and can be expressed in terms of its Fourier transform as where with m f as the quark mass.In a similar fashion, for a charged scalar field we have with where m b and q b are the boson mass and charge, respectively.The ϵ appearing in Eqs. ( 15) and ( 17) is the infinitesimal positive parameter that enforces Feynman boundary conditions and thus causality.Notice that, in the B → 0 limit, one recovers the usual Feynman fermion and scalar propagators.We now use these ingredients to compute the elements necessary to obtain the magnetic modification of the neutral pion mass.
IV. ONE-LOOP MAGNETIC CORRECTIONS
To compute the magnetic-field-induced modification to the neutral pion screening mass, the starting point is the equation defining its dispersion relation in the presence of the magnetic field, namely, where Π B is the magnetic-field-dependent neutral pion self-energy that depends on the model couplings and masses.Notice that, for the calculation of the magneticfield-induced modifications to the mass, only the real part of Π B contributes.On the other hand, the imaginary part would contribute to the magnetic-field-induced pion damping rate.The properties of this damping rate can also offer insights into the possible opening of magnetic field driven channels for particle process; however, for the purposes of the present work, we hereby concentrate exclusively on the magnetic-field-induced modifications of the (longitudinal screening) mass that are encoded in the real part of the self-energy.
The computation requires knowledge of each of the above-mentioned elements as functions of the field strength.To obtain the screening mass, we need to set q 0 = 0 in Eq. ( 18) and find positive solutions for the parameter m 2 sc = −|⃗ q| 2 .In the presence of a constant magnetic field, we have two kinds of solutions for m 2 sc : the longitudinal screening mass denoted by m sc,∥ , which is defined for the limit where ⃗ q ⊥ = 0, and the transverse screening mass, denoted by m sc,⊥ , which is defined for the limit where q 3 = 0. Since we have chosen the direction of the magnetic field to point along the z axis, In what follows, we concentrate on the calculation of the longitudinal screening mass.We leave for a future work the computation of the transverse screening mass.We first compute the neutral pion self-energy, The terms on the right-hand side of Eq. ( 19) are represented by the Feynman diagrams depicted in Figs. 1, 2, 3 and 4, which contribute to the self-energy at one loop.
The subindices represent the kind of particles in the loop and correspond to the quark-antiquark loop Π B f f depicted in Fig. 1, the quark tadpole Π B f depicted in Fig. 2, the charged boson tadpoles Π B π ± , depicted in Figs. 3 and 4, and the neutral boson tadpoles Π π 0 , Π σ .Notice that the diagrams with neutral bosons in the loop contribute only to vacuum renormalization and not to the magnetic properties of the system.To see this, recall that these fields contribute with terms represented by the regularized integrals that are computed using bare couplings and masses.When the propagator does not include magnetic field effects, the result of the regularized integral can be canceled by the introduction of suitable counterterms.The upshot is that, when a given diagram is computed without field effects in the propagator, it does not contribute to the magnetic field modifications of particle properties.Therefore, hereafter we do not consider the effect of these diagrams for the description of the magnetic modifications of the pion self-energy.
Since the contribution from the quark-antiquark loop is the only one that depends on the pion momentum, we first concentrate on the contribution from this diagram, for a single quark species.This is given explicitly by Notice that, since both particles flow with the same charge around the loop, the Schwinger phase vanishes.
According to the explicit computation in the Appendix, the fermion contribution to the pion self-energy is given by where we have defined the variables To isolate the magnetic contribution in the pion self-energy, we need to work with the function F (q 2 0 , q 2 3 , q 2 ⊥ , |q f B|, m f ) defined as explicitly given by where X and X 0 are defined as Hereafter, we concentrate on the computation of the longitudinal screening mass.For this purpose, we set q 2 ⊥ = q 2 0 = 0 in the function F (q 2 0 , q 2 3 , q 2 ⊥ , |q f B|, m f ) of Eq. ( 24), to obtain In this case, X and X 0 reduce to the same expression, which is explicitly given by with The real part of the u integral in Eq. ( 26), which is needed to compute the screening mass, can be performed Figure 5. Masses of the quark (blue line), pion (orange line), and sigma (green line) as a function of the magnetic field following Ref.[72] analytically (see the Appendix), with the result where A 1 and A 2 are given by Notice that the limits q f B → 0 and ϵ → 0 do not commute.Therefore, to check that Eq. ( 29) goes to zero when q f B vanishes, the ϵ dependence has to be maintained.For finite and arbitrary values of q f B, the integration over v needs to be numerically performed for finite values of ϵ.We have checked that the value of . Average normalized condensate computed from the model compared to the results from Ref. [86].
the integral converges after having performed the v integration when we then take smaller values of ϵ so as to implement the limit ϵ → 0. Notice that, to compute the magnetic modification of the screening mass, only the real part of the self-energy is required.However, the imaginary part of the self-energy is also an interesting and useful quantity that could be computed in the presence of a magnetic field, since this is directly linked to the magnetic field activation of decay channels that are otherwise not present in the absence of magnetic fields.For example, if as a consequence of field effects, a meson mass becomes larger than twice the quark mass, the meson decay into a quark-antiquark pair can be opened and this is signaled by a nonvanishing imaginary part of the meson self-energy.In the present work, no such channel is opened since the pion mass decreases and always remains smaller than twice the quark mass as a function of the field strength, and consequently, the magnetic-fielddependent imaginary part of the pion self-energy vanishes, as expected.This may not be the case were we to consider the σ meson or when the combined thermomagnetic effects on meson masses are considered.
We now proceed to compute the charged boson loop contribution to the neutral pion self-energy.This includes the two tadpole diagrams shown in Figs. 3 and 4 and can be written as where Π B b is the contribution to the neutral pion selfenergy coming from Fig. 3, which is calculated by Notice that, since the initial and final loop space-time points in the tadpole Feynman diagram coincide, the Schwinger phase also vanishes.Substituting Eq. ( 17) into Eq.( 32) and integrating over the momentum and s variables, we obtain, after subtracting the B = 0 contribution, Notice that, since Eq. ( 33) does not depend on the external momentum, it represents a purely real contribution.Therefore, the explicit expression for Π B π ± in Eq. ( 31) is given by Finally, for the contribution of the quark tadpole Π B f shown in Fig. 2, we have Substituting Eq. ( 15) into Eq.( 35), then performing a Wick rotation to Euclidian space and finally integrating over the momentum variables, we obtain, after subtracting the B = 0 contribution, (36) Notice that Eq. ( 36) corresponds also to a purely real contribution.75, (dotted), g = 1.5 (dashed), g = 0.75 (dashed-dot), and g = 0.33 (solid).Solutions to the dispersion relation cease to exist beyond q f B ∼ 0.2, 0.35, 0.55 and 0.8 GeV 2 , respectively, and the range where solutions exist increases as the coupling decreases.
V. MAGNETIC MODIFICATION TO THE NEUTRAL PION MASS
With all these elements at hand, we can now find the magnetic-field-dependent longitudinal screening mass for the neutral pion from the dispersion relation ( 18) by setting q 2 ⊥ = q 2 0 = 0. Since we are pursuing the purely magnetic field effects, we also subtract the B = 0 contribution, which amounts to subtracting the vacuum contribution, namely, where F is defined in correspondence with Eq. ( 23) and accounting for all relevant diagrams, that is, The longitudinal screening mass is obtained finding solutions for m 2 sc,∥ ≡ −q 2 3 , for different values of the field strength.In anticipation of the results, we point out that, in order to make a reasonable description of the behavior of the screening mass with the field strength, we need to account for the magnetic field dependence of the different particles involved in the self-energy, as well as of the couplings.In this sense, the full-fledged description of the problem therefore requires a self-consistent treatment, whereby all self-energies of the particles subject to the influence of the magnetic field depend on each other through the field dependence of their masses.However, for our purposes, here we set the problem in a simpler manner.We borrow results for the magnetic field dependence of the pion, σ and quark pole masses, which are inputs to compute the magnetic corrections to the neutral pion screening mass.We have taken as input the pole pion mass m 0 (B), the quark mass m f (B), and Neutral pion longitudinal screening mass as a function of the magnetic field strength normalized to the pion pole mass for B = 0 including all the contributions to the self-energy computed with g = 0.33 and λ = 2.5 compared to the NJL results from Ref. [82] and to an interpolation of the data for the LQCD results from Ref. [81] for T = 17 MeV.The green shadow represents the error in the LQCD calculations from Ref. [81].For comparison we also show the case where only the fermion-antifermion loop is considered, computed with g = 0.33.
the σ mass m σ (B) as functions of the magnetic field from Ref. [72]. Figure 5 shows the magnetic field dependence of the input masses.To have a direct comparison with LQCD results of Ref. [81], hereby we use a vacuum value of m 0 (B = 0) = 220 MeV for the pion mass, m f (B = 0) = 252 MeV for the quark mass, and m σ (B = 0) = 550 MeV for the σ mass.In principle, the magnetic mass dependence we use is rigorously valid for eB ≤ 0.4 GeV 2 , which is the upper limit for the cutoff for the NJL calculation of Ref. [72].Hence, the mass values for large magnetic fields should be considered as extrapolations, as they provide only a qualitative behavior in this limit.As we show, the magnetic dependence of these masses turns out to be a key ingredient that allows a good description of the behavior of the longitudinal screening mass found by LQCD and for NJL model-based calculations [81,82].Before proceeding to the analysis of the screening mass, we first test whether the model can be used to describe the LQCD average condensate as a function of the field strength.Figure 6 shows this quantity taken from Ref. [86] compared to our model calculation, using the same magnetic field dependence of the quark mass that we use as input to compute the neutral pion screening mass.The model calculation provides a reasonable description of the LQCD data.
As discussed in Sec.IV, the neutral pion self-energy is described by the two couplings g and λ; the former enters in the calculation of the fermion contribution to the self-energy, depicted in Fig. 1, whereas the latter enters in the contribution of the tadpole diagrams of Figs. 3 and 4. Also, a combination of both couplings enters in the computation of the tadpole diagram in Fig. 2. In vacuum, these parameters have to obey the following constraints imposed by the model and that are derived from Eq. ( 6): where, in account of the partially conserved axial current statement, we identify the vacuum expectation value v 0 with f π , the pion decay constant.Substituting the values of the masses in Eq. ( 39), we obtain g ∼ 2.75 and λ ∼ 15.
We now use the aforementioned parameters in Eq. ( 37) to find the screening mass for the neutral pion.The results are shown in Fig. 7 as the ratio m sc,∥ /m 0 .Hereafter, for the calculations, we sum over the two light quark flavor charges taking |q u | = 2/3 and |q d | = 1/3.Notice that, with this choice, the behavior of the screening mass does not resemble the findings of LQCD, nor those of NJL.Furthermore, the solutions to the dispersion relation equation cease to exists for an intermediate value of the field strength.
Motivated by the results of Ref. [72], which point out to a fast decrease of the NJL coupling as a function of the magnetic field, we first study the consequences of using a lower value of the g coupling to explore the effects for the m sc,∥ /m 0 ratio.The results are shown in Fig. 8. Notice that the effect of decreasing g is to increase the range of solutions for m sc,∥ as a function of q f B, producing results closer to those of the NJL and LQCD ones.We find that the choice g = 0.33, which corresponds to the solid line plot in Fig. 8, already provides a good description of the NJL and LQCD findings.Finally, we add the contribution from the tadpoles shown in Figs. 2 and 4. Here, we naturally choose the best parameter already determined for g, that is, g = 0.33 from Fig. 8, and use it as a starting point to then add the tadpole contributions.The results are shown in Fig. 9 for the choice of parameters g = 0.33 and λ = 2.5.Here, we compare our findings with the results for the screening masses reported in Ref. [82] for the NJL model and also with the LQCD results in Ref. [81] for T = 17 MeV.Notice that the NJL results are reported for T = 0, just as ours, however, the results from LQCD are calculated for finite temperature.We thus compare with the smallest temperature reported, which corresponds to T = 17 MeV.The results are consistent with the findings in Refs.[70,71] for the magnetic field dependence of the pole pion mass in the large field limit.We emphasize that a good description for the LQCD and NJL results for the neutral pion parallel screening mass can be achieved only when the couplings are taken to be about one order of magnitude smaller than their vacuum values.We have refrained from parametrizing the magnetic field dependence of these couplings, but instead highlight that their decrease happens soon after the magnetic field starts growing from zero.
VI. SUMMARY AND CONCLUSIONS
In this work we studied the magnetic-field-induced modifications on the longitudinal screening mass of the neutral pion at one-loop level using the LSMq.The effects of the magnetic field are introduced in the neutral pion self-energy which is made out of several terms stemming from the contribution from the σ as well as from the charged particles of the model to the loop corrections.We found that, in order to obtain a reasonable description for the behavior of the longitudinal screening mass with the field strength, the magnetic field dependence of the particle masses, as well as of the couplings, needs to be taken into account.Moreover, for the calculation to reproduce the corresponding results from LQCD and NJL, the couplings g and λ need to decrease fast enough (within a magnetic field interval ≃ 0.1 GeV 2 from B = 0) to then reach constant and small values with the field strength.This result is in agreement with the findings of Refs.[70,71].The results illustrate the need to account for the backreaction of the magnetic field dependence of the rest of the particle species in the model.This could be achieved by a complete self-consistent treatment of the problem.However, this represents a highly involved procedure, requiring the simultaneous solution, at a given perturbative order, first of the set of coupled equations that govern the behavior of the pole masses, together with the couplings, to then use these as inputs for the coupled set of equations that yield the screening masses.Although this procedure can be in principle implemented, in this work we have taken the more modest approach that makes use of the magnetic field behavior of the particle masses found in Ref. [69].In this sense, we believe that this work provides further evidence of the need to consider mutually dependent magnetic-field-dependent masses and couplings in effective model calculations to achieve better insight into the properties of strongly interacting systems subject to the effects of magnetic fields.The results are obtained using a method to analytically carry out the calculation of the quark-antiquark contribution to the neutral pion longitudinal screening mass up to the last integral.The method is valid for arbitrary field strengths, but cannot be directly applied to the case of the transverse screening mass, for which the pole contributions need to be handled in a different manner.We are currently working on this calculation, and the results, together with thermal effects, will soon be reported elsewhere.
The integral I ∥ is also found with the help of the Gaussian integrals in Eq. (A5).The result is Finally, for the integral over the zeroth momentum component, we have A11) where the A and B constants are defined as Using again the Gaussian integrals in Eq. (A5), the result for I 0 is given by Substituting Eqs.(A6), (A10), and (A13) into Eq.(A2), and making the change of variables s = u(1 − v) and s ′ = uv, we get Eq.( 21) that we hereby reproduce, Since the poles of cot(|q f B|u) lie along the real axis, we should evaluate Eq. (A21) using the principal value prescription.Also, we promote the integral to the complex plane using the quarter circle contour shown in Fig. 10.Thus, we now focus in the contour integral, where C = C 1 ∪ C 2 ∪ C 3 , as shown in Fig. 10.It is convenient to make the change of variables u ′ = |q f B|u in our expression for I C so that It is easy to see that the integral over C 2 vanishes when R → ∞, due to the exponential damping in Eq. (A23) ) Now, for C 3 it is convenient to make the following change of variable Thus, the integral over C 3 becomes Since ϵ → 0, the imaginary exponential above tends to 1, and the integral turns out to be analytic; the result is given by (A30) It is again convenient to make the change of variable u ′ = |q f B|u so that the J integral becomes where we have defined (A33) Finally, J 2 is calculated in a similar fashion as was done to compute the I integral, namely, by promoting it to a closed contour integral using the same contour of integration.The result for J 2 , taking into account again the factor −i|q f B| from Eq. (A29), is given by (−i|q f B|) PV (J 2 ) = π|q f B| − Eq. (A36) corresponds to the result in section IV given by Eq. ( 29).
Figure 1 .
Figure 1.Feynman diagram corresponding to the one-loop contribution from the fermion anti-fermion loop to the neutral pion self-energy in the LSMq.
Figure 2 .
Figure 2. Feynman diagram corresponding to the tadpole contribution from the fermion loop and a sigma to the neutral pion self-energy in the LSMq.
Figure 3 .
Figure 3. Feynman diagram corresponding t the one-loop contribution from charged pions to the neutral pion self-energy in the LSMq.
Figure 4 .
Figure 4. Feynman diagram corresponding to the tadpole contribution from charged pions and a sigma to the neutral pion self-energy in the LSMq.
Figure 7 .
Figure 7. Neutral pion longitudinal screening mass as a function of the magnetic field strength normalized to the pion pole mass for B = 0 computed using g=2.75.Solutions to the dispersion relation cease to exist beyond q f B ∼ 0.2 GeV 2 .
Figure 8 .
Figure 8. Neutral pion longitudinal screening mass as a function of the magnetic field strength normalized to the pion pole mass for B = 0: g = 2.75, (dotted), g = 1.5 (dashed), g = 0.75 (dashed-dot), and g = 0.33 (solid).Solutions to the dispersion relation cease to exist beyond q f B ∼ 0.2, 0.35, 0.55 and 0.8 GeV 2 , respectively, and the range where solutions exist increases as the coupling decreases.
Figure 9 .
Figure 9. Neutral pion longitudinal screening mass as a function of the magnetic field strength normalized to the pion pole mass for B = 0 including all the contributions to the self-energy computed with g = 0.33 and λ = 2.5 compared to the NJL results from Ref.[82] and to an interpolation of the data for the LQCD results from Ref.[81] for T = 17 MeV.The green shadow represents the error in the LQCD calculations from Ref.[81].For comparison we also show the case where only the fermion-antifermion loop is considered, computed with g = 0.33.
PV
A27) where ψ (0) (z) is the digamma function.Substituting Eqs.(A24) and (A27) in Eq. (A23) and using Cauchy's residue theorem in the closed integral I C , we can obtain the integral along the path C 1 with the result on the part of the integral in F (0, p 2 3 , 0, |q f B|, m f ) that comes from the term −iG 2 , namely e−iau |q f B| csc 2 (|q f B|u) + cot(|q f B|u) u − 2 |q f B|u 2 .(A29)Let us isolate the u integral definingJ = ∞ 0 due −uϵ e −iau |q f B| csc 2 (|q f B|u) + cot(|q f B|u) u − 2 |q f B|u 2 .
J 1
can be integrated by parts to bring it to a form similar to the I integral in Eq. (A21).Taking into account the −i|q f B| factor in Eq. (A29), we find that (−i|q f B|) PV (J 1 ) = −(ia+ϵ) | 7,810.6 | 2023-11-22T00:00:00.000 | [
"Physics"
] |
Prediction of Protein–Protein Interactions by Evidence Combining Methods
Most cellular functions involve proteins’ features based on their physical interactions with other partner proteins. Sketching a map of protein–protein interactions (PPIs) is therefore an important inception step towards understanding the basics of cell functions. Several experimental techniques operating in vivo or in vitro have made significant contributions to screening a large number of protein interaction partners, especially high-throughput experimental methods. However, computational approaches for PPI predication supported by rapid accumulation of data generated from experimental techniques, 3D structure definitions, and genome sequencing have boosted the map sketching of PPIs. In this review, we shed light on in silico PPI prediction methods that integrate evidence from multiple sources, including evolutionary relationship, function annotation, sequence/structure features, network topology and text mining. These methods are developed for integration of multi-dimensional evidence, for designing the strategies to predict novel interactions, and for making the results consistent with the increase of prediction coverage and accuracy.
Introduction
Proteins perform their complicated functions by physically interacting with other proteins. Sketching a map of protein-protein interactions (PPI) is a significant topic of system biology and an important step towards understanding protein functions and cellular behaviors [1]. Different experimental techniques (in vivo or in vitro) have made significant efforts to study the constant nature of protein interaction sites and screen a large number of protein interaction partners (Figure 1), such as two-hybrid (Y2H) screens, Tandem affinity purification mass spectroscopy (TAP-MS), protein microarrays, mating-based split-ubiquitin system (mbSUS), pulldown assays, dual polarization interferometry (DPI), NMR-based method for mapping the structural interactions (STINT-NMR), bioluminescence resonance energy transfer (BRET), fluorescence resonance energy transfer (FRET), atomic force microscopy (AFM), surface plasmon resonance (SPR), protein complex immune precipitation (Co-IP) [2][3][4][5], and so on. Among these experimental techniques, some high-throughput methods such as Y2H, TAP-MS, protein chips, etc. have been comprehensively applied to detect a protein's binary interactions and to generate many genome-scale protein interaction networks in model organisms such as Homo sapiens [6], Drosophila melanogaster [7], Saccharomyces cerevisiae [8], and Caenorhabditis elegans [9]. However, genome-scale experiments are costly and labor-intensive, and have inherent biases and limited coverage. Limitations of equipment resolution and environmental disturbances during operations (such as purification, capture, equilibrium, signal label and imaging) Bioinformatics techniques of PPI prediction strengthen and flourish the study of protein interactions ( Figure 1). Bioinformatics approaches consider the term of "protein-protein interactions" as the associations between proteins that include relationship aspects of evolution, function and structure. These techniques overcome the limitations of experimental techniques, are beneficial to complete the missing pieces of experimental PPI data and help in discovering the clues of PPI mechanisms in silico. Up until now, several computational methods have been successfully applied to predict protein interactions in multiple perspectives: phylogenetic profile [12], protein sequence [13], domain-domain interaction (DDI) [14], coexpression [15], ortholog [16], etc. These methods are mainly focused on individual (or homogeneous) evidence for prediction and have certain specificities as well as biases [1,17]. An alternative strategy is the integration of evidence sources in a statistical learning framework. Combining evidence exhibits the strength of machine learning and data mining to overcome the limitations of independent predictions and make the results consistent with the increase of prediction coverage and accuracy [1,[18][19][20][21][22][23]. Such methods of PPI prediction are referred as "prediction of protein-protein interactions by evidence-combining methods".
In this review, the workflows for prediction pair-wise PPIs by combined evidence from studies building PPI networks on the genome scale level are presented and discussed. The presented workflows mainly consist of three basic steps: (1) Defining gold standard datasets/training datasets of interacting and non-interacting protein pairs; (2) Characterizing the interactions by annotating gold standard datasets with diverse and carefully chosen evidence; this is an encoding process to turn protein interaction features into machine-readable rules; (3) Determining the probability of particular interactions by individual evidence, and thus combining the probabilities (or encoded vector) of all evidences to uncover the novel subset of the interactome. Bioinformatics techniques of PPI prediction strengthen and flourish the study of protein interactions ( Figure 1). Bioinformatics approaches consider the term of "protein-protein interactions" as the associations between proteins that include relationship aspects of evolution, function and structure. These techniques overcome the limitations of experimental techniques, are beneficial to complete the missing pieces of experimental PPI data and help in discovering the clues of PPI mechanisms in silico. Up until now, several computational methods have been successfully applied to predict protein interactions in multiple perspectives: phylogenetic profile [12], protein sequence [13], domain-domain interaction (DDI) [14], coexpression [15], ortholog [16], etc. These methods are mainly focused on individual (or homogeneous) evidence for prediction and have certain specificities as well as biases [1,17]. An alternative strategy is the integration of evidence sources in a statistical learning framework. Combining evidence exhibits the strength of machine learning and data mining to overcome the limitations of independent predictions and make the results consistent with the increase of prediction coverage and accuracy [1,[18][19][20][21][22][23]. Such methods of PPI prediction are referred as "prediction of protein-protein interactions by evidence-combining methods".
In this review, the workflows for prediction pair-wise PPIs by combined evidence from studies building PPI networks on the genome scale level are presented and discussed. The presented workflows mainly consist of three basic steps: (1) Defining gold standard datasets/training datasets of interacting and non-interacting protein pairs; (2) Characterizing the interactions by annotating gold standard datasets with diverse and carefully chosen evidence; this is an encoding process to turn protein interaction features into machine-readable rules; (3) Determining the probability of particular interactions by individual evidence, and thus combining the probabilities (or encoded vector) of all evidences to uncover the novel subset of the interactome.
Defining Gold Standard Datasets
Units of gold standard datasets are usually constructed for training or testing of PPI prediction. Datasets for training and testing units are generally independent. The quality and reliability of gold standard datasets for training affect the performance of different machine learning methods [17].
The gold standard positive (GSP) datasets are basically PPIs with high experimental confidence or reference evidence. Some of the datasets are available in public databases, such as: the Biological General Repository for Interaction Datasets (BioGRID) [24], the IntAct molecular interaction database (IntAct) [25], Search Tool for the Retrieval of Interacting Genes (STRING) [26], Agile Protein Interactomes DataServer (APID) [27], the Database of Interacting Proteins (DIP) [28], HitPredict [29], the Molecular INTeraction database (MINT) [30], the Arabidopsis Information Resource (TAIR) [31], the Human Protein Reference Database (HPRD) [32], Protein Interaction Network Analysis (PINA) platform [33] and the High-quality INTeractomes database (HINT) [34]. These repositories of protein complexes and interactions are varied in size and species-specificity, and they contain information from experimental and computational sources with or without manual validation (Table 1). For these reasons, it is advised to choose high-quality positive datasets from multiple (times or methods) independent assays (usually high-throughput methods that consider the coverage and biases of different assays) [1] or from text mining of published literature with careful evaluation [2]. The gold standard datasets are always focused on reference datasets that source from model organisms ( Figure 2) with advanced accuracy and coverage. This repository is very helpful for seeking out general clues of PPI mechanisms in silico, and supporting studies which lack the existing data of a targeted organism [1,16,35]. However, it is also a double-edged sword that inevitably leads to errors and biases by over-fitting of specific data in the minority organisms.
Defining Gold Standard Datasets
Units of gold standard datasets are usually constructed for training or testing of PPI prediction. Datasets for training and testing units are generally independent. The quality and reliability of gold standard datasets for training affect the performance of different machine learning methods [17].
The gold standard positive (GSP) datasets are basically PPIs with high experimental confidence or reference evidence. Some of the datasets are available in public databases, such as: the Biological General Repository for Interaction Datasets (BioGRID) [24], the IntAct molecular interaction database (IntAct) [25], Search Tool for the Retrieval of Interacting Genes (STRING) [26], Agile Protein Interactomes DataServer (APID) [27], the Database of Interacting Proteins (DIP) [28], HitPredict [29], the Molecular INTeraction database (MINT) [30], the Arabidopsis Information Resource (TAIR) [31], the Human Protein Reference Database (HPRD) [32], Protein Interaction Network Analysis (PINA) platform [33] and the High-quality INTeractomes database (HINT) [34]. These repositories of protein complexes and interactions are varied in size and species-specificity, and they contain information from experimental and computational sources with or without manual validation (Table 1). For these reasons, it is advised to choose high-quality positive datasets from multiple (times or methods) independent assays (usually high-throughput methods that consider the coverage and biases of different assays) [1] or from text mining of published literature with careful evaluation [2]. The gold standard datasets are always focused on reference datasets that source from model organisms ( Figure 2) with advanced accuracy and coverage. This repository is very helpful for seeking out general clues of PPI mechanisms in silico, and supporting studies which lack the existing data of a targeted organism [1,16,35]. However, it is also a double-edged sword that inevitably leads to errors and biases by over-fitting of specific data in the minority organisms. Gold standard negative (GSN) datasets generally cannot be obtained by direct experimental measures. There is a Negatome database (2.0) [37] which provides a collection of protein and domain pairs unlikely to be engaged in direct physical interactions (supported by text mining and 3D structure of protein complexes) (Table 1). Unfortunately, due to the limited data (about 6000 pairs at present), this non-interacting dataset could not satisfy the diverse GSP datasets of different users. There are some reported methods for extracting negative datasets, such as: (1) Negative datasets are constructed by using random pairs which exclude the experimentally detected interactions [1], and as there are discordant numbers between high-confidence interactions and random pairs, the scale and structure of networks should be balanced between negative and positive datasets. This method may include undetected PPIs; (2) Negative examples are chosen based on the categories of their distinct functions, such as sub-cellular localization (can be accessed by tools such as LOCATE [38], PSORTdb 3.0 [39], LocDB [40]) and annotations (such as KEGG pathways, gene ontology (GO), and Enzyme Commission (EC)) [22,41]. However, these methods can also lead to biases due to varying definitions of categories [42]; (3) Another alternative approach is based on topological policy: choose pairs of separated proteins in existing PPI networks to represent non-interactions: defining negative samples as the protein pairs with the shortest path lengths exceed the median shortest paths in a GSP network [43], or further construct a GSN network based on the principle of keeping the composition and degree of a node identical to the GSP network [20]. The negative samples, however, still contain biases if the referential networks are partial [17].
Annotate Protein Pairs with Diverse Evidence
The characterization of existing interactions is usually processed to explore the crucial role of protein interactions. Interactions can convert proteins/polypeptides into transient or permanent complexes and the binding is determined by different elements such as cell physiology (function switches, regulation status, etc.), biochemistry environment (ions, dipoles, Van der Waals forces, etc.) and shape of the binding surface (3D structure, folding elements, amino acid composition, etc.), which are further involved in the fields of functional genomes, dynamics, kinetics, mechanics etc. [3,4,44]. Experiments for detecting PPIs in vivo and in vitro are aimed at capturing and displaying the specific nature of protein interactions under a certain condition. However, the strategies of prediction of PPIs in silico are devoted to extracting machine-learned PPI rules (usually unintelligible to humans) from interaction-related features and are used to predict unexploited PPIs. Evidence for machine learning includes physical features (such as calculated statistics of hydrophobicity, hydrophilicity, polarizability, etc.) and non-physical features (such as gene coexpression, sequence similarity, function annotation enrichment, etc.). Each feature provides a different angle to view protein interactions and has the potential for uncovering a novel subset of the whole interactome. For this reason, during the workflow of PPI prediction, protein pairs are generally annotated by different parameters (individual or co-occurring parameters) taken from diverse sources of evidence, such as evolutionary relationship, functional annotation, sequence/structure features, network topology and text mining ( Table 2). Evolutionary Relationship: Methods based on evolutionary information use genomic context of organisms to infer functional associations between proteins, including gene neighborhood [70], gene fusion [71] and phylogenetic profiles [12]. (1) The basic hypothesis of the gene neighborhood method is that if neighbor associations of multiple genes are conservative across genomes, it infers that those genes/proteins may have function association which implies interactions; (2) Gene fusion events are also called the "Rosetta stone" method. It is based on the hypothesis that the homology of two interactive proteins/domains in one species may fuse into a single protein in another species. Generally, organisms' sequences are compared to detect the Rosetta stone (domains) fusion events in selected organisms. The fusion phenomenon indicates the functional association and possibility of forming a protein complex; (3) Phylogenetic profile hypothesized that functionally linked proteins tended to coexist during evolution, and the two proteins with similar profiles (inherited together) in different species might have interactions or functional linkages. Sequence comparisons between genomes are used to construct phylogenetic profiles (A protein/domain is represented as an N-dimensional vector: N, number of genomes; Value = 1 or 0, presence or absence of protein/domain in an organism) and evaluate protein pairs by measuring distance.
Ortholog: If a pair of proteins has high similarity to the sequences of another pair of genes or proteins with known interaction in other species (orthologous proteins), they are supposed to have similar functions which infers the relationship of interactions. This approach usually uses sequence alignment algorithms to define the similarity of full sequences or residues, which is regarded as an index to predict interactions between proteins [1,50,51].
Gene Function Annotations: This method is based on the hypothesis that two proteins functioning in the same biological process should be more likely to interact with each other than those two proteins not sharing the same biological process. Information of biological function is accessible from some hierarchically structured annotation systems, such as GO, KEGG, EC and MapMan (usually used for plants) [72], which provide information of colocalization and participation in a shared cellular process implicit to PPIs.
Coexpression: It is generally acknowledged that a pair of interacting proteins has relative gene expression, although the gene coexpression methods are an indirect way to infer the protein interaction (some results indicated that there is no straight correlation between gene expression profiles and PPI associations under some conditions [73]). However, gene co-expression contains information of transcription and regulation, and can be utilized to validate PPIs by calculating correlation coefficient of transcriptome data including RNA sequencing, DNA microarrays, expressed sequence tag (EST), etc. [1]. In addition, by applying the clustering algorithms or analyzing topological structure of coexpression network [73], cluster modules can help to reveal functional relationships and predict PPIs.
Sequence-Based Code Signatures: Some studies implement the natural language processing (NLP) technique to encode sequences for perdition of PPIs. The language of protein sequences is translated into sequence-based signatures and mapped into high-dimensional vectors by using the occurrence frequencies of each kind of building block [74]. Different signatures are wildly used, including N-grams, ORF codon, Conjoint Triad, etc. The "N-grams" (natural language processing term refers to N consecutive symbols) are sets of all possible subsequences of amino acids in protein sequences (N-grams: N = 3, total number = 8000 (20 3 )) [60,61]. ORF codon uses 64-dimensional vectors to represent a given open reading frame (ORF) instead of an amino acid [62].
The Conjoint Triad Method (also called Shen's method) [75] is one of the popular codon usage methods of sequence-based PPI prediction. It encodes each protein sequence as a feature vector by observing frequency of amino acid (AA) triads as follows ( Figure 3): (1) It encodes/classifies 20 amino acids into seven classes based on their dipoles' strength and volume of the side chains; (2) A protein sequence is resolved into a series of AA triads (three continuous AAs as a unit); (3) It uses 343 (7 3 )-dimensional vectors to represent a given protein, and each element of this vector is the frequency of an AA triad; (4) The PPI pair is represented by concatenating the individual two vectors of corresponding proteins. It is noticed that, if we do not process the AA cluster step (in step 1), protein pairs will be required to get encoded as a 16,000-dimensional vector (20 3 × 2, as N-grams method), which is too large for most classifiers. The rule of seven classes for 20 AAs is effective and convenient to operate, and is developed as a classical method that has been widely applied in interaction prediction and interaction site prediction based on sequences [58].
Sequence-Based Structure Signatures: Structure and chemical properties of a protein sequence can be translated into structure signatures to represent characteristics of a residue interface. These signatures include: (1) Physicochemical properties of amino acids, such as hydrophobicity, hydrophilicity, polarizability, solvent-accessible surface area (SASA), relative surface accessibilities (RSA) of residues, side chain net charge index (NCI), charge, isoelectric point, etc.; (2) Signatures of protein structure, such as 3D structure indexes in PDB, protein fold alpha helices, beta sheets and coils, posttranslational modifications (PTMs), and domains [1,76]. These signatures are available from different tools, including NACCESS program [77], DSSP algorithm in PDB [78], PSIPRED [1], AA index [79], etc.
Domain methods aim to establish protein relationships by domain−domain interactions (DDIs), which are applied widely in sequence-based PPI prediction [35,[45][46][47][48] As the domains are conserved, distinct, compact structural units in proteins, the computational insights into detailed knowledge about a protein pair's interaction can be typically simplified as domain associations. Information of protein domains can be accessed at Pfam [80], Conserved Domain Database (CDD) [81], etc. Large-scale inference of DDIs can be processed by analyzing the domain composition of a protein pair in a high-quality PPI network and then using specific classifiers to identify domains (or domain combinations) responsible for protein interactions (Figure 4). Moreover, some prediction work of DDIs complements other evidence. For example, the DOMINE database [46,82] integrates other evidence for DDI inferences, such as phylogenetic profile, gene fusion, GO, etc. Network Topology: Network topological parameters are generally calculated from positive datasets. They characterize the topological properties of currently available protein interaction networks to evaluate target protein pairs. Graph-theoretic invariants include weighted domination number, average eccentricity number, the eccentricity, circumference, weighted peripheral number, clustering coefficient of a protein pair, etc. [57].
Text Mining: Protein-protein interactions can also be predicted using text mining (TM). TM technology could explore protein interactions from full-length papers through titles, abstracts, paragraphs, diagram texts and find co-occurrence of statistical significance between text corpuses [84]. Some methods present grammatical structures as networks considering properties of semantic notion and analyses with kernel-based methods (mostly an SVM) [69]. Other studies reassemble text corpus to integrate PPI-related information such as phosphorylation, domain interactions, and homology [85,86]. Literature curation is managed by many accessible protein databases such as Yeast Proteome Database (YPD) [87], Database of Interacting Proteins (DIP) [88], BioGRID and HPRD. In addition, there are some TM-based methods/tools that provide multiple-perspective evidence for PPI extraction, such as BioRAT (Biological Research Assistant for Text mining) [89], eFIP (Extracting Functional Impact of Phosphorylation) [85], FACTA (Finding Associated Concepts with Text Analysis) [90] and Hit Predict [86].
Strategy for Integrative Analysis
Studies in this category make use of a classification algorithm to integrate interaction-related features. With these available physical and non-physical features, classifiers are trained to distinguish between positive and negative examples. It is a challenge to integrate evidence variants in confidence and coverage to increase PPI prediction coverage and accuracy. The common process of PPI prediction by evidence-combining methods includes several steps.
Step 1: Choose appropriate evidence. Evidence must be carefully chosen with content specialized for each different network. Moreover, the following issue must be taken into consideration: Is this evidence a discovery of a global PPI in an unexploited species, or is it a meticulous digging of interaction sites in model species? It should be noted that there is a widespread misconception that "more evidence yields better results". In a prediction process, blindly incorporating multiple sources of evidence could influence the results and yield other biases [42].
Step 2: Encode protein pairs with evidence. The common encoding process transforms individual or homogeneous evidence into a feature vector representing each pair of proteins. The goal is to convert them to solve the problem of binary classification. These features may represent a particular source of information such as correlations of gene expression, phylogenetic profiles, sequence-based signatures, GO functional annotation and chemical properties. There are many modes to encode evidence sources into a featured vector, to choose statistical standard and data dimensions, and to check the normalization affect or the reliability of different computational predictions [22,45].
Step 3: Different strategies are adopted to merge classifiers into integrative datasets. Some studies use uniform evidence with a similarly encoded rule in one step. Some studies first train datasets with multiple independent evidence and then cross-validate and integrate multiple independent sets of training results to reduce potential bias. Others use single training or integrating probability score to uncover a novel subset of the whole interactome. Many classifiers have been introduced to predict PPIs including, Artificial Neural Network (ANN), Decision Tree (DT), K-Nearest Neighbor (KNN), Logistic Regression (LR), Naive Bayes (NB), Random Forest (RF), Support Vector Machine (SVM), etc. (Table 3).
However, studies of PPIs are diverse in target species, data sources, demand of accuracy and coverage, which are various in details and processes. In this paper, we are focused on introducing several independent strategies for integrative analysis. Some related studies are listed in Table 3.
Exploratory PPI Predictions Using Combinated Vector Descriptors
Some studies encode evidence sources with uniform rule and use high-dimensional concatenated vectors to present information of uniformed evidence.
Case 1: The Multi-Scale Continuous and Discontinuous (MCD) feature method [58] (developed from the auto-covariance (AC) [76] method) captures the interactions from continuous and discontinuous binding patterns present within a protein sequence. MCD divides the entire protein sequence into four strings of equal length. For each string, three types of descriptors (composition, transition and distribution that have evidence based on amino acid sequences) are used to represent amino acid properties. Then, a high-dimensional concatenated vector is used to present information of sector combination (4-bit binary of MCD feature) and encode evidence in a protein pair. At last, minimum redundancy maximum relevance (mRMR) is applied for the feature selection, and the SVM classifier finally performs the prediction tasks.
Case 2: Another method is to predict PPIs using graph invariants and a neural network [57]. It considers the primary structure of domains as a numerical sequence that combines even invariants containing graph invariants derived from graph-theoretic models of individual amino acids (including weighted domination (g), averaged eccentricity (d), circumference (c) and weighted peripheral number (p)), hydrophobicity and charge of each amino acid. Then, vectors train with a neural network to recognize their targets.
Exploratory PPI Predictions Using Probabilistic Classification Scoring
Some studies construct a PPI network using scoring methods based on probabilistic classification decision making. These methods evaluate particular potential of protein interactions through the likelihood of a true positive. Take the following individual cases for example.
Case 1: Naive Bayes strategy is proposed for exploring a model network in specific species which lack protein structural information [18,22]. Available evidence includes genomic and proteomic assembled data, ortholog interaction in model organisms, coexpression profiles and enriched protein−domain pairs, as well as shared functional annotations from Gene Ontology (identified the smallest shared biological process (SSBP) score). The probability combines the evidence sources into a naive Bayes model which involves calculating and identifying the max LR of each pair-based evidence, and then integrating the above results with naive Bayes algorithm and generating final composite likelihood ratio from multiplicative LR. Case 2: InPrePPI (an integrated evaluation method based on genomic context for predicting protein−protein interactions in prokaryotic genomes) [21] uses AC value (an integrated value of the accuracy and coverage) to integrate data. In this study, each protein pair of three positive datasets (KEGG, EcoCyc, and DIP) is encoded by four methods of phylogenetic profile (PP), gene cluster (GC), gene fusion event (FE) and gene neighbor (GN), respectively. The accuracy and coverage is calculated based on each method. Finally, an integrated score for each protein pair is presented by calculating weight and normalized AC value.
Prediction of Protein-Protein Interaction Sites
Proteins associate with each other through specific binding sites. These protein-protein interaction sites (PPISs) are believed to be good contributors to the recognition of binding residues under specific chemical and physical statuses. Since PPISs mark the central position of interactions and are less efficiently captured by experimental methods, computational approaches have been developed to model the discrimination between interacting and non-interacting sites for prediction of PPIS. Many studies proposed PPI site prediction methods by training with structure-based and sequence-based evidence. Computational approaches for PPI prediction using structural information have gained more attention due to the rapid growth of structural information (in PDB). In this review, the following individual studies are taken as examples.
Case 1: A prediction server of PPIS named PSIVER [64] predicts binding residue protein pairs by using the naive Bayes (NB) classifier and kernel density estimation (KDE) with two distinct features: position-specific scoring matrix (PSSM) and predicted accessibility (PA). Individual classifiers are trained on the basis of PSSM and PA evidence, respectively. Then, results are combined into a score for classifying GSP and GSN.
Case 2: In a study by Dhole et al. (2014), L1-regularized logistic regression (L1-logreg) was developed as a classifier by training evidence based on PSSM, averaged cumulative hydropathy (ACH) and predicted relative solvent accessibility (RSA), which includes evolutionary conservation and chemical/functional information of amino acids [63].
Case 3: The SSWRF method [65] is introduced in order to assemble the SVM and sample-weighted random forest (SWRF). A lower-dimensional vector represents the evidence of the PSSM-derived feature, averaged cumulative hydropathy (ACH) and averaged cumulative relative solvent accessibility (RSA). It processes some vectors of a given training dataset with SVM. The generated scores to evaluate samples and to calculate weights are further utilized for training with SWRF. Finally, the ensemble algorithm of the SVM and SWRF is executed to predict query inputs.
Performance Evaluation of PPI Prediction
Generally, cross-validation is employed to evaluate the prediction of performance of the proposed method. Some studies evaluate the performance of prediction by cross-validating datasets from different sources (databases, experimental methods or organisms). Some studies randomly divide testing datasets into several equally sized subsets, and each subset is used as a test set [21,65,76].
The following assessments are taken into account to perform evaluation: Precision, Recall (Sensitivity), Specificity, Overall Prediction Accuracy, Matthews's Correlation Coefficient (MCC), F-measure, Receiver Operating Characteristic (ROC) and Area Under the ROC Curve (AUC). These assessments compute the accuracy and deviation to evaluate the feasibility and robustness of a PPI prediction method. Some are defined as follows: TP (true positive) is the number of the predicted PPIs found in the GSP; FP (false positive) is the number of the predicted PPIs not found in GSP; FN (false negative) is the number of PPIs in the GSP that failed to be predicted by the method false positive; TN (true negative) is the number of true non-interacting pairs predicted correctly. MCC, F-measure, ROC and AUC are important assessments. MCC is a measure of the quality of binary classification, which is a correlation coefficient between the observed and predicted results (it returns a value between −1 and +1. MCC equal to 0 is regarded as a completely random prediction, −1 is regarded as a completely wrong prediction and 1 is regarded as a perfect prediction). F-measure is the harmonic mean of Recall and Precision which combines Recall and Precision with balanced weights. In addition, ROC curve and AUC value illustrate performance of a binary classifier system by graphical plot. ROC curve is generated by plotting the TP rate against the (FP rate at various thresholds, and AUC values are used for comparison between methods.
Conclusions
Biology relies on the concerted actions of proteins organized in networks. The role of computational biology research in the area of protein-protein interaction prediction methodologies has recently gained widespread attention. Many tools have been developed to facilitate system biologists, not only in PPI prediction but also in defining their binding residues involved at interaction interfaces. In this review, we presented workflows to predict large-scale PPIs through a variety of evidence methods. However, the result of "interactions" is solely a definition of compatibility between two proteins with respect to evolution, function and structure, regardless of their relative reactivity.
There is still much space for further improvements to reach realistic interactions. For this purpose, high quantity and quality datasets are indispensable. The significant increase in the prediction coverage and accuracy during the past several years is mainly caused by the accumulation of credible data from genome sequencing, PPI experimental detection and protein 3D structure definition. It can be anticipated that, with more and more information available in the future, the prediction potential will be improved and the corresponding combined methods will acquire better performance. On the other hand, more precise methods are also required in this regard. More time is needed for the development of even more powerful machine learning methods (like deep neural networks), along with the systemic understanding of the essential mechanism of PPIs. We hope that the present work will inspire PPI predictors toward further evaluation and improvements. | 6,907.6 | 2016-11-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
COMPRESSIVE PROPERTIES OF GEOPOLYMER MATRIX COMPOSITES
Polymer based resin is presently the most used resin for preparing of composites constructions, because of its undisputable benefits; however, there are some limits. The aircraft industry has especially strict requirements for fire, smoke and toxicity (FST) properties which are limited when using organic polymers. Conventional polymer resins resist to temperatures usually up to 120 °C and then they lose stiffness and strength. However, geopolymer matrix is a new type of resin with high potential for cost-efficient applications dealing with temperatures up to 1 200 °C. This paper presents compressive properties of a new geopolymer resin and a fibre reinforced composite with the geopolymer matrix (geocomposite). The effect of a harsh environment exposition on the strength was also evaluated, specifically the impact of the exposure in hot-wet and salt mist conditions. Samples were tested in accordance with ASTM D695 in case of pure resin and in accordance with ASTM D6641 in case of the geocomposite. All tests were performed at room temperature and additionally, pure geopolymer resin was tested at 400 °C. The high temperature caused 35 % decrease of the compressive strength in comparison with the room temperature. Geopolymers behaves like a ceramic and have some unique properties such as high thermal stability, non-flammability and do not generate toxic smoke and fumes.
Introduction
At present, there is an increasing need for materials eliminating fatal results of fire in case of aircraft accident [1].Currently used organic matrix composites formally meet requirements such as FAR 25, appendix F; nevertheless, in the event of real fire they quickly deteriorate at temperatures above 300 °C emitting toxic fumes and gases.True non-flammability of present-day composites is limited to minutes [2].The material, known as geopolymer (first geopolymer resin was described in France by J. Davidovits in 1979 [3]) withstands temperatures of over 1000 ºC and can be utilized as matrix in fibre reinforced composites.The use of geopolymer materials looks like a good choice for constructions with increased requirements on fire, smoke and toxicity (FST) properties.The main advantages of geopolymers are the excellent temperature stability; fire resistance with no generation of toxic fumes and smokes, low thermal conductivity and good specific strength [4][5][6][7].The material behaves like a ceramic, but does not require high processing temperatures or pressures, so it can be worked up like common organic resins [8].On the other hand, the geopolymer materials have also disadvantages when compared with commonly used polymer resins.The most significant are lower mechanical properties.
Experiment
Compressive properties of a new geopolymer resin and carbon fibre reinforced geopolymer composite (geocomposite) were examined.The influence of the temperature and environment (hot-wet and salt mist conditions) on these properties was also investigated in the experiment.
Samples were tested in accordance with the ASTM D695 -Standard Test Method for Compressive Properties of Rigid Plastics [9], in case of pure resin and in accordance with ASTM D6641 -Standard Test Method for Compressive Properties of Polymer Matrix Composite Materials Using a Combined Loading Compression (CLC) Test Fixture [10], in case of geocomposite samples.
Material
Geopolymer resin was prepared by mixing of the components listed in Table 1.The table also shows the weight and weight ratio of the individual components.In the first step, pure geopolymer resin was prepared (by mixing components 1-5) and then short ceramic and carbon fibres were added.Unidirectional carbon fibre tape, type TCU175, 3K [11] was used for geocomposite samples.
Test samples
Resin samples were prepared by casting the geopolymer mixture into the form.The filling of the geopolymer resin into the forms was carried out on a vibration table.After casting, the test samples were cured at 80 °C/ 20 hour.The test samples were then extruded from the form and tempered at 80 °C/ 9 hours + 100 °C/ 12 hours at the chamber.Final tempering was done in a cascade mode from 105 to 170 °C.Carbon fibre-reinforced polymer (CFRP) samples were prepared by cutting of the test panel.Test panel were prepared by contact laminating (standardly 400 g of resin / m 2 of fabric).Parameters of the curing cycle were as follows: heat up rate 2-8 °C/ min, cured temperature 85 ±3°C, dwell time 18 hours, vacuum min.-80 kPa, cooling rate max.8 °C/ min.Test samples were cut from the test panel by a diamond blade.
The surface protection of all test samples to improve the resistance to external influence (especially by water) was performed by immersing test samples into a mixture of ethyl silicates of polysilic acids, concurrently with Dynasylan Silbond 40 [12].Test matrix is showed in Table 2.
Tests
All tests were performed in the testing laboratory of Strength of structure department of the Czech aerospace research centre according to valid standards on the mechanical loading machine Instron 55R1185.
Pure resin compression test
Test was performed according to ASTM D695.Specimen was placed between compressive plates parallel to the specimen surface (platen surfaces parallel was within 0.03 mm across the sample contact area), and compressed along its major axis at constant rate of displacement (1.0 mm/min) until the specimen fracture occurred.The test assembly is shown in Figure 1a and Figure 1b for RT and elevated temperature, respectively.
Coupon compression test
Test was performed according to ASTM D6641.Specimen was fixed in the combined loading compression (CLC) test fixture so that the end of the specimen was in flush with the ends of the CLC test fixture.All screws on the CLC fixture were screwed to 2.5 Nm.The assembled fixture was placed between well-aligned, fixed flat platens (platen surfaces parallel was within 0.03 mm across the fixture base) and compressed along its longitudinal axis at constant rate of displacement (1.3 mm/min) until the failure.To determine the compression modulus of the geopolymer, extensometer Instron 2620-601 with gauge length 12.5 mm was attached on the test sample.The test assembly is shown in Figure 1c.
Results
All measured results were tested by Dixon's Q test on presence of outliers.Test was performed for significance level 0.05 and outlying values were not included into the test evaluation.A T-test was performed for determination whether the individual sets are significantly different from each other.Statistical significance 0.05 was chosen.
Compressive strength of the pure resin.
The highest average compressive strength of 43.6 MPa was measured on samples tested at room temperature without previous exposition in the harsh environment.Mean values with standard deviations are shown in Figure 2. The measured data showed that the elevated temperature of 400 °C decreased in all cases the compressive strength compared to RT sets.The decrease was evaluated as statistically significant for all the sets.In the case samples measured at RT after the environmental exposition, the hot-wet condition had not statistically significant influence on compressive strength.However, the effect of salt mist was statistically significant.For the tests at 400 °C, the influence of the environment was statistically insignificant.Typical failure modes of the geopolymer resin samples are showed in Figure 3a.
Compressive strength and modulus of the geocomposite
The highest average compressive strength of 225.5 MPa was measured for set that was not exposed in the environment.Average compressive strength measured on the samples exposed at hot-wet conditions was approximately by 26 % lower (168.1 MPa) compared to the baseline and average strength of salt mist exposed sample was by 21 % lower (178.5 MPa) than for the not exposed set.Typical failure mode is shown in Figure 3b and mean values with standard deviations are shown in Figure 4a.
Differences between the measured compressive modulus of elasticity were evaluated as statistically insignificant.The difference between the highest and lowest measured average compressive modulus of elasticity was less than 15 % (96.8 GPa for not exposed set, 83.0 GPa for hot-wet set and 86.4 for salt mist set).Mean values of the compressive modulus with standard deviations are shown in Figure 4b.fibre/matrix debonding cracks in the geopolymer resin
Microstructure analysis
Microstructure analysis using scanning electron microscopy was performed on selected specimens before the testing; see Figure 5.The analysis showed that in some places the resin is debonded from the carbon fibres.Cavities and cracks in the pure resin were observed.The origin of these defects was probably in the shrinkage of the geopolymer resin.These defects were probably the cause of a large scatter of the measured values.
Conclusion
The research of geopolymer based materials showed that compressive strength of pure geopolymer resin at 400 °C was decreased by approx.35 % in comparison with the room temperature values.Furthermore, the exposure of salt mist was significantly deteriorating on the strength of geopolymer resin for both test temperatures.On the other hand, the influence of the hot-wet environment had insignificant influence on measured compressive strength.
For geocomposite samples, the effect of the harsh environment on values of compressive strength was significant for both the cases reaching up to 25 % decrease.However, the influence of the environment on compressive modulus of the geocomposite samples was statistically insignificant.
The measured values clearly demonstrated that the geopolymer based materials still retain approximately 65 % of the strength at 400 °C.Compared to that, conventional polymeric materials would be disintegrated at that temperature.
Geopolymeric materials can be recommended for the structures in airplanes interiors where strict requirements for fire, smoke and toxicity are defined according to FAR 25 regulations.
Figure 1 .
Figure 1.Test assemblies: a) pure resin at RT, b) pure resin at temperature chamber, c) geocomposite sample in CLC test fixture
Table 2 .
Test matrix Exposure in salt mist (SM) was performed according to standard EN ISO 9227 with concentration of the NaCl solution 50 g/l, temperature 35 °C and relative humidity 100 %.Hot/Wet (HW) condition at 70 °C and 85 % relative humidity until saturation per EN 60068-2-78, was applied. | 2,224.6 | 2018-01-01T00:00:00.000 | [
"Materials Science"
] |
The Radical Plasticity Thesis: How the Brain Learns to be Conscious
In this paper, I explore the idea that consciousness is something that the brain learns to do rather than an intrinsic property of certain neural states and not others. Starting from the idea that neural activity is inherently unconscious, the question thus becomes: How does the brain learn to be conscious? I suggest that consciousness arises as a result of the brain's continuous attempts at predicting not only the consequences of its actions on the world and on other agents, but also the consequences of activity in one cerebral region on activity in other regions. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of meta-representations that characterize and qualify the target first-order representations. Such learned redescriptions, enriched by the emotional value associated with them, form the basis of conscious experience. Learning and plasticity are thus central to consciousness, to the extent that experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others. This is what I call the “Radical Plasticity Thesis.” In a sense thus, this is the enactive perspective, but turned both inwards and (further) outwards. Consciousness involves “signal detection on the mind”; the conscious mind is the brain's (non-conceptual, implicit) theory about itself. I illustrate these ideas through neural network models that simulate the relationships between performance and awareness in different tasks.
Consider the humble but proverbial thermostat. A thermostat is a simple device that can turn a furnace on or off depending on whether the current temperature exceeds a set threshold. Thus, the thermostat can appropriately be said to be sensitive to temperature. But is there some sense in which the thermostat can be characterized as being aware of temperature? Contra Chalmers (1996), I will argue that there is no sense in which the thermostat can be characterized as being aware of temperature. There are two important points that I would like to emphasize in developing this argument. The first is that there is no sense in which the thermostat can be characterized as being aware of temperature because it does not know that it is sensitive to temperature. The second point is that there is no sense in which the thermostat can be characterized as being aware of temperature because it does not care about whether its environment is hot or cold. I will further argue that these two features -knowledge of one's own internal states and the emotional value associated with such knowledgeare constitutive of conscious experience. Finally, I will argue that learning (or, more generally, plasticity) is necessary for both features to emerge in cognitive systems. From this, it follows that consciousness is something that the brain learns to do through continuously operating mechanisms of neural plasticity. This I call the "Radical Plasticity Thesis." Information processing can undoubtedly take place without consciousness, as abundantly demonstrated not only by empirical evidence (the best example of which is probably blindsight), but also by the very fact that extremely powerful information-processing machines, namely computers, have now become ubiquitous. shared by monkeys (Humphrey, 1971). To a synesthete, perhaps seeing the color red will evoke the number 5. The point is that if conscious experience is what it feels like to be in a certain state, then "What it feels like" can only mean the specific set of associations that have been established by experience between the stimulus or the situation you now find yourself in, on the one hand, and your memories, on the other. This is what one means by saying that there is something it is like to be you in this state rather than nobody or somebody else: The set of memories evoked by the stimulus (or by actions you perform, etc.), and, crucially, the set of emotional states associated with each of these memories. This is essentially the perspective that Damasio (2010) defends.
Thus, a first point about the very notion of subjective experience I would like to make here is that it is difficult to see what experience could mean beyond (1) the emotional value associated with a state of affairs, and (2) the vast, complex, richly structured, experiencedependent network of associations that the system has learned to associate with that state of affairs. "What it feels like" for me to see a patch of red at some point seems to be entirely exhausted by these two points. Granted, one could still imagine an agent that accesses specific memories, possibly associated with emotional value, upon seeing a patch of red and who fails to "experience" anything. But I surmise that this would be mere simulation: One could design such a zombie agent, but any real agent that is driven by self-developed motivation, and that cannot help but be influenced by his emotional states will undoubtedly have experiences much like ours.
Hence, there is nothing it is like for the camera to see the patch of red simply because it does not care: The stimulus is meaningless; the camera lacks even the most basic machinery that would make it possible to ascribe any interpretation to the patch of red; it is instead just a mere recording device for which nothing matters. There is nothing it is like to be that camera at that point in time simply because (1) the experience of different colors do not do anything to the camera; that is, colors are not associated with different emotional valences; and (2) the camera has no brain with which to register and process its own states. It is easy to imagine how this could be different. To hint at my forthcoming argument, a camera could, for instance, keep a record of the colors it is exposed to, and come to "like" some colors better than others. Over time, your camera would like different colors than mine, and it would also know that in some non-trivial sense. Appropriating one's mental contents for oneself is the beginning of individuation, and hence the beginning of a self.
Thus a second point about experience that I perceive as crucially important is that it does not make any sense to speak of experience without an experiencer who experiences the experiences. Experience is, almost by definition ("what it feels like"), something that takes place not in any physical entity but rather only in special physical entities, namely cognitive agents. Chalmers' (1996) thermostat fails to be conscious because, despite the fact that it can find itself in different internal states, it lacks the ability to remove itself from the causal chain which it instantiates. In other words, it lacks knowledge that it can find itself in different states; it is but a mere mechanism that responds to inputs in certain ways. While there is indeed something to be experienced there (the different states the thermostat can find itself in), there is no one home to be the subject of these experiences -the thermostat simply lacks 1988; Dehaene et al., 1998), integration and differentiation of information (Tononi, 2003(Tononi, , 2007, or the involvement of higher-order representations (Rosenthal, 1997(Rosenthal, , 2006, to name just a few. Another perspective is to consider that experience will never be amenable to a satisfactory functional explanation. Experience, according to some (e.g., Chalmers, 1996), is precisely what is left over once all functional aspects of consciousness have been explained. Notwithstanding the fact that so defined, experience is simply not something one can approach from a scientific point of view, this position recognizes that consciousness is a unique (a hard) problem in the Cognitive Neurosciences. But that is a different thing from saying that a reductive account is not possible. A non-reductive account, however, is exactly what Chalmers' Naturalistic Dualism attempts to offer, by proposing that information, as a matter of ontology, has a dual aspect, -a physical aspect and a phenomenal aspect. "Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing" (Chalmers, 2007b, p. 366). This position leads him to defend the possibility that experience is a fundamental aspect of reality. Thus, even thermostats, for instance, may be endowed with very simple experiences, in virtue of the fact that they can toggle in two different states.
What, however, do we mean when we speak of "subjective experience" or of "quale"? The simplest definition of these concepts (Nagel, 1974) goes right to the heart of the matter: "Experience" is what it feels like for a conscious organism to be that organism. There is something it is like for a bat to be a bat; there is nothing it is like for a stone to be a stone. As Chalmers (2007a) puts it: "When we see, for instance, we experience visual sensations: The felt quality of redness, the experience of dark and light, the quality of depth in a visual field" (p. 226).
Let us try to engage in some phenomenological analysis at this point to try to capture what it means for each of us to have an experience. Imagine you see a patch of red (Humphrey, 2006). You now have a red experience -something that a camera recording the same patch of red will most definitely not have. What is the difference between you and the camera? Tononi (2007), from whom I borrow this simple thought experiment, points out that one key difference is that when you see the patch of red, the state you find yourself in is but one among billions, whereas for a simple lightsensitive device, it is perhaps one of only two possible states -thus the state conveys a lot more differentiated information for you than for a light-sensitive diode. A further difference is that you are able to integrate the information conveyed by many different inputs, whereas the chip on a camera can be thought of as a mere array of independent sensors among which there is no interaction.
Hoping not to sound presumptuous, it strikes me, however, that both Chalmers' (somewhat paradoxically) and Tononi's analyses miss fundamental facts about experience: Both analyze it as a rather abstract dimension or aspect of information, whereas experiencewhat it feels like -is anything but abstract. On the contrary, what we mean when we say that seeing a patch of red elicits an "experience" is that the seeing does something to us -in particular, we might feel one or several emotions, and we may associate the redness with memories of red. Perhaps seeing the patch of red makes you remember the color of the dress that your prom night date wore 20 years ago. Perhaps it evokes a vague anxiety, which we now know is also As Clark and Karmiloff-Smith (1993) insightfully pointed out, such representations are "first-order" representations to the extent that they are representations in the system rather than representations for the system that is, such representations are not accessible to the network as representations.
In other words, such a (first-order) network can never know that it knows: It simply lacks the appropriate machinery. This points to a fundamental difference between sensitivity and awareness. Sensitivity merely entails the ability to respond in specific ways to certain states of affairs. Sensitivity does not require consciousness in any sense. A thermostat can appropriately be characterized as being sensitive to temperature, just as the carnivorous plant Dionaea Muscipula may appropriately be described as being sensitive to movement on the surface of its leaves. But our intuitions (at least, my intuitions) tell us that such sensitive systems (thermostats, photodiodes, transistors, cameras, carnivorous plants) are not conscious. They do not have "elementary experiences," they simply have no experiences whatsoever. Sensitivity can involve highly sophisticated knowledge, and even learned knowledge, as illustrated by Hinton's (1986) network, but such knowledge is always first-order knowledge, it is always knowledge that is necessarily embedded in the very same causal chain through which first-order processing occurs and that can therefore only be expressed through action as a direct result of perception.
Awareness, on the other hand, always seems to minimally entail the ability of knowing that one knows. This ability, after all, forms the basis for the verbal reports we take to be the most direct indication of awareness. And when we observe the absence of such ability to report on the knowledge involved in our decisions, we rightfully conclude that the decision was based on unconscious knowledge. Thus, it is when an agent exhibits knowledge of the fact that he is sensitive to some state of affairs that we take this agent to be a conscious agent. This second-order knowledge, I argue, critically depends on learned systems of meta representations, and forms the basis for conscious experience provided the agent also cares about certain states of affairs more than about others.
Consciousness thus not only requires ability to learn about the geography of one's own representations, but it also requires that the resulting knowledge reflects the dispositions and preferences of the agent. This is an important point, for it would be easy to program a thermostat that is capable not only of acting based on the current temperature, but also to report on its own states. Such a talking thermostat would constantly report on the current temperature and on its decisions. Would that make the thermostat conscious? Certainly not, for it is clear that the reporting is but a mere additional process tacked on the thermostat's inherent ability to switch the furnace according to the temperature. What would go some way toward making the thermostat conscious is to set it up so that it cares about certain temperatures more than about others, and that these preferences emerge as a result of learning.
What would it take for a network like Hinton's (1986) to be able to access its own representations, and what difference would that make with respect to consciousness? To answer the first question, the required machinery is the machinery of agenthood; in a nutshell, the ability to do something not just with external states of affairs, but rather with one own's representations of such external states. This crucially requires that the agent be able to access, the appropriate machinery to do so. The required machinery, I surmise, minimally involves the ability to know that one finds itself in such or such a state.
This point can be illustrated by means of well-known results in the connectionist, or artificial neural network modeling literature. Consider for instance Hinton's (1986) famous demonstration that neural networks trained through associative learning mechanisms can learn about abstract dimensions of the training set. Hinton's (1986) network was a relatively simple back-propagation network trained to process linguistic expressions consisting of an agent, a relationship, and a patient, such as for instance "Maria is the wife of Roberto." The stimulus material consisted of a series of such expressions, which together described some of the relationships that exist in the family trees of an Italian family and of an English family The network was required to produce the patient of each agent-relationship pair it was given as input. For instance, the network should produce "Roberto" when presented with "Maria" and "wife." Crucially, each person and each relationship were presented to the network by activating a single input unit. Hence there was no overlap whatsoever between the input representations of, say, Maria and Victoria. Yet, despite this complete absence of surface similarity between training exemplars, Hinton (1986) showed that after training, the network could, under certain conditions, develop internal representations that capture relevant abstract dimensions of the domain, such as nationality, sex, or age! Hinton's (1986) point was to demonstrate that such networks were capable of learning richly structured internal representations as a result of merely being required to process exemplars of the domain. Crucially, the structure of the internal representations learned by the network is determined by the manner in which different exemplars interact with each other, that is, by their functional similarity, rather than by their mere physical similarity expressed, for instance, in terms of how many features (input units) they share. Hinton (1986) thus provided a striking demonstration of this important and often misunderstood aspect of associative learning procedures by showing that under some circumstances, specific hidden units of the network had come to act as detectors for dimensions of the material that had never been presented explicitly to the network. These results truly flesh out the notion that rich, abstract knowledge can simply emerge as a by-product of processing structured domains. It is interesting to note that the existence of such single-unit "detectors" has recently been shown to exist in human neocortex (Kreiman et al., 2002): Single-neuron recording of activity in hippocampus, for instance, has shown that some individual neurons exclusively respond to highly abstract entities, such as the words "Bill Clinton" and images of the American president. Now, the point I want to make with this example is as follows: One could certainly describe the network as being sensitive to nationality, in the sense that it exhibits differential responding (hence, behavioral sensitivity) to inputs that involve Italian agents vs. English agents. But, obviously, the network does not know anything about nationality. It does not even know that it has such and such representations of the inputs, nor does it know anything about its own, self-acquired sensitivity to the relevant dimensions. Instead, the rich, abstract, structured representations that the network has acquired over training forever remain embedded in a causal chain that begins with the input and ends with the network's responses.
Cleeremans
The radical plasticity thesis www.frontiersin.org closely related to processes of learning, because one of the central consequences of successful adaptation is that conscious control is no longer required over the corresponding behavior. Indeed, it might seem particularly adaptive for complex organisms to be capable of behavior that does not require conscious control, for instance because behavior that does not require monitoring of any kind can be executed faster or more efficiently than behavior that does require such control. What about conscious experience? Congruently with our intuitions about the role of consciousness in learning, we often say of somebody who failed miserably at some challenging endeavor, such as completing a paper by the deadline, that the failure constitutes "a learning experience." What precisely do we mean by this?
We mean that the person can now learn from her mistakes, that the experience of failure was sufficiently imbued with emotional value that it has registered in that person's brain. The experience hurt, it made one realize what was at stake, it made us think about it, in other words, it made us consciously aware of what failed and why. But this minimally requires what Kirsh (1991) has called "explicit representation," namely the presence of representations that directly represent the relevant information. By "direct" here, I mean that the information is represented in such a manner that no further computation is required to gain access to it. For instance, a representation that is explicit in this sense might simply consist of a population of neurons that fire whenever a specific condition holds: A particular stimulus is present on the screen, my body is in a particular state (i.e., pain, or hunger).
By assumption, however, such "explicit" representations are not necessarily conscious. Instead, they are merely good candidates to enter conscious awareness in virtue of features such as their stability, their strength, or their distinctiveness (Cleeremans, 1997;Cleeremans and Jiménez, 2002). What is missing, then? What is missing is that such representations be themselves the target of other representations. And how would this make any difference? It makes a crucial difference, for the relevant first-order representations are now part of the agent's known repertoire of mental states; such representations are then, and only then, recognized by the agent as playing the function of representing some other (internal or external) state of affairs.
Necessary coNditioNs for awareNess
Let us now focus on the set of assumptions that together form the core of a framework that characterizes how learning shapes availability to consciousness (see Cleeremans and Jiménez, 2002;Cleeremans, 2008, for more detailed accounts). It is important to keep it in mind that the framework is based on the connectionist framework (Rumelhart and McClelland, 1986). It is therefore based on many central ideas that characterize the connectionist approach, such as the fact that information processing is graded and continuous, and that it takes place over many interconnected modules consisting of processing units. In such systems, long-term knowledge is embodied in the pattern of connectivity between the processing units of each module and between the modules themselves, while the transient patterns of activation over the units of each module capture the temporary results of information processing.
This being said, a first important assumption is that representations are graded, dynamic, active, and constantly causally efficacious (Cleeremans, 1994(Cleeremans, , 2008. Patterns of activation in neural networks inspect, and otherwise manipulate its own representations, and this in turn, I surmise, requires mechanisms that make it possible for an agent to redescribe its own representations to itself. The outcome of this continuous "representational redescription" (Karmiloff-Smith, 1992) process is that the agent ends up knowing something about the geography of its own internal states: It has, in effect, learned about its own representations. Minimally, this could be achieved rather simply, for instance by having another network take both the input (i.e., the external stimulus as represented proximally) to the first-order network and its internal representations of that stimulus as inputs themselves and do something with them.
One elementary thing the system consisting of the two interconnected networks (the first-order, observed network and the secondorder, observing network) would now be able to do is to make decisions, for instance, about the extent to which an external input to the first-order network elicits a familiar pattern of activation over its hidden units or not. This would in turn enable the system to distinguish between hallucination and blindness (see Lau, 2008), or to come up with judgments about the performance of the firstorder network (Persaud et al., 2007;. To address the second question (what difference would representational redescription make in terms of consciousness), I appeal to Rosenthal's (1997Rosenthal's ( , 2006 higher-order thought (HOT) theory of consciousness. While I do not feel perfectly happy with all aspects of HOT Theory, I do believe, however, that higher-order representations (I will call them meta-representations in what follows) play a crucial role in consciousness.
An immediate objection to this idea is as follows: If there is nothing intrinsic to the existence of a representation in a cognitive system that makes this representation conscious, why should things be different for meta-representations? After all, meta-representations are representations also. Yes indeed, but with a crucial difference: Meta-representations inform the agent about its own internal states, making it possible for it to develop an understanding of its own workings. And this, I argue, forms the basis for the contents of conscious experience, provided of course -which cannot be the case in an contemporary artificial system -that the system has learned about its representations by itself, over its development, and provided that it cares about what happens to it, that is, provided its behavior is rooted in emotion-laden motivation (to survive, to mate, to find food, etc.). the radical plasticity thesis I would thus like to defend the following claim: Conscious experience occurs if and only if an information-processing system has learned about its own representations of the world in such a way that these representations have acquired value for it. To put this claim even more provocatively: Consciousness is the brain's (emphatically non-conceptual) theory about itself, gained through experience interacting with the world, with other agents, and, crucially, with itself. I call this claim the "Radical Plasticity Thesis," for its core is the notion that learning is what makes us conscious.
Before getting to the core of the argument, I should briefly sketch a framework through which to characterize the relationships between learning and consciousness. If the main cognitive function of consciousness is to make adaptive control of behavior possible, as is commonly accepted, then consciousness is necessarily learning mechanisms, which instantiate the different computational objective of mastering specific input-output mappings (i.e., achieving specific goals) in the context of specific tasks through errorcorrecting learning procedures. Stability, strength, or distinctiveness can be achieved by different means. Over short time scales, they can result, for instance, from increased stimulus duration, from the simultaneous top-down and bottom-up activation involved in so-called "reentrant processing" (Lamme, 2006), from processes of "adaptive resonance" (Grossberg, 1999), from processes of "integration and differentiation" (Edelman and Tononi, 2000), or from contact with the neural workspace, brought about by "dynamic mobilization" (Dehaene and Naccache, 2001). It is important to realize that the ultimate effect of any of these putative mechanisms is to make the target representations stable, strong, and distinctive. These properties can further be envisioned as involving graded or dichotomous dimensions (see also Maia and Cleeremans, 2005 for an exploration of how connectionist principles are relevant to the study of consciousness).
Over longer time scales, however, high-quality representations arise as a result of learning or cognitive development. Weak, fragile representations become progressively stronger and higher-quality. As a result, they exert more of an influence on behavior. In most cases, this is a good outcome because the stronger a representation is, the less it will require conscious control and monitoring. Thus, in any domain of experience (from being able to stand up to wine-tasting, from recognizing faces to reading) we begin with weak representations, which are characteristic of implicit cognition and do not require control because they only exert weak effects on behavior. Such representations, because of their poor quality, are also only weakly available to form the contents of consciousness. As learning progresses, the relevant representations become stronger, yet not so strong that they can be "trusted" to do their job properly. This is when cognitive control is most necessary. This is also the point where such explicit representations are most likely to form the contents of consciousness. Finally, with further training, the relevant representations become even stronger and eventually fully adapted. As such, these high-quality representations characteristic of automaticity no longer require cognitive control either, but this is so for completely different reasons than the weak representations characteristic of implicit cognition.
Thus, when I respond faster to a target stimulus in virtue of the fact that the target was preceded by a congruent subliminal prime, I can properly say that there exists a state c such that its existence made me respond faster, but by assumption I am not sensitive to the fact that this state c is different from state i where the target stimulus was preceded by an incongruent prime. States c and i are thus not conscious states -they merely exert their effects on behavior, so reflecting the agent's sensitivity to their existence, but crucially not its awareness of their existence. The reason such states are not conscious states has to do with the properties of the corresponding first-order states: It is not so much that there is a failure of a higher-order system to target these states, but rather that the first-order states are too weak to be appropriate targets.
You cannot know what is not (sufficiently) there.
Likewise, but perhaps more controversially so, habitual, automatic behavior is often described as involving unconscious knowledge: The behavior unfolds whether you intend to or not, it can and in the brain are typically distributed and can therefore vary on a number of dimensions, such as their stability in time, their strength. or their distinctiveness. Stability in time refers to how long a representation can be maintained active during processing. There are many indications that different neural systems involve representations that differ along this dimension. For instance, prefrontal cortex, which plays a central role in working memory, is widely assumed to involve circuits specialized in the formation of the enduring representations needed for the active maintenance of task-relevant information. Strength of representation simply refers to how many processing units are involved in the representation, and to how strongly activated these units are. As a rule, strong activation patterns will exert more influence on ongoing processing than weak patterns. Finally, distinctiveness of representation is inversely related to the extent of overlap that exists between representations of similar instances. Distinctiveness has been hypothesized as the main dimension through which cortical and hippocampal representations differ (McClelland et al., 1995;O'Reilly and Munakata, 2000), with the latter becoming active only when the specific conjunctions of features that they code for are active themselves.
In the following, I will collectively refer to these different dimensions as "quality of representation" (Farah, 1994). The most important notion that underpins these different dimensions is that representations, in contrast to the all-or-none propositional representations typically used in classical theories, instead have a graded character that enables any particular representation to convey the extent to which what it refers to is indeed present.
Another important aspect of this characterization of representational systems in the brain is that, far from being static propositions waiting to be accessed by some process, representations instead continuously influence processing regardless of their quality. This assumption takes its roots in McClelland's (1979) analysis of cascaded processing which, by showing how modules interacting with each other need not "wait" for other modules to have completed their processing before starting their own, demonstrated how stage-like performance could emerge out of such continuous, non-linear systems. Thus, even weak, poor-quality traces are capable of influencing processing, for instance through associative priming mechanisms, that is, in conjunction with other sources of stimulation. Strong, highquality traces, in contrast have generative capacity, in the sense that they can influence performance independently of the influence of other constraints, that is, whenever their preferred stimulus is present.
A second important assumption is that learning is a mandatory consequence of information processing. Indeed, every form of neural information-processing produces adaptive changes in the connectivity of the system, through mechanisms such as long-term potentiation (LTP) or long-term depression (LTD) in neural systems, or Hebbian learning in connectionist systems. An important aspect of these mechanisms is that they are mandatory in the sense that they take place whenever the sending and receiving units or processing modules are co-active. O'Reilly and Munakata (2000) have described Hebbian learning as instantiating what they call model learning. The fundamental computational objective of such unsupervised learning mechanisms is to enable the cognitive system to develop useful, informative models of the world by capturing its correlational structure. As such, they stand in contrast with task
Cleeremans
The radical plasticity thesis www.frontiersin.org (the flower is yellow), factivity (it is a fact and not just a possibility that the flower is yellow) and attitude (I know that the flower is yellow). Fully conscious knowledge is thus knowledge that is "attitude-explicit." This analysis suggests that a further important principle that differentiates between conscious and unconscious cognition is the extent to which a given representation endowed with the proper properties (stability, strength, distinctiveness) is itself the target of meta-representations.
Hence a second important computational principle through which to distinguish between conscious and unconscious representations is the following: Availability to consciousness depends on the extent to which a representation is itself an object of representation for further systems of representation.
It is interesting to consider under which conditions a representation will remain unconscious based on combining these two principles (Cleeremans, 2008). There are at least four possibilities. First, knowledge that is embedded in the connection weights within and between processing modules can never be directly available to conscious awareness and control. This is simply a consequence of the fact that consciousness, by assumption, necessarily involves representations (patterns of activation over processing units). The knowledge embedded in connection weights will, however, shape the representations that depend on it, and its effects will therefore detectable -but only indirectly, and only to the extent that these effects are sufficiently marked in the corresponding representations. This is equivalent to Dehaene and Changeux's (2004) principle of "active firing." Second, to enter conscious awareness, a representation needs to be of sufficiently high-quality in terms of strength, stability in time, or distinctiveness. Weak representations are therefore poor candidates to enter conscious awareness. This, however, does not necessarily imply that they remain causally inert, for they can influence further processing in other modules, even if only weakly so. This forms the basis for a host of sub-threshold effects, including, in particular, subliminal priming.
Third, a representation can be strong enough to enter conscious awareness, but failed to be associated with relevant metarepresentations. There are thus many opportunities for a particular conscious content to remain, in a way, implicit, not because its representational vehicle does not have the appropriate properties, but because it fails to be integrated with other conscious contents.
Finally, a representation can be so strong that its influence can no longer be controlled -automaticity. In theses cases, it is debatable whether the knowledge should be taken as genuinely unconscious, because it can certainly become fully conscious as long as appropriate attention is directed to them (Tzelgov, 1997), but the point is that such very strong representations can trigger and support behavior without conscious intention and without the need for conscious monitoring of the unfolding behavior.
sufficieNt coNditioNs for awareNess?
Strong, stable, and distinctive representations are thus explicit representations, at least in the sense put forward by Koch (2004): They indicate what they stand for in such a manner that their unfold with attention engaged elsewhere, and so on. In such cases, behavior is driven by very high-quality representations that have become, through experience, optimally tuned to drive behavior. While such very high-quality representations are appropriate objects for redescription, the redescriptions either no longer play a functional role or are prevented from taking place (for instance because the agent's attention is engaged elsewhere). Automatic behavior is thus not truly unconscious behavior (Tzelgov, 1997). Rather, it is behavior for which awareness has become optional. You can be perfectly aware of behavior that occurs automatically -you just seldom do so for it is neither necessary nor desirable for you to become aware of such behavior. That is precisely why the behavior has become automatic: Because it so adapted that it can unfold without the need for conscious monitoring.
Hence a first important computational principle through which to distinguish between conscious and unconscious representations is the following:
Availability to consciousness depends on quality of representation, where quality of representation is a graded dimension defined over stability in time, strength, and distinctiveness.
While being of high-quality thus appears to be a necessary condition for a representation's availability to consciousness, one should ask, however, whether it is a sufficient condition. Cases such as hemineglect or blindsight (Weiskrantz, 1986) clearly suggest that quality of representation alone does not suffice, for even strong stimuli can fail to enter conscious awareness in such conditions. In normal participants, the attentional blink (Shapiro et al., 1997), as well as inattentional (Mack and Rock, 1998) and change blindness (Simons and Levin, 1997), are all suggestive that high-quality stimuli can simply fail to be experienced unless attended to. Likewise, merely achieving stable representations in an artificial neural network, for instance, will not make this network conscious in any sense -this is the problem pointed out by Clark and Karmiloff-Smith (1993) about the limitations of what they called first-order networks: In such networks, even explicit knowledge (e.g., a stable pattern of activation over the hidden units of a standard back-propagation network that has come to function as a "face detector") remains knowledge that is in the network as opposed to knowledge for the network. In other words, such networks might have learned to be informationally sensitive to some relevant information, but they never know that they possess such knowledge. Thus the knowledge can be deployed successfully through action, but only in the context of performing some particular task.
Hence it could be argued that it is a defining feature of consciousness that when one is conscious of something, one is also, at least potentially so, conscious that one is conscious of being in that state. This is the gist of so-called HOT theories of consciousness (Rosenthal, 1997), according to which a mental state is conscious when the agent entertains, in an non-inferential manner, thoughts to the effect that it currently is in that mental state. Importantly, for Rosenthal, it is in virtue of occurrent HOTs that the target first-order representations become conscious. Dienes and Perner (1999) have developed this idea by analyzing the implicit-explicit distinction as reflecting a hierarchy of different manners in which the representation can be explicit. Thus, a representation can explicitly indicate a property (e.g., "yellow"), predication to an individual knowledge (i.e., my knowledge of a typical dog) are less available to form the contents of conscious experience then are the highly distinctive representations characteristic of episodic memory.
Second, those representations that meet these minimal requirements for redescription need to be accessed by another, independent part of the system whose function it is to redescribe them. It is important to note here that mere redescription probably does not cut it, for even in a simple feedforward network, each layer can be thought of as being a redescription of the input. The brain is massively hierarchical and thus contains multiple such redescriptions of any input. Instead of being strictly hierarchically organized, however, the redescriptions that count for the mechanism I have in mind should be removed from the causal chain responsible for the first-order processing. Hence, we need some mechanism that can access and redescribe first-order representations in a manner that is independent from the first-order causal chain.
I suggest that the general form of such mechanisms is something similar to what is depicted in Figure 1. Two independent networks (the first-order network and the second-order network) are connected to each other in such a way that the entire first-order network is input to the second-order network. Both networks are simple feedforward back-propagation networks. The first-order network consists of thee pools of units: a pool of input units, a pool of hidden units, and a pool of output units. Let us further imagine that this network is trained to perform a simple discrimination task, that is, to produce what is named Type I response in the language of Signal-Detection Theory. My claim is that there is nothing in the computational principles that characterize how this network performs its task that is intrinsically associated with awareness. The network simply performs the task. While it will develop knowledge of the associations between its inputs and outputs over its hidden units, and while this knowledge may be in some cases very sophisticated, it will forever remain knowledge that is "in" the network as opposed to being knowledge "for" the network. In other words, such a (first-order) network can never know that it knows: It simply lacks the appropriate machinery to do so. Likewise, in Signal-Detection Theory, while Type 1 responses always reflect sensitivity to some state of affairs, this sensitivity may or may not be conscious sensitivity. That is, a participant may be successful in discriminating one stimulus from another, yet fail to be aware that he is able to do so and thus claim, if asked, that he is merely guessing or responding randomly. In its more general form, as depicted in Figure 1, such an architecture would also be sufficient for the second-order network to also perform other judgments, such as distinguishing between an hallucination and a veridical perception, or developing knowledge about the overall geography of the internal representations developed by the first-order network (see also Nelson and Narens, 1990).
Can we use such architectures to account for relevant data? That is the question we set out to answer in recent work (e.g., Cleeremans et al., 2007;Pasquali et al., 2010) aimed at exploring the relationships between performance and awareness. We have found that different approaches to instantiating the general principles we have described so far are required to capture empirical findings. In one, as hinted at above, the first-order and the second-order network are part of the same causal chain, but are trained on different tasks, one corresponding to first-order decisions and the second reference can be retrieved directly through processes involving low computational complexity (see also Kirsh, 1991Kirsh, , 2003. Conscious representations, in this sense, are explicit representations that have come to play, through processes of learning, adaptation, and evolution, the functional role of denoting a particular content for a cognitive system. Importantly, quality of representation should be viewed as a graded dimension. This is essential to capture the fact that phenomenal experience, particularly ordinary phenomenal experience, appears graded itself. Gradedness can be achieved in different ways in a complex system such as the brain. One possibility is that representations are inherently graded because their vehicles are patterns of activation distributed over populations of firing neurons. Another is that representations tend to be all-ornone, but always involve multiple levels of a hierarchy (Kouider et al., 2010).
Once a representation has accrued sufficient strength, stability, and distinctiveness, it may be the target of meta-representations: The system may then "realize," if it is so capable, that is, if it is equipped with the mechanisms that are necessary to support selfinspection, that it has learned a novel partition of the input; that it now possesses a new "detector" that only fires when a particular kind of stimulus, or a particular condition, is present. Humphrey (2006) emphasizes the same point when he states that "This selfmonitoring by the subject of his own response is the prototype of the "feeling sensation" as we humans know it" (p. 90). Importantly, my claim here is that such meta-representations are learned in just the same way as first-order representations, that is, by virtue of continuously operating learning mechanisms. Because metarepresentations are also representations, the same principles of stability, strength, and distinctiveness therefore apply. An important implication of this observation is that activation of metarepresentations can become automatic, just as it is the case for first-order representations.
What might be the function of such meta-representations? One possibility is that their function is to indicate the mental attitude through which a first-order representation is held: Is this something I know, hope, fear, or regret? Possessing such metaknowledge about one's knowledge has obvious adaptive advantages, not only with respect to the agent himself, but also because of the important role that communicating such mental attitudes to others plays in both competitive and cooperative social environments.
What is the mechanism through which such redescription is achieved? Minimally, enabling redescription of one's own internal states requires such internal states to be available to redescription, where availability is contingent, as described above, on such internal states being patterns of activation endowed with certain characteristics such as their strength, their stability in time, and their distinctiveness. Note that these assumptions rule out many potential sources of internal knowledge. For instance, the sort of weak, fleeting representations presumably resulting from the presentation of a brief stimulus would be poor candidates to be available to further processing. Likewise, the associative links that exist between representations, if implemented through patterns of connectivity between groups of units (as they are in connectionist networks) would likewise be inaccessible. Finally, and though this is more speculative (but see Brunel et al., 2010), it may also be the case that the highly distributed representations typical of semantic
Cleeremans
The radical plasticity thesis www.frontiersin.org required to place a high or a low wager on their decision, such as relative to stimulus identification for example. The intuition behind this measure is that people will place a high wager when they have conscious knowledge about the reasons for their decisions, and a low wager when they are uncertain of their decisions. In this, wagering is thus similar to other subjective measures of awareness (Seth et al., 2008;Sandberg et al., 2010). According to Persaud et al. (2007) wagering provides an incentive for participants not to withhold any conscious information, as well as not to guess, making it a more objective measure of awareness than confidence judgment. Despite recent criticism of Persaud et al.'s claims (Dienes and Seth, 2010;Sandberg, et al., 2010), wagering certainly reflects the extent to which an agent is sensitive to its own internal states. In Cleeremans et al. (2007), we therefore aimed at creating a wagering network, for wagering affords easy quantification and thus appeared more readily amenable to computational simulation than other metacognitive measures such as confidence. In one of our simulations, which I will describe in more detail here, the first-order feedforward back-propagation network (see Figure 2) consisted of 7 input units representing digit shapes (as on a digital watch), 100 hidden units, and 10 output units for the 10 digits. The task of the first-order network is a simple one: It consists of identifying the "visual" representations of the digits 0-9. This is achieved by training the first-order network to respond to each input by activating one of its 10 output units. The 100 first-order hidden units connected to a different pool of 100 hidden units of the second-order feedforward network, with 2 output units representing a high and a low wager, as shown in Figure 2. The task of the higher-order network consisted of wagering high if it "thought" that the first-order network was providing a correct answer (correct identification of the digit), and to wager low in case the first network gave a wrong answer (misidentification of the digit). Note that as implemented here, there is no substantial difference between wagering and merely expressing confidence judgments.
corresponding to metacognitive decisions. In a second approach, the two networks are truly independent. Note that in either case, our assumptions are oversimplified, for a complete implementation of the theory would require that the second-order network may influence processing as it takes place in the first-order network by means of recurrence. In the following, I will illustrate the first approach, through which we have focused on architectures in which the first and second-order networks function as part of the same causal chain. Post-decision wagering was recently introduced by Persaud et al. (2007) as a measure of awareness through which participants are Figure 1 | general architecture of a metacognitive network. A first-order network, consisting for instance of a simple three-layers back-propagation network, has been trained to perform a simple classification task and thus contains knowledge that links inputs to outputs in such a way that the network can produce Type I responses. By design, this entire first-order network then constitutes the input to a second-order network, the task of which consists of redescribing the activity of the first-order network in some way. Here, the task that this second-order network is trained to perform is to issue Type II responses, that is, judgments about the extent to which the first-order network has performed its task correctly. One can think of the first-order network as instantiating cases where the brain learns about the world, and of the second-order network as instantiating cases where the brain learns about itself.
Figure 2 | Architecture of a wagering network.
A first-order network instantiates a simple pattern classifier trained to classify "visual" input patterns representing the shapes of digits 0-9 in 10 categories. A secondorder network is assigned the task of wagering on the first-order network's performance based on the latter's internal representations of the stimulus. The second-order network thus performs judgments about the extent to which the first-order network is correct in its own decisions.
performance of the second-order network begins to decrease. This corresponds to a stage where the second-order network is beginning to bet "high" on some occasions as it learns to categorize states of the first-order network that are predictive of a correct classification. An interesting pattern of dissociation then occurs, for the second-order network is performing rather poorly just when the first-order network is beginning to truly master its own digit classification task. One can think of that stage as corresponding to a point in training where the system as a whole is essentially acting based on unconscious knowledge: First-order performance on the digit classification task is well above chance level, yet, wagering by the second-order network is close to chance, and is at chance on epoch 40. Later on, after epoch 40, the second-order network has learned enough about when the first-order network will be correct vs. incorrect to begin attempting to maximize its own wagering performance. Thus, epoch 40 corresponds to the second-order network's "most doubtful moment." One could view this as the moment at which the higher-order network abandons a simple "safe" strategy of low wagers and explores the space of first-order hidden unit representations, looking for a criterion that will allow it to separate good from bad identifications.
Thus, as the two networks learn simultaneously to perform their respective tasks, one sees the entire system shifting from a situation where there is no relationship between first-and second-order performance to a situation where the two are correlated. This transition reflects, under our assumptions, a shift between unconscious vs. conscious processing.
In later work (Pasquali, et al., 2010), we have explored similar models based on germane or identical architectures and shown that they are capable of accounting for the data reported by Persaud et al.
A learning rate of 0.15 and a momentum of 0.5 were used during training of the first-order network. In a first condition of "high awareness," the second network was trained with a learning rate of 0.1, and in a second condition of "low awareness," a learning rate of 10 −7 was applied. Ten networks were trained to perform their tasks concurrently throughout 200 epochs of training and their performance averaged. The performance of all three networks is depicted in Figure 3. Chance level for the first-order network is 10% (there is one chance of out 10 of correctly identifying one digit amongst 10); it is 50% for the second-order network (one chance out of two of placing a correct bet). The figure shows that the first-order network simply gradually learns to improve its classification performance continuously until it achieves 100% correct responses at the end of training. The performance of the "high awareness" second-order network, however, exhibits a completely different pattern. Indeed, one can see that the second-order network initially performs quite well, only to show decreasing performance up until about epoch 40, at which point its performance has sagged to chance level. From epoch 40 onwards, the second-order network's performance increases in parallel with that of the first-order network. This u-shaped performance pattern is replicated, to a lesser degree and with slightly different dynamics, in the "low awareness" second-order network.
One can understand this performance pattern as follows. Initially, the second-order network quickly learns that the first-order network is systematically incorrect in classifying the digits. (which is expected since it has not begun to learn how to perform the task). The safest response (i.e., the response that minimizes error) is thus to always bet low. This, incidentally, is what any rational agent would do. However, as the first-order network quickly begins to exceed chance level performance on its digit classification task, the
Cleeremans
The radical plasticity thesis www.frontiersin.org both approaches by means of distinct architectures. Clearly, additional research is necessary to clarify the predictions of each approach and to further delineate their mechanisms. Beyond giving a cognitive system the ability to learn about its own representations, there is another important function that meta-representations may play: They can also be used to anticipate the future occurrences of first-order representations (see Bar, 2009, on the human brain as a prediction machine). Thus for instance, if my brain learns that SMA is systematically active before M1, then it can use SMA representations to explicitly represent their consequences downstream, that is, M1 activation, and ultimately, action. If neurons in SMA systematically become active before an action is carried out, a metarepresentation can link the two and represent this fact explicitly in a manner that will be experienced as intention. That is: When neurons in the SMA become active, I experience the feeling of intention because my brain has learned, unconsciously, that such activity in SMA precedes action. It is this knowledge that gives qualitative character to experience, for, as a result of learning, each stimulus that I see, hear, feel, or smell is now not only represented, but also re-represented through independent meta-representations that enrich and augment the original representation(s) with knowledge about (1) how similar the manner in which the stimulus' representation is with respect to that associated with other stimuli, (2) how similar the stimulus' representation is now with respect to what it was before, (3) how consistent is a stimulus' representation with what it typically is, (4) what other regions of my brain are active at the same time that the stimulus' representation is, etc.
To see how this is different from mere first-order knowledge, consider what happens in the case of hallucination. Imagine a simple three-layers network akin to those described above in which a first layer of units receives perceptual input and is connected to a second layer of internal ("hidden") units that are in turn connected to response units. One can easily train such a simple system to produce specific outputs in response to specific inputs (i.e., activating the "9" unit when presented with the visual pattern corresponding to the digit "9"). After training, each input will cause the emergence of a specific (learned) pattern of action over the network's hidden units, and this will in turn cause a specific response. Crucially, one can now induce a specific response by either presenting a familiar pattern over the network's input units (as it would be in the case of a genuine perception) or by directly activating the network's hidden units with the learned pattern corresponding to that same input (as it could be, for instance, in the case of a memory retrieval whereby the pattern is reinstated by means of other pathways). The point is that the network would respond in exactly the same way in both cases for it simply lacks the ability to identify whether its response was caused by the activation of its input units or by the activation of its hidden units in the absence of any input. In other words, such a network is unable to distinguish between a veridical perception and an hallucination. Doing so would require the existence of another, independent network, whose task it is to learn to associate specific input patterns with specific patterns of activity of the first network's hidden units. That system would then be able to identify cases where the latter exists in the absence of the former, and hence, to learn to distinguish between cases of veridical perception and cases of hallucination. Such internal monitoring is viewed here as (2007) in three different domains: Artificial Grammar Learning, Blindsight, and the Iowa Gambling Task. In all three cases, our simulations replicate the patterns of performance observed in human participants with respect to the relationship between task performance and wagering. The blindsight and Artificial Grammar learning simulations instantiate the second approach briefly described above in that they use an architecture in which the processing carried out in second-order network is completely independent from that carried out in the first-order network. In such architectures, the two networks are connected by means of fixed connections that instantiate "comparator units." The Iowa Gambling Task simulation, on the other hand, relies on the same mechanisms as described for the digits task. Interestingly, in this latter case, we were able to additionally capture the fact that asking participants to reflect upon their own performance helps them improve metacognitive awareness (Maia and McClelland, 2004) and hence, the relationship between first-order performance and wagering. The fact that the relationships between first-order and metacognitive performance can vary as a function of task instructions is borne out by a recent study of Fleming et al. (2010) which indicates large individual differences in people's ability to judge their own performance. Strikingly, the authors found that differences in metacognitive ability were subtended not only by differences in the activity of anterior prefrontal cortex, but also by structural differences in the white matter of these regions.
It may seem that the proposed mechanism is identical with signal-detection accounts of metacognition (e.g., Scott and Dienes, 2008). However, there is a crucial difference. Signal-detection accounts typically make the second-order distinction between confidence and guessing (high vs. low wagers) on the very signal that is used for first-order classifications by setting two boundaries on the signal: One boundary that accounts for the first-order classification, and a second boundary (on either side of the first-order boundary) that distinguishes between guessing (cases that fall within the area defined by the second boundaries) and cases that fall outside of these boundaries (on the extremes of the distribution). In such an account, confidence thus depends directly on first-order signal strength (but see Maniscalco and Lau, 2010;Pleskac and Busemeyer, 2010 for further discussion). However, in some of the models we have proposed, the second-order classification does not depend on the same signal as the first-order task. Indeed, instead of wagering high or low based on signal strength, the second-order network re-represents the first-order error as a new pattern of activation. Thus, before it can wager correctly, the second-order network, like the first-order network, has to learn to make a new, single-boundary classification based on this second-order representation (the error representation). Thus, the second-order network actually learns to judge the first-order network's performance independently of the first-order task itself. The difference between our model and Signal-Detection Theory is substantial, for it impinges on whether one considers Type I and Type II performance, that is, first-order and second-order judgments about these decisions entertain hierarchical or parallel relationships with each other. This issue is currently being debated, with some authors defending a dual-route model (Del Cul et al., 2009;Dehaene and Charles, 2010) and others (Lau, 2010;Maniscalco and Lau, 2010) defending hierarchical models. The simulation work described in Pasquali et al. (2010) . "Subjective measures of unconscious knowledge," in (unconscious learnt) meta-representations that convey the mental attitude with which the first-order representations are held. From this perspective thus, there is nothing intrinsic to neural activity, or to information per se, that makes it conscious. Conscious experience involves specific mechanisms through which particular (i.e., stable, strong, and distinctive) unconscious neural states become the target of further processing, which I surmise involves some form of representational redescription in the sense described by Karmiloff-Smith (1992). These ideas are congruent both with higher-order theories in general (Rosenthal, 1997;Dienes and Perner, 1999), and with those of Lau (2008), who has characterized consciousness as "signal detection on the mind." In closing, there is one dimension that I feel is sorely missing from contemporary discussion of consciousness: Emotion (but see, e.g., Damasio, 1999Damasio, , 2010LeDoux, 2002;Tsuchiya and Adolphs, 2007). Emotion is crucial to learning, for there is no sense in which an agent would learn about anything if the learning failed to do something to it. Conscious experience not only requires an experiencer who has learned about the geography of its own representations, but it also requires experiencers who care about their experiences. ackNowledgmeNts Axel Cleeremans is a Research Director with the National Fund for Scientific Research (FNRS, Belgium). This work was supported by an institutional grant from the Université Libre de Bruxelles to Axel Cleeremans and by Concerted Research Action 06/11-342 titled "Culturally modified organisms: What it means to be human in the age of culture," financed by the Ministère de la Communauté Française -Direction Générale l'Enseignement non obligatoire et de la Recherche scientifique (Belgium). Portions of this article were adapted from the following publication: Cleeremans (2008), Consciousness: The Radical Plasticity Thesis, In R. Banerjee and B.K. Chakrabati (Eds.), Progress in Brain Science, 168, 19-33.
constitutive of conscious experience: A mental state is a conscious mental state when the system that possesses this mental state is (at least non-conceptually) sensitive to its existence. Thus, and unlike what is assumed to be case in HOT Theory, meta-representations can be both subpersonal and non-conceptual.
Overall, this perspective is thus akin to the sensorimotor or enactive perspective (O'Regan and Noë, 2001) and to the general conceptual framework provided by forward modeling (e.g., Wolpert et al., 2004) in the sense that awareness is linked with knowledge of the consequences of our actions, but, crucially, the argument is extended inwards, that is, to the entire domain of neural representations. It can also be extended further outwards, specifically toward social cognition (see also Graziano and Karstner, in press). Our representations of ourselves are shaped by our history of interactions with other agents. Learning about the consequences of the actions that we direct toward other agents uniquely require more sophisticated models of such other agents than when interacting with objects, for agents, unlike objects can react to actions directed toward them in many different ways as a function of their own internal state. A further important point here is that caretakers act as external selves during development, interpreting what happens to developing children for them, and so providing metarepresentations where they lack. In this light, theory of mind can thus be understood as rooted in the very same mechanisms of predictive redescriptions as involved when interacting with the world or with one self.
coNclusioN
Thus we end with the following idea, which is the heart of the "Radical Plasticity Thesis": The brain continuously and unconsciously learns not only about the external world and about other agents, but also about its own representations of both. The result of this unconscious learning is conscious experience, in virtue of the fact that each representational state is now accompanied by | 14,400.4 | 2011-02-25T00:00:00.000 | [
"Philosophy"
] |
A Moderated e-Forum for Adults With Cardiovascular Disease: Usability Study
Background Self-care behaviors are commonly prescribed to manage both cardiovascular disease and hypertension to reduce modifiable risk factors and improve quality of life. Nevertheless, long-term adherence to self-care recommendations for cardiac patients has been problematic. In cardiac patients, moderated online forums have been found to be particularly useful in supporting maintenance of heart-healthy diet and fewer hospital visits. As such, we developed the e-Forum, a Web-based moderated forum designed to promote continued user engagement and long-term self-care adherence. Objective The objective of this study was to assess the usability of the user interface for the newly designed e-Forum. In addition to overall user satisfaction, we obtained feedback from our target users on the key features of this newly developed interface. Methods An iterative design tested the usability of the e-Forum. On the basis of the user feedback, adjustments were made to the design of our e-Forum, and these changes were then tested in the succeeding group. Participants were recruited from the Heart Function Clinic at the Peter Munk Cardiac Center, University Health Network. After consenting to participate in our study, patients were asked to complete a set of goal-oriented tasks and a feedback interview for the e-Forum. A content analysis of the transcripts from the set of goal-oriented tasks and feedback interviews identified several themes, including general feedback and comments regarding 3 key areas of the e-Forum: layout, navigation, and content. Results Overall, 13 cardiac patients (aged 32-81 years) participated in 3 rounds of testing. Participants across all 3 rounds were highly satisfied with our e-Forum and indicated that they would find such a forum useful in managing their health. Expressions of overall satisfaction with the e-Forum and positive comments regarding layout increased between the initial and the final round. As improvements were made to the e-Forum based on participant feedback, potential barriers, negative comments related to the content, and the number of navigation errors decreased between rounds 1 and 3. Conclusions We found evidence to support the usability of the user interface for our e-Forum. These results indicate that the e-Forum will likely be a successful tool to support an online community of cardiac patients in their efforts to sustain long-term lifestyle behavior change.
Overview
According to the American Heart Association, cardiovascular disease (CVD) accounted for approximately 1 in every 3 deaths in the United States in 2013 [1]. Self-care behaviors (eg, maintaining a healthy diet, regular exercise, and medication adherence) are recommended to manage both CVD and hypertension to reduce modifiable risk factors and improve quality of life [2]. Nevertheless, long-term adherence to self-care recommendations for cardiac patients has been problematic [3].
In an effort to reduce risk for CVD and improve quality of life for patients, our research team developed a Web-based lifestyle counseling platform for cardiac patients (eg, those diagnosed with hypertension or heart failure, HF) to promote adherence to self-care recommendations [4][5][6][7][8][9].
On the basis of evidence from our program of research, our team created the Canadian e-Platform to Promote Behavioral Self-Management in Chronic Heart Failure (CHF-CePPORT; ClinicalTrials.gov: NCT01864369) [10]. Although CHF-CePPORT provides a 12-month comprehensive e-counseling program for self-care behavior change in patients with HF, long-term adherence to Web-based lifestyle counseling programs can be difficult to sustain. For example, dropout rates in Web-based interventions can range up to 62%, and failure to participate in the e-based interventions is 28% over 9 months [11]. These findings indicate that such programs may benefit from supplementary features that facilitate long-term patient engagement and adherence. To address this issue, we developed the e-Forum to supplement CHF-CePPORT by supporting the establishment of an online community that aims to promote continued user engagement and long-term self-care adherence. Our aim was to tailor the design and functional features of the e-Forum to meet the needs of patients with cardiovascular conditions such as HF, who are likely to be older and to present with lower computer literacy. In keeping with guidelines suggested from previous research [12][13][14], this study assessed the usability of this e-Forum to determine whether cardiac patients could use this program as intended.
Web-Based Moderated Forums
The use of online social networks is an important method for facilitating information sharing as well as providing and receiving support among patients and health care professionals [15,16]. Online communities offer patients access to both emotional support and information about disease management that are not always available or easily accessible [17]. Online moderated forums are online communities that are monitored by professionals or trained peers who (1) facilitate user engagement in the online forum, (2) ensure the accuracy of information discussed by users, and (3) check for safety and appropriateness of posted messages (eg, monitoring for language suggesting self-harm or aggressive or offensive language). Patients demonstrate a preference for this type of intervention over and above conventional e-pages that only present information [18,19]. Such forums have been found to help a diverse array of patients, including those suffering from obesity [17] and ovarian cancer [20]; they offer users a resource to manage the complexities of their illnesses by promoting and supporting healthy self-care strategies [21]. In cardiac patients, such online communities have been found to be particularly useful in supporting maintenance of heart-healthy diet and fewer visits to the hospital [22].
We designed our e-Forum to provide a reliable and accessible interface to foster an online community for patients enrolled in our CHF-CePPORT program. From a functional perspective, our e-Forum was developed to allow users to submit posts, including comments or questions regarding their efforts to begin or maintain therapeutic changes in self-care behaviors. The e-Forum was organized such that posts may be submitted under highlighted topics, including "Active Living," "Eating Healthy," "Smoke-free Living," and "Getting Motivated" (see Figures 1-3 to view the final version of the e-Forum). The e-Forum was designed to then send submitted posts to a moderator, who was trained to review posts for accuracy and appropriateness of content and patient safety before they were made accessible and viewable to the other members of the online community. In addition, the e-Forum was designed to allow members of our team to host live or taped presentations on select topics related to self-care adherence and quality of life. The original prototype of the e-Forum also featured large buttons, bright and inviting colors, and large font sizes to increase usability for our older target patient population.
Usability Assessment
Although there is preliminary evidence that the use of online forums may be an effective mode of intervention to enhance education and therapeutic support for participants, it is unclear which features enable users to interact with such forums more effectively [23][24][25]. Therefore, we undertook a usability study to assess our high-quality, user-centered interface designed to maximize the engagement with the e-Forum [26]. Specifically, our usability study was conducted to determine whether the target users (ie, cardiac patients) could use the e-Forum as intended. Usability studies have been found to improve the design of several other Web-based programs. For example, Stinson et al conducted a usability study to improve their Web-based self-management program for adolescents with arthritis and their parents [27]. In the first of 2 rounds of usability testing, adolescents with arthritis and their parents reported that the labels used in the medication home page were ambiguous, resulting in navigation difficulties in that portion of the program. On the basis of this feedback, the team revised the labeling, and this issue was not reported in the second round of the usability testing. A usability study of a Web-based self-management program for patients diagnosed with chronic obstructive pulmonary disease also found this type of assessment to be helpful in improving the design of their program [13]. The CHF-CePPORT program prototype also underwent a usability study [14]. During this study, navigation issues were identified and resolved before its launch as part of a randomized controlled trial [14]. Together, these studies suggest that users can provide practical feedback to help identify problems with functionalities that may have otherwise been overlooked.
Objective
The objective of this study was to assess the usability of the user interface for our newly designed e-Forum. To achieve this goal, we obtained feedback from our target users (eg, cardiac patients) on key features including general feedback, overall user satisfaction, layout, navigation, and content of the e-Forum.
Study Design
An iterative design [26,28] examined the usability of the e-Forum, such that multiple groups of participants were asked to navigate the e-Forum. On the basis of analysis of feedback from each round, adjustments were made to the e-Forum; these changes were then assessed for usability with the succeeding group of new participants.
Participant Recruitment
Because the e-Forum aims to foster heart-healthy lifestyle changes that are applicable to all cardiac patients, including HF patients, we wanted to ensure that it was user-friendly to the wider, heterogeneous cardiac population. Thus, we recruited subjects from the Heart Function Clinic at the Peter Munk Cardiac Center, University Health Network. Patients were eligible to participate in this study if they were (1) male or female patients aged ≥18 years, (2) diagnosed with a CVD, including systolic HF with New York Heart Association Class I-III symptoms, and (3) fluent in English. To assess whether our e-Forum was easy to use for individuals with varying degrees of experience, we purposefully sampled an array of self-reported novice and advanced users of both computers and the internet. Individuals who did not use computers and the internet at all and were not willing to try these technologies were ineligible to participate in this study.
Procedure
This study received approval from the Research Ethics Board at the University Health Network. During the study visit, each consented participant was asked to complete a set of goal-oriented tasks and a feedback interview on the e-Forum. All study visits were completed within 1.5 hours.
Goal-oriented tasks were the same across all study rounds and included logging onto the website, watching a tutorial video, and using different features of the e-Forum (eg, editing sample user profiles, submitting, bookmarking, and rating sample posts; Multimedia Appendix 1). Instructions for each goal-oriented task were read to participants, before asking them to "think-aloud" as they completed each task [29]. This commonly used protocol allowed us to assess the ongoing thought processes and difficulties experienced by the users while using the program [29]. To prevent disruption in the think-aloud protocol, no guidance or assistance was provided during task completion, unless requested by the participant [29]. All participants were able to successfully complete the think-aloud protocol.
After completing the set of goal-oriented tasks, a semistructured interview was used to ask participants about their overall experience with the e-Forum and to allow them to make suggestions for its improvement in layout (eg, font size, colors, and formatting), navigation (eg, ease of use), and content (eg, highlighted topics, and features/functionalities, including bookmarking and rating functions). All think-aloud sessions and feedback interviews were audio-taped using a digital audio recorder and then transcribed verbatim for analysis. Finally, all subjects completed a demographics form and a user satisfaction questionnaire. The items on the user satisfaction questionnaire were based on the usability characteristics, as described by Nielson [30], and included a 5-point Likert scale (1="disagree very much"; 5="agree very much") asking participants to rate their level of satisfaction with different aspects of the e-Forum.
Data Analysis
After each study visit, a research assistant transcribed the audiotape verbatim, and a second research assistant independently compared this transcription with the audiotape to verify its accuracy. A content analysis of the transcripts from the study sessions identified themes related to the overall satisfaction and the layout, navigation, and content of the e-Forum. QSR NVivo (QSR International, Victoria, Australia) was used to manage the transcript data. Concurrent data collection and analysis and constant comparison [31] facilitated probing for further insights to confirm themes that arose in subsequent interviews [32]. Transcripts were independently coded by RT and AB, and divergent codes were discussed and resolved. Once the coding process was complete, a frequency count tallied participants' experiences in each theme [32]. Both quantitative frequency counts and qualitative interview excerpts were reported. Means, SDs, and percentages were calculated for data collected from the demographics and the satisfaction questionnaire forms.
All 13 participants used the internet at home, with 12 accessing the internet via a computer and 1 via a mobile device. All participants reported at least being somewhat comfortable with computers and the internet. Nevertheless, there was variability in the degree to which participants used the computer/internet at home. Of the participants who had a computer at home, 50% (6/13) spent less than 5 hours per week on the computer, whereas the other half (6/13) spent more than 5 hours per week on the computer. Similarly, although all participants had access to the internet at home, 54% (7/13) spent less than 5 hours per week on the internet, whereas 46% (6/13) spent more than 5 hours per week on the internet at home. Nevertheless, the majority also reported at least being somewhat comfortable with using online forums or message boards (62%, 8/13); and 5 participants (38%, 5/13) regularly used online forums or message boards for personal use ( Table 2).
Satisfaction With the e-Forum
Evaluation of the user satisfaction assessment indicated that, on average, participants in all 3 rounds were satisfied with their experience in using the e-Forum (Table 3). Similarly, the majority of participants made at least one comment regarding their overall satisfaction with the e-Forum, and the number of satisfactory comments per participant increased from 3.5 in round 1 to 5 in round 3. Unique comments included general statements of satisfaction, expressions of satisfaction with the moderated aspects of the forum, as well as satisfaction with the opportunity to connect with other patients with similar conditions (Tables 4 and 5).
Description of Use
Participants from all 3 rounds made a total of 41 individual comments describing how they would use the e-Forum. Participants indicated that they would use the e-Forum to exchange advice regarding lifestyle behavior change and to share/gather information regarding the management of their cardiac condition. They also indicated that they might enlist the help of family members when using the e-Forum, and that this interface may also be used to provide additional support for family members of cardiac patients. Participants said that other cardiac patients would also likely be interested in using the e-Forum for additional support and resources (Tables 4 and 5).
Potential Barriers
Participants in all rounds also speculated that there might be potential barriers to accessing or using the e-Forum for other cardiac patients. There was a decrease in the total number of comments made regarding potential barriers between round 1 (12 comments) and round 3 (4 comments). Potential barriers included lack of access to the internet, poor computer skills, and self-consciousness about typing or general ability to use computers. Other barriers included the potential unwillingness of some cardiac patients to share their experiences in managing their condition (Tables 4 and 5).
Task Navigation
All participants were able to successfully navigate the e-Forum, with correct navigations per participant increasing from 13.8 in round 1 to 14.7 in round 3. Successful navigation included the ability to complete the specific steps to use the various features of the forum (eg, logging on, playing the tutorial video, editing profiles, and submitting and managing posts). Each participant also made at least one navigation error during the course of the study session. Nevertheless, the average number of navigation errors per participant decreased across the 3 rounds (5 in round 1 to 3.7 in round 3). Common navigation errors included difficulty finding the "edit profile," "rate this post," and "bookmark" buttons because of button placement or poor labeling (Table 4). Common navigation errors were addressed in changes made to the e-Forum between each round. See below for details.
Positive Navigation Comments
The majority of participants (85%,11/13) gave positive feedback (26 total positive comments) with regard to their ability to navigate the e-Forum. Positive comments included expressions of overall ease of navigation and indications that participants found the e-Forum easier to navigate or to understand as they used it (Tables 4 and 5).
Negative Navigation Comments
At least one participant in all 3 rounds provided a minimum of one negative comment on their ability to navigate the e-Forum. A total of 6 negative navigation comments were made, including overall difficulty with navigation and indications that the e-Forum was too complex to navigate (Tables 4 and 5). Nevertheless, all participants were able to successfully complete study tasks with little or no assistance.
Positive Content Comments
A majority of participants (92%, 12/13) provided positive feedback regarding the content presented in the e-Forum. Participants indicated that they were satisfied with features/functionality of the e-Forum (eg, appreciation of confirmation messages after submissions, spell-checking, bookmarking, or tool tips) as well as the sample information provided (eg, indication that video and highlighted topics were helpful; Tables 4 and 5).
Negative Content Comments
Negative comments regarding the content of the e-Forum were made in each round, with a total of 9 participants making 28 such comments throughout the course of the study. Negative content feedback included dissatisfaction with certain features or functionalities (eg, unclear rating criteria, tutorial video being overwhelming, and lack of spell-checking feature) and with the sample information provided in the e-Forum (eg, finding certain highlighted topics not relevant to their experience or that content was not comprehensive enough; Tables 4 and 5).
Neutral Content Comments
All participants made at least one neutral comment about the content of the e-Forum. Neutral comments included suggestions for additional features or functionalities of the e-Forum (eg, suggestions to create a search button or to host live support groups), suggestions for information to be provided on the e-Forum (eg, suggestions for additional highlighted topics or videos), and suggestions to create different forum groups based on varying health status (eg, diagnoses or lifestyles; Tables 4 and 5).
Positive Layout Comments
Every participant made at least one positive comment regarding the layout of the e-Forum. In total, 46 positive layout comments were made. Positive comments included satisfaction with buttons (eg, appropriate size and color), with font size (eg, easy to read or see), with colors (eg, attractive), and with the overall layout of the forum (eg, simple and easy to use; Tables 4 and 5).
Negative Layout Comments
At least one participant from each round expressed a negative comment regarding the layout of the e-Forum. However, such comments decreased from 5 to 1 comment per participant from round 1 to round 3. Negative comments included expressions of dissatisfaction with overall layout of the e-Forum (eg, layout too complex), font size (eg, too small or inconsistent), colors (eg, inconsistent or dated), or buttons (eg, not clearly labeled; Tables 4 and 5).
From Round 1 to Round 2
On the basis of the feedback provided by participants in round 1, various changes were made to the design of the e-Forum to improve usability for the subsequent round of participants. For example, buttons were moved and/or renamed to enhance visibility and accessibility. Text boxes were reformatted, and tool tips and button labels were changed or added (eg, changing "Return to Forum" to read "Go Back") throughout the forum to improve the accessibility of associated features or functions. Finally, suggested changes were made to the layout of the forum, including changes in background and font color.
From Round 2 to Round 3
On the basis of the feedback provided by participants in round 2, more changes were made to the e-Forum to improve usability. Buttons were moved and/or renamed to enhance visibility and accessibility, and they were reformatted to improve consistency in layout. A keyword search feature was added to the e-Forum. Grammar and spell-checking features were added to textboxes; contact information for the research team, including expected response times, was also added to the e-Forum. See Figures 1-3 for a sample of the final version of the e-Forum at the completion of this usability study.
Principal Findings
The e-Forum was designed to facilitate the establishment of a reliable and accessible online community for cardiac patients. This usability study was conducted to ensure that our e-Forum was user-friendly and accessible to our target patient population. An iterative design was used such that after each round of study sessions, changes were made to the e-Forum in response to participant feedback. Feedback included general reflections of user experiences as well as positive, negative, and neutral comments on the content, navigation, and layout of the e-Forum.
Overall, participants across all 3 rounds were highly satisfied with the e-Forum. Between rounds 1 and 3, expressions of satisfaction with the e-Forum increased, and fewer potential barriers were reported. Participants indicated that it would be helpful to speak with other cardiac patients and that they were particularly satisfied with the moderated aspect of the e-Forum. Participants indicated that they would use the e-Forum to exchange lifestyle behavior advice and general information regarding their health management with other patients. Having the moderation feature reassured them that the information they obtained would be reliable and safe. They also predicted that their family members would likely use the e-Forum on their own or together with the patients to obtain information and support.
As improvements were made to the e-Forum based on participant feedback, positive comments related to layout increased from the initial to the final round, whereas negative layout comments decreased. Moreover, negative comments related to content and the number of navigation errors decreased between rounds 1 and 3. These outcomes indicated that modifications made to the layout (eg, changes in colors and font sizes), as well as the content (eg, changes in descriptors and features, including the addition of the keyword search) of the e-Forum, likely improved the overall user experience and ease of use when interacting with our online community.
Limitations
The results of this study indicate that our e-Forum would likely be accessible to a diverse array of cardiac patients. However, there are some limitations to consider as efforts are made to disseminate this e-Forum to the wider patient population. For example, although the age of participants in this study was well representative of the target user population (10 participants were aged older than 50 years), those who agreed to participate in this study were primarily white, with at least some postsecondary education, and with self-reported experience and comfort using computers and the internet.
It is possible that our findings may be limited in generalizability, as the overall population of cardiac patients is more culturally and educationally diverse. Nevertheless, feedback provided by participants in this study also suggested that individuals from diverse backgrounds may actually be more comfortable asking questions about lifestyle behaviors on our e-Forum. These comments are congruent with other studies that have found that users from rare or geographically dispersed backgrounds may be more likely to feel confident in exchanging experiences and advice with regard to their health management within an online community [33]. Similarly, although some participants suggested that the wider patient population may be less comfortable with or have limited access to the internet, such concerns may not be relevant, as it has been established that the majority of individuals in North America have access to the internet [34,35].
Future Directions
Given the limitations of this study, future studies may work to recruit a more diverse sample of patients to ensure ease of use of the e-Forum across a wide range of patient demographics and experiences. Future studies may also use back-end analytics to assess how participants organically use the e-Forum (eg, how often, for how long, and which features they use most frequently) to gather additional information about how best to maximize the usability of the e-Forum. From a design perspective, future versions of the e-Forum may also increase usability by programming additional features, including making the e-Forum more mobile-friendly, including of speech recognition software, creating additional tutorial videos, allowing users to change font sizes, and offering the e-Forum in multiple languages. Moreover, it will be important to assess the e-Forum's ability to ultimately promote continued user engagement in Web-based lifestyle counseling programs and long-term self-care adherence.
Conclusions
In this study, we found evidence to support the usability of our newly designed e-Forum. After each study round, changes were made to the e-Forum based on user feedback. For example, buttons were moved and/or renamed to enhance visibility and accessibility and features, including but not limited to, a keyword search, and tool tips were added throughout the e-Forum. As a result, a diverse sample of cardiac patients, in terms of age and self-reported comfort with computers/internet, were able to successfully navigate the e-Forum. Moreover, these users indicated satisfaction with the layout and content of the e-Forum and expressed interest in using this tool for practical and emotional support in managing their CVD. The high user satisfaction ratings indicate that the e-Forum provided an acceptable user experience. In sum, these findings support this tool and its potential role in promoting long-term lifestyle behavior change when paired with existing e-counseling programs, such as the CHF-CePPORT program [10]. | 5,838 | 2018-05-18T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Gaming passion contributes to the definition and identification of problematic gaming
Even if for most people playing video games is a healthy leisure activity, a minority of vulnerable users present an excessive use associated to negative consequences (e
Background
Video games are a leisure activity practiced by around 3.2 billion people worldwide (Newzoo, 2022).It is thus a widely spread activity that can take place on several platforms, from computers to smartphones.Even if for most people playing video games is a nonproblematic leisure activity, a minority of users show excessive use associated with ill-health (e.g., addiction symptoms, psychosocial maladjustment, sleep interference, health issues) and functional impairment (Jo et al., 2019;Männikkö et al., 2020;Reed et al., 2022).
In 2013, for the first time Internet Gaming Disorder was considered as a potential emerging condition and included as a "condition for further study" in the fifth version of the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association [APA], 2013).In the DSM-5, the criteria used to diagnose Internet Gaming Disorder include those of substance use disorder (e.g., withdrawal, tolerance, continue despite problems) and gambling disorder (e.g., deceiving, escape adverse mood) (Petry et al., 2014).At that time, the risk of excessive pathologizing was tentatively addressed by suggesting a higher threshold (i.e.number of criteria necessary to diagnose the condition) than the one recommended by the DSM-5 (Lemmens et al., 2015).More recently, Gaming Disorder (GD) has been recognized as a psychiatric condition and has been listed as a "disorder due to addictive behaviors" in the 11th edition of the International Classification of Diseases (World Health Organization [WHO], 2019).Crucially, the WHO followed a more conservative approach and proposed that GD is characterized by three mandatory features (loss of control, increasing priority given to gaming, and continued use despite negative consequences) associated with clinically relevant functional impairment (Reed et al., 2022).In contrast, the most recent version of the DSM-5 (DSM-5 TR) neither includes an updated definition of GD nor recognizes it as a disorder (First et al., 2022).
Given the recency of the ICD-11 framework for GD, the largest part of problem gaming research of the last decade was based on DSM-5 criteria to assess GD.However, a growing body of literature shows that some substance use disorder or gambling disorder criteriatypically withdrawal and tolerance, preoccupation, mood regulation, or deceptionare not necessarily relevant in the context of problematic gaming (Castro-Calvo et al., 2021;Deleuze et al., 2017Deleuze et al., , 2018;;Ko et al., 2014;Müller et al., 2019;Peeters et al., 2019;Rehbein et al., 2015).These criteria largely fail to discriminate between intensive but nonproblematic and pathological involvement in video games (Billieux et al., 2019;Charlton & Danforth, 2007), thus promoting the pathologizing of gaming behavior (Kardefelt-Winther et al., 2017).In this context, it is important to elucidate the mechanisms involved in highbut non-problematicinvolvement versus problematic involvement in video games, to eventually contribute to refine and improve the diagnosis, assessment, and treatment of GD.Ultimately, acknowledging the difference between problematic and non-problematic intense involvement in video gaming would contribute to reduce the stigma around the concept of GD.
The Dualistic model of passion
The Dualistic Model of Passion proposed by Vallerand (2010Vallerand ( , 2015) ) is a sound framework to investigate the distinction between highbut non-problematicinvolvement and problematic involvement in video games.Vallerand's framework posits a distinction between so-called "harmonious" and "obsessive" passions.Harmonious passion is the result of an autonomous internalization of a given activity into one's identity.People with harmonious passion have a strong connection with an activity, but this does not interfere with other aspects of their lives.Harmonious passion is associated with mindful engagement instead of unregulated urges.In harmonious passion, the activity is performed with a secure sense of self-esteem, openness, and flexibility.In contrast, obsessive passion refers to controlled internalization of a given activity into the person's identity.This type of internalization is due to some intra-and/or interpersonal forces because of contingencies related to the activity (feelings of social acceptance, self-esteem), or because the excitement produced by the activity becomes uncontrollable.Obsessive passions are central in the life of individuals and are associated with a passive attitude; they "enslave" people who become controlled by their passion and cannot regulate their engagement.In this case, the activity typically conflicts with various areas of life (e.g., professional, social).As a result, people exhibiting obsessive passions present an uncontrolled and inflexible involvement, which ultimately promotes negative consequences and, in extreme cases, functional impairment.There is evidence that obsessive passion for video games is associated with negative outcomes (Bertran & Chamarro, 2016;Mills et al., 2018) and problematic and deregulated usage patterns (Lafrenière et al., 2009;Wang & Chu, 2007).Also, gamers with an obsessive passion report high levels of loneliness, reduced well-being (Mandryk et al., 2020), and tend to play to escape daily problems (Bertran & Chamarro, 2016).In contrast, harmonious passion operates as a protective factor against gamingrelated negative consequences.Indeed, harmonious passion was associated with better life satisfaction, post play energy, and higher game enjoyment (Przybylski et al., 2009).Also, harmonious passion was associated with lower levels of loneliness and higher well-being (Mandryk et al., 2020).Nevertheless, both types of passions also have commonalities.For example, Lafrenière et al. (2009) showed in a sample of gamers that both harmonious and obsessive passions are associated with a positive experience toward gaming.Along the same lines, time spent on gaming is positively associated with both types of passion (Lafrenière et al., 2009;Mills et al., 2018;Przybylski et al., 2009), reinforcing the view that time spent gaming is not a good indicator of problematic gaming (Király et al., 2017;Skripkauskaite et al., 2022).Furthermore, playing for immersion purposes and obsessive passion constitute important predictors of problem gaming symptoms, which is not the case for self-reported gaming time (Kneer & Rieger, 2015).These findings were confirmed by a recent longitudinal study using objective playtime indicators (behavioral tracking) showing that (1) actual time spent gaming did not correlate with problem gaming symptoms and quality of life and (2) self-reported gaming time was on average 10h per week longer compared to objective gaming time (Larrieu et al., 2023).Taken together, these results suggest that (self-reported) time spent gaming is not a valid indicator (or even a proxy) of problematic gaming.
Present study
Against this background, the current study combines a personcentered and a variable-centered approach to pursue two main objectives (Fig. 1).The person-centered approach (first objective) was designed to identify the psychological factors that discriminate highly involved (but healthy, i.e., non-problematic) gamers from problematic gamers.These results may provide useful information to avoid pathologizing intensive but healthy gaming patterns and for the design of tailored treatment or prevention interventions.The variable-centered approach (second objective) was used for the evaluation of GD criteria.The aim here was to identify the most discriminative criteria for the detection of a potential GD.
The first objective was implemented by using a cluster analysis approach to identify different gamer groups (i.e., clusters) based on their profiles of passion towards gaming (using the theoretical framework of Vallerand described previously).The purpose in choosing these two variables for the cluster generation was to identify different passion profiles among gamers, and to compare them in terms of relevant external criteria.Such person-centered approach was used as it allowed us to consider how both types of passion co-exist or not in the same person, and how this affects the functional or dysfunctional nature of gaming behaviors.Based on previous research on problematic gaming, the external criteria considered included GD symptoms, gaming motives, and impulsivity traits.We focused on gaming motives and impulsivity as these two psychological dimensions have been extensively explored in the context of problematic gaming (Király et al., 2022;S ¸alvarlı & Griffiths, 2022).Gaming motives such as escapism (e.g., the desire to evade everyday worries), coping (e.g., playing to cope with adverse moods), fantasy (e.g. the interest in stepping out of the own identity and creating a new one far from reality), competition (e.g.achievement purposes), or skill development (e.g.playing to improve abilities like coordination) have been related to problematic gaming (Bäcklund et al., 2022;Ballabio et al., 2017;Bányai et al., 2019;Biolcati et al., 2021;Columb et al., 2023;Laconi et al., 2017;Melodia et al., 2022;Rafiemanesh et al., 2022;Šporčić and Glavak-Tkalić, 2018;Wu et al., 2017).Regarding impulsivity, several studies have found that impulsivity traits positively correlate with the severity of problematic gaming symptoms (Ding et al., 2014;Ryu et al., 2018).Some authors also argued that impulsivity could be a risk factor regarding the transition from recreational to problematic gaming (Raybould et al., 2022).Moreover, the negative urgency impulsivity trait has been identified as a predictor of comorbidity between ADHD and GD in a sample of outpatients diagnosed a posteriori using the new ICD-11 criteria (Cabelguen et al., 2021).
The second objective of this study was variable-oriented.We explored how gaming disorder symptoms, assessed within the substance use disorder and gambling frameworks (e.g., tolerance, withdrawal, preoccupation, mood modification), are linked to harmonious and/or an obsessive passion for gaming.For the second objective we used supervised machine learning to identify which GD criteria/symptom predict either a harmonious or an obsessive passion.
Participants
Participants were recruited from four Spanish universities (the Catholic University of Murcia, the University of Granada, the University of Extremadura, and the University of the Basque Country).The study consisted of an online survey and potential participants were invited by email.Confidentiality was guaranteed and participants were requested to give their online consent to participate after being informed about the aims of the study.Participants were required to report playing video games at least two hours per week and to be at least 18 years of age to be included in the study.Five gift cards of 15€ were raffled at the end of the study as an incentive for participation.A total of 1130 participants started the completion of the online survey.Participants were excluded if they had at least one missing data point on one of the study's variables (n = 133), did not met the inclusion criteria (n = 48), or if they provided invalid information such as playing more than seven days per week or more than 24 h per day (n = 104).The final sample consisted of 845 participants.Participants were aged between 18 and 50 years (M = 23.5, SD = 5.03).Gender distribution and gaming preferences are reported in Table 1.In the final sample, 11 participants were identified as disordered gamers according to the IGD-20 (cut-off score of 71) (Pontes et al., 2014).The study was conducted in accordance with ethics for human research in the Declaration of Helsinki and was approved by the Ethics Committee of the Catholic University of Murcia (CE031905).
Measures
The Passion Scale (Marsh et al., 2013) was of central importance for the current study as we used it to generate groups of gamers through a cluster analytical approach (see data analytic strategy section).This scale is composed of 12 items answered on a 7-point Likert scale (1 = strongly disagree; 7 = strongly agree).Among the 12 items, six assess harmonious passion, and six assess obsessive passion.Participants are asked to think about their gaming activity.Harmonious passion is evaluated using items such as "This activity is in harmony with the other activities in my life", or "This activity allows me to live a variety of experiences".In contrast, obsessive passion is evaluated with items such as "I have almost an obsessive feeling for this activity", or "This activity is the only thing that really turns me on".For the present study, we used the validated Spanish version of the passion scale (Chamarro et al., 2015) which presents good internal consistency.In the current sample, Cronbach's alpha was equal to 0.89 for obsessive passion and 0.87 for harmonious passion.Spearman's rank correlation between harmonious and obsessive passions was 0.37 (p <.001).This positive correlation can be explained by the fact that harmonious and obsessive passion are sharing some aspects related to the definition of passion such as considering the activity as a passion, giving some value to it, viewing it as integrated into the self, and dedicating time and energy to it (Vallerand et al., 2003).However, even if harmonious and obsessive passions belong to the same scale and are sharing common aspects related to passion, such correlation does not involve collinearity issues between these two variables which can be considered as distinct constructs for the cluster analysis.
The Motives for Online Gaming Questionnaire (MOGQ) (Demetrovics et al., 2011) is composed of 27 items assessing seven motives.Respondents are requested to use a 5-point Likert scale (1 = never; 5 = almost always/always).Gaming motives assessed include social (e.g., "I play online games because I can get to know new people"), escape (e.g., "I play online games because gaming helps me to forget about daily hassles"), competition (e.g.,"I play online games because I enjoy competing with others"), skill development (e.g.,"I play online games because gaming sharpens my senses"), coping (e.g.,"I play online games because it reduces tension"), fantasy (e.g.,"I play online games to feel as if I was somebody else"), and recreation (e.g.,"I play online games because I enjoy gaming").The psychometric properties of the Spanish MOGQ will be described in another research report based on the same dataset.Confirmatory factor analysis for the Spanish MOPGQ can be obtained from the following Open Science Framework link (OSF, https://osf.io/jk94v/).In the Spanish MOGQ, escape and coping motives are regrouped in a single motivation dimension.Cronbach's alphas for the other dimensions in the present sample were 0.93 for general motivation, 0.79 for social, 0.91 for escape/coping, 0.85 for competition, 0.92 for skill development, 0.84 for fantasy, and 0.82 for recreation.
The Internet Gaming Disorder Test (IGD-20) (Pontes et al., 2014) assesses GD symptoms based on the DSM-5 framework and the "Components Model" of addiction (Griffiths, 2005).Each item is scored on a 5points Likert scale (1 = strongly disagree; 5 = strongly agree).This questionnaire thus assesses GD symptoms within a substance-use based framework, through the following dimensions: salience (e.g., "I usually think about my next gaming session when I am not playing"); mood modification (e.g., "I play games to help me cope with any bad feelings I might have"); tolerance (e.g., "I need to spend increasing amounts of time engaged in playing games"); withdrawal (e.g., "I feel sad if I am not able to play games"); conflict (e.g., "I think my gaming has jeopardized the relationship with my partner"); and relapse (e.g., "I do not think I could stop gaming").For this study, the Spanish version by Fuster et al. (2016) was used.This version showed good psychometric properties (e.g., structural validity, internal reliability).In the current sample, Cronbach's alphas were 0.91 for the total score, 0.65 for the salience dimension, 0.63 for mood modification, 0.65 for tolerance, 0.76 for withdrawal, 0.71 for conflict, and 0.76 for relapse.Even if the scale is named "Internet Gaming Disorder Test", the items of the scale do not specifically refer to online gaming and can also refer to offline gaming.
The Short UPPS-P Impulsivity Scale (Billieux et al., 2012) contains 20 items that assess five distinct impulsivity traits, including negative urgency (e.g., "When I am upset I often act without thinking"), lack of premeditation (e.g., "My thinking is usually careful and purposeful"), lack of perseverance (e.g., "I finish what I start"), sensation seeking (e.g., "I quite enjoy taking risks"), and positive urgency (e.g., "I tend to act without thinking when I am really excited").Items are scored using a 4-point Likert scale (1 = strongly agree; 4 = strongly disagree).The strength of the UPPS-P model of impulsivity is that it allows for a comprehensive assessment of the multi-faceted nature of impulsivity (Whiteside & Lynam, 2001).For this study, the Spanish version was used (Cándido et al., 2012).This version has good psychometric properties (structural and construct validity, internal reliability).In the current sample, Cronbach's alphas were 0.82 for negative urgency, 0.76 for lack of premeditation, 0.79 for lack of perseverance, 0.81 for sensation seeking, and 0.66 for positive urgency.Here we decided to group the negative and positive traits into a single urgency dimension, since it has recently been demonstrated that these two traits actually form a single coherent construct (Billieux et al., 2021).Cronbach's alpha for urgency was 0.81.
Data analysis
Following the recommendations by Hair et al. (2010), we performed cluster analysis by combining hierarchical and non-hierarchical approaches.Using both hierarchical and non-hierarchical methods allows for compensating weaknesses of each method by capitalizing on the advantages of the other (Hair et al., 2010).As explained earlier, the variables used to create the clusters were the obsessive and harmonious passions scores.Before performing the cluster analysis, we first ensured that there was no collinearity between the two variables composing the passion scale.We then scaled and centered the variables used for the generation of clusters.This was followed by hierarchical clustering using the Ward method with squared Euclidian distances to identify the optimal number of clusters to be used in the following non-hierarchical clustering.The NbClust R package (Charrad et al., 2014) was used to evaluate the best number of clusters to retain.This package uses the majority rule which is a simple method for selecting the optimal number of clusters based on the number of times a particular value of k is chosen as the best clustering solution by different clustering indices (kl, ch, hartigan, ccc, scott, marriot, trcovw, tracew, friedman, rubin, cindex, db, silhouette, duda, pseudot2, beale, ratkowsky, ball, ptbiserial, frey, mcclain, dunn, hubert, sdindex, dindex, sdbw).The majority rule selects the k value best clustering solution the largest number of clustering indices.Once the optimal number of clusters was identified thanks to the majority rule, a non-hierarchical K-means cluster analysis was computed (iter max = 250, nstart = 50).Obtained clusters were then retrieved and compared according to our external correlates.Variables used as external correlates were gaming motives (MOGQ), GD symptoms (IGD-20), and impulsivity traits (UPPS-P).Clusters were also compared in terms of age and the number of hours spent daily on video gaming.These analyses were carried out using R (v4.2.0).The dataset and the code are available on the OSF link provided (https://osf.io/jk94v/).
Our second research objective was approached by using supervised machine learning to identify which GD symptoms constitute robust predictors of harmonious versus obsessive passion for video games.By using unknown data to evaluate the fitted model, supervised machine learning brings more robust results than traditional approaches where the model is fitted and evaluated on the same data.Two models (elastic net regressions) were computed (one for each type of passion), with the various dimension of IGD-20 assessing GD symptoms used as predictors.These analyses were computed using the ElasticNetCV model, which is a cross-validated (n fold = 5, random state = 42, max iter = 2500) elastic net model that finds the best regularization term (L1 ratio) value for the elastic net regression.The aim of the regularization term is to prevent overfitting of the model.This model has been chosen following the flowchart provided by the Scikit-learn Library documentation.We also used a pipeline (a tool that allows you to chain multiple data preprocessing and modeling steps together into a single object) that scales the data (standard scaler) and fits the model.Based on the supervised machine learning principle, one-third of the data (33 %) were set aside to form a test set to ascertain the model's accuracy.Lastly, we retrieved the coefficients, and the permutation importance values for each predictor.Permutation importance (not related to the coefficients) was computed by shuffling the scores of one predictor and observing the impact of this shuffling on the R 2 score.The purpose of this shuffling is to break the potential relationship between the predictor and the outcome variable (here, harmonious or obsessive passion).The more the fitted model depends on the predictor, the more the shuffling decreases the model's R 2 .This procedure was used for all the predictors in separate runs to compute the permutation importance for each of them.The entire process was repeated 250 times to control the potential effect of a specific shuffling, thus we report the mean and the standard deviation of the permutation importance for each predictor.A permutation importance value of zero means that the shuffled variable had no impact on the predictions done by the fitted model.Supervised machine learning analyses were run using Scikit-Learn V1.0 library in Python (Varoquaux et al., 2015).
Cluster generation
Hierarchical clustering suggested to retain three clusters according to the majority rule.The three clusters' profiles were then generated using the non-hierarchical K-means cluster analysis.Cluster one was labelled "engaged gamers" (n = 434) characterized by high harmonious passion (Z Score = 0.675) and low obsessive passion (Z Score = -0.201).Cluster two was labelled "risky gamers" (n = 100) with grouped gamers being characterized by a combination of elevated obsessive passion (Z Score = 2.251) and moderately high harmonious passion (Z Score = 0.435).Finally, the third cluster was labeled "casual gamers" (n = 311) containing those with low harmonious (Z Score = -1.082)and low obsessive passion (Z Score = -0.443)(Fig. 2).These clusters significantly differed in terms of harmonious (X 2 (2) = 565.87,p <.001) and obsessive (X 2 (2) = 224.49,p <.001) passion scores.
Supervised machine learning analysis (elastic net regression models)
Two models were computed to identify which types of GD symptoms (measured by the IGD-20) predicted either harmonious or obsessive passion for gaming (see Table 3, for details).Both elastic net regression models were trained using a train sample composed of 566 participants and tested on a test sample of 279 participants (33 % of the dataset).
A. Infanti et al. the conflict (PI = 0.03) and relapse (PI = 0.02) dimensions were related to the largest reduction in R 2 when their scores were shuffled.
Discussion
This study aimed to identify different profiles of gamers based on passion types, but also to determine which GD-related symptoms and constructs predict either harmonious or obsessive passion.Three distinct clusters of gamers were identified based on their passion profiles, including risky gamers, engaged gamers, and casual gamers.Supervised machine-learning algorithms identified specific GD symptoms (salience, mood modification, tolerance, low level of conflict) to predominantly predict harmonious passion, whereas a different subset of them (withdrawal, high level of conflict, relapse) were more strongly related to obsessive passion.
Cluster analysis (person centered approach)
Risky gamers comprised 12 % of our final sample and were characterized by a combination of high levels of obsessive passion and moderately high harmonious passion.Previous research using a variable-centered approach found, on the one hand, that obsessive passion is linked to excessive gaming and negative consequences (Bertran & Chamarro, 2016;Lafrenière et al., 2009); on the other hand, harmonious passion was found to potentially protect from such negative consequences (Bertran & Chamarro, 2016).Our study, which endorses a person-centered approach shows for a subgroup of gamers, that obsessive features overcome harmonious features and promote problematic and uncontrolled engagement in gaming (as reflected by higher GD symptoms) despite the presence of moderately high harmonious passion.In terms of gaming motives, risky gamers showed higher levels of escape/coping, competition, skill development, and fantasy motivations than the other groups, but also a highest general motivation towards gaming.This is in line with previous variable-centered research, which found that obsessive passion is associated with motives such as fantasy, escape, competition, and coping (Orosz et al., 2018).It is worth noting that such gaming motives have also been related to problematic gaming (Ballabio et al., 2017;Bányai et al., 2019;Biolcati et al., 2021;Columb Note.IGD-20 = Internet Gaming Disorder Test; MOGQ = Motives for Online Gaming Questionnaire; UPPS-P = Urgency (negative), Premeditation (lack of), Perseverance (lack of), Sensation seeking, Urgency (positive), Impulsive Behavior Scale.Wilcoxon rank sum test (p-value adjustment method: Bonferroni): b = Different from cluster 2; c = Different from cluster 3; * = Significant at p <.05; ** Significant at p <. 001. et al., 2023;Laconi et al., 2017;Melodia et al., 2022;Moudiab & Spada, 2019;Rafiemanesh et al., 2022;Šporčić & Glavak-Tkalić, 2018;Wu et al., 2017).In terms of impulsivity traits, we found that risky gamers are especially characterized by a lack of perseverance, which is defined as the "difficulty to remain focused on potentially boring and/or demanding tasks", and is closely linked to the conscientiousness trait of the Big Five model of personality (Whiteside & Lynam, 2001).This result is consistent with the results of a previous variable-centered study, which reported a positive relationship between the lack of perseverance dimension of impulsivity and obsessive passion (Orosz et al. 2016).Yet, and more interestingly, our results echo previous person-centered research results, which identified a group of "unregulated escapers" characterized by elevated lack of perseverance and coping motives (Billieux, Thorens, et al., 2015), or a group of "escapers" characterized by low conscientiousness and coping motives (Larrieu et al., 2022).It is worth noting that while urgency is particularly relevant in substance use disorders (Hildebrandt et al., 2021), this impulsivity trait did not differ between potentially problematic and casual gamers in our study.Risky gamers seem to display a combination of dysfunctional traits and motivational profile, calling for individualized treatment approaches aiming at reducing impulsivity and implementing more adaptive coping and/or emotion regulation strategies.Such interventions could help these gamers to reduce their obsessive gaming involvement and help them gaming in a way that is integrated into their daily life instead of interfering with it.Engaged gamers comprised more than half of the participants (51 %).They are characterized by a very high level of harmonious passion and a low level of obsessive passion.This cluster was named after the seminal work of Charlton and Danforth (2007) suggesting the need to discriminate between two types of intensive involvement in gaming, namely high but non-problematic engagement versus high and dysfunctional engagement.Crucially, despite not being different from risky gamers in terms of reported time spent gaming, they showed the lowest level of conflict (i.e., gaming-related negative consequences), providing further evidence to Vallerand's notion that harmonious passions are well integrated into one's life, allowing for needs to be fulfilled without interfering with important areas of functioning (e.g., social, professional).Our results are also in line with previous studies showing that gaming time (or screen time) is not a good indicator of problematic gaming (Billieux et al., 2013;Charlton & Danforth, 2007;Király, Tóth, Urbán, Demetrovics, & Maraz, 2017;Demetrovics & Király, 2016).Engaged gamers present a balanced motivational background, with the highest level of recreational motives and low to medium impulsivity.They are also characterized by the lowest scores in urgency and lack of premeditation, and report higher perseverance than the potentially problematic gamer group, which probably contributes to their regulated and non-problematic involvement in gaming.
The casual gamer group corresponds to 37 % of the sample.These gamers are characterized by a low level of both harmonious and obsessive passions.They show lower involvement in video games (e.g., self-reported lower time spent gaming) and fewer GD symptoms than the other two groups.An analysis of their gaming motives also revealed that -in general -they report less pronounced gaming motives, whatever their type.This profile aligns well with the recreational gamers subtype identified previously by Billieux, Thorens, et al. (2015) and Larrieu et al. (2022).In fact, it is likely that these gamers fulfill their basic needs through non-gaming activities and thus cannot be considered as passionate gamers in the sense of Vallerand (2010;2015).In terms of impulsivity, they are generally more impulsive than engaged gamers but less impulsive than problematic ones.Given this profile, it is worth noting that it cannot be excluded that the most impulsive members of this group would display deregulated involvement in other rewarding activities not assessed in the present study.Some studies have highlighted the positive impact that video games can have, thanks to some aspects of the game such as socializing, and on well-being and mental health if they are practiced in a balanced way (Barr & Copeland-Stewart, 2021;Giardina et al., 2021;Halbrook et al., 2019).It is conceivable that casual gamers do not benefit from these positive effects, while engaged gamers do.
Supervised machine learning analyses (variable centered approach)
The second objective of the study aimed to identify the GD symptoms predicting either harmonious or obsessive passion.The supervised machine learning analyses conducted revealed some important findings, which align well with previous findings from the gaming literature.Regarding harmonious passion, the trained model showed a strong and negative relationship with conflict and positive relationships with salience, mood modification, and tolerance.In contrast, for obsessive passion, the trained model showed positive associations with conflict, relapse, and withdrawal.Taken together, these results are well aligned with previous research showing that substance use disorder criteria, when applied to gaming, mix "central" features indicative of a problem (i.e., conflict, relapse, withdrawal) and "peripheral" features, which rather reflect non necessary problematic involvement (i.e., salience, tolerance, mood modification) (Billieux et al., 2019;Brunborg et al., 2013;Charlton & Danforth, 2007;Deleuze et al., 2018).Interestingly, these results also align well with a recent international Delphi consensus study about the clinical validity, clinical utility, and prognosis value of GD diagnostic criteria included in the DSM-5 and ICD-11 (Castro-Calvo et al., 2021).In detail, the expert panel recruited in this Delphi study agreed that criteria such as tolerance or mood modification, which were more related to harmonious passion in the present study, are not clinically useful as they cannot discriminate between problematic and nonproblematic gaming patterns.In contrast, the DSM-5 or ICD-11 criteria such as loss of control (reflected by the relapse items in the IGD-20) or continued use despite negative consequences (reflected by the conflict items in the IGD-20) were judged by the Delphi panel as clinically useful and able to identify pathological gaming patterns, thus aligning with our results regarding obsessive passion.Moreover, it is interesting to note that this pattern is almost identical with the very definition of compulsivity (Muela et al., 2022).Thus, our results are also in line with the work of Muela et al. (2022) who operationalize compulsivity as the main factor driving dysregulated or excessive behavior.
Overall, our pattern of results further suggests that recycling substance use disorder or gambling criteria, in the context of gaming behavior, is susceptible to conflate problematic and non-problematic usage and thus pathologize non-problematic behavior (Billieux et al., 2019;Kardefelt-Winther et al., 2017).
Limitations
This study has several limitations.First, the cross-sectional nature of the study does not allow for causality assumptions.Further longitudinal studies would bring more insight into the dynamic regarding passions, motivations, and impulsivity traits.Longitudinal studies are also required to determine whether the clusters identified are stable over time.Second, we used self-reported measures that can be influenced by response bias (Dunning et al., 2004).Third, while 21.18 % of our sample reported being offline gamers, one of the scales used in this study refers to online gaming motives (MOGQ).Although some motives might be perceived as less relevant for offline gaming (e.g., social or competition motives), most remain relevant in the context of online gaming (e.g., escape/coping, recreation, skill development, or fantasy motives).It is worth noting that the MOGQ was not used to create the clusters, and only served as an external correlate to compare clusters.Fourth, our sample is composed of a majority of highly educated participants.Nevertheless, the sample size (N = 845) and the fact that we had a very good balance with regards to gender can be considered as a clear strength of this study.Finally, even if we were able to identify several key risk factors for GD in the present study, other factors such as selfesteem (Billieux, Thorens, et al., 2015), childhood trauma (Shi et al., 2020), or mood disturbance (Ostinelli et al., 2021) could also have been considered.
Conclusion
By combining person-centered and variable-centered approaches, the present study contributes to models of and clinical approaches to the treatment of GD.Regarding the theoretical models, our results emphasize the importance of considering not only symptomatic or diagnostic features, but also underlying psychological processes and mechanisms (Brand et al., 2020).The present results also further emphasize the risk of "recycling" substance use disorder criteria to assess and diagnose GD (Castro-Calvo et al., 2021;Kardefelt-Winther et al., 2017) and potentially other types of excessive behaviors (Billieux et al., 2022;Flayelle et al., 2022).On the clinical aspect, our results support the relevance of person-centered approaches to the treatment of problematic gaming (Billieux, Schimmenti, et al., 2015;Park et al., 2021).Further research should thus be conducted to investigate how process-based and personcentered treatment approaches could be developed and validated to address problem gaming issues.Indeed, it remains an empirical question under which circumstances obsessive involvement in video games changes to a harmonious one, and whether psychological interventions can facilitate this transition, assuming a "controlled use" paradigm rather than an "abstinence-based" paradigm.
Declaration of Competing Interest
This work is part of the DRIVEN project funded by the Luxembourg National Research Fund under the PRIDE program (PRIDE17/ 12252781).The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 7,515.2 | 2023-07-01T00:00:00.000 | [
"Economics"
] |
Aspects on Learner-biased Classroom Observational Techniques
This paper was designed to explore the means in the field of observing classroom teaching/learning from both a general, and an “English as a Foreign Language” (EFL), viewpoint. The aim was to browse the relevant literature, and lead to consider observation tools which might serve to research in EFL. This paper summarizes the reading by surveying the field of classroom observation, and then proceeds to evaluate the likely usefulness of a number of selected observation instruments.
Introduction
This paper examines classroom observation as a research activity, with particular reference to observing the learning/teaching of English as a foreign language (EFL) in a specific context.The focus is on EFL because, at a later stage, the author of this paper intends to examine the experience of a group of Chinese students studying English.Although his investigation will be carried out mainly through interviews with the research population, the author would like to add perspective to the study by observing these learners and their teachers in action in real EFL classrooms.This will help verify or perhaps even contradict his interpretation of the interviews, and so serve to add some objectivity to the study.
The author of this paper has browsed interviewing techniques, and gained practical experience through a research paper of real-life interviews.However, he cannot claim previous in-depth knowledge of classroom observation as a research activity.Feeling that it could add a dimension to his proposed research, the author shall examine some of its strengths weaknesses here, as a first step in guiding him to decide on which kind of supplementary instruments might be considered for his purpose.A broad survey of classroom observation as a research activity will be employed, which will hopefully guide him in choosing appropriate observation instruments to pursue areas highlighted as worth investigating after carrying out pilot interviews with the research population.The choice of aspect to investigate will also be influenced by the experience and intuitions gained from some 35 years of learning/teaching English in China.
Research Traditions
In a survey of classroom research specifically related to language learning and teaching, Nunan (1989) refers to four different traditions (to which the author has appended an example of each): 1) Psychometric studies; e. g.Pilliner in Cohen & Manion (1985) 2) Interaction analysis; e. g.Wragg (1970) 3) Discourse analysis; e. g.Narushima (1993) 4) Ethnography e. g.Bailey (1983) In psychometric studies, the researcher investigates the effectiveness of particular methods, activities and techniques by measuring language gain on proficiency tests.In interaction analysis, researchers use systems and schemes for studying classroom behaviours and interactions.
Discourse analysis involves analytical schemes for the linguistic analysis of classroom interactions.
In ethnographic studies, the researcher observes, describes and interprets the classroom in ways similar to those employed by anthropologists when they study unfamiliar cultures and societies.(Nunan, 1989, p. 4) Psychometric studies concerning product or outcome are mainly "quantitative" in approach.Numerical measurement and statistical means are involved, which investigate the quantitative relationships between various classroom activities or behaviour and language achievement.This research might serve to predict trends, but cannot account for the complicated behaviour of individual human beings.
Interaction analysis is strongly influenced by sociology, whereby researchers use the methods of social investigations in classroom observation.Here, observation of the classroom and analysis of the interaction taking place there, serve to investigate social meanings and inferred classroom climate.In this approach, student behavior is regarded as being dependent on classroom atmosphere and on the interaction between teachers and learners.The focus is on the relationship between teachers and students, not on quantitative analysis.Research following this approach still employs some quantitative analysis but the stress is not on mechanical scoring.
Discourse analysis in classroom observation derives from a linguistic perspective, which focuses on the discourse of classroom interaction in structural-functional linguistic terms, not on the inferred social meaning.It is systematic, and includes a dimension for pedagogical aspects, content, speaker, and other functions.Although quantitative analysis is potentially useful in this approach, researchers tend to confine their attention to the appropriate pre-defined categories employed to interpret the discourse.
An ethnographic study derives from sociology and anthropology, which is widely accepted in classroom research.It attempts to explain behaviours from the idea of participants' different understandings, and in this sense might be regarded as an 'objective', qualitative approach.The diversity of purposes, practices, and locations explains why different styles of classroom observation have been developed, and why researchers may adopt a quantitative or qualitative approach (Wragg, 1994, p. 7).The quantitative/qualitative relationship is discussed below (2.2).
However, in reality the distinction between the four approaches outlined above may not always been as clear-cut as the categorization implies, and indeed, each of the four approaches mentioned above might be employed in combination for a particular piece of research,
Quantitative and Qualitative Approaches
Quantitative and qualitative research methodologies are mutually dependent, which offer the possibility of flexibility for researchers.The distinction usually made between quantitative and qualitative research is that the two approaches represent different ways of thinking about and understanding the world around us.The extent to which one is prepared to accept or reject particular methods-quantitative or qualitative-depends on one's view of world (Nunan, 1992, p.77).
Quantitative
In this century, the quantitative approach has been heavily influenced by the nineteenth-century French Philosopher Comte, who claimed that human thought evolved through the stages, theological, metaphysical and scientific.(Wragg, 1994, p. 7) The belief was that social behaviour could be predicted through systematic observation and analysis.Quantitative researchers are interested in facts and their relationship, and in details that can be measured to produce generalizable results.This research often makes use of statistical analysis: in this case it needs to be broad if the results are to be statistically valid (Bell, 1992, p.27).This approach predominated in educational research in the period from 1900 to 1930, and employs methods such as statistical studies, survey studies and experimental studies.
A quantitative approach is generally regarded as being obtrusive, controlled, objective, and product-oriented, and tends to be large scale and time-consuming.It tends to gloss over the complicated nature of individuals.
Qualitative
Compared with a quantitative approach, a qualitative approach tends to be narrow in scale, and focus on individual case studies.It is a process-oriented approach to the study of interaction (Chaudron, 1988, p. 48).Qualitative researchers are interested in individuals and their view of the world.The following types of qualitative approach are often used: symbolic interactionism and phenomenology, and social and cultural anthropology.The qualitative approach used in education research tends to focus on teacher's classroom strategies and learners' adaptations to school, patterns of classroom interaction, learner's perspectives and classroom behaviour, transfer between school, teachers' life histories, and the impact of public examinations on classroom teaching.
Making an Initial Choice of Approach
As the main thrust of the author's research will be conducted qualitatively through interviews, the author shall seek to complement this by obtaining more quantitative data for analysis.
The author of this paper does not intend to pursue a psychometric approach which tends merely to measure learning outcomes, because the process and environment of learning is of more importance in the context of his research.
Carrying out discourse analysis of the English classroom interaction in which the Guangdong University of Foreign Studies (hereafter referred to as GDUFS) learners are involved, is another avenue which does not recommend itself to his purpose.There are practical considerations in recording verbal discourse in the language classroom.Unless the researcher is adept at shorthand, then cassette recordings will have to be made.But teachers are not always happy to have their performances recorded, for they have only the researcher's guarantee that there will subsequently be no breach of confidentiality.Technical equipment can easily break down, especially if the researcher is unfamiliar with it.Inadequate acoustics in normal classrooms can result in recordings that are not clear enough to be transcribed; this applies particularly to group work.However, the most serious drawback is the time taken to transcribe a recording in communicative classrooms where there tends to be a great deal of verbal interaction.A one-hour class can take up to 20 hours to transcribe, and that is before any analysis takes place.This is the time that the author cannot afford, because he will already have his main interviews with the learners to produce as written texts.
The author is also obliged to rule out a supplementary ethnographic study.Thus, by a process of elimination, he has taken the decision to pursue the interaction analysis tradition in supplementing the data obtained through interviewing.This will be carried out through some form of classroom observation which will help me match that data to the reality.However, before doing so, he shall take a realistic look at the strengths and weakness of classroom observation as a research activity.
Classroom Observation
Direct observation would seem to be an obvious and straightforward activity in researching classrooms.The observer sits in on a series of lessons, records what goes on in them, and then analyses the information gathered.The observer sits in either as a participant, taking part in the process, or as a non-participant, observing the action in a detached way.Because of the difficulties of becoming a genuine participant to which he has referred in the author's discussion of ethnographic studies (above), he intends to focus on non-participant observation.However, the author's reading in this area has shown him that classroom observation is far from being an "objective" activity.This does not deny its value, but is an important fact to bear in mind when reading research reports deriving from data gathered through non-participant classroom observation.The author's attention has been drawn in particular to a chapter by Rees (1997) which adopts an awareness-raising-though not destructive-stance in this respect.Below the author edits and summaries some of the points made there.
1) narrow focus
Teaching as a profession, involves very much more than teaching.When we observe actual classrooms, we do not take into account important out-of-class activities which could also contribute to learning.Rees (op.cit.) mentions: planning, reading, homework, student profiling, exam writing, pastoral work and other extra-curricular activities, etc. and he adds other roles of the teacher such as being a friend, disciplinarian, and instilling values.
2) sampling
The question of sampling is a tricky one.Just how many classrooms does an observer need to visit to discover the general characteristics of any one learning environment.
3) variables
What is observed influences the observation, but in reality is often subject to variables which are not under the control of the teacher or the class, such as time of day, day of the week, size of the class, temperature, character of the previous lesson, and so on, all of which can influence the learning/teaching.
4) beyond shared knowledge
There may be knowledge shared by the learners and teacher which is not known to the observer, but may nevertheless influence his/her interpretation of what is seen and/or heard.
5) continuity
Unless the observer has the time and permission to observe any one class relentlessly, then it is difficult to ascertain how one lesson fits with those that precede and those that come after it.Teaching/learning are continuous, long-term processes which cannot always be detected by short-term observations.
6) perspective
There is a natural tendency for observers to concentrate on the role of the teacher, at the expense of that played by the learner, hence "teacher observation" is much more frequently heard than "learner observation".This neglects the equally important role of the learner in the language classroom.Classrooms where real language learning is taking place, are not necessarily characterised by constant teacher intervention.
7) goal
What to observe and why is problematic in the observation of teaching.Rees (personal conversation, 1999) regards this as the Achilles' heel of all classroom observation.There must be good reasons for choosing what to observe, founded on current knowledge of good practice.At the same time, it is wise to consider that-particularly in the history of language teaching'-what is approved of today may be condemned tomorrow.Rees (op.cit.) quotes a memorable extract from Cook (1994) in this respect.
In TEFL, yesterday's criminals become today's respectable citizens with such regularity that it seems almost certain that in this endless alternation, what is outlawed today will be eulogised tomorrow.A gambler would find TEFL a very easy field.The question of what to observe is closely related to the following two factors 1) the wood and the trees: A deliberate decision is usually made by observers to look at aspects of the classroom in analytical detail or in broader perspective.This is usually a matter of purpose, and convenience, and not of right or wrong.However, it should always be borne in mind that individual aspects of teaching examined should never be claimed to represent the whole, and that the whole can easily be lost in a forest of detail; 2) high and low inference: Decisions also have to be made in advance concerning concentrating on high and low inference factors (or a mixture of both) when observing classrooms.Factors which require a low degree of inference from the observer, such as the number of times the teacher moved to the back of the room during the lesson, can be recorded with some certainty, though they all too easily focus on trivia.Important factors, where high inference is required, such as exactly how much is being learned, cannot usually be recorded with such assurance.
1) the good language teacher
Any observation instrument which tries to identify the characteristics of the good language teacher, fails to acknowledge that some teachers teach differently from others with equal success, and that what succeeds in one context may prove inadequate in another.The teacher's performance in isolation cannot guarantee successful language learning.
2) the fragile observer
Observation requires a surprising self-discipline from the observer, who has to remain alert throughout even the most boring of lessons.Not every observer can display this stamina.
3) the egocentric observer
A trap which the observer can easily fall into is to assume that what is interesting/boring for him/her is also interesting/boring for the learners; and equally to assume that what is interesting must also be useful in the language learning context.This is a fallacy.Indeed, the communicative approach, wrongly interpreted, often leads to amusing sessions with no real language learning content at all.
4) the observer as interloper
A common complaint is that the presence of an observer threatens to change the nature of the class being observed, and so defeats the purpose of the observation.Quoting from his own experience, Rees (op.cit.) maintains that this is exaggerated.If the observer learns to be unobtrusive, e. g. by sitting quietly in the back of the classroom, not shuffling papers, nor establishing eye-contact, nor interfering with the lesson in any way, then the learners, with their backs to him, quickly forget that he or she is there.However, the teacher faces the observer, but this one difficulty can to some extent be dealt with by establishing good rapport with him/her beforehand, if possible.Where good rapport has not been established, or the purpose of the observation made dear, then this can lead to very uncharacteristic teacher performances ranging from the spectacular to the inhibited.
5) frames of reference
A problem in using ready-made instruments for observing classrooms is that the person who made them may not share the same frames of reference about teaching/learning as someone else wishing to use them.This explains why ready-made instruments often have to be adapted for use in different contexts if they are to produce meaningful outcomes.
6) tunnel vision
Because of the complexity of what goes on in language classrooms, the observer is obliged to observe only selected aspects of it.The limitations of human vision and hearing, and natural lapses in attention mean that it would be impossible to observe everything.Even watching a videotape of a lesson replayed for a second or third time on a small screen will reveal aspects missed on previous viewings.Of course, learners and teachers too are subject to these same constraints, but this fact reminds us of what a piecemeal activity classroom observation is, and so helps us to keep it in perspective.
7) subjectivity
All classroom observation is by its very nature subjective.Even checklists which require the observer merely to tick off low inference factors can be subject to observer fatigue, temporary loss of attention, and so on.And subjectivity is even involved in compiling any checklist in the first place.This does not mean that classroom observation should be abandoned, for truths can emerge when tackled from different directions.As Bowers (1989, p. 144) reminds us, there is substantial evidence to suggest that no one observational technique is in itself adequate: all techniques have their strengths and their weaknesses.Use of a range of techniques can help to cancel out the weaknesses of each while capitalizing on their strengths.
It is for this very reason that the author shall be using several techniques in his dissertation research.He ends this section with an apt quotation from Bowers which partly summarises the above section: "Whatever you see of a teacher's classroom activity can only be indicative: you will never see enough to know with certainty what kind of teacher he or she is, how representative the sample which you have seen may be of their overall competence and preferences.Moreover, even in what you do see, there is much which remains below the surface: you will observe what the teacher does, but not what the teacher perceives as happening (which may be different).You will not know how what you see ties in with other events (before or after) which add up to the full history of the relationship between this teacher and these pupils."(1989, p.142)
Sources
The author has undertaken a realistic look of the capabilities of observation instruments in classroom research, the next task will be to choose suitable instruments for his purpose, and to adapt them where required.It is astonishing to see that there are a number of instruments that had been published.Those in the classic Simon and Boyer (1975) and its British counterpart Galton (1978) run to over a hundred.He clearly needed to limit his research, and considered the following for ideas (for full details of texts, see the Bibliography): The author also examined a number of individual instruments from various sources shown to him by his colleagues.As these were merely a loose collection which had not been consistently referenced, he is able to present below only the details available to him: 1) Categories for the Puckett system (Puckett, late 1920s) 2) Coding lesson segments (Rees, 1984) 3) Grid for taking field notes (Rees, 1984) 4) Involvement learning in small groups (Wragg, 1994) 5) Language learning questionnaire (Nolasco & Arthur, 1988) 6) Learners' attitudes to the learning process (Forth, 1990) 7) My classroom environment (William Burden, 1999) 8) Observation schedule (Candlin) 9) Observation vocabulary teaching (Shahinda Modnis) 10) Pupil observation (Partington & Luker, 1984, p.48) 11) Pupils' questionnaire on writing (Chzung, 1993) 12) Questions/answers (Rees, 1984) 13) Small group discussion (Turney et al., 1982) 14) The good English teacher (Ministry of Education, Malaysia) 15) What is currently food language teaching practice?(Rees,1984) 16) Your experience of language learning (Forth, 1990) 17) Your views on language teaching (White, 1984) Based on what the author already knew of their language learning experiences from earlier interviews with the GDUFS students (see 1. Introduction), at this stage the author earmarked the following instruments as initially worth considering: 1) Good & Brophy, J. ( 1978
The Instruments for Piloting
The next task was to choose which instruments the author should actually pilot, given the time and resources available to him.He was guided by the following considerations: 1) avoiding abusing the generosity of teaching staff, so needed instruments which he could operate with a minimum of fuss or distraction.This ruled out using a video or tape-recorder, or trying out too many instruments.
2) avoiding all instruments which seemed to directly evaluate teachers' performances.Instead, it was decided to concentrate more on the learners' classroom experience.
3) being sensible to choose instruments which would not require intensive training in order to operate them.
Piloting the Instruments
Before piloting the 6 instruments, the author drafted a letter to be distributed to the teaching staff, explaining his objective, and asking for permission to sit in on their classes.The author circulated this letter only to teachers involved with students within his research, as he hoped to get to know them through this exercise.This might facilitate his using these same instruments with them during his later research.The teaching staff he had approached all agreed to his request.In the event the author discovered that in some cases testing out the instrument did not require the full hour for each which he had anticipated.
Below, the author took each instrument in turn, and comment on his findings.A few of these instruments required some adaptation by him even before use.Where this was the case, it is stated.All the instruments will be found in Appendix.(Good & Brophy, 1978) With only 10 categories, the author found this to be a very easy instrument to operate.The categories are clear, but very general.This requires high inference from the observer, so it would probably benefit from sub-specification to make it more objective.For example, how is "dominating" to be defined, and what is counted as "teacher participation"?As this was a very small class of six students, there was of course less chance of any one student dominating than in a larger one.This practice suggests that instruments need to be adapted for the situation in which they are to be used, and that categories that are too general may increase subjectivity in rating them.As the situation in a Chinese language classroom is quite different, many facets may influence students who are learning a foreign language:
Small-group Interaction
1) The way of performing the students used to have in the middle school.Chinese students have to pass the entrance examination if they want to further their education in universities or colleges.The focus of classroom lecture is to obtain the knowledge they can, not stress on the performance, such as oral presentation.Therefore, lectures are usually textbook-centred.
2) Confucius educational thought which has influenced their view of learning.Chinese students are passive in classroom learning, and always expecting the teacher to instruct them.
3) the teaching methodologies to which they already got used to.Communicative approach in language learning/teaching has proven quite successful in the west, but it has not been popular or acceptable in Chinese.The situation is that students pay a lot of attention to grammar, vocabulary and reading.4) language competency.Furthermore, Chinese students often encounter the barriers, language competency, when they are required to present their ideas orally, they find it difficult to employ proper words.And they frequently hear such comment, 'not comprehensive' which subsequently discourages them to perform naturally in class.Therefore, domination, at this stage, seems far from the reality.
Besides the above points preventing the domination in a language class in China, the normal size of a language class consists of 25 to 30 students.Because of the passiveness, it is unlikely for anyone of them to dominate.(Partington & Luker, 1984) This instrument concentrates on the observation of a single student during a lesson.It is not therefore designed to generalise about the experience of a large number of students at a time.This attracted the author to the scheme, for it could help him to follow up the individual experience of any one student who would seem to merit personal observation as result of his earlier interviews with him or her.
Student Observation
The author tried out the instrument in part of a reading comprehension session at intermediate level, and concentrated on a teenage girl sitting beside the teacher.The author immediately realised that no space had been allotted on the form for him to write these identifying details which may help him do the research in depth.The author would have to add these if he were to use this instrument as part of his research.This subsequently applied to some of the other instruments he tried out.The instrument has 14 main categories which are straightforward and therefore manageable.It also has spaces for unpredicted" other" categories, which gives it some flexibility.Entering simple ticks for each category was not difficult.As he did not have access to the original source of the instrument, he was unclear about the reason for the repetition of "answering questions" which seemed, however, to be referring to writing rather than speaking.
The format of the instrument appealed to the author, especially as he could easily change categories in the "Activity" column, according to what he was looking for.However, though the instrument reveals what the learner is doing throughout the lesson, it does not indicate whether this is the activity expected of him or her by the teacher!For example, if expected to listen to a tape-recording, the student could be talking to another.This could be easily rectified by using a tick for expected participation and a cross for deviant behaviour.Of course, such deviant behaviour could still be contributing to language learning.This practice reminded the author that observation instruments could tell only part of the full story.In the process of a lecture, it sometimes happens that students discuss each other about the questions associated with the learning task, which is not assigned by the teacher.What kind of assessment can we give?The 'absent-mind-less' may be part the learning task.Compared with the lecture offered by western teachers, lectures given by the Chinese lecturer are well organised with strict discipline.Any students peaking in class would be regarded as an offensive behaviour if they discuss things without the approval of the lecturer.A normal language class in China allows one voice, either the teacher or one student, not many.(Rees, 1984) The author of this paper found this to be very boring instrument to operate, but this does not necessarily mean than it is not useful.The scheme looks fairly simple, which led him to no studying it carefully beforehand, so that he had problems during the observation with the interpretation of the category "Use of space".The author subsequently realised that it referred to general disposition of the teacher and learners in each 5-minute segment.To specify this in more detail, the author subsequently decided that it would have been better for him to have added grid reference numbers (rather than just the letters A-H) to the initial sketch, so that he might report, example, that after 10 minutes, the teacher moved from A2 to a position at G3.It is not easy to sum up the main categories in any one 5-minutes period, especially as this cannot be realistically done until the next 5-minute period is already under way.Filling in the 6 categories in longhand as a non-native speaker of English took more time than the author would have expected, and might have caused him difficulties in a fast-paced class.To use this time-sampling instrument effectively would need more practice than he had anticipated.
Grid for Taking Field Notes
This instrument tends to be the trap of subjectivity.When an observer practises it, he may find it hard to avoid subjectivity which may appear in his notes because before he takes notes he has to make the comment on a point which depends upon his own knowledge and understanding.So the subjectivity goes along with the observer's view of the class he/she attends.
The author felt that this instrument would be useful in helping him to make consistent field notes to supplement, for example, a tape-recording of a lesson, but that he would not really find a use for it in his research especially as he preferred some of the other piloted schemes.This trial showed that this instrument should be thoroughly familiar to the observer before using it, and that some instruments look deceptively simple to operate.In particular the author realised that the ability to time-sample does not come naturally, and that lessons do not divide themselves into neat segments for researchers.(Nunan, 1989b) The nature and composition of tasks in the language classroom have gained particular prominence in the era of communicative language teaching.As all language classes are composed of tasks, and as determine the language learner's experience, it seemed commonness to examine a task-based observation instrument.
Tasks in Language Learning
As the author does not intend to make cassette recordings of lessons for his research, every class he observes must be regarded as a one-off, with no opportunity for him to listen to it more than once.So any instrument that he uses must accurately record what he wants it to.The 17 items of Nunan's instrument made this very difficult for him to do.Some of the items, e.g."7.The activities are appropriate to the communicative goals of the task" are high inference, and not easy to assess.The author found that "4.The task reflects the nature of language and learning" and "10.The student and teacher roles are inherent in the task" too vague to assist him in the task of rating them.
The recording was complicated by the fact that tasks in the classroom do not necessarily occur in neat succession, but may suffer external interruptions or the insertion of sub-tasks.And it is far from easy to determine exactly what the task is.In this instance, there was a discussion of homework, which appears to score well on the categories.The reality, however, was somewhat different.The class was composed mainly of female students who are culturally reticent about speaking and playing a prominent role in class.The task was consequently dominated by a male student.So though the task was appropriate for the class, not all the students benefited from it.This instrument might be useful as the basis for a questionnaire to discover students' and teachers' views on what makes a good teaching task.For harmonious teaching, there should be some consensus on this between the two parties.Employing Nunan's instrument made the author realise that though a task in the classroom may be theoretically sound, it is the teacher's role to ensure that it is effective for as many learners as possible.
6.5 How New Words Are Practised (Wajnryb, 1992) This was revealing insofar as there were few occasions in the particular class observed where new vocabulary was being taught.In the few instances that occurred, however, the categories seemed to be viable.
The lesson learned here was the obvious one-the author should have checked beforehand that the instrument he intended to use was appropriate for the particular class.Unfortunately, this is not always possible, and in any case, one of the problems with any classroom observation is such unpredictability.
Another point should be mentioned, which has a very typical Chinese characteristic.The way of learning words in Chinese students is much different from the way which the native speakers acquire the new words.There are two ways for Chinese students' enlarging their vocabulary.1) In intensive reading, students obtain the new word by listening to lecturers, consulting dictionaries and other references.The focus of the process is on explaining the usage of the words, and the comparison between some synonyms as well.Examples of the words will be practised in the process.
2) Learning new words focuses more on the words themselves, less on their contextual meaning.
It takes a long time for the students to memorise the new words.They take pains memorising them mechanically, such as writing each word again and again at beginner's stage without fully understanding the meanings.' Questions/Answers (Richards, 1994) The author likes the graphic presentation of this instrument; it shows clearly, for example, that one student dominated the interaction.He did encounter some difficulty in operating it at speed, and would need some further practice before feeling competent in its use.This would be particularly the case in deciding at speed what is a "reflective" question.
Teacher's and Students
The author should have asked beforehand how many students were likely to be in the class, for he had made boxes for only 10, and needed to add 2 by hand during the interaction.An alternative would have been to make room for many more boxes at the outset by abbreviating the content to S1, S2 for Student one, Student two, and so on, and by quickly crossing out any superfluous boxes as soon as the class had settled down.
The completed instrument does not tell us why one student was so prominent, so a designated space would have to be added to the instrument for additional observer comment.This was in fact the class mentioned above where the dominated his female classmates.What the instrument does not tell us is that part of the problem was the female learners were hesitant in giving responses, and the teacher did not allow them sufficient wait-time.., but this could be the function of a different instrument at a different time.Using this instrument confirmed that any observation instrument cannot paint the whole picture, and needs to be supplemented with further information.And the categories of even the most obvious-looking of instrument must be thoroughly understood and mastered before use.
Another point cannot be neglected that the way of Chinese students' performance in class is always connected with passiveness, because they have been strongly influenced by Chinese Educational System.Students have been led by the examinations which are the key reference to the students who want to further their education at each stage.So it is not really necessary or compulsory for them to notice their own performance.Furthermore, the assessment teachers use is the examinations, no other devices such as oral presentation, group work, writing report or paper etc.
Conclusion
After piloting the instruments mentioned above, the author has to take some points concerning the practice in Chinese circumstances into consideration.
Considering the class size in China, it is not practical to operate the instruments employed in this paper.Class size in China is usually larger, 25 to 30 students in language classes.The size of class has been proved to have great influence on students' and teacher's interaction.
Certain grade students in smaller classes benefited in terms of improved performance on some courses (Word et al. 1990, p.16).As to the language class, the size of class makes great difference accordingly.Smaller size of language class will provide students with more chances to practise, which is very important at the early stage in learning a foreign language.
The interaction between students and teacher is different in small or large class.In addition, another facet also affects on the performance, as mentioned in the quotation of word.This is palpable especially in Chinese language class, the freshmen who enter the university usually show enthusiasm in classroom interaction, and their ebullience would ebb away when they are senior students.At this stage there is no difference how large or small the class size is.
The size of class matters when students are in their first two years study at university as the basic skills, which need interaction between teachers and students, are the stresses that always employ intercommunication.
The author has learned from this research that instruments for observing classroom tend to suggest that classes are much more organised and straightforward than they are in real life.In many ways they are rough tools, which can capture only part of the reality of the classroom.Common-sense categories such as "participating" often need sub-specification if they are to be interpreted meaningfully.Trying out these instruments showed the author how very general and subjective such everyday terms really are as they stand.
One interesting facet revealed by the piloting was the clear difference in the classroom between expected behaviour and actual behaviour; this was something he was always dimly aware of, but had never really brought to consciousness.
Published instruments may be tidied up versions of the original, and not leave spaces where identification details and observer comments can be listed.These may have to be added indeed, it seems that few instruments can be used in new situation without being adapted in some way, either to fit the different context, or to suit the observer's preferences.
It is important, too, to read the background information on any instrument.Picking up an instrument just because it looks interesting, without knowing the ideas on which it was compiled, can be misleading, and lead to doubts about interpreting some of the categories.In fact, it appears that it is unwise to use instruments "cold", for some previous practice or training is required if they are to be operated effortlessly.One must be fully conversant with them.
The author found time-sampling difficult, as it needs considerable skill to sum up quickly the main points of a previous interaction when another one is taking place.This is particularly difficult when summarizing in longhand, either in English or in Chinese symbols.In fact the author discovered that he was happier using instruments that require merely the ticking of categories.
The piloting exercise made the author realise that some advance knowledge of a class is useful to ensure that an appropriate observation instrument is being used.But one can never predict the unpredictability of what goes on in some classes.
The author also became aware that no observation instrument is perfect, for each has drawbacks in operation, and there is often the need to add supplementary handwritten comments, or to explore further with other instruments.
The author was pleasantly surprised to find that his presence in the classroom did not seem to seriously distract the learners; at the same time he discovered that recording what goes on in classrooms can be a mechanical and not a very exciting activity.The study has given him a practical and theoretical insight into classroom observation which he did not previously possess, and he feels that it has made him better equipped to select and operate observation instruments to supplement his research. | 8,696.8 | 2012-04-25T00:00:00.000 | [
"Education",
"Linguistics"
] |
Proximity of transmembrane domains 1 and 3 of the gamma-aminobutyric acid transporter GAT-1 inferred from paired cysteine mutagenesis.
GAT-1 is a sodium- and chloride-dependent gamma-aminobutyric acid transporter and is the first identified member of a family of transporters that maintain low synaptic neurotransmitter levels and thereby enable efficient synaptic transmission. Because transmembrane domains 1 and 3 contain amino acid residues important for transport activity, we hypothesized that these domains may participate in the formation of the binding pocket of the transporter. Pairwise substitutions have been introduced in several predicted transmembrane domains and in the first extracellular loop of GAT-1. In the double mutant W68C/I143C, in which the cysteines were introduced at locations at the extracellular part of transmembrane domains 1 and 3, respectively, approximately 70% inhibition of transport was observed by cadmium with an IC50 of approximately 10 microm. This inhibition was not observed in the corresponding single mutants and also not in > 10 other double mutants, except for V67C/I143C, where the half-maximal effect was obtained at approximately 50 microm. The inhibition by cadmium was only observed when the cysteine pairs were introduced in the same polypeptide. Our results suggest that transmembrane domains 1 and 3 come in close proximity within the transporter monomer.
The overall process of synaptic transmission is terminated by neurotransmitter transporters located in the plasma membranes of cells surrounding the synapse. Most neurotransmitters are removed from the synaptic cleft by sodium-and chloridedependent transporters, which form a family that also includes (besides the transporters for ␥-aminobutyric acid (GABA)) 1 those for serotonin, dopamine, norepinephrine, and glycine (for reviews, see Refs. 1 and 2). The GABA transporter GAT-1 (3,4), the first identified member of this family, catalyzes electrogenic sodium:chloride:GABA cotransport with a stoichiometry of 2:1:1 (5)(6)(7)(8). The role of chloride in this process is still under debate, because it has been proposed that, during sodiumcoupled GABA transport, obligatory chloride out /chloride in exchange takes place (9). The predicted topology of GAT-1 (4) and the other members of this family is 12 TMDs linked by hydro-philic loops with the amino and carboxyl termini located inside the cell, and strong experimental support for this model has been obtained using the serotonin transporter SERT (10).
GAT-1 has 15 endogenous cysteine residues of which three are located on extracellular loops. Studies on the related dopamine and serotonin transporters indicate that the cysteine residues, equivalent to their GAT counterparts at positions 164 and 173 (which are located in the second extracellular loop), form a disulfide bond (11,12). This leaves cysteine 74, located on the first extracellular loop, as the only cysteine that reacts with impermeant methanethiosulfonate reagents. GABA transport by wild-type GAT-1 is only very modestly inhibited by the membrane-impermeant (2-(trimethylammonium) ethyl-)methanethiosulfonate (13), indicating that this position is not easily accessible. The reagent (2-aminoethyl)methanethiosulfonate has a definite membrane permeability and can react with cysteine 399, located on the intracellular loop connecting TMDs 8 and 9 (14). Attempts to create a functional cysteineless GAT-1 have been unsuccessful (14), and the same is true for DAT, another member of the SLC6 family (15).
Mutagenesis studies, of GAT-1 and SERT, in particular (but also of other members of the family), have identified a conserved tyrosine (tyrosine 140 and 176 in GAT-1 and SERT, respectively) in TMD 3 critical for neurotransmitter binding (16,17). Two TMD 3 residues of SERT, corresponding to leucine 136 and isoleucine 143 of GAT-1, are located one turn of a putative ␣-helix below or above the critical tyrosine 176, respectively (17). The highly conserved TMD 1, which appears to line the permeation pathway and appears to form a more extended structure than expected from a membrane-embedded ␣-helix (18,19), contains several amino acid residues critical for function. They have been implicated in playing an important role in the interaction with the neurotransmitter (20,21), in the determination of the apparent affinity for sodium (21,22), and in the sodium-dependent conversion of the leak mode of the transporter into the coupled mode (21). These observations suggest that TMDs 1 and 3 may participate in the formation of the binding pocket in this family of transporters and therefore may be close in space. In this study, we have provided the first evidence for this idea.
EXPERIMENTAL PROCEDURES
Generation and Subcloning of Mutants-Mutations were made by site-directed mutagenesis of the wild-type GAT-1 in the vector pBluescript SK(-) (Stratagene) using single-stranded uracil-containing DNA as described previously (23,24). Briefly, the parent DNA was used to transform Escherichia coli CJ236 (dut Ϫ , ung Ϫ ). From one of the transformants, single-stranded uracil-containing DNA was isolated upon growth in uridine-containing medium, according to the standard protocol from Stratagene, using helper phage R408. This yields the sense strand, and consequently mutagenic primers were designed to be antisense. Double cysteine mutants were prepared by subcloning mutants in TMD 3 into a construct containing mutants in TMD 1 using unique restriction enzymes. The coding and non-coding strands were sequenced between the unique restriction sites.
Cell Growth and Expression-HeLa cells were cultured in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum, 200 units/ml penicillin, 200 g/ml streptomycin, and 2 mM glutamine. Infection with recombinant vaccinia/T7 virus vTF7-3 (25) and subsequent transfection with plasmid DNA, as well as GABA transport, was done as published previously (26). In all the experiments described, the expression vector was pBluescript SK(-). The data presented in the figures deal with the inhibition of the transport of mutants by sulfhydryl Cu(II)(1,10-phenanthroline) 3 (CuPh) and Cd 2ϩ . The values of the transport of the untreated mutants as a percentage of that of the parent construct are given in the text or in the figure legends, where the mutant appears for the first time.
Inhibition by Copper(II)(1,10-Phenanthroline) 3 Inhibition by Cd 2ϩ -HeLa cells transfected with the indicated construct were washed once with choline solution and preincubated with the indicated concentrations of cadmium chloride in sodium solution (150 mM NaCl, 5 mM KP i , pH 7.4, 0.5 mM MgSO 4 , and 0.3 mM CaCl 2 ) for 5 min at room temperature. The solution was aspirated, and the cells were assayed for transport in the presence of the same concentration of cadmium chloride.
Effects of Thiol Cross-linking and Cd 2ϩ on Transport by W68C Double Mutants-
The pioneering studies of Kaback and co-workers (27) to detect proximity relationships in lactose permease were based on creating, under oxidative conditions, a disulfide bond between two single cysteines, each located on a different TMD or loop. Extracellular loop 1, is the only extracellular loop between TMDs 1 and 3. If we could engineer a protease-sensitive site in this loop, it might be possible to detect cross-links between TMDs 1 and 3 by detection of the full-length transporter after cross-linking subsequent to proteolytic cleavage. However, this loop is almost completely conserved between the members of the SLC6 transporter family. Moreover, cysteine 74 located on this loop reacts poorly with (2-(trimethylammonium)ethyl)methanethiosulfonate (13), and therefore the protease would unlikely be able to cut at such an engineered site.
On the other hand, we have shown in the glutamate transporter GLT-1 that functional criteria, such as inhibition of transport by CuPh, which is apparently due to the fact that these transporters undergo extensive conformational changes during the translocation process, can be used as evidence for proximity (28). In fact, the recently published high resolution structure of a glutamate transporter homologue (29) confirmed the close proximity of a pair of cysteines, each located on a different re-entrant loop (28). Our assumption is that positions that are accessible to a sulfhydryl reagent are excellent candidates to be accessible to reactive oxygen species or divalent metal ions.
Based on the studies of SERT (17), in GAT-1, the candidate positions in TMD 3 are 136 and 143. In the external part of TMD 1, the positions, where an externally accessible cysteine can be introduced so that functionality is maintained, are 67, 68, and 70 (19). Unless stated otherwise, all of the studies described below were done in the background of C74A.
We started with the W68C/I143C double mutant in the background of C74A. Potent inhibition of the transport of this double mutant was observed by CuPh (Fig. 1). No such inhibition was observed by the I143C single mutant (data not shown) or by the W68V/I43C double mutant (Fig. 1). This indicates that the inhibition by CuPh observed in the double mutant is unlikely to be caused by the cross-linking of cysteine 143 with a previously buried endogenous cysteine, which became exposed because of the replacement of tryptophan 68. However, significant inhibition of transport by the single mutant W68C was observed with CuPh, and the same was true when, simultaneously, a mutation to another residue, such as valine ( Fig. 1) or to alanine (data not shown) was introduced at position 143.
The concentration dependence of the inhibition by CuPh of the W68C/L136C double mutant by CuPh was similar to that observed in W68C alone (data not shown). We tried to reduce the inhibition by CuPh of transport by W68C by removing the endogenous cysteines of GAT-1 one at a time or in combinations in the background of W68C, yet the inhibition remained the same as in W68C alone. We also considered the possibility that the cysteine introduced at position 68 might be cross-linked to the same cysteine in another transporter monomer. However, no higher molecular weight species than the monomer was found after the CuPh treatment of W68C followed by surface biotinylation (data not shown).
Although the strongest inhibition of transport by CuPh was observed in the W68C/I143C double mutant, we looked for additional evidence that these two positions could be close in space and examined the ability of the W68C/I143C double mutant to form a high affinity Cd 2ϩ binding site. This divalent cation interacts with cysteinyl side chains (30,31), and the affinity of the interaction is dramatically increased when the Cd 2ϩ can be coordinated by two cysteines (32).
Exposure of the single mutant W68C or the double mutants W68C/I143V or W68C/I143A to up to 500 M Cd 2ϩ had very little effect on [ 3 H]GABA uptake (Fig. 2), and the same is true for the W68V/I143C double mutant (Fig. 2) and the single mutant I143C (data not shown, but see Fig. 5). In contrast to these controls, an inhibition of ϳ70% is observed on uptake by the W68C/I143C mutant, with a half-maximal effect at ϳ10 M (Fig. 2). This inhibition is reversible when, after preincubation with Cd 2ϩ , the cells are washed with NaCl-containing medium supplemented with 2 mM EDTA (data not shown).
The inhibition by Cd 2ϩ was only observed when the cysteine pairs were introduced in the same polypeptide (Fig. 3) but not when the single mutants were coexpressed. This suggests that the cysteines introduced at positions 68 and 143 come in close proximity within the transporter monomer but not at the interface of two transporter monomers. The specificity of the ability of the two cysteines introduced at positions 68 and 143 to generate a Cd 2ϩ binding site is illustrated by the fact that very little inhibition of [ 3 H]GABA transport was observed by concentrations as high as 500 M Cd 2ϩ in the double mutants W68C/L136C, G79C/I143C, and A81C/I143C (with position 79 located in extracellular loop 1 and position 81 at the extracellular end of TMD 2) (Fig. 3). The same was true for each of the single mutants W68C, L136C, and I143C, each in the wild-type background (here a cysteine is present at position 74; in contrast, the single mutants shown in the other figures were in the C74A background). Cd 2ϩ sensitivity in the latter cysteine mutants would have been indicative of their ability to form a Cd 2ϩ binding site with the cysteine at position 74 in the extracellular loop 1 (Fig. 3).
Effects of Cadmium Ions on V67C/I143C and F70C/I143C Double Mutants-Adjacent to tryptophan 68 is valine 67, and the V67C mutant also has significant transport activity (19). Cd 2ϩ also inhibited uptake by the double mutant V67C/I143C up to a maximum extent of ϳ70% (Fig. 4), but its apparent affinity, with an IC 50 of ϳ50 M, was considerably lower than that observed with W68C/I143C (Fig. 2). No significant inhibition by Cd 2ϩ was observed in the single mutants V67C (Fig. 4) or I143C (Fig. 5) or in the double mutants V67C/I143A (Fig. 4), V67S/I143C, or V67C/L136C (Fig. 5). Further evidence for the specificity of the formation of the Cd 2ϩ binding site comes from the fact that uptake by V67C in the wild-type background (probing the ability of the cysteines at positions 67 and 74 to form such a site) also was not significantly inhibited by the divalent cation (Fig. 5). Again, as was the case for the pair W68C and I143C (Fig. 2), Cd 2ϩ did not inhibit when the single mutants V67C and I143C were coexpressed (Fig. 5).
There was only a very modest inhibition by Cd 2ϩ on transport by the F70C/I143C double mutant, which was only slightly increased as compared with that on the single mutant F70C and the double mutant F70C/I143V (Fig. 6). However, Cd 2ϩ inhibition of transport in the double mutant F70C/I143A was almost the same as in F70C/I143C (Fig. 6). Thus it appears that the weak inhibition observed is not necessarily because of the simultaneous presence of cysteines at positions 70 and 143. DISCUSSION We have created a Cd 2ϩ binding site formed by cysteines introduced at TMD 1 position 68 and TMD 3 position 143 (Figs. 2 and 3). The simultaneous presence of both of these cysteines is required. The site is not formed 1) with the single cysteine replacements (Figs. 2 and 5); 2) with one cysteine introduced at either position and a mutation to an amino acid other than cysteine at the other position (Fig. 2); 3) when, besides the cysteine at position 143, another is introduced at position 74 or at 79 in the first extracellular loop or at position 81 at the top of TMD 2 (Fig. 3); and 4) when, besides the cysteine at position 68, another is introduced at position 136 in TM3 or at position 74 (Fig. 3). We observed an additional Cd 2ϩ binding site when a cysteine is introduced at the neighboring TMD 1 position 67 together with the one at position 143, but the Cd 2ϩ affinity was at least five times lower (Fig. 4). A very modest inhibition is also observed in the F70C/I143C double mutant, but this inhibition does not depend on having a cysteine at position 143, because it is also observed in the F70C/I143A double mutant (Fig. 6). This latter result suggests that a perturbation at position 143 may cause one of the endogenous cysteines present in GAT-1 to come closer to the cysteine introduced at position 70. We did not explore this possibility due to the very low activity of the F70C/I143A double mutant (ϳ3% of WT). This would hamper the approach of replacing the endogenous cysteines one at a time to abolish Cd 2ϩ sensitivity.
There is good evidence that other members of the SLC6 family, such as SERT or the dopamine transporter DAT, exist in the membrane as dimers (33,34) or as a dimer of dimers (35). It is very well possible that this is a property of all of the members of this family, with perhaps the glycine transporters 1 and 2 as an exception (36). However, the Cd 2ϩ binding site formed by the cysteine pair introduced at positions 68 and 143 appears to be intra-rather than intermolecular (Fig. 3). This observation suggests the possibility that the monomeric form of GAT-1, and likely also that of the other family members, may be the functional unit, and the role of the oligomeric state may be in the targeting of the transporters to the plasma membrane (37) or in the formation of a structural framework enabling the monomers to fulfill their function. An example of the alternative possibility comes from the high resolution structure of a glutamate transporter homologue (29); the monomer is the functional unit, but the trimer is essential, because it allows the assembly of the bowl-like structure, which effectively shortens the membrane width for glutamate translocation. A high resolution structure of an SLC6 family member will be required to answer this issue in this family.
Formation of a Cd 2ϩ binding site in the W68C/I143C double mutant indicates that the distance between parts of TMD 1 and 3 positions is Ͻ5 Å, compatible with the possibility that the extended form of GABA may bridge this distance and interact simultaneously with molecular determinants on both TMDs. Tyrosine 140, located one turn of a putative ␣-helix below position 143, plays a crucial role in GABA binding (16), and similar evidence exists for such a role of the conserved tyrosine in the glycine transporter GlyT (38) and in SERT (17). Because this tyrosine is invariant among all family members, we have suggested that it interacts with the amino group of the substrate, the moiety common to amino acids and biogenic amines (16). Arginine 69 from TMD 1 is absolutely critical for GABA transport (21,39), and we have suggested that this residue could be important in the liganding of the carboxyl group of GABA (19,21). In the biogenic amine transporters, there are suggestions that an aspartate, at the TMD 1 position equivalent to that of 63 of GAT-1, is critical for substrate binding (40). The distance constraint revealed by our study clearly supports the idea that, in GAT-1 and presumably in the other family members as well, the binding determinants for the substrate are located on TMD 1 and 3.
The affinity for Cd 2ϩ in the V67C/I143C mutant is at least 5-fold lower than that in the W68C/I143C mutant. There are several possible explanations for this observation. First, position 68 may be closer to 143 than is position 67. An alternative possibility is that the bond angles of the pair 68/143 are more conducive to Cd 2ϩ binding than those of the 67/143 pair. Finally, although it has been reported that the apparent affinity for the interaction with Cd 2ϩ is increased even when the binding site is coordinated by two cysteines (32), in another system, three cysteines have been implicated in Cd 2ϩ binding (41). However, the apparent affinity for Cd 2ϩ in the latter case (K d ϳ5 M) was lower than in the former (K d ϳ0.1 M), indicating that other parameters are relevant as well. Nevertheless, it is possible that the higher affinity for Cd 2ϩ in the W68C/I143C pair is the result of a contribution of another (yet unknown) contact site within the W68C/I143C transporter and that this additional site contributes less to the binding site in V67C/ I143C. Such contributions have been observed during zinc sensitivity studies in DAT mutants (42).
The effects of CuPh on the W68C mutants also support the idea that the cysteines introduced at positions 68 and 143 are in close proximity (Fig. 1). Even though the W68C/I143C mutant was much more sensitive to CuPh than W68C, there was still a significant inhibition in this mutant as well as in those where the W68C mutation was paired with a non-cysteine mutation at position 143 (Fig. 1). We failed to identify an endogenous cysteine as a potential partner for the cysteine introduced at position 68, and the reason for this inhibition by CuPh is not yet clear. Another potential target to find such relationships may be TMD 2, which is highly conserved in the SLC6 family. This TMD is possibly close to the external part of TMD 1, because extracellular loop 1 is very short. The activity of TMD 2 cysteine mutants, except for A81C, is not inhibited by methane thiosulfonate reagents (data not shown), and the same is true in SERT (43). However, the cysteines introduced into TMD 2 may still be accessible to the much smaller Cd 2ϩ . Thus probing pairs of cysteines introduced into TMD 2 together with those in TMD 1 and/or 3 with Cd 2ϩ may potentially lead to the identification of additional structural constraints in GAT-1. | 4,695.6 | 2005-07-08T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Confidence intervals for the between‐study variance in random effects meta‐analysis using generalised Cochran heterogeneity statistics
Statistical inference is problematic in the common situation in meta‐analysis where the random effects model is fitted to just a handful of studies. In particular, the asymptotic theory of maximum likelihood provides a poor approximation, and Bayesian methods are sensitive to the prior specification. Hence, less efficient, but easily computed and exact, methods are an attractive alternative. Here, methodology is developed to compute exact confidence intervals for the between‐study variance using generalised versions of Cochran's heterogeneity statistic. If some between‐study is anticipated, but it is unclear how much, then a pragmatic approach is to use the reciprocals of the within‐study standard errors as weights when computing the confidence interval. © 2013 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
Introduction
Meta-analysis is the statistical process of pooling the results from separate studies that investigate the same treatment or issue. One major difficulty when attempting this in practice however is that the study results may be too disparate to assume that they measure a common underlying effect. Indeed, some between-study variation may be thought to be inevitable due to different study practices and population mixes. The random effects model, described in detail in Section 2, is commonly used to model this between-study heterogeneity and estimate a meaningful average effect. Here, an unobserved random effect provides any necessary additional variation in the studies' results.
Although the average effect is the parameter of primary interest, quantifying the extent of the between-study variation is also important (Higgins and Thompson, 2002;Higgins et al., 2009). In particular, if this variability appears to be considerable, then investigating the reasons for this is often encouraged (Thompson, 1994). However, the uncertainty in the estimate of the between-study variance is often large (Biggerstaff and Jackson, 2008) and a measure of this uncertainty is an important aid to inference. Here, a procedure for calculating confidence intervals is developed for this purpose.
More specifically, generalised Cochran heterogeneity statistics (DerSimonian and Kacker, 2007) are used to provide confidence intervals in a similar way to Biggerstaff and Jackson (2008), who used the more conventional heterogeneity statistic. Here, it is also proved that the calculations result in 'well-behaved' confidence intervals with the correct coverage. Methods based on these heterogeneity statistics are however not the most efficient (Jackson et al., 2010). Likelihood-based methods are asymptotically efficient, but these present problems. For example, inferences from the asymptotic theory of maximum likelihood (Hardy and Thompson, 1996;Biggerstaff and Tweedie 1997) cannot be expected to be accurate in situations where there are a small number of studies, as is commonly the case. Small sample correction methods, such as those suggested for the average effect (Noma, 2011;Guolo, 2012), could be usefully employed to provide more accurate inference, but finding accurate likelihood-based methods for variance components with very few observations is inevitably challenging. Furthermore, inferences from Bayesian analyses are sensitive to the prior specification (Higgins et al., 2009;Lambert et al., 2005;Pullenayegum, 2011). In situations where an informative prior distribution for the between-study variance is available, then this could be used, but the resulting analysis requires more assumptions than conventional meta-analyses. A Bayesian analysis using informative prior distributions will also be open to immediate criticism by those who do not find the priors plausible.
Less efficient, but exact and easily computed, methods with good frequentist properties are therefore a desirable alternative. The Q profile method has recently been developed for this purpose (Knapp et al., 2006;Viechtbauer 2007), where Cochran type heterogeneity statistics are used with weights related to the total (within plus the unknown between) study variances. This gives rise to a pivotal test statistic that follows an w 2 distribution that may be inverted to give confidence sets. A similar approach was also suggested by Tian (2008) for normally distributed data, and Bowden et al. (2011) present some closely related ideas. Here, however, an alternative procedure is developed where the weights are instead fixed constants. As demonstrated in a simulation study in Section 4, this can provide shorter exact confidence intervals than the alternatives. The methods applied in this paper provide exact confidence intervals under the assumptions of the random effects model for meta-analysis, but this exactness requires normally distributed study outcomes with within-study variances that are treated as fixed and known. Hence, the lengths of the confidence intervals will be used to determine which method provides the most informative inference and so makes best use of the data.
The rest of the paper is set out as follows. In Section 2, the random effects model for meta-analysis and Cochran's conventional heterogeneity statistic are described. In Section 3, DerSimonian and Kacker's generalised heterogeneity statistic is presented and all the properties needed to ensure that these can be used to provide confidence intervals for the between-study variance are derived. In Section 4, a simulation study is performed that compares the proposed method with the Q profile method. Three variations of the methods are applied to some example datasets in Section 5, and the paper concludes in with a short discussion in Section 6.
The univariate random effects model and Cochran's heterogeneity statistic
The random effects model assumes that the outcome from each of n studies, y i for i = 1, 2, . . ., n, may be modelled using the equation Á and all u i and e i are mutually independent. It is conventional in the meta-analysis setting to assume that the within-study variances s 2 i are fixed and known, but they are estimated in practice. This convention is followed but methodology, which reflects the fact that the s 2 i are estimated, has also been developed (Malzahn et al., 2000). The random variable u i denotes the random, study-specific deviation from the mean effect, and the parameter t 2 represents the between-study variance: t 2 > 0 reflects underlying study differences in a formal sense, and if t 2 = 0, then all studies have the same underlying average effect m, providing a fixed effect model. Cochran's heterogeneity or Q statistic is frequently used in conjunction with this model and is conventionally written as the weighted sum of squares effects model treats the s 2 i as fixed constants, rather than random quantities, and hence, any fixed function of the s 2 i is also treated as a constant by this model. The conventional choice for this function is the reciprocal function. Estimation of t 2 and then inference m may be performed as described by DerSimonian and Kacker.
A generalisation of Biggerstaff and Jackson's result
In this section, a generalisation of the result from Biggerstaff and Jackson (2008) is proved. If a i = w i for all i, then the generalised heterogeneity statistic reduces to Cochran's heterogeneity statistic, and Biggerstaff and Jackson's result is recovered as a special case.
Write DerSimonian and Kacker's generalised heterogeneity statistic in matrix form. Let Y be the vector containing the Y i , and let B ¼ A À 1 aþ aa t , where A is the diagonal matrix containing the a i , a is the vector containing the a i , a + = P i a i and t denotes matrix transpose. The matrix representation of DerSimonian and Kacker's generalised heterogeneity statistic, Q a , is then given by Next, let Σ denote the variance of Y, the diagonal matrix with entries s 2 i þ t 2 À Á , and let Z denote a standard n dimensional multivariate normal vector. Noting that Q a is location invariant, we can write defining S = Σ 1/2 BΣ 1/2 . B is symmetric and hence, so is S. Following the same procedure described by Biggerstaff and Jackson (2008), writing S in terms of its spectral decomposition, we obtain where w 2 i 1 ð Þ are mutually independent chi-squared random variables with 1 degree of freedom and l 1 (S) ≥ l 2 (S) ≥ ⋯ ≥ l n (S) are the ordered eigenvalues of S.
The parameters that the distribution of Q a depend on
The l i (S) are functions of t 2 through their dependence on S but do not depend on m. The eigenvalues l i (S) also depend upon the s 2 i and a i , but these are taken to be fixed constants: the s 2 i are treated as fixed constants in the random effects model and the a i are fixed values, possibly a fixed function of the s 2 i , chosen by the analyst prior to analysis. Hence, the only unknown parameter that the distribution of Q a depends only on is t 2 . Next, some other important properties possessed by Q a are proven.
3.3. Property 1: Q a is distributed as a positive linear combination of w 2 (1) random variables with exactly (nÀ1) positive coefficients Biggerstaff and Jackson (2008) proved that the conventional Q statistic (1) is distributed as a positive linear combination of w 2 (1) random variables with at most (nÀ1) positive coefficients. The Appendix contains a proof that this is also the case for the generalised heterogeneity statistic Q a where it is further shown that this statistic is distributed as such a linear combination with exactly (nÀ1) positive coefficients.
3.4. Property 2: The cumulative distribution function of Q a is a continuous and strictly decreasing function in t 2 Biggerstaff and Jackson (2008) implicitly assumed this property when obtaining confidence intervals using the conventional Q statistic.
Proof: The entries of S are continuous in t 2 and hence, so are its eigenvalues. Hence, from the form of (2), the cumulative distribution function of Q a is continuous in t 2 and all that remains to be shown is that it is strictly decreasing in t 2 .
The eigenvalues in (2) are those of Σ 1/2 BΣ 1/2 where Σ depends on t 2 and B does not. Suppose that we also consider a larger value of t 2 , denoting the larger variance of Y as ΣM, where M is a diagonal matrix where all the diagonal entries are greater than one. Then l i (M) > 1 for all i. The eigenvalues in (2) for the larger value of t 2 are those of Σ 1/2 M 1/2 BΣ 1/2 M 1/2 , which are equal to those of SM. Then the second inequality in Equation (5) in the Appendix, with C = S and D = M, immediately shows that all the eigenvalues that provide coefficients in (2) are greater when considering the larger t 2 and so are strictly increasing in t 2 . Hence, the cumulative distribution function of Q a is strictly decreasing in t 2 .
Confidence intervals for t 2 by test inversion
Now that the necessary properties of Q a have been established, the test inversion procedure described by Casella and Berger (2002), Section 9.2.1, will be used to construct confidence sets with coverage probability 1 À a 1 À a 2 , where a 1 + a 2 denotes the significance level associated with the two-tailed test. Our proposed method essentially follows the method proposed by Biggerstaff and Jackson (2008) when using Q a with the conventional weights a i = w i .
We accept the null hypothesis H 0 : t 2 ¼ t 2 0 , and thus, t 2 0 lies in the corresponding confidence set, if and only if and Conceptually, (3) ensures that t 2 0 is not too small to result in the observed q a and so provides a lower bound; (4) provides an upper bound. We reject the null hypothesis H 0 : t 2 ¼ t 2 0 using (3) if q a is greater than the (1 À a 1 ) quantile of Q a . Similarly, we reject the null hypothesis using (4) if q a is less than the a 2 quantile. Because a 1 + a 2 < 1, it is impossible for (3) and (4) to simultaneously reject the null hypothesis. Hence, the significance level of the test is a 1 + a 2 , as required when producing a confidence set with coverage probability 1 À a 1 À a 2 .
Obtaining confidence intervals numerically
The cumulative distribution function of Q q , P(Q a ≤ q a ; t 2 ), can be evaluated using the algorithm for a positive linear combination of w 2 random variables proposed by Farebrother (1984). The CompQuadForm R package implementation of this was used throughout.
If P(Q a ≤ q a ; t 2 = 0) < a 2 , then no t 2 0 satisfies (4); t 2 is nonnegative and the cumulative distribution function of Q a is decreasing in t 2 . Hence, the strict implementation of (3) and (4) provides a null set for the confidence set for t 2 . This is analogous to the possible null confidence set that the Q profile method proposed by Viechtbauer (2007) may also provide. However, this only occurs when the observed q a is very small, so instead of providing a null set for the confidence set in such instances, it is preferable to give t 2 If t 2 > 0, as typically thought to be the case, then this does not increase the coverage probability of the confidence set. If t 2 = 0, then the coverage probability of the confidence set increases by a 2 and the confidence set is conservative. This convention can also be adopted when using the Q profile method, and this was suggested by Knapp et al. (2006). Alternative interpretations of the null set are possible in application however, such as 'the data appear to be highly homogenous' or even 'the interval estimation fails'.
If instead P(Q a ≤ q a ; t 2 = 0) ≥ a 2 then, because the cumulative distribution function P(Q a ≤ q a ; t 2 ) is continuous and strictly decreasing in t 2 , we can use any simple numerical method to find the value t 2 u that satisfies P Q a ≤q a ; t 2 ¼ t 2 Then all t 2 in the interval 0; t 2 u  à satisfies Equation (4) and all other values of t 2 do not. Next, if P(Q a ≥ q a ; t 2 = 0) ≥ a 1 , then all t 2 in the interval [0,1] satisfy (3) and we define t 2 l ¼ 0; Biggerstaff and Jackson (2008) refer to this as 'truncating' the lower confidence bound to zero. Otherwise, we can use any simple numerical method to find the value of t 2 that satisfies P Q a ≥q a ; t 2 ¼ t 2 l À Á ¼ a 1 and then all t 2 in the interval t 2 l ; 1  à satisfies (3) and all other values do not. The intersection of the intervals t 2 l ; 1 It is easily shown that the resulting confidence set is the interval t 2 First, note that if the null confidence set has been interpreted as [0,0], then trivially, t 2 u ≥t 2 l . Otherwise t 2 u > 0 and if the lower confidence bound has been truncated to zero then, again trivially, t 2 u ≥t 2 l . Finally, if the lower bound is not truncated, when solving P Q a ≥q a ; t 2 ¼ t 2 l À Á ¼ a 1 , and noting that a 1 + a 2 < 1, we have that Because the cumulative distribution function of Q a is strictly decreasing in t 2 , we must have t 2 l < t 2 u . Hence, t 2 u ≥t 2 l .
A simulation study
To assess the performance of the proposed method, a simulation study was performed with n = 5 studies. This represents the common situation where there is just a handful of studies and the proposed method may be anticipated to be especially valuable. If instead there were just two or three studies, any form of estimation for the random effects model would be extremely challenging, and if there were many more studies, then the sample D. JACKSON size would be sufficient, for example, to make accurate inferences using the profile likelihood in the way proposed by Hardy and Thompson (1996).
Weights of the form a i ¼ 1= s 2 i þ x À Á p were investigated when applying the proposed method, where x took the five values of t 2 described previously, in conjunction with p = 0, 0.5, 1. Provided the analyst specifies the fixed values of x and p to be used prior to examining the data or performing the analysis, the resulting weights are fixed constants because the s 2 i are treated as such in the random effects model. The repeated sampling properties of confidence intervals under the assumptions of the random effects model, where x and p depend in any way on an examination of the data, or any other random variables, are much harder to evaluate but one possibility is explored in Section 4.2. Weights of the form a i ¼ 1= s 2 i þ x À Á p could also be used to make inferences for m in the way described by DerSimonian and Kacker (2007), but here, the focus is the proposed method for constructing confidence intervals for t 2 These three values of p provide an unweighted analysis, and also weights that are related to the within-study standard errors and the corresponding variances. Hence, these values of p are intuitively appealing values to consider; x = 0 and p = 1 correspond to the conventional weights w i ¼ s À2 i . If p = 0, then an unweighted heterogeneity statistic is used irrespective of x; unweighted methods for meta-analysis have previously been proposed (Bonett, 2008;Bonett 2009;Shuster, 2010), and completely arbitrarily x = 0 was used in conjunction with p = 0. Setting x = 0 and p = 0.5 means that the reciprocals of the studies' within-study standard errors are used as weights. If, for example, an a priori value of t 2 is thought plausible, then setting x equal to this means that the weights are related to the total study variances thought plausible before examining the data, which are intuitively appealing weights to use. In situations where a suitable positive value of x is difficult or impossible to state in advance, then this should be set to zero to use weights that are more akin to the conventional ones.
Confidence intervals with a 1 = a 2 = 0.025, and hence, 95% coverage probability, were used throughout. Putting an equal probability of 0.025 in each tail follows common practice, but other possibilities are returned to in the discussion.
To compare the proposed procedure to the established Q profile method, the R package metafor was used to apply Viechtbauer's implementation of this. This method was chosen because it has become popular and, like the proposed method, only requires that the random effects model is assumed for the outcome data used in analysis. For example, neither the proposed method nor the Q profile method requires that the raw data follow a normal distribution.
The results of the simulation study are shown in Table 1, where the mean and the standard deviation of the lengths of the resulting confidence intervals are shown. Here, the proposed method with each set of weights, and the Q profile method, was applied to the same 40,000 simulated datasets for each t 2 ; m = 0 was used when simulating data, but this is immaterial. A different random seed was used for each value of t 2 . The convention where null confidence sets were interpreted as intervals of [0,0] was adopted throughout. The empirical coverage probabilities of the 95% confidence intervals are also shown in Table 1 and are within Monte Carlo error of 0.95 if t 2 > 0 and 0.975 if t 2 = 0, as the theory predicts. One conclusion from Table 1 is that the confidence intervals are in general very wide, reflecting the considerable uncertainty in t 2 for examples with just five studies. The results for the best performing method (shortest average confidence interval) for each value of t 2 are highlighted in bold in Table 1.
The results in Table 1 show that as t 2 increases, the average lengths of all confidence intervals, and the variation in these lengths, also increase. In terms of the previously proposed methods, the Q profile method provides shorter confidence intervals when there is considerable heterogeneity than intervals based on Cochran's conventional heterogeneity statistic ('B and J' in Table 1). However, this requires I 2 > 0.5 (t 2 > 0.069), and otherwise, Biggerstaff and Jackson's method is preferable.
Intuition suggests that using p = 1 in conjunction with a value of x that is appropriate in the context of the metaanalysis in question will ensure that the weights used will most accurately reflect the true variance structure in the data and hence provide shorter confidence intervals. This intuition is confirmed in Table 1, where the method that provides the shortest confidence intervals is in every case the one that uses the inverse of the true total study variances (p = 0and x = t 2 ). Cochran's heterogeneity statistic can therefore be seen as quite an extreme case, where the weights incorporate no between-study variance whatsoever. In practice, using values of x that are thought a priori close to the true value can be expected to perform better than sticking to the conventional weights. Those more comfortable with the I 2 statistic could specify a plausible value and convert this to a value of x to use in the weights. However, this suggestion requires some a priori knowledge about the likely extent of the between-study variation, which the author does not possess for the examples that follow, and so may be difficult to implement in practice.
The results for p = 1/2 are interesting because, for example, p = 1/2 and x = 0 outperforms the Q profile method unless the heterogeneity is very severe and only increases the length of the intervals slightly compared to D. JACKSON Table 1. Results from the simulation study. 40,000 simulated datasets were produced for each value of t 2 . The average lengths of the 95% confidence intervals are shown with their standard deviations in parentheses. The results for the procedure that provides, on average, the shortest intervals is highlighted in bold font for each value of t 2 . 'B and J' indicates the conventional weights, and so the procedure suggested by Biggerstaff and Jackson (2008) is used. The empirical coverage probabilities of the 95% confidence intervals are also shown. Biggerstaff and Jackson's method when the heterogeneity is mild. Weighting studies by the reciprocal of their within-study standard errors in this way, rather than by their variances as convention dictates, appears to provide a sensible and viable option when there is little a priori knowledge about the extent of heterogeneity, but some is anticipated. This weights the studies more equally than the usual weights and so better reflects the true variance structure when heterogeneity is present. Hence, it makes intuitive sense to consider this alternative set of weights under these circumstances. This proposal is compared with the established alternatives using some real datasets in Section 5.
Additional simulation studies
Further simulation studies were performed, again using 40,000 simulated datasets for each scenario, where different random seeds were used for each combination of n and t 2 . Sample sizes of n = 10, n = 20 and n = 40, and the same values of t 2 as in Table 1, were used in these additional simulation studies. Within-study variances were obtained in the same manner as before, as equally spaced quantiles, from 0% to 100%, from the distribution suggested by Brockwell and Gordon (2007). The results from these simulation studies reinforce the previous conclusions. In each case, Biggerstaff and Jackson's method outperformed (shorter confidence intervals) the Q profile method when the heterogeneity was mild, but when the heterogeneity was large, these roles were reversed. In every instance, p = 1 in conjunction with x equal to the true between-study variance performed very well as anticipated. Perhaps most importantly, the choice of p = 1/2 and x = 0 seemed a reasonable compromise between Biggerstaff and Jackson's method and the Q profile method, exactly as it did for n = 5. The results from these additional simulation studies are available in the supplementary materials that accompany the paper.
Weighting by the reciprocal of the estimated total study variances
Weights of the form a i ¼ 1= s 2 i þ t 2 À Á , so that the weights are the reciprocals of the true total study variances, were found to perform well in the simulation study. However, because t 2 is unknown, it is not entirely fair to compare the use of these weights with the established methods. It is however tempting to use weights equal to the estimated total study variances so that a i ¼ 1= s 2 i þt 2 À Á . These weights are easily computed and straightforward to use in application but their use invalidates the theory that ensures that exact confidence intervals are obtained, because the weights a i are now random variables.
Despite this theoretical objection, results were also obtained, using the simulated datasets used to produce Table 1 and weights of the form a i ¼ 1= s 2 i þt 2 À Á , wheret 2 is the usual estimate originally proposed by DerSimonian and Laird (1986). The average length of the resulting confidence intervals (with standard deviations in parentheses, as in Table 1) were 0.872 (0.862), 1.182 (1.050), 1.564 (1.306), 2.751 (2.136) and 11.541 (8.396) for t 2 = 0, 0.029, 0.069, 0.206 and 1.302, respectively. Furthermore, the empirical coverage probabilities of the nominal 95% confidence intervals were for these same t 2 were 0.978, 0.953, 0.953, 0.954 and 0.948. Because the Monte Carlo standard error associated with estimating a probability of 0.95 with a sample size of 40,000 is ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 0:95 Â 0:05=40000 p % 0:001, these results provide evidence that this procedure fails to provide the nominal coverage probability exactly but also that this departure from the nominal probability level is too small to be of any practical concern. This method also appears to perform well compared with the alternatives, in that it provides relatively short 95% confidence intervals, and in particular, it appears to provide shorter confidence intervals than using the weights proposed for general use in Section 4.1 (p = 1/2 and x = 0; seventh row of Table 1). These conclusions are supported by the results using n = 10, 20 and 40; all the results for these larger sample sizes are available in the supplementary materials that accompany this paper.
Further investigation is required before this procedure can be safely recommended for general use, because the random weights invalidate the theory, but this simulation study suggests that using the reciprocal of the estimated total study variances as weights, which are the conventional weights for making inferences about m (DerSimonian and Laird, 1986;Jackson et al., 2010) may also prove to be a good option for obtaining confidence intervals for t 2 .
Examples
Four examples were analysed by Biggerstaff and Jackson (2008), and these will also be used here. See Biggerstaff and Jackson for full details, but briefly, the examples use meta-analytic data from studies that examine: (i) aspirin and heart attack; (ii) diuretic and preeclampsia; (iii) glycerol for acute stroke; and (iv) sclerotherapy and cirrhosis. Three methods for obtaining exact confidence intervals will be applied to each example. First, the established Q profile method (first row of Table 1) and the method proposed by Biggerstaff and Jackson using Cochran's Q statistic (second row of Table 1) will be used. Finally, generalised Cochran heterogeneity statistics will be used, where the weights are the reciprocals of the within-study standard deviations (seventh row of Table 1). This follows the suggestion in Section 4, where some heterogeneity is anticipated but it is uncertain how much. Now that we have real data, however, the exactness of the confidence intervals is brought into question, because the random effects model only provides an approximation for data such as these.
The results are shown in Table 2. In three of the four examples, the proposed method (weighting by the reciprocal of the within-study standard errors) provides shorter confidence intervals than the method used by Biggerstaff and Jackson. This reflects the fact that statistical heterogeneity is present in all four datasets (Biggerstaff and Jackson, 2008). However, in three of the four examples, the Q profile method provides the shortest confidence interval. This observation may appear to contradict the finding from the simulation study that this method is generally outperformed by the proposed method. However, this is belied by the fact that the Q profile method provides a very much longer confidence interval for the Diuretic data, which gives an indication of the longer confidence intervals that this method has been found to provide.
The confidence intervals are generally in good agreement for all four examples and are wide, as anticipated, given the difficulty in accurately estimating the between-study variance in examples where there are few studies. The reasons for the more specific differences between the results using the three methods are hard to explain however, which is perhaps inevitable given the considerable heterogeneity present in these examples and the imprecision of estimates of t 2 in meta-analyses such as these.
Discussion
Generalised Cochran heterogeneity statistics provide a convenient method for computing exact confidence intervals for the between-study variance parameter in a random effects meta-analysis. They incorporate an existing method as a special case, and by choosing more appropriate weights than is conventional, shorter confidence intervals may be obtained. The only potential numerical difficulty is the use of Farebrother's algorithm, but the R implementation is convenient and fast. R code is available in the supplementary materials that accompany the paper, which produces confidence intervals in a few seconds.
In the simulation study, a 1 = a 2 = 0.025 was used, so that the convention of using equal probabilities in both tails has been adopted, but shorter confidence intervals with the same coverage might be possible by using a 2 6 ¼ a 1 and it is left as an open question as to whether this should be considered more often in practice. However, those who do not consider t 2 = 0 plausible may prefer to calculate one-sided confidence intervals that reflect this and set a 1 = 0. Furthermore, due to the very wide confidence intervals that were obtained in the simulation study and for the examples, coverage intervals with less coverage, 90% or even 80%, might be deemed preferable to obtain tighter confidence intervals.
A wide variety of methods for estimating t 2 are available (Sidik and Jonkman, 2007) and the confidence intervals developed here are built upon just one of these, the method proposed by DerSimonian and Kacker. Methods might also be developed that build upon the alternatives however and this could form the subject of future work. Methods for constructing confidence intervals for I 2 follow naturally by taking the typical within-study variance proposed by Higgins and Thompson (2002) as fixed, so that I 2 may be interpreted as a monotonic function of t 2 . Generalised Cochran heterogeneity statistics could also be used to test the null hypothesis that t 2 = 0, in a similar way to the conventional one, but this possibility is not examined here. This is because using them is more complicated and no gain in power is anticipated, but this too could provide an avenue for further work.
The Q profile method remains a viable alternative but this appears to be inefficient compared with the alternatives considered here unless the heterogeneity is very considerable, in which case, all confidence intervals become very wide and so very little can be inferred about the extent of the heterogeneity. In any case, many would hesitate to combine very disparate results in a random effects meta-analysis. If some heterogeneity is thought plausible but is difficult to determine how much there might be a priori, then using generalised Cochran heterogeneity statistics with weights equal to the reciprocal of the within-study standard errors would appear to be a sensible option. The possibility of using weights equal to the reciprocal of the estimated total study variances warrants further investigation. Table 2. Results for the four examples using three exact methods for obtaining confidence intervals for the between-study variance. For each method, the 95% confidence interval is tabulated using a 1 ¼ a 2 ¼ 0:025, and the width of the interval is given in square brackets. Dataset n Q profile B and J Proposed method Appendix: If C and D are square matrices of the same size, then l i (CD) = l i (DC) (Zhang 2010;page 57). Because the rows of B sum to zero, this matrix has an eigenvalue of zero and hence so does S. This can most easily be seen by observing that l i (S) = l i (ΣB), and the premultiplication of B by Σ retains the linear dependency amongst the rows of B and hence the eigenvalue of zero. Thus, l n (S) = 0, and the sum in Equation (2) only extends to (nÀ1). It can also be seen that l i (S) > 0 for i < n in Equation (2). This is a consequence of the observation that l i (A À 1/2 BA À 1/2 ) = l i (BA À 1 ) = 1 for i < n; A À 1/2 BA À 1/2 = S, where s 2 i ¼ a À1 i and t 2 = 0. It is a standard result that Q a $ w 2 (n À 1) when these standard weights are used and t 2 = 0 (Biggerstaff and Jackson, 2008). Because (2) shows that Q a is a linear combination of w 2 (1) random variables, an inspection of the form of the moment generating function of w 2 confirms that l i (BA À 1 ) = 1 for i < n. If C and D are positive semi-definite Hermitian matrices, then where l 1 (D) and l n (D) are the largest and smallest eigenvalues of D, respectively (Zhang, 2010;page 274). Then, the first inequality in (5) with C = B andD = A À 1 , and noting that l i (A À 1 ) > 0 for all i, shows that l i (B) > 0 if i < n. Finally, the second inequality in (5) with C = B and D = Σ, and noting that l i (Σ) > 0 for all i, shows that l i (S) = l i (BΣ) > 0 for i < n. Hence, Q a is a linear combination of w 2 (1) random variables with exactly (nÀ1) positive coefficients as stated. | 8,264.2 | 2013-09-01T00:00:00.000 | [
"Mathematics"
] |
Greenweb in Land Enhancement Policy of Enna Province - A DRSA Valuation Pattern in WebGIS Interaction Practice
The Enna Province is characterized by a low degree of economic, infrastructural and industrial development. Its hilly territory is a fair combination of many different and integrated landscapes. These conditions suggest the possibility of a sustainable development pattern in which the slow mobility, because of the low level of land infrastructures, can become one of the most important network for the land-value communication.The study applies an axiological approach, useful for the subsequent land planning practice, including a qualitative valuation model and an interactive multi-criteria tool combining Web-GIS and DRSA (Dominance Rough Set Approach) patterns.The valuation model is based on an axiological square in which four kinds of appreciation are taken into account. A WBS pattern explains in detail each appreciation, so that every piece of the green-web can be characterized and assessed within a general framework oriented to provide the aggregate value of the path to which they relate.The DRSA tool is used to generate the preferences structure of the target segments users. It is used as basis for extracting and the processing data. It allows to identify the preferences structure to support the WebGIS tools to generate the green way that best meet the user’s preferences.
-Geographic framing, ancient "Vali" (administrative districts) and orography of the province of Enna.
As a consequence, this territory synthesises the most important features of the Sicilian landscape comprising: 280 archaeological sites; an uncontaminated landscape that can be considered the synthesis of the regional environment because of the fair integration between cultural-historical and morphological features; an environmental-natural heritage including an important hydrogeology system composed of several lake basins, 11 Oriented Nature Reserves with their specific features of flora and fauna; a remarkable mining and mineralogical heritage with industrial archaeology features; a fair synthesis between the agricultural-forest framework and the consequent arable-arboreal landscape; a rich anthropic landscape including many urban centres with relevant remains of several historic civilizations whose cultural and religious traditions are somehow recognisable up to now.
Nevertheless, some criticalities must be highlighted, and in particular: land instability phenomena; not rational land uses; a general trend to abandon the territory and the ancient urban centres; landslides; degradation and collapse of the lake basins; deforestation and desertification; intrusive infrastructures. In addition, the reserves aren't appropriately accessible because of the lack of: suitable delimitations, trails, information signs, access points, visitor centres; a general land marketing system.
1. The Integrated Land Plans, as means to implement in local areas the Operative Regional Plans; the ILPs (PIT in Italian) boost the bottom-up local economic development by matching the institutional aims and the purposes of the social and economic organizations.
2. LEADER II, a European program aimed to boost the rural development, by involving the local communities in innovative and multi-sectorial initiatives.
3. The Territorial Landscape Province Plan is the main and the most extensive and detailed planning tool for the general knowledge and the government of the territory. It is based on an advanced Geographical Information System describing the three main landscape component: abiotic, biotic and anthropic.
Each component indicates the enhancement of the green paths network as the means of a sustainable development pattern based on the land culture and information. As a consequence, signification, information, and communication [10] can be assumed as a general landscape re-production paradigm in which the land (in)form(ation) assumes the role of primary "substance of value", the main origin and result of a sustainable development pattern.
Greenways: a culture as a method for territory enhancement
Landscape, as a whole human experience, is strictly linked to sustainability, because of its holistic dimension [12], thus confirming it as a natural and cultural unity [2], [10]. Greenways, as both a physical infrastructure and a cultural approach to landscape [5], combine natural and cultural factors, and rational and creative approaches. A greenways network can assume a large bundle of functions such as connecting different anthropic land districts, promoting a cultural and economic upgrade of rural land, developing sustainability awareness [1], renovating the scale of values and preferences, as the Italian Greenways Association, founded in 1998, declares in its program [15].
Creating networks can be considered one of the main attitude of social value point [5]: greenways can be considered the channels which the land information gets through, and due to the difficult to compare costs and positive externalities, an interactive assessment model, involving planners [11] and users, could be helpful. Greenways are assumed as the physical communicative network of land, through which the users spread land information: the more users get through the network, the more land social value increases.
Values, valuations and valorization: a semiotic and axiological approach
A semiotic approach [3] to valuation can be assumed as the basis of an axiological method for planning. Landscape is a perceptual/communicative land fabric whose value-frame a green-web could be. A green-web is a network of high-value experiences so that two questions arise.
1. What do we mean as "value of a path"? The valuation process gets through the formalization of: some explicit criteria; a set of utility functions in order to transform performances into specific valuations; a weight system; a procedure for aggregating all the elementary valuations into the main criteria. Each greenway part can be considered a "signifier unit" as well as in a semiotic signification process, in which each signifier (the set of characteristics of a path, not the path in itself) implies a reference (the physical frame and its both natural and artificial components) and a meaning (the 608 New Metropolitan Perspectives importance of them for someone or a specific community), so that no intrinsic value can be considered relevant. The value is strongly influenced by the user's profile and moreover by the textual unit in which it takes part as well. Therefore, the same object or performance can assume different values according to the different user's profiles. On the contrary, the same satisfaction can be achieved in many different (green)ways. So, the value of a path is given as the set of the valuations, properly aggregated, from the point of view of the different criteria.
2. Do actually paths among which to choose, currently exist? Our concern is a preliminary collection of information, valuations and planning indications aimed at realizing an interactive decision tool referring to the path which can be assembled according to the axiological user's profile. As a consequence, the paths among which to chose have to be composed by using parcels of the existing dirt roads: the system provides the aggregation of parcels that maximizes the function of the landscape-experiential value.
This is what we mean as an axiological approach, a value-centered and value-oriented vision, so that land cannot be meant as object and function, but as a bundle of combined values. The basic hypothesis is: the land social value can be distinguished into potential and current value: the first one is based on the occurrences (object and performances) which can be observed and measured; the second one depends on the appreciation of these resources by the users and then on their psychological and cultural determination, that is the axiological profile their choices are due to.
Accordingly, a specific tool has been realized; in the latter the user can express his or her preferences and communicate the satisfaction degree of the experiences, if carried out, so that the evaluator can adjust the tool: the user inputs his or her preferences into a form on the web-gis interface, basing on his or her specific axiological profile. The system proposes a group of paths which the user can further reduce to select the best one, by inserting more specific information about his or her wishes and expectations. The input form and the related preference pattern are inserted into three different sections, each of them each of them referred to one of three different approaches: 1. Object, 2. Performance, 3. Axiological.
Valuation pattern
1. The web-form to fill in order to select the path, proposes a list including all kinds of the objects which are present in that part of land. The user selects the objects or the places -archaeological sites, monuments, panoramas, natural features -he wants to come across; the system makes a query and composes all the paths containing the kinds of objects indicated in the form.
2. The section of performances includes some functional characteristics of the path, that can be a) measurable performances, as the maximum length, slope, traffic road crossings, or b) valuable performances, as smoothness, hardness, riskiness etc.; the valuable ones are calculated by using the space analysis web-gis functions; the pattern reduces the previous selection so that the user can refine the query to select the best path.
Object and performance approaches can (or should) be used together: they actualize the most concrete and specific approach and do not require the involvement of a coordinated value system.
3. The section of value includes the weighs of the four main appreciations that the user, according to his/her profile, inserts. According to an axiological approach, objects and performances are relevant only in order to achieve a purpose which is referred to a value. The value is not attributed to objects or performances but to the capability of the path to satisfy some general instances when they are crossed; thus objects and performances have no value in themselves; the user assigns to them a value once connected by the path whose configuration is defined by assembling a certain number of path units, so that the value function is optimized. The value of the path is the weighed average score calculated going up the WBS from the leaves (indicators), through arms (subcriteria and criteria) to the roots (values).
The root-criteria of the pattern are taken from the axiological square [6] a general diagram in which four kinds of appreciation are connected by three kinds of relationship, complementary, contrary and contradictory. Practical (functionality), critical (efficiency and convenience), utopic (mythic, existential), playful (differences, surprise) appreciations can be distinguished. Each of them corresponds to specific traveller's profiles shown in the square ( fig. 2, left), as detailed by WBS ( fig. 2, right) in which the contents of the valuation pattern are connected and organized by a progressive disaggregation in different criteria and subcriteria (the sub-sub criteria, the indicators and the 145 indices are omitted).
Space analysis and GIS tools application
A GreenWeb should be considered, from a topologic point of view, as a set of arcs and nodes linked into a reticular framework connecting the social land fabric. Each node is usually associated to a value function, but in this experience the value is traced to the path as a whole. Network Analyst extension is the tool which aggregates the path maximizing this value function. The database includes the ancient road network as shown in IGM 1:50.000 maps in 1965; some groups have been distinguished: main (consular) roads, herds' roads, lanes; 2. old railways [13]; all of them have been geo-referenced. By means of the Spatial Join extension and the geoprocessing functions a new viability database has been implemented by dividing each road into 250 m long segments, so that a continuous greenway can be assembled by joining the arcs which maximize the value function. Spatial join and Range query are the two geometric operations more frequently used in the geographic data management. The spatial join is a relational join in which geometric attributes and space relations are used and imposed instead of alphanumeric ones. There are: the topologic join that is more speed if the storage structure is based on a set of layers; other join based on direction and distance ( fig. 3).
Interactive value adjustment pattern based on the DRSA approach
The greenway can be considered a product-service for the users. To improve a recreational product-service it is necessary to identify an appropriate marketing strategy. The marketing management must coordinate the recreational demand with the local supply, in relation to the target segments of the users. The development of some new information and communication technologies (ICTs) help to coordinate the supply to demand, which is ever changing and more globalized. The web GIS is a ICTs tool that, if properly structured, is able to support the development of a Web 2.0 type, and therefore the tourism 2.0. The tools to support the development of a web GIS which is able to meet these requirements are: a data mining and an artificial intelligence tool that produces an output of the informational type for the product or service requested by the user. To support the extraction and the processing data, the study proposes the DRSA -Dominace Rough Set Approach and fuzzy sets (Greco, Masahiro and Slowinski, 2006). The DRSA tool is used to generate the preferences structure of the target segments users. The DRSA enables to generate a minimal set of decision rules in a neutral way [7]. By means of this minimal set [8,9] it is possible to generate a preferences structure or perceptual-value structure for the user [14], [15] (Tab. 1). The identified preferences structure may to support the GIS tool and the Web GIS tool in generating the best solution for the user's "green way". The information data for the data mining are obtained from some feedback questionnaires which are present in the institutional web that uses the proposed web GIS tool. In particular, the questionnaires are proposed to users at the feedback button on the GIS Web's site. The construction of the database and data mining to support the choice of the green way constitutes a examples and a test for the model that we are studying. Surely, when the sample turns out to be more representative, the model will be able to generate a stable preferences structure by means of which implementing the data mining and generating, using an automatic action, an individual more satisfying path (Tabb. 2-4).
Decision Rules
1. If the level of importance for the efficiency of the route is medium and the level of importance for the recreational facilities is high then choose the playful profile; 2. If the level of importance for the efficiency of the route is medium and the level of importance for the recreational facilities is high then choose the playful profile; 3. If the level of importance for the distances is medium, the level of importance for the perceptual landscape is medium and the level of importance for the recreational facilities is high choose the playful profile; 4. If the level of the importance for the recreational facilities is high then choose the existential profile; 5. If the level of importance for the recreational facilities is medium then choose the critical profile; 6. If the level of the importance for the density of the events is medium then choose the critical profile; 7. If the level of importance for the adventure is medium then choose the critical profile Tab. 2 -The preferences structure to support the critical profile The critical profile 1. If the level of importance for the recreational facilities is medium then chooses the critical profile; 2. If the level of the importance for the density of the events is medium then chooses the critical profile; 3. If the level of importance for the adventure is medium then chooses the critical profile.
Tab. 3 -The preference structure to support the existential profile The existential profile 1. If the level of importance for the recreational facilities is high then chooses the existential profile.
Tab. 4 -The preferences structure to support the playful profile
The playful profile
If the level of importance for the efficiency of the route is medium and the level of importance for the recreational facilities is high then chooses the playful profile; If the level of importance for the efficiency of the route is medium and the level of importance for the recreational facilities is high then chooses the playful profile; If the level of importance for the distances is medium, the level of importance for the perceptual landscape is medium and for the recreational facilities is high then chooses the playful profile.
Conclusions
The multiple selection into the different areas of the web form allows to apply the different approaches, the object one (orange), the performances one (light green) and the axiological one (dark green) separately or simultaneously ( fig. 4). Further inputs reduce the selected path or modify them. The DRSA extension helps to refine the appraisal pattern basing on the influence of each appreciation on the query. A green-web is an immaterial infrastructure, a phase of the information cycle -information as "the form that informs" -whose origin is the organization of the land knowledge and the access to it through of a personalized consultation system. Therefore the green-web can be considered composed of the knowledge system, the physical land support and the land values as produced by administration, planners, professionals and at last by the users through a recursive process of signification, information and communication (Rizzo, 1999). These three parts, between which the "value/valuation" is the most relevant one, are involved in the feedback process at the three levels of data/information, value/valuation, planning/communication.
At the first level the experience we have carried out has been a test about the connection between data and values, so that the knowledge system has been completely redrawn; values need some specific data, and an appropriate way of turning them into information. At the second level, the valuation one, the value system has been assumed as the matrix of the knowledge whose wide articulation has to be reduced to some axiological relationships, in order to create a shareable | 4,104.8 | 2014-06-01T00:00:00.000 | [
"Environmental Science",
"Geography",
"Economics"
] |
TWO INTERESTING DAMAEID MITES (ACARI, ORIBATIDA, DAMAEIDAE BERLESE, 1896) FROM THE BRITISH ISLES AND SVALBARD (SPITSBERGEN, NORWAY), WITH A DESCRIPTION OF KUNSTIDAMAEUS ARCTICUS N.SP.
Two species of the family Damaeidae (Acarina, Oribatida) are described and documented. Kunstidamaeus arcticus n.sp. was found in Svalbard (Spitsbergen) and differs from all other known species of the genus by having only five pairs of genital setae; by the specific development at the base of the prodorsum, where tubercles Ba are replaced by a multiple of small tubercles, by minute and hardly visible spinae adnatae, by the characteristic shape of the sensillus and other characters. The other species, belonging to the genus Epidamaeus, was found in North-West England and stands near to E. floccosus Behan-Pelletier and Norton, 1985, but differs by the development of sensillus, spinae adnatae unilaterally with one tooth and by the notogastral setae inserted on cuticular thickenings. The single available specimen did not allow us to decide with certainty about it´s specific status, which in the future may prove to be a separate species. The relationships of the two species found are discussed.
INTRODUCTION
Oribatid mites of the genus Kunstidamaeus and Epidamaeus were differentiated at generic level only recently (see Miko, , 2010Miko and Mourek, 2008). Kunstidamaeus Miko, 2006 can be distinguished from Epidamaeus (and other Damaeus sensu lato) species by presence of typical set of tubercles Ba and La in dorsosejugal area, absence of centrodorsal tubercles Da, and by presence of more or less developed, usually pointed or finger-form perpendicular apophysis P laterally on prodorsum. Species of this genus are represented in northern and western Europe by 9 species (see Subías, 2004;updated Internet version 2011), and morphologically can be grouped into three groups ("lengersdorfi", "tecticola", and "tenuipes" groups, see Miko and Mourek, 2008 for details). The species of genus Epidamaeus are known from a broad range of habitats, mostly in mountain, boreal, subarctic and arctic zones of the Holarctic ecozone, what was well documented by e.g. Behan-Pelletier and Norton (1983Norton ( , 1985. Epidamaeus is a species-rich genus (over 75 species), with many undescribed species still to be expected. At least some of the known species may be, however, after further study transferred to Kunstidamaeus. This article brings together the descriptions of two interesting species collected in the British Isles and Spitsbergen, with a designation of Kunstidamaeus arcticus as a new species.
MATERIALS AND METHODS
Material has been provided from the collection of the second author (F.D.M.) with details of location and date found given in the descriptions below. The British material (Epidamaeus sp.) was collected from sieved coarse detritus and extracted using the standard method of Berlese-Tullgren funnels. Details of the collection and extraction of the Svalbard material are unknown.
All individuals, previously preserved in alcohol, were examined unmounted and studied after maceration in lactic acid in open cavity slides. Holotypes of K. arcticus will be deposited in the Acarological collection of Prague National Museum (Czech Republic), whilst paratypes will be kept (in alcohol) in the collections of L. Miko (1 paratype) and F.D. Monson (3 paratypes). Single individual of Epidamaeus sp. will be kept in the collection of L. Miko.
In the present paper, we follow the morphological terminology and abbreviations developed by Grandjean (1960) and modified by subsequent authors (see Miko and Mourek, 2008 for complete references and a list of abbreviations). For leg setae, Grandjean´s notations, as reviewed by Norton (1977) were used. The drawings and measurements were made following the same methodology as in our previous works (see Mourek, Miko and Skubała, 2011 for details).
Kunstidamaeus arcticus n.sp. (Figs. 1-3)
Diagnosis -Kunstidamaeus with a short, slightly dilated sensillus covered distally by cerotegument; tubercle Ba absent; with a set of variably developed small tubercles present at the basis of prodorsum; and with weakly developed spinae adnatae. Ventral side with a paired anterior ventromedial apophyse, with most epimeral setae inserted on distinct tubercles; only 5 pairs of genital setae present.
Description of the adult.
Material examined -Holotype and four paratypes, collected by S. Coulson from soil of tundra heath in Svalbard (Spitsbergen, Norway), sample number OR 804, 1991OR 804, -1993. More detailed information about the collected material is not available to us.
Integument -Surface of body and legs, except distal parts of tarsi, covered mostly by filamentous and columnar cerotegument, which has, on prodorsum, anterior and central part of notogaster and on ventral plate, a very characteristic appearance: individual, rather short and distinctly attenuated filaments each with a slightly expanded, buttonlike base. Lateral part of sejugal area, propodolateral apophyse and parastigmatic apophyses with granular cerotegument. Distal part of sensillus with a very specific, short, but distinct, fine "leaflike" cerotegument ( Fig. 2J). Body surface under cerotegument finely granulated, with the granulation well visible on the prodorsum and the ventral plate.
Remarks -The species has several unusual characters distinguishing it from all other Kunstidamaeus (and Epidamaeus) species. The most unusual being the presence of only 5 genital setae per plate, whilst, the normal number for both genera and all Damaeidae is 6 setae per plate. The combination of the presence of a typical apophyse P perpendicular to the body axis together with prodorsal tubercles La and Ba is typical for Kunstidamaeus, and the similar combination of apophysis P and tubercle La let us to assign the species to this genus. However, in K. arcticus n.sp., postbothridial tubercles are absent, whilst a row of 3-4 small tubercles at each side of prodorsum base has developed instead, in some individuals. One could speculate about the homology of this structure with tubercles Ba or Da. However, this structure is variable in our material, and in some individuals is only weakly developed. This suggests that the homology is questionable and the structure may have evolved independently. The idea is, in our view, supported also by the very unusual presence of paired tubercles VM in the medial part of the ventrosejugal groove. The single, unpaired ventromedial tubercle VM is known from this area in only a few species of Epidamaeus eg. E. fortispinosus Hammer, 1967 andE. hastatus Hammer, 1967. The latter of the two species shares some more similarities (see Behan-Pelletier and Norton, 1985 for details), e.g. shape of spinae adnatae, exobothridial setae and partly also the relatively short, lanceolate sensillus, and, more importantly, a thickened cuticle at the basis of prodorsum. To our knowledge, unique to K. arcticus n.sp. is also the presence of a distinctly thickened cuticle at the ventral part of the proximal end of tarsi III and IV. Another unusual character belonging to the new species is the presence of a second antiaxial accessory seta, ventral to seta v2' on tarsus I, whilst the same seta on tarsus IV is absent. Absence of this seta on tarsi I and IV is a typical character shared by most of the species of Epidamaeus and Kunstidamaeus within the Damaeus (sensu lato) complex, and, if occasionally present, they are developed always on both legs. On the other hand, the weak development of the spinae adnatae is not sur-prising -the tendency of minimization and weakening of spinae adnatae seems to be quite common within the Damaeidae from northern Arctic areas, as demonstrated by Behan-Pelletier and Norton (1983). This unique combination of characters, combined with the very characteristic shape of the cerotegument, sensillus and presence of only 5 genital setae clearly differentiates this species from all other known species. Based on the presence of short sensillus (appearing distally slightly dilated), shape and size of the notogastral setae and presence of granular cerotegument, the new species shows similarities to the species-group "tenuipes", but given the specific characters described above, it should be considered as self-standing within the Kunstidamaeus.
Epidamaeus sp.
(aff. floccosus Behan-Pelletier and Norton, 1985) ( Fig.4-7) This species of Epidamaeus resembles the species Epidamaeus floccosus Behan-Pelletier and Norton, 1985 (see discussion below), but bears also some differing characters, namely a conspicuous transverse ridge behind the prodorsal tubercles Ba; elongated anterior parastigmatic apophyse Sa; spinae adnatae with lateral dents; and smooth, long notogastral setae inserted on cuticular thickenings. These characters would allow to establish a new species within Epidamaeus. However, taking into account that we had only a single, even slightly damaged, individual available, it was impossible to decide on stability and variability of the characters. Therefore, the potential decision on the specific status is left for later until broader material is available.
Description of the adult. Integument -Body covered by cotton-like filamentous cerotegument in the sejugal area and laterally around leg insertions. Cuticle of prodorsum smooth, with notogaster finely granulated. Ventrally, with net-like pattern on the mentum, epimeres I-II and genital plates. Cuticle of all femora and trochanters II-IV with a distinct 'netlike' pattern.
Prodorsum (Figs. 4A, 5A-B, D-E, 6A-E) -Regularly triangular in shape, with lateral part above insertions of legs II rounded and without an apophysis P. Proximal part of trochanters I and II covered by the tectum, projecting laterad and lateroposteriad from the lateral part of prodorsum (Fig. 5D). Parastigmatic apophyses very different in shape; anterior apophyse Sa prolonged, narrow, pointed, perpendicular to the body axis, about four times longer than Sp, which is short, triangular, blunt and pointing anteriad (Fig. 5E). Anterior postbothridial tubercles (Ba) present, distinct but relatively small and opposed posteriorly by a broad, transverse, transparent ridge. Rostrum broadly rounded, with an indistinct, broad central lobe. Short, indistinct oblique ridges present latero-anteriorly to insertion points of lamellar setae. Similarly, short ridges present laterally, behind insertions of leg I, projecting anteriad from bothridial area. Both structures combined together slightly resemble lamellar ridges present in other oribatids. Prodorsal setae fine and relatively long; rostral and lamellar setae unilaterally with small, hardly visible spines; other prodorsal setae smooth. Lamellar setae (75 µm) slightly longer than rostral (58 µm). Exobothridial setae strongly curved, fine, and slightly shorter than ro (50 µm) (Figs. 6C-E). Interlamellar setae clearly the most robust on the prodorsum, nevertheless, both broken and missing distal part. Remaining basal part around 37 -40 long, overall length is difficult to judge, but it is assumed they may reach about 60-80 µm (Fig. 6B). Bothridium typical of the Damaeidae, funnel-like, with a transparent, round and expanded rim. Sensillus smooth, elongate, setiform, attenuated distally, without a flagellate tip; about 140 µm long (Fig. 6A).
Notogaster (Figs. 4A, 5G, 6F-H) -Circular, with strong, medium long spinae adnatae, both distally with strong, lateral teeth (Fig. 5G). Notogastral setae fine, smooth and relatively long (c1 and c2 about 75 µm), some, however, with broken distal parts, with lm, lp (one side only) broken in part, or completely (Figs. 4A, 6F-G). All notogastral setae inserted on cuticular thickenings, forming small tu-bercles or short ridges, and more pronounced on posterior part of notogaster. Proximal part of setae, near insertion points, slightly narrower and more transparent than remainder of setae. Setae of ps series finer and shorter than remainder, ps2 about 50 µm with ps3 about 25 µm long (Fig. 6H). Lyrifissures normally developed; openings of notogastral glands well visible, with a small "cap" of a transparent secretion. A pair of pores present in posterior central part of notogaster, axial to insertions of setae lp.
Remarks -The individual stands very near
to Epidamaeus floccosus Behan-Pelletier and Norton, 1985, having very similar or identical development of tubercles and ridges in the sejugal area; parastigmatic apophyses or ventral tubercles are generally of very similar appearance as well. Still, there are also several characters which clearly differ. The british individual is larger; sensillus is shorter, without flagellate end and not covered distally by cerotegument as in E. floccosus. Notogastral setae (particularly l and h series) are inserted on tubercles or short ridges; anterior notogastral setae are finer and longer and, conversely, setae ps1-ps3 are much shorter than in E. floccosus. Ventral setae of our individual are shorter and most of the epimeral setae are inserted on tubercles. Legs differ slightly, also, in having finer and generally shorter setae, and all genual solenidia are longer than the coupled setae d (whilst in E. floccosus, coupled setae are longer than solenidia). Presence of teeth on spinae adnatae on our individual may be an easily observable difference, but it is difficult to judge if this character is stable. Similarly developed spinae adnatae have been observed on some individuals of E. aborigensis Behan-Pelletier and Norton, 1985, together with individuals where they had developed normally, without teeth or protuberances. As stated above, if the differences found will be proved stable by study of broader material, they justify in our view a status of new species to be proposed. As our attempts to find more individuals were not successful, we brought detailed description here to allow for comparison, and hopefully also for finding of more individuals by other authors, who may have collected the species without attempting detailed determination. Authors will be greatly appreciating if such a material, if exists, was provided them for further study. | 2,991.8 | 2013-03-29T00:00:00.000 | [
"Biology"
] |
The Integration For Morality Values In The Concept of Law
The adoption of moral values into legal concepts seems to still be a debate among legal thinkers, especially between legal thinkers positivism and natural law. This paper analyzes the point of view of the positivism school and the school of natural law. This paper is the result of legal research. The type of legal research used is a type of normative legal research with a conceptual approach. The results of the study indicate that there are differing views between the school of positivism and the school of natural law about the integration of moral values into legal concepts. The school of positivism says that moral value is a different aspect that cannot be included in the legal concept because law is a formal command of power, while the view of the school of natural law says that legal concepts should not ignore moral values because this moral is a standard for law to guarantee value of justice.
Introduction
The School of Natural Law and the School of Law of Positivism, are born from the results of legal thoughts which are rooted in Western legal thought. What distinguishes the two schools of thought, of course, from the epistemological point of view of how to find legal truth. The school of natural law, argues that the search for the value of legal truth must adopt moral values. Meanwhile, the school of positivism law holds that the search for legal values must be explored through written legal products issued by the authorities. The discourse of the school of natural law and the school of law of positivism has become a debate about the coloring of almost every study of legal philosophy. The two schools of law, claim to have succeeded in uncovering the nature of the law. However, the basis of the methodological instrument used by both remains the same, which is still based on the logic.
The discussion of the relevance of moral values with the concept of law has indeed been a long debate in the history of the law of western civilization. The glorification of rationalism and secularism, since of the industrial revolution and the renaissance, has strengthened the view of positivism and rationalism that moral values and legal values are two different poles that cannot intervene with each other anymore. The bearers of positivistic rationalism hold that law is something autonomous that has nothing to do with moral values because according to this view, the law is a command of power. After the second world war, humanitarian values declined, the more victims of war from innocent civilians, then the existence of law as a value by having to adopt the values of morality increasingly received the most attention among western legal experts, especially legal experts adherents of the school of natural law. At this point of view, this paper tries to explain the integration of moral values in the legal concept, so that the law can guarantee justice and uphold human values. Dictionary 1 , The term moral inside is grouped into two categories of understanding, first: morality in the noun category: (1). Discussion of good and bad received in general which is related to actions, attitudes, obligations, etc .; morals; character; moral; (2). Mental conditions that make people still brave, passionate, passionate, disciplined, etc .; the content of the heart or state of feeling as revealed is made; (3). Moral values can be drawn from a story. Second, morality in the verb category: 1. bad deeds; good character; 2. By morality (the custom of manners and so on).
The term morality is also explained in the Indonesian National Encyclopedia that morality is a branch of philosophy that specifically studies human behavior. Morality is said to be the norm, so what is discussed is about how a person should act, so morality in that sense is a characteristic of someone's behavior that is associated with a measure that applies in the community, especially regarding good or bad behavior, so morality is not something become innate, but it is born as a result of the influence of the environment in which a person grows and develops 2 . K Berten 3 also defines morality as value and norm that serves as guidelines for individuals and group which is used as a basis for regulating certain actions. K Berten's formula was concluded by Eri Hendro Kusuma, that morality is a standard for a person or group of people when performing an act or action. For example, a corrupt state official. The action of the official, considered to have a bad morality value. That is, the action of this official is seen as the bad morality because of corruption, according to the moral value in any society, as a disgraceful act.
According to Frans Magnis Suseno, morality is essentially a reference standard for a person's good and bad deed as human beings who live in society. Good and bad, is not placed solely on its nature, but there is the reference that is used as a standard to categorize which action is seen as a good deed or not. If society views the action as good, it is good morality, and vice versa. If the deed is considered bad then it is bad morality.
Furthermore, Frans Magnis Suseno 4 , that morality is related to the character and personality of a person as an individual or can also be related to the character inherent in society, so according to Frans Magnis Suseno, there is no connection between one's work performance and the morality inherent in its. For example, a lawyer who is very good at defending his clients at trial, but he often asks for payment, even though the client has a mediocre economic level. In this example, a lawyer has great work performance but services to his clients, he has a bad morality. From this view, Frans Magnis Suseno considers that in the aspect of work skills, a person is often not related to morality. Based on this example, Shidarta 5 concluded that morality is closely related to the good and bad of a person as a human being. The description of the meaning of morality both etymologically and terminologically is also discussed in the writings of Budiono Kusumohamidjojo 6 , that the use of the term morality in ethical terms should not be equated. The two terms have fundamental differences, namely, the term morality refers to a theoretical understanding, whereas morality refers to a more practical understanding.
The terms morality and ethics, according to Karen Lebacqz, is often used erroneously. She said that In the public arena, most people use the terms "morality" and "ethics" as though they are interchangeable. But in the field of ethics, they are understood differently: morality refers to the "mores" or customs or rules or expected/accepted behavior of a community, but "ethics" refers to a discipline of examining those customs and rules to see whether they are truly defensible. In many countries, for example, women cannot own property -this is a custom. But is it a defensible custom? Would it stand up to the scrutiny of disciplined rational reflection? 7 Karen Lebacqz explained that morality essentially refers to habits or called mores which apply as a standard for human actions that are accepted good or bad, while ethics refers to a particular discipline that examines whether the standard of human actions called good or bad is indeed something that really can be accepted rationally or not. 1 Departemen Pendidikan Nasional, Kamus Besar Bahasa Indonesia. 2 Indonesian National Encyclopedia Volume 10, Cipta Adi Pustaka, 1990, p. 371. 3 The difference in the meaning of morality with ethics, also mentioned by Frans Magnis Suseno 1 , with a different perspective. According to Frans Magnis Suseno, morality is a set of systematic teachings while ethics is critical thinking that is fundamental to morality views.
Detail of Frans Magnis Suseno's explanation of the differences in the meaning of morality and ethics, in Mustain's writing is also described concretely, that ethics falls within the scope of moral philosophy that discusses things about good and bad deeds in the perspective of theory, while morality examines things of good deeds and bad in a practical perspective. The meaning, as revealed by Budiono Kusumohamidjojo, is that morality sets the standard value of human action which is permissible and what cannot be done, whereas ethics is a deep critical study of the standard value of human action as a morality. Ethics as a critical study, the existence of ethics does not pretend to assess an act that is good or bad. In essence, as written by L Sinour Yosephus 2 , morality is a set of teachings that contain what is good and bad, and what becomes a taboo for humans, both as individuals and as members of society, while ethics does not advocate how we should live and act behavior. Ethics is only a critical study that questions why human actions are seen as good or bad.
Morality always talks about the value standards of human actions that refer to values that apply in society as something good or bad. The example of being honest is good morality which is set as a general standard in the life of any society, while ethics conducts in-depth studies of why people should be honest. Ethics conducts critical and in-depth studies of why people should be honest without having the authority to determine whether to be honest is good or bad. Ethics conducts critical studies using logical reasoning or argumentation, while morality sets standards for the value of human actions, is guided by tradition, prevailing habits in the community, or based on the advice of community leaders, or can be based on certain religious doctrines. In conclusion, morality is prescriptive, that is, advocating or prohibiting, whereas ethics is merely an expository, ie describing or explaining.
The meaning of ethics according to Shidarta 3 is a critical study of morality. Not only as a critical study of morality, but ethics also discusses a value system that is a guideline for certain professional groups, namely what is good and what is bad according to the values of the profession itself. Usually, the values are formulated in a written norm, called a code of ethics. The meaning of ethics according to Shidarta, can be grouped into two things, namely: ethics as a system of values and ethics as a science which is a branch of the study of philosophy.
Discussion On Integrating Value of Morality in Law
Dirkusus about morality relations with the law has indeed become a discourse of legal thinkers since centuries past until now. The thinkers of the School of Natural Law think that between morality and law are two mutually integrated components that cannot be separated from each other, while positivism viewers argue that the discussion of morality and law are two things different from their respective characteristics.
The discussion of morality relations with the law has been discussed by Peter Mahmud Marzuki that morality should be integrated into law because it is this law that serves to operationalize law in the context of human social interaction. According to Peter Mahmud Marzuki 4 , morality is a basic mental state of man. Morality actualizes as a command to oneself about good and evil deeds. A noble morality can occur because of the control of the passions through the utilization of the will and the mind, if one's will and mind are controlled by lust which harms other people or society, it means that morality is bad. Peter Mahmud Marzuki's view seems to be influenced by Thomas Aquinas's version of the theory of natural law.
Explanation of Peter Mahmud Marzuki 5 , who correlates morality with law, that law with morality is indeed inseparable, but correlating morality with law, is not necessarily by putting morality in the perspective of messages, but morality in question is morality in the context of behavior human actions in community life. Peter Mahmud Marzuki, seeks to position the relationship between morality and law at the point of contact that can be accepted by reason. The normalization of morality values manifested in the product of the rule of law, according to Peter Mahmud Marzuki, is measured according to rationality. According to Peter Mahmud Marzuki that morality that is to be adopted becomes a legal value, the morality that looks outwardly in the form of human actions as a manifestation of the reality of people's lives. Peter Mahmud Marzuki, believes that the adoption of moral values into legal norms is only possible if morality has actualized as real action, not in the context when morality is merely an inner form of human beings. The reason, according to Peter Mahmud Marzuki, is that the goal of the law is outward human behavior, the law will not act when a person's actions do not violate the rule of law even though the person's mind wants to do something illegal.
Peter Mahmud Marzuki 1 also does not deny that the law sometimes enters one's inner area. Provisions regarding intentions in criminal law and good faith in civil law, for example, are provisions that enter one's inner area. Even in the trial, the defendant's inner attitude is often taken into consideration in imposing sanctions as a mitigating factor or incriminating sanctions, a defendant who does not regret his actions and in court proceedings is rude and convoluted in court hearings, tends to be punished more severely than the defendant who regretted his actions, was polite in the trial, and honest in answering the judge's questions. Only that needs to be emphasized according to Peter Mahmud Marzuki, that factors are inward, only in contact with the law if someone commits a violation or against the law.
Peter Mahmud Marzuki's view that formulates the relationship between moral values and legal norms and how the integration of adopting moral values into legal norms, it is influenced by the thinking of the school of natural law. This is different from the thought of the school of law positivism, which originated from the ideas of Auguste Comte 2 , which this idea was increasingly popular after World War I when the school of thought positivism was developed by John Austin 3 . The school of thought John Austin holds that the law has the characteristics of forced force manifested as the will of the authority of the sovereign authority. Based on this view, John Austin holds that the actual law is a law whose adoption is characterized by three components, namely, a command that is coercive, sanctioned, and a ruling party. These three components are interlinked and integrated, but the most prominent is the authority of power 4 .
John Austin argues that the law must be seen as it is, in the legal sense which is an order, so that law is always a compulsory obligation that must be obeyed by most citizens. John Austin criticized adherents of the theory of natural law, which he said the law was not a stack of advice or morality. The reason, the law is merely an instrument that has forced power, which according to John Austin, contains two basic elements. First, the law as a command that realizes the desire of the ruler, that someone must do or refrain from doing something. The desire of the ruler in question is one that has a specificity character, namely that the parties affected by the law must bear the unpleasant consequences of sanctions if they do not comply with the legal provisions imposed. Law in the sense of an order is the desire of the ruler which contains punishment for anyone who is against the law. Second, the law can create something that causes suffering. For a citizen who is subject to an order, he is bound, obliged to do what he is told. Adherence to legal orders, a person will be sanctioned.
Positivism moves on two principles of thinking, namely: First, what is referred to as law is only positive law; Second, even though the substance of a legal product is contrary to the principles of morality, it is still valid law. According to Zippelius, a product of the law that applies if it is by official procedural powers, it remains legal as law without regard to any contents. Thomas Hobbes asserts that the application of a norm becomes a legal law, does not depend on its contents, but depends on whether the norm is by applicable legal procedures. Thomas Hobbes said, "authority is not the truth, but it is the essence of the law".
The basis of the thinking of positivism is to require space to be closed to natural law. According to positivism, the application of legal norms does not depend on conformity with morality. The only legal enactment criteria are the law offices. Such a position of thinking of positivism has consequences that separate morals from the law. Law is a system of norms established by legitimate authorities. The law is obeyed not because the law is judged to be morally good but because it is determined by a legitimate authority, as Karl Bergbohm 5 states, even though a legal product contains evil or harmful values, but insofar as it is applied through formal procedural power, it must remain recognized as law. Such a view of positivism has placed the law not in a moral framework but within the framework of formal procedures issued by the authorities.
The view of positivism, which separates law from morality as two components that must not be confused with one another, of course, has been rejected by some legal thinkers, especially those that are still consistent with the school of natural law. Peter Mahmud Marzuki argues that the law contains moral values because what determines a rule is law is not the content of the rules, but which determines whether the contents of the rules emit moral principles or not. It does not matter, according to Peter Mahmud Marzuki, whether the law is made by the authorities or grows and develops in society or is the creation of judges as long as the contents of the rules emit moral principles. That is, the law can be said as law if the content emits moral principles. The emission of moral principles is in the framework of maintaining human existential functions. That is, human existential functions can only be maintained through laws that reflect morality values 1 .
Law contains moral values that function to maintain human existence. The basis of this view is the argument of Peter Mahmud Marzuki, to reject the thesis of the John Austin version of the school of positivism which views the law as merely a formal order from the authorities. According to Peter Mahmud Marzuki when the essence of law is only placed in terms of the command of power, then the law can ignore the existential aspects of man. Law will be a tool of arbitrariness for power, which aims to secure a regime of power.
Closing
The school of positivism, looking at moral is a separate thing from the law. Legal norm according to positivism only appears when the law is legalized in a formal procedure by the holder of power, so that the law as part of the decree of the authorities, it must be placed on the frame as it is. Whereas in the view of the school of natural law, the moral is an absolute thing that must be integrated into legal values, because according to the school of natural law, normalization of legal values that do not reflect moral values, is only an instrument of arbitrariness for power. | 4,594.8 | 2019-11-01T00:00:00.000 | [
"Law",
"Philosophy"
] |
Multiphoton transitions for delay-zero calibration in attosecond spectroscopy
The exact delay-zero calibration in an attosecond pump-probe experiment is important for the correct interpretation of experimental data. In attosecond transient absorption spectroscopy the determination of the delay-zero exclusively from the experimental results is not straightforward and may introduce significant errors. Here, we report the observation of quarter-laser-cycle (4{\omega}) oscillations in a transient absorption experiment in helium using an attosecond pulse train overlapped with a precisely synchronized, moderately strong infrared pulse. We demonstrate how to extract and calibrate the delay-zero with the help of the highly nonlinear 4{\omega} signal. A comparison with the solution of the time-dependent Schr\"odinger equation is used to confirm the accuracy and validity of the approach. Moreover, we study the mechanisms behind the quarter-laser-cycle and the better-known half-laser-cycle oscillations as a function of experimental parameters. This investigation yields an indication of the robustness of our delay-zero calibration approach.
Introduction
Transient absorption spectroscopy plays a major role in the fast evolving field of attosecond science. It allows observing electron dynamics of atoms, molecules and solids on their natural timescale, with an all-optical method [1]. For the first demonstrations of attosecond transient absorption spectroscopy the transmitted extreme ultraviolet (XUV) radiation was detected after a noble gas target using either a single attosecond pulse (SAP) or an attosecond pulse train (APT) in the XUV spectral range which was overlapped with a femtosecond infrared (IR) pulse [2][3][4]. Hence, this technique benefits from the numerous advantages of photon detection over the detection of charged particles as discussed in more details in [1]. In addition to the fast data acquisition due to charge-coupled device (CCD) based spectrometers and the absence of space charge effects, it is possible to examine bound-bound transitions which stay hidden in conventional charged-particle detection experiments.
A key issue for the interpretation of the experimental data is the correct determination of the delay-zero, where the maximum of the envelope of the IR pulse and the SAP or APT exactly overlap. The precision of delayzero calibration has to match the timescales of the dynamics being studied and the time resolution of the experiment. As we will show below, just 'reading' and 'interpreting' experimental data in a straightforward way as done successfully before for femtosecond transient absorption spectroscopy contains serious pitfalls and may lead to errors in finding delay-zero on the order of several femtoseconds. Without theoretical support or other knowledge about the processes being studied, delay-zero can in general not be extracted from experimental data with the required precision. A variety of recent publications have discussed the absorption of XUV radiation in helium (He) around its first ionization potential, either using SAPs or APTs [5][6][7][8][9][10]. Due to the diversity of the observed effects in these experiments, e.g. sub-cycle ac Stark shift, light-induced states etc, it is not obvious if any of these effects can be used to define delay-zero. In this paper, we combine experimental results with calculations to identify a nonlinear light-matter interaction that supplies the proper delay-zero.
Our study yields a purely experimental method for calibrating the delay-zero in an attosecond transient absorption experiment using an APT synthesized from a number of high-order harmonics (HHs) of an IR laser field. We will show that neither the maximum of the total absorption, nor the envelope of the already wellknown and discussed half-laser-cycle (2ω) oscillations are suitable for this purpose [4-7, 9, 11-13]. Here and throughout this work, ω represents the frequency of the IR field. On the other hand, we report the first experimental observation of quarter-laser-cycle (4ω) oscillations in the transmitted XUV radiation as a function of the delay between an APT and an IR pulse and show that the maximum of the 4ω-oscillations coincides with the delay-zero. In all presented figures, we use the maximum of the 4ω-oscillations in the absorption of the 13th harmonic (HH 13) to define the delay-zero. We discuss the parameters needed for the manifestation of the 4ωoscillations and demonstrate that this highly nonlinear effect enables us to accurately define delay-zero. Moreover, we systematically study the influence of the IR intensity on the 2ω-and 4ω-oscillations. This systematic study defines the robustness of our proposed method.
In section 2, we begin with a short overview, experimental details, a general discussion on 2ω-and 4ωoscillations and introduce the theoretical model. In section 3, we describe how we use the 4ω-oscillations to experimentally calibrate the delay-zero and show that our experimental results are in excellent agreement with the theoretical predictions. Section 4 presents a systematic investigation of the dependence of 2ω-and 4ωoscillations on IR intensity. Finally, we compare our transient absorption results with a measurement of the He + ion yield in section 5, and conclude with section 6.
Quarter-laser-cycle oscillations
In 2007, Johnsson and co-workers published an experimental and theoretical pump-probe study using an APT combined with a moderately strong IR pulse in He [14]. 'Moderately strong' corresponds to an intensity of approximately 10 13 W cm −2 which is insufficient to excite electrons out of the ground state, but substantial enough to deform the atomic potential. They investigated the He + ion yield as a function of the APT-IR delay and discovered 2ω-oscillations of the total ion yield. Their results triggered several detailed studies of the same 2ω-oscillations by means of attosecond transient absorption spectroscopy [4-7, 9, 11-13]. The mechanism giving rise to the 2ω-oscillations involves the so-called 'transient virtual states' initiated by the IR field. These originate from two-color absorption processes with one XUV photon and a variable number of IR photons [6,7]. These different excitation pathways interfere destructively or constructively, depending on the APT-IR delay. As a result, the absorption probability exhibits 2ω-oscillations. Additionally, 2ω-oscillations can also originate from the two-photon coupling between real states [11].
Theoretical work by Chen et al [11] discussed the transient absorption of an APT in laser-dressed He atoms and predicted the occurrence of oscillations with a new periodicity, which appears in neither the initial APT nor the IR field, namely 4ω-oscillations. These oscillations originate from multiphoton coupling of HHs [15]. Infrared (IR) pulses with a duration of 25 fs and central wavelength of 789 nm are used to generate an attosecond pulse train (APT) via high harmonic generation (HHG) in a xenon gas target. A small fraction of the fundamental IR beam is split off before the HHG and sent over a delay line. After the HHG the residual IR radiation is blocked with a 100 nm thick aluminum filter. This filter compresses the pulses in the APT in time [16]. The APT and IR beams are recombined with the help of a mirror with a center hole and focused by a toroidal mirror into the pulsed He target operating at the laser repetition rate of 1 kHz with an opening time of 60 μs per laser shot. A motorized iris in the IR arm is used to adjust the IR intensity in the interaction region. After the interaction target, the IR beam is blocked with a second aluminum filter and the transmitted radiation is detected with an XUV spectrometer (resolution of ≈50 meV in the region of interest). The inset shows the spectral shape of the APT without He gas in the target (red-solid) and with He gas (black-dashed) with an optical density of 0.79 at 26.7 eV. The dashed-blue line indicates the first ionization potential of He at 24.59 eV.
constituting the APT that are spaced four IR photons apart, e.g., HH 13 connects to HH 17. This means that energy between these HHs is exchanged via a four-photon process. Moreover, the authors discussed the influence of resonances on the nonlinear coupling. The calculations reveal that a HH being in resonance with an excited state increases its nonlinear coupling to other HHs. As a result the coupling strength of the HHs depends on the driving IR wavelength and intensity. The 4ω-oscillations were recently observed for the first time in an experiment using a SAP instead of an APT, but they were not discussed any further [7].
Here, we investigate the 4ω-oscillations in an attosecond transient absorption experiment with APTs. Figure 1 shows the experimental setup. A more detailed description can be found in [15] and additional information is given in the caption of figure 1. We use the main part of the output from a Ti:sapphire-based laser amplifier system to generate the APT via high-order harmonic generation (HHG) in xenon (Xe). Our generation scheme results in an inherent synchronization of the APT to the IR pulses as can be seen from figure 1. The inset of figure 1 depicts the spectrum of the APT with and without gas but no IR present. While HH 17 and higher are located above the first ionization potential of He (24.59 eV, dashed-blue line) and strongly absorbed in the presence of He, HH 13 and 15 are essentially transmitted unchanged since they are below the first ionization potential [17]. Figure 2 depicts a delay scan showing the transmitted XUV spectral power density color-coded against the photon energy and the APT-IR delay. The APT primarily consists of HHs 13-21. The delay scan was recorded with an IR intensity of 4.0 · 10 12 W cm −2 . The delay step size is 0.2 fs. The 2ω-oscillations, especially in HHs 13-17, are clearly visible to the naked eye. The delay-zero here and in all the following figures was calibrated with the temporal envelope of the 4ω-oscillations of HH 13, as discussed in section 3.
To study the oscillations in the transmitted XUV radiation in more detail we have to be able to quantify the oscillation strength of different delay scans. Thus, we define in a first step the energy-integrated absorption τ Π( ) as a function of the APT-IR delay τ: where E is the photon energy, S(E) is the HHG spectrum before the interaction with the gas target, and T E ( ) 0 and τ T E ( , )represent the transmitted XUV radiation when the IR field is turned off and on at a fixed delay τ, respectively. The energy integration in the numerator over the spectral window δE is performed around each HH. For the normalization, we integrate in the denominator over the three HHs, 13-17. HH 19 and 21 are not very sensitive to the IR field and therefore not included in the normalization. A positive value of τ Π( ) corresponds to absorption induced by the IR field, while a negative value means a net emission of photons. As an example we show in figure 3 the energy-integrated absorption of HH 13 (energy integration window Δ = E 0.72 eV), 15 (Δ = E 1.17 eV) and 17 (Δ = E 1.08 eV) for the delay scan shown in figure 2. In this representation of the experimental data the 2ω-oscillations become even more apparent. HH 13 and 15 exhibit an increase of absorption around delay-zero. This increase corresponds to multiphoton absorption of one XUV and one or more IR photons. HH 17 shows only oscillations and no IR-induced multiphoton absorption due to being energetically located above the first ionization potential of He. HH 17 shows, however, a first hint of 4ωoscillations around delay-zero. In a second step, we disentangle the different multiphoton contribution in the one-dimensional energyintegrated absorption τ Π( ) by applying a Gaussian-Wigner time-frequency transform that yields a twodimensional function of frequency and APT-IR delay. The Gaussian-Wigner transform uses the Wigner transform of τ Π( ) [18], which reads as: x 2 i and convolves it with a two-dimensional Gaussian window: now also observe 4ω-oscillations at 1.52 PHz. The period of the 4ω-oscillation at a driving wavelength of 789 nm is 660 as. In HH 15 the 4ω-oscillations are relatively weak compared to the 2ω-oscillations. A 4ω-oscillation in the absorption of HH 15 would come from coupling to the absorption of either HH 11 or HH 19. HH 11 is not part of our APT spectrum, and the absorption of HH 19, which is well above the ionization threshold in He, is much weaker than that of HH 15 as can be seen in figure 2. In contrast, both HH 13 and HH 17 are absorbed strongly which allows us to see a relatively strong 4ω-coupling. In the remainder of the manuscript, we will restrict our discussion of the 4ω-oscillations to the absorption of HH 13 and HH 17.
We have also investigated theoretically the ultrafast transient absorption of the APT in He. We have calculated both the microscopic (single atom) and macroscopic absorption probabilities for each of the harmonics in the APT as a function of the APT-IR delay, as described in detail in [19]. Briefly, at the single atom level we compute the response function and ω E ( )are the Fourier transforms of the time-dependent dipole moment and the full APT-IR electric field, respectively. The timedependent dipole moment is calculated by direct numerical integration of the time-dependent Schrödinger equation (TDSE) in the single active electron approximation [19]. ω S ( )is the absorption probability per frequency, so that the probability to absorb a certain harmonic is the integral of ω S ( )over the bandwidth of that harmonic. For the macroscopic calculations we numerically solve the coupled TDSE and Maxwells wave equation [19] which yields the space-and time-dependent electric field at the end of the He gas jet. From this we calculate the same energy integrated absorption probability as in (1). In all of the calculations we use an APT synthesized from harmonics 13 through 21, with initial relative strengths of 0.25, 0.6, 1, 0.6 and 0.25, and all initially in phase. The full width at half maximum (FWHM) duration of the APT is 11 fs, and the peak intensity is 7.0 · 10 10 W cm −2 . The IR pulse has a central wavelength of 795 nm, a FWHM pulse duration of 25 fs, and a peak intensity which varies between 1.0 · 10 12 W cm −2 and 10.0 · 10 12 W cm −2 .
Figures 4(d)-(f) show the Gaussian-Wigner transform of the calculated macroscopic absorption probabilities for harmonics 13, 15, and 17, for an IR intensity of 6.0 · 10 12 W cm −2 and He density of 3.3 · 10 17 cm −3 . The theory agrees well with the experiment, with the 2ω-oscillations being asymmetric around delay-zero (especially in HH 17) whereas the 4ω-oscillations of HH 13 and HH 17 exhibit a very stable maximum at delayzero. The 4ω-oscillations of HH 15 are very weak, also in agreement with the experimental result. We note that at this density, the macroscopic absorption probabilities are very similar to the single atom absorption probabilities. We remark that the difference in strength between the 4ω-oscillations of HH 13 and HH 17 is due to the normalization in (1). In the calculations, the raw strengths of the 4ω-oscillations of HH 13 and HH 17 match exactly since the only four-IR-photon coupling of HH 13 is to HH 17 and vice versa (given that the absorption of HH 21 is very weak).
Delay-zero calibration
The simplest method for the definition of the APT-IR delay-zero is to use the maximum of the energy-integrated total absorption. Figure 5(a) shows the energy-integrated total absorption of the APT integrated in energy from 19.5 to 38.5 eV for an IR intensity of 2.7 · 10 12 W cm −2 . For the fit function, we choose the sum of a Gaussian and a linear function to take into account that the total absorption does not have the same value for large negative and positive delays. For large positive delays the preceding IR pulse does not influence the absorption probability for The envelope shows an asymmetric shape and the maximum is located more than 7 fs before the delay-zero. the XUV radiation, since the IR intensity is too low to ionize or excite He from the ground state. For large negative delays, when the APT precedes the IR pulse, effects like perturbed free polarization decay occur, and these influence the absorption probability [6]. Hence, the total absorption is not expected to be symmetric around delay-zero. The maximum of the total absorption retrieved by the fitting procedure appears to be 9.4 fs before the maximum of the 4ω-oscillations. This leads to a completely different calibration of the delay axis.
Another way to experimentally determine the delay-zero is using a signature based on a nonlinear process of higher order. We use the Gaussian-Wigner transform already presented in the previous section and apply it to the energy-integrated total absorption. By integrating the resulting two-dimensional representation along the frequency axis, we obtain the envelope of the 2ω-oscillations as a function of delay. For the 2ω-oscillations we integrate around 0.76 PHz with an integration window of 0.6 PHz. Figure 5(b) presents the result for an IR intensity of 4.0 · 10 12 W cm −2 . Intuitively, we might expect delay-zero to coincide with the maximum of the nonlinear signature. In contrast, figure 5(b) shows that the 2ω envelope has an asymmetric shape and splits into two peaks where the maximum is located more than 7 fs before delay-zero. In addition, the asymmetric splitting of the 2ω envelope strongly depends on the IR intensity as we discuss in the following paragraph. Accordingly, the envelope of the 2ω-oscillations of the energy-integrated total absorption is also not suitable for a precise delay-zero calibration.
A further idea for a delay-zero calibration is the application of the time-frequency analysis to individual HHs instead of investigating the total energy-integrated absorption. Figure 6 presents the envelope of the 2ωoscillations for HH 13, 15 and 17 for different IR intensities. The envelopes of all three harmonics exhibit a strong dependence on the IR intensity regarding the oscillation strength and the shape. For the lowest intensity shown here, 2.7 · 10 12 W cm −2 , the envelope is symmetrically centered on the delay-zero defined through the maximum of the 4ω-oscillations of HH 13, as in the case of the 4ω-oscillations shown in figure 7. With increasing IR intensity the symmetric envelope starts to split up into two peaks. Furthermore, the amplitude of the envelope decreases. It is also important to note that the splitting of the envelope is asymmetric, in the sense that one of the two peaks is dominant (the description of the asymmetric shape of the envelope is beyond the scope of the work presented here). For HHs 13 and 17 we observe that the peak at negative delays is on average higher, whereas the dominant peak for HH 15 is located at positive delays. Both of these observations are reproduced in the calculations. This complex behavior of the envelope shows that the maximum of the 2ωoscillations is not suitable for a correct delay-zero calibration. Indeed, the maximum peak can be shifted up from the real delay-zero by as much as 20 fs in the intensity range covered with our measurements.
The 4ω-oscillations in the transient absorption signal result from the highly nonlinear coupling of two HHs via four IR photons. For the envelope of the 4ω-oscillations we integrate the Gaussian-Wigner transform in the frequency domain, as we did for the 2ω-oscillations envelope. The integration window again has a width of 0.6 PHz but is centered on a frequency of 1.52 PHz. Figure 7 presents the envelope of the 4ω-oscillations for HH 13 and 17 at two different IR intensities, showing both theoretical ((a) and (b)) and experimental ((c) and (d)) results. In the calculations, delay-zero is known exactly by definition. The theoretical results show that the envelope of the 4ω-oscillations is centered very accurately at delay-zero and possesses a symmetric shape as a function of delay. In the calculations, we observe that this behavior is independent of the IR intensity over the whole range of moderate intensities studied. The experimental results in figures 7(c) and (d) also exhibit a symmetric envelope for the 4ω-oscillations which is not affected by the IR intensity in our IR-intensity range. The symmetric shape enables us to fit the envelope with a Gaussian. The fit function provides, besides the peak value, which we use to define the strength of the oscillations, also a peak position. The peak position is of special interest for us to experimentally define the delay-zero. The excellent agreement between the symmetric
IR-intensity dependence
As discussed by Chen and co-workers, we expect an IR-intensity dependence for the magnitude of the oscillations of the 2ω-and 4ω-oscillations [11]. The motorized iris in the IR beam path enables us to vary the IR intensity in the He target between 2.7 · 10 12 W cm −2 and 1.1 · 10 13 W cm −2 . These intensities are insufficient to induce any transition out of the He ground state to bound excited states or into the continuum. In order to quantify the magnitude of the oscillations, we extract the information on their strength by fitting the temporal envelope of the 2ω-and 4ω-oscillations. The symmetric shape of the 4ω-oscillation strength as a function of APT-IR delay is, as already mentioned in section 3, fitted with a Gaussian. Conversely, as we showed in section 3 and in figure 6, the 2ω-oscillations have an asymmetric envelope, which strongly depends on the IR intensity. Hence, we fit this envelope with a sum of two Gaussians. Figure 8 shows the resulting strengths of the 2ω-and 4ω-oscillations normalized to the value we obtain at the lowest IR intensity of 2.7 · 10 12 W cm −2 . The 2ωcomponent of HH 13 and 17 shows a local maximum around ≈5.0 · 10 12 W cm −2 . If we increase the IR intensity even further the oscillation strength declines. For HH 15 we observe a monotonic decrease of the 2ω-oscillation strength over the full scanned intensity range.
The intensity scan for the 4ω-oscillations of HH 13 and 17 ( figure 8(b)) shows two distinguishable intensity ranges. For intensities up to ≈5.0 · 10 12 W cm −2 the normalized oscillation strength rises monotonically. If we further increase the intensity we enter a different regime, where the normalized oscillation strength is decreasing with increasing intensity.
In the calculations, we do not observe this behavior when using an IR wavelength of 795 nm. Rather, we find that the 4ω-oscillation strengths of HH 13 and HH 17 increase monotonically with intensity. However, in the single-atom calculations we have explored changing the IR wavelength (which means that the harmonic wavelengths, which are locked to the IR wavelength, also change) and then studying the intensity dependence of the HH 13 and HH 17 total absorption, and their respective 2ω-and 4ω-components. Figure 9 shows an example of this two-dimensional exploration of parameter space for the 4ω-component of HH 17 (a), and for the total absorption of HH 13 (b) and HH 15 (c). Figure 9(a) shows that at the shortest IR wavelengths there are two dominant structures in the 4ω-component of HH 17. These can be identified as the resonant enhancement of the absorption due to the 2p state (resonant with HH 13, as shown in figure 9(b)) and the Stark-shifted 3p state (resonant with HH 15, and thus a two-IR-photon intermediate resonance for the HH 13-HH 17 coupling).
Where the two resonances meet, and the excitation dynamics are therefore very complex, there is a wavelength regime in which the 4ω-oscillation strength decreases with intensity. This wavelength is slightly shorter (775 nm) than that used in the experiment (789 nm). One can speculate that the experimental APT harmonics could possibly have been blue shifted in the generating Xe jet due to plasma self-phase modulation, or that theory possibly does not accurately predict the Stark shift of the 3p state.
Photoabsorption and photoionization probabilities
As described in [15], our setup also allows for the detection of charged particles. For this purpose we remove the pulsed gas target from the interaction region and insert a needle target. This target provides a continuous gas flow with a low gas load so that we can operate a time-of-flight (TOF) spectrometer with a micro-channel plate detector. Figure 10 shows the He + ion yield as a function of APT-IR delay (black curve). In order to avoid the situation in which the dominant contribution in the ion yield comes from the harmonics above the first ionization potential, we adjusted the XUV spectrum of the APT. We changed the intensity of the generating field and obtained a spectrum dominated by HHs 13 and 15. In this way the relative IR-induced changes in the photoionization become stronger and the analysis of the oscillating signal is more robust. As can be seen in figure 10(a), the ion yield has a maximum when IR and APT overlap around delay-zero, which mostly follows the maximum of the integrated absorption. In this region direct ionization by the harmonics above the ionization threshold and multiphoton ionization by harmonics below the threshold in combination with IR photons contribute to the ion yield. For large negative delays, when the APT precedes the IR pulse, we detect a higher ion yield than for large positive delays, where the IR field arrives first. In the case of large negative delays, the IR can ionize states which were populated by the preceding APT. This is in agreement with earlier results [20]. Additionally, we show in figure 10 the total absorption probability of the XUV radiation integrated in energy from 19 to 35 eV (red curve). Both in the ion yield and in the total absorption probability we observe strong 2ω-oscillations. In order to verify the appearance of 4ω-oscillations we perform the delay-frequency analysis with the help of the Gaussian-Wigner transform as described earlier. This analysis shows that both the ion and the optical signal exhibit 2ω-and 4ω-oscillations. We again integrate the delay-frequency representation in the frequency domain to obtain the envelope of the oscillations as a function of delay. Figure 10(b) shows the result for the 4ω-oscillations of the ion yield and the total absorption probability as a function of APT-IR delay. The modulations in the envelope of the ion signal are a numerical artifact of the Gaussian-Wigner transform. We fit both envelopes with a Gaussian, shown as dashed lines, and define delay-zero with the center of the Gaussian fit for the absorption probability. As shown in figure 10(b), determining delay-zero from the ion signal in the same fashion (black dotted line) yields good agreement with our definition of the delay-zero extracted from the optical response. A deviation τ Δ of less than 1 fs is measured.
Conclusions
In conclusion, we have measured and characterized sub-cycle oscillations in an attosecond transient absorption experiment in He by using an APT and a moderately strong femtosecond IR pulse with a IR peak intensity between 2.7 · 10 12 W cm −2 and 1.1 · 10 13 W cm −2 . In addition to the earlier observed 2ω-oscillations, we observed 4ω-oscillations, a periodicity which is not included in the initial interacting fields. We show that the 4ω-oscillations in the transient absorption signal can be used to determine delay-zero with an accuracy which is better than other experimental methods, including the total absorption and 2ω-oscillations. A systematic investigation of the IR dependency of the 4ω-oscillations reveals the influence of resonances on the oscillation strength. These experimental results are in excellent agreement with TDSE calculations. Additionally, we compare the total absorption probability of the APT with the He + ion yield. The ion yield exhibits, like the total absorption probability, 2ω-and 4ω-oscillations and the position of the maximum of the 4ω-oscillations is in agreement with the transient absorption measurement. Consequently, the calibration of the delay-zero based on the 4ω signature is not restricted to attosecond transient absorption spectroscopy but may also be very helpful for experiments based on the detection of charged particles. | 6,463.8 | 2014-06-12T00:00:00.000 | [
"Physics"
] |
A Calogero formulation for four-dimensional black-hole micro states
We extract the leading-order entropy of a four-dimensional extremal black hole in ${\cal N}{=}2$ ungauged supergravity by formulating the CFT$_1$ that is holographically dual to its near-horizon AdS$_2$ geometry, in terms of a rational Calogero model with a known counting formula for the degeneracy of states in its Hilbert space.
Introduction
A successful statistical mechanical description of black-hole microstates constitutes one of the most precise tests of any purported theory of quantum gravity such as string theory. The most outstanding insight to be gleaned from string theory can be formulated in terms of the holographic AdS/CFT correspondence which establishes an isomorphism between the Hilbert space of quantum gravity in asymptotically AdS spaces and that of a conformal field theory living on the lower-dimensional boundary of the AdS space. Hence, non-perturbative objects in gravity such as black holes have a microstate description as thermal ensembles in the holographically dual theory. The least well understood of the well-studied AdS/CFT correspondences is the AdS 2 /CFT 1 pair, where the dual conformal quantum mechanics is still an outstanding formulation problem in string theory. AdS 2 is of more interest than just as a two-dimensional toy model of quantum gravity: Every extremal black hole in four dimensions possesses a near-horizon geometry that can be expressed as the direct product of a black hole in AdS 2 and a spherical, planar or hyperbolic horizon of the four-dimensional black hole. The deep-throat geometry of the AdS isolates the constant modes in it from the asymptotic modes of fields in the black-hole background that affect the black-hole horizon and hence its entropy. In fact, the constant modes in the near-horizon geometry are fixed in terms of the quantum numbers of the black hole, and they are independent of their asymptotic values. This is the well known attractor mechanism displayed by these extremal black holes (see [1] and [2] and references therein for a detailed explication). The holographic Bekenstein-Hawking entropy of the black hole is therefore determined purely by states in the near-horizon region. Hence, an encoding of these states in the dual conformal quantum mechanics attains significance in identifying the holographically dual conformal quantum mechanics and counting the microstates of the black hole. In this article, we look at the induced worldline superconformal quantum mechanics of an n-particle BPS system moving in the background of a black-hole in AdS 2 . This quantum mechanics has a reformulation [3] in terms of an n-particle rational Calogero model (of type A n−1 ), 1 and we argue that this encodes the thermal-ensemble states corresponding to the black hole in the holographically dual CFT 1 . We justify this assertion by counting the large-charge degeneracy of states in this model to arrive at the Bekenstein-Hawking entropy of the dual black hole in AdS 2 .
Calogero dynamics and extremal black holes
The near-horizon geometry of a zero-temperature BPS black-hole solution in fourdimensional ungauged supergravity is a black hole in AdS 2 ×S 2 , whose geometry is described as We restrict ourselves to only bosonic backgrounds in the theory. The scalar fields φ i that make up the moduli space in this background and do not correspond to flat directions of the scalar potential are driven to a critical point of this potential. They flow from the asymptotically flat space to the near-horizon geometry, and their extremum values φ i * are fixed entirely in terms of the quantum numbers of the system, independent of the asymptotic starting values. Hence, the near-horizon geometry acts as an attractor in the moduli space. The common radius of the AdS 2 and S 2 spaces is the modulus |Z| of the central charge Z of the supersymmetry algebra and, by the BPS condition, equal to the mass M(φ i ) of the black hole. Both are computed at a point in the asymptotic moduli space coinciding with the attractor point. The three U-duality invariants characterizing the black hole can hence be summarized as 2) As our model system, we consider a bound state of D0 and D4 branes wrapped on a CY 3 × T 2 to produce a four-dimensional dyonic black solution. In the M-theory picture, this can be viewed as a collection of particle momenta on the M-theory circle S 1 M with intersecting M5 branes wrapping a (4-cycle in CY 3 ) × S 1 M . As the near-horizon geometry decouples from the asymptotically flat space, the states contributing to the black-hole entropy must be localized in this region. Hence, probing the Hilbert space of these states will yield a count of the black-hole microstates from a statistical mechanics perspective. As mentioned in the previous section, the Hilbert space of quantum gravity in the near-horizon AdS 2 geometry can be formulated in terms of states in the holographically dual CFT 1 which implies that the black-hole degeneracy must be reproducible in terms of the counting formula for states in this conformal quantum mechanics. We therefore need a proposal for identifying the microstates of the AdS 2 black hole in a conformal quantum mechanical theory. One such proposal for conducting such an analysis is motivated by the observation that this system belongs to the special class of BPS black holes which can be lifted up to five dimensions to yield a near-horizon geometry of a BTZ black hole in AdS 3 × S 2 . The holographic correspondence with the two-dimensional BCFT is well understood in this case, and the black hole can be thought of as a chiral-ensemble excitation in the CFT, with the central charge defined by the D4 branes and with the CFT excitation number of the black hole being equal to its mass. Hence, in the 'black hole in AdS 2 ' scenario, we are motivated to consider the black hole as an excitation about AdS 2 , described in terms of degrees of freedom that can be encoded in terms of a superconformal quantum mechanics. This suggests that the black hole is naturally represented as a halo of n BPS particles moving in the AdS 2 background. These particles are governed by a superconformal quantum mechanics, with a target space that is the symmetric product of AdS 2 and S 2 . This is a putative formulation of the holographically dual CFT. We proceed to delineate this connection below.
3 AdS 2 -Calogero correspondence 3.1 Rational Calogero from AdS 2 Gravity in two dimensions is a conformal quantum field theory living on a strip. States in this theory are in a one-to-one correspondence to those defined in the BCFT, which in this case is also the holographically dual field theory. This field theory is in fact some superconformal quantum mechanics and must encode all the bulk states. A single particle moving in the AdS background is described by a superconformal quantum mechanical worldline theory. For a scalar particle, in the large-radius limit of an AdS geometry, parametrized in the Poincaré patch via this is the rational 2-body Calogero model, with the Hamiltonian 2 where λ is proportional to the angular momentum Casimir of the particle in four dimensions. The energy must be evaluated with respect to the AdS global time coordinate, where the Killing vector is smooth everywhere, and the Hamiltonian for this coordinate is given by with an undetermined non-zero force constant ω. The addition of the last term arises by passing from the Poincaré time t to the global time τ , which are related as where K is the special conformal transformation generator of the SO(2,1) isometry group of AdS 2 , given by K = 1 2 q 2 in the large-R limit. The ground-state wave function in this case reads Hence, the particle has no support at the center of AdS, and its wave function is localized farther out. The limiting value of the wave function at the boundary acts as a local insertion on the BCFT and, hence, defines the operator in the BCFT corresponding to some state in the bulk. As a consequence, a state corresponding to an excitation in AdS 2 can be mapped to a superparticle moving in the bulk and such states can be organized in terms of the asymptotic symmetry group of AdS 2 . Thus, we can regard the black hole as an ensemble of n BPS particles in AdS, which define a superconformal quantum mechanics with a target space given by n symmetrized copies of AdS 2 × S 2 . In the fully symmetric sector, the SU(2) R-charge of the superconformal quantum mechanics will be simply the common R-charge of the n particles multiplied by n. It follows that the angular momentum matrix of this system is a multiple of the identity matrix.
Quantizing the spectrum of this system will generate the Hilbert space that counts the entropy of the BPS black hole. To this end, we observe that, in our chosen model of the dyonic black hole as a supersymmetric D0-D4 bound-state ensemble, the microstate counting is essentially a field-theory computation of the Witten index for n particles. Their momenta are equal to the D0-brane quantum numbers in the two-dimensional worldvolume theory of intersecting M5 branes on CY 3 ×S 1 M , at a point in the moduli space corresponding to V CY 3 ≪ R S 1 M . This theory is simply two-dimensional SU(n) super Yang-Mills on a cylinder, 3 which has been shown in [7] to be equivalent to an n-particle rational Calogero model governed by the Hamiltonian (3.6) As in the single-particle case, the spectrum of the system is computed with respect to the global time τ , and the corresponding Hamiltonian can be related to the Schwarzschildtime Hamiltonian by adding the superconformal generator K. This introduces a confining harmonic well to the rational Calogero model, . (3.7) In the Higgs limit where the spacing between the positions of all particles vanish and all the particles are driven to the origin of the coordinate system, the analysis of the ground states is similar to that of the single-particle system, and so the discussion for the singleparticle case goes through for the multi-particle system. Hence, this model offers a putative formulation of the CFT 1 required to count the large-charge leading-order black-hole entropy. We now proceed to show how this model encodes the vacuum states of the holographically dual quantum mechanics and how the AdS 2 geometry emerges in the bulk by analyzing the flow of the ground state in the space of its coupling constants. We test this model by deriving the degeneracy formula for this system.
AdS 2 from Calogero
How may an asymptotically AdS 2 bulk background arise from a rational Calogero model? It is necessary to check that the ground state of this model in some limit (corresponding to approaching the boundary, i.e. q → 0) must move in the space of coupling constants of the deformed model, such that this state feels the vacuum geometry of the bulk gravity theory, namely AdS 2 . The metric it should see is nothing but the Fischer information metric for the ground-state wave function of the deformed Calogero Hamiltonian (3.3), given by In the limit of q → 0, the wave function can be approximated to leading order by while all higher-order deformations of the Hamiltonian can be neglected. This yields a twoparameter space graded by α and ω. The Fischer information metric for a space parametrized by n variables Θ = (θ i ), with i = 1, . . . , n, is given as where |ψ(q, Θ)| 2 is the probability density on the wave-function space. The Fischer metric on the two-dimensional space under consideration is computed explicitly to be Hence, to summarize, if one considers a deformation of the conformal Calogero Hamiltonian by a harmonic oscillator term which initiates a flow in the space of coupling constants, then in the limit of q → 0, to observe the change in the ground state, we need to consider only the quadratic deformation so as to obtain a two-dimensional space of coupling constants. The latter is found to be essentially Euclidean AdS 2 . 4 This explicitly goes to show that the flow of the ground state in the space of relevant coupling constants, near the boundary of AdS 2 , falls into a representation of the SL(2, R) symmetry group that annihilates the vacuum of the dual CFT 1 . Hence, we now have a dynamical model which is a putative candidate for counting the degrees of freedom of the holographically dual CFT 1 . We now run our first check of this counting by computing the degeneracy of states in the spectrum of this Hamiltonian, dual to the ground state of a BPS particle moving in the background of a black hole in AdS 2 .
Degeneracy from the Calogero Hamiltonian
The presence of the harmonic oscillator discretizes the n-body spectrum in (3.7) so that it acquires energy eigenvalues [8,9], Here, β is the periodicty of the Euclidean time circle and (upto, numerical factors) equal to the inverse of the black-hole temperature. We work in the large-n limit, which implies p n (m) → p(m) and simplifies the generating function to The asymptotic growth of p(m) can be obtained by a saddle-point approximation of the Laplace transform of the degeneracy formula, in the low β limit, and by using the transformation property of the Dedekind η function under Poisson resummation to give where the approximation sign indicates a suppression of all quadratic corrections to the saddle point and of other subleading terms. As the system we are studying exhibits no classical mass gap, 5 we need to pick the largest possible Euclidean time periodicity to define the Euclidean temperature, and hence we take the Euclidean periodicity to be 2πn ω . This is equivalent to rescaling ω in the spectrum by a factor of n and demanding that we count only eigenvalues with m being integral multiples of n. Therefore, in the above expression, m should actually be replaced by mn.
Now, let us consider the physically relevant values of this model for the black-hole statistical mechanics. Essentially, we are counting a Witten index on the full Hilbert space of the system, and so we should be looking at the ground-state degeneracy. The full conformal quantum gravity has a net central charge of zero, which is the sum of the conformal anomalies due to diffeomorphisms, ghosts and matter. As the matter content in the black-hole background does not differ from that of 'empty' AdS, the matter contribution to the stress tensor is the same in both cases, and hence the only matter contribution to the stress tensor can come from modes which are fully annihilated by the complete SO(2,1) isometry of the AdS 2 vacuum. This fixes the excitation quantum number to m = c 24 . (4.6) Another argument for the above relation can be put forward as follows. The ground-state degeneracy we are counting is in the black-hole background, while the Calogero spectrum has been evaluated in the Poincaré patch of AdS 2 . A conformal transformation can be used to map the ground state of the black-hole background to that of the Poincaré patch. Under this transformation the stress tensor picks up an inhomogenous term coming from the Schwartzian derivative, which raises the ground-state energy by an amount of c 24 in the Poincaré patch [11]). Here, c is the ground-state Casimir energy or central charge of the holographically dual CFT. From a dual CFT perspective, this implies that all such black holes must have a Casimir energy equal to c 24 , implying again that m = c 24 . The number n of particles in the Calogero model is equal to the number of degrees of freedom of the CFT 1 and thus equal to c. Consequently, mn = c 2 24 , and the leading-order contribution to the black-hole entropy is found to be S = 2π which matches the standard Bekenstein-Hawking black-hole entropy of the four-dimensional black hole reduced on the two-sphere in the near-horizon geometry [12]. Note that the relevant degrees of freedom that go into the computation of the black-hole entropy can be interpreted as the degrees of freedom of the AdS vacuum that the black-hole observer does not see, resulting in an entanglement entropy. Hence, one can extract leading-order information about the microstate description of bulk states in AdS by using general properties of an equivalent formulation of the BCFT in terms of a known superconformal Calogero model.
Discussion and conclusions
The formulation of a holographic dual to quantum gravity in AdS 2 has been the least well understood of the frequently analyzed gauge-gravity correspondences. Concurrently, extremal black holes in four dimensions with a near-horizon geometry have a density of states that is related to the square root of the energy, reflecting an underlying degeneracy of microstates that is captured by a CFT 2 as opposed to a CFT 1 . This article builds upon a proposed formulation in [3] of the microstates of a black hole in AdS 2 in terms of the worldline quantum mechanics of conformal Calogero particles in AdS 2 . The degeneracy of states in this model accurately reproduces the Bekenstein-Hawking entropy without taking recourse to viewing the underlying CFT as the chiral half of a two-dimensional CFT or implementing the Cardy formula. The accuracy of the computation indicates that this formulation offers a putative way to understand quantum gravity in AdS 2 and opens avenues for new checks on the gauge-gravity correspondence in two dimension. If the Calogero model is to be dual to string theory in AdS 2 , then the metric on the space of coupling constants, as generated by the flow of generic states in this space, must be the emergent bulk metric of the geometry in which the motion of a BPS particle is governed by the worldline Hamiltonian that includes those couplings. The background so derived is dual to the states whose flow is under consideration. We have already demonstrated this for the vacuum state as a necessary condition for this theory to be a holographic candidate for AdS 2 . Investigating the Fisher information metric on the full Hilbert space of the Calogero model, by which bulk geometry emerges from superconformal quantum mechanics, might yield further insights into gauge-gravity duality in two as well as in higher dimensions. | 4,309.2 | 2015-09-10T00:00:00.000 | [
"Physics"
] |
Determination of trace amounts of selenium in natural spring waters and tea samples by catalytic kinetic spectrophotometry
In this work, a new kinetic method is described for the determination of trace Se(IV) in natural spring waters and commercial tea samples. The method is based on the activation of Se(IV) onto the indicator reaction in acidic medium. The reaction was monitored using a fixed time approach of 20 min at 680 nm. The variables affecting the reaction rate were evaluated and optimized. The method allows the determination of Se(IV) in the range of 0.0125-1.0 mg L-1 with a detection limit of 3.6 µg L-1. The precision was in range of 0.63-3.15% (as RSD %) with a higher recovery than 98.6%. The method has been found to be selective against matrix effect. The method was applied to the speciation analysis of inorganic Se species present in the selected samples. The method was statistically validated by analysis of two certified samples and comparing the obtained results to those of HG-AAS analysis. Also, the total Se levels of the samples were determined by using both methods after conversion of Se(VI) into Se(IV) in ultrasonic bath in acidic medium for 30 min at 85-90 °C. The results were in good agreement with those of HG-AAS. The Se(VI) level of the samples was calculated from the difference between amounts of total Se and Se(IV).
Introduction
Selenium is an essential trace element with only a small difference between toxic and essential levels. It has been reported that selenium has an anticancer effect, protecting the human body from free radicals and preventing heavy metal toxic effects 1 , but it is also a potential toxicant 2 . Selenium concentration in fresh waters is usually around 20 µg L -1 . The selenium content of surface waters is greatly influenced by pH, being high in acidic (pH < 3.0) and in alkaline waters (pH > 7.5). Traces of selenium ranging from 0.01 to 10 µg L -1 are commonly found in community drinking water.
The guideline level of selenium in drinking water set by the World Health Organization (WHO) was 10 µg L -1 3 . As bioavailability and absorption strongly depend on chemical form in which the element is present, rapid, accurate and precise analytical methodologies for the qualitative and quantitative speciation analysis of selenium in foodstuffs are now becoming more and more necessary 4 . In addition to food, beverages may act as another important potential ingestion way to elements in our daily life 5 . As a popular nonalcoholic and healthy beverage, tea is massively consumed in the world 6 . The regular consumption of tea may contribute to the daily ABSTRACT: In this work, a new kinetic method is described for the determination of trace Se(IV) in natural spring waters and commercial tea samples. The method is based on the activation of Se(IV) onto the indicator reaction in acidic medium. The reaction was monitored using a fixed time approach of 20 min at 680 nm. The variables affecting the reaction rate were evaluated and optimized. The method allows the determination of Se(IV) in the range of 0.0125-1.0 mg L -1 with a detection limit of 3.6 µg L -1 . The precision was in range of 0.63-3.15% (as RSD %) with a higher recovery than 98.6%. The method has been found to be selective against matrix effect. The method was applied to the speciation analysis of inorganic Se species present in the selected samples. The method was statistically validated by analysis of two certified samples and comparing the obtained results to those of HG-AAS analysis. Also, the total Se levels of the samples were determined by using both methods after conversion of Se(VI) into Se(IV) in ultrasonic bath in acidic medium for 30 min at 85-90 °C. The results were in good agreement with those of HG-AAS. The Se(VI) level of the samples was calculated from the difference between amounts of total Se and Se(IV). dietary requirements of several essential elements. Considering the enormous consumption of tea and the investigation focusing on selenium, there is great importance to study inorganic selenium speciation in tea samples as well as in natural waters 7 .
There are many analytical techniques with their self-advantages and disadvantages, such as capillary electrophoresis online coupled with hydride generation-atomic fluorescence spectrometry (CE/HG AFS) 2 , inductively coupled plasma mass spectrometry (ICP MS) after a separation and preconcentration procedure 4,7 , spectrophotometry with and without preconcentration 8,9 , graphite furnace atomic absorption spectrometric (GF AAS) after preconcentration with coprecipitation 10 , electrothermal atomic absorption spectrometry (ET AAS) 11 , inductively coupled plasma optical emission spectrometry with hydride generation (HG ICP OES) 12 , atomic absorption spectrometry with hydride generation (HG AAS) 13 , and high performance liquid chromatography with UV detection (HPLC) [14][15][16][17] in literature for speciation analysis of inorganic selenium species.
In routine analysis, the spectrophotometric method is a versatile and cost-effective analytical tool that is easy to use, simple and requires no expert user in this area for underdeveloped and developing countries. However, the abovementioned methods are time-consuming or less sensitive. Catalytic kinetic methods are noteworthy due to their significant advantages in determining many organic and inorganic components at trace levels without the need for prior separation and enrichment step with the choice of a good indicator, catalyst, inhibitor and activator.
Different catalytic kinetic methods have been reported for the determination of inorganic selenium species like Se(IV), Se(VI) and total selenium in waters [18][19][20][21][22][23][24] , including micellar sensitized kinetic quantification of low levels of bisphenol A in foodstuffs by spectrophotometry 25 . Significant number of methods for the determination of selenium in real samples have been based on the catalytic effect of Se(IV) on the reduction of absorbing chromogenic or fluorogenic dyes in visible region, 380-800 nm, such as Toluidine blue 26 , Methylene blue 27 , Gallocyanin 28 , Semicarbazite 29 , and Ponceau S 30 . Some of these methods have high selection limits or suffer from many interfering species such as Te(IV) and As(V), have time-consuming and laborious processes, and at the same time these methods are unstable. There are a limited number of catalytic kinetic methods that allow the determination of Se(IV), Se(VI) and total selenium in water samples 31 . Therefore, there is still a need to develop more sensitive and selective catalytic kinetic spectrophotometric methods for the determination and speciation of selenium in real matrix samples such as natural hot springs and tea samples.
In the present study, Se(IV) was used as an activator to increase the sensitivity and stability of the indicator system, Hg(II)-PMA-NaH2PO2-H2SO4. The variables affecting the reaction rate were evaluated in detail and optimized to give the best calibration sensitivity. The developed activation-controlled kinetics system has been successfully applied to speciation analysis of the inorganic selenium species present in natural spring water and tea samples. The proposed kinetic method is sufficiently sensitive, selective, very simple and practical to use. The existing kinetic method, without any pre-separation and enrichment, is as accurate and reliable as the sensitive and element selective HG AAS, which is commonly used for selenium analysis in real samples.
Instrumentation
In the present study, a spectrophotometer equipped with a 1 cm light path quartz cell (Shimadzu model UV-Visible 1601 PC, Kyoto, Japan) was used for absorbance measurements at 680 nm. A thermostatic water bath was used to control the reaction temperature with accuracy of ±0.5 °C. A stopwatch was used to record the reaction time. Shortly before the start of the indicator reaction with and without the activator, all the solutions were preheated to a temperature of 70 °C. A Sonicor model SC-121TH ultrasonic probe with total volume of 4 L was used for ultrasonic dissolution (optimal conditions, 35 kHz, 220 V, for 15 min at 65 °C). In addition, HG AAS (in terms of total Se analysis) was also used to check the accuracy of the method. For comparative purposes, the hydride for Se analysis was run with an atomic absorption spectrometer (HG AAS, Shimadzu AAS-6300, HVG-13 channels) forming under the following operating conditions: 4.0 (w/v) NaBH4, 6-8 mol L -1 HCl, carrier argon gas at a pressure of 0.32 MPa at a flow rate of 70 mL min -1 , air at a flow rate of 7.0 L min -1 for fuel/burner acetylene at a flow rate of 15 L min -1 , 0.2 nm bandwidth, 194.0 nm wavelength and 10 mA lamp current.
Chemicals and solutions
All chemicals used were in analytical reagent purity. 1000 mg L -1 Se(IV) and Se(VI) solutions were prepared by dissolving the appropriate amounts of solid Na2SeO3 and Na2SeO4 in doubly distilled water and completing with water to the line. 100 mL of 1.5 mol L -1 H2SO4 solution was prepared by diluting the concentrated solution with water. 100 mL of 0.5 (w/v) PMA solutions was also prepared by dissolving 0.5 g of solid PMA in diluted NaOH and diluting with water. 100 mL of 0.5 mol L -1 hypophosphite solution was prepared by dissolving suitable amounts of solid NaH2PO2 in water, homogenizing thoroughly with water and soaking in water. 100 mL of 0.01 mol L -1 Hg(II) ion solution was prepared by dissolving a known amount of solid Hg(NO3)2 salt in analytical purity in water and diluting with water. The other reagents (HNO3, H2O2, HCl and 1.5% (w/v) NaBH4 in 0.2% (w/v) NaOH) used in dissolution of the samples, interference studies and selenium analysis steps by Hg AAS, were used by either direct or preparing solutions at known concentrations.
Preparation of samples for analysis
Natural cold-and hot-spring water samples were directly collected from the cold and hot spring (Kalin Town, Sivas, Turkey) and stored in a cool, dark place to protect them from heat and light. Water samples were acidified using dilute HNO3 to prevent metal ions from adsorbing on the walls of the measurement containers. Samples were passed through a 0.45 μm pore size membrane filter to remove suspended solids prior to analysis. To determine the total selenium, samples were submitted to analysis under optimum reagent conditions, without any other pretreatment, except for prereduction with HCl. Where necessary, known volumes of masking reagents such as thiourea and NH4F were added to the solution medium prior to analysis to control possible interference resulted from Te(IV), Cu(II), Bi(III) and Sn(IV) ions. At least, one blank solution for each sample was also analyzed to evaluate metal contamination with the reagents used.
Initially, the certified tea sample (about 0.1-0.2 g) was subjected to analysis for different sonication times (5-30 min), temperature (25-80 °C) and H2SO4 concentration (0.2-5 mol L -1 ) under 35 kHz ultrasonic power for the optimization of the ultrasonically assisted dissolution process. The certified value of sample was considered as base to assess the effectiveness of the procedure. Optimal values were found to be an acid concentration of 3.5 mol L -1 , sonication temperature of 65 °C and sonication time of 15 min after each optimization step. Real tea samples were solubilized in these conditions, converted to hydride, H2Se after reduction with 1.5% (w/v) NaBH4 in acidic medium (4.0 mol L -1 HCl) and detected with HG-AAS. Approximately 0.1 g of tea samples was taken in PTFE dissolution vessels for five repetitive analyzes, and each was mixed with 5 mL of concentrated acid and/or acid mixture (H2SO4 or H2SO4-HNO3-H2O2, 2:2:1 (v/v)). The flasks were covered with a watch glass, and then dissolved at 60-80 °C for 3-4 h. The acid and/or acid mixtures were intermittently added until the color of the solution became transparent, and the heating was continued. The excess acid was evaporated until a semi-dried mass remained; 2.0 mL of 0.2 mol L -1 HNO3 was added to this after cooling and centrifuged for 10 min at 3500 rpm. Final volume was completed to a volume of 5.0-10 mL using 0.5 mol L -1 HNO3, and the known volumes of sample solution were analyzed by kinetic method. For the tea samples below the detection limit, the standard addition-based calibration curve approach was used when necessary, and the total selenium level of the sample was determined from difference after prereduction. The blank samples were analyzed in a similar way.
The catalytic kinetic procedure
A suitable volume (1.0 mL) of standard Se(IV) or sample solution in linearity range of 0.125-10.0 μg mL -1 was transferred to a centrifugation tube of 10 mL, and then 0.5 mL of 1.5 mol L -1 H2SO4, 0.1 mL of 0.5 mol L -1 H2PO2 -, 1.5 mL of 0.5% (w/v) PMA and 0.75 mL of 0.01 mol L -1 Hg 2+ solutions were sequentially added. After that, the volume was completed to 10 mL with water and incubated at 70±0.5 °C for a fixed time of 20 min. The thermostat was left in the equilibrium in the water bath. Finally, the solution was brought to room temperature by holding under the running tap. The absorbance of the indicator solution at 680 nm for analysis of Se(IV) was measured against water using a 1-cm quartz cell and taken as an analytical signal. In a similar way, under optimal conditions, the absorbance was measured for the noncatalyzed solution without Se(IV) and the signal ΔAC was taken into account. As a measure of calibration sensitivity, Δ(ΔA): ΔAC-ΔA0 difference as a net analytical signal was plotted versus Se(IV) concentration, and a calibration curve was generated. The selenium contents of the samples were determined using this calibration curve.
Absorption properties
Phosphomolybdic acid (PMA), a heteropoly acid with three acid ionization constants (pKa1,2,3: 2.40, 4.32 and 5.46), is a dye which is commonly used for sensitive detection of low molecular mass compounds such as alkaloids, phenolic species and steroids for visualization of complex biological structures in TLC. PMA (H3PMo12O40, FA: 1825.25 g moL -1 ), also known as dodeca molybdic acid, is a yellowish-green compound, soluble in polar organic solvents such as water and ethanol. Conjugate unsaturated compounds reduce PMA to Mo-blue. Color intensity increases with the number of conjugated double bonds present in the dye molecule 32 . The PMA's implementation principle is based on the fact that many inorganic and organic materials form highly colored blue mixed oxide where the initial Mo(VI) is reduced to Mo(IV). Similar reaction products can be easily monitored by light and electron microscopy and can be measured by spectroscopic techniques, usually at a wavelength of 600-900 nm, depending on the nature of the reducing agent used 33,34 . Different investigators [35][36][37][38][39][40][41] reported different absorption spectra with different wavelengths for maximum absorption for the Mo-blue complex. In the present study, when sodium hypophosphite was used as a reducing agent for PMA, and when H2SO4 was added, intense blue color appeared. The shape and maximum absorption wavelength of the absorption spectra are changed by changing the concentration of acid in solution in the presence of Hg(II) and Se(IV) ions at constant concentrations. A comparison between these spectra showed that the maximum absorbance for a solution containing 0.075 mol L -1 H2SO4 in a final volume of 10 mL was observed at 680 nm. Increasing Hg(II) and Se(IV) concentrations at constant acid concentration also led to an increase in the absorbance at the characteristic absorption wavelength. For this reason, 680 nm was taken into account as working wavelength for further studies.
Indicator reaction
PMA was used as a redox indicator because of its ability to produce a product such as Mo-blue, which has a characteristic absorption effect on the visible region when reduced. Reduction of PMA in the presence of hypophosphite in acidic medium at room temperature is very slow. However, Se(IV) at trace levels activates selectively the catalytic effect of Hg(II) ions in acidic medium at 70 °C. This can be explained by the stable complex formation of Se(IV) in the catalytic cycle of Hg(II) ions in the acidic medium. According to Pearson's acid-base theory, Hg(II), which is soft Lewis acid, will interact with the soft base Se(IV) to form Hg-Se bond. The rate increase observed in the catalytic behavior of Hg(II) in the presence of trace Se(IV) was spectrophotometrically monitored at 680 nm. The catalytic reaction mechanism, based on the expected activation, can be predicted as follows:
Optimization of the analytical variables
The effect of reaction variables (acidity, concentration of reactants, temperature, time, and ionic strength of the medium) on the net reaction rate was extensively evaluated and optimized by monitoring each variable at a certain interval by keeping all other variables constant, based on optimization tool which is also well-known as univariate approach. In fact, in which there is not an interaction between variables, this approach is simpler, easy to use, more reliable, and moreover does not require an expert user (herein, a mathematician or statistician, which can statistically use multivariate models in optimization step) in his/her area to determine whether or not a variable is significant, and to establish relationships between variables as only one variable is used each time to obtain results. The optimum values of the variables for triplicate measurements of selenium at fixed concentration of 0.25 mg L -1 were determined to obtain the minimum detection limit and maximum sensitivity at each determination. The results were represented as error bars showing the mean and standard deviation of each replicate measurement sets in all figures.
Effect of acidity
The effect of acidity on the sensitivity, which is a measure of the rate difference between catalytic and noncatalytic reactions, was investigated in the range of 0.1-2.0 mL of 1.5 mol L -1 H2SO4. The sensitivity, Δ(ΔA), for the fixed-time of first 20 min at 70 °C in Fig. 1 was plotted against the volume of acid by keeping other reagent concentrations constant, and the maximum sensitivity was observed to be 0.5 mL. Sensitivity decreases at lower and higher acid volumes. This high acidity also indicates that the rate of the uncatalyzed reaction is more effective than the catalyzed reaction. At low concentrations, the activation power of Se(IV) may not be effective enough. As a result, 0.5 mL of 1.5 mol L -1 H2SO4 concentration was considered to be sufficient for further studies.
Effect of reducing agent volume
The effect of the hypophosphite concentration on the sensitivity was investigated using 0.5 mol L -1 hypophosphite at constant concentration, by keeping the other reagent concentrations constant at 680 nm in Fig. 2, and its volume was ranged from 0.025 to 1.25 mL for 20 min at 70 °C. The volume of hypophosphite increased for both catalyzed and uncatalyzed reaction in range of 0.025-0.1 mL. At higher volumes, the sensitivity has decreased proportionally as the difference between the rate differences decreases. Therefore, a hypophosphite volume of 0.1 mL was considered as the optimal value.
Effect of PMA volume
The effect of PMA volume on sensitivity was investigated in the range of 0.25-2.0 mL of 0.5% (w/v) in Fig. 3. The sensitivity for the fixed time of 20 min at 70 °C was plotted versus its volume, by keeping the other reagent concentrations constant at 680 nm, and the maximum sensitivity was observed to be 1.5 mL. In low volumes, sensitivity has increased up to 1.5 mL, while in higher volumes the slope gradually declines with a decreasing slope. This decrease in sensitivity is due to the fact that the noncatalytic reaction rate is faster than the catalytic reaction. High sensitivity at low volumes can be explained by the fact that the activation power of Se(IV) is more effective. Therefore, a PMA volume of 1.5 mL was considered as optimal for further studies.
Effect of Hg(II) volume
The effect of Hg(II) volume on sensitivity was examined in the range of 0.1-2.0 mL of 0.01 mol L -1 . The sensitivity for the fixed time of 20 min at 70 °C was plotted versus Hg(II) volume in Fig. 4, by keeping the other reagent concentrations constant at 680 nm, and the maximum sensitivity was observed to be 0.75 mL. Sensitivity increased up to 0.75 mL at low volumes with increasing slope, declined with a decreasing slope in range of 0.75-1.5 mL, and remained constant in range of 1.5-2.0 mL. This decrease in sensitivity may be due to the fact that the noncatalytic reaction rate is faster than the catalytic reaction. Another explanation is that after the Hg(II)-complex formed in the presence of Se(IV) is reduced to Hg(I)-complex by hypophosphite, the reduced complex or Hg2 2+ ions can be converted to metallic Hg and Hg(II) by disproportionation. High sensitivity at low concentrations can be explained by the fact that the activation power of Se(IV) is more effective. Therefore, an Hg(II) volume of 0.75 mL was considered as optimal for further studies.
Effect of temperature on sensitivity
At optimal conditions, the effect of temperature on the sensitivity was investigated in range of 40-85 °C in Fig. 5 because no significant difference in sensitivity was observed in room conditions. Both the catalytic and noncatalytic reaction rates increased with increasing temperature in range of 40-70 °C, in which the rate of catalytic reaction was more pronounced. Sensitivity decreased at temperatures higher than 70 °C. This reduction may be due to the fact that the noncatalytic reaction rate is relatively faster. For this reason, a temperature of 70 °C was considered as optimal for further studies. To check for possible signal fluctuations at this temperature, the analysis was carried out in a water bath, where the temperature was thermostatically controlled with an accuracy of ±0.2 °C.
Effect of reaction time on sensitivity
Under optimum reagent conditions, the effect of reaction time on sensitivity was studied in time interval of 5-40 min at 70 °C in Fig. 6. The catalytic and noncatalytic reaction rates were monitored at 680 nm, and the sensitivity increased with increasing time in 5 min intervals; however, the catalytic reaction rate was more pronounced in this time interval. Sensitivity was decreased at longer times than 20 min. This decrease can be caused by acceleration in the noncatalytic reaction rate, so as to lead to a decrease in the signal difference. So, for more advanced applications, a reaction time of 20 min was considered as optimal.
Effect of inert salt concentration as a function of ionic strength on sensitivity
The effect of ionic strength on the catalyzed and uncatalyzed reaction was investigated in the volume range of 0.1-1.0 mL of 0.5 mol L -1 KNO3 and K2SO4 solutions in Fig.7. In the presence of KNO3, at low volumes up to 0.5 mL, the sensitivity did not change, but it began to decline with increasing slope at higher volumes. However, even at low volumes in the presence of K2SO4, it was observed that the sensitivity decreased with increasing slope. This indicates that the ionic strength of the environment should be controlled in real complex specimens with high ionic strength. Another solution is to conduct sample analysis with a standard addition calibration curve based on the addition of the known standards of Se(IV). The limits of detection and quantification of the present kinetic method (LOD and LOQ) was calculated by taking 3-and 10-folds of the standard deviation of the signal for the ten replicate measurements of the blank solution without analyte, and found to be 3.6 and 12 μg L -1 , respectively. Five replicate measurements were performed for different concentrations ranging from 0.0125 to 1.0 μg mL -1 and the percent recovery, relative error (RE) and relative standard deviation (RSD) values were found by substituting the analytical signals at the calibration equation. The detailed information is presented in Table 1. From the results, it can be stated that the proposed method is accurate and precise.
Selectivity
To determine the selectivity of the kinetic method against interferents, the effect of possible interfering species on the rate of catalytic reaction was investigated by changing the concentration of interfering ions to keep the concentration of Se(IV) constant at 250 μg L -1 in Table 2. The tolerance limit has been defined as the concentration of interfering species that does not cause more than ± 5.0% of relative error. The results are summarized in detail in Table 2. They indicate that all other interfering species, except for Te(IV), Bi(III), Cu(II) and Sn(IV) ions, do not significantly affect the analytical signal when selenium at the level of 250 μg L -1 was determined in a final volume of 10 mL under optimum reagent conditions. The effect of the Te(IV), Cu(II) and Bi(III) ions has been increased to a tolerance ratio ranging from 35 to 70 with addition of thiourea. The interference of Sn(IV) was improved to a tolerance ratio of 50-fold with the addition of NH4F. Similar errors resulting from positive and negative interactions between analyte and matrix in real samples can be minimized by using the calibration curve approach based on spiking at three concentration levels around quantification limit.
Analytical applications of the method
At initial, the method was applied to samples taken from hot-and cold-spring waters after submitting to certain pretreatments. Samples were treated by using directly kinetic method for analysis of Se(IV). Then, for total selenium analysis, samples were treated by boiling with 4.0-5.0 mol L -1 HCl at 85-90 °C for 30 min, for prereduction of Se(VI) to Se(IV). The total Se contents of samples were calculated from the difference between total Se and free Se(IV) amounts obtained by using the kinetic method with and without prereduction. To ensure the accuracy and precision of the method, total selenium levels were also monitored after conversion to hydride with NaBH4 in HCl medium after acidification. From the results obtained by both methods in Table 3, it can be concluded that the current method is as accurate and precision as the routine HG AAS method. Table 3. Determination and speciation analysis of inorganic Se(IV), Se(VI) and total selenium levels in hotand cold-spring waters by both kinetic method and HG AAS (n: 4). a The average ± standard deviation of replicate measurement results found using both kinetic method and HG AAS. b The spiking recoveries for analysis of Se(IV), Se(VI) and total selenium by the present kinetic method were defined by the following equation: spiking recovery% = Cfound -Creal/Cadded × 100. Here, Cfound, Creal and Cadded are the concentration of analyte after the addition of a known amount of standard in the real sample, the concentration of the analyte in the real sample and the concentration of a known amount of standard spiked to the real sample, respectively.
The present kinetic method for validation was applied to two separate SRMs given in Table 4 after sample preparation to analysis with two different wet dissolution approaches and conversion of Se(VI) to Se(IV) by boiling sample solutions at 85-90 °C with 4.0-4.5 mol L -1 HCl. To ensure the accuracy of the method, two certified samples were also analyzed by HG AAS, which is an independent comparison method after prereduction with NaBH4, by thoroughly dissolving and homogenizing the samples in H2SO4 by under ultrasonic effect. The results were quite consistent with the certified values of selenium. The kinetic method after wet digestion with H2SO4 appears to be somewhat questionable in terms of both accuracy and precision. This indicates that the selenium present in the sample matrix cannot be fully solubilized and released. It can be said that the result obtained with the other digestion approach is highly compatible with that of the HG AAS, and the results obtained with both methods are quite consistent with the certified values in terms of accuracy and precision. The kinetic procedure was applied to two different SRMs given in Table 4. The results are in good agreement with the selenium values. The relative standard deviations for solid samples were in the range of 5.8-8.7% as a measure of precision. The precision (as RSD%, n: 5) for both SRMs varies between 5.0-6.62%. On the other hand, the precision of HG AAS analysis results was in range of 4.57-5.07%. Because of the importance of selenium consumption by foods and beverages such as tea for healthy life, the present method was applied to different brand black and green tea samples after two different digestion approaches. The sensitive and selective method commonly used for total Se analysis such as HG AAS to ensure the accuracy of the method was used in parallel after the ultrasonicbased dissolution approach in H2SO4 medium, and the results were found to be highly consistent with those of the present kinetic method. The results were extensively presented in Table 5. A report in literature has shown that the total Se and Se(IV) levels in four commercial tea leaves supplied from different regions of China varied from 191 to 724 µg kg -1 and from 173 to 613 µg kg -1 respectively 16 , which are consistent with our results (ranging from 281 to 708 µg kg -1 ). Another research group in Turkey has determined a total selenium level of 68 µg kg -1 with a standard deviation of 5 µg kg -1 in a black tea sample supplied from the market from Turkey 10 . In a similar way, from analysis of the samples by means of ICP OES, it has been observed that total selenium levels are 280/1250 µg kg -1 and 1093/1668 µg kg -1 respectively in Turkish green and black tea samples with and without lemon. It is clear that lemon addition synergistically increases the selenium concentration in both the black teas and green teas 42 . Selenium is a trace mineral that is essential to good health but required only in small amounts. Selenium is incorporated into proteins to make selenoproteins, which are important antioxidant enzymes. The antioxidant properties of selenoproteins help prevent cellular damage from free radicals. Free radicals are natural by-products of oxygen metabolism that may contribute to the development of chronic diseases such as cancer and heart disease. Other selenoproteins help to regulate thyroid function and play a role in the immune system. In this sense, it can be concluded that citric acid in lemon leads to an improvement in antioxidant property of selenium or selenoprotein in tea. a The average ± standard deviation of replicate measurement results found using both kinetic method and HG AAS b The spiking recoveries for analysis of total Se by both the present kinetic method and HG-AAS were defined by the following equation: spiking recovery% = Cfound -Creal/Cadded × 100 here, Cfound, Creal and Cadded are the concentration of analyte after the addition of a known amount of standard in the real sample, the concentration of the analyte in the real sample and the concentration of a known amount of standard spiked to the real sample, respectively.
Conclusions
A spectrophotometry in visible region with selection of an indicator suitable for analyte is a comparatively low cost, robust and easy-to-operate analytical technique that is readily available in most analytical research laboratories. In the selected kinetic mode, it is a fast, reproducible and versatile technique with analytical frequency of nine samples (three samples plus six calibrations standard) per 20 min. Because the developed method is based on a Se-activated indicator reaction and the final intermediate product is stable for fixed time of 20 min even at a temperature of 70 °C, this detection tool can be efficiently used for the fast, accurate and reliable analysis of selenium species. In addition, the method allows a detection of low levels of Se(IV) up to 3.6 µg L -1 in a linear working range of 80-fold without need to a separation/preconcentration step. So, the determination of inorganic selenium species in other sample matrices can be performed even at low concentrations without any matrix effect. Finally, the method can be considered as an alternative to expensive, time-consuming/tedious and complex analytical techniques such as ICP MS, ICP OES, ET AAS or GF AAS, HG AAS, HG AFS, and HG AFS in combination with CE or LC. Moreover, these detection techniques require expert-users in his/her area as well as poor precision and low recovery at low concentrations. Also, the detection limit of the method is especially comparable to most of the similar spectrophotometric and kinetic spectrophotometric methods reported in the literature in terms of linear working range, sensitivity, selectivity and reproducibility. The only disadvantage of the method is that the indicator reaction takes place at high temperature (70 °C) and long time (20 min) limiting sampling rate related to kinetic analysis of samples. | 7,529.2 | 2019-10-01T00:00:00.000 | [
"Chemistry"
] |
Structural and Functional Profiling of the Human Histone Methyltransferase SMYD3
The SET and MYND Domain (SMYD) proteins comprise a unique family of multi-domain SET histone methyltransferases that are implicated in human cancer progression. Here we report an analysis of the crystal structure of the full length human SMYD3 in a complex with an analog of the S-adenosyl methionine (SAM) methyl donor cofactor. The structure revealed an overall compact architecture in which the “split-SET” domain adopts a canonical SET domain fold and closely assembles with a Zn-binding MYND domain and a C-terminal superhelical 9 α-helical bundle similar to that observed for the mouse SMYD1 structure. Together, these structurally interlocked domains impose a highly confined binding pocket for histone substrates, suggesting a regulated mechanism for its enzymatic activity. Our mutational and biochemical analyses confirm regulatory roles of the unique structural elements both inside and outside the core SET domain and establish a previously undetected preference for trimethylation of H4K20.
SMYD3 and its 4 vertebrate paralogs (Fig. 1A) derive from an ancient family of SET HMTases with orthologs present in plants, animals, fungi, and some (typically parasitic) protozoa [16]. All SMYDs have the N-terminal terminal portion of SET (N-SET), followed by a Myeloid translocation protein 8, Nervy, and DEAF-1 (MYND) domain, N-terminal to an intermediate or linker sequence (I-SET) of variable length and configuration [17,18]. The remainder of the SET domain (C-SET) comes next, sequentially, and includes critical catalytic folds. The SMYD SET ''core'' ends in a cysteine-rich zinc binding fold (post-SET). SMYDs 1-4 have an additional, previously uncharacterized ,150 residue C-terminal domain (CTD), whereas SMYD5 has primarily insertions in its MYND and I-SET sequences. Most prototypic SET active site residues are conserved in SMYDs [19,20], but there are notable exceptions (discussed further below). SMYD1 and SMYD3 were identified as H3K4me3-specific HMTases [5,21], whereas SMYD2 catalyzes H3K36me2 [19]. The only previous characterization of SMYD3 HMTase was performed by Silva et al. [22], who reported that substrate release is facilitated by tumor-specific proteolysis of the SMYD3 N-terminal 34 residues. Aside from this, little has been done to establish the functional interface of SMYD3 with its substrates or its structural underpinnings.
Conversely, numerous studies have strongly implemented SMYD3 as a protooncogene in hepatocellular, colon and breast carcinoma, based on its high levels of endogenous expression, cancer-associated promoter polymorphisms, and cell proliferative effects produced by enforced SMYD3 over-expression in normal cells or SMYD silencing in tumors [5,22,23,24,25]. Approximately 80 genes have been identified as targets of SMYD3 HMTase, including Nkx2.8, a homeobox transcriptional regulator upregulated in hepatocellular malignancies as well as cell cycle mediators, oncogenes, and developmental fate determinants [5,22,23,24,25]. The considerable if not unprecedented interest in SMYD structure and its implications for putative anti-cancer drug development is evidenced by publication of three structures which appeared just prior to [26,27] and during [28] the submission phase of this manuscript (addressed in sections below). We present here, in addition to the independent high resolution co-crystal structure of the full length human SMYD3 with the S-adenosyl methionine (SAM) analog Sinefungin, a detailed mutational and biochemical assessment of SMYD3 function. We provide a structural basis for the proposed [29,30] differential regulation of SMYD HMTase activities via their MYND domain binding partners. We demonstrate that SMYD3 can function as a transcriptional repressor via MYND interactions as well as through hitherto undetected H4K20 HMTase activity. We show that in addition to the MYND domain, the aromatic cage structure throughout the methyltransferase active site and the unique carboxy terminal domain have the potential to regulate SMYD HMTase methylation state and substrate specificity.
Results and Discussion
Preferential H4K20 activity of SMYD3 Human his-tagged SMYD3 was purified following baculoviral or bacterial expression (Fig. S1). In addition to the expected H3K4me3 activity, SMYD3 methylated all histones to various degrees with highest activity for histone H4 when measured on mixed calf thymus histone acid extracts or on individual recombinant histones (Fig. 2). Western blotting with anti-H4 antibodies indicated that the maximal activity was for H4K20me3 (Fig. 3A), which was unanticipated given that this has generally been associated with establishment of heterochromatin. Using a series of synthetic H4 peptides bearing mono-, di-, and trimethylation states at K20, we confirmed this specificity and also observed significant activity toward H4K20me2 (Fig. 3B). It is generally thought that the majority of H4K20 methylation occurs in a stepwise process in which monomethylation by the SET HMTase PR-SET7/SET8 serves as a substrate for di-and trimethylation by SUV420H [12,14]. That H4K20me2 served as a far better substrate than unmethylated or monomethylated species (Fig. 3B) indicated that SMYD3 alone, at least in vitro, is capable of progressive methylation at this lysine mark. H4K20 methylation is not a general property of SMYDs, as evidenced by the near baseline activity of SMYD1 (Fig.3A).
While it would be ideal to have a clear structural rationale for the substrate selectivity demonstrated here, crystal structures available at the time of writing do not provide enough detail to make a clear and definitive statement. Alignment of the SMYD3 (or SMYD1) structures with other structures featuring an H4 peptide fragment bound to an MTase, such as in the SET8 structure [31], shows considerable clashes between the H4 peptide and the SMYD protein. Close inspection of the overlay indicates that the H4 peptide forms part of the support for the SAM binding pocket in the SET8 structure, whereas the SAM binding pocket is fully formed and stabilized in SMYD3, independent of any substrate. Significant conformational changes would be necessary to accommodate the H4 peptide conformation as seen in [31]. Alternatively, one could use the conformation of the H3 peptide as seen in SET7/9 (discussed in more detail below), but the threading of the H4 residues onto the H3 backbone (bound to SET7/9) leads to several steric and electrostatic clashes between the modeled H4 peptide and the SMYD3 protein. Significant conformational changes and possibly water bridges would be necessary to model the H4/SMYD3 or H4/SMYD1 interaction in this peptide conformation. Further understanding of the structural roots of the observed selectivity profile requires additional studies beyond the scope of the current work.
In addition to PR-SET7 and SUV420H, only two additional SET-domain-containing proteins have been previously implicated in H4K20 methylation: The trithorax group activator Ash1 and the nuclear receptor-binding SET domain-containing protein (NSD1) (reviewed in [15]). As with SMYD3, Ash1 and NSD1 in vitro methylate other histone lysines in addition to K20 [32,33]. However, whether Ash1 and NSD1 are bona fide H4K20 HMTases has been challenged because of the questionable specificity of the peptide antisera employed [15] and by the lack of direct confirmation both in vitro [34,35,36] and in vivo [11]. In support of the case of SMYD3, we observed strong di-and preferential trimethylation of H4K20 on the most relevant in vitro substrate, the nucleosome (Fig. 3B). Nucleosomal H3K4me3 activity was not detected for SMYD3 (data not shown). The in vivo relevance of SMYD3-mediated H3K4 vs. H4K20 remains to be determined, but we return to this issue below in the context of the crystal structure.
Conventional SET and novel features of the SMYD3-Sinefungin complex Baculoviral SMYD3 was co-crystallized with the SAM analog Sinefungin, and the structure was solved to 1.8 Å resolution ( Fig. 1B; Table 1) [37]. SMYD3-Sinefungin crystallized as 2 symmetry-related molecules/unit cell (P2 1 ). However, no convincing dimer interface exists, and the mass of the purified protein following gel filtration was 50,187d ( [17,18,38,39,40,41,42,43,44]. Modification of the strictly conserved and catalytically essential Y239 results in the expected loss of function (Fig. 4A). Mutation of several residues conserved within many conventional N-SETs (e.g., G15, G17) and C-SETs [17,44] (e.g., C186, E192 and H206) abrogated SMYD3 HMTase activity (Fig. 4B), confirming the functional conservation of the split SET domain. About one third of the SMYD3 substrate binding site is formed by the Intermediate SET spacer (I-SET) region located C-terminal to MYND (Fig. 1). The significance of this variable linker region in SET substrate selectivity has already been noted [39,41,44,45]. However, the SMYD3 I-SET is unusually long and exhibits extraordinary structural conservation, in lieu of primary sequence similarity, with the I-SET of the Rubisco Large Subunit Methyltransferase (RLSMT) (Fig. S2E).
The close structural similarity to other SET domains allowed us to superimpose onto SMYD3 the H3K4 peptide coordinates from the SET7/9 ternary complex (Fig. 5A) [41]. The peptide is bound in the conventional manner; i.e., the methyl-lysine is oriented on the opposite surface of SMYD3 from the SAM/Sinefungin methyl donor, with a narrow channel connecting the two surfaces of the SET domain. The orientation is similar to that modeled in mouse (m)SMYD1 [26] (Fig. 5B), with selectivity opportunities on either flank of the target lysine. The relatively conservative mutation T184A, which contacts the N-terminal side of the peptide, confers not only increased activity toward H4, but a striking gain of activity toward H3. The C-terminal of the modeled peptide clashes with the CTD, suggesting that the CTD also regulates the specificity of substrate binding (more below).
The SMYD3 aromatic cage
The SMYD3 post-SET provides another commonly shared feature-an essential aromatic residue, Y257 (see Fig. 4B), that anchors against the conserved SET core to form the hydrophobic channel interface with substrate ( Fig. 5B). A notable difference in SMYDs is that a critical SAM-contacting tyrosine, which occupies , and, as a negative control, tri-methylated [H4(3)] peptides were employed in an in vitro HMTase proximity bead assay with baculoviral SMYD3 and SMYD1 (negative control). Degree of methylation was measured by scintillation counting in CPM. Left panels: Western analysis using anti-monoand trimethyl-specific antibodies (Upstate) confirm in vitro specificity of SMYD3 for H4-K20me3. (B) SMYD3 preferentially trimethylates H4-K20 in reconstituted chromatin. Recombinant oocyte nucelosomes were assembled into chromatin, followed by in vitro HMTase assays and SDS-PAGE resolution of reaction products. SMYD3 inputs were increased from 0.5 mg to 2.4 mg, (triangle above lanes), and western analyses were performed with the indicated histone H4 methylation state-specific antibodies (middle panels), with a pan-anti-H4 (lower panel) providing a loading control for chromatin input. doi:10.1371/journal.pone.0022290.g003 activity. Extended pi cloud interactions between aromatic sidechains extending from the aromatic cage appear common in MTases. For example, residue F259 interacts with the adenine ring of Sinefungin and the Y239 of the aromatic cage, which itself packs against Y257. F216 packs against Y198 which packs against the F183 of the aromatic cage. A similar network may be seen in the SMYD1 structure, with preservation of the aromatic network around Y252, the equivalent of Y239 in SMYD3. Despite the sequence identity and near identical backbone placement of the aromatic residues around F182 (the equivalent to F183 in SMYD3), the SMYD1 network assumes a very different set of side chain conformations, driven by the insertion of the adjacent leucine side-chain into the arrangement observed for SMYD3 (Fig. 5B). In fact, F182 in SMYD1 is rotated away from the catalytically competent conformation [26]. As suggested by this geometry, SMYD1 should be, and is, a less efficient MTase than SMYD3 in catalyzing higher methylation states of the common H3 substrate (Fig. 6). Fig S3 shows the similarities of the aromatic network among other lysine MTases, with the extent of the aromatic networking trending with the amount of methylation preferred. In general, stabilization of the biologically active conformation of the aromatic residues forming the cage surrounding the target lysine of the substrate should lead to more efficient transfer rates and therefore, indirectly, to the MTase's proclivity toward mono-, di-, or tri-methylation.
An intact MYND domain is required for catalysis and transcriptional specificity
The MYND domain is the principal distinguishing element separating the SMYDs from other SET domain-containing proteins. MYND consists of two interlocking zinc binding folds and is present in several transcriptional regulators where it facilitates interactions with partner proteins through PXLXP motifs [47,48,49]. Though unfettered by SET domain constraints, the integrity of MYND is essential to SMYD3 basal function, as substitution of its Zn2+-ligating residues (C49 or C87) eliminated HMTase activity (Fig. 4B). This observation is consistent with previous analyses of the AML1/ETO MYND domain which indicated that coordination of zinc atoms is essential to maintain the intact conformation of that MYND domain, with loss of zinc coordination leading to a disordered domain [49]. Loss of coordination here also likely leads to a disordered domain, but more importantly, the lack of order affects the catalytic fidelity of SMYD3, indicating that some constraints on the linking sequence between the N-and C-SET domains exist.
The intact MYNDs of AML-1/ETO and SMYD3 bind a common PXLXP-containing protein, the N-CoR transcriptional co-repressor (Fig. 7A) [50]. That N-CoR can bind to SMYD3 and ETO similarly is consistent with our finding that SMYD3 can act as a MYND-dependent transcriptional repressor (Fig. 7B,C). These data confirm that the nature of the MYND-bound ligand influences SMYD3 transcriptional outcomes [49].
Potential contribution of SMYD3 and SMYD1 CTDs to catalysis SMYDs1-4 have an additional ,150 residue C-terminal domain (CTD) whose function was recently proposed [26] to regulate MTase activity of SMYDs. The SMYD3 CTD is a superhelical 9 a-helical bundle which constricts the floor of the substrate binding site opposite to the I-SET domain, preventing the trivial insertion of substrates (Fig. 8). In fact, the CTD clamps further down on the peptide binding space of SMYD3 than of SMYD1, featuring a greater superhelical pitch, such that it contacts the MYND domain (circled region of Fig. 8). The difference in pitch is likely driven by the larger turn in the C-SET domain of SMYD1 which significantly displaces the entire CTD relative to its location in SMYD3. There is still a relatively large space near the C-terminus of the modeled peptide where the inner wall of the pocket is decorated by polar residues from the CTD (mainly helix 4). We suggest that these polar residues would cooperate with the post-SET residues to select for specific sequences N-and C-terminal to the methyl-lysine, even in the absence of a significant displacement. In this context, the CTD could function as a cap necessary to bind substrates effectively and selectively. Complexities of SMYD3 substrate entry/release Constrictions imposed by the I-SET, post-SET and CTD domains onto the peptide C-terminus suggest that substrate release is a complicated process for SMYD3. Silva et al. [22] reported that substrate release is facilitated by tumor-specific proteolysis of the SMYD3 N-terminal 34 residues; that is, the N-SET is "autoinhibitory" to catalysis. To the contrary and consistent with our structure, we found that elimination of the N-SET by truncation at position 44 or 74 or by destabilizing its conserved first b-turn, eliminated HMTase activity (Fig. 4B, C). We suggest, instead, that substrate release will require a significant conformational change in the CTD, which should be readily detected by differential shifts in the geometry/contacts of unmethylated and methylated peptides.
A 1.7 Å crystal structure of the human SMYD3-sinefungin complex was reported by Sirinupong et al. [27] during the final drafting of this manuscript. Despite space group/crystal packing differences (please compare Table I of both manuscripts), the two HMTase-substrate inhibitor complexes could be virtually superimposed. Indeed, a number of active site and MYND domain residues, predicted in that paper as important for basal catalysis or PXLXP-binding interactions, were confirmed by our mutational (Fig. 4) and biochemical (Fig. 7) analyses. Based on the differential geometries adopted by the CTDs of SMYD1 and SMYD3 (vide supra), Sirinupong et al. [26,27] speculated that the CTD must undergo a hinge-like movement to relieve its inherent autoinhibition of substrate entry and/or release. However, neither of the structural analyses rules out the possibility that, at least for basal catalysis, the CTD performs a positive enzymatic function by stabilizing the active site. As shown previously [5], SMYD3 HMTase is stimulated by HSP90, a chaperone whose deregulation is also strongly implicated in a broad array of malignancies [51,52]. It will be critical to determine if HSP90 binds directly to SMYD3, and if so, whether this interaction generates a CTD conformational change of the nature they proposed.
Another structural analysis of SMYD3 was published by Xu et al. [28] during the review process. Notwithstanding their considerably lower resolution (none better than 2.8Å ), their structure overlays very closely with ours. Much as in the work of Sirinupong et al. [26,27], Xu et al. [28] speculate on the previously observed [5,21] association of SMYD3 with HSP90. While they do not establish a causal link, they do help establish some of the residues necessary for basal activity against an uncharacterized admixture of histones. The two residues lowering activity (D241 and D332) have a structural role, making apparently key intramolecular hydrogen bonds, while the one that does not make any intramolecular hydrogen bonds (E192) fails to alter basal activity. Interestingly, E192 is proximal to T184 in space, suggesting the trajectory of the N-termini of histones lie less towards the CTD and more towards the MYND domain, which may explain why an intact MYND domain is essential for activity. Given that Xu et al. [28] find weak but dose dependent SMYD3 HMTase activation with DNA binding to the MYND domain, one might speculate that the influence of MYND domain conformation changes may lie not only with its interactions with the C-SET residues adjacent to the catalytic binding site but also with the histone on the exterior surface.
Conclusions
SMYD MTases share many key features in their SAM binding and lysine side-chain binding sites. A key beta-turn motif in the N-SET is essential for activity, with deletion of the motif or mutation of the superfamily signature residues G15 and G17 leading to a complete loss of activity. This motif serves as a flap that partially encloses the active site and provides residues that can interact with SAM. Targeting the disruption of this loop therefore becomes a logical objective for oncology research, as it should be sufficient to eliminate SMYD3 activity. The residues which comprise the motif are typically quite diverse and only modestly conserved, suggesting that selectivity may be achieved as well. The main drawback to targeting the loop is that the current motif features a relatively shallow groove and inhibitors would have to induce a conformational change that cannot be visualized from the current structures. Nevertheless, simulation methods could be used to explore this region of the protein.
A more likely approach to targeting SMYD3 activity is to design inhibitors that bind either the SAM-or substrate-binding pockets. Our examination of the active site suggests that disruption of the aromatic cage structure is likely to succeed, even if the site of catalysis is not occupied by an inhibitor. Differences in intramolecular aromatic-aromatic contacts lead to different stabilizations of the catalytically competent protein conformation. These differences in stability likely influence the MTase activity and hence the preference for the extent of methylation conferred on their substrates. The difference in MTase activity between SMYD1 and SMYD3 highlights this disparity: even though the sequences are identical, subtle changes in the packing influence the aromatic cages, with the more active SMYD3 retaining a stronger aromatic network than the less active SMYD1.
The MYND domain inserts into an otherwise structurally conserved SET motif that extends back to bacteria and viruses. We established that SMYD3 function is dependent on a properly folded MYND domain, suggesting that its role is not only in attracting particular binding partners but also in influencing the conformation of the N-and C-SET domains. Consistent with this hypothesis, T184 is on the far end of a beta sheet connected to the MYND domain. We establish that the rather conservative mutation of that residue to alanine leads to increased activity and promiscuity. This result suggests that small changes in the chemistry and position of the threonine side chain can lead to significant changes in catalytic activity and preference. Such changes may be possible through propagated changes in MYND domain conformation on the substrate binding pocket or may arise from changes in its direct association with a portion of the histone. More research is needed to refine these possibilities and to clarify which other residues confer the substrate preference for H4K20.
Although the MYND domain helps provide functional selectivity toward SMYD substrates, the CTD may also regulate the level of HMTase activity, serving as a cap necessary to bind substrates effectively and selectively. More experimentation is necessary to clarify the roles played by the CTD of SMYD3. New opportunities to design potent and selective agents may arise from the further characterization of these two domains and their interrelatedness to the SET domains.
Nevertheless, how do we explain the apparent biologic paradox that the oncogenic SMYD3 catalyzes histone lysine marks that promote both localized promoter activation (H3K4me3) and, even more aggressively, the repressive stabilization of heterochromatin (H4K20me3) [10,11,13,53]? While global reduction of H4K20 trimethylation has been suggested to be a hallmark of human cancer [11,15], stable and heritable H4K20-mediated repression of selected pol II genes, including tumor suppressors, has recently become appreciated as an epigenetic feature of cancer [4,11]. For example, the tumor suppressor target of methylation-induced silencing (TMS1/ASC) becomes methylated and silenced in human breast and other cancers [54,55,56,57]. Silencing is accompanied by a local shift from a histone activating mark, H4K16 acetylation (Ac), to H4K20 trimethylation [58]. Selective promoter-proximal "pausing" results, such that initiated Pol II accumulates just downstream of the transcription start site [59]. Taken together, SMYD3 may serve both as a repressor of tumor suppressor expression and a promoter of oncogene expression. These studies illustrate the complexities of gene-specific regulatory mechanisms in the epigenetic program and underscore the critical importance of tightly regulating the targeting of SMYD3 for regional deposition of H3K4me3 and H4K20me3.
Materials and Methods
Crystallography X-ray diffraction data are summarized in Table 1. Details of protein purification and crystallization are provided below. The data were indexed and integrated using the program MOSFLM [60] and then merged using the program SCALA [61]. The subsequent conversion of intensity data to structure factor amplitudes was carried out using the program TRUNCATE [60]. The program SnB [61] was used to determine the location of Zn sites in the protein using the Bijvoet differences in data collected at the Zn peak wavelength. The refinement of the Zn sites and the calculation of the initial set of phases were carried out using the program MLPHARE [60]. The electron density map resulting from this phase set was improved by density modification using the program DM [60]. The initial protein model was built into the resulting map using the program ARP/Warp [62] and XTALVIEW/XFIT [63]; (available on request from San Diego Super Computer Center). This model was refined using the program REFMAC [60] with interactive refitting carried out using the program XTALVIEW/XFIT [63]; (available on request from San Diego Super Computer Center).
Molecular biology
Immunoprecipitations, histone methyltransferase assays, and mutagenesis were performed as previously described [19]. Details of each of these experiments and a list of the templates and mutagenic primers employed are provided below. Dual luciferase assays using GAL4-DBD-SMYD3 wildtype and GAL4-DBD-SMYD3 mutants (C49G and C87G) were performed and normalized following transient transfection into 293T cells as previously described [19] and are detailed below.
Cloning and baculoviral expression
The full length human SMYD3 protein (Genbank Accession No. AAH31010; SEQ. ID NO:1) was engineered to contain a Cterminal hexa-histidine tag. Sequence verified clones were each transformed into DH10 BAC chemically competent cells (Invitrogen Corporation, Cat#10361012). The transformation was then plated on selective media. 1-2 colonies were picked into minipreps and bacmid DNA isolated. The bacmids were transfected and expressed in Spotoptera frugiperda (SF9) cells using the following standard Bac to Bac protocol (Invitrogen Corporation, Cat.#10359-016) to generate viruses for protein expression. SF9 cells were used for 48 hr expressions in SF-900 II media.
The full length cDNA of HSP90 was cloned from Hep G2 cells [ATCC HB-8065]. The chaperone HSP90 was co-expressed with SMYD3 by co-infection with virus for each. Cells were collected by centrifugation and frozen pellets were used for purification of full length SMYD3. These procedures resulted in expression of SMYD3 and HSP90 with 3 amino acids added to their N-terminal end (MAL) and an additional 8 amino acids (EGHHHHH) added to the C-terminal end of SMYD3.
Mutagenesis, cloning, and bacterial expression
Point mutants were generated using the GeneEditor in vitro Site-Directed Mutagenesis System (Promega) according to the instructions of the manufacturer For PCR, samples were heated to 94uC for 5 min, subjected to amplification for 16 cycles of 0.5 min at 94uC, 0.5 min at 55uC, and 0.5 min at 68uC and extended after the last cycle at 72uC for 7 min. Polyhistidine (6xHis)-tagged SMYD3 wildtype, truncation and substitution mutants were cloned into Gateway (Invitrogen) pET TM
Crystal preparation
Diffraction quality crystals were obtained by hanging or sitting drop containing 0.75 ml of protein 10 mg/ml and 1 mM Sinefungin in 25 mM TrisHCl pH 7.6, 150 mM NaCl, 1 mM TCEP and 0.75 mL reservoir solution: 100 mM Tris-HCl pH 8.5, 17% PEG 20 K, 100 mM Magnesium Chloride hexahydrate in a sealed container containing 500 mL reservoir solution, incubated overnight at 21uC. Crystals were also grown with a reservoir solution of 100 mM HEPES pH 7.5, 16% PEG 3350, 200 mM Magnesium Chloride.
The crystals were individually harvested from their trays and transferred to a cryoprotectant consisting of 75-80% reservoir solution plus 20-25% glycerol or PEG400. After ,2 min, crystals were collected and transferred into liquid nitrogen and then transferred to the Advanced Photon Source (Argonne National Laboratory), where a two wavelength MAD experiment was collected, using a Zn peak wavelength and a high energy remote wavelength.
Immunoprecipitation (IP) and Western blotting
293T cells were transiently transfected, harvested 48 hours later, and then lysed in RIPA buffer (150 mM NaCl, 1% NP-40, 0.5% DOC, 50 mM Tris pH 8, 0.1% SDS) containing protease inhibitors (Roche Molecular Biochemicals, Indianapolis, IN). Cell supernatants were incubated with primary anti-tag mAb or polyclonal anti-H3 Ab (0.5-2 ug/ml) centrifuged at 4uC, and then incubated with protein A-Sepharose/protein G PLUS-agarose (Santa Cruz Biotechnology) at 4uC with rotation for 1 hour. Resulting immune complexes were washed 6 times and immunoprecipitated proteins were resolved on 8-15% SDS-PAGE. Separated proteins were transferred to nitrocellulose (Protran BA, Schleicher and Schuell, NH), blocked using 5% nonfat milk (10 g nonfat milk, 150 mM NaCl, 10 mM Tris pH 8, 0.05% Tween-20) overnight at 4uC. Membranes were incubated with 1u antibody for 1 hour at room temperature, extensively washed, then incubated with 2u antibodies for 1 hour at room temperature. Blots were exposed and developed using the ECL blot detection reagent (Amersham Pharmacia Biotech) according to the instructions of the manufacturer.
Histone methyltransferase assays
For in vitro HMTase assays, SMYD3 proteins (0.1-1 mg) +/2 equivalent amt. of human HSP90a (Assay Designs, Ann Arbor, MI, USA, cat. no SPP-776D) were incubated with 1 mg of mixed histones from calf thymus (Sigma) or recombinant core histones (Upstate), or with 1 mg reconstituted chromatin generated from oocyte nucleosomes (graciously provided by Dr. Yali Dou, Univ. Michigan Med School) prepared as described previously [64,65]. Recombinant oocyte histones were assembled onto a 201 bp '601' DNA template [66] by mixing ,1.5 mg octamers with 1 mg DNA template in a volume of 10 ml containing 2 M NaCl, 10 mM Tris (pH 8.0), 0.1 mM EDTA, and 10 mM b-mercaptoethanol), followed by stepwise, 10-fold reduction of salt by addition of Tris-EDTA to a final concentration of ,0.2 pmol nucleosome/ml 200 mM NaCl/Tris-EDTA). For radioactive based assays, 2 mCi S-adenosyl-L-[methyl-3 H] methionine (SAM; Amersham Biosciences) was included as a methyl donor. All reactions were carried out in 40 ml HMT reaction buffer (10 mM dithiothreitol, 100 mM NaCl, 4 mM MgCl2, and 50 mM Tris-HCl at pH 8.8) at 30uC for 3 hours. An 18% SDS-PAGE gel was used to resolve the samples and fluorography was used visualize positive methylation. Substrate loading was visualized by Coomassie blue staining.
Preferential H4 methylation state catalyzed by SMYD3 was confirmed by proximity bead HMT assays as follows: 2 mCi of 3 H-SAM (Amersham Biosciences) were incubated with 0.1 mg of SMYD3 and 0.1 mg of histone H4 peptide, non-, mono-, di-, or trimethylated at K20 (sequence of the peptide: acetyl-GGKGLGKGGAKRHRKVL-biotin). The assay was carried out for three hours at 30uC in 20 ml HMT reaction buffer. At the end of the incubation time, 100 ml of binding buffer (1x PBS containing 1% NP-40 and 0.1% SDS) was added. The substrate was then precipitated using 10 ml of Streptavidin PVT SPA Scintillation Beads (Amersham Biosciences; used as 50% slurry in binding buffer) for one hour at room temperature on a rocking platform, followed by five washes in binding buffer and scintillation counting.
Transcription assays
The SV40-luciferase reporter, containing five copies of the GAL4-UAS, was obtained from J. Milbrandt [19]. pRL-TK was purchased from Promega. The GAL4-SMYD3 WT and MYND mutant mammalian expression vectors were constructed by PCR amplification (59 ATG CGC GCC GAG GCC CGC; 39 TCA GTG GCT CTC AAT CTC CTG) and restriction digestion (Not I; Xba I) followed by subcloning into the GAL4-DBD plasmid [20]. Dual luciferase assays were performed and normalized following transient transfection into 293T cells as previously described [19]. Figure S1 Expression and purification of recombinant human SMYD3. (A) Baculoviral SMYD3. 6X-his-SMYD3 was expressed in sf9 cells as detailed in Methods, purified by Ni-NTA, HiTrap-Q, and Superdex-75 column chromatography (left) and confirmed for purity by mass spectrometry (right). SMYD3 purified as a monomer of predicted (50189) mass. These fractions were suitable for crystallization and further biochemical analyses (described in text). (B) Bacterial SMYD3. 6X-his-SMYD3 (wildtype, catalytic mutant H206/A, and other mutants analyzed in Suppl. Fig. 4) were cloned into Invitrogen Gateway plasmids as described in Methods. Following IPTG-induction in Scarab MG232 (left), proteins were purified by Ni-NTA (center) and confirmed with polyclonal anti-SMYD3 (right). | 6,886.6 | 2011-07-14T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Synthesis of New 2-Halo-2-(1H-tetrazol-5-yl)-2H-azirines via a Non-Classical Wittig Reaction
The synthesis and reactivity of tetrazol-5-yl-phosphorus ylides towards N-halosuccinimide/TMSN3 reagent systems was explored, opening the way to new haloazidoalkenes bearing a tetrazol-5-yl substituent. These compounds were obtained as single isomers, except in one case. X-ray crystal structures were determined for three derivatives, establishing that the non-classical Wittig reaction leads to the selective synthesis of haloazidoalkenes with (Z)-configuration. The thermolysis of the haloazidoalkenes afforded new 2-halo-2-(tetrazol-5-yl)-2H-azirines in high yields. Thus, the reported synthetic methodologies gave access to important building blocks in organic synthesis, vinyl tetrazoles and 2-halo-2-(tetrazol-5-yl)-2H-azirine derivatives.
Introduction
2H-azirines are highly reactive and easily available compounds. Thus, they have been widely used as versatile building blocks for the synthesis of various nitrogen-containing compounds. They can act as nucleophiles, electrophiles, dienophiles, and dipolarophiles in a variety of organic reactions. Furthermore, selective cleavage of each of the three bonds can be achieved, and this leads to highly reactive intermediates such as vinylnitrenes, nitrile ylides, and iminocarbenes [1][2][3][4][5][6].
We have previously described a general route to tetrasubstituted alkenes via a non-classical Wittig reaction [7]. Particularly interesting was the possibility of preparing haloazidoalkenes since the study of their thermolysis led to the development of a new route to 2-halo-2H-azirines starting from α-oxophosphorus ylides [8][9][10]. This study allowed the synthesis of a range of 2-halo-2H-azirines with several substituents, including the first examples of 2-bromo and 2-iodo-2H-azirine derivatives. Since then, a few examples of halo substituted azirines prepared from haloazidoalkenes by thermal or photochemical decomposition have been reported [11][12][13][14].
Recently, we became interested in the development of synthetic routes to functionalized 5-(substituted)-1H-tetrazoles. In this context, the synthesis of novel 2-(tetrazol-5-yl)-2H-azirines using The synthesis of the target tetrazol-5-yl phosphorus ylides 6 is outlined in Scheme 2. N-Benzylchloroacetamide (2) was prepared in good yield from the reaction of benzylamine and chloroacetyl chloride by an analogous method to that described in the literature [23]. Chloroacetamide 2 was treated with phosphorus pentachloride, followed by addition of sodium azide and water to give 1-benzyl-5-chloromethyltetrazole (3) in 54% yield [24]. Reaction of chloromethyltetrazole 3 with triphenylphosphine afforded the corresponding phosphonium salt 4 in very high yield (90%), which was subsequently neutralized with aqueous sodium hydroxide solution over a short period of time with ice-cooling to give phosphorus ylide 5 bearing a tetrazolyl substituent in moderate yield (65%). As previously observed with other tetrazolic phosphorus ylides, phosphorane 5 was hydrolyzed in water to give triphenylphosphine oxide and 5-methyl-1H-tetrazole [25,26]. For this reason, in order to prevent this hydrolysis the base treatment of 4 was carried out in water for only 2 min with vigorous stirring and the resulting precipitate was filtered and immediately dried under reduced pressure. However, even with these controlled conditions mixtures of ylide and hydrolysis products were obtained making the purification procedure difficult. Reaction of phosphorus ylide 5 with ethyl oxalyl chloride and benzoyl chloride in the presence of triethylamine gave ylides 6a and 6b, respectively, in moderate yields (Scheme 2). Aiming to improve ylides of 6a and 6b yield and to overcome the difficulties observed in the synthesis of ylide 5, we tried to carry out the synthesis of ylides 6 starting directly from the phosphonium salt 4 in the Scheme 1. Synthetic strategy for the synthesis 2-halo-2-(1H-tetrazol-5-yl)-2H-azirines.
Results and Discussion
The synthesis of the target tetrazol-5-yl phosphorus ylides 6 is outlined in Scheme 2. N-Benzylchloroacetamide (2) was prepared in good yield from the reaction of benzylamine and chloroacetyl chloride by an analogous method to that described in the literature [23]. Chloroacetamide 2 was treated with phosphorus pentachloride, followed by addition of sodium azide and water to give 1-benzyl-5-chloromethyltetrazole (3) in 54% yield [24]. Reaction of chloromethyltetrazole 3 with triphenylphosphine afforded the corresponding phosphonium salt 4 in very high yield (90%), which was subsequently neutralized with aqueous sodium hydroxide solution over a short period of time with ice-cooling to give phosphorus ylide 5 bearing a tetrazolyl substituent in moderate yield (65%). As previously observed with other tetrazolic phosphorus ylides, phosphorane 5 was hydrolyzed in water to give triphenylphosphine oxide and 5-methyl-1H-tetrazole [25,26]. For this reason, in order to prevent this hydrolysis the base treatment of 4 was carried out in water for only 2 min with vigorous stirring and the resulting precipitate was filtered and immediately dried under reduced pressure. However, even with these controlled conditions mixtures of ylide and hydrolysis products were obtained making the purification procedure difficult.
Molecules 2015, 20, page-page 2 to 4-(tetrazol-5-yl)-1H-imidazoles, a class of compounds with potential biological activity [22]. Aiming to extend this approach to 5-substituted tetrazoles, we decided to prepare 2H-azirines combining halogen and tetrazole functionalities, since the presence of the extra functional group could be particularly interesting.
Results and Discussion
The synthesis of the target tetrazol-5-yl phosphorus ylides 6 is outlined in Scheme 2. N-Benzylchloroacetamide (2) was prepared in good yield from the reaction of benzylamine and chloroacetyl chloride by an analogous method to that described in the literature [23]. Chloroacetamide 2 was treated with phosphorus pentachloride, followed by addition of sodium azide and water to give 1-benzyl-5-chloromethyltetrazole (3) in 54% yield [24]. Reaction of chloromethyltetrazole 3 with triphenylphosphine afforded the corresponding phosphonium salt 4 in very high yield (90%), which was subsequently neutralized with aqueous sodium hydroxide solution over a short period of time with ice-cooling to give phosphorus ylide 5 bearing a tetrazolyl substituent in moderate yield (65%). As previously observed with other tetrazolic phosphorus ylides, phosphorane 5 was hydrolyzed in water to give triphenylphosphine oxide and 5-methyl-1H-tetrazole [25,26]. For this reason, in order to prevent this hydrolysis the base treatment of 4 was carried out in water for only 2 min with vigorous stirring and the resulting precipitate was filtered and immediately dried under reduced pressure. However, even with these controlled conditions mixtures of ylide and hydrolysis products were obtained making the purification procedure difficult. Reaction of phosphorus ylide 5 with ethyl oxalyl chloride and benzoyl chloride in the presence of triethylamine gave ylides 6a and 6b, respectively, in moderate yields (Scheme 2). Aiming to improve ylides of 6a and 6b yield and to overcome the difficulties observed in the synthesis of ylide 5, we tried to carry out the synthesis of ylides 6 starting directly from the phosphonium salt 4 in the Scheme 2. Synthesis of tetrazol-5-yl phosphorus ylides 6.
Reaction of phosphorus ylide 5 with ethyl oxalyl chloride and benzoyl chloride in the presence of triethylamine gave ylides 6a and 6b, respectively, in moderate yields (Scheme 2). Aiming to improve ylides of 6a and 6b yield and to overcome the difficulties observed in the synthesis of ylide 5, we tried to carry out the synthesis of ylides 6 starting directly from the phosphonium salt 4 in the presence of triethylamine. To our delight, carrying out the reaction of the phosphonium salt 4 with ethyl oxalyl chloride and benzoyl chloride in the presence of excess of triethylamine led to the formation of ylides 6a and 6b in 88% and 61% yield, respectively. The same methodology was applied to the synthesis of ylides 6d and 6e bearing a thiophenyl and a furanyl substituent, respectively, which were isolated in good yields (Scheme 2). On the other hand, reaction of ylide 5 with 5-nitro-furan-2-carboxylic acid in the presence of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDCI) and 4-dimethylaminopyridine (DMAP) afforded ylide 6c in high yield (80%).
These ylides reacted with N-halosuccinimides in the presence of azidotrimethylsilane giving the corresponding haloazidoalkenes 7a-h and 8 in yields ranging from 47% to 93% (Schemes 3 and 4). Higher yields were obtained when NCS/TMSN 3 were used as reagents in the reactions with all ylides 6. The reaction of NCS with ylide 6a in the presence of TMSN 3 led to the formation of the desired chloroazidoalkene 7a with the highest yield (93%). As for bromoazidoalkenes, the best result was obtained from the reaction of ylide 6e bearing a furanyl substituent with NBS/TMSN 3 reagent system which led to the formation of the corresponding bromoazidoalkene 7h in 57% yield. The reactions with N-chlorosuccinimide were completed after 1-1.5 h while the reactions with N-bromosuccinimide required longer periods of time (2-3 h). The azidoalkenes were obtained selectively as single isomers except in the case of 7b and 8 which was obtained as a mixture of E and Z isomers (61:39).
Molecules 2015, 20, page-page presence of triethylamine. To our delight, carrying out the reaction of the phosphonium salt 4 with ethyl oxalyl chloride and benzoyl chloride in the presence of excess of triethylamine led to the formation of ylides 6a and 6b in 88% and 61% yield, respectively. The same methodology was applied to the synthesis of ylides 6d and 6e bearing a thiophenyl and a furanyl substituent, respectively, which were isolated in good yields (Scheme 2). On the other hand, reaction of ylide 5 with 5-nitro-furan-2-carboxylic acid in the presence of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDCI) and 4-dimethylaminopyridine (DMAP) afforded ylide 6c in high yield (80%).
These ylides reacted with N-halosuccinimides in the presence of azidotrimethylsilane giving the corresponding haloazidoalkenes 7a-h and 8 in yields ranging from 47% to 93% (Schemes 3 and 4). Higher yields were obtained when NCS/TMSN3 were used as reagents in the reactions with all ylides 6. The reaction of NCS with ylide 6a in the presence of TMSN3 led to the formation of the desired chloroazidoalkene 7a with the highest yield (93%). As for bromoazidoalkenes, the best result was obtained from the reaction of ylide 6e bearing a furanyl substituent with NBS/TMSN3 reagent system which led to the formation of the corresponding bromoazidoalkene 7h in 57% yield. The reactions with N-chlorosuccinimide were completed after 1-1.5 h while the reactions with N-bromosuccinimide required longer periods of time (2-3 h). The azidoalkenes were obtained selectively as single isomers except in the case of 7b and 8 which was obtained as a mixture of E and Z isomers (61:39). In order to establish the stereochemistry of the synthetized alkenes compounds 7c, 7e and 7h, bearing a phenyl group, a thiophenyl group and a furanyl group at C-2′, respectively, were selected for X-ray crystallography studies. The three compounds crystallize in the same, monoclinic, space group (P21/c). The X-ray data unambiguously shows that the molecules adopt in the crystal the (Z)-configuration ( Figure 1). Although there is a significant freedom for rotation of substituents around single C-C bonds, no sign for disorder was found except for the thiophene ring in compound 7e, which features a minor disorder between two alternating positions related by a 180° rotation around the C2′′-C2′ bond with occupancies 67:33%. A selection of bond distance, bond angles and torsion angles is provided in Table 1. They are in agreement with typical average values and also to those of the XRD study of a bromo-azidoalkene reported in [17]. Cohesion of the crystal structures is provided by weak C-H···N hydrogen bonds and also C-H···Cg, Cg···Cg and Br···Cg interactions involving the aromatic rings ( Figure 2). Molecules 2015, 20, page-page presence of triethylamine. To our delight, carrying out the reaction of the phosphonium salt 4 with ethyl oxalyl chloride and benzoyl chloride in the presence of excess of triethylamine led to the formation of ylides 6a and 6b in 88% and 61% yield, respectively. The same methodology was applied to the synthesis of ylides 6d and 6e bearing a thiophenyl and a furanyl substituent, respectively, which were isolated in good yields (Scheme 2). On the other hand, reaction of ylide 5 with 5-nitro-furan-2-carboxylic acid in the presence of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDCI) and 4-dimethylaminopyridine (DMAP) afforded ylide 6c in high yield (80%).
These ylides reacted with N-halosuccinimides in the presence of azidotrimethylsilane giving the corresponding haloazidoalkenes 7a-h and 8 in yields ranging from 47% to 93% (Schemes 3 and 4). Higher yields were obtained when NCS/TMSN3 were used as reagents in the reactions with all ylides 6. The reaction of NCS with ylide 6a in the presence of TMSN3 led to the formation of the desired chloroazidoalkene 7a with the highest yield (93%). As for bromoazidoalkenes, the best result was obtained from the reaction of ylide 6e bearing a furanyl substituent with NBS/TMSN3 reagent system which led to the formation of the corresponding bromoazidoalkene 7h in 57% yield. The reactions with N-chlorosuccinimide were completed after 1-1.5 h while the reactions with N-bromosuccinimide required longer periods of time (2-3 h). The azidoalkenes were obtained selectively as single isomers except in the case of 7b and 8 which was obtained as a mixture of E and Z isomers (61:39). In order to establish the stereochemistry of the synthetized alkenes compounds 7c, 7e and 7h, bearing a phenyl group, a thiophenyl group and a furanyl group at C-2′, respectively, were selected for X-ray crystallography studies. The three compounds crystallize in the same, monoclinic, space group (P21/c). The X-ray data unambiguously shows that the molecules adopt in the crystal the (Z)-configuration ( Figure 1). Although there is a significant freedom for rotation of substituents around single C-C bonds, no sign for disorder was found except for the thiophene ring in compound 7e, which features a minor disorder between two alternating positions related by a 180° rotation around the C2′′-C2′ bond with occupancies 67:33%. A selection of bond distance, bond angles and torsion angles is provided in Table 1. They are in agreement with typical average values and also to those of the XRD study of a bromo-azidoalkene reported in [17]. Cohesion of the crystal structures is provided by weak C-H···N hydrogen bonds and also C-H···Cg, Cg···Cg and Br···Cg interactions involving the aromatic rings ( Figure 2). In order to establish the stereochemistry of the synthetized alkenes compounds 7c, 7e and 7h, bearing a phenyl group, a thiophenyl group and a furanyl group at C-2 1 , respectively, were selected for X-ray crystallography studies. The three compounds crystallize in the same, monoclinic, space group (P2 1 /c). The X-ray data unambiguously shows that the molecules adopt in the crystal the (Z)-configuration ( Figure 1). Although there is a significant freedom for rotation of substituents around single C-C bonds, no sign for disorder was found except for the thiophene ring in compound 7e, which features a minor disorder between two alternating positions related by a 180˝rotation around the C2 11 -C2 1 bond with occupancies 67:33%. A selection of bond distance, bond angles and torsion angles is provided in Table 1. They are in agreement with typical average values and also to those of the XRD study of a bromo-azidoalkene reported in [17]. Cohesion of the crystal structures is provided by weak C-H¨¨¨N hydrogen bonds and also C-H¨¨¨Cg, Cg¨¨¨Cg and Br¨¨¨Cg interactions involving the aromatic rings ( Figure 2).
Since 7d, 7f and 7g differ from 7c, 7e and 7h only in the nature of the halogen, the (Z)-configuration is therefore proposed for all of these compounds. In previous studies, we could confirm that our synthetic methodology allowed the synthesis of a bromoazidoalkene bearing a carboxylate group at C-1′ and a phenyl group at C-2′ with the same selectivity [17]. Thus, the stereochemistry outcome is retained when the carboxylate group is replaced by a tetrazolyl group (7c).
The synthesis of the haloazidoalkenes can be rationalized as outlined in Scheme 5. The formation of the observed products can be explained by considering isomeric halonium ions 10 and 11 as intermediates. These halonium ions can interconvert by way of acyclic cation 9. The opening of these intermediates by the TMSN3 leads to the isomeric alkenes after the elimination of triphenylphosphine oxide (Scheme 5). The observed selected formation of alkenes with (Z) configuration may result from the higher stability of halonium ion 10 in comparison with the isomeric intermediate 11. (2) Since 7d, 7f and 7g differ from 7c, 7e and 7h only in the nature of the halogen, the (Z)-configuration is therefore proposed for all of these compounds. In previous studies, we could confirm that our synthetic methodology allowed the synthesis of a bromoazidoalkene bearing a carboxylate group at C-1′ and a phenyl group at C-2′ with the same selectivity [17]. Thus, the stereochemistry outcome is retained when the carboxylate group is replaced by a tetrazolyl group (7c).
The synthesis of the haloazidoalkenes can be rationalized as outlined in Scheme 5. The formation of the observed products can be explained by considering isomeric halonium ions 10 and 11 as intermediates. These halonium ions can interconvert by way of acyclic cation 9. The opening of these intermediates by the TMSN3 leads to the isomeric alkenes after the elimination of triphenylphosphine oxide (Scheme 5). The observed selected formation of alkenes with (Z) configuration may result from the higher stability of halonium ion 10 in comparison with the isomeric intermediate 11. Since 7d, 7f and 7g differ from 7c, 7e and 7h only in the nature of the halogen, the (Z)-configuration is therefore proposed for all of these compounds. In previous studies, we could confirm that our synthetic methodology allowed the synthesis of a bromoazidoalkene bearing a carboxylate group at C-1 1 and a phenyl group at C-2 1 with the same selectivity [17]. Thus, the stereochemistry outcome is retained when the carboxylate group is replaced by a tetrazolyl group (7c).
The synthesis of the haloazidoalkenes can be rationalized as outlined in Scheme 5. The formation of the observed products can be explained by considering isomeric halonium ions 10 and 11 as intermediates. These halonium ions can interconvert by way of acyclic cation 9. The opening of these intermediates by the TMSN 3 leads to the isomeric alkenes after the elimination of triphenylphosphine oxide (Scheme 5). The observed selected formation of alkenes with (Z) configuration may result from the higher stability of halonium ion 10 in comparison with the isomeric intermediate 11.
Molecules 2015, 20, page-page Scheme 5. Formation of isomeric halonium ions as intermediates of the reaction.
The formation of halophosphonium salt 9 is the expected intermediate of the halogenation of α-oxophosphorus ylides, which affords the corresponding halophosphonium salts [27]. Moreover, the synthesis of halogenated enol lactones from keto acid phosphoranes via an intramolecular non-classical Wittig reaction has also been described [28][29][30][31]. In fact, α-oxophosphorus ylides bearing a terminal carboxylic acid group react with halogenating agents leading to E-and Z-halo enol lactones. This cyclization was rationalized via a halophosphonium salt followed by loss of triphenylphosphine oxide. Indeed, bromophosphonium salt 13 could be isolated from the reaction of ylide 12 with bromine at 0 °C in the absence of NEt3. Treatment of 13 with triethylamine leads to the corresponding bromo enol lactones 14 (Scheme 6) [28]. Scheme 6. Synthesis of halogenated enol lactones from keto acid phosphoranes.
The 13 C-NMR spectra of the haloazidoalkenes 7a-h show the C-X carbon between 85.8 and 109.3 ppm and the C-N3 between 134.8 and 147.6 ppm ( Table 2). As expected, the chemical shift of C-X carbon of all bromoazidoalkenes is lower than the ones of the corresponding chloroazidoalkenes (e.g., 7b vs. 7a). The thermolysis of the haloazidoalkene derivatives 7 was then investigated (Scheme 7). Initially, attempts were made to promote these reactions in n-heptane. However, due to the low solubility of the haloazidoalkenes in this solvent, the thermolysis in n-heptane often led to complex mixtures of the desired 2H-azirines and degradation products. Nonetheless, carrying out the reaction of these haloazidoalkenes in toluene at 90 °C for 2-3 h led efficiently to the formation of new 2-halo-2-tetrazol-5-yl-2H-azirines 15. The reaction can be followed by TLC and by IR by monitoring The formation of halophosphonium salt 9 is the expected intermediate of the halogenation of α-oxophosphorus ylides, which affords the corresponding halophosphonium salts [27]. Moreover, the synthesis of halogenated enol lactones from keto acid phosphoranes via an intramolecular non-classical Wittig reaction has also been described [28][29][30][31]. In fact, α-oxophosphorus ylides bearing a terminal carboxylic acid group react with halogenating agents leading to Eand Z-halo enol lactones. This cyclization was rationalized via a halophosphonium salt followed by loss of triphenylphosphine oxide. Indeed, bromophosphonium salt 13 could be isolated from the reaction of ylide 12 with bromine at 0˝C in the absence of NEt 3 . Treatment of 13 with triethylamine leads to the corresponding bromo enol lactones 14 (Scheme 6) [28]. The formation of halophosphonium salt 9 is the expected intermediate of the halogenation of α-oxophosphorus ylides, which affords the corresponding halophosphonium salts [27]. Moreover, the synthesis of halogenated enol lactones from keto acid phosphoranes via an intramolecular non-classical Wittig reaction has also been described [28][29][30][31]. In fact, α-oxophosphorus ylides bearing a terminal carboxylic acid group react with halogenating agents leading to E-and Z-halo enol lactones. This cyclization was rationalized via a halophosphonium salt followed by loss of triphenylphosphine oxide. Indeed, bromophosphonium salt 13 could be isolated from the reaction of ylide 12 with bromine at 0 °C in the absence of NEt3. Treatment of 13 with triethylamine leads to the corresponding bromo enol lactones 14 (Scheme 6) [28]. Scheme 6. Synthesis of halogenated enol lactones from keto acid phosphoranes. The 13 C-NMR spectra of the haloazidoalkenes 7a-h show the C-X carbon between 85.8 and 109.3 ppm and the C-N3 between 134.8 and 147.6 ppm ( Table 2). As expected, the chemical shift of C-X carbon of all bromoazidoalkenes is lower than the ones of the corresponding chloroazidoalkenes (e.g., 7b vs. 7a). The thermolysis of the haloazidoalkene derivatives 7 was then investigated (Scheme 7). Initially, attempts were made to promote these reactions in n-heptane. However, due to the low solubility of the haloazidoalkenes in this solvent, the thermolysis in n-heptane often led to complex mixtures of the desired 2H-azirines and degradation products. Nonetheless, carrying out the reaction of these haloazidoalkenes in toluene at 90 °C for 2-3 h led efficiently to the formation of new 2-halo-2-tetrazol-5-yl-2H-azirines 15. The reaction can be followed by TLC and by IR by monitoring Scheme 6. Synthesis of halogenated enol lactones from keto acid phosphoranes.
The 13 C-NMR spectra of the haloazidoalkenes 7a-h show the C-X carbon between 85.8 and 109.3 ppm and the C-N 3 between 134.8 and 147.6 ppm ( Table 2). As expected, the chemical shift of C-X carbon of all bromoazidoalkenes is lower than the ones of the corresponding chloroazidoalkenes (e.g., 7b vs. 7a). The thermolysis of the haloazidoalkene derivatives 7 was then investigated (Scheme 7). Initially, attempts were made to promote these reactions in n-heptane. However, due to the low solubility of the haloazidoalkenes in this solvent, the thermolysis in n-heptane often led to complex mixtures of the desired 2H-azirines and degradation products. Nonetheless, carrying out the reaction of these haloazidoalkenes in toluene at 90˝C for 2-3 h led efficiently to the formation of new 2-halo-2-tetrazol-5-yl-2H-azirines 15. The reaction can be followed by TLC and by IR by monitoring the disappearance of the band corresponding to the azido group of the starting azidoalkenes (ν~2105-2130 cm´1). Regardless of C-3 substituents, 2-bromo-and 2-chloro-2H-azirines 15 were obtained in high yield (85%-99%). (Table 2).
It is well established that some 2-halo-2H-azirines undergo thermal rearrangement to their azirine isomers through a [1,2]-halogen shift [32,33]. Recently, Banert et al. reported optimized reaction conditions to favor the complete and irreversible isomerization of 2-halo-2H-azirines [11]. In our case, it was possible to isolate 2H-azirines 15 as pure isomers by thermolysis of the haloazidoalkenes 7. However, after being stored at −30 °C for 3 months 2-chloro-2H-azirine 15a, bearing a carboxylate group at C-3, underwent rearrangement to a mixture of 2H-azirines 15a and 16 (Scheme 8). Carrying out NMR measurements at different temperatures (25-95 °C), the variation of the isomer ratio with increasing temperature was observed, until complete rearrangement of 2H-azirine 15a into the isomer 16a (Supplementary Materials). Similar NMR experiments with 15c and 15e did not indicate the same behavior.
General Information
NMR spectra were run in CDCl3 or DMSO-d6 on a 400 MHz Bruker Avance III spectrometer (Bruker Biospin SA, Wissembourg, France) and recorded at the following frequencies: proton ( 1 H, 400 MHz), carbon ( 13 C, 100 MHz). Chemical shifts are expressed in parts per million related to internal TMS and coupling constants (J) are in hertz. Infrared spectra (IR) were recorded on a Nicolet 6700 FTIR spectrometer (Thermo Scientific, Waltham, MA, USA). Mass spectra were recorded in electrospray ionization (ESI) mode on a Bruker FTMS APEX III spectrometer (Bruker Corporation, Bremen, Germany). Melting points were determined in open glass capillaries and are uncorrected. Thin-layer chromatography (TLC) analyses were performed using precoated silica gel plates (Merck KGaA, Darmstadt, Germany). Flash column chromatography was performed with silica gel 60 as the stationary phase. (Table 2).
Experimental Details
It is well established that some 2-halo-2H-azirines undergo thermal rearrangement to their azirine isomers through a [1,2]-halogen shift [32,33]. Recently, Banert et al. reported optimized reaction conditions to favor the complete and irreversible isomerization of 2-halo-2H-azirines [11]. In our case, it was possible to isolate 2H-azirines 15 as pure isomers by thermolysis of the haloazidoalkenes 7. However, after being stored at´30˝C for 3 months 2-chloro-2H-azirine 15a, bearing a carboxylate group at C-3, underwent rearrangement to a mixture of 2H-azirines 15a and 16 (Scheme 8). Carrying out NMR measurements at different temperatures (25-95˝C), the variation of the isomer ratio with increasing temperature was observed, until complete rearrangement of 2H-azirine 15a into the isomer 16a (Supplementary Materials). Similar NMR experiments with 15c and 15e did not indicate the same behavior. The 13 C-NMR spectra of the 2-chloro-and 2-bromo-2-(tetrazol-5-yl)-2H-azirines 15 show the sp 2 carbon between 156.8 and 169.6 ppm and the sp 3 carbon between 33.3 and 51.9 ppm, depending on the substitution pattern (Table 2).
It is well established that some 2-halo-2H-azirines undergo thermal rearrangement to their azirine isomers through a [1,2]-halogen shift [32,33]. Recently, Banert et al. reported optimized reaction conditions to favor the complete and irreversible isomerization of 2-halo-2H-azirines [11]. In our case, it was possible to isolate 2H-azirines 15 as pure isomers by thermolysis of the haloazidoalkenes 7. However, after being stored at −30 °C for 3 months 2-chloro-2H-azirine 15a, bearing a carboxylate group at C-3, underwent rearrangement to a mixture of 2H-azirines 15a and 16 (Scheme 8). Carrying out NMR measurements at different temperatures (25-95 °C), the variation of the isomer ratio with increasing temperature was observed, until complete rearrangement of 2H-azirine 15a into the isomer 16a (Supplementary Materials). Similar NMR experiments with 15c and 15e did not indicate the same behavior.
General Information
NMR spectra were run in CDCl3 or DMSO-d6 on a 400 MHz Bruker Avance III spectrometer (Bruker Biospin SA, Wissembourg, France) and recorded at the following frequencies: proton ( 1 H, 400 MHz), carbon ( 13 C, 100 MHz). Chemical shifts are expressed in parts per million related to internal TMS and coupling constants (J) are in hertz. Infrared spectra (IR) were recorded on a Nicolet 6700 FTIR spectrometer (Thermo Scientific, Waltham, MA, USA). Mass spectra were recorded in electrospray ionization (ESI) mode on a Bruker FTMS APEX III spectrometer (Bruker Corporation, Bremen, Germany). Melting points were determined in open glass capillaries and are uncorrected. Thin-layer chromatography (TLC) analyses were performed using precoated silica gel plates (Merck KGaA, Darmstadt, Germany). Flash column chromatography was performed with silica gel 60 as the stationary phase.
General Information
NMR spectra were run in CDCl 3 or DMSO-d 6 on a 400 MHz Bruker Avance III spectrometer (Bruker Biospin SA, Wissembourg, France) and recorded at the following frequencies: proton ( 1 H, 400 MHz), carbon ( 13 C, 100 MHz). Chemical shifts are expressed in parts per million related to internal TMS and coupling constants (J) are in hertz. Infrared spectra (IR) were recorded on a Nicolet 6700 FTIR spectrometer (Thermo Scientific, Waltham, MA, USA). Mass spectra were recorded in electrospray ionization (ESI) mode on a Bruker FTMS APEX III spectrometer (Bruker Corporation, Bremen, Germany). Melting points were determined in open glass capillaries and are uncorrected. Thin-layer chromatography (TLC) analyses were performed using precoated silica gel plates (Merck KGaA, Darmstadt, Germany).
Flash column chromatography was performed with silica gel 60 as the stationary phase.
General Procedure for the Synthesis of Ylides 6
A solution of phosphonium salt 4 (10 mmol) and triethylamine (2.53 g, 25 mmol) in dry CHCl 3 (50 mL) was stirred at room temperature while a solution of the appropriate acid chloride (12 mmol) in dry CHCl 3 (10 mL) was added dropwise to it. After the addition, the mixture was stirred at room temperature for 12 h. The reaction mixture was washed with H 2 O (3ˆ50 mL), dried and evaporated to give the desired ylides 6 which were recrystallized from ethyl acetate. 2-(1-Benzyl-1H-tetrazol-5-yl)-1-(5-nitrofuran-2-yl)-2-(triphenylphosphoranylidene)ethanone (6c): Compound 6c was prepared by an analogous method to that described in the literature [34]. A solution of phosphorus ylide 5 (2.1 mmol) and 5-nitrofuran-2-carboxylic acid (2.5 mmol) in dry CHCl 3 (40 mL) was cooled in an ice bath. Then EDCI (3.2 mmol) and DMAP (catalytic) was added to it. After the addition, the mixture was stirred at room temperature for 12 h. The reaction mixture was washed with H 2 O (3ˆ50 mL), dried and evaporated. The crude product was purified by flash chromatography (ethyl acetate). Ylide 6c was obtained as a yellow solid Ylide 6 (4.5 mmol) was dissolved in dichloromethane (50 mL) and a solution of azidotrimethylsilane (0.71 g, 6.5 mmol) and N-chloroor N-bromosuccinimide (6.5 mmol) in dichloromethane (10 mL) was added. The reaction mixture was stirred at room temperature for the appropriate time (1-3 h). After removal of the solvent, the crude product was purified by flash chromatography (ethyl acetate/hexane). | 6,809.6 | 2015-12-01T00:00:00.000 | [
"Chemistry"
] |
Collembola from Two Samplings in the MSS of the Sierra de Guadarrama National Park in Two Different Seasons, with Description of a New Species
Simple Summary The interest in the fauna of the colluvial mesovoid shallow substratum (MSS) led us to install a series of pitfall traps (subterranean sampling devices or SSDs) for a full year in the Sierra de Guadarrama. This paper presents the comparative results between the captures at two different periods of the year and allows the description of a new species, found only in one of those periods. It seems proven that there are species present throughout the year, but others also predominate, or are exclusive, to just one of them. This study indicates, again, that the colluvial MSS has a particular species composition for the taxon Collembola, different from the surface, a perfectly differentiated habitat. Abstract An intensive sampling in a colluvial mesovoid shallow substratum (MSS) of the Sierra de Guadarrama National Park, using 33 subterranean sampling devices (SSDs) is the origin of the Collembola studied in this paper. The data were obtained from the second extraction of the traps, in operation between October of 2015 and May of 2016. This paper presents the faunistic and diversity data along with the entire park (mostly at sampling points above 200 m a.s.l.) for this period, compares the data between the first extraction of the traps and the second one, and describes one species of the genus Pseudosinella that appears as new in the second campaign.
Introduction
The mesovoid shallow substratum (MSS) consists of a network of interstices and subsoil fissures, and harbors diverse epigean species of a stenoic nature, and strictly hypogean species, permanent inhabitants of this environment [1,2]. Previous studies were focused on ecology [3,4], or faunal aspects [5][6][7]. Some studies on the MSS have been carried out in karstic areas. In a karstic environment, there are caves, and the scree-slope can be an intermediate place between the caves and the outside environment. The springtails that live in caves are not really from caves but from the infractions of the karstic terrain, which by definition is very broken. There are no caves in terrains such as the MSS de Guadarrama. In this paper, a continuation of five previously published [8][9][10][11], the fauna found in the same traps but in different periods of the year is compared, for a complete season. In addition to an important substitution of species between seasons, in the second period, a new species for science (Pseudosinella) that had not been captured in the first was (a) five locations in Siete Picos and La Mujer Muerta, of which four had double SSDs (1 m and 0.5 m); (b) 15 localities in Montes Carpetanos, one of which had two TSPs installed and the remaining 14 an SSD of 1 m in length; (c) 12 localities with SSDs of 1 m in length, located in Cuerda Larga and its associated mountainous complex; and (d) two in Puerto de los Cotos-Puerto de Navacerrada. The methodology used for sampling was described in detail in Baquero et al. (2017) [8].
Faunistic
The data for the taxon Collembola obtained from the first period were: 42,735 specimens, 31 genera and 65 species (16 new species). In this period, the most representative genus was Orchesella, represented by two new species (Baquero et al. 2017) [8]. In the second period, catches were significantly lower: 20,098 specimens, two genera were added (Caprainea and Ceratophysella), and 13 species not caught in the initial period. Twenty-six of the species captured in the first period were not captured in this one.
If the taxonomic groups lower than Collembola (Orders and Families) are considered (only those that provide sufficient abundance to allow a comparison to be made) it can be said that: Entomobryomorpha specimens decrease in the second period; Poduromorpha behaves differently depending on the family, with the Hypogastruridae decreasing and the Onychiuridae increasing; Symphypleona, in general, increases considerably, although there is a family (Sminthuridae) that suffers an appreciable decline.
Given that the traps had been in the field for a period of six months, and the specimens deteriorate over time, especially if they are not submerged in propylene glycol promptly, some identifications refer to specimens that had fallen into the trap close to its collection date.
Retinaculum with 4 + 4 teeth. Furca well developed. Ratio dens/mucro: 1.79. Dens with medium size granulation and two prominent tubercles between the two more distal chaetae (variable shape between specimens); seven dorsal chaetae of which one of the basal is slightly longer. Mucro with a sharpening tip, and with clear outer lamella (variable shape between specimens) ( Figure 7B).
Anal spines 0.8 times shorter than inner edge of claw, slightly curved, situated on long papillae the same length as the colorless spine ( Figure 6).
Ecology
In this project, five samples appeared, three in the highest part of the Montes Carpetanos (SSD-08, SSD-17 and SSD-21) and two on the north-face of La Mujer Muerta (SSD-01 and SSD-03). It seems linked to higher and colder areas. Tibiotarsi I, II, III with 19, 19, 18 chaetae, respectively, including a clavate tenent hair longer than the claw in each. Claws with a big inner tooth and two pairs of lateral teeth. Empodial appendage with broad basal lamella and apical filament reaching slightly below inner tooth (ratio empodial filament: inner edge of claw : 0.70) ( Figure 7A). Ventral tube with 4 + 4 chaetae (two apical and two basal).
Retinaculum with 4 + 4 teeth. Furca well developed. Ratio dens/mucro: 1.79. Dens with medium size granulation and two prominent tubercles between the two more distal chaetae (variable shape between specimens); seven dorsal chaetae of which one of the basal is slightly longer. Mucro with a sharpening tip, and with clear outer lamella (variable shape between specimens) ( Figure 7B).
Remarks
In 1955, Steiner [33] described a new species of Hypogastura s. str. from specimens collected above Cercedilla (Sierra de Guadarrama, Madrid, Spain). The description of the species is well documented but only the furca, the claw, and the anal spine are represented. We have considered important a re-description of this species due to the original description missing some characteristics that have recently been revealed as necessary. For this, not only the specimens collected by us (7851 ex.) but also four syntypes from the MNCN (Madrid) have been studied in detail. The species belongs to the armata-group (group B), Abd IV with p 1 as macrochaeta (Thibaud et al. 2004) [40] and subgroup B1 (Abd IV with chaeta p 3 ) (Bourgeois and Cassagnau 1972) [37]. It also has an accessory tubercle near the eyes on both sides of the head, similarly to C. sinensis Stach, 1964 [35] described from China, and also H. tooliki Fjellberg, 1985 [41], from Alaska and Magadan (Russia). The first species, unlike C. meridionalis, has the following diagnostic characteristics: color brown; body chaetae smooth; seven dorsal sensilla on dorsal, and some hook-like sensillum on the ventral side of Ant IV; head accessory tubercles are smooth, resembling a corneola; Insects 2022, 13, 1025 9 of 17 tenent hair short; empodial appendage short; dens basal chaeta two times the length of the other; dens without distal tubercles; dens to mucro ratio almost three times; mucro boat-like (rounded tip), and outer lamella not very developed; anal spines rounded at the tip, slender and not curved (Babenko et al. 1994) [38]. The second one has fine and uniform integumentary granulation, a weak difference between the body chaetae, all of them short and acuminate, body chaetae, and sensilla of the same size as the body chaetae. Anal spines 0.8 times shorter than inner edge of claw, slightly curved, situated on long papillae the same length as the colorless spine ( Figure 6).
Ecology
In this project, five samples appeared, three in the highest part of the Montes Carpetanos (SSD-08, SSD-17 and SSD-21) and two on the north-face of La Mujer Muerta (SSD-01 and SSD-03). It seems linked to higher and colder areas.
Remarks
In 1955, Steiner [33] described a new species of Hypogastura s. str. from specimens collected above Cercedilla (Sierra de Guadarrama, Madrid, Spain). The description of the species is well documented but only the furca, the claw, and the anal spine are represented. We have considered important a re-description of this species due to the original description missing some characteristics that have recently been revealed as necessary. For this, not only the specimens collected by us (7851 ex.) but also four syntypes from the MNCN (Madrid) have been studied in detail. The species belongs to the armata-group (group B), Abd IV with p1 as macrochaeta (Thibaud et al. 2004) [40] and subgroup B1 (Abd IV with chaeta p3) (Bourgeois and Cassagnau 1972) [37]. It also has an accessory tubercle near the eyes on both sides of the head, similarly to C. sinensis Stach, 1964 [35] described from China, and also H. tooliki Fjellberg, 1985 [41], from Alaska and Magadan (Russia). The first species, unlike C. meridionalis, has the following diagnostic characteristics: color brown; body chaetae smooth; seven dorsal sensilla on dorsal, and some hooklike sensillum on the ventral side of Ant IV; head accessory tubercles are smooth, resembling a corneola; tenent hair short; empodial appendage short; dens basal chaeta two times the length of the other; dens without distal tubercles; dens to mucro ratio almost three times; mucro boat-like (rounded tip), and outer lamella not very developed; anal spines None of the descriptions coincide with the description made, in his day, by Steiner or with the one made here. There is a clear heterochaetosis, with macrochaetae and microchaetae, the latter between 2/3 and 1/2 of the length of the Mc, all (Mc and mic) ciliated or with serrations, except for the sensilla that are smooth and longer or as long as the Mc. The furca is characteristic and as described by Steiner (1955) but has many sensilla and sensory chaetae of different types on the antenna ( Figure 7B); very different from Steiner's description and from that drawn by Jordana and Arbea in 1997 (Fauna Ibérica). Between Ant III and IV, the antenna has an eversible sac (Figure 3), characteristic of Ceratophysella. The shape of the PAO in Ceratophysella species is usually elongated (not as a rosette), as is also the case in this species.
When studying the definitions of the genera of Ceratophysella and Hypogastrura, it is observed that all the characteristics considered have exceptions, except heterochaetosis. In addition, since this species has clear heterochaetosis, it is considered to belong to the genus Ceratophysella. It is likely that DNA techniques can shed light on the difficulties of assigning a species to each of the genera. Order
Etymology
The name refers to the asymmetry of the prelabral chaetae, the central ones ciliated and the lateral ones smooth.
Diagnosis
Blue body pigment, including antennae and first leg segments. Head with 6 + 6 eyes; A 0 , A 2 and A 3 as Mc; basomedian labial fields chaetae smooth; posterior labial row with M 1 , m 2 , R*, e, L 1 and L 2 Mc; three plus one anterior postlabial chaetae as ciliate Mc. Th II-III without Mc; Abd II with chaeta a 2p present, a 3 forward from 'as' sensilla; a 2 as mic, and m 3 as ciliated Mc; Abd IV with three median mac (C 1 , B 5-6 ), four ciliated mic behind anterior bothriotrichum (some fan-shaped) and bothriothrichal complex mic D 1p present; claw with three internal teeth: two basal and one unpaired; empodium acuminate.
Description
Body length up to 1.90 mm, head included, excluding antennae (holotype: 1.80 mm). Color: dark blue on the whole body. Scales absent on antennae and legs, present on ventral and dorsal head, thorax, and abdomen dorsally, and furcula (dorsal and ventrally).
Furcula. Manubrium and dens with scales both dorsal and ventrally, and with the same length; manubrial plate with three internal and 14-16 external chaetae, and 2 pse ( Figure 11C). Non-ringed area of dens 3.5-4 times the length of mucro, with subapical tooth a little smaller than apical tooth. (Figure 11D).
Ecology
The species is scarcely distributed in the MSS of Sierra de Guadarrama since it has only been found at one of the sampling points. The sampling area, at 1818 m asl, in the oro-Mediterranean bioclimatic zone, has the important forest influence of Pinus sylvestris L. The rock substrate is orthogneisses (Vialette et al. 1987) [46], and its characteristics facilitate the fracture of this material, which allowed its fragmentation during glacial (Pedraza and Carrasco 2005) [47] and periglacial (Sanz 1986) [48] events, usually in the form of large and medium-sized scree. The MSS where SSD-2 was installed shows a profile with an accumulation of large rocks at higher levels, and as depths reach 1 m, the size of the rock is reduced. Infiltration of black earth rich in organic matter was observed, but the MSS conserves the subterranean tridimensional network of fissures and interstices.
Remarks
The species that share the traditional formula of Gisin are in addition to P. gonzaloi Baquero and Jordana, 2021, and P. valverdei Baquero and Jordana, 2021 [10]. P. gonzaloi has the sensilla of the Ant III sensory organ, being rod-like, all labral chaetae ciliated (at least on its half distal part), the eyes in other positions, chaeta E on ventral labial row ciliated, four teeth on internal or ventral claw, and a different number of chaetae on the manubrial plate (7)(8). P. valverdei has only five eyes, prelabral chaetae smooth, L 1 and L 2 chaetae on ventral labial row smooth, and four teeth on the internal or ventral claw. Curiously, these two species have been described in the same mountain massif. This species could have gone unnoticed among the thousands of specimens of Pseudosinella and Lepidocyrtus found in the first period. Most of the specimens were very deteriorated after remaining between several days and six months in the propylene glycol of the traps. Legs. The legs are without scales. Trochanteral organ with 12-17 close spine-like chaetae ( Figure 11A). Claw with three teeth on inner edge: paired at 50% and unpaired at 75% with respect to the internal claw edge length; two lateral teeth at 10-20%, and basal dorsal. Empodium acuminate, all legs with serrated pe lamella (very minute serration on leg 3), other lamellae smooth (ae, ai, pi); claw:empodium ratio = 1:0.65. Tibiotarsus III distally with one inner smooth chaeta 0.60 longer than claw; tenent hairs capitate, smooth, and 0.70 shorter than claw ( Figure 11B).
Furcula. Manubrium and dens with scales both dorsal and ventrally, and with the same length; manubrial plate with three internal and 14-16 external chaetae, and 2 pse ( Figure 11C). Non-ringed area of dens 3.5-4 times the length of mucro, with subapical tooth a little smaller than apical tooth. (Figure 11D).
Macrochaetotaxy. Reduced formula (from Gisin 1965Gisin , 1967a [25][26][27]: R0R1R2000/00/0101 + 2/s, pABq1q2, M1m2R*eL1L2 (* ½ to 2/3 of M; sometimes M2 ciliated, asymmetric). Details in Figures 8 and 9. The species is scarcely distributed in the MSS of Sierra de Guadarrama since it has only been found at one of the sampling points. The sampling area, at 1818 m asl, in the oro-Mediterranean bioclimatic zone, has the important forest influence of Pinus sylvestris L. The rock substrate is orthogneisses (Vialette et al. 1987) [46], and its characteristics facilitate the fracture of this material, which allowed its fragmentation during glacial (Pedraza and Carrasco 2005) [47] and periglacial (Sanz 1986) [48] events, usually in the form of large and medium-sized scree. The MSS where SSD-2 was installed shows a profile with an accumulation of large rocks at higher levels, and as depths reach 1 m, the size of the rock is reduced. Infiltration of black earth rich in organic matter was observed, but the MSS conserves the subterranean tridimensional network of fissures and interstices.
Remarks
The species that share the traditional formula of Gisin are in addition to P. gonzaloi Baquero and Jordana, 2021, and P. valverdei Baquero and Jordana, 2021 [10]. P. gonzaloi has the sensilla of the Ant III sensory organ, being rod-like, all labral chaetae ciliated (at least on its half distal part), the eyes in other positions, chaeta E on ventral labial row ciliated,
General Discussion
The difference between the catches of the first period and the second, when comparing the species found, is striking. Many of the species well represented and in abundance during the period between May and October went from high numbers (tens of thousands of individuals) to only hundreds in the following period (October to May), and in some cases, even a few units or disappear (Entomobrya ledesmai Jordana and Baquero, 2021 and Pseudosinella gonzaloi Baquero and Jordana, 2021) (see Table 2). The opposite also happens: species little represented in the first period increase in their abundance in the second, going from hundreds to thousands, even in one case, from complete absence to being one of the most abundant species with thousands of specimens (Lepidocyrtus labyrinthi Baquero and Jordana, 2021 and Hypogastrura papillata Gisin, 1949 [49]). There is a case to comment on: C. meridionalis Steiner, 1955 [33] was the most abundant species in the first period and seemed to be replaced by H. papillata. During the placement of the traps, a few weeks before the month of May, it was possible to see accumulations of this species on the surface, under the slabs of cattle excrement in the highland meadows. This indicates that it is present, at least during part of the year, on the surface of the ground. Finding it during the following months in the MSS confirms that it moves vertically on the ground during the year, and the fact that between May and October 621 specimens were captured in the traps indicates that during this period it prefers the surface.
This reasoning cannot be applied to other abundant species between May and October (Orchesella colluvialis, Lepidocyrtus paralignorum, Entomobrya guadarramensis, E. ledesmai) since they are not species detected on the surface at any time before. The decrease in their abundance between May and October must be due to other reasons; for example, biological cycles adapted to temperature or food availability.
No reference has been made to other papers that refer to karstic MSS because this biotope is granitic. The fauna found in this granitic MSS is mainly its own fauna, as has been shown in our previous works. This study indicates that the colluvial MSS has a particular species composition for the taxon Collembola, demonstrating that the MSS should not be considered a mere ecotone between the surface and the deep subterranean ecosystem, but rather as a perfectly differentiated habitat. Table 2. Total list of species found in the sampling, in the two sessions, ordered by abundance for the first extraction from the traps. The numbers are the total number of specimens for each species.
Species
May-October October-May Total Institutional Review Board Statement: Ethical review and approval were waived for this study, due to Spanish laws, which do not require permission from an institutional ethics committee on the use of animals for taxonomical studies with micro-arthropods. Data Availability Statement: All data is contained within the article. Collected specimens have been deposited in the collection of the Museum of Zoology, Department of Environmental Biology, University of Navarra (MZNA). | 4,501 | 2022-11-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Odd entanglement entropy in T ¯T deformed CFT 2 s and holography
We construct a replica technique to perturbatively compute the odd entanglement entropy (OEE) for bipartite mixed states in T ¯ T deformed CFT 2 s. This framework is then utilized to obtain the leading order correction to the OEE for two disjoint intervals, two adjacent intervals, and a single interval in T¯T deformed thermal CFT 2 s in the large central charge limit. The field theory results are subsequently reproduced in the high temperature limit from holographic computations for the entanglement wedge cross sections in the dual bulk finite cut-off BTZ geometries. We further show that for finite size T¯T deformed CFT 2 s at zero temperature the corrections to the OEE are vanishing to the leading order from both field theory and bulk holographic computations.
Introduction
Quantum entanglement has emerged as a prominent area of research to explore a wide range of physical phenomena spanning several disciplines from quantum many body systems in condensed matter physics to issues of quantum gravity and black holes.The entanglement entropy (EE) has played a crucial role in this endeavor as a measure for characterizing the entanglement of bipartite pure quantum states although it fails to effectively capture mixed state entanglement due to spurious correlations.In this context several mixed state entanglement and correlation measures such as the reflected entropy, entanglement of purification, balanced partial entanglement etc. have been proposed in quantum information theory.
Interestingly it was possible to compute several of these measures through certain replica techniques for bipartite states in two dimensional conformal field theories (CFT 2 s).In this connection the Ryu Takayanagi (RT) proposal [1,2] quantitatively characterized the holographic entanglement entropy (HEE) of a subsystem in CFTs dual to bulk AdS geometries through the AdS/CFT correspondence.This was extended by the Hubeny Rangamani Takayanagi (HRT) proposal [3] which provided a covariant generalization of the RT proposal for time dependent states in CFTs dual to non static bulk AdS geometries.The RT and HRT proposals were later proved in [4][5][6][7].
Recently another computable measure for mixed state entanglement known as the odd entanglement entropy (OEE) was proposed by Tamaoka in [8].The OEE may be broadly understood as the von Neumann entropy of the partially transposed reduced density matrix of a given subsystem [8]. 1 The author in [8] utilized a suitable replica technique to compute the OEE for a bipartite mixed state configuration of two disjoint intervals in a CFT 2 .Interestingly in [8] the author proposed a holographic duality relating the OEE and the EE to the bulk entanglement wedge cross section (EWCS) for a given bipartite state in the AdS 3 /CFT 2 scenario.For recent developments see [9][10][11][12][13][14][15][16][17][18].
On a different note it was demonstrated by Zamolodchikov [19] that CFT 2 s which have undergone an irrelevant deformation by the determinant of the stress tensor (known as T T deformations) exhibit exactly solvable energy spectrum and partition function.These theories display non local UV structure and have an infinite number of possible RG flows leading to the same fixed point.A holographic dual for such theories was proposed in [20] to be a bulk AdS 3 geometry with a finite radial cut-off.This proposal could be substantiated through the matching of the two point function, energy spectrum and the partition function between the bulk and the boundary (see [21][22][23][24][25][26][27][28][29] for further developments).The authors in [30][31][32][33][34][35][36][37][38][39][40] computed the HEE for bipartite pure state configurations in various T T deformed dual CFTs.Subsequently the authors in [41] obtained the reflected entropy and its holographic dual, the EWCS, for bipartite mixed states in T T deformed dual CFT 2 s.Recently the entanglement negativity for various bipartite mixed states in T T deformed thermal CFT 2 s, and the corresponding holographic dual for bulk finite cut-off BTZ black hole geometries were computed in [42].
Motivated by the developments described above, in this article we compute the OEE for various bipartite mixed states in T T deformed dual CFT 2 s.For this purpose we construct an appropriate replica technique and a conformal perturbation theory along the lines of [32,34,42] to develop a path integral formulation for the OEE in T T deformed CFT 2 s with a small deformation parameter.This perturbative construction is then utilized to compute the first order corrections to the OEE for two disjoint intervals, two adjacent intervals, and a single interval in a T T deformed thermal CFT 2 with a small deformation parameter in the large central charge limit.Subsequently we explicitly compute the bulk EWCS for the above mixed state configurations in the T T deformed thermal dual CFT 2 s by employing a construction involving embedding coordinates as described in [10].Utilizing the EWCS obtained we demonstrate that the first order correction to field theory replica technique results for the OEE in the large central charge and the high temperature limit match exactly with the first order correction to the sum of the EWCS and the HEE verifying the holographic duality between the above quantities in the context of T T deformed thermal CFT 2 s.Following this we extend our perturbative construction to T T deformed finite size CFT 2 s at zero temperature and demonstrate that the leading order corrections to the OEE are vanishing, which is substantiated through bulk holographic computations involving the EWCS.
This article is organized as follows.In section 2 we briefly review the basic features of T T deformed CFT 2 s and the OEE.In section 3 we develop a perturbative expansion for the OEE in a T T deformed CFT 2 .In section 4 this perturbative construction is then employed to obtain the leading order corrections to the OEE for various bipartite states in a T T deformed thermal CFT 2 .Following this we explicitly demonstrate the holographic duality for first order corrections between the OEE and the sum of the bulk EWCS and the HEE for these mixed states.Subsequently in section 5 we extend our perturbative analysis to a T T deformed finite size CFT 2 at zero temperature and show that the leading order corrections to the OEE are zero.This is later verified through bulk holographic computations.Finally, we summarize our results in section 6 and present our conclusions.Some of the lengthy technical details of our computations have been described in appendix A.
2 Review of earlier literature
T T deformation in a CFT 2
We begin with a brief review of a two dimensional conformal field theory deformed by the T T operator defined as follows [19] T T = 1 8 It is a double trace composite operator which satisfies the factorization property [19].The corresponding deformation generates a one parameter family of theories described by a deformation parameter µ (≥ 0) as given by the following flow equation [19,32,34] dI where QFT and I CFT represent the actions of the deformed and undeformed theories respectively.The deformation parameter µ has dimensions of length squared.Note that the energy spectrum may be determined exactly for a T T deformed CFT 2 [43,44].
When µ is small, the action of the deformed CFT 2 may be perturbatively expanded as [32,34] where T ≡ T ww , T ≡ T w w and Θ ≡ T w w describe the components of the stress tensor of the undeformed theory expressed in the complex coordinates (w, w).Our investigation focuses on deformed CFT 2 s at a finite temperature, and finite size deformed CFT 2 s at zero temperature, which are defined on appropriate cylinders.The expectation value of Θ vanishes on a cylinder and the Θ 2 term in eq. ( 2.3) may be dropped from further consideration [32].
Odd entanglement entropy
We now focus our attention on a bipartite mixed state correlation measure termed the odd entanglement entropy (OEE), which approximately characterizes the von Neumann entropy for the partially transposed reduced density matrix of a given bipartite system [8].In this context we begin with a bipartite system comprising the subsystems A and B, described by the reduced density matrix ρ AB defined on the Hilbert space H AB = H A ⊗ H B , where H A and H B denote the Hilbert spaces for the subsystems A and B respectively.The partial transpose ρ T B AB for the reduced density matrix ρ AB with respect to the subsystem B is then given by e where n o is an odd integer.The OEE between the subsystems A and B may now be defined through the analytic continuation of the odd integer n o → 1 in eq.(2.5) as follows [8] S o (A : (2.6)
Odd entanglement entropy in a CFT 2
The subsystems A and B in a CFT 2 may be characterized by the disjoint spatial intervals [z 1 , z 2 ] and [z 3 , z 4 ] in the complex plane [with In [8] the author advanced a replica technique to compute the OEE for bipartite systems in a CFT 2 .The replica construction involves an n o sheeted Riemann surface M no (where n o ∈ 2Z + −1) prepared through the cyclic and anti cyclic sewing of the branch cuts of n o copies of the original manifold M along the subsystems A and B respectively.Utilizing the replica technique, the trace of the partial transpose in eq.(2.5) may be expressed in terms of the partition function on the n o sheeted replica manifold as follows [46,47] Tr The relation in eq.(2.7) may be utilized along with eq.(2.6) to express the OEE in terms of the partition functions as follows The partition function in eq.(2.7) may be expressed in terms of an appropriate four point correlation function of the twist and anti twist operators σ no and σno located at the end points of the subsystems A and B as follows [46,47] (2.9) We are now in a position to express the OEE between the subsystems A and B in terms of the four point twist correlator by combining eqs.(2.5) to (2.7) and (2.9) as follows [8,46,47] S o (A : Note that σ no and σno represent primary operators in CFT 2 with the following conformal dimensions [46][47][48] .11)We also note in passing the conformal dimensions of the twist operators σ 2 no and σ2 no , which are given as follows [8,[46][47][48] h (2) no = h(2) (2.12)
Holographic odd entanglement entropy
We now follow [8,49] to present a brief review of the EWCS.Let M be any specific time slice of a bulk static AdS geometry in the context of AdS d+1 /CFT d framework.Consider a region A in ∂M .The entanglement wedge of A is given by the bulk region bounded by A ∪ Γ min A , where Γ min A is the RT surface for A. It has been proposed to be dual to the reduced density matrix ρ A [50][51][52].To define the EWCS, we subdivide A = A 1 ∪A 2 .A cross section of the entanglement wedge for A 1 ∪ A 2 , denoted by Σ A 1 A 2 , is defined such that it divides the wedge into two parts containing A and B separately.The EWCS between the subsystems A 1 and A 2 may then be defined as [53] where Σ min A 1 A 2 represents the minimal cross section of the entanglement wedge.In [8] the author proposed a holographic duality describing the difference of the OEE and the EE in terms of the bulk EWCS of the bipartite state in question as follows where S(A 1 ∪ A 2 ) is the EE for the subsystem A 1 ∪ A 2 , and E W (A 1 : A 2 ) represents the EWCS between the subsystems A 1 and A 2 respectively.
OEE in a T T deformed CFT 2
In this section we develop an appropriate replica technique similar to those described in [32,34,42] for the computation of the OEE for various bipartite mixed state configurations in a T T deformed CFT 2 .To this end we consider two spatial intervals A and B in a T T deformed CFT 2 defined on a manifold M. The partition functions on M and M no for this deformed theory may be expressed in the path integral representation as follows [refer to eq. (2. 3)] When the deformation parameter µ is small, eqs.(2.3), (2.8) and (3.1) may be utilized to express the OEE as where the superscript µ has been used to specify the OEE in the deformed CFT 2 .The exponential factors in eq.(3.2) may be further expanded for small µ to arrive at The term S The second term on the right hand side of eq.(3.3) may be simplified to obtain the first order correction in µ to the OEE due to the T T deformation as follows We now investigate the behavior of the deformed CFT 2 at a finite temperature 1/β.The corresponding manifold M for this configuration is given by an infinitely long cylinder of circumference β with the Euclidean time direction compactified by the periodic identification τ ∼ τ + β.This cylindrical manifold M may be described by the complex coordinates [48] w with the spatial coordinate x ∈ (−∞, ∞) and the time coordinate τ ∈ (0, β).The cylinder M may be further expressed in terms of the complex plane C through the following conformal map [48] z = e where (z, z) represent the coordinates on the complex plane.The transformation of the stress tensors under the conformal map described in eq.(4.2) is given as The relations in eq. ( 4.3) may be utilized to arrive at where we have used the fact that ⟨T (z)⟩ C = T (z) C = 0 for the vacuum state of an undeformed CFT 2 described by the complex plane.In the following subsections, we utilize eq.(3.5) to compute the first order correction in µ to the OEE in a finite temperature T T deformed CFT 2 for two disjoint intervals, two adjacent intervals and a single interval.
Two disjoint intervals
We begin with the bipartite mixed state configuration of two disjoint spatial intervals A = [x 1 , x 2 ] and B = [x 3 , x 4 ] in a T T deformed CFT 2 at a finite temperature 1/β, defined on the cylindrical manifold M (x 1 < x 2 < x 3 < x 4 ).Note that the intervals may also be represented as A = [w 1 , w 2 ] and B = [w 3 , w 4 ] with τ = 0 [cf.eq.(4.1)].The value of T T Mn o on the replica manifold M no may be computed by insertion of the T T operator into the appropriate four point twist correlator as follows [54,55] Here T k (w), Tk ( w) are the stress tensors of the undeformed CFT 2 on the k th sheet of the Riemann surface M no , while T (no) (w), T (no) ( w) represent the stress tensors on M no [54,55].σ no (w i , wi ), σno (w i , wi ) represent the twist operators located at the end points w i of the intervals.An identity described in [34] has been used to derive the last line of eq.(4.5).The relation in eq. ( 4.3) may now be utilized to transform the stress tensors from the cylindrical manifold to the complex plane.The following Ward identities are then employed to express the correlation functions involving the stress tensors in terms of the twist correlators on the complex plane where O i s represent arbitrary primary operators with conformal dimensions (h i , hi ).Utilizing eq. ( 4.3), we may now express the expectation value in eq.(4.5) as where The four point twist correlator in eq.(4.7) for two disjoint intervals in proximity described by the t-channel is given by [8,56] The conformal dimensions h no , h no and h(2) no in eq.(4.8) are given in eqs.(2.11) and (2.12).We have defined the cross ratio η := z 12 z 34 z 13 z 24 where z ij ≡ z i − z j .We are now in a position to obtain the first order correction due to µ in the OEE of two disjoint intervals in a T T deformed finite temperature CFT 2 by substituting eqs.(4.4), (4.7) and (4.8) into eq.(3.5) as follows The detailed derivation of the definite integrals in eq.(4.9) has been provided in appendix A.1.These results may be used to arrive at We may now substitute (at τ i = 0) into eq.( 4.10) to finally obtain the leading order corrections to the OEE as follows It is worth noting that the last term in the above expression is nothing but the leading order corrections to the entanglement entropy of the two disjoint intervals in the t-channel.Remarkably, in the low temperature limit β ≫ x ij , the corrections to the OEE scales exactly like that for the entanglement entropy, −2µπ 3 c 2 9β 2 [32].In particular, in the zero temperature limit β → ∞, the corrections vanish conforming to our expectations.
Two adjacent intervals
We now turn our attention to the bipartite mixed state configuration of two adjacent intervals As earlier the intervals may be expressed as A = [w 1 , w 2 ] and B = [w 2 , w 3 ] with τ = 0.The value of T T Mn o for two adjacent intervals may be evaluated in a manner similar to that of two disjoint intervals as follows As before the relations in eqs.(4.3) and (4.6) may be utilized to express the expectation value in eq.(4.12) as follows In eq. ( 4.13) we have no with hi = h i (i = 1, 2, 3) [see eqs.(2.11) and (2.12)].The three point twist correlator in eq.(4.13) is given by [57] where C σn e σ2 ne σn e is the relevant OPE coefficient.The first order correction due to µ in the OEE of two adjacent intervals in a T T deformed thermal CFT 2 may now be obtained by substituting eqs.(4.4), (4.13) and (4.14) into eq.(3.5) as follows The technical details of the definite integrals in eq.(4.15) have been included in appendix A.2.
The correction to the OEE may then be expressed as As earlier we may now restore the x coordinates by inserting (at τ i = 0) into eq.(4.16) to arrive at Once again, we see that the leading order corrections to the OEE scales exactly like that of the entanglement entropy in the low temperature limit β ≫ x ij .It is interesting to note that we are unable to reproduce the above result by taking an appropriate adjacent limit of the corrections to the disjoint intervals given in eq.(4.11).However, this does not lead to any contradiction since our field theory results are perturbative and there is no a priori reason to believe that a limiting analysis holds in each order of conformal perturbation theory.More evidence towards this mismatch will be provided from a holographic viewpoint in section 4.2.2.
A single interval
We finally focus on the case of a single interval A = [−ℓ, 0] in a thermal T T deformed CFT 2 (ℓ > 0).To this end it is required to consider two auxiliary intervals [48].The intervals may be equivalently represented by the coordinates As before the intervals may also be characterized as ] with τ = 0.The OEE for the mixed state configuration of the single interval A is then evaluated by implementing the bipartite limit L → ∞ (B 1 ∪ B 2 → A c ) subsequent to the replica limit n o → 1 [48].For the configuration described above, the integral of T T Mn o on the replica manifold is given by As earlier eq.( 4.18) may be simplified by utilizing eqs.(4.3) and (4.6) as follows where no with hi = h i (i = 1, 2, 3, 4) [see eqs.(2.11) and (2.12)].The four point twist correlator in eq.(4.19) is given by [48] where c no and c (2 no are the normalization constants.The functions F no (η) and Fno (η) in eq.(4.20) satisfy the following OPE limits The functions f (η) and f (η) introduced in eq.(4.21) are defined as follows lim no→1 [F no (η)] The first order correction due to µ in the OEE of a single interval in a T T deformed CFT 2 at a finite temperature 1/β may now be computed from eq. (4.21) by reverting back to the coordinates involving ℓ, L and implementing the bipartite limit L → ∞ as follows The technical details of the integrals necessary to arrive at eq. (4.22) from eq. (4.21) have been provided in appendix A.3.Note that the second term on the right hand side of eq. ( 4.22) represents a divergent piece in the OEE for a single interval.Essentially, the quantity inside the parenthesis of the second term is the leading order correction to the entanglement entropy of the interval A ∪ B 1 ∪ B 2 .In the bipartite limit L → ∞, this represents the entanglement entropy of the entire system and hence should be vanishing.The IR divergence is an artifact of placing a cutoff in a continuum field theory2 .Interestingly the universal finite piece of the OEE for a single interval in a T T-deformed CFT 2 may be rewritten up to leading order in the deformation as follows where the thermal entropy S Th A is now given by A comparison of the above expression to the thermal contribution in the undeformed case, πcℓ 3β [48], indicates that the thermal entropy receives non-trivial corrections due to the T Tdeformation.
Holographic OEE in a T T deformed thermal CFT 2
We now turn our attention to the holographic description of the OEE as advanced in [8] for various bipartite mixed states in a T T deformed CFT 2 at a finite temperature 1/β.The holographic dual of a T T deformed CFT 2 is described by the bulk AdS 3 geometry corresponding to the undeformed CFT 2 with a finite cut-off radius r c given as follows [20] In eq.(4.25) µ is the deformation parameter, c is the central charge, ϵ is the UV cut-off of the field theory, and R is the AdS 3 radius.For a T T deformed CFT 2 at a finite temperature 1/β, the corresponding bulk dual is characterized by a BTZ black hole [58] with a finite cut-off, represented by [20] (4.26) In the above metric, the horizon of the black hole is located at r = r h , with β = 2πR 2 r h as the inverse temperature of the black hole and the dual CFT 2 .For simplicity from now onwards we set the AdS radius R = 1.The metric on the T T deformed CFT 2 , located at the cut-off radius r = r c , is conformal to the bulk metric at r = r c as follows [32,34] where x represents the spatial coordinate on the deformed CFT 2 .To compute the EWCS, we embed the BTZ black hole described by eq.(4.26) in R 2,2 as follows [10] ds The metric in eq.(4.26) may then be described by these embedding coordinates as follows [59,60] Note that for convenience the embedding coordinates in eq.(4.29) are parameterized in terms of the coordinate x described in eq.(4.27).We also introduce a new coordinate u = 1/r to simplify later calculations, with u c ≡ 1/r c and u h ≡ 1/r h .We also note the Brown Henneaux formula G N = 3/(2c) described in [61], which will be extensively used in later sections.In the following subsections we apply the methods described above to compute the holographic OEE from eq. (2.14) for two disjoint intervals, two adjacent intervals, and a single interval in a T T deformed thermal holographic CFT 2 .
Two disjoint intervals
We begin with the two disjoint spatial intervals A = [x 1 , x 2 ] and B = [x 3 , x 4 ] with x 1 < x 2 < x 3 < x 4 as described in section 4.1.1.The setup has been shown in fig. 1.The EWCS involving the bulk points X(s 1 ), X(s 2 ), X(s 3 ), X(s 4 ) is given by [10] where The four points on the boundary may be expressed in the global coordinates as X(0, r c , x i ) for i = 1, 2, 3, 4. The corresponding EWCS may then be computed from eq.
To compare with the field theory computations in section 4.1.1,we have to take the limit of small deformation parameter µ, corresponding to large cut-off radius r c (or small u c ) [see eq.(4.25)].Further we must consider the high temperature limit β ≪ |x ij |, as the dual cut-off geometry resembles a BTZ black hole only in the high temperature limit.Expanding eq.(4.32) for small u c and β ≪ |x ij | we arrive at The first term in eq.(4.33) is the EWCS between the two disjoint intervals for the corresponding undeformed CFT 2 .The rest of the terms (proportional to u 2 c and thus to µ) describes the leading order corrections to the EWCS due to the T T deformation.The third term becomes negligible (compared to the second term) in the high temperature limit.The change in HEE for two disjoint intervals in proximity due to the T T deformation is given by [34] The change in holographic OEE for two disjoint intervals due to the T T deformation may now be computed by combining eqs.(4.33) and (4.34) through eq.(2.14).Interestingly our holographic result matches exactly with our earlier field theory computation in eq.(4.11), in the large central charge limit together with small deformation parameter and high temperature limits, which serves as a strong consistency check for our holographic construction.
Two adjacent intervals
We now consider two adjacent intervals A = [x 1 , x 2 ] and B = [x 2 , x 3 ] with x 1 < x 2 < x 3 as described in section 4.1.2.The configuration has been depicted in fig. 2. The EWCS for the corresponding bulk points X(s 1 ), X(s 2 ), X(s 3 ) is given by [10] where
.36)
As earlier the three points on the boundary may be expressed in the global coordinates as X(0, r c , x i ) for i = 1, 2, 3.The corresponding EWCS may then be computed from eq. (4.35) as Similar to the disjoint configuration, the first term in eq.(4.37) is the EWCS between the two adjacent intervals for the corresponding undeformed CFT 2 .The rest of the terms (proportional to u 2 c and thus to µ) describes the leading order corrections for the EWCS due to the T T deformation.The third term becomes negligible (compared to the second term) in the high temperature limit.The change in HEE for two adjacent intervals due to the T T deformation is given by [34] The change in holographic OEE for two adjacent intervals due to the T T deformation may now be obtained from eqs. (2.14), (4.37) and (4.38), and is described by eq.(4.17),where as earlier we have used the holographic dictionary.Once again we find exact agreement between our holographic and field theory results (in the large central charge limit, along with small deformation parameter and high temperature limits), which substantiates our holographic construction.Note that a limiting analysis of the EWCS for two disjoint intervals for the undeformed CFT 2 does not lead to the corresponding adjacent result given by the first term in eq.(4.37).This mismatch is not surprising since for the case of disjoint intervals the EWCS is given by a minimal curve between two bulk geodesics whereas for adjacent intervals it is a minimal curve between a bulk geodesic and a boundary point.In this connection, we should not expect the corrections due to the T T deformations to have a well defined adjacent limit as well.
A single interval
Finally we consider the case of a single interval A = [−ℓ, 0] in a thermal T T deformed holographic CFT 2 (ℓ > 0).As described in section 4.1.3this necessitates the introduction of two large but finite auxiliary intervals [48].The situation has been outlined in fig. 3. We then compute the holographic OEE for this modified configuration, and finally take the bipartite limit B → A c (implemented through L → ∞) to obtain the desired OEE for the original configuration of the single interval A. The EWCS between the intervals A and B = B 1 ∪ B 2 may be computed from the following relation [62][63][64] ẼW (A :
Black Hole Interior
where ẼW (A : B) denotes an upper bound on the EWCS between the intervals A and B. All subsequent computations involving eq. ( 4.39) should be interpreted accordingly.Note that each term on the right hand side of eq.(4.39) represents the EWCS of two adjacent intervals which has already been computed in section 4.2.2.The corrections to these terms may thus be read off from eq. (4.37) as follows and where we have already taken the limits of small deformation parameter and high temperature.
The correction to the HEE for a single interval is given as follows [34] δS where the bipartite limit has already been implemented.The correction to holographic OEE for a single interval due to the T T deformation may then be computed from eqs. (4.39) to (4.42) through eq.(2.14) on effecting the bipartite limit L → ∞ as follows where we have utilized the holographic dictionary as earlier.Note that on taking the high temperature limit (β → 0), eq. ( 4.22) reduces (the second part of the first term becomes negligible as e β → 0) exactly to eq. (4.43).This once again serves as a robust consistency check for our holographic construction.
We may understand the corrections to the thermal entropy described in eq. ( 4.24) from a holographic viewpoint as well.Recall that the holographic entanglement entropy receives the thermal contribution as the corresponding RT surface wraps the black hole horizon [3].Under the T T deformation, the holographic screen is pushed inside the bulk and the wrapping of the corresponding minimal surface around the black hole horizon is now smaller compared to the undeformed case.As a result, the contribution to the thermal entropy decreases compared to the undeformed case.
OEE in a T T deformed finite size CFT 2
In this section we follow a similar prescription as in section 4.1 to formulate a perturbative expansion for the OEE in a T T deformed finite size CFT 2 of length L at zero temperature.For this setup, the corresponding manifold M describes an infinitely long cylinder of circumference L with the length direction periodically compactified by the relation x ∼ x+L [47].The cylindrical manifold M for this configuration may be represented by the complex coordinates described in eq.(4.1) with the spatial coordinate x ∈ (0, L) and the time coordinate τ ∈ (−∞, ∞) [47].The cylinder M may be further described on the complex plane C through the following conformal map [47] where (z, z) are the coordinates on the complex plane.The relations in eqs.(4.3) and (4.4) remain valid with β effectively replaced by iL.With these modifications, the expressions in eqs.(3.1) to (3.3) and (3.5) may now be applied to compute the OEE in a T T deformed finite size CFT 2 at zero temperature.
Two disjoint intervals
As earlier we start with the mixed state of two disjoint spatial intervals A = [x 1 , x 2 ] and B = [x 3 , x 4 ] in a T T deformed finite size CFT 2 of length L at zero temperature, defined on the cylindrical manifold M described above (x 1 < x 2 < x 3 < x 4 ).The first order correction in the OEE of two disjoint intervals in a T T deformed finite size CFT 2 may be obtained by substituting eqs.(4.5) to (4.8) along with eq.(5.1) (β replaced by iL) into eq.(3.5) as follows We now substitute z → e − 2πi(x+iτ ) L into eq.( 5.2) and integrate the resulting expression with respect to x to arrive at .
We observe that the first four terms on the right hand side of eq. ( 5.3) readily vanish on inserting the limits of integration x = 0 and x = L. Since we have considered the system on a constant time slice, we may take τ j (j = 1, 2, 3, 4) to be zero for all boundary points, and the contributions of the logarithmic functions become zero identically.Thus it is observed that the resultant integrand for the τ integration in eq. ( 5.3) vanishes leading to no non-trivial first order correction to the OEE.This is in conformity with the vanishing entanglement entropy for a finite sized T T deformed CFT 2 [32].
Two adjacent intervals
We now focus on the bipartite mixed state of two adjacent intervals A = [x 1 , x 2 ] and B = [x 2 , x 3 ] in a T T deformed finite size CFT 2 of length L at zero temperature, defined on the cylindrical manifold M described by eqs.(4.1) and (5.1) (x 1 < x 2 < x 3 ).For this case, eqs.(3.5) and (4.12) to (4.14) may still be employed along with the relation described in eq.(5.1), effectively replacing β by iL.The first order correction in OEE due to µ for two adjacent intervals is then given by Next we replace z → e − 2πi(x+iτ ) L into eq.( 5.4) and subsequently integrate with respect to x to obtain .
Similar to the disjoint case, the first three terms on the right hand side of eq.(5.5) readily vanish when the limits of integration x = 0 and x = L are inserted.As earlier, for a constant time slice τ j = 0 (j = 1, 2, 3), the logarithmic functions also contribute nothing to the definite integral.
The resulting integrand for the τ integration in eq.(5.5) thus vanishes.Hence the corresponding first order correction in the OEE of two adjacent intervals turns out to be zero.
A single interval
Finally we turn our attention to the bipartite mixed state configuration of a single interval A = [x 1 , x 2 ] in a T T deformed finite size CFT 2 of length L at zero temperature, defined on the cylindrical manifold M given in eqs.(4.1) and (5.1) (x 1 < x 2 ).The construction of the relevant partially transposed reduced density matrix for this configuration is described in [47].
Once again we may utilize eqs.(4.18) and (4.19) with only two points z 1 and z 2 , subject to eq. (5.1) (with the effect of iL replacing β), and a two point twist correlator as mentioned below in eq.(5.7).We have expressed the modified version of eq.(4.19) as applicable for the system under consideration for convenience of the reader as follows where no with hi = h i (i = 1, 2) [see eqs.(2.11) and (2.12)].The corresponding two point twist correlator for this configuration is given by [47] where C 12 is the relevant normalization constant.Following a similar procedure like the earlier cases, the first order correction for the OEE of this setup may be given as follows (5.8) We then obtain the following expression by substituting z → e − 2πi(x+iτ ) L into eq.(5.8) and integrating with respect to (5.9) Like the previous cases, we observe that the first two terms in eq.(5.9) vanish on implementation of the limits of integration x = 0 and x = L.As the system under consideration is on a constant time slice τ j = 0 (j = 1, 2), once again the terms containing the logarithmic functions also vanish.Again the resulting integrand for the τ integration in eq.(5.9) vanishes, indicating the vanishing of the first order corrections of the OEE as earlier.
Holographic OEE in a T T deformed finite size CFT 2
The bulk dual of a T T deformed finite size CFT 2 of length L at zero temperature is represented by a finite cut-off AdS 3 geometry expressed in global coordinates as follows [1,2] ds where ϕ = 2πx/L.As earlier we embed this AdS 3 geometry in R 2,2 as follows [10] ds The metric in eq. ( 5.10) may be expressed in terms of the embedding coordinates introduced in eq.(5.11) as follows X 0 (τ, ϕ, ρ) = R cosh ρ sin τ, X 1 (τ, ϕ, ρ) = R cosh ρ cos τ, (5.12) The finite cut-off of the AdS 3 geometry is located at ρ = ρ c , where cosh ρ c = 3L 2 2µcπ 3 . (5.13) With the UV cut-off of the field theory given by ϵ = µcπ/6 [see eq.(4.25)], the relation in eq.(5.13) may be rewritten as cosh ρ c = L 2πϵ . (5.14)
Two disjoint intervals
We begin with two disjoint spatial intervals A = [x 1 , x 2 ] and B = [x 3 , x 4 ] on a cylindrical manifold M as detailed in section 5.1.1 (x 1 < x 2 < x 3 < x 4 ).Note that the EWCS involving arbitrary bulk points X(s 1 ), X(s 2 ), X(s 3 ), X(s 4 ) for a T T deformed finite size CFT 2 is described by [10] where (5.16) The end points of the two disjoint intervals under consideration on the boundary may be represented by the embedding coordinates as X(0, ϕ i , ρ c ) for i = 1, 2, 3, 4, where ϕ 1 < ϕ 2 < ϕ 3 < ϕ 4 (Note that ϕ i = 2πx i /L).The corresponding EWCS may then be computed from eq. (5.15) as To extract the desired first order corrections, we now expand eq.(5.17) in small (1/ cosh ρ c ) as follows where we have utilized eq.(5.14) to substitute ϵ.The first term in eq.(5.18) is the EWCS between the two disjoint intervals for the corresponding undeformed CFT 2 .The rest of the terms characterizing the corrections for the EWCS due to the T T deformation are second order and higher in ϵ and thus negligible.The corresponding leading order corrections for the HEE due to the T T deformation has been shown to be zero [32].Thus the leading order corrections to the holographic OEE of two disjoint intervals in a T T deformed finite size CFT 2 is zero, which is in complete agreement with our corresponding field theory computations in the large central charge limit described in section 5.1.1.
The vanishing of the EWCS as well as the entanglement entropy may be attributed to the fact that in T T deformed finite sized CFT 2 s, the lengths of the intervals do not depend on the cutoff radius in eq.(5.13).In contrast, for thermal CFT 2 s the lengths of the intervals depend non-trivially (cf.eq.(4.27)) on the cutoff radius r c as long as r h ̸ = 0 (or, 1/β ̸ = 0) [32].We will discuss this issue further in section 6.
Two adjacent intervals
We now turn our attention to the case of two adjacent intervals A = [x 1 , x 2 ] and B = [x 2 , x 3 ] (x 1 < x 2 < x 3 ) as described in section 5.1.2.The bulk description of the end points of the intervals A and B for a T T deformed finite size CFT 2 is given by X(0, ϕ i , ρ c ) for i = 1, 2, 3, where ϕ 1 < ϕ 2 < ϕ 3 (ϕ i = 2πx i /L).The EWCS for this configuration is described as follows [10] where We now utilize eq.(5.19) to explicitly compute the EWCS as follows We are now in a position to extract the leading order corrections to the EWCS from eq. (5.21) by expanding in small (1/ cosh ρ c ) as follows where we have already substituted the relation in eq.(5.14).As earlier the first term on the right hand side of eq.(5.22) describes the EWCS between the two adjacent intervals for the corresponding undeformed CFT 2 .Again the T T correction terms are second order and higher in ϵ and negligible.The leading order corrections of the HEE for this configuration due to the T T deformation has been demonstrated to be vanishing [32].Hence the leading order corrections to the holographic OEE for this case vanishes, which once again is in conformity with our field theory results in the large central charge limit described in section 5.1.2.
A single interval
The bulk representation of the end points of a single interval of length ℓ may be given by X(0, 0, ρ c ) and X(0, δϕ, ρ c ), where δϕ = 2πℓ L .The EWCS for the given configuration (same as the HEE for a single interval) may be computed as Once again eq. ( 5.23) may be expanded for small (1/ cosh ρ c ) to obtain the following expression for the EWCS where we have used eq.(5.14) to replace cosh ρ c .Once again the first term of eq. ( 5.24) represents the EWCS of a single interval for the corresponding undeformed CFT 2 , while we have neglected the second and higher order correction terms in ϵ.The corresponding corrections for the HEE of a single interval has been shown to be zero [32].Thus the leading order corrections to the holographic OEE for a single interval vanishes, demonstrating agreement with our field theory calculations in the large central charge limit detailed in section 5.1.3.
Summary and discussions
To summarize we have computed the OEE for different bipartite mixed state configurations in a T T deformed finite temperature CFT 2 with a small deformation parameter µ.In this context we have developed a perturbative construction to compute the first order correction to the OEE for small deformation parameter through a suitable replica technique.This incorporates definite integrals of the expectation value of the T T operator over an n o sheeted replica manifold.We have been able to express these expectation values in terms of appropriate twist field correlators for the configurations under consideration.Utilizing our perturbative construction we have subsequently computed the OEE for the mixed state configurations described by two disjoint intervals, two adjacent intervals, and a single interval in a T T deformed thermal CFT 2 .
Following the above we have computed the corresponding EWCS in the dual bulk finite cut-off BTZ black hole geometry for the above configurations utilizing an embedding coordinate technique in the literature.Interestingly it was possible to demonstrate that the first order correction to the sum of the EWCS and the corresponding HEE matched exactly with the first order correction to the CFT 2 replica technique results for the OEE in the large central charge and high temperature limit.This extends the holographic duality for the OEE proposed in the literature to T T deformed thermal CFT 2 s.
Finally we have extended our perturbative construction to T T deformed finite size CFT 2 s at zero temperature.We have computed the first order corrections to the OEE for the configurations mentioned earlier in such CFT 2 s in the large central charge limit.In all the cases we have been able to show that the leading order corrections vanish in the appropriate limits.Quite interestingly it was possible to demonstrate that the first order corrections to the corresponding bulk EWCS in the dual cut-off BTZ geometry were also identically zero in a further validation of the extension of the holographic duality for the OEE in the literature to T T deformed finite size CFT 2 s at zero temperature.
There are several recurring features of our results.Note that when the intervals are located along a compactified direction, there are no T T corrections as the angular separations of the subsystems are not affected by pushing the holographic screen inside the bulk [65], as depicted in fig. 4. On the other hand, when this direction is non-compact, the spatial extents of the subsystems become dependent on the finite cutoff radius as depicted in figs. 1 to 3, and hence there will be appropriate T T corrections [65].For thermal CFT 2 s, the time direction is compactified, but the intervals are spatial and hence situated along the spatial direction.Thus for thermal CFT 2 s our results indicated corrections due to the T T deformations.For finite size CFT 2 s, the space direction is compactified, and the corresponding corrections vanish.It will be instructive to develop similar constructions for other entanglement measures such as entanglement of purification, balanced partial entanglement, reflected entropy etc. for T T deformed CFT 2 s.Also a covariant framework for holographic entanglement in these theories along the lines of the HRT construction is an important open issue.Furthermore, it will be interesting to extend our analysis to T T deformations of thermal CFT 2 with conserved charges.These constitute exciting open problems for the future.
A.1 Two disjoint intervals
The holomorphic part of the integral in eq.(4.9) may be written as Due to the presence of branch points, the logarithmic functions necessitate careful treatment while implementing the limits of integration τ = 0 and τ = β.The following relation outlines the contribution due to a branch point at z = z j [32,34] log e We are now in a position to integrate over x and utilize the prescription described above to implement the limits of integration to arrive at The anti holomorphic part of the integral in eq.(4.9) follows a similar analysis and produces the same result as the holomorphic part.
A.2 Two adjacent intervals
The holomorphic part of the integral in eq. ( 4.15) may be written as We proceed in a similar manner to the disjoint configuration as described in appendix A.1.The indefinite integration with respect to τ leads to the following primitive function On implementation of the limits of integration τ = 0 and τ = β, the non logarithmic terms in the above expression vanish, while the contributions of the logarithmic terms follow the relation in eq.(A.4).Due to the relation in eq.(A.4), the limits of integration over x for each term in the integrand gets modified as follows dx, for j = 1, 2, 3.
The integration over x may now be performed to arrive at As earlier, the anti holomorphic part of the integral gives result identical to the holomorphic part.
The desired correction to the OEE of a single interval of length ℓ may now be obtained through the substitutions {z 1 , z 2 , z 3 , z 4 } → {e As before the anti holomorphic part of the integral produces identical result to the holomorphic part.
j
⟩ describe orthonormal bases for the Hilbert spaces H A and H B respectively.The Rényi odd entropy of order n o between the subsystems A and B may be defined as[45] B) in eq.(3.3) represents the corresponding OEE for the undeformed CFT 2 .The expectation values of the T T operator on the manifolds M and M no appearing in eq.(3.3) are defined as follows T T M = M Dϕ e −I CFT (T T ) M Dϕ e −I CFT
Figure 2 :
Figure 2: EWCS for two adjacent intervals in a T T deformed CFT2. Figure based on [42].
Figure 3 :
Figure 3: EWCS for a single interval in a T T deformed CFT2. Figure based on [42].
Figure 4 :
Figure 4: Schematics of two disjoint intervals placed along the compactified direction in a holographic T T deformed CFT2.The undashed (dashed) circle denotes the location of the holographic screens before (after) the T T deformation.
4 )
The branch cuts of the logarithmic functions change the limits of the x integrals as follows∞ −∞ dx → ∞ β 2π log z j dx,for j = 1, 2, 3, 4. | 11,184.6 | 2023-07-10T00:00:00.000 | [
"Physics"
] |
On the Difference in Quality between Current Heuristic and Optimal Solutions to the Protein Structure Alignment Problem
The importance of pairwise protein structural comparison in biomedical research is fueling the search for algorithms capable of finding more accurate structural match of two input proteins in a timely manner. In recent years, we have witnessed rapid advances in the development of methods for approximate and optimal solutions to the protein structure matching problem. Albeit slow, these methods can be extremely useful in assessing the accuracy of more efficient, heuristic algorithms. We utilize a recently developed approximation algorithm for protein structure matching to demonstrate that a deep search of the protein superposition space leads to increased alignment accuracy with respect to many well-established measures of alignment quality. The results of our study suggest that a large and important part of the protein superposition space remains unexplored by current techniques for protein structure alignment.
Introduction
Pairwise protein structure alignment is one of the most important problems in computational molecular biology. At the same time, protein structure alignment is a very difficult problem, due to an in�nite number of possible ways to position a pair of proteins in the three-dimensional space. Because of the enormous size of the search space, the research into protein structure alignment has been traditionally focused on the development of methods with better objective functions, that explore a relatively small but representative set of proteins' spatial superpositions.
In this paper, we take a different approach and study the bene�ts of searching proteins' superpositions in a more detailed manner. We demonstrate signi�cant increase in the alignment accuracy of several well-known distance-based alignment methods, obtained by utilizing the superpositions that rigorously optimize a very simple and intuitive alignment metric, de�ned as the largest number of residues from the input proteins that can be �t under a prede�ned distance cutoff.
e size of gap between the accuracy of current heuristic solutions and optimal solutions, observed in this study, suggests that the protein structure alignment problem will likely remain a hot topic in years to come.
Materials and Methods
Our study is carried out using two protein structure alignment benchmarks: Sisyphus and FSSP. In both benchmarks, an in-house algorithm, MaxPairs [1], is applied to compute the superpositions that closely approximate the measure , which is de�ned as the largest number of pairs of residues from the input proteins that can be �t under Ångströms. MaxPairs algorithm is based on the approximation algorithm EPSILON-OPTIMAL [1], which is capable of �nding a superposition of the input proteins that �ts at least as many pairs of residues under the distance as an optimal superposition �ts under the distance , for any accuracy threshold . As an approximation algorithm, EPSILON-OPTIMAL suffers from high computational complexity. e algorithm's run time is a high degree polynomial in the lengths of the structures being compared. To circumvent high computational cost, the present study utilizes MaxPairs-a heuristic version of EPSILON-OPTIMAL that searches through a relatively small subset of the space of all superpositions of the input proteins inspected by EPSILON-OPTIMAL. While still not practical, as demonstrated in [1], MaxPairs enjoys accuracy superior to that of some widely utilized alignment programs and, as such, this algorithm is an indispensable tool for assessing the precision of more efficient and more popular algorithms. In present study, we set the distance cutoff to Å and the accuracy threshold to . Going below proves to be computationally prohibitive with our computing infrastructure.
We evaluated the performance of three well-known methods for protein structure comparison, STRUCTAL [2][3][4], TM-align [5], and LOCK2 [6,7], before and aer replacing their original superpositions with superpositions that optimize . It is important to emphasize that our experiment is not designed to compare these three methods head-to-head, but rather to assess the extent of improvements in the accuracy of each method that can be made by exploring the search space in a more thorough manner.
In choosing the methods for our study, we only considered the availability of soware and the simplicity of implementing the alignment scoring functions (see the Results section). An overview of the three algorithms is given below.
STRUCTAL. e STRUCTAL algorithm [2][3][4] employs iterative dynamic programming to balance the cRMS score with the lengths of aligned regions. In each iteration, the algorithm computes an optimal residue-residue correspondence (alignment) of the input proteins , … , ) and , … , ) and then �nds a superposition that minimizes cRMS of the aligned subchains , … , ) and , … , ). e cRMS score is given by (1) e alignment step in STRUCTAL is carried out using a dynamic programming routine, which implements the following recurrence formula: where , 20 e outputs of STRUCTAL are the subchains of and of , along with the rigidly transformed protein , denoted bŷ, and a residue-residue correspondence that maximizes the STRUCTAL score where , denotes the total number of gaps in the alignment. e STRUCTAL program used in our analysis was downloaded from http://csb.stanford.edu/levitt/Structal/.
TM-align. TM-align is another popular protein structure alignment program, widely used in many applications, in particular for assessing the quality of protein models generated by comparative modeling or abinitio techniques. e score matrix in TM-align is protein-length speci�c and is de�ned as where 0 .24 √ − 5 − . , and is the length of the shorter structure [5]. In contrast to linear gap penalties employed by STRUCTAL, the gap penalties in TMalign are affine and are set to 0.6 for gap-opening and 0.0 for gap-extension [5]. An improved version of the algorithm, called Fr-TM-align, has been published [8]. e TMalign soware, used in this study, was downloaded from http://zhanglab.ccmb.med.umich.edu/TM-align/.
LOCK2. LOCK2 [6] is an improved version of the original LOCK program [7]. It incorporates secondary structure information into the alignment process. An initial superposition is obtained by comparing the vectors of secondary structure elements. An iterative procedure is then applied to minimize RMSD between aligned subchains of the input proteins, using the threshold distance of 3 Å for atomic superposition. Rigid body motions for RMSD minimization are realized using quaternion transformations [9,10]. e alignment returned by LOCK2 is a sequence of pairs of points , , … , , , where are each other's nearest neighbors. More specifically, for every , … , , the point is the closest point in protein to the point and vice versa. e �nal alignment is generated through a two-step process. First, for every atom from protein , the algorithm �nds the nearest atom from protein that is at distance 3 Å from . In the second step, the algorithm selects the maximum number of aligned pairs in sequential order, by removing pairs that violate colinearity.
Sisyphus
Benchmark. e Sisyphus test [11] is frequently used to assess the accuracy of automated methods for Step 2: Final alignment F 1: e procedure for creating methods' speci�c alignments and alignments based on MaxPairs superpositions. protein structure comparison [1,12]. is sophisticated benchmark utilizes 125 alignments of structurally related proteins, created by experts in the �eld of protein structure analysis. e reference alignments can be downloaded from http://sisyphus.mrc-cpe.cam.ac.uk.
In present study, we (like Rocha et al. [12]) utilize only a subset of the Sisyphus test set, containing 106 alignments between single-chain proteins. e two-step process is illustrated in Figure 1. In the �rst step, STRUCTAL, TM-align, and LOCK2 are run with default parameters to generate the methods' speci�c alignments between proteins from the Sisyphus set. ese alignments are then compared to the reference ("gold-standard") alignments to compute the percentage of correctly aligned residue pairs [1,12].
In the second step, the MaxPairs algorithm is run to compute the set of (near-)optimal superpositions, namely, the superpositions that rigorously maximize the number of pairs of atoms that can be �t under 3 �. �e used our own implementations of the STRUCTAL, TM-align, and LOCK2 alignment procedures to compute optimal residue-residue correspondence (alignment) between the newly superimposed proteins. e percentage agreement with reference alignments is recorded again and compared to the agreement obtained in the �rst step.
e agreement with reference alignments in the Sisyphus test is de�ned as a function of the magnitude of the alignment error. More speci�cally, for the alignment tolerance shi , the agreement is de�ned as / ref , where is the number of aligned residues that are shied by no more than positions in the reference alignment and ref is the length of the reference alignment [12]. e perfect agreement is the one that corresponds to zero-shi ( . e dashed lines in Figures 2, 3, and 4 track the performance of original STRUCTAL, TM-align, and LOCK2 methods. e solid lines show the performance of the same methods when run on the superpositions that maximize the number of residues under 3 �. As seen in these �gures, there is a signi�cant boost in the methods' accuracy resulting from the "�ne-tooth comb" search of superposition space. More precisely, the new superpositions improve absolute agreement with the reference alignments for STRUCTAL, TM-align, and LOCK2 by 11%, 5%, and 5%, respectively, with a similar trend continuing for nonzero shi. e increase in number of correctly aligned residues, obtained by switching to MaxPairs superpositions, varies from one pair of structures to another (Figures 5, 6, and 7). For some pairs, the difference is striking. However, it should be emphasized that, in some of these cases, such a high difference might be due to unavailability of information in P�B �les used by the methods in our study. For instance, the LOCK method is built to take advantage of the residues' secondary structure assignment. Hence, it is reasonable to assume that the lack of secondary structure information in the P�B �le for one or both structures will oen decrease the accuracy of the LOCK alignment of those structures. A more detailed analysis shows that, when MaxPairs superpositions are used, the number of residue pairs correctly aligned by STRUCTAL increases by more than 10 for 31 out of 106 test pairs. e corresponding number of test pairs for which the same magnitude of increase is observed for TMalign and LOCK is 14 and 13, respectively. For comparison, original STRUCTAL superpositions have such an advantage only in 3 out of 106 test pairs. For TM-align and LOCK, the corresponding numbers are 5 and 4. e value added by the deep search of superposition space makes some of the methods analyzed here comparable to the best to date methods evaluated in the Sisyphus test. A slight accuracy advantage of algorithms such as Matt [13], PPM [14], and ProtDeform [12] is due to the fact that these methods consider proteins as �exible, rather than rigid objects. In other words, unlike STRUCTAL, TM-align, and LOCK2, which all utilize single transformations of input proteins to compute �nal alignments, the new generation of protein structure alignment methods consider sequences of different rigid transformations at different sites. It should be emphasized that the methods based on sequences of local transformations can themselves bene�t from incorporating the ��ne-tooth comb� search to detect fragments of local similarity. is would lead to further improvements in their overall accuracy, but the true extent of these improvements can only be accessed through a carefully designed study.
FSSP Benchmark.
Our second benchmarking set utilizes 183 representative pairs of proteins, related at various levels according to FSSP structural classi�cation [15]. is test set consists of 55 family pairs, 68 superfamily pairs, and 60fold pairs (see Supplementary Material available online at doi:10.1155/2012/459248).
In contrast to Sisyphus benchmark, which compares alignments returned by automated methods to those generated by human experts, the alignment precision in the FSSP benchmark is assessed using a set of well-known alignment quality measures: (i) NumPairs(d) represents the number of aligned pairs of residues in two proteins that are at distance ≤ Ångströms from each other. We note that, unlike ≤ , which is a globally optimal metric, representing the maximum number of pairs of residues in the superimposed structures that can be placed under Ångströms, NumPairs(d) represents the method speci�c count of pairs of aligned residues at distance ≤ .
(ii) Similarity Index, denoted by SI, is de�ned as min{ ( ) ( ) , where is the number of aligned residues in proteins and and ( ) and ( ) are the lengths of and , respectively [16]. e cRMS score, used in the formula for SI, is computed based upon the method speci�c alignments. As seen in Table 1, a more detailed search of the superposition space increases both NumPairs and PSI scores for all three methods in our study. e increase in SI scores is also seen for both STRUCTAL and LOCK2. It is interesting to note, though, that the original TM-align superpositions yeald better SI scores than the optimal superpositions. e FSSP level-speci�c results of our benchmarking analysis are summarized in Tables 2, 3, and 4. Figure 8 shows the alignment independent PSI scores computed from superpositions generated by STRUCTAL, TM-align, and LOCK2. For reference, a near-optimal PSI score, averaged across the FSSP test set and computed by the MaxPairs algorithm, is also provided in this �gure. e data used in Figure 8 shows that (on average) STRUCTAL, TM-align, and LOCK fail to place 8%, 7%, and 11% pairs of residues at distance ≤ 3Å, respectively. As expected, the best performance of these methods is observed at the FSSP family level (STRUCTAL fails to place 5%, TM-align: 5%, LOCK: 6%) and worst at FSSP fold level (STRUCTAL: 15%, TM-align: 12%, LOCK: 17%).
Illustrative Examples.
Several examples illustrating the advantage of the deep search of superposition space are given in Figures 9,10,11,12,and 13. While examples in Figures 9-13 are striking, it should be noted that they represent rather isolated cases. In fact (as the reader can conclude from Figures 5, 6, and 7), there are several examples where the output of heuristic methods compares favorably to that of MaxPairs (although the difference in quality is not as obvious as that shown in Figures 9-13). As emphasized before, in many instances, the inaccuracy of the alignment generated by heuristic methods is due to insufficient structural information stored in the PDB �le, relied upon these methods.
Discussion
Resent years have witnessed advances in the development of methods for approximate and exact solution to protein structure alignment problem. One of the �rst such methods is the Umeyama�s algorithm for �nding the transformation that gives the least mean squared error between two point patterns [17]. Since then, several algorithms have been published for �nding a near-optimal solution to the structure alignment problem under distance constraints. e procedure by Akutsu, for example, returns a superposition of the input proteins that �ts at least as many pairs of residues under the distance as an optimal alignment �ts under the distance , for every �xed [18]. is algorithm runs on the order of 8 ), where denotes the protein length. An improved running time procedure for the same problem has also been published [19]. e EPSILON-OPTIMAL algorithm, used in present study, is able to place at least as many pairs of residues under the distance as an optimal superposition places under the distance . e asymptotic cost of EPSILON-OPTIMAL is 4 ) for globular and 8 ) for nonglobular proteins [1]. e polynomial time approximation schemes (PTASs) have been designed for selected nonsequential protein structure alignment measures [20] as well as for the class of measures satisfying the so-called Lipschitz condition [21]. Moreover, methods exist that rigorously minimize proteins' intra-atomic distances, including the algorithm by Caprara et al., which is capable of approximating the "Contact Map Overlap" (CMO) measure with great accuracy [22]. Finally, the algorithms for absolute optimum, with respect to selected alignment metrics, have also been published [1,23], but they are computationally too expensive for everyday use.
Although inefficient for large scale analysis, the algorithms for exact solution are indispensable tools for assessing the accuracy of more commonly used heuristic methods. e present study utilizes a set of precomputed superpositions to evaluate the improvements in accuracy of three wellknown protein structure alignment algorithms, obtained by the deep search of the superposition space. In the Sisyphus benchmark, these superpositions increase the accuracy of alignments generated by STRUCTAL, TM-align, and LOCK2 by 11%, 7%, and 6%, respectively. An improvement of similar magnitude is seen aer allowing for alignment errors (residue shis). In the FSSP benchmark, the new superpositions increase NumPairs and PSI scores for STRUCTAL, TMalign, and LOCK2 by ∼7%, ∼5%, and ∼13%, respectively. A particularly noticeable improvement is seen in the Similarity Index scores of alignments generated by LOCK2 (from 8.35 to 5.69). We emphasize that our analysis provides an estimate of the lower bound on the difference between optimal and heuristic solution, since alignments generated by MaxPairs are not always optimal (in the strict sense).
Finally, it is reasonable to expect that a more thorough exploration of the superposition space, coupled with the fragment-based alignment techniques, can be used to further improve the precision of methods based on sequences of local transformations, such as Matt [13], PPM [14], and ProtDeform [12].
Conclusions
A typical distance-based protein structure alignment method explores the space of proteins' spatial superpositions, computing an optimal residue-residue correspondence (alignment) each time a new superposition is generated. Because of the large search space, current methods for protein structure alignment must trade precision for speed and explore only a small but representative set of superpositions.
We utilize an algorithm capable of �nding an alignment of any speci�ed accuracy to demonstrate signi�cant increase in the alignment quality of solutions generated by three popular protein structure alignment methods, obtained through the deep search of the superposition space. e large lower bound on the size of gap between optimal and heuristic solutions, observed in this study, suggests that the protein structure alignment problem will likely remain an attractive research area throughout the next decade. | 4,158 | 2012-12-23T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Structural and Techno-Functional Properties of Bovine
Collagen and Its Application in Hamburgers
SUMMARY The objective of this work is to characterize two types of bovine collagen (fibre and powder), evaluating its application in mixed hamburger formulations, as well as the quality characteristics of the products. The collagen fibre had a fibrillar structure, molecular mass 100 kDa and greater gel strength (146 315 Pa) and protein content (97.81%) than the powdered collagen, which had molecular mass from 50 to 100 kDa, greater hydroxyproline content, and a morphological structure with spherical microparticles more amorphous than the collagen fibre. In this study we found that the addition of 1.5% powdered collagen and 2.5% flocculated soybean flour and/or 0.75% powdered collagen and 3.5% flocculated soybean flour did not deteriorate the technological properties or the sensory attributes of hamburgers. The use of collagen is a promising alternative, since it has functional properties, improves the texture characteristics of a product, and is of low cost.
INTRODUCTION
Meat products, such as frankfurters, salami, mortadella, sausages, crumbed products, meatballs and hamburgers, are attractive products for consumers as they require little preparation. Among the so-called fast food, hamburgers stand out as an excellent choice due to their sensorial characteristics, nutritional value, low price, and ease of preparation.
According to Brazilian legislation (1), hamburgers are meat products manufactured on an industrial scale from minced meat, with or without adipose tissue, with characteristic texture, colour, and flavour, moulded and submitted to appropriate technological processes. However, during hamburger preparation and cooking some problems, such as shrinking, mass loss and reduced yield, may arise. The use of alternative ingredients, such as non-meat proteins (isolated, concentrated, textured, and flocculated soy protein) at 4 % maximum mass fraction has been used by industries in order to minimize the above-mentioned issues. Fibre or powdered collagen could also be used in hamburgers since they increase their nutritional value and help to reduce deformity and mass loss during thermal treatment. Furthermore, some of their advantages include cost reduction, higher protein content, as well as improved functional properties, such as increased capacities of water absorption, gel formation, stabilization and emulsion formation (2), at 1.5 % maximum mass fraction in meat products. Thereby, collagen preparations can be used to improve processed meat attributes since, at low levels, functional collagen proteins stabilize shrinkage and promote increased cooking yield due to their gelling and water-binding properties (3)(4)(5).
Collagen is one of the most useful biomaterials due to its wide range of industrial applications (6). Bovine and chicken skins predominantly contain type I and III collagen fibrils (7). On a molecular basis, fibril-forming collagen features an uninterrupted helical region with alternating polar and non-polar domains leading to a lateral alignment of molecules in a quarter-staggered array (8). Type I collagen is a heterodimer composed of two identical α1-chains and one α2-chain (7), whereas type III collagen is a homotrimer, with three α1-(III)-chains and usually occurs in the same fibril with type I collagen (9). Collagen stability and structure are based on hydrogen bonds between polar residues of 4-hydroxyproline, 5-hydroxylysyl hydration networks and electrostatic interactions (7). The last emerge between ionizable side groups present in 15-20 % of all amino acid residues, either in X or Y position of the Gly-X-Y triplets (10).
Due to collagen low production cost and functional properties, its use as an additive allows for a cheap alternative to improve meat product texture, resulting in an improved organoleptic bite sensation, without a significant increase of the product price. Thus, the aim of this study is to characterize two types of bovine collagen (fibre and powdered collagen), evaluating their application in mixed hamburger formulations, as well as the quality characteristics of meat products in an industrial unit. The results will provide information about the technological properties and chemical characteristics of bovine collagen (fibre and powdered) and the prospects of its industrial applications.
Collagen characterization
Bovine collagen fibre (d particle =1.80-1.90 mm) and collagen powder (d particle =0.45-0.6 mm) were supplied by Novaprom Food Ingredients Ltda (Guaiçara, Brazil). Hydroxyproline and total protein content in the collagen samples were determined and protein fractions were identified by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), morphological structure using scanning electron microscope (SEM) technique, chemical composition with X-ray photoelectron spectroscopy (XPS), and crystallinity through X-ray diffraction (XRD).
Total protein and hydroxyproline
The protein and hydroxyproline contents in collagen samples were quantified according to AOAC method 981. 10 (11) and AOAC method 991.20 (12), respectively.
Gel strength
Initially, fibre and powdered collagen gels were prepared at a 1:6 (m/V) ratio, heated and homogenized until reaching 72 °C. After that, the solution was cooled down to 8 °C for at least 12 h. Gel strength was then established using a TA.XT2 texture analyser (Stable Micro Systems Ltd, Godalming, Surrey, UK) with a 10-kg load cell, and pre-test, test, and post-test speeds of 1, 1 and 10 mm/s, respectively, as well as a 0.5-inch diameter spherical probe.
SDS-PAGE electrophoresis
Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) was performed according to the method proposed by Laemmli (13) and Bustamante-Vargas et al. (14). First, the powdered and fibre collagen samples were prepared at a concentration of 5.0 and 2.5 mg/mL, respectively. For sample preparation, 40 µL of 60 % trichloroacetic acid (m/V) (Labsynth, Diadema, Brazil) were added to 100 µL of raw samples, placed in Eppendorf tubes and stored overnight in a freezer at -15 °C. The samples were subsequently centrifuged (model 5403; Eppendorf, Hamburg, Germany) at 10 000×g and 4 °C for 30 min, and then the supernatant was removed.
Scanning electron microscopy
Fibre and powder collagen morphology has been analysed by scanning electron microscopy (SEM microscope model JSM-6510; JEOL, Austin, TX, USA). The sample surfaces were coated with a gold layer (approx. 20 nm) using a sputter coater (model BAL-TEC SCD 050; BAL-TEC AG, Balzers, Liechtenstein).
X-ray photoelectron spectroscopy
Surface chemical composition analysis was done using X-ray photoelectron spectroscopy (XPS model Escalab 250Xi; Thermo Fisher Scientific, Lafayette, CO, USA) attached to a scanning electron microscope (SEM model JSM-6510; JEOL). In addition, XPS mapping approach was used to study chemical element dispersion on the electrode surface.
Only chicken and pork meat (undeclared quantities; industrial formulation) were used, using cuts with no obvious fat and with minimum visible connective tissue (pork loin and chicken breast). The fat used was removed from the pork loins and the cuts of meat were kept frozen (maximum 0 °C). First, the meat was cut using a mini cutter (Incomaf Indústria Ltda., São Paulo, Brazil) for 30 s, and then mixed with other ingredients. Then, the meat with soy protein and collagen was homogenized, hydrated with water for 15 min, and subsequently mixed using a blender (Risco, São Paulo, Brazil) for 3 min. A second grounding using a meat grinder (Seydelmann, Stuttgart, Germany) with a 5-mm disc was performed in order to standardize particle size.
After formulation preparation and grinding, the hamburger mix was submitted to a moulding stage. The process was done using a manual moulding equipment producing 90 g patties, after which the samples were frozen at -9 °C for a period of 90 days.
Hamburger characterization
Protein, moisture, fat, hydroxyproline, mass loss, instrumental texture (hardness), as well as histological and sensorial characteristics of the hamburger formulations were characterized on the first storage day.
Physical and chemical characteristics
Protein, hydroxyproline, moisture and fat contents were determined according to AOAC methods 981.10 (11), 991.20 (12), 985.26 (15) and 991.36 (16). Mass loss during cooking was assessed after heat treatment using the grill or oven. The grill (model ED 36G; Garland, Mississauga, Canada) was prepared by spraying with cooking oil and preheating for 2 min. Then, the frozen hamburgers were placed on the preheated grill and cooked for 3 min on each side. The oven (model Picasso; Venax®, São Paulo, Brazil) was preheated at 250 °C for 5 min, the frozen hamburgers were baked in the oven for 15 min (7.5 min on each side). The internal temperature of the product was kept at minimum 72 °C.
To determine mass loss, hamburgers were weighed on an analytical balance (model MA035; Marconi, Piracicaba, Brazil) before and after each heat treatment. Hardness was determined by sample compression method by a computer-controlled TA.XT2 texture analyzer (Stable Micro Systems Ltd.), with a Warner-Bratzler blade, equipped with a 10-kg load cell, using a 6.35-cm cylindrical probe. The pre-test, test, and posttest parameters were 2, 1 and 7 mm/s, respectively. Samples of the product ready for consumption of about 10 mm in height and compressed to 25 % of their size were analyzed in accordance with Harper et al. (17).
Histological analysis
For the histological analysis, the hamburger samples from each formulation were fixed at 10 % formalin and subjected to routine histological techniques, including gradual dehydration, diaphanization, infiltration steps, and embedding in paraffin. From each paraffin block, 4 μm thick histological sections were taken and the sections were stained with hematoxylin-eosin (18). The histological sections were analyzed using a microscope (model Lambda LQT-3; ATTO Instruments Co, Hong Kong, PR China) with the images photographed with a Motic Images Plus v. 2.0 software (Motic China Group Co. Ltd., Beijing, PR China) (19). The histological field of each slide was evaluated using 10× and 25× magnification.
Sensory evaluation
Sensory evaluation of the hamburgers was performed on a laboratory scale, with 12 trained panellists, who were employees of the meat processing industry, male and female, aged from 20 to 50. The sensory evaluation of hamburgers was conducted on the first day, serving 90-gramme samples grilled and baked in the oven (according to the procedure described in the section Physical and chemical characteristics).
The hamburger samples were coded with randomized three-digit numbers, and distributed along with the evaluation form and blank samples (cracker and mineral water). The panellists assessed each attribute (flavour, colour, odour, appearance, texture, and general acceptance) on a 9-point hedonic scale (1=dislike extremely and 9=like extremely), according to the procedure described by Queiroz and Treptow (20).
As the research involved humans, tests were performed according to the Research Ethics Committee of the Regional Integrated University of Upper Uruguay and Missions, as well as Brazilian National Health Council ethical and scientific requirements, registered at Plataforma Brasil (21).
Statistical analysis
The results (N=3) were analysed by analysis of variance (ANOVA), followed by the Tukey's test to compare the average differences, using the Statistica v. 5.0 software (22), with a 95 % confidence level. In addition, the Pearson correlation analysis and principal component analysis (PCA) were performed using XLSTAT software (23). Table 1 shows the characteristics of fibre and powdered collagen samples. It can be noted that there was a significant difference (p<0.05) in the hydroxyproline mass fraction between the samples, with powdered collagen having higher value (2.06 g/100 g) than collagen fibre. Collagen fibre had (p<0.05) a slightly higher protein content (97.81 g/100 g) than the powdered one (96.87 g/100 g).
Hydroxyproline and protein contents, and gel strength of collagen preparations
Variations in protein and hydroxyproline contents are due to the raw material, collagen extraction process and origin. According to Gómez-Guillén et al. (24), high temperatures cause protein solubilization and greater fragmentation of the collagen structure.
Gauza-Włodarczyk et al. (25) state that whatever their origin, collagen contains 19 amino acids; two of them, hydroxyproline and hydroxylysine, are practically absent from other proteins. The same authors verified that the hydroxyproline content in bovine Achilles tendon collagen (8.15 g per 100 g of protein) is 30 % higher than the hydroxyproline content of collagen from fish skin and concluded that fish skin collagen is less stable than the bovine Achilles tendon. The gel strength of collagen fibre was significantly different (p<0.05) from the powdered one with a higher value (146 315 Pa), demonstrating that fibre allows greater solvent entrapment than powdered collagen gels. According to Prestes (26), gel resistance (Bloom force) depends on concentration and molar mass, where a higher Bloom value is correlated to collagen molar mass, hence a high Bloom value (about 300 g) results in firmer gels. The obtained data show that powdered and fibre collagen are different products with distinctive characteristics, possibly due to the extraction process used. Fig. 1 shows SDS-PAGE for the tested collagen samples. On the electrophoresis gel, four distinct bands were identified for powdered collagen with molecular masses varying from 50 to 100 kDa, whereas a 100-kDa band for collagen fibre was found.
SDS-PAGE of collagen preparations
According to Oechsle et al. (27), SDS-PAGE gel with bovine telopeptide-poor collagen showed two distinct bands of approx. 123 kDa, indicating α1(I) and α2(I) chain monomers of type I collagen. Furthermore, the slightly larger band of α1(III) chain was observed above, indicating a type III collagen. The mass spectrometry analysis noted the presence of α1(I), α2(I) and α1(III) chains with 133, 129 and 138 kDa, respectively.
Scanning electron microscopy of collagen preparations
The SEM micrographs obtained for collagen fibre and powdered collagen are shown in Fig. 2. These results show that both materials had quite a different microstructure, with the powdered collagen (B1 and B2 in Fig. 2) having lost its fibrous characteristic during the milling of the collagen fibre (A1 and A2 in Fig. 2). Collagen fibre showed an internal axis with several thin ramifications connecting them to smaller external particles, whereas powdered collagen showed a major trend to form agglomerates. The thin filaments observed in the collagen fibre may be important in the eventual interaction between the polymeric matrix and fillers.
The greatest difference between fibre and powdered collagen is that the fibre physical structure retains water chemically, either through protein matrix or hydrogen bonding with water (26). As such, fibre swells in contact with water, blocking both moisture and fat from exiting the system.
X-ray photoelectron spectroscopy of collagen preparations
The main elements found in both powdered and collagen fibre were carbon, nitrogen and oxygen ( Fig. S1 and Fig. S2). Elemental mapping showed that such elements were homogeneously distributed on both powdered and collagen fibre surfaces. The presence of other elements, such as aluminium, magnesium, sodium and fluorine, was also noted in both collagen samples, while iron was found only in the powdered one. These differentiations could have been due to collagen production methods. Fig. 3 shows great XRD pattern similarities of both fibre and powdered collagen samples. Collagen fibre preponderant peaks were obtained with 2θ at approx. 7°, 25° and 30°, and powdered collagen preponderant peaks were obtained with 2θ approx. 25° and 30°, in line with reported reticulated collagen values (28), typical for amorphous material. The results indicated that powdered collagen is more amorphous than the fibre.
Physicochemical and sensorial characteristics of hamburger formulations
Average mass fractions and the respective standard deviations of protein, fat, moisture and hydroxyproline of different hamburger formulations on the first day of storage are shown in Table 2. The mass fraction of protein varied from 16.73 (T3) to 17.8 % (T4). Formulations T1 and T3, with 0.75 % fibre and powdered collagen respectively were not significantly different (p<0.05) from the standard one. Formulations T2 and T4, with 1.5 % collagen fibre and powdered collagen respectively had higher protein content (p<0.05). The results for protein content suggest that the addition of 1.5 % collagen (fibre or powdered) increased protein content. Such results are in accordance with Prestes et al. (29), who also reported total protein content increase in products containing bovine collagen.
Regarding lipid content, a variation in the formulations was noted, with values ranging from 8.22 % (standard) to 10.2 % (T3), given that T3 had statistically higher (p<0.05) lipid content than the other formulations. Variations in lipid content were possibly due to the changes in the raw materials, i.e. the fraction of fat removed from the meat.
Formulations T2 and T4, containing 1.5 % fibre and powdered collagen respectively, were significantly different (p<0.05) from the standard and formulations T1 and T3 (Table 2) and had slightly higher moisture values. Positive correlation was confirmed by the principal component analysis (Fig. 4). Such results are explained by the high water retention capability of the collagen, which reduces water loss during freezing. According to Pietrasik and Janz (30), non-meat proteins exhibit similar behaviour to meat proteins, promoting water retention, higher binding, and occupying the interstitial spaces in the gel matrix.
Hydroxyproline content served as a parameter to establish collagen amount in meat and meat products (3,25). Significant differences (p<0.05) in hydroxyproline mass fraction were observed among the developed formulations ( Table 2). All formulations with added collagen showed higher hydroxyproline mass fractions than the standard sample. However, formulations T1 and T3, with 0.75 % collagen fibre and powdered collagen respectively, were not significantly different (p<0.05) from each other.
It was also noted that formulation T4, containing 1.5 % powdered collagen, had the highest hydroxyproline content (0.77 g/100 g), 4.8 times higher than the standard formulation. The results found are in accordance with the ones reported by Prestes et al. (29), where formulations of chicken ham containing a mixture of collagen had higher values of hydroxyproline. Formulation T2 (1.5 % collagen fibre) showed increased (p<0.05) hardness (85.7 N) ( Table 2 and Fig. 5). When only collagen fibre was added, it resulted in a higher compressive strength and higher shear force in the samples due to the physical structure and larger particle size, which could be related to how high collagen contents in emulsions increase hardness and rigidity, while reducing the mass stability (31). In addition, by retaining water chemically through the protein matrix and swelling when in contact with water, collagen fibre alters the texture and cohesion of the hamburger mix, increasing the final product firmness (32). This behaviour was also observed by Li (33) when adding collagen to the preparation of cooked ham. In that case, collagen caused an increase in hardness from 11.96 to 16.91 N, suggesting that small size proteins affected the texture of the ham.
Mass loss of the hamburger samples prepared in the conventional oven ( Table 2) on the first day of storage was on the whole higher than of the ones prepared on the grill (except T3). It must be pointed out that mass loss rate of T3 formulation was lower, which may be better visualized in the multivariate analysis in Fig. 4. Lower mass loss after freeze-thaw and reheating process was found of samples with powdered collagen ( Table 2), a phenomenon explained by its greater interaction with the ingredients and additives present in the formulations, creating a cohesive mass. The addition of collagen to meat products as a binder is advantageous; at low levels functional collagen proteins promote an increase in cooking yield due to their gelling and water-binding properties (34). According to Pietrasik (35), the higher the percentage of added collagen, the lower the release of water due to a greater number of bonds between the polypeptide chains during cooking (formation of a dense protein matrix). Table 3 shows the results of sensorial evaluation of the hamburger formulations. A significant difference (p<0.05) could be noticed among the formulations. On the whole, formulation T3, made with 0.75 % powdered collagen, was the one that received the highest scores (in all attributes) compared to the other formulations. It also had the highest general acceptability of 81 % ( Table 3).
In general, the panellists positively accepted the replacement of soy protein with collagen in hamburgers. It is assumed that such substitutes enhance the acceptability of the hamburgers, as well as help to improve their physical properties, especially in the case of formulation T3. In contrast, the standard formulation obtained the lowest scores. In terms of flavour, it was noted that all formulations containing collagen received higher scores and differed statistically (p<0.05) from the standard sample. These results agree with Sousa et al. (32), who verified higher texture scores of frankfurter-type sausages containing different collagen mass fractions (25 to 75 %) and attributed this effect to the gelatinization property of collagen. Table S1 and Fig. 4 show, respectively, the Pearson correlation and principal component analysis (PCA) for the physicochemical and sensorial variables of the hamburger samples on the first day of storage. The variables are shown as vectors (Fig. 4); the longer the vector, the greater the sample variability. The samples were represented by triangles, where each vertex represented a repetition (N=3). It was observed that there The values obtained with the Pearson correlation validated the correlation among the parameters observed in the PCA (Fig. 4), with protein presenting a positive correlation (Table S1) with hydroxyproline content (0.666), mass loss during preparation on the grill (0.521) and moisture (0.556). Formulation T4 was the closest to these vectors, also confirmed by the values shown in Table 2, i.e. formulation T4 had the highest protein (17.8 g/100 g) and hydroxyproline (0.77 g/100 g) contents. Formulation T3 (0.75 % powdered collagen and 3.25 % soy protein) received the best sensorial scores ( Table 3 and Fig. 4). Positive correlations (Table S1 and Fig. 4) were also verified between flavour and fat content, and between texture (>0.70) and fat, appearance, colour, odour and flavour. However, with increased hardness (instrumental texture), there was a decrease in oven mass loss.
Due to its properties, such as extender, emulsifier, texture improver, and its nutritional value, collagen has great application potential in the industry of restructured and emulsified meat products, providing better technological performance and economic results. Collagen beneficially participates in meat emulsions in the range from 15 to 18 %, mainly aiding the texture and stability of the mass (36), reducing water loss in defrosting and cooking. Fig. 5 shows the photomicrographs of hamburger formulations. The use of histological methods allowed for the qualitative analysis of muscular, adipose and conjunctive tissues. The standard sample (A and B in Fig. 5) shows muscular tissue (arrows) associated with dense conjunctive tissue (tc). Empty spaces (ev) can be noticed between the muscular tissues and/or spaces with adipose tissue. It achieved a cellular organization classified as "good", but without consistency. It was the sample with the worst texture according to sensorial evaluation ( Table 3 and Fig. 4). Formulation T1 (C and D in Fig. 5) shows tissue disorganization (*), i.e. empty spaces (ev) dispersed among muscular cells (arrows), possibly adipose tissue, as well as dense conjunctive tissue (tc). The bad cellular organization and added collagen fibre caused an increase in the hamburger hardness ( Table 2). Formulation T2 (E and F in Fig. 5) has a muscular cell organization with peripheral nuclei (arrows) and some adipose cells (ta). It shows a non-emulsified conjunctive tissue with spaces between the conjunctive tissues, with cellular organization. The non-emulsion of the connective tissue caused the highest values of hardness due to the higher mass fraction of collagen fibre (1.5 %). Formulation T3 (G and H in Fig. 5) showed intense tissue disorganization (*) since there was not any difference among muscle (arrows), conjunctive and adipose tissues, technologically qualified as emulsified material, providing to the better hamburger texture ( Table 2) and sensory ( Table 3) characteristics. Formulation T4 (I and J in Fig. 5) showed intense tissue cohesion, but due to little difference among adipose, muscle and conjunctive (*) tissues, it also had greater mass loss during heat treatment ( Table 2 and Fig. 4). Mean values with the same lowercase letters in superscript within a column do not differ significantly at the 95 % level (Tukey's test), N=3. Standard=4 % soy protein, T1=0.75 % collagen fibre and 3.25 % soy protein, T2=1.5 % collagen fibre and 2.5 % soy protein, T3=0.75 % collagen powder and 3.25 % soy protein, and T4=1.5 % collagen powder and 2.5 % soy protein CONCLUSIONS Powdered collagen and collagen fibre are products with distinctive characteristics, mainly in terms of their protein composition, hydroxyproline content and gel strength. The fibre and powdered collagen have a molecular mass of 50 to 100 and 100 kDa respectively, and higher protein content (97.81 and 96.87 g/100 g) and gel strength (146 315 and 91 888 Pa) respectively than the standard sample. Powdered collagen is more amorphous than the fibre one. In the hamburger formulations with 1.5 % collagen, there was an increase in protein and moisture content. Sensorial analysis showed that the hamburger formulation containing 0.75 % powdered collagen received the best colour, appearance, texture and general acceptance evaluation. The histological analysis of the same formulation showed intense tissue disorganization, typical for emulsified material, with adipose tissue mixed with conjunctive one. The mass loss by baking in oven diverged among the hamburger formulations, but it was higher than when using the grill. This knowledge is useful for the development of novel strategies in which the mass fractions and collagen preparations are optimized to promote specific and desired technological attributes for healthier meat products. In addition, the use of bovine collagen (fibre and powdered) in hamburger can be an alternative to increase the intake of collagen by the consumer, contributing to the prevention of joint diseases and generate an opportunity for the industry to produce new functional meat products. | 5,808.6 | 2019-09-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Materials Science"
] |
Partially inserted nascent chain unzips the lateral gate of the Sec translocon
Abstract The Sec translocon provides the lipid bilayer entry for ribosome‐bound nascent chains and thus facilitates membrane protein biogenesis. Despite the appreciated role of the native environment in the translocon:ribosome assembly, structural information on the complex in the lipid membrane is scarce. Here, we present a cryo‐electron microscopy‐based structure of bacterial translocon SecYEG in lipid nanodiscs and elucidate an early intermediate state upon insertion of the FtsQ anchor domain. Insertion of the short nascent chain causes initial displacements within the lateral gate of the translocon, where α‐helices 2b, 7, and 8 tilt within the membrane core to “unzip” the gate at the cytoplasmic side. Molecular dynamics simulations demonstrate that the conformational change is reversed in the absence of the ribosome, and suggest that the accessory α‐helices of SecE subunit modulate the lateral gate conformation. Site‐specific cross‐linking validates that the FtsQ nascent chain passes the lateral gate upon insertion. The structure and the biochemical data suggest that the partially inserted nascent chain remains highly flexible until it acquires the transmembrane topology.
Introduction
Membrane proteins constitute a large part of the cellular proteome and determine the vital functionality and identity of biological membranes. These proteins are co-translationally targeted as ribosome: nascent chain complexes (RNCs) to the endoplasmic reticulum in eukaryotes and the cytoplasmic membrane in bacteria and archaea, where they are inserted by the dedicated and universally conserved Sec translocon (Fig 1A and B) [1]. The translocon, an integral membrane protein itself, builds a protein-conducting channel in the lipid bilayer and allows either transmembrane passage of nascent polypeptide chains or their partitioning into the lipid environment as transmembrane a-helices (TMHs). The nascent chain hydrophobicity forms a basis for the triage [2]. The central subunit of the translocon, SecY in bacteria or Sec61a in eukaryotes, consists of 10 TMHs arranged as a pseudo-symmetric "clam-shell" with a proteinconducting pore between the N-and C-terminal parts (Fig 1) [3,4]. A bilayer-facing crevice between SecY TMHs 2b and 7 is assumed to serve as a route, or a "lateral gate", for nascent TMHs to reach the hydrophobic membrane core. SecY is stabilized at the periphery by the essential subunit SecE/Sec61c that contains two a-helices, one in interfacial and one in transmembrane topologies. SecE of some Gram-negative bacteria, including Escherichia coli, contains also an accessory pair of N-terminal TMHs, the role and localization of which have remained largely unclear [5]. A non-essential and non-conserved SecG/Secb subunit near the N-terminal half of SecY is built of either one or two TMHs and plays a stimulatory role in protein translocation [6].
The assembly of the translocon:ribosome complex at the cytoplasmic membrane interface is a key step in membrane protein biogenesis, as it allows the hydrophobic nascent chain to egress into the lipid bilayer via the translocon, while not being exposed to the polar aqueous environment [1,7]. The architecture of the complex has been extensively studied by structural methods, first of all cryoelectron microscopy (cryo-EM) [8][9][10][11]. Binding of a ribosome results in minor rearrangements within the translocon and brings it to a pre-open or "primed" state [11]. The following insertion of a sufficiently hydrophobic helical domain, such as a signal sequence or signal anchor domain, shifts the complete N-terminal domain of SecY/Sec61a by 22°and also tilts TMH 7, so the lateral gate of the translocon acquires an open state ( Fig 1B) [12,13]. The folded signal sequence in a transbilayer topology may occupy the lateral gate where it replaces TMH 2b. Upon the further elongation of the nascent polypeptide chain, the newly inserted a-helix leaves the lateral gate and egresses into the lipid bilayer, and the translocon undergoes a reverse transition from a widely opened [14] to a compact, pre-closed state [15].
Although the dynamics of the lateral gate have been commonly acknowledged [16,17], the mechanism of the nascent chain insertion remains unclear. First, existing structures reflect rather late insertion stages, where the signal sequence has been fully inserted in the transmembrane topology, while early intermediates have been barely addressed [4,18]. Second, a vast majority of available ribosome:translocon structures represent detergent-solubilized complexes; however, the non-physiological environment and extensive downstream purification schemes may significantly affect the conformation and the interaction properties of membrane proteins, including the translocon [19][20][21]. The variations in detergent-based solubilization protocols may explain contradictory results on the translocon dynamics, where either a local displacement of helices within the lateral gate or an extensive movement of the complete N-terminal half was observed upon the nascent chain insertion, and also the conformation of the central "plug" domain has been disputed [12,13,22]. Furthermore, a compact "primed" state has been described for detergent-solubilized translocons in the absence of hydrophobic nascent chains [11], while a recent cryo-electron tomography analysis has revealed a predominantly open conformation of the ribosome-bound Sec61 within native ER membranes and so suggested a crucial effect of the molecular environment on protein dynamics [23].
Up to date, the only structure of the translocon:ribosome complex at the lipid interface was obtained by cryo-EM when using nanodisc-reconstituted SecYEG (SecYEG-ND) bound to a translation-stalled RNC [14]. Although demonstrating an advance compared to detergent-solubilized systems, the structure offers only limited resolution and also illustrates a rather late stage of the TMH insertion, with the translocon lateral gate widely open and the inserted anchor domain de-localized within the membrane. Here, we set out to determine the structure of the SecYEG:RNC complex that would describe an early stage of a transmembrane domain insertion into the lipid bilayer. Using cryo-EM and single-particle A Structure of quiescent SecYEG of Thermus thermophilus in the lipid cubic phase (PDB ID: 5AWW). TMHs 2b, 3, 7, and 8 of the lateral gate, as well as the proximate loop 6/7 involved in ribosome binding are indicated. The non-essential SecG subunit is omitted for clarity. B Model of the SecY lateral gate opening upon inserting a nascent chain (red) in the lipid bilayer. The color-coding of SecYE TMHs is as in panel (A). In the presence of the completely inserted and folded nascent chain, TMHs 2b and 3 of the N-terminal domain of SecY are displaced (arrows) thus opening a broad passage for the nascent TMH toward the lipid moiety. C SDS-PAGE of SecYEG-ND sample after size-exclusion chromatography. Asterisks indicate translocon-enriched fractions used for forming the RNC FtsQ:SecYEG-ND complex. Lipid-loaded "empty" nanodiscs elute at larger volumes and so can be separated. D Schematic drawing of a SecYEG-ND particle. Lateral dimensions of the nanodisc should be appropriate to accommodate a single SecYEG with surroundings lipids, thus mimicking the naturally occurring environment.
analysis, we resolved for the first time all three subunits of SecYEG in nanodiscs and described a novel conformation, where SecY TMHs 2b and 7 were apart at the cytoplasmic side to form a V-shaped lateral gate that is pre-opened for the nascent chain insertion, while accessory SecE TMHs 1 and 2 interacted with the gate at the periplasmic side. The RNC-induced dynamics within the translocon was validated by atomistic molecular dynamics simulations, which also described the interactions of SecYEG with anionic lipids. Cryo-EM data and site-specific chemical cross-linking further suggested that the FtsQ anchor domain is inserted via the lateral gate, where it forms close contacts with SecY TMH 7, but remains highly flexible before leaving the translocon.
Results and Discussion
Functional reconstitution of E. coli SecYEG in nanodiscs has been previously performed by several groups for biochemical, biophysical, and structural studies and allowed probing of the translocon interactions with the motor protein SecA, targeting factors, and ribosomes [14,20,24,25]. The diameter of formed nanodiscs is essentially determined by the length of the major scaffold protein (MSP) that girdles the lipid bilayer [26,27]. Translocon molecules have been initially embedded into nanodiscs as small as 9 nm in diameter [16,20,24]. However, a follow-up functional analysis demonstrated that larger nanodisc dimensions are beneficial for facilitating the translocation activity, likely due to the increased amount of coreconstituted lipids [25,28]. Thus, we used an extended scaffold protein MSP1E3D1 and POPG/POPC lipids to reconstitute SecYEG into nanodiscs with a diameter of approximately 12 nm. A large excess of MSPs and lipids ensured that translocons were reconstituted predominantly as monomers [25], as those have been shown to be the principle functional form both in bacteria and in eukaryotes [9,29,30]. Due to solvent-exposed loops of SecYEG, which contributed to the hydrodynamic radius, SecYEG-ND could be separated from "empty" nanodiscs containing only lipids by means of size-exclusion chromatography ( Fig 1C). Within formed nanodiscs, SecYEG would occupy~30% of the surface area ( Fig 1D) [25,26,28], thus providing sufficient space for the conformational dynamics, and for insertion of nascent TMHs upon interactions with RNCs.
We have previously demonstrated that SecYEG:ribosome assembly is strongly enhanced by hydrophobic nascent chains, such as a TMH of FtsQ, a model protein for studying the SecYEG-mediated insertion pathway [20]. The hydrophobic polypeptide exposed from a ribosome exit tunnel is sufficient to mediate SecYEG:ribosome binding in native and model membranes, even in the absence of targeting factors [20,31], but unlikely to undergo the complete insertion due to its short ribosome-bound linker. Thus, to investigate an early stage of the TMH insertion, we prepared translation-stalled ribosomes, which exposed the first 48 amino acids of FtsQ, including the TMH within the nascent chain (Fig EV1), and incubated those with a 10-fold excess of SecYEG-ND to achieve complex formation. After vitrification, samples were subjected to cryo-EM imaging and single-particle analysis. RNCs could be readily seen in raw micrographs, and a discoidal density of SecYEG-ND bound to RNCs was observed in projection groups of two-dimensional (2D) classification and in 3D reconstructions (Fig 2A-C). After sorting and refinement steps (Fig EV2), the ribosome structure was resolved at 3.3 Å , and independent refinement of the SecYEG-ND:RNC complex elements led to 3.2 and 3.1 Å resolution for the small (30S) and large (50S) ribosomal subunits, respectively (Appendix Fig S1), and was limited to 6 Å for the lipid-embedded SecYEG due to its small size and apparent dynamics relative to the 50S ribosomal subunit (Movie EV1). The local resolution within the SecYEG-ND particle ranged from 3.5 Å at the ribosome contact sites to 6-7 Å within the transmembrane core and above 10 Å for the surrounding MSP1E3D1 and lipid head groups, which could be visualized at lower threshold levels (Fig 2D and E).
In agreement with the initial prediction, the nanodisc dimensions were sufficiently large to accommodate a single copy of SecYEG. As SecYEG was positioned in the center of the nanodisc and contacts with edges of the lipid bilayer or MSP were not observed, it is likely that the translocon conformation was not affected by the confined environment. As electron densities of the centrally positioned translocon and the MSP were well-separated (Fig 2E), it facilitated the assignment of rod-shaped densities to TMHs of SecYEG and building the molecular model based on the structure of the quiescent translocon [4]. Both TMHs and extramembrane domains of SecY, SecE, and SecG subunits could be unambiguously fitted into the cryo-EM density (Figs 2E and 3A). The translocon:ribosome complex was established via the well-known canonical interactions [9,11,14]: Two structured cytoplasmic loops between TMHs 6/7 and 8/9 of SecY extended toward the ribosomal tunnel to interact with rRNA helices H6, H24, and H50, and the uL23 protein. Additionally, the ribosomal protein uL24 approached the C-terminal end of the SecY TMH 10, and the ribosomal protein uL23 formed two contacts within the essential amphipathic helix of SecE. Differently to earlier findings [14], we did not observe the contact between the rRNA helix H59 and the lipid head groups, although the H59 helix was displaced toward the bilayer ( Fig 3B). It seems plausible that those contacts are established at a later stage of membrane protein insertion, when one or more nascent TMHs egress the lipid bilayer and the H59 helix "screens" the charge of connecting loops, and so participates in the topology determination [15,32]. When evaluating other known structures of bacterial and eukaryotic translocons in complex with ribosomes (Appendix Fig S2), we noted a close agreement between our model and the detergent-solubilized E. coli SecYEG bound to a translation-stalled ribosome [18]. Interestingly, although the SecYEG structures in both environments were highly similar, the relative orientation of the ribosome and SecYEG differed substantially: While being bound to the RNC via its C-terminal domain, the detergent-solubilized translocon rotated as a rigid body away from the rRNA helix H59, so the displacement was most pronounced for its N-terminal half ( Fig EV3). It is tempting to speculate that the altered SecYEG:ribosome binding geometry, as well as the enhanced affinity of the complex in detergent [20], arose from the lack of electrostatic interactions between the rRNA and the polar moiety of lipid head groups.
In spite of the loose binding of SecYEG to the RNC and its higher flexibility, the complete architecture of essential SecY and SecE subunits was resolved, and a single-helix density proximate to the SecY N-terminal domain was assigned to TMH 2 of the SecG subunit, while TMH 1 could not be reliably detected (Figs 2E and 3A). No SecG subunit could be resolved in the earlier structure of SecYEG-ND [14], and the crystal structure of the quiescent SecYEG revealed that SecG TMH 1 faces away from the translocon core, so its periplasmic tip is separated by~10 Å from the nearest TMH 4 of SecY, with a lipid molecule filling the void [4]. Thus, weak protein:protein intersubunit interactions in the lipid environment likely favor spatial dynamics of SecG, up to a complete topology inversion [33], and the dynamics might be modulated by the ribosome binding. Remarkably, within the SecYEG-ND complex we could clearly observe accessory TMHs 1 and 2 of SecE, which were either absent or only poorly resolved in previous translocon structures [14,15,18]. Earlier models placed the SecE TMHs either distanced from the translocon by 20 Å , or near SecY TMH 9, i.e., at the back of the translocon [14,15] (Appendix Fig S2). However, our structure revealed a very different organization of the complex, as SecE TMHs formed a helical hairpin in close proximity to SecY C-terminal domain, and the hairpin was tilted within the lipid bilayer by~30° (Fig 3A). Such a tilted orientation of the SecE TMHs could also be recognized in densely packed 2D crystals of SecYEG [34,35], but has not been reported for either free-standing or ribosome-bound translocons. Surprisingly, the periplasmic loop of the SecE helical hairpin reached TMH 8 and a short helix connecting TMHs 7 and 8 of SecY, and so appeared in direct contact with the lateral gate of the translocon, thus suggesting a potential role of SecE in the translocon gating mechanism but also explaining interactions of SecE with nascent TMHs soon after their membrane partitioning [36].
We further examined whether the early interactions with the RNC were sufficient to trigger a conformational change within SecYEG, as it would be required for the nascent chain insertion into the lipid bilayer. SecY TMH 2a, known as a plug domain [37,38], resided in the central position, thus keeping the SecY pore sealed upon RNC binding [12,31], and only minor shifts could be seen for most TMHs in comparison with the quiescent state or detergentsolubilized SecYEG:RNC complex [4,18] (Fig 3C and Appendix Fig S2). Interestingly though, substantial rearrangements were observed within the lateral gate of the translocon, when compared both to the quiescent and to RNC-bound detergent-solubilized states ( Fig 3D): TMH 2b was displaced toward the central pore of the translocon, and SecY TMH 7 underwent a tilting of~5°, so its cytoplasmic and the periplasmic ends approached TMH 8 and TMH 3, respectively [3,4]. This tilting of TMH 7 was coupled to a displacement of TMH 8, as they are connected via a short rigid helix at the periplasmic side ( Fig 3D). The resulting conformation of the ribosome-bound translocon manifested a V-shaped crevice at the cytoplasmic side of the lateral gate that differed from the rather closed conformation of the detergent-solubilized SecYEG [18], but also from "primed" and fully opened post-insertion states of the eukaryotic homolog [10,11,13]. Thus, the observed conformation likely reflected a novel early stage in the gate opening. Such dynamics are in agreement with a previous fluorescence-based study on SecYEG-ND:RNC [16], but, to our knowledge, represent the first direct visualization of the pre-opened translocon in the lipid environment. D Local resolution map of SecYEG-ND sub-particle. The cytoplasmic side of the translocon demonstrates higher resolution due to stabilization by the bound ribosome, while high resolution at the periplasmic side is hindered by the SecYEG-ND dynamics within the complex. The associated ribosome is not shown for clarity. E A planar slice through the SecYEG-ND core at different signal levels (blue/green/red) with indicated positions of SecYEG TMHs (SecY indicated in orange, SecE in purple, and SecG in green). A single helical turn could be fitted in a density in the area where SecG TMH 1 was expected (green asterisk).
To investigate whether the observed translocon conformation was a result of RNC FtsQ binding, we employed microsecond-long molecular dynamics (MD) simulations of SecYEG in explicit solvent and an explicit membrane, which allows to study the behavior of lipid-embedded SecYEG in full atomic detail [39]. From the projection of MD conformations of SecY onto the plane spanned by the first two principal components (PC; both PCs together describẽ 50% of the total variance of motions during the simulations), a configurational free energy landscape was computed (equation 1). In this landscape, the SecY conformation from the SecYEG-ND:RNC complex lies in an area of slightly elevated free energy (DG conf.,i % 2 kcal/mol, Appendix Fig S3A), suggesting that this conformation was stabilized by the bound RNC and/or the nascent chain. The mechanism of structural adaptation of the translocon was then probed in a reverse direction, as the MD simulations started from the RNC-bound SecYEG conformation, but without RNC FtsQ. That way, the adaptation toward a non-disturbed quiescent state could be followed, as has previously been shown for membrane protein complexes [40,41]. The cytoplasmic loop 6/7 of SecY was highly mobile (mean root-mean-square fluctuations (RMSF) > 5 Å ; Fig 4A), likely due to the absent ribosome that otherwise recruits the loop as a docking site. The TMHs were substantially less dynamic (RMSF < 3 Å ), except for the lateral gate and the cytoplasmic part of TMH 2b. Structural differences upon reaching the free energy minimum were the most substantial for loop 6/7 and were followed by the lateral gate ( Fig 4B). We measured internal distances within the lateral gate (TMHs 2, 7, and 8), between TMH 7 and the adjacent TMH 3, as well as the angle g between TMH 7 and TMH 8 (Fig EV4, panels A and C). The cryo-EM structure implied that binding of RNC FtsQ to SecY induced tilting of TMH 7, such that its periplasmic end approached TMH 3, while TMH 2b shifted toward the pore. This effect was completely reversed in the absence of the RNC, as both the distance between TMHs 3 and 7 and the angle g increased (Fig EV4, panels B and D). Compared to the initial conformation, the distances between TMHs 2b and 7, and between TMHs 2 and 8, decreased over the course of the simulations, while the distance between TMHs 7 and 8 increased, which led to a closing of the observed V-shaped crevice (Fig EV4, panel B). Interestingly, the PC analysis also suggested that the movements of TMHs 7 and 8 were connected to the dynamics of the cytoplasmic loop 6/7 (Fig 4C, Appendix Fig S3B), such that the ribosome binding likely also influences the structural dynamics within the lateral gate, in agreement with an earlier structure of the ribosome-bound Sec61 translocon [11] and the recent biochemical data [42]. In the absence of a ribosome, binding of a short signal peptide causes an outward displacement of TMH 2b but not TMH 7 [4], so the enhanced structural dynamics at the cytoplasmic side of the lateral gate likely allows a range of pre-opened translocon conformations. Differently to SecY, the SecE conformation from the SecYEG-ND:RNC complex corresponded to a low free energy region (DG conf.,i % 0.16 kcal/mol; Appendix Fig S8A), indicating that it was similar to predominant SecE conformations in MD simulations. Accordingly, structural differences upon reaching the free energy minimum were small, as all residues in SecE, except the termini, show a RMSF < 3 Å (Fig 4D). Notably though, a small upward motion of TMHs 1 and 2 (Fig 4E and F, and Appendix Fig S4B) caused a loss of initial contacts between SecE TMH 1 and the SecY periplasmic helix (Appendix Fig S5, panel A), while new ionic interactions were formed on the periplasmic side between R44 on SecE TMH 2 and D393 on SecY TMH 9 (Appendix Fig S5, panel B). This change in the interaction pattern supports the hypothesis that the RNCinduced structural re-arrangement in SecE can be transferred toward the periplasmic part of TMHs 7, 8, and 9 in SecY and further modulates the lateral gate dynamics.
The MD simulations also revealed that the lateral gate area was enriched with zwitterionic lipids (POPC) (Fig 4G). Assuming that the phosphatidylcholine lipids used in the simulations adequately resemble the distribution of naturally occurring phosphatidylethanolamine lipids, this uneven distribution suggests that anionic lipids (POPG) are not an essential factor in the lateral gate dynamics, while the overall neutral charge in the lipid head group region may be beneficial for the insertion of hydrophobic nascent chains. The simulations furthermore indicated that anionic POPG lipids were also unevenly distributed within the nanodisc and preferentially clustered proximate to TMHs 3 and 4 of SecY (Fig 4G). Remarkably, the same regions of SecYEG have been recently described to recruit negatively charged cardiolipin lipids via interactions with lysine residues at the cytoplasmic interface of SecY, such as those in positions 115 (TMH 3) and 181 (TMH 4) [43]. Our data suggest that the SecYEG:lipid interaction is purely charge-determined, and the functionality of the translocon can be ensured either by cardiolipin or by phosphatidylglycerol lipids, while cardiolipin is not essential for the translocon functioning in vivo and in vitro [29].
As RNC FtsQ contained a hydrophobic anchor domain, we focused on locating that domain within the SecYEG-ND:RNC complex. The nascent chain could be traced along the whole ribosomal tunnel, and it was followed by a free-standing density aligned with the tunnel exit and the central cavity of SecY (Figs 5A and EV5, panel A), suggesting that the nascent chain was loaded into the translocon. The pronounced density of TMH 2b and 7 displayed the V-shaped conformation of the partially opened lateral gate, with an additional connecting density possibly indicating the presence of a flexible or partially folded FtsQ TMH in proximity of the gate in the bilayer (Fig 5B). Finally, a short rod-like density within the nanodisc interior pointed toward the lateral gate (Fig EV5, panel A), thus suggesting that the short FtsQ TMH emerged into the bilayer via the lateral gate and acquired a stable helical conformation. As the resolution of the map alone was insufficient for unambiguous attribution of the flexible FtsQ TMH, we performed site-specific chemical cross-linking of the nascent chain and the translocon lateral gate. For this purpose, RNC variants which contained single cysteines at positions 40-43 within the FtsQ anchor domain, thus covering one helical turn, were examined. Complementary, cysteines were introduced within TMH 2b (residues 83 and 87) and TMH 7 (residues 282 and 283) of the SecY lateral gate (Fig 5B), and the cross-linking was catalyzed by copper phenanthroline. In the presence of SecY C282 EG-ND or SecY C283 EG-ND, a cross-linking product of~80 kDa was detected for RNC FtsQ C40 in Western blots when using antibodies against the hemagglutinin-tagged nascent chain ( Fig 5C). As the molecular weight matched closely that of the putative tRNA-FtsQ:SecY adduct, we further investigated the involvement of SecY in the cross-linking products. We have found that the solvent-exposed cysteine within the periplasmic loop 3/4 of SecY (residue 148; Fig EV5, panel B) could be efficiently conjugated to CF488A-maleimide [29], but the fluorophore could not access cysteines within the lateral gate (Fig EV5, panel C). Thus, double-cysteine translocon SecY C148/C283 EG could be fluorescently labeled and used for cross-linking experiments, and presence of SecY in cross-linking adducts could be determined by in-gel fluorescence. If no ribosomes were added, only weak cross-linking products of SecY were observed at~85 kDa that likely represented occasional translocon dimers co-reconstituted into a single nanodisc (Fig 5D). If either non-translating ribosomes or cysteine-free RNC FtsQ were added, three cross-linking bands at molecular weights between 40 and 60 kDa were observed. Those bands diminished if the sample was treated with N-ethylmaleimide prior adding the SDS-containing sample buffer (Figs 5E and EV5, panel E), so they were assigned to cross-linking of SDS-denatured SecY with ribosomal proteins. However, in the presence of RNC FtsQ C40 , a specific cross-linking product of 80 kDa was formed that agreed with the observation from Western blotting experiments (Fig 5D). Thus, we concluded that the FtsQ nascent chain indeed resided within the lateral gate and could reach the core of the translocon, but did not partition the bilayer via the cytoplasmic crevice. Interestingly, we also observed cross-linking products between SecY C283 EG-ND and the nascent chains that contained cysteines in proximate positions 41 and 42, but not the upstream position 35 (Fig 5E), thus suggesting that the N-terminal part of FtsQ TMH has been released into the lipid bilayer. The SecYEG:FtsQ cross-linking was equally efficient in the presence and absence of phosphatidylethanolamine (POPE), a major component of the bacterial membrane (Fig 5D). PE lipids are known to stimulate the SecA-mediated post-translational translocation through SecYEG [44], but seemingly have little effect on the SecYEG-ND:RNC assembly, and the translocon:ribosome complex was also visualized by cryo-EM, although at substantially lower resolution (Appendix Fig S6). Membrane protein biogenesis occurs in a highly complex and anisotropic environment of a lipid bilayer, and lipid:protein interactions are known to mediate the structure and functionality of inserted proteins [45][46][47]. Here, we have revealed the most complete structure of the lipid-embedded SecYEG translocon in complex with RNC at the early stage of the nascent chain insertion. The accessory TMHs of SecE were found to interact with the lateral gate, so they potentially mediate the gate dynamics, but may also be involved in nascent chain release or interactions with the YidC insertase [48,49] and the membrane-anchored chaperones PpiD and YfgM [50,51]. Supported by the MD simulations, the structure evidenced that the opening of the translocon lateral gate was induced at the cytoplasmic interface upon the RNC binding. TMH 2b underwent a displacement of up to~5 Å toward the central pore, and TMH 7 tilted toward TMH 8 at the cytoplasmic side that resulted in a V-shaped crevice open for the nascent chain loading (Fig 5F). When compared to other visualized translocon:ribosome complexes, the observed translocon conformation could be readily placed between the "primed" and "inserting" states reported for the eukaryotic Sec61 complex [11,13]. Differently to the complex in its "inserting" state, the translated and exposed part of the FtsQ nascent chain was not sufficiently long to form a TMH in an N in -C out topology. The cross-linking results and the weak densities observed in the cryo-EM map imply that at this early insertion stage the short nascent chain remains flexible within the lateral gate of the translocon. One can envision that the elongation of the nascent chain would cause a further displacement of SecY TMHs 2b and 7 and results in the open "inserting" state of the lateral gate, so complete folding and insertion of the TMH can be achieved in a downstream event [13]. As a bimodal profile has been observed when studying the translocon-mediated insertion of TMHs in N out -C in topology, and two distinct insertion steps have been detected in vivo [52,53], the presented intermediate state of the translocon that allows for partial membrane partitioning and folding of a nascent TMH may potentially explain the experimental data.
Materials
All chemicals used were purchased from Merck/Sigma-Aldrich and Carl Roth in p.a. grade quality. Detergents were purchased from Anatrace and lipids from Otto Nordwald GmbH/Avanti Polar Lipids, Inc. Fluorophores were purchased from Thermo Fisher Scientific, Lumiprobe GmbH, and Atto-Tec GmbH.
SecYEG purification and labeling and reconstitution
Escherichia coli SecYEG translocons containing an N-terminal decahistidine tag followed by a flexible linker and the 3C protease cleavage site were overexpressed in E. coli strain ER2566 (New England Biolabs) and isolated as previously described [25] with minor modifications. Briefly, after the lysis (Microfluidizer M-110P, Microfluidics Corp.) bacterial membranes were pelleted upon centrifugation for 1 h at 125.000 g (rotor Ti45, Beckman Coulter) and resuspended in 50 mM HEPES pH 7.4, 150 mM KCl. Membranes were solubilized with 1% DDM in presence of 500 mM KCl, 50 mM HEPES pH 7.4, 200 lM TCEP, and protease inhibitors (cOmplete Protease inhibitor cocktail, Roche). Histidine-tagged translocons were isolated on Ni 2+ -NTA-sepharose resin (Macherey-Nagel GmbH) following standard procedures. Optionally, labeling with 200 lM fluorophoremaleimide conjugates was carried out for 2 h prior eluting the protein from the Ni 2+ -NTA resin, and the labeling efficiency was determined spectrophotometrically [29]. After the elution in presence of 300 mM imidazole, the buffer was exchanged for 50 mM HEPES pH 7.4, 150 mM KCl, 0.1% DDM, and 5% glycerol using PD SpinTrap or MiniTrap G-25 columns (GE Healthcare Life Sciences). The homogeneity of the purified translocons was controlled by sizeexclusion chromatography using Superdex 200 10/300 column connected to the AKTA Purifier (GE Healthcare Life Sciences) and the protein concentrations were determined spectrophotometrically. Samples were aliquoted, flash-frozen in liquid nitrogen, and stored at À80°C.
Cryo-EM experiments
For cryo-EM experiments, the SecYEG-ND-enriched fractions from size-exclusion chromatography were concentrated to~1 lM by using Amicon Ultra 0.5-ml tubes (MWCO 30 kDa, Merck/Millipore), and 100 nM RNC FtsQ was added and incubated at least 15 min at room temperature. Prior to sample vitrification, fluorinated octylmaltoside was added to the reaction to the concentration 0.2% to promote random orientation of particles on cryo-EM grids [54,55]. Vitrification was achieved using a Vitrobot mark IV (FEI). For each grid, 3.5 ll of sample was applied onto a glow discharged (20 s, 0.22 Torr) Quantifoil holey carbon grid coated with 2 nm carbon (R 3/3). After 45-s incubation, surplus sample was blotted away (2 s) and the grid was plunged into liquid ethane. From these grids, two separate datasets with a total of 13,098 micrograph movies with each 16 frames and an exposure of 2.5 e À /Å ²/frame were collected on a Titan Krios 300 keV cryo-electron microscope (FEI) using a Falcon II direct electron detector and the EM-Tools software (TVIPS GmbH). Magnification was set to result in a pixel size of 1,084 Å .
Cryo-EM data analysis
Anisotropic motion correction of the micrographs was performed using MotionCor2 [56], initially using the first ten frames only. The contrast transfer function (CTF) parameters were estimated using Gctf v1.06 [57], and particles were picked using Gautomatch v0.53 (www.mrc-lmb.cam.ac.uk/kzhang/Gautomatch/). All subsequent data analysis was carried out in Relion 2.1 [58]. At first, both datasets were processed individually but identically. Two rounds of unsupervised 2D classification of all particles were performed to eliminate false positives of the particle picking step (Fig EV2). In the following step, a 3D refinement was performed to align all particles to a E. coli 70S ribosome reference without the translocon. All following 3D classifications were performed with fixed alignment parameters. An initial round of 3D classification with five classes was used to select for 70S particles bearing the SecYEG translocon. The resulting particles of both data sets were joined for further processing with Relion 3.0 [59]. After a further 3D refinement of the joined set, beam tilt and per particle CTF refinement was performed. Using the resulting improved CTF parameters, all particles were re-extracted with 2× binning. Multi-body refinement was used to refine the ribosomal small subunit (SSU) and ribosomal large subunit including the SecYEG-ND (LSU:SecYEG-ND) as two independent rigid bodies. Following this step, the relion_flex_analyse tool was used to subtract the signal of the SSU from the particle images and re-center these on the LSU:SecYEG-ND moiety. This process of multi-body refinement and extraction of sub-particles was then repeated for the LSU and SecYEG-ND to finally obtain a stack of particle images containing only SecYEG-ND signal. These final sub-particles were used for a further round of 3D classification. Refinement of the final subset of SecYEG-ND sub-particles resulted in an average resolution of 6.0 Å . To obtain high-resolution reconstructions of the ribosomal density, the particles of the final class were reextracted from the motion-corrected micrographs and subjected to un-binned refinement. Again using multi-body refinement, the SSU and LSU:SecYEG-ND moieties were refined as independent rigid bodies to obtain optimal reconstructions of the ribosome, yielding resolutions of 3.3 and 3.1 Å for SSU and LSU:SecYEG: ND, respectively.
Model building
As a starting model, we used both a crystal structure of quiescent Thermus thermophilus SecYEG solved in the lipid cubic phase (PDB ID: 5AWW) [4], as well as the cryo-EM structure of Escherichia coli SecYEG together with the 70S ribosome (PDB ID: 5GAE) [18]. Rigidbody docking was performed with UCSF Chimera [60], and the positions of individual helices were adjusted using coot [61]. To obtain reasonable geometry, real space model refinement was performed using the phenix suite [62]. To complement the intermediary resolution of the SecYEG map, the aforementioned models 5AWW and 5GAE were used to provide external reference restraints for refinement. In a final step, side chains were pruned to alanine length. Mutual orientation of SecE TMHs 1 and 2 was derived from a coevolution pattern of residues within TMHs (http://gremlin.bakerlab. org/ecoli.php?uni=P0AG96). Strong correlations (probability score threshold 0.8) were found for residue pairs: A24:I50, L25:A54, L25: V58, V28:L51, and A29:V48, which formed a defined interaction interface.
Molecular dynamics simulations
In order to investigate the structural dynamics of the SecYEG complex in the absence of the ribosome and the nascent peptide, MD simulations of the SecYEG complex in an explicit membrane and explicit solvent were carried out, which used the cryo-EM-based structure as a starting conformation. ACE and NME groups were connected to the N-terminal and C-terminal residues, respectively, to avoid artificially charged termini. The SecYEG complex was prepared for pH 7 using EpiK [63] distributed with Schroedingers Maestro â suite of programs [64], which led to deprotonated residues E176 and E389 in SecY, and a protonated K81 in SecE. Furthermore, H99 in SecY was assigned to the HIE state, while the remaining histidine residues are in the HID state. We used the in-house software packmol_memgen, now also distributed with the Amber 18 suite of programs [65], to embed the SecYEG complex into a POPC: POPG (ratio 2:1) bilayer that mimics the nanodisc composition, to add 0.15 M of KCl, and to solvate the bilayer system with TIP3P water [66]. All relevant system files for subsequent MD simulations were generated using the LeaP program of the Amber 17 suite of programs [67]. The Amber ff14SB force field [68] was used to parametrize the protein, adaptations by Joung and Cheatham [69] were applied to treat K + and Cl À , and the lipid 17 force field distributed with Amber 17 to treat the lipid bilayer. For subsequent MD simulations, we used the simulation protocol as described by us previously [70,71]. In order to set up five independent MD production simulations, the target temperature during thermalization varied from 299.8 to 300.2 K in 0.1 K intervals, so that we obtained five different configurations for subsequent MD production runs. These production simulations were performed at 300.0 K for 1.0 ls. Coordinates were saved in a trajectory file every 200 ps. The particle mesh Ewald method was applied to treat longrange electrostatic interactions. Structural relaxation, thermalization, and production runs of MD simulations were conducted with pmemd.cuda [72] of Amber 17 [67].
We used the cpptraj program [73] to analyze the trajectories with respect to distances, root-mean-square fluctuations (RMSF), a measure for atomic mobility, angles, and lipid distributions. If not reported differently, all results are expressed as mean value AE standard error of mean over n = 5 independent simulations. Additionally, we performed a principal component analysis to extract the essential motions displayed by the systems, after superimposing each snapshot onto the ten transmembrane helices in SecY of the overall average coordinates in order to remove global rotational and translational motions. Mapping SecY and SecE along the trajectories onto a plane spanned by the 1 st and 2 nd principal components yielded a 2D histogram, from which we estimated the relative configurational free energy DG conf. , i of the state of the protein in bin i using equation (1) where R is the universal gas constant, T = 300 K, N i the population of bin i, and N max the population of the most populated bin [74].
The representative conformations of SecY and SecE were extracted from the MD trajectories and analyzed toward their structural features relative to the initial 3D structure.
In vitro cross-linking
To probe potential SecYEG:FtsQ contacts, 100 nM RNC FtsQ variants bearing single cysteines (mutations V40C, S41C, G42C, and W43C) in the FtsQ TMH were incubated with~1 lM SecYEG-ND, which contained single cysteines within the translocon lateral gate (M83C, S87C, I282C, and I283C). After 15-min incubation at the ambient temperature, cupper phenanthroline was added to the concentration of 1 mM, and the cross-linking reaction was conducted for 30 min at the ambient temperature. Cross-linking products containing the nascent chain were detected via Western blotting [75]. Western blots were developed using ECL Western blotting substrate (Pierce) and imaged using LAS-4000 Mini imager (GE Life Sciences). To detect SecYEG-based cross-linking products, the cysteines within the lateral gate were combined with a cysteine at the translocon periplasmic interface (mutation L148C), which was labeled with CF488A-maleimide (Sigma/Merck), as previously described [29]. For the cross-linking experiments, 100 nM SecY CF488A EG-ND variants was mixed with 200 nM non-translating ribosomes or RNCs, and the cross-linking with copper phenanthroline was conducted as described above. Where indicated, samples were treated with N-ethylmaleimide prior loading on SDS-PAGE. Ingel fluorescence was recorded using Typhoon FLA 7000 imaging system (GE Life Sciences).
Data availability
The datasets produced in this study are available in the following databases: • cryo-EM map: Electron Microscopy Data Bank (EMDB, www.eb i.ac.uk/pdbe/emdb), accession code 4743.
Expanded View for this article is available online. | 8,939 | 2019-08-05T00:00:00.000 | [
"Biology"
] |
The effectiveness of introduction to nuclear physics e-module as a teaching material during covid-19 pandemic
This study aimed to explain the effectiveness of Introduction to Nuclear Physics E-Module as a teaching material during Covid-19 Pandemic descriptively. The e-module was made by using Flip PDF Professional application. This study aimed to: (1) describe students’ learning outcomes before and after the implementation of e-module, (2) analyze the effectiveness of e-module descriptively through normalized Gain ; (3) describe the practicality of the e-module according to students as the users. The research instruments consisted of students’ test results and practicality questionnaire in the use of nuclear physics e-module. The results obtained showed that: (1) There was a difference of the pretest-posttest mean score descriptively; (2) The e-module was effective descriptively to boost students’ learning outcomes through normalized Gain analysis ; (3) The practicality of the e-module was in the ‘very practical’ category based on students as the users. Therefore, The nuclear physics e-module can be applied as the effort to provide teaching material in physics lecture during Covid-19 pandemic.
Introduction
The spread of Covid-19 pandemic has drastically intruded every aspect of human life, including education. It has created an unprecedented test on education. In many educational institutions around the world, universities are closed and learning activities are shifted to online [1]. Teachers are faced to the need to adapt on online learning [2]. Educators are required to adjust to a variety of aspects, such as the ways to interact with students, types of learning media, the kinds of assignments distributed, and the teaching materials provided [3][4][5][6][7][8][9].
Electronic teaching material can be used as learning source and media in online learning during the pandemic of Covid-19. Electronic teaching material is a set of material which is arranged sequentially and systematically while showing the need of competences expectedly mastered by students in learning process and poured into an interactive multimedia [10]. Electronic module (e-module) is one of the electronic teaching materials. Electronic module enables users to learn with or without facilitator or lecturer. One of the criteria of e-module is the independent learning which makes the teaching material train students to learn independently [11].
One of the applications to make e-module is Flip PDF Professional. The available format provided in flip PDF professional are (.exe), (.app), (.fbr), and (.html) [12]. The advantage of using flip PDF professional is that it is beginner-friendly, even to those who are unfamiliar with HTML programming language. Flip PDF Professional is a flipbook maker with the feature of editing each individual page [13]. Pdf flip professional can utilize any kind of media, such as audio, video and flash [14].
The introduction of nuclear physics is one of the compulsory subjects programmed by a prospective Physics teacher. The topics in this subject are atomic nucleus, nuclear force, nuclear model, nuclear reaction, and radioactivity. Students need to understand those concepts along with the equations and calculations in them. Generally, this subject discusses about declarative knowledge and procedural skill in solving arithmetical problems. To help students to learn independently whenever and wherever, especially in recent situation (Covid-19 pandemic), we need a useful and interesting teaching material. The teaching material which is not only containing material, but also video, exercises, etc. Therefore, the electronic learning module is considered as suitable teaching material in concurrent situation.
There were several studies regarding e-module based on flip PDF professional. As studied by Nisa et al [12], the implementation of e-module with flip Pdf professional on mathematics subject produced the effectiveness in moderate category. Additionally, the study conducted by Wijaya and Vidianti [11] showed that the use of interactive e-module on education innovation subject was effective to use to boost learning outcomes. Then, as it was studied by Seruni et al [15], E-module flip PDF professional based on project based learning could be applied to enhance students' critical thinking skill.
However, researchers have not found any study discussing the effectiveness of e-module based on flip PDF professional in physics lecture during the pandemic of Covid-19 yet. Hence, this study focused on describing the effectiveness of the use of e-module in introduction of nuclear physics descriptively as the teaching material implemented during the pandemic in improving students' learning outcomes.
Method
This study was a descriptive quantitative research. The aims of this study were describing students' learning outcomes score, the normalized gain (N-gain) of students' learning outcomes from pretest and posttest result, and the practicality level of the e-module in introduction of nuclear physics based on students' point of view as the users.
The research instruments were learning outcomes test sheet and practicality questionnaire of the emodule usage. The research subject was students who programmed the introduction of nuclear physics in Lambung Mangkurat University in academic year of 2019/2020. The mean score of students' learning outcomes (pretest and posttest) the implementation of e-module in introduction of nuclear physics was determined by using this following equation: x = ∑x/n (1) Note: x = mean scores of students' learning outcomes ∑x = sum of students' learning outcome scores n = number of students The N-gain analysis which stated that there was an increase in students' pretest and posttest was carried out in order to determine the effectiveness of cooperative-blended learning model on students' learning outcomes. The calculation of N-gain and the N-gain criteria refer to Hake [16].
Then, the data of students' responses were presented in percentages as obtained from the sum of questionnaire scores (scale 1-5) for each indicator as presented in equation (2).
PS= R/SM. 100% (2) Note: PS = Percentage score R = Score of each indicator SM = Total sum The above percentages were then classified according to the interpreted percentage score guidelines [17] presented in Table 1. 81%-100% Very practical
Result and Discussion
E-Module of introduction of nuclear physics was made by using the Flip PDF Professional application. The display of pages in the e-module is as seen on Figure 1. Before the implementation, the e-module was validated and stated as feasible to use as learning source for students, especially in introduction of nuclear physics subject.
Figure 1. The pages of introduction of nuclear physics e-module
Students' learning outcomes were described based on the mean score of pretest and posttest result before and after the implementation of e-module. Both tests were done to detect whether there was an improvement of students' learning outcomes. The mean score of pretest and posttest can be seen on Figure 2.
Figure 2. Pretest and Posttest Result
Based on Figure 2, it shows that pretest result which represented students' initial knowledge about nuclear physics knowledge gained the mean score of 27,6. After the e-module was applied, the mean score of posttest was 75,8. The N-gain result showed a normalized gain score <g> of 0,67. The effectiveness category based on Ngain criteria was included in moderate criteria. Therefore, the implementation of introduction of nuclear physics e-module in physics lecture during pandemic was effective descriptively to boost students' learning outcomes. Then, the percentage of practicality of the e-module was calculated by using equation (2). The percentage of practicality obtained was 95,27% and included in 'very practical' category. The comparison of practicality score is as seen on Figure 3. The practicality score as seen in Figure 3 was obtained from the questionnaire in scale of 1-4. The questionnaire was filled online via Zoho form application. The questionnaire consisted of 15 statements, which were: (1) I do not feel helped on engaging in the physics lecture by using the e-module; (2) the electronic module makes the material I learn becomes more difficult; (3) the lecture do not end on time after the implementation of this e-module; (4) The introduction of nuclear physics lecture by using emodule makes learning atmosphere more conducive, especially during pandemic; (5) This e-module makes motivated to learn the nuclear physics materials; (6) I found many concepts in nuclear physics material independently while learning with this e-module; (7) This e-module consumes more battery power; (8) This e-module uses language, words, sentences, and paragraph which ease me to understand the material; (9) This e-module is easy to access whenever and wherever rather than printed ones; (10) This e-module does not consume much internet data; (11) The loading of animation, video, pictures, link, or pages in this module does not take much time; (12) The material delivered in this e-module makes it difficult for me to understand the lecture faster; (13) This e-module is easy to access via any kind of network (Wi-Fi, 3G, or 4G); (14) The provided animation video, pictures, and link in this emodule do not make it easy for me to understand the material; (15) After learning with this e-module, I do not get a lot of new knowledge and information. According to the discussion above, the use of e-module in introduction of nuclear physics in physics lecture gave positive impact on learning, especially on students' learning outcomes during online learning in pandemic of Covid-19 situation. It is in line with Purwaningtyas and Hariyadi [18] who stated that the use of module could accommodate learning activity to be more well-planned, independent, complete, and produced clearer output. The function of this module supports the solution of online learning problems during the pandemic. According to study by Allo [19], students suggested that learning material and assignment in lecture should have been explained before. A module facilitates that limitation because there are clear instructions in it to reach its characteristics which is self-instructional. E-module of introduction of nuclear physics is very practical in its use as students believed so. Therefore, e-module can be an option as an effort to provide reading source [20]. E-module which is made by using flip pdf professional application is very flexible and easy to use. Output format can also be applied on mobile application by using mobile device [14]. Hence, it enables students to access the e-module wherever and whenever. With its easy access, it may improve students' ability and interest in learning using e-module [15,[21][22][23][24][25].
Conclusion
To conclude, the implementation of Introduction to Nuclear Physics E-Module as a teaching material during Covid-19 Pandemic obtained different mean scores of pretest and posttest descriptively, showed effectivity in moderate category to improve students' learning outcomes descriptively, and showed a | 2,375.8 | 2021-01-01T00:00:00.000 | [
"Education",
"Physics"
] |
Transcriptome Analysis of Stigmas of Vicia faba L. Flowers
Pollination in angiosperms depends on complex communication between pollen grains and stigmas, classified as wet or dry, depending on the presence or absence of secretions at the stigma surface, respectively. In species with wet stigma, the cuticle is disrupted and the presence of exudates is indicative of their receptivity. Most stigma studies are focused on a few species and families, many of them with self-incompatibility systems. However, there is scarce knowledge about the stigma composition in Fabaceae, the third angiosperm family, whose stigmas have been classified as semidry. Here we report the first transcriptome profiling and DEGs of Vicia faba L. styles and stigmas from autofertile (flowers able to self-fertilize in the absence of manipulation, whose exudate is released spontaneously) and autosterile (flowers that need to be manipulated to break the cuticle and release the exudates to be receptive) inbred lines. From the 76,269 contigs obtained from the de novo assembly, only 45.1% of the sequences were annotated with at least one GO term. A total of 115,920, 75,489, and 70,801 annotations were assigned to Biological Process (BP), Cellular Component (CC), and Molecular Function (MF) categories, respectively, and 5918 differentially expressed genes (DEGs) were identified between the autofertile and the autosterile lines. Among the most enriched metabolic pathways in the DEGs subset were those related with amino acid biosynthesis, terpenoid metabolism, or signal transduction. Some DEGs have been related with previous QTLs identified for autofertility traits, and their putative functions are discussed. The results derived from this work provide an important transcriptomic reference for style-stigma processes to aid our understanding of the molecular mechanisms involved in faba bean fertilization.
Introduction
In angiosperms, pollination depends on complex communication between the male (pollen grains) and the female (stigma/style) reproductive organs.In the compatible pollen-pistil interaction, several events are involved: pollen capture, adhesion, germination, penetration of pollen tube into the stigma, growth of the pollen tube through the style, and final entry of the pollen tube into the ovule.Stigmas can generally be classified into two main groups according to the presence (wet stigmas) or absence (dry stigmas) of a viscous secretion on the stigma surface [1,2].Once pollen grains are transferred to the stigma by abiotic (e.g., water, wind) or biotic vectors (e.g., insects, birds), or directly by contact between the anther and the stigma, pollen-pistil interactions differ between species.In species with wet stigma (e.g., Nicotiana tabacum, Lilium longiflorum), the cuticle is disrupted due to the presence of exudates, which can be composed by lipids, proteins, carbohydrates, phenols, glycoproteins, ions, and enzymes, such as esterases and peroxidases [3,4].Unspecific pollen grains adhere to the stigma surface thanks to exudates, and pollen hydration occurs passively, transferring water from the stigmatic exudates [5].By contrast, in species with dry stigmas (e.g., Arabidopsis thaliana, Zea mays, Oryza sativa), the events following pollination are species-specific and highly regulated [6].Pollen adhesion and germination have been well studied in species with self-incompatibility systems such Plants 2024, 13, 1443 2 of 17 as Brassicaceae, Poaceae, or Papaveraceae, and a high diversity of molecules and processes have been discovered (reviewed in [4,7]).
Previous proteomic and transcriptomic studies in species with wet and dry stigmas indicate that both strategies express unique as well as common genes and proteins during stigma maturation.It is expected that the evolution of genes involved in sexual reproduction occurs at a higher rate than those in charge of background processes; moreover, those genes responsible for the maintenance of species boundaries will be species-specific and, therefore, different between species [5,8].Allen et al. [9] found that certain gene families were consistently found in pistil tissues of different species such as cytochrome P450, ATP-binding cassette (ABC) transporters, lipid transfer proteins (LTPs), zinc finger proteins, extensin-like proteins, receptor protein kinases, disease resistance proteins, or nodulin/mtn3 genes.Similarly, Sang et al. [10] found, at a broad level, that the proportion and abundance of stigma proteins in different functional categories (e.g., 'defense and stress response', 'carbohydrate and energy metabolism', 'protein metabolism and folding') were similar between maize (dry) and tobacco stigmas (wet), indicating that, in general, similar processes occur in both types of stigmas.However, the specific proteins found in 'signal transduction' and 'lipid metabolism' categories showed low protein homologies between wet and dry stigmas [10].
Fabaceae is the third largest plant family after Asteraceae and Orchidaceae [11] that is found to be globally distributed, but for which few stigma composition studies have been carried out [12][13][14].Most studies performed so far have been focused on a few species and families such as maize, rice, Lilium, Arabidopsis, Brassica, Crocus, Petunia, and Nicotiana.Allen et al. [9], conscious of this reality, added a new clade to the pool of studied species: Senecio squalidus.It belongs to the Asteraceae family and possesses a 'semidry' stigma, which shows intermediate characteristics between dry and wet stigmas.This condition is characterized by having secretory cells with exudate retained by cuticle or protein pellicle that can be ruptured by pressure or physical friction [15,16].The stigma of the Fabaceae has been classified as wet or semidry, although some cases of dry stigmas have been reported (e.g., Cassia grandis, Caesalpinia echinata) [17].The semidry stigma is particularly characteristic of the Papilionoideae subfamily [17], which comprises ~14,000 species [18].Some of its members are economically and culturally important legume crops such as pea, lentil, chickpea, and faba bean.Legumes fix atmospheric nitrogen into available ammonia, promoting the nitrogen fertilization of natural soils.Many of them are used for food or forage because of their high content in protein, starch, fiber, and other essential nutrients, but they can also be exploited for industrial processes (dyes, gums) or have medicinal properties [19].
In a global climate change context, it is expected that the reproductive success of plants, including those involved in agriculture, will be affected [20,21].In addition to physiological alterations caused by abnormal climatic conditions, the reproductive success of entomophilous plants can be also affected by changes in plant-pollinator interactions such as variations in the population distribution of pollinators or the uncoupling of flowering phenology and insect life cycles [22][23][24].Hence, it is important to extend the knowledge about the mechanisms that promote self-fertilization, since pollinator dependence could restrict plant reproduction under climate change scenarios.
Faba bean (Vicia faba L.) is a partially allogamous species, with both cross-and selffertilization happening in the same plant [25].Cross-fertilization depends on pollinator activity, and an unstable yield as well as low fruit and seed sets are related to low visitation rates.On the other hand, self-fertilization occurring by spontaneous selfing could ensure pod and seed set in the absence of pollinators [26][27][28].The ability of a flower to self-fertilize in the absence of pollinators or mechanical disturbance is termed autofertility [29].The degree of autofertility differs among faba bean genotypes, and it has been related to some floral features like wider style-ovary angle, shorter style, shorter stigmatic papillae, few and shorter stylar hairs, thinner intervening cuticles with rupture previous to anthesis, or lower quantities of pollen grains [27,30,31].Despite the importance of the rupture of the stigmatic cuticle for successful fertilization in faba bean flowers, little is known about the underlying processes taking place on the stigmas.Recently, a highly saturated genetic map was built, and several quantitative trait loci (QTLs) associated with different autofertility traits were detected.Some of the QTLs were related to the rupture of the stigmatic cuticle in chromosomes I and VI, although the function of the associated marker was not clearly related with autofertility [32].
Advances in faba bean breeding have been slow and costly due to their large genome (13 Gbp) and mixed breeding system.RNA-Seq analysis is a relatively inexpensive method and provides data for single-nucleotide variations, clarifying transcriptional and posttranscriptional gene regulation and transcript rearrangements.Differentially expressed genes (DEGs) can be identified with this method to facilitate our in-depth understanding of key biological and physiological mechanisms.Although some comparative transcriptomic analyses have been performed in faba bean using different tissues to understand stress responses such as drought [33], frost [34], or disease resistance [35], no transcriptional information on the genes involved in the fertilization process is available, and the molecular basis of this essential process is still unknown.Herein, we have performed the first transcriptome analysis of the styles and stigmas of faba bean flowers from lines contrasted for autofertility and combined this information with previous QTL analyses for autofertility traits with three main objectives: (i) amplify the genetic information available for stigmas in a different plant species and family, (ii) identify differentially expressed genes (DEGs) between autofertile and autosterile lines to better understand the functional biology underlying this important trait, and (iii) overlay these DEGs on the previous QTLs to identify the candidate genes associated with autofertility.
Transcriptome Sequencing and De Novo Assembly
A total of 1,189,079,630 raw reads were obtained from the 18 libraries.After quality control and filtering, the total number of reads was 1,077,199,910.A summary of the transcriptome de novo assembly data is shown in Table 1.The assembly of sample AF27.18 in Trinity produced 76,269 contigs with an N50 of 2387 bp and an average contig length of 982.9 bp.
Annotation and Differential Expression Analysis
From the 76,269 contigs of the whole transcriptome, 45.1% of the sequences (34,421 contigs) could be annotated with at least one GO term against the PLAZA 4.5 dicots database in TRAPID, and 34,379 of them were assigned to 7720 gene families.In addition, 29.1% of the contigs showed full-length or quasi full-length sequences, although more than 55% of the transcripts provided no information.
Gene ontology analyses retrieved a total of 8439 different GO terms assigned, which were summarized according to the GO slims categories for the plants in Figure 1.A total of 115,920, 75,489, and 70,801 annotations were assigned to Biological Process (BP), Cellular Component (CC), and Molecular Function (MF) categories, respectively.The top two within BP were 'cellular process' and 'metabolic process'.Some other terms revealed by the analysis were 'nucleobase-containing compound metabolic process', 'response to stress', 'anatomical structure development', 'reproduction', response to different stimuli, or 'transport'.Among the CC category, 'intracellular' followed by 'cytoplasm' and 'membrane' were the most abundant terms.Regarding the MF category, the majority of contigs were annotated within 'binding' and 'catalytic activity'.In the binding category, 'protein binding', 'nucleic acid binding', 'nucleotide binding', 'DNA binding', and 'RNA binding' were the most abundant categories.On the other hand, 'hydrolase activity', 'transferase activity', 'kinase activity', and 'transporter activity' were also important categories (Figure 1).
Annotation and Differential Expression Analysis
From the 76,269 contigs of the whole transcriptome, 45.1% of the sequences (34,421 contigs) could be annotated with at least one GO term against the PLAZA 4.5 dicots database in TRAPID, and 34,379 of them were assigned to 7720 gene families.In addition, 29.1% of the contigs showed full-length or quasi full-length sequences, although more than 55% of the transcripts provided no information.
Gene ontology analyses retrieved a total of 8439 different GO terms assigned, which were summarized according to the GO slims categories for the plants in Figure 1.A total of 115,920, 75,489, and 70,801 annotations were assigned to Biological Process (BP), Cellular Component (CC), and Molecular Function (MF) categories, respectively.The top two within BP were 'cellular process' and 'metabolic process'.Some other terms revealed by the analysis were 'nucleobase-containing compound metabolic process', 'response to stress', 'anatomical structure development', 'reproduction', response to different stimuli, or 'transport'.Among the CC category, 'intracellular' followed by 'cytoplasm' and 'membrane' were the most abundant terms.Regarding the MF category, the majority of contigs were annotated within 'binding' and 'catalytic activity'.In the binding category, 'protein binding', 'nucleic acid binding', 'nucleotide binding', 'DNA binding', and 'RNA binding' were the most abundant categories.On the other hand, 'hydrolase activity', 'transferase activity', 'kinase activity', and 'transporter activity' were also important categories (Figure 1).The differential expression analyses performed in edgeR revealed 5918 differentially expressed genes (DEGs) between the autofertile lines (AF) and the autosterile lines (AS).Of them, 3443 genes were upregulated (higher expression values in AF than in AS) and 2475 genes were downregulated (with significant lower expression values in AF than in AS) (Supplementary File S1).The KEGGs pathway enrichment analysis using KOBAS-i Plants 2024, 13, 1443 5 of 17 indicated that the up-and downregulated genes were significantly enriched in 39 functional groups, with 'Biosynthesis of secondary metabolites', 'Metabolic pathways', and 'Starch and sucrose metabolism' being the most significantly enriched terms in both groups.Upregulated genes were enriched in 'Selenocompound metabolism', 'One carbon pool by folate', 'Monoterpenoid biosynthesis', 'Nitrogen metabolism', or biosynthesis of certain amino acids like arginine, valine, leucine, and isoleucine.On the other hand, downregulated genes were particularly enriched in 'Limonene and pinene degradation', 'Phosphatidylinositol signaling system', 'Inositolphosphate metabolism', 'AGE-RAGE signaling pathway in diabetic complications', 'Phagosome', 'ABC transporters', or 'Glycerolipid metabolism' (Figure 2).Some of these significant KEGG terms were exclusive of up-or downregulated genes.Thus, 'Nitrogen metabolism' and 'Monoterpenoid biosynthesis' were exclusive of upregulated genes, whereas 'Limonene and pinene degradation', 'AGE-RAGE signaling pathway in diabetic complications', and 'Glycerolipid metabolism' were exclusive of the downregulated genes (Figure 2).
The differential expression analyses performed in edgeR revealed 5918 differentially expressed genes (DEGs) between the autofertile lines (AF) and the autosterile lines (AS).Of them, 3443 genes were upregulated (higher expression values in AF than in AS) and 2475 genes were downregulated (with significant lower expression values in AF than in AS) (Supplementary File S1).The KEGGs pathway enrichment analysis using KOBAS-i indicated that the up-and downregulated genes were significantly enriched in 39 functional groups, with 'Biosynthesis of secondary metabolites', 'Metabolic pathways', and 'Starch and sucrose metabolism' being the most significantly enriched terms in both groups.Upregulated genes were enriched in 'Selenocompound metabolism', 'One carbon pool by folate', 'Monoterpenoid biosynthesis', 'Nitrogen metabolism', or biosynthesis of certain amino acids like arginine, valine, leucine, and isoleucine.On the other hand, downregulated genes were particularly enriched in 'Limonene and pinene degradation', 'Phosphatidylinositol signaling system', 'Inositolphosphate metabolism', 'AGE-RAGE signaling pathway in diabetic complications', 'Phagosome', 'ABC transporters', or 'Glycerolipid metabolism' (Figure 2).Some of these significant KEGG terms were exclusive of up-or downregulated genes.Thus, 'Nitrogen metabolism' and 'Monoterpenoid biosynthesis' were exclusive of upregulated genes, whereas 'Limonene and pinene degradation', 'AGE-RAGE signaling pathway in diabetic complications', and 'Glycerolipid metabolism' were exclusive of the downregulated genes (Figure 2).The GO annotation analysis of the DEGs between AF vs. AS performed in TRAPID showed that 2802 out of 5918 contigs (47.3%) were annotated, with at least one GO term and 2793 transcripts being assigned to 1285 gene families.Based on the GO slims categories for plants, the general GO term classification of the DEGs showed a similar distribution to the one exhibited by the whole transcriptome (Supplementary File S2).Thus, 'cellular process' and 'metabolic process' were the most abundant subcategories in BP, 'intracellular', 'cytoplasm', and 'membrane' were the most abundant in the CC, and 'binding' and 'catalytic activity' were the most abundant in the MF categories.The GO enrichment analysis performed in TRAPID showed that more than 60, 10, and 50 GO terms were significantly enriched in the BP, CC, and MF categories, respectively (Supplementary File S3).Among the BP component, the most enriched general terms for the upregulated genes in AF vs. AS were 'glycoside metabolic process', 'aminoglycan metabolic process', 'chitin metabolic process', 'glucosamine-containing compound catabolic process', 'cell wall macromolecule catabolic process', 'nucleotide catabolic process', 'salicylic acid catabolic process', or 'lignin catabolic process' with log2 enrichment values > 2. In the downregulated genes, GO terms like 'negative Rho protein signal transduction', 'negative regulation of Ras protein signal transduction', 'pollen tube adhesion', 'protein homotetramerization', or 'phosphatidilinositol-mediated signaling' were the ones showing log2 enrichment values > 2. In the CC category, only one GO term ('plant-type cell wall') was enriched in the upregulated genes (log2 enrichment value of 0.5).However, among the downregulated genes, terms like 'exocytic vesicle', 'secretory vesicle', 'apical plasma membrane', or 'pollen tube' were revealed (log2 enrichment values > 1.6).Finally, within the MF category, GO terms like '(+)-neomenthol dehydrogenase activity', '(−)-menthol dehydrogenase activity', 'xanthoxin dehydrogenase activity', 'chitinase activity', 'bis (5 ′nucleosyl)-tetraphosphatase (assymetrical) activity', 'serine-type endopeptidase inhibitor activity', 'bis (5 ′ -adenosyl)-penthaphosphatase activity', 'phenylalanine ammonia lyase activity', or 'diphosphoric monoester hydrolase activity' were the sub-functional categories found for the upregulated genes (log2 enrichment values > 2).On the other hand, the downregulated genes showed enriched GO terms related with 'pectate lyase activity', 'Rho GTPase binding', 'phosphatidylinositol kinase activity', 'Rab GTPase binding', 'Bfructofuranosidase activity', 'sucrose alpha-glucosidase activity', 'solute: proton antiporter activity', or '1-phosphatidylinositol binding' (log2 enrichment values > 2).
Search of DEGs within QTL Intervals for Autofertility
In the recent high-density genetic map built for the RIL population Vf6 × Vf27 [32], used as well in this study, the authors reported 26 QTLs for traits related with autofertility.The number of markers within the QTL intervals ranged from 1 to 64, and most of them (>80%) matched with a known genome sequence.We checked the differentially expressed genes (DEGs) between AF and AS lines and overlaid these DEGs onto the QTL confidence intervals to find candidate genes associated with these traits.Up to 14 DEGs (transcriptome sequences) matched with at least one marker in the interval of eight different QTLs (Supplementary File S4).The corresponding genome sequences were selected and blasted (BLASTx) to determine the putative function of these genes (Table 2).One of these DEGs specifically matched with the significant marker associated with the QTL corresponding to the PSC2008/09 trait, related to the pod set in chromosome IV.This marker was downregulated in autofertile lines and was identified as AT3G07960, a phosphatidylinositol 4-phosphate 5-kinase (PIP5K6) protein (Table 2).
Quantitative Real-Time PCR Analysis
To corroborate the relative expression levels obtained by RNA-Seq, the transcript levels of seven selected DEGs that were highly up-or downregulated were analyzed by quantitative real-time PCR (qRT-PCR) in the parental lines.Four of them were included in the QTL intervals of some autofertility traits, including those related to the rupture the stigmatic cuticle.Vf1g128360 was upregulated in the autofertile line (Vf27), whereas Vf4g039440, Vf4g042680, and Vf6g026920 were downregulated in the autofertile line (Supplementary File S5).The expression profile of the DEGs analyzed by qRT-PCR was consistent with the original RNA-Seq data, indicating the reliability of the data obtained in this study.
Sequencing, Assembly, and Annotation
In this study, more than 1000 million clean reads have been obtained from the transcriptome sequencing of the styles and stigmas of faba bean flowers using the Illumina Novaseq platform.A minimum of 42 million clean reads per sample and a maximum of 67 million were obtained (with the exception of the sample Vf27.18 sequenced at a higher depth that produced more than 121 million clean reads).The de novo assembly resulted in 76,269 contigs, with an N50 of 2387 bp and an average contig length of 982.9 bp.Compared to other recent faba bean transcriptome studies [36][37][38][39][40], the number of unigenes acquired in our study was intermediate, but the N50 value was higher.All these previous studies used the Illumina platform to sequence the libraries, and most of them performed the assembly with the Trinity v. 2.8.4 software.
Concerning the annotation, only 34,421 out of 76,269 sequences (45.1%) could be annotated with at least one GO term using the TRAPID v. 2.0 software.This percentage was relatively low despite the several software assayed to analyze the data.The percentage of annotation for the abovementioned faba bean transcriptomes varied depending on the databases used, ranging from 40.8% [39] to 71.5% [38] for unigenes annotated against the NCBI non-redundant (Nr) protein sequence database, which is usually the one to obtain the highest annotation percentages.Poor annotation rates were also reported in the stigma transcriptomes from different species.Thus, Wang et al. [41] annotated 43.7% of the sequences in jasmine, He et al. [42] annotated 53.8% of the sequences in Camellia oleifera, and Quiapim et al. [43] reported no hits or known function matches in 52.1% of the Nicotiana novel sequences after BLASTx searches.Additionally, in a recent proteome and transcriptome analysis using stigmas and pollen from Brassica, Robinson et al. [7] pointed out that many of the proteins revealed in their study still have no known biological roles.This relatively low percentage of annotation or identification may be a result of the scarce genetic information available for flower stigmas and the underlying processes beyond the genetics of incompatibility systems.In addition, many of these sequences may correspond to rapidly evolving species-specific genes involved in sexual reproduction which display high diversity in order to maintain species boundaries [5,8].
Previous works on the pistil composition highlight broad similarities between species with wet and dry stigmas for some functional categories such as 'defense and stress response', 'carbohydrate and energy metabolism', 'protein metabolism and folding', 'cell wall remodeling', 'signal transduction', 'photosynthesis', or 'lipid metabolism' [10].Our analysis also showed numerous GO annotations related to these broad categories like 'response to stress' or other types of stimuli, 'carbohydrate metabolic process', 'protein metabolic process', 'signal transduction', or 'lipid metabolic process' (Figure 1).Similarly, the KEGGs enrichment analysis also showed pathways related to these categories, like 'plant-pathogen interaction', 'pyruvate metabolism', 'biosynthesis of amino acids', 'MAPK signaling pathway', 'Phosphatidylinositol signaling system', 'fatty acid degradation', etc. (Figure 2).
One of the goals of this study was to identify genes differentially expressed between autofertile and autosterile lines.The KEGGs enrichment analysis revealed several statistically enriched pathways in this set of genes.The upregulated DEGs in the autofertile Plants 2024, 13, 1443 9 of 17 lines were mostly enriched in pathways related to the synthesis of amino acids, such as 'Selenocompound metabolism', 'Valine, Leucine and Isoleucine biosynthesis', or 'Arginine biosynthesis'.Another highly enriched pathway was 'One carbon pool by folate'.Folates act as donors and acceptors in one-carbon transfer reactions and are involved in the synthesis important biomolecules such as amino acids, nucleic acids, and vitamin B5 (reviewed in [44]).But, it has also been related to stress responses, and, among them, the response to oxidative stress.
Metabolic pathways related to the synthesis or degradation of terpenoids were also highly enriched in the DEGs.'Monoterpenoid biosynthesis' was enriched in the upregulated genes, whereas 'Limonene and pinene degradation' (two monoterpenes) was notably enriched in downregulated genes.In addition, 'Diterpenoid biosynthesis' was significantly enriched in both up-and downregulated genes.Many monoterpenoids are volatile compounds and can be found in the essential oils of many plants.The biological functions of many of them are related to the attraction or repellent of insects such as pollinators or herbivores [45].For example, three volatile monoterpenes (linalool, limonene, and β-pinene) can be identified by wasps from receptive female flowers of figs, which is the only stage receptive to pollinators [46].Arabidopsis thaliana mutant plants that lacked the emission of a volatile sesquiterpene showed greater bacterial growth on their stigmas than the flowers of wild-type plants did [47].On the other hand, it has been seen that the beetle Bruchus rufimanus, an important faba bean pest, responds to floral volatiles in physiological and behavioral experiments, though the beetle did not necessarily pollinate the flowers [48].Therefore, terpenes play important roles in defense against biotic interactions, as could be the case in faba bean flowers, with the already receptive stigmas emitting volatiles of monoterpenes for both, protecting against pathogens and attracting pollinators.In addition to terpenoid metabolism, the 'Phenylpropanoid biosynthesis' was also enriched in the styles and stigmas of faba bean flowers.Phenylpropanoids are also part of the secondary metabolism of plants, contributing to all aspects of plant responses to abiotic and biotic stimuli [49].
Regarding signal transduction, several routes stand up in the enrichment analysis, highlighting the importance of the recognition of different stimuli in stigmas.The 'MAPK signaling pathway' was enriched in both up-and downregulated genes, whereas the 'Phosphatidylinositol signaling system' and 'AGE/RAGE signaling pathway' were notably enriched in the downregulated genes.A Mitogen-activated protein kinases (MAPKs) cascade is required for maintaining the stigma receptivity to accept compatible pollen in Arabidopsis.MAPKs converge in the receptivity factor Exo70A1, a member of the exocyst complex.The phosphorylation of Exo70A1 by MAPKs regulates pollen hydration and germination through exocytosis in Brassica and Arabidopsis species [50].As reported by McInnis et al. [51], the accumulation of reactive oxygen species (ROS) in mature stigmas in a constitutive way suggests that ROS might be an upstream candidate signal as they are known to activate these kinases.Regarding the AGE-RAGE signaling pathway, it is better known in animals than plants.AGEs is the acronym of advanced glycation end products.AGEs are involved in the pathogenesis of diabetes mellitus, Alzheimer's disease, aging, and are also involved in the thermal processing of foods.Multiple membrane and soluble proteins have been annotated as receptors for glycation products in mammals (e.g., RAGEs).Upon interaction with receptors, AGEs trigger an inflammatory response by the activation of mitogen-activated protein kinase (MAPK-), janus kinases (JAC-), and mitogenactivated protein kinases/extracellular signal-regulated kinases MAPK/ERK signaling pathways [52].However, the role of glycation in plants is poorly understood, and two main aspects are proposed: glycation as a marker of aging, senescence, and tag for protein degradation and as a possible mechanism of signaling (reviewed in [53]).
On the other hand, the 'Phosphatidylinositol signaling pathway' was enriched in downregulated DEGs.Inositol phospholipid compounds (such as IP3 and DAG) on the cell membrane are important secondary messengers involved in signal transduction [54].For example, many components of the phosphatidylinositol signaling system participate in vacuolar diversification during pollen development and vesicle transport in pollen tube growth.A good regulation of phosphatidylinositol-4-phosphate and phosphatidylinositol 4,5-bisphosphate pools is necessary for polarized secretion in plants (reviewed in [55]).Gradients of these compounds have been observed in root hairs and pollen tubes where they are linked to polarized secretion [56,57].Since the stigmatic papillae in faba bean are specialized in secreting the exudates, functions related with vesicle transport are expected to be found in this tissue.
DEGs within QTL Intervals Previously Described for Autofertility
From the significant markers found to be associated with autofertility traits by QTL analysis [32], one DEG (Vf4g039440) matched, in chromosome IV, with the significant QTL marker for PSC_2008/09, related to pod set under insect proof cages.This transcript was identified as a phosphatidylinositol 4-phosphate 5-kinase 6-like (PIP5K6) protein (Table 2) and was downregulated in autofertile lines (corroborated also by RT-qPCR).PIP5Ks are required for phosphatidylinositol 4,5-bisphosphate [PI(4,5)P 2 ] formation, which interacts with a wide variety of proteins modulating their molecular functions (reviewed in [58]).For example, it has been recently demonstrated that PI(4,5)P 2 production by PIP5K4, PIP5K5, and PIP5K6 is essential for pollen germination by the establishment of the germination polarity in a pollen grain [59].The role of phosphoinositides in membrane trafficking has been demonstrated in growing pollen tubes.The suppression of PIP5K6 expression impaired clathrin-dependent endocytosis and slow tube elongation; however, PIP5K6 overexpression showed plasma membrane invagination and the formation of tip branches due to a higher rate of endocytosis [57].Beyond the role of PI(4,5)P 2 in pollen germination or pollen tube elongation, this phosphoinositide and its production has also been studied in response to pathogens.The PI(4,5)P 2 levels were mildly reduced after the flg22 treatment of Arabidopsis plants, which was also related with a reduction in the endocytosis of different plant defense proteins such as the NADPH-oxidase RbohD.Reduced RbohD-endocytosis was correlated with an increase in ROS production [60].
Of particular interest in this study were the genes involved in the receptivity of the stigmas and autofertility.Successful pollination, fertilization, and seed set depend upon the receptivity of stigmas during the few days following anthesis.Some DEGs identified in the transcriptome matched within the genetic intervals of QTLs related with the rupture of the stigmatic cuticle [32], such as RUPTL (rupture length of stigma cuticle) in chr.I and %RUPTAREA (percentage of ruptured area) in chr.VI (see [31] for further details about these measures).In the genetic interval of the RUPTL QTL, we found one DEG (Vf1g128360) upregulated in autofertile lines and identified as a Proline dehydrogenase 2 (ProDH2) protein, which is involved in proline catabolism.Beyond its role in protein biosynthesis, regulated proline accumulation occurs in plant tissues in response to developmental and environmental stimuli.There are two ProDHs genes (ProDH1 and ProDH2) in A. thaliana which encode for homologous and functional isoenzymes; however, they show distinctive expression patterns.ProDH1 shows greater expression in pollen and stigmas and is expressed in most developmental stages and tissues; however, ProDH2 shows low expression levels and is mostly expressed at vascular tissues and senescent leaves [61].Proline degradation occurs in the mitochondria, where it is converted to glutamate, and notably ROS are generated as by-products of mitochondrial respiration [62].Recent studies have reported a regulatory role in the interaction between proline metabolism and ROS production in different tissues and processes [63,64].Therefore, here we find a new possible relation between proline metabolism and the rupture of the stigmatic cuticle, which is related with the receptivity and the presence of ROS in the stigmas of Vicia faba.
The second QTL related to the rupture of the stigmatic cuticle (%RUPTAREA), located in chr.VI, also included a DEG within its genetic interval.This DEG (Vf6g026920) was identified as an ATP-binding cassette (ABC) transporter G family member 28.ABC transporters in plants are more numerous than in other organisms and are classified into eight subfamilies: A-G and I.They are composed of nucleotide-binding domains (highly conserved) and transmembrane domains, with the latter being very variable, allowing for the transport of different substrates.Full-size ABC proteins can work like transporters themselves, but half-size transporters can form complexes to perform their functions.Many full-size ABCG transporters are implicated in defense against biotic stresses (e.g., see [65]).Two half-size ABCG transporters of M. truncatula that are present in peri-arbuscular membranes are implicated in arbuscule development in mycorrhizal symbiosis [66].Another two half-size ABCG transporters are implicated in the stigma exertion in Medicago [67].AtABCG28 is a critical half-size transporter of A. thaliana that establishes the correct level of reactive oxygen species (ROS) at the pollen tube and root tips.AtABCG28 is specifically localized to the membranes of secretory vesicles and expressed in mature pollen and growing pollen tubes.It is involved in sequestering polyamines (source of ROS) into the vesicles that move and fuse to the growing tip [68].Since these QTLs are implicated in the rupture of the stigmatic cuticle, which is also related with the presence of exudate and receptivity of the stigma, high levels of ROS are expected in this tissue.Therefore, the regulation of ROS levels and transport of these substances are important to maintain the correct cellular functions and prevent cell damage.
Plant Materials and Sample Collection
The recombinant inbred line (RIL) faba bean population of 124 individuals derived from the cross between lines Vf6 and Vf27 has been previously used for the localization of QTLs related to autofertility, dehiscence, flowering time, and other yield-related traits [31,69,70].The parental line Vf6 is a highly autosterile and asynaptic line, whereas Vf27 is considered highly autofertile.The materials selected in this study were six genotypes from this RIL population: the two parental lines (Vf6 and Vf27), two highly autosterile RILs (RIL19 and RIL96), and two highly autofertile RILs (RIL14 and RIL44).
Plants were grown in 5 L pots under controlled conditions (22 • C, 14 h day-10 h dark).At the peak of flowering production for each line, flowers previous to anthesis were collected over several days and dissected to extract the flower style.Style samples were immediately frozen in liquid N 2 and stored at −80 • C until the RNA extraction was performed.
RNA Extraction, Sequencing, and De Novo Assembly
The total RNA of approximately 100 styles per sample was extracted using TRIZOL reagent (St.Louis, MO, USA) with the Direct-zolTM RNA MiniPrep Kit (Zymo Research Corp, Tustin, CA, USA) according to the manufacturer's instructions.A total of 18 samples were finally prepared consisting of three replicates for each of the six genotypes.
Samples were sent to STABVIDA (Caparica, Portugal) for quality control, library construction (with a Stranded mRNA Library Preparation Kit), sequencing (Illumina Novaseq, 150 bp paired-end reads), and assembly.Raw sequences were trimmed to generate high-quality reads.For each original read, the following parameters were applied: a quality trimming based on a quality score of 0.01 (error probability), a limit of the length of ambiguity of 2 nt, and a minimum read length of 30 nt.Sample Vf27.18 was sequenced at a higher depth, and the high-quality sequence reads were used for the de novo assembly in Trinity 2.8.4 [71].The assembled transcriptome of sample Vf27.18 was used as the reference sequence for the expression analysis.The raw reads of this study have been deposited into the NCBI Sequence Read Archive (SRA) database under the accession number PRJNA1044928.
Annotation and Differential Expression Analysis
Contigs obtained from the assembly of sample Vf27.18 were annotated with TRAPID [72], a web application for taxonomic and functional analysis, using PLAZA 4.5 dicots [73] as the database and clade Papilionoideae as a similarity search database with a threshold of 10-5.GO graphs were summarized according to the GO slims categories for plants.
The high-quality reads from each sample were mapped against the de novo assembled transcriptome reference.A minimum similarity and length fraction of 0.8 were used as parameters to consider a correctly mapped read.The differential expression analysis was performed with edgeR package [74] in R v. 4.2.1 [75].The identified differentially expressed genes (DEGs) were filtered using a fold change value of >2 or <−2 and an FDR (False Discovery Rate) p-value < 0.05 as thresholds.
In order to identify significant metabolic pathways correlated with putative autofertility genes, we focused on the subset of DEGs existing between all the autosterile and all the autofertile samples.A KEGGs pathway enrichment analysis of the DEGs was performed using the KEGGs pathway database in KOBAS-I and the more recent KEGGs Orthology-Based Annotation System [76] using Medicago truncatula as the reference database.A p-value < 0.05 was considered to indicate significant over-representation of a certain KEGG pathway.We also performed a gene ontology (GO) enrichment analysis in TRAPID, which determines the over-representation of a certain GO term compared to the background frequency (i.e., Papilionoideae dataset).The Benjamini and Hochberg correction was further applied to control multiple testing and decrease the FDR (q-value < 0.05 was established as a threshold to determine if the GO term was enriched in the dataset).
Search of DEGs within QTL Intervals Previously Described for Autofertility
To identify candidate genes controlling autofertility, we combined the results of previous QTL mapping [32] with the transcriptome data.Molecular markers falling within the QTL intervals were selected, and the corresponding nucleotide sequences were extracted from 'Vfaba_v2' 60k SNP Array [77,78].Marker sequences were first aligned against the faba bean genome [79], and the genome sequences were then aligned against the transcriptome sequences of the DEGs.Those DEGs falling within the QTL intervals were identified by BLASTx.
Quantitative Real-Time PCR Analysis
A quantitative real-time PCR (qRT-PCR) analysis was used to corroborate the RNA-Seq results.This experiment was performed with the same RNA extraction of the parental lines (Vf6 and Vf27) used for the RNA-Seq experiment.For cDNA synthesis, 2 µg of total RNA was reverse-transcribed using the iScriptTM cDNA Synthesis Kit (BioRad, Hercules, CA, USA) and diluted to a concentration of 10 ng/µL.The experimental design consisted of a total of 12 samples (2 genotype × 2 technical repetitions × 3 biological repetitions).A pooled sample comprising all the samples considered in the experiment was included for each gene as an inter-run calibrator to detect and correct the inter-run variation.No template controls were included.
Specific primer pairs for seven DEGs were designed (Supplementary File S6).The DEGs selected were highly up-or downregulated in autofertile vs. autosterile lines, and four of them were overlaid on the QTL intervals related with autofertility traits.Three of them were upregulated (ProDH2 [Vf1g128360], a transmembrane protein [Vf4g117320], and a cytochrome P450 protein [Vf5g094440]) and four were downregulated (B-galactosidase protein [Vf2g121200], a PIP5K6 [Vf4g039440], BUPS1 [Vf4g042680], and a ABCG28 [Vf6g026920]) (Table 2).CYP2 and ELF1A, previously reported as the most stable genes for gene expression normalization in the faba bean experiments [80], were used as the reference genes.
The qPCR was carried out using the iTaqTM Universal SYBR ® Green Supermix on an ABI PRISM 7500 Real-Time PCR System (Applied Biosystems, Foster City, CA, USA).A master mix with a total volume of 11 µL for each PCR run was prepared, containing 4 µL of diluted cDNA (10 ngr/µL), 5 µL of iTaqTM Universal SYBR ® Green Supermix (Bio-Rad, Hercules, CA, USA), and a primer pair with a concentration of 0.45 µM each.The thermocycler was programmed to run for ten min at 95 • C, followed by 40 cycles of 15 s at 95 • C and 1 min at 60 • C. Specific amplifications were confirmed by the unique and sharp peak melting curves of the PCR products.
Plants 2024, 13, 1443 13 of 17 PCR efficiency was determined for all samples by means of the amplicon groups by LinRegPCR program v.1139 using raw normalized (Rn) fluorescence as the input data.Fluorescence was analyzed using 7500 Software v2.0.1 using a threshold value of 0.2 to obtain the Cq (quantification cycle) values for each gene-cDNA combination.The relative gene expression (RGE) was calculated using the advanced quantification model described by Hellemans [81] (Equation ( 1)), where RQ = E∆Ct, with E = PCR being the efficiency for each primer used to amplify each target gene (TG) and Ct being the number of cycles needed to reach 0.2 arbitrary units of fluorescence.The two reference genes (RG) used for data normalization were CYP2 and ELF1A.
RGE = RQTG/Geomean[RQRG]
( The RGE values were log-transformed, and ANOVA tests were used to compare the RGE values and obtain significance values with R v. 4.2.1 [75].
Conclusions
In this study, we used RNA sequencing to check for the first time the differential expression of gene transcripts between faba bean lines differing in autofertility.DEGs were overlaid onto QTLs detected in a recent high-density genetic map to find candidate genes associated with autofertility.The initial challenge in the current study was due to the lack of annotated stigma datasets.Although the experimental validation of the candidate genes has not been performed, DEGs up-and downregulated were identified, and some of them were hypothesized to be related with the traits under study.One DEG matched with the significant marker associated with one QTL related to the pod set and others DEGs mapped in the intervals of QTLs related to the rupture of the stigmatic cuticle.RNAseq combined with QTL mapping is a powerful approach for identifying candidate genes, and the results derived from this work provide an important transcriptomic reference for style-stigma processes to aid our understanding of the molecular mechanisms involved in faba bean fertilization.The new available transcriptomic data and the RIL population used will facilitate the fine mapping of the responsible genes and will provide targets for future study and improvements in the autofertility traits of this crop.
Supplementary Materials:
The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/plants13111443/s1,File S1: Heat map of the differentially expressed genes between autofertile (right) and autosterile lines (left).Low expression levels are depicted in blue whereas high expression levels are depicted in red.; File S2: Gene ontology (GO) functional classification of the differentially expressed genes (DEGs) between autofertile and autosterile lines.Histogram of the main transcripts annotated to specific GO categories: Biological Processes, Cellular Components and Molecular Function.The x-axis represents the GO term and the y-axis represents the number of genes annotated.;File S3: GO enrichment analysis performed in TRAPID of the differentially expressed genes of autofertile vs. autosterile lines.Highlighted in green GO terms related with upregulated genes and highlighted in red GO terms related with downregulated genes.;File S4: QTLs for autofertility traits in the RIL population Vf6 x Vf27 (Aguilar-Benitez et al. manuscript in preparation), number of markers within each of the QTLs intervals, markers associated with a known genome sequence and number of markers associated with a transcriptome DEGs.PSF: pod set field measure; PSC: pod set under insect proof cages; PST: pod set after tripping treatment; SSF: seed set field measure; %RUPT: percentage of rupture of stigma surface; RUPTL: length of the stigmatic rupture; OL/FL: ovary length divided by flower length; SL/FL: style length divided by flower length; STIGL: stigma length; SL: style length; PAPL: papilla length; %RUPTAREA: percentage of ruptured stigmatic area; NPAP/STIGL: number of papillae divided stigma length; SSC: seed set under insect proof cages; OL: ovary length; TOTALS: mean size of pollen grains; NORMAL%: percentage of pollen normal; RATIO_PSIZE: ratio of pollen size (NORMALS/TOTALS).;File S5: Expression values for the parental lines Vf27 (autofertile) and Vf6 (autosterile) of seven genes detected by RNA-Seq (bar graph, left y-axis) and qRT-PCR (dot and lines graph, right y-axis).RNA-Seq values have been log2 transformed.Statistically significant differences were found for all the genes.;File S6: Information on qRT-PCR primers used for verification of RNA-Seq data.PCR efficiency and PCR product Tm data represent mean values ± sd.
Figure 1 .
Figure 1.Gene ontology (GO) functional classification of the V. faba transcriptome obtained from stigma and style samples.Histogram of the main transcripts annotated to specific GO categories: Biological Processes, Cellular Components, and Molecular Function.The x-axis represents the GO term, and the y-axis represents the number of genes annotated.
Figure 1 .
Figure 1.Gene ontology (GO) functional classification of the V. faba transcriptome obtained from stigma and style samples.Histogram of the main transcripts annotated to specific GO categories: Biological Processes, Cellular Components, and Molecular Function.The x-axis represents the GO term, and the y-axis represents the number of genes annotated.
Figure 2 .
Figure 2. Metabolism pathway assignments of the downregulated (red, left) and upregulated (blue, right) differentially expressed genes (DEGs) in AF vs. AS based on the Kyoto Encyclopedia of Genes and Genomes (KEGGs).The enrichment degree is calculated compared to the number of genes for a certain category present in Medicago truncatula in the KOBAS-i database.
Figure 2 .
Figure 2. Metabolism pathway assignments of the downregulated (red, left) and upregulated (blue, right) differentially expressed genes (DEGs) in AF vs. AS based on the Kyoto Encyclopedia of Genes and Genomes (KEGGs).The enrichment degree is calculated compared to the number of genes for a certain category present in Medicago truncatula in the KOBAS-i database.
Table 1 .
Summary of the 18 libraries in terms of number of raw reads, number of bases, and number of clean reads obtained.Abbreviations: AF: autofertile; AS: autosterile.
Table 2 .
BLASTx searches for the DEGs associated with markers within the QTL intervals for autofertility traits.PSF: pod set field measure; RUPTL: length of stigmatic rupture; PSC: pod set under insect proof cages; PAPL: papilla length; NPAP/STIGL: number of papillae-divided stigma length; %RUPTAREA: percentage of stigmatic ruptured area; SSC: seed set under insect proof cages; OL: ovary length. | 9,741.2 | 2024-05-23T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Evaluation of Integrated Digital Forensics Investigation Framework for the Investigation of Smartphones Using Soft System Methodology
ABSTRACT
INTRODUCTION
The effort on making disclosure of cybercrime cases is done through a process known as digital forensics [1]. In this case, the digital forensics is science and methods of finding, collecting, securing, analyzing, interpreting, and presenting digital evidence related to the case in the interests of reconstruction of events as well as the legitimacy of the judicial process [2]. One of the current digital crime is malware with target of computer device and smartphone devices [3].
The way to proof valid evidence is to conduct an investigation using Digital Forensic Examination Procedures approach. A number of stages approach in handling these digital evidence procedures is known as Framework. The stage of investigation must be in accordance with the law and the science that exist by using four different steps in order to investigate the evidence presented in the court, which consists of the acquisition, identification, evaluation and admission [3]. Digital forensics process can be divided into four distinct, they are collection, preservation, analysis and presentation [4].
The implication stages of the Digital Forensic Investigation Framework has too much needs to be done, thus out of the 15 steps, can be simplified into 5 stages of General Digital Forensics Investigation Framework (DFIF) on all cases of incident without destroying evidence and protect the chain of the custody [5][6]. Integrated Digital Forensics Investigation Framework (IDFIF) is expected to become the standard method of investigation for investigator. Taking into account that the previous DFIF so that the DFIF that have been there before can be accommodated by IDFIF [7][8]. The application of IDFIF against the investigation process of the Smartphone needed to perform some evaluations in advance against the IDFIF stages since Smartphones have unique characteristics, so that it cannot be equated with ordinary computers handling [9][10]. Soft systems methodology is a method of evaluation that not only compares a model with the other models but rather compares a conceptual model with a process in the real world, so unknown deficiencies of the conceptual model can be known and directly perform corrective action against a conceptual model so that there is no difference between the conceptual model and the real activity [10][11][12]. The focus of the SSM is to create an activity system and human relationships within an organization or group in order to achieve common goals [12][13]. Then the SSM can be applied as a solution to the evaluation stage of the IDFIF investigative process for smartphones. Evaluation on the stage of the IDFIF is only done on a reactive and proactive process stages in the process so that the IDFIF v2 model can be more flexible and can be applied to the process of investigation of a smartphone.
LITERATURE REVIEW 2.1. Smartphone
In this day, Smartphone devices have the same function with a computer [13]. Yet, even though the functions are similiar, there are some differences in the process of digital forensics handling between computer and smartphones devices [9], , as shown in Table 1.
Integrated Digital Forensics Investigation Framework (IDFIF)
The IDFIF method has characteristics, which can record history of input, so it can be assumed that the method can detect the order of the previous DFIF to form a new DFIF. IDFIF (Integrated Digital Forensic Investigation Framework) is a framework that is built by performing the analysis and evaluation of the framework that existed previously. The method used to perform the analysis and evaluation is sequential logic. The final result and framework [7][8] are shown in Figure 1. collection, crime scene investigation, proactive preservation, proactive analysis, preliminary report, and securing the scene detection of incident/ crime. 3) Reactive: Identification, collection and acquisition, preservation, examination, analysis and presentation. 4) Post-Process: Conclusion, reconstruction and dissemination.
Soft System Methodology (SSM)
The system is as a human activity system (HAS). HAS is defined as a set of activities in which men involved in it and the relationship between its activities. SSM recommends that each individual has differences on perception of the situation and differences on interests. This is explicit in the decisions of an acceptable analysis of all people [11][12].
SSM is used to perform the analysis and evaluation of information technology so it produces a framework that is expected can be better than before. SSM can also be used to conduct the evaluation on framework of digital evidence handling so that the existing framework could be better than the previous one [10,13]. SSM consists of a 7-stage analysis process which uses the concept of human activity in understanding the situation in the surrounding areas to determine the action to be taken in order to develop the existing situation. The seven stages of SSM are: 1) Situation Considered Problematic: The first stage of SSM is undertaken to determine the process to be explored. A brief understanding about the process in general is attractive and is allowed for later producing a problematic situation of the process. Sources of information is obtained from observations on the course of the process. Overview of the general process is the basis for the making of rich picture to make the groove course of the process to be more visible.
2) Problem Situation Expressed:
The overview presented from the first stage can make a clearer picture called a rich picture. Rich Picture shows the entire details involved in the process and is described in a structured overview of the process.
3) Root Definition of Relevant System:
Defining the entire process that has been described in the problem situation expressed form textual storyline and concise.
4) Conceptual Model of System Described and Root Definition:
Based on the textual definition for each defined element, improvements to the conceptual model is needed to achieve the ideal goal.
5) Comparison of Model and Real World:
Comparing between the conceptual model with the reality in the real world so that adequate level of conceptual model can be revealed to solve a problem. 6) Systematically Desirable and Culturally Feasiable Changes: Defining the changes that must be made to the existing models. In this step, the specified change is possible. 7) Action to Improve the Problem Situation: Taking corrective action by means of intervening changes in the implementation model.
RESEARCH METHODS
Methodology for conducting evaluation of IDFIF has detail stages and is illustrated in Figure 2. 1) Identifying a Research Problem by seeing various phenomena, events and information in various ways to tests against IDFIF so that all the shortcomings of that framework can be known. 2) Identifying a Research Problem by seeing various phenomena, events and information in various ways to test against IDFIF so that all the shortcomings of that framework can be known. 3) Reviewing the literature by searching the basics theories related to the IDFIF problems of research and the process of smartphone evidence handling. 4) Soft System Methodology for IDFIF is the stage that must be performed in conducting the IDFIF evaluation. IDFIF stages are evaluated only at the proactive and reactive process stage. 5) Case Study by conducting an evaluation and is undertaken to test both IDFIF models on smartphone investigation handling. 6) Analysis and Evaluation is the evaluation process of comparison on the IDFIF v2 with a DFIF in the process of smartphone investigation.
RESULT AND ANALYSIS
The data obtained from the results of literatures are being processed in accordance with the standard digital evidence handling for smartphone. From the results of the evaluation, the framework handling smartphone based on IDFIF can be seen.
Stage 1 of SSM: Situation Considered Problematic
In General, based on the results in the process of implementation of the Integrated Digital Forensics Investigation Framework (IDFIF), the situational problems include: 1) Step 2.6 securing the scene should be placed in position 2.1 on proactive collection. 2) There is no conditioning if evidence was found in "on" or "off" mode, especially in the smartphone handling. 3) There is no determination of whether the digital evidence handling process is carried out on the spot or in a computer forensics laboratory.
Stage 2 of SSM: Problem Situation Expressed
The notion of IDFIF's proactive and reactive phase process does not comply with the conditions in the field and only describes the stages for handling computer as shown in Figure 3. Evaluation of improvement against the IDFIF is needed, so that the IDFIF can be applied globally on digital evidence handling process.
Stage 3 of SSM: Root Definition of Relevant System
The step process of the digital evidence handling should be made to overcome the general circumstances that may be encountered by investigator involving digital evidence particularly on electronic media and smartphone devices related in the field [17].
Incident response process is the digital evidence handling at the scene (the crime scene), especially in the smartphone device handling. 1) Securing the Scene is a process to keep the crime scene so that the necessary evidence does not lost, damaged, experience addition or subtraction and there is no different result in difficult or obscure crime scene processing and examination of technical scientific basis [18][19][20], [25]. 2) Documenting the Scene is a documentation process of all locations and digital devices including smartphones devices in the area of the crime scene without touching or polluting the smartphone device and the environment in which it was found [18][19][20][21][22]. 3) Event Triggering is the search process that triggers the events at the scene of the matter so that the investigator could conclude on the field while the type of crime has been done [2], [23][24]. 4) Plug In Portable Power Supply is a process to keep the smartphone device conditions in a on-state to get to the laboratory for further examination process [18]. 5) Communication Shielding is a process of smartphone device isolation from all radio network (e.g. Wi-Fi, Bluetooth) to keep traffic data, such as SMS messaging and others [18][19][20][21][22]. 6) Seize is a process of foreclosure against the digital evidence that is mainly on a smartphone device [18]. 7) Transportation is at removal process of digital evidence primarily on a smartphone device from scene heading to the lab for further examination process [18], [20][21].
Laboratory process is an examination process of digital evidence in the smartphone device especially those conducted in laboratories wich include several process, i.e.: 1) Acquisition is a process to obtain data or information from the smartphone device or related media [18][19][20][21][22][23], [25]. 2) Storage is the process or result of a doubling of storage acquisitions of digital evidence to maintain the security of the data that has been obtained [18][19][20][21].
Stage 4 of SSM: Conceptual Model of System Described And Root Definition
Digital evidence requires step that has flexibility in handling various types of digital evidence because every crime scene is always different and also uses tools and the investigator have to work based on the handling principles [2], [19]. Figure 4 is a conceptual IDFIF v2.
The principle of IDFIF is to get data that can be used and taken from the computer resources, computer systems, computer networks, communication lines, storage media, computer applications and others [21], [23]. Such data can be processed in accordance with the procedures, so that it can serve as legal and legitimate evidence [22][23].
The main principles to be followed by the investigator in the handling of digital evidence especially smartphones are as follows: 1) May not change the original data that has been obtained. 2) Make a complete record of all activities related to the acquisition and handling the original and copied data.
The original data must be preserved. 3) Must not undertake activities that are beyond the ability or knowledge. 4) Must consider all aspects of personal safety and equipment while doing the work. 5) Any time the legal rights of people affected by the action that should be considered. 6) Need to be aware of all the organization's policies and procedures related to activities performed. 7) Communication must be maintained in accordance with the clients, legal practitioners, supervisors and other team members.
IDFIF stages in the process of smartphone digital evidence investigation primarily has four main stages and each stage has a sub-process.
Preparation is a preparation that must be done to perform the process of investigation in handling the digital evidence begins on the sports scene of the matter until making the final report. 1) Notification: The implementation of investigation or crime reported to law enforcement. 2) Authorization: The stage to get the right evidence access and the legal status of the inquiry process. Incident Response is an activity that is carried out at the scene of the matter with a view to secure the existing digital evidence, so it is not contaminated by other things. 1) Securing the Scene: A mechanism to secure the crime scene and protect the integrity of evidence. 2) Documentation of the Scene: Processing the scene of things, looking for the source of the trigger event, looking for the connection of communication or network and documenting the scene by taking a picture of every detail of the scene. 3) Event Triggering: Initial analysis of the process. In the late stages of the event triggering, there is a decision process. 4) Proactive Preservation: This stage has five sub-phases i.e. network trace, is searching traces through the network that is used by the digital evidence: plug in portable power supply, is the process of safeguarding digital evidence in "on" state so the power contained in the digital evidence can be preserved along the way until in the forensic laboratory; communication shielding, is the isolating data communication on disabling stages exhibits digital so as to prevent data changes from the outside; and volatile and non-volatile evidence, is the process of safeguarding digital evidence. 5) Proactive Analysis: Live analysis stage against the findings and build the first hypothesis of a scene.
Detection of incident/crime, at this stage, is a stage to ensure that there has been a violation of the law. Acquisition is the process of data acquisition against the findings to relieve the workload of forensic digital analysis in the laboratory. Preliminary report is the process of making an initial report upon the proactive investigation activities that have been conducted. 6) Seize: The foreclosure process towards the digital evidence that has been found to further be analyzed. 7) Transportation: Represents the removal process of digital evidence from the scene of the matter towards forensic digital laboratory.
The laboratory process is the process of analyzing the data against the previous evidence in the laboratory so that the kind of crimes that have occurred can be found. Presentation is the process of making reports related to the analysis results conducted in the previous stage and to ensure that each process has been conducted in accordance with the rules of the applicable law. 1) Conclusion: Summing up the results of the investigations that have been conducted. 2) Reconstruction: Analysis process and evaluation on the overall investigation results. 3) Dissemination: The recording process of inquiry and notes that can be shared on other investigators who do investigations on similar cases.
Stage 5 of SSM: Comparation of Model and Real World
The next stage is the process of comparing a conceptual model to suit with the situation of the problem at the moment (the real world). This can be seen in Table 2.
Stage 6 of SSM: Change Systematically Desirable and Culturally Feasible
The next process is to determine the results of improvement based on the recommendations that have been specified in the previous stage and it can be seen in Table 3. The addition of the decision process of the investigation handling to be conducted.
The decision process on the investigation handling to be conducted.
Proactive Preservation
The addition of the decision process analysis of the evidence that have been discovered.
The decision process analysis of evidence that have been discovered.
Transportation
The addition of the decision process of the digital evidence handling to the next stage.
The decision process the handling of digital evidence to the next stage.
Preservation
The addition of the decision process of the evidence type handling.
The decision process the handling of digital evidence type.
Stage 7 of SSM: Action to Improve the Problem Situation
Based on the recommendations of improvements on the previous stage, four decision processes for handling the digital evidence is added on IDFIF v2 stage. It is necessary to make IDFIF v2 to become more flexible when applied on digital evidence handling process in the field based on evidence that has been found. IDFIF v2 in Figure 5.
Case Study
The digital evidence handling process is focused on Smartphone by using IDFIF v1 and IDFIF v2. Case scenarios and simulations in the research are tailored to the fraud cases through short messages service (SMS) and are adopted from the cases that have happened some time ago. The case is a fraud lottery with prizes. The mobile device that is used to send SMS to the victim is Lenovo S860. Perpetrators send SMS to victims with a message that the victims have won a sweepstakes with prizes from PT. X for a single unit car worth 400 million rupiah and the victim was told to contact the number specified by the perpetrator. The victim did what was ordered by the perpetrator without regarding for the sender's SMS number. After the victim contacted the perpetrator, the victim was instructed to send money to the perpetrators' acoount of 10% of the value of the prize for administrative expenses and the cost of delivery of the prizes. Without thinking ahead, the victim do money transfers amounting to 10% of the value of the gifts that was told will be received. However, after the delivery of the amount of money into the account that has already been specified by the perpetrator, the victim feels cheated against the SMS received so that the victim reported the incident to the authorities.
Digital Evidence Handling Using IDFIF v1
Digital evidence handling process towards smartphones using IDFIF v1 starts from post-process that have three sub-phases i.e. stage of preparation that is investigating the implementation of notices or reported crimes to law enforcement. The next stage is authorization that is the stage to get the right access to evidence and the legal status of the inquiry process. The next stage is the preparation stage that is the process of preparation that includes the availability of personnel and various tools, and all things needed in the investigation.
Proactive process is a process in the scene to get all the evidence related to the crime which has been committed by the perpetrator. The first process that should be done in the sports scene things is securing the scene. Securing the scene is a mechanism to secure the scene of the matter and protect the integrity of evidence.
Smartphone digital evidence handling process by using this model can only be made up to crime scene investigation process due to the smartphone review process that must be done on the digital forensic laboratory, the data security in the smartphone can be assured.
Next, on the IDFIF v1 model, there is no plug in portable power supply process which is a process of exhaustion of Smartphone security from the limitation of battery resources because the Smartphone that is in the scene not always in full condition. As for the handling of the Smartphone, when it is found in an "on" state, then it must remain "on" and if the smartphone is found in an "off" state, it should remain "off". The process of smartphone handling using IDFIF v1 can be seen in Figure 6.
Digital Evidence Handling Using IDFIF v2
The process of digital evidence handling process against smartphone use IDFIF v2 can be seen in Figure 7. The preparation process and post process (in the previous IDFIF) or Presentation (in IDFIF that have been evaluated) has the same stages. Different stages of both models are simply being on a proactive process/ incident response and reactive process/ laboratory process. Incident response as well as proactive collection is the process trought the scene things to get all the evidence related to the crime which has been committed by the perpetrator. However, there are several stages that must be done in incident response for handling evidence of smartphone are securing the scene, scene documentation, event triggering, proactive preservation, plug in a portable power supply, communication shielding, seize and transportation. 1) Securing the Scene is a step that must be done in the implementation of the supporting scene is keeping the scene from people who do not have any interest in the investigation process so that the integrity of digital evidence can be authentically guaranteed. 2) Documentation of the Scene is the documentation process towards the rounded area and stuff that potentially becomes evidence by photographing the crime scene and evidence in forensic photography (public photos, medium photos and close-up photos) after securing the scene of things.
3) The Event Triggering is the beginning of the analysis process of the events that have occurred on the scene.
After securing the scene, investigator conducts an initial analysis of the process against an event that has happened on the scene and searches things that trigger events in the scene so that the investigator can deduce the type of crime that has done to further process analysis in digital forensic laboratories. As for digital evidence found is one unit of Lenovo's smartphone S860 used actors to perform fraud action. 4) Proactive Preservation is the process of securing Smartphone evidence that have been found at the scene of the matter so that the integrity of the data that reside on Smartphone are staying awake until the analysis process in the forensic laboratory. 5) Plug in Portable Power Supply is a charging process against Smartphone evidence using a portable power supply because of the battery power condition on the Smartphone. It is found not always in full condition, so needs a charging process by using at portable power supply to maintain the condition of the smartphone to be in an "on" state until it goes to the digital forensic laboratories. 6) Communication Shielding is a stage of safeguarding Smartphone evidence by isolating it against data communication using a faraday bag so that the exchanging data or control process performed remotely via available networks will not happened. 7) Seize is the foreclosure process of Smartphone's evidence to be examined in the digital forensic laboratory after the battery power security and isolation. 8) Transportation is a process of transferring the evidence that has been found from the scene towards digital forensic laboratories. When in the process of transportation, smartphone's evidence must be completely guarded, so it will not be changed at all and will not reduce the evidence integrity. The next stage is laboratory process that is the smartphone review process in digital forensics laboratory. The stage of the review process is done in digital forensic laboratory, preservation, acquisition, storage, examination, analysis and documentation. 1) Preservation is the process of securing Smartphone evidence. The Smartphone condition when in the process of acquisition should be in a disconnected state from existing data communications. 2) Acquisition is the first thing that must be done in the laboratory of digital forensic towards Smartphone software that has been found on the scene of things. 3) Evidence Storage is the process of storing Smartphone evidence to a determined place. The form and the content of the digital evidence must be kept in a sterile place to ensure that there is no change. This is very noteworthy because a slight change in the digital evidence could impact the investigation results. Digital evidence is naturally to be temporary (volatile), so if it is not accurate, it will be very easily damaged, lost, altered, or experience an accident. 4) The Examination is the process of processing digital evidence to find the relationship with the crime that has been done by the offender against the victims. 5) Analysis is the process of technical studies in the examination of smartphone evidence and constructing the link among the findings. After getting the expected files or digital data from the review process, the data is then analyzed in detail and comprehensively prove the occurred crime and what to do by the perpetrators of the crime. The analysis results of the digital data known as the digital evidence that should be scientifically justified in the law in the courts. In some cases it is frequently required the collection of physical evidence and logical form of the ekstaksi data. Yet in this case, the required evidence is simply a record of outgoing and incoming calls as well as outgoing and incoming SMS located in the internal storage of a smartphone. The offender notifies the potential victims that were told win a car unit. As for the message sent by the perpetrator can be seen in Figure 8. ) Documentation is the process of making a written report of the investigation activities of the Smartphone evidence which is done from the beginning of the examination until the end of the examination. The report will later serve as consideration by the judge at the Court in the decision making process.
Analysis and Evaluation
Every digital forensic model has different stages in each handling of the digital evidence found, so in the handling of various evidences, it requires different digital forensic models. The digital forensic model should be applied to all digital evidence found in the field. The difference of each model can be seen in Table 4. Every digital forensic model also has advantages and disadvantages in the digital evidence handling process. The advantages and disadvantages of each of these models can be seen in Table 5. The steps are quite specific to the handling of the Smartphone All stages are done in place Based on Table 5, IDFIF v2 has a better flexibility level of the digital evidence handling, especially on investigation process of Smartphone evidence.
CONCLUSION
The evaluation stage of the IDFIF using SSM only performed on the stages of the proactive and reactive proces so that the results of the evaluation produce a more flexible IDFIF v2 and can be applied to the process of investigation of a Smartphone.The results of the testing that has been done in the evidence handling shows that IDFIF v2 Smartphone which has been through an evaluation process, flexible than the existing IDFIF v1 because on IDFIF v2, the securing process on behind the scene placed early in the incident response process as well as the presence of an extra plug in portable power supply and removal process as well as seize the transportation process from the laboratory process to the end of the process of incident response. Next research is IDFIF v2 testing should be done on every case is different as in the case of network forensic, cloud forensic etc. | 6,265 | 2017-10-01T00:00:00.000 | [
"Computer Science"
] |
PARALLEL APPROACH OF VISUAL ACCESS TENDENCY FOR BIG DATA CLUSTER ASSESSMENT
: Visual Access of Tendency (VAT) proficiency, for visually finding the amount of clusters in data. VAT makes a picture lattice work that can be made use of for visual examination of collecting disposition in either social or challenge data. A strategy is offered for ostensibly evaluating the celebration fondness of a course of action of Items O = {o 1, o 2, ...o n } when we dealt with either as inquiry vectors or by numerical pair sensible difference concerns. Things are modified and the reordered framework of join sharp dissent inconsistencies is appeared as a power photo. Packages are shown by plain squares of pixels along one side. In endeavor we are suggesting identical method for aesthetic accessibility disposition for bundling to update the execution by revealing various datasets without a minute hold-up in the single display. The concern of choosing if packages are accessible as a stage preceding accredited gathering is called the examining of packing disposition. So below we are using parallel VAT to manage the issue. Rather than showing the masterminded individuality organizes as 2D dimensional degree photo to individual understanding as is concluded by VAT, we go after the alterations in diversity next corner to edge of the ODM. This examination is basic in recognizing the basic conjecture of VAT as well as VAT-based estimations and also, just much more typically, unique computations that trust, or like, Prim's Algorithm Based on this procedure we establish a Parallel Visual Gain access to of collecting Tendency (VAT) matter to analyze much getting to educational documents as well as demonstrate its central focuses the extent that complex nature as well as predisposition for utilization in a scattered figuring condition. Clusters are revealed
I. INTRODUCTION
VAT winds up uncontrollable for significant educational buildups. The freshened VAT (reVAT) figuring lowers the amount of estimations finished by VAT, and also changes the photo link in a tactical plan of account outlines that are made use of for the aesthetic analysis action. In this way, reVAT beats the significant educational record issue which hinders VAT, but presents an additional issue: explanation of the course of action of reVAT account lays out winds up being astoundingly frustrating when the quantity of clusters is much reaching, or there is significant cover between get-togethers of points in the data. A structure is offered for clearly reviewing the affair tendency of a method of Objects O = {o 1 ,o 2 ,..o n } when they are often tended to either as difference vectors or by mathematical set smart aberration respects. The short articles are provided and also the reordered cross segment of suit sharp demand inconsistencies is turned up as a power picture. Groups are appeared by boring squares of pixels along the inclining. Nevertheless, in Existing structure we can generally process one dataset sometimes. In This paper we are recommending identical implementation or process for 2 datasets sometimes. In the identical VAT we are passing two significant differences question cross sections immediately as well as structure will certainly process them and reserve a couple of minutes. Choosing the proportion of events in a helpful hoarding is a vital issue in get-together examination. The Visual Access of (pack) Propensity (VAT) estimation is an inducing device for looking disposition, which passes on a hallmark photo of system as the depiction of complicated instructional buildups. Together, VAT can be computationally unbelievable for significant informing documents as a result of its O (N2) time varied high quality. This post recommends a rewarding parallel hope to breathe life right into the basic VAT.
We take into consideration a type of starter information examination pertaining to the version confirmation concern of gathering. Bunching or generating exam is the complication of limiting treatment of things O = into c identity-close departments due to open data and also some particularly depicted level of (gathering) resemblance. The kind of packs discovered is unquestionably pertaining to the belongings of the numerical version that inherent the party method. Every pressing check will certainly discover an optional (approximately 1 ≤ c ≤ n) number of get-togethers, paying little respect to whether no "ensured" numbers exist. In this manner, each a basic sense basic thing to ask before using a specific (as well as potentially prejudicing) organizing count is: do packs exist utilizing all strategies? The problem of selecting if packs come as a stage prior to genuine amassing is known as the examining of get-together liking. Diverse formal (quantifiably based) and also pleasing frameworks for propensity evaluation are evaluated in Jain as well as Dubes as well as Everitt. Not any of today systems is definitely exceptional (or deplorable). The motivation driving these symbols is to intertwine a crucial and also normal visual method to handle the here and now gettogether of fondness evaluation mechanical social events. Perceptible slant for numerous facts evaluation snags have been all around researched over the most up to date twenty five years; Tukey as well as Cleveland [4] common hotspots some perceptible techniques. The perceptible approach for researching number propensity appeared below can be utilized in each of cases including mathematical data. It is valuable and also expected that advanced frameworks in gathering have a sharp phrase. As necessary we entitle this pristine instrument VAT (visual gain access to of tendency).
II. RELATED WORK
A framework is given for evidently checking the plan propensity of a course of action of Things O = {o 1 ,o 2 ,..o n } when they have a tendency to either as examinating vectors or by mathematical set wise uniqueness respects. The posts are swapped as well as the transformed setup of combine smart demand significant differences is turned up as a power photo. Numbers are appeared by reduce squares of pixels along the edge to edge. Another system has been given for vat making use of asked for individuality depiction. The propound requesting tally is homologous to Prim's estimate for locating the immaterial going across point tree of a weighted structure. The strategy can flag the distance of all over withdrew bunches by strategies for the proximity of dull squares of element on the conventional inclining of the ODI. This treatment appertains to all estimations and every last mathematical datum makes, finish or poor. A few 2 dimensional viewpoints recommend that ODI's might enable us "to glimpse" congruous possessions of focal request instructive papers.
[2] Reliable VAT with time approach provide a starting assessment of time blueprint gathering with a thought on a novel outline-based level of identicalness, or, in other words consistent time relocation and also consistent ampleness scaling. Due to this measure a Visual Gain access to of pack Tendency (VAT) computation to contemplate enormous time course of action illuminating events and display its positive problems relatively as complex nature and also prejudice for implementation in a passed on enlisting condition. This computation is acknowledged as a cloud operation utilizing Flicker where the run-time of the giant flexible high quality difference network figuring's are minimized by up to 7.0 times in a 16 facility enrolling bundle with broadly higher promote aspects mean for even more noticeable preparation teams. This VAT figuring is affordable for use in Big Information settings where farthest point and also therapy of information is performed in a streamed preparing framework.
Masterminds under examination in Information Visualization are negligible globe systems. In this just how this metric can be battered through a brilliant training course of the system in context of semantic zooming is discussed. When the structure is spoiled right into a degrees of campaign of sub-deals with, a client can without a significant quantity of a stretch find events as well as subgroups of executing authorities and also understand their segments. Expanding Adjacency Matrix Traveler (ZAME), an acknowledgment tool for evaluating stories at a size of a broad number of concentrates as well as edges. Totals are handled into a pyramid placing that thinks about on-request paging to GPU shader tasks to assist plain multi range taking a look at.
[5] Format mastermind problems are a specific class of combinatorial streamlining problems whose objective is to situate a timely arrangement of a data chart in such manner in which a specific target price is updated. This paper attempted to give an entire perspective of today cutting edge concerning strategies concerns.
[6] Emphasis interface plots have as frequently as conceivable been made use of to deal with charts. In the layout attracting structure, different arrangements manage orchestrate systems consenting to rich requirements, for example, restricting the measure of edge-crossing focuses, engaging the degree in between the lengthiest side as well as the most minimal edge, and uncovering balances. We offer a gain access to looking two portrayals recollecting the genuine goal to demonstrate their specific focal shows as a strategy of nonexclusive assessment efforts.
[7] A novel method is made use of via a solitary picture one can see all the collections. The framework depends upon a computation for low-capacity enclose of collected details, with the residential or commercial property that package between all events is made sure, paying little identification to their tendency. [8] We offer an electronic unsecured beginning contraption to help cross-castigating system serration with the running with goals: to look at changed framework phases, to find diagrams from the info, comment on it, as well as accumulate discovering. Seriating is a without supervision details mining procedure that reorders items right into a movement along a one-dimensional continuum to understand the whole strategy. Gathering administers posts to events, while serial names points to a problem inside an activity.
[9] This calculation shows the results of examinations investigating the family member well worth (from a HCI point of view) of blueprint attracting feel as well as figuring's using a solitary overview. The end results exhibit that while some individual style influence human implementation, it is tough to share that one tally transcends to anything an additional from a social understanding viewpoint, Would certainly a close to stylish as well as mathematical close outcomes be relocating nearer on a plan of various describes, specific social implementation estimates, Would a similar cleaned as well as analytical family member results be particular with numerous social implementation estimates much more actions to this demand requirement to examined.
[10] Minimal world charts provide a technique to make versatile, instinctual representations of irrelevant globe outlines, involving the client to evaluate close to packs while prolong basic chart of the whole formation. The perception strategy uses a mix of both semantically as well as geometrical turns, while the plan is made by springtime fixed figuring utilizing a beginning late affected oblige to appear.
[11] Info turning techniques is utilized for multivariate depiction of info. Especially, the consequences of a couple of exams recommend that 3-D representations can be a lot more disastrous than fixed shows in subject judgments of info events as well as information structure. [12] Instead of showing the cross area as a 2-dimensional lessen degree photo ODI for human understanding, VAT DT(Diagonal Tracing) isolates the framework by taking navels of different sorts along its edge to edge and hands down the disposition turns, with one of the most strong of them being the dcontort. [13] Framework Powering is discussed as, Offered a method of n focuses with a grid of pair wise resemblance matter, one ought to require to separate focuses into packs so essentially equivalent concentrates are as one and also particular ones disengaged. We offer an estimation calling for primarily organize managing that executes a very long time a little while later and also bears an abundant understanding like self-emphatic strolls around a chart. In this chart, the visual-mess problem in perceptions has been presented. Beginning there, we consider the cost-based, geometry-based, and also picture based edge-packaging method for structures, identical energizes, as well as stream maps.
III. PROPOSED MODEL
A methodology is offered for evidently considering the bundle inclination of objects when they have a tendency to either as inquiry vectors or by mathematical match sensible uniqueness areas. The write-ups are reordered as well as the reordered network of suit smart demand dissimilarities is showed up as a power image. Packs are turned up by decline squares of pixels along the corner to edge. Nevertheless, in Existing framework we can just refine one dataset sometimes. In VAT experience we are suggesting identical execution or process for 2 datasets sometimes. In the identical VAT we are passing two dissimilarities examine networks quickly and framework will certainly process them as well as set aside a few minutes. Visual Access of Cluster Tendency for Vast Data Sets: Assessment of organizing affection is an essential start phase in put away assessment. One device for looking over pack propensity is the Visual Gain Access to Propensity calculation. VAT influences an image to sort out that can be used for visual gain access to of party propensity in either social or test information. Whatever considered VAT winds up un-restrain able for wide edifying get-togethers. The changed VAT (reVAT) figuring lessens the proportion of evaluations prepared by VAT, and also changes the photo random sample with a game-plan of account representations that are used for the aesthetic assessment step. Thus, reVAT massacres the excellent informing recap concern which hamper VAT, yet shows another problem: image of the technique of reVAT account charts swings to be especially troublesome when the percentage of packs is sweeping, or there is vital cover in between gatherings of articles in the data. In this paper, we recommend estimation called large VAT which (i) manage the wide data problem continued by VAT, and (ii) takes care of the understanding issue driven forward by reVAT. Big VAT blends the semi asking for method used by reVAT with a photo program of the strategy of account stories showing the squeezing disposition info with a VAT-like picture. A couple of numerical viewpoints are offered to deal with and support the brand-new structure. In the nuclear mix inquire about wizard, tools for data examination and also affirmation are essential for professionals to do take a look at job. Web Range is such a mechanical social event, or, to put it simply utilized in the assessment of EAST of Institute of Plasma Physics to exhibit waveforms as well as will be better for scientists to get to and also disengage data. It is an adjustment to the past device jScope and East Scope. Web Scope is a contraption with a B/S show up online for information exam and delineation on the EAST workplace, so it could be utilized via web program. With the target of getting to and also discovering the vital examination data on the web, this data evaluation and also monitoring framework is acknowledged as Java Applets tongue which can be signed up with right into a HTML web page. Thusly, authorities from all parts of the world can visit the tool WebScope with Net, or, to put it simply remarkable for them to different information at wherever as well as at whatever factor seeing that there is Internet, pointless to indicate it, moreover, it can show instructional document on different servers meanwhile. The greater bit of this isn't given by jScope and also EastScope. Thinking about, WebScope is an additional and also the entire all the in addition promising mechanical celebration for information assessment and instinct. This paper depicts the technique of its being displayed, its positive conditions, its synopsis and its request. Tool for Visual Access of (Cluster) Tendency: A system is designated for ostensibly gauge the gathering aptness of a game plan of Objects O = {o 1 ,o 2 ,..o n } when they are addressed either as inquiry vectors or by numerical pair wise contrast regards. The things are flip-flop and the flipflop cross section of match keen dissent diversity is appeared as a power picture. Bundles are exhibited by dull squares of pixels along the corner to corner. Visual Access of Cluster Tendency Using Diagonal Uncover: The aesthetic examination propensity system, for plainly discovering the measure of standard social affairs in information, made by J. C. Bezdek, R. J. Hathaway and J. M. Huband, is astonishingly important, at any rate there is area for overhauls. As opposed to showing the prepared disparity cross area (ODM) as a 2D lower degree picture for anthropoid translation as is ended up by VAT, we take after making the changes in significant differences further slanting of the ODM. This developments the 2D information framework (systems) into 1D shows, appeared as what we call the prejudice turns, which equips one to focus just on one inconstant, particularly the stature. One of these bends, called the dwind, comprehensibly displays the closeness of social occasion formation as factors of reference in tops and valleys, which can be managed human eyes as well as furthermore by the simulation. Our analytical tests presented that the simulation can get assemble frameworks from the d-contort also occasionally where the human eyes see no formation from the aesthetic yields of VAT. In addition, achievement on each analytical examination was attain utilizing the matching (resolved) technique of program criterion respects. Picking the action of packs in an edifying get-together is an essential problem in generate assessment. The Visual Accessibility of (pack) Tendency (VAT) calculation is an effective instrument for investigative regarding gathering disposition that makes a characteristic photo of framework as the representation of complicated edifying records. Regardless, VAT can be computationally over the top for vast helpful documents because of its O (N2) time diverse nature. In this paper, we propose a sensible parallel anticipate breathing life right into the central VAT.
IV. PRACTICAL RESULTS
Practical Results deals with VAT process and Parallel VAT process of visual cluster representation. When the unordered dissimilarity matrix is given as input and the VAT, process the data given and displays the visual image of unordered dissimilarity matrix as ordered dissimilarity matrix. Parallel VAT deals with two data sets at a time which reduces the time in giving accurate results. The below Figure 1 illustrates the VAT image of dissimilarity matrix.
Step 4 Derive the system diversity matrix R ~ using the ordering array P as: ij R ~ = RP(i)P( j) , for 1 ≤ i,j ≤ n.
Step 5 Display the interchange matrix R ~ as the ODII ~ using the protocols given above.
Abbreviations and Acronyms
VAT : Visual Access of (cluster)Tendency International Journal of Advanced Research in Computer Science, 9 (6), Nov-Dec 2018, Parallel VAT Ordering and Display Algorithm: Step 4 Derive the system diversity matrix R ~ using the ≤ i,j ≤ n.
Step 5 Display the interchange matrix R ~ as the ODII ~ using ODM : Ordered Dissimilarity Matrix
V. CONCLUSION AND FUTURE ENHANCEMENT
At last in this undertaking we showed that identical VAT will update the job capability by passing the two datasets at a possibility to framework. The structure will certainly become asked for cross location finally it wi the social affair partiality. We offered one more system for parallel apparently analyzing package disposition utilizing requested originality images. The proposed asking for count is connected to VAT figuring for discovering the unnecessary spreading over tree of a weighted graph. The method can signal the nearness of specifically separated events by approaches for the sign of boring squares of pixels on the essential corner to corner of the ODI. This method is suitable to all estimations an every single mathematical information kinds, total or not enough. A couple 2 dimensional models recommend that ODI's might equip us "to see" geometric homes of shrouded inquiry instructional lists. Here two varying datasets cross section can be in the me visual examination of celebrations can be seen. This parallel Visual Access Propensity supervises two datasets to demonstrate the packages evidently on the double. In the meantime we can offer estimations of 2 datasets as well as can see the aesthetic grouping. The future rise is to calls different datasets instantly can go to the structure.
AND FUTURE ENHANCEMENT
At last in this undertaking we showed that identical VAT will update the job capability by passing the two datasets at a possibility to framework. The structure will certainly become asked for cross location finally it will demonstrate the social affair partiality. We offered one more system for parallel apparently analyzing package disposition utilizing requested originality images. The proposed asking for count is connected to VAT figuring for spreading over tree of a weighted graph. The method can signal the nearness of specifically separated events by approaches for the sign of boring squares of pixels on the essential corner to corner of the ODI. This method is suitable to all estimations and every single mathematical information kinds, total or not enough. A couple 2 dimensional models recommend that ODI's might equip us "to see" geometric homes of shrouded inquiry instructional lists. Here two varying datasets cross section can be in the meantime taken and visual examination of celebrations can be seen. This parallel Visual Access Propensity supervises two datasets to demonstrate the packages evidently on the double. In the meantime we can offer estimations of 2 he aesthetic grouping. The future rise is to calls different datasets instantly can go to | 4,961.6 | 2018-12-20T00:00:00.000 | [
"Computer Science"
] |
Evaluation of photocathode emission properties in an electron gun: one-step photoemission from bulk band to vacuum states
A one-step photoemission analysis is developed, using the exact one-dimensional quantum solution for transmission over and through a triangular barrier presented by Forbes and Deane (2011 Proc. R. Soc. A 467 2927), to evaluate the emission properties of a photocathode in an electron gun. The analysis, which employs transverse momentum conservation in electron emission, includes the physical attributes (density of states and energy-momentum dispersion) of both the bulk band emission states and the recipient vacuum states in its evaluation of the mean transverse energy and relative quantum efficiency of the emitted electrons.
Introduction
Planar, pulsed laser-driven, solid-state photocathodes are the most commonly employed electron sources for of x-ray free electron lasers (XFELs) [1,2], ultrafast electron diffraction [3][4][5][6] systems, and current (and potential future ultrafast) dynamic transmission electron microscopes (DTEMs) [7][8][9]-cutting-edge research instruments designed to study the atomic-scale dynamic properties of matter on fast timescales. The space-time resolution performance of these instruments is known to be limited primarily by the emission properties of the cathode. Of particular importance is the normalized emittance of photocathodes or, equivalently, the mean transverse energy (MTE) of the emitted electrons [10,11] as this determines the spatial divergence of the electron beam and hence its focusability (or beam quality). A lower emittance (or MTE) will provide for higher quality and higher photon energy x-ray beams generated by XFELs [12,13], improved fidelity of electron diffraction patterns through increased spatial beam coherence [6,14], and higher spatial resolution in DTEMs through a reduction in the focal spot size.
Prior theoretical analyses have connected the MTE from planar photocathodes to both the maximum excess energy of photoemission [15,16], ΔE=ħω−f (where ħω is the incident photon energy and f is the work function), and the photocathode temperature [17], specifically the temperature T e of the electrons, through the inclusion of the Fermi-Dirac distribution. More recent work [18][19][20][21] has indicated that the bulk electronic states from which the electron originate also need to be included as their band dispersion (i.e. effective mass, m * ) and density of states variation affects, and can limit, the MTE of the emitted electrons through transverse momentum conservation in photoemission [22]. Together with the work function variation with crystal orientation [23], this is now leading to experimental investigations of the spectral emission properties of singlecrystal photocathodes [24][25][26][27] for which a functional theory [28] is required to aid our understanding of photoemission and, consequently, the selection of future high brightness (low MTE and high quantum efficiency (QE)) photocathode materials.
In this paper, we present a new theoretical formulation of one-step photoemission [29] based on the exact one-dimensional quantum solution for transmission through (ΔE<0) and over (ΔE>0) a triangular barrier evaluated by Forbes and Deane [30]. The exact quantum solution is extended into the transverse dimension, using conservation of transverse momentum in electron emission [22], to include a parabolic bulk electronic band associated with a 'perfect' metal; that is, an emission band with spherical symmetry and an electron mass equal to the free electron mass m 0 . In addition to incorporating the local density of the emitting states Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence.
Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
(multiplied by the appropriate Fermi-Dirac population distribution) in the photoemission simulation, we have also now included the vacuum density of states; that is, the density of the recipient states for the emitted electron. This hither-to-fore omitted (to our knowledge) latter factor has a significant effect on both the MTE of the electron distribution emitted from photocathodes and the QE of photoemission.
Photoemission formalism
The essential features of the presented one-step photoemission analysis are schematically illustrated in figure 1. The bulk electronic band states are photo-excited by the incident photon energy ħω to form a set of 'virtual' electronic states whose energy-momentum dispersion relation is, to a very good approximation, a replica (at the higher energy) of that of the bulk band states since the photon momentum is much less than the electron momenta in the band [18]. These excited states, if occupied in the bulk, may emit electrons into the vacuum under the necessary energy-momentum conservation by either above barrier photoemission or photo-assisted tunneling; respectively, paths A and B in figure 1. The transmission through or over the triangular barrier generated by an applied surface acceleration field E acc of the electron gun is described by Forbes and Deane's recent exact analytical one-dimensional quantum solution [30]. In our extension of this formalism into the transverse dimension parallel to the planar photocathode surface, we invoke transverse momentum conservation in the electron emission [22], employ the energy-momentum relationships of both the bulk and vacuum states, and include their local density of states (LDS). Consequently, at each transverse momentum p T associated with a bulk band energy E, the one-step simulation evaluates the product of the emitting bulk band LDS, their occupation (using a Fermi-Dirac population distribution), the transmission coefficient of the triangular barrier, and the local density of the available vacuum state into which the emitting electron is to be received. The inclusion of the latter implies that the 'joint density of states' between the initial occupied and final unoccupied states is evaluated explicitly as is required in any description of band-to-band transitions; for example, optical absorption in semiconductors [31].
For simplicity, our simulation of one-step photoemission assumes that the emission is from a positive dispersion and bulk electron band with an effective mass equal to the free electron mass m 0 in the first Brillouin zone; that is, an energy-momentum relation of the form where p z is the longitudinal electron momentum in the band. The LDS of such an isotropic band is proportional to and the occupation of the band at any energy E is a function of both the Fermi energy ε F and the electron temperature T e through the Fermi-Dirac distribution; Figure 1. Schematic of the simulated one-step photoemission process: photo-excitation of the bulk band states into a set of identical virtual band states from which electrons transmit (with transverse momentum conservation) into the vacuum states either above (photoemission with ΔE>0; path (A)) or below (photo-assisted tunneling with ΔE<0; path (B)) the triangular barrier generated by the applied acceleration field in an electron gun: ε F =Fermi energy.
where k B is Boltzmann's constant. Equation (1) defines the longitudinal kinetic energy of the electron in the band which then allows the bulk band dispersion to be included in the transmission coefficient of the triangular barrier for emitted electrons [30];ˊṕ a c c 1 3 with e the free electron charge and ħ Planck's constant divided by 2π, and Ai(ξ) and Bi(ξ) are Airy functions of the first and second kind, respectively, with the prime denoting the first derivative. The Airy function argument is given by where the bulk band threshold energy for above barrier photoemission E th =ε F −ΔE. Finally, the density of the recipient vacuum state for the emitted electron at the conserved value of the transverse momentum is proportional to + m p p , where p z0 is the longitudinal momentum in the vacuum at emission. The exact analytical solution of Forbes and Deane [30] also allows for the evaluation of p z0 for emission above and below the barrier; We note here that equation (5a) is entirely consistent with the expectation that the maximum value of p z0 is equal to when T e →0. Further, equation (5b) indicates that p z0 below the barrier is small but not equal to zero unless E acc =0, in which case below barrier transmission ceases.
The presented one-step photoemission simulation uses the above to evaluate the relative number of electrons emitted at each value of the transverse momentum by summing over all energy states contributing to emission at that value of p T . The MTE is then obtained by taking the normalized second order moment of the evaluated transverse momentum distribution of the emitted electrons in the vacuum, = á ñ/ MTE p m , T 2 0 and the QE (in arbitrary units) is calculated by simply integrating over the distribution.
In figure 2, as a comparative example, the result of evaluating both the MTE and QE as a function of excess energy ΔE for a Ag(100) photocathode are presented; for which the work function f=4.36 eV [23], the effective mass m * of the emitting electrons near the Fermi level in the bulk band is (to a good approximation) equal to m 0 [32], and the Fermi energy ε F =5.49 eV. Room temperature operation (T e =300 K) is assumed and we employ a surface acceleration field of 1 MV m −1 typical of a DC electron gun for the simulation. The evaluation of the MTE using our one-step photoemission simulation (black solid line) is compared in figure 2(a) with the results of two prior analyses; the formulation of Dowell and Schmerge [15] for which MTE=ΔE/3 (dashed red line), and the more recent expression derived by Vecchione et al [17] (red solid line) where Li n is the ploy-logarithm function of order n. The latter, which asymptotically tends to the result of Dowell and Schmerge [15] at high excess energies, underestimates the MTE at all ΔE since it does not include either the bulk band or vacuum states. The expression of equation (6) does however agree with our one-step model below the photoemission threshold (ΔE<0) when the vacuum density of states are omitted in the simulation (black dashed line)-both giving a limiting value k B T e ≈25 meV for the MTE, as is also the case for the extension of the Dowell and Schmerge theory presented in [20]. This is because the electrons emitting from the population in the thermal Boltzmann tail that extends above the photoemission barrier have a sufficiently small energy spread k B T e to ensure that they originate from a relatively constant density of states in the bulk band-the approximation employed in obtaining equation (6). Consequently, the increase in the MTE to 31.5(±0.5) meV when ΔE<0 evaluated with the full one-step simulation is entirely due to the increase with higher electron momenta of the recipient vacuum density of states. As ΔE increases above threshold, both one-step simulations with and without the vacuum states return higher values of the MTE than that predicted by the prior analyses [15,17] due to the inclusion of the bulk band states. For excess energies greater than 0.1 eV, our full one-step photoemission model predicts MTE values 15%-20% greater than obtained from equation (6) for the simulated Ag(100) photocathode. We note that photo-assisted tunneling (below barrier emission) is negligible in this example, contributing less than 1% of the emitted electrons even at ΔE=−0.2 eV for the employed 1 MV m −1 acceleration field, and so does not contribute significantly to the presented MTE results.
The simulated spectral dependence of the QE for the Ag(100) photocathode example is shown in figure 2(b) (black line) together with the QE dependence predicted by Vecchione et al [17] (red line); where S 12 is a constant associated with the matrix element of optical excitation, transmission into the vacuum, etc. For the purpose of comparison, both our simulated one-step photoemission data and the dependence described by equation (7) are normalized to unity at ΔE=0. Both our one-step model and equation (7) display the expected rapid increase of the QE with ΔE associated with the strongly increasing number of filled bulk band (6) (red line) [17], and ΔE/3 (red dashed line) [15]. (b) QE as a function of ΔE; full one-step simulation (black line), one-step simulation without the vacuum states (black dashed line), and equation (7) (red line) [17], with corresponding power law fits for ΔE>0.25 eV shown as thin dotted lines.
states that can emit electrons into the vacuum as the excess energy increases. However, the log-log plot of figure 2(b) clearly indicates that the one-step simulation predicts a different power law dependence for the QE on ΔE than equation (7) for excess energies greater than 0.25 eV=10k B T e . A fit (dotted line) to our one-step simulation for this Ag(100) example indicates that QE=A(ΔE) 2.85 , where A is a constant, whereas equation (7) returns the Fowler-DuBridge relation of a quadratic power law dependence (red dotted line); i.e. QE=A(ΔE) 2 [33,34]. The difference in these power law dependences is directly related to the inclusion of the bulk band and vacuum states in our one-step simulation, both of which are omitted in prior analyses [15,17,33,34]. Indeed, removal of the vacuum density of states from the one-step analysis generates the data set shown by the black dashed line in figure 2(b) for which the dependence on excess energy is of the form QE=A(ΔE) 2.4 for ΔE>10k B T e (dotted line)-a power law dependence between that of equation (7) and our full one-step photoemission simulation. This latter data set is normalized by the QE with the vacuum states included at ΔE=0 to illustrate the roughly factor of three reduction in the QE at low excess energies that is caused by the density of vacuum states at low emitted electron momenta. Further, both our photoemission simulation and the analysis of Vecchione et al [17] clearly show the influence of the 300 K Boltzmann tail on the QE at excess energies below 0.25 eV, which also provides for a finite QE when ΔE<0.
Although incorporating a more realistic triangular barrier solution [30] and the physical properties of both the bulk and vacuum states, our one-step model of photoemission does not include a number of factors that can affect photocathode performance. First and foremost, the photoemission simulations do not include the matrix element describing the optical excitation of the electrons into the emitting 'virtual' states. This of course important for an Ab initio determination of the QE [24], but it is unlikely to affect the MTE evaluations from the simulated electron emission distributions unless the matrix element has a significant variation in momentum space for the excited virtual state. Second, the employed exact triangular barrier solution of Forbes and Deane [30] does not allow for the inclusion of the Schottky effect [15,16,35] in a formal manner. However, other than the lowering of the work function, the Schottky effect is not expected to alter significantly the presented simulation results, except perhaps at the highest acceleration fields where the exact shape of the potential barrier becomes important for electrons emitted by photo-assisted tunneling. Third, for the sake of brevity, the optical properties of the photocathode material, specifically the surface reflectivity and absorption coefficient for the incident light, are not included in our analysis but they could be incorporated for each individual photocathode material. The spectral properties of both will of course affect the photocathode QE by determining the total number density of excited electronic states per incident photon, but not the MTE as this is a self-normalized parameter. Fourth, the effects of chemical and surface roughness, which have been treated elsewhere [35][36][37][38][39], are omitted; that is, the photocathode surface is assumed to be flat and at a uniform potential. Fifth, as the presented one-step photoemission formalism assumes transverse momentum conservation in electron emission [22], the scattering of the excited virtual state electrons by phonons [40] during or just before emission into the vacuum is also not included. The strength of electron-phonon scattering is strongly material dependent and can be expected to result in an increased MTE and likely a reduced QE. Finally, and for the same reason, carrier-carrier scattering [15,41] (e.g. inelastic electron-electron scattering) is not included in our analysis.
Simulation results
In the following sub-sections, we discuss the effect that the electron temperature, Fermi energy, and the surface acceleration field are expected to have, within one-step photoemission, on the spectral dependence of both the MTE and QE from planar photocathodes. The presented simulation results employ the Ag(100) exemplar of figure 2 as a template, changing a single parameter at a time to illustrate its effect on the photocathode's electron emission properties. As the QE is not explicitly evaluated from first principles, all the QE data is normalized to that at ΔE=0 for the Ag(100) photocathode in a DC gun (f=4.36 eV, band electron effective mass m * =m 0 , ε F =5.49 eV, T e =300 K, and a surface acceleration field E acc. =1 MV m −1 ).
Electron temperature
The effect of changing the photocathode temperature, or more specifically the temperature T e of the electron distribution in the simulated isotropic Ag(100) band, is shown in figure 3. As expected, the MTE below the work function (ΔE<0) is strongly temperature dependent due to over barrier emission from the Boltzmann tail of the electron distribution ( figure 3(a)). In this region just below photoemission threshold, the minimum value of the MTE is again ∼25% greater than k B T e , primarily due to the influence of the vacuum density of states. At lower negative excess energies, photo-assisted tunneling starts to dominate the over barrier emission from the thermal tail of the electron distribution and the MTE decreases due to the strong reduction in tunneling probability with transverse momentum p T -an effect not visible in figure 3(a). At high positive excess energies, when ΔE?k B T e , the spectral dependence of the MTE tends to the low temperature value since the effect of the Boltzmann tail population is diminished with respect to the rest of the occupied emitting states. The low temperature linear dependence of the MTE on the excess photoemission energy is of the form ΔE/2.53, which is to be compared with ΔE/3 from the prior analyses [15,17] that do not include the combined effects of the bulk and vacuum states.
For electron temperatures T e <100 K, our one-step photoemission simulation predicts that MTE values less than 10 meV should be attainable at low or negative excess energies for photocathode materials with similar parabolic band structures and m * ≈m 0 ; for example, appropriately oriented single-crystals of Cu, Au, and the alkali group metals [42]. We also note that a recent study of cryo-cooled Cs 3 Sb photocathodes illuminated at 690 nm reported a reduction of the MTE from ∼43 meV at 300 K to ∼12 meV at 90 K [43]. As in this case electron emission is expected to be from the Boltzmann tail of the electron distribution photo-excited into the conduction band states, the fact that both measured MTE values are greater than their corresponding thermal values of 25 and 8 meV is consistent with our predicted influence of the vacuum density of states on the MTE of electron emission. For Cs 3 Sb, an additional factor is likely be the effective mass m * and dispersion of the emitting conduction band state.
The spectral dependence of the QE at different electron temperatures T e ( figure 3(b)) also illustrates the strong influence of bulk band population in the Boltzmann tail at low and negative excess photoemission energies. Here we have plotted the normalized QE to the 0.348 (=1/2.875) power against ΔE as this power law dependence is the best fit to the simulation data at the lowest 30 K temperature where the Boltzmann tail population has the smallest effect. As T e increases much beyond 300 K, where QE 1/2.85 provides the best linear dependence with ΔE ( figure 2(b)), it is clear that a simple power law of the form QE=A(ΔE) n is no longer a valid expression for excess energies below 1 eV. Nonetheless, for T e around room temperature and below, a plot of QE 1/n against ΔE should allow for the extraction of the photocathode work function with reasonable accuracy [44], provided that the linear fit employs measurements taken for ΔE>10k B T e . As will be shown below, such a power law scaling for the QE only exists if the band Fermi energy ε F is much greater than the excess photoemission energy ΔE.
Fermi energy
As the Fermi energy defines the energy of the last electron in the bulk band as T e →0, the emission properties of a solid-state photocathode are expected to be affected when ΔE is of the order of or greater than ε F . The results of a one-step photoemission simulation for Fermi energy values of 0.2, 0.5 and 1 eV, depicted in figure 4, show that this is indeed the case. In all cases, the dependence of the MTE on ΔE ( figure 4(a)) is similar to that in figure 2(a) for ε F =5.49 eV (dotted-dashed line) at low excess energies, but displays a distinct 'cusp' when ΔE=ε F (vertical dashed lines). At this critical value of the excess energy, all the excited bulk band electronic states with positive p z (in the direction of emission) are 'resonantly' matched in momentum and energy to the vacuum states leading to an increased transmission through the barrier at all p T and hence a larger MTE. As ΔE increases beyond ε F , the MTE levels off to a slightly lower and relatively constant value as the barrier transmission for the electrons excited from the bulk band moves off the ΔE=ε F resonance and becomes less dependent on ΔE. This interpretation is supported by the spectral dependence of the QE displayed in figure 4(b) which shows a clear trend discontinuity at ΔE=ε F , just when all the band states with positive p z can emit. At higher ΔE, the barrier transmission does increase [30], but no new states are available leading to a slower increase in QE with ΔE.
Also evident from the log-log plot in figure 4(b) is that the QE no longer follows a simple power law dependence with excess energy, QE=A(ΔE) n for ΔE>10k B T e , when one-step photoemission is from a bulk band with a low Fermi energy. This must be the case since significant changes in the number density of available photo-emitting states occur as ΔE increases for excess energies less than, but of the order of, the Fermi energy. As a result, extraction of a value for the work function using measured QE data may prove difficult without a functional photoemission model in cases where ε F is in the range of 10−100k B T e . In addition, we note that the one-step photoemission QE from the bulk band near threshold increases as the Fermi energy decreases-all the QE data being normalized to that at ΔE=0 for ε F =5.49 eV and T e =300 K ( figure 2(b)). This is a direct result of increased barrier transmission when the longitudinal momentum p z of an excited emitting state is closer to the momentum of the emitted electron p z0 from that bulk state.
Surface acceleration field
Although the Schottky effect is not included in our one-step photoemission simulation, it is nonetheless informative to examine the predicted effect of the surface acceleration field E acc on both the MTE and QE within the exact triangular barrier solution [30]. Figure 5(a) shows the dependence of the MTE on the acceleration field for selected near threshold excess photoemission energies of −0.1, −0.05, 0. 0.05, and 0.1 eV. At positive values of ΔE, the MTE is fairly independent of E acc as above barrier photoemission dominates. Closer to photoemission threshold there are more significant effects. Most notably, the MTE is reduced for ΔE<0 as the applied field is increased, reaching a minimum value below the k B T e =25 meV thermal energy for surface fields between 40 and 80 MV m −1 when ΔE<−0.05 eV. This lower than expected MTE value is caused by the increased contribution at higher E acc of photo-assisted tunneling to the transverse momentum distribution of the emitted electrons. This contribution has a MTE lower than 25 meV for fields less than about 80 MV m −1 due to the rapid drop in barrier tunneling transmission probability as p T increases for an electron at a given bulk band energy. At higher fields, the triangular barrier becomes sufficiently narrow to increase the tunneling transmission probability at larger p T so that the MTE again increases somewhat for ΔE<0. As a result, a minimum in the MTE develops below the photoemission threshold-an effect that may not be observable experimentally since the Schottky effect is not included in this photoemission simulation.
The effect of E acc on the QE follows expected trends and is displayed in figure 5(b) for the same selected near threshold excess photoemission energies of −0.1, −0.05, 0. 0.05, and 0.1 eV. At low surface field strengths, where above barrier photoemission dominates, the QE slowly decreases with increasing E acc due to the initial E acc −1/3 dependence of the transmission coefficient for the triangular barrier (equation (3)). At field strengths greater than 20 MV m −1 , the contribution from photo-assisted tunneling increases and this eventually reverses the initial trend-the point of reversal being at lower values of E acc for lower values of ΔE since the QE of above barrier photoemission (due to the photo-excited Boltzmann tail of the electron distribution) falls rapidly with decreasing ΔE below the photoemission threshold. Aside from the increased tunneling probability at higher acceleration fields, we note that a higher density of recipient vacuum states is also available at larger E acc since equation (5b) states that the longitudinal momentum of the electron emerging into the vacuum from the barrier increases with the cubic root of E acc .
Summary
A one-step photoemission analysis is presented that employs the exact triangular barrier transmission solution of Forbes and Deane [30] to evaluate the MTE and QE (in relative terms) associated with the transition from the emitting bulk band states to the recipient vacuum states. The inclusion of both the local density of the virtual excited band states and the physical characteristics of the vacuum states is shown to have a significant effect on both the MTE of the electron distribution emitted from photocathodes and the QE of photoemission. For an electron-like (positive dispersion) bulk emission band, the vacuum density of states is shown to limit the minimum MTE attainable at low (and negative) excess energies to values about 25% greater than k B T e when ΔE>−10k B T e . Similarly, for positive excess photoemission energies, the combined physical characteristics of both the emitting band and the vacuum contribute to MTE values about 20% greater than that of the polylogarithmic functional form of equation (6) [17] when ΔE=ε F . For the QE, the one-step photoemission analysis indicates that the same effects will alter the quadratic power law dependence of the QE on excess energy predicted by equation (7) [17] for ΔE>10k B T e to a power law dependence closer to cubic [44]. As these simulation results represent a significant departure from prior theoretical formalisms of photoemission [15,17,19,33,34], they will need to be verified by experiment, ideally using single-crystals of commonly used photocathode materials; for example, Cu(100) as there is a single electron-like emission band in this case. Similarly, the predicted variation of the MTE and QE with excess photoemission energy when ΔE∼ε F (figure 4) for the simulated first Brillouin zone Γ point emitter will also require experimental verification. The one-step photoemission analysis also indicates that the MTE of the emitted electron distribution could decrease by 20%-30% and the QE increase by about a factor of 2 near (and below) the photoemission threshold when the surface acceleration field is around 50 MV m −1 (figure 5), although the Schottky effect [15,16,35] is not taken into account. However, since the QE is usually quite low (∼10 −7 or less) when ΔE≈0, large incident laser powers are likely to be required to generate sufficient electrons for may practical requirements. For short electron pulse generation with ultrafast ps and sub-ps laser pulses, significant laser-induced heating of the electron distribution in the photocathode material can then result [45], so that the anticipated reduction in MTE will likely be more than offset by the resultant increase in T e (see figure 3).
Although the presented photoemission simulation results have only employed an emitting band with spherical symmetry and an electron mass equal to the free electron mass, extension of the analysis to more realistic bulk bands encountered in photocathode materials appears quite possible. In particular, extension to parabolic electron-like bulk bands that possess cylindrical symmetry about the emission direction, but are characterized by a longitudinal effective electron mass different from that in the transverse direction, is straightforward. Inversion of the dispersion for hole-like bands, which may have different spectral dependences for the MTE and QE since their density of states increases (rather than decreases) with increasing ΔE, should also be possible. In principle, a direct connection could be made with the actual E(p) dispersion of the emitting band (s) in real photocathode materials using density functional theory based band structure calculations. It is further noteworthy that the presented analysis may also be employed to simulate the final emission step in three-step photocathode emitters, such as negative electron affinity photocathodes, once the temporal dynamics of the carrier distribution after photo-excitation are known, since figure 4 already shows results for ΔE>ε F . | 6,858.6 | 2019-03-29T00:00:00.000 | [
"Physics"
] |
Amplitudes and time scales of picosecond-to-microsecond motion in proteins studied by solid-state NMR: a critical evaluation of experimental approaches and application to crystalline ubiquitin
Solid-state NMR provides insight into protein motion over time scales ranging from picoseconds to seconds. While in solution state the methodology to measure protein dynamics is well established, there is currently no such consensus protocol for measuring dynamics in solids. In this article, we perform a detailed investigation of measurement protocols for fast motions, i.e. motions ranging from picoseconds to a few microseconds, which is the range covered by dipolar coupling and relaxation experiments. We perform a detailed theoretical investigation how dipolar couplings and relaxation data can provide information about amplitudes and time scales of local motion. We show that the measurement of dipolar couplings is crucial for obtaining accurate motional parameters, while systematic errors are found when only relaxation data are used. Based on this realization, we investigate how the REDOR experiment can provide such data in a very accurate manner. We identify that with accurate rf calibration, and explicit consideration of rf field inhomogeneities, one can obtain highly accurate absolute order parameters. We then perform joint model-free analyses of 6 relaxation data sets and dipolar couplings, based on previously existing, as well as new data sets on microcrystalline ubiquitin. We show that nanosecond motion can be detected primarily in loop regions, and compare solid-state data to solution-state relaxation and RDC analyses. The protocols investigated here will serve as a useful basis towards the establishment of a routine protocol for the characterization of ps–μs motions in proteins by solid-state NMR. Electronic supplementary material The online version of this article (doi:10.1007/s10858-013-9787-x) contains supplementary material, which is available to authorized users.
Introduction
The three-dimensional structure that a protein spontaneously adopts in its environment is dictated by a subtle balance of numerous interactions, which are all individually weak. At physiologically relevant temperatures, these interactions are continuously rearranged, allowing a protein to dynamically sample a range of different conformational states. The dynamic processes that connect these various conformational states on the complex energy landscape of a protein take place on a wide range of time scales. Elucidating the interconversions between these various states is crucial for the understanding of biomolecular function at atomic level. Characterizing protein motion at an atomic scale is a challenging task, as it requires, in principle, the determination of a multitude of structures, their relative energies as well as the time scales (and thus, energy barriers) that link these states. Relevant time scales for dynamic biomolecular processes cover over twelve orders of magnitude (ps-s), a breadth that represents a severe challenge to any experimental method. Solution-state NMR is a very well established method to address protein dynamics at atomic resolution. A number of solution-state NMR approaches exist to study motion on time scales from picoseconds to minutes (Kleckner and Foster 2011;Mittermaier and Kay 2009;Palmer 2004). The mobility of proteins on short time scales, from picoseconds to microseconds, corresponds to interconversion between structurally similar states separated by low energy barriers. This fast protein motion is the focus of the present paper. Most often, the breadth of the conformational space sampled on this time scale is expressed in the simplified terms of an order parameter, S 2 (Lipari and Szabo 1982a), or, equivalently, fluctuation opening angle (Brüschweiler and Wright 1994) that describes the motional freedom of a given bond vector under consideration; the corresponding time scale of the fluctuations is expressed as correlation time, s. Alternative to these approaches, the ''slowly relaxation local structure'' approach has also been employed to study ps-ns motion in proteins (Meirovitch et al. 2010).
Although these sub-microsecond time scale motions are generally much faster than actual functional turnover rates in proteins (e.g. enzymatic reactions or folding rates), the fast local motions may be functionally relevant as they are thought to contribute to stability and facilitate ligandbinding through the entropic contributions (Frederick et al. 2007;Yang and Kay 1996). Therefore, the determination of sub-microsecond motions is of considerable interest, and is routinely performed in solution-state NMR. In order to be able to decipher the above-mentioned entropy-motion relationship, it is crucial that the motional amplitudes can be determined with high accuracy, i.e. that systematic biases are eliminated. For the case of solution-state NMR, the measurement of 15 N relaxation is the established way to measure backbone mobility on time scales up to a few nanoseconds, the time scale of overall molecular tumbling. Provided some experimental care (Ferrage et al. 2009), these experimental approaches provide quantitative measures of motion, and, thus, can be translated e.g. to entropy (Yang and Kay 1996). The interpretation of dynamics data from NMR can also be guided and assisted through MD simulations, which allow getting a mechanistic insight. (Granata et al. 2013;Xue et al. 2011).
In recent years, solid-state NMR (ssNMR) spectroscopy has evolved into a mature method for studying protein structure, interactions, and dynamics in biological systems that are unsuited for solution-state NMR, such as insoluble aggregates or very large assemblies. In the context of fast (ps to ls) motions, solid-state NMR may be significantly more informative than its solution-state counterpart, as the time scale above a few nanoseconds-invisible in solutionstate NMR because of the overall molecular tumbling-is readily accessible. In contrast to solution state NMR, to date there is no consensus protocol about the methodology for measuring motions by ssNMR. In the solid state, several routes are possible to address fast motions (ps-ls). (1) Spin relaxation is sensitive to both time scales and amplitudes and, in the case of 15 N spins, can be measured and interpreted in a rather straightforward manner, as the relaxation is largely dominated by the dipole interaction to the attached 1 H and the 15 N CSA. Approaches for measuring longitudinal Giraud et al. 2004) (R 1 ) and transverse (Chevelkov et al. 2007;Lewandowski et al. 2011) (R 1q and cross-correlated) relaxation parameters in proteins have been proposed. (2) Measuring the motion-induced reduction of anisotropic interactions (dipolar couplings, chemical shift anisotropies) provides direct access to the amplitude of all motions occurring on time scales up to the inverse of the interaction strength (in the kHz range), through the reduction of the coupling values from the rigid-limit values; the case of the dipolar coupling of directly bonded nuclei is most attractive, as the rigid-limit value of the interaction is readily computed from the bond length. In principle, the measurement of site-specific CSA tensors may confer similar information (Yang et al. 2009), although the interpretation is more difficult because the static-limit CSA is not easily determined.
Different approaches have been proposed in recent studies of protein dynamics, as to which type of the above data should be used for determination of motional parameters, as well as to how these experimental data should be acquired (Chevelkov et al. 2009a, b;Lewandowski et al. 2011;Schanda et al. 2010;Yang et al. 2009), and even whether they should be interpreted in terms of local or global motion (Lewandowski et al. 2010a). In this manuscript, we systematically investigate ways to determine backbone dynamics in proteins using various longitudinal and transverse 15 N relaxation rates, as well as 1 H-15 N dipolar coupling measurements. We show that 15 N relaxation data are generally insufficient to correctly describe amide backbone dynamics, even when different types of relaxation rate constants are measured at multiple static magnetic field strengths. In particular, relaxation data fail to correctly report on picosecond motion. We find that only the addition of 1 H-15 N dipolar couplings allows resolving this problem. We investigate in detail how systematic errors in such dipolar-coupling measurements can arise, using the REDOR scheme, and show how they are suppressed to below 1 %. Together with the relaxation analysis, this study will serve as a useful guide for analysis of protein backbone motion by solid-state NMR.
We report new NH dipolar coupling measurements and 15 N R 1q data, measured on a microcrystalline preparation of deuterated ubiquitin at MAS frequencies of 37-40 kHz. Together with previously reported relaxation data (a total of up to 7 data points per residue), we investigate backbone mobility in microcrystalline ubiquitin, and compare the results to solution-state NMR data.
Materials and methods
In addition to previously reported relaxation data on microcrystalline ubiquitin (Schanda et al. 2010), we have measured 15 N R 1q relaxation rate constants and 1 H-15 N dipolar couplings. All experimental data reported were collected on a Agilent 600 MHz VNMRS spectrometer equipped with a triple-resonance 1.6 mm Fast-MAS probe tuned to 1 H, 13 C and 15 N. A microcrystalline sample of u-[ 2 H 13 C 15 N]-labeled ubiquitin, back-exchanged to 1 H at 50 % of the exchangeable sites was prepared as described previously (Schanda et al. 2010(Schanda et al. , 2009. 15 N R 1q relaxation rates were measured at a MAS frequency of 39.5 kHz. The 15 N spin lock field strength was set to 15 kHz and the R 1q decay was monitored by incrementing the spin lock duration from 5 to 250 ms (total 10 points). 1 H-15 N dipolar couplings were measured at 37.037 kHz MAS frequency. In all cases, the effective sample temperature was kept at 300 K, as determined from the bulk water resonance frequency. MAS frequencies were stable to within \10 Hz.
All NMR spectra were proton-detected; the pulse sequence for the 1 H-15 N dipolar coupling measurement is shown in Fig. 3, and the experiment for R 1q is similar, with the REDOR element being replaced by a 15 N spin lock of variable duration.
All NMR data were processed with nmrPipe (Delaglio et al. 1995), and analyzed with NMRview (OneMoon Scientific. Inc.). Peak volumes were obtained by summing over rectangular boxes; error estimates on the volumes were calculated from the square root of the number of summed points multiplied by three times the standard deviation of the spectral noise.
For the analysis of the 1 H-15 N dipolar coupling measurement experiment, in-house written GAMMA (Smith et al. 1994) simulation programs were used, and dipolar couplings were obtained using a grid-search strategy, as previously described (Schanda et al. 2010).
All data analyses, i.e. the fitting of the dipolar couplings, as well as the fit of R 1q relaxation curves and the modelfree analyses were performed with in-house written python programs. Relaxation rate constants for R 1 and the dipolar-CSA cross-correlated relaxation rate constants (g) were calculated as described before (Schanda et al. 2010); the R 1q rates were converted to R 2 via the chemical shift offset and the R 1 , as where h is the angle between the effective spin-lock field and the external magnetic field (90°represents a resonance exactly on-resonance with the spin-lock field). These corrected R 2 rate constants are essentially identical to the measured R 1q , because h is close to 90°for almost all residues (average: 88.6°, minimal value 85°), and R 1 is very small compared to R 1q . The R 2 rate constant is given as: where all constants are defined as in (Schanda et al. 2010). Equivalent expressions for R 1 and the cross-correlated relaxation rate constant are also given there. Spectral densities, J(x) were computed according to the simple model-free (SMF) or extended model-free (EMF) approach, as for SMF and for EMF. In EMF, fast and slow motional contributions are denoted with the subscripts ''f' ' and ''s'', respectively. In all analyses a N-H bond length of 1.02 Å was used (Bernado and Blackledge 2004), and the 15 N CSA was assumed to be axially symmetric with r z = 113 ppm (Dr = 170 ppm). The N-H bond length may vary slightly across the sequence, as a consequence of hydrogen bonding of amides. Particularly, it might be that amides in secondary structure elements have longer N-H bonds. This would lead to a decrease in the measured dipolar couplings. We assume that this effect is minor, because we find that the measured dipolar couplings in secondary structure elements are higher than in loops, which is the opposite of what is expected if bond elongation was dominant. Furthermore, we (Schanda et al. 2010) and others (Chevelkov et al. 2009b) also showed previously that there is no correlation between the dipolar coupling and the amide chemical shift (which, in turn, correlates with the H-bond strength). If our assumption of uniform bond length was incorrect, i.e. if NH bonds were longer in secondary structures than in loops, then the order parameters that we report would slightly underestimate the real values in secondary structures, and overestimate the values in loop regions. As explained above, we believe that it is safe to neglect these effects. 15 N CSA tensors may vary from site to site, as a consequence of structural differences between different peptide planes. As a consequence, the relaxation rates are impacted, because the CSA is one of the two relaxation mechanisms that relax the 15 N spin. Note that the CSA mechanism is generally the less important one compared to the dipolar interaction, in particular given the fact that our study was performed at a rather low field strength where the CSA is small (in Hertz).
Note that the site-to-site variation of the 15 N CSA tensor has only a very small effect on the determination of 1 H-15 N dipolar couplings using the REDOR experiment (Schanda et al. 2011b). Likewise, the 1 H CSA tensor has negligible effects on the apparent dipolar coupling, see (Schanda et al. 2011b) and Figure S9 in the Supporting Information.
Best-fit parameters in the two different model-free models were obtained by minimizing the target functions: where X i are the observables (R 1 , R 2 , g or dipolar order parameter S 2 ), r exp is the experimental error margin on the parameter X.
As described in the text, we also used fits where the order parameter was fixed to the dipolar-coupling derived one. Although different implementations can be devised, we achieved this by placing a strong weight, w i = 1000, on the dipolar-coupling term when minimizing the Chi square functions, as This implementation allows keeping the same minimization algorithm. Minimization of the target function was done both by grid search and by the fmin function in numpy, both of which yielding essentially identical results (with the latter one being faster).
All reported error margins on relaxation rates, dipolar couplings and fitted motional parameters were obtained from standard Monte Carlo simulation approaches (Motulsky and Christopoulos 2003).
The F-test analysis that is shown in Figure S6 in the Supporting Information was performed using standard methods that can be found elsewhere, e.g. in (Motulsky and Christopoulos 2003), and are just briefly summarized as follows.
The F-ratio, as shown in Figure S6, was calculated for each residue as: Here, DF SMF and DF EMF refer to the degrees of freedom in the fit of simple and extended model-free, respectively. The degrees of freedom are given by the number of experimental data minus 2 (SMF) or minus 4 (EMF). A probability value was obtained from this F value using a function implemented in the stats module of python.
Theoretical considerations Figure 1 shows the computed relaxation rate constants for 15 N longitudinal relaxation R 1 , and transverse relaxation, i.e. R 2 (that can be obtained at fast spinning from R 1q measurements (Lewandowski et al. 2011)), and 1 H-15 N dipole/ 15 N CSA cross-correlated relaxation (in the following denoted briefly as ''CCR''), as a function of the amplitude of motion and time scale within the simple model-free (SMF) approach. These relaxation rates are shown for time scales ranging from picosecondsa time scale often found in solution-state analyses of local backbone fluctuations-to microseconds, where the Redfield theory reaches its limit of validity (Redfield 1957). These plots show that R 1 relaxation is most sensitive to motion on time scales of nanoseconds, as expected from its dependence on J(x N ); both R 2 and the dipole-CSA CCR are sensitive to motions on time scales exceeding about 1 ns (leading to a measurable relaxation rate of about 1 s -1 ). For completeness, Fig. 1 also shows the information content from dipolar couplings measurements, which directly reflect the motional amplitude, independently of the time scale. The measurement of a single relaxation rate yields only very limited information, constraining the amplitude and time scale of motion to all combinations of S 2 and s falling on a given contour line in Fig. 1a, b, d. Obtaining amplitudes and time scales of motion from relaxation data requires the measurement of several relaxation data. Due to the different dependencies of longitudinal and transverse relaxation rates on motional parameters, it may be possible to derive these parameters from measurement of R 1 and R 2 / CCR measurements at a single static magnetic field. In addition, one may complement such data with measurements at different field strengths, as these relaxation rates (slightly) depend on the field strength (see Supporting Information in (Schanda et al. 2010)). To investigate how well such an approach would perform in practice, we calculated in-silico relaxation rates for a number of dynamic scenarios, and subjected them to a fit routine, assuming realistic error margins on the rate constants.
To this end, we have assumed a N-H bond vector that undergoes motion that is described by one order parameter and one time scale (SMF). Relaxation rates and dipolar couplings were back-calculated for different settings of S 2 and s, random noise was added and the data were fit with the SMF formalism. Figure 2a-d show the results of such fits for the case that the motion is in the picosecond range (a, b), or in the nanosecond range (c, d). If only relaxation data are used (panel a), and if the motion is fast then the fit does not provide reliable results, and the order parameter is very poorly defined. Given the insensitivity of transverse relaxation parameters to fast motion (see Fig. 1), this behavior is expected. Interestingly, even the inclusion of relaxation data at multiple fields does not significantly improve the situation, and the uncertainty of the fit remains essentially the same (data not shown). If the motion is in the nanosecond range, the use of relaxation data alone provides reasonable estimates of the motion (Fig. 2c), as transverse relaxation data contain information about motion on this time scale. The situation is generally greatly improved if dipolar coupling data are available (Fig. 2b, d), and in this case both the time scale and the order parameter are correctly obtained, irrespective of the time scale of motion.
It appears unlikely that the backbone exhibits only one single motional mode over the range of time scales that the experimental observables are sensitive to (ps-ls). Therefore, we performed a similar investigation, assuming two distinct motional modes, within the extended model-free approach of Eq. 4 (EMF). As above, various values of amplitudes and time scales of the two motional modes were assumed. The resulting back-calculated relaxation rate constants were fitted with the SMF and EMF approach. Here we assumed that the total order parameter is constant, and the two order parameters, S f 2 and S s 2 , are varied. The results of such fits, are shown in Fig. 2e, f. If only relaxation data are used, and the data are fitted with the SMF approach, then the resulting order parameter is always overestimated. This overestimation is particularly pronounced if the underlying motion is predominantly fast, i.e. if S f 2 is low (and, according to our assumption, S s 2 is high). Again, this reflects the fact that relaxation data alone are not capable of correctly picking up fast motion. This mirrors recent studies, where the analysis of relaxation data showed systematically overestimated order parameters (Lewandowski et al. 2011;Mollica et al. 2012).
Fitting the EMF model to relaxation data only essentially fails, as the parameter space is not sufficiently restrained, as was also reported elsewhere (Mollica et al. 2012). Given that in these analyses a total of 6 relaxation data were used, with 3 magnetic field strengths, it appears unlikely that the addition of even more static magnetic field strengths will improve the situation significantly.
The inclusion of dipolar coupling data changes this situation significantly, as shown in Fig. 2f. The order parameter is directly given by the dipolar coupling and therefore, trivially, this value is always correctly retrieved. In the EMF case, the two individual order parameters, S f 2 and S s 2 , as well as the two correlation times are all correctly obtained. When these data are fitted within the SMF approach, i.e. an oversimplified model, then necessarily the motion is either fast or slow. Interestingly, the fitted correlation time obtained in the SMF fit is very close to one of the two values assumed (lower panel in Fig. 2f). Whether the SMF fit retrieves a fast motion or a slow motion depends on their relative amplitudes, S f 2 and S s 2 , i.e. the fitted s jumps the fast to the slow regime once the amplitude of the slow motion exceeds a certain level.
These in-silico considerations show that relaxation data alone, even if measured at multiple field strengths, do not provide satisfactory fits, and often lead to systematic errors of order parameters, as sub-nanosecond motion cannot be detected properly with this approach. Only if dipolar couplings are measured, accurate data can be obtained. In the following section we, therefore, investigate how dipolar couplings, which are crucial for obtaining reliable measures of motion, can be measured at high accuracy.
Measurement of one-bond H-X dipolar couplings from REDOR
A number of recoupling sequences have been proposed for the measurement of heteronuclear dipolar couplings in proteins, in particular TMREV (Helmus et al. 2010;Hohwy et al. 2000), R sequences (Hou et al. 2011(Hou et al. , 2013Levitt 2002;Yang et al. 2009), phase-inverted CP (Chevelkov et al. 2009b;Dvinskikh et al. 2003), DIPSHIFT (Franks et al. 2005;Munowitz et al. 1981) and REDOR (Gullion Investigation of the robustness of fitting the amplitude and time scale of motion from different types of data. The left column shows fits using relaxation data alone, while the right column shows fits of relaxation and dipolar-coupling derived order parameters. In a, b, a single motion, with order parameter S 2 = 0.82 and s = 3.2 9 10 -11 s was assumed. From these parameters, 15 N relaxation rate constants (R 1 , R 2 and g) were back-calculated at a static magnetic field strength of 14.09 T via the model-free approach.
In a, these three relaxation rates were fitted in the framework of the SMF approach. Shown is the v 2 surface of obtained from a grid search. A rather poorly defined minimum extending over a wide range of S 2 values is found. Red points shown the best fits of 2000 Monte Carlo runs, obtained from varying these synthetic relaxation rates within error margins of 0.009 s -1 for R 1 , 0.46 s -1 for R 2 and 1.57 s -1 for g, which are typical average values found in the present and a previous study (Schanda et al. 2010). The swallow minimum of the target function results in a large error margin on S 2 in such a Monte Carlo error estimation. In b the dipolar order parameter is added to the relaxation data, greatly improving the accuracy and precision of the determined motional parameters. The error margin on the dipolar order parameter S 2 ((d D /d D,rigid ) 2 ) was 0.018. The dipolar coupling was treated equally as the relaxation data, as in Eq. 5. In c, d, the same analysis is performed with S 2 = 0.82 and s = 3.2 9 10 -8 s.
In e, f, the motion is assumed to be according to the EMF model (slow and fast motions), with correlation times of s s = 5 9 10 -8 s and s f = 1 9 10 -10 s. The total order parameters S 2 = S s 2 9 S f 2 = 0.72 and the S f 2 is varied as shown along the x-axis. Six relaxation rate constants were back-calculated (R 1 at 11.74, 14.09, 19.96T, R 2 at 14.09T, g at 14.09 and 19.96T) and fitted in the framework of either the SMF (black) or the EMF (blue: S f and Schaefer 1989; Schanda et al. 2010). A detailed description of these pulse sequences and their relative merits and weaknesses is not within the scope of this manuscript. We have recently investigated the robustness of most of these different experimental approaches with respect to experimental artefacts, such as rf field mis-settings and remote spin effects (Schanda et al. 2011b), by extensive numerical simulations. The primary source of systematic experimental errors in most of these approaches are mis-set rf field strengths employed during the recoupling pulse train, as well as the inevitable rf inhomogeneity. Notably, systematic errors on d D in the range of several percent are easily incurred in many of these recoupling approaches, even if the rf fields are only slightly offset. A notable exception seems to be the case of an approach based on R-sequences, which have been reported to be more robust, at least if samples are center-packed and if three different experiments are measured and fitted simultaneously (Hou et al. 2013).
In the present case of dynamics measurements a systematic error even of only a few percent is a major concern: as the motional amplitude is reflected by (1-S 2 ), an error of a few percent on d D , and thus, S (thereby quadratically impacting S 2 ) can easily lead to an error of the motional amplitude (1-S 2 ), by several tens of percent. In the numerical analysis of different measurement schemes, a time-shifted REDOR approach (Schanda et al. 2010) turned out to be the most robust approach, provided proper calibration of the RF fields. REDOR has the additional advantage that fitting is very robust and straightforward: as the data are obtained in a normalized manner (using a reference experiment), one can fit the data with a single parameter (the dipolar coupling of interest). Most other approaches require fitting signal intensities and line widths (and a zero-frequency component in the dipolar spectrum, that is often left our from the fit in a somewhat arbitrary manner) along with the dipolar coupling. These factors motivated our choice to focus here on the REDOR approach, and investigate experimentally how accurately the obtained order parameters can be measured, and how mis-settings of 1 H and 15 N p pulse power impact the apparent measured dipolar coupling. Figure 3 shows the pulse sequence that we employed here for measuring 1 H-15 N dipolar couplings in deuterated proteins, and some experimental data obtained on a microcrystalline sample of u-2 H 15 N-labelled ubiquitin, reprotonated at 50 % of the amide sites and undergoing MAS at m r = 37.037 kHz (s r = 27 ls). Akin to a previously proposed experiment (Schanda et al. 2010), the central REDOR sequence element in Fig. 3a features 1 H p pulses that are shifted away from the middle of the rotor period. This allows scaling down the effective dipolar evolution and thereby sampling the recoupling curve more completely on the sampling grid that is dictated by the rotor period (Gullion and Schaefer 1989). Provided that the 1 H spin network is diluted (deuterated sample, as used here), the main source of artifacts is mis-setting of rf fields (Schanda et al. 2011b). It is therefore instructive to inspect the effect of different calibrations of the p pulses on the apparent REDOR recoupling.
Calibrating rf fields to high precision is not trivial, and calibrations obtained from different methods might not match. For example, we find that the zero-crossing found when replacing a p/2 pulse by a p pulse does not necessarily match the calibration obtained from nutation experiments, where the pulse duration is varied over a large range, or calibration via rotary resonance conditions (data not shown). The possible source of error in all these rf calibrations are finite pulse rise times, amplifier droops or phase transients. This, of course, complicates the situation in many recoupling techniques, where a train of (phaseswitched) back-to-back pulses is applied. In the case of the REDOR experiment, the situation is more easily tractable, as it consists of a train of well-separated individual p pulses; phase transients and amplifier droops should thus not be a major concern, and calibration of the p pulse by searching a zero-crossing, thus, appears as the most appropriate way of calibration. Figure 3b shows a calibration of the 1 H p pulse, obtained by replacing the initial excitation pulse (Fig. 3a) by a 5 ls pulse, and varying the rf power in the vicinity of the expected 100 kHz (i.e. searching for a zero-crossing). Figure 3c shows REDOR curves, obtained for the different 1 H p pulse power levels during the recoupling, which correspond to the values shown in Fig. 3b. These curves were obtained by integration over the entire amide spectrum. The resulting fitted dipolar couplings are shown in Fig. 3d, assuming that the REDOR curves can be represented by a single value of d D .
These data show that the obtained dipolar coupling depends only slightly on the 1 H rf field setting, as long as the rf field is close to the value found for the zero-crossing. The apparent dipolar coupling has a maximum for an rf field setting slightly higher than the calibrated value from the zero-crossing (Fig. 3b). The rf field strength that corresponds to the nominally correct value of 100 kHz (Fig. 3b) leads to an apparent dipolar coupling slightly below the maximum value (Fig. 3d).
In order to understand this behavior, we have performed numerical simulations, shown in Fig. 3e. The dashed line shows the apparent dipolar coupling, obtained from simulating a three-spin N-H-H system, subjected to REDOR recoupling 1 H pulses of constant duration (5 ls), but different rf field strength. In agreement with the experimental data, we find that the obtained dipolar coupling slightly depends on the rf field setting, and that the maximum dipolar coupling is seen at an rf field strength slightly above the correct rf field.
In a realistic setting, inhomogeneity of the rf fields across the sample is inevitable. From the experimental data and simulations shown above, it is clear that such a distribution of rf field results in a distribution of REDOR oscillation frequencies over the sample volume. In order to account for this effect, we have experimentally measured the shape of the rf field distribution in the 1.6 mm Agilent fast-MAS probe used here, by performing a nutation experiment. The 1 H nutation spectrum, obtained from Fourier transformation of a series of 1D spectra with excitation pulses of variable length, shown in the Fig. 3f, reveals a distribution of rf fields over more than 5 kHz, distributed in a non-symmetric manner, i.e. a broader distribution towards lower rf fields, a situation typically found in solenoid coils. The rf power that was used in this experiment is identical to the one for which we found a 5 ls-long p pulse (i.e. a nominal 100 kHz pulse, Fig. 3b). Interestingly, the peak of this observed distribution is above 105 kHz and, thus, well above the field found from the zero-crossing of a single 5 ls pulse (Fig. 3b). We ascribe this finding to pulse rise time effects: when a single 5 ls pulse is applied, the finite pulse rise time results in a reduced flip angle of the spins relative to a perfect rectangular pulse; in the nutation experiment, where the pulse duration is arrayed (at the same power level for which the a Pulse sequence used in this study. b Calibration of 1 H p pulses, achieved by setting the initial 1 H excitation pulse in a to 5 ls, and varying the rf power. The grey shaded box in a was omitted for this experiment. A p rotation is achieved at the rf power level where the zero-crossing is observed. c REDOR oscillation curves measured in a 1D manner on microcrystalline ubiquitin, using rf power levels corresponding to the ones shown in b. d Dipolar coupling values, obtained from fitting the data shown in c. e Numerical simulations of the REDOR experiment with different 1 H rf power levels (5 ls duration pulse). Shown are simulations of 3-spin H-H-N systems, where the remote H was set at a distance of 2.6 Å to the proton and 4.1 Å to the nitrogen spin, corresponding to dipolar tensor anisotropies of and 13,668 and 353 Hz, respectively, according to the definition of the dipolar coupling tensor in (Schanda et al. 2010). The Euler angles describing the spin system are as follows ( , and pulses over the course of the nutation series go up to durations much longer than 5 ls, these rise time effects have a smaller effect than in the situation where a short pulse is applied. Thus, at the same power level, the rf field strength appears higher than in the single p pulse case. In order to account for the effect of such rf field distributions, we have explicitly simulated REDOR curves for the above three-spin system at various rf field strengths. Different REDOR curves were then added up with weighting factors according to a profile that matches the breadth and shape of the experimentally observed rf inhomogeneity profile of Fig. 3f. However, the center of mass of the distribution taken for these summations was shifted, such that we can investigate rf mis-setting with simultaneous rf field distribution. The solid line in Fig. 3e shows the fitted values of d D that are obtained when fitting these simulated curves against perfect two-spin REDOR simulations. We find that the shape of the profile of obtained d D as a function of the rf field setting is similar to the one that neglects the rf field inhomogeneity (dashed line). However, the obtained d D are generally lower; this is expected, as the rf field inhomogeneity leads to a situation where parts of the sample are subject to lower rf fields, and thus slower apparent REDOR oscillations.
The effect of this reduction of the apparent dipolar coupling is sizeable, and has to be taken into account when bias-free data should be obtained. This can be done upon data analysis either by fitting experimental data explicitly against simulations that take into account the rf field distribution, or by determining the factor by which the apparent dipolar couplings are reduced-using data as shown in Fig. 3e. While these two approaches are, in priciple, equivalent, the latter is computationally much less costly: it consists of fitting experimental data using a grid of simulations based on standard simulations (that neglect the rf distribution), and applying a correction factor a posteriori. In this work we apply this approach. From the simulations in Fig. 3e we find a correction factor of 1.1 % on the values of d D , by which the fitted couplings should be scaled up. This is in good agreement with the factor by which non-scaled dipolar-coupling order parameters (i.e. not corrected for rf inhomogeneity) and solution-state relaxation order parameters differ, which is 1.5 % (see Fig. 5 below). We note that in this analysis we have neglected the possibility that variations of the 1 H and/or 15 N CSA tensors may also contribute to some of the offset. As these tensors vary from site to site, no global scaling factor could correct for this effect. For R-type sequences, it has been shown that the 1 H CSA tensor has an impact on the accuracy of measured heteronuclear dipolar couplings (Hou et al. 2013). For the case of REDOR, previous analyses (Schanda et al. 2011b), as well as investigations shown in Figure S9, show that the systematic errors that CSAs might induce are very small, below 0.5 %, and we thus disregard CSA effects, and identify the rf field setting (and inhomogeneity) as the main point to consider. This is also corroborated by the close match between the scaling factor between REDOR-and solution-state order parameters, and the correction factor we identify from rf inhomogeneity, noted above (1.5 vs. 1.1 %).
Finally, we also investigate whether the behavior shown in Fig. 3 also holds if lower 1 H rf fields are used. As shown in Figure S10, the behavior found in Fig. 3e is also found if 8 ls pulses are used, instead of 5 ls.
We have also investigated the sensitivity of the obtained dipolar couplings to mis-settings of the 15 N p pulse. Figure 4 demonstrates both experimentally and through simulations that the apparent d D is much less sensitive to the 15 N rf field than it is the case for the 1 H field. Interestingly also, there is not a maximum of d D for a given rf field strength; thus, rf field inhomogeneities also tend to cancel their relative effects (data not shown). Based on these findings, we carefully calibrate the 15 N p pulse, and neglect 15 N rf field mis-settings and inhomogeneities in all analyses. Fig. 4 Dependence of the apparent dipolar coupling on the rf field strength of the central 15 N p pulse in the REDOR experiment of Fig. 3a. The experimental data (red) were obtained from 1D REDOR curves in an analogous manner as the data shown in Fig. 3c,d. Different points reflect different rf power level settings. Experimentally, the 15 N p pulse was calibrated by setting the pulse with phase U3 (Fig. 3a) to 10 ls, and searching the rf power that results in zero intensity, analogous to the procedure in b. The point at 50 kHz was set according to this calibration. The black solid curve shows simulated data. REDOR experiments were simulated by assuming a H-N dipolar coupling of 20.4 kHz, and perfect 100 kHz (5 ls) 1 H p pulses and 15 N p pulses of 10 ls duration and variable field strength. Remote protons and rf field distribution were ignored. The simulations were fitted against ideal two-spin simulations, and the resulting dipolar coupling is reported (relative to the nominal 20.4 kHz value). The red curve was set in the vertical axis such that 100 % is at an rf field of 50 kHz. The black curve is normalized to the nominal input value of d D = 20.4 kHz J Biomol NMR (2013) 57:263-280 271 Finally, we have also considered two different ways of performing the XY-8 phase cycling of the 1 H p pulses. One possibility is to cycle all pulses according to the XY-8 scheme, from the first pulse to the last pulse, irrespective whether the pulse is applied in the first or second half of the recoupling block. Alternatively, one can also keep the phases symmetrical with respect to the center of the recoupling block, i.e. increment the phases in the first half, and decrement the phases in the second half, as done before (Schanda et al. 2010). Although the differences are rather subtle, we find it preferable to chose the second approach; in the first one we find that the REDOR curves have slightly higher oscillation amplitudes, and the match with simulated recoupling traces is slightly less good (see Figure S1 in the Supporting Information). Figure 5 shows experimental dipolar coupling data obtained on microcrystalline ubiquitin. Representative REDOR curves for individual residues are shown in Figure S1. The black dataset was obtained taking into account the 1 H rf inhomogeneity. Figure 5a, b show, in addition to the data obtained with the procedure outlined above, a data set obtained in a previous study (Schanda et al. 2010), as well as data obtained in the present study with a different implementation of the XY8 phase cycle mentioned in the previous paragraph (see Figure S1). In these latter two data sets, calibration was performed with a somewhat lower degree of accuracy, and the rf inhomogeneity was ignored. In Fig. 5a these two data sets are shown without any scaling, while in Fig. 5b a global scaling parameter has been applied to minimize the offset to the black data set, which is the one described above (with rf field inhomogeneity correction and very accurate pulse calibration). Clearly, these two data sets are systematically lower than the data set that was obtained from the rf calibration and rf inhomogeneity treatment explained above. An underestimation in the other data sets is expected, as any miscalibration and rf inhomogeneity leads to underestimated dipolar couplings (see Fig. 3). It is interesting to note, however, that if the data sets are scaled by one global scaling factor, as shown in Fig. 5b, the agreement is excellent. This shows that the method yields highly reproducible results for the order parameter profile, even though the data were collected on different samples, different probes and different spectrometers.
Dipolar order parameters in ubiquitin
Notably, the scaling factor that needs to be applied to the previously published data set (Schanda et al. 2010) in order to match the new data set (shown in black in Fig. 5) is rather large (1.084 on S). This large scaling factor cannot be explained by rf inhomogeneities alone, at least not if they are in the same order of magnitude as the rf inhomogeneity found in the probe used here. Although it might be that the probe used in the previous study has a larger inhomogeneity, we rather speculate that the rf calibration in the previous study was not accurate (possibly it was done from a nutation Plots of S 2 are preferred rather than S or d D,exp , as possible offsets and differences are accentuated in such an S 2 plot. a Measured dipolar-coupling derived S 2 obtained in this study, with the pulse sequence in Fig. 3a, accurate 1 H p pulse calibration as described in Fig. 3, and correction for the 1 H rf inhomogeneity are shown in black. The data in red are the data previously published (Schanda et al. 2010), and data shown in blue were data obtained in this study, with a different phase cycling of the 1 H p pulses (see Figure S1 in the Supporting Information), and somewhat less accurate 1 H pulse calibration. In b the latter two data sets are scaled with one global scaling factor as to reduce the offset to the black data set. The scaling factor that was applied to the values of S shown in the red data set was 1.084, and the factor used for scaling the S values of the blue data set was 1.031. The good reproducibility of the data is evident. Note that the data set shown in red is, itself, already an average over three independent measurements, which themselves show high reproducibility of the S 2 profile (Schanda et al. 2011b). c Comparison with solution-state order parameters (Lienin et al. 1998), which were re-interpreted using a 15 N CSA of Dr = 170 ppm (data courtesy of R. Brüschweiler). Error bars are omitted for the sake of clarity. A correlation plot of the data in c is shown in the Supporting Information ( Figure S4) rather than a p pulse optimization), which might explain the offset. Another finding points in the direction of wrong rf calibration: in the previous data set the experiment was measured three times, using two different 1 H rf fields (100, 125 kHz) and two different delays s) for one of the two rf fields. While the data sets using the same rf field strength (100 kHz) resulted in very similar values, the data set at 125 kHz 1 H rf field is slightly offset (although within error bars) (Schanda et al. 2011b). This rather suggests that the rf calibration was not perfect. Figure 5c shows a comparison of the present order parameters with values derived from solution-state measurements (Lienin et al. 1998). This comparison reveals that, overall, the solid-state data are in very good agreement with the solution-state data, confirming previous findings that sub-microsecond protein dynamics is very similar in solution and crystals (Agarwal et al. 2008;Chevelkov et al. 2010).
The above analysis allows establishing guidelines for obtaining dipolar-coupling-derived order parameters with high accuracy in deuterated samples. (1) REDOR recoupling pulses should best be calibrated by directly searching the p pulse power, not via nutation experiments, as this best reflects the actual situation in the REDOR pulse train. (2) Once correct pulse calibration is used, RF field inhomogeneities slightly alter the outcome of the experiment, and these inhomogeneities should be taken into account by explicity measuring the rf profile of the probe. Simulations can establish the scaling factor by which raw fitted data should be scaled. We estimate that with these careful calibrations and corrections, the systematic error of the obtained dipolar couplings can be below 1% at most, as suggested also by the close correspondence of solution-and solid-state order parameters.
Transverse relaxation rates from R 1q measurements at *40 kHz MAS With the aim of obtaining a data set that is as comprehensive as possible, we furthermore measured 15 N R 1q relaxation data. Transverse relaxation data are inherently difficult to measure, due to the presence of coherent mechanisms of coherence loss, such as dipolar dephasing. A recent study indicated that fast MAS (about 40 kHz or more) can avoid these problems and provide access to the pure R 1q relaxation part of the coherence decay even in the dense network of a protonated protein and in the absence of proton decoupling (Lewandowski et al. 2011). Another study proposed the use of highly deuterated (20% amideprotonated) samples to obtain clean R 1q rates (Krushelnitsky et al. 2010). Here we use both a highly deuterated sample and fast MAS (39.5 kHz) to measure 15 N R 1q rates at an rf field strength of 15 kHz. There is strong evidence that the obtained rate constants truly reflect dynamics, because (1) back-calculated R 1q rates obtained from a model-free fit of 5 relaxation data sets and the dipolar coupling measurements are in good agreement with the experimentally obtained values of R 1q (see Figure S2), and (2) R 1q rates are independent of the rf field strength in the range explored (5-15 kHz; data not shown).
Fitting backbone motion from multiple data sets
In the following, we explore how the available relaxation data and dipolar couplings can be interpreted in a physical model of backbone motion. Altogether, we use up to 7 data sets (in cases of resonance overlap, for some residues less data may be available). (All relaxation data are shown in Figure S3.) As in the theoretical section above, we use either the one-time scale simple model-free (Lipari and Szabo 1982b) or the two-time scale extended model-free approach (Clore et al. 1990). Figure 6 shows fit results for the SMF approach, using three different implementations. In one case, only the 6 relaxation data sets were used; S 2 are reported as red curve in (a) and the corresponding s shown in panel (b). In another implementation, dipolar couplings were added to the fit, but the fitted S 2 was not imposed to match the dipolar-coupling derived one; rather, all relaxation and dipolar-coupling data were equally used for a v 2 minimization, according to Eq. 5 [S 2 shown as blue data set in panel (a), s in panel (c)]. Finally, a similar fit was performed, but this time the order parameter was fixed to the dipolar-coupling derived value [black curve and panel (d)].
If only relaxation data are used, the obtained order parameters are systematically overestimated, as compared to the dipolar order parameter. Furthermore, the time scale of motion is in the nanosecond range for all residues. This overestimation of S 2 by relaxation data, as well as the finding of nanosecond motion only is in agreement with the above in-silico data (Fig. 2). Although there is no physical foundation for such an approach, one might be tempted to search for a scaling factor, that would bring the relaxationderived S 2 to the level of the dipolar ones. Mollica et al. have shown that for their data set on GB1, that a scaling factor of 0.96 results in reasonable agreement with MD-derived order parameters. We have applied a similar procedure, and find that a scaling factor of 0.967 results in an overall similar level of order parameters, while a factor of 0.93 leads to best match for secondary structure elements. However, this apparent similarity merely reflects the fact that the backbone mobility tends to have a similar level throughout the protein, so it is always possible to find a scaling factor that makes these levels look similar. (A correlation plot of the data in Fig. 6a is shown in Figure S4). Such a scaling approach does not have physical foundation and is not expected to provide physically meaningful data.
Interestingly, if dipolar couplings are added to the SMF fit, but treated in the same manner as relaxation data (i.e. S 2 is not fixed to its dipolar-coupling derived value), the situation does not greatly improve, and a similar level of S 2 is found as if only relaxation data are used (blue data set in Fig. 6). This reflects the fact that the larger number of relaxation data outweighs the contribution from the dipolar data in the target v 2 function. In contrast, if S 2 is fixed to the REDOR-derived value, which are in close agreement with solution-state S 2 (Fig. 5), an interesting pattern of correlation times is observed, where values of s fall either in the fast or the slow regime (Fig. 6d). This clustering basically corresponds to s values falling either above or below the regime where the 15 N R 1 is maximum (see Fig. 1). Interestingly, residues for which we observe a slow motional time scale correspond almost exclusively to loop regions, while the residues for which the SMF fit shows a picosecond motion are mostly located in secondary structure elements. This observation is in line with the fact that loop motions are generally the result of concerted motion of several residues, which is a more rare event than localized motion. Of note, the fit that used only relaxation data did not detect this feature, and all the residues showed only motions on long time scales (tens of nanoseconds). Similar findings of exclusive nanosecond motion were reported also in previous relaxation-based analyses (Lewandowski et al. 2010b). Based on the in-silico analyses, and on the comparison with the fit including dipolar coupling data, we conclude that this detection of exclusively slow motion for all residues is essentially an artifact arising from fitting relaxation data only.
The SMF approach is tempting for its small number of fit parameters, which makes it applicable even if only one field strength is available. However, the assumption that backbone motion over 6 orders of magnitude in time can be described as a single process appears too simplistic. From a physical point of view, it seems more realistic that for those residues that exhibit slow motion in Fig. 6d, the slow motion dominates, rather than being exclusive. We tried to investigate how the simultaneous presence of slow and fast motion would impact a SMF fit procedure. To this end, we performed an analysis extending on the above theoretical considerations of Fig. 2f. We assumed that the actual a Results from fitting only relaxation data (up to 39 R 1 , 29 g and 19 R 1q per residue) are shown in red. Inclusion of dipolar coupling data results in the blue data set. In this data set, the order parameter was not fixed to the dipolar order parameter, but the dipole coupling was included in the fitting of S 2 and s in the same manner as the relaxation data, as shown in Eq. (3). In the black data set, the order parameter was fixed to the dipolar-coupling derived value. b-d show the fitted time scales for the three scenarios, using the same color code. The fitted order parameters and time scales from the fit where S 2 was fixed to the dipolar-coupling derived value (black curves) are plotted on the structure in e, f. In the fits that included dipolar coupling data, a minimum of 3 data points was required for a residue to be considered; in the fit with relaxation data only, a minimum of 4 data points was required motion can be described with the (somewhat more realistic) EMF model; we systematically varied all the parameters of the model (S f 2 , S s 2 , s f , s s ), back-calculated relaxation and dipolar-coupling data from these parameters, and then fitted them through an SMF approach. A representative plot of these data is shown in Fig. 7. Whether the SMF-derived correlation time falls into the slow or fast regime not only depends on the relative amplitudes of slow and fast motion, but also on the correlation times. For example, in the case that the time scale of the slow motion is long (hundreds of nanoseconds) the SMF fit would find a slow motion even if the amplitude of that motion is much smaller than the amplitude of the simultaneously present fast motion (see Fig. 7). This is expected, as large transverse relaxation rate constants can result even from very low-amplitude motions, as long as the time scale is long enough. We also note that the plot shown in Fig. 7 does not depend on the total amplitude of motion, but only on the fast-motion correlation time, s f ( Figure S5).
We conclude from this analysis that our finding of slow motion for a number of loop residues in ubiquitin ( Fig. 6d) does not mean that there is no fast motion in the concerned regions, nor does it necessarily mean that the amplitude of slow motion is larger than the amplitude of fast motion, as both the time scale and the amplitude are decisive for whether slow motion is detected in the SMF fit.
We also fitted the more complex EMF model with two motional time scales to our data (i.e. 4 fit parameters). If only relaxation data are used, even if measured at multiple fields (6 data sets in our case), the fit results in an underdetermined parameter space, i.e. very large error bars, and physically rather unrealistic fit parameters, such as high order parameters (Fig. 8a). Figure 8b shows results of an EMF fit to relaxation and dipolar-coupling data. A number of physically intuitive patterns emerge from this fit. Slow-motion order parameters tend to be lowest in loop regions, while some secondary structure elements have S s 2 close to unity; the lowest fast-motion order parameters are found in loop regions, similar to solution-state analyses. The time scale of fast motion is in the range of tens to hundreds of picoseconds, while slow-motion correlation times are in the range of tens of nanoseconds for most residues, while for some residues we find values up to about one microsecond. The EMF fit also shows some features that are physically less intuitive. For example, residue 10, located in a loop and exhibiting enhanced transverse relaxation, has a slow-motion close to unity, but a very long correlation time. It's neighbor, residue 11, has a significantly lower S s 2 , and a correlation time that is one order of magnitude shorter.
A statistical analysis of the two fit models, SMF and EMF, using F-test reveals that the EMF model is the accepted model for a 31 out of the 46 residues ( Figure S6). In contrast, however, a Akaike Information Criterion test rejects the EMF model for all residues (data not shown). To get further information about the robustness of the EMF fit, we systematically eliminated individual data sets from the fit. The results of these fits (shown in Figure S7), reveal that many of the features are retained if data sets are eliminated, e.g. the amplitude of slow motion is generally smaller than the fast-motion amplitude. However, when seen at a per-residue level, the relative amount of fast vs slow motion, as well as the correlation times, vary in some cases substantially when data sets are eliminated, even for residues that are fitted significantly better with EMF (according to an F-test). As expected, the SMF model is much more robust to elimination of individual data sets, and the fitted correlation time is hardly sensitive, at least as long as both longitudinal and transverse relaxation data are available ( Figure S7).
Getting a large set of relaxation data, in particular measurements at multiple field strengths, is often impracticable. Practical problems with multiple-field measurements include the availability of multiple NMR magnets, and fast-MAS probes at the different magnets (as the relaxation rates are best measured at fast spinning), and possibly the need for preparing multiple rotors for the different probes, which may cause problems of comparability of different preparations. In addition, the temperatures in different measurements on different probes may not be exactly identical. Therefore, we investigated the information that can be obtained from fitting data collected at only one magnetic field strength, i.e. using only 15 N R 1 , Fig. 7 Investigation of the outcome of SMF fits when applied to a two-time scale motion. Shown is the fitted correlation time of motion in an SMF fit, applied to in-silico relaxation (R 1 , R 1q at 600 MHz) and dipolar-coupling data, calculated from an EMF model. The slow time scale, s s , used in the EMF model, is shown along the vertical axis, while the amplitude of the slow motion (1-S s 2 ), relative to the the total amplitude of motion (1-S total 2 ), where S total 2 = S f 2 9 S s 2 , is shown along the horizontal axis. The correlation time of fast motion, s f , was assumed as 2 9 10 -12 s. Plots for other values of s f are shown in Figure S5 J Biomol NMR (2013) 57:263-280 275 15 N R 1q and 1 H-15 N dipole couplings. We left out the dipolar-CSA CCR data, as their information content is similar to the on of R 1q , while in our hands the latter can be measured with higher precision. Figure 9 shows results from such fits using data obtained at a single B 0 field (14.1T). In the case of SMF, the obtained fitted correlation times agree remarkably well with the ones obtained from the full data set that comprises (a) (b) Fig. 8 EMF fit of relaxation data only a, and with relaxation data and dipolar couplings b. In b, the overall order parameter S s 2 9 S f 2 was fixed to the REDOR-derived value. Only residues for which at least 4 data points are available were considered 6 relaxation rates (instead of 2 used here). Note that in this fit the relaxation data serve only to constrain the correlation time, as the order parameter is defined by the dipolarcoupling measurement.
We also investigated EMF fits from a limited data set. Obviously, fitting four parameters from three experimental data sets is impossible. However, our finding of rather uniform values of s f (see Fig. 8b) prompted us to set s f to a fixed value for all residues. Figure 9 shows EMF fit results for the case of s f = 80 ps. Despite the very limited data set, the fitted parameters are in relatively good agreement with the fit that uses the entire data set. However, the choice of the s f value has a clear impact on these fits ( Figure S8), such that such an approach must be interpreted with some care.
Comparison of order parameters with solution-state data
We have compared above the dynamics on time scales of picoseconds to a few microseconds, as seen by REDOR, with solution-state relaxation data, which are sensitive to a smaller time window, reaching from picoseconds up to a few nanoseconds only. In recent years, a number of studies have addressed protein dynamics in solution from residual dipolar couplings (RDCs). RDCs are sensitive to motion on time scales from ps to ms, and thus overcome the limitation of solution-state relaxation measurements. Due to difficulties to disentangle the amount of alignment in anisotropic solution, the structural component to the RDC, and the amount of dynamics, RDC analyses are challenging. Solid-state dipolar couplings might provide complementary insight, as they are only sensitive to local motion, but not to the structure. It is, thus, interesting to compare our present dipolar-coupling data to order parameters from solution-state RDC analyses. Of course, one does not necessarily expect perfect agreement between these data sets, because dynamics may be impacted by the crystalline environment (Tollinger et al. 2012). Figure 10 shows the comparison of REDOR-derived S 2 with S 2 derived from an extensive set of RDC data, analyzed with two different approaches (Lakomek et al. 2008;Salmon et al. 2009). Overall, the amplitude of motion seen in our REDOR data appears to match better the data set shown in (a) (Salmon et al. 2009) than the one in (b) (Lakomek et al. 2008). In both cases, a number of notable differences can be seen between solution-state RDC order parameters, and RE-DOR order parameters. Notably, the RDC-derived order parameters have much more site-to-site variation. This may appear surprising, as the solution-state relaxationderived order parameters agree much better with the solidstate REDOR order parameters (Fig. 5c). For a number of residues (e.g. residues 60, 62, 65) the RDC data show Fig. 9 Model-free fits from data obtained at a single magnetic field strength (14.09 T), using dipolar-coupling derived S 2 , R 1 and R 1q . Shown are fitted parameters for the SMF and EMF cases. In both cases, the overall order parameter, i.e. the S 2 in SMF, or S s 2 9 S f 2 in EMF, was fixed to the dipolar S 2 . In the SMF case, only the time scale is shown, as S 2 is identical to the data shown in Fig. 5. In the EMF case, the time scale of fast motion, s f , was fixed to 80 ps, as the number of fitted parameters would otherwise exceed the number of observables. Fits using other assumed s f are shown in the Supporting Information Figure S8. For comparison, the fit parameters obtained from a fit of all available experimental data (up to 7) are shown in red. In these fits, for all residues three data sets were used (R 1 , R 2 , S 2 ) markedly lower RDC-order parameters than the REDOR data. One possible explanation could be found in the presence of motion on time scales between the one relevant for solid-state (*10 ls) and solution-state (*10 ms). In fact, there is some experimental evidence for motion in ubiquitin on a time scale of 10 ls (Ban et al. 2011). This microsecond motion was, however, detected there for a small set of residues, and these data do not provide an explanation for all residues for which we observe lower RDC order parameters. Somewhat unexpectedly also, in the RDC data set that matches the REDOR data apparently better (in terms of overall motional amplitude), there are a few residues that have rather large order parameters, i.e. values of S 2 (RDC) exceeding the REDOR order parameters (see Fig. 10a). In these cases the RDC order parameters also exceed the solution-state relaxation-derived value (which is sensitive to sub-nanosecond motion). This is an unphysical situation, as has been noted before (Salmon et al. 2009), and it has been speculated that uncertainties in the relaxation order parameters may be the origin. Although we cannot exclude this possibility, the good match of solution-state relaxation data with REDOR data seems to weaken this argument. An alternative explanation to both the large site-to-site variation of RDC-order parameters, and the unphysically high values might also lie in uncertainties in the determination of the RDC order parameters. Our new dipolar coupling data might be useful as a benchmark for continued development of approaches to analyze RDC data. As has been pointed out previously, the REDOR data might also be compared directly to solution-state order parameters, and differences might be interpreted in terms of ns-ls motion (Chevelkov et al. 2010).
Conclusions
In this paper, we have provided a detailed analysis of some approaches for determining protein backbone motion on time scales from picoseconds to microseconds, both from the perspective of simulated data, as well as from a rather extensive set of experimental data, including previously unpublished data sets of 15 N R 1q data as well as 1 H-15 N dipolar couplings.
We have analyzed in detail the protocol to determine one-bond dipolar couplings in deuterated proteins, placing particular emphasis on eliminating possible sources of systematic errors. From our investigations it has become clear that rf mis-settings and inhomogeneities may introduce systematic errors of a few percent, which is a substantial error when seen on the scale of the motional amplitude, 1-S 2 . The REDOR experiment has a significant advantage over other recoupling schemes, namely its reliance on a train of well-separated p pulses. Calibrating such pulses is more straightforward than calibrating rf fields for continuous-irradiation based approaches. Furthermore, phase transients and instability of the rf fields impact these sequences more than the REDOR sequence. The only limitation of the REDOR experiment is its requirement for rather well isolated spin pairs, such as 1 H-15 N in deuterated proteins (Schanda et al. 2011b). Our new data, obtained with accurate pulse calibration and consideration of the rf inhomogeneity, also point to systematic offset of a previously published data set, but reveal that the profile of S 2 values over the sequence is highly reproducible.
We have shown that it is crucial to have such dipolecoupling data for fitting backbone dynamics. Using relaxation data alone generally leads to systematic bias of order parameters, i.e. an overestimation of S 2 . Furthermore, fitting R 1 and R 1q or CCR data will essentially always lead to fits that indicate nanosecond motion, even if there is no such motion present. This has been shown by simulations (Fig. 2) and experimental data (Fig. 7). In this sense, the detection of nanosecond motion, that has been claimed in recent studies (Knight et al. 2012;Lewandowski et al. 2010a, b), may be artifactual.
We show that measurements at a single magnetic field strength are sufficient if only the SMF approach is used, (a) (b) Fig. 10 Comparison of REDOR-derived order parameters with RDC-derived S 2 values in solution, using two different approaches, according to (Salmon et al. 2009) (a) and (Lakomek et al. 2008) (b) and with some reasonable assumptions even an EMF approach may be fitted from single-field measurements.
The findings and protocols that are shown here for 15 N sites along the backbone likely also apply to other sites, that have so far received less attention, such as methyl side chains (Agarwal et al. 2008;Schanda et al. 2011a), or backbone carbon sites (Asami and Reif 2013). We foresee that for these sites it will be equally important to complement relaxation data with dipolar coupling measurements. Given appropriate labeling, such as recently proposed selective methyl labeling (Agarwal et al. 2008;Schanda et al. 2011a) or random sparse protonation (Asami et al. 2010), the findings reported here for 15 N amide sites can readily be applied to other backbone and side chain sites. Such studies will allow to provide a comprehensive picture of protein motion, including proteins that are inaccessible to other techniques, such as insoluble or very large protein assemblies. | 15,797.6 | 2013-10-09T00:00:00.000 | [
"Physics"
] |
Box-Particle Implementation and Comparison of Cardinalized Probability Hypothesis Density Filter
This paper develops a box-particle implementation of cardinalized probability hypothesis density filter to track multiple targets and estimate the unknown number of targets. A box particle is a random sample that occupies a small and controllable rectangular region of nonzero volume in the target state space. In box-particle filter the huge number of traditional point observations is instead by a remarkably reduced number of interval measurements. It decreases the number of particles significantly and reduces the runtime considerably. The proposed algorithm based on box-particle is able to reach a similar accuracy to a Sequential Monte Carlo cardinalized probability hypothesis density (SMC-CPHD) filter with much less computational costs. Not only does it propagates the PHD, but also propagates the cardinality distribution of target number. Therefore, it generates more accurate and stable instantaneous estimates of target number as well as target state than the box-particle probability hypothesis density (BP-PHD) filter does especially in dense clutter environment. Comparison and analysis based on the simulations in different probability of detection and different clutter rate have been done. The effectiveness and reliability of the proposed algorithm are verified by the simulation results.
Introduction
Multi-target tracking has drawn much attention for its significant role in military and civilian fields.In a multiple target environment, the target number as well as the target states is important unknown information.How to track multiple targets with varying number during a clutter environment has been a difficult research issue in both academic and engineering fields for a long time.With finite set statistics (FISST), a theoretic approach in which targets and measurements are modeled by random finite sets (RFS) [1] was introduced by Mahler recently.This approach allows multi-target tracking in the presence of clut-ter and with uncertain associations to be cast in a Bayes filter.Based on this theory, Mahler then proposed the probability hypothesis density (PHD) filter [2] and the cardinalized probability hypothesis density (CPHD) filter [3].Compared with the PHD filter, the CPHD filter relaxes the Poisson assumptions in target and measurement number to achieve better estimation performance.Plenty of works have been done for their implementations [4][5][6][7], such as the Sequential Monte Carlo (SMC) approximation and the Gaussian mixture (GM) approximation.In order to achieve a satisfactory performance a large number of weighted particles are needed to approximate the intensity function in the SMC implementation.It results in high computational complexity and more execution time.
In recent years, many practical applications such as the wireless sensor networks quantized their measurements to only a few bits to reduce the communication bandwidth.Obviously, the standard measurement model is not adequate in this case.In order to solve these problems, the concept of box-particle (BP) filter [8], [9] based on the practical application background is proposed by Amadou Gning, Branko Ristic, and Lyudmila Mihaylova recently.In this approach the huge number of traditional point observations was substituted by a remarkably reduced number of interval measurements.It would certainly reduce the runtime without a lot loss in the performance.An interval measurement expresses a type of uncertainty which is referred as the set-theoretic uncertainty, vagueness or imprecision.The box-particle filter was studied and explained through the Bayesian perspective by interpreting each box particle as a uniform probability density function (PDF) [10].A single target box-particle Bernoulli filter with box measurements was presented in [11].The box-particle PHD filter for multi-target tracking with an unknown number of targets, clutter and false alarms was derived in [12].The box-particle cardinality balanced multi-target multi-Bernoulli filter and its implementation was proposed in [13].The crowd target tracking based on box-particle filter is proposed in [19].Reference [21] presents an implementation of box-particle filter on extended target.Various works have shown that the box-particle filter can reach a similar performance as the traditional particle filter with less computational complexity and runtime [11][12][13], [18].
The main contribution of this paper is that a new kind of implementation of the cardinalized probability hypothesis density (CPHD) filter, the box-particle cardinalized probability hypothesis density (BP-CPHD) filter is proposed.The approach which is suitable to deal with interval measurements can track multiple targets and estimate the unknown number of targets with low computational complexity and good performance.A comparison of the boxparticle probability hypothesis density (BP-PHD) filter, the BP-CPHD filter and the SMC-CPHD filter is performed.Both the two box-particle methods share a remarkably decreased number and less runtime than the SMC-CPHD filter.The BP-CPHD filter has a more accurate and stable instantaneous estimates of the target number than the BP-PHD filter, especially when the probability of the detection decreases and more clutter appears.Thus, it leads to smaller OSPA distance.
The rest of the paper is organized as follows.The necessary backgrounds on interval methodology, the RFS theory and the CPHD filter are described in Sec. 2. The box-particle PHD filter and the box-particle CPHD filter are presented in Sec. 3. Simulation results are performed in Sec. 4. Finally, conclusions are drawn in Sec. 5.
Background
This section introduces the random finite set, interval analysis and the SMC-CPHD filter.
Random Finite Sets
In a multiple target scenario, the number of the target may vary over time due to the appearance or disappearance of a target.As a result, the dimensions of the state space vary with the number of targets.Since the number of targets and the number of measurements are a random process, the state set and the observation set can be represented by the RFSs of multi-target state space and multitarget observation space, respectively, where N(k) and M(k) is the number of targets and the number of measurements at time k.F(X) and F(Z) denote the sets of state space X and the observation space Z, respectively.In this paper, we assume that if targets or clutter are detected, the sensor does not report the conventional measurement Z k and it will report a closed interval [Z k ] instead, which contains the target originated point measurement Z k with some probability.The set of all such closed intervals on Z, denoted by IZ is the interval measurement space.Due to the imperfect detection process are collected at time k .The inter- val measurements can be represented by a finite set: where IZ F is the space of finite subsets of IZ .If the state RFS at time k -1 is X k -1 , and the state RFS X k at time k can be expressed by where S kk - where k (x) is the measurement set generated by the true targets, and k represents the measurement set from the clutter.To deal with the heavy computational burden in processing the joint probability density function of X k and Z k directly, Mahler proposed the probability hypothesis density (PHD) filter which can approximate the probability density of multi-target RFS with its first-order moment [2].Thereafter, he proposed the cardinalized probability hypothesis density (CPHD) filter [3].These two filters can be implemented by GM or SMC approximations.
Interval Analysis
This section will briefly introduce the interval analysis which used in this paper.More details on this field are available in [14].The original idea of interval analysis is to deal with intervals of real instead of real numbers for exact computation in the presence of rounding errors.Box-particle filter based on interval analysis is suitable to deal with the interval measurements.A real interval [ ] [ , ] x x x is defined as a closed and connected subset of the set of real numbers.x and x represent the lower bound and the upper bound of the interval.The arithmetic operations between numbers and the operations between sets have been extended to intervals.In a multi-dimensional vector form, it becomes a box [ ] x defined as a Cartesian product of l intervals: And mid [ ] x denotes the center of a box.
A box [x] through a nonlinear transformation in general has a non-box shape.In order to remain in the realm of boxes, an inclusion function of a given function f is defined such that the image of a box ).The inclusion function is used so that the size of the box [f]([x]) is minimal but still covers the whole image of the box [x] in this paper.The inclusion function is able to reduce the calculation and make the process to converge faster.Another significant concept is contraction [8], which will be used in the definition of likelihood functions and the update step of the proposed filters.In this paper, the Constraint Propagation (CP) [13] will be used, for the sake of its good suitability in the context of tracking problems.
The SMC-CPHD Filter
The cardinalized probability hypothesis density (CPHD) filter which relaxes the Poisson assumptions in target and measurement number is proposed by Mahler.Not only does it propagate the PHD, but also propagates the cardinality distribution of target number.Therefore, it generates more accurate and stable instantaneous estimates of the target number and admits more false alarm processes than the PHD filter does.Both the Sequential Monte Carlo (SMC) and the Gaussian mixture (GM) approximations can be used to implement the CPHD filter.In this section we will introduce the SMC approximation.The description and detailed analysis of SMC-CPHD is available in [7].The evaluation for SMC-CPHD has been done using ground truth data obtained by a marker-based motion capture system [23].The SMC-CPHD recursion is briefly summarized as follows: 1) SMC-CPHD prediction: The distribution of the target number as well as the target state must also be propagated.Assume that the particle set from the previous time , where ,( ) ,( ) ,( ) ,( ) 1 .
x are the single target transition density and the probability of target survival, respectively.p kk -1 (n) is the predicted cardinality distribution of the target number.M is the transfer matrix and p birth (n) is the probability for n new targets to appear between scan 1 k and k derived from the birth model.
2) SMC-CPHD update: Assume that the predicted particle set at time k is x .The update equation of the state intensity and the cardinality distribution is realized as follows.p c (m) denotes the probability for m false alarms.
where the conditions D and -D are short hand notations for target detected and not detected, respectively.L[] denotes likelihood function.The likelihood ratios above can be found in [15].In this paper, the interval measurement is used in the SMC-CPHD filter.Thus the generalized likelihood is different from the traditional SMC-CPHD filter.
The generalized likelihood function under uncertain measurement satisfies the following formula [21].Assume that inf([z]) and sup([z]) represent the lower and upper limits of the interval measurement [z], respectively.
ϕ(x,μ,Q) represents the cumulative distribution function of Gaussian distribution N(t,μ,Q) with mean μ and variance Q.Therefore, the generalized likelihood of the SMC-CPHD filter under interval measurement is as follows.R is the variance of Gaussian distribution Applying this to equation (11), we can easily get the weight of the particle.The generalized likelihood of the BP-CPHD filter is different from that of the SMC-CPHD filter.It will be introduced in the following of the paper.
Implementations
For varying number multi-target tracking in clutter environment, PHD and CPHD filter based on random finite sets provides good solutions [2], [3].Their implementations include Sequential Monte Carlo (SMC) approximation and the Gaussian mixture (GM) approximation.Boxparticle filter emerging recently provides a new implementation for PHD filter [12], [18].On the other hand, compared with the PHD filter, the CPHD filter relaxes the Poisson assumptions in target and measurement number to achieve better estimation performance.Therefore, we propose a box-particle implementation for CPHD filter in this section.
The Box-Particle PHD Filter
The sequential Monte Carlo implementation details using a box particle representation are presented in the following.
1) Prediction: Suppose that the particle set from the previous time is denoted by 1 ,( ) ,( ) is the corresponding weight and N k -1 represents the number of particles.The newborn particle set , ,( ) ,( ) of the previous scan k -1.The number of the newborn particles is N k,new .More details about producing the new particles can be found in [12].
is the whole number of the box-particles.These particles mentioned above are propagated through the motion model and inclusion function.The survival probability is P S . ( 2) Update: The update of the state intensity is realized as follows, which has a similar scheme of the SMC-PHD filter.The probability of target detection P D is assumed to be constant, and the number of false detections per scan is modeled by a Poisson distribution with mean .The prior probability of false detection is modeled by c([z j ]).
A different likelihood function from that of SMC-PHD filter is presented here.
The function z is found, this particle is not contracted.The contraction approach here we used is . The contraction approach is widely used.Reference [18] provides a contraction example.More information on contraction step can be found in [16].For interval measurements [ ] j z , compute the correction term below: 3) Estimate: The state and number of the targets is estimated in this step.
( ) mid[] means finding the center of the box [ ] .N ̂k is the estimated number of the target.In prediction and update steps, the first order moment of multi-target RFS, rather than multi-target multi-Bernoulli density mentioned in reference [13] is propagated.
4)
Resampling:Assume that N is the number of the resampled particles, instead of replicating box-particles which have been selected more than once in the resampling step as the traditional particle filter does, we divide them into N equally weighted box-particles and resample them times to obtain particle set Here a random dimension is picked to be divided for the box-particle which has been selected.The details are introduced in [13].
5) Clustering: The box particle is changed to point particle according to mid([x k i ]).For these obtained point particles, we use the K-means clustering method [24] to get N ̂k cluster centers as the position of the targets.
The Box-Particle CPHD Filter
The box-particle implementation to the CPHD filter is presented in the following.The basic concepts of box-particle approximation of the CPHD filter are essentially the same as that of the PHD filter.The difference is that the distribution of the target number must also be propagated.
1) Prediction:
The states and weights of the particles were propagated through the same approach as the BP-PHD filter.Suppose that at time k -1 the particle set is 1 ,( ) ,( ) . The newborn particle set , ,( ) ,( ) x is generated from the measurement set from the previous scan k -1.In this paper, five particles are produced for each measurement [z].The newborn particle is generated around the previous measurement with a corresponding weight according to (24).P b is the probability of birth.More details can be found in [12].These particles mentioned above are propagated through the motion model and inclusion function.The survival probability is P S .
,( ) The prediction equation for the cardinality distribution can be written in terms of a transfer matrix M: P birth (n) is the probability for n new targets to appear between scan k -1 and k derived from the birth model.For a constant scan rate, the matrix M is constant and can be calculated in advance.
2) Update: Assume that the predicted particle set at time k is ( ) x .The update equation of the state intensity and the cardinality distribution is realized as follows.p c (m) denotes the probability of m false alarms. ( where the conditions D and -D are short hand notations for target detected and not detected, respectively.The likelihood ratios above are given by: Here, not only the first order moment of multi-target RFS, but also the cardinality distribution is propagated in prediction and update steps. 3) Contraction: A contraction algorithm is used to obtain a contracted version of ( ) x with its corresponding measurement [z j ].Given the contraction function , and , if no [z j ] is found, this particle is not contracted.The contraction approach here we used is: . More information on contraction step can be found in [16].Normalized step and the estimates of the target state is the same as that of BP-PHD filter.For the BP-CPHD filter the MAP estimate may produce more accurate and stable estimates of target number N ̂k.The cardinalized distribution, rather than the weights of particles, is used to estimate the number of targets here.It is different from BP-PHD filter.
4) Resampling and clustering: Assume that N is the number of the resampled particles, instead of replicating box-particles which have been selected more than once in the resampling step, we divide them into N equally weighted box-particles and resample them times to obtain . In this paper we randomly pick a dimension to be divided for the selected box-particle .The K-means clustering method [24] is still used in the BP-CPHD filter to get the position of the targets.The key steps of the BP-CPHD filter are summarized in Tab. 1.
Numerical Studies
Numerical studies for the proposed box-particle CPHD filter are given in this section.We evaluate filter performance using the optimum sub pattern assignment (OSPA) distance comparing with the SMC-CPHD filter and BP-PHD filter.Interval measurements are utilized to all the three methods.
Tab. 1. Algorithmic flow of the box-particle CPHD filter.
Simulation Setup
Consider a five target scenario on the surveillance region.The targets are moving according to the nearly constant velocity motion model in two dimensions and the prediction of the persistent particles can be modeled by: Here, are the target position interval and velocity interval, respectively.The state noise w is white Gaussian noise with a covariance matrix Q = diag ([0.5, 0.1]).The inclusion functions are hidden in (42) for the individual dimension of the state space.What we can see from (42) is that every variable only appears once for each dimension and all operations are continuous, so these natural inclusion functions are minimal and the propagated boxes have minimal size.The point measurements function is , so an interval measurement at time k is defined as: where, assume the interval length is Δ = [25, 23] T .The measurement noise v k is white Gaussian noise with a covariance matrix R = diag ([2.5 2 , 2.5 2 ]).The initial position, velocity and moving duration of the targets are listed in Tab. 2. In this paper, we needn't to initialize all the targets, for the sake of that the new box particles are generated from the measurements of the previous time.The first target is initialized as follows: m = [-500, 20 ,-80, -25] T , and P = diag ([10, 2, 10, 2]).m and P represent the mean vector and variance matrix, respectively.For the SMC-CPHD filter, 2000 particles are sampled from , ( ) N m P .While the box particle number for the BP-PHD filter and BP-CPHD filter is only 35.The probability of target survival and target birth is S P 0.99 and P 0.01 b respectively.The clutter is modeled as a Poisson RFS with the mean r per scan over the surveillance region.The parameters of the OSPA distance are set to be p = 2 and c = 70.Figure 1 shows the true target trajectories together with interval measurements in the presence of the clutter in x-y plane.The average computation time of one Monte Carlo trial for the three filters is listed in Tab. 3.
Experiments
x and every measurement [z j ] according to (33); .
- The larger the number of the clutter, the worse the performance of the filters is.The OSPA has a little big value when there is a newborn target appearance.The BP-CPHD filter and the SMC-CPHD filter share a similar average OSPA distance.However, the BP-CPHD filter needs much less particle number and runtime than the SMC-CPHD filter as seen in Tab. 3.Both of the box-particle methods use less time and smaller number of particles than the traditional Sequential Monte Carlo method.Figure 3(a) shows that all the three methods are able to get a relative accurate estimate of target number when r = 3.When the number of the clutter rise to r = 8 and 12, the BP-CPHD filter and SMC-CPHD filter have an advantage over the BP-PHD filter especially in estimating target number.It can be seen from Fig. 4(a) and Fig. 5(a).Compared to CPHD, PHD is more sensitive to clutter.The cardinalized distribution, rather than the weight of particle, is used to estimate the number of targets in CPHD.It reduces the influence of clutter to estimate the number of targets.This experiment demonstrates that the proposed algorithm is superior to the other two methods in different clutter rates.In addition, the runtime of three filters is increased with the increase in the number of the clutter because more measurements generated by the clutter need to be handled.2) Comparing the performance of the three filters with different probability of detection: In this experiment below we investigate the performance of BP-CPHD filter, BP-PHD filter and SMC-CPHD filter with different probability of detection P d = 0.95 and 0.90, respectively.The number of the clutter is set to r = 3. Figure 6(a) and (b) show that the estimated number for all the three methods are inaccurate under a low probability of detection.The lower the probability of detection, the worse the estimate of the target number is.It's obvious that the traditional SMC methods outperformed the other two box-particle methods.This is because the number of particles the traditional SMC method used is much larger than that of the box-particle methods.Though the probability of detection is low, there are still a lot of particles to approximate the filter in the traditional SMC methods for its large particle number.However, for Box-particle filter, with the decline of detection probability the box-particle number detected is not enough to approximate the probability density function.Therefore it needs increase of the box-particle number to improve the tracking performance.Of cause the adding box-particle number is relatively small compared to the thousands of particle number.Figure 6 also demonstrates that the performance of the BP-CPHD filter and the SMC-CPHD filter is better than that of the BP-PHD filter.These results lead to the conclusion that the proposed BP-CPHD filter is more reliable than BP-PHD filter.Furthermore, taking the runtime into consideration comprehensively, the BP-CPHD filter has a good performance.
Conclusions
This paper presents a novel approach for nonlinear multi-target tracking based on box particles, called boxparticle CPHD (BP-CPHD) filter.It is based on the random finite set theory, and the interval analysis is used to get a box-particle implementation of the CPHD filter.Compared with the SMC-CPHD filter, the number of the particles is decreased greatly.As a result, it is able to guarantee a similar accuracy with less average running time.Experiments demonstrate that the BP-CPHD filter has a higher degree of accuracy and more accurate estimate to the number of targets than the BP-PHD filter, especially in dense clutter environment.Experiments in different probability of detection imply the box-particle based filters need increase of the box-particle number to achieve good tracking performance in low detection probability.In addition, the number of the clutter had an influence on the runtime.So the estimates of the clutter number should be taken into consideration to make the BP-CPHD algorithm agree with actual situation in the future.
the corre- sponding weights of the persistent particles.N k -1 represents the number of particles.The newborn particle set is ,
1 ) 7 .
Comparing the performance of the three filters with different number of clutter: To evaluate the average performance, 50 Monte Carlo (MC) trials are performed.True targets trajectories and tracking results are shown in Fig. 2. Set the mean number of the clutter r = 3,8,12, respectively.In this experiment the probability of detection is Algorithmic flow of the box-particle CPHD filter i = 1,…,N k according to (28Compute the generalized likelihood
8 . 9 .Estimate 10 .
Compute the weight ŵ k (i) for i = 1,…, N k according to (31); Compute the cardinality p k (n) according to (32); Get the center of the particle according to mid([x k i ]); 11.Compute the number of the target N ̂k according to (41); 12. Use K-means clustering method to get N ̂k cluster centers as the position of the targets.13.Weight normalization: ŵ k (i) for i = 1,…, N k .14. Resample N equally weighted particles from ( )
Fig. 1 .Fig. 2 .
Fig. 1.True target trajectories together with interval measurements in the presence of the clutter in x-y plane.The solid lines are the true target trajectories and the start position for each track is shown with circular.The measurements are visualized as boxes.
Fig. 2 .
Fig. 2. (b) True targets trajectories and tracking results of SMC-CPHD (+), BP-PHD (*) and BP-CPHD (o) in X and Y directions.set to P d = 0.99.The mean estimated target number and mean OSPA distances for different filters are shown in Fig. 3(a) to Fig. 5(b).It can be seen that the OSPA values of the SMC-CPHD filter and the BP-CPHD filter are in general lower than that of the BP-PHD filter with different | 5,805.8 | 2016-04-14T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Zeros of Green Functions in Topological Insulators
This study demonstrates that the zeros of the diagonal components of Green functions are key quantities that can detect non-interacting topological insulators. We show that zeros of the Green functions traverse the band gap in the topological phases. The traverses induce the crosses of zeros, and the zeros' surface in the band gap, analogous to the Fermi surface of metals. By calculating the zeros of the microscopic models, we show the traverses of the zeros universally appear in all six classes of conventional non-interacting topological insulators. By utilizing the eigenvector-eigenvalue identity, which is a recently rediscovered relation in linear algebra, we prove that the traverses of the zeros in the bulk Green functions are guaranteed by the band inversions, which occur in the topological phases. The relevance of the zeros to detecting the exotic topological insulators such as the higher-order topological insulators is also discussed. For the Hamiltonians with the nearest-neighbor hoppings, we also show that the gapless edge state guarantees the zeros' surfaces in the band gap. The analysis demonstrates that the zeros can be used to detect a wide range of topological insulators and thus useful for searching new topological materials.
I. INTRODUCTION
The Green function is one of the most fundamental tools used for solving quantum many-body problems 1,2 . The Green function method can be used to perform a systematic analysis for a wide range of quantum manybody systems, from elementary particle physics to condensed matter physics. One of the key components of the Green function is the pole, at which the value of the Green function becomes infinite. In solid state physics, the poles correspond to band dispersion in solids and play a central role in describing the low-energy excitations of solids.
In insulating phases, the zeros of the Green function, rather than the poles, appear in the bandgap. The zeros can be used for characterizing the insulating phases, wherein the poles (band dispersions) do not govern lowenergy physics. In fact, it has been proposed that the surface of the zeros of Green functions (zeros' surfaces) in Mott insulators can act as Fermi surfaces in metallic phases 3 . For particle-hole symmetric systems, it has been demonstrated that the Luttinger's sum rules on the zeros' surface can be satisfied for the insulating phases 4,5 . Moreover, studies on the doped Mott insulators have established that the interplay of the zeros and poles of the Green functions can govern the unconventional electronic properties such as non-Fermi liquid behaviors, pseudogap phenomena, and Fermi arc structures that are observed in high-T c cuprates [6][7][8][9] .
Recent findings on topological insulators 10 have triggered intensive experimental and theoretical investiga-tions of topological materials [11][12][13][14] . Systematic clarifications of topological insulators have been proposed [14][15][16] , and the periodic table for six conventional classes of topological insulators has been established, as presented in Table I. It has also been demonstrated that the existence of the topological invariant is related with the gapless surface states 17 .
Although topological insulators can be detected by calculating the topological invariants, direct calculations of the topological invariants are generally not easy. Thus, several simplified methods have been proposed for detecting the topological phases, such as the Fu-Kane formula 18 for the Z 2 topological insulators with the inversion symmetry. Recently, classification methods utilizing the symmetries of solids such as the symmetry eigenvalues 19 and the symmetry indicators 20 were proposed. In particular, the symmetry indicators have been used for searching a wide range of materials for topologically nontrivial phases [21][22][23][24][25][26][27][28] .
The present study shows that the zeros of the diagonal components of the Green functions are useful quantities for detecting the non-interacting topological insulators. Because the microscopic Hamiltonians for the topological insulators in non-interacting systems are generally multi-orbital systems, we should consider the matrices of the Green functions. This investigation focuses on the zeros of the diagonal components of the Green functions because they have characteristic features. These zeros exist between poles and are given by the eigenvalues of the minor matrix M n , which is obtained by removing the nth row and column from the original Hamiltonian. It is noted that the simple and fundamental relationship be-arXiv:2102.04665v3 [cond-mat.mes-hall] 29 Aug 2022 class Θ Ξ Γ d = 1 2 3 4 A 0 0 0 0 Z 0 Z AI 1 0 0 0 0 0 2Z AII -1 0 0 0 Z2 Z2 Z AIII 0 0 1 Z 0 Z 0 BDI -1 -1 1 Z 0 0 0 CII 1 1 1 2Z 0 Z2 Z2 15 . The indices Θ, Ξ, and Γ represent the timereversal symmetry, particle-hole symmetry, and chiral symmetry. For topological insulators without chiral symmetry (Γ = 0, upper panel), this study demonstrates that the zeros of the Green function traverse the bandgap. For topological insulators with chiral symmetry (Γ = 1, lower panel), this investigation establishes that it is necessary to perform an appropriate unitary transformation to observe the traverse of the zeros.
tween the zeros of the Green functions and the minor matrix is not well known. This study proves that these characteristic features are obtained from an argument that is provided in Ref. [29], which rediscovers the fundamental but lesser-known identity between the eigenvectors and the eigenvalues called the eigenvector-eigenvalue identity.
The fact that the eigenvector-eigenvalue identity is not widely known may be the main reason why the relationship between the zeros and the minor matrix is not well understood. The relationship with the minor matrix offers a mathematical foundation that can be used to analyze the zeros of the diagonal components of the Green functions. It also offers an efficient way to numerically obtain the zeros of the diagonal components of the Green functions, i.e., it is not necessary to search for the zeros in the energy direction. While the importance of the zeros of the Green functions for characterizing the interacting topological insulators has been proposed by Gurarie in Ref. [30], the definition of the zeros of the Green functions for the present study is different from the definition used in Gurarie's work. In Gurarie's work, the zero eigenvalues of the Green functions (in other words, zeros of the diagonalized Green functions) are defined as zeros of the Green functions. Because the diagonalized Green functions have no zeros in the non-interacting case, Gurarie's method cannot be applied to characterize non-interacting topological insulators. In contrast, our argument is based on the diagonal components of the Green functions that can characterize the non-interacting topological insulators, as will be described in detail below. It is noted that the diagonalized Green function at the zero frequency 31,32 is used for detecting the interacting topological phases. We also note that the zeros of the diagonal component of the Green functions were used for detecting impurity states 33,34 and the appearance of the Majorana bound states 35 .
By using the mathematical foundation of the zeros for the diagonal component of the Green functions, we clarify their behavior in topological insulators. First, by taking the two-dimensional Chern insulator as an example (class A topological insulator in Table I), it is demonstrated that the zeros in the bulk Green functions traverse the bandgap in topological insulators; however, the zeros do not traverse in trivial insulators. We establish that the existence of the topological invariant (Chern number) guarantees the traverse of the zeros in the bulk systems. As the consequence of the traverses of the zeros from the different diagonal components of the Green functions, we find that the crosses of the zeros appear in the topological phases. We give a proof that the band inversion, which generally occurs in topological phases, guarantees the traverse of the zeros in the topological phases in any dimensions.
It is also demonstrated that the traverse of the zeros generically occurs in the other topological insulators such as the Z 2 topological insulator in two and three dimensions (class AII topological insulators in Table I) as well as in 2Z topological insulators in four dimensions (class AI topological insulators in Table I). This study also investigates the zeros in topological insulators with chiral symmetry (lower panel in Table I) and shows that traverses of the zeros also occur in chiral topological insulators when we take appropriate gauges. Furthermore, for the Hamiltonian with the nearest neighbor hoppings, it is also demonstrated that the gapless edge states induce the zero's surface in the bulk systems. In other words, at least one diagonal component of the Green functions becomes zero in the band gap due to the gapless edge states.
Recent studies have suggested the existence of exotic topological phases, which are not listed in Table. I. Higher-order topological insulators 36,37 , which show hinge states or quantized corner charges, are candidates for these exotic topological phases. This study demonstrates that the zeros of Green functions are also useful for detecting higher-order topological insulators. These results indicate that the traverses and the resultant crosses of the zeros are universal features of the topological phases.
Here, we summarize the main features of the zeros of the diagonal components of the Green functions that are clarified in this paper: This paper is organized as follows. In Sec. II, the mathematical foundation of the zeros of the diagonal components of the Green functions is explained. In Sec. III, we examine the zeros of the Green function for twodimensional Chern insulators and demonstrate that the traverse of the zeroes is induced by the non-trivial Chern number. We also demonstrate that the traverse of the zeroes is gauge invariant.
Sec. IV establishes that the traverses of the zeros occur in Z 2 topological insulators in two and three dimensions. In Sec. V, we give a proof that the traverses of the zeros in the topological insulators universally occur due to the band inversions, which necessarily occur in the topological phases. In Sec. VI, we examine the zeros in topological insulators with chiral symmetry and demonstrate that it is necessary to use appropriate gauges of the Hamiltonian to see the traverses of the zeros in the topological phases. Sec. VII shows that the traverses of the zeros occur even for higher-order topological insulators. In Sec. VIII, we show that the gapless edge states guarantee the zeros' surface in the band gap for the Hamiltonian with the nearest neighbor hoppings. Finally, Sec. IX summarizes the study.
II. MATHEMATICAL FOUNDATION OF ZEROS OF THE GREEN FUNCTION
In this section, we discuss the mathematical foundation of the zeros of the diagonal components of Green functions. By following the argument in Ref. [29], we show that the zeros can be represented by the eigenvalues of the minor matrix M n of the Hamiltonian H, which is generated by removing the nth row and column from H.
Using the eigenvalues and eigenvectors of the Hamiltonian H(k), where k (ω) represents the momentum (energy), the nth diagonal component of the Green function is expressed as where E i (k) is the ith eigenvalue of the Hamiltonian and Ψ (n) i (k) is the nth component of the ith eigenvec-tor Ψ i (k). By applying the Cramer's rule, we can obtain where I N represents the N -dimensional identity matrix.
Here, for clarity, we explicitly show the dimensions of the identity matrix. This relation Eq.
(3) shows that the zeros of G n (k, ω) are represented by the eigenvalues of M n . It is known that the eigenvalues of the minor matrix M n exist between the eigenvalues of the original Hamiltonian H, i.e., The above relation is known as the Cauchy interlacing inequalities relation 38,39 . A proof of the inequalities is given in Appendix A. From the definition of the zeros, it is obvious that the zeros form curves in one dimensions and surfaces in two dimensions as the band dispersions do. By applying the residue theorem to Eq. (3), the following expression can be obtained. . ( This relation Eq. (5) is called the eigenvector-eigenvalue identity and has been recently rediscovered 29 . From this identity, it can be concluded that the zero and the pole should coincide if the nth component of the eigenvectors becomes zero and vice versa. We note that the existence of this special point (|Ψ To examine the behaviors of the zeros of the Green functions in topological insulators, we employed a twoband model for a two-dimensional Chern insulator 41 on a square lattice, defined as follows. h ν,j = c † j+eν T ν c j + H.c., where c † j (c j ) represents the two-component fermion creation (annihilation) operator that is defined on the site j on the square lattice spanned by two orthogonal unit vectors e ν=x,y . The matrices T ν are defined as where It is noted that the system becomes a topological (Chern) insulator for −4 < m < 0, and it becomes a trivial band insulator for m > 0.
The Hamiltonian in the momentum space can be de-scribed as R y (k) = sin k y , (16) R z (k) = 2 + m − (cos k x + cos k y ). (17) By diagonalizing the Hamiltonian, the eigenvalues (band dispersions) can be obtained as Following the argument in Sec. II, the zeros of the Green functions are represented by the eigenvalues of the minor matrix M n . For the 2×2 matrices, the eigenvalues are the diagonal components of the Hamiltonian. Thus, the zeros of the nth diagonal component of the Green function (ζ n ) are represented by In Fig. 1(a)-(c), a plot is presented for the band dispersions and zeros of the Green functions for several values of m. It was observed that the zeros of the Green function traverse each other in the topological insulator while they do not traverse in the trivial insulator. The traverse is guaranteed by the traverse of the zeros. This feature of the zeros is in sharp contrast with the band dispersions (poles of Green functions). This is because it is impossible to distinguish the topological insulator and band insulator based on only the band dispersions. When considering the standard way to identify the topological insulators, it is necessary to examine the existence of the topological invariant or the gapless edge states. However, as shown in Fig. 1, the traverse of the zeros of the Green functions in the bulk system can be used to identify the topological insulator. We note that traverses of the zeros are robust against the perturbations such as the nextnearest-neighbor transfers unless they do not destroy the topological phases. The next subsection explains how the topological invariant can guarantee the traverse and the resultant cross of the zeros of the Green functions.
B. Relation with the Chern number
This subsection shows that the topological invariance (Chern number C) guarantees the traverse of the zeros in the bandgap. From the eigenvector-eigenvalue identity Eq. (5), an important relation can be obtained: a component of the eigenvector is zero ( |Ψ (n) j (k)| 2 = 0) ↔ the zeros and poles of the Green function coincide (ζ(k) = E(k)). These special points where |Ψ (n) j | 2 = 0 are called the vortex cores 40 . From the existence of such a special point, it is demonstrated how the topological invariant guarantees the traverse of the zeros. If the Chern number C is nontrivial, for example, C = 1, there is a oneto-one mapping from (R x (k), R y (k), R z (k)) to the twodimensional sphere S 2 , which covers the two-dimensional sphere at least once.
In contrast to the case of the topological insulator, when the Chern number C is trivial (C = 0), there exists (R x (k), R y (k), R z (k)) that does not completely cover the two-dimensional sphere S 2 . In other words, one such point is present where (R x , R y , R z ) ∝ (0, 0, ±1) does not exist in general. In general, the zeros of the Green function do not traverse the bandgap and do not cross. It is noted that the accidental crosses of the zeroes of the Green function might occur even in trivial insulators. An example is provided in Sec. VI C.
C. Gauge invariance
As demonstrated, the traverse of the zeros is guaranteed by the existence of the non-trivial topological invariant. This result indicates that the traverse of the zeros is gauge invariant. In this subsection, by explicitly performing the unitary transformation, it is established that the traverse of the zeros is gauge invariant.
By using the unitary matrix U , a unitary transformation is performed as follows.
where u is a real number, v = v x +iv y , and u 2 +v 2 x +v 2 y = 1. The explicit form of the transformed Hamiltonian is given as where a = 2uv It is noted that a 2 + b 2 + c 2 = 1 is satisfied. After the unitary transformation, the zeros of the Green function are given as For R x (k) = a|R|, R y (k) = b|R|, R z (k) = c|R|, the point where the zeros and poles coincide is given as ζ 0 = E 1 and ζ 1 = E 0 . For the antipodal point (R x = −a|R|, R y = −b|R|, R z = −c|R|), the point is given by ζ 1 = E 0 and ζ 0 = E 1 . Because the existence of the topological invariant guarantees that R/|R| can cover the unit sphere, R can take R x = a|R|, R y = b|R|, R z = c|R| and the corresponding antipodal point. Thus, the traverse of the zeros of the Green function in Chern insulators is gauge invariant.
IV. Z2 TOPOLOGICAL INSULATOR IN TWO AND THREE DIMENSIONS (CLASS AII)
In the previous section, as a canonical example of the topological insulators, the zeros of the Green functions in the Chern insulators were analyzed. In this section, we examine the zeros of the Green functions in the Z 2 (class AII) topological insulators in two and three dimensions. It can be confirmed that the zeros traverse the bandgap in topological phases, as demonstrated in the case of the Chern insulators.
A. Kane-Mele model
As an example of the Z 2 topological insulators, the Kane-Mele model 10 was employed on the honeycomb lattice, which is defined as is a creation (annihilation) operator of an electron with the spin σ on site i. Each parameter is defined as follows: t represents the nearest-neighbor hopping, λ SO represents the spin-orbit coupling, ∆ i is the staggered charge potential, and λ R represents the Rashba term. σ is defined as where d ij denotes the vector along two bonds that the electron traverses from site j to i through k (see Fig. 2(a)). For simplicity, we can first assume that the Rashba term is absent (λ R = 0). The effects of the Rashba term are examined in the next subsection. For simplicity, t = 1.
By performing a Fourier transformation, the Hamiltonian without the Rashba terms in the momentum space can be described as For this investigation, the lattice constant was set as a = 1. By diagonalizing the Hamiltonian, the band dispersions can be obtained as The zeros of the Green functions are given as In Fig. 2(b)-(d), the band dispersions and zeros of the diagonal components of the Green functions are plotted for the up spin (E i,↑ , ζ i,↑ ). In the Kane-Mele model, spinorbit coupling λ SO induces the Z 2 topological insulators 10 , while the staggered charge potential ∆ destroys the topological insulator. By changing ∆, the zeros of the Green function can be monitored in terms of how they behave in the topological and trivial insulator.
In Fig. 2(b), it is demonstrated that the zeros traverse the band gap in the Z 2 topological insulator as in the Chern insulator. Because the Z 2 topological insulator in the Kane-Mele model without the Rashba term is the spin Chern insulator, the traverses of the zeros are guaranteed by the existence of the spin Chern number.
By increasing the strength of the staggered charge potential ∆, the insulator changes into the band insulator at ∆ c = 3 √ 3λ SO . At the transition point, the crosses of the zeros are lifted at the Dirac point K
B. Effects of Rashba term
This subsection considers the effects of the Rashba term. By adding the Rashba term, the Hamiltonian becomes 4 × 4 and the spin Chern number is no longer well defined. The Hamiltonian with the Rashba term is given as Here, we define c x [s x ] as cos(k x /2) [sin(k x /2)] and c y [s y ] as cos( because it is difficult to obtain the simple analytical form of the eigenvalues and the zeros, they were numerically obtained. Even under the existence of the Rashba term, the traverses of zeros appear in the topological insulator as shown in Fig. 3(a) while the traverses disappear in the trivial insulators occur as shown in Fig. 3(b). At the K point, the first and fourth zeros in the bandgap coincide with the valence bands, while the second and third zeros coincide with the conduction bands. At the K point, since the sign of the mass term is the opposite from the K point, the opposite occurs. To connect each zero consistently, the first and the fourth zeros and the second and third zeros should traverse the bandgap and should cross at least once as shown in Fig. 3(a). These results indicate that the zeros of the Green function are also useful for characterizing the Z 2 topological insulator.
This subsection considers the zeros of the Green functions in the three-dimensional topological insulators. A typical model of the three-dimensional topological insulator on the cubic lattice 42,43 is given as follows.
It can be noted that the eigenvalues are doubly degenerate. The zeros of the Green function in the bandgap are given as follows.
From these expressions, it can be shown that the zeros and the poles coincide at eight time-reversal points, such as the Γ, X, and Z points. To analyze the traverses of the zeros, the following was defined.
If η(k) = 1 (η(k) = −1) at the time reversal point, ζ 1 (k) coincides with E 0 (k) (E 1 (k)). η(k) at the time-reversal points are shown in the inset. Because ζ 0 (k) = −ζ 1 (k), they traverse between η(k) = 1 and η(k) = −1. For example, for M 0 = −1 [ Fig. 4(a)], they traverse between Γ point and X point. Even for the weak topological region, they traverse as shown in Fig. 4(a) and they do not traverse in the trivial insulators. These results indicate that the traverses of the zeros can be useful for detecting the topological phases for strong and weak topological insulators since the traverses are guaranteed by the existence of the band inversions, as we detail in the next section.
V. RELATION WITH BAND INVERSIONS
In this section, we explain why the traverses of the zeros universally appear in the topological insulators. In the topological insulators, the band inversions generally occur. We denote that the occupied (unoccupied) eigenstates with the index i ≤ 0 (i > 0). The eigenvectors and eigenvalues are given by Here, E 0 (k) (E 1 (k)) is the highest occupied (lowest unoccupied) eigenvalues. We assume that the band gap exists between E 0 and E 1 , i.e., E 0 (k) < E 1 (k) at any k. Here, we define the band inversion as follows: A single pair of momenta k, (k 0 , k 1 ) exists such that one occupied eigenvector at k 1 [e n0 (k 1 )] is orthogonal to all the occupied eigenvectors at k 0 , i.e., where N is the number of the occupied bands. We detail the relation with the band inversion and the existence of the Chern and Z 2 topological number in Appendix B. Then, using the eigenvectors at a certain k point (k = k 0 ), we construct the unitary matrix We note that the eigenvectors of the transformed Hamil- For example, the lowest two eigenvectors are transformed asẽ Thus, under this unitary transformation, the zeros of the 0th (1st) component of the diagonal Green functions ζ 0 (ζ 1 ) coincide with the lowest unoccupied (highest occupied band) due to the eigenvector-eigenvalue identity Eq. (5). This situation is schematically shown in Fig. 5. Then, we will show that the zeros of the Green function,G traverse the band gap in the topological phase, where the band inversion occurs.
Here, we assume the band inversion between k 0 and k 1 , which is defined in Eq. (39). Without loss of generality, we can take n 0 = 0. Then, we obtaiñ where c is a non-zero number. This illustrates thatζ 0 traverses the band gap, i.e.,ζ 0 coincides with E 1 (k 0 ) at k 0 and coincides with E 0 (k 1 ) at k 1 (see Fig. 5). We can also show thatζ 1 also traverses the band gap since the band inversion also occurs in the unoccupied states according to the argument given in Appendix B. This result shows the band inversion induces the traverses of the zeros in the band gap. In Appendix. C, a detailed condition for the traverse of the zeros is given. According to Eq. (C7), when e 0 (k 1 ) is orthogonal to e 1 (k 0 ) and mainly consists of the unoccupied eigenvectors at k 0 ,ζ 0 traverses the band gap. We note that this condition can be regarded as a generalization of the band inversion defined in Eq. (39).
This argument demonstrates that the traverse and the resultant crosses of the zeros always appear in the topological phase by taking the unitary transformation defined in Eq. (40). As we will show in the next section, using the unitary transformation, the traverses of zeros appear in the topological insulators with chiral symmetry. Before that, we explain why the traverses of the zeros appear in the Chern insulator without taking the unitary transformation.
For the Chern insulator, the Chern number is defined as 40 It is known if the Berry connection a −n (k) is smooth over the Brillouin zone, the Chern number becomes zero according to the Stokes theorem 40 . Therefore, if the Chern number is non-zero, the point where the phase of the eigenvector is not well defined should exist (the point is called vortex core). At that point, one component of the eigenvector becomes 0, i.e., the zeros of the Green function (ζ i ) coincides with the eigenvalue E 0 . Due to the band inversion, the same coincidence should occur for the unoccupied band, i.e., ζ i coincides with E 1 at the different momentum. Therefore the zeros of the Green functions (ζ i ) traverse the band gap. Since another zero (ζ j ) also traverses the band gap, the zeros cross each other. This shows that the existence of the vortex core in any gauges is the reason why the traverses of the zeros appear in the Chern insulator without taking the unitary transformation. We note that the similar discussion can be applied to the four-dimensional 2Z topological insulators (class AI) because the topological invariant is characterized by the second Chern number. In fact, as shown in Appendix D, the traverses of the zeros appear in AI topological insulator without the unitary transformation. Thus far, we examined the zeros of the Green functions in the topological insulators without chiral symmetry. Here, we examine the behavior of the zeros of the Green functions in the topological insulators with chiral symmetry.
For the Hamiltonian that has chiral symmetry, the unitary matrix Γ that anti-commutes with the Hamiltonian always exists, i.e., From this, the Hamiltonian with chiral symmetry and its Green function at zero energy can be described as a block off-diagonal form as demonstrated below.
In this representation, the unitary matrix Γ has a diagonal form, which is given as From Eq. (49), it can be observed that the zeros of the bulk Green function in the bandgap always exist at ω = 0 for the topological and trivial insulators. This result indicates that it is impossible to distinguish the topological phase from the trivial phases if we take the block off-diagonal form of the Hamiltonians. We also note that the vortex core is apparently absent in this gauge since the zero can not coincide with the poles if the finite gap exists. In other words, a component of the eigenvectors are always non-zero and the smooth gauge can be taken. This feature is in sharp contrast with the topological insulators without chiral symmetry such as the Chern insulators. Thus, to identify the topological phase with chiral symmetry, it is necessary to perform the proper unitary transformation. By taking the unitary transformation defined in Eq. (40), as we show later, the traverses of the zeros appear in the topological insulators with chiral symmetry because the band inversions occur in the topological phases.
B. BDI and AIII topological insulators in one-dimension
As an example of the BDI and AIII topological insulators, the one-dimensional Hamiltonian 44,45 can be considered, which is given as where R x = 1 + γ cos (x − δ) and R y = γ sin (x − δ). When γ > 1 and δ = 0, this system becomes a topological insulator (d = 1, BDI). If δ = 0 and γ > 1, the system becomes an AIII topological insulator 45 .
The eigenvalues and eigenvectors of the Hamiltonian are given as follows.
where α = R x + iR y . Following the discussion above, by taking k 0 = δ so as to satisfy the relation R y (k 0 ) = sin(k 0 − δ) = 0, the unitary transformation is given by Using the unitary matrix, the transformed Hamiltonian is obtained as follows.
The zeros of the Green function for the transformed HamiltonianH are given as follows.
For the BDI topological insulator (δ = 0, γ > 1), the zeros traverse the bandgap as shown in Fig. 6(a) whereas the zeros do not traverse in the trivial insulator. It can be confirmed that the same behavior occurs in the AIII topological insulators as shown in Fig. 6(c).
C. CII topological insulator in one-dimension
As an example of the CII topological insulators, the following four-orbital models were applied 46 . where R x = t ⊥ , R y = 2t sin k x , and A = s + 2s cos k x . When (t ⊥ /t) 2 + (s/s ) 2 < 4, this system becomes a topological insulator (d = 1, CII). The eigenvalues of the Hamiltonian are given as follows.
Following the discussion above, by tanking k 0 = 0, we obtain the unitary matrix by numerically diagonalizing the Hamiltonian. Using the unitary matrix, we numerically obtain the zeros of the Green function of H = U † HU . The zeros are shown in Fig. 7. We note that the eigenstates are degenerate at k = 0 and the form of the unitary transformation is not unique. Nonetheless, the traverses of the zero are guaranteed by the existence of the band inversion as we showed above.
As illustrated in Fig. 7(a), in the CII topological insulator, the zeros of the Green functions traverse the bandgap. Basically, the crosses vanish in the trivial insulator sufficiently away from the transition point. However, the accidental crosses of the zeros still survive, even in the trivial insulators as shown in Fig. 7(c) near the transition point. These crosses are not guaranteed by the traverse of the zeros and we can remove the accidental crosses without gap closing.
VII. HIGHER-ORDER TOPOLOGICAL INSULATORS
As demonstrated in the previous sections, the zeros traverse in the topological insulators because the gapless edge states inevitably induce the traverse of the zeros. Recently, new classes of the topological insulators have been found, such as the higher-order topological insulators 36,37 . In the higher-order topological insulators in n dimensions, the gapless edge states appear below n − 2 dimensions. For example, the three-dimensional higherorder topological insulators have one-dimensional edge states and the two-dimensional higher-order topological insulators have zero-dimensional edge states. In this section, we demonstrate that the traverses of the zeros in the bulk system also appear in the higher-order topological insulators because the band inversion occurs.
A. Higher-order topological insulators in three dimensions
A model of the three-dimensional higher-order topological insulator on the cubic lattice is described as follows 37 .
The eigenvalues are given as and the zeros are given as It can be noted that without R 4 , this model is the same as that of the three-dimensional Z 2 topological insulators defined in Eq. (33). It is shown that the system changes from the Z 2 topological insulator into a higherorder topological insulator when R 4 is added 37 . As shown in Fig. 8, the traverses of the zeros exist even for the higher-order topological insulators. Although R 4 lifts the coincidences of the zeros and poles at some k points such as X, the coincidences still exist for Γ, M, and Z. This indicates that the band inversion still remains in the higher-order topological insulators.
B. Higher-order topological insulators in two dimensions
As an example of the two-dimensional higher-order topological insulators 36 , we consider the following Hamiltonian. where R x = E sin k y , R y = γ + E cos k y , R z = E sin k x , and A = γ + E cos k x . The eigenvalues are given as We note that the eigenvalues are doubly degenerate.
Because the system has chiral symmetry, according to the procedure described above, we define the following unitary matrix by taking k 0 = (0, 0).
We show zeros of the Green functions of transformed HamiltonianH = U † HU in Fig. 9. We find that the zeros traverse the band gap in the higher-order topological insulator while they do not traverse in the trivial insulator. This result indicates that the traverse of the zeros in the bulk system is useful even for detecting the higher-order topological phases.
C. Higher-order topological insulator in the Fu-Kane-Mele model under a magnetic field
Finally, we examine how the zeros of the Green functions behave in the Fu-Kane-Mele model under a magnetic field 47? ,48 , which is defined as follows.
R 1 = 1 + δt 1 + [cos k 1 + cos k 2 + cos k 3 ], (76) R 2 = sin k 1 + sin k 2 + sin k 3 , (77) where k 1 = (k y + k z )/2, k 2 = (k x + k z )/2, k 3 = (k x + k y )/2, and h z represent the magnetic field. It is shown that the three-dimensional Z 2 topological insulator appears in the Fu-Kane-Mele model. The gapless surface states are gapped out when the magnetic field is applied because the magnetic field breaks the timereversal symmetry. Nevertheless, by analyzing the entanglement spectrum, Turner et al. showed that the topological properties of the systems still remain 48 . They demonstrated that the entanglement spectrum behaves as gapless surface states, even under a magnetic field. Recently, it has been established that the one-dimensional hinge states appear in the Z 2 topological insulator under a magnetic field 49,50 . The higher-order topological nature may be the origin of the characteristic behavior in the entanglement spectrum. Here, we show that the zeros of the Green functions can also capture the topological properties of this system. In Fig. 10 To observe the surface/edge states, the twodimensional surface states and the one-dimensional hinge states were calculated in Fig. 10(c)-(f). As shown in Fig. 10(d), the gapless surface states are gapped out by the magnetic field. However, as shown in Fig. 10(f), the gapless one-dimensional edge (hinge) states appear under the magnetic field. This result shows that a higherorder topological insulator is realized in the Fu-Kane-Mele model under the magnetic field and is captured by the traverses of the zeros.
VIII. RELATION WITH EDGE STATES
The traverse and the resultant cross of the zeros resemble the gapless edge modes, which appear in the topological insulators. This section shows that the traverse of the zeros of the Green functions has a relation with the gapless edge state, i.e., degeneracies of the eigenvalues of the edge Hamiltonian guarantee the existence of the zeros' surface in the band gap. We note that this argument is applicable to the Hamiltonians only with the nearestneighbor hoppings. For the Hamiltonian with further-neighbor hoppings, the argument can not be directly applied.
We consider the two-dimensional Chern insulators as an example, which is defined in Eq. (9). We rewrite the Hamiltonians in the real space for the x direction (the length is set to L) as where p is the scholar value and controls the boundary conditions. For example, the open-boundary condition corresponds to p = 0, and the periodic-boundary condition corresponds to p = 1. We note that the periodic Hamiltonian (p = 1) includes the L − 1 edge Hamiltonian where B = (h x , 0, . . . , 0, p × h −x ). For simplicity, we denote H real (k y ) = H L (p = 1, k y ) and H edge (k y ) = H L−1 (p = 0, k y ). We note the relation between the edge states and the minor matrix are studied in the context of the Hermiticity of the tight-binding Hamiltonian operators 52 .
Here, we define the partial Fourier transformed Green function asḠ We note thatḠ n is the Green functions of H real (k y ), i.e., From this definition, ifḠ n (k y , ω) becomes zero, the following relation is satisfied.
This indicates that the G n (k x , k y , ω) changes its sign as a function of k x . The zeros ofḠ n (k y , ω) are given by the eigenvalues of the minor matrix M n of H real (k y ). Based on the periodicity, we only consider the eigenvalues of M 0 and M 1 . Because M 0 and M 1 include the edge Hamiltonian, H edge can be regarded as a minor matrix of M 0 and M 1 . Thus, from the Cauchy interlacing identity (see Appendix E and F), the eigenvalues of H edge are located between the eigenvalues of M 0 and M 1 and vice versa. This relation can be expressed as If the eigenvalues of the edge Hamiltonian are doubly degenerate, the zeros of the bulk Hamiltonian should coincide, i.e.,ζ This indicates that the gapless edge states (they cross in the bandgap) inevitably induce the degeneracy of the zeros inḠ n . Using numerical calculations, we demonstrate the application of this argument to the Chern insulators. In Fig. 12, the poles of the zeros of the Green functions for the periodic Hamiltonian H L=20 (p = 1) are defined in Eq. (82) for m = −0.5. We also show the poles of the edge Hamiltonian (edge states) for H L=19 (p = 0). We can confirm that the zeros of the periodic Hamiltonian exist between the poles of the edge Hamiltonian, and they degenerate when the poles of the edge Hamiltonian cross (k y = 0).
In general, for the m-orbital system (the size of h 0 is m × m), it can be shown that the m-fold degeneracy in the edge states (gapless edge states) induces the m-fold degeneracy in the zeros ofḠ. The proof of this statement is shown in Appendix E and F. However, because the proof is based on the Cauchy interlacing inequality, it can not be applied to the Hamiltonian with the furtherneighbor hoppings, where degeneracy of the edge states (m edge ) is less than the number of orbitals (m orb ), i.e., m edge < m orb . For example, if we introduce the nextnearest neighbor hopping, the size of h 0 becomes 4 × 4 while the degeneracy of the edge states still remains 2. In this situation, the Cauchy interlacing inequality can not be applied and the degeneracy of the edge states does not necessarily induce the degeneracy in the zeros ofḠ.
Here, we argue how the traverses of the zeros inḠ are related with the zeros of the Green functions in the momentum space. We first consider the region where the ω is negative (ω < 0) and the 0th component of the Green function (Ḡ 0 and G 0 ). For ω < 0, at the certain k y = ±k 0 ,Ḡ 0 (k y , ω) becomes 0. This indicates that the finite positive and negative regions exist in the original Green functions at ±k 0 according to Eq. (89). Since the positive and negative regions do not immediately vanish even if we adiabatically change k y , the zero's surface exists in the original Green functions G 0 as shown in Fig. 12(a).
At ω = 0,Ḡ 0 (k y , ω) becomes 0 at k y = 0. This also indicates the existence of the zero's surface around k y = 0 (see Fig. 12 (b)). We also note that the zero's surface can not immediately vanish even if we change ω, the zero's surface survives for ω > 0 as is shown Fig. 12 (c). From this consideration, we can say that the zero's surface of G 0 exists at least for ω ≤ 0 The opposite thing occurs for G 1 andḠ 1 , i.e., the zero's surface exists at least for ω ≥ 0. These results indicate that the zeros' surfaces exist in the band gap if the gapless edge states exist. This means that at least one diagonal component of the Green functions becomes zero in the band gap due to the existence of the gapless edge states. The existence of the zeros' surface is consistent with the existence of the traverses of the zeros in the bulk Green functions. We note that, however, the traverses of the zeros are not guaranteed by the degeneracy of the zeros ofḠ.
IX. SUMMARY
In summary, this study focused on the zeros of the diagonal components of the Green functions in topological insulators. Based on the arguments that were used in the eigenvector-eigenvalue identity 29 , it was first demonstrated that the zeros of the diagonal components of the Green functions are given by the eigenvalues of the minor matrix M n , which is obtained by removing the nth row and column from the original Hamiltonian. This mathematical foundation offers an efficient way to study the zeros of the Green functions both analytically and numerically. We have also shown that the zeros can visualize the information of the band inversions via the eigenvector-eigenvalue identity. For a two-dimensional Chern insulator, which is a canonical model of two-dimensional a topological insulator, it is established that the traverse of the zeros is a key to distinguishing topological insulators from trivial insulators. For the Chern insulators, this study has explicitly shown that the existence of the Chern number guarantees the traverse of the zeros in bulk systems. It has been demonstrated that the traverse of the zeros is gauge invariant, although the positions of the zeros are not. It is observed that the traverses of the zeros can occur in the other class of the topological insulators without chiral symmetries such as the Z 2 (class AII) topological insulators in two and three dimensions and 2Z (class AI) topological insulators in four dimensions.
We give a general argument that the band inversions induce the traverses of the zeros in the band gap. Since the band inversions occur in topological phases, the traverses of the zeros universally occur in the topological phase. This argument also shows that the traverse of the zeros always occur if we take the proper unitary transformation. The unitary transformation is used for identifying the topological phase with chiral symmetry.
For topological insulators with chiral symmetry, this study demonstrated that the zeros also traverse the bandgap when we take the suitable unitary transformation defined in Eq. (40). By taking the model Hamiltonians for class BDI, AIII, and CII topological insulators, this investigation has shown that the traverses of the zeros always occur in topological phases.
Since the traverses of the zeros are guaranteed by the existence of the band inversions it is expected that this is useful for detecting other exotic topological phases, which are not listed in the conventional periodic table for topological insulators (Table I). As an example of these exotic topological phases, we consider the higherorder topological insulators. The traverse of the zeros is also useful for detecting the higher-order topological insulators in two and three dimensions.
We also show that the gapless edge states guarantee the existence of the zeros' surface in the band gap for the Hamiltonian with the nearest neighbor hoppings. Although the relation with the edge states and the behavior of the zeros in the bulk system can be proved only for the limited case, this result implies that the zeros have close relation with the edge states and offers another point of view of the bulk edge-correspondence 53 .
The comprehensive analysis in this study demonstrates that the zeros of the Green functions can be used as a simple visual detection tool for topological phases via the band inversions. It may be useful when searching for new topological phases because the traverses of the zeros can be easily detected without any assumptions. In addition, since this method does not require full gap states but only needs the traverse of the zeros, the zeros of the Green functions can be useful to identify the exotic topological semimetals such as the Weyl 54-57 and the Topological Dirac semimetals [58][59][60] .
It needs to be noted that the accidental crosses of the zeros in the trivial phases may occur, as demonstrated for the CII topological insulator. We want to emphasize that this type of cross is not protected by the traverse of the zeros, i.e., the band inversion. By visualizing the behavior of the zeros in the band gap, it is easy to distinguish whether the crosses of the zero are accidental or not. We also note that the accidental crosses appear in the Rice-Mele model 61,62 , which has non-topological edge states. In the Rice-Mele model, since the zeros do not traverse the band gap either, we can distinguish whether the crosses are accidental. Thus, the traverse of the zeros of the Green functions is useful when searching for the topological phases since the absence of the traverses indicates that the system is not topological in the sense that they have no band inversions. This fact may be useful in the screening process for searching the topological materials by combining with the high-throughput ab initio calculations [23][24][25]27 .
In this paper, the rediscovered eigenvector-eigenvalue identity is essentially used for showing the existence of the traverse of the zeros due to the band inversion. Since the band inversion universally occurs in the topological phases, the traverse of the zeros offers a useful guideline for identifying the topological phases. Thus, our paper gives a direct application of the eigenvector-eigenvalue identity in condensed matter physics, which is a fundamental relation in linear algebra but has long been overlooked.
This study was restricted to the analysis of topological insulators for simplicity; however, a similar analysis is possible for topological superconductors, where the band inversion also occurs. It is known that several topological superconductors can be mapped to the models of the topological insulators,e.g., the model for the class D topological superconductors in the two dimension is mapper to the model for the Chern insulator 63 . In those systems, the traverses zeros also appear in the topological superconductors. Furthermore, a similar analysis might be possible for the disordered 64,65 and interacting 30,66,67 systems because the Green functions for the correlated and/or disordered systems are well-defined. We note that zeros of the Green functions in correlated electron systems are recently studied using dynamical mean-field theory 68 . These unexplored issues are intriguing but need to be further investigated. Although the proof of the Cauchy interlacing inequalities is shown in the literature 38,39 , we have provided another proof of the inequalities based on the structure of the Green functions to make this paper self-contained. To prove the Cauchy interlacing inequalities, the definition of the Green function (Eq. (2)) is rewritten as The roots of the (n − 1)th polynomial N n (k, ω) correspond to the zeros of G n (k, ω), i.e., the eigenvalues of the minor matrix M n (k). From Eq. (A2), it can be shown that N n (k, ω) changes its sign between E i (k) and E i+1 (k), i.e., N n (k, This indicates that the N n (k, ω) has at least one real root between E i (k) and E i+1 (k). Because the N n (k, ω) is the (n−1)th polynomial with respect to ω, the number of roots between E i (k) and E i+1 (k) should be one. For |Ψ . This is consistent with the eigenvector-eigenvalue identity.
Thus, for both |Ψ i | = 0, N n (k, ω) has one real root between E i (k) and E i+1 (k). In other words, one zero of G n (k, ω) should exist between every adjacent eigenvalue (poles of the Green function). This was also pointed out in a previous study 5 . This represents the Cauchy interlacing inequalities.
Appendix B: Relation between the band inversion and the topological invariants
Let e i (k) is the ith eigenvectors of Hamiltonian H(k). We take N (M ) as the number of the occupied (unoccupied) states. We define the overlap matrix U as where E(k) is the set of the eigenvectors at k, Since E(k) is unitary, U , which is a product of two unitary matrices, is also unitary. For the unitary matrix U , we consider the following block matrix representation: where U 11 (U 22 ) is the overlap matrix for the occupied (unoccupied) states and it is the N ×N (M ×M ) unitary matrix while U 12 (U 21 ) is the N × M (M × N ) overlap matrix between the occupied and unoccupied states. We note that det U 11 is a key quantity for identifying the non-trivial Chern and Z 2 topological insulators. When a system is the non-trivial Chern or Z 2 topological insulator, there exists, at least, a single pair of k, (k 0 , k 1 ), that satisfies det U 11 = 0. For the Chern insulator, it is shown that, if det U 11 = 0 holds at a pair of momenta, (k 0 , k 1 ), irrespective of the choice of the gauge, the gauge fixing is impossible 69 . Thus, if det U 11 = 0 holds at a pair of momenta, there is a non-trivial Chern number.
In the Z 2 topological insulators 10,70 , it is also shown that the non-trivial Z 2 topological invariant induces zeros of the Pfaffian of the following overlap matrix where Θ is the time-reversal operator and E occ (k) = [e 0 (k), e 1 (k), . . . , e N −1 (k)]. Since ΘE occ (k) can be obtained from an unitary transformation of E(−k), we obtain and where W is the unitary matrix. Then, we obtain Therefore, |P (k)| = 0 is equivalent to | det U 11 (k, −k)| = 0. When the band inversion occurs between k 0 and k 1 , i.e., for all i ∈ [0, N − 1], n 0 ∈ [0, N − 1] exists such that e i (k 0 ) † e n0 (k 1 ) = 0. This means that one column of U 11 is zero and, thus, det U 11 = 0. In the following, we will show that the opposite statement also holds, i.e., det U 11 = 0 induces the band inversion.
First, we notice that, for U 11 , U 12 , U 21 , and U 22 , the following relations hold 71 : We note that from | det U 11 | 2 = | det U 22 | 2 , if the occupied bands are non-trivial (| det U 11 | = 0), the unoccupied bands are also non-trivial (| det U 22 | = 0). If det U 11 = det U 22 = 0, U 11 and U 22 have the following singular value decompositions: 1 , λ 1 , . . . , λ ]) are the singular values of U 11 (U 22 ). Here, we explicitly denote that the smallest singular value is zero from the assumption, det U 11 = det U 22 = 0. Thus, we obtain , we can simultaneously diagonalize using the same unitary matrix This indicates that we can take where the diagonal elements of Σ 12 are given by [1, σ where the diagonal elements of Σ 21 are given by [1, σ (1) 2 , . . . ]. This means that we have the following decomposition of U as follows.
Since K and J act on the occupied and unoccupied bands separately, each occupied eigenstate inẼ(k 0 ) = E(k 0 )J andẼ(k 1 ) = E(k 1 )K only consists the occupied states in the original basis. From the structure of Σ 11 , i.e. one column of Σ 11 is 0, we can show that occupied bands at k 0 does not include at least one occupied state at k 1 . We take one missing occupied eigenstates as e n0 (k 1 ). Thus, we obtain This shows that e n0 (k 1 ) only consists of unoccupied states at k 0 . This proves det U 11 = 0 induces the band inversion.
Appendix C: Condition for the traverse of the zero As we explained in main text, when e 0 (k 0 ) is orthogonal to e 0 (k 1 ),ζ 0 (k 1 ) coincides with E 0 (k 1 ). However, this orthogonal relation does not always mean the traverse ofζ 0 . Whenζ 0 approaches to E 0 (k 1 ) from the upper side of E 0 (k 1 ) (lim k→k1ζ0 (k) = E 0 (k 1 ) + 0),ζ 0 traverses the band gap. In this appendix, we give the condition when theζ 0 traverses the band gap.
For K(k 1 ) = 0, the 1st order of δ vanishes. By considering the 2nd order of δ, we obtain the relation as |Ψ (0) 0 (k 1 + ∆k)| 2 ∼ δ 2 × C(k 1 ) A(k 1 ) , (C8) Since C(k 1 )/A(k 1 ) is always positive, δ has both the positive and the negative solutions. This indicates that both zeros below and above E 0 coincide at k 1 for K(k 1 ) = 0. Therefore, even for K(k 1 ) = 0,ζ 0 traverses the band gap. The same argument can be applied to the lowest unoccupied bands. When e 1 (k 1 ) contains the occupied eigenvectors of k 0 and their weight is dominant,ζ 1 traverse the band gap and the crosses of the zeros appear. E = E 0 = E 1 = · · · = E m−1 . The following form of the inverse ofḠ can be obtained.
where G A (E) −1 = (EI − A),G Dn−m (E) −1 = (EI − D n−m ), and D n−m = diag(E m , . . . , E n−1 ). From this, the bulk Green function can be expressed as where we assume thatF −1 exists. G bulk (ω = E) can be rewritten as By using the relation we can obtainḠ Therefore, the zeros ofḠ have m degeneracies when H edge has m degenerate eigenvalues. We note that this argument can not be applied for m edge < m orb since Eq. (E7) does not hold.
Appendix F: Proof of the relation with the m-fold degenerate edge states and the zeros inḠ II We show another proof of the relation using the Cauchy interlacing identity. Here, for H real defined in Eq. (E1), we define kth sub matrices M k (i 0 , i 1 , · · · , i k−1 ) that remove k columns and rows from the original Hamiltonian H bulk . The indices of the removal columns and rows are represented by i 0 , i 1 , · · · , i k−1 (i α = i β for α = β). We can take 0 ≤ i α ≤ m − 1 without loss of generality from the periodicity of H bulk . The ith (i = 0, 1, · · · , m − 1) zeros ofḠ (ζ i ) are given by the eigenvalues of the first minor matrix M 1 (i).
If the eigenvalues of M m (0, 1, · · · , m−1) = H edge have m-fold degeneracy (E = E m k = E m k+1 = · · · = E m k+m−1 ), the eigenvalues of M m−1 have (m − 1)-fold degeneracy (E = E m−1 k = E m−1 k+1 = · · · = E m−1 k+m−2 ) from the Cauchy interlacing inequality. By using the Cauchy interlacing inequality iteratively, we can show that one of the eigenvalues of the 1st minor matrix M 1 (i)(i = 0, 1, · · · m − 1) is the same as E. In Fig. 14, we show the schematic representation of this relation for the 4-orbital systems. This indicates that the zeros of the Green functions have m-fold degeneracy, i.e., E =ζ 0 =ζ 1 = · · · =ζ m−1 . Obviously, this argument can not be applied for m edge < m orb since the Cauchy interlacing inequality can be used at most m edge times. | 13,687.8 | 2021-02-09T00:00:00.000 | [
"Physics"
] |
Modelling of cirrus clouds – Part 1 b : Structuring cirrus clouds by dynamics
Abstract. A recently developed and validated bulk microphysics scheme for modelling cirrus clouds (Spichtinger and Gierens, 2009), implemented into the anelastic non-hydrostatic model EULAG is used for investigation of the impact of dynamics on the evolution of an arctic cirrostratus. Sensitivity studies are performed, using variation of large-scale updraughts as well as addition of small-scale temperature fluctuations and wind shear. The results show the importance of sedimentation of ice crystals on cloud evolution. Due to non-linear processes like homogeneous nucleation situations can arise where small changes in the outer parameters have large effects on the resulting cloud structure. In-cloud ice supersaturation is a common feature of all our simulations, and we show that dynamics is as least as important for its appearance than is microphysics.
Introduction
Homogeneous freezing of aqueous solution droplets is considered the main pathway to cirrus formation at temperatures below the supercooling limit of pure water droplets (see e.g.Sassen and Dodd, 1988;Heymsfield and Sabin, 1989;Haag et al., 2003).This process needs large ice supersaturation to commence because the foreign solute molecules impede the formation of the ice crystal lattice until they are sufficiently dissolved in water (Koop, 2004).The number of ice crystals that are formed in a nucleation event depends quite sensitively on the cooling rate, which is in turn determined by the vertical wind speed at the moment when the nucleation threshold is reached (Kärcher and Lohmann, 2002;Kärcher and Ström, 2003;Hoyle et al., 2005).Admittedly, box model studies including mesoscale temperature fluctua-Correspondence to: P. Spichtinger (peter.spichtinger@env.ethz.ch)tions, like the studies mentioned above, only show a part of the complexity, because of lack of spatial dynamics (in particular sedimentation).
Additionally, the presence of heterogeneous ice nuclei in the same airmass can modify the homogeneous cirrus formation substantially (DeMott et al., 1997;Gierens, 2003;Ren and MacKenzie, 2005;Kärcher et al., 2006;Liu et al., 2007).The dynamical influences on cirrus cloud formation and evolution will be considered in the present paper and the effects of heterogeneous ice nuclei in a subsequent one (Spichtinger and Gierens, 2008, hereafter Part 2).For the simulations we use our newly developed cirrus model (Spichtinger and Gierens, 2009, hereafter Part 1a).We will see that the results offer relatively simple explanations to the issue of longlasting substantial supersaturation found within cirrus clouds (see e.g.Comstock et al., 2004;Lee et al., 2004;Ovarlez et al., 2002;Krämer et al., 2008;Peter et al., 2008, and M. Krämer, personal communication).
The structure of this article is as follows.In Sect. 2 we describe the setup of the simulations and refer to a reference simulation of Part 1a in order to set the stage for the following discussions.Then (Sect.3) we first analyse the sensitivity of the simulated cirrus formation to small changes in uplift speed.Eventually we add random fluctuations to the velocity field and add wind shear and study their effects.Several aspects of the results are discussed in Sect. 4. We end with a summary and draw conclusions in Sect. 5.
Setup and reference simulation
We abstain here from a model description.The interested reader will find a detailed description in Part 1a and a short one in Part 2. However, the setup for the model simulations is described and results from the petition for available vapour, lower growth rates and reduced fall speeds.In this case the sedimentation time scale is longer than the cooling time scale in the lower part of the supersaturated layer, such that the threshold for homogeneous nucleation is reached there, i.e. the sedimenting ice crystals are not able to reduce the supersaturation efficiently.New crystal production starts thus lower in the ISSR, forming new peaks in the n c profiles, as shown in fig.6: The new peaks and the peak from the first nucleation event are vertically separated such that the cloud obtains a layered structure, clearly different from the cases with slower uplift.Further nucleation events can occur within the ISSR until eventually ice production (nucleation and growth) and sedimentation have filled the cloud everywhere with enough ice crystals such that further cooling is no longer able to drive the relative humidity above the nucleation threshold.Instead, the relative humid- ity is reduced inside the cirrus cloud close to ice saturation.
Still further increase of w to w = 0.1 m s −1 does not lead to further structural changes, yet even more ice crystals are produced in the primary and secondary nucleation events, and supersaturation is eventually reduced close to saturation very effectively.
In fig.7 vertical profiles of relative humidity for the set of simulations at times t(w = 0.05m s −1 ) = 240 min, t(w = 0.06m s −1 ) = 200 min, t(w = 0.08m s −1 ) = 150 min and t(w = 0.10m s −1 ) = 120 min, respectively, are shown; this impressively demonstrates how slight changes in the vertical velocity can lead to completely different cloud structures.This non-linear behaviour is, of course, a consequence of the Fig. 1.Initial vertical profiles (pressure, temperature, potential temperature and relative humidity wrt ice) for the simulations of a synoptically driven cirrostratus.in order to set the stage for the subsequent sensitivity studies and the 2-D simulations.We use the following setup for our simulations: The whole 2-D model domain (0≤x≤6.3km, 2≤z≤11 km) is lifted up adiabatically with a constant updraught velocity of w=0.05 m s −1 as described in Kärcher (2005).This is equivalent to a constant cooling of the background profile T e with a rate of dT /dt=dT /dz•dz/dt= −g/c p • w=−0.000489K/s.The cooling is adiabatic (i.e.θ e is constant), and is continued for a total simulation time of t s =7 h.In Fig. 1 the initial profiles for the simulations are shown.The profiles are identical in every vertical column.
We use a horizontal resolution of x=100 m with a horizontal extension of 6.3 km, cyclic boundary conditions in xdirection, a vertical resolution of z=10 m and a dynamical time step of t=1 s.Because of the small vertical velocity there is no need of a time splitting for the microphysics The peak is much more pronounced as in the reference case (w=0.05m s −1 , see Part 1a, Fig. 17) due to higher ice crystal densities due to the stronger updraught.Note how the position of the first nucleation peak shifts to lower altitudes relative to the nucleation layer in the upper part of the cloud with time.
scheme.For the background aerosol (H 2 SO 4 ) we use a number density of n a =N a ρ=300 cm −3 with geometric standard deviation σ r =1.4 and geometric mean radius of r m =25 nm for the lognormal distribution.In Fig. 2 the temporal evolution of relative humidity wrt ice, and ice crystal mass and number concentrations, resp., are shown (this is Fig. 17 of Part 1a).
As described in Part 1a, the first nucleation event occurs at t≈60 min.The supersaturation peak of about 154% RHi triggers homogeneous nucleation.Within a few minutes a large amount of ice crystals (N c ρ∼100 L −1 ) is formed.Because of the high supersaturation the ice crystals can grow quickly and deplete a fraction of the water vapour, which reduces the relative humidity.Ice crystals grow and soon start to fall.Therefore the peak of high supersaturation at the top of the ISSR is influenced very weakly by the depletion of the water vapour.The peak is permanently maintained for the whole simulation time and is a permanent source for homogeneous nucleation at the top of the ISSR.The combination of crystal growth and sedimentation causes two effects: On the one hand, the supersaturation is reduced by crystal growth such that the relative humidity cannot reach the threshold for homogeneous nucleation in the lower part of the cloud.This effect might be dubbed "sedimentation induced quenching of nucleation".
On the other hand, the falling ice crystals formed at the top of the cloud are the only sink for the water vapour.Although the continuous homogeneous nucleation events permanently form new ice crystals, these are spread vertically over the whole cloud depth resulting in relatively low number densities.Thus, inside the cloud, ice supersaturation is maintained.Sedimentation obviously plays a crucial role for the development and the structure of the simulated cirrus cloud and for the maintenance of supersaturation within the cloud.
The nucleation event at t∼60 min forms a large number of ice crystals, resulting in a downward moving peak of high ice crystal number densities.In agreement with former studies by Lin et al. (2005); Kärcher (2005) ice supersaturation inside the cirrus is found and maintained by the sedimenting ice crystals depleting the gas phase water vapour such that no homogeneous nucleation can take place within the cirrus.
Studies of dynamical effects
In this section we study the sensitivity of properties of cirrus clouds formed by homogeneous freezing on dynamics.First we consider how cloud properties change with varying vertical wind, which is the large-scale component of the dy- namics.Then we introduce temperature fluctuations that lead to small eddies and thus represent small-scale dynamics.Finally we add wind shear, again a large-scale component of the dynamics.
Variation of updraught velocities
First, we test the sensitivity of the simulation results shown in Part 1a and commemorated in the previous section to variations in the updraught velocity.We choose a set of values in the synoptic range, w=0.06/0.08/0.1 m s −1 .In order to avoid that supersaturation is reached below our usual cloud layer we use shorter simulation periods than above, namely t s =6 h for w=0.06 m s −1 and of t s =4 h for w=0.08/0.1 m s −1 , respectively.For the simulation with w=0.06 m s −1 the structure of the developing cirrus is quite similar to the reference simulation with w=0.05 m s −1 described in Sect. 2 (and in Part 1a).However, in spite of the small increase in w, several differences begin to appear.Since the number of ice crystals that form in a homogeneous nucleation event increases with updraught speed (roughly ∝ w 3/2 ) the peaks in the profiles of n c =N c ρ and q c are much more pronounced in this simulation than in the former one.The profiles are shown in Fig. 3.
Because of the larger crystal number concentration the water vapour is depleted more efficiently than before, in particular where n c peaks.The increased n c implies more competition for the available water vapour, on average smaller crystals and reduced terminal velocities.Thus it takes longer to seed the lower part of the ISSR layer with ice crystals and supersaturation can increase there to higher degrees than in the former simulation.Fig. 4 shows the vertical relative humidity profiles at equivalent time steps, t (w=0.05m s −1 )=240 min and t (w=0.06 m s −1 ) = 200 min, respectively.Fig. 6.Ice crystal number densities for w=0.08 m s −1 at different simulation times.For the early simulation times, the moving peak is visible, while for t=90/120 min, the in-cloud nucleation event in the vertical range 7700≤z≤8400 m appears.The ice crystals, formed in the secondary nucleation event are then dispersed over the lower part of the cloud (t=150/180 m).This leads to a layer structure of the cirrus cloud.
In both simulations enough ice crystals are eventually sedimented over the whole depth such that further nucleation events inside the supersaturated layer are prevented.
By increasing the vertical velocity further to w=0.08 m s −1 , the quality of the results dramatically changes, as the evolution of the variables RHi, n c and IWC shown in Fig. 5 makes evident: The first nucleation event takes place at t≈40 min, producing n c ∼230 L −1 of ice crystals.Now, the chain of events that we just have sketched reappears, but with enhanced intensity: More crystals imply stronger competition for available vapour, lower growth rates and reduced fall speeds.In this case the sedimentation time scale is longer than the cooling time scale in the lower part of the supersaturated layer, such that the threshold for homogeneous nucleation is reached there, i.e. the sedimenting ice crystals are not able to reduce the supersaturation efficiently.New crystal production starts thus lower in the ISSR, forming new peaks in the n c profiles, as shown in Fig. 6: The new peaks and the peak from the first nucleation event are vertically separated such that the cloud obtains a layered structure, clearly different from the cases with slower uplift.Further nucleation events can occur within the ISSR until eventually ice production (nucleation and growth) and sedimentation have filled the cloud everywhere with enough ice crystals such that further cooling is no longer able to drive the relative humidity above the nucleation threshold.Instead, the relative humidity is reduced inside the cirrus cloud close to ice saturation.Still further increase of w to w=0.1 m s −1 does not lead to further structural changes, yet even more ice crystals are produced in the primary and secondary nucleation events, and supersaturation is eventually reduced close to saturation very effectively.In Fig. 7 vertical profiles of relative humidity for the set of simulations at times t (w=0.05m s −1 )=240 min, t (w=0.06 m s −1 )=200 min, t (w=0.08 m s −1 )=150 min and t (w=0.10 m s −1 )=120 min, respectively, are shown; this impressively demonstrates how slight changes in the vertical velocity can lead to completely different cloud structures.This non-linear behaviour is, of course, a consequence of the non-linear behaviour of the nucleation process of which one could say it has only two states: on or off.This is because of the short duration of a typical nucleation event (in order of tens of seconds).
Effect of small-scale fluctuations and 2-D structure
So far the simulations were run without superposed fluctuations, such that there was no horizontal variability in the model (which made the simulations effectively 1-D).This is useful for process studies, for representing the qualitative structure of the formed cirrus clouds.Now we are going to make the simulations more realistic by inclusion of fluctuations of temperature (i.e.perturbation of the wind fields) and wind shear into 2-D simulations with EULAG.Temperature fluctuations on scales of the order hundred kilometres in the upper troposphere are of the order 1 K (Gierens et al., 2007), on the cloud resolving scale we expect smaller variations (see also Bacmeister et al., 1999).The fluctuations are generated by superposition of uncorrelated Gaussian perturbations with a standard deviation of σ T =0.1 K onto the background temperature field in the initialisation.This induces additional fluctuations in the horizontal and vertical wind field (i.e.small eddies).
The resulting perturbations of the vertical wind, w , at simulation time t=60 min are shown in Fig. 8.
We find for the case with background vertical wind w=0.05 m s −1 that the perturbations reach the same order of magnitude as the prescribed large scale updraught.Anyway, the resulting wind speeds are sufficiently small to allow a fixed time step of dt=1 s (cf.Part 1a).
Although the initial temperature perturbations are uncorrelated, the wind field develops coherent structures, small eddies of sizes of few hundred metres which remain persistent throughout the simulation.Nevertheless, the turbulent kinetic energy decreases with time.For illustration we show in Fig. 9 the evolution of the distribution for the component w .
The persistence of the eddies is a consequence of the stable initial background temperature profiles.Below z=5500 m and above z=9800 m (below and above the ISSR) the stability is strong, quantifiable by a Brunt-Vaisala frequency in the range 0.0172≤N ≤0.0182 s −1 (below) and in the range 0.0219≤N≤0.0226s −1 , representing stratospheric air (above).For comparison, a typical value of the Brunt-Vaisala frequency in the upper troposphere is N∼0.01 s −1 and in the lowermost stratosphere is N∼0.02−0.03s −1 , respectively (Birner, 2006).The ISSR layer is only slightly stable (N=0.006s −1 ), and indeed the initial temperature fluctuations can occasionally and locally cause neutral and unstable stratification.In Fig. 10 the vertical profiles of ice crystal number density, ice water content and relative humidity for a background vertical velocity of w=0.05 m s −1 are presented for simulation times t=120/180 min, respectively.By and large the mean profiles are similar in shape to those of the 1-D simulations, but the panels show also that temperature and wind fluctuations have a considerable effect on the results when looked at in more detail.
Effects concerning ice crystal number densities are mainly due to fluctuations in the vertical wind, which is clear from the sensitivity studies above and idealised box model calculations in Part 1a.In some regions the vertical wind is higher than normal, i.e. than the prescribed large scale updraught, hence more ice crystals are produced there.In other regions the vertical wind is reduced or even negative, so less than normal or no crystals are produced there.The effect on the humidity field is complex: In regions with higher w more ice crystals can consume more water, but also the cooling time scale gets shorter, i.e. the saturation vapour pressure is reduced more quickly.In regions with reduced or even negative w it is vice versa.Because of the full 2-D dynamics, the temperature fluctuations do not only affect the vertical velocity field but also introduce small scale circulations, i.e. horizontal motion, which can advect the ice crystals and thus mix ice from high-and low-w regions.
In places where higher than average crystal numbers are produced by chance stronger vertical updraughts, the crystals remain small and obtain low fall speeds.Hence they are predominantly transported horizontally by the eddies.In places, however, where less than average number densities are produced (due to chance weaker uplift, or even downward motion), crystals grow faster and obtain higher fallspeeds.These crystals reach the lower parts of the ISSR and reduce the ice supersaturation there.These sedimenting crystals can thus inhibit homogeneous nucleation further down in the ISSR, an effect one might term "sedimentation induced quenching of nucleation".
The sedimentation induced quenching of nucleation that occurs here as a simple consequence of the small-scale turbulent motions, has important consequences for the cloud evolution.This becomes obvious in the profiles of Fig. 10.The profiles of the 1-D simulations (i.e.no fluctuations) are close to the maxima (n c and IWC) and minima (RHi) of the ranges shown.Hence, additional fluctuations have the tendency to reduce the mean values of cloud ice and crystal number concentrations and to leave more water in the vapour phase, that is, considerable in-cloud supersaturation is maintained for a longer while.P. Spichtinger and K. Gierens: Modelling Cirrus Clouds.Part 1b The sedimentation induced quenching of nucleation that occurs here as a simple consequence of the small-scale turbulent motions, has important consequences for the cloud evolution.This becomes obvious in the profiles of fig.10.The profiles of the 1D simulations (i.e.no fluctuations) are close to the maxima (n c and IWC) and minima (RHi) of the ranges shown.Hence, additional fluctuations have the tendency to reduce the mean values of cloud ice and crystal number concentrations and to leave more water in the vapour phase, that is, considerable in-cloud supersaturation is maintained for a longer while.Fig. 11 shows the ice number density profiles for a case with a slightly stronger updraught, viz.w = 0.08 m s −1 .We see that the quenching of nucleation reaches far down in the ISSR, and almost no ice crystals are present in the zone around 8000 m altitude where the simulation that neglects fluctuations shows more than 200 L −1 (cf.fig.6).The 2D humidity field at t = 120 min for this case is shown in fig.12.The left part of that figure shows the corresponding humidity profile for the 1D simulation using the same colour code.The most clear differences appear (at that stage of the cloud evolution) in the upper part of the cloud, where instead of a relaxed humidity field (i.e.saturation, green) we find ice saturated spots intermixed into a supersaturated (RHi exceeding 130%) background.The rhs panel of that figure shows ice number density maxima (black contours) located exactly in the saturated spots, whereas IWC is distributed much more evenly, because this is controlled by the available water vapour.The cloud in the 2D simulation reaches far further down than the 1D cloud, because less but heavier ice crystals are produced that obtain higher fall speeds.
One might conclude that the previously shown layered Figure 11 shows the ice number density profiles for a case with a slightly stronger updraught, viz.w=0.08 m s −1 .We see that the quenching of nucleation reaches far down in the ISSR, and almost no ice crystals are present in the zone around 8000 m altitude where the simulation that neglects fluctuations shows more than 200 L −1 (cf.Fig. 6).The 2-D humidity field at t=120 min for this case is shown in Fig. 12.The left part of that figure shows the corresponding humidity profile for the 1-D simulation using the same colour code.The most clear differences appear (at that stage of the cloud evolution) in the upper part of the cloud, where instead of a relaxed humidity field (i.e.saturation, green) we find ice saturated spots intermixed into a supersaturated (RHi exceeding 130%) background.The rhs panel of that figure shows ice number density maxima (black contours) located exactly in the saturated spots, whereas IWC (purple contours) is distributed much more evenly, because this is controlled by the available water vapour.The cloud in the 2-D simulation reaches far further down than the 1-D cloud, because less but heavier ice crystals are produced that obtain higher fall speeds.
One might conclude that the previously shown layered cloud structure (cf.Fig. 11) is only a 1-D artifact.However, this is not true: When the prescribed vertical velocity is enhanced again to w=0.1 m s −1 the in-cloud nucleation events (which cause the layered structure) reappear, although slightly weaker than in the 1-D case.Of course, the basic random mechanism of having spots with lower than average nucleation rate, causing less but faster falling crystals, is still in effect.However, this time nucleation is not quenched in the middle and lower parts of the cloud because the falling crystals come too late.Before they arrive from above, the cooling has already driven the supersaturation over the threshold for homogeneous nucleation; hence in-cloud nucleation reappears.This stresses the point that dynamics is similarly important as microphysics in terms of structuring cirrus clouds and it is noteworthy to observe that even small scale dynamics and small changes in the background dynamics can completely change the vertical structures of cirrus clouds, by disturbing the fragile balance of the processes growth, cooling and sedimentation.
Effects of wind shear
Now we add horizontal wind shear of du/dz=10 −3 s −1 , i.e. we start with u(z=2 km)=0 m s −1 and end up with u(z=11 km)=9 m s −1 .The chosen shear value is weak, but in the range of what is observed in the upper troposphere (see, e.g., the statistics presented by Dürbeck and Gerz, 1996;Birner, 2006).Although the shear is weak it has a big effect on the cloud structure as seen when comparing Fig. 8 to Fig. 13 which show the perturbation of the vertical wind.Evidently, shear induces larger coherent structures in the turbulent wind field, which is a well-known effect (e.g.Gerz, 1991).The effect can be explained by the superposition of the vorticity from the random motions with the vorticity induced by the shear.The superposition enhances one vorticity direction and weakens or even cancels the opposite, which yields a rectification of the vortices.Superposition of rectified vortices yields larger vortices, that is, the coherent structures that we see.The figure shows also that the amplitudes of the wind perturbations are smaller in the sheared case than in the nonsheared one.This is probably due to conservation of kinetic energy which is redistributed from the small eddies to the coherent structures when shear is switched on.
The changed dynamics has an effect on the microphysical evolution in the following way.First, the coherent structures can be expected to enhance horizontal mixing, and second, the smaller amplitudes of w leave less spots than before in the ISSR with strongly reduced ice formation.Hence sedimentation induced quenching of nucleation is expected to be weaker in a shear than in a no-shear case.The effects of these changes can indeed be seen in the simulations.
In Fig. 15.Ice crystal number densities for a vertical updraught of w = 0.08m s −1 with fluctuations and wind shear (red dots: all grid points, green line: mean value) and in the corresponding 1D simulation (blue line) for t = 100 min (top) and 120 min (bottom), respectively.In contrast to the simulations including only temperature fluctuations (fig.11), here the incloud nucleation events at the cloud bottom occur; however, they are weaker than in the pure 1D simulations.
scales than waves.Additionally, the box model simulations lack the spatial component, hence neither the horizontal nor the vertical transport and mixing of ice crystals with their subsequent effects on in-cloud supersaturation and nucleation can be modelled.These effects, namely (1) the weakening of ice formation in the horizontal neighbourhood of spots that just experienced a nucleation burst (because it was at the high end of the w distribution) and ( 2) the sedimentation induced nucleation quenching (because a spot above was at the low end of the w distribution) are important for structure formation of cirrus clouds, as we have seen.
From the foregoing analysis we have seen how important sedimentation is for the evolution of a cloud and its humidity field.In order to make the effect of sedimentation still clearer we repeated some of our 2D simulations with w = 0.05m s −1 using the same initialisation without/with fluctuations and wind shear, respectively, but we switched off the sedimentation.In this case a relatively short simulation time is sufficient, and we choose a simulation time of 84 min (i.e. a fifth of the original time).For comparison of the three simulations with each other and with the corresponding cases including sedimentation we present the statistics of the ice crystal number concentration, which we counted in all grid cells and at every 2 minutes (all taken together).The resulting distributions are shown in fig.19.
The 1D run without sedimentation produces a sharp peak around ∼ 100 L −1 , a value that we expect from the validation runs of Part 1a (temperature for nucleation events near 215 K).The temperature and wind fluctuations in the 2D run lead to strong broadening of the peak which is then rather a bulge than a peak.The most probable value of n c = N c ρ is slightly shifted to a lower value.The mechanisms that w=0.08 m s −1 the lost 1-D structure, that is, the ice formation in the lower cloud levels, can be rediscovered, as Fig. 15 shows.
However, as the motions in the cloud are still random, the reappeared in-cloud nucleation events only occur at certain spots.An example of such an event is presented in Fig. 16 which shows the formation of a small nucleation region at the bottom of the cloud (in the altitude range 8300≤z≤8600 m).
This nucleation event occurs in a strong upwind zone of a small eddy.In high time resolution (not shown) it can be observed that much more ice crystals form there than in the vicinity on the same level.Hence a strong gradient in ice crystal number density appears, while the horizontal variation of supersaturation is small.The crystals on this level grow thus with different rates, slowly in the spot of the nucleation burst, and faster elsewhere.This causes horizontal variations of terminal velocities, which is clearly seen in the time evolution shown in the figure.Wind shear shifts the falling crystals horizontally, eventually producing fall streaks.The time evolution of the 2-D structure (IWC , N, w ) of this event is shown in Fig. 16.
In order to test whether such an event as just described occurs by chance or whether the dynamical effects mentioned before promote them we repeated the simulation with the same setup but with a different set of random numbers for the initialisation of the perturbations.In this case we got a similar in-cloud nucleation event, albeit a weaker one because the generating eddy was weaker.Hence it turned out that it is probably indeed the dynamical effects that promote such in-cloud nucleation bursts, and not simply random superposition of upward motions.
The strong depositional growth of the ice crystals in the nucleation burst leads to strong latent heat release cause these changes from the 1D to the 2D results have been explained above and need not be repeated.Inclusion of wind shear damps the fluctuations.Hence, there is still some broadening of the peak but less so than without the shear.Since sedimentation was switched off, sedimentation induced quenching of nucleation cannot occur here.Hence the distributions display the effect of the horizontal mixing of the ice crystals alone.First, as expected, the high tail of the w distribution leads to a high tail in the number density distributions, a larger one when wind shear is switched off and vice versa.But more interesting is the low tail, which displays the effect of the horizontal mixing.Both with and without wind shear there are much more grid boxes (and time steps) where low crystal number densities are found than in the case without fluctuations.Now, we present the same kind of n c -statistics for the simulations with sedimentation included.The values n c are counted every 10 minutes.The result is displayed in fig.20.We see clearly that the sedimentation process totally changes the distributions of n c with a strong shift of the frequencies of occurrence towards smaller ice crystal number densities.This is an important feature, which has to be taken into account for the interpretation of measurements: A priori, it is not clear, if the measured ice crystals were formed in situ or if they were sedimenting from formation layers above, changing the number density due to the sedimentation process.Sedimentation is a main process for structuring cirrus clouds in these simulations which were triggered by synoptic scale updraughts.For stronger updraughts (e.g.mesoscale waves or convective events) the picture might change because then much higher number concentrations of ice crystals are produced, with smaller and slower falling crystals.Under such conditions, when the sedimentation time scale exceeds the Time evolution (from top to bottom: t= 80/90/100/110/120/130 min) of the ice water content (purple, in mg m −3 , IWC =5mg m −3 ) and the ice crystal number density (black, in L −1 n c =50L −1 ) in case of a constant updraught of w=0.08 m s −1 including initial temperature fluctuations and moderate wind shear.Vertical velocity perturbations w are indicated by colours.approximately in their formation region.This heat in turn triggers additional updraughts with vertical velocities in the order of 0.05−0.1 m s −1 (cf.Fig. 16) which form the tails in the frequency distribution of w that we show in Fig. 17 for the time range 100≤t≤150 min in 10 min intervals.
Comparing the results from the differing dynamical setups we can see that even small scale dynamics strongly affects the structure of a cirrus cloud whose formation was initially driven by large scale dynamics.This emphasises that simulating cirrus clouds is a multi-scale problem.The processes which are important for the formation and evolution of cirrus clouds act on different scales (e.g.cloud microphysics, small scale circulations, synoptic scale in our setup) and the evolving structure of cloud results from a superposition of processes acting on widely varying scales.Due to latent heat there is also a microphysics feedback on the local dynamics.
RHi statistics
A main feature of our simulations is the occurrence of (persistent) supersaturation inside the cirrus clouds.For another view on this phenomenon we have produced statistics of relative humidity in the simulations.For this we used the RHi at every grid point and at every 10 min.
In Fig. 18 we compare the RHi-statistics for the 1-D and 2-D (with and without wind shear) simulations.
First we can note a cut-off in all distributions at around 160% relative humidity which is about the threshold for homogeneous nucleation at the cloud top temperatures.The finding of a cut-off is consistent with observations from the INCA campaign (Haag et al., 2003).Otherwise the figure shows that there is a tendency of all simulations to approach ice saturation after a while, as one expects.This tendency is strongest in the 1-D simulations and weakest in the simulation without wind shear, that is, when the wind fluctuations have the strongest effect.Inclusion of wind shear enhances the tendency to approach ice saturation within the cloud, but the tendency is considerably weaker than in the 1-D case.These results show clearly that in-cloud supersaturation is not only an effect of microphysics, which is treated in the 1-D simulation as well as in the more realistic 2-D simulations.Cloud dynamics and persistent small-scale fluctuations especially in the wind field are at least as important for an explanation of this effect, if not more.
Discussion
Current concepts on cirrus formation imply that mesoscale velocity fluctuations explain the high ice number densities that are often observed (e.g.Hoyle et al., 2005;Haag and Kärcher, 2004).In contrast, here we find that wind fluctuations have the tendency to reduce crystal numbers.This seeming contradiction is easily explained once the difference between mesoscale fluctuations of vertical wind (gravity waves) and our small-scale fluctuations is recognised.The latter act on short time scales of seconds to minutes which only allows for small amplitudes in w-fluctuations.Gravity waves cause larger amplitudes in w and act on longer time scales.Therefore, the largest vertical velocity fluctuations will dominate the nucleation process and produce the highest ice crystal number densities.The results may also contradict partially those of Kay et al. (2006) who estimated from box-model results that vertical velocity fluctuations affect the statistics of cloud optical thicknesses only when fluctuation time scales are shorter than fallout time scales, but longer than ice crystal growth rates.The "fluctuations" of Kay et al. (2006) are, however, waves whereas we use random perturbations of the wind field which act on shorter time scales than waves.Additionally, the box model simulations lack the spatial component, hence neither the horizontal nor the vertical transport and mixing of ice crystals with their subsequent effects on in-cloud supersaturation and nucleation can be modelled.These effects, namely (1) the weakening of ice formation in the horizontal neighbourhood of spots that just experienced a nucleation burst (because it was at the high end of the w distribution) and (2) the sedimentation induced nucleation quenching (because a spot above was at the low end of the w distribution) are important for structure formation of cirrus clouds, as we have seen.
From the foregoing analysis we have seen how important sedimentation is for the evolution of a cloud and its humidity field.In order to make the effect of sedimentation still clearer we repeated some of our 2-D simulations with w=0.05 m s −1 using the same initialisation without/with fluctuations and wind shear, respectively, but we switched off the sedimentation.In this case a relatively short simulation time is sufficient, and we choose a simulation time of 84 min (i.e. a fifth of the original time).For comparison of the three simulations with each other and with the corresponding cases including sedimentation we present the statistics of the ice crystal number concentration, which we counted in all grid cells and at every 2 min (all taken together).The resulting distributions are shown in Fig. 19.
The 1-D run without sedimentation produces a sharp peak around ∼100 L −1 , a value that we expect from the validation runs of Part 1a (temperature for nucleation events near 215 K).The temperature and wind fluctuations in the 2-D run lead to strong broadening of the peak which is then rather a bulge than a peak.The most probable value of n c =N c ρ is slightly shifted to a lower value.The mechanisms that cause these changes from the 1-D to the 2-D results have been explained above and need not be repeated.Inclusion of wind shear damps the fluctuations.Hence, there is still some broadening of the peak but less so than without the shear.Since sedimentation was switched off, sedimentation induced quenching of nucleation cannot occur here.Hence the distributions display the effect of the horizontal mixing of the ice crystals alone.First, as expected, the high tail of the w distribution leads to a high tail in the number density distributions, a larger one when wind shear is switched off and vice versa.But more interesting is the low tail, which displays the effect of the horizontal mixing.Both with and without wind shear there are much more grid boxes (and time steps) where low crystal number densities are found than in the case without fluctuations.Now, we present the same kind of n c -statistics for the simulations with sedimentation included.The values n c are counted every 10 min.The result is displayed in Fig. 20.
We see clearly that the sedimentation process totally changes the distributions of n c with a strong shift of the frequencies of occurrence towards smaller ice crystal number densities.This is an important feature, which has to be taken into account for the interpretation of measurements: A priori, it is not clear, if the measured ice crystals were formed in situ or if they were sedimenting from formation layers above, changing the number density due to the sedimentation process.Sedimentation is a main process for structuring cirrus clouds in these simulations which were triggered by synoptic scale updraughts.For stronger updraughts (e.g.mesoscale waves or convective events) the picture might change because then much higher number concentrations of ice crystals are produced, with smaller and slower falling crystals.Under such conditions, when the sedimentation time scale exceeds the growth time scale, sedimentation can be less important than in the cases presented here.
Box model simulations usually cannot treat sedimentation (recent developments excepted): Although the ice flux out of the lower lid of the box can easily be computed, the ice flux into the box at the upper lid cannot because this requires knowledge of the ice mass and number concentration above (i.e.outside) the box, which is not given.From this point of view it seems that box model studies of clouds should be constrained to the formation and early evolution phase or other conditions when sedimentation is still unimportant (e.g. when only small ice crystals are present), notwithstanding applications like those of Kay et al. (2006).
Many of the figures shown in this paper show profiles of relative humidity with double peaks.These are a consequence of the ongoing cooling of the ISSR/cloud layer interacting with the sedimentation of the ice crystals.The upper peak is always at the ISSR/cloud top.Ice crystals form there, grow and start to fall.Then supersaturation increases again (due to cooling) because the sink for excess vapour has fallen away.On reaching the threshold for homogeneous nucleation again new crystals form, and so on.The mid-cloud peaks of RHi are caused by the ongoing cooling in combination with sedimentation, as well.However, crystals sedimenting from above into the mid-cloud level make the timing and profile shaping of these peaks more complicated than that of the peaks at cloud top.P. We found that the qualitative structure of cirrus clouds driven by synoptic upward motions can be represented quite well using a 1-D approach.However, small scale fluctuations can affect the cloud structure substantially.There are some kind of "off equilibrium" situations (because of the non-linear processes involved, in particular nucleation) when small causes can lead to big effects, that is, change the cloud evolution completely, leading to totally different structure of the cloud.In these cases a 2-D approach using temperature fluctuations is needed to get deeper insight into the dominant processes that act in structuring the cirrus clouds.
Conclusions
We have used the newly developed and validated microphysics scheme (Spichtinger and Gierens, 2009) implemented into the the anelastic, non-hydrostatic model EULAG (Smolarkiewicz and Margolin, 1997) to investigate the sensitivity of cirrus cloud evolution to variations in largescale and small-scale dynamics.As a test object we reused the artic cirrostratus that we already used for model validation in Part 1a.In the sensitivity studies we varied the overall updraught velocity in the synoptic range w=0.05/0.06/0.08/0.1 m s −1 .In additional simulations we superposed inital temperature fluctuations leading to smallscale eddies and furthermore we included a moderate wind shear.
These studies led to the following conclusions: -Sedimentation is of utmost importance in the evolution of the cloud structure and the in-cloud humidity field; sedimenting ice crystals can quench in-cloud nucleation; -The almost binary behaviour of the nucleation process (on or off), that is, the existence of relatively sharp supersaturation thresholds (or supercooling thresholds) can lead to dramatic changes of cloud structures as a response to weak or moderate changes in the overall situation (e.g.uplift speed) and to the interaction between the local eddies and the large-scale wind shear; -Persistent in-cloud supersaturation is found in all our simulations.It is not only an effect of microphysics but at least as important is cloud dynamics on both the large and the small scale; -Cirrus clouds are good examples for a multi-scale problem.Microphysical processes act on the smallest scales, but they are driven by the external meso-and large-scale wind fields.Cloud-internal dynamics and the smallscale fluctuations modify the process rates locally, and are in turn affected by latent heat exchanges with the cloudy air.The superposition of these processes and the lasting shifting in their relative importance is crucially responsible for the structural evolution of the cirrus cloud.
In future applications the model will be used for investigating the effects of the competition of different nucleation mechanisms (Spichtinger and Gierens, 2008) and for investigations of the impact of orographic gravitiy waves on the formation and evolution of cirrus clouds (as in Spichtinger and Dörnbrack, 2006).
Fig. 3 .
Fig. 3. Profiles of ice crystal number densities at various simulation times in the simulation with constant uplift of w = 0.06 m s −1 .The peak is much more pronounced as in the reference case (w = 0.05 m s −1 , see Part 1a, fig.17) due to higher ice crystal densities due to the stronger updraught.Note how the position of the first nucleation peak shifts to lower altitudes relative to the nucleation layer in the upper part of the cloud with time.
Fig. 2 .
Fig. 2. Time evolution of the simulated cirrostratus lifted with a constant vertical velocity of w=0.05 m s −1 .The colours indicate relative humidity wrt ice, while lines indicate ice crystal number densities (black, in L −1 , n c =10L −1 ) and ice water content (purple, in mg m −3 , IWC =1mg m −3 ).
Fig. 3 .
Fig. 3. Profiles of ice crystal number densities at various simulation times in the simulation with constant uplift of w=0.06 m s −1 .The peak is much more pronounced as in the reference case (w=0.05m s −1 , see Part 1a, Fig.17) due to higher ice crystal densities due to the stronger updraught.Note how the position of the first nucleation peak shifts to lower altitudes relative to the nucleation layer in the upper part of the cloud with time.
Fig. 5 .
Fig. 5. Time evolution of the simulated cirrostratus lifted with a constant vertical velocity of w=0.08 m s −1 .The colours indicate relative humidity wrt ice, while lines indicate ice crystal number densities (black, in L −1 , n c =20L −1 ) and ice water content (purple, in mg m −3 , IWC =5mg m −3 ).
Fig. 9 .
Fig. 9. Evolution of the vertical velocity perturbation distribution for an constant updraught of w=0.05 m s −1 over a simulation time of 60≤t≤300 s.
Fig. 10 .
Fig. 10.Ice crystal number density (left), ice water content (middle) and relative humidity wrt ice (right) at t = 120 min (top panel) and at t = 240 min (bottom panel) for a constant updraught of w = 0.05 m s −1 and additional temperature fluctuations.The values at all grid points are indicated by the red dots, the mean value is represented by the green line and the blue line indicates the values of the corresponding 1D simulation.
Fig. 11 .Fig. 10 .
Fig. 11.Ice crystal number densities for a vertical updraught of w = 0.08 m s −1 with fluctuations (red dots: all grid points, green line: mean value) and in the corresponding 1D simulation (blue line) for t = 100 min (top) and 120 min (bottom), respectively.In the simulation including fluctuations the inside cloud nucleation events are missing at the bottom of the cloud, compared to the 1D case.
Fig. 11 .Fig. 12 .
Fig.11.Ice crystal number densities for a vertical updraught of w=0.08 m s −1 with fluctuations (red dots: all grid points, green line: mean value) and in the corresponding 1-D simulation (blue line) for t=100 min (top) and 120 min (bottom), respectively.In the simulation including fluctuations the inside cloud nucleation events are missing at the bottom of the cloud, compared to the 1-D case.
Fig. 13 .
Fig. 13.Vertical velocity perturbation w at simulation time t=60 min for a case with large-scale updraught of w=0.05 m s −1 and wind shear of 10 −3 s −1 .
Fig. 14 .
Fig. 14.Ice crystal number density (left), ice water content (middle) and relative humidity wrt ice (right) at = 120 min (top panel) and at t = 240 min (bottom panel) for a constant updraught of w = 0.05 m s −1 , additional temperature fluctuations and wind shear.The values at all grid points are indicated by the red dots, the mean value is represented by the green line and the blue line indicates the values of the corresponding 1D simulation.
Fig. 14 .
Fig. 14.Ice crystal number density (left), ice water content (middle) and relative humidity ice (right) at t=120 min (top panel) and at t=240 min (bottom panel) for a constant updraught of w=0.05 m s −1 , additional temperature fluctuations and wind shear.The values at all grid points are indicated by the red dots, the mean value is represented by the green line and the blue line indicates the values of the corresponding 1-D simulation.
Fig. 15 .Fig. 17 .
Fig. 15.Ice crystal number densities for a vertical updraught of w=0.08 m s −1 with fluctuations and wind shear (red dots: all grid points, green line: mean value) and in the corresponding 1-D simulation (blue line) for t=100 min (top) and 120 min (bottom), respectively.In contrast to the simulations including only temperature fluctuations (Fig.11), here the incloud nucleation events at the cloud bottom occur; however, they are weaker than in the pure 1-D simulations.
Fig
Fig. 16.Time evolution (from top to bottom: t= 80/90/100/110/120/130 min) of the ice water content (purple, in mg m −3 , IWC =5mg m −3 ) and the ice crystal number density (black, in L −1 n c =50L −1 ) in case of a constant updraught of w=0.08 m s −1 including initial temperature fluctuations and moderate wind shear.Vertical velocity perturbations w are indicated by colours.
Fig. 17 .
Fig. 17.Time evolution of the distribution of the vertical velocity perturbation w in case of a constant large scale updraught of w=0.08 m s −1 including initial temperature fluctuations and a moderate wind shear (du/dz=10 −3 s −1 ).
Fig. 18 .
Fig. 18.Statistics of the relative humidity wrt ice for the simulations with different prescribed vertical velocities, from left to right: w = 0.05m s −1 , w = 0.06m s −1 , w = 0.08m s −1 , w = 0.1m s −1 , respectively.Here, all data from the whole simulation are collected, respectively.Red lines indicate the 1D simulations, green lines denote 2D simulations with temperature fluctuations and blue lines are 2D simulations with temperature fluctuations and wind shear.
Fig. 19 .
Fig. 19.Ice crystal number densities for three reference simulations (pure 1D, temperature fluctuations, temperature fluctuations plus wind shear) without sedimentation.The whole model domain is lifted up with a constant vertical velocity of w = 0.05 m s −1 .Again, all data for the whole simulation are collected, respectively.The frequency of occurrence of the ice crystal number densities is indicated by the coloured lines (red: pure 1D, green: temperature fluctuation, blue: temperature fluctuations plus wind shear).The impact of the horizontal and vertical small scale motions, triggered by the temperature fluctuations is clearly visible.
Fig. 20 .
Fig. 20.Ice crystal number densities for three reference simulations (pure 1D, temperature fluctuations, temperature fluctuations plus wind shear) with sedimentation.The whole model domain is lifted up with a constant vertical velocity of w = 0.05 m s −1 .Again, all data for the whole simulation are collected, respectively.The frequency of occurrence of the ice crystal number densities is indicated by the coloured lines (red: pure 1D, green: temperature fluctuation, blue: temperature fluctuations plus wind shear).In comparison with fig.19, sedimentation influences already the pure 1D case dispersing the ice crystal number density over the whole cloud which leads to a shift in the frequency of occurrence towards smaller densities.
Fig. 18 .
Fig. 18.Statistics of the relative humidity wrt ice for the simulations with different prescribed vertical velocities, from left to right: w=0.05 m s −1 , w=0.06 m s −1 , w=0.08 m s −1 , w=0.1 m s −1 , respectively.Here, all data from the whole simulation are collected, respectively.Red lines indicate the 1-D simulations, green lines denote 2-D simulations with temperature fluctuations and blue lines are 2-D simulations with temperature fluctuations and wind shear.
Fig. 19 .
Fig.19.Ice crystal number densities for three reference simulations (pure 1-D, temperature fluctuations, temperature fluctuations plus wind shear) without sedimentation.The whole model domain is lifted up with a constant vertical velocity of w=0.05 m s −1 .Again, all data for the whole simulation are collected, respectively.The frequency of occurrence of the ice crystal number densities is indicated by the coloured lines (red: pure 1-D, green: temperature fluctuation, blue: temperature fluctuations plus wind shear).The impact of the horizontal and vertical small scale motions, triggered by the temperature fluctuations is clearly visible.
Fig. 20 .
Fig. 20.Ice crystal number densities for three reference simulations (pure 1-D, temperature fluctuations, temperature fluctuations plus wind shear) with sedimentation.The whole model domain is lifted up with a constant vertical velocity of w=0.05 m s −1 .Again, all data for the whole simulation are collected, respectively.The frequency of occurrence of the ice crystal number densities is indicated by the coloured lines (red: pure 1-D, green: temperature fluctuation, blue: temperature fluctuations plus wind shear).In comparison with Fig.19, sedimentation influences already the pure 1-D case dispersing the ice crystal number density over the whole cloud which leads to a shift in the frequency of occurrence towards smaller densities. | 11,722.2 | 2009-01-28T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Mechanism of mitochondrial permeability transition pore induction and damage in the pancreas: inhibition prevents acute pancreatitis by protecting production of ATP
Objective Acute pancreatitis is caused by toxins that induce acinar cell calcium overload, zymogen activation, cytokine release and cell death, yet is without specific drug therapy. Mitochondrial dysfunction has been implicated but the mechanism not established. Design We investigated the mechanism of induction and consequences of the mitochondrial permeability transition pore (MPTP) in the pancreas using cell biological methods including confocal microscopy, patch clamp technology and multiple clinically representative disease models. Effects of genetic and pharmacological inhibition of the MPTP were examined in isolated murine and human pancreatic acinar cells, and in hyperstimulation, bile acid, alcoholic and choline-deficient, ethionine-supplemented acute pancreatitis. Results MPTP opening was mediated by toxin-induced inositol trisphosphate and ryanodine receptor calcium channel release, and resulted in diminished ATP production, leading to impaired calcium clearance, defective autophagy, zymogen activation, cytokine production, phosphoglycerate mutase 5 activation and necrosis, which was prevented by intracellular ATP supplementation. When MPTP opening was inhibited genetically or pharmacologically, all biochemical, immunological and histopathological responses of acute pancreatitis in all four models were reduced or abolished. Conclusions This work demonstrates the mechanism and consequences of MPTP opening to be fundamental to multiple forms of acute pancreatitis and validates the MPTP as a drug target for this disease.
INTRODUCTION
Pancreatic necrosis, systemic inflammatory response syndrome, multiple organ failure and sepsis are characteristic of severe acute pancreatitis (AP), which results in death of one in four patients and is without specific drug therapy. 1 2 As the pancreatic acinar cell is an initial site of injury, 1 3 commonly initiated by bile or ethanol excess, investigation of its behaviour in response to toxins that induce AP may identify new drug targets. This cell typifies non-excitable exocrine cells with a high secretory turnover heavily dependent on mitochondrial production of ATP. 4 While zymogen activation has
Significance of this study
What is already known on this subject? ▸ Toxins that induce acute pancreatitis cause pancreatic acinar cell calcium overload, intracellular zymogen activation, cytokine release and cell death. ▸ Mitochondrial matrix calcium overload induces opening of the mitochondrial permeability transition pore (MPTP), a non-specific inner mitochondrial membrane channel that causes loss of mitochondrial membrane potential essential to ATP production. ▸ Calcium-induced opening of the MPTP occurs in acute pancreatitis, but the mechanism and consequences of this process have not been established.
What are the new findings?
▸ Toxins that cause acute pancreatitis induce the MPTP in isolated murine and human pancreatic acinar cells via second messenger receptor calcium channel release and mitochondrial calcium but not reactive oxygen species overload, resulting in mitochondrial depolarisation, impaired ATP production and necrosis. ▸ Pancreatitis toxin-induced MPTP opening causes activation of phosphoglycerate mutase 5, which executes necrosis, and retarded autophagy, which causes accumulation of activated digestive enzymes. ▸ Specific genetic or pharmacological inhibition of MPTP opening in a diverse range of clinically relevant mouse models dramatically improves all local pancreatic, systemic and distant pulmonary pathological responses.
long been considered the principal mechanism of injury, 1 3 mitochondrial dysfunction has been implicated increasingly, [5][6][7][8][9][10][11] presumed consequent upon intracellular calcium overload induced by toxins that include bile acids and ethanol metabolites. 6 11 12 Mitochondrial uptake of calcium drives normal cellular bioenergetics, but high calcium loads induce increasingly drastic responses culminating in necrosis. 13 Mitochondrial matrix calcium overload leads to opening of the mitochondrial permeability transition pore (MPTP), a non-specific channel that forms in the inner mitochondrial membrane allowing passage of particles under 1500 Da, causing loss of mitochondrial membrane potential (Δψ m ) essential to ATP production; 13 recent evidence implicates F 0 F 1 ATP synthase in MPTP formation. 14 15 MPTP opening is physiological in low conductance mode releasing calcium and reactive oxygen species (ROS) to match metabolism with workload, 16 but pathological in high conductance mode compromising ATP production and inducing cell death; 13 both functions are regulated by the mitochondrial matrix protein peptidyl-prolyl cis-trans isomerase (PPI, cyclophilin) D (also known as cyclophilin F). 17 Previous limited studies found that MPTP opening can occur in pancreatitis; 5 9 18 we found cyclophilin D knockout to ameliorate AP induced by ethanol and cyclosporine, 9 but in a model with no clinical correlate. How the MPTP is induced in pancreatic acinar cells has not been determined, nor what role intracellular calcium might play and whether there are downstream consequences in AP. Therefore, we sought to undertake a novel, wide ranging and detailed study to determine the mechanism and significance of MPTP opening in AP.
We report that MPTP opening is critical to all forms of pancreatitis investigated, causing diminished ATP production, defective autophagy, zymogen activation, cytokine release, phosphoglycerate mutase family member 5 (PGAM5) activation 19 and necrosis. Pharmacological or genetic MPTP inhibition in murine or human pancreatic acinar cells protected Δψ m , ATP production, autophagy and prevented necrosis from pancreatitis toxin-induced calcium release via inositol trisphosphate and ryanodine (IP 3 R, RyR) calcium channels. This mechanism was confirmed consistently across four dissimilar, clinically relevant, in vivo models of AP. All characteristic local and systemic pathological responses were greatly reduced or abolished in cyclophilin D knockout mice (Ppif −/− ) 20 and wild type (Wt) mice treated with MPTP inhibitors, confirming that MPTP opening is a fundamental pathological mechanism in AP.
METHODS Animals
Cyclophilin D-deficient mice were generated by targeted disruption of the Ppif gene 20 and provided by Dr Derek Yellon (University College London, UK) and Dr Michael A Forte (Oregon Health and Sciences University, USA). Transgenic green fluorescent protein (GFP)-LC3 mice 21 were a gift from Dr N Mizushima (Tokyo Medical and Dental University and RIKEN BioResourse Center, Japan). All experiments comparing Wt and Ppif −/− were conducted using C57BL/6 mice; experiments using toxins on Wt cells alone used CD1 mice.
Preparation of isolated pancreatic acinar cells and mitochondria
Normal human pancreata samples (∼1 cm×1 cm×1 mm, not devascularised during surgery before removal) were placed in a solution of (mM): 140 NaCl, 4.7 KCl, 1.13 MgCl 2 , 1 CaCl 2 , 10 D-glucose, 10 HEPES (adjusted to pH 7.35 using NaOH) at 4°C; sampling to start of cell isolation (or slicing below) was <10 min in every case. All experiments were at room temperature (23-25°C, except where stated) and cells used within 4 h of isolation. Isolation of murine 7 and human 22 pancreatic acinar cells was as described. Isolated murine cells were incubated at 37°C in 199 medium with or without 10 nM cholecystokinin-8 (CCK-8) or 500 mM taurolithocholic acid sulfate (TLCS); drug pretreatment was applied for 30 min. Mitochondria were isolated from mouse pancreata as described. 23 Confocal fluorescence microscopy Cells and tissue were viewed using Zeiss LSM510 and LSM710 systems (Carl Zeiss Jena GmbH), typically with a 63x C-Apochromat water immersion objective (aperture at 1.2) after loading with Fluo-4 (3 mM; excitation 488 nm, emission 505 nm) and tetramethyl rhodamine methyl ester (50 nM; excitation 543 nm, emission >550 nm) to assess cytosolic calcium and mitochondrial membrane potential, with simultaneous measurements of NAD(P)H autofluorescence (excitation 351 nm, emission 385-470 nm) to assess mitochondrial metabolism. The protonophore carbonyl cyanide m-chlorophenyl hydrazone (CCCP) was applied to dissipate Δψ m as a positive control. ROS were assessed after loading with 5-chloromethyl-2,7dichlorodihydrofluorescein diacetate acetyl ester (4.5 μM; excitation 488 nm, emission 505-550 nm) for 10 min at 37°C. 12 R110-aspartic acid amide (20 μM; excitation 488 nm, emission >505 nm) and propidium iodide (PI 1 mM; excitation 488 nm, emission 630-693 nm) were used to assess general caspase activation and plasma membrane rupture. Thirty random fields of view were taken of each isolate and the percentage number of cells displaying caspase activity or PI uptake counted per field, averaged across fields as mean±SEM (minimum three mice/ group). PI was used in patched cells (below), as was Mg Green (4 mM, excitation 476 nm, emission 500-550 nm), to monitor intracellular ATP concentrations. 6 Murine pancreas lobules were incubated with/without 500 μM TLCS and stained with Sytox Orange 24 (500 nM, excitation 543 nm, emission >560 nm), which like PI only stains cells with ruptured cell membranes: uptake was determined every two hours by % area tissue stained.
Patch-clamp current recording
The whole-cell configuration was used to record ICl Ca from single cells while recording cytosolic calcium (Fluo-4). 25 Patch-pipettes were pulled from borosilicate glass capillaries Significance of this study How might it impact on clinical practice in the foreseeable future?
▸ The demonstration of identical mechanisms in human as in murine pancreatic acinar cells indicates that the findings that establish MPTP opening to be of critical importance in experimental acute pancreatitis are likely to be of major importance in clinical acute pancreatitis. ▸ This study has shown the effectiveness in experimental acute pancreatitis of several drugs that target molecules that regulate the MPTP and that could be developed for the treatment of clinical acute pancreatitis. ▸ Translational drug discovery and development programmes that target the MPTP could provide specific, effective treatments for clinical acute pancreatitis.
(Harvard Apparatus) with resistance of 2-3 MΩ when filled with a solution of (mM):
Statistical analysis
Data are presented as mean±SEM. Analysis was by two-tailed Student's t test or χ 2 test, with p values <0.05 considered significant.
Study approval
For preparation of pancreas tissue slices and lobules, measurement of isolated mitochondrial responses, electron microscopy, immunofluorescence, further assessment of disease parameters in experimental AP and details of chemicals and reagents, see online supplementary materials.
Pharmacological MPTP inhibition prevents pancreatitis toxin-induced mitochondrial impairment and necrotic cell death pathway activation
First, we tested the effect of known MPTP inhibitors on toxin-induced changes in pancreatic acinar cells using cyclosporin A (CYA), which binds to and inhibits cyclophilin D, or bongkrekic acid (BKA), which favours the closed conformation of adenine nucleotide translocase. 29 We used murine cells hyperstimulated with CCK-8 18 30 to induce cytosolic and mitochondrial calcium overload, 6 12 and found loss of Δψ m 7 18 causing decreases in NAD(P)H (figure 1A), reflecting declining ATP production. 4 The bile acid TLCS 31 induced similar changes (figure 1B). Both CYA and BKA prevented losses of Δψ m and NAD(P)H. TLCS-induced mitochondrial impairment was completely prevented by the calcium chelator 1,2-bis(o-aminophenoxy) ethane-N,N,N 0 ,N 0 -tetraacetic acid and was dose dependent (see online supplementary figure S1). We then tested D-MeAla 3 -EtVal 4 -cyclosporine (Alisporivir, DEB025), which inhibits cyclophilin D but is not immunosuppressive 29 and 3,5-Seco-4-nor-cholestan-5-one oxime-3-ol (TRO40303), which also inhibits MPTP opening; 32 both prevented decreases of Δψ m in murine and freshly isolated human pancreatic acinar cells ( figure 1C). Marked cell death pathway activation was induced by CCK-8 and TLCS; whereas caspase activation occurred in the presence of CYA or BKA, PI uptake was largely prevented ( figure 1D). Marked protection from TLCS in human pancreatic acinar cells 22 and human pancreas slices followed pretreatment with CYA, DEB025 and TRO40303 (figure 1E, F).
Genetic MPTP inhibition prevents pancreatitis toxin-induced mitochondrial impairment and necrotic cell death pathway activation
Cytosolic calcium changes were significantly less marked in pharmacologically treated than control cells (seen with CYA and BKA due to an initial release of calcium from cell stores, but not with DEB025 or TRO40303, see online supplementary figure S1), which might reduce mitochondrial calcium loading, so we examined effects of genetic deletion (Ppif −/− ) of cyclophilin D. 20 Comparison of Ppif −/− and Wt (C57BL/6) cells showed CCK-8-induced cytosolic calcium elevations were similar, but in . Subsequent experiments demonstrated no difference between Ppif −/− and Wt cells in store-operated calcium entry or plasma membrane ATPase calcium pump extrusion (see online supplementary figure S1), consistent with more effective ATP supply in Ppif −/− compared with Wt cells subjected to CCK-8-or TLCS-induced calcium overload. As TLCS-induced ROS increases promote apoptosis not necrosis of pancreatic acinar cells, 12 23 we tested whether ROS increases are greater in Ppif −/− cells and found no differences from Wt (figure 2C, middle), ruling this out as a protective mechanism. Ethanol and POA, which form the toxic FAEE POAEE that induces AP, 11 also caused marked falls of Δψ m in Wt not Ppif −/− cells (figure 2C, right). There were marked effects of Ppif −/− on PI uptake but little on general caspase activation (figure 2D), consistent with a minor role for MPTP opening in pancreatic acinar cell apoptosis. 7 23 In keeping, cytosolic cytochrome c release was seen in both Ppif −/− and Wt cells after hyperstimulation, although less in Ppif −/− cells (figure 2E). We also tested pancreatic lobules, more closely representing events in vivo, and found necrotic pathway activation (Sytox Orange uptake) 24 Pancreatitis toxin-induced acinar cell MPTP opening causes collapse of ATP production and necrotic cell death pathway activation via second messenger receptor calcium channel release As bile acids and FAEEs induce global, prolonged acinar cytosolic calcium release via IP 3 R and RyR calcium channels, 6 33 which causes zymogen activation 34 35 dependent on sustained calcium entry, 36 we sought to determine how toxin-induced calcium release causes mitochondrial injury and pancreatic acinar cell death. Using patch clamp technology and confocal microscopy, we observed typical apical stimulus-secretion coupling calcium signals elicited by IP 3 (1-10 While Wt Δψ m was lost after one addition of 25 mM CaCl 2 , Ppif −/− Δψ m was lost after five successive additions ( figure 4B, C). Ppif −/− pancreatic mitochondria released only 35% less cytochrome c than Wt in 1.3 mM calcium (figure 4D), consistent with a modest contribution from MPTP opening to cytochrome c release. To further assess the significance of MPTP opening and falls in Δψ m , we measured levels of PGAM5, a mitochondrial executor of necrosis. 19 Falls in Δψ m cause PGAM5 cleavage from the inner mitochondrial membrane, 37 and increases in PGAM5 promote necrosis, facilitating mitochondrial fission. 19 After induction of CER-AP, PGAM5 was increased in Wt but significantly less in Ppif −/− pancreata (figure 4E), indicating a mitochondrial mechanism for necrosis induced by calcium overload in AP. These changes were associated with marked ballooning of and loss of cristae in Wt but not Ppif −/− pancreatic acinar mitochondria in CER-AP (figure 4F).
The MPTP mediates zymogen activation through impaired autophagy
Since zymogen activation is considered essential to AP and relates to disease severity, 1 38-40 we sought to determine whether and how this is MPTP dependent. We found CCK-8-induced trypsin activity significantly inhibited in Ppif −/− compared with Wt (figure 5A), despite no differences in the amount of trypsinogen (or amylase) between Wt and Ppif −/− mice pancreata (figure 5B; nor cathepsin B, Bcl-xL or Bcl-2, data not shown). This finding indicates that MPTP opening contributes to pathological, intra-acinar zymogen activation. Zymogen activation depends on intracellular calcium overload 30 and accumulation of activated zymogens in AP is due to impaired autophagy. 41 We therefore measured levels of microtubule-associated protein 1A/1B-light chain 3 (LC3), which in autophagy is converted from cytosolic LC3-I to lipidated LC3-II and recruited into autophagosomal membranes, and levels of sequestosome 1 (SQSTM1, p62), which sequesters ubiquitinated protein aggregates to autophagosomes; when autophagosomes fuse with lysosomes, both LC3-II and p62 are degraded. 42 Following induction of CER-AP that features marked falls in ATP production, acinar cell vacuolisation and zymogen activation, 7 38 43 significant increases in LC3-II and p62 occurred in Wt pancreata, showing retarded autophagy consistent with previous data. 41 Increases in LC3-II and p62 were significantly attenuated in Ppif −/− mice (figure 5C-E), indicating more efficient autophagy. 42 We confirmed the role of MPTP opening in defective autophagy using GFP LC3 mice, 21 crossed with Ppif −/− mice. Analysis of LC3 puncta (autophagic vacuoles, figure 5F) as well as increases in LC3-II and p62 in GFP-LC3 versus GFP-LC3×Ppif −/− mice (≥3 mice/group, data not shown) confirmed significant attenuation from genetic inhibition of the MPTP.
Genetic or pharmacological MPTP inhibition sustains ATP production and confers striking protection from experimental AP
To determine comprehensively the significance of these mechanisms in vivo, we compared responses of Ppif −/− versus Wt mice in four dissimilar models of AP: CER-AP, TLCS pancreatic ductal infusion 27 (TLCS-AP), ethanol with POA 11 (FAEE-AP) and CDE-AP diet. 28 These models represent the whole spectrum of human AP, including the commonest clinical aetiologies (gallstones and ethanol) and extending from mild to lethal disease. In all models, characteristic changes occurred in serum amylase and interleukin-6 (IL-6), pancreatic trypsin and myeloperoxidase, pancreatic ATP and histopathology (figures 6 and 7, see 7). These findings demonstrate that inhibition of MPTP opening confers striking local and systemic protection from pancreatitis. The further new finding of relative independence of apoptotic processes from the MPTP (figure 6B) confirmed that apoptosis is not a major contributor to the pathological responses of AP, 26 unless it is massive. 45
DISCUSSION
This study demonstrates that MPTP opening is critical to experimental AP, mediating impaired ATP production, defective autophagy, zymogen activation, inflammatory responses and necrosis (figure 8), features of AP at molecular, cellular and whole organism levels. 1 Our previous work identified metabolic effects of MPTP opening specific to ethanol. Here we have established the general significance of MPTP opening as a central mechanism in the pathogenesis of AP, and the primary role of calcium overload in this. The patch clamp data show how tight control of cytosolic calcium elevations essential to normal stimulussecretion coupling by IP 3 Rs and RyRs 4 is lost in Wt but maintained in Ppif −/− pancreatic acinar cells, which preserve ATP supply and clear calcium more effectively. Coupling of endoplasmic reticulum IP 3 Rs and RyRs with outer mitochondrial membranes tightly localises high calcium concentrations, 46 but may expose mitochondria to abnormal calcium release, despite modulation by Bcl-2 family proteins. 7 Here we have shown that pancreatitis toxins cause abnormal release of calcium via IP 3 Rs and RyRs that overloads pancreatic acinar mitochondria, which are markedly sensitive to calcium signals. 23 The mitochondrial calcium overload induces high conductance MPTP opening and dissipates Δψ m , initiating collapse of ATP production, diminished calcium clearance, PGAM5 activation and subsequent necrosis. Importantly for a disease without specific treatment, pharmacological MPTP inhibition 29 47 administered after AP induction came close to preventing all injury, notably in the clinically relevant TLCS-AP.
For more than a century following an original postulate by Chiari, 48 pancreatitis has been viewed as an autodigestive disease consequent on pathological zymogen activation. 3 34 38 39 45 In experimental AP, zymogens are activated inside acinar cells within minutes of toxin exposure, 1 3 30 41 which this work has shown to result from induction of the MPTP, caused by and contributing to calcium overload. Sustained calcium overload may activate degradative calpains, phospholipases or other enzymes 17 and damage zymogen granules, inducing autophagic 41 and/or endolysosomal 49 responses that activate digestive enzymes. Such activation was not completely prevented by MPTP inhibition, however, likely from global cytosolic calcium overload that was seen to be more effectively cleared in Ppif −/− cells, without which overload no enzyme activation occurs. 30 Nevertheless, intracellular expression of trypsin per se without mitochondrial injury leads to apoptotic not necrotic pathway activation 45 and trypsinogen activation does not appear necessary for either local or systemic inflammation; 50 knockout of cathepsin B greatly reduces trypsinogen activation with little effect on serum IL-6 or lung injury. 39 Hereditary pancreatitis caused by cationic trypsinogen gene mutations rarely features clinically significant pancreatic necrosis; 51 52 further, systemic protease inhibition has had little success as a clinical strategy, 1 suggesting that while zymogen activation contributes, it is not the critical driver of AP. This study, however, shows that MPTP opening triggers defective autophagy, while inhibition of MPTP opening preserved ATP supply, increased the efficiency of autophagy and decreased zymogen activation. Together with major effects of MPTP opening on PGAM5 activation that implements necrosis, 19 37 and on local and systemic inflammatory responses, these findings now place mitochondrial injury centrally in AP.
Our new data show that in pancreatic acinar cells IP 3 Rs and RyRs are vulnerable to specific toxins that markedly increase their calcium channel open-state probabilities. Toxic transformation of calcium channel function induced pancreatic acinar cell necrosis through calcium-dependent formation of the MPTP, with diminished ATP production the critical consequence. Toxic Figure 8 Summary diagram: the mitochondrial permeability transition pore (MPTP) plays a critical role in the development of acute pancreatitis. Exposure to pancreatic toxins leads to a sustained rise in cytoplasmic calcium that crosses the inner mitochondrial membrane (IMM) to enter the mitochondrial matrix. Consequent cyclophilin D (CypD) activation promotes MPTP opening across the IMM, causing mitochondrial depolarisation and impaired ATP production. These induce PGAM5 activation and retarded autophagy, downstream mechanisms in acute pancreatitis (upper panel). When MPTP opening is inhibited by genetic (Ppif −/− ) or pharmacological means (DEB025 or TR040303), mitochondrial membrane potential is preserved and ATP production sustained. This maintains the integrity of pancreatic acinar cells that clear calcium more effectively and prevents the development of acute pancreatitis (lower panel) (MPTP drawn after reference 14).
transformation by different toxins was specific to different second messengers, identifying potential for a variety of deleterious effects. ATP deficiency may be further exacerbated by fatty acids released on hydrolysis of FAEEs or triglycerides, 53 which may inhibit beta oxidation. 6 Without sufficient ATP, cytosolic calcium overload produces a vicious circle in which highaffinity, low-capacity sarcoendoplasmic reticulum calcium transport ATPase (SERCA) and plasma membrane calcium ATPase (PMCA) pump clearance of cytosolic calcium is impaired, further mitochondrial injury sustained and necrotic cell death accelerated. 6 12 Although the toxicity of cytosolic calcium overload depends on calcium store refilling from outside the cell, 30 54 specific second messenger receptor blockade demonstrated calcium overload to be due completely to release from their calcium channels, not direct effects of toxins on calcium entry or extrusion.
Whereas the vast majority of previous studies undertaken to determine mechanisms and/or new targets in AP have used only one model, our four models are broadly representative of a range of aetiologies, including biliary (TLCS-AP), hyperstimulation (CER-AP), ethanolic (FAEE-AP) and amino acid-induced (CDE-AP). 1 55 Our findings in experimental AP are entirely consistent with those made in isolated mitochondria and cells, identifying a generalised mechanism of pancreatic injury and necrosis, confirmed in murine and human pancreatic acinar cells, pancreas lobules and tissue slices. Pancreatic necrosis drives the inflammasome, 56 which can be induced by MPTP opening 57 and is part of the systemic inflammatory response contributing to multiple organ failure. 2 Further pancreatic injury is driven through tumour necrosis factor receptor activation that also promotes MPTP opening 58 and calcium deregulation, activating calcineurin and NFAT. 59 Our data link necrosis and inflammation directly, highlighting the potential of the MPTP as a drug target for AP. | 5,290.6 | 2015-06-12T00:00:00.000 | [
"Biology",
"Medicine"
] |
ON-LINE MONITORING OF TECHNOLOGICAL PROCESS OF MATERIAL ABRASIVE WATER JET CUTTING
Original scientific paper The paper deals with the indirect ways of on-line monitoring of technological processes of cutting. The objective of the study is a design of on-line monitoring system for the cutting technology through an abrasive water jet. In cutting by the abrasive water jet two parallel phenomena are formed. The phenomena are represented by generated surface and vibrations. For the purpose of proving of the hypothetical assumptions on dependence of generated surface quality on vibrations the experiments utilizing stainless steel AISI 304 were performed. The experiments were realized at four diverse settings of cutting head traverse speed. The material vibrations were collected by means of two independent accelerometers PCB IMI 607 A11. One of the accelerometers was oriented towards the cutting direction and the other in direction perpendicular to cutting. Sampling frequency was of 30 kHz. The generated topography of the material was measured by an optical profilometer FRT MicroProf. The collected data were evaluated by a virtual tool developed in LabView 8.5 in the form of vibration analyses being consequently mutually compared. Both phenomena proved to be dependent on common technological cause, i.e. on cutting head traverse speed. At the same time the study offers a theoretical design of the on-line monitoring system and of the future heading of the research in this field.
Introduction
The paper was written pursuant to constant increase of quality and production requirements.The increase demands the system design which would produce highquality products cheaply and fast.The fact is that quality is many times linked with the higher price level and thus fully flexible system able to eliminate purposeless production costs [1,2,3,4,5].One of the alternatives of generation of such system or at least of getting closer to it is introduction of on-line monitoring into already existing conventional and unconventional technological processes.The abrasive water jet cutting (AWJ) ranks among the unconventional technological processes of material cutting [1,7,11,13,14].This technological process is currently being pushed to the foreground since any available material is possible to be cut by the technology [6].The advantages of AWJ technology rest not only in low impact upon environment, but in non-occurrence of temperature changes in the cutting trace [12,15].As any other technological process even the AWJ technology is during cutting accompanied by the attendant phenomena such as vibrations or acoustic emissions.Therefore the aim of the paper is focused on the analysis and comparison of data acquired through the experimental measurements of vibrations and of their utilization in theoretical design of feasible method of on-line monitoring of the given technological process.
State of the art analysis
The problems related to the technology of abrasive water jet cutting represent objects of concern of research and development of many authors.However, the subject of their interest is chiefly typical topography of the generated surface and the effort to control and to predict the surface topography right at the beginning of cutting process.
At the beginning of the introduction of AWJ technology into practice the initial researches of the authors were oriented mainly towards the material removal mechanism.The issue represented a concern to the authors such as Hashish [8], Chao and Geskin [28], Arola and Ramulu [6].Boud et al. [17] concentrated on the study of morphology and mechanical properties of the abrasive (abradant) and on the finished surface in case of titanium alloy cutting.The surface generated during cutting of glass fibre of epoxide composites was examined by the authors Azmir et al. [18].Momber [11], Mohan [19], Kovacevic [20], as well as Hassan [7] and Arulu [21] focused their attention on the accompanying phenomenon of acoustic emission with the aim of its utilization in cutting depth monitoring in abrasive water jet cutting of material .In 1998 the analytical model of the overall cutting depth was elaborated by the authors [22].In 2001 Dasgupta et al. [23], Neelesh and Vijay [24] developed the analytical models of simulation, planning and optimization and on the basis of the particular type of operation, material, and conditions of machining they compiled selections of the appropriate models.Sharma et al. [12] used the Taguchi proposal of experiments to create a cutting model for the specified material group.
In 2007 Valíček et al. [10] oriented towards the formation of optical method of detection and analysis of geometrical parameters of topography of surface generated in the course of abrasive water jet cutting of material.The materials were classified according to cutability T CUT .Furthermore, they designed a system of feedback control with regard to the measured values of acoustic pressure La eq .Arulu et al. [21] and later Folkes [25] used the acoustic emission for the on-line detection of a workpiece state.The acoustic emission and vibrations were examined by Hloch et al. [9] and Valíček et al. [26].Authors give their attention also to the influence of factors upon unevenness of surface of stainless steel and aluminium [9].
Acoustic emissions as well as vibrations achieved success in diagnostics, prediction, and control of a number of technological procedures.Out of all present phenomena occurring in material machining chiefly acoustic emissions were applied in research.Already in 1992 author's [20] centre of attention was indirect monitoring of depth of penetration into wooden material in case of which normal forces generated on a workpiece were used as an indicator.Momber et al. [11] utilized acoustic emission for the on-line monitoring of the AWJ process of cutting of material disturbed by breakage.To measure the diffused energy RMS value was used.Authors [1] examined the alternatives of on-line control and prediction of the surface quality through the negative phenomenon -noise.In 2002 Ativitavas [32] devoted his study to linkage of acoustic emissions with the neural network to determine defects in plastic composite structures.In 2004 Asraf's study [26] offered a designed model of the continual cutting depth monitoring through acoustic emission in the AWJ process and at the same time he discovered that the RMS value increased linearly along with the growing cutting depth.Another author dealing with acoustic emission was Arulu et al. [21] who paid attention to the process of composite drilling.
Just a minor group of authors was concerned with material vibrations and information included in the process of abrasive water jet cutting.By the analysis of vibration spectrum in the process of abrasive water jet cutting Hloch et al. designated frequency components carrying significant information on instant condition of the cutting process.The vibrations occurring in such cutting got into the centre of attention of Hreha et al. [15,28,29,30].Hreha studied surface roughness parameters Ra, Rq and Rz and consequently, through vibrations authors studied the processes occurring during penetration of water jet through material.A team of authors pointed out the possibilities of utilization of vibrations as of information carriers for the on-line control of abrasive water jet cutting of material and at the same time the alternatives of the cutting technology utilization were presented.
Objectives of the paper
The objective of the paper is a design of indirect method of on-line monitoring of technological processes of cutting with the application in technology of material cutting by abrasive water jet through the accompanying phenomenon -vibrations.To meet the objective of the study it is inevitable to perform experimental measurements and to process the data of the experiments.Consequently, the data are necessary to be reciprocally compared by means of which the dependence between traverse speed of cutting head and generated surface quality will be detected.Last but not least both the possible method of on-line control and regulation of the particular technological process are important to be designed.
Experimental study
During the experiment stainless steel AISI 304 was used as an experimental material.The workpiece in case of which the measurements were performed was a plate with dimensions of 100 × 150 × 12 mm.In the course of the experiment the plate was cut four times in total by the method shown in Fig. 1.Collection of data inevitable for more profound analysis of vibrations was repeatedly performed several times.Two sensors placed directly on a workpiece (one was fixed axially and the other radially) served as data collectors in the form of accelometers PCB IMI of series type of 607 A11 with an integrated cable.The sensitivity of the sensors was of 100 mV/g and their frequency range was of up to 10 kHz.These sensors were connected to the measurement system of NI PXI.The system consisted of the measuring card PXI 4472B and was typical for the 8channel simultaneous collection and 24 bit analogue and digital converter.
The sampling frequency of the system was of 102 kHz with the dynamic range of 110 dB.The frequency analyser SKF Microlog Gx-S was used for the performance of verification and calibration measurements in case of which the analysis of the measured data was realized through the program equipment of SFK Aptitude Analyst.Measurement of the profile parameters of surfaces generated by the particular technology was carried out by means of optical contactless method through the optical profilometer Microprof FRT by the producer Fries Research & Technology GmbH that allowed 3D evaluation of the surface.
The experiment was performed under environmental and technological conditions presented in the Tabs. 1 and 2. In the frame of the experiment it was inevitable to monitor and to record the change of formed vibrations at the point of sensor fixation in dependence on cutting conditions by the system of NI PXI.Fig. 2 shows the model of experimental set up of the process.Four measurements were performed with diverse setting of traverse speed of cutting head (v = 50 mm/min, v = 75 mm/min, v = 100 mm/min, v = 150 mm/min).
Results and discussion 4.1 Surface topography analysis
As per graphic dependences it is clear that with the growing depth the overall numerical development of the individual roughness profile parameter of the generated surface changes as well.The phenomenon is caused by the fact that with the growing depth the jet acting upon the material gradually loses its energy and thus its higher curvature in larger depths occurs and at the same time surface quality in the particular depth lines declines.
Roughness parameters Ra, Rq, Rz were measured in 21 depth lines of the 20 mm long segment marked by green boundaries.The red boundary represents the end of the sample cutting (Fig. 3A ÷ Fig. 3D).If the dependence of the surface profile parameters Ra, Rq and Rz on cutting depth at speed of v = 50 mm/min is compared with dependence of the surface roughness parameters Ra, Rq and Rz on cutting depth at speed of v = 150 mm/min (Fig. 3E, Fig. 3F) from the perspective of the influence of speed upon the development of the values of these parameters, it might be stated that apart from the cut depth the development of the values of roughness parameters is significantly affected by speed at which the cutting head moves in the course of cutting.At speed of v = 50 mm/min it could be observed that the development of the aforementioned parameters is more linear contrary to the case in which the speed was set to the value of v = 150 mm/min.The phenomenon is possible to be explained from the point of view of interaction between a high-speed permeate and a workpiece.At traverse speed of cutting head of v = 50 mm/min the abrasive water jet disposed of sufficient amount of time to make the abrasive particles capable of even and intensive erosion of the material surface along the entire depth of cutting sample which is not applicable at traverse speed of v = 150 mm/min.The combination of higher traverse speeds of cutting head with larger depth of the cut causes disability of the jet to erode the material surface of cutting sample to satisfactory extent and that is in consequence demonstrated by the formation of striated zone the roughness of which is not sufficient from the perspective of input requirements for surface quality.
Analysis of detected vibrations
The elaborated developments of vibration signals are divided into two groups (Fig. 3).The first group includes the data measured by sensor S1 (Fig. 3A, Fig. 3C) which was fixed on the cutting material in axial direction and the second group contains the data measured by sensor S2 (Fig. 3B, Fig. 3D) fixed on the material in radial direction.Through the comparison of these time developments at speeds of v = 50 mm/min and v = 150 mm/min it is possible to state that at lower traverse speed of the cutting head the development of amplitude of vibration oscillations is more stable contrary to higher speeds and this fact opens the door for further research for the purpose of detection of utilizable vibration spectrum (inevitable is to determine the utilizable vibration spectrum both at lower and higher traverse speeds of cutting head) for the application of on-line monitoring of the particular technological process.Out of these developments the changes of amplitude of vibration signal oscillations is obvious not only in dependence on the means of the sensor fixation but also in dependence on the traverse speed of cutting head.
Utilization of acquired knowledge for the solution proposal
The series of performed experiments was carried out with the aim to show the link between surface topography and vibrations.In dependence on acquired knowledge of the technology in question and pursuant to realized analyses the real existence of the link between vibrations and surface topography might be referred to.Such indirect link is influenced by variant setting of traverse speed of cutting head as with the change of the speed setting the changes of intensity of amplitude of oscillations during vibrations occur as well as changes of surface topography quality.At low speeds the amplitude of oscillations is lower contrary to higher speeds.At the same time the assumption was proved that at lower speeds the surface topography is of higher quality than at higher speeds since at higher speeds the abrasive water jet did not possess sufficient amount of time to be able to erode the material surface equally along the entire depth of the material.
The paper along with other existing studies and future research direction might provide certain basis for formation of both theoretical and practical method of online monitoring of the AWJ technology.Such method of on-line monitoring is possible to be demonstrated from the theoretical point of view, Fig. 4. In case of proposal of on-line monitoring of AWJ technology it is inevitable to be particular about selection of adequate feedback.The established feedback regulation is required to take into consideration the time delay resulting from the need of processing and evaluation of the measured data.The proposal of correct feedback is significant due to prevention of repeated regulation of traverse speed of cutting head at points in case of which it is not desired.
Conclusion and the future direction of the research
Currently, the control of cutting process by AWJ technology is performed in so called off-line mode.In case of the off-line mode those are beforehand specified requirements for cutting quality of surface and the cutting quality is possible to be verified after material cutting.Although in the cutting by means of such technology the method of prediction of surface quality exists, it is still necessary to point out that only under certain conditions and due to diverse accompanying features of the technology and its complicated specification.At the same time through on-line control and regulation the working productivity will increase and potential defect occurring in the course of the system operation will be possible to be determined and thus the reject formation will be prevented (saving of costs related to rejects).
On-line monitoring is possible to be performed with the particular technology only indirectly by accompanying phenomena such as vibrations or acoustic emissions.The endeavour of the study was pursuant to performed experiments focused on vibration measurement to detect dependence between traverse speed of cutting head (the cause) and formation of vibrations and to find out the extent of influence of the aforementioned traverse speed upon final surface roughness of cutting material.
According to the acquired knowledge the existence of real option of vibration utilization in introduction of online monitoring of the AWJ technological process can be confirmed.However, it is necessary to mention that in general the introduction and utilizable proposal of functional method of on-line control in practice require performance of a number of experiments focused on evaluation of vibrations with diverse variants of setting of input process factors because the formation of vibrations and final surface roughness are except for the examined cause of traverse speed v [mm/min] influenced also by other factors, for instance, permeate pressure or abrasive type.Such experiments are also inevitable to be performed in case of materials other than that used in our experiment (stainless steel AISI 304) as formation and spread of vibrations along with the surface roughness depend even on material properties of the individual materials.At the same time it would be convenient to carry out the experiments in ideally isolated environment (under laboratory conditions) as well as in common environment (for instance, production hall in the proximity of main road) with the aim to reveal "strange vibrations -vibration zone" (main road vibrations, machine vibrations, etc.) that could negatively affect the development of on-line control and process regulation.With the idea of proposal of online control system a number of other open questions occur concerning, for instance, the appropriate proposal of feedback system since in on-line monitoring it is necessary to take into account even certain time delay of measured data processing and consequent repeated regulation of process quantities in occurrence of failure in the process.
Out of the aforementioned it is clear that the introduction of on-line monitoring into technological processes is not by far as simple as it might seem.In analysis of the measured data other outputs were gained as well which, however, were not possible to be presented in the paper due to the required amount of text.These outputs along with the need of other experiments with the aim to design and apply on-line monitoring of technological processes into practice represent a challenge for further direction of the authors of the paper.The processed data will be monitored and evaluated further on.The direction of the future research and its results will be presented in another study by these authors.
Figure 1
Experimental set up a) sensors placed on material, b) detail on cutting head and material being cut
Figure 2
Figure 2 Simplified representation of the experiment
Figure 3
Figure 3 Time development of vibration signal: A) at traverse speed of v = 50 mm/s -sensor S1; B) at traverse speed of v = 50 mm/s -sensor S2; C) at traverse speed of v = 150 mm/s -sensor S1; D) at traverse speed of v = 150 mm/s -sensor S2; Dependence of roughness profile parameters: E) on cutting depth at traverse speed v = 50 mm/s; F) on cutting depth at traverse speed v = 150 mm/s.
Figure 4
Figure 4 Simplified scheme of possible method of online-monitoring of the AWJ technology
Table 1
Environmental conditions
Table 2
Technological conditions | 4,414.2 | 2015-04-22T00:00:00.000 | [
"Materials Science"
] |
Cytokine profiling in anti neutrophil cytoplasmic antibody-associated vasculitis: a cross-sectional cohort study
ANCA-associated vasculitides (AAV) are severe diseases, potentially affecting lungs, kidney, and other organs. Nevertheless, risk profiling remains difficult. Aim of the current study was to analyze serological characteristics in AAV. The principal goal was to identify diagnostic markers that potentially allow a more sophisticated risk profiling in AAV. AAV subjects were recruited and evaluated for disease activity, disease stage, medication, and laboratory findings. Serum concentrations of the following parameters were measured: IL-1β, IL-6, IL-17 A, IL-17 F, IL-21, IL-22, IL-23, TNF-α, sCD40L, IL-4, IL-10, IL-25, IL-31, IL-33, and INF-γ. A total number of 62 AAV subjects was included in the study (39 females; 23 males). Forty-five subjects were PR3+, 17 subjects showed ANCA specificity for MPO. The majority of all cytokines fell under the lower detection limit of the assay. Serum IL-10 was higher in both, AAV and SSc as compared to controls; it was also higher in early systemic AAV. Serum IL-33 was elevated in AAV and SSc; in AAV, higher levels were found in non-necrotizing GN and RTX untreated subjects. Serum CD40L was raised in AAV as well; higher concentrations were also found in PR3+ and MPO+ patients and early systemic, generalized, and refractory AAV. IL-10 may potentially serve as a marker of early systemic AAV. IL-33 may help to identify subjects with a higher risk for necrotizing GN in AAV.
Introduction
ANCA-associated vasculitides (AAV) are the most frequent types of primary small-vessel vasculitides, according to the revised Chapel Hill consensus conference nomenclature from 2012 [1]. At least three distinct disorders represent AAV, each characterized by inflammatory damage of small blood vessels including arterioles, capillaries and venules, and each associated with peripheral circulating anti-neutrophil cytoplasmic antibodies (ANCA) of different antigen affinity. Granulomatosis with polyangiitis (GPA) typically affects lung and kidney, accompanied by the formation of granulomas in a locally destroying manner. Numerous other organs may be damaged, as well [2]. ANCA in GPA predominantly interacts with cytoplasmic proteinase 3 in granulocytes (PR3-ANCA). Antibodies may be absent during earlier disease stages but can be detected in more than 90% if the disease generalizes [3]. Clinically, microscopic polyangitis (MPA) is almost indistinguishable from GPA. However, granulomas are absent, and ANCA mostly interacts with the perinuclear antigen myeloperoxidase (MPO-ANCA) [2]. The third and least frequent type of AAV is eosinophilic granulomatosis with polyangiitis (EGPA) [4]. EGPA patients suffer from allergic rhinitis/ asthma and show enrichment of eosinophils in tissues and blood, findings that have been summarized in the current classification criteria [5]. Finally, renal-limited ANCA-associated vasculitis may be considered the fourth entity of AAV, as suggested by Pagnoux [2]. The prognosis of AAV patients often depends on rapidly initiated diagnostic steps helpful to identify possible end-organ damage. Before the era of intensified immunosuppressive treatment using steroids and cyclophosphamide combined [6], more than 90% of all patients presenting with kidney failure and lung involvement (pulmonary-renal syndrome) died from the disease(s). This situation has significantly been improved in recent years, last but not least, with the introduction of rituximab as a therapeutic measure in generalized and remitting GPA/MPA [7]. Nevertheless, even if the diagnosis is correct and immunomodulatory therapy has been initiated, the individual treatment response remains challenging to predict. For instance, refractory retroorbital manifestations have been shown to respond less sensitive to drug therapy than resistant glomerulonephritis [8].
Current diagnostics for identifying the disease per se and for monitoring patients during treatment include history, clinical examination, radiographic analyses, urine analyses, and ANCA testing. More disease-specific laboratory parameters are still missing. Especially markers with substantial predictive potency in terms of disease severity/activity and the risk for chronic damage are urgently needed. Also, the individual sensitivity towards certain types of immunosuppressive drugs is hardly predictable. It also remains unclear how subjects with higher relapse risk may be identified in advance and how such information can be transferred into the clinical management of AAV.
Therefore, the current study aimed to screen serological characteristics in AAV patients. We intended to evaluate individual serum cytokines in subjects with different epidemiological and clinical characteristics, particularly with varying severity of end-organ involvement. The principal goal was to detect future diagnostic candidates that allow a more sophisticated/reliable risk profiling in AAV.
The setting, study population and study criteria
All participants were recruited from the Clinic of Nephrology and Rheumatology of the University Hospital Göttingen (Germany). The local ethics committee approved the study (name: ´ethics committee of the Universitätsmedizin Göttingen; Approval Number: 09/10/15; date of approval: October 2015). Inclusion criteria: subjects with newly diagnosed or established AAV (GPA was initially classified according to the 1990 published criteria of the American College of Rheumatology [9]. Later, we employed revised criteria which have been introduced at the annual meeting of the American College of Rheumatology, held in Washington DC on 11/14/2016 [Raashid Luqmani (University of Oxford), Peter A. Merkel (University of Pennsylvania), Richard Watts (University of East Anglia)]. The new classification incorporates nine individual criteria, encompassing clinical and laboratory findings with either positive or negative predictive power. A total score of 5 or higher indicates the disease with high probability), age > 18 and < 90 years; exclusion criteria: malignant disorder, uncontrolled infection at the time of inclusion. The participants signed consent for the data to be analyzed and published. The fact that every patient fulfilling the respective criteria was included if she/he signed the consent form potentially reduced selection bias.
Numerous clinical and laboratory parameters were collected from each subject including history and physical examination, cardiovascular risk profile (e.g., family history, arterial hypertension, diabetes, smoking habits), average alcohol consumption, and nutritional state. Also, ANCA-associated organ involvement was documented, particularly involvement of upper/lower respiratory tract, kidney, skin, joints, and nervous system. Renal involvement was defined as biopsy-proven glomerulonephritis, a biopsy was either performed due to de-novo acute kidney injury as described in the latest version of the KDIGO criteria [10] or due to significant glomerular proteinuria. The disease activity was quantified using the Birmingham Vasculitis Activity Score (BVAS) [11] with a score of < 8 indicating low activity as opposed to ≥8 as indicative for high activity. Irreversible organ damage was evaluated using the Vasculitis Damage Index (VDI) [12]. AAV staging was performed according to 2007 published EULAR recommendations [13]: localized, early systemic, generalized, life-threatening, and refractory.
The control group included age-and gender-matched individuals with no known autoimmune-mediated disorder. Subjects were recruited from the staff of the University Hospital of Götingen. Thus, it was ensured that they matched the cohort of interest (AAV). The second control group included patients with systemic sclerosis (SSc). They were also recruited from the Clinic of Nephrology and Rheumatology of the University Hospital Göttingen (Germany).
Statistical analyses
All analyses were performed using the application STATIS-TICA (StatSoft). Data subsets underwent distribution analysis using the Kolmogorov-Smirnov test. Normal distribution was assumed if the respective p value was ≥0.05. Differences between two non-nominal data groups were calculated with the Mann-Whitney test for not normally distributed data and with the student's t test for normally distributed data. Differences between nominal data were calculated with the Chi-squared test. Correlation analysis was performed by calculating the Pearson correlation coefficient. Differences between more than two groups were calculated using the ANOVA test. Differences were considered significant if p values were below 0.05.
Results
In the first section, we will shortly summarize the clinical characteristics of the subjects; the second part will address serological abnormalities, particularly serum levels of specific pro-and anti-inflammatory cytokines. We will exclusively name differences that fulfilled the criteria of statistical significance.
Patients
Over a period of 1.5 years, we included 62 individuals with newly diagnosed or established AAV (newly diagnosed 16, established AAV 46; 39 females, 23 males), the age ranging from 24 to 83 years. Forty-five subjects were PR3+, 17 subjects showed ANCA specifity for MPO.
The mean age of all subjects was 60.5 years (females 60.9 and males 59.8 years), with a range of 24-86 years. The mean overall duration of the disease was 4.5 years; the range was 2 months to 22 years. The patients' clinical characteristics are summarized in Table 1.
Respiratory involvement
Involvement of the upper respiratory tract was diagnosed in 37 patients (59%), manifestations were either rhinitis or sinusitis or otitis media. Six patients (9%) revealed sinusoidal granuloma formation. Individuals without pulmonary granuloma showed a lower relapse rate than those with granuloma formation (32 vs. 73%; p = 0.004).
Comorbidities
Patients with diabetes mellitus (n = 11), arterial hypertension (n = 39), hypercholesterolemia (n = 15) or asthma (n = 5) did not show higher relapse rates nor did they differ in terms of remission incidences from individuals without such disorders. Patients with Hashimoto's disease (n = 11), however, showed a higher remission rate as opposed to subjects without the thyroidal disease (0.81 ± 0.4 vs. 0.37 ± 0.48; p = 0.006). a Relapse probability in GPA as compared to MPA, the results are depicted as relative risk with 1 reflecting a 100% relapse probability. The relapse probability was higher in GPA than in MPA. b since all GPA individuals were PR3+ and only one subjects with MPO positivity was diagnosed with GPA, the relapse probability was significantly higher in PR3+ as compared to MPO+ patients. c remission probability in relation to the mean BVAS. A higher likelihood for disease resolution was found in individuals with a BVAS of below 8 as compared to those with above 8. d patients with PR3 positivity displayed a higher mean BVAS at the time of diagnosis than MPO+ subjects (Kolmogorov-Smirnov test for normality: BVAS-p < 0.001; Data in d as media ± Q1/Q3; ✻p < 0.05for exact p values see text)
Drug therapy
The histological finding of necrotizing glomerulonephritis was associated with a higher cumulative dose of cyclophosphamide (8.610 ± 2.627 vs. 6.537 ± 2.213 mg; p = 0.02).
Patients with an initial VDI of above 1 required immunosuppressive therapy for relapse control significantly more frequent than those below 1 (p = 0.01). Patients receiving thyroid hormone due to Hashimoto's disease benefitted more frequently from partial or complete remission (84 vs. 35%; p = 0.001).
Interleukin-10
Our analysis showed higher Interleukin-10 serum levels in AAV patients as compared to healthy subjects (29 ± 14.7 vs. 4.6 ± 4.4 pg/ml; p = 0.004). The cytokine did, however, not differ between AAV and SSc (29 ± 14.7 vs. 42.7 ± 46.5 pg/ ml; p = 0.225) but SSc subjects showed higher IL-10 than healthy controls (42.7 ± 46.5 vs. 4.6 ± 4.4 pg/ml; p < 0.001). The cytokine did not differ between AAV individuals with different disease stages, according to the 2007 published EULAR recommendations [13]. However, AAV patients with early systemic vasculitis displayed higher serum IL-10 than healthy controls (51.2 ± 40.7 vs. 4.6 ± 4.4 pg/ml; p = 0.005). Also, serum IL-10 did neither differ between AAV patients with and those without relapsing vasculitis or between individuals with and those without renal or upper/ lower respiratory involvement, respectively. Regarding the immunosuppressive therapy, IL-10 levels were comparable in RTX treated and RTX untreated patients but lower in subjects undergoing treatment with RTX and cyclophosphamide combined. Figure 3 summarizes the essential results of all IL-10 analyses.
Interleukin-33
In comparison to healthy controls, AAV-patients showed significantly elevated serum IL-33 concentrations (165. 8 Regarding the AAV cohort alone, IL-33 did not differ between relapsing and non-relapsing disease (92.7 ± 48.1 vs. 221.2 ± 105.9 pg/ml; p = 0.1), but it was lower in subjects with renal involvement as compared to those without such a manifestation (87.9 ± 41.8 vs. 220.3 ± 103.7 pg/ ml; p = 0.04). Patients with versus without upper/lower respiratory involvement did not differ in serum IL-33 concentrations. If related to the immunosuppressive treatment regimens, one additional difference appeared: patients undergoing RTX treatment displayed lower IL-33 levels than subjects without such therapy (64.4 ± 39.8 vs. 221.1 ± 93.6 pg/ml; p = 0.01). Figure 4 summarizes the essential results of all IL-33 analyses.
Discussion
The current study aimed to investigate serological abnormalities in AAV subjects. The principal goal was to identify new candidates for serological testing, thus to widen the currently limited spectrum of diagnostic and prognostic serum markers in these serious autoimmune-mediated conditions. The most intriguing result of our study were all the findings which must be considered as negative or absent: numerous serum candidate cytokines were not detectable at all/fell under the lower detection limit of the assay (IL-1β, IL-4, IL-6, IL-17A, IL-17F, IL-21, IL-22, IL-23, IL-25, IL-31, INF-γ). Only four parameters finally fulfilled the criterium that at least 50% of all individual measurements were above the lower detection limit of the assay: IL-10, IL-33, TNF-α, and sCD40L. Although we analyzed clinical aspects of GPA subjects as well, the discussion section will exclusively focus on serological characteristics. IL-10 substantially promotes the immunoglobulin switch in B cells. It has been identified as an essential element in B cell-mediated autoimmunity [14]. Lepse and colleagues [15] found reduced numbers of B regulatory cells in AAV, while IL-10 levels did not differ between AAV subjects and healthy controls. All AAV subjects and particularly those with early systemic disease displayed higher IL-10 than controls; also, we found higher IL-10 in relapse-free subjects without reaching the level of significance. Comparable observations were made by Ohlsson et al., who detected significantly lower IL-10 in individuals suffering from any relapse within the first 3 months after successful remission induction [16]. Hruskova and colleagues found lower inremission IL-10 to be associated with a higher relapse probability [17]. Finally, higher IL-10 has been identified to go in parallel with an increased risk for future relapses [18]. Thus, we support the hypothesis of Ohlsson et al. [16] who proposed IL-10 as a suppressor of latent disease activity as it may persist even during complete remission.
As pointed out earlier, Interleukin-33 belongs to the IL-1 cytokine family. It is produced by stromal, epithelial, and endothelial cells, respectively [19]. Its effects include either propagation or inhibition of inflammatory processes, depending on the respective microenvironmental circumstances [20]. Our study revealed higher IL-33 in all AAV subjects. Renal involvement was associated with lower IL-33; the cytokine did in contrast not differ between apparent and absent upper/lower respiratory involvement. Renal manifestations belong to the most severe complications in AAV. One may argue that during renal injury and repair, apoptosis rather than necrosis is the predominant mechanism of cell damage. Apoptosis has been proposed to reduce IL-33 availability by caspase-mediated IL-33 degradation [21]. Our study revealed lower IL-33 levels to be associated with more frequent use of RTX.
The pro-inflammatory cytokine Tumor Necrosis Factor-alpha (TNF-α) mediates diverse processes involved in the inflammatory response. They include MHC induction, macrophage activation, leukocyte-endothelial adhesion, and increased hepatic synthesis of acute-phase proteins [22]. Our study revealed only a few differences in serum TNF-α between the respective subgroups. TNF-α was higher in SSc as compared to AAV and controls. The second finding regarding this particular cytokine was increased TNF-α levels in subjects undergoing successful induction therapy. We suppose that the relatively low concentrations in PR3+ and MPO+ AAV ensue from pre-established steroid treatment which has been performed in all individuals. Whether TNF-α indeed plays a pathogenic relevant role in AAV can be doubted, although earlier open-label studies showed beneficial effects of blocking the substance in vivo [23]. Today, anti-TNF-alpha agents are not even recommended in refractory disease courses [24].
CD40 Ligand belongs to the TNF family; it is expressed on activated CD4+ T cells, B cells, and platelets. In inflammatory states, de novo expression of the protein occurs on monocytes, natural killer cells, mast cells, and basophils [25]. Soluble CD40L (sCD40L) on the other hand, has been proposed as a marker of B cell activation [26]. Our analysis showed significantly higher sCD40L in AAV, and thereby in both PR3+ and MPO+ individuals. Remarkably, AAV subjects displayed elevated sCD40L in all disease stages 1 3 without any differences between individual stages, respectively. Thus, high levels of the cytokine most likely reflect activation of the cellular immune response in general rather than disease-or stage-specific phenomenons. We failed to show any correlation between ANCA titer and sCD40L as it has been demonstrated by Tomasson and colleagues [27]. We also did neither detect differences between patients with versus without successful (re-) induction therapy nor between those with versus without renal involvement. Further significant associations for sCD40L were missing as well. Therefore, we currently do not believe in any substantial diagnostic/prognostic value of sCD40L in AAV.
Conclusions
Serum IL-10 may potentially serve as a marker of early systemic AAV. Serum IL-33 may help to identify subjects with a higher risk for necrotizing GN. Further studies must focus on longitudinal dynamics of these cytokines in AAV of different severity/activity.
Limitations
The most relevant flaw of the study is the current lack of longitudinal data. Particularly the aspect of cytokine (Il-10 and -33) dynamics over time, from the moment of the initial diagnosis and before the initiation of any treatment until incomplete or complete remission needs to evaluated systematically. One may also argue that the exclusive inclusion of cytokines, of which at least 50% of all measured values were above the lower detection limit of the assay is a limitation. Without this rule, other significant differences occur as well (e.g., IL-6 and IFN-g).
Author contributions JCH recruited patients and performed thy cytokine analyses. DP analyzed data and wrote the manuscript. HD assisted in cytokine analysis. CM analyzed data and assisted in writing the paper. KS performed cytokine analysis. EH performed cytokine analysis. OR corrected the manuscript. GAM corrected the manuscript and supported the project financially. SP designed the study, applied for study approval by the ethics committee, and recruited patients.
Compliance with ethical standards
Conflict of interest There are no conflicts of interest.
Research involving human participants and animals and informed consent All participants were recruited from the Clinic of Nephrology and Rheumatology of the University Hospital Göttingen (Germany). The local ethics committee approved the study (name: ´ethics committee of the Universitätsmedizin Göttingen´; approval number: 09/10/15; date of approval: October 2015). The participants signed consent for the data to be published.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 4,289.6 | 2019-07-08T00:00:00.000 | [
"Medicine",
"Biology"
] |
Prediction of metabolic ageing in higher education staff using machine learning: A pilot study
The detection of individuals with obesity or overweight allows to predict the prevalence of health risks, such as premature death, disabilities and other chronic diseases. This study describes a pilot conducted on the members of a higher education staff in the city of Matehuala, Mexico. It involved processing anthropometric measurements, health indicators and the results of bioelectrical impedance analysis using machine learning techniques. The goal was to identify the metabolic aging of individuals. The recorded data were used to create a database that was subsequently employed in four different classification models: decision tree, random forest, artificial neural networks and adaptive boosting. Additionally, four statistical techniques were utilized to determine variable importance scores: Pearson, Chi 2 , Anova, recursive elimination method and the variance inflation factor. The variable importance score was employed to identify the features that were most consistently repeated across methods. This analysis concluded that both anthropometric measurements and the results of bioelectrical impedance analysis provide valuable references for identifying obesity and overweight in individuals. Among the anthropometric measurements that exhibited a greater impact on the models' predictions were waist-to-height ratio, hip and arm circumferences, body mass index, systolic and diastolic blood pressure and heart rate. Additionally, body fat and muscle mass also contributed significantly.
INTRODUCTION
In 2022, the World Health Organization (WHO) identified that there are more than 1 billion people in the world with obesity, of which 650 million are adults.In Mexico, obesity is a significant concern.The 2020 National Health and Nutrition Survey on Covid-19 found that 76% of adult women are overweight or obese, compared to 72.1% of men (Shamah-Levy et al., 2021).Mexican adults aged between 29 and 69 engage in approximately 300 min per week of moderate to vigorous physical activity.However, about 29% do not meet the minimal recommendation of 150 min per week (Medina et al., 2013).
On the other hand, nearly 50% of Mexican adults have metabolic syndrome due to sedentary behaviors, physical inactivity, unhealthy dietary habits and poor sleep patterns (Macias et al., 2021).A study on the sociodemographic and anthropometric characteristics of adults aged 20 to 69 years in Mexico City revealed that the prevalence of participants classified in the highest sitting time category (≥ 420 min/day) increased by 8% over nine years.This increase had an impact, leading to a rise of 5.4% in overweight/obesity and a 1.3% increase in the diagnosis of diabetes (Medina et al., 2017).
In higher education institutions, staff work activities are predominantly centered in offices, inducing sedentary behaviors.Given this, monitoring the health condition of the staff is crucial for detecting potential health risks that could contribute to the development of chronic diseases.
Previous studies focused on higher education staff have shown that poor nutritional habits and lack of physical activity promotes the prevalence of overweight/obesity.Consequently, it is important to implement strategies aimed at reducing obesity and promoting well-being among the teaching population (Rodriguez-Guzman, 2006;Freedman et al., 2010;He et al., 2014;Rodrigues-Rodrigues et al., 2018).
According to the WHO, obesity and overweight can be identified from the anthropometric measurements of an individual.For adults, body mass index (BMI) and waist circumference (WA) serve as reliable indicators to discern obesity and overweight.Anthropometric measurements encompass a variety of body metrics, including weight, height, standing length, skin folds, circumferences (head, waist, hip, etc.), length of limbs and widths (shoulder, wrist, etc.).The Official Mexican Standard (NOM) defines the parameters and anthropometric criteria considered to determine abdominal obesity within the Mexican population (Shamah-Levy et al., 2017).Table 1 shows the values of BMI and WA values used for classifying obesity in Mexican adults.
In this study, we aimed to determine the prevalence of obesity and overweight among the staff of a higher education institution situated in the city of Matehuala, Mexico.The assessment was based on BMI and WA measurements.Considering WA, our observations indicate that 72% of males and 60.5% of females among the staff members are classified as abdominally obese.Evaluating the BMI of the observed group, we found that 60% of males and 31.5% of females are categorized as overweight, while 16% of males and 5.2% of females fall under the classification of obesity.Among those identified as obese, 12% of males and 5.2% of females belong to the obese class I category, and 4% of males are in the obese class II category, while no females fall within this category.None of the individuals belong to the obese class III category.When considering the entire monitored staff as a collective, 65% are abdominally obese, 42.8% are overweight, and 9.5% are obese.In total, 52.3% of the observed group are categorized as overweight or obese.
While the utilization of BMI, WA and waist-to-height ratio (WHtR) for predicting mortality remains effective, an alternative approach involves analyzing the outcomes of bioelectrical impedance analysis (BIA).This method has been suggested for monitoring and tracking the health status of individuals, including those with chronic conditions such as obesity (Ricciardi and Talbot, 2007;Heydari et al., 2011;de-Mateo-Silleras et al., 2019;Aldobali et al., 2022).BIA results have previously been utilized to estimate body fat percentage (BF) and correlate it with assessing the risk of diseases or mortality (Böhm and Heitmann, 2013).In 2021, the significance of the association between fat, visceral fat (VF) and muscle mass (MM) obtained through BIA in identifying metabolic syndrome as a health concern was recognized (Pouragha et al., 2021).
Machine learning (ML) is the science of programming computers so that they can learn from the information that has been provided to them (Geron, 2019).There are multiple ML techniques that can be used to build projects related to healthcare, with the aim of improving medical diagnosis or assisting health staff in the process of identifying a patient's condition (Sprogar et al., 2001;Javaid et al., 2022;Manickam et al., 2022;Payal et al., 2022).Common ML techniques used for these purposes are DTs, Logistic Regression (LR) and Support Vector Machine (SVM).Classification algorithms are effective to predict syndromes related to the prevalence of overweight and obesity (Chatterjee et al., 2020;Gutierrez-Esparza et al., 2020;Safaei et al., 2021;Crowson et al., 2022;Dhabarde et al., 2022;Strzelecki and Badura, 2022).
The classification process uses the features that have the biggest impact on the prediction of the objective.The selection of feature importance is very relevant, especially in classification problems with few samples (Mohd and Awang, 2021).In Archer and Kimes (2008), the authors evaluate the effectiveness of using the variable importance score in the Random Forest (RF) technique, concluding that this methodology is applicable in classification problems when the objective is to produce an accurate classifier.Also using RF, in Chen et al. (2020) Class II Class III WHO < 18.5 18.5-24.925.0-29.930.0-34.9 35.0-39.9> 40.0 25.0-29.9presented in order to reduce the number of features based on the identification of the variable importance measures (VIMs); the authors evaluated and compared the accuracy of specific RF, SVM, K-Nearest Neighbors (KNN) and Linear Discriminant Analysis (LDA) classification models.Additionally, in Gregorutti et al. (2017) and Senan et al. (2021), Recursive Feature Elimination (RFE) is used to identify VIMs.Misra and Singh Yadav (2020) suggest that a less complex algorithm can improve the accuracy of the classification, so they propose a method that analyzes each of the features and registers its importance with a predictor variable.Supporting the statement that using several methods to obtain VIMs in a classification problem, offers more reliability and consistency in the classification of the objective (Kiang, 2003;Nithya and Ilango, 2019).The selection of features has helped to obtain important results in ML biomedical applications.For example, Gutierrez-Esparza et al. (2020) used VIMs in the prediction of metabolic syndrome in a Mexican population.McLaren et al. (2019) used VIMs to predict malignant lesions in the breast with magnetic resonance imaging as features.Also, Ganggayah et al. (2019) used VIMs to identify the factors that predict the survival of patients with breast cancer.Wilson et al. (2012) and Sparling et al. (2007) state that the factors contributing to overweight/obesity are diverse and require a comprehensive approach that takes into account environmental and cultural influences.They also emphasize the significance of early intervention in effectively reducing rates of overweight and obesity.
The problem addressed in this paper centers on the high prevalence of obesity and overweight.In Mexico, either the rate of obesity and the prevalence of metabolic syndrome are alarming, potentially leading to the suffering of longterm health conditions.The sedentary lifestyle, unhealthy dietary habits and poor sleep patterns among Mexican adults contribute to the high rates of obesity and metabolic syndrome.The problem is worsened by the increasing prevalence of overweight/obesity among higher education staff, who are primarily engaged in sedentary activities.
The motivation behind this study is based on the need to reduce the prevalence of obesity and metabolic syndrome among higher education staff in Mexico.By utilizing ML techniques, the paper aims to contribute to the development of effective strategies for identifying health risks and promoting wellbeing.The study's focus on higher education staff underlines the importance of creating interventions tailored to specific work environments to mitigate the adverse health effects associated with sedentary behaviors.
In the present study we use data obtained in a health condition monitoring initiative involving the staff of a higher education institution situated in Matehuala, Mexico.The aim is to identify health risks through the application of ML techniques.The features include individual records comprising anthropometric measurements, glucose levels, and results obtained from BIA. Python and Scikit Learn were used to implement four classification algorithms based on ML, and four statistical techniques, that helped to compute VIMs of the features in the prediction of the individual's risk of having obesity, by observing the body age or metabolic age.
MATERIALS AND METHODS
In biomedical applications based on supervised learning, medical data are used to train the algorithm in accordance with its relation to the target.In the present pilot study, a database was created with 63 records, identifying anthropometric measurements, glucose levels and BIA results as features (Fig. 1(a)).Fig. 1(b) depicts the block diagram representing the process flow: during the Extract Transform Load (ETL) phase, data is retrieved from the database and cleaned by replacing missing values with the computed mean.Additionally, feature scaling is performed at this stage.In the Exploratory Data Analysis (EDA) phase, statistical analyses are conducted using the univariate methods (Pearson, Chi 2 and ANOVA), and both RFE and variable importance factor (VIF) methods.Within the ML model block, the following classifiers are implemented: DT, RF, artificial neural networks (ANN) and adaptive boosting (AdaBoost), aiming to obtain VIMs through Shapley additive explanations (SHAP) values.For quality assessment of the classifiers, the F1 score, the Area Under the Receiver Operating Characteristic curve (AUC-ROC), and the confusion matrix were utilized as metrics.
The recorded data include the following anthropometric measurements: age (AG), weight (WE), height (HE), BMI, WA, WHtR, arm circumference (AR), hip circumference (HP), systolic blood pressure (SBP), diastolic blood pressure (DBP), and heart rate (HR).Additionally, the health indicator includes glucose (DX), and the following functional fitness parameters: MM, VF, body fat (BF) and body age.Ageing (AGG) is defined as the ratio age/body age.
Weight, BMI, MM, MA, VF, BF and body age were derived from the BIA results obtained using an Omron HBF-514C body monitor.This device sends electrical currents through the hands via electrodes that the individual holds with both hands and through the feet via electrodes placed on the scale's surface.This combination allows for an analysis of both the upper and lower body (Pribyl et al., 2011).Participants were instructed not to exercise and to fast on the test day, including refraining from coffee.Blood pressure (BP) was measured using an inflatable cuff with a gauge around the arm, providing measurements in millimeters of mercury (mmHg) for DBP and SBP.Waist (WA), hip (HP) and arm (AR) circumferences were measured using a tape measure in centimeters.Heart rate (HR) or pulse was measured at the wrist on the radial artery in beats per minute.
The WHtR is calculated by dividing waist by height measurement in centimeters.GLU measurements were taken using a blood sugar meter, with blood samples collected from fingertip pricks, reported in millimoles per liter (mmol/L).The status of "aged" was utilized as the objective or label, determined based on the ratio between body age, and the subject's real age.Specifically, if AGG > 1.0, the subject is considered "aged."Table 2 displays the anthropometric measurements, blood glucose levels, and functional physical fitness indices obtained by BIA for the staff members.The data is presented with average values, standard deviations, as well as maximum and minimum values for each characteristic.On average, the staff members are 40 years old, with an average weight of 70 kg and height of 1.65 meters.According to Table 1, the staff is classified as overweight with an average BMI of 25.58 kg/m² (> 25), they exhibit normal glucose levels (< 99) and normal blood pressure (109/73).The BIA results gave an AGG of 1.13, indicating that the staff members are "aged" with a BF percentage of 32.9%.
Machine Learning
One of the goals of applying ML techniques to large datasets is to discover patterns among features.As listed above, some of the most important algorithms used in supervised learning are SVM, DT and RF.SVM and DT can be used for classification, and regression tasks on complex datasets, while the RF algorithm is built from many individual DT.DT learn the best way to divide the training dataset into smaller and even more smaller subsets until reaching the target prediction.In RF, the predictions from all the trees are used to make the final prediction of the target.
In ML algorithms, the relative importance of each feature is scored after training the algorithm.This method is helpful to get a better understanding of which characteristics are more important when a selection of features is required, in addition "to discovering complex relationships between predictors corresponding to interaction terms".In these algorithms, variable importance can be measured by observing the decrease in model accuracy if the values of a variable are randomly permuted (Peter et al., 2020).In this work, the following models were used: DT, RF, ANN, AdaBoost.
Statistical Analysis
In addition to the ML algorithms, other methods exist to identify VIMs.We implemented univariate analysis, the RFE method and the VIF calculation.
The univariate analysis method involves analyzing each variable in the dataset using Pearson, Chi 2 and ANOVA correlation tests.The value of 'p' is used as a criterion to determine the degree of importance of each characteristic.The Chi 2 correlation test determines whether the variables are related to the objective.The RFE method employs a ML model to iteratively remove variables with the least impact on the target prediction.Various models can serve as a basis for this technique, such as linear, SVM, DT, among others.The VIF factor provides a measure of collinearity that assesses whether two variables in the model are highly correlated and conveys similar information about the dataset's variance.In multiple regression, this helps to identify the most significant predictor variables.
Python and Scikit-Learn were utilized for the statistical analysis of the dataset, data processing, and modeling using ML techniques.Data normalization was performed before conducting the data analysis.For the classification modeling, the dataset was randomly split into two subsets: the training dataset (80% of the data) and the testing set (20% of the data).The prediction of whether an individual is aged or not aged was based on the AGG ratio.
RESULTS AND DISCUSSION
Table 3 shows the characteristics of the population by group: aged (AGG > 1.0) or not aged (AGG ≤ 1.0).According to the table, 60.3% of the population had a body age of 1.27 years older than the mean age of the staff.This group have an average weight of 74.37 kg, a BMI of 26.94 kg/m 2 and BF of 33%.Likewise, 39.6% of the population had a body age of 1.91 years younger than the mean age of the staff; with an average weight of 63.52 kg, a BMI of 23.51 kg/m 2 and BF of 32.88%.Both groups display normal glucose levels (< 99) and normal blood pressure (< 120/80).
The Pearson correlation coefficient was used to compare the characteristics of the two groups, the p value varies between 1 and -1 with 0 indicating that there is no correlation.The values of the anthropometries WE, BMI, WHtR, AR, HP, SBP and DBP are higher for the aged population, as well as the DX health indicator; and the measurements of MM, VF and BF obtained by BIA.
Classification Models
The F1-score from Scikit-Learn was used as a measure of accuracy for the classification tasks.The score is normalized, a value approaching 1.0 indicates the best performance.The accuracy of all the classification models was above 0.9, as follows: DT (1.0), RF (0.923), ANN (0.923), AdaBoost (1.0).Additionally, as a measure of performance, the AUC-ROC was computed for each classification model.A value close to 1.0 implies that the model is accurate.The ROC Curve is shown in Fig. 3.The AUC of all the classification models was above 0.9: DT (1.0), RF (0.9), Artificial ANN (0.95), AdaBoost (1.0).A confusion matrix was also used to visualize the specific accuracy for each class (aged or not aged).The confusion matrix helped to identify that all the models classified correctly 100% of the aged individuals.In addition, the RF and the ANN models classified correctly only 80% of the not aged labels, with the rest of the models scoring 100%.
Variable Importance
Since DT, RF and AdaBoost performed better among the prediction models based on ML, SHAP values were used to identify the importance of each feature and its impact on the prediction.A SHAP value of zero indicates little contribution to the prediction, so the further from zero declares higher contribution.For the Pearson and Chi 2 correlation tests, Anova, RFE method and the VIF factor, the features are ordered by importance according to the scores given by the statistical analysis.Table 4 shows the top nine features characterizing metabolic age as a result of applying each method.
The anthropometric measurements that exhibit a greater impact on the prediction of obesity and overweight include WHtR, SBP and DBP, HP and AR, HR and BMI.Additionally, the BIA results, specifically BF and MM, also show a significant impact on the models' predictions.
Within the complete dataset, the classification models identified the most significant features as BMI, BF, WHtR, DBP and HE.Meanwhile, the statistical methods highlighted BF, SBP, WHtR and HP as the most important features.The features that demonstrated greater significance in predicting metabolic aging, considering the eight proposed methods, include BMI, BF and WHtR.
These findings align with previous studies conducted on populations of various ethnicities, where BMI, WA and WHtR are suggested indicators for assessing abdominal obesity and cardiometabolic risk.However, the authors of these studies have acknowledged certain limitations when using each parameter separately.
Devajit and Haradhan ( 2023) studied BMI as one of the most popular anthropometric tools to measure body fitness in order with the intention of uncovering its constraints in accurately assessing obesity in individuals of different ethnicity.The authors found that does not capture effectively and proficiently status of overweight/obesity across all populations, regardless of sex, age, socioeconomic standing, and ethnic background.Ashwell et al. (2011) completed a study on individuals with different ethnicity, about the utilization of WHtR in detecting abdominal obesity, along with the possible health risks associated with it.The study's results indicate that WHtR surpasses WA as a more accurate predictor for diabetes, dyslipidemia, HR, and the risk of cardiovascular disease; and that abdominal obesity offers more effective instruments for discerning cardiometabolic risks linked to obesity compared to BMI.On the other hand, previous research has studied the relation of BF with obesity and metabolic AGG.Sandeep et al. (2010), produced comprehensive gene expression profiles across both visceral and subcutaneous fat stores in Asian Indian individuals with and without diabetes.Additionally, the researchers assessed multiple intermediary phenotypic traits related to diabetes, including distinct anthropometric attributes, indicators of insulin resistance and secretion, glycemic control metrics, distribution of BF, among others.The authors conclude that adipose tissue pathology is linked to diabetes in both subcutaneous and VF deposits holding a crucial role in the development of metabolic syndrome.
In regards of BF as an indicator for obesity, Jensen ( 2008) study the roles of distinct fat deposits concerning the storage and release of fatty acids in both healthy individuals and those with obesity; with the aim to discuss the disagreement regarding to the fact that upper body or visceral obesity increases the risk for conditions such as type 2 diabetes and that elevated quantities of lower BF are independently linked to a decreased risk of metabolic issues.Also, that VF mass has a more pronounced correlation with an abnormal metabolic profile compared to subcutaneous fat in the upper body.The results concludes that abdominal fat accumulation in individuals with overweight is highly associated with the metabolic complications of obesity.
Previous studies performed on Mexican population also discuss the use of BMI, WA, BF and WHtR as indicators for obesity.Sanchez Soto et al. ( 2012) found that 80% of people with obesity had high percentage of BF.
In Gutierrez-Esparza et al. ( 2020), the authors used ML algorithms to prioritize health parameters, aiming to identify the most suitable variables for classifying Metabolic Syndrome (MetS) within the Mexican population of the city of Tlalpan.They used Correlation-based Feature Selection (CFS) and Chi 2 filter methods to identify pertinent features for diagnosing MetS.In their results, WHtR, coupled with the Adult Treatment Panel III (ATP III) variables (excluding waist measurement), outperforms WAIST and BMI in terms of classification accuracy, in the prediction of metabolic syndrome in Mexican population.
In Barquera et al. (2020), the authors analyzed the data of 16,256 individuals to study the prevalence of obesity among Mexican adults while considering various physical and sociodemographic factors, and subsequently, to assess trends in these prevalence rates over time.The classification considered obesity (according to WHO standards), abdominal adiposity (as per IFD criteria), and short stature (following NOM-008-SSA3-2017).The researchers used LR models to identify the correlation between obesity and various risk factors.The results showed that heigh plays an important role in identification of obesity in Mexican women and men, although it was more notorious in women, along with WA as a complementary index that allows the evaluation of VF accumulation.The authors recognize BMI as an indicator of the risk of comorbidities associated with excessive adipose tissue, although, they state that this indicator is not very accurate for assessing adiposity at an individual level.
BMI, WHtR, WA and BF are useful to assess cardiovascular disease risk, metabolic syndrome and obesity.Also, BMI is a relevant predictor associated with mortality due to chronic kidney disease and cardiovascular peril in diabetic patients (Sanabria-Arenas, 2015;Mendoza-Niño et al., 2023;Russo et. al., 2023).
The findings emerging from this investigation could offer valuable insights for shaping healthcare initiatives for Mexican population, especially those working in higher education institutions.Including the staff's behaviors in future studies, such as sedentary lifestyles, reduced sleeping hours, lack of health awareness and long working hours, may enhance the efficiency of healthcare supervision and the design of strategies for supervision, preemptive measures and active involvement.
CONCLUSION
Sedentary behaviors in people can lead to obesity or overweight.Therefore, monitoring an individual's health condition is essential for detecting potential health risks that could progress into chronic diseases.This work described the results of data modelling focused on anthropometric measurements collected from members of a higher education staff.The anthropometric measurements included age, waist, hip and ARs; heigh, BP, HR, BMI, among others.Additionally, the results of BIA such as BF, VF, MM and body age were incorporated.The health indicator glucose was also considered.These parameters were used as features in four classification models.Also, the data was analyzed using the univariate method, RFE and VIF.The objective was to determine the variable importance to identify which features played a more crucial role in predicting metabolic aging within the group.
The contributions of this work that collectively enrich the understanding of obesity, its assessment, and its links to metabolic aging, particularly within the Mexican population and higher education staff are: • An in-depth analysis of a specific population's health characteristics is provided.A detailed statistics about the population's body age in relation to their mean age, along with their average weight, BMI, BF percentage, glucose levels and BP is presented.This comprehensive exploration emphasize the variations and potential health implications within the studied population.• A correlation analysis using Pearson correlation coefficients to identify relationships between various characteristics of the population is conducted.This analysis reveals which attributes are positively or negatively correlated and offers insights into potential connections between different health indicators.• The performance of different ML classification models for predicting metabolic aging is evaluated.The F1-score and AUC-ROC as measures of accuracy and performance are applied.All classification models performed an excellent discrimination, achieving high accuracy scores (above 0.9): DT (1.0), RF (1.0), ANN (0.923), AdaBoost (1.0).A ROC curve is also provided to visualize the accuracy of each model, supporting the effectiveness of ML techniques in predicting metabolic aging.
• The SHAP values to interpret the importance of features in the prediction models was introduced.They are used to measure the impact in the prediction for each feature and the results are compared to find coincidence to the variable importance obtained from the statistical methods.
Both anthropometric measurements and the results of BIA provide valuable references for identifying obesity and overweight in individuals.
≥ 25 in low height adults Abdominal obesity according to the Official Mexican Standard Male ≥ 90 cm Female ≥ 80 cm BMI = Actual weight (kg)/ height (m) * Low height = Less than 1.50 meters in adult female and less than 1.60 meters in adult male.Source: INSP (2018).
Fig. 1.Block diagram that describes the work process
Fig. 3 .
Fig. 3. ROC curve of the different classification models
Table 1 .
different methods are Obesity classification by BMI and WA in Mexican adults, according to the Official Mexican Standard (NOM) and the WHO
Table 2 .
General description of the anthropometries, blood glucose measurements, and functional physical fitness indices obtained by BIA for the staff members
Table 4 .
Importance of the features characterizing metabolic ageing by method. | 6,025.6 | 2023-01-01T00:00:00.000 | [
"Medicine",
"Computer Science",
"Education"
] |
Virtual Screening and Network Pharmacology-Based Study to Explore the Pharmacological Mechanism of Clerodendrum Species for Anticancer Treatment
Background Cancer is a second leading cause of death in the world, killing approximately 3500 per million people each year. Therefore, the drugs with multitarget pharmacology based on biological networks are crucial to investigate the molecular mechanisms of cancer drugs and repurpose the existing drugs to reduce adverse effects. Clerodendrum is a diversified genus with a wide range of economic and pharmacological properties. Limited studies were conducted on the genus's putative anticancer properties and the mechanisms of action based on biological networks remains unknown. This study was aimed to construct the possible compound/target/pathway biological networks for anticancer effect of Clerodendrum sp. using docking weighted network pharmacological approach and to investigate its potential mechanism of action. Methods A total of 194 natural Clerodendrum sp. Compounds were retrieved from public databases and screened using eight molecular descriptors. The cancer-associated gene targets were retrieved from databases and the function of the target genes with related pathways were examined. Cytoscape v3.7.2 was used to build three major networks: compound-target network, target-target pathway network, and compound-target-pathway network. Results Our finding indicates that the anticancer activity of Clerodendrum sp. involves 6 compounds, 9 targets, and 63 signaling pathways, resulting in multicompounds, multitargets, and multipathways networks. Additionally, molecular dynamics (MD) simulations were used to estimate the binding affinity of the best hit protein-ligand complexes. Conclusion. This study suggests the potential anticancer activity of Clerodendrum sp. which could further contribute to scavenger novel compounds for the development of new alternative anticancer drugs.
Introduction
Cancer is a neoplasmic disease that regulates uncontrolled cell division and leads to abnormal growth of cell mass. In 2018, the World Cancer Report estimated about 1.16 million new cases with 784,800 deaths and 2.2.6 million in 5-year prevalent cases in India. Te National Cancer Institute had listed eight categories for the cancer treatment which include chemotherapy, radiation, surgery, targeted therapy, stem cell transplant, and medicines and immunotherapy [1]. Some of these therapies have resulted in drug resistance after initial positive response which is termed as acquired drug resistance that occur in both cytotoxic chemotherapies and targeted therapies. Moreover, cancer involves the interactions of multiple genes as well as functional proteins. Te success of drug discovery in cancer remains unsolved due to diversity of cancer types, excessive toxicity, constraints efcacy, and acquired treatment resistance [2]. Tis complication in cancer treatment arises due to the failure of "one gene, one drug, one disease" paradigm [3,4]. Terefore, the developments of new potent and nontoxic treatments are highly active in scientifc research [5].
With the development of computational approach, disease networks constructed through network biology are suggested as powerful tools for screening out drugs targets. Network pharmacology developed by Hopkins enhances high rate of clinical success with less side efects, and around 40% of the current drugs discoveries were contributed by this approach [3]. Tis approach is best appreciated in anticancer research, where both genetic and nongenetic bypass mechanism has led to inherent in diferent cancer phenotypes [6,7].
Te practices of natural products from plants are widely accepted as potential new lead source towards the discovery of new alternative anticancer drug. Te secondary metabolites extracted from plants were considered as the primary source of drugs in Indian and other ancient systems of medicine in the world due to its higher structural diversity, metabolism in the body with low toxicity, and complexity than synthetic drugs [8]. Tese bioactive natural compounds could inhibit potential targets and reduce the cost of new drug development due to its availability in nature as well as provide option for combination therapies [9]. Since 1981 to 2014, a total of 174 compounds were approved for commercialization in the treatment of cancer. Many studies suggested that the modulation of multitargets rather than single targets could lead to discovering efective drugs [10]. Several studies reported the use of diferent plant-based resources for the treatment of cancer with scientifc validations but a majority of plants were still left to be documented [11,12]. Te Clerodendrum sp. reported to inhibit cancer activity but its efciency and mechanism remain unclear until today [13]. In this study, we focused on chemical constituents of Clerodendrum to be efective against cancer. Te genus Clerodendrum consists of 580 species that are widely distributed in tropical and subtropical regions of the world that comprises of small trees, shrubs, and herbs [14]. Many researchers isolated and identifed diferent biological active compounds and other major chemical constituents such as favonoids, phenolics, steroids, and terpenoids from the genus [15][16][17]. More than 280 major chemical constituents from various species of Clerodendrum were isolated and identifed till date. Many species of this genus were used as folk medicines by various tribes of African and Asian continents for the treatment of lifethreatening diseases, anticancer, antitumor, antidiabetic, antihypertensive, and antidiarrhoel activities [18,19]. Besides its medicinal importance, this genus has ornamental values. Species such as C. thomsoniae, C. indicum, C. panniculatum, C. inerme, C. japonicum, and C. speciosum are cultivated for their aesthetic values. C. inerme, C. thomsonia, and C. splendens are among the most sought for cultivation in gardens, for covering fences and walls.
Even though there are many studies on pharmacological efects of the genus but the anticancer activity of the genus based on in silico analysis and network pharmacology were elucidated till date. Terefore, this study was aimed to explore the active compounds, potential targets, and biological pathways of the Clerodendrum sp. using molecular docking, network pharmacology, and molecular simulations analysis, which could further provide basis for subsequent studies and clinical applications. Te detail fowchart of this study was depicted in Figure 1.
Collection of Natural Compounds and Space Analysis.
Te bioactive natural compounds (NCs) of Clerodendrum sp. were retrieved through extensive literature survey [20][21][22][23]. Te two dimensional (2D) structures of the compounds were retrieved from PubChem and Chemspider databases and prepared into a library. Te duplicate structures were deleted and some compounds which did not have specifc structures were sketched using the Marvin Sketch v6.2.0 and saved in.mol2 format. Te two dimensional (2D) structures were converted into 3D structures by using Corina 3D analysis tool in Tsar (Tools for structure activity relationships) software (https://www. accelrys.com/). Further, the three dimensional (3D) structures were converted to.pdb format. Te dataset of compounds was further analysed, and hydrogen was added using CHARMm based on smart minimizer that generates 1500 steps of Steepest Descent followed by conjugate gradient algorithms with a convergence gradient of 0.001 kcal mol. Te details of bioactive compounds retrieved from Clerodendrum sp. are listed in Table S1.
Additionally, 43 approved drugs molecules related to cancer were collected from Drug Bank and selected as reference to analyse the NCs library. Te molecular descriptors of NCs and drugs were calculated using the PaDEL-Descriptor software.
Te principal component analysis (PCA) was conducted on the NCs and drug library by using the BioVinci tools to visualize the distribution of libraries in the chemical space. Te PCA analysis was performed with 8 molecular descriptors: ALogP, Molecular Weight, Number H-Donors, Number-H Acceptors, Number Rotatable Bonds, Number of Rings, Number of Aromatic Rings, and Molecular Fractional Polar Surface Area.
ADME and Toxicity
Profling. Te ADMET study refers to the pharmacokinetics approach of a molecule where absorption, distribution, metabolism, excretion, and toxicity of the compounds could be analysed. Te ADME and toxicity properties of the selected NCs were predicted using PreADMET server and Osiris Property Explorer (https:// www.openmolecules.org/propertyexplorer/applet.html).
Retrieval of Cancer Targets.
Te protein targets associated with human cancer were collected from four resources: (i) Terapeutic Target Database (TTD) [24], (ii) DrugBank [25], (iii) Uniport [26], and (iv) Protein Data Bank [27]. Te duplicates structures, structures with no active site, incomplete structures, and structures from other organisms were deleted. Finally, 60 important drug targets from key type 'Homo sapiens' were selected. Te information of the collected protein targets is listed in Table S2. Te downloaded PDB structures were selected based on the parameters such as (a) Protein extracted from X-ray difraction process and (b) must contain one or more active sites for binding of ligands. All the structures were cleaned and optimized using USCF Chimera [28]. Among 60 3D protein structures, the resolutions of crystal structures were found to be ranging from 1.2Å to 3.5Å.
Protein-Protein Interaction (PPI) Network Analysis.
Te PPI data was obtained from STRING 11.0 (Search Tool for the Retrieval of Interacting Genes/Proteins) with species restricted to "Homo sapiens." Tis database constructs nodes and edges of network to represent proteins and proteinprotein associations. To ensure high confdence information, the minimum score was set to >0.9 and excluded the disconnected proteins in the networks. Te potential target proteins and protein pathways involved in cancer were recorded in an excel table and imported into the Cytosca-pev3.7.2 to obtain target pathway network.
Gene Ontology (GO) and Koyo Encyclopedia of Genes and Genomes (KEGG) Pathway Analysis of Cancer Targets.
All the potential target genes were uploaded to DAVID (Database for Annotation, Visualization and Integrated Discovery) database. Te identifer was selected as 'OFFICIAL GENE SYMBOL', and species were selected as 'Homo sapiens' for GO (Gene Ontology) enrichment analysis and KEGG (Kyoto Encyclopaedia of Genes and Genomes) pathway annotation [29].
2.6. Molecular Docking Approach. Te bioactive NCs were docked to target protein using PyRx tool [30]. In this study, AutoDockVina, which allows fexible docking in active site by allowing fexibility in the ligand, was used for docking [31]. Blind docking was performed in NCs against cancer targeted proteins in order to detect the possible binding sites and modes of the ligands by examining the complete mass of targeted protein. Based on docking score the results were analysed and ranked accordingly. Further, these docking results were subjected to the construction of biological networks to study the polypharmacological nature of selected NCs as cancer inhibitors.
Network Pharmacology Study.
Network Pharmacology approach was used to predict the polypharmacological potency of NCs to identify potential anticancer drugs [32]. Te three networks were constructed as follows: (i) compound-target network, (ii) target-target pathway network, and (iii) compound-target-pathway network for better understanding of the active compounds and their mechanism against cancer. All the networks were visualized with the Cytoscape v3.7.2 software and analysed using the Network Analyzer plugin [33]. Evidence-Based Complementary and Alternative Medicine targets. In this study, NVIDIA RTX 1060 GPU accelerated GROMACS 2021 software; running over Linux Ubuntu 20.04 LTS operating system was used. Te topology of protein and ligands were generated using Charmm36 force feld and SwissPARAM server [34]. Te solvation of each system was performed with TIP3P water model followed by neutralization with suitable Na + and Cl − ions. Te energy was minimized by the steepest descent minimization algorithm with maximum 50,000 steps. Position restrains were applied to receptor and ligand of the each systems for 100 ps throughout heating (300 K) utilizing NVT (No. of atoms, Volume, Temperature) together with leap-frog integrator, 2 fs time step, and LINCS holonomic constraints. After NVT, NPT (No. of atoms, Pressure, Temperature) equilibration was conducted at 300K temperature for 100 ps using 2 fs time step. Finally, 10 ns MD production run was generated without any restrain with 2 fs time step, and structures were recorded at each 10 ps coordinates. Te trajectories fles of root mean square deviation (RMSD), root mean square fuctuation (RMSF), Radius of gyration (Rg), and a number of hydrogen bond analysis (H-bonds) were calculated after completion of MD simulations for further analysis.
Screening of Active NCs of Clerodendrum sp.
A total of 194 NCs of Clerodendrum sp. were screened to create a compound library and evaluated with the eight molecular descriptors such as molecular weight (MW), number of hydrogen bond acceptors (HBAs), number of hydrogen bond donors (HBDs), total polar surface area (TPSA), and the drug-like properties of the NCs and drugs. In this study, it was found that out of 194, 126 natural products satisfed the eight physiological conditions and the details are provided in Table 1. A similar set of results were analysed for reported drug compounds. Further, a target library of 60 cancer proteins was compiled. Te NCs of Clerodendrum and reported drug libraries were further evaluated using PCA to visualize its allocation in the chemical space. Te eight molecular descriptors were used to generate the PCA model. We found that the distributions of NCs (Red sphere) are analogous to the 3D space occupied by the reported compounds (Blue sphere) and specify the resemblance of drug like assets in NCs with potential anticancer activity. Te variances of PCA1, PCA2, and PCA3 are -1.98, -1.45, and -3.27, respectively ( Figure 2). Te structures of 126 compounds were imported to PreADMET server and Osiris Property Explorer. Te results of ADME/toxicity screening showed that 58 compounds had good ADME parameters and relatively safe to be considered as drug-like compounds for inhibition of cancer. Hence, the selected NCs satisfes the overall parameters to be taken over as positive drugs and listed in Table 2 where "Green" indicates drug-conform behaviour.
GO and KEGG Pathway Enrichment Analysis.
Te potential 60 cancer targets genes were uploaded to DAVID 6.8 database for GO annotation and KEGG pathway analysis.
Te threshold was set as P ≤ 0.05, and the pathways or gene function with maximum count were analysed further.
Analysis of Target Proteins in Protein-Protein Interaction
Network. PPI network was constructed to analyse the physical interactions of 60 cancer proteins by using STRING software. Te data were downloaded in CSV fle format and imported into Cytoscape v3.7.2 software [10]. In PPI networks, a higher degree value of a node indicates the importance of node in the network. A total of 149 edges and 52 nodes with an average node degree of 4.97 were obtained the in PPI network after fltering the confdence level >0.9 and rejecting the target protein independent of the network. Te network analyser tool indicates that the proteins MAPK1, TP53, HSP90AA1, SRC, PIK3CA, HRAS, EGFR, ESR1, ERBB2, PRKCA, AURKA, BRCA1, AR, PGR, JAK2, PDPK1, PPP2CA, TYMS, TOP2A, AURKB, CDK2, and TERT contained degree higher than average node degree, which means that these proteins might play an essential role of bridge to connect other nodes in PPI network ( Figure 5).
Molecular Docking of NCs with Cancer Targets.
In this study, the 58 screened NCs were docked with 60 selected cancer target proteins using AutoDockVina in the PyRx tool. Te NCs were considered as ligands against 60 targeted cancer genes. A similar method was applied for the reported drugs to have a comparative study. All the NCs were found to bind with 60 potential targets resulting in diferent docking scores. Based on the docking score a threshold value of lower than −8.0 kcal/mol was selected and ranked the compounds based on this threshold value ( Figure 6). Te cancer target PDB ID 1RY0 (AKR1C3) showed the highest interactions with the majority of the NCs, followed by 1HFQ (DHFR), 3ZGC (NFE2L2), 4AF3 (AURKB), 4K33 (FGFR3), 5P21 (HRAS), 2I0V (CSF1R), and 3ZIM (PIK3CA) protein targets and least interaction was found with the 1M9Z (TGFBR2) protein. Further, these docking results were subjected for the NCs-Cancer target network construction in order to study the polypharmacological nature of selected NCs as cancer inhibitors.
Compound-Target (C-T) Network
Analysis. Te C-T network was constructed with a docking score (lower than −8.0 kcal/mol) to understand the mechanism of action between NCs of Clerodendrum and cancer targets (Figure 7). Te C-T network consists of 118 nodes (58 compounds and 60 potential targets) and 1492 edges (C-T interactions). Te possible interaction between NCs and target proteins wasevaluated with important topological parameters such as degree, average betweenness centrality of nodes, closeness centrality, average shortest path length, and Topological Coefcient. We found that 57 compounds (expect L13) interact with multiple targets and 59 targets expect TGFBR2 interacts multiple compounds in the constructed network. Tus, these fndings indicate multicomponents multitargets interaction in Clerodendrum sp.
Target-Target Pathway Analysis (T-P).
A total of 63 pathways related to cancer (P ≤ 0.05) were obtained by placing cancer-targeted genes in DAVID for KEGG pathway analysis. We constructed the T-P network by combining the pathways with cancer-related gene information. Tis network analyses the interaction of cancer targets and relationship between pathways and targets ( Figure 8). Te network consists of 106 nodes (63 pathways and 43 genes) and 435 edges. Te results showed that pathway in cancer contained maximum genes (22°) followed by PI3K-Akt signaling pathway, proteoglycan in cancer, prostate cancer, and focal adhesion could play a crucial role in the treatment of cancer and regulates the complex biological and metabolic processes. Tese pathways could be suggested as potential signaling pathways that mediate the efects of Clerodendrum NPs against cancer. Moreover, the target genes such as MAPK1, PIK3CA, HRAS, EGFR, TP53, PRKCA, PDPK,1and SRC genes could act on more than 18 pathways. Interfering target genes of these pathways could be a potential strategy for the future treatment of cancer disease.
Compound-Target-Pathway Network Analysis (C-T-P).
Te C-T-P network was constructed based on active compounds, target identifcation, and pathway analysis. Tis Evidence-Based Complementary and Alternative Medicine intervene the potential pathway such as pathway in cancer, PI3K-Akt signaling pathway, proteoglycans in cancer, MAPK signalling pathway, focal adhesion, and prostate cancer for treating cancer in future.
MD Simulation of Scutellarein and 17-HydroxyteuvincenoneG with 1ry0 Cancer Target.
To study the stability of protein-ligand complexes, MD simulations were performed for 10 ns with the top 2 NCs (Scutellarein and 17-hydroxyteuvincenone G) and 1ry0 cancer target along with PG2 the co-crystallized ligand as the standard. Te MD trajectories of Scutellarein-1ry0 and 17-hydroxyteuvincenoneG-1ry0 complexes were compared with co-crystallize ligand prostaglandin D2 (PG2)-1ry0 and values of RMSD, RMSF, Rg, and H-bond were recorded. Te complex with a lesser RMSD value could be more stable as compared to higher RMSD values. Te average RMSD values for Scutellarein-1ry0 and 17-hydroxyteuvincenoneG-1ry0 complexes were found to be 0.29 nm and 0.34 nm, respectively, which are signifcantly stable as compared to the native co-crystal ligand PG2 (0.56 nm). Te Scutellarein-1ry0and17-hydroxyteuvincenone G-1ry0 complexes had a lower RMSD value that range from Evidence-Based Complementary and Alternative Medicine 0.06 nm to 0.59 nm, 0.07 nm to 0.76 nm, while PG2-1ry0 complexes range from 0.06 nm to 0.98 nm (Figure 10(a)). It indicates that Scutellarein and17-hydroxyteuvincenoneG formed stable complexes with 1ry0 as compared to cocrystallize ligand PG2. Te backbone RMSF of each residue was calculated to access the amino acid residues contributing to the complex structural fuctuations. Te average RMSF values of Scutellarein-1ry0, 17-hydroxyteuvincenoneG-1ry0 and PG2-1ry0were 0.14 nm, 0.23 nm, and 0.57 nm, respectively. From the graph Figure 10(b), we could notice that there was an overall decrease in the fuctuation of Scutellarein-1ry0 as compared to 17-hydroxyteuvincenoneG and PG2. Tis suggests that Scutellarein was more stable and rigid than the 17-hydroxyteuvincenoneG structure and PG2. Te analysis of Rg is considered as root mean square distances from each atom of the system to its centre of mass. As depicted in Figure 10(c), the Rg of Scutellarein had a lower value as compared to 17-hydroxyteuvincenoneG and PG2, which indicates that Scutellarein had a compact and stable complex with 1ry0 during the MD simulation period. Additionally, the H-bonds indicate a crucial role in molecular recognition and determine the stability of the drugprotein complex structure. In Figure 10(d), the intermolecular H-bond trajectories with time-dependent bond distances variation for 10 ns were illustrated. Te average number of hydrogen bonds in the standard ligand PG2 (0.13 nm) was less than 17-hydroxyteuvincenoneG-1ry0 (0.58 nm) and Scutellarein-1ry0 (2.12 nm) throughout the MD simulation time period. Te lesser number of H-bond in the PG21-1ry0 complex makes it relatively unstable and the higher number of H-bonds in Scutellarein-1ry0 and 17-hydroxyteuvincenoneG-1ry0 represent the stability of the complexes. Overall, the analysis indicates that Scutellarein as compared to 17-hydroxyteuvincenoneG and PG2 forms a stable protein-ligand complex with 1ry0 cancer target protein and does not depict any conformational change in protein structure during the simulation process.
Discussion
Some of the developed cancer therapies have resulted in drug resistance against the synthetic drugs [35]. Terefore, natural products are considered as a potential new source of targets for drug discovery as it had higher structural diversity and could reduce the cost of new drug development due to their availability in nature. Terefore, in this study 194 NCs of Clerodendrum sp. were evaluated against 60 reported human cancer proteins. All the compounds were initially screened based on molecular descriptors and PCA analysis was evaluated to determine the relationship between statistically meaningful conformations sampled during the trajectory. ADMET profling of compounds was considered as signifcant features for drug development [36]. Te parameters such as Blood Brain Barrier Penetration (BBB), Caco2 cell permeability, Human Intestinal Absorption (HIA), Madin-Darby canine kidney cells (MDCK), Plasma Protein Binding (PPB), and Skin Permeability (SP) were selected to screen ADME properties of compounds. For toxicity prediction, parameters such as Mutagenic, Tumorigenic, Irritant, Reproductive efective, Druglikness, and Drug score were used. A total of 58 NCs were found to be Evidence-Based Complementary and Alternative Medicine ADME/toxicity positive drug-like compounds for further study. Te possible mechanism of action and signaling pathways associated with the 60 cancer genes were elucidated with enrichment analysis. Te GO enrichment analysis of cancer proteins recorded 29 charts, 45 charts, and 161 charts for CC, MF, and BP analysis. KEGG pathway analysis was performed to examine the signaling pathways and functions of the identifed target genes [37]. Te analysis of KEGG pathways found that 63 pathways were signifcantly correlated with target genes. Te highest enrichment scores in cancer pathways were obtained in the PI3K-Akt signaling pathway, proteoglycan in cancer, and prostate cancer. Te pathway in cancer depicts similar signifcance in cancer apoptosis, metastasis, cell proliferation, survival, and angiogenesis [38]. Te activation of PI3K triggers the phosphorylation of PIP2 to PIP3 that phosphorylates AKT to regulate the metabolism of insulin [39]. Te PI3K-Akt signaling pathway involves tumor cell apoptosis and autophagy [40]. Tese results supported the application of Clerodendrum compounds in treating cancer. PPI networks were constructed with 52 nodes and 149 edges and Te docking weighted network pharmacology approach was employed to explore the polypharmacological efects of Clerodendrum sp. compounds in the treatment of cancer by analysing network interaction between C-T, T-P, and C-T-P. Te network between the C-T was evaluated using degree and betweenness centrality, which indicates the number of related nodes in the network. Te C-T network suggested that 6 compounds L51 (Scutellarein), L2 (17-hydroxyteuvincenone G), L54 (Teuvincenone A), L10 (Acacetin), L55 (Teuvincenone E), and L50 (Scutellarein 7-glucuronide) had potential pharmacological activities against cancer ( Figure 11). Tis study showed similar comparisons with various works of literature in citing these compounds as anticancer agents as such the compound Scutellarein and Scutellarein 7-glucuronide, favones glycoside was reported to exhibit antiproliferative and antiapoptotic activities among various human malignancies [41,42]. Similarly, 17-hydroxyteuvincenoneG showed signifcant cytotoxicities activities against the growth of human promyelocyticleukemia (HL-60) and human lung adenocarcinoma epithelial (A-549) tumor cell lines [43]. Teuvincenone A and E had remarkable in-vitro cytotoxicity activity against human cell lines [44], and acacetin exerts an antiproliferative efect by blocking cell cycle progression and could enhance the therapeutic potential in nonsmall cell lung cancer cell [45] and breast cancer [46]. Te nine target proteins with high degree including AKR1C3 (Aldo-ketoreductase family 1 member C3), DHFR (Dihydrofolate Reductase), NFE2L2 (Nuclear factor erythroid 2-related factor 2-Like 2), AURKB (Aurora B Kinase), CD44 (CD44 antigen), PIK3CA (Phosphatidylinositol 4,5-bisphosphate 3-kinase catalytic subunit alpha isoform), FGFR3 (Fibroblast growth factor receptor 3), CSF1R (Macrophage colony-stimulating factor 1 receptor), and HRAS (GTPaseHras) were identifed as a key target of Clerodendrum sp. in the treatment of cancer ( Figure 12). Terefore, targeting these mutant proteins with novel therapeutics agents could eliminate the morbidity and mortality of human cancer.
Te network analysis of T-P and C-T-P supports different pathways such as Pathway in cancer, PI3K-Akt signaling pathway, Proteoglycans in cancer, MAPK signaling pathway, Focal adhesion, and Prostate cancer as potential signaling pathways to mediate the signifcant efects of Clerodendrum compounds against cancer.
MD simulation could determine the underlying dynamics of protein-ligand interactions [47]. Based on Network pharmacology, the MD simulation was performed for 10 ns with PG2 co-crystallize ligand as standard and determined the stability of NCs with the cancer target protein.
Te top 2 bioactive compounds (Scutellarein and 17hydroxyteuvincenone G) had maximum interactions with cancer targets and the 1ry0 cancer target based on the highest docking scores was used as starting point for MD simulation analysis.Te analysis of MD trajectories revealed that Scutellarein and17-hydroxyteuvincenoneG had favourable conformational stability, fexibility, and binding energy when docked with 1ry0 as compared to the cocrystallized structure of the standard 1ry0-PG2 complex. Terefore, Scutellarein and17-hydroxyteuvincenoneG could be proposed as efective compounds in inhibition of protein target for further in-vitro and in-vivo anticancer studies. Many studies suggest that modulation of multi-targets rather than single targets could lead to discovering efective drugs [48]. Hence, these fndings could beneft from the current knowledge of drug-protein interaction and relate pharmacological space with genomic space in order to treat cancer disease.
Conclusions
A total of 58 compounds were screened based on molecular descriptors, PCA analysis, and ADME/toxicity. Te 60 cancer target genes were analysed with GO annotation, KEGG pathway, and PPI interactions. Te selected NCs and cancer targets were analysed with network-related tools to confrmed and reveal the potential anti-cancer activity and molecular mechanism of Clerodendrum compounds. Te result of Network pharmacology analysis revealed that 6 active compounds exert their anti-cancer activity with 9 targets in 63 pathways. Tese results suggested that the identifed compounds might contribute to regulate with diferent cancer related genes and signaling pathways with potential applications in cancer treatment. Further in-vivo experimental studies are still in demands to validate our fndings as this study was performed based on data analysis and contributes to scavenger new source of compounds and the development of new anti-cancer drugs.
Data Availability
Te data are available within the manuscript and also accessible from the corresponding author upon request.
Conflicts of Interest
Te authors declare no conficts of interest. | 6,028.4 | 2022-11-02T00:00:00.000 | [
"Medicine",
"Biology",
"Chemistry"
] |
Bounds on polarization problems on compact sets via mixed integer programming
Finding point configurations, that yield the maximum polarization (Chebyshev constant) is gaining interest in the field of geometric optimization. In the present article, we study the problem of unconstrained maximum polarization on compact sets. In particular, we discuss necessary conditions for local optimality, such as that a locally optimal configuration is always contained in the convex hull of the respective darkest points. Building on this, we propose two sequences of mixed-integer linear programs in order to compute lower and upper bounds on the maximal polarization, where the lower bound is constructive. Moreover, we prove the convergence of these sequences towards the maximal polarization.
Introduction
Suppose you were given a set A and N lamps you are to place such that the darkest point in A is as bright as possible.In less descriptive terms this max-min problem is known as the maximal polarization problem, which we now state in mathematical language.
Let A, D ⊂ R n be nonempty sets and let K : A × D → R ∪ {+∞} be a function bounded from below.An N -point multiset C ⊆ D will be refered to as point configuration (of N points) and the set of all N -point configurations supported on D will be denoted by C. We assign the discrete K-potential associated with C to every point p ∈ A as U K,A (p, C) = c∈C K(p, c).
To any point configuration we associate its polarization It is then natural to consider the (maximal) polarization problem: For a broader context and overview of this formulation of the polarization problem we refer to the recent monograph [1,CH. 14].Problems of this kind have been extensively studied.In particular the case of A = D = S n−1 being a unit sphere and K(x, y) = x − y −s being related to a Riesz potential is rich in results on explicit optimal configurations of few points (eg.[2], [3], [4], [5], [6], [7] [8]), bounds on maximal polarization (eg.[4], [7]) and asymptotic results (eg.[9], [10], [11], [12]).Asymptotic results are also available for more general choices of A, such as rectifiable sets.Moreover, the polarization problem as stated in (1) is closely related to the well-studied covering problem, i.e. the question, whether A can be covered by balls of radius r > 0.
In particular, let K(x, y) = 1 [0,r] ( x − y ), then, a covering with N balls exists if and only if 1 ≤ P K (A).General discussions of covering problems can be found, for example in the seminal book by Conway and Sloane [13].For covering problems on compact metric spaces we refer to [14] for an overview, whereas constructive methods have been developed, e.g. in [15] and [16].
In this paper we consider polarization problems of the following kind.The set A ⊂ R n will be a compact set and we will impose no restrictions on the point configurations, i.e.D = R n .Furthermore, we restrict to functions K(x, y) = f ( x − y ) for some continuous strictly monotone decreasing function f : R + → R + and use the notation U f,A (p, C), P f,A (C), P f (A).If the subscript parameters are clear from context we omit them.
Under the above assumptions, we therefore consider the optimization problem (2) For explicit computations we choose Gaussians f (x) = e −ax 2 .These functions appear rather naturally in the context of universal optimality (cf.[17]): Recall that a function g : (0, ∞) → R is completely monotonic if it is infinitely differentiable and the derivatives satisfy (−1) k g (k) ≥ 0 for all k.The functions g(x) = e −αx are completely monotonic and we can write f ( x − y ) = g( x − y 2 ).In this context functions f (x) = g(x 2 ) are called completely monotonic functions of squared distance.
From this one obtains that the set of completely monotonic functions of squared distance is the cone spanned by the gaussians and the constant function x → 1.
In particular the commonly used Riesz potentials can be written in this way.
We fix some more notation for the case that the infimum P f,A is in fact a minimum, i.e. the minimizers of this function are points in A. In this case, any such minimizer will be called a darkest point of A. Moreover, will be called the set of darkest points of C. To explain this wording we invite the reader to recall the interpretation of the problem we gave in the beginning: we center lamps at the points in C which now illuminate A. The polarization of A is then the lowest level of brightness any point in A can have, any point realizing this is a "darkest point".
Note, that requiring A to be compact is rather natural.Indeed if A were unbounded, then the value of the polarization would always tend to N • inf f .If A were not closed, darkest points need not exist.Consider for example A to be the open disc and C only containing the origin.In this case, P f,A (C) is not attained at any point in A.
In Section 2 we provide some results connecting a locally optimal configuration to the set of its respective darkest points.Theorem 2.1 states that the points of such a configuration are contained in the convex hull of the darkest points while on the other hand Theorem 2.5 states that the darkest points are located either on the boundary of A or in the interior of the convex hull of the configuration.These restrictions provide necessary conditions for optimality.
In Section 3 we investigate mixed-integer approximations of the polarization problem providing upper and lower bounds.These are collected in Theorem 3.5.We then prove that these bounds indeed converge to P f (A) in Theorems 3.8 and 3.9.
In Section 4 we illustrate capabilities and limitations of the approach on some benchmark instances.
Darkest points and necessary conditions
In this section, we investigate structural properties an optimal configuration needs to satisfy in order to potentially falsify the optimality of a given polarization and reduce the search space of optimal configurations.
In particular, we have the following necessary condition that relates local optimality of a configuration to the set of its darkest points: Theorem 2.1.If C is a locally optimal solution of (2), then Proof.Suppose C is a configuration for which we have c ∈ C such that c / ∈ conv(Dark A (C)).In the following, we discuss how to construct a new configuration C in an arbitrary neighbourhood of C such that P (C ) > P (C).Thus C can not be locally optimal.Since f is continuous, the niveau line containing the darkest points is closed and thus Dark A (C) = A ∩ S is compact.Therefore, conv(Dark A (C)) is a compact convex set and we can find a hyperplane H = {x : a x = b} strictly separating this set from c such that a c < b.For ε > 0 small enough c = c + εa still satisfies a c < b.We obtain a new configuration C = C ∪ {c } \ {c}.Note, that for every neighbourhood of C, there is a sufficiently small ε such that C is contained in said neighbourhood.Obviously |c − p| < |c − p| for all points p in the non-negative halfspace of H.In particular c is closer to all of the darkest points than c and since f is monotonously decreasing for all points p in the non-negative halfspace of H.
It remains to assert this also on the negative halfspace.Since all the darkest points are on the positive side of H, a point p ∈ A ∩ (H ∪ H − ) satisfies for some constant δ.By continuity of f , for ε small enough, we can guarantee that The formulated condition is very "unstable" in the following sense: Proposition 2.
2. Suppose C ⊂ conv Dark A (C ).Then we can apply 1. to C with the roles of c, c reversed.But this would give P (C ) < P (C) < P (C ), which is a contradiction.
Optimization methods which only consider single components (like pattern search) or move single configuration points therefore possibly converge to a configuration contained in the convex hull of the darkest points which is not locally optimal.Therefore it seems reasonable to only use optimization methods which are able to move several points at once.Another conclusion is the following, which seems to suggest that the number of optimization variables can be reduced to only m − 1 vectors.Corollary 2.3.For given points We can use Theorem 2.1 to study the structure of the darkest points even more.First, we discuss a way to find certificates for p / ∈ Dark A (C). Lemma 2.4.Let C be a configuration and p ∈ R n be an arbitrary point.Let Since f is strictly monotone decreasing, we have U (q, C) < U (p, C).From this, the second claim follows immediately.
The above definition of N (p, C) of a point p contains only points at which the potential is strictly smaller than at p itself, as we just showed.We think this object will be useful beyond the scope of the previous lemma and subsequent theorem, but in the present work we only need it here.
If we recall the visualization of the polarization problem as placing light sources C to illuminate A the above definition of N (p, C) of a point p contains only points that are illuminated less than p itself.It is (by cone duality) somewhat related to the idea of a physical shadow (which would be resembled most closely by − cone{p − c : c ∈ C}).With this we prove the following result which further restricts the location of the darkest points: Theorem 2.5.Let C be a feasible configuration for (2).Then the points of Dark A (C) are either in the interior of conv(C) or in δA, i.e.
Proof.Let p ∈ Dark A (C) and assume p / ∈ int conv(C).Furthermore, let N (p, C) be defined as in Lemma 2.4.We can find a hyperplane In addition, if C is also locally optimal for (2) by Theorem 2.1 we immediately obtain that C ⊂ conv Dark A (C). Now, assume Dark A (C) ∩ δA = ∅, then as seen above Dark A (C) ⊆ int conv C and we obtain which is a contradiction since C is finite.To summarize, locally optimal configurations C of (2) and its corresponding darkest points Dark A (C) share a similar containment property as is illustrated in Figure 1.
An MIP approach to polarization
The current section is dedicated to the development of two hierarchies of mixed-integer linear programs (MIP) that approximate the maximal polarization of a compact set A with respect to a monotonically decreasing and continuous function f : R + → R + .The MIP, that computes the lower bounds is constructive, i.e. solutions to this MIP are configurations whose polarization is lower bounded by the value of the MIP.The actual polarization of these configurations may very well exceed this lower bound by a significant margin, cf. Figure 3 for some numerical evidence.
First we give an equivalent description of problem (2).For this we observe that by Theorem 2.1 any locally optimal point configuration is necessarily supported on conv(A).Furthermore we can get rid of the infimum by adding new constraints.The resulting optimization problem is then where X N describes the set of all multisets of size N with elements in X.It is now clear, that the sup is actually a max, since the feasible region can easily be made compact by bounding x from below (e.g.x ≥ 0) without changing the value of the program.
MIP Hierarchies
We observe that Problem ( 3) is an optimization problem with finitely many variables (namely x, C), but infinitely many constraints -it is a semiinfinite program (SIP)and therefore not solvable using standard solvers.In the remainder of this section we introduce two hierarchies of (tractable) MIPs, that approximate P(A) from above and below (see Theorem 3.5).For this we make use of the following concept of functions which "control" the difference of two values of f .Definition 3.1.We call a family of functions g c,p : R + → R + for c ∈ conv(A), p ∈ A a family of control functions (with respect to f , A) if for all c ∈ conv(A), p ∈ A: where • denotes the standard Euclidean norm.
Note that f is related to a function K taking two points c, p as arguments: K(c, p) = f ( c − p ).A family of control functions allows us to control the way K changes as we vary either c or p.This control will be an important ingredient of the proof of Theorem 3.5.For continuous functions this is related to bounding the slope of K as can be illustrated by the following example: Suppose the function However, applying global Lipschitz-continuity is not a very precise approximation as it ignores local information around specific points c, p. Therefore we provide a more suitable family of control functions.Proposition 3.2.For f monotonously decreasing and continuous the following is a family of control functions: Proof.We fix c, p and write g = g c,p and ĝ = ĝc,p .Clearly g(0) = ĝ(0) = 0. Since f is continuous, so is g.
which is decreasing since f is decreasing.For x ∈ (0, ∞) we have which is increasing since f is decreasing.Overall g(ε) = max(ĝ(ε), ĝ(−ε)) is an increasing function on R + .By symmetry, it is sufficient to prove that g provides an upper bound for ∆ = |f ( c−p )−f ( c−p )| for all p ∈ conv(A).To this end, we use the triangle inequalities For explicit computations we need to discretize two aspects of the problem.Firstly, we discretize the set of possible point configurations.For this we choose a finite sample Λ ⊂ conv(A) and only optimize over Secondly, we replace the infinite number of constraints, parameterized by A, by a finite subcollection.For this we again choose a finite sample Γ ⊂ A, and only consider the inequalities x ≤ U (p, C) for all p ∈ Γ.
However, this naively sampled problem is not necessarily connected to the original problem, since we enforce only a subset of the infinitely many constraints and allow only a finite number of configurations.Either one of these changes would provide valid bounds but they unfortunately work in different directions.We will now show how to overcome this problem by utilizing the above family of control functions to obtain lower and upper bounds on the original problem.
Let us first consider lower bounds on (3).It is clear that we can restrict the choice of configurations to be supported on a finite sample Λ of conv(A) as in ( 4) and obtain a program that computes a lower bound.
Discretizing the constraints is the harder part, since removing constraints lets the maximum grow.The following lemma shows how a slight variation of discretized constraints for some fintie sample Γ of A imply the validity of all of the infinitely many original constraints.Lemma 3.3.Let g c,p be a family of control functions.Let ε > 0, Λ be an arbitrary finite sample of conv(A) and be an ε-net of A. Furthermore, suppose Proof.Let p ∈ A be arbitrary and n(p) = argmin p∈Γ { p − p } denote the closest sample point to p ∈ A. Note that p − n(p) < ε since Γ is an ε-net.Then, which is larger than x since g c,n(p) is non-decreasing and p − n(p) < ε.
Conversely, if we consider upper bounds on (3), we now cannot simply choose a finite sample Λ of conv(A) to approximate the above SIP.Indeed this would restrict the set of feasible solutions of (3) and thereby lower the maximum instead.Again, the following lemma provides a way around this problem using a variation of the constraints.
Lemma 3.4.Let g c,p be a family of control functions.Let ε > 0 and Λ be an ε-net of conv(A).Furthermore, suppose C ∈ conv(A) N and x satisfy Then, there exists a configuration where the last inequality holds since g n(c),p is non-decreasing and c − n(c) < ε as Λ is an ε-net of conv(A).
Now we can prove the main result of this section.Theorem 3.5.Let ε Λ , ε Γ > 0 and Λ be an ε Λ -net of conv(A) and Γ be an ε Γ -net of A. Furthermore, let g c,p be a family of control functions.Then we have the following: for all p ∈ Γ Proof.We show, that feasible solutions of the left hand sides are also feasible for the right hand sides with the same objective value justifying the asserted inequalities.First, observe that Lemma 3.3 implies that a feasible solution x, y of (6a) is also feasible for (6b) and the objective values coincide.Next, we consider a feasible solution x, y of (6b) and observe that y encodes a multiset x, C satisfy the constraints in ( 2) and with the same objective value x.The next inequality follows rather immediately since (6d) is a relaxation of (2) due to dropping constraints for p ∈ A \ Γ. Lastly, if x, C is a feasible solution of (6d), we apply Lemma Let us briefly comment on the computational complexity of the mixed-integer programs (6a) and (6e).It is worth noting, that mixed-integer linear programming usually refers to optimization problems that include binary variables, which run significantly faster.We would like to note that the integral variables y ∈ {0, . . ., N } Λ in both (6a) and (6e) can be replaced by |Λ| • log(N ) binary variables.
Moreover, in the lower bound of Theorem 3.5 the vector y can be chosen as y ∈ {0, 1} Λ , which still provides a (potentially worse) lower bound and reduces the number of binary variables significantly.Unfortunately, a similar simplification is not immediately possible for the upper bound.However, we introduce another concept which aims to reduce the computational complexity in a similar fashion in the upper bound case.Definition 3.6.
For every p ∈ A there are at least k distinct points p 1 , . . ., p k ∈ Λ such that |p i −p| < ε.
Using an (ε Λ , N )-net we obtain a hierarchy similar to Theorem 3.5 restricting the possible entries of y to {0, 1}.Proposition 3.7.Let ε Λ , ε Γ > 0 and Λ be an (ε Λ , N )-net of conv(A) and Γ be an ε Γ -net of A. Furthermore, let g c,p be a family of control functions.Then, for all p ∈ Γ Proof.The proof works similar to the proof of Theorem 3.5 by replacing C = {{n(c) : c ∈ C}} in the proof of Lemma 3.4 by a set C of N distinct points of Λ.This is possible since Λ is an (ε Λ , N )-net (see Definition 3.6).
A trivial example of an (ε, N )-net can basically be obtained by a multiset consisting of N copies of an ε-net.However, in practise there are usually solutions that need fewer points, albeit more than a classical ε-net.
Convergence Results
After establishing upper and lower bounds to P(A) through the hierarchies presented in Theorem 3.5, we study the quality of these bounds.To this end, we show in this section, that solutions of the bounding problems (6a) and (6e) converge, as ε Λ , ε Γ both tend to 0, to a solution of the original problem (3).Both proofs rely in large parts on the proof of Lemma 6.1 in [19], which proves similar convergence for more general semiinfinite programs, but include minor necessary modifications.At first, we focus on the lower bounds, i.e., we show, that (6a) converges to (6b) as ε Γ → 0: Theorem 3.8.Let (ε k ) be a non-negative sequence converging towards 0. Furthermore, for every k ∈ N choose an ε k net Γ k of A. Then, any accumulation point of a sequence (x k , y k ) k∈N of optimal solutions of (6a) w.r.t.Γ k and ε k is an optimal solution of (6b).
Proof.Let (x, ȳ) be an accumulation point of (x k , y k ).By passing to a subsequence we can assume that (x k , y k ) → (x, ȳ) if k → ∞.We are now going to prove, that (x, ȳ) is feasible and in fact optimal for (6b): Consider an arbitrary p ∈ A and observe that since Γ k is an ε k -net of A, there exists a sequence (p k ) with p k ∈ Γ k such that p k → p as k → ∞.We observe further, that for all k we have and by taking limits x Hence, (x, ȳ) is feasible for (6b).Now, let (x, y) be an arbitrary solution to (6b).Since A is compact and ε k > 0, we know that g c = max p∈A g c,p is a continuous, monotonously non-decreasing function with g c (0) = 0. We now observe, that is feasible for (6a) with respect to Γ k .Since (x k , y k ) is an optimal solution to (6a), we have x k ≥ x − c∈Λ y c • g c (ε k ).Consequently, as g c (0) = 0, in the limit we obtain that x ≥ x.Since x was chosen arbitrarily, we conclude, that (x, ȳ) is indeed optimal for (6b).
One difficulty of the following theorem is the different kinds of feasible solutions when altering the sample Λ. Feasible solutions of (6e) have the form y ∈ {1, . . ., N } Λ with 1 y = N while feasible solutions of (6d) are N -point multisets supported on conv(A).Note that these objects do not permit an easy discussion of convergence.However, both notions can be translated into an element ω ∈ (conv(A)) N which is independent of Λ and allows a discussion of convergence.Note that ω can canonically be translated back into a multiset.Theorem 3.9.Let (ε k ) be a non-negative sequence converging towards 0. Furthermore, for every k ∈ N choose an ε k -net Λ k of conv(A).Let (x k , y k ) be a sequence of optimal solutions of (6e) w.r.t.Λ k , ε k .Identifying each y k with ω k ∈ (conv(A)) N , any accumulation point (x, ω) of this sequence corresponds to an optimal solution of (6d) by identification of ω with a multiset.
Proof.The proof is similiar to the proof of Theorem 3.8.Note that, since order of elements is not important for the discussed problems, we can regard to elements of (conv(A)) N either as tuples or as multisets depending on the context.Suppose (x k , ω k ) with has an accumulation point (x, ω).By passing to a subsequence we can assume that (x k , ω k ) → (x, ω).Consider the continuous function g p = max c∈conv(A) g c,p with g p (0) = 0.Then, we have for all k and p ∈ Γ: By taking limits we obtain x ≤ N i=1 f ( ωi − p ) for all p ∈ Γ.Thus x, ω is feasible for (6d).Now suppose x, ω is an arbitrary solution of (6d).Then by Lemma 3.4 there exists ω k such that x, ω k is a feasible solution for (6d).Since (x k , ω k ) is an optimal solution, we have x k ≥ x and by taking limits x ≥ x.Therefore x is also optimal for (6e).
Note, that the proofs of Theorems 3.8, 3.9 still work if we restrict y to be binary as was discussed at the end of Section 3.1.
Combining Theorems 3.8 and 3.9, we conclude that by choosing a suitable sequence (ε Γ ) k , (ε Λ ) k , we can in theory bound the value of P(A) as tightly as we need.However, solving the respective mixed-integer linear problems in practice will pose a computational challenge.
Computational results
This section presents numerical experiments illustrating the capabilities and limits of the MIP approach presented in this paper.All computations have been performed using Gurobi on a HP DL380 Gen9 server with two Intel(R) Xeon(R) CPU<EMAIL_ADDRESS>(each with 14 cores) and 256 GB RAM.We first focus on a simple illustrative example, where A is an equilateral triangle and the size of the configuration is N = 3.In addition, we chose f (x) = e −5 x 2 for our potential function and ε Γ = 0.014, ε Λ = ε Γ /3 as the respective discretization widths of Γ ⊆ A and Λ ⊆ conv(A).Lastly, we restrict both, (6a) and (6e) to binary variables y ∈ {0, 1} Λ instead of integral y ∈ {0, . . ., N } Λ as discussed below Theorem 3.5.Since we expect the resulting configuration to consist of three separate points, this should not significantly impact the quality of the bounds.
We illustrate the configuration given by (6a) in Figure 2. It was obtained after approximately 10 hours.We continue by assessing the numerical evidence on the convergence for the above example.To this end, we illustrate the quality of the binary versions of both, (6a) and (6e) for decreasing values of ε Λ and ε Γ .Here, the binary variant of (6e) was derived from Proposition 3.7.To be precise, for every ε ∈ {0.04, 0.038, . . ., 0.014} we computed the lower bound using ε Λ = ε/3, ε Γ = ε and the upper bound using ε Γ = ε Λ = ε.We chose these scalings for a better comparability, since the (ε Λ , 3)-net in the upperbound case contains more sample points and therefore yields more variables than an (ε Λ , 1)-net.Furthermore, we used scaled versions of the A 2 lattice complemented with additional sample points on the boundary to generate the samples Λ and Γ.This construction ensures that both, Λ and Γ are indeed ε Λ and ε Γ -nets respectively.The obtained bounds are visualized in Figure 3.
It is apparent, that lower values of ε do not always yield better bounds although there is a clearly visible trend to close the gap between the bounds as can be expected from our convergence results established in Theorems 3.8 and 3.9.A drawback of this approach is the computational runtime of the respective MIPs, which vastly increases with the sample size of Γ and Λ from a few seconds if ε = 0.04 to 10 hours for ε = 0.014.As an additional academic example, we use the same approach for different suitable choices of ε = ε Λ = ε Γ and different convex, non-convex or even non-connected A to showcase the wide applicability of our approach.We illustrate the polarizations derived by the binary approximation of our lower bound MIP (6a) in Figure 4.
Fig. 4 Optimal configurations of (6a) for different A in orange with a heatmap of the respective potential (from dark blue over green to yellow).The border of the respective shape A is highlighted in blue (from left to right: ball, triangles, non-convex shape).
Moreover, we briefly summarize the computational results on these additional shapes A in Table 4 below.The respective sample widths were chosen such that the corresponding MIPs could be solved in reasonable time.We note, that the shape of A significantly impacts the runtime of our MIP approach.It seems that the large symmetry group of the ball may contribute to a larger runtime as good solutions may be found everywhere in the branch-and-bound tree used by solvers such as Gurobi.If true, symmetry reduction techniques may lead to substantial improvements.
Outlook
We have seen in Section 2 that the location of the darkest points and the location of the points of a locally optimal configuration are intertwined.We suspect that these results can be extended, in particular by utilizing symmetries of A or requiring A to be convex or even a polytope.Furthermore, it would be interesting to extend these results to other choices of D. However, it is clear that there will be limitations to these kinds of results.Consider for example A = D = S n−1 the unit sphere.In this case, no obvious variant of Theorem 2.1 holds.
In this paper, we have not dealt with explicit computations of locally or globally optimal point configurations, even on simple sets such as n-gons or the unit ball.However, numerical experiments suggest that such configurations show some structure and we hope that extensions of the results in Section 2 can be utilized to obtain proof of optimality for some configurations.Here, we would like to highlight one result in this direction we are aware of, namely that for certain Riesz potentials of modest decay and A equals the closed d-dimensional unit ball, the optimal point configuration consists of N copies of the origin (see [1,Theorem 14.2.6]).We were able to observe similar effects in numerical experiments on regular polytopes.
The MIP hierarchies presented in Section 3 give provable upper and lower bounds converging to the optimal solution.However, unsurprisingly computing these bounds for sufficiently fine samples is very time consuming since MIP is NP-complete.A natural question is, whether well known techniques from mathematical programming -such as convex relaxations, inner approximations, column generation or local refinement, that speed up the computations can be utilized to achieve results for finer samples.However, most of these techniques only provide approximations of the discussed MIP hierarchies, which might limit the gain achieved through the finer samples.
Moreover, it might be helpful to carefully fit the choice of the samples to the specific instance of the problem.For example, if one has a conjecture for an optimal configuration and/or the correct location of the darkest points, this information can be fitted into the samples while retaining the ε-net property of the samples.Furthermore, these ideas might provide a way to use our bounds for analytic proofs of optimality in highly structured situations.Data Availability.Data sharing not applicable to this article as no datasets were generated oranalysed during the current study.
2 . 1 .
Let C be a configuration such that C ⊂ conv Dark A (C).Let c ∈ C and c = c and C = C ∪ {c } \ {c}.Then 1. P (C ) < P (C) and 2. C conv Dark A (C ).Proof.Consider the hyperplane H with outer normal c − c through c, oriented such that c is on the negative side.Since c ∈ conv Dark A (C) there has to be a darkest point d ∈ Dark A (C) in the non-negative halfspace of H (it might be in H).Then c − d < c − d and by monotonicity f ( c − d ) > f ( c − d ).The potentials U (d, C ) and U (d, C) differ by f ( c − d ) − f ( c − d ), therefore the above implies
Fig. 1
Fig. 1 Illustration of Theorems 2.1 and 2.5.A is depicted in red, Dark A in black and the configuration C in orange.The dashed lines depict the convex hulls conv C and conv Dark A (C), whereas the black line depicts all points p ∈ R 2 , such that U (p, C) = P (C).
3. 4
to obtain a set C ∈ Λ N satisfying the constraints of (6e).Then, by encoding C through y ∈ {0, . . ., N } Λ with 1 y = N we obtain a feasible solution to (6e) with the same objectve value x.
Fig. 2
Fig.2Optimal configuration for (6a) with ε = 0.014 and a heatmap of the respective f -potential (from dark blue over green to yellow).The points of the configuration are represented by orange circles.
Fig. 3
Fig.3Upper and lower bounds computed with decreasing values of ε and the respective running optimum (dashed lines) as well as an approximate polarization of the lower bound configuration. | 7,488.6 | 2023-03-17T00:00:00.000 | [
"Mathematics"
] |
Dynamic Analysis of a Particle Motion System
This paper formulates a new particle motion system. The dynamic behaviors of the system are studied including the continuous dependence on initial conditions of the system’s solution, the equilibrium stability, Hopf bifurcation at the equilibrium point, etc. This shows the rich dynamic behaviors of the system, including the supercritical Hopf bifurcations, subcritical Hopf bifurcations, and chaotic attractors. Numerical simulations are carried out to verify theoretical analyses and to exhibit the rich dynamic behaviors.
Introduction
There are some scholars who have studied the dynamic behaviors of particle motion. The results show that particle motion is a complex dynamic behavior in some case, such as chaotic motion. For instance, Abbott N. L. investigated the diffusion of a colloidal particle in a liquid crystalline solvent [1]. Chen C. and his colleague studied the chaotic particle dynamics in free-electron lasers and obtained that the particle motion becomes chaotic on a time scale. Here, the time scale is the characteristic time scale for radial-gradient-induced changes in the particle orbits, which is shown to be of the order of the beam transit time through a few wiggler periods [2]. Research showed that the chaos of a particle probing the black hole horizon had a universal upper bound for the Lyapunov exponent [3]. Since chaos began to be studied, it has been a common belief that understanding and utilizing the rich dynamics of a nonlinear system have an important impact on modern technology. Therefore, it also promotes the study of chaos, and some useful results have been obtained. For example, Sprott J. C. and Xiong A. [4] presented a method for classifying basins of attraction and quantifying their size for any dissipative dynamical system, and the results were useful to describe the basin of attraction and quantifying its shape and size for both theoretical and practical reasons. By using the Pynamical software package, Boeing G. [5] investigated visualization methods of nonlinear dynamical systems' behavior and indicated that these methods can help researchers discover, examine, and understand the behaviors of nonlinear dynamical systems, including bifurcations, the path to chaos, fractals, and strange attractors. Bradley E. and Kantz H. [6] illustrated that the results of nonlinear time-series analysis can be helpful in understanding, characterizing, and predicting dynamical systems. In fact, chaos has many manifestations in many different situations [7]. Meanwhile, many systems will appear with multiple equilibrium points under some parameter conditions, and the increase of the equilibrium points or multi-equilibrium points may lead to richer dynamic behaviors of the system [8][9][10]. For these reasons, we will formulate a particle motion model under external force and discuss the Hopf bifurcation and chaotic behaviors of the system.
In this paper, we will formulate a new model for particle motion and the stability of equilibrium points. The continuous dependence on initial conditions of the system's solution and Hopf bifurcation are investigated in Section 2. To further study the complex dynamic behaviors of particle motion, simulations including Lyapunov exponents, Poincaré maps and phase portraits of the chaotic attractor for the system are given in Section 3. A summary of our results and further discussion are presented in Section 4.
Model
There are rich dynamic behaviors in some cases, such as in sheared suspensions [11], in creeping flow [12], and around a weakly-magnetized Schwarzschild black hole [13]. This shows that the particle motion becomes complex because of the existence of external force, and the particle system has different dynamic behaviors under different external forces [14][15][16]. Here, we assume that a particle with unit mass is moving on a horizontal smooth plane (p, q), and the forces on the particle in p and q direction are F p and F q , respectively, where: a 11 , a 12 , a 13 , and a 14 are all positive parameters. The dot expresses the derivative with respect to the time variable t. Then, the particle motion equations are described by: (1)
Symmetry and Dissipation
Obviously, System (1) is symmetric with coordinate transformations: The divergence of (1) is: Thus, the system is dissipative. This indicates that the volume element V 0 e −2a 14 t as t → ∞, then all the trajectories of the system (1) are ultimately in an attractor.
The following theorem is obtained.
The sufficient condition for the existence and uniqueness of the solution of (1) with initial conditions P(0) = P 0 in the region D 1 × D 2 is 0 < ρ < 1.
Continuous Dependence on Initial Conditions
Based on the results in Section 2.1, we have: where: . P 10 and P 20 are all the initial conditions to (2) and P 10 − P 20 ≤ ε. Under the condition of Theorem 1, the following inequality is obtained: There is continuous dependence on the initial conditions of the solution of (1) under the condition of Theorem 1.
Equilibrium and Stability
It is easy to visualize that (1) always has three equilibrium points, i.e., , 0).
In the following, we will discuss the bifurcations at the rest of the equilibrium points.
The same approach can be used to study the Hopf bifurcation at the other equilibrium e 4 .
In this case, the system (1) has nine equilibrium points. Table 1 indicates the eigenvalues of the corresponding Jacobian matrix and the equilibria type and shows the unstable manifold and stable manifold at the equilibrium points of the particle motion system. It has been long supposed that the existence of chaotic behavior in the microscopic motions is responsible for their equilibrium and non-equilibrium properties [18], and the increase of the equilibrium points or multi-equilibrium points may lead to abundant dynamic behaviors of the system [8][9][10]. From the above results, it is shown that the system (1) has multi-equilibrium points with a stable manifold and an unstable manifold. Therefore, the particle motion system has rich dynamic behaviors. The Lyapunov exponents are 0.01, 0.00, −0.01, and −0.02 by using the method in [19]; thus, the system (1) is chaotic. In addition, the chaotic phenomena can also be reflected by the Poincaré maps [20]. Therefore, the chaotic attractor in the p − q plane of System (1) with parameters a 11 = 1, a 12 = 2, a 13 = 0.2, a 14 = 0.01 and initial values (0.7, 0, −0.01, 0) is shown in Figure 3a, and the Poincaré mapping on the section hyperplane u = 0 is given in Figure 3b. The chaotic attractor in the u − v plane of System (1) with parameters a 11 = 1, a 12 = 2, a 13 = 0.2, a 14 = 0.01 and initial values (0.7, 0, −0.01, 0) is shown in Figure 4a, and the Poincaré mapping on the section hyperplane p = 0 is given in Figure 4b. Here, the Runge-Kutta method of order four is employed with the time step of 0.001 from t = 0 to t = 300. This shows that the particle motion trajectories and the velocities of the particle in both directions are complex and the particle motion chaotic. Hence, the particle motion system shows chaotic behavior. To further illustrate the strange attractors of System (1), Figures 5 and 6 show the chaos phase portrait of p − u and q − v and the corresponding Poincaré map by taking the same parameters and initial values as Figure 3.
Conclusions
In this paper, a particle motion model is formulated by introducing external forces. The dynamic behaviors of the system are investigated, including the symmetry, the existence and uniqueness of the solution, and the continuous dependence on initial conditions. The range of the parameter where the solution of the system shows continuous dependence on initial conditions can be determined from Theorems 1 and 2. Consequently, the range of parameter values of the system where the system does not exhibit chaotic behavior can be determined in theory. The results provide great help in controlling the dynamic behavior of the particle motion system. By using the center manifold theorem and simulations, the Hopf bifurcations at the equilibrium and chaotic behavior are studied. This illustrates that the particle motion system has rich dynamic phenomena and also indicates the influence of the external force on the particle motion trajectories. Compared to [21,22], different results are obtained, such as Theorems 1 and 2, and the dynamic behaviors of the particle motion system are investigated by applying different methods such as the method in [17] and the Poincaré section. These results are helpful for further understanding the state of particle motion under external force. How to effectively control the chaotic behavior and bifurcation phenomena of particle motion will be our next research direction.
Author Contributions: All authors have equally contributed to this work. All authors revised and edited the final version of the manuscript.
Acknowledgments:
The authors would like to thank the editor and referees for their positive and constructive comments, which are all valuable and very helpful for improving this paper.
Conflicts of Interest:
The authors declare no conflict of interest. | 2,100 | 2018-12-21T00:00:00.000 | [
"Mathematics",
"Engineering"
] |
The Potential Role of the cABR in Assessment and Management of Hearing Impairment
Hearing aid technology has improved dramatically in the last decade, especially in the ability to adaptively respond to dynamic aspects of background noise. Despite these advancements, however, hearing aid users continue to report difficulty hearing in background noise and having trouble adjusting to amplified sound quality. These difficulties may arise in part from current approaches to hearing aid fittings, which largely focus on increased audibility and management of environmental noise. These approaches do not take into account the fact that sound is processed all along the auditory system from the cochlea to the auditory cortex. Older adults represent the largest group of hearing aid wearers; yet older adults are known to have deficits in temporal resolution in the central auditory system. Here we review evidence that supports the use of the auditory brainstem response to complex sounds (cABR) in the assessment of hearing-in-noise difficulties and auditory training efficacy in older adults.
Introduction
In recent years, scientists and clinicians have become increasingly aware of the role of cognition in successful management of hearing loss, particularly in older adults. While it is often said that "we hear with our brain, not just with our ears, " the focus of the typical hearing aid fitting continues to be one of providing audibility. Despite evidence of age-related deficits in temporal processing [1][2][3][4][5][6], abilities beyond the cochlea are seldom measured. Moreover, when auditory processing is assessed, behavioral measures may be affected by reduced cognitive abilities in the domains of attention and memory [7,8]; for example, an individual with poor memory will struggle to repeat back long sentences in noise. The assessment and management of hearing loss in older adults would be enhanced by an objective measure of speech processing. The auditory brainstem response (ABR) provides such an objective measure of auditory function; its uses have included evaluation of hearing thresholds in infants, children, and individuals who are difficult to test, assessment of auditory neuropathy, and screening for retrocochlear function [9]. Traditionally, the ABR has used short, simple stimuli, such as pure tones and tone bursts, but the ABR has also been recorded to complex tones, speech, and music for more than three decades, with the ABR's frequency following response (FFR) reflecting the temporal discharge of auditory neurons in the upper midbrain [10,11]. Here, we review the role of the ABR to complex sounds (cABR) in assessment and documentation of treatment outcomes, and we suggest a potential role of the cABR in hearing aid fitting.
The cABR Approach
The cABR provides an objective measure of subcortical speech processing [12,13]. It arises largely from the inferior colliculus of the upper midbrain [14], functioning as International Journal of Otolaryngology part of a circuit that interacts with cognitive, top-down influences. Unlike the click-evoked response, which bears no resemblance to the click waveform, the cABR waveform is remarkably similar to its complex stimulus waveform, whether a speech syllable or a musical chord, allowing for fine-grained evaluations of timing, pitch, and timbre representation. The click is short, nearly instantaneous, or approximately 0.1 ms, but the cABR may be elicited by complex stimuli that can persist for several seconds. The cABR's response waveform can be analyzed to determine how robustly it represents different segments of the speech stimulus. For example, in response to the syllable /da/, the onset of the cABR occurs at approximately 9 ms after stimulus onset, which would be expected when taking into account neural conduction time. The cABR onset is analogous to wave V of the brainstem's response to a click stimulus, but the cABR has potentially greater diagnostic sensitivity for certain clinical populations. For example, in a comparison between children with learning impairments versus children who are typically developing, significant differences were found for the cABR but not for responses to click stimuli [15]. The FFR comprises two regions: the transition region corresponding to the consonant-vowel (CV) formant transition and the steady-state region corresponding to the relatively unchanging vowel. The CV transition is perceptually vulnerable [16], particularly in noise, and the transition may be more degraded in noise than the steady state, especially in individuals with poorer speech-in-noise (SIN) perception [17].
The cABR is recorded to alternating polarities, and the average response to these polarities is added to minimize the cochlear microphonic and stimulus artifact [18,19]. Phase locking to the stimulus envelope, which is noninverting, enhances representation of the envelope and biases the response towards the low frequency components of the response. On the other hand, phase locking to the spectral energy in the stimulus follows the inverting phase of the stimulus; therefore, adding responses to alternating polarities cancels out much of the spectral energy [13,20]. Subtracting responses to alternating polarities, however, enhances the representation of spectral energy while minimizing the response to the envelope. One might choose to use added or subtracted polarities, or both, depending on the hypothetical question. For example, differences between good and poor readers are most prominent in the spectral region corresponding to the first formant of speech and are therefore more evident in subtracted polarities [21]. In contrast, the neural signature of good speech-in-noise perception is in the low frequency component of the response, which is most evident with added polarities [22]. The average response waveform of 17 normal hearing older adults (ages 60 to 67) and its evoking stimulus and stimulus and response spectra (to added and subtracted polarities) are displayed in Figure 1.
The cABR is acoustically similar to the stimulus. That is, after the cABR waveform has been converted to a .wav file, untrained listeners are able to recognize monosyllabic words from brainstem responses evoked by those words [23]. The fidelity of the response to the stimulus permits evaluation of the strength of subcortical encoding of multiple acoustic aspects of complex sounds, including timing (onsets, offsets), pitch (the fundamental frequency, 0 ), and timbre (the integer harmonics of the 0 ) [13]. Analyses of the cABR include measurement of latency and amplitude in the time domain and magnitude of the 0 and individual harmonics in the frequency domain. Because of the cABR's remarkable stimulus fidelity, cross-correlation between the stimulus and the response also provides a meaningful measure [24]. In addition, responses between two conditions can be crosscorrelated to determine the effects of a specific condition such as noise on a response [25].
Latency analysis has traditionally relied on picking individual peaks, a subjective task that is prone to error. Phase analysis provides an objective method for assessing temporal precision. Because the brainstem represents stimulus frequency differences occurring above 2000 Hz (the upper limits of brainstem phase locking) through timing [26] and phase representation [27,28], the phase difference between two waveforms (in radians) can be converted to timing differences and represented in a "phaseogram. " This analysis provides an objective measure of the response timing on a frequencyspecific basis. For example, the brainstem's ability to encode phase differences in the formant trajectories between syllables International Journal of Otolaryngology 3 such as /ba/ and /ga/ can be assessed and compared to a normal standard or between groups in a way that would not be feasible if the analysis was limited to peak picking ( Figure 2). Although the response peaks corresponding to the 0 are discernible, the peaks in the higher frequency formant transition region such as in Figure 2 would be difficult to identify, even for the trained eye.
In natural speech, frequency components change rapidly, and a pitch tracking analysis can be used to evaluate the ability of the brainstem to encode the changing fundamental frequency over time. From this analysis, a measure of pitch strength can be computed using short-term autocorrelation, a method which determines signal periodicity as the signal is compared to a time-shifted copy of itself. Pitch-tracking error is determined by comparing the stimulus 0 with the response 0 for successive periods of the response [29,30]. These and other measures produced by the pitch-tracking analysis reveal that the FFR is malleable and experience dependent, with better pitch tracking in individuals who have heard changing vowel contours or frequency sweeps in meaningful contexts, such as in tonal languages or music [24,31].
Other automated analyses which could potentially be incorporated into a clinical protocol include the assessment of response consistency and phase locking. Response consistency provides a way of evaluating trial-to-trial withinsubject variability, perhaps representing the degree of temporal jitter or asynchronous neural firing that might be seen in an impaired or aging auditory system [6]. Auditory neuropathy spectrum disorder would be an extreme example of dyssynchronous neural firing, affecting even the response to the click [32][33][34]. A mild form of dyssynchrony, however, may not be evident in the results of the typical audiologic or ABR protocol but might be observed in a cABR with poor response consistency. The phase-locking factor is another measure of response consistency, providing a measure of trial-to-trial phase coherence [35,36]. Phase locking refers to the repetitive neural response to periodic sounds. While response consistency is determined largely by the stimulus envelope, the phase-locking factor is a measure of consistency of the stimulus-evoked oscillatory activity [37].
The cABR and Assessment of Hearing Loss and the Ability to Hear in Noise
The cABR may potentially play an important role in assessment of hearing loss and hearing in noise. It has good testretest reliability [39,40], a necessity for clinical comparisons and for documentation of treatment outcomes. Just as latency differences of 0.2 ms for brainstem responses to click stimuli can be considered clinically significant when screening for vestibular schwannomas [9], similar differences on the order of fractions of milliseconds in the cABR have been found to reliably separate clinical populations [41,42]. Banai et al. [41] found that the onset and other peaks in the cABR are delayed 0.2 to 0.3 ms in children who are good readers compared to poor readers. In older adults, the offset latency is a strong predictor of self-assessed SIN perception in older adults, with latencies ranging from 47 to 51 ms in responses to a 40 ms /da/ (formant transition only) [43]. Temporal processing deficits are also seen in children with specific language impairment, who have decreased ability to track frequency changes in tonal sweeps, especially at faster rates [44].
Because of the influence of central and cognitive factors on speech-in-noise perception, the pure-tone audiogram, a largely peripheral measure, does not adequately predict the ability to hear in background noise, especially in older adults [45][46][47]. Due to the convergence of afferent and efferent transmission in the inferior colliculus (IC) [48,49], we propose that the cABR is an effective method for assessing the effects of sensory processing and higher auditory function on the IC. While the cABR does not directly assess cognitive function, it is influenced by higher-level processing (e.g., selective attention, auditory training). The cABR is elicited passively without the patient's input or cooperation beyond maintaining a relaxed state, yet it provides in essence a snapshot in time of auditory processing that reflects both cognitive (auditory memory and attention) and sensory influences.
In a study of hearing-, age-, and sex-matched older adults (ages 60-73) with clinically normal hearing, the older adults with good speech-in-noise perception had more robust subcortical stimulus representation, with higher rootmean-square (RMS) and 0 amplitudes compared to older adults with poor speech-in-noise perception (Figure 3) [38]. Perception of the 0 is important for object identification and stream segregation, allowing us to attend to a single voice from a background of voices [50]; therefore, greater representation of the 0 in subcortical responses may enhance one's ability to hear in noise. When we added noise (six-talker babble) to the presentation of the syllable, we found that the responses of individuals in the top speech-in-noise group were less degraded than in the bottom speech-in-noise group (Figure 3). These results are consistent with research from more than two decades documenting suprathreshold deficits The responses in the poor speech-in-noise group were more susceptible to the degrading effects of noise, as shown by greater differences in responses to the /da/ in quiet and noise (cross-correlations) (c). Relationship between speech-in-noise perception and the quiet-noise correlation (d). * < 0.05, * * < 0.01. Modified from [38]. that cannot be identified by threshold testing [46,47,[51][52][53][54][55][56][57][58]. Even in normal-hearing young adults, better speech-innoise perception is related to more robust encoding of the 0 in the cABR [53]. Furthermore, in a study with young adult participants, Ruggles et al. [51] found that spatial selective auditory attention performance correlates with the phase locking of the FFR to the speech syllable /da/. Furthermore, they found that selective attention correlates with the ability to detect frequency modulation but is not related to age, reading span, or hearing threshold.
The cABR provides evidence of age-related declines in temporal and spectral precision, providing a neural basis for speech-in-noise perception difficulties. In older adults, delayed neural timing is found in the region corresponding to the CV formant transition [59,60], but timing in the steady-state region remains unchanged. Importantly, agerelated differences are seen in middle-aged adults as young as 45, indicating that declines in temporal resolution are not limited to the elderly population. Robustness of frequency representation also decreases with age, with the amplitude of the fundamental frequency declining in middle-and in older-aged adults. These results provide neural evidence for the finding of adults having trouble hearing in noise as soon as the middle-aged years [61].
What is the role of the cABR in clinical practice? The cABR can be collected in as little as 20 minutes, including electrode application. Nevertheless, even an additional twenty minutes would be hard to add to a busy practice. To be efficacious, the additional required time must yield information not currently provided by the existing protocol.
International Journal of Otolaryngology 5 One of the purposes of an audiological evaluation is to determine the factors that contribute to the patient's selfperception of hearing ability. To evaluate the effectiveness of possible factors, we used multiple linear regression modeling to predict scores on the speech subtest of the Speech, Spatial, and Qualities Hearing Scale [62]. Pure-tone thresholds, speech-in-noise perception, age, and timing measures of the cABR served as meaningful predictors. Behavioral assessments predicted 15% of the variance in the SSQ score, but adding brainstem variables (specifically the onset slope, offset latency, and overall morphology) predicted an additional 16% of the variance in the SSQ (Figure 4). Therefore, the cABR can provide the clinician with unique information about biological processing of speech [43].
The cABR is Experience Dependent
As the site of intersecting afferent and efferent pathways, the inferior colliculus plays a key role in auditory learning. Indeed, animals models have demonstrated that the corticocollicular pathway is essential for auditory learning [63,64]. Therefore, it is reasonable to expect that the cABR reflects evidence of auditory training; in fact, the cABR shows influences of both life-long and short-term training. For example, native speakers of tonal languages have better brainstem pitch tracking to changing vowel contours than speakers of nontonal languages [24]. Bilingualism provides another example of the auditory advantages conferred by language expertise. Bilingualism is associated with enhanced cognitive skills, such as language processing and executive function, and it also promotes experience-dependent plasticity in subcortical processing [65]. Bilingual adolescents, who reported high English and Spanish proficiency, had more robust subcortical encoding of the 0 to a target sound presented in a noisy background than their age-, sex-, and IQ-matched monolingual peers. Within the bilingual group, a measure of sustained attention was related to the strength of the 0 ; this relation between attention and the 0 was not seen in the monolingual group. Krizman et al. [65] proposed that diverse language experience heightens directed attention toward linguistic inputs; in turn, this attention becomes increasingly focused on features important for speaker identification and stream segregation in noise, such as the 0 .
Musicianship, another form of auditory expertise, also extends to benefits of speech processing; musicians who are nontonal language speakers have enhanced pitch tracking to linguistically relevant vowel contours, similar to that of tonal language speakers [31]. Ample evidence now exits for the effects of musical training on the cABR [28,60,[67][68][69][70][71][72][73]. The OPERA (Overlap, Precision, Emotion, Repetition, and Attention) hypothesis has been proposed as the mechanism by which music engenders auditory system plasticity [74]. For example, there is overlap in the auditory pathways for speech and music, explaining in part the musician's superior abilities for neural speech-in-noise processing. The focused attention required for musical practice and performance results in strengthened sound-to-meaning connections, enhancing top-down cognitive (e.g., auditory attention and memory) influences on subcortical processing [75]. Musicians' responses to the cABR are more resistant to the degradative effects of noise compared to nonmusicians [68,73]. Background noise delays and reduces the amplitude of the cABR [76]; however, musicianship mitigates the effects of six-talker babble noise on cABR responses in young adults, with earlier peak timing of the onset and the transition in musicians compared to nonmusicians. Bidelman and Krishnan [73] evaluated the effects of reverberation on the FFR and found that reverberation had no effect on the neural encoding of pitch but significantly degraded the representation of the harmonics. In addition, they found that young musicians had more robust responses in quiet and in most reverberation conditions. Benefits of musicianship have also been seen in older adults; when comparing effects of aging in musicians and nonmusicians, the musicians did not have the expected age-related neural timing delays in the CV transition indicating that musical experience offsets the effects of aging [60]. These neural benefits in older musicians are accompanied by better SIN perception, temporal resolution, and auditory memory [77].
But, what about the rest of us who are not able to devote ourselves full time to music practice-can musical training improve our auditory processing as well? Years of musical training in childhood are associated with more robust responses in adults [67], in that young adults with zero years of musical training had responses closer to the noise floor compared to groups of adults with one to five or six to eleven years of training who had progressively larger signal-to-noise ratios. In a structural equation model of the factors predicting speech-in-noise perception in older adults, two subsets were compared-a group who had no history of musical training and another group who had at least one year of musical training (range 1 year to 45 years). Cognitive factors (memory and attention) played a bigger role in speechin-noise perception in the group with musical training, but life experience factors (physical activity and socioeconomic status) played a bigger role in the group with no experience. Subcortical processing (pitch encoding, harmonic encoding, and cross-correlations between responses in quiet and noise) accounted for a substantial amount of the variance in both groups [78]. Short-term training can also engender subcortical plasticity. Carcagno and Plack [79] found changes in the FFR after ten sessions of pitch discrimination training that took place over the course of approximately four weeks. Four groups participated in the experiment: three experimental groups (static tone, rising tone, and falling tone) and one control group. Perceptual learning occurred for the three experimental groups, with effects somewhat specific to the stimulus used in training. These behavioral improvements were accompanied by changes in the FFR, with stronger phase locking to the 0 of the stimulus, and changes in phase locking were related to changes in behavioral thresholds.
Just as long-term exposure to tonal language leads to better pitch tracking to changing vowel contours, just eight days of vocabulary training on words with linguistically relevant contours resulted in stronger encoding of the 0 and decreases in the number of pitch-tracking errors [29]. The participants in this study were young adults with no prior exposure to a tonal language. Although the English language uses rising and falling pitch to signal intonation, the use of dipping tone would be unfamiliar to a native English speaker, and, interestingly, the cABR to the dipping tone showed the greatest reduction in pitch-tracking errors.
Training that targets speech-in-noise perception has also shown benefits at the level of the brainstem [80]. Young adults were trained to discriminate between CV syllables embedded in a continuous broad-band noise at a +10 dB signal-tonoise ratio. Activation of the medial olivocochlear bundle (MOCB) was monitored during the five days of training through the use of contralateral suppression of evoked otoacoustic emissions. Training improved performance on the CV discrimination task, with the greatest improvement occurring over the first three training days. A significant increase in MOCB activation was found, but only in the participants who showed robust improvement (learners). The learners showed much weaker suppression than the nonlearners on the first day; in fact, the level of MOCB activation was predictive of learning. This last finding would be particularly important for clinical purposes-a measure predicting benefit would be useful for determining treatment candidacy.
There is renewed clinical interest in auditory training for the management of adults with hearing loss. Historically, attempts at auditory training had somewhat limited success, partly due to constraints on the clinician's ability to produce perceptually salient training stimuli. With the advent of computer technology and consumer-friendly software, auditory training has been revisited. Computer technology permits adaptive expansion and contraction of difficult-toperceive contrasts and/or unfavorable signal-to-noise ratios. The Listening and Communication Enhancement program (LACE, Neurotone, Inc., Redwood City, CA) is an example of an adaptive auditory training program that employs topdown and bottom-up strategies to improve hearing in noise. Older adults with hearing loss who underwent LACE training scored better on the Quick Speech in Noise test (QuickSIN) [81] and the hearing-in-noise test (HINT) [82]; they also reported better hearing on self-assessment measures-the Hearing Handicap Inventory for the Elderly/Adults [83] and the Client Oriented Scale of Improvement [84,85]. The control group did not show improvement on these measures.
The benefits on the HINT and QuickSIN were replicated in young adults by Song et al. [66]. After completing 20 hours of LACE training over a period of four weeks, the participants improved not only on speech-in-noise performance but also had more robust speech-in-noise representation in the cABR ( Figure 5). They had training-related increases in the subcortical representation of the 0 in response to speech sounds presented in noise but not in quiet. Importantly, the International Journal of Otolaryngology amplitude of the 0 at pretest predicted training-induced change in speech-in-noise perception. The advantages of computer-based auditory training for improved speech-innoise perception and neural processing have also been observed in older adults [86]. Based on this evidence, the cABR may be efficacious for documenting treatment outcomes, an important component of evidence-based service.
The cABR and Hearing Aid Fitting
Any clinician who has experience with fitting hearing aids has encountered the patient who continues to report hearing difficulties, no matter which particular hearing aid or algorithm is tried. Although we have not yet obtained empirical evidence on the role of the cABR in the hearing aid fitting, we suggest that implementation of the cABR may enhance hearing aid fittings, especially in these difficult-to-fit cases. The clinician might be guided in the selection of hearing aid algorithms through knowledge of how well the brainstem encodes temporal and spectral information. For example, an individual who has impaired subcortical timing may benefit from slowly changing compression parameters in response to environmental changes.
We envision incorporating the cABR into verification of hearing aid performance. Cortical-evoked potentials have been used for verifying auditory system development after hearing aid or cochlear implant fitting in children [87][88][89].
In adults, however, no difference is noted in the cortical response between unaided and aided conditions, indicating that the cortical response may reflect signal-to-noise ratio rather than increased gain from amplification [90]. Therefore, cortical potentials may have limited utility for making direct comparisons between unaided and aided conditions in adults. We recently recorded the cABR in sound field and compared aided and unaided conditions and different algorithms in the aided condition. There is a marked difference in the amplitude of the waveform in response to an aided compared to an unaided condition. By performing stimulus-to-response correlations, it is possible to demonstrate that certain hearing aid algorithms resulted in a better representation of the stimulus than others ( Figure 6). These preliminary data demonstrate the feasibility and possibility of using this approach. Importantly, these data also demonstrate meaningful differences easily observed in an individual.
Conclusions
With improvements in digital hearing aid technology, we are able to have greater expectations for hearing aid performance than ever before, even in noisy situations [91]. These improvements, however, do not address the problems we continue to encounter in challenging hearing aid fittings that leave us at a loss for solutions. The cABR provides an opportunity to evaluate and manage an often neglected part of hearing-the central auditory system-as well as the 8 International Journal of Otolaryngology biological processing of key elements of sound. We envision future uses of the cABR to include assessment of central auditory function, prediction of treatment or hearing aid benefit, monitoring treatment or hearing aid outcomes, and assisting in hearing aid fitting. Because the cABR reflects both sensory and cognitive processes, we can begin to move beyond treating the ear to treating the person with a hearing loss. | 5,859 | 2013-01-30T00:00:00.000 | [
"Physics"
] |