A Survey on Zero Pronoun Translation Zero pronouns (ZPs) are frequently omitted in pro-drop languages (e.g. Chinese, Hungarian, and Hindi), but should be recalled in non-pro-drop languages (e.g. English). This phenomenon has been studied extensively in machine translation (MT), as it poses a significant challenge for MT systems due to the difficulty in determining the correct antecedent for the pronoun. This survey paper highlights the major works that have been undertaken in zero pronoun translation (ZPT) after the neural revolution so that researchers can recognize the current state and future directions of this field. We provide an organization of the literature based on evolution, dataset, method, and evaluation. In addition, we compare and analyze competing models and evaluation metrics on different benchmarks. We uncover a number of insightful findings such as: 1) ZPT is in line with the development trend of large language model; 2) data limitation causes learning bias in languages and domains; 3) performance improvements are often reported on single benchmarks, but advanced methods are still far from real-world use; 4) general-purpose metrics are not reliable on nuances and complexities of ZPT, emphasizing the necessity of targeted metrics; 5) apart from commonly-cited errors, ZPs will cause risks of gender bias. Introduction Pronouns play an important role in natural language, as they enable speakers to refer to people, objects, or events without repeating the nouns that represent them. Zero pronoun (ZP) 1 is a complex phenomenon that appears frequently in pronoundropping (pro-drop) languages such as Chinese, Hungarian, and Hindi. Specifically, pronouns are often omitted when they can be pragmatically * Longyue Wang and Siyou Liu contributed equally to this work. 1 ZP is also called dropped pronoun. The linguistic concept is detailed in Appendix §A.3. or grammatically inferable from intra-and intersentential contexts (Li and Thomson, 1979). Since recovery of such ZPs generally fails, this poses difficulties for several generation tasks, including dialogue modelling (Su et al., 2019), question answering (Tan et al., 2021), and machine translation (Wang, 2019). When translating texts from pro-drop to non-prodrop languages (e.g. Chinese⇒English), this phenomenon leads to serious problems for translation models in terms of: 1) completeness, since translation of such invisible pronouns cannot be normally reproduced; 2) correctness, because understanding the semantics of a source sentence needs to identifying and resolving the pronominal reference. Figure 1 shows ZP examples in three typological patterns determined by language family (detailed in Appendix §A.1). Taking a full-drop language for instance, the first-person subject and third-person object pronouns are omitted in Hindi input while these pronouns are all compulsory in English translation. This is not a problem for human beings since we can easily recall these missing pronoun from the context. However, even a real-life MT system still fails to accurately translate ZPs. In response to this problem, zero pronoun translation (ZPT) has been studied extensively in the MT community on three significant challenges: • Dataset: there is limited availability of ZPannotated parallel data, making it difficult to develop systems that can handle ZP complexities. 
• Approach: due to their ability to capture semantic information with distributed representations, the representations of NMT should ideally embed ZP information by learning the alignments between bilingual pronouns from the training corpus. In practice, however, NMT models only manage to successfully translate some simple ZPs, and still fail when translating complex ones (e.g. subject vs. object ZPs). • Evaluation: general evaluation metrics for MT are not sensitive enough to capture translation errors caused by ZPs.
(Figure 1: An overview of pro-drop languages by typological pattern and language family, with examples of the ZP phenomenon in other languages (i.e. Korean, Hungarian and Hindi). Words in brackets are pronouns that are invisible in the source language (implicit and explicit); the underlined words are the corresponding antecedents. "EN" represents the human translation into English, a non-pro-drop language; "OT" is the output of a SOTA NMT system, with inappropriate translations.)
We believe that it is the right time to take stock of what has been achieved in ZPT, so that researchers can get a bigger picture of where this line of research stands. In this paper, we present a survey of the major works on datasets, approaches and evaluation metrics that have been undertaken in ZPT. We first introduce the background of the linguistic phenomenon and our literature selection in Section 2. Section 3 discusses the evolution of ZP-related tasks. Section 4 summarizes the annotated datasets, which are essential for pushing the field forward. Furthermore, we investigate advanced approaches for improving ZPT models in Section 5. In addition, Section 6 covers the evaluation methods that have been introduced to account for improvements in this field. We conclude by presenting avenues for future research in Section 7. Linguistic Phenomenon Definition of Zero Pronoun Cohesion is a significant property of discourse, and it occurs whenever "the interpretation of some element in the discourse is dependent on that of another" (Halliday and Hasan, 1976). As one of these cohesive devices, anaphora is the use of an expression whose interpretation depends specifically upon an antecedent expression, while zero anaphora is a more complex scenario in pro-drop languages. A ZP is a gap in a sentence that refers to an entity which supplies the necessary information for interpreting the gap (Zhao and Ng, 2007). ZPs can be categorized into anaphoric and non-anaphoric ZPs according to whether or not they refer to an antecedent. In pro-drop languages such as Chinese and Japanese, ZPs occur much more frequently than in non-pro-drop languages such as English. The ZP phenomenon can be considered one of the most difficult problems in natural language processing (Peral and Ferrández, 2003). Extent of Zero Pronoun To investigate the extent of pronoun-dropping, we quantitatively analyzed ZPs in two corpora; details are shown in Appendix §A.2. We found that the frequencies and types of ZPs vary across genres: (1) 26% of Chinese pronouns were dropped in the dialogue domain, while 7% were dropped in the newswire domain; (2) the most frequent ZP in newswire text is the third person singular 它 ("it") (Baran et al., 2012), while in SMS dialogues it is the first person 我 ("I") and 我们 ("we") (Rao et al., 2015). This may lead to differences in model behavior and quality across domains.
This high proportion within informal genres such as dialogues and conversation shows the importance of addressing the challenge of translation of ZPs. Literature Selection We used the following methodology to provide a comprehensive and unbiased overview of the current state of the art, while minimizing the risk of omitting key references: • Search Strategy: We conducted a systematic search in major databases (e.g. Google Scholar) to identify the relevant articles and resources. Our search terms included combinations of keywords, such as "zero pronouns," "zero pronoun translation," and "coreference resolution." • Selection Criteria: To maintain the focus and quality of our review, we established the following criteria. (1) Inclusion, where articles are published in journals, conferences and workshop proceedings. (2) Exclusion, where articles that are not available in English or do not provide sufficient details to assess the validity of their results. • Screening and Selection: First, we screened the titles and abstracts based on our Selection Criteria. Then, we assessed the full texts of the remaining articles for eligibility. We also checked the reference lists of relevant articles to identify any additional sources that may have been missed during the initial search. • Data Extraction and Synthesis: We extracted key information from the selected articles, such as dataset characteristics, and main findings. This data was synthesized and organized to provide a comprehensive analysis of the current state of the art in ZPT. Evolution of Zero Pronoun Modelling Considering the evolution of ZP modelling, we cannot avoid discussing other related tasks. Thus, we first review three typical ZP tasks and conclude their essential relations and future trends. Overview ZP resolution is the earliest task to handle the understanding problem of ZP (Zhao and Ng, 2007). ZP recovery and translation aim to directly generate ZPs in monolingual and crosslingual scenarios, respectively (Yang and Xue, 2010;Chung and Gildea, 2010). This is illustrated in Figure 2. Zero Pronoun Resolution The task contains three steps: ZP detection, anaphoricity determination and reference linking. Earlier works investigated rich features using traditional ML models (Zhao and Ng, 2007;Kong and Zhou, 2010;Chen andNg, 2013, 2015). Recent studies exploited neural models to achieve the better performance (Chen and Ng, 2016;Yin et al., 2018;Song et al., 2020). The CoNLL2011 and CoNLL2012 2 are commonlyused benchmarks on modeling unrestricted coreference. The corpus contains 144K coreference instances, but dropped subjects only occupy 15%. Zero Pronoun Recovery Given a source sentence, this aims to insert omitted pronouns in proper positions without changing the original meaning (Yang and Xue, 2010;Yang et al., 2015Yang et al., , 2019a. It is different from ZP resolution, which identifies the antecedent of a referential pronoun (Mitkov, 2014). Previous studies regarded ZP recovery as a classification or sequence labelling problem, which only achieve 40∼60% F1 scores on closed datasets Song et al., 2020), indicating the difficulty of generating ZPs. It is worth noting that ZP recovery models can work for ZPT task in a pipeline manner: input sentences are labeled with ZPs using an external recovery system and then fed into a standard MT model (Chung and Gildea, 2010;Wang et al., 2016a). Zero Pronoun Translation When pronouns are omitted in a source sentence, ZPT aims to generate ZPs in its target translation. 
Early studies investigated a number of approaches for SMT models (Chung and Gildea, 2010; Le Nagard and Koehn, 2010; Taira et al., 2012; Xiang et al., 2013; Wang et al., 2016a). Recent years have seen a surge of interest in NMT (Wang et al., 2018a), since the problem still exists in advanced NMT systems. ZPT is also related to pronoun translation, which aims to correctly translate explicit pronouns in terms of gender (feminine vs. masculine). The DiscoMT 3 is a commonly-cited benchmark on pronoun translation; however, there has been no standard ZPT benchmark until now. Discussions and Findings By comparing different ZP-aware tasks, we found three future trends: 1. From Intermediate to End. In real-life systems, ZP resolution and recovery are intermediate tasks, while ZPT is directly reflected in system output. ZP resolution and recovery will be replaced by ZPT, although they currently work with some MT systems in a pipeline way. Overview Modeling ZPs has so far not been extensively explored in prior research, largely due to the lack of publicly available datasets. Existing works mostly focused on human-annotated, small-scale and single-domain corpora such as OntoNotes (Pradhan et al., 2012; Aloraini and Poesio, 2020) and Treebanks (Yang and Xue, 2010; Chung and Gildea, 2010). We summarize representative corpora as follows: • OntoNotes. 5 This is annotated with structural information (e.g. syntax and predicate-argument structure) and shallow semantics (e.g. word sense linked to an ontology, and coreference). It comprises various genres of text (news, conversational telephone speech, weblogs, usenet newsgroups, broadcast, talk shows) in English, Chinese, and Arabic. ZP sentences are extracted for the ZP resolution task (Chen and Ng, 2013, 2016). • TVSub. 6 This extracts Chinese-English subtitles from television episodes. Its source-side sentences are automatically annotated with ZPs by a heuristic algorithm (Wang et al., 2016a), and it has generally been used to study dialogue translation and the zero anaphora phenomenon (Wang et al., 2018a; Tan et al., 2021). • CTB. 7 This is a part-of-speech tagged and fully bracketed Chinese language corpus. The texts are extracted from various domains including newswire, government documents, magazine articles, broadcast news and broadcast conversation programs, web newsgroups and weblogs. Instances with empty categories are extracted for the ZP recovery task (Yang and Xue, 2010; Chung and Gildea, 2010). • BaiduKnows. The source-side sentences are collected from the Baidu Knows website, 8 and are annotated with ZP labels and boundary tags. It is widely used for the ZP recovery task (Song et al., 2020).
Dataset | Lang. | Anno. | Domain | Size | Reso. | Reco. | Trans.
CTB (Yang and Xue, 2010) | ZH | Human | News | 10.6K | ✗ | ✓ | ✗
KTB (Chung and Gildea, 2010) | KO | Human | News | 5.0K | ✗ | ✓ | ✗
BaiduKnows | ZH | Human | Baidu Knows | 5.0K | ✗ | ✓ | ✗
TVsub (Wang et al., 2018a) | ZH, EN | Auto | Movie Subtitles | 2.2M | ✗ | ✗ | ✓
ZAC (Pereira, 2009) | PT | Human | Mixed Sources | 0.6K | ✓ | ✗ | ✗
Nagoya (Zhan and Nakaiwa, 2015) | JA | Auto | Scientific Paper | 1.2K | ✓ | ✗ | ✗
SKKU (Park et al., 2015) | KO | Human | Dialogue | 1.1K | ✓ | ✗ | ✗
UPENN (Prasad, 2000) | HI | Human | News | 2.2K | ✓ | ✗ | ✗
LATL (Russo et al., 2012) | IT, ES | Human | Europarl | 2.0K | ✓ | ✗ | ✓
UCFV (Bacolini, 2017) | HE | Human | Dialogue | 0.1K | ✓ | ✗ | ✗
Table 1: A summary of existing datasets regarding ZP, classified by language (Lang.), annotation type (Anno.) and text domain, with the number of sentences (Size). "Reso.", "Reco." and "Trans." indicate whether a dataset can be used for the ZP resolution, recovery and translation tasks, respectively (✓ = yes, ✗ = no).
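A typical heuristic for the kind of automatic annotation used to build TVsub is alignment-driven: pronouns on the non-pro-drop (English) side that have no counterpart on the pro-drop (Chinese) side are projected back as ZP placeholders. The Python sketch below illustrates that general idea only; it is not the exact heuristic of Wang et al. (2016a), and the alignment format and the `<ZP:...>` placeholder convention are assumptions made for illustration.

```python
# Minimal sketch: project unaligned English pronouns back into a Chinese sentence
# as ZP placeholders. Word alignments are assumed to be (zh_index, en_index) pairs,
# e.g. produced by an external aligner; the <ZP:...> tag format is an illustrative convention.
EN_PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
               "me", "him", "her", "us", "them"}

def annotate_zp(zh_tokens, en_tokens, align):
    """Return zh_tokens with a <ZP:pronoun> placeholder for each unaligned English pronoun."""
    aligned_en = {j for _, j in align}            # English positions that have a Chinese counterpart
    zh_pos_for_en = {}
    for i, j in align:                            # first Chinese position aligned to each English word
        zh_pos_for_en.setdefault(j, i)

    out = list(zh_tokens)
    inserted = 0
    for j, tok in enumerate(en_tokens):
        if tok.lower() in EN_PRONOUNS and j not in aligned_en:
            # Insert before the Chinese token aligned to the next aligned English word,
            # or at the end of the sentence if there is none.
            nxt = next((zh_pos_for_en[k] for k in range(j + 1, len(en_tokens))
                        if k in zh_pos_for_en), len(zh_tokens))
            out.insert(nxt + inserted, f"<ZP:{tok.lower()}>")
            inserted += 1
    return out

# Toy example: the dropped subject "I" in "(I) bought a book".
zh = ["买", "了", "一本", "书"]
en = ["I", "bought", "a", "book"]
align = [(0, 1), (2, 2), (3, 3)]                  # 买-bought, 一本-a, 书-book
print(annotate_zp(zh, en, align))                 # ['<ZP:i>', '买', '了', '一本', '书']
```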
Discussions and Findings 2000; Bacolini, 2017). 2. Domain Bias. Most corpora were established in one single domain (e.g. news), which may not contain rich ZP phenomena. Because the frequencies and types of ZPs vary in different genres (Yang et al., 2015). Future works need more multi-domain datasets to better model behavior and quality for real-life use. Become An Independent Research Problem. Early works extracted ZP information from closed annotations (e.g. OntoNotes and Treebanks) (Yang and Xue, 2010;Chung and Gildea, 2010), which were considered as a sub-problem of coreference or syntactic parsing. With further investigation on the problem, MT community payed more attention to it by manually or automatically constructing ZP recovery and translation datasets (e.g. BaiduKnows and TVsub) (Wang et al., 2018a;. 4. Coping with Data Scarcity. The scarcity of ZPT data remains a core issue (currently only 2.2M ∼ 0.1K sentences) due to two challenges: (1) it requires experts for both source ZP annotation and target translation (Wang et al., 2016c(Wang et al., , 2018a; (2) annotating the training data manually spends much time and money. Nonetheless, it is still necessary to establish testing datasets for validating/analyzing the model performance. Besides, pre-trained modes are already equipped with some capabilities on discourse (Chen et al., 2019;Koto et al., 2021). This highlights the importance of formulating the downstream task in a manner that can effectively leverage the capabilities of the pre-trained models. Overview Early researchers have investigated several approaches for conventional statistical machine translation (SMT) (Le Nagard and Koehn, 2010; Xiang et al., 2013;Wang et al., 2016a). Modeling ZPs for advanced NMT models, however, has received more attention, resulting in better performance in this field (Wang et al., 2018a;Tan et al., 2021;Hwang et al., 2021). Generally prior works fall into three categories: (1) Pipeline, where input sentences are labeled with ZPs using an external ZP recovery system and then fed into a standard MT model (Chung and Gildea, 2010;Wang et al., 2016a); (2) Implicit, where ZP phenomenon is implicitly resolved by modelling document-level contexts Ri et al., 2021); (3) Endto-End, where ZP prediction and translation are jointly learned in an end-to-end manner Tan et al., 2021). Pipeline The pipeline method of ZPT borrows from that in pronoun translation (Le Nagard and Koehn, 2010;Pradhan et al., 2012) due to the strong relevance between the two tasks. Chung and Gildea (2010) systematically examine the effects of empty category (EC) 9 on SMT with pattern-, CRF-and parsing-based methods. The results show that this can really improve the translation quality, even though the automatic prediction of EC is not highly accurate. Besides, Wang et al. (2016aWang et al. ( ,b, 2017b proposed to integrate neural-based ZP recovery with SMT systems, showing better performance on both ZP recovery and overall translation. When entering the era of NMT, ZP recovery is also employed as an external system. Assuming that no-pro-drop languages can benefit pro-drop ones, Ohtani et al. (2019) tagged the coreference information in the source language, and then encoded it using a graph-based encoder integrated with NMT model. Tan et al. (2019) recovered ZP in the source sentence via a BiLSTM-CRF model (Lample et al., 2016). Different from the conventional ZP recovery methods, the label is the corresponding translation of ZP around with special tokens. 
They then trained a NMT model on this modified data, letting the model learn the copy behaviors. Tan et al. (2021) used ZP detector to predict the ZP position and inserted a special token. Second, they used a attention-based ZP recovery model to recover the ZP word on the corresponding ZP position. End-to-End Due the lack of training data on ZPT, a couple of studies pay attention to data augmentation. Sugiyama and Yoshinaga (2019) employed the back-translation on a context-aware NMT model to augment the training data. With the help of context, the pronoun in no-pronoun-drop language can be translated correctly into pronoundrop language. They also build a contrastive dataset to filter the pseudo data. Besides, Kimura et al. (2019) investigated the selective standards in detail to filter the pseudo data. Ri et al. (2021) deleted the personal pronoun in the sentence to augment the training data. And they trained a classifier to keep the sentences that pronouns can be recovered without any context. About model architecture, Wang et al. (2018a) first proposed a reconstruction-based approach to reconstruct the ZP-annotated source sentence from the hidden states of either encoder or decoder, or both. The central idea behind is to guide the corresponding hidden states to embed the recalled source-side ZP information and subsequently to help the NMT model generate the missing pronouns with these enhanced hidden representations. Although this model achieved significant improvements, there nonetheless exist two drawbacks: 1) there is no interaction between the two separate reconstructors, which misses the opportunity to exploit useful relations between encoder and decoder representations; and 2) testing phase needs an external ZP prediction model and it only has an accuracy of 66% in F1-score, which propagates numerous errors to the translation model. Thus, Wang et al. (2018b) further proposed to improve the reconstruction-based model by using shared reconstructor and joint learning. Furthermore, relying on external ZP models in decoding makes these approaches unwieldy in practice, due to introducing more computation cost and complexity. About learning objective, contrastive learning is often used to let the output more close to golden data while far away from negative samples. Yang et al. (2019b) proposed a contrastive learning to reduce the word omitted error. To construct the negative samples, they randomly dropped the word by considering its frequency or part-of-speech tag. Hwang et al. (2021) further considered the coreference information to construct the negative sample. According to the coreference information, they took place the antecedent in context with empty, mask or random token to get the negative samples. Besides, Jwalapuram et al. (2020) served the pronoun mistranslated output as the negative samples while golden sentences as positive sample. To get the negative samples, they aligned the word between model outputs and golden references to get the sentences with mistranslated pronoun. Implicit Some works consider not just the ZPT issue but rather focus on the overall discourse problem. The document-level NMT models (Wang et al., 2017a;Werlen et al., 2018;Ma et al., 2020;Lopes et al., 2020) are expected to have strong capabilities in discourse modelling such as translation consistency and ZPT. 
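To make the negative-sample construction described above concrete, the sketch below corrupts a reference translation by deleting pronoun tokens, so that a contrastive objective can push the model away from pronoun-dropping outputs. It is a simplified stand-in for the frequency/POS-based sampling of Yang et al. (2019b) and the coreference-based substitutions of Hwang et al. (2021); the pronoun list and sampling policy are illustrative assumptions.

```python
import random

EN_PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
               "me", "him", "her", "us", "them", "his", "its", "their"}

def make_negatives(reference, n=3, drop_prob=0.7, seed=0):
    """Create contrastive negatives by deleting pronoun tokens from the reference.

    Each negative differs from the positive (the reference itself) only in omitted
    pronouns, so a contrastive loss can penalize pronoun-dropping outputs."""
    rng = random.Random(seed)
    tokens = reference.split()
    pronoun_idx = {i for i, t in enumerate(tokens) if t.lower().strip(".,!?") in EN_PRONOUNS}
    negatives = []
    for _ in range(n):
        kept = [t for i, t in enumerate(tokens)
                if i not in pronoun_idx or rng.random() > drop_prob]
        if kept != tokens:                        # keep only genuinely corrupted samples
            negatives.append(" ".join(kept))
    return negatives

print(make_negatives("I told him that we would finish it today", n=2))
```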
Another method is the round-trip translation, which is commonly-used in automatic post-editing (APE) (Freitag et al., 2019), quality estimation (QE) (Moon et al., 2020) to correct of detect the translation errors. Voita et al. (2019) served this idea on context-aware NMT to correct the discourse error in the output. They employed the round-trip translation on monolingual data to get the parallel corpus in the target language. They then used the corpus to train a model to repair discourse phenomenon in MT output. proposed a fully unified ZPT model, which absolutely released the reliance on external ZP models at decoding time. Besides, they exploited to jointly learn inter-sentential con- Table 2: A comparison of representative ZPT methods with different benchmarks. The ZPT methods are detailed in Section 5.1. The Baseline is a standard Transformer-big model while ORACLE is manually recovering ZPs in input sentences and then feeding them into the Baseline (Wu et al., 2020). As detailed in Section 4.1, TVSub (both translation and ZP training data) and BaiduKnows (ZP training data) are widely-used benchmarks in movie subtitle and Q&A forum domains, respectively. The Webnovel is our in-house testing data (no training data) in web fiction domain. As detailed in Section 6.1, BLEU is a general-purpose evaluation metric while APT is a ZP-targeted one. text (Sordoni et al., 2015) to further improve ZP prediction and translation. Table 1 shows that only the TVsub is suitable for both training and testing in ZPT task, while others like LATL is too small and only suitable for testing. To facilitate fair and comprehensive comparisons of different models across different benchmarkss, we expanded the BaiduKnows by adding human translations and included in-house dataset 10 . As shown in Table 2, we re-implemented three representative ZPT methods and conducted experiments on three benchmarks, which are diverse in terms of domain, size, annotation type, and task. As the training data in three benchmarks decrease, the difficulty of modelling ZPT gradually increases. Existing Methods Can Help ZPT But Not Enough. Three ZPT models can improve ZP translation in most cases, although there are still considerable differences among different domain of benchmarks (BLEU and APT ↑). Introducing ZPT methods has little impact on BLEU score (-0.4∼+0.6 point on average), however, they can improve APT over baseline by +1.1∼+30.1. When integrating golden ZP labels into baseline models (ORACLE), their BLEU and APT scores largely increased by +3.4 and +63.4 points, respectively. The performance gap between Oracle and others shows that there is still a large space for further improvement for ZPT. 10 The Webnovel testing dataset contains 1,658 Chinese-English sentence pairs in 24 documents, with the target side translated by professional human translators. Pipeline Methods Are Easier to Integrate with NMT. This is currently a simple way to enhance ZPT ability in real-life systems. As shown in Table 3, we analyzed the outputs of pipeline method and identify challenges from three perspectives: (1) out-of-domain, where it lacks in-domain data for training robust ZP recovery models. The distribution of ZP types is quite different between ZP recovery training data (out-of-domain) and ZPT testset (in-domain). This leads to that the ZP recovery model often predicts wrong ZP forms (possessive adjective vs. subject). (2) error propagation, where the external ZP recovery model may provide incorrect ZP words to the followed NMT model. 
As seen, ZPR+ performs worse than a plain NMT model NMT due to wrong pronouns predicted by the ZPR model (你们 vs. 我). (3) multiple ZPs, where there is a 10% percentage of sentences that contain more than two ZPs, resulting in more challenges to accurately and simultaneously predict them. As seen, two ZPs are incorrectly predicted into "我" instead of "他". Data-Level Methods Do Not Change Model Architecture. This is more friendly to NMT. Some researchers targeted making better usage of the limited training data (Tan et al., 2019;Ohtani et al., 2019;Tan et al., 2021). They trained an external model on the ZP data to recover the ZP information in the input sequence of the MT model (Tan et al., 2019;Ohtani et al., 2019;Tan et al., 2021) or correct the errors in the translation outputs (Voita et al., 2019). Others aimed to up-sample the training data for the ZPT task (Sugiyama and Yoshinaga, 2019;Kimura et al., 2019;Ri et al., 2021). They preferred to improve the ZPT performance via a data augmentation without modifying the MT architecture (Wang et al., 2016a;Sugiyama and Yoshinaga, 2019). Kimura et al. (2019); Ri et al. (2021) verified that the performance can be further improved by denoising the pseudo data. 4. Multitask and Multi-Lingual Learning. ZPT is a hard task to be done alone, researchers are investigating how to leverage other related NLP tasks to improve ZPT by training models to perform multiple tasks simultaneously (Wang et al., 2018a). Since ZPT is a cross-lingual problem, researchers are exploring techniques for training models that can work across multiple languages, rather than being limited to a single language (Aloraini and Poesio, 2020). 6 Evaluation Methods Overview There are three kinds of automatic metrics to evaluate performances of related models: • Accuracy of ZP Recovery: this aims to measure model performance on detecting and predicting ZPs of sentences in one pro-drop language. For instance, the micro F1-score is used to evaluating Chinese ZPR systems Song et al. (2020). 11 • General Translation Quality: there are a number of automatic evaluation metrics for measuring general performance of MT systems (Snover Table 4: Correlation between the manual evaluation and other automatic metrics, which are applied on different ZPT benchmarks, which are same as in Table 2. et al., 2006). BLEU (Papineni et al., 2002) is the most widely-used one, which measures the precision of n-grams of the MT output compared to the reference, weighted by a brevity penalty to punish overly short translations. ME-TEOR (Banerjee and Lavie, 2005) incorporates semantic information by calculating either exact match, stem match, or synonymy match. Furthermore, COMET (Rei et al., 2020) is a neural framework for training multilingual MT evaluation models which obtains new SOTA levels of correlation with human judgements. • Pronoun-Aware Translation Quality: Previous works usually evaluate ZPT using the BLEU metric (Wang et al., 2016a(Wang et al., , 2018aRi et al., 2021), however, general-purpose metrics cannot characterize the performance of ZP translation. As shown in Table 3, the missed or incorrect pronouns may not affect BLEU scores but severely harm true performances. To fix this gap, some works proposed pronoun-targeted evaluation metrics (Werlen and Popescu-Belis, 2017; Läubli et al., 2018). Discussions and Findings As shown in Table 4, we compare different evaluation metrics on ZPT systems. About generalpurpose metrics, we employed BLEU, TER, ME-TEOR and COMET. 
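To make the BLEU computation referenced above concrete, the toy sketch below implements clipped (modified) n-gram precision combined with a brevity penalty at the corpus level. It is an unsmoothed, single-reference illustration only; real evaluations should use an established implementation such as sacrebleu.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Toy corpus BLEU: geometric mean of clipped n-gram precisions times a brevity penalty.
    One reference per hypothesis, no smoothing."""
    match, total = [0] * max_n, [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_ng, r_ng = ngrams(h, n), ngrams(r, n)
            match[n - 1] += sum(min(c, r_ng[g]) for g, c in h_ng.items())  # clipped counts
            total[n - 1] += sum(h_ng.values())
    if 0 in total or 0 in match:
        return 0.0                                   # undefined without smoothing
    log_precision = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    brevity_penalty = 1.0 if hyp_len >= ref_len else math.exp(1 - ref_len / hyp_len)
    return brevity_penalty * math.exp(log_precision)

hyps = ["the cat sat on the mat"]
refs = ["the cat sat on the mat today"]
print(round(corpus_bleu(hyps, refs), 3))             # ~0.846: perfect n-grams, short hypothesis
```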
About ZP-targeted metrics, we implemented and adapted APT (Werlen and Popescu-Belis, 2017) to evaluate ZPs, and experimented on three Chinese-English benchmarks (same as Section 5.2). For human evaluation, we randomly select a hundred groups of samples from each dataset, each group contains an oracle source sentence and the hypotheses from six examined MT systems. We asked expert raters to score all of these samples in 1 to 5 scores to reflect the cohesion quality of translations (detailed in Appendix 3332 §A.4). The professional annotators are bilingual professionals with expertise in both Chinese and English. They have a deep understanding of the ZP problem and have been specifically trained to identify and annotate ZPs accurately. Our main findings are: 1. General-Purpose Evaluation Are Not Applicable to ZPT. As seen, APT reaches around 0.67 Pearson scores with human judges, while generalpurpose metrics reach 0.47∼23. The APT shows a high correlation with human judges on three benchmarks, indicating that (1) general-purpose metrics are not specifically designed to measure performance on ZPT; (2) researchers need to develop more targeted evaluation metrics that are better suited to this task. 2. Human Evaluations Are Required as A Complement. Even we use targeted evaluation, some nuances and complexities remain unrecognized by automatic methods. Thus, we call upon the research community to employ human evaluation according to WMT (Kocmi et al., 2022) especially in chat and literary shared tasks (Farinha et al., 2022;Wang et al., 2023c). 3. The Risk of Gender Bias. The gender bias refers to the tendency of MT systems to produce output that reflects societal stereotypes or biases related to gender (Vanmassenhove et al., 2019). We found gender errors in ZPT outputs, when models make errors in identifying the antecedent of a ZP. This can be caused by the biases present in the training data, as well as the limitations in the models and the evaluation metrics. Therefore, researchers need to pay more attention to mitigate these biases, such as using diverse data sets and debiasing techniques, to improve the accuracy and fairness of ZPT methods. Conclusion and Future Work ZPT is a challenging and interesting task, which needs abilities of models on discourse-aware understanding and generation. Figure 3 best illustrates the increase in scientific publications related to ZP over the past few years. This paper is a literature review of existing research on zero pronoun translation, providing insights into the challenges and opportunities of this area and proposing potential directions for future research. As we look to the future, we intend to delve deeper into the challenges of ZPT. Our plan is to leverage large language models, which have shown great potential in dealing with complex tasks, to tackle this particular challenge (Lu et al., 2023;Wang et al., 2023b;Lyu et al., 2023). Moreover, we plan to evaluate our approach on more discourseaware tasks. Specifically, we aim to utilize the GuoFeng Benchmark (Wang et al., 2022(Wang et al., , 2023a, which presents a comprehensive testing ground for evaluating the performance of models on a variety of discourse-level translation tasks. By doing so, we hope to gain more insights into the strengths and weaknesses of our approach, and continually refine it to achieve better performance. Limitations We list the main limitations of this work as follows: 1. 
Zero Pronoun in Different Languages: The zero pronoun phenomenon may vary across languages in terms of word form, occurrence frequency and category distribution etc. Due to page limitation, some examples are mainly discussed in Chinese and/or English. However, most results and findings can be applied to other pro-drop languages, which is further supported by other works (Ri et al., 2021;Aloraini and Poesio, 2020;Vincent et al., 2022). In Appendix §A.1, we add details on the phenomenon in various pro-drop languages such as Arabic, Swahili, Portuguese, Hindi, and Japanese. 2. More Details on Datasets and Methods: We have no space to give more details on datasets and models. We will use a Github repository to release all mentioned datasets, code, and models, which can improve the reproducibility of this research direction. Ethics Statement We take ethical considerations very seriously, and strictly adhere to the ACL Ethics Policy. In this paper, we present a survey of the major works on datasets, approaches and evaluation metrics that have been undertaken in ZPT. Resources and methods used in this paper are publicly available and have been widely adopted by researches of machine translation. We ensure that the findings and conclusions of this paper are reported accurately and objectively. A.1 Zero Pronoun in Different Languages The pronoun-dropping conditions vary from language to language, and can be quite intricate. Previous works define these typological patterns as pro-drop that can be subcategorized into three categories (as shown in Figure 1): • Topic Pro-drop Language allows referential pronouns to be omitted, or be phonologically null. Such dropped pronouns can be inferred from previous discourse, from the context of the conversation, or generally shared knowledge. • Partial Pro-drop Language allows for the deletion of the subject pronoun. Such missing pronoun is not inferred strictly from pragmatics, but partially indicated by the morphology of the verb. • Full Pro-drop Language has rich subject agreement morphology where subjects are freely dropped under the appropriate discourse conditions. A.2 Analysis of Zero Pronoun As shown in Table 5, 26% of Chinese pronouns were dropped in the dialogue domain, while 7% were dropped in the newswire domain. ZPs in formal text genres (e.g. newswire) are not as common as those in informal genres (e.g. dialogue), and the most frequently dropped pronouns in Chinese newswire is the third person singular 它 ("it") (Baran et al., 2012), which may not be crucial to translation performance. A.3 The Linguistic Concept Zero anaphora is the use of an expression whose interpretation depends specifically upon antecedent expression. The anaphoric (referring) term is called an anaphor. Sometimes anaphor may rely on the postcedent expression, and this phenomenon is called cataphora. Zero Anaphora (pronoundropping) is a more complex case of anaphora. In pro-drop languages such as Chinese and Japanese, pronouns can be omitted to make the sentence compact yet comprehensible when the identity of the pronouns can be inferred from the context. These omissions may not be problems for our humans since we can easily recall the missing pronouns from the context. A.4 Human Evaluation Guideline We carefully design an evaluation protocol according to error types made by various NMT systems, which can be grouped into five categories: 1) The translation can not preserve the original semantics due to misunderstanding the anaphora of ZPs. 
Furthermore, the structure of the translation is inappropriate or grammatically incorrect due to incorrect or missing ZPs; 2) The sentence structure is correct, but the translation cannot preserve the original semantics due to misunderstanding the anaphora of ZPs; 3) The translation preserves the original semantics, but its structure is inappropriately generated or grammatically incorrect due to missing ZPs; 4) A source ZP is incorrectly translated or left untranslated, but the translation still reflects the meaning of the source; 5) The translation preserves the meaning of the source and all ZPs are translated. Finally, we average the scores of all target sentences that contain ZPs to obtain the final score of our human evaluation. For the human evaluation, we randomly selected one hundred groups of samples from each domain; each group contains an oracle source sentence and the hypotheses from six examined MT systems. Following this protocol, we asked expert raters to score all of these samples from 1 to 5 to reflect the quality of ZP translations. For inter-annotator agreement, we simply define a score greater than 3 as a good translation and a score less than 3 as a bad translation. The annotators agreed on 91% (2,750 out of 3,000) of the samples. In total, the manual labeling took five professional annotators one month and cost US $5,000.
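Two quantities used in this protocol, the binary inter-annotator agreement (translations scored above 3 counted as good) and the Pearson correlation between an automatic metric and averaged human scores (as reported in Table 4), can be computed as in the sketch below. The score arrays are fabricated placeholders for illustration, not the study's data.

```python
import math
import statistics

def agreement_rate(scores_a, scores_b, threshold=3):
    """Fraction of samples where two raters agree on good (> threshold) vs. bad."""
    agree = sum((a > threshold) == (b > threshold) for a, b in zip(scores_a, scores_b))
    return agree / len(scores_a)

def pearson(xs, ys):
    """Pearson correlation between automatic-metric scores and human scores."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    norm = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return cov / norm

# Fabricated placeholder scores for six system outputs (not the study's data).
rater1 = [4, 2, 5, 3, 1, 4]
rater2 = [4, 3, 5, 2, 1, 3]
human = [(a + b) / 2 for a, b in zip(rater1, rater2)]          # averaged human score
metric = [0.62, 0.31, 0.80, 0.40, 0.15, 0.70]                  # e.g. an APT-style score

print(f"inter-annotator agreement = {agreement_rate(rater1, rater2):.2f}")
print(f"Pearson r (metric vs. human) = {pearson(metric, human):.2f}")
```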
Functional Evaluation of Human Bioengineered Cardiac Tissue Using iPS Cells Derived from a Patient with Lamin Variant Dilated Cardiomyopathy. Dilated cardiomyopathy (DCM) is caused by various gene variants and characterized by systolic dysfunction. Lamin variants have been reported to have a poor prognosis. Medical and device therapies are not sufficient to improve the prognosis of DCM with the lamin variants. Recently, induced pluripotent stem (iPS) cells have been used for research on genetic disorders. However, few studies have evaluated the contractile function of cardiac tissue with lamin variants. The aim of this study was to elucidate the function of cardiac cell sheet tissue derived from patients with lamin variant DCM. iPS cells were generated from a patient with lamin A/C (LMNA) -mutant DCM (LMNA p.R225X mutation). After cardiac differentiation and purification, cardiac cell sheets that were fabricated through cultivation on a temperature-responsive culture dish were transferred to the surface of the fibrin gel, and the contractile force was measured. The contractile force and maximum contraction velocity, but not the maximum relaxation velocity, were significantly decreased in cardiac cell sheet tissue with the lamin variant. A qRT-PCR analysis revealed that mRNA expression of some contractile proteins, cardiac transcription factors, Ca2+-handling genes, and ion channels were downregulated in cardiac tissue with the lamin variant.Human iPS-derived bioengineered cardiac tissue with the LMNA p.R225X mutation has the functional properties of systolic dysfunction and may be a promising tissue model for understanding the underlying mechanisms of DCM. D ilated cardiomyopathy (DCM) is a common cause of heart failure and is characterized by left ventricular dysfunction and wall thinning. Over 50 genes are related to DCM, and 20-50% of DCM cases are considered to include gene variants. 1) In a previous study, 65% of DCM cases exhibited gene variants, of which the titin (TTN) (33%) and lamin A/C (LMNA) (11%) variants were the most frequently observed. 2) Recent advances in medicine and device therapies have improved the left ventricular dysfunction and prognosis in patients with DCM. 3) Left ventricular ejection fraction (LVEF) recovery is observed in approximately 40% of patients with DCM, and the mortality of these patients tended to decrease. 3,4) How-ever, patients with the lamin variant of DCM have shown an unrecovered LVEF despite optimal medical therapy, leading to a poor prognosis. 2) Therefore, current therapeutic strategies may not be sufficiently effective for patients with lamin variants of DCM, and the development of a disease model for understanding the molecular mechanisms of lamin variant-mediated cardiac dysfunction is necessary. Recently, tissue engineering technologies have been developed and applied to regenerative therapy and tissue models; an example of which is sheet-based cell tissue engineering. The cell sheet, a scaffold-free bioengineered monolayered tissue, is harvested from a temperature- responsive culture surface that is coated with a temperatureresponsive polymer (poly N-isopropylacrylamide) by lowering the culture temperature. 5) Various types of cell sheets have been fabricated and applied for regenerative medicine in the cornea, heart, esophagus, cartilage, gingiva, ear, and lungs. 5) Cell sheets have also been used in tissue models. Furthermore, induced pluripotent stem (iPS) cell technology enables the generation of human cardiomyocytes. 
6,7) We have developed a human cardiac cell sheet tissue derived from iPS cells and contractile force measurement system to evaluate cardiac tissue function. 8) Recently, we confirmed in a human cardiac cell sheet tissue model that the relaxation function is impaired even after the full recovery of systolic function in a hypoxia/reoxygenized condition, which is a common condition in various types of heart disease. 9) Therefore, a human cardiac cell sheet tissue model may be useful for identifying disease-specific functional profiles and for understanding the underlying molecular mechanisms. In the present study, we generated iPS cells from a patient with the lamin variant of DCM and evaluated the contractile function of iPS cell-derived cardiac cell sheet tissue. Establishment of iPS cell lines: This study was performed in line with the principles of the Declaration of Helsinki and approved by the Institutional Review Boards on Human Subjects Research of Tokyo Women's Medical University, The University of Tokyo, Center for Regenerative Medicine, National Center for Child Health and Development Research Institute, and Nara Medical University. The NI2-1 cell line was established from a patient with familial DCM associated with the LMNA p.R225X mutation at Nara Medical University ( Figure 1A). We selected 1 (NI2-1) out of 4 clones from the patient with the LMNA p.R225X mutation, due to the convenience of maintaining iPS cells and efficiency of cardiac differentiation. To establish iPS cells with the lamin variant, a combination of plasmids encoding for OCT3/4, SOX2, KLF4, L-MYC, LIN28, and shRNA for TP53 was induced into peripheral blood mononuclear cells obtained from the patient as previously described. 10) The human iPS cell line 201B7 was purchased from RIKEN (Tsukuba, Japan) as a normal control. A puromycin-resistance gene under the control of the alpha-myosin heavy chain promoter and a neomycin-resistant gene under the control of the Rex-1 promoter were introduced into the iPS cells using the lentiviral vector. 7) The iPS cell line was cultured as previously reported. 11) Phase contrast images of undifferentiated iPS cells were obtained using an inverted microscope (Nikon, Tokyo). Cardiac differentiation: Cardiac differentiation of iPS cells was induced by slight modification of a previously described procedure. 12) Briefly, iPS cells were harvested from culture dishes and the aggregates were cultured in a stirred bioreactor system (Able, Tokyo). For cardiomyo-MIURA, ET AL Purification of iPS-derived cardiomyocytes: Human iPS-derived cardiomyocytes were purified as previously described. 11) On day 21, 1.5 μg/mL puromycin (Sigma-Aldrich) was added for 24 hours. On day 22, the medium was changed, and the cardiomyocytes were incubated for 24 hours in medium A. Cardiac cell sheet tissue engineering: Fibrin gel sheets were prepared as the basis of cardiac cell sheets for contractile force measurements, as previously described 8) and used for cell sheet transfer. The schematic diagram of cardiomyocyte tissue fabrication is shown in Figure 2A. Sterilized silicone frames were attached to the surface of temperature-responsive dishes (UpCell; CellSeed, Tokyo) to restrict the cell culture area to a 12 mm square. The cells were seeded and cultured on temperature-responsive dishes at 1.2 × 10 6 cells/mL ( Figure 2B) and transferred onto the fibrin gel ( Figure 2C, D) as previously described. 
8) Contractile force measurement system: The contractile force measurement device was composed of a load cell (LVS-10GA; Kyowa Electronic Instruments, Tokyo) and a culture bath made of acrylic plates ( Figure 2E). The contractile force measurement was conducted as previously described. 8) Flow cytometry: At day 23, cells were fixed with 4% paraformaldehyde for 15 minutes. The fixed cells were then preserved at 4°C in PBS. The cells were labeled with anti-cardiac troponin T rabbit polyclonal antibody (Abcam, Cambridge, UK) in PBS containing 5% FBS. As isotype controls, the cells were labeled with rabbit polyclonal IgG (Abcam). The cells were then labeled with FITCconjugated anti-rabbit antibody (Jackson Immuno Research, PA, USA) as a secondary antibody, and analyzed using a Gallios flow cytometer (Beckman Coulter, Brea, CA, USA) and Kaluza analysis software (Beckman Coulter). In this study, 50-75% of cardiac troponin T-positive samples were used for contractile force measurement and qRT-PCR analysis. Quantitative reverse transcription polymerase chain reaction (qRT-PCR): After the contractile measurement, the cardiac tissue was preserved with Buffer RLT (QIAGEN, Hilden, Germany) at -80°C. RNA extraction was performed using the RNeasy Micro Kit (QIAGEN) according to the manufacturer's protocol. cDNA was synthesized using a high-capacity cDNA reverse transcription kit (Thermo Fisher Scientific, Rockford, IL, USA) with random hexamer primers. Real-time PCR analysis of each sample was performed using the Applied Biosystems ViiA 7 RT-PCR system (Thermo Fisher Scientific). TaqMan gene expression assays for real-time PCR (Thermo Fisher Scientific) are listed in the Table. The average copy number of gene transcripts was normalized to that of glyceraldehyde-3-phosphate dehydrogenase for each sample. The data were analyzed using the ΔCT method. All statistical analyses were performed by comparing the 2 -ΔCT values between groups and the results were plotted as fold change ± standard deviation (2 -ΔCT ). RT-PCR analysis of undifferentiated markers: PCR analysis of undifferentiated markers such as OCT3/4, NANOG, TERT was performed as previously described. 13) RNA was extracted from cells using the RNeasy Plus Mini kit (Qiagen). An aliquot of total RNA was reverse transcribed using an oligo (dT) primer. For the thermal cycle reactions, the cDNA template was amplified (ABI PRISM 7900HT Sequence Detection System, Thermo Fisher Scientific) with gene-specific primer sets using the Platinum SYBR Green qPCR SuperMix-UDG (Invitrogen, Carlsbad, CA) under the following reaction conditions: 35 cycles at 95°C for 15 seconds and 60°C for 1 minute, after an initial denaturation at 95°C for 2 minutes). Fluorescence was monitored during every PCR cycle at the annealing step. The authenticity and size of the PCR products were confirmed by a gel analysis. RNA from iPS cells (MRC-5#25P52) 14) was used as a positive control, and RNA from fibroblasts (MRC-5+P10) 14) was used as a negative control. The primers used were as follows: OCT 3/4, Forward: cgagcaatttgccaagctcctga, Reverse: ttcgggcactg caggaacaaattc; NANOG, Forward: tccagcagatgcaagaactctcc, Reverse: tccaggcctgattgttccaggatt; TERT, Forward: gagcaag ttgcaaagcattg, Reverse: tttctctgcggaacgttctg. Immunohistochemistry: Immunostaining of NI2-1 iPS cells was performed as previously described. 13) Anti-Oct3/ 4 mouse monoclonal antibody (Santa Cruz, CA, USA) and anti-Nanog rabbit polyclonal antibody (ReproCELL) were used as primary antibodies. 
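The ΔCT normalization described in the qRT-PCR paragraph above can be written out explicitly: each gene's Ct value is referenced to GAPDH within the same sample, converted to 2^-ΔCT, and the group means are then compared as a fold change. The Ct values in this minimal sketch are illustrative placeholders, not measured data.

```python
import statistics

def delta_ct_expression(ct_gene, ct_gapdh):
    """Relative expression 2^-(Ct_gene - Ct_GAPDH) for each sample."""
    return [2 ** -(g - r) for g, r in zip(ct_gene, ct_gapdh)]

def fold_change(control_expr, variant_expr):
    """Mean 2^-dCT of the variant group divided by the mean of the control group."""
    return statistics.fmean(variant_expr) / statistics.fmean(control_expr)

# Placeholder Ct values (three replicates per group) for one contractile gene.
ct_gene_ctrl, ct_gapdh_ctrl = [24.1, 24.3, 23.9], [18.0, 18.2, 17.9]
ct_gene_lmna, ct_gapdh_lmna = [26.0, 26.4, 25.8], [18.1, 18.0, 18.2]

ctrl = delta_ct_expression(ct_gene_ctrl, ct_gapdh_ctrl)
lmna = delta_ct_expression(ct_gene_lmna, ct_gapdh_lmna)
print(f"fold change (LMNA vs control) = {fold_change(ctrl, lmna):.2f}")   # < 1 means downregulated
```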
Sanger sequencing analysis: We performed Sanger sequencing analysis of NI2-1 as follows. In order to confirm the genetic mutation of the LMNA p.R225X iPS cells, the region of genomic DNA containing the mutation was amplified and sequenced. A Guide-it mutation detection kit (Takara, Kusatsu, Japan) was used to amplify the relevant region. The cells were lysed in the extraction buffer and incubated at 95°C for 10 minutes. A portion of the lysate was used for PCR using Terra PCR Direct Polymerase (Takara). The reaction protocol was as follows: 98°C for 2 minutes (98°C for 10 seconds, 60°C for 15 seconds, 68°C for 20 seconds) × 35, then 4°C. The sequence of the primers was as follows: Forward, GCTGGTAGTGGCTC ATGGAG; Reverse, ATACTGCTCCACCTGGTCCT. The PCR product was purified using the Wizard SV Gel and PCR Clean-Up System (Promega, Tokyo), confirmed to be a single band by agarose gel electrophoresis, and submitted to sequencing reactions (FASMAC, Atsugi, Japan). Karyotype analysis: Karyotypic analysis was also contracted out to Chromosome Science Labo Inc. (Sapporo, Japan). Metaphase spreads were prepared from cells treated with 100 ng/mL of Colcemid for 6 hours. The cells were fixed with methanol: glacial acetic acid (2:5) 3 times, and dropped onto glass slides. The chromosome numbers of 50 cells were analyzed and 20 cells were randomly chosen for karyotype analysis using the Q banded method. FUNCTIONAL EVALUATION OF LAMIN VARIANT DCM Statistical analysis: Data are presented as the mean ± standard deviation. Statistical comparisons between the 2 groups were performed using the unpaired Student's t-test. Statistical significance was set at P < 0.05. Results Generating iPS cell lines and confirming the pluripotency of the cells: NI2-1 cells formed flat-shaped colonies consistent with iPS cells ( Figure 1B). When we carried out immunostaining of undifferentiated markers to confirm their pluripotency, almost all cells expressed OCT 3/4 and NANOG ( Figure 1C). The RT-PCR analysis revealed that NI2-1 cells also expressed mRNA of OCT3/4, NANOG and TERT ( Figure 1D), suggesting that the generated cells were compatible as iPS cells. Sanger sequencing revealed that the NI2-1 iPS cells have the p. R225X LMNA mutation ( Figure 1E). The karyotype analysis showed that most cells (17/20) of NI2-1 clones at passage 19 had an intact karyotype ( Figure 1F). Cardiac tissue of the lamin variant exhibits impaired contractile force: Next, we fabricated the cardiac tissue and measured the contractile force as shown in Figure 2 A-E. Since the purity of cardiomyocytes in tissue might affect its contractile properties and certain extent levels of non-cardiomyocytes such as fibroblasts are indispensable for fabricating cardiac cell sheet, 6,7,15) we used differentiated cells with a cardiomyocyte purity of 50-75% for cell sheet tissue fabrication. As shown in Figures 3A and B, the purity of cardiomyocytes was identical between the groups [201B7 (Wild Type), 59.1 ± 5.5% (n = 3); NI2-1 (LMNA p.R225X), 55.9 ± 4.9% (n = 3)]. After initiating the contractile force measurement, the contractile force tended to increase gradually and was stabilized at day 37 (day 5 on the measurement system, data not shown). Therefore, we compared the contractile properties of the cardiac cell sheet tissue at day 37. The representative cardiac force traces are shown in Figure 4A. 
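The contractile parameters defined in the next paragraph (peak force, maximum ±dF/dt, and the 20%-of-peak timings), together with the unpaired Student's t-test described in the statistical analysis paragraph above, can be computed from sampled force traces as in the minimal sketch below; the synthetic trace and the per-group force values are placeholders rather than measured data.

```python
import numpy as np
from scipy import stats

def beat_parameters(force, dt):
    """Extract contractile parameters from one beat of a force trace (mN, sampled every dt seconds)."""
    peak_idx = int(np.argmax(force))
    peak = float(force[peak_idx])
    dfdt = np.gradient(force, dt)                      # instantaneous dF/dt
    above = np.nonzero(force >= 0.2 * peak)[0]         # samples at or above 20% of peak
    return {
        "contractile_force_mN": peak,
        "contraction_velocity_mN_per_s": float(dfdt[:peak_idx + 1].max()),   # max +dF/dt
        "relaxation_velocity_mN_per_s": float(dfdt[peak_idx:].min()),        # max -dF/dt
        "time_of_systole_s": (peak_idx - above[0]) * dt,                     # 20% of peak -> peak
        "time_of_relaxation_s": (above[-1] - peak_idx) * dt,                 # peak -> 20% of peak
    }

# Synthetic single beat sampled at 100 Hz (placeholder, not a measured trace).
t = np.arange(0.0, 1.0, 0.01)
force = 1.1 * np.exp(-((t - 0.3) / 0.12) ** 2)
print(beat_parameters(force, dt=0.01))

# Placeholder peak forces (mN) for the two lines, compared with an unpaired Student's t-test.
force_201b7 = [0.95, 1.10, 1.22, 1.13]                      # control, n = 4
force_ni21 = [0.41, 0.52, 0.60, 0.75, 0.48, 0.66, 0.50]     # LMNA p.R225X, n = 7
t_stat, p_value = stats.ttest_ind(force_201b7, force_ni21)  # equal variances assumed
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```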
The cardiac parameters were defined as follows: the contractile force was the maximum height of the curve; the beating rate MIURA, ET AL represented the beating time per min; the time of systole was the time taken from 20% of the maximum height of the peak, and the time of relaxation was the time taken from the peak to 20% of the peak; the contraction velocity was the maximum +dF/dt during systole, and the relaxation velocity was the maximum -dF/dt during relaxation ( Figure 4B). Although the systolic functions, including contractile force and contraction velocity, of the cardiac cell sheet tissue with the lamin variant were significantly impaired [contractile force: 201B7 (Wild Type), 1.10 ± 0.15 mN (n = 4); NI2-1 (LMNA p.R225X), 0.56 ± 0.18 mN (n = 7), P = 0.001; contraction velocity: 201B7, 6.6 ± 1.2 mN/second (n = 4); NI2-1, 3.7 ± 1.0 mN/second (n = 7), P = 0.004], relaxation function such as relaxation velocity was identical between the groups [201B7, -3.0 ± 0.9 mN/second (n = 4); NI2-1, -2.9 ± 1.0 mN/second (n = 7), P = 0.95] ( Figure 4C). Beating rate and time of systole were also identical between the groups [beating rate: 201B7, 30.5 ± 6.7 bpm (n = 4); NI2-1, 38.6 ± 19.3 bpm (n = 7) (P = 0.48); time of systole: 201B7, 0.25 ± 0.13 seconds (n = 4); NI2-1, 0.26 ± 0.03 seconds (n = 7) (P = 0.61)]. Time of relaxation was significantly decreased in NI2-1 compared to the control (201B7, 0.34 ± 0.03 seconds (n = 4); NI2-1, 0.24 ± 0.03 seconds (n = 7), P = 0.001). These findings suggest that systolic dysfunction, but not relaxation dysfunction, might be a functional property of cardiac cell sheet-tissue with the lamin variant. During the measurement period, no apparent arrhythmia was observed in either sample. qRT-PCR analysis: We performed qRT-PCR analysis to clarify the underlying mechanisms of impaired systolic function in cardiac cell sheet tissue with the lamin variant. As shown in Figure 5, the cardiac gene expressions of contractile proteins including MYL2, MYL7, MYH6, MYH7, TNNT2, transcription factors including NKX2.5, GATA4, GATA6, MEF2C, ion channels such as HCN4, KCNE1, KCNH2, KCNQ1, SLC8A1, gap junction proteins such as GJA1, GJA5, GJC1 and Ca 2+ handling proteins including ATP2A2, RYR2 were downregulated in cardiac cell sheet tissue with the lamin variant. On the other hand, the expression of certain cardiac genes for transcription factors including TBX5 and WT1, and Ca 2+ handling proteins including CACNA1C and PLN, were not different in cardiac tissue between the cell lines 201B7 and NI2-1. Therefore, downregulated gene expression of contractile proteins, transcription factors, ion channels, and Ca 2+ handling proteins might be responsible for the impaired systolic function of cardiac cell sheet tissue with the lamin variants. Discussion In the present study, we generated iPS cells from a patient with the lamin variant and succeeded in identifying impaired systolic function as the functional property of cardiac cell sheet tissue. We also observed the downregulation of mRNA expression associated with some contractile proteins, cardiac transcription factors, ion channels, and Ca 2+ handling proteins. We have identified that contractile force along with contraction velocity was impaired in cardiac cell sheet tissue with the lamin variant. Further, the systolic dysfunction in cardiac tissue with the lamin variant is comparable to the clinical profile of patients with DCM. 
16) The clinical phenotypes of patients with lamin variants of DCM have been reported to be distinct between the ages of 30 and 40. 17) However, in the present study, impaired contractility was observed in freshly differentiated cardiomyocytes, which are considered fetal-like cardiomyocytes. Various types of compensatory mechanisms, including the Frank-Starling law and neurohormonal factor activation in the living body, may delay the typical onset even in cardiomyocytes with systolic dysfunction. Further, the inadequate inborn or responsive cell proliferation capacity of cardiomyocytes has been reported to contribute to the development of DCM with the lamin mutation. 18) Therefore, intensive observation and management might be necessary in patients with the lamin variant of DCM. Diastolic function is mediated by various factors such as Ca 2+ overload and fibrosis and becomes increasingly evident in DCM with disease progression and age. [19][20][21] The intact relaxation function of cardiac cell sheet tissue with the lamin variant in the present study may also be compatible with the early phase of DCM. According to the qRT-PCR analysis, mRNA expression levels of contractile proteins, ATP2A2, RYR2, transcriptional factors, and ion channels were downregulated in cardiac cell sheet tissue with the lamin variant. Although the downregulation of contractile protein gene expression directly affects systolic function in cardiac tissues, we cannot exclude the possibility that this phenomenon might be the result of the impaired contraction of cardiac tissue with the lamin variant. In the present study, cardiomyocyte purity was identical between the groups. However, since the mRNA expression levels of cardiac transcription factors, including NKX2.5, GATA4, GATA6, and MEC2C, were downregulated in cardiac tissue with the lamin variant, cardiomyocytes might be more immature, which can result in impaired systolic function. It is well known that the NKX2.5, 22) GATA4, 23) and MEF2 C 24) genes are critical for heart development, and variants in these transcription factors cause congenital heart diseases including atrial septal defect and ventricular septal defect. It has been reported that iPS cell-derived cardiomyocytes from subjects with a heterozygous GATA4-G296S missense mutation have impaired contractility, calcium handling, and metabolic activity. 25) The GATA4 binding site has also been reported to be located within the 5' flanking sequence of the human cardiac alpha-myosin heavy chain encoding gene. 26) Since MYH6 was downregulated in cardiac tissue with the lamin variant, insufficient expression levels of GATA4 might impair systolic function through the downregulation of MYH6. NKX2.5 and MEF2C genes have been reported to upregulate each other's expression in the process of cardiac differentiation in P19 cells, 27) and these transcription factors have been reported to interact cooperatively in heart development. 28) Although it remains unclear how NKX2.5 and MEF2C expression levels are downregulated in patients with the lamin variant, lower expression levels of these genes might affect cardiomyocyte differentiation and contractile function. ATP2A2 codes for SERCA2a, which acts as a subtype of SERCA expressed in the heart. As SERCA2a mediates Ca 2+ reuptake into the sarcoplasmic reticulum in cardiomyocytes, the downregulation of ATP2A2 in cardiac tissue with the lamin variant might cause Ca 2+ re-uptake dysfunction. 
SERCA2a expression levels have been reported to be reduced in failing hearts, and the amelioration of SERCA2a expression using adeno-associated virus leads to the attenuation of reduced cardiac contractility and heart failure. 29) Therefore, the downregulation of ATP2A2 may be a cause of contractile impairment in cardiac tissue with the lamin variant. Although cardiac cell sheet tissue with the lamin variant showed low expression levels of KCNE1, KCNH2, KCNQ1, which regulate the repolarization of cardiomyocytes, no apparent tendency toward arrhythmia was observed. iPS cells have been used for research on DCM with lamin variants. Lee, et al. generated iPS cells from patients with DCM who carry a frameshift of LMNA, leading to the early termination of translation (348-349 insG; K117fs). 30) Further, they showed that cardiomyocytes have impaired Ca 2+ intensity, and that aberrant calcium homeostasis led to arrhythmias through the activation of the PDGF signaling pathway. 30) Bertero, et al. reported that iPS-derived cardiomyocytes with the LMNA p.R225X mutation showed increased Ca 2+ intensity and increased contractility. 31) In contrast, cardiac cell sheet tissue with the LMNA p.R225X mutation showed impaired systolic function in the present study. Differences in contractile force measurement strategy and cellular components in cardiac tissue might explain this discrepancy. They evaluated the contractile function by calculating the change of motion in the imaging data, 31) while the contractile force measurement system of the present study enabled a direct evaluation of the contractile force. The cardiomyocyte purity in a previous study was over 95%. 31) Since certain levels of fibroblasts are necessary for fabricating cardiac cell sheets, 15) we used cardiomyocytes whose purity was 50-75% in the present study. We previously reported that almost all noncardiomyocytes after cardiac differentiation of human iPS cells were fibroblast-like cells, 7,32) and that the gene expression profile of iPS cell-derived fibroblast-like cells after cardiac differentiation was similar to that of human atrium and ventricle-derived fibroblasts. 32) Recently, Lachaize, et al. reported that cardiac fibroblasts from neonatal rat hearts with a missense mutation in the lamin A/C gene (LMNA D192G mutation) showed cytoskeleton disorganization, decreased elasticity, and altered cell-cell adhesion properties. 33) The analysis of the profile of noncardiomyocytes with the LMNA p.R225X mutation will be necessary to understand the differences in noncardiomyocytes between 201B7 and NI2-1. However, consistent with the evidence that the native heart contains various types of cells, including cardiomyocytes and noncardiomyocytes such as fibroblasts, the co-existence of noncardiomyocytes in cardiac tissue might be similar to that in the native heart, and the systolic dysfunction observed in the present study might be more compatible with the phenotype of DCM with the lamin variant. It has been reported that cardiomyocyte apoptosis contributes to the pathogenesis of DCM with the lamin variant. 34,35) Since cardiomyocytes with lamin variants may be more fragile or susceptible to contraction in a stringent in vitro environment compared to normal cardiomyocytes, we cannot exclude the possibility that loss of cardiomyocytes in the experimental period affected the decreased contractile force in the cardiac tissue with the LMNA p.R225X mutation.
Analysis of the number of cardiomyocytes in cardiac tissue will be necessary to understand the precise mechanisms of impaired contractility of cardiac tissue with the LMNA p.R225X mutation. In conclusion, we have shown that systolic dysfunction is a phenotype of cardiac tissue with the LMNA p.R225X mutation. Bioengineered cardiac tissue may provide a novel tool for understanding the molecular mechanisms of DCM with the lamin variant.
Limitations: In this study, we used 201B7 as a healthy control since an isogenic control with the LMNA mutation corrected was not available. We observed that a small number of cells had abnormal karyotypes and cannot exclude the possibility that these cells affected the function of cardiac tissue derived from NI2-1.
Acknowledgments We thank E. Matsuda, H. Miyatake, K. Sugiyama, and M. Tejima for their excellent technical assistance. Tatsuya
2022-03-31T15:22:45.022Z
2022-03-30T00:00:00.000
{ "year": 2022, "sha1": "283f304dfc6a50e291f71bb0ac1af1d461176bfa", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/ihj/63/2/63_21-790/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2c2014ea21792cff1e83517466ff4cbe5f88f2b0", "s2fieldsofstudy": [ "Biology", "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119171463
pes2o/s2orc
v3-fos-license
Two floor building needing eight colors
Motivated by frequency assignment in office blocks, we study the chromatic number of the adjacency graph of $3$-dimensional parallelepiped arrangements. In the case each parallelepiped is within one floor, a direct application of the Four-Colour Theorem yields that the adjacency graph has chromatic number at most $8$. We provide an example of such an arrangement needing exactly $8$ colours. We also discuss bounds on the chromatic number of the adjacency graph of general arrangements of $3$-dimensional parallelepipeds according to geometrical measures of the parallelepipeds (side length, total surface or volume).
Introduction
The Graph Colouring Problem for Office Blocks was raised by BAE Systems at the 53rd European Study Group with Industry in 2005 [1]. Consider an office complex with space rented by several independent organisations. It is likely that each organisation uses its own wireless network (WLAN) and asks for a safe utilisation of it. A practical way to deal with this issue is to use a so-called "stealthy wallpaper" in the walls and ceilings shared between different organisations, which would attenuate the relevant frequencies. Yet, the degree of screening produced will not be sufficient if two distinct organisations have adjacent offices, that is, two offices in face-to-face contact on opposite sides of just one wall or floor-ceiling. In this case, the WLANs of the two organisations need to be using two different channels (the reader is referred to the report by Allwright et al. [1] for the precise technical motivations). This problem can be modeled as a graph coloring problem by building a conflict graph corresponding to the office complex: to each organisation corresponds a vertex, and two vertices are adjacent if the corresponding territories share a wall, floor, or ceiling area. The goal is to assign a color (frequency) to each vertex (organisation) such that adjacent vertices are assigned distinct colors. In addition, not every graph may occur as the conflict graph of an existing office complex. However, the structure of such conflict graphs is not clear and various fundamental questions related to the problem at hand were asked. Arguably, one of the most natural questions concerns the existence of bounds on the chromatic number of such conflict graphs. More specifically, which additional constraints should one add to the model to ensure "good" upper bounds on the chromatic number of conflict graphs? These additional constraints should be meaningful regarding the practical problem, reflecting as much as possible real-world situations. Indeed, as noted by Tietze [7], complete graphs of arbitrary size are conflict graphs, that is, for every integer n, there can be n organisations whose territories all are in face-to-face contact with each other. (The second and third authors' work was partially supported by the French Agence Nationale de la Recherche under references ANR-12-JS02-002-01 and anr 10 jcjc 0204 01, respectively.) The reader is referred to the paper by Reed and Allwright [6] for a description of Tietze's construction. Besicovitch [4] and Tietze [8] proved that this is still the case if the territories are asked to be convex polyhedra. An interesting condition is when the territories are required to be rectangular parallelepipeds (sometimes called cuboids), that is, 3-dimensional solid figures bounded by six rectangles and aligned with a fixed set of Cartesian axes. For convenience, we shall call box a rectangular parallelepiped.
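To make the conflict-graph model concrete before moving on, here is a small illustrative sketch in Python (ours, not taken from this note): a box is stored as three integer intervals, two boxes are taken to be adjacent when they are in face-to-face contact, i.e. their boundaries touch along exactly one axis while overlapping in the other two, and a simple greedy procedure then colours the conflict graph. The greedy colouring is only a heuristic and does not realize the Four-Colour-Theorem bound discussed below.

    from itertools import combinations

    # A box is ((x1, x2), (y1, y2), (z1, z2)) with integer coordinates and x1 < x2, etc.
    def face_to_face(b1, b2):
        """True if the two axis-aligned boxes share a 2-dimensional piece of boundary."""
        touching = overlapping = 0
        for (a1, a2), (c1, c2) in zip(b1, b2):
            lo, hi = max(a1, c1), min(a2, c2)
            if hi > lo:
                overlapping += 1        # positive-length overlap in this axis
            elif hi == lo:
                touching += 1           # boundaries touch in this axis
        return touching == 1 and overlapping == 2

    def conflict_graph(boxes):
        adj = {i: set() for i in range(len(boxes))}
        for i, j in combinations(range(len(boxes)), 2):
            if face_to_face(boxes[i], boxes[j]):
                adj[i].add(j)
                adj[j].add(i)
        return adj

    def greedy_colouring(adj):
        colour = {}
        for v in sorted(adj, key=lambda v: -len(adj[v])):   # colour high-degree boxes first
            used = {colour[u] for u in adj[v] if u in colour}
            colour[v] = next(c for c in range(len(adj) + 1) if c not in used)
        return colour

    # Toy example: four boxes spanning two floors that are pairwise in face-to-face contact.
    boxes = [((0, 2), (0, 1), (0, 1)), ((0, 2), (1, 2), (0, 1)),
             ((0, 1), (0, 2), (1, 2)), ((1, 2), (0, 2), (1, 2))]
    print(greedy_colouring(conflict_graph(boxes)))

Because the toy arrangement forms a clique of size four, the greedy colouring is forced to use four colours, matching the clique-number bound recalled just below.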
When all territories are boxes, the clique number of any conflict graph, that is, the maximum size of a complete subgraph, is at most 4. However, Reed and Allwright [6] and also later Magnant and Martin [5] designed arrangements of boxes that yield conflict graphs requiring an arbitrarily high number of colours. On the other hand, if the building is assumed to have floors (in the usual way) and each box is 1-floor, i.e. restricted to be within one floor, then the chromatic number is bounded by 8: on each floor, the obtained conflict graph is planar and hence can be coloured using 4 colours [2,3]. It is natural to ask whether this bound is tight. As noted during working sessions in Oxford (see the acknowledgments), it can be shown that up to 6 colours can be needed, by using an arrangement of boxes spanning three floors. Such a construction is shown in Figure 1. The purpose of this note is to show that the upper bound is actually tight. More precisely, we shall build an arrangement of 1-floor boxes that spans two floors and yields a conflict graph requiring 8 colours. From now on, we shall identify a box arrangement with its conflict graph for convenience. In particular, we assign colors directly to the boxes and define the chromatic number of an arrangement as that of the associated conflict graph. The boxes considered in Theorem 1 have one of their geometrical measures bounded: their height is at most one floor. We also discuss bounds on the chromatic number of box arrangements with respect to some other geometrical measures: the side lengths, the surface area and the volume. More precisely, assuming that boxes have integer coordinates, we obtain the following. Theorem 2. We consider a box arrangement A with integer coordinates. (1) If there exists one fixed dimension such that every box in A has length at most ℓ in this dimension, then A has chromatic number at most 4(ℓ + 1). In the next section, we give the proof of Theorem 1 and in the last section we indicate how to obtain the bounds given in Theorem 2.
Proof of Theorem 1
We shall construct an arrangement of 1-floor boxes that is not 7-colorable. Before that, we need the following definition. Consider an arrangement A and let A_1, A_2 and A_3 be (non-necessarily disjoint) subsets of the boxes in A. Given a proper coloring c of A, let C_i be the set of colors used for the boxes in A_i; the signature of c is defined from these sets. The collection of all signatures can be endowed with a partial order. To build the desired arrangement, we use the arrangement X of 1-floor boxes described in Figure 2 as a building brick. The arrangement X has three specific regions, X_1, X_2 and X_3. We also abusively write X_1, X_2 and X_3 to mean the subsets of boxes of X respectively intersecting the regions X_1, X_2 and X_3 (note that some boxes may belong to several subsets). We start by giving some properties of the signatures of X with respect to proper colorings and according to the three subsets X_1, X_2 and X_3.
Figure 2. The gadget X with the regions X_1, X_2 and X_3.
The proof of this assertion does not need any insight; we thus omit it. However, the reader interested in checking its accuracy should first note that one can restrict to the cases where the three vertical (in Figure 2) boxes are respectively colored either 1, 2 and 1; or 1, 1 and 2; or 1, 2 and 3. Now, the arrangement Y is obtained from three copies X^1, X^2 and X^3 of the arrangement X. We define three regions Y_1, Y_2 and Y_3 on Y as depicted in Figure 3.
As previously, we also write Y_1, Y_2 and Y_3 for the subsets of boxes intersecting the regions Y_1, Y_2 and Y_3, respectively. We set X^j_i := Y_i ∩ X^j for (i, j) ∈ {1, 2, 3}². Assertion 4. In any proper coloring of Y, at least four colors are used in one of the three regions. Proof. Suppose on the contrary that there is a proper coloring c of Y with at most three colors in each of Y_1, Y_2 and Y_3. For i ∈ {1, 2, 3}, the restriction of c to X^i is a proper coloring of X^i, which we identify with c. The condition on c implies that none of σ_{X^1}(c), σ_{X^2}(c) and σ_{X^3}(c) fulfills inequality (1) of Assertion 3. In particular, note that exactly 3 different colours appear on X^1_2, and they also appear on X^2_2 and on X^3_2. Since X^2_1 = X^2_2, these three colours appear on Y_1. Similarly, since X^3_3 = X^3_2, these three colours appear on Y_3. Assume now that σ_{X^1}(c) satisfies (2). Then exactly three colours appear on X^1_1, one of which does not appear on X^1_2 as |c(X^1_1 ∪ X^1_2)| ≥ 4. Thus in total at least four colours appear on X^1_1 ∪ X^2_1 ⊂ Y_1, which contradicts our assumption on c. It remains to deal with the case where σ_{X^1}(c) fulfills (3) of Assertion 3. Thanks to the symmetry of (2) and (3) with respect to the regions X_1 and X_3, the same reasoning as above applied to X^3_3 instead of X^2_1 yields that four colours appear on Y_3, a contradiction. To finish the construction, we need the following definition. Consider two copies Y^1 and Y^2 of Y. The regions Y^1_i and Y^2_j fully overlap if every box in Y^1_i is in face-to-face contact with every box in Y^2_j. Observe that for every pair (i, j) ∈ {1, 2, 3}², there exists a 2-floor arrangement of Y^1 and Y^2 such that Y^1_i and Y^2_j fully overlap: it is obtained by rotating Y^2 ninety degrees, adequately scaling it (i.e. stretching it horizontally) and placing it on top of Y^1. We are now in a position to build the desired arrangement Z spanning two floors. To this end, we use several copies of Y. The first floor of Z is composed of seven parallel copies Y^1, . . . , Y^7 of Y (drawn horizontally in Figure 4). The second floor of Z is composed of fifteen parallel copies of Y (drawn vertically in Figure 4): for each j ∈ {1, 2, 3} and each i ∈ {2, . . . , 6}, a copy Y(i, j) of Y is placed such that the first region of Y(i, j) fully overlaps the regions Y^1_j, . . . , Y^{i−1}_j, the second region of Y(i, j) fully overlaps the region Y^i_j, and the third region of Y(i, j) fully overlaps the regions Y^{i+1}_j, . . . , Y^7_j. Consider a proper coloring of Z. Assertion 4 ensures that each copy of Y in Z has a region for which at least four different colours are used. In particular, by the pigeonhole principle, there exists j ∈ {1, 2, 3} such that at least three regions among Y^1_j, . . . , Y^7_j are colored using at least four colours. Let these regions be Y^{i_1}_j, Y^{i_2}_j and Y^{i_3}_j with 1 ≤ i_1 < i_2 < i_3 ≤ 7. Now, consider the arrangement Y(i_2, j). By Assertion 4, there exists k ∈ {1, 2, 3} such that the k-th region of Y(i_2, j) is also colored using at least four different colors. Consequently, as this region and the region Y^{i_k}_j fully overlap, they are colored using at least eight different colors. This concludes the proof.
Bounds with respect to geometrical measures
In this part, we provide bounds on the chromatic number of box arrangements provided that the boxes satisfy some geometrical constraints. Namely, we prove Theorem 2, which is recalled here for the reader's ease. Theorem 2. We consider a box arrangement A with integer coordinates.
(1) If there exists one fixed dimension such that every box in A has length at most ℓ in this dimension, then A has chromatic number at most 4(ℓ + 1). (2) If every box in A has length at most ℓ in at least one dimension (which may differ from box to box), then A has chromatic number at most 12(ℓ + 1). (3) If every box in A has total surface area at most s, then A has chromatic number at most 9(4s)^{1/3} + 13. (4) If every box in A has volume at most v, then A has chromatic number at most 24(6v)^{1/4} + 13.
Proof. (1) The conflict graph corresponding to an arrangement where the boxes have height at most ℓ can be vertex partitioned into ℓ + 1 planar graphs P_0, . . . , P_ℓ. Indeed, if the distance between the levels of two boxes is at least ℓ + 1, then these two boxes are not adjacent. So the planar graphs are obtained by assigning, for each x, all the boxes that have their floor at level x to be in the graph P_k where k := x mod (ℓ + 1). Consequently, the whole conflict graph has chromatic number at most 4(ℓ + 1).
(2) The boxes can be partitioned into three sets according to the dimension in which the length is bounded. In other words, A is partitioned into U_1, U_2 and U_3 such that for each i ∈ {1, 2, 3}, all boxes in U_i have length at most ℓ in dimension i. Consequently, (1) ensures that each of U_1, U_2 and U_3 has chromatic number at most 4(ℓ + 1) and, therefore, A has chromatic number at most 3 · 4(ℓ + 1) = 12(ℓ + 1).
(3) For each box, the minimum length taken over all three dimensions is at most √s, and thus (2) implies that the chromatic number of A is O(√s). However, one can be more careful. Let us fix a positive integer ℓ, to be made precise later. The set of boxes is partitioned as follows. Let U be the set of boxes with lengths in every dimension at least ℓ and let R be the set of all remaining boxes, that is, R := A \ U. By (2), the arrangement R has chromatic number at most 12ℓ. Now consider a box B in U with dimensions x, y and z, each being at least ℓ. We shall give an upper bound on the number of boxes of U that can be adjacent to B. The surface of a face of a box in U is at least ℓ². So in U there are at most s/ℓ² boxes that have a face totally adjacent to a face of B. Some boxes of U could also be adjacent to B without having a face totally adjacent to a face of B. In this case, such a box is adjacent to an edge of B. For an edge of length w, there are at most w/ℓ + 1 such boxes. So the number of boxes of U adjacent to B but having no face totally adjacent to a face of B is at most 4(x + y + z)/ℓ + 12. Since ℓ ≤ min{x, y, z}, we deduce that 2ℓ(x + y + z) ≤ 2xy + 2yz + 2xz ≤ s. Hence the total number of boxes in U that are adjacent to B is at most s/ℓ² + 2s/ℓ² + 12 = 3s/ℓ² + 12. Consequently, by degeneracy, U has chromatic number at most 3s/ℓ² + 13. Therefore, A has chromatic number at most 12ℓ + 3s/ℓ² + 13. Setting ℓ := (s/2)^{1/3} yields the upper bound 9(4s)^{1/3} + 13.
(4) Once again, for a fixed parameter ℓ to be made precise later, the set of boxes is partitioned into two parts: the part U, composed of all the boxes with lengths in every dimension at least ℓ, and the part R, composed of all the remaining boxes. By (2), we know that R has chromatic number at most 12ℓ. Let B be a box in U with dimensions x, y and z. Since ℓ ≤ min{x, y, z}, the volume v_B of B satisfies 6v ≥ 6v_B = 6xyz ≥ 2(ℓxy + ℓxz + ℓyz) = ℓ s_B, where s_B is the total surface area of B. So every box in U has total surface area at most 6v/ℓ and thus (3) implies that U has chromatic number at most 9(24v/ℓ)^{1/3} + 13. Therefore, A has chromatic number at most 9(24v/ℓ)^{1/3} + 12ℓ + 13. Setting ℓ to be (3v/8)^{1/4} yields the upper bound 24(6v)^{1/4} + 13.
In the previous theorem, we are mainly concerned with the order of magnitude of the functions of the different parameters. However, even in this context, we do not have any non-trivial lower bound on the corresponding chromatic numbers.
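As a worked illustration of the optimization used in part (3) above (our own derivation, treating ℓ as a continuous parameter and ignoring integrality), minimizing the bound 12ℓ + 3s/ℓ² + 13 over ℓ gives
\[
\frac{d}{d\ell}\left(12\ell + \frac{3s}{\ell^{2}}\right) = 12 - \frac{6s}{\ell^{3}} = 0
\quad\Longrightarrow\quad \ell = \left(\frac{s}{2}\right)^{1/3},
\]
\[
12\left(\frac{s}{2}\right)^{1/3} + 3s\left(\frac{2}{s}\right)^{2/3} + 13
= \left(6\cdot 2^{2/3} + 3\cdot 2^{2/3}\right)s^{1/3} + 13
= 9\,(4s)^{1/3} + 13,
\]
which reproduces the bound stated in part (3); part (4) follows in the same way with the objective 9(24v/ℓ)^{1/3} + 12ℓ + 13 and the choice ℓ = (3v/8)^{1/4}.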
2014-05-26T15:33:29.000Z
2014-05-26T00:00:00.000
{ "year": 2014, "sha1": "879e2032f9f3828076c34aebac0ef0efef1e5e95", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "879e2032f9f3828076c34aebac0ef0efef1e5e95", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
244874643
pes2o/s2orc
v3-fos-license
Facilitators and Barriers to Student Learning and Impact of an Undergraduate Clinical Posting in Psychiatry: A Thematic Analysis Background: There is an absence of information on empirical evaluation of undergraduate psychiatry training programs in India. We aimed to evaluate a clinical posting in psychiatry for undergraduate medical students. Methods: We employed levels one and two of Kirkpatrick's four-level program evaluation model. The qualitative study used written feedback that was collected using a semistructured questionnaire. For quantitative metrics, we used end-of-posting assessment scores and frequencies of standard comments provided by examiners on case-based discussions with students to evaluate their clinical skills. Results: We obtained written feedback from 40 female and 19 male fifth-semester students. We identified facilitators (patient interaction, outpatient department observation and teaching, demonstration of signs, case presentation and discussion, evening posting, observation of clinical work, use of anecdotes while teaching, and lectures by senior faculty) and barriers (organizational issues related to evening posting and disinterest in didactic teaching) to the students' learning of psychiatry, and the perceived impact of the posting for the students (changed attitudes, knowledge, self-efficacy, and skills acquired). The mean total score on case-based discussion, assigned to 22 groups of students, was 3.86 out of 5. Conclusion: We described the impact of the posting and identified unique facilitators and barriers to students' learning in psychiatry. These findings will inform the choice of teaching-learning methods in the context of the new Competency-Based undergraduate Medical Education (CBME) curriculum. Few medical colleges in India have psychiatry as a separate examination subject at the UG level. [5][6][7] The new competency-based undergraduate medical education (CBME) curriculum has been described as a laudable attempt to modernize medical education in India. 8,9 It provides an opportunity to structure the psychiatry curriculum and use optimal teaching-learning methods to equip the Indian Medical Graduate with the requisite knowledge, attitudes, and skills to help patients with psychiatric disorders who seek help in primary care settings. 10 However, it is necessary to identify facilitators and barriers to UG medical students' learning in psychiatry that will inform the choice of optimal teaching-learning methods. Also, as students are the ultimate beneficiaries, it is imperative that the teaching-learning methods adopted fulfill their needs and are well-suited to enable them to learn effectively.
Models of teaching psychiatry to UG medical students have been described based on the consensus of experienced UG psychiatry teachers. [11][12][13] Although there has been some evaluation of these models, 11 not all have been evaluated empirically. Other studies in this area have focused either on evaluating specific teaching-learning methods or only on data from student feedback. 14,15 Addressing the absence of an empirical evaluation of a clinical teaching program, we aimed to evaluate a 15-day clinical posting in psychiatry for UG medical students across two aspects: (a) qualitative-facilitators and barriers to learning, and perceived impact of the posting for students; and (b) quantitative-psychiatry clinical skills in students. Setting We conducted the study in a general hospital psychiatry unit of a private medical college in Bengaluru with more than 50 years of experience running the UG medical training program. The hospital's Department of Psychiatry has been in existence for 40 years and, at the time of writing this article, included 15 psychiatrists (nine faculty and five senior residents), two clinical psychologists, and three psychiatric social work consultants. The medical college accepts 150 UG medical students per year. The Program: Fifteen-Day Clinical Posting in Psychiatry The program is based on the model for teaching psychiatry to UG medical students proposed by Manohari et al. 13 The posting is of 15 days, consisting of three hours per day. Thirty students in the fifth semester of their UG medical course are posted at a time and are divided into groups of five to six students. The end-of-posting assessment is conducted on the penultimate day of the posting. It consists of the following: (a) Case-based discussion (CBD), in groups, on the inpatient allotted at the beginning of the posting (5 marks). We chose this method of assessment, given its greater relevance in assessing learning during the clinical posting. 16 Students are assessed on the following parameters: history, physical examination, mental status examination, diagnosis, and management plan. (b) Logbook for detailed notes made on the case during the posting and documentation of the CBD (5 marks). Consultants provided feedback to the students on the final day of the posting regarding their performance and areas for improvement, based on standard comments noted by the examiner during the CBDs. Procedures The study was approved by the Institutional Ethics Committee. We employed levels 1 and 2 of Kirkpatrick's four-level evaluation model for the program evaluation. 17 This approach has been widely used to evaluate learner outcomes in training programs. It assesses four hierarchical "levels" of program outcomes: (a) learner satisfaction or reaction to the program; (b) measures of learning attributed to the program, such as, knowledge gained, skills improved, and attitudes changed; (c) changes in learner behavior in the context for which they are being trained; and (d) the program's final results in its larger context. In the present study, level 1 comprised the qualitative study and level 2 comprised the quantitative metrics. For the qualitative study, we used convenience sampling to recruit participants. The first author approached UG medical students reporting on the last day of their clinical posting in psychiatry to participate in the study. Informed consent was obtained. We obtained written feedback from 59 students using a semistructured questionnaire (Box 1). 
For quantitative metrics, we used the scores from the CBDs and standard comments provided by the examiners on the CBDs conducted in groups. Examiners were given a checklist of specific aspects to focus on during the CBDs. These were negative history for substance use and organicity, history of functional impairment, central nervous system (CNS) examination, and assessment of affect and mood. An examiner's comment, noting adequate or inadequate performance, on any of these aspects was considered a standard comment. The CBDs were attended by 64 students, divided into 22 groups, with each group containing two to four students. Data collection was done from September 2019 to October 2019, before the implementation of CBME in 2020. Statistical Analysis The data were anonymized by removing all identifiers and assigning an alphanumeric code to each participant. For the qualitative study, we employed Braun and Clarke's method of thematic analysis. 18 The data corpus, consisting of written responses to the semistructured questionnaire from 59 students, was analyzed manually in its entirety, forming the data set. The analysis aimed at specifically identifying facilitators and barriers to learning and describing the perceived impact of the posting for the students. The analysis identified semantic themes within the framework of a realist epistemology. All the authors had conducted lectures and clinical demonstrations for the students and interacted with them during their clinical posting in psychiatry. The first author coded the data. Then the first and second authors independently searched for themes, reviewed themes, and defined and named the themes. The first, second, and fourth authors jointly produced the report after discussion. All disagreements were resolved by reaching a consensus through discussion. For quantitative metrics, we used descriptive statistics, expressed as mean values of total scores and subscores of the CBDs assigned to the groups of students. A score of 1 mark was assigned for each of the following: history, physical examination, mental status examination, differential diagnosis, and management, leading to a total score of 5 marks. We also calculated the frequencies of standard comments indicating inadequate performance by the students, noted by the examiners during the CBDs. Qualitative Study We obtained written feedback from 59 students, that is, 40 (67.8%) females and 19 (32.2%) males. The themes and subthemes are shown in Figure 1. Data extracts of the subthemes are presented in Tables 1, 2, and 3. Quantitative Metrics The scores of the CBDs assigned to 22 groups of students were available for analysis. The mean scores were as follows: history: 0.80; physical examination: 0.55; mental status examination: 0.91; differential diagnosis: 0.84; management: 0.82; and total score: 3.86. The frequencies of standard comments made by the examiners, on a total of 22 groups, were as follows: did not do CNS examination: 15; did not assess affect/ mood adequately: 9; did not assess negative history or substance use/ organicity: 7; and did not assess functional impairment: 4. Qualitative Study Facilitators to Learning Most of the students perceived that patient interaction in the form of observing and interviewing patients in the ward and OPD stimulated interest, helped make connections to what was taught in theory classes and facilitated professional growth. They also expressed the need to see more cases during the posting. 
This finding receives support from the results of a prior cross-sectional survey of student feedback of a clinical posting in psychiatry. 15 Students also reported that observing a clinical encounter in the OPD between consultant and patient enhanced their learning. The feedback also implicated a benefit in increasing outpatient case discussions and visit times.
Figure 1. Results of the Qualitative Study (Themes and Subthemes).
Data extracts for the facilitator subthemes:
OPD observation and teaching. N32: I felt OPD (was useful). From OPD madam gave a small description about the patient before the patient comes in and after the patient leaves. Then she explains about the condition of the patient in detail. N46: OPD was useful-saw how the doctors elicited history from the patients. N59: OPD was useful in (a) way that we were exposed to more people, many experiences of different people.
Demonstration of signs. N12: The direct demonstration of the patient with a particular illness was pretty helpful. N47: Explaining cases bedside, if possible, to elicit signs and symptoms. N50: Bring patients to the class and demonstrate during the lectures taken.
Evening posting. N42: Convenient to talk to patients and interact with them, to get to know them better. N47: We got to learn more things from duty doctors and got to see more cases.
Case presentation and discussion. N57: It will be better if each student will be presenting case per day so at the end we will get a good idea about history taking and examination.
Observation of clinical work. N22: Please take the students on rounds. Learning by observation is extremely useful. N31: Like how Dr X's unit had a round table discussion with (the) patient, we would like that… N59: Taking some of us for rounds and talking to patients in front of us so that we'll know how to manage a patient who is feeling low or in anger.
Use of anecdotes while teaching. N33: The classes could be made more interactive, with stories and cases they (the teachers) have seen related to the disorder. N58: Explain using blackboard and class should be short and with (a) lot of incidences and experiences with psychiatric patients.
Lectures by senior faculty. N45: I would like to mention the lecture on alcohol use by Dr X (senior professor) as it was very good. N56: Class of Dr X (senior professor) was good and helpful.
OPD: Outpatient department.
Although we did not incorporate video-based training in our program, we suggest that comparing video-based teaching 21 to traditional bedside teaching could be an area for future research in India. Most students also perceived the evening posting as a good opportunity to interact with the on-call psychiatry resident and observe psychiatric emergencies. Such an initiative, along with other suggested teaching-learning methods, could provide clinical exposure in emergency psychiatry to medical students. 22 Students found the observation of consultant ward rounds to facilitate learning. Powell et al. have recommended using simulation-based ward round sessions in UG medical teaching to improve the confidence of junior doctors while leading ward rounds. 23 Other key aspects that students identified as conducive to learning were teaching sessions with senior faculty and the use of anecdotes of patients who were treated. Such use of narratives as a learning tool in medicine has been shown to promote humanistic aspects of medicine, including empathy. 24 Other facilitative aspects to learning were case presentation and discussion.
Barriers to Learning While most students perceived evening postings as facilitative, a few noted otherwise, expressing that they did not add any extra value in terms of learning. However, further explication revealed this to be because of organizational aspects such as the coordination of the evening posting. This can be overcome by a better organization of the evening posting, for example, making a roster for the students and entrusting the on-call PG with the responsibility to ensure that students are present and to facilitate learning. A minority of students also expressed disinterest in didactic teaching as part of the clinical posting and perceived it as not being useful. Zinski et al. showed that first-year medical students preferred lectures while second-year students preferred clinically oriented teaching methods, leading to the inference that further investigation is needed to identify the optimal mix of teaching-learning methods for medical education, taking into consideration the stage at which they are to be deployed. 25 Perceived Impact of the Posting The clinical posting in psychiatry changed the students' attitudes toward psychiatry as a subject, psychiatric illnesses, and psychiatrists. Specifically, their perception of psychiatry changed toward understanding it as a medical subject that is scientific and nuanced. Also, students now considered it to be as important as any other field of medicine, and some were even considering it as an option for postgraduation. A similar finding was reported in a qualitative study by Brown et al. 26 Likewise, positive changes in attitudes toward psychiatry as a subject were also reported by Tharyan et al., who explored the impact of their clinical teaching program in psychiatry on student knowledge, attitudes, and clinical skills in psychiatry. 11 Many students perceived that the posting helped dispel misconceptions of fear and the "horrible" experience of having to "deal" with patients with psychiatric illness. They now understood that psychiatric illnesses were common and were medical problems like any other illness that could be improved by medication, counseling, and support. These findings are in line with those of a previous study that reported positive changes to students' preconceived notions of psychiatric patients as a result of clinical exposure. 26 Students reported an increase in their knowledge of psychiatric illness-etiology, classification, and management-similar to findings in an earlier study. 15 Students also perceived that the posting helped them hone their skills as doctors in training, enhancing their self-confidence and professional growth. These included skills specific to psychiatry, such as establishing rapport with uncooperative patients and eliciting a history of behavioral problems, and more generic skills such as being patient and demonstrating empathy with their patients. These findings are similar to the perceived improvements in communication skills reported by students following a clinical posting in psychiatry. 15 Quantitative Metrics The results of the quantitative metrics are encouraging, as the mean total score on the CBD was 3.86 out of 5, reflecting an adequate performance of clinical skills at the end of the posting. Mean scores from 0.80 to 0.91 out of 1 on history, mental status examination, differential diagnosis, and management also reflect adequate performance in these aspects at a UG level. The mean score for physical examination was 0.55. 
Deficits in the clinical evaluation of patients noted by examiners were missed CNS examination, incomplete assessment of mood and affect, incomplete negative history in terms of substance use and organicity, and incomplete assessment of functional impairment.
Data extracts for the perceived-impact subthemes:
Skills acquired (psychiatry-specific). N43: From this posting I have gained the knowledge of how to elicit behavioral or psychiatric problems from the patient normally present in the OPD. N24: Learnt to develop rapport with an uncooperative patient.
Self-efficacy. N59: Seeing patients and talking to them helped me to grow as a doctor, to build the relationship with the patient.
These results indicate that although students reported that their perception of psychiatry changed following the posting to an understanding that it follows the medical model, changing their behavior in the clinical approach to patients with psychiatric illness, such as including a complete physical and neurological examination, may require further emphasis on these aspects during clinical demonstrations and CBDs. This is because a change in knowledge does not always translate to a change in behavior. 27 Deficits in the students' clinical skills can be minimized by adhering to detailed lesson plans and having checklists of learning objectives for each teaching session. Additionally, clinical assessment of mood and affect is a skill that is crucial to empower Indian Medical Graduates to identify and manage common mental disorders in primary care settings, and it must also be a key component of clinical teaching modules. These findings can also guide further inquiry on the development of clinical teaching-learning methods based on deficits in students' skills identified herein. Strengths and Limitations Our study utilized qualitative and quantitative research as part of a standard program evaluation model to understand the facilitators and barriers to UG medical students' learning, and the perceived impact and efficacy of a clinical teaching program in psychiatry, which is a strength. The medical students represented in our sample were from across the country, increasing the study findings' generalizability. However, as the students are from a private medical college, these findings might not necessarily apply to those from government colleges. The findings may also not be generalizable to colleges with fewer teachers or with a lack of infrastructure such as adequate space in the OPD to accommodate students for observation. As the students were known to us, their responses to the qualitative study may have been influenced by social desirability. 28 The quantitative metrics included scores assigned to groups of students as part of the CBDs, which may not accurately reflect individual performance. Additionally, the assessments for students' knowledge and attitudes were not done before the clinical posting, but only at the end of the posting, and therefore do not capture the change in clinical skills from before the posting. Conclusions We have described the perceived impact of the posting for students in the qualitative study and have demonstrated its impact using quantitative metrics to evaluate clinical skills. We have also identified unique facilitators and barriers to students' learning in psychiatry in the qualitative study. These learnings will inform the choice of teaching-learning methods in the context of the new CBME curriculum.
2021-12-05T16:05:18.628Z
2021-12-03T00:00:00.000
{ "year": 2021, "sha1": "6378ef9141bebfab501b7538488623a463b9cf86", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/02537176211056366", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "36df9d64233d8051a2836830f8c413d6c9b88d52", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
251371596
pes2o/s2orc
v3-fos-license
Thermal Ringdown of a Kerr Black Hole: Overtone Excitation, Fermi-Dirac Statistics and Holography We find a significant destructive interference among Kerr overtones in the early ringdown induced by an extreme mass-ratio merger of a massive black hole and a compact object, and that the ringdown spectrum apparently follows the Fermi-Dirac distribution. We numerically compute the spectral amplitude of gravitational waves induced by a particle plunging into a Kerr black hole and study the excitation of multiple quasi-normal (QN) modes. We find that the start time of ringdown is before the strain peak of the signal and corresponds to the time when the particle passes the photon sphere. When the black hole has the near-extremal rotation, the Kerr QN frequencies are close to the fermionic Matsubara frequencies with the Hawking temperature and a chemical potential of the superradiant frequency. We indeed find that the absolute square of the spectral amplitude apparently follows the Fermi-Dirac distribution with the chemical potential of around the real QN frequency of the fundamental mode. Fitting the Boltzmann distribution to the data in higher frequencies, the best-fit temperature is found out to be close to the Hawking temperature, especially for rapid rotations. In the near-extremal limit, the gravitational-wave spectrum exhibits a would-be Fermi degeneracy with the Fermi surface at the superradiant frequency ω = µ H . This opens a new possibility that we can test the holographic nature of Kerr black holes, such as the Kerr/CFT correspondence, by observationally searching for the Boltzmann distribution in frequencies higher than µ H without extracting overtones out of ringdown. I. INTRODUCTION A black hole is one of the simplest astrophysical objects in the Universe as it has only three hairs in general relativity, i.e., mass, angular momentum, and charge. The quasi-normal (QN) modes of a black hole are also characterized only by the three hairs by virtue of the no-hair theorem. A black hole ringing results in the emission of gravitational-wave (GW) ringdown, whose waveform is represented by a superposition of QN modes. Each QN mode is represented as a damped sinusoid and has a complex frequency, whose real and imaginary parts are the frequency and damping rate of the QN mode, respectively. The GW ringdown signal is emitted during the relaxation process of a ringing black hole. For instance, a binary black hole merger eventually leads to the emission of ringdown signal soon after the two progenitor black holes merge. To date, the detection of ringdown signals sourced by binary black hole merger events have been reported, e.g., in Refs. [1][2][3][4]. The poles of the retarded Green's function associated with the gravitational perturbations of a black hole are nothing but the QN modes of the black hole [5][6][7][8][9][10][11][12]. In the context of the holographic principle, it is conjectured that there would be a duality between the QN modes of a black hole and the poles of the retarded Green's function associated with the corresponding conformal field theory (CFT). As a supporting evidence, it was shown that a Bañados-Teitelboim-Zanelli black hole has its QN modes dual to the poles of the thermal Green's function in the corresponding CFT [13]. Also, even Kerr black holes that exist in our Universe may have the holographic nature according to the Kerr/CFT correspondence [14] (see also [15,16]). 
The near-horizon geometry of an extremal Kerr black hole, which has a quotient of AdS 3 space at a fixed polar angle, can be mapped to a dual two-dimensional CFT. Indeed, it was reported that the scattering of a Kerr black hole agrees with the thermal CFT correlators [17][18][19][20]. Also, the hidden conformal symmetry associated with the photon sphere was recently proposed in [21]. These pieces of supporting evidence for the holographic nature of black holes imply that the geometrical degrees of freedom in the vicinity of a black hole would correspond to a lower-dimensional CFT. As such, studying the perturbation of a Kerr black hole in the context of the holographic principle is highly motivated. Besides the motivation arising from the holography, the detection of excited multiple QN modes is important to test general relativity in the strong-gravity regime. Recently, the excitation of multiple QN modes was confirmed around the strain peak of the GW signal [22][23][24], at least based on the fit of QN modes to the numerical relativity waveforms [25]. The fit of the QN modes to the GW data of GW150914 was also performed and the detection of the first overtone was claimed [2] although it may be still controversial 1 as was discussed in Refs. [27,28]. If ringdown can start at around the strain-peak time, the multiple QN modes can be detected with a higher signal-to-noise ratio. As such, the excitation of multiple QN modes, including overtones, has been actively studied in various contexts (see, e.g., Refs. [29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45]). Still, the ringdown emission or significant excitation of overtones at earlier times can be controversial. Perhaps, one of the most crucial issues is the linearity problem of the black hole ringing. For instance, in the head-on collision of two non-spinning black holes, the two holes do not completely merge immediately, but the initial two marginally outer trapped surfaces (MOTSs) continue to exist and an outermost common MOTS forms that hides the inner (non-linear) region behind it [46,47]. Only the outside of the outermost MOTS may be relevant to what we observe at a distant region, provided that the MOTS has an analog in the event horizon. If this is the case even for a collision of two rotating black holes, we could observe the GW ringdown signal regardless of the non-linearity inside the MOTS as long as the geometry outside the MOTS can be described by the Kerr spacetime with linear perturbations. A similar idea was carefully probed and discussed based on the numerical relativity approach for a non-rotating system [36]. In this paper, we show observable supporting evidence for the holographic nature of a Kerr black hole. We argue that the excitation of multiple QN modes would follow the Fermi-Dirac statistics, at least for an extreme mass ratio merger. We numerically compute the spectral amplitude of a GW signal induced by an extreme mass ratio merger involving a massive black hole, with mass M and angular momentum J, and a plunging small-mass compact object with mass m p. Our computation methodology is based on Ref. [48], where the self force of the particle is ignored and the whole signal is computed in a linear manner, that is, the background spacetime is fixed. Therefore, any non-linear effects are not involved in our analysis. This picture is valid only for an extreme mass ratio M ≫ m p.
Fitting multiple QN modes of the Kerr black hole to the obtained spectral data in the frequency domain and performing the inverse Fourier transformation to obtain the best-fit waveform in the time domain, we find that a significant destructive interference among overtones occurs at the beginning of ringdown, followed by the strain peak and exponential damping. Such a strong destructive interference is possible only when multiple QN modes are excited simultaneously. We then carefully analyze the obtained spectral data and find that the absolute square of the spectral amplitude for frequencies higher than the real QN frequency of the fundamental mode can be modeled by the Boltzmann distribution. Identifying the superradiant frequency as a chemical potential, we arrive at the modeling of the ringdown spectrum with the Fermi-Dirac distribution. Fitting the Boltzmann factor to the spectrum, we find the best-fit temperature takes a value close to the Hawking temperature, especially for rapid rotations. We will show that the higher a black hole rotation is, the more overtones are excited. These findings imply that the excitation of multiple overtones can be modeled by the Fermi-Dirac distribution. Our finding is consistent with the analogy between the QN modes for a rapidly rotating black hole and the poles of the Fermi-Dirac distribution, known as the Matsubara modes. Also, the analogy between the ringdown spectrum and the Fermi-Dirac distribution opens a novel possibility such as a modeling of ringdown waveforms with the Fermi-Dirac distribution or with the superposition of the Matsubara modes. Also, searching for the Boltzmann distribution in an observed GW spectrum, including a ringdown signal, from a rapidly rotating black hole 2 may put the thermal or holographic picture of Kerr black holes to the test. The paper is organized as follows. In the next section, we briefly review the procedure to compute a GW signal induced by a particle plunging into a Kerr black hole. In Sec. III, by fitting QN modes to numerically obtained data, we verify the excitation of overtones and search for the best-fit start time of ringdown. We then show that a significant destructive interference occurs at the beginning of ringdown, which is possible only when multiple overtones are excited simultaneously. In Sec. IV, we study the thermal nature of the ringdown spectra we obtained. We obtain the best-fit temperature associated with the Boltzmann factor and find that it takes a value close to the Hawking temperature. We will also show that in the near-extremal situation, for which the Hawking temperature is almost zero, the GW spectrum exhibits a would-be Fermi degeneracy with the Fermi surface at the superradiant frequency. We also investigate the modeling of ringdown with the fermionic Matsubara modes. Our conclusions and discussions are provided in Sec. V. II. GRAVITATIONAL WAVE EMISSION INDUCED BY A FALLING PARTICLE In this section, we compute waveforms of GW induced by a particle with mass m p plunging into a Kerr black hole, whose mass and angular momentum are denoted by M and J, respectively. For an extreme mass ratio, M ≫ m p, the particle does not non-linearly disturb the background spacetime, and one can compute the GW signal in the linear approximation. In this case, a signal is obtained by solving the Teukolsky equation [50]. In this manuscript, we solve the Sasaki-Nakamura (SN) equation [51], which has a short-range angular momentum barrier and is, therefore, easier to solve numerically.
The source term determined by the dynamics of the falling particle is given in Ref. [48]. The line element on the Kerr spacetime takes the standard Boyer-Lindquist form, where a ≡ J/M and the functions Σ and ∆ are defined as Σ ≡ r² + a² cos²θ and ∆ ≡ r² − 2Mr + a². The SN equation for the SN perturbation variable, X_lm = X_lm(ω, r_*), is a second-order wave equation in the tortoise coordinate r_*, defined by dr_*/dr = (r² + a²)/∆, with a source term T̃_lm determined by the particle; the explicit forms of F_lm, U_lm, γ_lm, and W_lm are shown in Appendix A. The label (l, m) specifies an angular mode of the spin-weight −2 spheroidal harmonics S_lm(aω, θ) e^{imϕ}. The function W = W(τ(r)) is determined by the trajectory of a plunging particle, whose proper time is denoted by τ. The trajectory on the equatorial plane, (t(τ), r(τ), ϕ(τ), π/2), is obtained by solving the geodesic equations given in Refs. [48,52,53], where L_p is the orbital angular momentum of the particle. These geodesic equations correspond to the case of E = m_p, where E is the energy of the particle, and of a vanishing Carter constant (see, e.g., Ref. [54]). The SN equation can be solved with purely outgoing boundary conditions at infinity and purely ingoing ones at the horizon, where λ_lm is the eigenvalue determined by the regularity of the spheroidal harmonics, with k_± ≡ |m ∓ 2|/2. One can obtain X^(out)_lm with the Green's function technique, where X^(hom)_lm is a homogeneous solution to the SN equation satisfying the appropriate boundary condition. The coefficient B_lm(ω) is important since it has zeros at complex frequencies, ω = ω_lmn, which are the poles of the Green's function. They are also known as the QN modes of a Kerr black hole. Note that in this formalism, the backreaction on the falling particle is ignored. This approximation does not work at all when its orbital angular momentum L_p takes either the critical value of −2M(1 + √(1 + j)) or 2M(1 + √(1 − j)), for which the particle takes infinite time to fall into the black hole [48]. Here we define the non-dimensional spin parameter j ≡ a/M. Therefore, the value of L_p is limited to −2M(1 + √(1 + j)) < L_p < 2M(1 + √(1 − j)). We numerically solve the geodesic equations (8-10) and the homogeneous SN equation with the 4th-order Runge-Kutta method. The detailed discussion on the resolution of our numerical computation is provided in Appendix B. The spectrum and time-domain waveform for (l, m) = (2, 2) are shown in Figure 1 along with the trajectory of the particle. Throughout the manuscript, the mass parameter is set to M = 1/2 and the amplitude is normalized by a factor of m_p/r_obs, where r_obs (≫ M) is the radial position of an observer detecting the signal on the equatorial plane. One can see that the peak in the frequency domain appears around the real part of the QN frequencies. In Figure 2, one can see that the GW signals of higher angular modes induced by the falling particle have smaller amplitudes compared to that of (l, m) = (2, 2). We fix the orbital angular momentum of the particle to L_p = 1 throughout the manuscript, but the change of L_p does not affect the qualitative result presented in the following (see Appendix C). In the following, we will often omit the subscript (l, m) for the sake of brevity. III. DESTRUCTIVE INTERFERENCE OF EXCITED QUASI-NORMAL MODES We fit QN modes to the obtained GW signal in the frequency domain to extract the start time of ringdown without truncating the original waveform data, which include the orbital and ringdown signals. We then show that the start time of ringdown is before the strain-peak time, where a destructive interference among multiple overtones occurs.
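To make the trajectory integration mentioned above concrete, the following is a minimal numerical sketch (ours, not the paper's code): it integrates the textbook equatorial Kerr geodesic equations in Boyer-Lindquist coordinates for a marginally bound particle (E = m_p, vanishing Carter constant) with a fixed-step 4th-order Runge-Kutta scheme, in units G = c = 1. The parameter values are illustrative, and these standard equations are used in place of the paper's Eqs. (8-10), whose explicit form is not reproduced here.

    import numpy as np

    M, a, Lp, E = 0.5, 0.45, 1.0, 1.0   # mass, spin a = jM, particle angular momentum, energy per unit mass

    def rhs(y):
        """d(t, r, phi)/dtau for an equatorial Kerr geodesic (Boyer-Lindquist, G = c = 1)."""
        t, r, phi = y
        Delta = r * r - 2.0 * M * r + a * a
        dt = ((r * r + a * a + 2.0 * M * a * a / r) * E - 2.0 * M * a * Lp / r) / Delta
        dphi = (2.0 * M * a * E / r + (1.0 - 2.0 * M / r) * Lp) / Delta
        R = (E * E - 1.0) + 2.0 * M / r - (Lp * Lp - a * a * (E * E - 1.0)) / r**2 \
            + 2.0 * M * (Lp - a * E) ** 2 / r**3
        dr = -np.sqrt(max(R, 0.0))      # inward (plunging) branch of the radial equation
        return np.array([dt, dr, dphi])

    def rk4_step(y, h):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

    r_plus = M + np.sqrt(M * M - a * a)                 # outer horizon radius
    y, h, trajectory = np.array([0.0, 20.0 * M, 0.0]), 1.0e-3, []
    while y[1] > 1.001 * r_plus:                        # stop just outside the horizon
        trajectory.append(y.copy())
        y = rk4_step(y, h)

The resulting trajectory then feeds the source term of the SN equation in the same way as described above.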
Fitting higher overtones to a ringdown beginning before the strain peak in the time domain may be difficult as each higher overtone is exponentially enhanced at earlier times, which makes the fitting in the time domain unstable. The fitting analysis in the frequency domain has the advantage of fitting QN modes in a stable way even for a GW signal whose ringdown starts before the strain peak. Let us begin with the modeling of GW ringdown waveforms in the frequency domain. In the GW spectrum, h̃(ω), computed in the previous section, we have two main signals, the orbital and ringdown signals, and it is obvious that the orbital signal is emitted first and the ringdown emission follows. The start time of ringdown, t_*, is encoded in the spectral amplitude, h̃(ω), through the factor of e^{iωt_*}. To demonstrate this, let us first introduce the ringdown-waveform model in the time domain, a superposition of QN modes multiplied by a step function, where C_lmn is the excitation coefficient and t_* is the start time of ringdown. The step function in the model, θ(t − t_*), is introduced by assuming that ringdown starts instantaneously at t = t_* [55]. In this manuscript, we consider the fitting of the GW spectrum with a fitting function [55], h̃_QNM, which is the Fourier transform of (22). The phase factor of e^{iωt_*} in (23) originates from the step function in (22) and leads to the oscillation pattern in the spectral amplitude. We then perform the fitting analysis in the frequency domain by using unweighted least squares to determine C_lmn. The mismatch in the frequency domain, M, is then obtained from the overlap between h̃ and h̃_QNM. To see how well the fit in the frequency domain works, another mismatch, M̃, is also computed by performing the inverse Fourier transform of h̃ and h̃_QNM. Figure 3 shows the mismatches, (a) M and (b) M̃, with respect to t_* and for several harmonics. The best-fit values of t_* obtained for each angular mode 3 should be consistent with each other as t_* is independent of angular modes. One can indeed read that M gives the best-fit value of t_* in the range of 34 ≲ t_* ≲ 36 for all angular modes we compute (Figure 3-(a)). Also, the range of the best-fit values is well consistent with the result of M̃ (Figure 3-(b)). Figure 3-(c) implies that the ringdown emission starts when the particle passes the photon sphere, whose equatorial prograde radius is r_ph = 2M{1 + cos[(2/3) arccos(−a/M)]}. This is consistent with the GW ringdown emission being governed by the photon sphere. On the other hand, the strain peak of the GW signal is emitted when the particle reaches the vicinity of the outer horizon. To see the convergence of the QN-mode fitting with respect to the number of overtones included in the modeling, we compute the mismatch M with respect to n_max and the result is shown in Figure 4. One can read that for higher harmonics, M converges at higher values of n_max, but at less than n_max = 10. This implies that the higher an angular mode is, the more overtones are excited 4 . This could be verified analytically by computing the excitation factors [56][57][58] for higher harmonics and up to higher overtones, as the excitation factor quantifies the ease of excitation of each QN mode 5 independently of the source of a GW ringdown; we will come back to this in our future work. Figure 5 shows the time-domain waveforms, h and h_QNM, obtained in the frequency-domain fit with t_* = 30, 36, and 45. One can see that the model waveform for t_* = 30 has noise at earlier times and that for t_* = 45 has a spiky noise at t = t_*. This noise is also visible in the frequency domain, as shown in Figure 6.
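As a concrete illustration of the frequency-domain fit just described, here is a minimal sketch (ours, not the paper's code). The file names, the frequency grid, and the mismatch definition are placeholders: the spectrum and the complex Kerr QN frequencies are assumed to be available from elsewhere, and the mismatch is a generic normalized overlap that may differ from the paper's Eq. (24).

    import numpy as np

    omega = np.linspace(0.1, 2.5, 2000)         # assumed frequency grid (units 2M = 1)
    h_tilde = np.load("spectrum_l2m2.npy")      # assumed complex spectrum of one (l, m) mode
    omega_qnm = np.load("qnm_l2m2.npy")         # assumed complex QN frequencies, n = 0..n_max

    def fit_qnm_frequency_domain(omega, h_tilde, omega_qnm, t_start):
        """Unweighted linear least-squares fit of Fourier-transformed QN modes."""
        # Each column is the Fourier transform of a step-function damped sinusoid,
        # proportional to exp(i*omega*t_start) / (omega - omega_lmn).
        basis = np.exp(1j * omega[:, None] * t_start) / (omega[:, None] - omega_qnm[None, :])
        coeffs, *_ = np.linalg.lstsq(basis, h_tilde, rcond=None)
        model = basis @ coeffs
        overlap = np.abs(np.vdot(model, h_tilde))
        norm = np.sqrt(np.vdot(model, model).real * np.vdot(h_tilde, h_tilde).real)
        return coeffs, 1.0 - overlap / norm      # coefficients and a mismatch-like residual

    # Scan candidate ringdown start times and keep the one minimizing the mismatch.
    t_candidates = np.arange(25.0, 50.0, 0.5)
    mismatches = [fit_qnm_frequency_domain(omega, h_tilde, omega_qnm, t)[1] for t in t_candidates]
    t_best = t_candidates[int(np.argmin(mismatches))]

Because the basis functions vary smoothly with ω, adding higher overtones does not amplify anything exponentially, which is the stability advantage of the frequency-domain fit discussed in the text.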
For t* = 30 or 45, the model function h̃_QNM,lm (red dashed) has noise spreading over a wide range of the frequency domain. On the other hand, our analysis indeed works for the preferred value of t* = 36. Also, one can see that the frequency-domain analysis is very efficient for higher harmonics (see the bottom panels in Figure 6). This is because the orbital signal of l = m > 2 emitted by a particle falling into the black hole is less significant compared to the ringdown signal. Therefore, the contamination from the orbital signal in h̃(ω) is smaller for higher harmonics. The fit of QN modes in the frequency domain has the advantage of searching for the start time of ringdown, especially when it starts before the strain peak. The beginning of ringdown before the peak would involve a destructive interference among overtones, whose amplitudes at earlier times are exponentially amplified. As such, the fitting analysis in the time domain can be unstable against adding higher overtones, since it requires controlling exponentially amplified overtones at earlier times (see Figure 7). On the other hand, in the frequency-domain analysis, we fit the Fourier-transformed QN modes, ∼ e^{iωt*}/(ω − ω_lmn), which no longer have the exponential behaviour and do not lead to such an instability, as shown in Figure 7. In summary, we confirm, based on the frequency-domain analysis, that the ringdown induced by a particle plunging into the rotating black hole starts before the strain peak. This is caused by the destructive interference among the fundamental mode and overtones at earlier times. A significant destructive interference is possible when the superposed modes have frequencies close to one another, like the QN frequencies of a Kerr black hole. This would be supporting evidence that multiple overtones are simultaneously excited at earlier times. In the next section, we analyze the spectral data we numerically obtained and show that the excitation of multiple QN modes apparently follows the Fermi-Dirac distribution. We then propose a conjecture that the degrees of freedom of the free oscillation of the Kerr black hole correspond to those of a many-body thermal Fermi-Dirac system. This would be relevant to the Kerr/CFT correspondence, in which the agreement between the near-extremal QN modes and the Matsubara modes of the retarded Green's function in the corresponding CFT was demonstrated [14,17,20]. IV. THERMAL EXCITATION OF OVERTONES AND THE FERMI-DIRAC STATISTICS In this section, we propose that the absolute square of the spectral amplitude, |h̃_{+/×}(ω)|^2, can be modeled by the Fermi-Dirac distribution, |h̃(ω)|^2 ∝ 1/(e^{(ω−µ)/T} + 1), where µ is a chemical potential and T is a temperature. We especially find that for near-extremal rotations, the spectrum of the GW signal we numerically computed exhibits the Fermi-Dirac distribution with a temperature around the Hawking temperature T_H and a chemical potential around the superradiant frequency µ_H. We also find that for medium spins (j ≲ 0.9) and for lower angular harmonics, the temperature and chemical potential take values close to another pair, (µ_0, T_0). Based on this supporting evidence, shown below in more detail, we conjecture that the degrees of freedom of a black hole ringing might follow the Fermi-Dirac statistics. Indeed, the QN frequencies of a Kerr black hole are very similar to the Matsubara frequencies of the Fermi-Dirac distribution (28). From the absolute square of the spectral amplitude shown in Figure 8, the Boltzmann distribution is observed in the range of ω ≥ µ_0 rather than in ω ≥ µ_H, although µ_0 → µ_H in the extremal limit [59].
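For concreteness, the sketch below evaluates the proposed Fermi-Dirac model of the squared spectral amplitude using the standard Kerr expressions for the horizon angular frequency and the Hawking temperature, so that µ_H = mΩ_H. The normalization and the frequency grid are illustrative, and the paper's own pair (µ_0, T_0) is not reproduced here.

```python
import numpy as np

def kerr_horizon_quantities(j, M=0.5):
    """Outer horizon radius, horizon angular frequency and Hawking temperature
    of a Kerr black hole with dimensionless spin j = a/M (geometrized units)."""
    a = j * M
    r_plus = M + np.sqrt(M**2 - a**2)
    omega_h = a / (2.0 * M * r_plus)                     # horizon frequency
    t_hawking = (r_plus - M) / (2.0 * np.pi * (r_plus**2 + a**2))
    return r_plus, omega_h, t_hawking

def fermi_dirac_model(omega, mu, temperature, amplitude=1.0):
    """Fermi-Dirac model for the squared spectral amplitude |h(omega)|^2."""
    return amplitude / (np.exp((omega - mu) / temperature) + 1.0)

if __name__ == "__main__":
    j, m_azimuthal = 0.99, 2
    _, omega_h, t_h = kerr_horizon_quantities(j)
    mu_h = m_azimuthal * omega_h                         # superradiant frequency
    print(f"j = {j}: Omega_H = {omega_h:.4f}, T_H = {t_h:.4f}, mu_H = {mu_h:.4f}")

    omega = np.linspace(0.5 * mu_h, 2.0 * mu_h, 5)
    print(fermi_dirac_model(omega, mu_h, t_h))
```

Below the chemical potential the model saturates (the "would-be Fermi degeneracy" of the text), while above it the occupation falls off as a Boltzmann factor, which is the regime used for the fits in the next paragraphs.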
In this limit, the thermal excitation is suppressed, and a would-be Fermi degeneracy can be seen at frequencies lower than the superradiant frequency, ω ≤ µ_H (see the right panel in Figure 8). We fit the Boltzmann distribution, Ae^{−(ω−µ)/T}, to the GW spectral data, where µ is fixed to µ = µ_0, and A and T are fitting parameters. Also, we use the GW data of ω ≥ 1.1 × Re(ω_lm0) in the fitting analysis for intermediate rotations (j = 0.8 and 0.9). For a near-extremal situation (j = 0.99), we use the data of ω ≥ 1.0 × Re(ω_lm0). After the fitting analysis, we evaluate the relative errors of the best-fit T with respect to T_H and T_0. As a result (see Table I), we find that the best-fit value of T is very close to T_H for the near-extremal case (j = 0.99). This is consistent with the fact that the QN modes for a near-extremal case have a separation of 2πT_H. On the other hand, the best-fit value of T for medium spins and lower harmonics is closer to T_0 than T_H. For higher harmonics, it would be interesting to check the consistency with the Lyapunov exponent associated with the instability of null geodesics at the photon sphere, which is not our focus here and is left for future work. Our conjecture is supported by the analogy between the QN modes of a near-extremal black hole and the Matsubara modes of the Fermi-Dirac distribution. Remember that the Fermi-Dirac distribution with a chemical potential, µ, and temperature, T, has Matsubara frequencies whose imaginary parts are spaced by 2πT, and indeed the QN frequencies of a Kerr black hole can be approximated by the Matsubara frequencies (32), especially for a near-extremal case, as is shown in Figure 9. In the near-extremal limit, the imaginary part of the QN modes indeed approaches a half-integer times 2πT_H [59,60]. Therefore, the apparently thermal excitation observed in the GW spectra may be understood as the excitation of multiple overtones. In the context of the Kerr/CFT correspondence, the QN modes of a near-extremal black hole can be derived from the dual CFT [14,17,20]. The CFT absorption cross section σ is expressed in terms of the operator dimensions (h_L, h_R), the charges (q_L, q_R), the temperatures (T_L, T_R), and the chemical potentials (Ω_L, Ω_R) of the two-dimensional CFT. The identification [14,20] leads to the poles of σ, where the spin s = h_R − h_L takes |s| = 1 or 2 for the gauge field or the graviton, respectively [20]. Note that the left sector is associated with the azimuthal rotation and is not relevant to the QN modes. The poles (36) are consistent with the Kerr QN frequencies in the extremal limit [59]. In this section, we do not extract the overtones from the GW signal, but nevertheless one can see the thermal excitation of overtones in the spectral amplitude in this way, i.e., by fitting the Boltzmann distribution to a GW spectrum at higher frequencies. It is a novel observable footprint of the thermal or holographic nature of a rotating black hole. To see how the ringdown modeling with the Matsubara modes works, let us fit the Matsubara modes to the GW data in the frequency domain as was done in the previous section. It is totally non-trivial whether the replacement of the QN-mode basis by the Matsubara-mode one works, as a set of damped sinusoids does not form a complete basis in general. Nevertheless, we find that the mismatch, M, evaluated with the Matsubara-mode modeling with (µ, T) = (µ_0, T_0) agrees with the mismatch obtained by the QN-mode modeling for each value of n_max (Figure 10).
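The Boltzmann-tail fit described above amounts to a log-linear regression of the squared spectral amplitude against frequency. The sketch below shows this on a synthetic spectrum; the cutoff 1.1 × Re(ω_lm0), the placeholder values of µ_0 and ω_lm0, and the reference temperature are assumptions for illustration, not the paper's data.

```python
import numpy as np

def fit_boltzmann_tail(omega, power, omega_min, mu):
    """Fit A*exp(-(omega - mu)/T) to |h(omega)|^2 for omega >= omega_min.

    In log space the model is log(power) = log(A) + (mu - omega)/T,
    so the slope of log(power) versus omega equals -1/T.
    """
    sel = omega >= omega_min
    slope, intercept = np.polyfit(omega[sel], np.log(power[sel]), 1)
    temperature = -1.0 / slope
    amplitude = np.exp(intercept - mu / temperature)
    return amplitude, temperature

if __name__ == "__main__":
    # synthetic spectrum with a thermal-looking tail (placeholder values)
    rng = np.random.default_rng(0)
    mu_0, t_true, re_w0 = 0.35, 0.02, 0.47
    omega = np.linspace(0.3, 1.2, 400)
    power = 1e-3 * np.exp(-(omega - mu_0) / t_true) * rng.lognormal(0.0, 0.05, omega.size)

    A, T = fit_boltzmann_tail(omega, power, omega_min=1.1 * re_w0, mu=mu_0)
    t_ref = 0.02   # reference temperature (e.g. a Hawking temperature) to compare against
    print(f"best-fit T = {T:.4f}, relative error to reference = {abs(T - t_ref) / t_ref:.2%}")
```

The relative error printed at the end plays the role of the tabulated errors with respect to T_H and T_0 in Table I of the text.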
It implies that the Matsubara modes in (32) can be a proper basis to model the GW ringdown, including the excitation of overtones, although the Matsubara frequencies agree with the QN frequencies at lower tones only (n ≲ 2). On the other hand, the Matsubara-mode modeling with (µ, T) = (µ_H, T_H) works in the near-extremal limit only. One can see that the mismatch reaches 0.01 at around n_max ∼ 11, 14, and 19 for j = 0.8, 0.9, and 0.99, respectively. It implies that the more rapidly the ringing black hole rotates, the more overtones are excited. Also, we confirm from our fitting analysis that the isolated mode of j = 0.99 (see the right panel in Figure 9) does not contribute to the fit, which is consistent with the author's previous work on the excitation factors [58], where the author confirmed that the absolute value of the excitation factor of the isolated mode is strongly suppressed. As such, we exclude the isolated mode from the fitting analysis shown in Figure 10. V. DISCUSSIONS AND CONCLUSIONS In this paper, we have studied the ringdown GW emission out of a Kerr black hole induced by a particle plunging into the hole. We have confirmed (i) the destructive interference of overtones before the strain peak and (ii) the thermal excitation of overtones that apparently follows the Fermi-Dirac statistics. The thermal nature of a black hole ringing can be tested by searching for the Boltzmann distribution in a GW spectrum at higher frequencies, ω ≳ Re(ω_lm0). Fitting multiple QN modes to GW waveforms in the frequency domain, we found that the ringdown starts before the strain peak and that significant destructive interference occurs, as the real frequencies of overtones are close to that of the fundamental mode. Remember that the beating phenomenon, involving a destructive interference, is caused by the superposition of multiple sinusoids whose frequencies are close to each other. The fitting in the frequency domain works in a stable manner, as the exponential amplification of the fitting mode functions is absent in the frequency domain. Also, one can perform the fitting analysis without truncating the GW data beforehand, and the start time of ringdown, t*, can be handled as one of the fitting parameters. We have also found the Boltzmann distribution at higher frequencies in the GW spectrum sourced by an extreme mass-ratio merger. We have shown that it can be modeled by the Fermi-Dirac distribution with a chemical potential around the superradiant frequency and a temperature around the Hawking temperature of the Kerr black hole. The would-be Fermi degeneracy was also observed in the near-extremal case. The suggestive relation between the black hole ringdown and the Fermi-Dirac distribution is consistent with the fact that the QN frequencies of a Kerr black hole are very similar to the fermionic Matsubara frequencies, especially in the near-extremal situation. We have fit the Matsubara modes to the GW spectrum and have shown that this works as well as the fit with the Kerr QN modes. We conclude that these results are supporting evidence of the thermal excitation of overtones. It also implies that ringdown GW signals would include a footprint of the holographic nature of Kerr black holes, i.e., the correspondence between a many-body Fermi system and the free oscillation of a Kerr black hole. A well-known proposal relevant to this is the Kerr/CFT correspondence [14]. As supporting evidence for the Kerr/CFT correspondence, the cross section of a near-extremal black hole agrees with the dual CFT prediction.
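Swapping the QN-mode basis for a Matsubara-like one only changes the list of complex frequencies handed to the frequency-domain fit sketched earlier. The snippet below builds such a ladder assuming the standard fermionic form µ − 2πiT(n + 1/2), which reproduces the equal 2πT spacing of the imaginary parts mentioned above; the exact convention of the paper's Eq. (32), and the placeholder values of (µ_0, T_0), may differ.

```python
import numpy as np

def matsubara_ladder(mu, temperature, n_max):
    """Complex Matsubara-like frequencies mu - 2*pi*i*T*(n + 1/2), n = 0..n_max.

    This is the standard fermionic form with a real offset mu; the precise
    convention in the source (its Eq. (32)) may differ by a constant shift.
    """
    n = np.arange(n_max + 1)
    return mu - 2.0j * np.pi * temperature * (n + 0.5)

if __name__ == "__main__":
    mu_0, t_0 = 0.35, 0.02          # placeholder (mu_0, T_0) values
    freqs = matsubara_ladder(mu_0, t_0, n_max=5)
    for n, w in enumerate(freqs):
        print(f"n = {n}: omega = {w.real:.3f} {w.imag:+.3f}i")
    # These frequencies can be passed to fit_qnm_spectrum() from the earlier
    # sketch in place of the Kerr QN frequencies to repeat the mismatch test.
```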
The Matsubara frequencies of the retarded Green's function of the dual CFT also agree with the Kerr QN modes [17,20], where (l, m) is the label of an angular mode, n is an overtone number, Ω_H is the horizon frequency, and T_H is the Hawking temperature. Here we consider the perturbation of the gravitational field, which is a spin-2 field and follows Bose-Einstein statistics, but we found that the statistics of a black hole ringing apparently follows the Fermi-Dirac distribution. It implies that there might exist a supersymmetric relation between the degrees of freedom of the gravitational field and those of a black hole ringing. The thermal or holographic nature of rotating black holes can be tested without extracting overtones from GW data, and searching for the Boltzmann distribution in the spectrum of ringdown may put it to the test. The existence of near-extremal black holes has been reported in, e.g., Ref. [49] (for a theoretical modeling, see [61]), and the ringdown signals emitted from those holes would be observable with next-generation GW detectors, e.g., the Laser Interferometer Space Antenna (LISA). As possible future work, it will be important to include the self-force of the plunging particle in order to go beyond the extreme mass-ratio limit. Also, it is important to study how sensitive the Fermi-Dirac statistics of GW ringdown is with respect to the orbital angular momentum or the initial kinetic energy of the plunging particle. It is also interesting to study ringdown emission with other spin fields, e.g., scalar, vector, or Fermi fields, to see if they exhibit the would-be Fermi degeneracy in the spectra of ringdown. The detectability of the excitation of overtones or of the Boltzmann distribution with LISA will be studied elsewhere. Appendix B: Resolution of the numerical computation In this Appendix, we describe the details of our numerical computation solving the Sasaki-Nakamura equation in the frequency domain. To reduce the computation time while keeping the accuracy high, we divide the frequency space into N segments: ω_{n−1} ≤ ω ≤ ω_n with n = 1, 2, ..., N, and we take a larger (smaller) spatial step size in the lower-frequency (higher-frequency) segments. Our computation is performed in each segment with the frequency step size ∆ω_n = (ω_n − ω_{n−1})/60 and with the spatial step size ∆r*_n. The computation of the source term T̃_lm involves multiple integrals along the particle trajectory. We compute it by the quadrature method with the step size ∆r*_W. Depending on the spin parameter and angular mode we consider, the resolutions and the frequency segments are optimized. As an example, let us consider the case of j = 0.8 and (l, m) = (2, 2), where we take seven segments (N = 7): ω_0 = 1/200, ω_1 = 1/10, ω_2 = 1/2, ω_3 = 3/4, ω_4 = 1, ω_5 = 5/4, ω_6 = 3/2, ω_7 = 2, and we take segment-dependent spatial step sizes ∆r*_n together with ∆r*_W = 1/4. To perform the resolution test, we perform the numerical computation with the above resolution and with two additional resolutions: ∆r*_n → 2 × ∆r*_n, ∆r*_W = 1/2 (low resolution), and ∆r*_n → (1/2) × ∆r*_n, ∆r*_W = 1/8 (high resolution). As is shown in Figure 11, the optimized (medium) resolution is high enough to compute the spectral peak at around ω ∼ Re(ω_lm0) and to identify the Boltzmann distribution in the GW spectrum.
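The segmented frequency grid of Appendix B can be encoded in a few lines. The sketch below uses the segment boundaries quoted for j = 0.8 and (l, m) = (2, 2) together with the rule ∆ω_n = (ω_n − ω_{n−1})/60; the per-segment radial step sizes are illustrative placeholders rather than the optimized values used in the paper.

```python
import numpy as np

# segment boundaries quoted in Appendix B for j = 0.8, (l, m) = (2, 2)
SEGMENT_EDGES = [1/200, 1/10, 1/2, 3/4, 1.0, 5/4, 3/2, 2.0]

def frequency_grid(edges, points_per_segment=60):
    """Non-uniform frequency grid: each segment [w_lo, w_hi] is sampled with
    step (w_hi - w_lo) / points_per_segment, matching Delta-omega_n = (w_n - w_{n-1})/60."""
    grids = []
    for w_lo, w_hi in zip(edges[:-1], edges[1:]):
        grids.append(np.linspace(w_lo, w_hi, points_per_segment, endpoint=False))
    grids.append(np.array([edges[-1]]))        # include the final edge once
    return np.concatenate(grids)

def spatial_steps(edges, base_step=1.0, refinement=2.0):
    """Placeholder rule: halve the radial step in each successively higher
    frequency segment (the paper's optimized per-segment values are not reproduced here)."""
    return [base_step / refinement**n for n in range(len(edges) - 1)]

if __name__ == "__main__":
    omega = frequency_grid(SEGMENT_EDGES)
    print(f"{omega.size} frequency samples from {omega[0]:.4f} to {omega[-1]:.4f}")
    print("illustrative spatial steps per segment:", spatial_steps(SEGMENT_EDGES))
```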
Appendix C: Insensitivity of the thermal excitation of overtones to the orbit of the plunging particle In this Appendix, we show the insensitivity of the qualitative results presented in this paper to the orbital angular momentum of the plunging particle, L_p. We use L_p = 1 throughout the main text. Here we compute the GW spectra for j = 0.99 and (l, m) = (2, 2) with L_p = 0.5 and 0.75. We then confirm that our results are insensitive to the value of L_p. We check that the fit of the Boltzmann distribution leads to a best-fit temperature T that is close to the Hawking temperature T_H even for L_p = 0.5 and 0.75 (Figure 12). The relative error with respect to T_H is around 1% for both cases. This implies that our main result, the thermal excitation of overtones, is insensitive to the details of the trajectory of the plunging particle. In this manuscript, we still fix the energy of the particle as E = m_p, i.e., the particle is at rest at infinity, and restrict the trajectory to the equatorial plane (θ = π/2). Relaxing these conditions makes the computation of the source term, T̃_lm, more complicated. A more detailed study on the thermality of ringdown for a more general plunging orbit is left for future work. FIG. 12: Absolute squares of the GW spectra for j = 0.99, l = m = 2. The orbital angular momentum is set to L_p = 1, 0.75, and 0.5. The Boltzmann factor is fitted to the data of ω ≥ 1.0 × Re(ω_220). The Hawking temperature for j = 0.99 is T_H ≃ 0.197. The relative error is ∆_H ∼ 1% for L_p = 0.75 and 0.5.
2022-08-08T18:24:27.525Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "2a490deac56034f121bfc9d43754878e8bdc2e2f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2a490deac56034f121bfc9d43754878e8bdc2e2f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
56503565
pes2o/s2orc
v3-fos-license
Vitamin D and Chronic Obstructive Pulmonary Disease Vitamin D is an important regulator of calcium and bone homeostasis. It is also involved in the regulation of different genes and cellular functions, particularly in the context of inflammation, regeneration and immune control. Conversely, vitamin D deficiency, which is often found in chronic, infectious and inflammatory diseases, is thought to drive or enhance uncontrolled inflammation. Chronic obstructive pulmonary disease (COPD) is characterized by chronic inflammation of the airways, most often because of cigarette smoking. It has been recognized that repetitive airway infections and systemic consequences or co-morbidities also contribute to the progressive nature of COPD. Vitamin D deficiency is known to sneak in from the early stages of COPD, to become highly prevalent at the more severe stages, and may thereby catalyse airway infection, inflammation and systemic consequences. Undoubtedly, vitamin D deficiency enhances bone resorption and osteoporosis in COPD, for which appropriate vitamin D supplementation is recommended. However, conflicting evidence has emerged on the extra-calcemic effects of vitamin D in COPD. A recent intervention trial with high-dose supplementation in COPD was only able to reduce exacerbation frequency in the subgroup of patients with the lowest baseline vitamin D levels. It confirms that severe vitamin D deficiency is a health hazard but that more clinical and experimental studies are needed to explore how vitamin D deficiency may affect airway biology and systemic effects in the context of smoke-induced lung diseases. Introduction Over the last years, there has been increasing interest in the role of vitamin D and vitamin D deficiency in various chronic diseases. Besides the well-known effect of vitamin D deficiency on bone loss in adults, accumulating evidence also links a low vitamin D nutritional status to highly prevalent chronic illnesses, including cancers, autoimmune diseases, infectious and cardiovascular diseases [1][2][3]. Vitamin D is now known to have an important influence on immunoregulation and on the expression of antimicrobial peptides. Since patients with chronic obstructive pulmonary disease (COPD) often combine several of these co-morbid diseases, and given that most COPD exacerbations are triggered by bacterial and viral infection, vitamin D could play a key role in the pathogenesis of this disease. This chapter aims to discuss the prevalence and determinants of vitamin D deficiency in COPD, the well-known effect of vitamin D in the development and treatment of COPD-associated osteoporosis, and its potential role in the uncontrolled inflammatory cascade and systemic consequences of the disease. Definition COPD is a chronic disease characterized by airflow limitation that is progressive, not fully reversible and associated with an abnormal inflammatory response of the lungs to noxious particles or gases. In Western countries, tobacco smoke is the major cause of COPD, accounting for 90-95% of the cases. Since only 20% of smokers develop severe COPD, other factors (biological, hereditary, environmental) must be involved [4]. Narrowing of the airways by inflammation, mucus production, irreversible remodelling and emphysema results in a limitation of expiratory airflow and disturbed gas exchange [5]. Currently, it is estimated that 210 million people suffer from COPD worldwide, a number that is still increasing.
By 2020, the World Health Organization predicts that COPD will rise from the sixth to the third leading cause of death, next to only cardiovascular disease and cancer [ 6 ] . According to the Global Obstructive Lung Disease (GOLD) de fi nition, diagnosis of COPD should be based on spirometry measurements with a post-bronchodilator forced expiratory volume in 1 s over forced vital capacity ratio (FEV 1 /FVC) below 0.7. Subsequently, COPD can be categorized in different stages of severity (GOLD stages) going from mild, moderate and severe to very severe disease according to FEV 1 [ 7 ] . It should be noted, however, that the majority of COPD patients, especially in the early stages of the disease, report no complaints, do not perform spirometry and thus are unaware of their disease. Treatment for COPD focuses on minimizing symptoms and preventing exacerbations. With the exception of smoking cessation, so far, no treatment has clearly proven to signi fi cantly modify COPD disease progression. Recent data, however, indicate that pharmacological intervention with inhalation therapy may slow down the yearly decline of FEV 1 [8][9][10] . Pathophysiology The pathophysiology of COPD is characterized by an increased in fl ammatory response in the airways and parenchyma. Besides an increase in number of neutrophils, macrophages and T-lymphocytes, COPD is associated with elevated concentrations of various cytokines, including interleukins (IL-1, IL-6 and IL-8), tumour necrosis factor alpha (TNF-a ), oxidative stress and the release of proteolytic enzymes. Cigarette smoke is known to cause a direct injury of the airway epithelial cells leading to the release of endogenous intracellular molecules or danger-associated molecular patterns (DAMPs). These signals are recognized by Toll-like receptors on epithelial cells which initiate a non-speci fi c in fl ammatory response. As reaction on the release of early cytokines (TNF-a , IL-1, IL-8), macrophages, neutrophils and dendritic cells are recruited to the site of in fl ammation [ 11 ] . Proteolytic enzymes and reactive oxygen species are released which may cause further damage to the lung. Next, self-antigens and antigens coming from pathogens bound to dendritic cells can activate naive T cells into Th1 cells [ 12 ] and may lead to antibody producing B cells. This adaptive immune response may then further enhance the in fl ammatory cascade. In addition, the observed reduction of regulatory T cells (Treg) in COPD lungs against the rise of pro-in fl ammatory Th17 cells is pointing towards an impaired immune regulation in COPD [ 13 ] . Macrophages are believed to play a central role in the pathophysiology of COPD [ 14 ] . They are important sources of pro-in fl ammatory mediators but may also protect against infection by phagocytosis. In COPD, alveolar macrophages appear to be resistant to the anti-in fl ammatory effects of corticosteroids by the reduced activity of histone deacetylase 2 (HDAC2), a nuclear enzyme that switches off in fl ammatory genes activated by the nuclear factor NF-k B [ 15 ] . Additionally, Taylor and colleagues recently demonstrated that the phagocytotic capacity of macrophages of COPD patients is impaired, which may lead to bacterial colonization and increased exacerbation frequency [ 16 ] . Overall, the complexity of the pathogenesis of COPD with different mechanisms and risk factors is re fl ected in the broad variation of clinical phenotypes. 
COPD Exacerbations With progression of the disease, as marked by a decline of forced expiratory volume in 1 s (FEV 1 ), COPD patients become more prone to exacerbations. An exacerbation of COPD is an acute worsening of respiratory symptoms which is associated with increased symptoms and worsening of lung function. Recent studies have shown that quality of life and health status of patients are mainly determined by the presence and frequency of such exacerbations which, in turn, may lead to a faster decline in FEV 1 [ 17 ] . Although viral and bacterial infections are assumed to be the major cause of exacerbations, other factors including environmental pollution and allergens have also been identi fi ed [ 18 ] . The exact role of bacterial infection in COPD exacerbations is often biased by bacterial colonization of the airways during stable state with the same organisms as those isolated at exacerbations: Haemophilus in fl uenzae , Streptococcus pneumoniae , Moraxella catarrhalis , Staphylococcus aureus and Pseudomonas aeruginosa . Different studies have shown that prevalence and load of organisms increases during an exacerbation [ 19 ] , but recent evidence indicates that the acquisition of a new bacterial strain is likely the main trigger of an exacerbation [ 20,21 ] . Besides bacterial infections, exacerbations may also be triggered by viral infections, including infections with rhinoviruses , coronavirus , respiratory syncytial virus , in fl uenza , parain fl uenza and adenoviru s. Moreover, recently, a frequent exacerbator phenotype has been identi fi ed suggesting that the innate and adaptive immune defence of the host also contributes to exacerbation susceptibility and that individualized therapy may become important in the nearby future [ 22 ] . Systemic Consequences/Co-Morbidities of COPD COPD is not restricted to the lungs but also associated with systemic in fl ammation and increased co-morbidities which are now considered as important targets in the therapeutic approach of COPD. Increased systemic levels of TNF-a , IL-6 and C-reactive protein (CRP) are commonly found in patients with COPD [ 23 ] . Common co-morbidities of COPD include lung cancer, cardiovascular disease, osteoporosis, diabetes and skeletal muscle dysfunction [ 24,25 ] . Whether these co-morbidities are caused by the underlying disease or just co-exist because of common risk factors such as smoking, ageing and inactivity is far from understood. Most likely, both mechanisms play together and the attractive hypothesis of in fl ammation in the lung "spilling over" into the systemic circulation and affecting other organs is corroborating this idea [ 26 ] . Whatever the mechanism may be, the presence of these co-morbidities clearly contributes to the poor outcome in COPD [ 27,28 ] . Vitamin D Metabolism Vitamin D is generally obtained by photosynthesis in the skin but can also be derived from nutrition (fatty fi sh, fi sh liver oils and dairy products). Ultraviolet light catalyses the fi rst step in the vitamin D biosynthesis, which is the conversion of 7-dehydrocholesterol into pre-vitamin D. The next step is a hydroxylation in the liver into 25-OHD, which then circulates in serum with a long half-life of 15 days. Next, 25-OHD is hydroxylated again into the active vitamin D metabolite 1,25(OH) 2 D by 1 ahydroxylase (CYP27B1) in the kidney which is controlled by serum levels of calcium and phosphate and regulating hormones such as parathyroid hormone (PTH), calcitonin and phosphatonins. 
1,25(OH) 2 D also induces the expression of a 24-hydroxylase (CYP24A1) which catabolizes 25-OHD and 1,25(OH) 2 D into biologically inactive, water-soluble metabolites, thereby serving as its own negative feedback. The majority of 25-OHD and 1,25(OH) 2 D are bound to plasma proteins, of which more than 90% to the vitamin D binding protein (DBP), which carry out the delivery to their respective target organs [ 29 ] . 1,25(OH) 2 D may thus bind to the nuclear vitamin D receptor (VDR) in the intestine, bone, kidney and parathyroid gland cells, resulting in the maintenance of normal serum calcium and phosphorus levels and their related effects on mineralization and turnover of bone [ 30,31 ] . Since 1 a -hydroxylase and the nuclear receptor VDR are widely present in cells of several extrarenal tissues such as skin, bone, prostate and immune cells, local 1,25(OH) 2 D concentrations can also exert different autocrine and paracrine functions. Although the enzyme found here is identical to the one that is expressed in the kidney, its expression is regulated by immune signals instead of mediators of bone and calcium homeostasis [ 32,33 ] . 1,25(OH) 2 D mediates its effects by binding to the nuclear VDR The vitamin-VDR complex may then activate vitamin D response elements (VDRE) on genes involved in different cellular processes. It is estimated that about 3% of the mouse/human genome is regulated by vitamin D [ 34 ] . Directly or indirectly, vitamin D controls many genes that are involved in the regulation of cellular proliferation, differentiation and apoptosis of healthy and pathological cells. Determinants of Vitamin D De fi ciency Because of its long half-life, 25-OHD is typically used to determine vitamin D status. It re fl ects vitamin D synthesized in the skin as well as that acquired from the diet and vitamin D degradation by catabolizing enzymes. When focusing on the calcemic effects, vitamin D insuf fi ciency is best de fi ned as a 25-OHD level below 20 ng/ml (50 nmol/l) [ 2,35 ] . A sensitive parameter to determine vitamin D de fi ciency is serum levels of PTH. Older data have clearly demonstrated that levels of 25-OHD below 20 ng/ml are associated with an increase in PTH expression [ 36 ] . Based on observational studies, several experts have suggested that, for non-calcemic effects, serum levels of at least 30 ng/ml (75 nmol/l) are required, but so far, intervention studies to support this are lacking. Dietary sources of vitamin D are limited, and food forti fi cation is mostly inadequate or nonexistent. Although sunlight is the most important source of vitamin D, several other factors can in fl uence the amount of vitamin D that can be synthesized: season, latitude, clothing, the use of sunscreen, darker skin pigmentation and age. For example, ageing is associated with decreased concentrations of 7-dehydrocholesterol in the skin, thereby reducing the capacity to synthesize vitamin D. Even if regularly exposed to sunlight, elderly people produce 75% less cutaneous vitamin D than compared to young adults [ 36 ] . Due to cultural habits and clothing, even those who live in sunny climates are commonly found to be de fi cient in vitamin D. Data from the Third National Health and Nutrition Examination Survey (NHANES III) revealed that in US adults, only 30% of the white population and 5% of the African Americans had levels of vitamin D of at least 30 ng/ml [ 37 ] . 
According to the current de fi nitions, it is estimated that more than one billion people worldwide have impaired serum levels of vitamin D. As current supplementation regimens with a daily dose of 800-1,000 IU of vitamin D restore de fi cient serum 25-OHD levels in a general adult population to concentrations above 20 ng/ml, higher doses are probably required to increase 25-OHD levels to even higher levels that may be needed for non-calcemic diseases in a population at risk [ 38,39 ] . At present, we can only speculate on what such ideal target range of 25-OHD levels is to maximally exploit these extra-calcemic effects [ 40 ] . We should also acknowledge that an extensive expert analysis of the potential effects of vitamin D supplementation on the health outcome of North-American subjects concluded that there is presently insuf fi cient evidence for extraskeletal bene fi ts of vitamin D therapy and that only new randomized controlled trials will be able to de fi ne such effect [ 41 ] . Vitamin D De fi ciency in COPD COPD patients should be considered at high risk to become vitamin D de fi cient for a variety of reasons: lower food intake, reduced capacity for vitamin D synthesis of the skin by ageing and smoking, the absence of outdoor activity and sun exposure, impaired activation by renal dysfunction and a lower storage capacity in muscles or fat by wasting may all contribute to a defective vitamin D status in COPD [ 2 ] . In 2005, Black and colleagues who examined spirometric data from the Third National Health and Nutrition Examination Survey, a cross-sectional survey on 14,091 US civilians over 20 years of age, discovered an important link between vitamin D and spirometric data [ 42 ] . After adjustment for potential confounders, a strong relationship between serum levels of 25-OHD and pulmonary function, as assessed by FEV 1 and FVC, was found. Although a signi fi cant correlation with airway obstruction could not be found, the observed dose-response relationship suggested a causal link [ 43 ] . The observation that smoking African-Americans more rapidly develop severe air fl ow obstruction as compared to Caucasians is also in agreement with the idea that a presumed lower vitamin D status in African-Americans correlates with an increased susceptibility to COPD [ 44 ] . Furthermore, different genetic variants involved in the vitamin D signalling pathway are shown to determine 25-OHD levels [ 45 ] , and some of these variants have repeatedly been associated with COPD. For instance, variants of the DBP gene ( GC ) have been shown to be protective or risk factors for COPD [ 46 ] , and a more recent robust candidate gene study in two large data sets identi fi ed the GC genes as susceptibility loci for COPD [ 47 ] . In a study of Forli and colleagues, vitamin D de fi ciency (<20 ng/ml) was found in more than 50% of a cohort waiting for lung transplantation [ 48 ] , but they failed to compare vitamin D serum levels with a matched control group. However, we recently demonstrated in a group of 414 smoking individuals that patients with COPD were more likely to suffer from vitamin D de fi ciency than aged and gender-matched healthy control smokers without COPD [ 49 ] . In COPD patients, we found a signi fi cant association between vitamin D serum levels and severity of disease assessed by FEV 1 . The prevalence of vitamin D de fi ciency de fi ned by 25-OHD levels below 20 ng/ml increased to, respectively, 60% and 75% in severe (GOLD 3) and very severe COPD patients (GOLD 4) (Fig. 11.1 ). 
Interestingly, we also showed that 25-OHD levels were determined by genetic variants in the vitamin D binding gene ( GC) after correcting for age, gender, smoking history and disease severity and that homozygous carriers of the rs7041 T allele with lowest vitamin D levels exhibited an increased risk for COPD. Although the risk effect of certain GC alleles may relate to a lower bioavailability of vitamin D, other authors have suggested that protein variants of DBP may directly affect in fl ammation in the airways [ 50 ] . Wood and colleagues, for instance, demonstrated that local DBP expression in sputum of COPD patients correlates with macrophage phagocytosis capacity [ 49 ] suggesting that the risk effect of DBP may be found in a different activation of macrophages [ 51 ] . A recent population based study in 2,943 individuals living in Hertfordshire (UK) found reduced vitamin D intake in the COPD subgroup of 521 individuals compared to the non-COPD subjects but was not able to con fi rm the positive association between 25-OHD serum levels and FEV 1 or FVC [ 52 ] . Surprisingly, Shaheen et al. found that patients with highest vitamin D levels were more likely to have COPD. They concluded that in contrast to a prudent dietary pattern with the intake of high amounts of antioxidants [ 53 ] , vitamin D was not an important determinant for adult pulmonary function and risk of COPD. Unfortunately, more than 50% of their population was taking dietary vitamin D supplements for unspeci fi ed reasons, which may have affected their analysis and biased their conclusions. Finally, in a small sub-study of the Lung Health Study III conducted in 198 selected individuals, mean 25-OHD levels of COPD patients with a rapid FEV 1 decline were found to be similar to mean 25-OHD levels of patients with a slow decline, suggesting again that vitamin D de fi ciency does not contribute to COPD progression [ 54 ] . Together, from these con fl icting data, it is clear that more prospective studies are needed to determine causal relationships between vitamin D de fi ciency and pulmonary function in COPD. But even if causality cannot be found for pulmonary function variables, vitamin D de fi ciency may still alter the disease course by affecting other outcomes such as respiratory tract infections or co-morbidities. Vitamin D, Calcemic Effects and Osteoporosis Low levels of vitamin D result in low bioavailability of calcium which stimulates parathyroid glands to increase secretion of PTH. In the kidneys, PTH reduces the reabsorption of phosphate from the proximal tubule and increases calcium reabsorption in the distal tubule, resulting in a net increase in calcium/phosphate ratio. PTH also induces renal 1 a -hydroxylase expression which then leads to an increased production of active 1,25(OH) 2 D. 1,25(OH) 2 D enhances intestinal calcium absorption and acts on the immature osteoblastic cells to stimulate osteoclastogenesis through the RANKL/RANK regulatory system, with enhanced bone resorption and mobilization of calcium from the bone compartment, causing osteopenia, osteoporosis and increased risk for bone fractures [ 55 ] . This results in higher levels of calcium and 1,25(OH) 2 D with a negative feedback on PTH and a subsequent limitation of bone resorption. Osteoporosis is a skeletal disorder which is characterized by compromised bone strength resulting in a higher susceptibility to fractures. 
Bone strength is determined by the structural quality of the bone and by bone mineral density, the latter measured by dual X-ray absorptiometry (DXA) and used to de fi ne osteoporosis. Osteoporosis is a major health problem as osteoporotic fractures are a frequent cause of signi fi cant and long-lasting morbidity in older individuals. Hip fractures and other types of nonvertebral fractures account for most of the burden of osteoporosis, with increased mortality, functional decline, loss of quality of life and need for institutionalization [ 56 ] . Female gender, advancing age, a history of fragility fractures, current or former smoking, low body weight or weight loss and the use of systemic glucocorticoids are well-established risk factors for osteoporosis and osteoporotic fractures. As many of these risk factors are present in COPD patients, especially at the more severe stages, it should be no surprise that osteoporosis and COPD are strongly linked. A recent review of Graat-verboom et al. con fi rmed low body mass, disease severity, use of corticosteroids, age and female gender to be independent risk factors for osteoporosis in COPD [ 57 ] . The majority of studies have reported an increased risk for osteoporosis with decreasing FEV 1 (Fig. 11.2 ) [59][60][61] . The prevalence of osteoporosis in COPD varies between 9% and 59% depending on the diagnostic methods used, the population studied and the severity of the underlying respiratory disease [ 57 ] . Sin and colleagues used the NHANES III data to demonstrate that air fl ow obstruction is independently associated with reduced bone mineral density [ 59 ] . In their population-based cohort of 9,502 non-Hispanic White participants, 33% of all women with severe COPD had osteoporosis whereas almost all women with mild airway obstruction had osteopenia. In comparison, men were at lower risk than women but still, in men with severe COPD, the prevalence of osteoporosis and osteopenia was 11% and 60%, respectively, which was approximately three times higher than expected. Most studies looking at prevalence of osteoporosis have used DEXA scans which measure bone mineral density but do not evaluate micro-architectural changes of the bone. It is known that these microarchitectural changes may equally cause fragility fractures, even with normal bone density, and should therefore be considered to be osteoporotic as well. When taking DEXA measures and vertebral fragility fractures into account, Graat-Verboom et al. found that prevalence of osteoporosis almost doubled in 250 COPD outpatients [ 62 ] . It indicates that prevalence of osteoporosis in COPD might be much higher than the current prevalence data suggest [ 63 ] . Interesting relationships are also found with emphysema which is associated with reduced bone mineral density and lower body mass index and which may represent a clinical phenotype at risk for osteoporosis [ 64,65 ] . Visual emphysema as assessed by CT scan was found to be an independent risk factor for osteopenia/osteoporosis [ 66 ] , and different bone turnover markers are increased in COPD. Recent data also show that osteoprotegerin, which is critically involved in bone turnover by blocking RANK-RANKL interaction, may become a useful marker for parenchymal lung destruction in COPD [ 67,68 ] . It is currently not known whether the vitamin D pathway is involved in the speci fi c development of emphysema, but studies in laboratory animals certainly corroborate this idea [ 69,70 ] . 
The consequences of osteoporotic fragility fractures in a COPD population may be detrimental. In hip-fracture patients, mortality is close to 20% within 1 year, and of those who do survive the fracture, again, some 20% will have to be institutionalized because of its functional consequences [ 71 ] . The exact prevalence of hip fractures in COPD patients has not been studied in detail, but it is probable that the impact of such events in disabled COPD patients will be even worse. Additionally, vertebral compression fractures can lead to back pain, functional impairments, increased kyphosis with reduced rib cage mobility and decline of pulmonary function [72][73][74] . The impact of loss of vertebral height on pulmonary deterioration in COPD has been demonstrated by Leech and colleagues who found that vital capacity and total lung capacity incrementally declined as the number of thoracic vertebral fractures increased [ 72 ] . In the EOLO study, a large COPD cohort of up to 3,000 participants, more than 40%, had one or more vertebral fractures and the prevalence signi fi cantly correlated with severity of disease [ 75 ] . Kyphosis related to osteoporosis may also cause limitation in rib mobility and inspiratory muscle dysfunction and was also correlated with loss of FEV 1 and FVC [ 76 ] . There is no doubt that vitamin D protects against osteoporosis and osteoporotic fractures, and therefore, suf fi cient vitamin D supplementation should be encouraged. The fact that the majority of COPD patients are of older age, have many common risk factors for osteoporosis and are more likely to be de fi cient in vitamin D supports standard supplementation, especially at the more severe stages of disease. A daily dose of 700-800 IU of vitamin D together with an adequate daily calcium intake (1,000 mg) is probably the best strategy to prevent fractures in older subjects. Such supplementation is known to restore low serum 25-OHD levels in a general adult population to concentrations above the 20 ng/ml (50 nmol/l) threshold. Patients with COPD, particularly those on systemic corticosteroids, should also be considered for a DEXA scan and if needed, osteoporosis medication [ 58 ] . Even though causality and therapeutic bene fi ts of vitamin D remain to be established for pulmonary in fl ammation and other co-morbidities, prevention of vertebral fractures will positively affect pulmonary function [ 75 ] . Exacerbations Along with a progressive loss of pulmonary function, COPD patients become more prone to acute COPD exacerbations which are an important cause of hospitalization, impaired quality of life and mortality [ 77 ] . Appropriate antimicrobial treatment is essential in the treatment of acute bacterial exacerbations whereas in case of colonization, repetitive and long-term antibiotic treatments are still avoided as they contribute to the multi-resistance of colonizing strains. Anti-in fl ammatory treatments different from inhaled and systemic corticosteroids are currently validated in COPD to reduce exacerbations on top of bronchodilator therapy. PDE4 inhibitors and neo-macrolides are the most promising agents from this perspective and seem to be bene fi cial for a subgroup of patients with repetitive exacerbations [78][79][80] . An attractive alternative approach might be the up-regulation of the innate immune defence system with vitamin D, particularly with regard to antimicrobial polypeptides [ 81 ] . 
Wang and colleagues demonstrated that in different cell types such as epithelial cells and white blood cells, the genes encoding for antimicrobial polypeptides such as cathelicidin (LL-37) and b -defensin are driven by VDR elements containing promoters [ 82 ] . In human monocytes, TLR activation up-regulates expression of the VDR and the 1-a -hydroxylase genes, leading to induction of LL-37 and killing of intracellular Mycobacterium tuberculosis [ 83 ] . LL-37 is also found to be very effective in the killing of a number of antibiotic-resistant strains such as Pseudomonas and S. aureus , different viruses and Chlamydia [ 84,85 ] . As LL-37 is diffusely expressed in the surface epithelia of human airways, in the submucosal glands and in macrophages and neutrophils [ 86 ] , substitution of local vitamin D insuf fi ciency may reduce bacterial load and concomitant airway in fl ammation [ 87 ] . Apart from its potential bene fi t on bacterial eradication, vitamin D may also down-regulate the complex in fl ammatory cascade at several levels. In vitro vitamin D can reduce the expression of TLRs which are critical in the induction of the early immune response [ 88 ] . High levels of vitamin D also inhibit dendritic cell maturation with lower expression of MHC class II molecules, down-regulation of co-stimulatory molecules and lower production of pro-in fl ammatory cytokines such as IL-2, IL-12, IFN-g and IL-23 [ 33,89 ] . In several mouse models, vitamin D also leads to a switch from a Th1/Th17 responses towards a Th2 and regulatory T cell answer [ 1,[90][91][92] . Low serum levels of vitamin D have also been correlated with a decreased phagocytic activity of macrophages in patients with rickets [ 93 ] whereas antimicrobial activity of macrophages against M. tuberculosis could be increased by vitamin D supplementation [ 94 ] . Overall, the potential of vitamin D in reducing proin fl ammatory processes of the innate and adaptive immune system, together with an increased bacterial eradication by self antimicrobial peptides and enhanced macrophage phagocytosis, may offer great potential in the treatment of exacerbations. Indirect clinical evidence for such hypothesis may be found in the observation that exacerbations of COPD are most common in winter, when 25-OHD levels are lowest. In addition, data from NHANES III showed that upper respiratory tract infections were most frequent in patients with lowest vitamin D levels. To further investigate such intriguing hypothesis, a randomized placebo-controlled intervention trial was performed to evaluate the effect of vitamin D supplementation in COPD patients, prone to exacerbations [ 95 ] . One hundred and eighty-two patients with moderate to very severe COPD and a history of recent exacerbations were supplemented with 100,000 IU vitamin D or placebo every 4 weeks over one year. Because of high-dose supplementation mean, 25-OHD serum levels increased signi fi cantly in the intervention arm and reached stable mean serum levels of 50 ng/ml, which is in the therapeutic range of the hypothesized extra-calcemic effects [ 2 ] . However, despite effective supplementation, no signi fi cant difference in time to fi rst exacerbation, time to fi rst hospitalization and exacerbation rate could be observed between the intervention and the control group. The absence of a therapeutic effect of vitamin D may relate to the fact that most of the patients presented with severe disease and were on maximal inhalation therapy. 
As all these treatments are known to reduce exacerbations [ 96,97 ] , it is likely that any additional effect of vitamin D on top of regular treatment is more dif fi cult to obtain. Intervention within the more early COPD stages taking less medications might therefore be more effective which is in line with the idea that such milder stages are also more sensitive to disease modi fi cation [ 98 ] . Interestingly, a signi fi cant baseline vitamin level by treatment interaction was observed for exacerbation rates. When performing a post hoc analysis in the subgroup of patients with very de fi cient vitamin D levels at baseline (<10 ng/ml), a signi fi cant 43% reduction of the number of exacerbations was observed in the intervention group. As one out of six COPD patients in the trial presented with such asymptomatic low baseline 25-OHD levels which persisted during the entire course of the study, further focus and future studies on this important subgroup may be warranted, eventually resulting in better patient-tailored interventions. Recently, a frequent exacerbator phenotype has been indenti fi ed suggesting that individualized therapy-including appropriate vitamin D substitution-may become important [ 22 ] . We also assessed for the presence and load of pathogenic bacteria in cultures of morning sputa during the trial but found no difference in eradication between both study arms. However, monocyte phagocytosis capacity in peripheral blood monocytes of patients receiving vitamin D was signi fi cantly increased compared to the placebo group, an effect which was more pronounced in the subgroup with lowest baseline levels. It is therefore tempting to speculate that the signi fi cant reduction of exacerbations in the vitamin D de fi cient subgroup is explained by an important upregulation of impaired phagocytosis capacity [ 51 ] . So far, hard evidence for this mechanism is lacking. Finally, it should be noted that the lack of an overall effect could also relate to local vitamin D insensitivity because of epigenetic modi fi cations. In line with the known corticosteroid resistance in COPD [ 99 ] , recent studies in cancer have shown that epigenetic silencing of key enzymes of the vitamin D pathway may occur leaving tumour cells insensitive to vitamin D therapy [ 100,101 ] . At least, epigenetic modi fi cations especially in the context of smoking or poor diet may explain why many observations suggest causal relationships between vitamin D de fi ciency and in fl ammatory diseases whilst supplementation later on in the disease cannot reverse this process [ 102 ] . Peripheral Muscle Function Skeletal muscle weakness is common in moderate to severe COPD and is an independent predictor of respiratory failure and death [ 103 ] . Although the underlying mechanisms of skeletal muscle dysfunction in COPD are not entirely understood, it is generally accepted that the combination of disuse because of respiratory limitation, with elevated oxidative stress, systemic in fl ammation, hypoxia and frequent steroid intake, is the main cause of deterioration [ 25 ] . Rehabilitation programmes in COPD are proven successful, but there is still a large variability in training effectivity [ 104,105 ] . Since muscle weakness is a prominent feature in rickets and chronic renal failure, and epidemiological studies found a positive association between 25-OHD levels and lower extremity function in older persons, vitamin D could be an important factor in muscle health [ 106 ] . 
In elderly individuals, vitamin D status predicts physical performance and consequent decline during long-term follow-up [ 107 ] . Several double-blind randomized controlled trials demonstrated that vitamin D supplementation increased muscle strength and balance and reduced the risk of falling in elderly [ 108 ] . Although a recent meta-analysis looking at the effect of vitamin D supplementation on muscle strength was negative, positive effects were still found in very de fi cient patients [ 109 ] . Moreover, the cross-sectional analysis from NHANES indicated that muscle strength continued to increase throughout 25-OHD serum levels of 9-37 ng/ml, indicating that for obtaining bene fi cial effects on the muscle, higher dose supplementation might be necessary [ 106 ] . On the pathological level, adults with vitamin D de fi ciency show predominantly type II muscle fi bre atrophy [ 110 ] with several muscle abnormalities such as enlarged inter fi brillar spaces, in fi ltration of fat, fi brosis and glycogen granules [ 111 ] . Conversely, increase in relative fi bre composition and type II fi bre dimensions has been reported in elderly after treatment with vitamin D [ 112 ] . These type II fi bres are also the fi rst to be recruited to prevent falling [ 113 ] , which may explain why vitamin D supplementation is shown to reduce falling [ 114 ] . In COPD, limb muscle adaptation leads to a decrease of the proportion of slow oxidative type I fi bres with a relative increase towards glycolytic type II fi bres [ 115,116 ] , those fi bres that are preferentially affected by vitamin D de fi ciency. Therefore, vitamin D de fi ciency may be of particular concern in COPD patients with muscle weakness and dysfunction. The exact mechanisms by which vitamin D affect muscle function are not fully understood. However, 1,25(OH) 2 D may impair muscle function by altering calcium regulation. In particular, 1,25(OH) 2 D is responsible for the active calcium transportation into the sarcoplasmic reticulum by Ca-ATPase. It is known to regulate Ca-ATPase by phosphorylation of proteins in the sarcoplasmic reticulum membrane; it can increase phosphate transport across the membrane and interacts with calmodulin [ 110,117 ] . Interestingly, calmodulin is highly sensitive to oxidative stress [ 118 ] , a typical feature of COPD, and 1,25(OH) 2 D may yield antioxidant properties. Thus, both vitamin D de fi ciency and increased oxidative stress through, e.g. smoking may act synergistically impairing calmodulin function, muscle structure and contractility. 1,25(OH) 2 D has also an important role in protein synthesis in the muscle cell, mediated through nuclear receptor-mediated gene transcription [ 113 ] . For example, it can affect actin and troponin C content, two major contractile proteins in skeletal muscle [ 119 ] , or may up-regulate gene expression of muscular growth factors, for instance, IGF-I [ 120 ] . Although it is tempting to extrapolate these vitamin D-mediated actions in skeletal muscles of healthy or elderly subjects [ 108,121 ] to a speci fi c population of COPD patients, it is still to be shown that vitamin D de fi ciency contributes to the observed muscle weakness in COPD. At present, there is no direct evidence for such causal relationship but the observation that VDR genotypes may in fl uence quadriceps strength in COPD patients is in line with this assumption [ 122 ] . 
Recent data also show that vitamin D de fi cient COPD patients referred for rehabilitation have a higher risk for dropout and reduced bene fi t on walking endurance [ 123 ] . In a post hoc analysis in moderate to severe COPD, we also found a signi fi cant effect of high-dose vitamin D supplementation on top of 3 months of rehabilitation in terms of improved exercise capacity [ 124 ] . Therefore, randomized controlled trials investigating if supplementation of vitamin D de fi ciency may positively affect muscle force, muscle function and general COPD outcomes are urgently needed. Systemic Consequences In the last years, many clinical studies have associated low vitamin D levels to prevalence and incidence of cancer, including lung cancer [ 125,126 ] . Several studies associate vitamin D de fi ciency with autoimmune diseases like type I diabetes, multiple sclerosis and rheumatoid arthritis [127][128][129] . De fi cient vitamin D levels have been linked to chronic infections such as tuberculosis and acute viral infections like in fl uenza or upper tract respiratory infections [130][131][132][133] . Similar data are also available which link vitamin D de fi ciency to cardiovascular diseases, arterial hypertension and even all cause mortality [ 134 ] . It is beyond the scope of this book chapter to review all evidence on vitamin D in these different chronic diseases, but it is striking that many of them are currently considered to be co-morbid conditions of COPD and accepted as important determinants of COPD outcome and prognosis [ 23,25 ] . Tackling vitamin D-mediated effects in these co-morbid conditions may therefore indirectly improve COPD status [ 135 ] . It should be stressed, however, that the above mentioned relationships between vitamin D de fi ciency and different chronic diseases are speculative and most often rely on cross-sectional and retrospective observations or on evidence of in vitro and animal research. In humans, placebo-controlled intervention studies and observational studies with prospective long-term follow-up speci fi cally designed to demonstrate causal relationships are often lacking. Moreover, recent intervention studies with vitamin D supplementation in multiple sclerosis, diabetes, in fl uenza and tuberculosis have reported disappointing results, most often by their limitation of statistical power or insuf fi cient supplementation [136][137][138][139] . Recent expert analysis therefore concluded that there is yet insuf fi cient evidence for extra-calcemic bene fi ts of vitamin D therapy and that more randomized placebo-controlled interventions trials are needed to de fi ne such effects [ 41 ] . For COPD in particular, intervention studies targeting co-morbid conditions to improve COPD-speci fi c outcomes will be needed and in analogy with the ongoing trial on statins in COPD patients ( www.clinicaltrials.gov ), one may think about a large intervention study with vitamin D supplements. Lung Tissue Remodelling Indirectly or directly, vitamin D is also believed to regulate extracellular matrix homeostasis in other tissues than bone, within particular lung and skin tissue via the control of transforming growth factor-b , matrix metalloproteinase and plasminogen activator systems [140][141][142][143] . There is compelling evidence that vitamin D plays a key role in foetal lung growth, development and maturation [ 144,145 ] . 
Although 1,25(OH)2D toxicity in Klotho-null mice results in a phenotype of skin atrophy, osteoporosis and emphysema [146], recent data in mice show that severe vitamin D deficiency from early life also results in impaired lung function through differences in lung volume and growth [69]. Evidence from human epidemiological studies also suggests that higher prenatal uptake of vitamin D protects against childhood wheezing [147]. As impaired lung growth and childhood asthma are known risk factors for COPD at a later age [148,149], the causal link between vitamin D deficiency and COPD may therefore already exist from early childhood on. Again, recent evidence from animal studies corroborates this idea. VDR knockout mice develop emphysematous airspace enlargement, which is associated with the up-regulation of matrix metalloproteinases in the lung, an increased influx of inflammatory cells and the development of typical lymphoid aggregates around the peripheral airways [70]. Conclusion The evidence that the vitamin D pathway plays a pivotal role in the biology of healthy and diseased lungs is compelling. As with many chronic diseases, vitamin D deficiency is highly prevalent in COPD and occurs more frequently with increasing disease severity. Such deficiency may not only enhance local airway inflammation but may also induce or accelerate ongoing co-morbid diseases and subsequently impair the general prognosis of the disease (Fig. 11.3). So far, there is only hard evidence for a causal role of vitamin D deficiency in the pathogenesis of COPD-related osteoporosis. For extra-calcemic effects, a recent randomized controlled trial demonstrated no overall effect of high-dose supplementation on exacerbations but was supportive of an important effect in a limited subgroup of patients highly deficient in vitamin D. Before embarking on new intervention trials with vitamin D in COPD targeting subgroups, more fundamental research is needed to learn how vitamin D and its deficiency interact with smoking in the development of COPD. Only then will we be able to understand why, so far, most intervention trials with vitamin D have failed to yield important health benefits in chronic diseases including COPD.
Autoimmunity against eye-muscle antigens may explain thyroid-associated ophthalmopathy The ophthalmopathy associated with Graves' hyperthyroidism and Hashimoto's thyroiditis is an autoimmune disorder of the extraocular muscle and the surrounding orbital connective tissue (OCT) and fat (Box 1). Its most appropriate name is thyroid-associated ophthalmopathy (TAO). Several theories exist to explain the disorder's unusual association with thyroid autoimmunity. Most of them favour a role of autoimmunity against a thyroid-stimulating hormone receptor (TSH-r)-like protein in the orbital preadipocyte and, possibly, extraocular muscle fibre. This is still merely hypothetical, for several reasons: TSH-r-stimulating antibody levels do not correlate with the severity and activity of the eye signs; newborns with thyrotoxicosis do not have the ophthalmopathy, even when the mother has eye signs; and ophthalmopathy has not been convincingly produced in animals immunized with the TSH-r protein. Moreover, a recent study 1 found that mRNA for TSH-r was not upregulated in the orbital fat of patients with Graves' ophthalmopathy compared with that in control subjects. One observation in favour of the theory of a primary role of autoimmunity against TSH-r is the generalized connective-tissue disorder in Graves' disease: autoimmunity against TSH-r, expressed in fat and connective tissue, could explain the development of pre-tibial myxedema, acropachy and the OCT component of TAO. In addition, all patients with Graves' ophthalmopathy test positive for serum TSH-r antibodies, at least when they are hyperthyroid. Proponents of the so-called TSH-r hypothesis point out that eye muscles are 30% of OCT by volume, and propose that the eye-muscle damage is secondary to antibody targeting of TSH-r in OCT. An alternative view is that the eye-muscle and OCT-fat reactions are both autoimmune disorders that can occur alone or together in patients with thyroid autoimmunity, manifesting as 3 subtypes of TAO: ocular myopathy, congestive ophthalmopathy or mixed disease. In ocular myopathy, double vision, reduced eye movement and increased volumes of eye muscle (seen with orbital imaging techniques) result from damage to the eye muscles. In congestive ophthalmopathy, eye swelling, redness, chemosis and increased tearing are caused by inflammation in the periorbital tissues. Mixed disease is the most common manifestation of TAO. Chronic eyelid lag, occurring alone or with other features of TAO, may be a fourth subtype (Table 1). The notion of subtypes of Graves' ophthalmopathy was first raised by Solovyeva 2 and has been borne out by our extensive clinical studies. In support of the "eye-muscle hypothesis," we have identified several autoantigens in eye muscle; namely, the flavoprotein (Fp) subunit of mitochondrial succinate dehydrogenase, G2s, sarcalumenin and the calcium-binding protein calsequestrin (Fig. 1). The corresponding autoantibodies are associated with eye-muscle dysfunction in patients with TAO. There have been both criticism of the putative role of these eye-muscle antibodies in the pathogenesis of the ophthalmopathy and arguments in favour of such a role for them.
First, because the antigens are not specific to eye muscle, the rarity of systemic myopathy among patients with Graves' disease is difficult to explain. This is true, except that we have shown that the antibodies that target calsequestrin, which is expressed 4.8 times more in eye muscle than in other skeletal muscle, 3 are sensitive markers of recent-onset ophthalmopathy, detected in 86% of patients with active eye signs and symptoms (Table 2) and in the great majority of patients with recent-onset ocular myopathy who have been tested so far (n = 30). Second, since the candidate antigens are all located inside muscle fibres, they would not encounter antibodies or T-lymphocytes until the cell walls are breached by some other reaction. Although Fp and G2s are indeed intracellular proteins, mRNA for calsequestrin is distributed throughout the cell (including the cell membrane) in the myotube stage of differentiation, where in some situations it could be targeted. Third, because the antibodies are detected in 5%-10% of healthy subjects and patients who do not have ophthalmopathy, the clinical utility of the tests is limited to serial studies of patients with Graves' hyperthyroidism. Just the same, anticalsequestrin antibodies are detected neither among patients with Hashimoto's thyroiditis or multinodular goitre, nor among people without thyroid problems, and therefore may be specific markers of the ocular myopathy subtype of TAO. Taking into account all the evidence, the development of the eye-muscle component of TAO may be best explained by cytotoxic T-lymphocyte targeting of a cell-membrane antigen; the eye-muscle antibodies that we identify may be secondary. On the other hand, antibodies targeting the TSH-r in OCT and fat cells may trigger periorbital inflammation. Recent studies suggest that type XIII collagen, which is expressed in the cell membranes of fibroblasts, may also be a target in the fat and OCT of patients with active congestive ophthalmopathy (Fig. 1). 4 [Fig. 1 caption: Eye-muscle and orbital fibroblast proteins recognized by T-lymphocytes or antibodies in thyroid-associated ophthalmopathy (TAO). Our working hypothesis is that the ocular myopathy subtype of TAO results from T-lymphocyte-mediated targeting of eye-muscle fibre; serum antibodies against G2s protein, calsequestrin, flavoprotein and sarcalumenin are secondary to their release after fibre necrosis; and the congestive ophthalmopathy subtype of TAO results from reaction against TSH-r or collagen XIII in fibroblast cell membranes, which leads to fibroblast stimulation and excess production of collagen and glycosaminoglycans. MHC = major histocompatibility complex; TSH-r = thyroid-stimulating hormone receptor. Table 1 note: Mixed disease, the subtype most commonly seen, is not listed separately from its components.] The development of the eye-muscle component of TAO can be best explained by T-cell targeting of an eye-muscle membrane antigen, whereas reactivity against TSH-r in the orbital fibroblast may be the key abnormality that leads to the congestive ophthalmopathy subtype, as well as other features of connective-tissue disorder in Graves' disease. The most specific antigen marker for the eye-muscle component of TAO appears to be calsequestrin. The availability of testing for anticalsequestrin antibodies as a clinical aid for TAO may lead to a better understanding of the reaction of eye muscle and its relation to reactions in orbital connective and thyroid tissue.
Early-Life Antibiotic Exposure Associated With Varicella Occurrence and Breakthrough Infections: Evidence From Nationwide Pre-Vaccination and Post-Vaccination Cohorts Background Antibiotic-driven dysbiosis may impair immune function and reduce vaccine-induced antibody titers. Objectives This study aims to investigate the impacts of early-life antibiotic exposure on subsequent varicella and breakthrough infections. Methods This is a nationwide matched cohort study. From Taiwan’s National Health Insurance Research Database, we initially enrolled 187,921 children born from 1997 to 2010. Since 2003, the Taiwan government has implemented a one-dose universal varicella vaccination program for children aged 1 year. We identified 82,716 children born during the period 1997 to 2003 (pre-vaccination era) and 48,254 children born from July 1, 2004, to 2009 (vaccination era). In the pre-vaccination era, 4,246 children exposed to antibiotics for at least 7 days within the first 2 years of life (Unvaccinated A-cohort) were compared with reference children not exposed to antibiotics (Unvaccinated R-cohort), with 1:1 matching for gender, propensity score, and non-antibiotic microbiota-altering medications. Using the same process, 9,531 children in the Vaccinated A-cohort and Vaccinated R-cohort were enrolled from the vaccination era and compared. The primary outcome was varicella. In each era, demographic characteristics were compared, and cumulative incidences of varicella were calculated. Cox proportional hazards model was used to examine associations. Results In the pre-vaccination era, the 5-year cumulative incidence of varicella in the Unvaccinated A-cohort (23.45%, 95% CI 22.20% to 24.70%) was significantly higher than in the Unvaccinated R-cohort (16.72%, 95% CI 15.62% to 17.82%) (p<.001). In the vaccination era, a significantly higher 5-year cumulative incidence of varicella was observed in the Vaccinated A-cohort (1.63%, 95% 1.32% to 1.93%) than in the Vaccinated R-cohort (1.19%, 95% CI 0.90% to 0.45%) (p=0.006). On multivariate analyses, early-life antibiotic exposure was an independent risk factor for varicella occurrence in the pre-vaccination (adjusted hazard ratio [aHR] 1.92, 95% CI 1.74 to 2.12) and vaccination eras (aHR 1.66, 95% CI 1.24 to 2.23). The use of penicillins, cephalosporins, macrolides, or sulfonamides in infancy was all positively associated with childhood varicella regardless of vaccine administration. Conclusions Antibiotic exposure in early life is associated with varicella occurrence and breakthrough infections. INTRODUCTION The early-life microbiome has a fundamental role in human immunity. Indigenous microbiota provides crucial signals for maturation and modulation of the immune system (1,2). In contrast, dysbiosis in infancy might cause stunting and dysregulation of immunity (3,4). The composition of gut microbiota also correlates with vaccine immunogenicity (5). Evidence has suggested the association between early-life microbial colonization and sustainable vaccine-specific memory T-cells and antibody responses (6). Exposure to medications, particularly antibiotics, is a common cause of dysbiosis (7,8). Even short-term or lowdose antibiotics can disturb the delicate ecosystem of the infant microbiome (9,10). Early-life antibiotic exposure has been linked to a higher risk of various conditions, including inflammatory bowel disease, type 2 diabetes, and atopic disorders (11)(12)(13). 
However, little is known about the effect of infantile antibiotic exposure on susceptibility to later-life infections. In addition, although antibiotic-driven dysbiosis has been found to impair vaccine responses (10,14,15), a limitation is that most studies were conducted with small sample sizes and in animal models or adults. Whether early-life antibiotic exposures decrease vaccine efficacy or increase the risk of breakthrough infections in the pediatric population remains to be elucidated. Varicella was once associated with a significant impact on public health in Taiwan (16). Since the implementation of universal varicella vaccination (UVV) in 2003, disease transmission has been successfully controlled (17). However, varicella outbreaks among schoolchildren still occurred occasionally (18), and breakthrough infections continue to be reported despite high rates of national vaccination coverage (19). The present study aimed to investigate the effect of early-life antibiotic exposure on childhood varicella risk and breakthrough infections. Data Source We conducted a nationwide cohort study using Taiwan's National Health Insurance Research Database (NHIRD) from 1997 to 2013. The NHIRD contains detailed healthcare information from more than 99% of Taiwan's population of 25 million people. Diagnoses are documented in the NHIRD using codes based on the International Classification of Diseases, Revision 9, Clinical Modification (ICD-9-CM). The accuracy of diagnosis in the NHIRD has been validated (20,21), and the data have been used extensively in clinical epidemiology and health service research (22,23). Personal information, including body weight, height, lifestyle, occupation, and cluster history, is unavailable from the database. This study has been approved by the ethical review board of Taichung Veterans General Hospital (No. CE20224B). Vaccination The live attenuated varicella vaccine was approved for use in Taiwan in 1997. Two brands of OKA-strain varicella vaccines, Varivax (Merck) and Varilrix (GlaxoSmithKline), are available in Taiwan. The vaccines were first provided free to children aged 1 year in Taipei City and Taichung City since 1998 and 1999, respectively. In 2003, the Taiwan government implemented the UVV program, targeting 1-year-old children. The self-paid second-dose booster has been recommended for children aged 4 to 6 years. Despite unavailable 2-dose vaccination rates, the cumulative coverage rate of at least one dose of varicella vaccine among children born after July 1, 2004, has reached more than 94% to date (24). Study Design and Population From the NHIRD, children born from 1997 to 2010 were eligible for enrollment. Children born from 1997 to 2003 and living in regions other than Taipei City and Taichung City were considered unvaccinated (pre-vaccination era). Children born from July 1, 2004, to 2009 were deemed vaccinated (vaccination era). We excluded children with a follow-up period of less than one year, death registration, malignancy, immunodeficiency disorders, white blood cell disorders, transplantation, chemotherapy, or immunotherapy before varicella development. The diagnostic codes for these comorbidities are presented in Supplemental Table 1. Early life, especially from conception to 2 years of age, is a critical window for microbiota development and immune maturation (25,26).
In the vaccination era, children who received antibiotics for at least 7 days within the first 2 years of life were included in the antibiotic cohort (Vaccinated A-cohort). The reference cohort (Vaccinated R-cohort) comprised children who had not received antibiotics. We identified the Unvaccinated A-cohort and the Unvaccinated R-cohort in the pre-vaccination era using the same process. The index date was defined as the first day of the third year of life. All sampled children were followed up from age 2 years to the development of the outcome of interest or death. Each child was followed up for a maximum of 5 years. In each era, 1:1 matching of children in both cohorts was carried out for gender, propensity score, and non-antibiotic microbiota-altering medications. The propensity score was calculated via a logistic regression model (27) that included infectious diseases, non-bacterial gastroenteritis, and constipation (Supplemental Table 1). These are common pediatric comorbidities that promote intestinal dysbiosis. Histamine type-2 receptor antagonists (H2RAs), proton pump inhibitors (PPIs), and laxatives have been found to cause perturbation of gut microbiota (28). Non-antibiotic microbiota-altering medication exposure was defined as using any of these drugs for at least 7 days within the first 2 years of life. Outcome Measurement The primary outcome was varicella, identified by diagnostic code (ICD-9-CM code 052) in the NHIRD. Children with varicella before the index date were still censored during the follow-up. However, to evaluate the association between early-life antibiotic exposure and subsequent varicella, only varicella that occurred in children after 2 years of age was identified. Breakthrough varicella has been defined as varicella occurring over 6 weeks after at least one dose of vaccination (17,19). Since the age at varicella vaccination in Taiwan was previously reported to be 1 to 1.97 years (17), the identified varicella cases in the vaccination era were considered breakthrough events. Covariate Assessment Demographic factors such as gender, comorbidities, and medication were considered potential confounders. Comorbidities were defined as diseases based on diagnostic codes (Supplemental Table 1) after the index date. Exposure to drugs related to dysbiosis, including H2RAs, PPIs, or laxatives, was defined as the use of such medications for at least 7 days within the first 2 years of life. Exposure to immunomodulatory drugs, such as systemic corticosteroids and disease-modifying antirheumatic drugs, was defined as using these drugs for more than 30 days per year on average. The aforementioned medications are listed in Supplemental Table 2. Statistical Analysis We first analyzed the demographic data, comorbidities, and medications. The categorical variables and prevalence rates of varicella in the study cohorts of each era were compared using the chi-square test. The cumulative incidences of varicella were calculated using the Kaplan-Meier method. The differences in the full time-to-event distributions between the two cohorts of each era were tested via the 2-tailed log-rank test. We next performed multivariate analyses with modified Cox proportional hazards models to determine whether antibiotic exposure is an independent risk factor for subsequent varicella.
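The matching and survival-analysis pipeline described above can be sketched in a few lines of code. The snippet below is purely illustrative and is not the authors' code (the study itself used SAS 9.4 and the R "cmprsk" package): the file name, column names, and the greedy nearest-neighbour matcher are hypothetical stand-ins, with scikit-learn and lifelines substituted only to show the order of operations (propensity score, 1:1 matching within exact strata, Kaplan-Meier cumulative incidence with a log-rank test, and an adjusted Cox model).

import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort.csv")  # hypothetical file: one row per child

# 1) Propensity score: probability of early-life antibiotic exposure given
#    comorbidities that promote dysbiosis (logistic regression, as in the text).
ps_covars = ["infectious_disease", "nonbacterial_ge", "constipation"]
ps_model = LogisticRegression(max_iter=1000).fit(df[ps_covars], df["antibiotic"])
df["ps"] = ps_model.predict_proba(df[ps_covars])[:, 1]

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score within exact
#    strata of gender and non-antibiotic microbiota-altering medication use.
matched_idx = []
for _, stratum in df.groupby(["gender", "other_microbiota_drug"]):
    exposed = stratum[stratum["antibiotic"] == 1]
    pool = stratum[stratum["antibiotic"] == 0].copy()
    for i, row in exposed.iterrows():
        if pool.empty:
            break
        j = (pool["ps"] - row["ps"]).abs().idxmin()  # closest unexposed child
        matched_idx += [i, j]
        pool = pool.drop(j)                          # match without replacement
matched = df.loc[matched_idx]

# 3) Cumulative incidence of varicella (1 - Kaplan-Meier survival) and log-rank test.
a = matched[matched["antibiotic"] == 1]
r = matched[matched["antibiotic"] == 0]
kmf = KaplanMeierFitter().fit(a["followup_years"], a["varicella"])
cum_incidence_exposed = 1 - kmf.survival_function_
print(logrank_test(a["followup_years"], r["followup_years"],
                   a["varicella"], r["varicella"]).p_value)

# 4) Cox proportional hazards model adjusted for the covariates listed in the text.
cox_cols = ["followup_years", "varicella", "antibiotic", "gender",
            "hospital_visits", "other_microbiota_drug",
            "infectious_disease", "nonbacterial_ge", "constipation"]
cph = CoxPHFitter().fit(matched[cox_cols],
                        duration_col="followup_years", event_col="varicella")
cph.print_summary()  # adjusted hazard ratios (exp(coef)) with 95% CIs

In this sketch, covariates are assumed to be numerically coded (e.g., 0/1 indicators), and the reported adjusted hazard ratios correspond to the exponentiated coefficients of the fitted Cox model.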
The adjusted variables were gender, hospital visit number during the follow-up period, and well-known factors for dysbiosis, including antibiotic exposure, use of non-antibiotic microbiota-altering medications, infectious diseases, non-bacterial gastroenteritis, and constipation. We also conducted sub-analyses to examine the effect of exposure to different antibiotics in early life on varicella development. All data were managed via SAS 9.4 software (SAS Institute Inc., Cary, NC, USA) and the "cmprsk" package of R. The results are expressed as an estimated number with 95% confidence interval (CI). Demographic Characteristics of the Study Cohorts We initially enrolled 187,921 children born from 1997 to 2010 from Taiwan's NHIRD. Among them, 82,716 children not living in Taipei City or Taichung City were born from 1997 to 2003, and 48,254 children were born from July 1, 2004, to 2009. A total of 8,293 children with a follow-up period of less than 1 year or with comorbidities or therapy that may increase the risk of infections before the occurrence of varicella were excluded. Finally, 81,596 children were included in the pre-vaccination era group and 47,533 were included in the vaccination era group (Figure 1). The baseline characteristics of the children in both groups are presented in Supplemental Table 3. In the pre-vaccination era, 69,430 children exposed to antibiotics for at least 7 days within the first 2 years were included in the Unvaccinated A-cohort, and 4,975 children in the reference group not exposed to antibiotics within the first 2 years of life were included in the Unvaccinated R-cohort. The baseline characteristics of children in both cohorts are shown in Supplemental Table 4. After matching for gender, propensity score, and non-antibiotic microbiota-altering medications at a ratio of 1:1, there were 4,246 children in each cohort (Figure 1). Using the same process, we selected subjects from the vaccination era, with 9,531 children each in the Vaccinated A-cohort and the Vaccinated R-cohort (Figure 1). Demographic characteristics and comorbidities were comparable between the cohorts in each era, except for higher numbers of hospital visits in both A-cohorts compared to the respective R-cohorts (median 72 vs. 59 in the pre-vaccination era, and 77 vs. 71 in the vaccination era) (Table 1). In the Unvaccinated A-cohort, penicillins (59.6%) were most common, followed by cephalosporins (33.4%), macrolides (32.0%), and sulfonamides (22.7%). In the Vaccinated A-cohort, penicillins (61.1%) and cephalosporins (15.4%) were most common (Supplemental Table 5). Ages at varicella occurrence and hospitalization for varicella were comparable between the cohorts in each era. Multivariate Analyses In the pre-vaccination era, antibiotic exposure for at least 7 days within the first 2 years of life was independently associated with varicella occurrence (adjusted hazard ratio [aHR] 1.92, 95% CI 1.74-2.12). This risk was weaker but still significant among children born in the vaccination era (aHR 1.66, 95% CI 1.24-2.23) (Table 2). Further analyses demonstrated that exposure to each of the commonly used antibiotic types, including penicillins (aHR 1.47, 95% CI 1.31-1.66), cephalosporins (aHR 1.19, 95% CI 1.04-1.36), macrolides (aHR 1.46, 95% CI 1.28-1.67), and sulfonamides (aHR 1.27, 95% CI 1.09-1.48), was also an independent risk factor for varicella occurrence in the pre-vaccination era.
However, exposure to these antibiotics in the vaccination era was positively associated with subsequent varicella but without statistical significance (Table 2). DISCUSSION This nationwide cohort study suggests that antibiotic exposure early in life is an independent risk factor for childhood varicella. Even though herd immunity has been reached in the vaccination era, a significantly higher incidence of breakthrough varicella is observed in children exposed to antibiotics in early life. The present study adds to the mounting evidence that antibiotic-driven dysbiosis during infancy may cause sequelae linked with immune dysfunction, including increased susceptibility to infections. Commensal-pathogen interactions involve direct microbiota-related colonization resistance and indirect microbiome-mediated immune modulation (29). Commensal microbiota can limit colonization of the invading pathogen through upregulating epithelial barrier function, competition for specific resources, and bactericidal or bacteriostatic effects (29,30). Eubiotic microbiota also supports healthy immune development, shaping optimal innate and acquired immune responses against infective challenges (1,2). Evidence has demonstrated that a decrease in bacterial taxa, vacant nutrient niches, and metabolic environment changes after antibiotic administration predispose individuals to certain infections (31,32), whereas the commensals may progressively return to baseline following antibiotic cessation (33). On the other hand, antibiotic-driven dysbiosis, especially in early life, might result in enduring immune alterations and long-lasting health impacts (3,4). Animal studies have demonstrated that infant mice exposed to antibiotics had reduced and dysfunctional interferon-γ-producing CD8 T cells, resulting in subsequent increased mortality from vaccinia virus infection (34). In humans, children exposed to early-life antibiotics have been found to exhibit lower infection-induced cytokines, including interleukin 1β, interferon α, interferon γ, tumor necrosis factor α, and IP-10 protein (35). Our results align with these immunological findings and support the microbiome-immune-infection axis theory. Early-life antibiotic exposure is associated with dysbiosis and impaired anti-infectious immunity and increases susceptibility to future varicella infections. The role of the microbiome in the modulation of vaccine immunogenicity has recently been addressed (5). Several observational studies have documented the correlation between microbiota composition, such as the abundance of Bifidobacterium and Bacteroides species, and vaccine responses (6,36,37). Immunomodulatory molecules derived from microbiota, such as flagellin, peptidoglycan, and lipopolysaccharides, regulate T cell priming and immunoglobulin production in response to antigenic stimulation (36,38,39). Increasing data also suggest that epitopes encoded by the microbiota can cross-react with pathogen-encoded epitopes, and presumably with vaccine-encoded epitopes (40,41). Despite the association between microbiome and vaccine responses, controversy exists over the influence of microbial perturbation on immunization. Antibiotic-driven dysbiosis impairs vaccine immunogenicity in infant mice but not in adult mice (14). In human research, adults with low preexisting immunity have been found to present markedly reduced post-vaccination antibody titers after experiencing antibiotic treatment (10).
Nevertheless, antibiotic exposure in early life has not significantly affected immunogenicity induced by routine infant vaccines, although the sample sizes of these studies were modest (42,43). Additionally, the effects of prebiotics or probiotics on vaccine response are variable, depending on the antigens, probiotic strains, and population (44-46). To date, none of these microbiota-targeted interventions have been transferred from research into clinical practice. [Figure 2: Cumulative incidences of varicella in patients exposed to antibiotics within the first 2 years of life and matched controls. The differences between the two study cohorts in the (A) pre-vaccination era and (B) vaccination era were determined by the log-rank test.] Our study assessed the incidence of breakthrough varicella among children exposed to early-life antibiotics. Although the UVV program has provided robust protection, infantile antibiotic exposure was still an independent risk factor for childhood breakthrough varicella. Such risk might result from increased varicella pathogenicity following antibiotic exposure that overwhelms the protective efficacy of the vaccine, or from alteration of vaccine responses induced by antibiotic-driven dysbiosis. Further studies are needed to clarify the effects of early-life antibiotic exposure on immunization and vaccine efficacy. The microbiota changes related to antibiotics depend on the type of antibiotic used. Previous studies have suggested that almost all types of antibiotics affect gut microbiota. The penicillin family of antibiotics, such as amoxicillin, piperacillin, and ticarcillin, may increase the abundance of Enterococcus spp. and decrease the abundance of anaerobes (47). Cephalosporins, quinolones, and sulfonamides have been associated with abundant Enterobacteriaceae except for Escherichia coli (47). Macrolide treatment has been linked to long-term gut microbiota perturbations among pre-school children, including depletion of Actinobacteria and increases in Bacteroidetes and Proteobacteria (48). The antimicrobial spectrum also influences the impact of antibiotics on the immune response to vaccination. An adult study has demonstrated that the proportion of vaccinees with a more than 2-fold anti-rotavirus antibody titer by 7 days post-vaccination was significantly higher among subjects treated with vancomycin only than among those treated with broad-spectrum antibiotics (15). In the present study, early-life exposures to penicillins, cephalosporins, macrolides, or sulfonamides were all independent risk factors for childhood varicella in the pre-vaccination era. The risk of breakthrough varicella due to exposure to these antibiotics in the vaccination era was also observed, although without statistical significance owing to the small number of cases. The relationship between the risk and the antimicrobial spectrum of the administered antibiotic remains to be elucidated, since we only examined the effects of different antibiotic classes, rather than specific antibiotics, on varicella occurrence. Overall, caution is warranted in prescribing any type of antibiotic to infants despite their benefits, and the effects on the human microbiome should be taken into account when administering antibiotic therapy. Our study has several strengths. The population-based cohort study design enabled us to assess the association between antibiotic exposure and varicella infections.
By utilizing the nationwide NHIRD, we enrolled a large sample, which reduced selection bias, allowed us to identify relatively rare conditions such as post-vaccination infection, and provided statistical reliability with a smaller margin of error. Despite these strengths, there are several limitations. First, as this was an observational study, we could only report an association between antibiotic exposure and subsequent varicella but could not infer causality. Second, patient-specific information such as lifestyle, contact history, seeking healthcare in private practice, and over-the-counter medication use was unavailable from the NHIRD. To minimize biases, the cohorts possessed comparable characteristics after matching for gender, propensity score, and non-antibiotic microbiota-altering medications. We also performed multivariable analyses to adjust for potential confounders. Third, the specific date of vaccination, the total number of varicella vaccines administered, whether concomitant vaccinations were used or not, and the interval from antibiotic exposure to vaccination were not recorded in the dataset. Therefore, it is difficult to assess the effects of antibiotic exposure on immunization. Instead, we reported the association between antibiotic therapy in infancy and varicella during childhood regardless of herd immunity. Finally, as our study focused on varicella, the generalizability of our results may be limited. Nevertheless, it provided valuable information on the microbiome-immune-infection axis theory. CONCLUSIONS In conclusion, exposure to antibiotics in infancy is associated with varicella later in life. Antibiotic exposure is an independent risk factor for varicella occurrence, even though herd immunity has been reached. These findings suggest caution when administering antibiotics in early life to prevent increased infection susceptibility and poor vaccine efficacy. DATA AVAILABILITY STATEMENT The data analyzed in this study are subject to the following licenses/restrictions: All researchers who wish to use the NHIRD and its data subsets are required to sign a written agreement declaring that they have no intention of attempting to obtain information that could potentially violate the privacy of patients or care providers. Requests to access these datasets should be directed to the Center for Biomedical Resources of NHRI, https://nhird.nhri.org.tw/en/Data_Protection.html. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the ethical review board of Taichung Veterans General Hospital (No. CE20224B). Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
Fine-scale damage estimates of particulate matter air pollution reveal opportunities for location-specific mitigation of emissions Significance Health burdens of PM2.5 and its precursors vary widely depending on where emissions are released. Thus, advanced methods for assessing impacts on a fine scale are useful when developing strategies to efficiently mitigate the effects of air pollution. We describe a new tool for rapidly assessing the impacts of pollution emissions on a fine scale. We apply the tool to the US emissions inventory to better understand the contribution of each economic sector on reduced air quality. We show that, even for a national assessment, local (e.g., subcounty) information is important to capture the variability in health impacts that exist on fine scales. Our paper can help policymakers and regulators prioritize mitigation of emissions from the most harmful source locations. Fine particulate matter (PM 2.5 ) air pollution has been recognized as a major source of mortality in the United States for at least 25 years, yet much remains unknown about which sources are the most harmful, let alone how best to target policies to mitigate them. Such efforts can be improved by employing high-resolution geographically explicit methods for quantifying human health impacts of emissions of PM 2.5 and its precursors. Here, we provide a detailed examination of the health and economic impacts of PM 2.5 pollution in the United States by linking emission sources with resulting pollution concentrations. We estimate that anthropogenic PM 2.5 was responsible for 107,000 premature deaths in 2011, at a cost to society of $886 billion. Of these deaths, 57% were associated with pollution caused by energy consumption [e.g., transportation (28%) and electricity generation (14%)]; another 15% with pollution caused by agricultural activities. A small fraction of emissions, concentrated in or near densely populated areas, plays an outsized role in damaging human health with the most damaging 10% of total emissions accounting for 40% of total damages. We find that 33% of damages occur within 8 km of emission sources, but 25% occur more than 256 km away, emphasizing the importance of tracking both local and long-range impacts. Our paper highlights the importance of a fine-scale approach as marginal damages can vary by over an order of magnitude within a single county. Information presented here can assist mitigation efforts by identifying those sources with the greatest health effects. air pollution | environmental economics | marginal damages | particulate matter E xposure to air pollution is linked to many serious health effects, including respiratory infections, lung cancer, stroke, and cardiopulmonary disease (1)(2)(3), all of which come at great economic cost (4,5). The overwhelming majority of estimated monetized damages from air pollution is attributable to premature mortality (5); the main contributor is PM 2.5 . Ambient concentrations of PM 2.5 in the United States have fallen in recent decades, but devising and prioritizing strategies for efficiently reducing emissions, exposures, and health impacts will yield large additional benefits. Efficient approaches commonly target those sources with the lowest mitigation costs [i.e., economic costs per ton (t) emissions avoided] and the greatest marginal damages [i.e., economic damages t −1 emitted]. Here, we focus on the latter. 
The health impact of a given quantity of emissions depends on where it was emitted and where it travels as well as on the physical and chemical transformations that generate and remove PM 2.5 as it moves through the atmosphere. Marginal damage also depends in part on "intake fraction" [i.e., the fraction of emissions that are inhaled (6)], which varies by orders of magnitude among sources, depending on the size and proximity of populations to sources, and on the persistence of the pollution. Here, we use the Intervention Model for Air Pollution (InMAP) to calculate location-specific estimates of the marginal damages of emissions from all emission locations in the contiguous United States (7). These estimates form a series of matrices that describe linear relationships among multiple emission and impact locations; we call this the InMAP Source-Receptor Matrix (ISRM). We consider emissions of primary PM 2.5 and of four chemical species-ammonia (NH 3 ), nitrogen oxides (NO x ), sulfur dioxide (SO 2 ), and volatile organic compounds (VOCs)-that react to form secondary PM 2.5 in the atmosphere. This novel approach integrates a fine spatial scale with information on the long-range transport of emissions, producing results with finer resolution in densely populated urban areas (as small as 1 km × 1 km) and coarser resolution in rural areas (as large as 48 km × 48 km) for greater computational efficiency. This approach allows us to identify large spatial gradients in marginal damages that result from emission location and to model the substantial impacts of emissions experienced far downwind of sources. We present our results in terms of the incidence of premature mortality attributable to exposure to PM 2.5 and the monetary valuation (or damages) of these deaths. Calculating damages from individual sources of emissions requires three steps: (i) tracing the air quality impacts of emissions to downwind receptors, (ii) converting changes in pollution exposure to changes in mortality, and (iii) applying a monetary valuation for changes in the risk of mortality. Our key contribution is in step i, which encompasses providing location-specific fine-scale estimates of the mortality effects of PM 2.5 from marginal changes in emissions, tracing the health impacts back to where the emissions occurred, and applying the results to a national emission inventory, so as to quantify the impacts of specific emission sources and emission locations throughout the United States.
The methods we employ for steps ii and iii are straightforward state-of-knowledge approaches: a linear concentration-response (C-R) function that estimates changes in mortality from changes in exposure to PM 2.5 (8) and the value of a statistical life (VSL) (9) to translate increased mortality into monetary damages (see Methods and SI Appendix, section S1 for details). The use of monetized damages provides a broader context for understanding our estimates of exposure and of the impact of emissions and helps us compare our results with existing estimates in the literature (10,11). Results Our results are in five sections. First, we estimate the monetary marginal damages ($ t −1 ) at every emission source location in the United States. Those findings, which are the core of the ISRM, reveal the locations where a one-unit change in emissions will have the greatest impact on health. Second, we combine those results ($ t −1 ) with the National Emissions Inventory (i.e., t emitted) to understand total damages by emission location. Third, we explore total damages per sector of the economy. Fourth, we estimate where damages occur in terms of distance from each emission location. Fifth, we provide model validation and uncertainty analysis. Marginal Damages. Here, we estimate the marginal damages of emissions at every source location in the United States. Damages attributable to emissions at a specific location vary by pollutant and release height; we show here (Fig. 1) results for the most common release height for each pollutant (ground level for primary PM 2.5 , NH 3 , NO x , and VOC; high stacks for SO 2 ). (Results for other release heights are in the SI Appendix, Table S1.) For each pollutant, marginal damages vary widely among source locations, with marginal damages generally being higher for emissions released near population centers. Pearson correlation coefficients between population density at the emission location and marginal damages are highest for PM 2.5 and NH 3 emissions (0.76 and 0.74, respectively) and lowest for SO 2 emissions (0.13). The relatively low correlation for SO 2 occurs because this type of emission more frequently comes from high stacks and more time is required in the atmosphere for it to form secondary PM 2.5 , leading to a greater share of its impacts occurring far downwind of the source. Primary PM 2.5 , on the other hand, is often released at ground level and is already in fine particle form; consequently, a greater share of its impacts occurs near the source. Average marginal damages t −1 emitted are $94,000 for primary PM 2.5 , $40,000 for NH 3 , $13,000 for NO x , $24,000 for SO 2 , and $7,500 for VOC. The distributions of marginal damages exhibit positive skew, suggesting that a small quantity of emissions at the right tail of the distribution has very large marginal damages (SI Appendix, Table S1). Combining Marginal Damages with the National Emission Inventory. The previous section considers impacts per t emitted (ISRM); here, we combine the ISRM with estimates of actual emissions (t), taken from the US Environmental Protection Agency (EPA) 2011 National Emissions Inventory (NEI) (12) to reveal total damages. We then calculate the distribution of marginal damages, weighted by the quantity of anthropogenic emissions from each grid cell. We find that the marginal damages of emissions vary widely by source location and that emissions from the highest marginal damage sources, although low in total mass, account for a large share of total emission damages.
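As a rough illustration of how a source-receptor matrix is combined with a concentration-response function, a VSL, and an emissions inventory to produce marginal and total damages, the sketch below uses hypothetical file names, placeholder parameter values (the BETA and VSL shown are not the values used in the paper), and a linearized C-R relationship in place of the Krewski et al. hazard model; it is not the published ISRM code (InMAP itself is open source and written in Go).

import numpy as np

# srm[s, r]: annual-average PM2.5 increase (ug/m3) at receptor r per 1 t/yr
# emitted at source s, for one pollutant and release height (hypothetical file).
srm = np.load("srm_pm25_ground.npy")          # shape (n_sources, n_receptors)
pop = np.load("receptor_population.npy")      # people in each receptor cell
y0 = np.load("baseline_mortality_rate.npy")   # deaths per person per year

BETA = 0.0058   # illustrative C-R slope per ug/m3 (placeholder value)
VSL = 9.0e6     # illustrative value of a statistical life in USD (placeholder)

# Steps i-ii: deaths per tonne emitted at each source = sum over receptors of
# (exposure change) x (baseline deaths) x (C-R slope), linearized for a 1-t change.
deaths_per_ton = srm @ (BETA * y0 * pop)      # shape (n_sources,)

# Step iii: monetize with the VSL to obtain marginal damages ($ per t).
marginal_damage = VSL * deaths_per_ton

# Combine with an emissions inventory (t/yr in each source cell) to obtain total
# damages and to rank the most damaging source locations.
emissions = np.load("nei_2011_pm25_ground.npy")   # hypothetical NEI rollup
total_damage = marginal_damage * emissions
top10 = np.sort(total_damage)[::-1][: len(total_damage) // 10].sum()
print(f"total damages: ${total_damage.sum()/1e9:.0f}B; "
      f"share from top-10% source cells: {top10/total_damage.sum():.0%}")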
That finding emphasizes the importance of considering sources in terms of their impact, not just emissions. Impacts, measured as mortality and as monetary damages per t of emissions, can vary by an order of magnitude within a single county. The most harmful emissions per t are responsible for a substantial share of the total damages. For example, the top 1% and 10% most harmful primary PM 2.5 emissions are responsible for 17% and 54% of the total primary PM 2.5 damages, respectively. The damage per t of primary PM 2.5 for the 1% most harmful emissions is over $900,000; on average, every five t of these emissions are estimated to cause one additional case of premature mortality, a 400-fold greater premature mortality rate per t than that associated with the least harmful 1% of primary PM 2.5 emissions ($4,200 t −1 ; 2,000 t per premature mortality). The top 10% highest marginal damage emissions of NH 3 , NO x , SO 2 , and VOC account for 42%, 27%, 21%, and 37% of the total damages for each pollutant, respectively. For PM 2.5 and VOC, the most-damaging 10% of emissions mass is ∼15× more harmful (for NH 3 , NO x , and SO 2 , ∼5× more harmful) than the 10% least-damaging emissions mass. The highest marginal damage emissions are concentrated almost exclusively in high-population-density areas. InMAP's variable-grid-cell design can resolve intraurban-scale spatial gradients in damages; a gradient map at this spatial scale has not previously been produced for national-scale location-specific estimates (10,11). Here, we explore within-county variation in marginal damages in terms of the ratio of the marginal damages in the most- to least-damaging ground-level emission locations within each county. In the 10% most densely populated counties, comprising 58% of the total US population, the average marginal damage ratio within a county is 8.1 for primary PM 2.5 , 6.7 for NH 3 , 3.4 for NO x , 1.8 for SO 2 , and 5.8 for VOC. That is, in these densely populated counties, primary PM 2.5 is on average ∼8× more harmful per unit in one location than in another location within the same county. As an illustration, Fig. 2 shows the heterogeneity in marginal damages for emissions in two large metropolitan areas: Los Angeles and Seattle. For Los Angeles County, CA, InMAP uses >1,000 grid cells; estimated marginal damages range from $52,000 to $2,900,000 t −1 for primary PM 2.5 (i.e., a 56-fold difference). For King County, WA (which contains Seattle, WA), InMAP uses 374 grid cells, and the marginal damages for primary PM 2.5 span a 127-fold range: $7,000 to $890,000 t −1 . Total estimated annual damages from anthropogenic PM 2.5 are $886 billion, corresponding to 107,000 cases of premature mortality. Primary PM 2.5 constitutes the largest share of damages (38%); the four other pollutants are each associated with 12-19% of total damages. Damages by Economic Sector. Connecting the ISRM with an emissions inventory enables us to next explore the damages by economic sector and the variability of damages within a sector. Total damages and incidence of premature mortality by pollutant, economic sector, and emission height (left and right axes, respectively, of Fig. 3) reveal the multifaceted nature of this environmental risk factor: Many sources and pollutants contribute meaningfully to total PM 2.5 . Ground-level emissions dominate total impacts, of which primary PM 2.5 is the largest contributor. The single largest contribution to total anthropogenic damages (in Fig.
3) is ground-level release of NH 3 from agriculture (i.e., application and storage of manure; fertilizer use), contributing 12% of total impacts. Among impacts from elevated emissions, SO 2 from coal-fired power plants is the largest contributor, responsible for 58% of total damages from elevated emissions (11% of total damages). Combined, major sources associated with energy consumption [e.g., transportation (28%) and electricity generation (14%)] constitute 57% of total impacts. Although total damages from emissions of NH 3 and SO 2 are each dominated by a single sector (NH 3 : agriculture; SO 2 : coal-fired power plants), total damages from emissions of primary PM 2.5 , NO x , and VOC are not. As no one economic sector dominates total damages, sizable reductions in PM 2.5 air pollution require focusing on many sources of pollution. (See SI Appendix, Tables S2 and S3 for total and marginal damages by disaggregated sectors.) Next, we build on the sector-specific estimates by exploring within-sector distributions of marginal damages. Analogous to the findings above, here we find that, for a given sector and pollutant, marginal damages by sources often exhibit a wide range of values. For example, for gasoline-vehicle VOC, the 10% most-damaging emission locations have marginal damages greater than $22,000 t −1 , whereas the 10% least-damaging locations have marginal damages less than $2,200 t −1 , a gap of more than 10×. The 10th to 90th percentile range for marginal damages is $12,000-$320,000 t −1 for locations of primary PM 2.5 from residential wood burning (difference: >26×), $10,000-$58,000 t −1 for NH 3 emission locations from agriculture (>5×), $11,000-$33,000 t −1 for SO 2 emission locations from coal-fired electric power plants (3×), and $5,200-$29,000 t −1 for NO x emission locations from on-road diesel vehicles (>5×). For a specific sector or pollutant, there are potentially large health advantages and efficiency gains from targeting the highest-impact locations. This aspect is especially relevant for difficult-to-control sectors, such as agriculture, road dust, and residential wood burning: for those sectors, if nationwide emission controls are unlikely, an alternative approach is to target emission reductions in a small number of high-impact locations. In practical terms, this could mean focusing greater attention on local policy in high-impact locations rather than national policy. Impacts by Distance from Source Location. Results thus far have considered total damages by emission location, source, or species. In this section, we explicitly consider where damages occur. As described next, our results emphasize that local and long-distance components are both important for estimating total health impacts from PM 2.5 . We estimate, averaging across all locations, sources, and stack heights, and including primary and secondary PM 2.5 , that half of total PM 2.5 damages are incurred by people living within 32 km of a source (Fig. 4). (One-third of damages occur at locations within 8 km of the source; another one-quarter occur more than 256 km downwind of the source.) That finding emphasizes the benefits of the modeling approach employed here (InMAP and ISRM), which uses variably sized grid cells (as small as 1 km × 1 km).
In contrast, a typical spatial resolution for conventional air pollution models [chemical transport models (CTMs) or reduced-complexity models] applied nationally and for annual averages is 36 km × 36 km grid cells or county level (the average land area per county in the contiguous United States is ∼2,500 km 2 , analogous to 50 km × 50 km grid cells), which is too large to capture spatial gradients amounting to more than half of total damages. For environmental justice (EJ) analyses (e.g., consideration of which demographic groups inhale more or less pollution), an ability to capture near-source gradients may be especially important. In that case, a second implication of findings here is that conventional models may be too coarse to adequately investigate many EJ questions (13). Spatial variability differs by pollutant: For primary PM 2.5 , more than half of damages occur less than 16 km from the source; for SO 2 , more than half are experienced by people living farther than 200 km from the source. This result suggests that finer-resolution models are more important for primary PM 2.5 and likely are less important for SO 2 . Another implication is that for a community aiming to reduce its ambient PM 2.5 , local (e.g., county-level) action may be more successful for primary PM 2.5 than for SO 2 . Model Validation and Uncertainty Analysis. To evaluate the reliability of our model in predicting concentrations of ambient PM 2.5 , we compare observed year-2011 annual-average concentrations of PM 2.5 at EPA monitoring locations (14) with predicted concentrations from the ISRM, based on emissions from the 2011 NEI (Fig. 5). The average mean fractional bias (MFB) is −6%; the mean fractional error (MFE) is 36%. [These values reflect the combined impact of errors in the model (ISRM), emission inventory, and meteorological inputs.] Those bias/error values, which reflect annual-average observations at the 840 monitor locations throughout the United States, are well within published air quality model performance criteria: MFB ≤ ±60%, MFE ≤ 75% (15). That result supports the use of the ISRM to predict concentrations of ambient PM 2.5 . (InMAP performance is better for primary PM 2.5 and for SO 2 than for NH 3 and NO x ; details are in SI Appendix, section S3.2.) We next consider, in turn, uncertainty in the three main inputs to our calculations: the ISRM, the C-R function, and the VSL. First, we characterize error in the ISRM PM 2.5 concentration predictions and resulting mortality estimates as above: based on model-measurement comparisons (Fig. 5). Specifically, for the error in each ISRM spatial prediction of PM 2.5 concentration, we employ the model-measurement error at the nearest EPA monitor (see SI Appendix, section S1.5 for details and 95% confidence interval estimate of mortality using similar methods). The total estimated mortality from this sensitivity analysis is 99,000, or 8% less than our base-case estimate (107,000). Estimates from this method differ by sector; for example, mortality in the sensitivity analysis (relative to the base case) is 11% lower for emissions from industrial processes but only 3% lower for coal-fired electric generation. Second, we explore uncertainty in the C-R function, first by using a deterministic method (employing an alternative C-R function) and second by using a Monte Carlo simulation (adopting the 95% confidence intervals reported for the regression coefficients from the underlying epidemiological study). For the first method, we replace the base-case C-R [from Krewski et al. (8)] with the C-R from Nasari et al. (16).
The result is that total estimated mortality increases 21% to 129,000. This shows that our base-case estimates are of comparable magnitude but lower than the estimates using another authoritative C-R function. For the second method (Monte Carlo), the resulting 95% confidence interval [and interquartile range (IQR)] for total mortality is 44,000-171,000 (85,000-129,000). Third, we explore uncertainty in the VSL. The VSL employed here is the mean of estimates from several studies (9). We employ a Monte Carlo analysis by using the distribution of these studies' estimates. The resulting 95% confidence interval (IQR) for total damages is $90 billion to $2.3 trillion ($460 billion to $1.2 trillion). To summarize, 95% confidence intervals associated with the three main inputs for total damages (base-case estimate: $886 billion), in units of $billions, are 90-2,300 for the VSL, 360-1,400 for the C-R function, and 830-930 for the ISRM. (See SI Appendix, section S1.5 for details.) Those findings suggest that uncertainty is greatest for the VSL, smaller for the C-R function, and smallest for the ISRM. Discussion Here, we estimate the mortality impacts of PM 2.5 air pollution in the United States. Our approach advances the science by (i) developing a fine-scale source-receptor matrix (the ISRM), which simulates impacts near to the source and far from the source, (ii) using the ISRM to explore impacts by emission location, chemical species, and source category, and (iii) studying this topic nationally, with unprecedented spatial resolution. This approach was possible because of the computational efficiency of InMAP; analyses here would not be feasible with conventional CTMs (see Methods section). The existing literature on intake fraction documents that damages t −1 can vary widely, depending on release location (17)(18)(19). Some of our results have similar utility as intake-fraction values but usefully extend beyond that literature by, for example, calculating health impacts and monetized damages, accounting for all PM 2.5 (primary and secondary) from all sectors of the economy, employing a much finer spatial resolution for a national analysis, developing the source-receptor model, and integrating the source-receptor model with the NEI. Our estimates of total damages from anthropogenic PM 2. Our results emphasize the benefits of finer-scale spatial resolution, relative to the typical spatial resolution of conventional models. To further explore this issue, we recalculate our core results but using coarser fixed-size grid cells of 48 km × 48 km, rather than the smaller variably sized grid cells of our main approach (see SI Appendix, section S1.6 for details). Resulting estimates for total damages from PM 2.5 are ∼20% lower with the coarser grid than with our main approach; analogous differences are larger for mobile sources (27% lower) and residential wood burning (34% lower) with nearly zero difference for emissions from coal-fired electricity generation. The difference with the coarser grid compared with our main approach is nearly zero for low-damage locations and for elevated sources but is relatively large for highdamage locations. For example, the highest estimated marginal damages t −1 for primary PM 2.5 are $523,000 (coarser grid) vs. $919,000 (main approach). Thus, the sensitivity analysis supports the use of smaller grid cells for modeling spatial variability in damages and especially for discovering high-impact locations. 
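Returning to the Monte Carlo treatment of the C-R coefficient and the VSL described above, the propagation can be sketched as follows; the distributions, the aggregated exposure term, and every numeric value in this snippet are placeholders chosen for illustration, not the inputs used in the paper.

import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# C-R slope: mean and standard error backed out from a reported 95% CI (placeholder).
beta_mean, beta_se = 0.0058, 0.0010            # per ug/m3
beta_draws = rng.normal(beta_mean, beta_se, N)

# VSL: resample from a set of published study estimates (placeholder values, USD).
vsl_studies = np.array([2.0, 5.5, 7.4, 9.0, 11.8, 13.2]) * 1e6
vsl_draws = rng.choice(vsl_studies, size=N, replace=True)

# Aggregated exposure term: sum over receptors of population x baseline mortality
# rate x modeled delta-PM2.5 (collapsed to a single placeholder number here).
exposure_term = 1.5e7

deaths = beta_draws * exposure_term            # premature deaths per draw
damages = deaths * vsl_draws                   # monetized damages per draw

print("mortality 95% CI:", np.round(np.percentile(deaths, [2.5, 97.5]), -3))
print("damages ($B) 95% CI:", np.round(np.percentile(damages, [2.5, 97.5]) / 1e9))

Because the VSL and C-R draws enter the damage calculation multiplicatively, their spreads dominate the width of the resulting confidence intervals, which is consistent with the ranking of uncertainties summarized above.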
The approach we present here has several limitations in addition to the uncertainties highlighted in the Results section. We do not account for differing effects of air pollution by season, and our model currently does not track all harmful air pollutants, such as ozone. Seasonal differentiation may be important where emissions and rates of PM 2.5 formation both vary by season [e.g., seasonal fertilizer application (agricultural emissions) in an area where ammonium or nitrates are rate-limiting species during different times of the year]. InMAP partially accounts for seasonality in how it tracks annual-average impacts, but if a location has emissions that exhibit seasonal patterns, use of an annual-average impact for that location could induce bias in the estimated impacts. This aspect is worthy of investigation and quantification using a different model than the one employed here. Exposure to ozone is also associated with increased risk of premature mortality, but those risks are generally small compared with the estimated risk from PM 2.5 . For example, Fann et al. (20) estimate that attributable mortalities are ∼30× greater for PM 2.5 than for ozone. Many minor local emission sources can contribute to ambient pollution, including fireplaces, cooking, and lawn care. Our approach includes those sources, which are in the NEI, but our model does not capture near-source exposures for which the relevant exposure travel distance is much less than the length scale of our model (1 km to 48 km). Such exposures include, for example, a cook directly inhaling grilling exhaust, a pedestrian directly inhaling emissions from a nearby vehicle's exhaust plume, or lawnmower-engine exhaust being directly inhaled by the person mowing a lawn. These ultra-near-source exposures are high concentration but generally short duration. Our approach does not include direct indoor inhalation of indoor sources; in some circumstances (e.g., "second-hand" cigarette smoke), indoor exposures can dominate total exposures. We use the VSL from the US EPA to convert changes in mortality risk to monetary damages. This approach is a common if controversial method. Other literature review estimates of the VSL are consistent with the EPA VSL employed here (24,25). Alternative (i.e., non-VSL) valuation methods are available, for example, considering years of life lost ("value of a statistical life year") or accounting for morbidity and mortality using "disability" adjustment factors ("value of a disability-adjusted life year") (26). These are important considerations that deserve attention in future analyses. Uncertainty is relatively large in the C-R function and in the VSL: the range of the 95% confidence intervals is a factor of 4 and a factor of 25, respectively. Such uncertainties are inherent in estimates for any one location, emission source, or pollutant; however, they do not impact the relative damages for one source compared with another source. There is potentially spatial and demographic variabilities in the C-R function and the VSL as well. For example, perhaps people in a certain neighborhood are highly susceptible to health impacts from air pollution. In that case, emission locations that lead to pollution in that neighborhood would have greater-than-average impacts. The same may be true for certain group's valuation of increased risks of mortality. 
To the extent that this variability can be estimated, there is also an ethical consideration regarding how that variability should be included in the types of analyses we produce here. Similarly, as fine-scale estimates of pollution exposure become available, policies that use this information to target reductions in certain locations and not others raise important questions of fairness in environmental quality. PM 2.5 is the largest environmental risk factor in the United States, causing >100,000 premature deaths per year-more than traffic accidents and homicides combined (27). Reducing PM 2.5 concentrations is aided by prioritizing among emission sources: which sources to reduce and by how much. The fine-scale damage estimates given here reveal new opportunities for location-specific mitigation of emissions. However, any policy implementation would need to consider trade-offs between the benefits of targeted emission reductions and the additional regulatory burden caused by location-specific policy. The ISRM is novel in connecting ambient concentrations and damages with the emission locations, sources, and species causing those concentrations and damages nationally and at a spatial resolution not previously possible. The new spatial resolution reveals, at a national level, large spatial gradients in damages, including within county and within urban. These new results are useful for (i) more-efficient environment policy (i.e., using emissionreduction policies, permitting decisions, and enforcement actions to reduce highest-impact sources, locations, and species), (ii) investigating EJ (i.e., understanding which groups are more/ less exposed and proposing policies to address potential undue burdens), and (iii) correctly estimating the magnitude of damages because results here account for near-source and long-range exposures. We have made the ISRM freely available online (28) with the hope that researchers and practitioners will find it useful for studying connections between changes in emissions and changes in concentrations and damages. Methods The primary innovation of this paper is creating the ISRM, a dataset containing estimates of linear relationships between marginal changes in emissions at every source location and marginal changes in annual-average PM 2.5 concentrations at receptor locations. Because of computational intensity, our approach would be infeasible using a conventional air pollution model. We built the ISRM by running InMAP >150,000 times (7), each time inputting a 1-t emission change from a single grid cell. In total, our analyses required 46 d of model run time. An analogous set of runs using a CTM would take ∼2,000 y with contemporary computational software based on the Weather Research and Forecasting/Chem model configuration used to inform InMAP (29) (see SI Appendix, section S2 for details). The results of each InMAP run describe the isolated impact of a 1-t emission change at the source upon PM 2.5 concentrations at every receptor grid cell in the model. This process is repeated for all 52,411 grid cells in InMAP and for each of three effective emission heights: ground level (emissions between 0 and 57 m), low (57-379 m), and high (>379 m). InMAP is designed with grid cell sizes that, for computational efficiency, vary based on spatial gradients in population density. The primary grid cell unit is 48 km × 48 km and is used in sparsely populated regions to achieve greater computational efficiency. 
For areas with progressively denser populations, the grid cells have dimensions with 24-, 12-, 4-, 2-, and 1-km sides. The ISRM, as described here [version 1.2.1, freely available for download at zenodo.org (28)], was created using InMAP version 1.2.1 (https://github.com/spatialmodel/inmap). Here, we estimate the marginal monetary damages associated with premature mortality owing to emission of an additional t of a pollutant at a location. We adopt a linear C-R function to convert changes in PM 2.5 concentrations into adult all-cause premature mortality (8). We use the US EPA recommended VSL of $8.3 million in year-2011 US dollars to assign monetary values to changes in the risk of mortality caused by pollution (9). To calculate total damages, we multiply marginal damages by the total anthropogenic emissions in each grid cell, taken from the US EPA 2011 NEI (12). We estimate anthropogenic emissions of each of the five pollutants for each InMAP grid cell, each emission height, and each of 12 sector groupings. Additional details on the methods are in SI Appendix, section S1.
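As a rough sketch of the damage pipeline described in this Methods section, the code below chains the three steps: an ISRM row converts a 1-t emission change into concentration changes at receptors, a linear C-R function converts those changes into attributable deaths, and the VSL converts deaths into dollars. The array sizes, population, baseline mortality rate, and C-R slope are assumed placeholder values (only the $8.3 million VSL is taken from the text), and the real ISRM would be loaded from the published dataset rather than allocated as zeros.

```python
import numpy as np

# Toy dimensions for illustration; the actual ISRM covers 52,411 variably sized grid cells.
n_sources, n_receptors = 1_000, 1_000
# isrm[i, j]: change in annual-average PM2.5 (ug/m3) at receptor j per 1 t/yr emitted from source i.
isrm = np.zeros((n_sources, n_receptors))      # placeholder; load the published ISRM in practice
population = np.full(n_receptors, 6_000.0)     # people per receptor cell (assumed)
baseline_rate = 0.008                          # annual all-cause deaths per adult (assumed)
beta = 0.0058                                  # assumed linear C-R slope per ug/m3
vsl = 8.3e6                                    # $ per death, year-2011 USD (value cited in the text)

def marginal_damage(source_idx: int) -> float:
    """Dollars of damage per additional tonne emitted at one source location."""
    d_conc = isrm[source_idx]                               # ug/m3 change at every receptor
    d_deaths = population * baseline_rate * beta * d_conc   # linearized attributable mortality
    return float(vsl * d_deaths.sum())

def total_damages(emissions_t: np.ndarray) -> float:
    """Marginal damages weighted by actual emissions (t/yr) in each source cell."""
    per_tonne = np.array([marginal_damage(i) for i in range(n_sources)])
    return float(per_tonne @ emissions_t)
```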
Substance P-driven feed-forward inhibitory activity in the mammalian spinal cord

In mammals, somatosensory input activates feedback and feed-forward inhibitory circuits within the spinal cord dorsal horn to modulate sensory processing, thereby affecting sensory perception by the brain. Conventionally, feedback and feed-forward inhibitory activity evoked by somatosensory input to the dorsal horn is believed to be driven by glutamate, the principal excitatory neurotransmitter in primary afferent fibers. Substance P (SP), the prototypic neuropeptide released from primary afferent fibers to the dorsal horn, is regarded as a pain substance in the mammalian somatosensory system due to its action on nociceptive projection neurons. Here we report that endogenous SP drives a novel form of feed-forward inhibitory activity in the dorsal horn. The SP-driven feed-forward inhibitory activity is long-lasting and has a temporal phase distinct from glutamate-driven feed-forward inhibitory activity. Compromising SP-driven feed-forward inhibitory activity results in behavioral sensitization. Our findings reveal a fundamental role of SP in recruiting inhibitory activity for sensory processing, which may have important therapeutic implications in treating pathological pain conditions using SP receptors as targets.

Feedback/feed-forward inhibitory modulation driven by glutamate has been well studied in the dorsal horn of the spinal cord [1-3]. Little is known about whether feedback/feed-forward inhibitory activity may be driven in a glutamate-independent manner. A number of neuropeptides including substance P (SP) are also released from nociceptive primary afferent fibers [4]. SP has been regarded as a pain substance for decades [5-7], as supported by studies including chemical ablation of lamina I neurons expressing the SP receptors [8] and genetic disruption of the genes encoding substance P [9] and its receptors [10]. The nociceptive function of SP is mainly attributed to the activation of NK1 receptors (NK1R) that are expressed on nociceptive projection neurons located in lamina I of the dorsal horn [8,11,12]. It is unknown whether endogenously released SP can directly drive, in a glutamate-independent manner, inhibitory activity within the spinal cord to control nociceptive responses. We performed patch-clamp recordings from dorsal horn neurons in lamina V (Figure 1a), a region important for nociceptive transmission and modulation [1,2]. When primary afferent fibers (dorsal roots) were briefly stimulated electrically (500 µA, 5 stimuli in 2.5 sec), EPSCs (excitatory postsynaptic currents) were recorded from lamina V neurons (Figure 1b). All EPSCs were blocked by the ionotropic glutamate receptor antagonists 20 µM CNQX plus 50 µM APV (Figure 1b) or by 3 mM kynurenic acid [3]. Brief stimulation of primary afferent fibers also evoked IPSCs (inhibitory postsynaptic currents). These immediate IPSCs (Figure 1c top) were driven by glutamatergic synaptic input, or glutamate-driven feed-forward inhibitory activity, because they were completely abolished in the presence of CNQX plus APV (Figure 1c bottom). However, when prolonged stimulation was applied (500 µA, 20 Hz, 1 min), a robust and long-lasting increase of IPSC frequency and amplitude was recorded in the presence of CNQX plus APV (n = 5, Figure 1d-f) or kynurenic acid (KA, n = 7, Figure 2c). These results revealed a feed-forward inhibitory (FFI) pathway not driven by glutamate.
We used capsaicin, the active ingredient of hot chili peppers, to stimulate primary afferent fibers. Capsaicin is widely used as a natural stimulant for studying nociception. It excites nociceptive primary afferent fibers to release glutamate and neuropeptides including substance P through activation of TRPV1 receptors [13-15]. Capsaicin (2 µM) produced a robust and long-lasting increase in IPSC frequency and amplitude in the presence of 3 mM kynurenic acid (Figure 1g-j). The capsaicin effects were similar in the presence of kynurenic acid or other glutamate receptor antagonists (Additional file 1, Figure 1a-c), indicating that the effects were unlikely due to an incomplete block of glutamate-driven FFI. Inhibitory neurons in lamina V use both GABA and glycine as co-transmitters [16], and increases of IPSCs by capsaicin were completely abolished in the presence of 20 µM bicuculline and 2 µM strychnine (n = 8). It is unknown whether transmitters other than glutamate, released from primary afferent fibers, can directly drive inhibitory circuitry in the spinal cord. If a transmitter can drive FFI, exogenous application should increase inhibitory activity. We examined neuropeptides thought to be released from primary afferent fibers. Galanin (300 nM), NPY (neuropeptide Y, 1 µM), somatostatin (2 µM), and CGRP (calcitonin gene-related peptide, 0.5 µM) were tested, but none increased IPSCs (Figure 2a). However, SP significantly increased inhibitory activity under conditions when ionotropic glutamate receptors were blocked; SP increased IPSC frequency to ~350% of control (Figure 2a, n = 6) and amplitude to ~200% of control (n = 6). NK1Rs couple with either the pertussis toxin (PTX)-insensitive Gq/G11 family [17] or the PTX-sensitive Gi/Go family depending on cell type [18,19]. To elucidate which type of G-protein was involved in SP-driven FFI, PTX was tested. We found that capsaicin-induced increases in inhibitory synaptic activity were completely abolished when spinal cord slices were pretreated with PTX (Figure 2b). Capsaicin-induced increases of inhibitory synaptic activity were also completely blocked in the presence of NEM (N-ethylmaleimide), a Gi/Go protein inhibitor (Figure 2b). Thus, a PTX-sensitive G-protein is involved in SP-driven FFI. Possible cellular mechanisms of SP-driven FFI include (i) direct excitation of inhibitory neurons, (ii) excitation via intermediate steps, and/or (iii) synaptic modulation. If NK1Rs are expressed on dorsal horn inhibitory interneurons [20], SP may directly excite inhibitory neurons. To test this possibility, we used dorsal horn neuron cultures made from GIN mice, a strain of transgenic mice that express EGFP (enhanced green fluorescent protein) under control of a promoter for GAD67 [21]. In GIN mice, almost all EGFP neurons examined in the dorsal horn are inhibitory neurons [22]. As shown in Figure 2e, SP (100 nM) increased intracellular Ca2+ in ~30% (23/77) of EGFP neurons tested in the presence of 500 nM TTX and 3 mM kynurenic acid.

Figure 1 (caption): Feed-forward inhibitory activity in the absence of glutamatergic driving force. Horizontal bars indicate stimulation. Overall, at peak responses, IPSC frequency increased to 376 ± 47% of control (n = 5, P < 0.05); IPSC amplitude increased to 228 ± 74% of control (n = 5, P < 0.05). Similar results were also obtained in the presence of 3 mM kynurenic acid (see Figure 2c). g-j, Capsaicin-induced increases in inhibitory activity in the absence of glutamatergic driving force. g, The top trace is a continuous recording of IPSCs from a rat lamina V neuron before and following the application of 2 µM capsaicin in the presence of 3 mM kynurenic acid. The bottom two traces are at an expanded scale. h, The time course of IPSC frequency in (g); bin width: 10 s. i & j, Capsaicin-induced increases in IPSC frequency (i) and amplitude (j) recorded from 6 rat lamina V neurons in the presence of 3 mM kynurenic acid.

We determined whether EGFP neurons in lamina V responded to SP using spinal cord slices prepared from GIN mice (Figure 2f). Most EGFP neurons recorded (64%) showed non-adaptive action potential firing in response to membrane depolarization (Figure 2g). Of 22 EGFP neurons examined, 7 (~30%) responded to 1 µM SP with prolonged membrane depolarization (5 ± 1 mV, n = 7) and action potential firing (Figure 2h). These results suggest that a cellular mechanism of SP-driven FFI is direct excitation of inhibitory interneurons by SP. We found that SP (Additional file 1, Figure 3a-c) and capsaicin (n = 12) had no effect on mIPSCs. SP also did not affect the paired-pulse eIPSC ratio or the corresponding eIPSC ratio (Additional file 1, Figure 3d-f). These results suggest that SP/NK1R-mediated increases of IPSCs represent feed-forward neuronal activity rather than pre- or post-synaptic modulation at inhibitory synaptic junction sites. We evaluated the extent to which SP-driven inhibitory activity contributes to the total inhibitory activity under normal conditions, i.e., without blocking glutamate-driven FFI. We also compared temporal phases between SP-driven FFI and glutamate-driven FFI. In NK1R+/+ mice, IPSC frequency and amplitude were increased after trains of electrical stimulation (Figure 3a,c,d), similar to the results when the glutamatergic driving force was blocked (Figure 1a-f and Figure 2c). In contrast, in NK1R-/- mice, IPSCs were not significantly changed after the same trains of stimulation (Figure 3b,c,d). We examined IPSCs during electrical stimulation and found that, in both NK1R+/+ and NK1R-/- mice, IPSCs were elicited pulse-by-pulse immediately following each stimulus. These immediate IPSCs represented glutamate-driven FFI because they could be blocked by ionotropic glutamate receptor antagonists (see Figure 1c). Since the pulse-by-pulse inhibitory activity was seen in both NK1R+/+ and NK1R-/- mice, but the long-lasting increases in IPSCs after trains of stimulation were only observed in NK1R+/+ mice, this suggests that the latter are driven by substance P through NK1R activation. Similar to electrical stimulation, a large and long-lasting increase in inhibitory synaptic activity was observed in NK1R+/+ mice but not in NK1R-/- mice after capsaicin stimulation in the absence of ionotropic glutamate receptor antagonists (Figure 3e). Thus, SP-driven FFI and glutamate-driven FFI have distinct temporal phases. One physiological role of SP-driven FFI may be to balance neuronal activity and counteract SP-mediated nociceptive responses in the dorsal horn. To examine this potential physiological function, a behavioral model was used to see if blockade of SP-driven FFI, using an NK1R antagonist, causes behavioral sensitization to nociceptive stimuli. However, an NK1R antagonist will also block SP-mediated nociceptive responses, thus interfering with the observation of a functional change following blockade of SP-driven FFI.
To solve this complication, we chemically ablated NK1R-expressing neurons in the superficial lamina ( Figure 4a&b); most of these neurons are nociceptive projection neurons responsible for SP-mediated nociception [8] and SP-evoked descending modulation [23]. Ablating NK1R-expressing neurons in the superficial lamina was achieved by intrathecally applying substance Pconjugated saporin (SP-SAP) [8], a targeted toxin for NK1R-expressing neurons (Figure 4a&b). In these animals, NK1R-expressing neurons in deep laminae remain intact or less affected [8,12,24]. To verify that SPdriven FFI remains intact, we used spinal cord slices prepared from SP-SAP treated animals and made recordings from lamina V neurons. Capsaicin was found to increase IPSCs to a similar degree in animals with (Figure 4c) or without SP-SAP treatment (Figure 1g-j, Supplementary Figure 1), indicating that the ablation did not affect SPdriven FFI in lamina V. SP-SAP treated animals were used to access if the SPdriven FFI plays a role in controlling nociceptive behavioral responses. Reflexive lick/guard (L/G) responses to nociceptive heat stimuli at 44.5°C [24] were determined. Both the control and SP-SAP groups showed similar baseline responses to noxious stimuli (Figure 4d) [8]. Control rats showed behavioral sensitization following application of capsaicin cream to the planter surface, but a substantial attenuation of behavioral sensitization was observed in parallel experiments carried out in SP-SAP animals [8]. To examine whether the NK1-expressing neurons in deeper laminae of SP-SAP animals may intrinsically control behavioral responses to nociceptive heat stimuli, the behavioral responses were determined following blockade of NK1Rs by its antagonist CP-96,345 (36 nmol). Nociceptive reflexes showed sensitization when CP-96,345 was applied in SP-SAP animals, but behavioral hypersensitivity was attenuated by CP-96,345 in control animals (Figure 4d). The opposite effects of NK1R antagonists between normal and SP-SAP animals indicate a dual function of NK1Rs in nociceptive processing in vivo. The behavioral sensitization by the NK1R antagonist in SP-SAP animals revealed a role of SP-driven FFI in controlling nociceptive responses. SP-driven FFI is a novel sensory processing mechanism. The unique feature is its temporal phase that extends long time after stimulation. This is distinct from glutamatedriven feedback/feed-forward inhibitory activity. Compromising SP-driven FFI can result in sensory hypersensitivity, providing implications in sensory pathology and therapeutics that targets neurokinin system [8,12]. Figure 3 SP-driven inhibitory activity under conditions when glutamatergic driving force is intact. All experiments were performed in bath solution without glutamate receptor antagonists. a, A continuous recording of IPSCs from a lamina V neuron of a NK1R +/+ mouse. Four traces (bottom) show, at an expanded scale, the IPSCs before, during, 1 sec after, and 9 min after trains of stimulation. The trace during stimulation is at a more expanded scale to show pulse-by-pulse eIPSCs. b, Same as a except the experiment was performed on a NK1R -/mouse. The pulse-by-pulse eIPSCs (second trace of lower panel) were similar to those of NK1R +/+ mice, but ISPCs returned to the basal level immediately after termination of the train stimulation (third trace of lower panel). c, Time course of IPSC frequency (top) and amplitude (bottom). IPSCs during stimulation are not included. 
d, Peak IPSC frequency and amplitude after trains of stimulation in NK1R +/+ (n = 6) and NK1R -/mice (n = 8). In a-d, stimulation was applied at intensity of 500 µA and a frequency of 20 Hz. e, Capsaicin-induced changes of IPSCs in lamina V neurons of NK1R +/+ mice (n = 6) and NK1R -/mice (n = 16). To elicit feed-forward inhibitory activity by electrical stimulation, dorsal roots were stimulated electrically through a suction electrode. Stimulation was applied at an intensity of ~500 µA and pulse duration of 100 µsec. Unless otherwise indicated, stimulation was applied in a train of pulses that had a frequency of 20 Hz and duration of 1 min. Recordings were performed in the bath solution containing (in mM) 117 NaCl, 3.6 KCl, 4 CaCl 2 , 0.5 MgCl 2 , 1.2 NaH 2 PO 4 , 25 NaHCO 3 , 11 glucose, equilibrated with 95% O 2 and 5% CO 2 . To examine whether SP had effects on evoked IPSCs, paired-pulse evoked IPSCs were examined before and following application of 1 µM SP. Paired-pulse evoked IPSCs were elicited by focal stimulation in lamina V near the recorded neurons. Stimuli were applied at the intensity of 50-150 µA, pulse duration of 100 µs, and paired-pulse interval of 100 ms. The interval between two sets of paired-pulses was 10 s. Calcium Imaging was performed on dorsal horn neuron cultures (5-7 days) made from neonatal GIN mice [26]. Cells were perfused with bath solution containing (in mM): 150 NaCl, 5 KCl, 2 MgCl 2 , 2 CaCl 2 , 10 glucose, 10 HEPES, pH 7.4; 500 nM TTX and 3 mM kynurenic acid. EGFP neurons were first identified and an image was taken. Cells were then loaded with the Ca 2+ indicator Fluo-3 on the stage of microscope. Subsequently, calcium imaging was performed [27], the effect of SP (100 nM) on EGFP neurons was tested. To chemically ablating NK1R-expressing lamina I neurons with SP-SAP [8,24], a 32 g catheter was inserted into the lumbosacral subarachnoid space (L6-S1) of adult rats (250-300 g) [28] and SP-SAP (300 ng, substance P-conjugated saporin, Advanced Targeting System) was injected through the catheter to the lumbar enlargement. Fourteen days after this procedure, animals were used for in vitro electrophysiological recordings or in vivo behavioral tests. Controls were animals after sham operation. Behavioral tests were performed on 8 SP-SAP treated animals and 8 control animals. Reflexive lick/guard responses were assessed in two consecutive ten-minute trials involving 36.0°C (pre-test) trial and then a 44.5°C (test) [24,29]. Lick responses were defined as a stereotyped lifting of the hindlimb followed by holding and licking the hindpaw. Guard responses were defined as an exaggerated raising of the hindlimb. Peripheral sensitization of behavioral responses was induced by application of capsaicin cream (1%) to the planter surface of one hindpaw. Reflexive responses were assessed three hours after application. To test the effects on behavioral responses following blockade of NK1 receptors, CP-96,345 (36 nmol), an NK1 antagonist was applied through the catheter 10 min before behavioral tests. NK1 receptor immunostaining was performed after behavioral tests to confirm the effective removal of NK1Rexpressing lamina I neurons in SP-SAP treated animals. NK1R immunostaining was performed using a polyclonal anti-NK1R serum (1:3000) on a series of sections (100 µm in thickness) cut from L5 of the spinal cord. Analysis of synaptic events, including threshold setting and peak identification criteria, were performed according to a method previously described [26]. 
For calcium imaging experiments, responsive neurons are defined as those with ∆F/F0 > 20%. The durations of behavioral responses were collected by custom software (EVENTLOG) across testing sessions for all rats [24,29]. Unless otherwise indicated, data represent mean ± SEM; *p < 0.05, Student's t-test. Statistical analysis of behavioral responses was performed by ANOVA, followed by Newman-Keuls post-tests.
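The responsiveness criterion used in the calcium-imaging analysis (∆F/F0 > 20%) is easy to express in analysis code. The sketch below is illustrative only; the baseline window, frame counts, and synthetic traces are assumptions rather than the authors' actual pipeline.

```python
import numpy as np

def is_responsive(trace: np.ndarray, baseline_frames: int = 30, threshold: float = 0.20) -> bool:
    """Responsive if the peak dF/F0 after the baseline window exceeds the 20% criterion."""
    f0 = trace[:baseline_frames].mean()             # baseline fluorescence before SP application
    df_f0 = (trace[baseline_frames:] - f0) / f0     # fractional change relative to baseline
    return bool(df_f0.max() > threshold)

# Synthetic demonstration: 77 EGFP neurons, 300 frames each; 23 given an SP-like transient.
rng = np.random.default_rng(1)
traces = rng.normal(1.0, 0.02, size=(77, 300))
traces[:23, 100:160] += 0.35
responders = sum(is_responsive(t) for t in traces)
print(f"{responders}/{len(traces)} neurons responsive (dF/F0 > 20%)")
```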
Effect of Intracanal Medicaments on the Bond Strength of Bioceramic Root Filling Materials to Oval Canals Ab s t r Ac t Aim: This study aimed to evaluate the effect of the prior application of intracanal medicaments on the bond strength of OrthoMTA (mineral trioxide aggregate) and iRoot SP to the root dentin. Materials and methods: Thirty single-rooted mandibular premolars were standardized and prepared using ProTaper rotary files. The specimens were divided into a control group and two experimental groups receiving Diapex and Odontopaste medicament, either filled with iRoot SP or OrthoMTA, for 1 week. Each root was sectioned transversally, and the push-out bond strength and failure modes were evaluated. The data were analyzed using Kruskal Wallis and Mann–Whitney U post hoc test. Results: There was no significant difference between the bond strength of iRoot SP and OrthoMTA without medicaments and with the prior placement of Diapex (p value > 0.05). However, iRoot SP showed significantly higher bond strength with the prior placement of Odontopaste (p value < 0.05). Also, there was no association between bond strength of OrthoMTA with or without intracanal medicament (p value > 0.05) and between failure mode and root filling materials (p value > 0.05). The prominent failure mode for all groups was cohesive. Conclusion: Prior application of Diapex has no effect on the bond strength of iRoot SP and OrthoMTA. However, Odontopaste improved the bond strength of iRoot SP. Clinical significance: Dislodgment resistance of root canal filling from root dentin could be an indicator of the durability and prognosis of endodontic treated teeth. IntroductIon The crucial goal of endodontic treatment is preventing and eradicating apical periodontitis, 1 and long-term success of this treatment depends on the level of root canal disinfection as well as proper obturation of the root canal. 2,3 Gutta-percha in combination with a root canal sealer is the contemporary used obturation materials, although it cannot provide an adequate apical sealing and bonding to the root dentin. 4 In 2009, a bioceramic OrthoMTA (BioMTA, Seoul, Republic of Korea) was introduced to be used as grafting material to fill the entire root canal without gutta-percha. The manufacturer claimed that OrthoMTA can prevent microleakage by forming an interfacial layer of hydroxyapatite between the material and the dentinal wall. 5 Furthermore, it has a bioactive feature via releasing calcium ions throughout the apical foramen. It was reported that OrthoMTA contains mainly tricalcium silicate and has less heavy metals. 6 Moreover, it releases calcium ions that help to persuade regeneration of the periapical tissues. Additionally, its sealing ability is appropriate and comparable to AH Plus/Gutta-percha. 7 iRoot SP (Innovative Bioceramix, Vancouver, Canada) is another premixed, hydrophilic, injectable bioceramic material composed of calcium silicates, zirconium oxide and calcium hydroxide. The manufacturer claims that iRoot SP has numerous ideal characteristics, including good adhesion, hydrophilic, and osteoconductive, along with its capacity to bond to the radicular dentin chemically. 8 Furthermore, iRoot SP can be used solitary or in combination with gutta-percha to fill the root canal. Disinfection of the root canal is a tedious goal; therefore, association of the mechanical preparation with irrigation solutions and intracanal medicaments was proposed to reduce the infection of the root canal. 
9 Currently, different types of intracanal medicaments are available. Odontopaste® (Australian Dental Manufacturing, Kenmore Hills, Queensland, Australia) is a zinc oxide-based intracanal medicament, and according to the manufacturer, it is composed of a broad-spectrum antibiotic clindamycin hydrochloride, a steroid-based anti-inflammatory agent triamcinolone acetonide and calcium hydroxide. It temporarily reduces inflammation and postoperative pain. 10 Diapex® Plus (DiaDent, Diadent group international, Korea) is a premixed Calcium hydroxide with added Iodoform. Combinations of calcium hydroxide with iodoform have been demonstrated to achieve a wide-spectrum antimicrobial outcome. 11 The challenge with the use of intracanal medicaments is the difficulty to remove them completely from the root canal. This could act as a physical barrier between radicular dentin and the material that negatively influence the adhesion and penetration of the filling materials into the dentinal walls. 12,13 On the contrary, prior placement of intracanal medicaments was demonstrated to enhance the adhesion of the root filling material or have no effect on it. 13,14 No abundant information about the effect of these medicaments on the bond strength of bioceramic root filling materials. Therefore, we aimed to evaluate the effect of Diapex and Odontopaste medicaments on the bond strength of OrthoMTA and iRoot SP to root dentin. Preparation and Obturation Thirty single-rooted mandibular premolars that were extracted for orthodontic reasons were used. The teeth without fracture, previous restoration, dental caries, and external resorption were selected and disinfected by using 0.05% of chloramine T-trihydrate. After cleaning the external root surface with ultrasonic scaler, the teeth were radiographed from mesiodistal and buccolingual directs. The teeth that have a ratio of the long to the short canal diameter ≥2 were selected to ensure the oval-shaped canal, as it was reported that prevalence of oval canal is common in the human teeth. 15 After that, the teeth were decoronated and the root length was standardized at 16mm. Working length was measured by deducted 1 mm short of apical foramen with K-file (Dentsply Maillefer, Switzerland). All root canals were instrumented using ProTaper universal files (Dentsply Maillefer, Switzerland) until size F3 and irrigated with 5.25% NaOCl throughout preparation. Final irrigation was achieved using 3 mL of 17% EDTA for 1 minute followed by 5 mL of distilled water. The specimens were then randomly divided into three groups (n = 10) based on the intracanal medicaments used as follows: • Group I: No intracanal medicaments (control group). • Group II: DiaPex® Plus (Calcium hydroxide, Diadent Group International, Korea). • Group III: Odontopaste® (Australian Dental Manufacturing, Kenmore Hills, Qld, Australia). All intracanal medicaments were applied according to the manufacturer using a lentulo spiral, and the orifice was sealed using IRM (Dentsply, Caulk, USA). The specimens were incubated for 1 week at 37°C with 100% humidity. After 1 week, rinsing of the medicaments was done using 10 mL 17% EDTA followed by 10 mL 5.25% NaOCl, and a final irrigation of 5 mL distilled water. After that, the canals were dried using paper points (Dentsply Maillefer). Each group was subdivided into two subgroups based on the assigned root filling materials: iRoot SP and OrthoMTA. The filling materials were according to the manufacturer instructions. 
Radiographs were taken mesiodistally and buccolingually to confirm complete filling. The orifice was covered with IRM and incubated for 2 weeks at 37°C with 100% humidity to ensure setting of the materials. Push-out Bond Test Each root was embedded in cold-cure epoxy resin (Mirapox A and B; Miracon, Malaysia). After setting, the specimen was sectioned transversally using a water-cooled precision diamond saw (Metkon-Micracut 125 low speed precision cutter). The cutting disk was placed perpendicular to the root long axis, and 5 mm from the apex of each root was discarded due to the small size of the filling material and the possibility of round cross-section of the root canal in this level. The rest of the root was cut to obtain 5 root slices (n = 25 sections/group) with 1 mm ± 0.01 thickness. For accuracy of the calculation, the thickness of each slice was gauged using a digital calliper (Mitutoyo/ Digimatic, Tokyo, Japan). Universal testing machine (UTM) (Shimadzu, Japan) was used to assess the push out bond force. A 0.6 mm diameter cylindrical stainless-steel plunger was equipped in the UTM. Each specimen was positioned in a customized fabricated jig to fix and align in a way that the apical surface faced the plunger. The filling material only was in contact with the plunger to prevent misreading by fracture of the root dentin. An increasing compressive load was applied at a crosshead speed of 0.5 mm/minute until bond failure occurred. The bond failure load (N) was recorded at the point where a sharp drop of the stress-strain curve was observed, and complete dislocation of the root filling material had occurred. The bond strength (MPa) was calculated by dividing the force (N) by the root canal filling bonding area (mm 2 ). The bonding area of oval canal was calculated as described previously. 16 Failure Mode The specimens were then checked under a stereomicroscope at 40× magnification to describe the bond failure mode. The interface area between filling materials and dentin wall was classified into three failure modes according to the measurement of the residual filling material percentage as follows: < 25%-adhesive, >25% to <75%-mixed, and >75%-cohesive. 17 Statistical Analysis Data were analyzed using SPSS software version 12 (Chicago, USA). Kruskal Wallis and Mann-Whitney Post hoc tests were performed to detect the variance among intracanal medicaments and control for each filling materials. Multiple Mann-Whitney tests were carried out to detect the difference between two root filling materials for each intracanal medicaments used and control separately. The association between failure modes and root filling was analyzed using Chi-square test. p value was set at 0.05. results The mean of push-out bond strength of all groups is presented in Table 1. Mann-Whitney test showed there was no significant difference between bond strength of iRoot SP and OrthoMTA dIscussIon Dislodgment resistance of root canal filling from root dentin could be an indicator of the durability and prognosis of endodontic treated teeth. The bond strength of root canal filling materials is affected by different factors, including anatomy of the root canal, 18 the prior placement of intracanal medicaments, obturation materials and techniques, slice thickness, and final irrigation protocol. [19][20][21] The material with the higher dislodgement resistance from root dentin maintains the root filling-dentin interface integrity during tooth flexure and the preparation of post spaces. 
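The push-out calculation and failure-mode classification described above reduce to a few formulas: bond strength in MPa is the failure load in newtons divided by the bonded lateral area in mm², and the failure mode follows from the percentage of residual filling material. The sketch below is illustrative; in particular, the oval-canal bonding area is approximated here with an elliptical-perimeter (Ramanujan) formula because the area formula cited from reference 16 is not reproduced in the text, and the sample numbers are made up.

```python
import math

def elliptical_perimeter(a_mm: float, b_mm: float) -> float:
    """Ramanujan approximation for an ellipse with semi-axes a, b (mm).
    Stands in for the cited oval-canal area formula, which is not given in the text."""
    return math.pi * (3 * (a_mm + b_mm) - math.sqrt((3 * a_mm + b_mm) * (a_mm + 3 * b_mm)))

def push_out_bond_strength(failure_load_n: float, a_mm: float, b_mm: float, thickness_mm: float) -> float:
    """Bond strength (MPa) = failure load (N) / bonded lateral area (mm^2); 1 N/mm^2 = 1 MPa."""
    bonded_area = elliptical_perimeter(a_mm, b_mm) * thickness_mm
    return failure_load_n / bonded_area

def failure_mode(residual_filling_pct: float) -> str:
    """Classify the debonded interface by the percentage of filling material left on the dentin."""
    if residual_filling_pct < 25:
        return "adhesive"
    if residual_filling_pct > 75:
        return "cohesive"
    return "mixed"

# Hypothetical 1 mm slice with 0.6 mm and 0.3 mm canal semi-axes that failed at 42 N, leaving 80% residue.
print(round(push_out_bond_strength(42.0, 0.6, 0.3, 1.0), 2), "MPa,", failure_mode(80.0))
```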
18 The push-out test is one of the tests to measure the bond strength, and it is considered a true shear test for parallelsided samples because the dislocation occurs parallel to the dentine-material interface. 22 It has been mostly used to evaluate the effectiveness of the dislodgement resistance of dental materials. [19][20][21] Although there was considerable variation in the push-out test used in laboratory studies, 23 the push-out test is not strongly influenced by these variables and seems to be suitable for ranking root filling materials. 24 The push-out test is considered a reliable method to measure the root canal filling materials bond strength. 25 It is less sensitive to small discrepancies among specimens and to differences in stress distribution during application of the load compared to shear strength test, and it is also easy to align specimens for testing. 26 In the current study, prior placement of Diapex had no effect on push-out bond of OrthoMTA, and this is consistent with previous study that reported the dislodgement resistance of bioceramic root canal fillings had not significantly affected by the use or absence of medicaments. 27 Conversely, prior placement of Diapex improved insignificantly the bond strength of iRoot SP, and this is in agreement with previous researchers who demonstrated that prior placement of water-based calcium hydroxide medicament seemed to enhance the dislodgement resistance of iRoot SP. This finding was attributed to the chemical interaction of calcium hydroxide residues with iRoot SP that could increase the dislodgment resistance and the micromechanical retention of the filling material. 14 Unlike water-based calcium hydroxide, Diapex is oil-based calcium hydroxide that might not well contact the root canal walls due to the large contact angles and therefore can be removed more efficiently and easily from the canal walls. 21 Consequentially, fewer residues of calcium hydroxide may interact chemically with iRoot SP that explained the nonsignificant increase in the bond strength of iRoot SP. Conversely, prior placement of Odontopaste enhanced the dislocation resistance of iRoot SP. There was no literature on the effect of prior placement of Odontopaste on different root filling materials. However, Odontopaste composed mainly of zinc oxide with addition of antibiotics, and zinc oxide inhibits the dentin demineralization and that may consequently enhance the bond strength of root filling material to dentinal wall by providing more mineralized dentin for hybridization. 28,29 Hybridization is the most widely applied mechanism of adhesion of dental materials to dentin. It involves the formation of hybrid layer as a result of demineralization of dentine and expose of collagen fibers that is followed by micromechanical interlocking of root canal sealer or filling components into the collagen matrix in the intertubular dentine. 30 Both OrthoMTA and iRoot SP can bond to the dentin chemically and mechanically by forming an interfacial layer of tag-like structures containing apatite-like precipitates. The particles in this interfacial hybrid layer can penetrate the dentinal tubules and intratubular mineralization forming interlocking. 7 However, their bonding to root dentin was affected contrarily by prior placement of intracanal medicaments, and the explanation of this could be attributed to the differences in the particle size and the flow rate of both materials. iRoot SP has a good flow value which is more than 23 mm. 
The flow rate of OrthoMTA is not reported in the literature; however, it was demonstrated to have similar chemical composition and morphological characteristics to ProRoot MTA. 31 Hence, it may have a similar flow rate, which is 14.2 mm. 32 Premolars with oval canal specimens were used in this study, and it was established that preparation and obturation of oval canals is a difficult task and the quality of obturation depends considerably on the flow of root filling materials into unprepared recesses of the canal. 33 The good flow rate of iRoot SP positively affects the filling quality of the oval canal and subsequently enhances the bond strength. In the current study, the bond failure mode was mainly cohesive for all groups. This outcome is in accordance with previous studies. 18 This is associated with improved bond strength of the sealer to root dentin, which consequently decreases sealer-dentin interface disruption and increases the probability that failure will occur within the sealer itself. 34 One of the limitations of this study is the difficulty of standardizing the specimens due to the variations in human root canal anatomy. The push-out test was used in this study; using another test may lead to different results. Whilst the present data are based only on an in vitro study, results may vary under in vivo conditions.

Conclusion

With the limitations of the study mentioned earlier, it can be concluded that prior application of Diapex has no effect on the bond strength of OrthoMTA and iRoot SP. Meanwhile, prior placement of Odontopaste increases the iRoot SP bond strength and has no effect on that of OrthoMTA.
Assessing the Reliability of Large Language Model Knowledge Large language models (LLMs) have been treated as knowledge bases due to their strong performance in knowledge probing tasks. LLMs are typically evaluated using accuracy, yet this metric does not capture the vulnerability of LLMs to hallucination-inducing factors like prompt and context variability. How do we evaluate the capabilities of LLMs to consistently produce factually correct answers? In this paper, we propose MOdel kNowledge relIabiliTy scORe (MONITOR), a novel metric designed to directly measure LLMs' factual reliability. MONITOR computes the distance between the probability distributions of a valid output and its counterparts produced by the same LLM probing the same fact using different styles of prompts and contexts.Experiments on a comprehensive range of 12 LLMs demonstrate the effectiveness of MONITOR in evaluating the factual reliability of LLMs while maintaining a low computational overhead. In addition, we release the FKTC (Factual Knowledge Test Corpus) test set, containing 210,158 prompts in total to foster research along this line (https://github.com/Vicky-Wil/MONITOR). Introduction Recently large pre-trained language models (LLMs), especially those with billions of parameters, have been used as de facto storage for factual knowledge.Applying LLMs to real-world scenarios inevitably leads to language generation deviating from known facts (aka "factual hallucination" (Chang et al., 2023)) due to multiple causes.For example, Cao et al. (2021) argued that the performance of an LLM is over-estimated due to biased prompts over-fitting datasets (also referred to as the framing effect in Jones and Steinhardt (2022)) and in-context information leakage. Given the variability of LLMs' performance under different prompts and contexts, it seems that purely evaluating them on accuracy is not enough and that we also need to gauge how robust they are to variations in prompting.In Figure 1 we show examples of factual probes where either the framing of the prompt, or the context to the prompt, is varied, leading to the issue of "accuracy instability".Prompt framing effect: An LLM generates different predictions depending on how prompts are framed.Predictions are associated with prompts instead of factual knowledge learned in LLMs.As shown in Figure 1(a), for a fact represented in a triplet <Cunter, is located in, Switzerland>, the generated predictions for re-framed prompts "Which country is Cunter situated?"and "Cunter is located in Switzerland.True or False?" are non-factual. 
Effect of in-context interference: An LLM leverages in-context information during its decoding stage.The in-context information is concatenated with a test input (prompt) and acts as a condition when inferring hidden states concepts (Xie et al., 2022;Min et al., 2022).In-context information may negatively affect an LLM's prediction during knowledge probing.As shown in Figure 1(b), for the same fact, when presented with a context "England."concatenated with the prompting question "Which country is the location of Cunter?", an LLM generates a non-factual prediction "England".How do we assess the reliability of factual knowledge of LLMs under the effects of these hallucination-inducing factors?Investigations into the behaviors of language models during knowledge probing (Petroni et al., 2019;Kassner and Schütze, 2020;Gupta, 2023) have mainly used metrics like precision and accuracy to quantify errors under a specified factor like prompt framing (Jones and Steinhardt, 2022) or mis-primed information (Kassner and Schütze, 2020).Despite the insights gained by showing the instability of LLMs during knowledge probing, these studies are subject to two limitations: • No Exploration of Uncertainty.Metrics like top-one accuracy may capture the ordering of predictions in the output space, but they lack the resolution to reflect on the degree of certainty of factual knowledge being learned by LLMs. Figure 2 depicts an example where two LLMs (Models A and B) may produce the same result even though their output probabilities vary.By equating the performance of Model A with that of Model B, one introduces a level of approximation.This approximation in knowledge representation at the output space can be regarded as a source of uncertainty.In this paper, we directly use the output probabilities and construct a high-resolution metric to perform knowledge assessment. • Limited Scope.Previous works focus on understanding the effect of variation of a specific type.We design experiments to investigate the combined effects of multiple causes of variation: prompt framing and in-context interference during knowledge assessment.In addition, few studies have experimented on LLMs with billions of parameters.In contrast, we investigate the knowledge reliability of 12 freely downloadable LLMs with a range of parameter sizes and origins (with and without instruction fine-tuning).2 In this paper, we propose a novel distancebased approach MOdel kNowledge relIabiliTy scORe (MONITOR) which captures the deviation of output probability distributions under contexts of prompting variance, and interference from mispriming (Kassner and Schütze, 2020) and positively-primed prompts.By leveraging the probability distribution of the output space, MONITOR serves as a high-resolution metric for assessing the reliability of factual knowledge of LLMs. 
We perform experiments on a comprehensive set of knowledge probing tasks and investigate the correlation between accuracy and MONITOR.Through experiments with a large variety of different facts, we show that the proposed MONITOR has a significant correlation (0.846 Pearson coefficient) with the average accuracy recorded in LLMs.Further analyses show that MONITOR can address the "accuracy instability" issue when used along with an end-to-end point measurement (like accuracy).Computing MONITOR takes only one-third GPU hours of those consumed by a comprehensive accuracy reliability study, making MONITOR a low-cost metric for assessing factual knowledge reliability of LLMs.We deploy MONITOR on various factual knowledge probing tasks including question and answer (QA), word predictions (WP) and fact checking (FC). Our contributions are: 1. We design a novel LLM assessment method under various major hallucination-inducing factors using probability distributions from the output space.MONITOR is a highresolution and low-cost metric suitable for evaluating the factual knowledge of LLMs under prompt framing effects and in-context interference; 2. We construct the FKTC (Factual Knowledge Test Corpus) test set by developing QA probing prompts (210,158 prompts in total) based on 16,166 triplets of 20 relations from the TREx dataset (ElSahar et al., 2018).We release FKTC to the public to foster research works along this line.Petroni et al. (2019) demonstrated that factual knowledge can be directly extracted from language models without needing an external knowledge source.However, extracting knowledge (aka knowledge probing) from language models is errorprone due to various biases.For example, Elazar et al. (2021) showed that the consistency of knowledge extracted is generally low when the same fact is queried with different prompts.Many works in prompt engineering attempt to automatically construct prompts outperforming manual prompts (Shin et al., 2020;Jiang et al., 2020;Zhou et al., 2023;Kojima et al., 2022).It is argued that the decent performance of a language model is ascribed mainly to the application of these biased prompts (Cao et al., 2021), in which "better" prompts are found to over-fit the answer distribution of the test set instead of reflecting on LLMs' generalization ability to predict factual knowledge. To ensure that language models are hallucination-free, we need to look at other factors originating from in-contextual information.For in-context bias, Kassner and Schütze (2020); Gupta (2023) showed that language models fail on most negated probes and are easily misled by misprimes added to the probing context.On the other hand, Zhao et al. (2021); Si et al. (2023); Webson and Pavlick (2022) found the presence of context biases in few-shot probing results.The works mentioned above focused on pinpointing issues affecting LLMs' factual prediction.Few studies were motivated to develop evaluation approaches insensitive to the hallucination-inducing causes.Recently, Raj et al. (2023) presented a framework for evaluating the consistency of LLMs based on accuracy.Zhu et al. (2023) designed a benchmark for assessing the robustness of LLMs to adversarial instruction attacks, measuring the corresponding end-to-end performance drops.Dong et al. 
(2023) proposed a new metric to measure factual knowledge capability under the bias caused by aliases (alternative names for entities or relations) by reducing the effect of entity and relation aliases in the factual probing. Without tackling other factors like prompt framing effects and in-context interference (and their interactions), the scope of that study is limited. The data is not released to the public, therefore a comparative analysis is not possible.

Effect of Prompt Framing on Accuracy

Factual knowledge in masked language models (MLMs) is evaluated using cloze-style prompts to probe whether the model accurately predicts the masked token (i.e., the "object" in "subject-relation-object" triplets). LLMs have no such constraint in token generation. Therefore, we design three probing templates to show the effect of prompt framing on LLMs, depicted below, and for each task, we use seven paraphrased prompts to ensure diversity:

Word Prediction (WP) Template: Given the "subject" and the prompt template, LLMs perform word prediction to complete the sentence, e.g., template (1) in Table 1. When LLMs generate a sentence rather than an "object" (as a one-word token), we manually evaluate the predicted results to ensure their validity.

Question-Answer (QA) Template: In the QA template, question prompts are constructed by manually paraphrasing templates in TREx (ElSahar et al., 2018) targeting each fact.

Fact Checking (FC) Template: An FC prompt is designed as a verification statement based on a template in TREx, i.e., "Statement: [X] is located in [Y]. The statement is True or False?". We build the positive checking probe (FC-pos) and negative checking probe (FC-neg) corresponding to whether the statement is factual or not. For a negative fact-checking prompt, we average the prediction accuracy over five random entities chosen from the same category weakly related to the "subject".

The probing results are shown in Table 2 as accuracies in predicting P17 factual knowledge for each of the LLMs involved, under prompting biases presented in terms of WP, QA, and FC templates. The performance of LLMs in predicting the fact test data varies significantly when presented with different prompting templates. The fluctuation under seven prompts, shown as box plots in Figure 11 (Appendix A.1), further demonstrates the effect of prompt framing on the performance of LLMs.

Effect of In-context Interference

To explore the effect of in-context interference bias, we add probes with misprimes (Kassner and Schütze, 2020) preceding the associated QA templates (shown in Table 1). Table 3 captures the accuracies of LLMs in a comparative study using factual entity probes and misprimes consisting of weakly associated entities. We observe strong interference effects from non-factual antecedents for all 12 LLMs in our study. It can be observed that a factual entity (positive interference) can improve the accuracy by up to +43.67 points, while weakly related entity information (negative interference) reduces the accuracy by as much as 56.54 points.

MOdel kNowledge relIabiliTy scORe (MONITOR)

Figure 3: A primary anchor (for example, "Switzerland" with a probability of 0.4893) corresponds to its multiple foreign anchors with different output probabilities (i.e., "Switzerland" with a probability of 0.0145) when an LLM is exposed to different prompts. "D" refers to the distance measurement between the probabilities of two anchors.
In this section, we introduce MONITOR, a distance-based score, to assess the factual knowledge of LLMs under the influence of previously mentioned prompt framing and in-context interference. An important notion of "anchor" is defined to establish a reference point, which is the valid factual knowledge represented as the answer probabilities in the output space.By calculating the distance (using the probability changes) between an anchor under investigation (known as the primary one) and its corresponding counterparts (aka the foreign anchors) in an influenced output space, we can measure how reliable an LLM is for the fact test set experimented.In order to enforce that the primary anchor is factually-accurate, we concatenate the correct answer preceding the associated QA template (e.g.Template 4 in Table 1)3 , and the foreign anchors are generated using Templates 2 and 5 presented in Table 1.The distance calculation here fundamentally differs from that in Dong et al. (2023), who leveraged a division between an specified relation and other irrelevant relations. Firstly, we introduce a new variable (i) to represent hallucination-inducing in-context information into the initial knowledge representation triple <subject, relation, object>.The newly formed knowledge representation quadruple can be expressed as < s, r, o, i >.The information i can be further categorized into two variables: positive information i + for factual object entities and negative information i − representing expressions serving as a bias for identifying s.For example, "France" is considered as an i − when acting as a noisy condition to negatively affect an LLM in predicting a desirable outcome <Eibenstock, is located in, Ger-many>.Corresponding to an object, P (o|s, r, i) is the probability of the model generating the object o with the conditions of subject s, prompt framing expression r, and the in-context information i. To quantify the effect of i on LLMs, we establish a reference point by treating the valid answer as the primary anchor mentioned above.As top-1 output probability can be used (Dong et al., 2022) to detect false factual knowledge, we use the top-1 output probability to implement anchors.A primary anchor (for example, "Switzerland" with a probability of 0.4893 in Figure 3) is defined as the valid output of an LLM for a base probe, which is the prompting template without any add-on context information.A primary anchor has multiple foreign anchors with various output probabilities (i.e., "Switzerland" with a probability of 0.0145 in Figure 3) when an LLM is exposed to different prompts.In order to enforce that the primary anchor is consistently factually-accurate, we set the top-1 answer of the input with positive information i + as the primary anchor with probability P (o|s, r, i + ) and check their validity with Exact Match. MONITOR consists of two distance-based measurement components: Prompt-framing Degree (PFD) and Interference-relevance Degree (IRD). 
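To make the anchor probabilities concrete before defining PFD and IRD, the sketch below shows one way to read P(o | s, r, i) off an open-weight causal LM with Hugging Face transformers: score the answer tokens conditioned on the prompt and average their probabilities over subwords. The model name, the prompt strings, the "Answer:" format, and the subword averaging are illustrative assumptions, not the paper's exact implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small placeholder model; the paper evaluates 12 much larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def answer_probability(prompt: str, answer: str) -> float:
    """Mean per-subword probability of `answer` continuing `prompt` under the LM."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    answer_ids = tok(" " + answer, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    logits = model(input_ids).logits
    # Positions prompt_len-1 ... end-1 predict each answer token in turn.
    probs = torch.softmax(logits[0, prompt_ids.shape[1] - 1 : -1], dim=-1)
    token_probs = probs[torch.arange(answer_ids.shape[1]), answer_ids[0]]
    return float(token_probs.mean())

# Primary anchor: positively primed probe (Template-4 style); foreign anchor: a re-framed prompt.
p_primary = answer_probability("Switzerland. Which country is the location of Cunter? Answer:", "Switzerland")
p_foreign = answer_probability("In which country is Cunter situated? Answer:", "Switzerland")
print(p_primary, p_foreign)
```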
Prompt-framing Degree The prompt-framing degree (PFD) is the mean distance between the output probability distributions of an enforced-accurate result (primary anchor) and the output probability distributions produced by the same LLM using prompting frames probing the same fact (foreign anchors).PFD evaluates the similarity of two output probabilities between prompting frame relation expressions r (the basic prompt framing) and r j .It is defined as: where R is the count of prompt framing expressions for a subject, and the count of subject and object in a factual relation is S, c ∈ 1, ..., S .L c is the length of the anchor in terms of the number of subwords in the c-th object.PFD is a cumulative metric for assessing an LLM's capability in producing output probability distributions sharing the same characteristics under various prompting frames.PFD has a value between 0 and 1.The smaller the value is, the more robust an LLM is under the influence of prompt framing. Interference-relevance Degree Interference-relevance Degree (IRD) is the distance between the output probability distributions of accurate results enforced with positive information (primary anchor) and the probability distributions generated by the same LLM under the influence of in-context interference (foreign anchors).IRD measures an LLM's capability to predict factual knowledge under the influence of in-context interference. We define the count of positive and negative information as one and M , respectively, corresponding to an object.IRD has a value between 0 and 1.As positive contextual information likely leads to factual knowledge generation, a smaller value of IRD indicates a lower level of influence from in-context interference biases. MONITOR: MOdel kNowledge relIabiliTy scORe The prompt-framing degree PFD and interferencerelevance degree IRD are integrated to produce the proposed model knowledge reliability score (MONITOR).MONITOR captures the quadratic interaction of PFD and IRD, as illustrated in Eq 3 for a specified number of quadruples < s, r, o, i >, where the count of subject and object is S. A set of coefficients (α 1−3 ) is introduced to quantify the contributions from PFD, IRD, and their interaction on MONITOR.In this experiment, we consider an equal contribution scenario (α 1 = α 2 = α 3 = 0.33).The smaller the value of MONITOR, the less the model is influenced by hallucinationinduced factors when producing factual outputs. Taking the average output probabilities of primary anchors for an LLM as the denominator, MON-ITOR captures the degree of knowledge learned by an LLM when assessing its factual knowledge.MONITOR measures the effects of prompt framing and interference per unit of average primary anchor probability, demonstrating the strength of anchor representations. LLMs are resource-hungry even during their inference phases.It is essential to ensure that an assessment metric is computation-efficient.Combining PFD, IRD, and their interaction in one metric can reduce the computation cost when evaluating factual reliability.Considering a relation with R prompt frames, M negative interference, and one positive interference, there are R * M combinations required to compute the average accuracy (and accuracy range).In comparison, we only require R + (1 + M ) combinations to obtain MONITOR.The computation complexity for calculating MON-ITOR (O(n)) is considerably lower than that of accuracy (O(n 2 )). 
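The PFD and IRD definitions and Eq. 3 are referenced above but the formulas themselves do not survive in the extracted text. The following is a hedged reconstruction from the stated quantities (R prompt framings, S subject-object pairs, L_c subwords of the c-th object, M negative interferences, coefficients α1-3) and the stated normalisation by the average primary-anchor probability; the authors' exact formulation may differ.

\mathrm{PFD} = \frac{1}{S}\sum_{c=1}^{S}\frac{1}{R}\sum_{j=1}^{R}\frac{1}{L_c}\sum_{l=1}^{L_c}\Bigl|\,P\bigl(o_c^{(l)}\mid s_c, r, i^{+}\bigr) - P\bigl(o_c^{(l)}\mid s_c, r_j\bigr)\Bigr|

\mathrm{IRD} = \frac{1}{S}\sum_{c=1}^{S}\frac{1}{M}\sum_{m=1}^{M}\frac{1}{L_c}\sum_{l=1}^{L_c}\Bigl|\,P\bigl(o_c^{(l)}\mid s_c, r, i^{+}\bigr) - P\bigl(o_c^{(l)}\mid s_c, r, i^{-}_{m}\bigr)\Bigr|

\mathrm{MONITOR} = \frac{\alpha_1\,\mathrm{PFD} + \alpha_2\,\mathrm{IRD} + \alpha_3\,\mathrm{PFD}\cdot\mathrm{IRD}}{\tfrac{1}{S}\sum_{c=1}^{S} P\bigl(o_c\mid s_c, r, i^{+}\bigr)}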
Experiments In this section, we describe how to apply MON-ITOR to assess the factual knowledge of the 12 LLMs as mentioned above. Data Setting In this section, we describe how we develop a test corpus to accommodate prompts with various styles and in-context interference. Expanding Probing Prompt: Based on 16,166 <subject, relation, object> triplets from T-REx (ElSahar et al., 2018), we develop QA probing prompts.We expand the probing prompt dataset by paraphrasing using GPT-4 (OpenAI, 2023) to create seven prompt frames for each triplet.In order to ensure the diversity of prompts, we choose prompts with a similarity score (BLEU) below a threshold (0.7). Adding In-context Interference: Based on the QA prompts constructed above, we create a test dataset to explore the effectiveness of the designed metric with in-context interference biases.The dataset FKTC stands for "Factual Knowledge Test Corpus".Following the template patterns (Templates 4 and 5) in Table 1, we concatenate interference information (in terms of positive and negative in-context information) with the probing question for each subject.The negative information is entities from the same category weakly related to the corresponding subject, sampled from all objects that share the same relation.This process is applied to all templates presented in The overall results are evaluated on FKTC with "bold" numbers indicating the best measurement over the same column category.The "avg", "max", and "min" mean the average, maximum, and minimum accuracy across the 20 test datasets.The "probs."depicts the probabilities of primary anchors."↓" means a smaller measurement wins. Table 5: Performance of various LLMs in predicting factual knowledge captured in P178, P108, and P37 4 testing datasets with "bold" numbers indicating the winning measurement over the same column category."Ins."means whether the LLM has been instruction finetuned.The "bold and italic" fonts on P37 show how MONITOR can differentiate two LLMs (BLOOMZ-3b and Vicuna-7b) with a similar average accuracy. Overall Results The overall results are shown in Table 4, and the results of each relation are shown in Table 10 (Appendix), where MONITOR and the average accuracy (avg acc) are recorded for each LLM across the 20 test datasets in our experiments.Each LLM's minimal and maximal accuracy are also recorded to show the accuracy variability.MON-ITOR incorporates internal representations of an LLM (i.e., primary anchor probabilities) and the influences from various in-context biases on its representations (in terms of the proposed distance measurement among distributions).The proposed MONITOR can not only indicate the degree to which external inputs influence a model, but it also reflects on the strength of factual knowledge learned by taking account of average primary anchor probabilities across knowledge.As shown in Table 4, LLaMa-30b-ins stands out as the most capable (with the least MONITOR 0.479) LLM, followed by Vicuna-13b (0.484) and Vicuna-7b (0.504).MONITOR correlates significantly with the average accuracy (0.846 Pearson coefficient), indicating its suitability for evaluating factual knowledge of LLMs over large-scale test cases. As shown in Table 5 (bold italic fonts), MONITOR can differentiate LLMs, for example, BLOOMZ-3b and Vicuna-7b, with a similar average accuracy on P37, by considering distance and probability information.We further discuss this in Subsection 6.2. 
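A minimal sketch of the two construction steps described above is given below: filtering GPT-4 paraphrases by a BLEU similarity threshold of 0.7, and prefixing each QA probe with positive or negative in-context information following the Template 4/5 patterns. The code is an illustration under assumptions (BLEU variant, sampling of negatives), not the released FKTC pipeline.

```python
# Hedged sketch of the prompt-diversity filter and in-context interference construction.
import random
import sacrebleu

def keep_diverse(paraphrases, base_prompt, threshold=0.7):
    """Keep paraphrases whose BLEU similarity to the base prompt is below the threshold."""
    kept = [base_prompt]
    for p in paraphrases:
        if sacrebleu.sentence_bleu(p, [base_prompt]).score / 100.0 < threshold:
            kept.append(p)
    return kept

def add_interference(question, gold_object, same_relation_objects, n_neg=5):
    """Prefix the QA probe with positive / negative in-context information."""
    positive = f"{gold_object}. {question}"                     # Template 4 style (i+)
    candidates = [o for o in same_relation_objects if o != gold_object]
    negatives = [f"{neg}. {question}"                           # Template 5 style (i-)
                 for neg in random.sample(candidates, n_neg)]
    return positive, negatives
```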
It is worth noting that MONITOR adheres to the scale law via which a larger LLM tends to outperform smaller models in the same series (further in Subsection 6.4) Results for Specific Facts We present a detailed view of the knowledge assessment of LLMs by drilling down into specific relation types.Unlike the overall results in the previous subsection, showing a general trend, the results disclosed here show more detailed insights.As shown in Table 5, the overall winning LLM (i.e., LLaMa-30b-ins.) can lose its edge in a particular relation type (P37). An LLM trained with instruction finetuning (i.e., BLOOMZ-3b) does not consistently outperform a foundation model with an equivalent amount of parameters (for example, OPT-2b7) on results presented in Table 5 6 Discussion Accuracy Instability We analyze the LLMs' "accuracy instability" when predicting P14125 with the results captured in Table 6 and Figure 4.A variety of statistics, including the base accuracy ("base acc") and standard deviation ("std") of an LLM's accuracy, are recorded for comparisons.An LLM with a lower MONITOR has a lower accuracy standard deviation when two LLMs with equivalent base accuracies are evaluated.From an accuracy stability viewpoint, one may choose an LLM with a lower MONITOR.For example, we prefer Vicuna-13b over WizardLM-13b, even though they have similar accuracies as the MONITOR of Vicuna-13b is lower. MONITOR and Accuracy It can be observed in Table 4 that the correlation between MONITOR and average accuracy is significant.How should one use MONITOR when assessing the reliability of LLM knowledge?Resolution Characteristics: We regard MON-ITOR as a high-resolution metric because it directly uses output probabilities and their changes (in terms of anchored distance) induced by hallucination factors.MONITOR considers both the output (nominal or qualitative data) and the probability of the output (quantitative information).Comparatively, assessing LLMs' knowledge with an end-to-end metric, such as accuracy, is purely reliant on a nominal output from the softmax layer of a transformer.It is shown in Table 5 that two LLMs (BLOOMZ-3b vs. Vicuna-7b) with almost identical average accuracy on P37 relation have two distinctive MONITORs (0.570 vs 0.432).Delving into the log file of the inference task, we gain in-depth insights into why Vicuna-7b outperforms BLOOMZ-3b in the reliability score.As shown in Table 7, despite their similarities in the accuracy measurement, Vicuna-7b has much higher output probabilities than those of BLOOMZ-3b, contributing to their discrepancies in MONITOR. Additionally, we plot out the probability distribution of the above two LLMs with almost identical average accuracy but very distinctive MONITOR (Figure 5).It can be observed that a more reliable LLM based on MONITOR, Vicuna-7b, has a much higher percentage of solid output probability than those of a volatile LLM (BLOOMZ-3b in this case).It is recommended to adopt MONITOR when using accuracy alone cannot differentiate LLMs' knowledge reliability.The population percentages with a solid probability (aka, greater than 0.8) are 59% and 85% for BLOOMZ-3b and Vicuna-7b, respectively. 
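The two quantities referred to above, the MONITOR-accuracy correlation and the share of "solid" anchor probabilities (greater than 0.8), are straightforward to compute; a small sketch with placeholder numbers (not the paper's measurements) is given below.

```python
# Placeholder computation of the correlation and solid-probability share discussed above.
import numpy as np

monitor = np.array([0.479, 0.484, 0.504, 0.570])    # illustrative per-LLM MONITOR values
avg_acc = np.array([0.62, 0.60, 0.55, 0.41])        # illustrative per-LLM average accuracy
r = np.corrcoef(monitor, avg_acc)[0, 1]             # expected strongly negative: lower MONITOR, higher accuracy

anchor_probs = np.array([0.93, 0.85, 0.41, 0.88, 0.76])       # primary anchor probabilities of one LLM
solid_share = float((anchor_probs > 0.8).mean())              # share of "solid" anchors
```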
Lower Computation Cost: We compare the GPU hours consumed in producing MONITOR and a full-scale accuracy of LLaMa-30b-ins., which is experimented on a specific fact (P1412) test set using 8 NVIDIA V100 GPUs.It can be observed in Table 8 that using MONITOR leads to a 2.97-fold resource saving in GPU hours compared to applying an accuracy metric to a factual reliability evaluation.MONITOR is an economical method for assessing the reliability of LLM knowledge with scale. inputs anchor French.What language is the official language of Haiti?in-context Irish.What language is the official language of Haiti?framing What language is considered the national language of Haiti?output prob.anchor in-context framing anchor in-context framing BLOOMZ-3b French French French 0.761 0.411 0.527 Vicuna-7b French French French 0.928 0.622 0.849 Table 7: Vicuna-7b outperforms BLOOMZ-3b in MONITOR when evaluated on P37 by producing correct answers with higher output probabilities in response to positive, negative in-context interference and prompt framing effect. Figure 6: Visualizing model behaviors of BLOOMZ-3b and OPT-2b7 under the influence of input with misprimed in-context interference.The input is "Danish.What language is the official language of Sotkamo?".We evaluate the attribution of each input feature to the model's outputs by applying the integrated gradient technique. Attribution of In-Context Interference To demonstrate the resilience of LLMs with different MONITORs, we conduct an additional experiment by applying the Integrated Gradients (Sundararajan et al., 2017) technique implemented in Sarti et al. (2023).By examining and visualizing the attribution of input features to the model's outputs, we can infer the reliability of LLMs with different MONITORs.We study the behaviors of two LLMs (OPT-2b7 vs. BLOOMZ-3b) with distinctive MONITORs (0.471 vs. 0.570).The heat map shown in Figure 6 illustrates that a more reliable model with a smaller value of MONITOR, OPT-2b7, is less influenced by the in-context interference, producing a correct answer. Analysis on LLMs Scale To further verify if MONITOR of LLMs follows the law of scaling, where larger LLMs are more knowledge-reliable, we present how MONITOR changes across BLOOMZ series for each specific fact (shown in Figure 7).While MONITORs of LLMs may not conform to the scaling law at the granularity of each fact, their aggregated values in a comprehensive scope of experiments do follow the rule of scale (shown in Figures 7-8). Prompt Ablations We design an ablation study to investigate the consistency of MONITORs across different prompt settings by analyzing the MONITOR results in predicting P178 facts.The MONITORs from an expanded prompts group setting (consisting of seven prompts) and a sub-sampled group with four prompts are captured in Figures 9 and 10.It is noted that MONITOR ranks LLMs in a consistent order for different prompt settings.Additionally, we observe a strong linear correlation between MONITORs of the expanded group and those from the sub-sampled group, indicating the scalability of MONITORs across prompt settings. 
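The attribution analysis above can be reproduced in spirit with the Integrated Gradients implementation from the inseq library (Sarti et al., 2023); the sketch below is only indicative, and the model identifier and generation arguments are assumptions rather than the exact configuration used in the paper.

```python
# Hedged sketch of the input-attribution analysis, assuming the inseq library.
import inseq

model = inseq.load_model("facebook/opt-2.7b", "integrated_gradients")  # model choice is illustrative
out = model.attribute(
    "Danish. What language is the official language of Sotkamo?",
    generation_args={"max_new_tokens": 5},
)
out.show()  # renders an input-feature attribution heat map, as in Figure 6
```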
Limitation We focus on proposing MONITOR to assess the reliability of factual knowledge of LLMs during knowledge probing.Whether MONITOR can be generalized to a wider scope of tasks (e.g., summarization) warrants a future study.Additionally, the initial setup of contribution coefficients of PFD, IRD, and their interaction on MONITOR should be further investigated to establish an empirical benchmark.Currently MONITOR applies exact matching to obtain anchors to measure the reliability of LLM knowledge.Extending the automatic evaluation to anchors consisting of sentences is challenging. Conclusion In this paper, we show that large language models are subject to the influences of various hallucination-inducing causes.As a result, an endto-end metric (like accuracy) is most likely to create an unstable reading.We propose a novel distancebased metric, directly measuring output probabilities and their changes to address "accuracy instability" caused by the prompt framing effect and in-context interference.A comprehensive scale of experiments demonstrates that the proposed MON-ITOR is a high-resolution economic method suitable for evaluating the reliability of large language model knowledge.The constructed FKTC dataset is available to the public to foster research along this line. Figure 1: "Accuracy instability" during language generation under various prompts. Figure 2 : Figure 2: The same top-1 answer with different probabilities. Figure 4 : Figure 4: MONITOR can be used to differentiate LLMs' factual knowledge reliability when models with an equivalent base accuracy are evaluated.The box plots show the related distributions of accuracy. Figure 5 : Figure 5: A comparison of the probability distribution of anchors between BLOOMZ-3b and Vicuna-7b on P37.The population percentages with a solid probability (aka, greater than 0.8) are 59% and 85% for BLOOMZ-3b and Vicuna-7b, respectively. Figure 7 : Figure 7: The BLOOMZ series adheres to the scale law for the specific facts with smaller MONITORs for bigger models.The horizontal axis represents the model's size in billions, and the vertical axis represents the results of MONITOR. Figure 8 : Figure 8: The BLOOMZ and Vicuna series adhere to the scale law based on the overall MONITOR results obtained from experiments on 20 test datasets.The horizontal axis represents the size of a model in billions, and the vertical axis represents the results of MONITOR. Figure 9 : Figure 9: The consistency of MONITOR when assessing LLM's factual reliability in predicting P178 facts across different prompts settings. Figure 10 : Figure 10: Significant correlation of MONITORs between the 7-prompt group and the 4-prompt group when assessing the reliability of LLMs in predicting facts from P178. Table 2 : Accuracy of various LLMs in predicting factual knowledge of P17 relation."Ins."means whether the LLM has been instruction finetuned.The performances of LLMs have undergone significant variations for different prompting templates.The unit of "size" is billion.Abnormal performances of LLMs between QA and WP template-based probes (bold numbers of Vicuna-7b) and between the FC probes for positive and negative interference (bold numbers of BLOOMZ-1b1) are strong evidence of prompt framing effects. Table 9 , to produce 210,158 prompts focusing on 20 relations. 
Table 6: When two LLMs with equivalent accuracies are assessed, an LLM with a lower MONITOR is likely to produce a lower standard deviation in accuracy. "base acc" is the accuracy associated with the base prompt. Bold fonts demonstrate evaluation cases. Table 8: GPU hours consumed calculating MONITOR and accuracy on a fact test set (P1412) of FKTC for LLaMa-30b-ins. "MONITOR-saved" denotes GPU hours saved from using MONITOR compared to accuracy.
2023-10-17T06:42:50.376Z
2023-10-15T00:00:00.000
{ "year": 2023, "sha1": "b535a71f03192b139ad8934d1fa075de6fdb796f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b535a71f03192b139ad8934d1fa075de6fdb796f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
43694131
pes2o/s2orc
v3-fos-license
Class Disjointness Constraints as Specific Objective Functions in Neural Network Classifiers Increasing performance of deep learning techniques on computer vision tasks like object detection has led to systems able to detect a large number of classes of objects. Most deep learning models use simple unstructured labels and assume that any domain knowledge will be learned from the data. However when the domain is complex and the data limited, it may be useful to use domain knowledge encoded in an ontology to guide the learning process. In this paper, we conduct experiments to introduce constraints into the training process of a neural network. We show that explicitly modeling a disjointness axiom between a set of classes as a specific objective function leads to reducing violations for this constraint, while also reducing the overall classification error. This opens a way to import domain knowledge modeled in an ontology into a deep learning process. Introduction As deep learning research progresses, efforts are made to add structure to the set of labels used as annotation data.This seems a logical step as the number of labels in object classification tasks becomes larger.For example the ImageNet challenge (Russakovsky et al., 2015) in its 2017 edition invites system to localize objects in 1000 categories, detect 200 types of objects in images, and detect 20 types of objects in videos. 1 However little about the organization of these classes is provided.Models are expected to implicitly acquire the knowledge that, for example, a person has two legs and a car four wheels from the training examples provided as part of the dataset.What works on a controlled dataset in an academic challenge task might however be less practical in a real world example where annotated data can be scarce, and the modeled domain vast and complex. The work presented in this paper is part of a larger project for automating the understanding of body worn camera footage in a law enforcement setting.The goal is to provide annotations that will facilitate the manual footage reviewing process that is mandatory across many agencies.We train deep learning models to categorize videos by their type, for example a Traffic Stop, or a Pedestrian Stop; and to give annotations indicating the type of event ongoing at a specific time segment, for example an Officer Driving, or a Handcuffing.The illustration below shows an example timeline our tool generates on an unseen video. In parallel to developing models able to capture the specificities of this visual domain, we develop an ontology modeling this domain.Type of detections are represented as classes modeled in RDFS and OWL (Dean et al., 2004).There are three higher level classes for topics, segments, and objects that represent different levels of granularity: topics apply to a whole video, segments apply to a scene in a video that may vary from a few seconds to many minutes, and objects can be detected at any point in the video. 
Because of legal and privacy concerns, and because of the cost of annotation, our training data set is limited to a few hundred videos.This results in the need for models able to learn from small training sets.In particular, we would like to be able to reflect the constraints modeled in the ontology into the neural network training process, so that this additional knowledge helps the neural network converge faster and compensates for the small training set.We indeed assume that if a much larger training set would be available, the constraints implicitly modeled in the annotations could be learned by the neural network.This paper focus on modeling and then learning a class disjointness constraint.After giving class disjointness modeling examples related to our domain in Section 2 we formulate the constraint as a loss function applied on the disjoint classes.Then in Section 3 we conduct experiments on a synthetic dataset showing that this method effectively leads to an improvement in enforcing the constraint. 2 From an ontology axiom to a loss function A long term goal of this work is to design a system where changes in the ontology would result in changes in the neural network training process.Especially we are interested in studying how modeling constraints as ontology axioms can result in adding specific losses on the neural network output classes corresponding to the ontology classes on which these axioms apply. OWL2 (Motik et al., 2009) provides three constructs for expressing class disjointness.The main one owl:disjointWith is a class axiom specifying disjointness between this class and another.Multiple owl:disjointWith statements can be defined within a class description.In our use case, a set of topics are all disjoint for a given annotated video.OWL2 introduces a convenient way to model this using the owl:DisjointUnion construct allowing to specify the disjointness outside of the class definition.The statement below expresses that the class Topic has three disjoint subclasses Pedestrian Stop, Traffic Stop and Car Pursuit.Similarly we find it more convenient to express that two classes are disjoint in a separate axiom than in the classes definitions.The OWL2 construct owl:DisjointClasses allows to express that a set of classes is pairwise disjoint.The statement below expresses that an officer cannot be at the same time driving a car and be a the passenger of that car. owl:DisjointClasses(leo:OfficerDriving leo:OfficerPassenger) With a large enough training data set, the constraints should be learnt by the model from the data.However, the dataset would need to cover all possible conditions that would happen when seeing new videos, and the space of possibilities is large.For example light conditions, vegetation, urban setting, police officers vehicles and uniforms can vary widely between police departments.There are multiple cases in other domains where the amount of data available for training may be limited, while the domain is quite complex.This can happen for reasons of data privacy, for example when dealing with personal or sensitive data, or when training happens on a user mobile device.Explicitly using ontological knowledge in the neural network training process will help the model avoiding to break these constraints on new data. 
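As a concrete bridge from ontology to training process, the sketch below reads pairwise owl:disjointWith axioms from an ontology file and maps the disjoint classes onto network output indices. It is a hedged illustration: the ontology file name and class-to-output mapping are hypothetical, only plain owl:disjointWith pairs are handled (owl:AllDisjointClasses and owl:disjointUnionOf would need extra handling), and this is not the project's actual tooling.

```python
# Hedged sketch: extract pairwise disjointness axioms and map them to output indices.
from rdflib import Graph
from rdflib.namespace import OWL

def disjoint_class_groups(ontology_path, class_to_output_index):
    """Return groups of network output indices on which a disjointness loss should apply."""
    g = Graph()
    g.parse(ontology_path, format="xml")   # e.g. a hypothetical RDF/XML export of the ontology
    groups = []
    for c1, _, c2 in g.triples((None, OWL.disjointWith, None)):
        pair = {class_to_output_index[str(c1)], class_to_output_index[str(c2)]}
        groups.append(sorted(pair))
    return groups
```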
We propose here to study the effect of a specific objective function applied to the neural network outputs corresponding to the classes on the ontology for which a disjointness axiom was defined.This function should maximize loss when the constraint is broken, allowing only one class to be positively output.More specifically, in this paper we treat the case where one of the disjoint classes has to be selected by the network.A common objective function to achieve this is to use categorical cross-entropy (De Boer et al., 2005).This objective is usually used in combination with a Softmax activation.However, we do not want to modify the neural network architecture when introducing constraints, but only the objective function.We thus apply Softmax on the predictions before passing them to the objective function. Equipped with this loss function for class disjointness we will next run experiments to study its influence on model training. Experiments Our experiments consist in testing the effect of the custom loss on forcing the neural network to learn the constraint.As motivated in the introduction of this paper, the sensitive character of the data we Disjoint classes Figure 2: Example generated dataset.Input on the left columns are randomly generated.Output for disjoint classes is 1 for the highest value, 0 for the others.Output for the other classes is 1 if the input is greater than a threshold here set to 0.5, 0 otherwise.are working with does not allow to share it for reproducibility.We thus present experiments on a synthetic dataset presented below.All the experiments and dataset generation code is open source and available at: https://github.com/OpenAxon/constrained-nn. Dataset In order to be able to control parameters of the dataset we work with and to speed up training experiments, we generate synthetic data using the following procedure.A number of pseudo-random numbers is generated for the input.Outputs are separated into two sets: one set that represents classes under the disjointness axiom and for which we would like the disjointness constraint to apply, and another set which represent the other classes.We set the number of outputs to be equal to the number of inputs and generate output values according to this: we take the argmax of the input set that represents disjoint classes, and assign the corresponding output to one.Every other output in the disjoint set is set to zero.The rest of the outputs are set to one if their corresponding input is greater than a given threshold, and to zero otherwise.Figure 1 illustrates inputs and corresponding output values. In the context of these experiments we use a dataset with a hundred thousand data points, having 50 generic classes and 5 disjoint classes.We set the threshold parameter for generic classes to 0.5.We selected the number of generic classes and disjoint classes to be about the same size of the number of classes and disjoint classes we have in our law enforcement use case. Model architecture Our hypothesis is that adding the ontology constraint in the network as a specific loss function will help the network enforcing this constraint.We verify this by training and testing a model on the dataset presented above.The model architecture is detailed in Figure 3 below.It consists in a fully connected network with three layers having non-linear activations, and an output layer with a sigmoid activation.Each intermediate layers has 200 neurons.We use an Adam optimizer with a learning rate of 0.001. 
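A minimal Keras-style sketch of this combined objective is shown below: binary cross-entropy on the unconstrained outputs plus a weighted softmax categorical cross-entropy on the disjoint classes, with the softmax applied inside the loss so the network architecture (sigmoid outputs) is left unchanged. The index layout and the weight value are assumptions; this is not the authors' released code.

```python
# Combined objective: binary cross-entropy on generic outputs + weighted
# softmax categorical cross-entropy on the disjoint-class outputs.
import tensorflow as tf

N_GENERIC, N_DISJOINT, CONSTRAINT_WEIGHT = 50, 5, 1.0   # weight plays the role of w in the experiments

def constrained_loss(y_true, y_pred):
    # Unconstrained classes: standard binary cross-entropy.
    bce = tf.keras.losses.binary_crossentropy(
        y_true[:, :N_GENERIC], y_pred[:, :N_GENERIC])
    # Disjoint classes: softmax over the raw outputs, then categorical cross-entropy,
    # so only the objective changes and not the model architecture.
    disjoint_pred = tf.nn.softmax(y_pred[:, N_GENERIC:], axis=-1)
    cce = tf.keras.losses.categorical_crossentropy(
        y_true[:, N_GENERIC:], disjoint_pred)
    return bce + CONSTRAINT_WEIGHT * cce
```

The constraint weight corresponds to the loss weighting explored in the experiments (values of 1, 2, 4 and 8).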
We use this architecture to define two models by only changing the objective function: for the baseline we use binary cross-entropy on the whole set of output classes.For the constrained model we use a combination of binary cross-entropy defined on the non constrained classes, and the softmax categorical cross-entropy loss defined in Section 2 for the disjoint classes.A parameter controls the weight of the constraint loss with regards to the generic loss.We present our results below. Results Our goal is to compare how the model performs on learning the constraint for this dataset.In order to report this we define two metrics that measure how well the network can predict constrained and non-constrained classes correctly.Given P the set of predictions, G the ground truth and C the set of constrained classes, metrics are defined as follows: • Constrained classes metrics: categorical classification accuracy Where [P i (c)] represents the nearest integer to the prediction.• Unconstrained classes metric: binary classification accuracy for each batch b of the training set.We perform multiple training runs using a batch size of 32 on three dataset with varying sizes.Reported numbers below are averaged for each batch.For each dataset we perform: • one run for the baseline using a single loss. • four runs for the constrained model using weights of 1, 2, 4 and 8 for on the constraint loss. Three datasets are generated with size 5000, 10000 and 50000.Other dataset parameters are given in Section 3.1.Figures 4-6 below report results for the various configurations on each dataset. As expected, overall accuracy increases as the datasets size increase.The weight on the constraint loss does not have a significant effect on the accuracy of the constrained outputs.However, as the weight increases, the accuracy on the unconstrained output decreases significantly.These observations are valid for the three datasets.As expected, the unconstrained model has difficulty learning Figure 6: Constraints on 50k dataset using various weights for the constraint loss Legend: orange: no constraint, cyan: w=1, magenta: w=2, blue: w=4, green: w=8 the constraint on smaller datasets, but the gap between the models trained with a constrained loss and the model without the constrained loss decreases as the dataset size increases.This confirms the intuition than given enough data, explicit constraints might not be necessary.A constrained loss with a weight of 1 provides a very good tradeoff in loss of unconstrained accuracy versus improvement of the constrained outputs accuracy. Related Work Computer vision challenges provide unstructured or minimally structured labels, usually a list or a shallow taxonomy.Even fewer methods make use of the structure when it is provided. The fast and accurate object detection system YOLO9000 (Redmon and Farhadi, 2016) makes use of the WordNet lexical database to build a hierarchy of concepts from the flat list of ImageNet labels.Predictions are then conditional probabilities of a concept class given its parent class.Prediction computation is performed by computing the softmax of all siblings at every level of the concept hierarchy.The predicted class is obtained by following the highest confidence path from the top of the concept hierarchy, until a threshold is reached.This approach is complementing our work and will likely be used in the system we are building. 
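The synthetic data generation and the two evaluation metrics described above can be expressed compactly as follows. The sketch mirrors the stated procedure (random inputs; 0.5-threshold binary targets for the 50 generic classes; one-hot argmax targets for the 5 disjoint classes) only approximately, and the block ordering of outputs is an assumption rather than the released generator.

```python
# Sketch of the synthetic dataset and the two accuracy metrics, under assumed index layout.
import numpy as np

def make_dataset(n=100_000, n_generic=50, n_disjoint=5, threshold=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random((n, n_generic + n_disjoint))
    y = np.zeros_like(x)
    y[:, :n_generic] = (x[:, :n_generic] > threshold).astype(float)   # generic targets
    winners = np.argmax(x[:, n_generic:], axis=1)                     # highest disjoint input
    y[np.arange(n), n_generic + winners] = 1.0                        # one-hot disjoint target
    return x, y

def constrained_accuracy(pred, truth, start=50):
    """Categorical accuracy on the disjoint block: predicted argmax must match."""
    return float(np.mean(pred[:, start:].argmax(1) == truth[:, start:].argmax(1)))

def unconstrained_accuracy(pred, truth, stop=50):
    """Binary accuracy on the generic block, rounding predictions to the nearest integer."""
    return float(np.mean(np.rint(pred[:, :stop]) == truth[:, :stop]))

x, y = make_dataset()
assert (y[:, 50:].sum(axis=1) == 1).all()   # exactly one active disjoint class per sample
```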
In (Pathak et al., 2015) constraints are applied using a specific loss function in weakly supervised image segmentations.Only images labels are provided and constraints help the network identify images segments according to these labels.Constraints are defined specifying the presence or not of a class in the background or in the foreground according to the presence of a label.Another constraint specifies the relative number of pixels a class label is expected to occupy.The domain knowledge encoded in these constraints is specific to the datasets.It is possible to define the relative size of a class because the datasets contain iconic images where labels occupy a central position and a significant proportion of the image.The domain knowledge encoded in the constraints is thus dataset specific, rather than ontological.Our goal is to be able to transfer ontological knowledge of a domain that could apply to various datasets. Conclusion This work introduced constraints modeled in an ontology as part of a neural network training process.With these initial experiments we studied the usage of one class disjointness axiom over a set of classes.Adding the ontology axiom as a specific objective on the outputs corresponding to the disjoint classes provide a significant improvement: the classification accuracy of the network for the constraint is improved by 15 to 35 points depending on the dataset size.It is useful to note that adding the constrained loss does not result in a strict restriction, and the network may still in a few cases output classifications violating the constraint. We plan to apply the class disjointness axioms in training models on actual video data.In order to be able to report on findings on these data, we are working toward building and releasing an open dataset of law enforcement videos, that we hope will help foster new research. In future work we are interested to study the effect of multiple disjointness constraints.We are also studying how to model other ontology axioms as constraints.In a longer term, we are interesting in studying how ontology axioms can influence a neural network training process, but also how from an unconstrained neural network trained on a large dataset we could derive and materialize ontology axioms. Figure 1 : Figure 1: Illustration of the problem tackled in this paper with a timeline over a body worn camera video.Ground truth annotations are represented in black.Machine detected segments are represented in red.The neural network detect at the same time Pedestrian Stop and Pursuit which are disjoint activities. Figure 3 : Figure 3: Model architecture.The None values are placeholders for the batch size. (a) Categorical classification validation accuracy on constrained outputs of the 5k dataset (b) Binary classification validation accuracy on unconstrained outputs of the 5k dataset
2017-10-29T14:41:26.094Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "112bfe480a752a374bafb8daebeedb5c1814057f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "112bfe480a752a374bafb8daebeedb5c1814057f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
245490945
pes2o/s2orc
v3-fos-license
Developing a spatially explicit method for delineating peri-urban landscape The ill-defined space between urban and rural areas is typically referred to as peri-urban landscape. One key reason for this lack of clarity is the unduly broad scale of conceptual and geographical resolution. This article focuses on its spatial elucidation at a sub-regional scale. It describes a method for delineating peri-urban landscape, based on spatial and demographic criteria. Arguably, spatially explicitly denoted peri-urban landscape on a sub-regional level would help in choosing appropriate local and regional planning approaches and policies for its development. The method, based on an overlay analysis, was tested using datasets from regional and municipal authorities in Ljubljana and Edinburgh. The results indicate that this less ambiguous spatial definition of peri-urban landscape offers a sound basis for planning and policy development. Introduction Although peri-urbanization is not a new phenomenon, it has attracted increasing attention in recent years from landscape and urban planners, geographers, and others. The outcome of peri-urbanization is a spatial type that early studies mainly linked to urban sprawl, but these spaces are now thought to share particular characteristics as the interface for rural and urban interactions and mixes (Meeus & Gulinck, 2008;La Rosa et al., 2018;Shaw et al., 2020). This kind of space has been variously characterized as peri-urban, urban fringe, suburban area, or urban periphery, but despite a growing number of studies, the definition remain unclear in conceptual and spatial terms (Gonçalves et al., 2017). These conceptual issues have been discussed elsewhere (see Žlender & Gemin, 2020;Žlender, 2021); for present purposes, we employ the term peri-urban landscape and define peri-urban areas as mixed land-use territories within that landscape (Žlender, 2021). The focus of this study is to elucidate the spatial character of peri-urban landscape. In geographic terms, peri-urban landscapes are characterized by a higher population density than rural areas and are likely to be affected by urban sprawl (Couch et al., 2008;Jacquin et al., 2008;Piorr et al., 2011;Maleas, 2018). These areas typically attract industrial hubs and tertiary sector structures like outlets, shopping malls, technology and logistics centres (Couch et al., 2008;Gant et al., 2011;Gonçalves et al., 2017;Martyniuk-Pęczek et al., 2017), with an accompanying decline in rural uses like agriculture or forestry. Vacant land and protected natural habitats are also likely to be found in peri-urban areas. To date, research on peri-urban landscapes has ranged from analyses of land use patterns (Jacquin et al., 2008;van Vliet et al., 2019) to integrated analyses of multiple dimensions such as population and economic flows, and mobility patterns (see Mortoja et al., 2020, for a review). While these integrated approaches attempt to provide a holistic view of spatial organization, land use and other dimensions (Gonçalves et al., 2017), the spatial characteristics of peri-urban landscapes can be very varied, and any analysis of peri-urban dynamics must take account of this diversity (Piorr et al., 2011). 
In particular, standard planning definitions must incorporate spatial analysis of land use patterns, appropriate scaling of spatial indices, and clear delineation to support spatial planning and policy implementation, especially when projecting urban growth boundaries to limit any undesirable effects of urban expansion (Inostroza et al., 2013;Wandl et al., 2014;Mortoja et al., 2020). To that end, the present study advances a spatially explicit method of analysis to define the spatial extent of peri-urban landscape and to classify peri-urban areas. As peri-urban landscape delineation at the regional scale has proved insufficiently precise, the proposed approach focuses on the sub-regional scale. The study addresses two main research questions. RQ1: What and where are the boundaries of peri-urban landscape? RQ2: Given the diversity of peri-urban land uses, morphological characteristics, and economic and cultural processes, is a more precise delineation possible or even necessary? This study describes a spatially explicit method for delineating peri-urban landscapes to guide more appropriate approaches to planning. Specifically, the objectives were (1) to propose an operational methodology to delineate peri-urban landscapes; (2) to select and assess datasets for analysis; (3) to evaluate the results in the context of the relevant literature; and (4) to suggest directions for future planning and policy development. While delineation seems possible, the quality and quantity of available data may be problematic, especially in terms of granularity, spatial extent, accuracy, and differences in approaches to land use classification. We discuss whether less ambiguous spatial delineation of peri-urban landscapes would enhance planning and development, and we suggest how the study findings might improve current planning practice. The proposed approach was first developed and implemented as part of a wider study (Žlender, 2014) and it has since been updated using more recent datasets for the test areas in the case cities of Ljubljana and Edinburgh. These two cities were selected as representative of the medium-sized cities in which most Europeans live (Giffinger et al., 2007), and for pragmatic reasons (e.g., access to databases and no language barriers for the researchers). The rest of the article is organized as follows. In Section 2, we define the study context, reviewing existing typologies to identify classification variables and spatial units of analysis and selecting the most appropriate typology for peri-urban delineation. In Section 3, we describe how we studied land use and other geographical aspects of the peri-urban landscape in both cities and outline the characteristics of the data and methodology used for delineation. The results of the analyses are discussed in Section 4. Finally, in Section 5, we evaluate the proposed method as a support tool for peri-urban planning and policy development on the basis of the case study results. Characterizing and classifying peri--urban landscapes: Literature review Among the changes caused by ongoing urbanization, some peri-urban areas can no longer be clearly or easily defined as urban or rural, as rapid urban growth continues to consume agricultural land for residential and economic purposes (Cattivelli, 2021a). In the late 1980s, this undefined land, which we characterize as peri-urban landscape, was recognized as a distinct spatial type for research purposes, if not in planning practice. 
It was further suggested that such areas constitute a link rather than a divide between rural and urban (Unwin, 1989;Adell, 1999) as a transitional space characterized by rapid change, complexity, intrinsic variability (especially in spatial organization and land use concentration) and blurred boundaries (Gant et al., 2011;Piorr et al., 2011;Gonçalves et al., 2017;Mortoja et al., 2020), often extending beyond administrative boundaries (Iaquinta & Drescher, 2000;Rauws & de Roo, 2011). As this vague geographical identity can also lead to tenure-related conflicts (Dadashpoor & Ahani, 2019), it has been argued that clearer delineation of such territories is needed to facilitate better governance (Cattivelli, 2021b). In the extensive body of research investigating the rural-urban relationship and the nature of the peri-urban, most scholars have relied on spatial perspectives (e.g., land use) to delineate this landscape and its limits (Gonçalves et al., 2017); some of these analyses have incorporated other factors such as socio-demographics. For example, the PLUREL project defined the peri-urban area in terms of an urban fringe (a zone along the edges of a built-up area, with scattered lower density settlement, transport hubs and large green open spaces) and an urban periphery (smaller settlements of lower population density, industrial areas and other urban land uses surrounding the main built-up areas) (Piorr et al., 2011). Additionally, the various regional typologies developed at the pan-European level have typically employed variables like population density of built-up areas, population size, morphology of mixed (built and open) spaces, infrastructure characteristics (e.g. accessibility), mix of functions at the regional scale, economic diversification, rate of urbanization, administrative boundaries, and distance to urban centres (Iaquinta & Drescher, 2000;ESPON, 2005;Korcelli, 2008;Perpar, 2009;Dijkstra & Poelman, 2010;OECD, 2010;Piorr et al., 2011;Internet 1). In an overview of 80 classification methods developed by statistical offices, national governments and scholars over the last two decades in Europe, Cattivelli (2021b) identified five distinct methods in terms of their defining variables: demographic dynamics, economic and social indicators, settlement structure, distance and hybrid. However, not all of these variables are easy (or even possible) to map. Among the studies reviewed, demographic census data, land cover data and administrative boundaries proved to be the most useful variables for mapping peri-urban landscape (Iaquinta & Drescher, 2000;Piorr et al., 2011;Wandl et al., 2014), and these inform our analysis here. Finally, while most of these approaches adopt a regional scale, this is sometimes narrowed to the metropolitan or sub-regional level, and some have identified this as the most appropriate scale at which to address rural and urban dynamics (Piorr et al., 2011). Research approach The classification variables and spatial units identified in the literature review helped determine the most appropriate typology for delineating the peri-urban landscape. On that basis, we devised a new methodology that builds on the understanding that this is not simply a gradient between urban and rural but refers to interconnected territories independent of administrative boundaries. The analysis of rural-urban territories in different cultural and topographic settings is based on the identification of general peri-urban land use types and overlay analysis as described below. 
Identification of peri-urban land use types The existing literature suggests that peri-urban boundaries cannot be defined in terms of particular land use characteristics such as discontinuous land use (Mortoja et al., 2020) but must take account of multiple factors as discussed above (Gonçalves et al., 2017). As some of these are difficult or impossible to map, we defined five general peri-urban land use types based on readily available information rather than new data to simplify the procedure for future users. This typology drew on existing concepts to describe the character and limits of peri-urban areas (see Section 2). Land use categories were assigned to each type in line with the general European Union approach to spatial development, which stresses the importance of conserving the landscape to halt the loss of biodiversity, cultural identity and ecosystem services associated with future land take, helping improve soil functions and sustain landscape quality (Committee on Spatial Development, 1999;Council of Europe, 2000;European Commission, 2011;EU, 2011). We also incorporated perceptual factors on the basis of previous evidence that local inhabitants regard some built structures (e.g. commercial and logistic centres, transport hubs, dumps, housing areas) as unattractive while semi-natural green spaces, open recreational areas, parks and similar are perceived as attractive (Žlender, 2021). Finally, it should be stressed that the data are determined by availability and so change from case to case; while the datasets used here relate specifically to the two case cities, we identified the following five general peri-urban land use types. • Areas of residential-scale agriculture and leisure uses (ARSALU): land uses that are managed formally, semi--formally or not at all and support utility and leisure uses. These include city (urban) farms, allotments, community Developing a spatially explicit method for delineating peri-urban landscape gardens, private gardens, residential amenity green space, churchyards and cemeteries. • Areas of industrial-scale agriculture and other monofunctional uses (AIAMU): agricultural and other areas that are large in scale and are used intensively or unsustainably. These include primary and secondary agricultural land, vineyards, orchards and forest nurseries. Golf courses also fall into this category because they involve intensive care that is often linked to environmental issues like herbicide pollution, soil erosion and biodiversity decline. These issues may be more pressing in continental Europe, as seasonal climatic variations entail higher maintenance demands. • Sealed land, wastelands, industrial and brownfield sites with accompanying infrastructure (SWIBS): built-up and poor quality land, including degraded landscapes, land with little or no vegetation cover, abandoned sand and gravel pits, quarries, industrial and business sites, special economic areas, areas of scattered development, infertile, derelict and vacant land, environmental infrastructure, landfill sites, degraded urban areas, dams, boatyards, drains, weirs, docks, lock-gates, ditches and proposed housing areas. 
• Cultural and amenity landscapes (CAL): larger semi-natural open spaces, parks and other managed green spaces, including country parks, regional parks, local parks, nature parks, historical parks and squares, informal recreation areas, tourism areas and green spaces, sport and recreation areas, playgrounds, linear green spaces, tree belts and woodlands, river and canal banks, semi-natural open spaces, special-purpose forests, forest reserves, nature reserves, ecologically important areas, Natura 2000 protected areas, grassland, pastures and marshland. • Protected natural areas for active and solitary recreation (PA): national parks and other wilderness environments. (This type was not found in either of the case cities.) Assessment of spatial datasets Having identified these general peri-urban land use types, the relevant datasets were acquired from the city council and other government offices and were assigned to the land use types defined in Section 3.2. The relevant data were transformed for use in a GIS environment, where they were overlaid and merged into clusters corresponding to the above types to produce a graphical representation of general land use types. Population densities from census databases (Internet 2; SURS, 2019) and data for peri-urban areas acquired from local spatial plans and/or other formal documents were also overlaid with data derived from the clusters of general peri-urban land use types. Overlay method The overlay method combines data or information from several datasets to derive new information that integrates spatial data with attribute data (which may be weighted). Input criteria can be transformed in various ways, including weighted overlay, spatial joins, cross tabulation, editing layers with clipping intersection, or union (ESRI, 2021). Overlay analysis is traditionally used in suitability modelling, but it has also been used to define spatial units -for instance, in landscape regionalization (Dang et al., 2000;Stahlschmidt et al., 2017) or to specify landscape types in landscape character assessment (Swanwick, 2002). The weighted overlay method was used here to delineate the peri-urban landscape in both case cities; criteria were differently weighted to distinguish between the urban periphery and urban fringe (see Section 2). All mapping was performed in a GIS environment using a combination of two computer software programs; vector data were prepared, adjusted and cleaned in ArcMap 9.2 for import to ProVal 2000 (ONIX, 2000) to be rasterized (homogeneous spatial units of 10 by 10 m) and weighted for final cartographic representation. The overlaid datasets yielded specific spatial patterns that were then compared with aerial images from Google Earth to assess whether the urban fringe and urban periphery exhibited the spatial properties described in the literature. On that basis, the peri-urban landscape was manually delineated as the sum of the urban fringe and urban periphery. The data overlaying procedure involved the following steps. First, we defined the characteristic features of urban fringe and urban periphery to derive an evaluation scale for the purpose of delineation. Peri-urban landscape has been characterized as a mix of low-value land combining landfill and brownfield sites, wasteland and semi-natural green open spaces that people value and use (Neuvonen et al., 2007;Qviström & Saltzman, 2008;Žlender, 2021). 
The urban fringe is characterised by more urban uses such as transport hubs and settlements of higher density than the periphery, as well as elements like large green spaces. In contrast, the urban periphery is more influenced by the rural milieu, including lower-density settlements and agricultural uses (Piorr et al., 2011). Accordingly, the two land use types that incorporate agricultural characteristics (ARSALU and AIAMU) were assigned a higher percentage of influence in the analysis of urban periphery, and areas of predominantly natural and sealed land (SWIBS and CAL) were assigned a higher percentage of influence in the analysis of urban fringe. In deciding how to value the datasets, we also drew on complementary field information, historical information about the development of both cities, and interviews with local authorities and experts in urban planning, architecture, landscape architecture, infrastructure and other disciplines to improve the accuracy of our results (for more details, see other research outputs : Žlender, 2014Žlender & Ward Thompson, 2017;Žlender & Gemin, 2020). This additional information was especially helpful in identifying the appropriate scale for delineation and in the final manual delineation of the urban core, urban fringe and urban periphery. The next step overlaid the population density variable using the logical OR command, along with information on peri-urban areas as designated in local development plans and/or other formal documents. Based on the literature review, we determined the most discriminative population densities as two classes: 100-149 people/km² as the higher percentage of influence in the urban periphery analysis, and 150-500 people/ km² as the higher percentage of influence in the urban fringe analysis (Perpar, 2009;Piorr et al., 2011). We then intersected with the logical AND command the land use cluster with the output variable that resulted from merging the population density and peri-urban area datasets from the published spatial plans. In the final delineation, we also considered morphological landscape characteristics and spatial homogeneity of landscape patterns as defined in Marušič et al. (1998). Figure 1 presents a flow diagram showing the procedure of combining data in peri-urban landscape delineation. The final outcome of this procedure are delineated areas of urban fringe and urban periphery as shown in Figures 3 and 6. The commentary in Section 4 details each step of the procedure and the final outcome for each case city. Ljubljana Instances of AIAMU were very dispersed and fragmented, reflecting the spatial characteristics of Slovenia's agrarian structure (Cunder, 2002). The few instances of ARSALU were mainly located in the city, and most of these were allotments. Instances of SWIBS were dispersed, and the size of these plots V. ŽLENDER suggests that these largely degraded areas were individual parcels of land, probably for private use. Larger areas were located on the edges of the city, indicating typical abandoned areas previously used for industrial purposes. Instances of CAL accounted for the largest area because the analysis included all forested land; for that reason, stricter criteria may be needed to prioritize some forest designations and/or exclude others. However, because urban dwellers favour nearby forest for recreational and leisure activities (Neuvonen et al., 2007), all forest designations were included in the analysis. 
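The overlay procedure described above was carried out in ArcMap and ProVal 2000; as a purely schematic illustration, the numpy sketch below reproduces its logic on toy 10 m rasters: a weighted overlay of the land use types, the logical OR combination of population density classes with planned peri-urban areas, and the final logical AND intersection. The influence weights, thresholds on the overlay score, and the random toy rasters are assumptions for illustration only.

```python
# Schematic numpy version of the weighted overlay and OR/AND combination steps.
import numpy as np

rng = np.random.default_rng(0)
shape = (200, 200)   # 10 m cells over a hypothetical 2 km x 2 km test window
landuse = {k: (rng.random(shape) > 0.7).astype(float)
           for k in ("ARSALU", "AIAMU", "SWIBS", "CAL")}
density_100_149 = rng.random(shape) > 0.8     # 100-149 people/km2 cells
density_150_500 = rng.random(shape) > 0.8     # 150-500 people/km2 cells
planned_periurban = rng.random(shape) > 0.9   # peri-urban areas from local plans

def weighted_overlay(rasters, weights):
    """Combine binary land-use rasters by percentage of influence."""
    total = sum(weights.values())
    return sum(rasters[k] * (weights[k] / total) for k in weights)

# Agricultural types weighted up for the urban periphery; natural and sealed types for the fringe.
periphery_score = weighted_overlay(landuse, {"ARSALU": 30, "AIAMU": 30, "SWIBS": 20, "CAL": 20})
fringe_score = weighted_overlay(landuse, {"ARSALU": 20, "AIAMU": 20, "SWIBS": 30, "CAL": 30})

# Logical OR of density class and planned peri-urban areas, then logical AND with the land-use clusters.
periphery_mask = (periphery_score > 0.5) & (density_100_149 | planned_periurban)
fringe_mask = (fringe_score > 0.5) & (density_150_500 | planned_periurban)
```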
For the settlements included in Ljubljana and its environs, a raster of 1 km² cells was used to measure densities of 100-149 people/km² and 150-500 people/km² (SURS, 2019). The areas were rather dispersed and randomly located, and the results show no readily discernible pattern other than dense cores of satellite bedroom communities that have emerged in the vicinity of Ljubljana over the past few decades. One would expect to find more peri-urban densities in the eastern part of the municipality, where urbanization is more dispersed, but the analysis shows that this is still a predominantly rural area when population densities are taken into account. In the final overlaid image (Figure 3), the city's core is clearly segregated, and the boundary between the urban area and urban fringe was readily definable. On the north side, the urban fringe's outer edge is defined by individual settlements within larger open spaces. On the south side, the presence of marshland makes the edge less definable. This instance of CAL extends from Ljubljana into the wider region. Based on this analysis, the peri-urban landscape on the south side of Ljubljana cannot be defined. To facilitate further analysis, artificial peri-urban borders were aligned with administrative borders (see Figure 2). This delineation may be appropriate at the regional scale but should be revisited at the local scale to enhance precision. Here, the delineated urban fringe reflects the model outcome, corrected and refined to align with morphological barriers (streams and land-use borders) and built structures (roads and settlement edges). For this reason, it may coincide with administrative borders, which often follow natural borders. In places where the edge of the peri-urban landscape was close to existing administrative borders, these were deliberately aligned to facilitate further analytical work. Edinburgh Instances of AIAMU were located outside the city of Edinburgh; compared to Ljubljana, these were much larger spaces. Instances of ARSALU typically included gardens and allotments inside the city, indicating that gardening activities are popular in Edinburgh. According to Edinburgh's Allotment Strategy (CEC, 2017), the City of Edinburgh Council (CEC) manages 1,488 allotment plots at forty-four sites across the city. The city has adopted a strategic approach to address demand and to ensure that the benefits of allotment gardening are properly recognized and available to all (CEC, 2017). Accordingly, allotments are located as close as possible to people's homes rather than on the edges of the city. In contrast, although the number of allotments in Ljubljana is relatively high (1,023), there are only nine sites (MOL, 2021; Figure 4). It should be noted that the backyards of Edinburgh flats, which were included in this category, are generally managed as grassy areas and not as allotments. As in Ljubljana, SWIBS were scattered across the Edinburgh area, with larger areas concentrated on the west side of the city toward the airport. Pentland Hills Regional Park accounted for the largest proportion of CAL. On the south side, CAL extended into the city, linking the Braid Hills to the city's urban parks and semi-natural areas to form a green wedge connecting the city's core to its boundaries. The output area of overlay analysis Delineation of urban periphery Note: Parts of the border were manually aligned with administrative borders to facilitate further analysis. 
In Edinburgh, population density was calculated using postcodes and included the Edinburgh City Council area and surrounding settlements (Internet 2). Because postcode areas can differ greatly in size, the dataset was complemented by Global Human Settlement Layer data, which is based on a 250 m² cell (European Commission, 2015). The resulting peri-urban densities coincided with the Green Belt and Countryside Policy areas, adding another layer to the delineation of the peri-urban landscape. Based on the overlay analysis, the inner edge of Edinburgh's urban fringe is marked by the Edinburgh City Bypass (Figure 5). On the southeast side, the edge no longer follows the bypass but extends into the city, encompassing the Braid Hills and an area on the city side of the bypass between Gilmerton and Musselburgh. Edinburgh's urban fringe roughly corresponds to the area of the former Rural West Edinburgh Local Plan (CEC, 2006) and the Green Belt and Countryside Policy areas in the new Edinburgh Local Development Plan (CEC, 2016). Like the two previous plans, this includes policies and proposals to guide development and land use across Edinburgh. Beyond the stereotypical industrial sites, landfills, retail centres and green spaces, Edinburgh's urban fringe incorporates large predominantly agricultural areas governed by landscape policy. While urban fringes are generally perceived as low-value land use, Edinburgh's might instead be characterized as accessible countryside on the edge of the city. Nevertheless, there are also some typical fringe uses, including Edinburgh Airport, the Gyle shopping centre and the Heriot-Watt University campus. To the south and southeast, the urban periphery mainly consists of agricultural, forestry and recreational uses (e.g., Pentland Hills Regional Park, golf courses). With two distinct segments on the southwest and northwest sides, the periphery is not continuous, but land uses remain similar to those in the main peripheral area (Figure 6). In this final representation of Edinburgh's peri-urban landscape, the urban core is well defined. Rather than stereotypical land uses, the peri-urban landscape can be characterized as accessible countryside. It also includes settlements, but these are more rural and self-contained in character than the peri-urban bedroom communities that were almost the rule in Ljubljana.

The importance of recognizing peri-urban landscape
In general, our results support existing descriptions of peri-urban landscape in the literature. In Ljubljana, the peri-urban landscape includes a relatively narrow urban fringe and a large urban periphery characterized by semi-natural and natural areas that people value and use for recreation rather than industrial and other typical peri-urban land uses (Žlender & Ward Thompson, 2017; Žlender, 2021). However, this area is located further from the city and is less easily accessible, for these activities, than the urban fringe. Interestingly, population density alone did not reveal any significant gradient from urban to rural in Ljubljana, unlike some other studies that emphasize this variable as a starting point (or the only one that matters) for analysis (see for example van Vliet et al., 2012; White et al., 2012; Wandl et al., 2014).

[Figure note: parts of the border were manually aligned with the city bypass.]

The present findings suggest that an
account of peri-urbanization based entirely on demographics cannot be generalized to other geographic settings. In Edinburgh, the overlay analysis indicates an urban-rural dichotomy rather than a peri-urban landscape, which is also typical of UK cities in general (Bryant et al., 1982; Ambrose, 1992; Gallent et al., 2006). In this sense, it would be more appropriate to characterize these areas of Edinburgh as "accessible countryside". Indeed, the distinction between urban fringe and urban periphery may be largely irrelevant here, as land uses are very similar in both. This differs from Ljubljana, which is surrounded by multiple satellite settlements, with high levels of daily commuter traffic into and out of the city. While land uses in Ljubljana are less coherent than in Edinburgh, they are sufficiently differentiated to allow a clear distinction to be drawn between urban fringe and urban periphery.

Directions for future planning and policy development
The overlay analysis revealed that, although similar in size and population, the two case cities differ in spatial planning approach and in the existence and spatial extent of peri-urban landscape. Although these differences may relate to biophysical characteristics and purely operational decisions (such as choice of datasets), we contend that planning and policy decisions probably account for differences in urban growth (Hersperger et al., 2018; van Vliet et al., 2019). This is especially clear in Edinburgh, where a strict green belt strategy has prevented the city from spreading west and has increased densities within the urban envelope. However, the main purpose of the Edinburgh Green Belt is not to prevent the coalescence of settlements but to direct planned growth, protect landscape settings and ensure access to open space (CEC, 2016). This approach has remained largely unchanged since its introduction in 1957, although the new LDP has taken some areas out of the green belt, mostly to satisfy strategic housing requirements, possibly indicating the strategy's failure to counter the pressures of urbanization (Bunker & Houston, 2003). The LDP controls the types of development allowed in the green belt and promotes opportunities to enhance countryside appearance and access (CEC, 2016). Along with the Countryside Policy, the Green Belt Policy defines in detail what development, if any, will be permitted in the interest of protecting landscape quality and/or rural character. Despite evidence of the inadequacy of planning policies in combating urban encroachment (see for example Silva, 2019), the LDP draws a clear distinction between urban and rural areas and makes no mention of peri-urban landscape, urban fringe or other terms referring to the territory between rural and urban. Our analysis also confirmed that non-urban areas of Edinburgh are rural rather than peri-urban in character. In Ljubljana, the OPN explicitly acknowledges peri-urban areas and defines basic criteria and guidelines for planning them. These provisions mainly pertain to the judicious use of space, promoting settlement concentration in existing built-up areas (infill and restoration) and mixed uses while preventing uncontrolled new construction. The OPN also provides for green spaces of different sizes and types and the future preservation of ecological and recreational assets.
At the regional level, however, existing documents (both formally binding and non-binding) refer only generally or not at all to the peri-urban landscape (e.g., RRA LUR, 2020), let alone the goals and priorities of national-level legislation (e.g., Odlok o strategiji, 2004), which are deemed too broad and insufficiently quantified (MOP, 2016). It should be noted here that peri-urban areas are mentioned in the newly revised proposal for a national spatial development strategy (MOP, 2020), but this again fails to address the specifics of the peri-urban landscape. As our analysis shows, peri-urban landscape may extend beyond municipal boundaries and should therefore be strategically addressed at sub-regional or regional level. Accordingly, there is a clear need to acknowledge peri-urban landscape in the future regional spatial plans as provided for in the state Spatial Management Act (ZUreP-2; Zakon o urejanju prostora, 2017). In this regard, the sub-regional to local level seems most appropriate for the adequate identification and handling of peri-urban areas in the relevant implementation documents. We argue that action plans based on smaller units (e.g., spatial planning units) are urgently needed to specify the existing and future state of individual peri-urban areas. Although we are conscious that the method described here is in need of further refinement, we believe it can assist legislators in defining peri-urban landscape and providing for its development and management. Clearly, institutional differences of approach in managing rural-urban relationships can explain some of the variance in the extent and pattern of peri-urban development (Servillo & Van Den Broeck, 2012). For now, the prevailing view is that current planning tools and policies fail to address the present state and drivers of peri-urban spatial development, and that plans based on an urban-rural dichotomy can only regulate urban and rural areas (Wandl et al., 2014;Bajracharya & Hastings, 2018;Cattivelli, 2021a). With regard to scale, our analysis indicates that it is not enough to address peri-urban landscape issues in municipal plans. Instead, it is important to promote joint regulation of neighbouring areas with high levels of cross-sectoral cooperation in pursuit of integrated spatial planning and institutional governance (Nared et al., 2019;Cattivelli, 2021a;Žlender, 2021). We are confident that the proposed approach can help to ensure more accurate characterization of peri-urban landscapes and thus improve the links between spatial planning and policy and the reality of development in these areas. Some critical reflections on the proposed method The method proposed here involves the detailed description and analysis of spatial data at the regional or sub-regional level. The selected case studies facilitated comparison of results, and the selected variables reflect land use and some sociodemographic aspects of the peri-urban landscape. Like any method, its utility depends on the context and objectives and is therefore subjective in nature. This is also true of the criteria for mapping the data, such as unit of population density or classes of nature preservation. Different criteria and classifications for data collection and merging would undoubtedly alter the delineation of the peri-urban landscape in both cases. 
In addition, land use data do not always reflect functional or socioeconomic issues, and a major limitation of our method is the absence of other relevant data that are more difficult to map and therefore less commonly available as spatial datasets. Other relevant data would include the connecting and separating effects of infrastructure and elements that underpin the connectivity of places with different functions and intensities. These datasets would support more precise delineation of peri-urban landscape and, possibly, the particular characteristics of peri-urban sub-areas. As an attempt to shed light on territories that are neither urban nor rural, we believe that the method described here is sufficiently flexible to accommodate additional datasets and different geographical settings. One important proviso is that adding further variables will inevitably increase the method's complexity, making it less attractive for potential users.

Conclusion
In this study, we described a spatial method for delineating peri-urban landscape that can be applied in different geographic settings and at different spatial scales. We argue that this spatially explicit approach can help to identify peri-urban areas and assess their quality, thus enabling policy makers to optimize resources to facilitate spatially balanced and coherent urban growth while preserving peri-urban green spaces, which are currently neglected by planners and decision makers (Gant et al., 2011; Žlender & Ward Thompson, 2017; Mortoja et al., 2020). This spatial delineation should be based on variables that reflect peri-urban land use as well as other relevant variables like population density. In the present case, we decided to use readily available datasets. To facilitate future peri-urban planning and policy formulation and to allow comparison of different spatial settings, the proposed method describes spatial characteristics as precisely as possible while remaining applicable to other geographical contexts. Clearly, the results would be improved, and possibly altered, by more accurate and more varied data and by changing the thresholds that define the classes. Nevertheless, we believe that this more explicit spatial framework serves as a useful starting point for scientific analysis and peri-urban policy development.
Chronic Kidney Disease (CKD) Prediction Using Supervised Data Mining Techniques ------------------------------------------------------------------ ABSTRACT --------------------------------------------------------------- Diseases are causing high rates of mortality in the modern world, chronic kidney disease (CKD) is one of the major causes of mortality, and it has a long-term disability. The predisposing factors for CKD include diabetes mellitus, hypertension, cardiovascular diseases, smoking, obesity, family history of kidney disease and congenital kidney problems. CKD is associated with many complications such as, proteinuria, anaemia of CKD, CKD-mineral and bone disorder, dyslipidemia and electrolytes imbalance. Renal replacement therapy (dialysis and kidney transplantation) is the treatment of choice for CKD. Data mining is an accurate technique helps to predict the disease using various methods includes logistic regression, naive bayes classification, k-nearest neighbours, and support vector machine. Apart from these previous techniques, it was necessary to use a classification method for data segmentation according to their diagnosis and regression method for finding risk factors. In this present study, data are classified using proposed Identification of Pattern Mining, Decision Tree methods and regression techniques are used to obtain the best levels and this can be taken as metrics that the proposed methods can help in diagnosing a patient with CKD. I. INTRODUCTION In the recent era diseases and its effects are increasing day by day. The aim of data mining is to make sense of large amounts of mostly supervised data and unsupervised data, in some domain [1]. Chronic kidney disease (CKD) also known as chronic renal disease, is a gradual loss in renal function over a period of an inordinate length of time. The symptoms of worsening kidney function are unspecific, and might include feeling generally unwell and experiencing a reduced appetite (feeling Hunger). The reported prevalence of CKD in India is varying from <1% to 13%, and some studies also reported higher prevalence of 17% [2]. Diabetes and Hypertension are the most common cause of CKD, other causes include Autoimmune disorders (such as SLE and scleroderma), Birth defects of the kidneys (such as polycystic kidney disease), Certain toxic chemicals, Glomerulonephritis, Injury or trauma, Kidney stones and infection problems with the arteries leading to or inside the kidneys, Analgesics and other nephrotoxic drugs, Reflux nephropathy, and other kidney diseases. In this proposed work, classification algorithms like Identification of Pattern Mining (IPM), and decision tree (DT) methods for the diagnosis of chronic kidney diseases. The Identification of Pattern Mining outperformed over other techniques. The rest of the paper is followed as: section 2 represents review on various disease life threatening issues, section 3 represents related works in diagnosis of chronic kidney disease, contains dataset and methods used. Section 4 represents the results and discussion. Section 5 comprises conclusion and at last references are mentioned. II. RELATED WORK Masethe and Masethe, used J48, REPTREE, Naïve Bayes, Bayes Net, Simple CART to predict heart disease [3]. In their study, the authors used Kappa Statistics, Mean Absolute Error, Root Mean Squared, Relative Absolute Error and Root Relative Squared Error, for the analysis. 
The accuracy of the prediction was 99.07%, 99.07%, 97.22%, 98.14% and 99.07% for J48, REPTREE, Naïve Bayes, Bayes Net and CART, respectively. A study by Patil showed that a neural network trained with selected patterns can effectively predict heart attack; the author applied the K-means clustering algorithm to the pre-processed data [4]. P.K. Anooj et al. proposed a system for the diagnosis of heart disease [5]. The study followed a computerized approach for generating weighted fuzzy rules and decision trees, creating a fuzzy rule-based decision support system from a raw UCI dataset. Manikandan T. et al. extracted itemset relations using association rules; their data classification was based on the MAFIA algorithm, which resulted in better accuracy. The most common evaluation techniques, entropy-based cross-validation and partitioning, were applied to the data, and the results were compared. MAFIA (Maximal Frequent Itemset Algorithm) used a dataset with 19 attributes, and the goal of the work was to achieve highly accurate recall with high precision [6]. Hybrid intelligent techniques for the prediction of heart disease were presented by R. Chitra et al. Several heart disease classification systems were reviewed in that study, which concluded by justifying the importance of data mining in heart disease diagnosis and classification; classification accuracy can be improved by reducing the number of features [7]. In a cascaded approach, K-means clustering was first used to identify and eliminate incorrectly classified instances, and a fine-tuned classification was then performed with the C4.5 decision tree using the correctly clustered instances from the initial stage. Trial results indicate that the rules generated by the cascaded K-means clustering and C4.5 tree from categorical data are easier to understand than rules generated by C4.5 alone from continuous data [8]. In these studies, the decision tree and ID3 algorithms attained 72% and 80% accuracy, respectively. Han et al. applied data mining techniques in RapidMiner to build a diabetes classification and prediction model [9]; the data were pre-processed and discretized. Breault et al. were the first to apply rough sets to the PIMA data, using equal-frequency binning for the intervals, generating reducts with the Johnson reducer algorithm, and classifying with the batch classifier and the standard/tuned voting method in RSES. Rules were constructed for each of ten randomizations of the PIDD training sets [10]. The test sets were classified using the defaults of the naïve Bayes classifier, and the ten accuracies ranged from 70% to 86%, with a mean of 74% and a 95% CI of (71%, 76%). A clustering algorithm for predicting diabetes based on a graph b-colouring technique was developed by Vijayalakshmi et al. Their approach was compared with K-NN classification and K-means clustering, and the results showed that clustering based on graph colouring performs much better than the other clustering approaches in terms of accuracy and purity. The proposed technique represents clusters by dominant objects, which ensures inter-cluster disparity in a partitioning and is used to evaluate cluster quality [11].

CKD analysis
Chronic kidney disease, or chronic renal disease, evolves gradually, and the kidney usually loses its functionality over a period of several years.
In general, it may not be spotted before it loses 25% of its functionality. In the initial stage of renal failure may not be predictable by the patients since kidney failure may not give any symptoms primarily. Kidney failure treatment targets to govern the causes and slow down the advance of the renal failure. If treatments are not adequate, patient will be in the end-stage of renal failure and the last treatment is dialysis or renal transplant. At present, 4 out of every 1000 person in the United Kingdom are suffering from renal failure [12] and more than 300,000 American patients in the end-stage of kidney disease survive with dialysis [13]. Due to detecting the chronic kidney failure is not feasible until the kidney failure is entirely progressed; thus, realizing the kidney failure in the first stage is enormously important. Through early diagnosis, the act of each kidney can be taken under control, which leads to decreasing the risk of irreversible consequences. For this reason, routine checkup and early diagnosis are crucial to the patients, for they can prevent vital risks of renal failure and related diseases [12]. Therefore, it can be distinguished by measuring factors, and physicians can decide treatment processes, reducing the rate of evolution [14]. The purpose of medical diagnosis is to mine useful information from the immense medical datasets which are accumulated frequently [15]. Enormous mainstream of the studies on medical datasets are related to cancer diagnosis. Sharma et al. used Naïve Bayes classification algorithmbased classifier to predict the Diabetes in Indian population. The authors reported 76.30% accuracy which is higher than the other classifiers (Decision Tree, K-Nearest Neighbors, Random Forest and Support Vector Machines) [16]. Feature selection and extraction Feature selection is the key arena in knowledge discovery, pattern recognition and statistical science. The persistence of feature selection is to eliminate a subset from inputs which are not significant. Features do not depend on information about predictive classes. Reducing the dimensions of features and unrelated features can produce an inclusive model for classification. The main challenge of feature reduction is recognizing the best subset of features to achieve the best results of classifications [17]. Feature selection can simplify the data realization, decrease over fitting problem and the size of data storage; also, it can decrease the cost of train to obtain higher accuracy [18]. Feature selection methods can be characterized into three groups. Filter, wrapper and embedded methods. Fig. 3.1 shows the schema of filter and wrapper methods for feature selection and classification algorithm for classifying the selected subset methods used in this paper. The filter method chooses the features whose levels are the highest among them, and then the selected subset can be prepared for any classification algorithm. After applying feature selection with filter method, numerous classification algorithms could be estimated (Fig. 3.1) [19]. An appropriate feature selection yields to performance improvement of classifier by reducing the computing time and by using optimized data in the dataset [20]. Furthermore, Filter method is a popular method in feature selection due to its fast performance and scalability [21,22]. Wrapper method estimates scores of feature sets that rely on the predictable power by using a classifier algorithm as a black box [23]. 
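As a concrete illustration of the filter strategy described above (score features independently of any classifier, keep the top-ranked subset, then train), the following Python sketch uses scikit-learn on a synthetic stand-in for a CKD-style table. The dataset, the choice of mutual information as the score and k = 8 are illustrative assumptions rather than settings from this study.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a CKD-style table: 24 features, binary CKD / not-CKD label.
X, y = make_classification(n_samples=400, n_features=24, n_informative=8,
                           random_state=42)

# Filter step: rank features by a score computed independently of any classifier,
# then keep the top-k subset (here mutual information, k = 8).
selector = SelectKBest(score_func=mutual_info_classif, k=8)
X_reduced = selector.fit_transform(X, y)

# Any classifier can then be evaluated on the reduced subset.
clf = DecisionTreeClassifier(random_state=42)
full = cross_val_score(clf, X, y, cv=5).mean()
reduced = cross_val_score(clf, X_reduced, y, cv=5).mean()
print(f"accuracy, all 24 features: {full:.3f}")
print(f"accuracy, top 8 filtered features: {reduced:.3f}")
```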
Assessment of specific subset is attained by performance test and training on that precise dataset. The wrapped search algorithm around the classifier gains the space of all features of subsets [19]. 2n (n is the number of features), various assessments are required for a full search in the wrapper method. Although dealing with correlated features and finding the relevant associations are the advantages of this approach, it might lead to the over fitting problems [24]. The advantage of embedded method is that they interact with their classification model, and they do not have complicated computation. Decision support systems use different methods to reduce the features dimension and classification algorithms to diagnose several kinds of diseases. WEKA classifier Open-source software provides tools for data preprocessing, implementation of several Machine Learning algorithms, and visualization tools so that you can develop machine learning techniques and apply them to real-world data mining problems. If you detect the beginning of the movement of the image, you will recognize that there are many stages in dealing with Big Data to make it appropriate for machine learning.  First, the raw data collected from the field. This data may contain some null values and unrelated fields. You use the data pre-processing tools provided in WEKA to cleanse the data.  Then, you would save the pre-processed data in your local storage for applying ML algorithms.  In ML model that you are trying to develop you would select one of the options such as Classify, Cluster, or Associate. The Attributes Selection allows the instinctive collection of features to create a concentrated dataset. Decision tree and mining Unwanted data are collected from the field contains things that leads to wrong analysis. The data may contain null fields; it may contain columns that are unrelated to the current examination, and so on. Thus, the data must be pre-processed to meet the requirements of the type of analysis you are seeking. This is done in the preprocessing module.  Resource discovering and filtering: Data centre Broker discovers the resources present in the network system and collects status information related to them.  Resource selection: The source is selected based on certain restrictions of task and resource.  Task submission: Task is submitted to source selected. This is determining stage. Result analysis The datasets were pre-processed and classified using WEKA tool. Obtained the feasible results are found after classification and the proposed work maximizes the accuracy of finding optimal results. IV. EXPERIMENTAL RESULTS We have implemented our proposed strategy on the WEKA 3.8.4 simulator to appraise the practicability along the performance of supervised learning techniques ( Fig. 4.1, 4.2, and 4.3). Disease prediction as well as management in large scale distributed system is complex. Therefore, for analysing the algorithm we use WEKA 3.8.4 simulators before applying them in real system. Conclusion Compared to related works, we have studied different supervised learning algorithms. We have analysed 14 different attributes related to CKD patients and predicted accuracy for different supervised learning algorithms like Decision tree and Identification of Pattern Mining. From the results analysis, it is observed that the decision tree algorithms give the accuracy of 91.75% and IPM gives accuracy of 96.75%. 
When considering the decision tree algorithm, it builds the tree based on the entire dataset by using all the features of the dataset. The benefit of this organization is that, the prediction process is fewer overwhelming. It will help the doctors to start the treatments early for the chronic kidney disease patients and also it will help to spot more patients within a less time period. Limitations of this study are the strength of the data is not higher because of the size of the data set and the missing attribute values. To build a supervised learning model targeting chronic kidney disease with overall accuracy of 99.99%, will need millions of records with zero missing values. Future research This work will be considered as part of implementation for the healthcare system for CKD patients. Also, postponement to this work is that application of deep learning since deep learning provides high-quality performance than machine learning algorithm.
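For readers who prefer a scriptable counterpart to the WEKA workflow outlined above (cleanse missing values, then train and evaluate a decision tree), the sketch below shows an equivalent pipeline in Python. The column names, the handful of records and the imputation strategy are hypothetical, and the sketch is not expected to reproduce the 91.75% and 96.75% figures reported for the decision tree and IPM.

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical CKD-style records with missing values, mirroring the raw field data
# that the preprocessing module would cleanse.
df = pd.DataFrame({
    "age": [48, 62, None, 51, 70, 33, None, 45],
    "blood_pressure": [80, 90, 70, None, 100, 76, 80, 70],
    "serum_creatinine": [1.2, 3.1, 0.9, 4.2, None, 0.8, 2.7, 1.0],
    "ckd": [0, 1, 0, 1, 1, 0, 1, 0],
})
X, y = df.drop(columns="ckd"), df["ckd"]

# Pre-process (impute nulls) and classify in a single pipeline.
model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0,
                                          stratify=y)
model.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
```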
Fast and Robust Single-Exponential Decay Recovery From Noisy Fluorescence Lifetime Imaging Fluorescence lifetime imaging is a valuable technique for probing characteristics of wide ranging samples and sensing of the molecular environment. However, the desire to measure faster and reduce effects such as photo bleaching in optical photon-count measurements for lifetime estimation lead to inevitable effects of convolution with the instrument response functions and noise, causing a degradation of the lifetime accuracy and precision. To tackle the problem, this paper presents a robust and computationally efficient framework for recovering fluorophore sample decay from the histogram of photon-count arrivals modelled as a decaying single-exponential function. In the proposed approach, the temporal histogram data is first decomposed into multiple bins via an adaptive multi-bin signal representation. Then, at each level of the multi-resolution temporal space, decay information including both the amplitude and the lifetime of a single-exponential function is rapidly decoded based on a novel statistical estimator. Ultimately, a game-theoretic model consisting of two players in an “amplitude-lifetime” game is constructed to be able to robustly recover optimal fluorescence decay signal from a set of fused multi-bin estimates. In addition to theoretical demonstrations, the efficiency of the proposed framework is experimentally shown on both synthesised and real data in different imaging circumstances. On a challenging low photon-count regime, our approach achieves about 28% improvement in bias than the best competing method. On real images, the proposed method processes data on average around 63 times faster than the gold standard least squares fit. Implementation codes are available to researchers. 
Fast and robust single-exponential decay recovery from noisy fluorescence lifetime imaging Ali Taimori, Duncan Humphries, Gareth Williams, Kevin Dhaliwal, Neil Finlayson, James Hopgood, Member, IEEE Abstract-Fluorescence lifetime imaging is a valuable technique for probing characteristics of wide ranging samples and sensing of the molecular environment.However, the desire to measure faster and reduce effects such as photo bleaching in optical photon-count measurements for lifetime estimation lead to inevitable effects of convolution with the instrument response functions and noise, causing a degradation of the lifetime accuracy and precision.To tackle the problem, this paper presents a robust and computationally efficient framework for recovering fluorophore sample decay from the histogram of photon-count arrivals modelled as a decaying single-exponential function.In the proposed approach, the temporal histogram data is first decomposed into multiple bins via an adaptive multi-bin signal representation.Then, at each level of the multi-resolution temporal space, decay information including both the amplitude and the lifetime of a single-exponential function is rapidly decoded based on a novel statistical estimator.Ultimately, a game-theoretic model consisting of two players in an "amplitudelifetime" game is constructed to be able to robustly recover optimal fluorescence decay signal from a set of fused multi-bin estimates.In addition to theoretical demonstrations, the efficiency of the proposed framework is experimentally shown on both synthesised and real data in different imaging circumstances.On a challenging low photon-count regime, our approach achieves about 28% improvement in bias than the best competing method.On real images, the proposed method processes data on average around 63 times faster than the gold standard least squares fit.Implementation codes are available to researchers. I. INTRODUCTION T IME-DOMAIN fluorescence lifetime imaging microscopy (FLIM) is a powerful signal acquisition technique to characterise biological and chemical samples such as cells, viruses and molecules.FLIM has diverse applications in fields such as biology, chemistry, physics, materials science, medicine, pharmacology, and cancer research [1][2][3][4].Unlike in steady state fluorescence imaging which simply measures This research arose under EPSRC funding support on the Healthcare Impact Partnerships (HIPs) Project, Grant Ref EP/S025987/1, "Next-Generation Sensing For Human In Vivo Pharmacology-Accelerating Drug Development In Inflammatory Diseases". A. Taimori and J. Hopgood are with the Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh EH9 3JL, UK (e-mail: ataimori@ed.ac.uk;James.Hopgood@ed.ac.uk). N. Finlayson is with the Institute for Integrated Micro and Nano Systems, School of Engineering, University of Edinburgh, Edinburgh EH9 3FF (e-mail: n.finlayson@ed.ac.uk). 
the intensity of fluorescence produced by a sample, lifetime imaging allows for probing of the molecular environment.The fluorescence lifetime of a molecule, the time between photon absorption and fluorescence emission, is often dependent on the environment in which the fluorophore is present.Such environmental properties include viscosity, pH, temperature and quenching interactions with surrounding molecular species.This can allow for the design of specific probes to detect and monitor these environmental effects, or the observation of such changes through autofluorescence.Furthermore, fluorescence lifetime is relatively insensitive to concentration effects and other effects on intensity that cause large fluctuations in steady state imaging.In FLIM, a location in a specimen is first excited by a pulsed laser.Absorption of excitation photons can lead to fluorescence, where electrons moved to an excited state return to the ground state resulting in photon emission.The average elapsed time from excitation pulse to the emission of a photon is known as the fluorescence lifetime.The detection of the emitted photons associated with the electronic transition from the excited state to the ground state is recorded as a time-stamped event via a sensitive sensor relative to the laser pulse.For a predefined exposure time extending over multiple laser pulses, the measurement cycle is repeated many times.Photon arrival times in a number of quantised time intervals are then counted.Finally, the whole process generates a raw data sequence consisting of a histogram of photon-count arrivals representing a decay function for a pixel of the imaged sample.The data sequence contains noise originating from various sources such as dark counts and shot noise.On-chip histogramming has been reported in a single photon avalanche diode line sensor which permits very high throughput of fluorescence lifetime signals [5].Where the complexity of decay depends on the number of emitting species or environment within the image pixel, the histogram decay can be modelled by a single-, double-or generally multi-exponential function.The model of single-exponential decay is widely used across FLIM applications due to its simplicity and utilisability for high-speed imaging.In the literature, different methods have been proposed for estimating decay parameters including the fluorescence lifetime [6][7][8][9][10][11][12][13][14][15].The lifetime is a physical characteristic that describes the decay rate of fluorescence.As a biomarker or chemomarker, it provides contrast for distinguishing substances in biology and chemistry sciences.In this paper, we focus on both speed and robustness issues for lifetime sensing in a wide range of photon-count regimes. A. Review of related work We categorise state-of-the-art available lifetime estimation methods into three general families consisting of fitting-based, non-fitting-based, and fit-free approaches.The fitting-based family tries to fit a curve to the histogram of photon counts.Here, the least squares (LS) method is generally employed to estimate parameters of a fluorescence decay model [6].Maximum likelihood estimation (MLE) is another approach in this category that can account for a more realistic Poisson distribution for the histogram of photon counts, instead of the Gaussian assumption behind the LS fit [7]. 
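As a point of reference for the fitting-based family, the sketch below performs an ordinary least-squares fit of a single-exponential decay to a synthetic photon-count histogram with SciPy. The bin width, decay constants and noise level are arbitrary illustrative values; this is the generic LS baseline, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic histogram of photon-count arrivals: A*exp(-dt*n/tau) plus Poisson noise.
dt, n_bins, A_true, tau_true = 0.05, 256, 400.0, 2.0   # ns per bin, bins, counts, ns
n = np.arange(n_bins)
ideal = A_true * np.exp(-dt * n / tau_true)
counts = rng.poisson(ideal).astype(float)

def decay(n, A, tau):
    return A * np.exp(-dt * n / tau)

# Gold-standard least-squares fit of the single-exponential model.
popt, _ = curve_fit(decay, n, counts, p0=(counts[0], 1.0))
print(f"LS estimates: A = {popt[0]:.1f}, tau = {popt[1]:.3f} ns "
      f"(truth: {A_true}, {tau_true})")
```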
In contrast to the direct fitting formulation and solution, non-fitting-based algorithms use specific techniques for finding fluorescence lifetime imaging characteristics.These include rapid lifetime determination (RLD) [8], RLD with overlapping windows (RLD-OW) [9], center of mass method (CMM) [10], and fluorescence lifetime estimation via rotational invariance techniques (FLERIT) [11] are all paradigms of non-fittingbased approaches.The methods available in the RLD class present closed-form solutions for estimating the lifetime parameter [8,9,16].CMM uses the center of mass of the single-exponential distribution to derive a simplified lifetime calculation formula suitable for on-board clinical firmware applications.FLERIT models the problem of lifetime estimation into a general class of direction of arrival estimation in array signal processing based on singular value decomposition [17]. Fit-free mechanisms are the third group of lifetime estimation methods.As two well-known approaches in this family, the phasor approach represents the fluorescence lifetime information in a diagram-wise manner [12], and learning-based methods exploit function approximation capability of shallow and deep neural networks for lifetime parameter estimation based on raw decay training samples [13,14].In practical FLIM system, lifetime estimation performance and its computation time are two key components.Each of the lifetime calculation families have their own benefits and shortcomings for a compromise between performance and complexity [3]. B. Current challenges Fitting-based methods are accurate to some extent but are computationally complex.Non-fitting-based algorithms, as addressed in this paper, are fast and provide an apt option for hardware realisation [18]; however, dealing with a range of uncertainties available in the acquired signal simultaneously remains a difficult task.Fit-free phasor and learning-based approaches suffer respectively from a need for intervention of experts for lifetime analysis, and a requirement for a large number of training time series instances.Moreover, nonautomatic implementation and the lack of data present two main challenges for medicine-related researchers. 
We can fundamentally summarise that the performance of a lifetime estimation approach is affected by three influential dynamics which originate from: 1) the nature of the sample being imaged; 2) the quality of the imaging instrument; and 3) the user experience on tuning acquisition parameters.All the dynamics appear in the captured histogram of photon counts in different forms, making lifetime determination a challenging task [19].To name a few, low, mid, and high photon counting regimes lead to low-, mid-and high-level signal-to-noise ratio (SNR) acquisitions to be managed [20,21].Due to limiting the power of light source for preventing damage to sensitive specimens, low photon-count imaging is often observed which leads to amplification of noise components.Blurring effects due to convolving the original decay signal with the instrument response function (IRF) of the imaging device deform the head and tail of a decaying function, which may results in overestimation of the lifetime [15].Different fluorophores expose varying lifetimes from short (less than 1ns) to long (more than 10ns) ranges.User tunable parameters during imaging such as bin width or size of the histogram of photon arrivals for exposure time control also influence the decay curve shape and ultimately the system's accuracy and precision [11].These factors demand a robust lifetime estimation approach coping with the various regimes, and this is the goal of the method proposed in this paper. C. Research contributions To take account of the variability in fluorescence lifetime acquisition, we suggest a perturbation-robust lifetime estimation framework.We first mathematically model the fluorescence decay function in both deterministic and stochastic modes.Afterwards, we propose a fast algorithm to be able to estimate amplitude and lifetime parameters of a single-exponential decay function in the presence of both inevitable convolution and noise perturbations.To control the uncertainties existing in a decay signal, we decompose the temporal histogram data as an adaptive multi-bin signal representation.The decomposition is done via a set of adjustable, successive Savitzky-Golay (S-G) low-pass filtering and binning.At each level, decay parameters are estimated from the transformed signals.We formulate the problem of reliable detection of the best representative signal from a set of candidates determined from multiple temporal resolutions using a 2-player game model [22].Game theory based approaches such as generative adversarial networks [23] have recently found beneficial applications in diverse areas including FLIM [24].Here, our main motivation of using the game model is to handle intrinsic dynamics of the histogram of photon counts in different circumstances.Amplitude and lifetime as players act on estimated parameters based on their own payoff functions to be able to recover optimal decay.Our approach falls into the category of non-fitting-based approaches, where the decaying signal can be rapidly and robustly recovered for different regimes.Figure 1 illustrates the suggested method on an exemplar histogram of photon count arrivals.The robustness of our single-exponential decay recovery method at high level of noise is also shown in Fig. 2. Implementation codes are available online to researchers 1 . 
In summary, our main scientific contributions are:
• a fast algorithm to decode lifetime information from the histogram of photon counts modelled by a single-exponential decaying function;
• an adaptive multi-bin representation of histograms of photon counts to deal with uncertainties of decay variability;
• a game-theoretic model to permit robust recovery of single-exponential decay from a set of candidate signals.

D. Paper organisation
The remainder of the paper is organised as follows. In Section II, we present our theoretical analysis behind the lifetime estimation problem, and provide details of the proposed algorithms for single-exponential decay recovery in Section III. In Section IV, a set of coherent experiments demonstrating the efficiency of our suggested method is described. Section V provides an analysis of the computational complexity of the algorithms. Finally, conclusions are presented in Section VI.

II. THEORY
In this section, we model the decay signal of fluorescence lifetime both deterministically and stochastically. A derivation of FLIM parameter estimation is presented for each case.

A. Deterministic decay model and estimation
In a FLIM system, the fluorescence decay function is ideally modelled by a first-order linear differential equation:

dx(t)/dt = -(k_r + k_n) x(t), (1)

where x(t), k_r and k_n denote the quantity at time t, the rate constant of the radiative process, and the cumulative rate constant of environment-dependent non-radiative processes, respectively [7,25]. Using the separation of variables method on (1) gives the solution x(t) = A e^(-t/τ), where A ≜ x(0) is the initial amplitude and τ ≜ 1/(k_r + k_n) is the fluorescence lifetime. In practice, the fluorescence decay is measured at equally spaced time intervals rather than at an arbitrary time. In this case, a discrete-time representation of the exponentially decaying continuous-time function with unquantised intensities x[n] ∈ R+ follows from:

x[n] = A e^(-Δ·n/τ), n = 0, 1, ..., N-1, (2)

where Δ, N and R+ represent the measurement time interval (ranging from picoseconds to nanoseconds for FLIM devices), the number of measurements, and the set of all positive real numbers, respectively. It is obvious that A = x[0]. The function x[n] is strictly monotonically decreasing, with the descending sort property x[0] > x[1] > ... > x[N-1]. For a detailed discrete-time signal analysis, we provide standard definitions below to make this paper more accessible to a wider audience.

Definition 1 (Linear difference operator): The linear difference operator D{·} is defined as

d[n] ≜ D{x[n]} = x[n] - x[n+1]. (3)

The descending sort property also holds for d[n], where d[0] > d[1] > ... > d[N-1].

Definition 2 (Maximum fall of signal): We define the point with maximum difference, or maximum fall, of the signal by

n* = argmax_n d[n]. (4)

In the ideal perturbation-free case, always n* = 0; in this special case, the greatest difference is between the first sample and the second one.

Theorem 1 (Lifetime decoding): Let x[n] = A e^(-Δ·n/τ), ∀n = 0, 1, ..., N-1 be a histogram of photon-count arrivals. The slope tangent to the point with maximum fall of x[n], i.e.
n in (4), conveys lifetime information of decay.Geometrically interpreting, for the point n ˚, its initial slope just tangent to the exponentially decaying curve intercepts the n-axis at a point near n « τ , where the estimated lifetime is τ " ∆ lnp xr0s xr1s q .Proof: By using a first-order approximation of the Taylor expansion around the point with maximum difference, the tangent slope can be captured by an appropriate precision, which finally decodes the fluorescence lifetime value.The Taylor expansion of xrns around the point n ˚gives: where x 1 rn ˚s " xrn ˚`T s´xrn ˚s T represents the first-order difference.Neglecting the higher order terms by setting ε « 0, the first-order approximation of xrns is reached.By substituting x 1 rn ˚s into (5), taking the discrete interval T " 1 and reducing to the Maclaurin series for n ˚" 0 in the ideal model, we have: If the approximation xr1s is the same as the true value xr1s, taking the natural logarithm from two sides of (6) will yield: For the special case N " 2, (7) is equal to that of RLD. B. Stochastic noisy-blurred decay model and estimation 1) Corrupted measurements modelling: In real FLIM systems, the signal (2) is a discrete stochastic process, where both A and τ are random variables.Any FLIM device consists of two separate optical and electrical parts.Hence, two main sources of noise are contributable including: 1) signaldependent electronic shot noise arising from photon sensing equipment; and 2) signal-independent background noise due to instrument ambient disturbances.Contrary to the former with an electronic and discrete nature, the latter has mainly a non-electronic, optical source with continuous nature.The diverse noise components are assumed to combine to each other additively making a whole noise of the FLIM device.Let us model the histogram of photon-count arrivals in the presence of both blur and noise perturbation in a pixel as the following quantised measurements (See also Fig. 3.): xrns " txrns ˚hrns `ηrnss urtxrns ˚hrns `ηrnsss, (8) where xrns P Z `, @n " 0, 1, . . ., N ´1.Operators t¨s, ur¨s and ˚denote the round (quantiser) and step functions, and convolution, respectively.The step function in (8) acts as a signal clipper and ensures that the photon-count xrns takes on physical positive counts only.The functions hrns and ηrns fi e P rns `eG rns respectively represent the blurring kernel as the IRF of the system, and the noise term with errors of e P rns P Z `and e G rns P R. We assign a discrete Poissonian probability mass function to the signal-dependent noise and a continuous Gaussian probability density function to the signal-independent one [26].The characteristic of shot noise distribution is defined as e P rns " Ppλq, where Pp¨q denotes Poissonian distribution.As a property of Poisson process, one straightforward way, usually used in the literature, is to consider the expectation of shot noise as a constant value across bins.In this case, the standard deviation of shot noise equals to ?I and consequently total SNR fi I ? where I fi Np N denotes the average number of arrived photons [27].The parameter N p represents the number of photons per histogram, i.e., N p " ř N ´1 n"0 xrns.The average rate of the shot noise can be written as λ " I, which means its total dependency on the histogram signal. For the signal-independent noise, we assume e G rns " N pµ, σ 2 q, namely a Gaussian distribution with the mean Quantisation Blurring + Clipping Fig. 
3: The proposed process for generating synthesised histogram of photon counts.Signal of each point has visualised for a spacial experimental setting.The x-axis of plots denotes the bin index n. µ " cσ and variance σ 2 .The parameter c ě 0 is a real constant, which, in case of c " 1, about 68.3% of random samples from the distribution cast over the zero level.To emphasise error resources in the system, we rewrite (8) as xrns " xrns `erns with the total error of: erns " e q rns `eb rns `eP rns `eG rns looooooomooooooon fiηrns `eu rns, where terms e q rns, e b rns and e u rns denote errors introduced by quantisation, blurring and other uncategorised noise sources, respectively.It is notable that the quantisation noise originates from rounding amplitudes, due to the quantised nature of light in photon counting process. 2) Modelling of time-resolved shot noise: Effects of noise would appear more in points with high variations, especially in bins with low number of photons from low-photon count regimes.As evidence, taking the logarithm of histogram of photon counts reveals the locations of noise, where signal values in bins with low SNR are seen to have higher than proportional noise components relative to regions with higher SNR.Consequently, the histogram dependency in shot noise can be retrospectively assigned to binned photons based on different point-wise behaviours of SNR.To take this reality into account, one can change the constant noise valuation and mathematically customise the signal-dependent noise model for a special device by using weighting mechanisms.A general form for the average rate of shot noise can be defined as a combination of increasing (Õ) and decreasing (OE) terms as: in which the symbol r¨s is the ceil function, and, parameters a, β P R `and ζ P Z `are arbitrary scale factors. Here, based on our device measurement observations, we have particularly modelled the variance of noise inversely proportional to the original signal, where we set β " 0 and ζ " 1 in (10).In this case, the equation λrns increases by increasing the bin index n.The coefficient a should be a small positive constant to control the growth of shot noise fluctuations with a power increase almost linear during falling photons in the tail of distribution.The simulation in Fig. 2 tries to mimic the noise behaviour for the tail of the fluorescence decay.Figure 3 visualises the proposed process for generating synthesised histogram of photon counts with an example. 
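The corruption model sketched in this subsection (ideal decay convolved with the IRF, signal-dependent Poisson noise plus Gaussian background, clipping to non-negative values and rounding to integer counts) can be prototyped in a few lines. The following Python snippet is a simplified stand-in for the generation process of Fig. 3: the Gaussian shape assumed for the IRF, the constant background variance and all numerical values are our assumptions, and the bin-dependent shot-noise weighting of (10) is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def synthesise_histogram(A, tau, dt, n_bins, irf_sigma_bins=3.0, gauss_sigma=2.0):
    """Simplified sketch of the corruption model: ideal decay convolved with a
    Gaussian IRF, plus Poisson (signal-dependent) and Gaussian (background) noise,
    then clipped to non-negative counts and rounded to integers."""
    n = np.arange(n_bins)
    ideal = A * np.exp(-dt * n / tau)

    # Blur with a normalised Gaussian kernel standing in for the instrument response.
    k = np.arange(-4 * int(irf_sigma_bins), 4 * int(irf_sigma_bins) + 1)
    irf = np.exp(-0.5 * (k / irf_sigma_bins) ** 2)
    irf /= irf.sum()
    blurred = np.convolve(ideal, irf, mode="same")

    # Signal-dependent shot noise plus signal-independent background noise.
    noisy = rng.poisson(blurred) + rng.normal(0.0, gauss_sigma, n_bins)

    # Clip to physical positive counts and quantise to integers.
    return np.round(np.clip(noisy, 0, None)).astype(int)

hist = synthesise_histogram(A=200.0, tau=2.5, dt=0.05, n_bins=256)
print(hist[:10], "... total photons:", hist.sum())
```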
3) Estimation under perturbation: For signals with a sufficiently large average of photons, e.g., I ą 10, Poissonian noise distribution can be approximated well by a Gaussian noise function [28], where Ppλq « N pλ, λq.Therefore, based on the central limit theorem [29], the distribution of the error term in (9) consisting of the superposition of discrete and continuous noises can be approximated by i.i.d.erns " N p0, σ 2 e q components [30].A substitution gives corrupted measurements as: A statistical analysis on the model in (11) reveals that an efficient minimum variance unbiased estimator (MVUE) [31] does not exist for estimating τ .In this case, based on the Cramér-Rao lower bound (CRLB) analysis [32], the variance of the estimator is given by: where the derivation details are provided in the Appendix.But, a MVUE does exist for estimating the amplitude given the true τ , and is given by: with CRLB of: We refer the reader to a similar derivation of this in the Appendix.Despite the lack of existence of a MVUE for τ , the theory still inspires the design of a novel estimator for the lifetime and the histogram amplitude, as described below. It is important to note that the point corresponding to maximum difference does not always occur at zero and its location may generally be a function of a number of parameters such as the IRF, the pulse amplitude and duration of the lighting source, response speed of fluorescence to the excitation pulse, the time bin width, photon detector speed, and any perturbation.Therefore, a little displacement to the right in the histogram of photon counts may occur.Nevertheless, the descending sort property still holds to some extent after a point m ˚, and the lifetime estimation problem can be reformulated by a Taylor expansion around this arbitrary point, m ˚ě 0. Consider the M -point signal of yrms, @m, M ď N as a transformed version, including low-pass filtering and dimensionality reduction, of corrupted measurements xrns; See (21) for a complete definition of the transformation.By defining the parameters r 0 fi yr0s, r 1 fi yrm ˚`1s, the ratio r fi r0 r1 , and taking natural logarithms from the first-order Taylor expansion equation, the lifetime is estimated as: Even in noisy measurements, points in the vicinity of m ˚are somewhat robust to shot noise, because starting points from the falling signal have inherently stronger intensities than other points in the tail of the photon-counts histogram.However, instead of selecting a specific single point from yrms for obtaining each of the parameters r 0 and r 1 , then inspired by the CRLB analysis in the Appendix, we propose an estimator to suppress these noise effects.To do this, we employed two left and right weighted average mechanisms of yrms around the central point m ˚, so that: in which ω L ris " , @i " 1, . . 
., m ˚is a weighting function.To obey the trend of data which at first exhibits rise and then fall, we utilise the function f L ris fi 1 2b e ´pm ˚´iq b , @i for the left side of m ˚, called the left exponentially growth weighting function.Similarly, for the right side of m ˚, let: where ω R rks " , @k is called the right exponentially decaying weighting function.The parameter b denotes scale of the exponential functions, which is in relation to τ .In experiments, we set the optimal b " 3 by trial and error.On real FLIM measurements, effects such as outlier affect the position of m ˚and hence demand a control mechanism on lower and upper bounds of (17).The operator T p¨q performs as a signal-length truncation function such that: The weights in ( 16) and ( 17) are normalized, so that left and right weighting functions.Substituting ( 16) and ( 17) into (15) and simplifying by geometric series yields: ln ˜řk e in which the constant c τ fi e .Also, the approximate amplitude is equal to  " r 0 , i.e.:  " III. ALGORITHMS The proposed FLIM parameter estimator is summarised in Algorithm 1. Due to applying the difference operator defined in (3), noise components are naturally amplified.Although decay parameters are usually estimated from the initial samples of the histogram curve, where the SNR is higher due to higher intensities in those time bins, pre-smoothing of the signal xrns by means of low-pass filtering mechanisms is required in practice to be able to alleviate perturbation effects.In this paper, to robustly recover the original fluorescence decay signal, two-step smoothing is used including a sequential S-G filter and temporal binning with an adaptive approach in a multi-resolution representation.Then, based on a game rule, the final decay signal is recovered from decisions made in the multi-resolution temporal space. A. Multi-bin decay representation Temporal binning of a histogram as a technique for information representation also concurrently performs dimensionality reduction and intrinsic smoothing tasks [11,33].However, finding an optimum bin size in histogram binning is generally Algorithm 1 The proposed FLIM parameters estimator 1: Inputs: The M -point signal y " ry 0 , y 1 , ¨¨¨, y M ´1s T and the bin width ∆. 2: Apply the difference operator in (3) on yrms to find drms.3: Obtain the point with maximum fall m ˚from (4).4: Estimate lifetime from (19).5: Estimate amplitude from (20).6: Outputs: Decoded amplitude  and lifetime τ .a challenging issue in statistics.Here, the bin size is simply defined as the number of bins in a histogram.On the one hand, if the bin size is very small, the appearance of a decay curve cannot be sketched well and the shape of the function may be too smooth.On the other hand, if the number of bins is too large, broken combs occur in the histogram of photo counts and a noisy representation of data is made.To address these problems, in this paper a multi-bin representation of the FLIM signal is proposed to be able to gain both smoothness and crispness of the signal without any temporal resolution loss.Naturally, higher-levels of temporal resolution represent crisper signals and lower ones expose smoother signals.To control levels of smoothness/crispness of a signal, we apply a S-G smoothing prior to histogram binning in an adaptive manner according to the degree of the inherent smoothing property in the binning mechanism at different resolution levels of signal. 
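To make the decoding step of Algorithm 1 concrete, the sketch below implements a stripped-down version of the idea: apply the difference operator, locate the point of maximum fall m*, and decode the lifetime from the ratio of neighbouring values, τ ≈ Δ / ln(y[m*]/y[m*+1]). The left/right exponentially weighted averaging of (16)-(19), the truncation operator and the amplitude estimator of (20) are deliberately omitted, so this should be read as a simplified illustration rather than a faithful re-implementation.

```python
import numpy as np

def max_fall_lifetime(y, dt):
    """Decode amplitude and lifetime from the point of maximum fall.

    The full method replaces the two single samples y[m*] and y[m*+1] with
    left/right exponentially weighted averages around m* for robustness;
    that refinement is omitted in this sketch.
    """
    y = np.asarray(y, dtype=float)
    d = y[:-1] - y[1:]                  # difference operator d[m] = y[m] - y[m+1]
    m_star = int(np.argmax(d))          # point of maximum fall
    r0, r1 = y[m_star], y[m_star + 1]
    tau = dt / np.log(r0 / r1)
    amplitude = r0 * np.exp(dt * m_star / tau)   # refer the amplitude back to bin 0
    return amplitude, tau, m_star

# Clean synthetic decay: the decoder recovers A and tau exactly.
dt, n = 0.05, np.arange(256)
y = 300.0 * np.exp(-dt * n / 1.8)
print(max_fall_lifetime(y, dt))
```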
1) Adaptive Savitzky-Golay filtering: S-G filter is a prominent type I finite impulse response signal smoother which uses a local LS polynomial approximation [34,35].Characteristics of interest of the S-G filter are functions of shape and peak preservation.Also, the cut-off frequency of the filter is proportional to the polynomial order, f c 9O f , and inversely proportional to the window length of the filter, f c 9 1 L f .In practice, the polynomial order should be limited to a small number such as O f " 2 to prevent ill-conditioned responses; and, for achieving a filtering process, the window length of the filter should satisfy the condition L f ą O f .For adaptive filtering, at higher levels of temporal resolution, we use larger L f to only pass very low frequencies.By moving to lower levels, we exponentially reduce the filter length to gradually widen the pass-band, so that mid-and finally high-frequency components are also passed.At lower levels of resolution, due to strong smoothing by binning itself, initial S-G presmoothing is not needed; thus, S-G filter will only mimic the signal and pass all frequencies.The behaviour of the adaptive filtering process is illustrated in Fig. 1, where the S-G filter is ineffective in the right branch, and, the same behaviour is seen for the binning stage in the left branch. 2) Adaptive temporal binning: Consider the functions of xrns, @n as a N -sample histogram of photon counts and grns, @n as a S-G filtered version of the histogram, where N " 2 l and l is the number of levels in the multi-bin representation of the decay signal.The data points at the i th level of binned space can be determined from the summing formula of: grm ¨Bi `js, @m " 0, 1, . . ., M i ´1, (21) where the function y i rms, @i " 1, ¨¨¨, l represents a smoothed, dimensionality reduced version of the signal grns and M i fi 2 pl´i`1q ď N is the number of time bins in the i th reduced dimension from the multi-bin representation.The parameter B i fi N Mi refers the number of consecutive time bins that are binned into a new one at the i th level. 3) Decision fusion: For each of the temporal resolutions, we can estimate amplitude and lifetime parameters based on Algorithm 1.However, estimated parameters at each level need to be rescaled to the original signal space.To do this for the i th level, we can multiply the estimated amplitude and lifetime values by B ´1 i and B i , respectively.Specifically, rescaling the amplitude with B ´1 i assumes a uniform distribution which is not in agreement with a decaying trend.Instead, we use the scale s ´1 i for the amplitude, where: The bias γ accounts for an exponentially decaying distribution. In experiments, we set γ " 0.25 as an optimised value.Initial estimates from diverse levels may encompass optimal parameters, where each level exposes its own properties.For instance, in large binning, uncertainty in estimating amplitude increases, due to combining more intensity values.This also means amplitudes are estimated well at higher levels of temporal binning.Conversely, lifetime estimation at top levels of binning will be generally more sensitive to perturbation, where lower levels of temporal resolutions are preferable.This brings a conflict of interest for a final decision making.Hence, a consensus strategy is suggested to synergically decode the original decay signal based on the game modelled below. B. 
Game-theoretic modelling for decay recovery Suppose the goal is to decide the best parameters for an optimal decay recovery based on the multi-bin histogram representation and corresponding initial estimates.Both of optimal parameters A ‹ and τ ‹ are influential on the optimum selection.However, a single objective function is not available to consider both roles.Therefore, we suggest a game in which the problem is modelled by the "game" Gpυ, A, dq, where υ, A, and d denote the number of players, the set of actions, and payoff function of the game, respectively.We consider the initial estimation behaviour of each of the amplitude and lifetime random variables as the players, i.e. there exists υ " 2 players.The strategy of each player is to choose a set of known actions to be successful.Actions themselves are drawn from multiple temporal binning representations of histogram of photon-count arrivals, where we define the actions set A " tα 1 , α 2 , ¨¨¨, α k u, in which k " l is the number of actions.The game space is shown in the decision fusion stage of Fig. 1 as a grid.The action α i corresponds to 2 pk´i`1q -dimensional temporal resolution from the multi-bin decay representation. For the action i th , a single-exponential decay can be recovered from FLIM data of a given pixel, i.e. the pair p Âi , τi q.A Representation of actions taken by each player in the 2-D space Ω P Z 2 constitutes a matrix called a reward matrix as R " rr ij s.For one repetition of the game, the matrix's entries takes 1 or 0 values, referring to prize or penalty, respectively. Theorem 2 (Optimal decay recovery): Let the matrix R " rr ij s be a reward matrix for the amplitude-lifetime game Gpυ, A, dq, where for a repetition of the game, the entry corresponding to an optimal parameter is 1 and other locations are 0. A place in the space Ω of the reward matrix exists by which the underlying optimal decay signal is recoverable. Proof: Generally, Nash equilibrium proves that an equilibrium point exists for a game, where all rational players reach their maximum payoff [36].To show that for the game Gpυ, A, dq, consider the equilibrium position as location of optimal estimates.Then, the optimal decay signal recovery is possible by players' payoff function optimisation.Here, optimality is constrained to quality of initial estimates of amplitude and lifetime parameters in the multi-bin decay representation.To detect optimal A ‹ and τ ‹ , we first measure distance between each recovered signal from the pair p Âi , τj q, @i, j in the game space Ω and received corrupted measurements, and then, find the minimised one: see the blue star in the game space shown in Fig. 1 for illustration. In terms of curve-like shape of an exponentially decaying function, we define two critical fixation points for the distance optimisation.The first one is connected to amplitude, which controls the initial position of the decay signal on the y-axis; and, another is related to lifetime, in which the steady-state location of the curve on the x-axis is controlled.It is difficult to seek a single objective function for finding the optimal pair pA ‹ , τ ‹ q, in a manner that the metric concurrently copes with the two degrees of freedom well.Instead, we consider a specific distance measure for each player.Now, the problem is to search apt metrics for satisfying the two fixation points. 
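Anticipating the two distance metrics chosen in the next subsection (a mean-squared-error criterion for the amplitude player and a chi-square criterion for the lifetime player), the whole multi-resolution pipeline of Fig. 1 can be sketched end to end. The sketch reuses the rld_lifetime_sketch helper above, assumes N is a power of two, uses a simple halving schedule for the Savitzky-Golay window, and replaces the paper's amplitude rescaling s_i (with bias γ = 0.25) by an explicit geometric-sum correction; it is an illustration of the structure of Algorithm 2, not a reproduction of it.

```python
import numpy as np
from scipy.signal import savgol_filter

def recover_decay_sketch(x, delta, b=3.0):
    """Illustrative multi-resolution recovery in the spirit of Algorithm 2.

    x     : N-point histogram of photon-count arrivals (N a power of two)
    delta : time-bin width of x
    Returns the recovered decay f*, the fused amplitude A* and lifetime tau*.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)

    candidates = []                                   # per-level (A, tau) pairs
    M, win = N, min(max(5, (N // 2) | 1), N - 1)      # assumed initial S-G window
    while M >= 2:
        # Adaptive S-G pre-smoothing: long windows at fine temporal resolution,
        # exponentially shorter ones (down to a pass-through) at coarse levels.
        g = savgol_filter(x, window_length=win, polyorder=2)
        # (21): bin g into M bins by summing B = N/M consecutive samples
        B = N // M
        y = g[: M * B].reshape(M, B).sum(axis=1)
        A_hat, tau_hat = rld_lifetime_sketch(y, delta * B, b=b)
        if np.isfinite(tau_hat) and tau_hat > 0:
            # Undo the intensity gain of summing B samples of a decaying signal.
            # (The paper instead uses its own scale s_i with a bias gamma = 0.25.)
            gain = (1 - np.exp(-delta * B / tau_hat)) / (1 - np.exp(-delta / tau_hat))
            candidates.append((A_hat / gain, tau_hat))
        M //= 2
        win = max(3, (win // 2) | 1)                  # shrink window, keep it odd

    if not candidates:                                # all levels failed
        return np.full(N, np.nan), np.nan, np.nan

    # Decision fusion over the (amplitude, lifetime) grid: the MSE picks the
    # amplitude player's action, a chi-square distance picks the lifetime's.
    eps = 1e-10                                       # guards against division by zero
    A_star = tau_star = None
    best_mse = best_chi = np.inf
    for A_i, _ in candidates:
        for _, tau_j in candidates:
            f = A_i * np.exp(-delta * n / tau_j)
            mse = np.mean((x - f) ** 2)
            chi = np.sum((x - f) ** 2 / (f + eps))
            if mse < best_mse:
                best_mse, A_star = mse, A_i
            if chi < best_chi:
                best_chi, tau_star = chi, tau_j
    f_star = A_star * np.exp(-delta * n / tau_star)
    return f_star, A_star, tau_star
```

Invalid per-level estimates (for example, a non-positive lifetime caused by noise) are simply skipped before the fusion step.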
The scale of squared error at transition times of a decay curve is usually more than settling times.Hence, for fixing the amplitude, mean squared error (MES) criterion is apt as: where f " r f0 , f1 , ¨¨¨, fN´1 s represents a single-exponential recovered signal in the space Ω.Conversely, for the tail of the decay curve, the scale is usually higher in terms of Neyman's chi-square test (CHI) metric χ 2 [37].Therefore, it is appropriate for probing optimum lifetime.The distance χ 2 for the lifetime player is determined by: It is worth to noting in case of observing division by zero in (24), we offset the effect by the trick of adding an extremely small number (e.g., 10 ´10 ) to its denominator [38].Assume vectors d MSE and d CHI contain MES and χ 2 distances ob- The proposed single-exponential decay recovery 1: Inputs: The N -point histogram of photo-count arrivals x and the bin width ∆. 2: Set O f " 2, N l " log 2 N , and n " r0, 1, ¨¨¨, N ´1s T .3: Initialise g " x, M " N , and k " 1. 4: while M ě 2 do 5: Smooth g by S-G filter with specs of O f and L f .Bin smoothed g into M bins by (21) to make a y. 10: Estimate  and τ by passing y to Algorithm 1. 11: Âk " p end for 23: end for 24: A ‹ " arg min Â,τ pd MSE q 25: τ ‹ " arg min Â,τ pd CHI q 26: f ‹ " A ‹ e ´∆ τ ‹ n 27: Output: The recovered fluorescence decay signal f ‹ .tained from the space Ω.Optimal influential parameters are selected from the minimisation of: τ ‹ " arg min Finally, the optimal decay signal is recovered by: Our robust recovery approach for a pixel is detailed in Algorithm 2. We term our method "Robust RLD". Example 1: Consider the original decay signal in (2) with A " 100, τ " 3ns, ∆ " 468.8ps, and N " 32.The signal was corrupted with the Gaussian blurring kernel h " r0.25, 0.5, 0.25s T and noise parameters of a " 0.05 and σ 2 " 25. Figure 1 illustrates the whole process for a random run.For a million random plays of the game, Table I (a) reports accumulated reward matrix normalised on summation of prizes for each player.The Nash equilibrium point is bold in the table.To be able to compare detection performance of our model to that of ground truth decay, we have also repeated the game for the ideal case.For each round of the game, the prize entry in the ground truth reward matrix was determined as the place in which the absolute difference between a ground truth value and corresponding multi-bin estimates is minimum.Outcomes are listed in Table I (b).Again, Nash equilibrium is bold in the table.Comparing bold numbers in Tables I (a) and (b) reveals the congruence of equilibria.Dominant success almost always occurs in upper triangle part of the reward matrices. IV. EXPERIMENTS In this section, using synthesised data with known ground truth, we evaluated the robustness of our method under different settings and circumstances through Monte Carlo simulation.We compared our proposed method to CMM, standard LS fit of MATLAB software, Poisson MLE, RLD, RLD-OW and FLERIT approaches.The effectiveness of the proposed method is also shown on real FLIM data. A. 
Tests on synthesised data 1) Evaluation on different bin sizes: In this experiment, we changed the bin size of histogram of photon counts from 2 1 to 2 10 for various decay signals and determined the performance of our method.The amplitude of functions was set in a manner that the number of photon counts per histogram remains approximately the same as 1,000 photons/histogram before perturbation, where A " in the deterministic model of (2).We adjusted the bin width to the formula ∆ " 5τ N , which guarantees settling a decay curve into total measurement cycle for a pixel.All signals were first blurred with a Gaussian kernel of the length 3 and then noise was added with noise characteristics of a " 0.1 and N p0, 25q. Figure 5 plots median lifetime, as a measure of bias or accuracy, vs bin size for three fluorescence decay functions with different ground truth lifetimes (ns) of 1, 2.5 and 4 ns.The median lifetime reported for each point of the plot was obtained for 100,000 random runs.As seen from the figure, median lifetimes are inaccurate at initial points for the bin size of 2, where we have only a line segment for representing a fluorescence decay but not a desired curve.However, the estimations sharply improve with increasing bin size.Estimations reach their corresponding ground truth lines for bin sizes between 64 and 128.Meanwhile, the proposed method delivers appropriate estimations for 32 ď N ď 256, where a sufficiently continuous decay curve apt for lifetime estimation exists.Due to photon-starved-like behaviours, an overestimation is seen for the bin size 1,024.Hence, the bound can be considered as a user guide for clinical settings of instruments. 2) Effects of blurring and noise: As mentioned before, fluorescence decay signals are affected by perturbation of blurring and noise in practice.However, both the shape and level of signal's perturbation may differ from instrument to instrument.This experiment provides two simulation modes for analysing those effects.In the first simulation, for a fixed Gaussian blurring kernel with length 5, we evaluated statistics of estimated lifetimes from a corrupted signal under various shot and background noises.Parameters of the original signal were N p « 2,600, A " 100, τ " 1.5ns, ∆ " 58.6ps and N " 128.The performance of our method is shown in Fig. 6 for 100,000 random runs.The original signal encountered with a spectrum of perturbation including shot noises with both variable and constant rates (λ " 10) plus the background fluctuations.The curves related to median of lifetime show a stable behaviour of bias in our estimator even for severe noise.In this regard, the amount of averaged overestimation than the ground truth value is at most around 0.5ns in the worst-case scenario for the case µ " 0 and λ " 10.As seen from Fig. 6, an increase at DC level of fluctuations leads to increasing level shift of the overestimation (bias).In terms of standard deviation, as a measure of precision, it remains under control with a tolerable rise during increasing noise power.The standard deviation of lifetime averaged over all noise powers is 0.2ns in the worst case.Results in σ 2 " 0 denote only the effect of shot noise in the absence of background fluctuations.Also, results of the single point σ 2 " 0 in the absence of shot noise (the case µ " 0, a " 0) mean perturbation only by blurring. 
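Before turning to the second simulation, the synthetic test signals used throughout these experiments can be generated along the following lines. The sketch keeps the bin-width rule ∆ = 5τ/N and the Gaussian blurring kernel [0.25, 0.5, 0.25] from the text, but stands in for the paper's shot-noise model (10), with its parameters a, β and ζ, with plain Poisson counts on the blurred decay plus an additive Gaussian background; it reuses recover_decay_sketch from above, and the amplitude, run count and printed sweep are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_histogram(A, tau, N, kernel=(0.25, 0.5, 0.25), bg_sigma=5.0):
    """One corrupted histogram, loosely following the settings of Section IV-A1.

    Times are in ns.  The bin width follows the rule delta = 5*tau/N so the
    decay settles inside the measurement window.  Shot noise is modelled here
    simply as Poisson counts on the blurred decay; the paper's shot-noise
    model (10), with parameters a, beta and zeta, is more elaborate.
    """
    delta = 5.0 * tau / N
    m = np.arange(N)
    f = A * np.exp(-delta * m / tau)               # ideal decay, model (2)
    blurred = np.convolve(f, kernel, mode="same")  # IRF blurring
    x = rng.poisson(np.clip(blurred, 0, None)).astype(float)
    x += rng.normal(0.0, bg_sigma, size=N)         # additive background noise
    return x, delta

def median_lifetime(tau_true, N, runs=500, A=100.0):
    """Monte Carlo median of the estimated lifetime for one bin size,
    mirroring the median-lifetime-vs-bin-size curves of Fig. 5 (with far
    fewer runs than the paper's 100,000)."""
    taus = []
    for _ in range(runs):
        x, delta = synth_histogram(A, tau_true, N)
        _, _, tau_hat = recover_decay_sketch(x, delta)
        if np.isfinite(tau_hat) and tau_hat > 0:
            taus.append(tau_hat)
    return float(np.median(taus))

# Example sweep over bin sizes for a 2.5 ns decay, as in Fig. 5
for N in (16, 32, 64, 128):
    print(N, round(median_lifetime(2.5, N), 2))
```

The sweep mirrors Fig. 5 only qualitatively; absolute numbers depend on the simplified noise model assumed here.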
In the second simulation, we fixed the shot-noise parameter at a = 0.05 and then evaluated the mean and standard deviation of lifetimes estimated from a corrupted signal under different blurring kernel shapes and background noise powers. To model the IRF of a FLIM system, different blurring kernels may be utilised. Here, we employed impulse, box, approximate Gaussian and Airy smoothing functions, all of length 7. The Airy IRF is the profile of a 2-D Airy pattern, which is approximately determined from a Bessel function of the first kind [39]. Note that applying an impulse function corresponds to the ideal case without any blurring, so only the noise effect is present. Parameters of the original signal were the same as in the first mode above. Figure 7 plots the results for 100,000 random runs. With increasing noise power, a decreasing trend is seen among the estimates for all kernels. The responses show small deviations from the ground truth reference. In this regard, the lowest and highest absolute errors between the reference line and the mean lifetime, averaged over the different noise powers, belong to the Airy and box kernels, respectively. It is notable that throughout the experiments of this paper we have not used any IRF cutoff, bin exclusion or deconvolution; performing such procedures could potentially improve the quality of the lifetime estimators further. (Fig. 7: Evaluation of our method for various blurring kernels. Legend: background Gaussian noise with σ² = 4, 25 and 64, plus the ground truth; vertical axis: estimated lifetime (ns). The number of photon counts before perturbation is approximately 2,600.)

3) Comparison to theoretical Cramér-Rao lower bound: This section investigates the variance of various estimators in comparison to the theoretical CRLB derived in Section II-B3 under the Gaussian noise assumption N(0, σ_e²); bias analysis is provided in the following sections as well. We used the same experimental parameter settings applied for generating the synthesised histograms of photon counts in the first simulation of Section IV-A2. Figures 8 (a) and (b) plot the variance of the estimated lifetime and amplitude, respectively, versus the noise variance σ_e² for the different compared approaches. The experiment was repeated 10,000 times for each σ_e². Note that in the implementation of the LS fit we utilised a nonlinear LS method of MATLAB that employs the "trust region" algorithm for optimisation [40,41]. Also, for the Poisson MLE, we used an exhaustive 2-D search over possible amplitude and lifetime values to find the optimal parameters. It should be pointed out that the CMM method does not have any formulation for amplitude estimation; thus, it was ignored in Fig. 8 (b). As shown in the plots, the variances of both lifetime and amplitude grow with increasing noise power. In terms of these variances, our proposed approach exposes a middle precision with under-control variances, where the gold-standard LS fit is ranked first; although, based on estimation bias, we will show in later experiments that the suggested method outperforms the LS fit over a vast range of variations.

4) Performance comparison under decreasing shot noise: In this experiment, we particularly evaluate the performance of a set of lifetime estimators under a decreasing shot noise. To do this, we set the parameters a = 0, β = 20 and ζ = 1 in (10). Other experimental settings are the same as those we used in Fig.
6.Table II reports (median, standard deviation) of different lifetime estimators for various background noises with the distribution N pσ, σ 2 q.Based on the ground truth lifetime of τ " 1.5ns, our Robust RLD shows the best accuracy (the bold median) among all.CMM achieves the best rank in terms of precision (the bold standard deviation); but, it is biased.At the same time, the proposed approach remains robust and maintains the precision at an acceptable level.Also, a pair-wise comparison between the averages of (MED, STD) for the proposed method here, i.e., (1.66, 0.09) in ns, and those obtained in Fig. 6 for the increasing shot noise with similar background noise, i.e., (1.72, 0.11) in ns, reveals the increasing shot noise is the best worst case scenario of noise modelling though.The results indicate that the performance of Robust RLD is not heavily dependent on shot noise models used. 5) Performance comparison under lifetime variations: The goal of this experiment is to examine the performance of different approaches in short-, mid-and long-lifetime regimes.Figure 9 shows lifetime estimation statistics of various methods on three random signals with short, mid and long lifetimes of 0.5, 3 and 12 ns, respectively.Common parameters utilised for the signals were N p " 1,500, ∆ " 500ps and N " 64.Perturbation characteristics were Gaussian blurring kernel of length 3 and noise parameters of a " 0, N p10, 100q.Reported statistics were obtained for 10,000 random runs.As is shown in the box plots related to short-and mid-lifetime regimes in Fig. 9, our method outperforms other benchmark approaches.Our results are the nearest ones to the ground truth values, and for the short lifetime, the first quartile is aligned with the ground truth line.The optimised nonlinear LS fit ranks second for those regimes.For the long-lifetime regime, the lifetime value is too long, so that one can not see settling of a decay curve into the given time measurement window of 64 ˆ500ps " 32ns, which can be interpreted as an inappropriate experimental setting of the window due to the small selection of the bin width.Contrary to the short-and mid-lifetime tests, CMM exposes the lowest bias than others in the special case of measurement window, where our estimator yet ranks second among all.It is important to note that all compared estimators were evaluated in a fair situation without compensating background components introduced by the bias due to the mean offset of the background noise, where such a mechanism can potentially alleviate the bias totally. The FLERIT approach [11] mainly employs neighbouring pixels information of a central point in an image to construct the observation matrix for estimating the average lifetime.Hence, we separately compare the performance of lifetime estimation between FLERIT and our method on a synthesised image.To do this, we generated an image of dimensions 1024 ˆ1024 including three regions with various lifetimes as shown in the intensity map of Fig. 10.Ground truth lifetimes in ns for pixels inside regions of A, B and C are 10, 6 and 2, respectively.Amplitudes were set under a low photon count regime, so that they were 10 3 , 2 and 2 3 for regions of A, B and C, respectively.Common characteristics of ground truth signals are N " 32 and ∆ " 0.4ns.The decay signals were corrupted by Gaussian noise of zero mean and 2.5 ˆ10 ´3 variance.In FLERIT, 3 ˆ3 neighbours and the number of consecutive merged bins equal to 4 were utilised.In Fig. 
10, the left-side intensity map represents summation on all arrived photons, which is the same in FLERIT and our method for a fair comparison.The mid lifetime map visualises lifetime estimation performance.And, the right-side plot depicts lifetimes' histogram.Visual results demonstrate the superiority of our method over FLERIT.For regions A, B and C, Table III tabulates mean ˘standard deviation from estimated lifetimes of FLERIT and our proposed method.The best results are bold in the table.Except for the mean of region C, the evaluation of numerical results demonstrates that the suggested approach brings desired lower bias and variance of lifetime estimates than those of FLERIT.6) Comparison in various photon-count regimes: As mentioned earlier, SNR is the square root of the average number of arrived photons, I.This parameter is a function of amplitude.Hence, we change the amplitude to simulate low, mid and high photon count regimes, which are interpreted as different levels of SNR.To this intent, we considered three signals with amplitudes of 50, 250 and 1,000.We set other common parameters of signals as τ " 2ns, ∆ " 312.5ps and N " 32.All signals were first blurred with a Gaussian kernel of length 3 and then noise was added with characteristics of a " 0.1, N p0, 25q.Those settings result in the values of I approximately equal to 10, 50 and 200 averaged on the measurement cycle for low, mid and high photon counts, respectively.For all of the regimes, Fig. 11 represents the mean and standard deviation of the estimated lifetimes from different approaches including our proposed methods in Algorithms 1 and 2. Due to the differentiation operator used in Algorithm 1, we have also evaluated the effect of pre-binning as a simple signal smoother before estimation.To do this, we reduced the number of bins from 32 to reasonably sized 8.For a fair comparison, the pre-binning was repeated for the standard LS fit.Statistics in the bar graph were obtained for 100, 000 times random run.For the challenging low photoncount situation, comparing results of different approaches to the ground truth reference reveals the lowest bias for the proposed Robust RLD with a significant difference from other competing approaches.In the regime, the percent mean bias of our approach is 7.55%, whereas, for the best competitor (RLD-WO), it is 35.13%, which shows 27.58% improvement.This specifically demonstrates the robustness of our method under severe perturbation, as seen in the example of Fig. 2. 
The accuracy of the estimator in Algorithm 1 is ranked second but in an uncontrolled variance, which is because of estimations beyond the measurement window caused by noise amplification.However, with an accuracy-precision trade-off, the pre-binning clearly improves the variance of the estimator in Algorithm 1.As shown in results of LS estimator with pre-binning, the smoothing also improves slightly the bias than the LS counterpart with no binning at the expense of increasing the standard deviation.Thanks to high levels of SNR for the high photon counts regime, most of the lifetime estimators expose close performance.Interestingly, the bias of Algorithm 1 is less than that of Algorithm 2 in this regime.This fact also highlights that for low noise levels, it may not be needed to employ more complex approaches for lifetime estimation.Meanwhile, results of Robust RLD show its appropriateness for utilising in a wide variety of photoncount regimes.Specifically, our approach makes better chance for robust estimation of the lifetime under photon starvation Low Mid High Photon-count regime situations.7) Comparison on decay recovery error: Although the lifetime is known as the most important parameter in FLIM, some fields like materials science deal with the amplitude information, too.However, as mentioned in Section IV-A3, all benchmark FLIM systems do not have the capability of amplitude estimation such as CMM.Or, if capable, they may work weakly under difficult situations like RLD and RLD-OW.This fact also means they are not effective for recovering a continuous or discrete version of a decay signal.Therefore, due to the best rank of LS fit for amplitude estimation, we have reported a recovery performance comparison between our method and that of LS fit in Table IV.For this experiment, we considered three signals with small, mid and large amplitudes of 20, 100 and 500, respectively.Other parameters of the three signals were τ " 5ns, ∆ " 781.3ps and N " 32.Signals were first blurred with a Gaussian kernel of length 3 and then noise was added with characteristics of a " 0.05, N p0, 64q.In Table IV, bold mean and standard deviation values represent the best performance.For the small amplitude, our method has lower bias than LS fit; and, in mid and large amplitudes, it follows the LS fit with little differences.The LS fit is inaccurate for lifetime estimation in case of the small amplitude, where signal perturbation is severe, whereas the proposed method is robust for diverse amplitudes.Regarding recovery error of χ 2 , our method outperforms the LS fit for almost all cases. B. Tests on real data 1) FLIM system: FLIM images were recorded using a confocal scanning imaging system based on previously reported work [42].The system includes a 20MHz super-continuum laser filtered to produce excitation at 480nm.The system was used in conjunction with an optical imaging fibre bundle to allow for remote imaging.Light was guided down individual cores in the fibre and fluorescence collected from the same fibre core and directed onto a spectral, time resolved detector.The system can record between 2 and 512 spectral channels and between 2 and 32 temporal channels for each pixel in the image.For the data presented here, 2 spectral channels were used with an exposure time of 85µs per pixel and an image size of 128 ˆ128 pixels. 
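Given per-pixel histograms from such a system (here 128 × 128 pixels with a small number of time bins per pixel), the intensity and lifetime maps reported below are obtained by running the recovery pixel by pixel; region statistics such as those in Tables III and V are then masked means and standard deviations. The following sketch assumes an in-memory (H, W, N) cube layout and reuses recover_decay_sketch; the layout and all names are illustrative, not the instrument's actual data format.

```python
import numpy as np

def flim_maps(cube, delta):
    """Per-pixel intensity and lifetime maps from a FLIM data cube.

    cube  : array of shape (H, W, N) holding one photon-count histogram per
            pixel (an assumed in-memory layout, not the instrument's format)
    delta : time-bin width shared by all pixels
    """
    H, W, _ = cube.shape
    intensity = cube.sum(axis=2)            # summation over all arrived photons
    lifetime = np.full((H, W), np.nan)
    for i in range(H):
        for j in range(W):
            _, _, tau = recover_decay_sketch(cube[i, j].astype(float), delta)
            if np.isfinite(tau) and tau > 0:
                lifetime[i, j] = tau
    return intensity, lifetime

def region_stats(lifetime, mask):
    """Mean and standard deviation of the lifetime inside a boolean region
    mask, as tabulated per region/sample in Tables III and V."""
    vals = lifetime[mask & np.isfinite(lifetime)]
    return float(vals.mean()), float(vals.std())
```

For the synthesised three-region image, the region masks are known by construction; for real fibre-bundle data, pixels outside the probe circle would be masked out in the same way.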
2) Experimental design: Neutrophil Activation Probe (NAP) [43] is an activatable fluorescent probe used for the detection of Human Neutrophil Elastase (HNE), an enzyme released by activated, pro-inflammatory neutrophils [44].Consisting of three internally quenched fluorescein moieties each conjugated to a cleavable peptide sequence specific to HNE, NAP (λ ex " 488nm and λ em " 525nm) in its activated form amplifies its fluorescence signal due to the release of fluorescein.By incubating NAP with HNE, we sought to characterise its fluorescence lifetime properties and test the effects of Sivelestat, a specific inhibitor of HNE, as well as Nafamostat mesylate, an antiviral drug currently undergoing clinical trials as a potential treatment for coronavirus (COVID-19) (clinical trial identifier NCT04473053) as a potential inhibitor of HNE. Another property of the experiment from the lifetime estimation viewpoint, is measuring fluorescence in a homogeneous solution as opposed to those of complex biological objects such as cells.This means that signals from each sample should be consistent, and we expect a uniform, flat lifetime map.Therefore, such partial information about samples' behaviour brings the ability to evaluate the performance of the proposed lifetime estimation method in real-world scenarios. 4) Sensing and outcomes: For each sample, microendoscopy was done using green and red spectral bands with wavelength ranges of p498nm " 570nmq and p594nm " 764nmq, called bands 1 and 2, respectively.Each sample contains video sequences with N f frames.Signal parameters are N " 16, ∆ " 800ps and the frames dimension of 128 ˆ128.Table V summarises statistics of samples for comparison of both our proposed and LS fit approaches side by side.Mean and standard deviation values were calculated on pixels inside the probe circle accumulated over all video frames of f " 0, 1, . . ., N f ´1; and, as a pattern of the plain specimens, Figs. 
12 (a) and (b) show intensity and lifetime maps as well lifetimes' histogram from bands 1 and 2, 10 th frame, sample C, respectively.The zigzag-wise patterns on intensity maps are mainly due to fibre-bundle artefacts.Our obtained results are statistically significant for all sam-HNE activated NAP via the cleavage of peptide sequence to release fluorescein, resulting in increased fluorescence intensity and lifetime.10µM Sivelestat reduced the fluorescence intensity; however, it had no effect on fluorescence lifetime due to the partial inhibition of HNE.This potentially highlights the limitation of single exponential fitting and the small number of time bins used as there was likely both cleaved and uncleaved probe in the signal, with the cleaved probe dominating in terms of amplitude as the uncleaved probe is quenched.The fitted lifetime is therefore dominated by the longer lifetime signal.No reduction in fluorescence intensity or lifetime was observed with 10µM Nafamostat mesylate, indicating that it is not an inhibitor of HNE.Due to the fluorescence emission spectrum properties of NAP, fluorescence was primarily detected in band 1.Thus, standard deviations of band 2 are greater than band 1 counterparts because of noise amplification, where the band 2 red spectrum has lower photon energy than green one and falls largely outside of the NAP emission band.There is, however, some evidence in band two to support the partial cleaving inhibition in sample B with an intermediate lifetime observed.The increased fluorescence lifetime signature of NAP makes it suitable for FLIM.In terms of LS fit results, the lifetime of different specimens exposes lower variance; however, they are affected again by the important issue of bias due to the underestimates. V. COMPUTATIONAL COMPLEXITY ANALYSIS Except the performance of a lifetime estimator, another important issue in FLIM is computational complexity/run-time of the estimator.In this regard, estimators with low complexity enable on-chip lifetime estimation capabilities, which lead to benefits from reducing data transfer rate between a sensor head and a distant computer to portability of a medical device.This section provides approximate computational complexity order of our algorithms in terms of the input size of bins number N , yielding an upper bound on their time complexity.We have done that task for methods presented in Section IV.Additionally, we have calculated run-time for different approaches as a complementary analysis, where all influential factors on the complexity are considered in a real scenario. As shown in Fig. 1, the proposed Robust RLD algorithm consists of three processing stages at each level of the multitemporal resolution space and a decision unit at the end.The processing steps include S-G filter, binning, and estimator.The input bin size is variable for each level; and, the complexity of the decider is negligible in comparison to the processing stages.Therefore, the computational complexity is practically bounded to processing applied at the level with the highest resolution.At this level, no binning exists and all photon timestamps are individually recorded.The proposed estimator in Algorithm 1 has a liner complexity of the order OpN q.Therefore, the dominant term is the complexity of S-G filter [45], i.e. 
the order of 1-D convolution as OpN log 2 N q.The compared algorithms expose two types of complexity, regardless of their implementation details.On the one hand, the complexity of analytic closed-form approaches of CMM, RLD, and RLD-OW grows linearly with increasing N , which is the same as the proposed estimator in Algorithm 1.On the other hand, LS fit, Poisson MLE, and FLERIT have all matrix decompositionbased solutions.Hence, they shows cubic order of complexity.Table VI summarises the complexity for different categories.Our proposed Algorithm 2 is ranked second among them with a linearithmic complexity between the fair order OpN q and the worst one OpN 3 q. Figure 13 reports the average run-time vs bin size for synthesised histograms having τ " 2.5ns in experiments of Section IV-A1.For each bin size, we repeated the random experiment 1,000 times.In the exhaustive searchbased Poisson MLE, which can be considered as the most complex optimiser, we set expected lower and upper bounds of 1 and 1,100 for the amplitude, respectively, with the resolution step of 4. Also, lower and upper bounds for the lifetime were respectively 1 and 10, with the step size of 0.2.All simulations were performed in MATLAB R2021b environment and run on an Intel Core i7 2.2GHz machine.The ranking of runtimes justifies congruency of the two complexity analyses.On real images in specimens of Section IV-B, our robust method processed data on average about 63 times faster than the gold standard LS fit.To sum up, the analyses demonstrate that both of our algorithms are suitable options for hardware-level implementations. VI. CONCLUSION This paper presented a robust and computationally efficient fluorescence decay signal recovery algorithm in the presence of inevitable blurring and noise occurring during time-resolved single photon sensing.The proposed framework first provides a multi-bin decay representation from the histogram of photon count arrivals using an adaptive signal smoothing approach.Subsequently, each representation is fed to our lifetime decoding algorithm.Finally, estimated parameters from different temporal resolutions are fused to each other based on gametheoretic modelling.We theoretically proved that our method is capable of recovering optimal fluorescence decay robustly under a wide variety of imaging situations. In addition to being robust, due to the non-fitting-based nature of our method, it can be considered as a rapid approach for hardware-level realisation.This capability is of great importance in real-time fluorescence lifetime sensing systems such as in vivo, in situ microendoscopy. APPENDIX DERIVATION OF CRAM ÉR-RAO LOWER BOUND AND MINIMUM VARIANCE UNBIASED ESTIMATOR Starting from the corrupted observations model represented in (11), likelihood function for the given τ is: The first-order partial derivative of the log-likelihood gives: B ln f X px|τ q Bτ " A∆ τ 2 σ 2 (30) The estimation variance satisfies varpτ q ě CRLB, where: n"0 n 2 e ´2∆¨n τ . (31) Now, the condition of existence of a MVUE is to check ψpτ qpτ ´τ q " B ln f X px|τ q Bτ for a function like ψpτ q as: ψpτ qpτ ´τ q " A 2 ∆ σ 2 e τ 3 Fig. 1 : Fig. 
Hence, the MVUE does not exist due to being a function of the true parameter τ. ■

Fig. 1: The proposed multi-resolution fluorescence decay recovery method, unfolded and visualised for an exemplar histogram of photon counts. S-G stands for Savitzky-Golay; L_{f_i} denotes the window length of the S-G filter at the i-th tunable branch; and the parameters A, τ and M_i represent the amplitude and lifetime of the exponential decay and the number of time bins after binning at the i-th level, respectively. The functions g_i[n] and y_i[m] are the output signals of S-G filtering and binning at the i-th level, respectively. The blue star in the game space denotes the point of Nash equilibrium. The plots at the output of the flow diagram include the original decay x[n] (blue), the corrupted measurements of x[n] (red stems), and the recovered decay f*[n] (green).

Fig. 4: An illustration of the left and right weighting functions.

Fig. 5: The performance of our method for a diverse range of bin sizes from three lifetime-varying decay signals. For all test points, the number of photon counts per histogram before perturbation is approximately the same, 1,000 photons/histogram.

Fig. 6: Evaluation of our method under noise variations. Numbers in parentheses in the legend are averages over all σ². The number of photon counts before perturbation is approximately 2,600.

Fig. 8: Estimation variance vs noise variance of the different approaches, in comparison to the theoretical Cramér-Rao lower bound, for (a) lifetime and (b) amplitude. Numbers in parentheses in the legends show the mean over all noise variances. The number of photon counts before perturbation is approximately 2,600.

Fig. 9: Lifetime estimation statistics of various methods in (a) short-, (b) mid- and (c) long-lifetime regimes. For all plots, the number of photon counts before perturbation is approximately 1,500.

Fig. 10: Intensity map, lifetime map and lifetimes' histogram for the synthesised three-region image of Section IV-A5.

Fig. 12: Results, from left to right: intensity and lifetime maps and the lifetimes' histogram from (a) band 1 and (b) band 2, 10th frame, sample C in Table V.

Fig. 13: Average run-time vs bin size for various methods. Numbers in parentheses in the legend show the average over all bin sizes.

TABLE II: Comparison of (median, standard deviation) for different approaches under a decreasing shot noise, with the ground truth lifetime τ = 1.5 ns.

TABLE III: Numerical comparison of the methods in Fig. 10.

TABLE V: Results of both our proposed and the least squares fit approaches side by side on real samples.

TABLE VI: Computational complexity of different algorithms.
2022-05-25T06:23:38.621Z
2022-05-24T00:00:00.000
{ "year": 2022, "sha1": "4d107278fd3ea3141e347aa69e62b3bc1b8833f3", "oa_license": "CCBY", "oa_url": "https://www.pure.ed.ac.uk/ws/files/266539091/Final_Robust_RLD_Pure_16_page.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "bf9cd5a034b31b66952b23df3de799d7618ab317", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
269229485
pes2o/s2orc
v3-fos-license
Understanding depression and suicide rates in the UK in comparison to Pakistan ABSTRACT INTRODUCTION Depression is a widespread and complex mental health condition that affects millions of individuals worldwide, transcending age, gender, and socio-economic boundaries.According to the World Health Organization (WHO), depression is a leading cause of disability globally, affecting an estimated 5.0% of adults worldwide, contributing significantly to the overall burden of disease [1].This corresponds to over 280 million people who suffer from depression worldwide.In addition to affecting adults, depression also significantly affects the elderly; 5.7% of adults over 60 suffer from the illness [2]. As a multi-faceted phenomenon, depression has garnered attention from various health organizations, researchers, and policymakers aiming to unravel its intricate nature and provide effective interventions. WHO, a specialized agency of the United Nations concerned with global health, characterizes depression as a common mental disorder that affects people of all ages, from all walks of life.Their fact sheet on depression emphasizes its widespread prevalence and its impact on individuals, families, and communities.Depression is more than just feeling sad; it is a persistent state of low mood, affecting an individual's ability to perform daily activities and engage with world [1]. The extensive nature of depression in different demographic groups, including children, should not be overlooked.The significance of addressing depression in early stages of life, recognizing the long-term consequences if left unattended. WHO has stressed how urgently mental health care must change on a worldwide scale.Their extensive review highlights several key issues, including the large number of people affected by mental disorders, the significant impact of mental health conditions on disability and mortality, and the global structural threats to mental health like social and economic inequalities, public health emergencies, war, and climate crises.According to the survey, there were 14.0% of adolescents worldwide and around a billion individuals dealing with mental disorders in 2019 [3]. The COVID-19 pandemic has made this worse, with increases in anxiety and depression of over 25.0% in just the first year [3].The research also notes widespread discrimination, stigma, and human rights abuses directed towards those with mental health disorders.It also draws attention to the differences in access to mental health care, particularly in low-income nations, where a negligible percentage of the population in need has access to efficient, reasonably priced, and high-quality care. Burden of Depression & Associated Mortality Globally The burden of depression varies globally.Global burden of disease study 2019 spanning from 1990 to 2019 across 204 countries and territories noted that the incidence of depressive disorders has been decreasing globally [4].However, the incidence rate is still increasing in regions with high sociodemographic indices (SDI), especially among younger generations.The study also found that certain populations require more psychological support, such as individuals born after the 1950s in high SDI regions and males in middle SDI regions. 
Notably, the nations with the greatest incidence of depressive illnesses in 2023 were Greece and Greenland, which can be attributed to past traumas, economic crises, and hard living conditions.Depression affects over 86 million individuals in Southeast Asia, while the number varies significantly between countries.With an estimated 5.0% of the population affected by depression and a sizable lack of access to appropriate therapy, Latin America and the Caribbean likewise face obstacles in treating depression.These results highlight the prevalence of depression around the world and the demand for all-encompassing mental health interventions. Depression in the United Kingdom & Pakistan With 67 million people living there as of mid-2021, the UK is regarded as a high-income nation [5].Its advanced economy, which is supported by a robust industrial and technical basis and notable contributions from the services sector (banking, insurance, and real estate), is responsible for this status. As of early 2024, Pakistan has a population of over 243 million, making it one of the most populous nations [6].Pakistan, a nation with a sizable population, is regarded as low-income and has several economic difficulties.These issues, which together impede the nation's economic growth and residents' well-being, include a high poverty rate, restricted access to high-quality healthcare and education, and inadequate infrastructure. Depression can affect one in six individuals in the UK with studies showing that women experience depression twice as likely in comparison to men [7].However, less men have been found to receive treatment for depression and often males and females can go undiagnosed.The prevalence of depression in the UK adults is estimated to be 4.5% [8].Mild depression accounts for 70.0% of all cases.Moderate depression accounts for 20.0% and severe depression, 10.0% of all cases [9]. Depression is a prevalent mental health condition in Pakistan, with estimates suggesting that up to 34.0% of the population may suffer from depression at some point in their lives.More than 4.0% of all diseases in Pakistan are mental disorders, with women bearing a disproportionately heavy burden of mental health issues. In Pakistan, there are thought to be 24 million people who require mental health care.Unfortunately, the funding allotted for mental health condition screening and treatment is insufficient to satisfy the growing demand.Pakistan has one of the lowest rates of psychiatrists in WHO Eastern Mediterranean Region and the entire world, with only 0.2 per 100,000 people, according to WHO data [10]. Link Between Suicide & Depression There is a complex and well-researched relationship between depression and suicide.Studies have demonstrated a robust association between suicidal ideation and depression, underscoring the significance of emotional regulatory mechanisms in this connection [11].Understanding these emotional regulating processes, particularly in various groups such as never-suicidal individuals (never entertained the idea of suicide or carried out any suicidal acts), suicidal idolators (those who idolize or overly adore the idea of suicide), and suicide attempters (those who have made conscious attempts to take their own lives but were unsuccessful), was the focus of a study conducted by the University of St Andrews and others. 
One finding of the study was that brooding, a type of ruminative thinking, was a common trait among all three of the groups mentioned above and was strongly associated with a depressed mood at the time.For those who attempted suicide, their mood had stronger connections with brooding, aggression, and vague personal memories than the neversuicidal group.Meanwhile, those with suicidal thoughts had more ties to neuroticism and impulsivity, yet these traits had less impact on their mood [11]. Subsequent research endeavors may aim to delve deeper into these conjectures or examine the diversity among individuals who try or consider suicide. A meta-analysis of longitudinal studies evaluated the potential consequences of depression and hopelessness on suicidal ideation, attempts, and fatalities [12].The purpose of the investigation was to determine how well depression and hopelessness predicted future outcomes related to suicide.Strict inclusion criteria and an extensive literature search were part of the study's methodology, which made sure the analysis was founded on high-caliber, peer-reviewed papers.To ascertain how these methodological problems would affect the impact of depression and hopelessness on suicide, the metaanalysis considered several variables, including sample severity, sample age, and research follow-up length.This work is essential for identifying the specificity of impacts on discrete suicide-relevant outcomes, which will guide research and therapeutic practice. The meta-analysis found that while despair and depression are risk factors for suicide thoughts and actions, their predictive ability was not as strong as anticipated.The overall estimates of prediction did not go above an odds ratio of 2.0 for any outcome, with the most minimal effects observed in the prediction of suicide fatalities [12].These findings highlight how complicated the connection is between depression and suicide.They emphasize the need of considering a variety of risk factors, such as hopelessness and depression, in suicide prevention and treatment initiatives, as well as the necessity of developing an in-depth knowledge of emotional and cognitive processes in various suicidal groups. Suicide Rates in the United Kingdom vs. Pakistan There are notable disparities between suicide rates and preventive tactics in Pakistan and the UK, which are mostly caused by social, cultural, and medical system variables. The latest data in the UK reveals differences in suicide rates between genders and regions.The UK's total suicide rate in 2018 was 11.2 fatalities per 100,000 people, a significant rise from the year before [13].The suicide rate was much greater in men than in women.The rate of suicide deaths among men was 17.2 per 100,000, whereas the rate among women was 5.4 per 100,000.The total suicide rate in England in 2022 was 10.5 per 100,000, with 16.1 suicides per 100,000 males and 5.3 suicides per 100,000 females [14]. 
According to the most recent data available, Pakistan's suicide rate poses a complicated and worrisome public health problem. Although official suicide death figures are not always reported to WHO, its central estimate for Pakistan is 7.5 per 100,000 people [15], below the global average rate of 9.5 per 100,000 persons as of 2015 [16]. Nonetheless, the social shame and legal complications associated with suicide in Pakistan may mean that these numbers are underreported. For example, an amendment decriminalized attempted suicide only in 2022; until then it was a crime in the nation, punishable by jail time and fines [17].

There has been a noticeable rise in suicides in Pakistan's Thar Desert area. During a five-year period ending in 2020, the District of Tharparkar (Figure 1) reported the greatest number of suicide cases, despite having a lower population of 1.65 million people in comparison to other regions of Sindh, where there are more than two million residents [18,19]. According to police statistics, the district had 112 and 113 suicides in 2020 alone, the highest yearly totals ever reported in the area [20]. This is a rate of 7.0 suicides per 100,000 people. The data indicates a noteworthy increase in suicide cases when compared to prior years. Nevertheless, the existing sources do not offer extensive statistical trend analysis over a longer period or precise year-by-year information. Numerous socioeconomic causes, including poverty, unemployment, health problems, and various societal pressures, are contributing to this surge.

MENTAL HEALTH IN PAKISTAN

Whilst in the UK there are services available for those dealing with mental health concerns, and suicide prevention is often approached in a more systematic and integrated manner within the healthcare system, these aspects of care are not so well established in Pakistan.

The economic burden of mental illness in Pakistan is substantial, and the allocation of government funding is a crucial aspect of addressing this issue. As of 2020, the economic burden of mental illness in Pakistan was estimated to be around 616.9 billion Pakistani rupees (1.73 billion GBP). In contrast, only a small fraction of the total health budget, about 2.4 billion PKR (6.8 million GBP), or 0.4% of the total health budget, was allocated to mental health. This amount covers less than 2.0% of the total economic burden of mental illnesses [21]. This situation adds to the difficulties in dealing with and preventing suicide. Public health initiatives are required to increase public awareness, destigmatize suicide, limit access to common suicide methods, enhance monitoring, and improve mental health services. Training for schools, law enforcement, and healthcare professionals is also essential for the early detection, care, and support of those who are at risk.

In Pakistan, there is a notable gender bias in suicide rates, with variations in prevalence and methods between men and women. A study analyzing suicides in Pakistan between 2019 and 2020 reported that about 61.9% of suicides were committed by men and 38.1% by women [22]. This disparity is significant and reflects broader socio-cultural dynamics within the country. Numerous sociocultural variables also have an impact on the gender variations in suicidal behavior in Pakistan.
For instance, research on nonfatal suicidal behavior in Karachi found that women were more likely to be married and typically younger than males when it came to suicide attempts [23].Gender differences also existed in the techniques of suicide, whereas both sexes frequently used benzodiazepine self-poisoning, women were more likely to utilize organophosphate pesticides [23]. Given the enormous obstacles to successful suicide prevention posed by cultural attitudes and budget constraints, Pakistan must prioritize the improvement of mental health services and raising public awareness. Mental Health Gap: WHO's Perspective Regarding Current Situation in the United Kingdom & Pakistan WHO's mental health gap action program aims to scale up care for mental, neurological, and substance (MNS) use disorders, particularly in low-and middle-income countries [24].Its main objectives include closing the treatment gap, increasing capacity, and integrating mental health services into primary healthcare settings.The effort provides evidencebased recommendations, resources, and training for healthcare practitioners to detect and manage mental health disorders effectively.The objective is to overcome the notable gaps in mental health treatment by expanding access to highquality mental health services on a worldwide scale. There is a glaring difference in mental health care between Pakistan and the UK.Access to mental health services is typically greater in the UK, where a strong healthcare system offers a variety of treatments, such as psychological therapy and drugs.On the other hand, Pakistan confronts serious difficulties in providing mental health treatment because of its weak healthcare infrastructure, lack of finances, and stigma associated with mental illness in the society [25].This discrepancy supports WHO's concern that more than 75.0% of MNS use condition sufferers in low-and middle-income nations do not have access to essential medical care. MANAGEMENT OF DEPRESSION GLOBALLY: DISPARITIES IN SERVICE AVAILABILITY & STIGMA In Europe, pharmacotherapy (such as antidepressants), psychotherapy (such as cognitive behavioral treatment and interpersonal therapy), lifestyle adjustments, and support groups are just a few of the choices available for managing depression [1].While access to mental health treatments differs from nation to nation, comprehensive care is the goal of many European healthcare systems.The European approach to treating depression also emphasizes the need for community-based assistance, specialized psychiatric services, and the integration of mental health services into primary care. Depression prevalence and management vary widely, influenced by factors such as healthcare systems, cultural attitudes towards mental health, and socioeconomic conditions [25,26].European countries generally have more resources and established healthcare systems for mental health care compared to many regions such as those in the Middle East, potentially leading to better diagnosis and treatment rates.However, stigma and access to care can still be significant issues in various parts of Europe, affecting the overall management and understanding of depression. 
In comparison, depression and anxiety disorders are highly prevalent in the Middle East, with studies indicating significant portions of populations in countries like Lebanon, Iraq, and Saudi Arabia suffering from these conditions [26].Despite the high rates of mental illness, mental health care receives limited attention and funding from governments in the region, leading to challenges in diagnosis and treatment. Depression management in Asia is influenced by a variety of factors including cultural perceptions, availability of healthcare services, and societal attitudes towards mental health.In many Asian countries, there is a strong cultural stigma associated with mental health issues, which can lead to underreporting and a reluctance to seek help [4].Traditional beliefs and practices such as acupuncture often play a significant role in how mental health is understood and treated. The stigma associated with mental health is a worldwide issue, defined by institutional, social, and self-imposed attitudes towards mental disease.This causes prejudice, social exclusion, and a reluctance on the part of individuals impacted to ask for assistance.The problem is made worse by structural obstacles in healthcare and other sectors, and unfavorable stereotypes propagated by the media frequently contribute to the general misinformation and stigma associated with mental health disorders. UNITED KINGDOM & PAKISTAN: A COMPARATIVE CASE STUDY A cross-sector approach is the main emphasis of the UK government's suicide prevention strategy for England (2023-2028), which aims to lower the suicide rate and aid individuals who are impacted by suicide and self-harm [27].The NHS long term plan, which has been crucial in helping to create regional suicide prevention strategies and bereavement services, is funding this initiative with £57 million [28].Enhancing data and evidence, addressing common risk factors, offering customized support to high-risk groups, encouraging online safety and responsible media content, offering efficient crisis support, limiting access to suicide means and methods, and guaranteeing efficient bereavement support are some of the strategy's key components.The plan lists more than 100 activities that should be done and highlights that everyone has a duty to prevent suicide. There is still work to be done in the UK to lessen the stigma associated with mental illness and suicide.Even if there has been an improvement in public knowledge and acceptance, people still encounter social and personal obstacles when talking about mental health concerns or asking for assistance.Suicide is frequently stigmatized, which might discourage people from seeking help and add to a general lack of knowledge and understanding among the public.The government of the UK, together with many organizations, is persistently striving to de-stigmatize mental health and suicide, foster candid conversations, and motivate people to pursue assistance. Similarly, in Pakistan, a significant challenge is the stigma associated with mental health problems, which keeps many people from getting treatment because they fear social rejection.In addition, the public does not comprehend or have sufficient knowledge of mental health concerns. 
The lack of specialist mental health facilities and practitioners is another major issue facing the nation. Pakistan has a population of over 216 million, but only around 450 psychiatrists [10]. This means that access to mental health treatment is especially challenging in rural regions, where there is only one psychiatrist for every million people. Most of the mental health budget is spent on hospital psychiatric units, which are primarily located in urban areas and are often overburdened. Additionally, the public health sector in Pakistan has yet to fully recognize psychology as a profession, contributing to a limited number of psychology and psychiatry professionals in the country.

A limited budget for mental health care is another problem that exacerbates the treatment gap, as mentioned previously. Healthcare professionals require more mental health training, particularly those in the child protection and psychosocial support sectors, where there is frequently a lack of formal mental health education.

To effectively assist people with mental health disorders, there is an urgent need for more financing and awareness of mental health services on a global scale. Increased funding for mental health services is essential for increasing patient access to high-quality care, lowering stigma, and making sure that patients get the all-encompassing assistance they need. Increasing awareness is crucial because it may improve knowledge of mental health concerns, motivate individuals to get treatment, and create a more accepting community where physical and mental health are valued equally. When combined, these initiatives have the potential to greatly enhance the lives of people affected by mental health issues and promote global health in communities.

Figure 1. Map of Pakistan highlighting the Sindh Region, where Tharparkar is situated [18].
2024-04-19T15:11:02.885Z
2024-04-17T00:00:00.000
{ "year": 2024, "sha1": "e633d6df15b21dab2e489a085ceaff2d711ab878", "oa_license": "CCBY", "oa_url": "https://www.ejeph.com/download/understanding-depression-and-suicide-rates-in-the-uk-in-comparison-to-pakistan-14470.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a6ce0a7954e3f2d638d68dc46386f4462e626b90", "s2fieldsofstudy": [ "Psychology", "Medicine", "Sociology" ], "extfieldsofstudy": [] }
235755335
pes2o/s2orc
v3-fos-license
Gamma-Ray Emission Produced by $r$-process Elements from Neutron Star Mergers The observation of a radioactively powered kilonova AT~2017gfo associated with the gravitational wave-event GW170817 from binary neutron star merger proves that these events are ideal sites for the production of heavy $r$-process elements. The gamma-ray photons produced by the radioactive decay of heavy elements are unique probes for the detailed nuclide compositions. Basing on the detailed $r$-process nucleosynthesis calculations and considering radiative transport calculations for the gamma-rays in different shells, we study the gamma-ray emission in a merger ejecta on a timescale of a few days. It is found that the total gamma-ray energy generation rate evolution is roughly depicted as $\dot{E}\propto t^{-1.3}$. For the dynamical ejecta with a low electron fraction ($Y_{\rm e}\lesssim0.20$), the dominant contributors of gamma-ray energy are the nuclides around the second $r$-process peak ($A\sim130$), and the decay chain of $^{132}$Te ($t_{1/2}=3.21$~days) $\rightarrow$ $^{132}$I ($t_{1/2}=0.10$~days) $\rightarrow$ $^{132}$Xe produces gamma-ray lines at $228$ keV, $668$ keV, and $773$ keV. For the case of a wind ejecta with $Y_{\rm e}\gtrsim0.30$, the dominant contributors of gamma-ray energy are the nuclides around the first $r$-process peak ($A\sim80$), and the decay chain of $^{72}$Zn ($t_{1/2}=1.93$~days) $\rightarrow$ $^{72}$Ga ($t_{1/2}=0.59$~days) $\rightarrow$ $^{72}$Ge produces gamma-ray lines at $145$ keV, $834$ keV, $2202$ keV, and $2508$ keV. The peak fluxes of these lines are $10^{-9}\sim 10^{-7}$~ph~cm$^{-2}$ s$^{-1}$, which are marginally detectable with the next-generation MeV gamma-ray detector \emph{ETCC} if the source is at a distance of $40$~Mpc. Introduction The rapid neutron capture process (r-process) is believed to be responsible for the production of about half of the elements heavier than iron in our universe (Burbidge et al. 1957;Cameron 1957; see Cowan et al. 2021 for a recent review). Binary neutron star (or neutron star and black hole) mergers have long been considered as promising sites of the r-process nucleosynthesis (Lattimer & Schramm 1974, 1976Symbalisty & Schramm 1982). Hydrodynamic simulations of binary neutron star mergers reveal that a small amount of mass (∼ 10 −4 − 10 −2 M ) with a low electron fraction (Y e ∼ 0.1 − 0.4) is ejected with subrelativistic velocities (∼ 0.1 − 0.3c) (see Shibata & Hotokezaka 2019 for a recent review). Merger ejecta would be an ideal site for the r-process nucleosynthesis. Radioactive decay of r-process elements freshly synthesized in the merger ejecta is expected to produce an ultraviolet/optical/near-infrared transient (Li & Paczyński 1998;Metzger et al. 2010;Korobkin et al. 2012; Kasen et al. 2013;Tanaka & Hotokezaka 2013;Metzger & Fernández 2014;Barnes et al. 2016), which is called a "kilonova" (see Metzger 2019 for a recent review). Our understanding of the r-process advanced dramatically after the discovery of the first neutron star merger event GW170817 (Abbott et al. 2017a). Approximately eleven hours after the merger, the electromagnetic transient, named AT 2017gfo, was observed in the ultraviolet, optical and near infrared wavelengths in the galaxy NGC 4993 (Abbott et al. 2017b;Arcavi et al. 2017;Chornock et al. 2017;Coulter et al. 2017;Cowperthwaite et al. 2017;Drout et al. 2017;Evans et al. 2017;Kasliwal et al. 2017;Lipunov et al. 2017;McCully et al. 2017;Nicholl et al. 2017;Pian et al. 2017;Smartt et al. 
2017;Soares-Santos et al. 2017;Tanvir et al. 2017;Valenti et al. 2017). The observed features are broadly consistent with a kilonova model Tanaka et al. 2017), indicating that r-process elements have been synthesized in this event. The merger event provides the first strong evidence for an astrophysical site of r-process elements production. Observations of the kilonova transient may be used to estimate the masses and detailed nuclide compositions of the merger ejecta. However, the ejecta mass estimate involves many systematic uncertainties, mostly due to the uncertain opacity, and to lesser extent due to the heating rate, thermalization efficiency and ejecta properties Tanaka & Hotokezaka 2013; Barnes et al. 2016;Kawaguchi et al. 2018;Wollaeger et al. 2018). The opacity of the merger ejecta is very sensitive to the abundance pattern of heavy elements. Thus, even with the pristine data of GW170817, it is difficult to accurately estimate the ejecta mass. In addition, all of these studies assumed that the radioactive decay powers the kilonova transient, but the later red emissions may also arise from delayed energy injection from a long-lived remnant neutron star (Yu et al. 2018;Ren et al. 2019). With all these uncertainties, it is difficult to accurately estimate the ejecta mass based solely on the kilonova lightcurve. Obtaining detailed nuclide compositions of the merger ejecta is even more challenging. The gamma-ray photons produced by the radioactive decay of r-process elements provide a unique opportunity to probe the detailed compositions of the merger ejecta. According to theoretical estimates, the ejecta will become optically thin after a day to a few days caused by the subrelativistic expansion (Pian et al. 2017;Drout et al. 2017;Troja et al. 2017;Kilpatrick et al. 2017;Shappee et al. 2017;Waxman et al. 2018). Then, the gamma-ray photons would escape from the ejecta material directly. Detection and observation of these radioactive gamma-ray emission would be the best approach for directly probing the detailed yields of heavy elements. In nearby supernovae, the gamma-ray emission produced by radioactive nuclides has been detected, e.g., the gamma-ray lines of 56 Co from the Type II SN 1987A (Matz et al. 1988;Teegarden et al. 1989), and those of 56 Ni and 56 Co from the Type Ia SN 2014J (Churazov et al. 2014;Diehl et al. 2014). The two gamma-ray emission lines of 56 Co at energies 847 and 1238 keV from the SN 2014J in the nearby galaxy M82 were detected by INTEGRAL. From the observed luminosity of the gamma-ray lines, it is successfully derived that about 0.6M radioactive 56 Ni were synthesized during the explosion (Churazov et al. 2014). The detection of radioactive gamma-ray emission can provide a conclusive evidence for identification specific heavy elements, in principle. Hence, it is worthwhile to study the spectrum and luminosity of the gamma-ray emission in a neutron star merger. Gamma-ray emission from neutron star merger through the decays of r-process elements is a timely topic which is important for both kilonova and r-process study. There have been several groups working on this topic. Hotokezaka et al. (2016) studied the gamma-ray emission from neutron star merger with a dynamic r-process network and found that it could be directly observed from an nearby event (≤ 3 − 10 Mpc) with future gamma-ray detectors. Li (2019) studied the features of the gamma-ray emission from a neutron star merger in detail. 
Instead of using r-process network, he used a Monte Carlo method to generate the initial abundance of unstable nuclides and compared the gamma-ray spectra produced by neutron-rich nuclides and proton-rich nuclides. studied the gamma-ray line signals from long-lived nuclei to search for remnants of past neutron star mergers in our Galaxy. Korobkin et al. (2020) studied the gamma-ray emission both in the kilonova phase and in the remnant epoch, with a 3D radiative transport code and the r-process network. Wang et al. (2020) included the contribution from fission fragments to calculate the high energy gamma-ray emission from neutron star mergers. Ruiz-Lapuente & Korobkin (2020) studied the contribution of the kilonova gamma-rays to the diffusive gamma-ray background emission. In the present work, based on the detailed r-process nucleosynthesis calculations and a multi-shell model for radiative transport, we will calculate the shape and feature of the gamma-ray spectra in the merger ejecta, identifying the features in the emission spectrum associated with r-process elements. The paper is organized as follows. In Section 2, we describe the procedure to calculate gamma-ray emission. In Section 3, we present the results of radioactive gamma-ray emission. Section 4 contains the summary and discussion. Procedure to Calculate Gamma-Ray Emission To calculate the gamma-ray emission from r-process nucleosynthesis, one should know the species of nuclide and the corresponding abundances inside the merger ejecta. In this work, the r-process nucleosynthesis in the merger ejecta is performed based on the code of SkyNet (Lippuner & Roberts 2017), which evolves the abundances of nuclides under the influence of nuclear reactions, involving 7843 nuclides ranging from free neutrons and protons to 337 Cn(Z = 112) and including more than 1.4 × 10 5 nuclear reactions. The nuclear masses and partition functions used in SkyNet are taken from REACLIB (Cyburt et al. 2010). The initial conditions of ejecta are set as followed. The dynamical ejecta is calculated with entropy s = 10 k B /baryon, expansion timescale τ dyn = 10 ms, and electron fraction Y e ∼ 0.05 − 0.20. For the wind ejecta, we choose entropy s = 20 k B /baryon, expansion timescale τ dyn = 30 ms, and electron fraction Y e ∼ 0.25 − 0.40. The adopted initial conditions of the ejecta are consistent with those found in the numerical simulations carried out by Nedora et al. (2021). The total gamma-ray energy generation is given by summing the energy generation of all decay mode for all nuclides. We denote a nuclide species by an index i, and a decay mode by an index j. Then, the gamma-ray energy generation is related to the time evolution of the nuclide abundance. Following Lippuner & Roberts (2017), the abundance of nuclides can be defined as where N i and N B are the total numbers of particles of ith nuclide species and baryons, respectively. Then the gamma-ray energy generation rate can be written aṡ Here, τ ij is the mean lifetime of a nuclide and estimated with τ ij = t 1/2,ij / ln 2, where t 1/2,ij is the half-life of the nuclide and obtained by t 1/2,ij = t 1/2,i /B ij with B ij being the branching ratio of a nuclide in a given decay mode. The ε ij is the total energy of the gamma-rays generated in the jth decay mode of the ith nuclide and can be written as where ijk is the kth photon energy of the gamma-ray generated in the jth decay mode of the ith nuclide and h ijk is the corresponding intensity (probability of emitting gamma-ray photon per decay). 
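The bookkeeping described above can be illustrated with a small numerical sketch. The snippet below assumes a single nuclide with one decay mode and uses placeholder values for the abundance, half-life, line energies, and emission probabilities (in practice these would come from the nucleosynthesis output and the NuDat2 tables); the overall N_B normalization and all numbers are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# One radioactive nuclide with its decay modes. t_half_s is the nuclide half-life,
# "branching" the branching ratio of the mode, and the photon energies/intensities
# are the gamma-ray lines emitted per decay (all values below are placeholders).
nuclides = {
    "132I": {
        "Y": 1.0e-4,           # abundance per baryon (placeholder)
        "t_half_s": 8.3e3,     # half-life in seconds (placeholder, ~2.3 h)
        "modes": [
            {"branching": 1.0,
             "photon_energies_keV": [667.7, 772.6, 954.6],  # line energies (placeholder intensities)
             "photon_intensities": [0.99, 0.76, 0.18]},
        ],
    },
}

def gamma_energy_generation_rate(nuclides, n_baryons=1.0):
    """Total gamma-ray energy generation rate summed over nuclides and decay modes.

    Mirrors the relations in the text: tau_ij = t_half_ij / ln 2 with
    t_half_ij = t_half_i / B_ij, eps_ij = sum_k eps_ijk * h_ijk, and
    E_dot = sum_ij n_baryons * Y_i * eps_ij / tau_ij (keV per second here).
    """
    e_dot = 0.0
    for props in nuclides.values():
        for mode in props["modes"]:
            t_half_mode = props["t_half_s"] / mode["branching"]
            tau = t_half_mode / np.log(2.0)
            eps_mode = np.dot(mode["photon_energies_keV"], mode["photon_intensities"])
            e_dot += n_baryons * props["Y"] * eps_mode / tau
    return e_dot

print(gamma_energy_generation_rate(nuclides))
```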
In our calculations, the gamma-ray radiation data for the unstable nuclides in each decay mode (including α decay and β decay) are taken from the NuDat2 database 1 . To calculate the spectrum of the gamma-ray emission, we divide the photon energy range of [10 0 , 10 4 ] keV into 400 energy bins in the logarithmic space. The specific photon energy rate in a bin of photon energy, e.g., [ 1 , 2 ], is defined by Then, one can have the emission coefficient j = L( )/(4πV ), which is used in our calculation of the observed photon flux, where V is the volume of the merger ejecta. To obtain the emitted gamma-rays from the ejecta, we use the radiative transfer equation to get the intensity I , i.e., dI dl where the absorption coefficient α depends on the mass density ρ ej and the opacity κ( ) as α = ρ ej κ( ), and l is the photon path length. The optical depth τ (l) for the photons is τ (l) = ρ ej κ( )dl. Thus, the formal solution of Equation 5 for photons traveling from l 0 to l m is written as Similar to the work of Metzger (2019), a spherical expanding ejecta is assumed in this work. The merger ejecta is divided into N shells with different expansion velocity v n (1 ≤ n ≤ N ), where v 1 = v min , v N = v max , and N = 100. The mass of each shell is determined by the density distribution, which is taken as a power law (Nagakura et al. 2014) where M ej is the total mass of the ejecta, and R max = v max t and R min = v min t are the outermost and innermost radius of the ejecta, respectively. In our model, the merger ejecta with M ej = 0.01M , v min = 0.01c, v max = 0.4c, and δ = 1.5 (e.g., Yu et al. 2018) is adopted. A sketch of the ejecta model is illustrated in Figure 1. For different viewing angle, the photons will travel in different shells toward the observer. Similar to Wang et al. (2020), the angle of the line of sight to the nth shell with respect to the line between ejecta centre and detector is θ n ≈ R n /D, where D is the distance between the detector and the source. Since the pathway of emission from different parts of the nth shell to the observer is different, our calculation of their emission in different cases are described as below. • In the case of θ n−1 < θ ≤ θ n , the length of the emitting region in the nth shell along the line of sight at angle θ is l(R n , θ) = 2 (R n ) 2 − (Dθ) 2 , and its emission intensity is given by where τ in (R n , θ) is the optical depth for photons within the nth shell itself, Considering the absorption of the outer shells, the observed intensity then is where τ out (R n , θ) is the optical depth of the outer shells, • For case of 0 < θ ≤ θ n−1 , the shell along the light of sight is divided into two separated segments: Part 1 and Part 2. The length of each segment is l( The emission of Part 1 passes though the outer shell only, being the same as that described in the case of θ n−1 < θ ≤ θ n . The emission of Part 2 passes through the iner shells and Part 1. 
The observed emission intensity from Part 1 and Part 2 is given by where τ is the total optical depth for photons from Part 2 to the observer, where m is the innermost shell that the photon pathway at angle θ passes through, which is given by m The observed flux contributed by the nth shell can be obtained by Thus, the total observed photon flux of the merger ejecta can be obtained by summarizing the contributions of all shells: Considering the effect of Doppler shift, the photon energy in the observer frame obs is related to that in the rest frame by where Γ is the Lorentz factor, Γ = (1 − β 2 ) −1/2 , β = v n /c, and α is the angle between the radius vector and the line of sight. In order to calculate the opacity of the merger ejecta, we take into account four processes of gamma-ray photons in the matter: photoelectric absorption, Compton scattering, pair production, and Rayleigh scattering. The total opacity of the ejecta is associated with the species of nuclides inside the merger ejecta and their abundance. In our model, the total opacity is estimated based on the nuclide composition, i.e., where κ i ( ) is the opacity of the ith nuclide and A i is the corresponding atomic mass number. The opacity values of the element from hydrogen (Z = 1) to fermium (Z = 100) are adopted from the XCOM database published by the National Institute of Standards and Technology (NIST) website 2 . The Abundance Pattern and Gamma-Ray Energy In Figure 2, we show the final abundance patterns in the situation with different initial electron fractions Y e . For comparison, we also plot the observed solar r-process abundances, which is taken from Arnould et al. (2007). For the dynamical ejecta with Y e 0.20, it is in good agreement of the second, third, and rare-Earth peak positions with the solar rprocess abundances. The abundance patterns are very similar for the situations with low Y e because these cases are neutron-rich enough to produce nuclides with A 250. As the ejecta becomes less neutron-rich, the r-process is not fully proceeded because there are not enough neutrons to reach the third r-process peak. This can be found in the wind ejecta with Y e 0.25, where the r-process fails to reach the third peak, being instead by producing nuclides around the first r-process peak and some iron peak elements. The gamma-ray energy generation rate with different initial electron fractions Y e are shown in Figure 3, where the dotted line indicates the power-law gamma-ray energy generation rate, i.e.,Ė γ ∝ t −1.3 . One can observe that the gamma-ray energy generation rate of merger ejecta can be roughly described with a power-law function. However, it is changed in the situations with Y e 0.35. This is due to that the final composition of the ejecta with Y e 0.35 is dominated by one or a few individual nuclides, which govern the radioactive gamma-ray production. To identify the nuclides which make dominant contribution to the gamma-ray energy, we calculate the generated gamma-ray energy of each nuclide during its radioactive decay. In Figure 4, we show the dominant nuclides to the gamma-ray energy generation in dynamical ejecta. The dominant contributions of gamma-ray energy generation come from the nuclides around the second r-process peak. In particular, it is clear that the most important nuclide for the generation of gamma-rays between 1 day and 10 days is 132 I. The gamma-ray energy generation rate of 132 I is higher than the other nuclides by a factor of at least 3 − 10 in the dynamical ejecta. 
This is owing to that 132 I is largely produced from the decay of doubly magic 132 Sn (50 protons and 82 neutrons) in the r-process. In the decay chain of 132 Sn (39.7 s) → 132 Sb (2.8 min) → 132 Te (3.2 d) → 132 I (2.3 h) → 132 Xe, the corresponding energies of the released gamma-rays are 1.28 MeV, 2.49 MeV, 0.21 MeV, and 2.26 MeV, respectively. Then we get a large gamma-ray energy from radioactive 132 I between 1 day and 10 days after merger. We also show the dominant nuclides to the gamma-ray energy generation of wind ejecta in Figure 5. At the wind ejecta with Y e 0.25, the dominant nuclides contributing to the gamma-ray energy generation rate between 1 day and 10 days mainly come from 128 Sb, 127 Sb, 77 Ge, and 72 Ga. Table 1 lists the dominant nuclides that contribute to the gamma-ray energy generation rate for different initial electron fractions. The Opacity of r-process Elements To calculate the spectrum of the gamma-ray emission from the merger ejecta, we need to consider the effect of absorption and scattering by ejecta material. In Figure 6, we show the total gamma-ray opacity caused by the four processes, including photoelectric absorption, Compton scattering, pair production, and Rayleigh scattering. It is found that the opacity increases quickly with decreasing photon energy. For ≤ 200 keV, the interaction of gammaray photons with matter is dominated by the photoelectric absorption, which is larger than that of the Rayleigh scattering and Compton scattering by orders of magnitude. For 200 keV ≤ ≤ 5 MeV, the opacity is dominated by Compton scattering, while the pair production in the nuclear field become important at energies 5 MeV. As shown in Figure 6, the total opacity in a few hundred keV range is sensitive to the nuclide compositions. The opacity of heavy material is larger by a factor of 1.5 than that of lighter elements at low energies 500 keV because photoelectric absorption is enhanced by high-Z atoms. Therefore, the opacity of the dynamical ejecta can be larger than that of the wind ejecta. The Spectrum of Gamma-Ray Emission In Figure 7, we show the observed gamma-ray spectra with different initial electron fractions Y e . It is found that the observed photon flux in the high-energy range decreases with time as neutron-rich isotopes gradually decay to stability. The low-energy part of the observed photon flux increases as time goes on, since the photons of energy smaller than a few hundred keV suffer the photoelectric absorption by the atoms in the ejecta during the initial optically thick stage. The observed photon flux in dynamical ejecta have very similar shapes, since the nuclide compositions in dynamical ejecta are similar. As can be found in Figure 7, the observed gamma-ray spectra have several distinct peaks both in the dynamical ejecta and the wind ejecta. To identify the radioactive nuclides that make dominant contribution to the spectral peak of gamma-ray spectrum, we have calculated the contribution of each nuclide to the observed photon flux. The nuclides we find to be dominant source of gamma-ray spectrum in dynamical ejecta are consistent. For the spectral peak around 700 keV, there are several bright gamma-ray lines come from 132 I with energies of 522.65 keV, 630.19 keV, 667.71 keV, 772.60 keV, and 954.55 keV, which is about 35% of the total observed photon flux. The dominant contribution to the spectral peak around 250 keV comes from 132 Te with energy of 228.16 keV, which is about 15% of the total observed photon flux. 
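The build-up and decay of 132I along the chain named above follows the standard Bateman solution for a two-step chain (parent → daughter → stable). The sketch below assumes no daughter nuclei at t = 0 and an arbitrary initial number of parent nuclei; it is a minimal illustration, not the network calculation used in the paper.

```python
import numpy as np

LN2 = np.log(2.0)

def two_step_chain(n_parent0, t_half_parent_d, t_half_daughter_d, t_days):
    """Bateman solution for parent -> daughter -> stable, e.g. the
    132Te (3.21 d) -> 132I (0.10 d) -> 132Xe chain discussed in the text.
    Returns parent and daughter numbers and the daughter decay rate
    (decays per day), assuming zero daughter nuclei at t = 0."""
    lam1 = LN2 / t_half_parent_d
    lam2 = LN2 / t_half_daughter_d
    t = np.asarray(t_days, dtype=float)
    n1 = n_parent0 * np.exp(-lam1 * t)
    n2 = n_parent0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
    return n1, n2, lam2 * n2

# Daughter (132I) activity between 1 and 10 days after merger (arbitrary initial number).
t = np.linspace(1.0, 10.0, 10)
_, _, activity = two_step_chain(1.0e50, 3.21, 0.10, t)
print(activity)
```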
The low-energy spectral peak around 90 keV comes from 133 Xe with energy of 81.00 keV. The spectral peak near 50 keV is generated by the radioactive 132 Te. Note that, when the ejecta is optically thick, the Doppler broadening of emission lines is asymmetric, as only the photons distributing in the near side of the sphere can be seen. For the observed photon flux in wind ejecta, we see that the gamma-ray spectrum is very sensitive to the initial electron fraction Y e . The dominant contributions to the spectral peak mainly come from 128 Sb, 127 Sb, 77 Ge, 73 Ga, 72 Ga, 72 Zn, and 67 Cu. At the wind ejecta with Y e ∼ 0.25, there are several bright gamma-ray lines come from 128 Sb and 127 Sb. For higher initial electron fractions (Y e 0.30), the spectral peaks of gamma-ray spectrum come from the nuclides around the first r-process peak. The radioactive 72 Ga is responsible for the spectral peaks around 2400 keV and 850 keV, which produce several bright gamma-ray lines with eneriges of 2507.72 keV, 2201.59 keV, 894.33 keV, and 834.13 keV. The spectral peaks around 250 keV and 400 keV mainly come from 77 Ge with energies of 211.03 keV, 215.51 keV, 264.45 keV, and 416.35 keV. The spectral peak near 150 keV is generated by the radioactive 72 Zn. The low-energy spectral peak around 100 keV come from 67 Cu with energy of 93.31 keV. The dominant nuclides responsible for the spectral peak are listed in Table 2. In summary, the r-process network calculations of the merger ejecta suggest that 132 Te, 132 I, 131 I, 133 Xe, 133 I, 128 Sb, 127 Sb, 77 Ge, 73 Ga, 72 Ga, 72 Zn and 67 Cu are the dominant nuclides contributing to the gamma-ray spectra on a timescale of a few days. For the dynamical ejecta, the decay chain of 132 Te (t 1/2 = 3.21 days) → 132 I (t 1/2 = 0.10 days) → 132 Xe produces several bright gamma-ray lines with energies of 228.16 keV, 667.71 keV, and 772.60 keV. In the case of the lanthanide free wind ejecta, the decay chain of 72 Zn (t 1/2 = 1.93 days) → 72 Ga (t 1/2 = 0.59 days) → 72 Ge also produces several bright gamma-ray lines with energies of 144.70 keV, 834.13 keV, 2201.59 keV, and 2507.72 keV. These decay chains would be the promising one to be detected by future observations. Summary and Discussion In this paper, we studied the energy and spectrum of gamma-ray emission produced by the radioactive decay of r-process elements freshly synthesized in a neutron star merger ejecta. Basing on the detailed r-process nucleosynthesis calculations and a multi-shell model for radiative transport, we calculated the radioactive gamma-ray emission and identified the features in the emission spectrum associated with r-process elements. For the dynamical ejecta in the situation with low initial electron fractions (Y e 0.20), the dominant contributors of gamma-ray energy are the nuclides around the second r-process peak (A ∼ 130) ( Figure. 4). The decay chain of 132 Te (t 1/2 = 3.21 days) → 132 I (t 1/2 = 0.10 days) → 132 Xe produces several bright gamma-ray lines with energies of 228.16 keV, 667.71 keV, and 772.60 keV (left panel of Figure. 7), which would be the most promising decay chain to be detected by the MeV gamma-ray detectors. Our result is consistent with the previous work by Korobkin et al. (2020) on a similar timescale, which also appear spectral peaks around 250 keV and 700 keV from the spectra of dynamical ejecta. The decay chain of 132 Te is also the dominant source of heating rate obtained by Lippuner & Roberts (2015) and Zhu et al. 
(2021) with two different r-process nucleosynthesis network code. In the case of wind ejecta with high initial electron fractions (Y e 0.30), the dominant contributors of gamma-ray energy are the nuclides around the first r-process peak (A ∼ 80) (Figure. 5). The decay chain of 72 Zn (t 1/2 = 1.93 days) → 72 Ga (t 1/2 = 0.59 days) → 72 Ge produces several bright gamma-ray lines with energies of 144.70 keV, 834.13 keV, 2201.59 keV, and 2507.72 keV(right panel of Figure. 7). This result is similar to those estimated by , which suggests that the decay chain of 72 Zn plays a crucial role in powering the kilonova lightcurve of AT 2017gfo. Our calculations do not include the gamma-ray emission from the fission process being due to lack of radiation data for fissions in the NuDat2 database. This is not significantly affected our results since the fission process only affects the gamma-ray emission above 3.5 MeV at timescales longer than 10 days (Wang et al. 2020). In addition, the secondary photons from Compton scattering are ignored in our model, which may affect the gammaray spectrum in the low-energy band. The observed spectra in the low-energy band may become relatively smooth due to the contribution of secondary photons, similar to the results obtained by Korobkin et al. (2020), but the spectra peaks from the dominant decay chains should still be identified. Only nuclides whose gamma-ray radiation data are available in the NuDat2 database are included in our calculations of gamma-ray emission from the r-process nucleosynthesis. This may lead to underestimate the gamma-ray emission flux. However, this cannot significantly affect on the derived gamma-ray spectrum since the gamma-ray spectrum are dominated by the nuclides with long half-life (t 1/2 10 4 s, see Table 2). These nuclides are close to the valley of stability and their experimental data are available (Hotokezaka et al. 2016;Li 2019). Note that a simple, symmetric split model for fission reactions is adopted in our r-process simulations. This may overestimate the abundance of nuclides around the second r-process peak (Mumpower et al. 2018) and the corresponding gamma-ray emissions. The resulting gamma-ray line fluxes on a timescale of 1−7 days are 10 −9 ∼ 10 −7 ph cm −2 s −1 in the photon energy range of 0.05−3 MeV at a distance of 40 Mpc. The sensitivity of the current MeV gamma-ray missions, such as INTEGRAL (Diehl 2013), is 10 −4 ∼ 10 −5 ph cm −2 s −1 in the MeV band, being much lower than the line fluxes derived from our analysis. The sensitivities of the proposed next-generation missions, such as AMEGO (All-sky Medium Energy Gamma-ray Observatory, Moiseev 2017;Rando 2017), the e-ASTROGAM space mission (Tatischeff et al. 2016), ETCC (Eletron Tracking Compton Camera, Tanimori et al. 2015 and LOX (Lunar Occultation Explorer, Miller et al. 2018), are ∼ 10 −7 − 10 −5 ph cm −2 s −1 . The gamma-ray lines would be marginally detectable with these missions. We thank Bing Zhang, Hou-Jun Lü, Shan-Qin Wang, Ning Wang, Jia Ren, and Rui-Chong Hu for fruitful discussion and to the anonymous referee for helpful comments. This work was supported by the National Natural Science Foundation of China (Grant Nos.11533003, 11851304, 11773007, and U1731239)
2021-07-08T01:16:27.807Z
2021-07-07T00:00:00.000
{ "year": 2021, "sha1": "69424f8062ba827f2f4ac2891fc9870fc4d2e5f4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2107.02982", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "69424f8062ba827f2f4ac2891fc9870fc4d2e5f4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
198932555
pes2o/s2orc
v3-fos-license
Impact of Perfluoro and Alkylphosphonic Self-Assembled Monolayers on Tribological and Antimicrobial Properties of Ti-DLC Coatings The diamond-like carbon (DLC) coatings containing 1.6%, 5.3% and 9.4 at.% of Ti deposited by the radio frequency plasma enhanced chemical vapor deposition (RF PECVD) method on the silicon substrate were modified by n-decylphosphonic acid (DP) and 1H, 1H, 2H and 2H-perfluorodecylphosphonic acid (PFDP). The presence of perfluoro and alkylphosphonic self-assembled monolayers prepared by the liquid phase deposition (LPD) technique was confirmed by Fourier transform infrared spectroscopy (FTIR). It was shown that DP and PFDP monolayers on the surface of titanium incorporated diamond-like carbon (Ti-DLC) coatings had a huge influence on their wettability, friction properties, stability under phosphate- and tris-buffered saline solutions and on antimicrobial activity. It was also found that the dispersive component of surface free energy (SFE) had a significant influence on the value of the friction coefficient and the percentage value of the growth inhibition of bacteria. The dispersive component of SFE caused a reduction in the growth of bacteria and the friction coefficient in mili- and nano-newton load range. Additionally, both self-assembled monolayers prepared on Ti-DLC coatings strongly reduced bacterial activity by up to 95% compared to the control sample. Introduction Carbon-based coatings have enjoyed growing interest and can be successfully used in the electronics and microelectronics, especially in micro/nanoelectromechanical systems (MEMS/NEMS) [1,2]. Due to the high biocompatibility, chemical inertness, corrosion resistance and low friction coefficient of the diamond-like carbon (DLC) coatings, they are also ideal for biomedical and tribological applications [3][4][5][6]. The possibility of using DLC is wide but the high internal stresses occurring in the coating is a major problem that may result in poor adhesion to the substrate and a tendency to delamination. The use of dopants in DLC structure is the most effective method for solving this problem [7][8][9]. It is important to find a dopant that allows reducing the internal stresses while Ti-DLC Coating Deposition Process Titanium-containing DLC coatings with the thickness of 100 ± 2 nm were deposited on silicon substrates Si(100) using the radio frequency plasma enhanced chemical vapor deposition (RF PECVD) method at 400 V of negative self-bias under 20 Pa pressure of mixture of methane (CH 4 ) and titanium (IV) isopropoxide (Ti[OCH(CH 3 ) 2 ] 4 ). Controlling a content of precursors in work atmosphere pure DLC and three Ti-DLC structures with different contents of titanium (1.6 at.%-Ti-DLC1, 5.3 at.%-Ti-DLC2 and 9.4 at.%-Ti-DLC3) were deposited. The details of the deposition process can be found elsewhere [28]. The thickness was controlled by choosing an appropriate duration of the deposition process. Additionally, the thickness of the manufactured Ti-DLC coatings was measured with the use of field emission scanning electron microscope (FE-SEM) NovaNanoSEM 450 (FEI) equipped with a Schottky gun. Data was collected at an accelerating voltage of 5 kV. Formation of Perfluoro and Alkylphosphonic Self-Assembled Monolayers Self-assembled layers of perfluoro and alkylphosphonic were prepared on the Ti-DLC coatings with different contents of Ti by use of the liquid phase deposition (LPD) method. 
The modifications were performed with the use of n-decylphosphonic acid (DP) and 1H, 1H, 2H and 2H-perfluorodecylphosphonic acid (PFDP), which were purchased from ABCR, GmbH & Co. KG, Karlsruhe, Germany. Prior to the modification the samples were subjected to low pressure air plasma (Diener Electronic Plasma-Surface-Technology, Zepto, 40 Hz, 100 W) in order to remove organic contaminants but also to initiate the formation of -OH and Ti-O-Ti groups. These groups play the role of anchoring centers for modifying compounds [29]. DP and PFDP solutions were prepared by dissolving the modifier powder in ethanol at room temperature and under ambient conditions. The concentration of PFDP and DP solutions was 0.5% and 0.05% respectively. These concentrations were selected after the previous optimization performed. In the next step, the Ti-DLC with perfluoro and alkylphosphonic layers were removed from the acid solution and rinsed in ethanol. Finally, the samples after the deposition process were heated at 50 • C for 24 h. Surface Characterization Measurements of water contact angle and the quasi-static contact angle were employed to evaluate the wettability of pure Ti-DLC before and after the modification. The DSA-25 Drop Shape Analysis System (KRÜSS GmbH, Hamburg, Germany) working at 22 ± 2 • C and 45% ± 5% humidity was used for the measurements. The measurement included the placement of three types of liquids (water, diiodomethane and glycerine) on each surface at five different locations and measuring the wetting angle for these liquids. Knowing the values of contact angles, a surface free energy was calculated by the Van Oss-Chaudhury-Good method [30]. The quasi-static contact angle and contact angle hysteresis were measured by the sessile drop. The drops having a volume of 1.5 to 3 µL were dispensed automatically with a microsyringe. The final results were obtained using the software for the automatic measurement of advancing and receding contact angles. The effectiveness of carried modification was investigated using a Nicolet iS50 spectrometer equipped with a GATR accessory from Harrick Scientific Products Inc. The ultra-high sensitive, low noise, linearized MCT (cooled with liquid nitrogen) detector was used. In the case of the present investigations, FTIR measurements were performed in the spectral range of 700-3100 cm −1 . All spectra were recorded by collecting 64 scans at a 4 cm −1 resolution, in dry air atmosphere. The Solver P47 Atomic Force Microscopy(AFM) apparatus was used to study the morphology, roughness and friction coefficient of the coatings in nanoscale. All measurements were carried out in air under ambient conditions (20 ± 2 • C and 30% ± 2% humidity). The topography images were recorded employing the tapping mode. The scanned area was 2 µm × 2 µm at the scan rate 0.5 Hz. The values of the friction coefficient were calculated from the slope of the friction force versus normal force plots. During the measurements, the following parameters were used: Applied loads ranged from 5 to 100 nN, scan rate of 1 Hz and scan size of 1 µm × 1 µm. The obtained average data of measurements from three different places for each coating are shown in the graph. Tribological tests were carried out using a reciprocating ball-on-flat T-23 microtribometer with the following parameters: Velocity of 25 mm/min, traveling distance of 5 mm, range of load from 30 to 80 mN, humidity (30% ± 2%) and temperature (20 ± 2 • C). 
Si 3 N 4 sphere having 5 mm diameter and average roughness of 5.5 ± 0.5 nm was used as a counterpart. The measurements were performed in three different locations of all surfaces and were repeated three times. The stability of pure and modified Ti-DLC coatings was tested by immersing the samples in tris-buffered saline (TBS) and phosphate buffered saline (PBS) [31]. The samples were exposed to the solutions for different time (from 0.25 to 720 h). After exposure to the solutions, the water contact angle on each surface was examined. Determination of the Antimicrobial Activity of Analyzed Samples The antibacterial activity of perfluoro and alkylphosphonic acids were tested in the solution against Staphylococcus aureus ATCC 6538 and Escherichia coli ATCC 25922 using a modified broth microdilution method, according to the recommendation of the Clinical Laboratory Standard Institute (CLSI M07-A8). In the experiments the Mueller-Hinton broth was used, the final optical density was about 5 × 10 5 colony forming units (CFU). The self-assembled compounds were dissolved in sterile deionized water and were tested in concentrations ranging from 0.1 µg/mL to 200 µg/mL. The obtained data were compared to the control of biotic samples without the organic compounds. Next, the antimicrobial activity of perfluoro and alkylphosphonic self-assembled layers on Ti-DLC was examined. The antibacterial activity was tested against Staphylococcus aureus ATCC 6538 and Escherichia coli ATCC 25922 as in previous works [32,33] using the Japanese Industrial Standard JIS Z 2801:2000. Bacteria were cultured on Luria Bertani (LB) medium at 37 • C on a rotary shaker. After the incubation, the test inoculum of S. aureus and E. coli, containing 1 × 10 5 colony forming units (CFU per mL) in 500-fold diluted LB medium was prepared. Next, the bacteria suspension was applied to tested coatings of 1 cm × 1 cm. Diamond-like carbon (DLC) coatings were analyzed as a control sample. After dripping the suspension of selected bacteria on the coatings, each sample was covered with a sterile film. Then, the samples were incubated in the moist chamber in the dark for 24 h at 37 • C. After incubation, the samples were put in the sterile tube containing phosphate buffer and vortexed. After that, coatings and films were removed from the tubes and with the remaining solution, a serial dilution was performed in phosphate buffer. Out of each dilution, 100 µL of bacterial suspension was seeded on agar plates and incubated for 24 h at 37 • C. Next, the viable cells of S. aureus or E. coli were counted. Each type of coatings or solutions was tested in triplicate and analyzed individually in three independent experiments. The antibacterial activity of the tested coatings was calculated as the percentage of bacterial growth inhibition (+/− SD) toward the control sample without perfluoro and alkylphosphonic layers. Results and Discussion The chemical structure and the presence of self-assembled molecules on Ti-DLC were confirmed with the use of FTIR spectroscopy ( Figure 1). FTIR spectra for Ti-DLC appeared to reveal signals mainly in the range for Ti-OH bonds (at about 1455 cm −1 ), Ti-O-Ti bands (at about 700 and 820 cm −1 ) and Ti-O bonds (at about 940 cm −1 ). Observed bond vibrations indicate the formation of titanium oxides network, which is higher with the increasing participation of admixture in the DLC structure. This was confirmed by increasing intensity of these bands with the increasing titanium concentration. 
It also indicates that both titanium and oxygen are incorporated in DLC. Other bands characteristic of these surfaces were Ti-CH3 bonds (at about 1255 and 1305 cm−1). The intensity of absorbance of these bands constantly increased with the amount of incorporated titanium. This fact confirms that titanium was connected to a carbon atom and played a dominant role in the tribological properties of these surfaces. On the spectra recorded after modification, phosphonic group characteristics appeared in the 700-1500 cm−1 spectral range. The peak at about 950 cm−1 was assigned to P-O-M bond vibrations. It was confirmed that the perfluoro and alkylphosphonic acid molecules reacted with the surface of the coating and formed a chemical bond. The P=O stretching vibration appeared in the region 1085-1415 cm−1, while the P-O vibration was noted at 972-1030 and 917-950 cm−1. The fact that the peak at about 1230 cm−1 corresponding to the P=O group was clearly visible in all IR spectra could indicate a strong interaction between the phosphonic group and the surface. In the case of Ti-DLC modified by DP, the symmetric (υs CH2) and asymmetric (υa CH2) C-H stretching modes at 2850 and 2920 cm−1, corresponding to the methylene groups, were visible in the IR spectra. In the same region, the symmetric (υs CH3) and asymmetric (υa CH3) C-H stretching modes were present at 2880 and 2960 cm−1, respectively. DP molecules have a methylene backbone, and the peak observed at 2958 cm−1 came from the terminal group (-CH3). The -CH2 groups occurring in the carbon chain of the DP compound were also found at 2851 and 2920 cm−1. More -CH2 than -CH3 groups are present in its molecular structure, therefore a more intense peak was observed for the methylene bands than for the methyl groups. These peaks were not present for Ti-DLC modified by PFDP, while several peaks indicating the presence of fluoroalkyl groups were observed in the range of 750-1300 cm−1. The bands noted at around 1200 cm−1, 1147 cm−1, 1114 cm−1 and 779 cm−1 corresponded to asymmetric and symmetric stretches of C-F for the -CF2 and -CF3 groups originating from the fluoroalkyl chain. Perfluoro and alkylphosphonic layers present on Ti-DLC coatings with different concentrations of titanium gave them new, unique properties. Table 1 shows the results of the hydrophobicity analysis of Ti-DLC before and after DP and PFDP modification. It was found that titanium made the surface of the Ti-DLC structures more hydrophilic, which was observed through a decreasing water contact angle value with an increasing amount of titanium in Ti-DLC. This is also consistent with the FTIR analysis, which showed that the presence of C-O and Ti-O bonds increases the hydrophilicity of the surface.
Generally, the lowest value of the contact angle was received for Ti-DLC3 (Ti-DLC with 9.4 at.% of Ti). The wettability of the studied surfaces was also measured by means of water contact angles (SCA) and the advancing (θ a ), receding (θ r ) contact angles and hysteresis (∆θ). Hysteresis cannot be eliminated completely, because it depends on many factors such as the adhesion hysteresis, surface roughness and inhomogeneity [34]. In our study, we could clearly see ( Figure 2) that the change in the static and quasi-static contact angle values was dependent on the surface roughness expressed via the root mean square (RMS) surface roughness. Ti-DLC with 1.6 at.% of Ti exhibited generally lower values of contact angle hysteresis than Ti-DLC with a higher concentration of titanium, that was related with their high RMS values (0.33 ± 0.01 nm for Ti-DLC1, 0.31 ± 0.01 nm for Ti-DLC2 and 0.30 ± 0.02 for Ti-DLC3 coatings). Comparing these three types of Ti-DLC it was noticed that the surface of Ti-DLC3 coating was the flattest (Figure 2), which affects its hydrophilic properties. After the surface modification, the water contact angle significantly increased, indicating that the surface hydrophobicity was improved. It is related to the presence of the well-ordered perfluoro and alkylphosphonic layer on the surface. The most hydrophobic properties showed the surface modified by PFDP. The -CF 3 group present in perfluorinated self-assembled acid was more hydrophobic than the -CH 3 group in decylphosphonic acid. It is associated with the appearance of fluorine in the compound structure [35]. However, there were clear differences in the hydrophobicity when the layer was formed of compounds containing the -CH 3 and -CF 3 functional groups. The size of the hydrogen and fluorine atoms played an important role. Fluorine was significantly larger than hydrogen so consequently, the average volumes of the CF 2 and CF 3 groups were estimated as 38 A 3 and 92 A 3 , respectively compared to 27 A 3 and 54 A 3 for the CH 2 and CH 3 groups. It caused a greater stiffness of perfluorinated chains than their hydrocarbon equivalents, which prevented a drop of water to penetrate into the surface. Furthermore, effective overlapping of orbitals caused the C-F bond to be more stable (485 kJ mol −1 ) compared to a standard C-H bond (425 kJ mol −1 ). It meant that the dense electron cloud of the fluorine atoms acted as a cover that protects the perfluorinated chain against the approach of water molecules [36][37][38]. This was confirmed by the obtained values of contact angle and SFE. Comparing the non-polar -CF 3 and -CH 3 groups, better hydrophobic properties were characteristic for CF 3 groups. Describing the hydrophobic properties, it is particularly important to take into consideration the roughness of the studied samples. After the deposition process, the RMS values for all coatings increased. The highest increase of RMS value was observed after the formation of PFDP on Ti-DLC1 coating (1.41 ± 0.02 nm). In the case of modification by non-fluorinated DP the registered value was 0.75 ± 0.02 nm. These facts caused the high hydrophobicity of perfluorinated layers. Comparing the advancing contact angles of pure and modified coatings the significant difference in their values in favor of coatings with self-assembled compounds can be seen. The highest value of the advancing contact angle was obtained for the coating with the lowest content of titanium and with the phosphonic self-assembled layer. 
What is more, a similar trend also occurred in the case of the receding contact angle. After modification, the highest receding contact angle was also obtained for the DLC with 1.6 at.% of Ti and with DP and PFDP layers, reaching values of 121° and 123°, respectively. The high values of the advancing and receding contact angles simultaneously resulted in a low hysteresis, which was 7.7° for Ti-DLC1/DP and 6.3° for Ti-DLC1/PFDP. For Ti-DLC2 and Ti-DLC3 modified by DP and PFDP, the values of the receding contact angle were in the range of 104-117°, causing a higher hysteresis for these coatings compared to those described earlier. A very low contact angle hysteresis together with a high water contact angle testifies to high surface hydrophobicity: when the surface is characterized by low contact angle hysteresis, the water behaves as a spherical droplet with a low roll-off angle. This indicates that the highest values of the advancing contact angle were obtained for well-ordered self-assembled monolayers, into which the water molecules do not penetrate.

For a better understanding of the wetting mechanism of solid surfaces, the surface free energy (SFE) and the corresponding acid-base and dispersive components were investigated using the Van Oss-Good method. Van Oss divided the total surface free energy (γ TOT) of a solid into two components, a dispersive (γ LW) and an acid-base (γ AB) component, presented by the equation:

γ TOT = γ LW + γ AB

The dispersive component γ LW is based on Lifshitz-Van der Waals interactions and the acid-base component γ AB is based on hydrogen bonding interactions. The acid-base component is composed of an electron donor component (γ AB−) and an electron acceptor component (γ AB+). Both the dispersive and acid-base SFE components have an influence on wetting and tribological properties. Table 2 gives the SFE data, with the distinction between acid-base and dispersive components. It can be clearly seen that the SFE and the acid-base component increased with the increasing concentration of titanium in the Ti-DLC coatings. This is related to the formation of titanium oxides on the surface. Titanium atoms present in the structure of Ti-DLC are free to bond with carbon atoms but also with oxygen and water, which are present during the deposition process as well as in the atmosphere. Therefore, as the concentration of titanium in the coating increased, the number of C-O and Ti-O polar bonds also increased, which in consequence increased the hydrophilicity of the Ti-DLC surface. The changes in the hydrophobicity of the modified Ti-DLC were also related to the acid-base and dispersive components of SFE, as seen in Table 2. Generally, after the modification, the values of the acid-base component of SFE changed considerably. In the case of Ti-DLC modified by DP, the surface energy value was determined mainly by the acid-base component, which was significantly reduced. In turn, the SFE value of Ti-DLC after PFDP deposition was influenced by both SFE components, acid-base and dispersive, which decreased by as much as about 50%-70% in relation to pure Ti-DLC [39].
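As a hedged numerical sketch of how these components are obtained from the three-liquid contact angle measurements, the snippet below solves the Van Oss-Chaudhury-Good system. The probe-liquid components are commonly tabulated literature values (not taken from the paper), the acid-base term is evaluated with the usual geometric-mean relation γ AB = 2√(γ+ γ−), and the contact angles in the example are placeholders.

```python
import numpy as np

# Commonly tabulated Van Oss components (mJ/m^2) of the three probe liquids
# used in the study (water, diiodomethane, glycerine); literature values,
# not taken from the paper.
LIQUIDS = {
    "water":         {"total": 72.8, "lw": 21.8, "plus": 25.5, "minus": 25.5},
    "diiodomethane": {"total": 50.8, "lw": 50.8, "plus": 0.0,  "minus": 0.0},
    "glycerine":     {"total": 64.0, "lw": 34.0, "plus": 3.92, "minus": 57.4},
}

def vog_surface_energy(contact_angles_deg):
    """Solve the Van Oss-Chaudhury-Good system
        gamma_L (1 + cos theta) = 2 (sqrt(gLW_S gLW_L) + sqrt(g+_S g-_L) + sqrt(g-_S g+_L))
    for the solid components, given the contact angles of the three probe liquids."""
    A, b = [], []
    for name, theta in contact_angles_deg.items():
        liq = LIQUIDS[name]
        A.append([np.sqrt(liq["lw"]), np.sqrt(liq["minus"]), np.sqrt(liq["plus"])])
        b.append(liq["total"] * (1.0 + np.cos(np.radians(theta))) / 2.0)
    # Unknowns: [sqrt(gLW_S), sqrt(g+_S), sqrt(g-_S)]; negative roots are usually
    # clipped to zero in practice before squaring.
    x = np.linalg.solve(np.array(A), np.array(b))
    g_lw, g_plus, g_minus = x**2
    g_ab = 2.0 * np.sqrt(g_plus * g_minus)
    return {"lw": g_lw, "plus": g_plus, "minus": g_minus,
            "ab": g_ab, "total": g_lw + g_ab}

# Placeholder contact angles, roughly in the range reported for modified Ti-DLC.
print(vog_surface_energy({"water": 115.0, "diiodomethane": 85.0, "glycerine": 105.0}))
```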
As can be seen the γ AB − value decreased for the surface after DP and PFDP deposition in comparison with pure Ti-DLC. It is associated with the presence of monolayer where the hydrophobic part of molecules is oriented outwards. A similar behavior was observed by Yan et al. [40]. In addition, Zhao et al. [39] reported that if the electron donor component is large then the surface is more negatively charged. Therefore in the case of surface modification by DP and PFDP the high value of the donor component generates a negative charge on the surfaces. The contribution of the acceptor component is practically negligible. The terminal group of perfluoro and alkylphosphonic acid had also a significant influence on the tribological behavior of Ti-DLC coatings. Figure 3 shows the coefficient of friction measured using a microtribometer for all studied surfaces. The highest value of the advancing contact angle was obtained for the coating with the lowest content of titanium and with self-assembled monolayers adsorbed on the Ti-DLC, which caused a decrease in the coefficient of friction compared to the unmodified coatings-previously observed in macroscale [28]. In our study, the coefficient of friction obtained in the milinewton load range for pure Ti-DLC was the lowest for Ti-DLC with 1.6 at.% of Ti. Similarly to SFE, the coefficient of friction increased with an increasing amount of incorporated titanium. After modification by self-assembled compounds, the coefficient of friction was significantly reduced. A considerable reduction in the obtained values of the coefficient of friction between the unmodified and modified coatings was connected with hydrophobicity and low SFE of the modified surfaces. The reduction in the coefficient of friction values primarily resulted from the fact that the friction forces were dominated by surface interactions. Moreover, the reduction in the coefficient of friction also resulted from the presence of well-ordered layers on the coatings. Ti-DLC after modification regardless of the type of the modifier used exhibited lower values of coefficient of friction than their pure equivalent. It is connected with the presence of molecules of alkylphosphonic layers on the surface, which acted as a lubricant during the friction process. The PFDP was a better lubricant and exhibited lower values of the friction coefficient than DP, which was associated with the stiffness of the chains. The larger size of the fluorine atom compared to hydrogen caused the backbone structure rotation of fluorinated chains to be much lower than non-fluorinated ones due to steric effects. Therefore, the hydrophobic layer formed by fluorinated chains was more rigid than the layer created by alkylphosphonic acid molecules. In nanoscale, similarly to the microscale, the coefficient of friction after modification by perfluoro and alkylphosphonic layers showed lower values compared to the pure coatings. This was due to the presence of a well-ordered layer on the surface of the coating, which exhibited completely different interactions with the surface of the counterpart than the pure Ti-DLC. As previously noted for pure coatings, the presence of polar bonds was responsible for an increase in capillary forces that had a large impact on the friction force and in consequence on the coefficient of friction. In the case of a hydrophobic surface, the capillary force was low, therefore the coefficient of friction was also decreased. 
Comparing the effect of the modifier used, the lowest values of the coefficient of friction were obtained for the Ti-DLC modified by the more hydrophobic PFDP layer. The values of the coefficient of friction obtained in the micro- and nanoscale for the same samples differed from each other, which was related to the size of the counterpart, different contact stresses, and the load affecting the friction. In the case of friction measurements carried out in the microscale, the applied normal load was higher than in the nanoscale. What is important, in the case of friction measurements carried out in the nanonewton range of forces, we dealt with a point friction contact that was not present in the microscale. In nanoscale studies, the small apparent area of contact minimized the occurrence of an additional factor (plastic deformation) that increases the friction force. Therefore, only the capillary forces and the adhesional component had an influence on the value of the friction force. The above-mentioned factors had a significant influence on the recorded value of the friction force and, consequently, on the differences in the friction coefficient values obtained in the nano- and microscale. Additionally, the effect of the dispersive component of SFE on the friction coefficient value in the milli- and nanonewton load range for Ti-DLC structures was observed, which can also be seen in Figure 3. In the case of pure Ti-DLC, a high value of the dispersive component was observed, which was reflected in the high values of the friction coefficient. After the formation of perfluoro and alkylphosphonic self-assembled monolayers on the Ti-DLC surface, it was found that the frictional properties were effectively improved and the γ LW component had been reduced. What is more, a drop in the dispersive component by as much as 90% resulted in a reduction of the friction coefficient to a value of approximately 0.20. Generally, it was found that the value of the γ LW surface energy component had a significant effect on the friction properties of Ti-DLC.
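As described in the experimental section, the friction coefficient was extracted as the slope of the friction force versus normal force plot, which amounts to a simple linear fit. The sketch below uses synthetic placeholder data in the 5-100 nN load range; it illustrates the fitting step only, not the measured values.

```python
import numpy as np

def friction_coefficient(normal_forces_nN, friction_forces_nN):
    """Least-squares slope of friction force vs. normal load, as used to obtain
    the AFM friction coefficient. Returns the slope (coefficient of friction)
    and the intercept (adhesion-related offset)."""
    F_n = np.asarray(normal_forces_nN, dtype=float)
    F_f = np.asarray(friction_forces_nN, dtype=float)
    slope, intercept = np.polyfit(F_n, F_f, 1)
    return slope, intercept

# Placeholder data in the 5-100 nN load range mentioned in the text.
loads = np.array([5, 20, 40, 60, 80, 100], dtype=float)
friction = 0.15 * loads + 2.0 + np.random.default_rng(0).normal(0, 0.3, loads.size)
mu, offset = friction_coefficient(loads, friction)
print(f"coefficient of friction ~ {mu:.3f}, offset ~ {offset:.2f} nN")
```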
The water contact angle measurements were used in our studies to characterize changes in the perfluoro and alkylphosphonic layer stability after immersion in PBS and TBS solutions (Figure 4). In the case of the pure Ti-DLC1, Ti-DLC2 and Ti-DLC3 coatings, no significant changes in the water contact angle were observed. The water contact angle persisted at a constant level regardless of the time of immersion. In contrast, such changes were observed for the modified samples. In the case of Ti-DLC1 modified by DP and PFDP and immersed in PBS solution, a significant decrease in the contact angle after 360 h was observed, from 123 ± 3° to 97 ± 2° for DP and from 127 ± 2° to 100 ± 2° for PFDP. These changes indicated that the self-assembled layer became less homogeneous and disordered, or that the organic molecules were desorbed. A similar trend was observed for DP and PFDP immersed in TBS solutions. It also occurred in the case of the modified Ti-DLC2 and Ti-DLC3. Phosphonic self-assembled monolayers deposited on Ti-DLC not only showed high stability but also contributed to an increase in the antibacterial properties of the Ti-DLC coatings. In the literature, it was found that mixtures of phosphonopeptides are strong antimicrobial agents [26]. The obtained results showed that the tested solutions exhibited different activities against the tested strains. DP as well as PFDP were more active against the Gram-positive Staphylococcus aureus strain (Figure 5a) than against the Gram-negative Escherichia coli strain. The addition of DP at a very low concentration of 0.1-10 µg/mL limited the growth of this strain almost by half, in comparison with control samples without self-assembled layers. At the highest concentrations (from 60 to 200 µg/mL), DP caused a 70% growth reduction. The perfluoro alkylphosphonic acid exhibited better antimicrobial activity at a higher concentration (40 µg/mL). The Gram-negative E. coli strain showed good tolerance to the tested phosphonic acid compounds (Figure 5b), where the growth inhibition ranged from 10% to 30%. The best antibacterial effect against E. coli was obtained at the highest concentrations of both compounds (from 120 to 200 µg/mL). A similar trend was observed by Abdelkader et al., who investigated the antimicrobial activity of alpha-aminophosphonic acids. The tested compounds inhibited bacterial growth (Gram-positive and Gram-negative), with a better effect for Gram-negative bacteria [41]. The aim of our study was also to determine the antimicrobial effect of the perfluoro and alkylphosphonic layers formed on the surface of Ti-DLC with various contents of Ti. Generally, a smaller amount of bacteria occurred on the DLC coating with the highest content of Ti for both S. aureus and E. coli bacteria.
This was due to the fact that the antimicrobial properties of pure Ti-DLC were affected by the acid-base component of SFE, and more specifically by the donor interaction (γ AB− ). As mentioned earlier, the higher the component value, the more negatively charged the surface. Therefore, the most negatively charged coating (Ti-DLC3) showed the best antimicrobial activity. The presence of bacteria on the surface was also related to adhesion interactions. To describe this phenomenon, the DLVO (Derjaguin-Landau-Verwey-Overbeek) theory could be used. According to this theory, the interaction between the surface and the bacteria is the sum of long-distance interactions: attractive (van der Waals) and repulsive (electrostatic). These forces determine the approach of the bacteria to the surface. The dispersive component of SFE is related to the long-distance interactions. Figure 6 shows its influence on the growth of both S. aureus and E. coli bacteria. For all tested surfaces with self-assembled layers (DP, PFDP), a strong antibacterial activity against the Gram-positive strain S. aureus (Figure 6b) was noticed. In the case of the fluorinated alkylphosphonic (PFDP) layers deposited on Ti-DLC, the growth inhibition was over 95% in all tested variants (compared to the control). For the PFDP layers, the inhibition of E. coli growth reached a value of around 40% (Figure 6a). In summary, the dispersive component of the modified Ti-DLC had an influence on the inhibition of E. coli and S. aureus growth. When the value of this component decreased, the coatings exhibited better antimicrobial activity. This was particularly evident in the case of S. aureus. The growth of the Gram-negative E. coli bacteria on the tested surfaces was higher than that of the Gram-positive S. aureus bacteria, which was associated with the structure of the bacteria and their interaction with the coating.
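The inhibition percentages quoted above are expressed relative to control samples without self-assembled layers. A minimal sketch of the underlying arithmetic is given below; the text does not state whether growth was quantified by optical density or colony counts, so the input values are illustrative placeholders only.

```python
def percent_inhibition(treated, control):
    """Growth inhibition relative to an untreated control, in percent."""
    return 100.0 * (1.0 - treated / control)

# Illustrative numbers only (raw growth values are not tabulated in the text):
# a reading of 0.52 against a control of 1.0 corresponds to ~48% inhibition,
# i.e. growth "limited almost by half", while 0.05 vs. 1.0 gives 95% inhibition.
print(percent_inhibition(0.52, 1.0))   # ~48.0
print(percent_inhibition(0.05, 1.0))   # 95.0
```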
Conclusions
The Ti-DLC coatings with different amounts of incorporated Ti (1.6%, 5.3% and 9.4%) were prepared using the RF PECVD method. The self-assembled monolayers of perfluoro and alkylphosphonic acid were successfully deposited on the Ti-DLC surface using the LPD method. The presence of the self-assembled layers on the surface of the Ti-DLC was confirmed using Fourier transform infrared spectroscopy. The chemical structure of the investigated compounds had a strong influence on the wettability, friction properties, stability in phosphate- and tris-buffered saline solutions, and antimicrobial activity of the examined self-assembled layers. The results of the static water contact angle, the advancing and receding contact angles, and also the values of SFE indicated that, after the deposition of the perfluoro and alkylphosphonic layers, the surface changed its properties from hydrophilic to hydrophobic. The performed measurements suggested that the highest hydrophobic properties were obtained for the PFDP layer deposited on Ti-DLC with 1.6 at.% of Ti. The obtained value of the contact angle was 127.3°. What is more, a strong correlation between the dispersive component of SFE and both the friction coefficient and the antimicrobial activity was shown. The obtained results indicated that an increase in the dispersive component of SFE caused an increase in the coefficient of friction. The tribological measurements after the modification showed that the PFDP layers improved the friction properties and provided effective lubrication. The same trend was visible at the micro- and nanoscale. Stability tests confirmed that the Ti-DLC modified by self-assembled monolayers was stable in saline solutions for up to 30 days. An analysis of the antimicrobial properties of both self-assembled layers deposited on Ti-DLC showed an inhibition of bacterial growth by up to 95% for the Gram-positive strain S. aureus and by about 40% for the Gram-negative strain E. coli. In both cases, the influence of the dispersive component, as well as the SFE, on the bacterial growth on the studied surfaces was shown.
The obtained results indicated that Ti-DLC coatings modified by perfluoro and alkylphosphonic layers could be useful for potential biomedical applications.
2019-07-28T13:03:21.856Z
2019-07-25T00:00:00.000
{ "year": 2019, "sha1": "10abc3ac3925b8e5797df1ec2e78a72ba9661a50", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/12/15/2365/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9873822e96ea125fc514f971e02fa129e0808f6b", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
244896588
pes2o/s2orc
v3-fos-license
Exploring nonequilibrium phases of photo-doped Mott insulators with Generalized Gibbs ensembles Photo-excited strongly correlated systems can exhibit intriguing non-thermal phases, but the theoretical investigation of them poses significant challenges. In this work, we introduce a generalized Gibbs ensemble type description for long-lived photo-doped states in Mott insulators. This framework enables systematic studies of photo-induced phases based on equilibrium methods, as demonstrated here for the one-dimensional extended Hubbard model. We determine the nonequilibrium phase diagram, which features $\eta$-pairing and charge density wave phases in a wide doping range, and reveal physical properties of these phases. We show that the peculiar kinematics of photo-doped carriers, and the interaction between them, play an essential role in the formation of the non-thermal phases, and we clarify the differences between photo-doped Mott insulators, chemically-doped Mott insulators and photo-doped semiconductors. Our results demonstrate a new path for the systematic exploration of nonequilibrium strongly correlated systems and show that photo-doped Mott insulators host different phases than conventional semiconductors. Introduction Nonequilibrium control of quantum materials is an intriguing prospect with potentially important technological applications 1-6 . Experiments with various materials and excitation conditions have reported phenomena not observable in equilibrium including nonthermal ordered phases such as superconducting (SC)-like phases 4,7-9 , charge density waves (CDW) 10-12 and excitonic condensation 13 . Among various nonequilibrium protocols, photo-doping is a basic and important one, in which a radiation pulse creates electron-and holelike charge carriers with a long lifetime on the electronic timescale. Due to the nonthermal distribution of the carriers and the cooperative interplay between them, peculiar nonequilibrium phases can be induced. The theory of photo-doping has been extensively discussed for semiconductors, where a rich phase diagram including electron-hole plasmas and exciton gases 14-19 as well as exciton condensation 20-22 is found. The physical picture is that, after electrons and holes are created, they rapidly relax within the conduction and valence bands, while their recombination occurs on a much longer timescale. Thus, at the single-particle level the numbers of electrons and holes are separately conserved, so that in the intermediate time regime one has a pseudoequilibrium state that can be described by an effective equilibrium theory with separate chemical potentials for the electrons and holes [17][18][19] . A situation of great current interest is the photodoping of Mott insulators. In these systems, exotic equilibrium states such as unconventional SC phases emerge upon chemical doping 23 , while photo-doping creates novel pseudoparticle excitations not easily represented in a single particle picture, e.g. doublons and holons in the single-band case. When the Mott gap is large enough, these excitations are long-lived due to the lack of efficient recombination channels [24][25][26][27][28][29] . Therefore, as in semiconductors, a fast intraband relaxation results in a long-lived quasi-steady state. Previous studies based on short-time simulations indicated the emergence of enhanced CDW 30 or SC correlations 31-35 , as well as novel spin-orbital orders 36 . 
However, unlike in the semiconductor cases, the long-time behavior of photo-doped Mott insulators is not well understood, due to the lack of powerful theoretical frameworks. Steady state formalisms in which explicit heat/particle baths or other dissipative mechanisms are attached to the system have been recently applied 37-39 . These formalisms, however, require attention to the influence of the baths and dissipations, and the use of explicitly nonequilibrium methods. An alternative approach is a pseudoequilibrium description as in conventional semiconductors. Crucial differences from semiconductors are that the approximately conserved entities are pseudoparticles (local many-body states), with which the Hamiltonian is not initially specified, and that the approximate conservation of their number is not manifest in the Hamiltonian but arises from kinematic constraints. Previous works 37,40-43 introduced the idea of using the Schrieffer-Wolff (SW) transformation to reformulate the problem in a way that explicitly references the approximately conserved quantities and isolates the terms that eventually lead to full equilibration. However, the applications of such effective descriptions have been so far arXiv:2105.13560v2 [cond-mat.str-el] 3 Dec 2021 limited to small clusters and weak 40,41,43 or extreme excitation conditions 42 . In this work, we introduce a generalized Gibbs ensemble (GGE) type description for the effective model obtained from a SW transformation by incorporating different chemical potentials for pseudoparticles, i.e. for the many-body local states. This effective equilibrium description allows us to systematically scan the nonequilibrium states in photo-doped Mott insulators for extended systems using established equilibrium methods. We use this approach to identify and study emerging phases in the photo-doped one-dimensional extended Hubbard model. We determine the nonequilibrium phase diagram, where η-pairing 33,35,37,42,44 and CDW phases appear in a wide doping range, and reveal the corresponding spectral features. The CDW phase is strongly favored in photo-doped systems, compared to the chemically-doped ones, and it is characterized by unbound doublons and holons in contrast to photodoped semiconductors where electron-hole binding is an important effect. We show that the kinematics of photo-doped doublon/holon carriers, which is qualitatively different from the electron/hole dynamics in conventional semiconductors, plays an essential role and leads to development of CDW and η-pairing correlations described by squeezed systems without singly occupied sites. Results GGE description for photo-doped Mott insulators. The generic formulation of the GGE description is given in the Supplemental Material (SM), but here we explain the procedure focussing on the extended Hubbard model, whose Hamiltonian iŝ withĤ U = U i (n i↑ − 1 2 )(n i↓ − 1 2 ) the on-site andĤ V = V i,j (n i − 1)(n j − 1) the nearest-neighbor interaction. c † iσ is the creation operator of a fermion with spin σ at site i,n iσ =ĉ † iσĉ iσ the spin-density at site i,n i =n i↑ + n i↓ , and i, j denotes pairs of nearest-neighbor sites. v is the hopping parameter. For large U , the half-filled equilibrium system is Mott insulating. Photo-doping the Mott insulator creates doublons (doubly occupied states) and holons (empty states). When the Mott gap is large, the number of these excited local states is approximately conserved for kinematic reasons 24-29 . 
However, the original Hamiltonian explicitly contains recombination terms, which generate virtual processes that affect the physics even when recombination is kinematically suppressed. In order to explicitly remove the recombination terms, while effectively taking account of the effects of virtual recombination processes, we perform the SW transformation 37,45,46 . Here, we assume U ≫ V, t_hop. The effective Hamiltonian up to second order takes the form Ĥ_eff = Ĥ_kin,holon + Ĥ_kin,doub + Ĥ_spin,ex + Ĥ_dh,ex + Ĥ^(2)_U,shift + Ĥ_3-site, where Ĥ_kin,holon and Ĥ_kin,doub describe the hopping of holons and doublons of O(t_hop), respectively. The remaining terms are of O(t_hop^2/U). Ĥ_spin,ex is the spin exchange term, Ĥ_dh,ex is the doublon-holon exchange term, and Ĥ^(2)_U,shift describes the shift of the local interaction. Ĥ_3-site represents three-site terms such as correlated doublon hoppings, see Methods. Ĥ_dh,ex sets the correlations between neighboring doublons and holons, just as Ĥ_spin,ex sets the spin correlations between singlons (singly occupied states). In this sense, the above model is a natural extension of the t-J model 23 , which is obtained by assuming that either holons or doublons are added (chemical doping) and ignoring Ĥ_3-site. Due to intraband scatterings and environmental couplings (e.g. phonons), intraband relaxation occurs and the system reaches a steady state. Since the numbers of doublons and holons are conserved in the effective model, the steady state can be described by introducing separate chemical potentials for them. The corresponding number operators are N̂_doub and N̂_holon. With these, the grand-canonical Hamiltonian K̂_eff can be written as K̂_eff = Ĥ_eff − µ_holon N̂_holon − µ_doub N̂_doub, where we introduce µ_U = µ_doub + µ_holon and µ = −µ_holon. Thus, the local interaction is modified from U by the photo-doping (U − µ_U), i.e. the energy difference between the doublons and holons is effectively reduced, analogously to the effective shift of the band splitting in photo-doped semiconductors 22,47 . The properties of the nonequilibrium steady states may then be described by the density matrix ρ̂_eff = exp(−β_eff K̂_eff) with an effective temperature T_eff = 1/β_eff 17,22,48 , which is a sort of GGE 49,50 . This is essentially an equilibrium problem that can be studied with established equilibrium techniques. Response functions −i⟨[Â(t), B̂(0)]_±⟩ of the nonequilibrium states can also be computed within this framework, see SM. Our basic assumption that the nonequilibrium states can be characterized by a few parameters, such as the doublon number and effective temperature, is supported by a recent study 37 , which demonstrated a good agreement between time-evolving states and nonequilibrium steady states weakly coupled to thermal baths. Nonequilibrium phase diagram of the photo-doped extended Hubbard model. We apply the above framework to the half-filled one-dimensional extended Hubbard model using infinite time-evolving block decimation (iTEBD) 51 and exact diagonalization (ED) 52 . [Figure 1 caption: Nonequilibrium phase diagram as a function of the doublon density n_d (= (1/N) Σ_i ⟨n̂_i,d⟩) and the nonlocal interaction V, for U = 10. Phases are categorized by the dominant correlation evaluated from iTEBD, i.e. the correlation with the smallest critical exponent a. The phase boundary is only schematic and a guide to the eye. The critical exponent is extracted by fitting the correlation functions χ(r) with C_1/r^2 + C_2 cos(qr)/r^a, where q = 2n_d π, q = (1 − 2n_d)π and q = π for charge, spin and SC correlations, respectively. We use r ∈ [6, 30] for the fitting range.] As in the case of the t-J model, the effect of Ĥ_3-site is not essential. We confirm this for the photo-doped situation using ED in the SM.
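The critical exponents that define the phase diagram are obtained by fitting the real-space correlation functions to the form quoted in the Fig. 1 caption above. A minimal Python sketch of such a fit is given below; the synthetic input data stand in for iTEBD output and are purely illustrative, and none of the names are taken from the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def chi_model(r, C1, C2, a, q):
    """Fitting form used for chi(r): C1/r^2 + C2*cos(q*r)/r^a."""
    return C1 / r**2 + C2 * np.cos(q * r) / r**a

n_d = 0.25                      # doublon density; sets q = 2*pi*n_d for the charge correlations
q_charge = 2 * np.pi * n_d
r = np.arange(6, 31)            # fitting range r in [6, 30], as stated in the caption

# Synthetic "measured" correlations standing in for iTEBD output (illustrative only).
rng = np.random.default_rng(0)
chi_data = chi_model(r, 0.05, 0.3, 1.4, q_charge) + 1e-4 * rng.standard_normal(r.size)

# Fit with q fixed to its expected value; only C1, C2 and the exponent a are free.
popt, _ = curve_fit(lambda r, C1, C2, a: chi_model(r, C1, C2, a, q_charge),
                    r, chi_data, p0=(0.01, 0.1, 1.5))
print(f"fitted critical exponent a = {popt[2]:.2f}")   # a < 2 signals quasi-long-range order
```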
Thus, in the following, we focus on the effective modelĤ eff2 , which ignoresĤ 3−site . We consider cold systems (T eff = 0) to clarify the possible emergence of nonequilibrium ordered phases. Such a situation may be achieved by energy dissipation to the environment 47,53,54 or entropy reshuffling 55,56 . In the following, we use t hop as the energy unit, and fix U = 10, Here, N is the system size, n av = 1 N i n i ,Ŝ z i = 1 2 (n i,↑ −n j,↓ ), and∆ i =ĉ i↑ĉi↓ . η-pairing is characterized by staggered SC correlations. Note thatĤ,Ĥ eff , andĤ eff2 are SU c (2) symmetric with respect to the η-operators for V = 0 44,57 . Due to this symmetry, a homogeneous state with long-range η-SC correlations is on the verge of phase separation, which we avoid by considering non-zero V 42 , see SM. In Fig. 1, we show the computed nonequilibrium phase diagram for the photo-doped Mott insulator. In onedimensional quantum systems, spatial equal-time correlations can show quasi-long-range order, i.e., power-law decay with a critical exponent a less than 2, which corresponds to a diverging susceptibility in the low-frequency limit 58 . The corresponding spatial dependence of the correlation functions is shown in Fig. 2. We see that generically more than one correlation function exhibits quasi-long-ranged order. The phase shown in Fig. 1 is identified from the correlation function with the smallest critical exponent. Without photo-doping, a SDW phase with staggered spin correlations is found 59 . However, other correlations quickly become dominant with photodoping. When V 0.2 (= Jex 2 ), the η-SC phase emerges in a wide photo-doping range. This is consistent with recent dynamical mean-field theory (DMFT) analyses for the pure Hubbard model in the infinite spatial dimension employing entropy cooling or heat baths 37,55 . Importantly, the sign of the SC correlations remains staggered regardless of doping and V . For larger V , the CDW phase is stabilized. We note that, in the extreme photo-doping limit (n d = 0.5), the effective model (Ĥ dh,ex +Ĥ V ) be-comes equivalent to the XXZ model 42 . Namely, we havê where J XY = −J ex , J Z = −J ex + 4V andη represents the η-operators (see METHODS). The XXZ model shows XY order (quasi-long range order) for |J XY | > J Z , while it shows Ising order (long range order) for |J XY | < J Z . In our language, the former corresponds to η-SC and the latter to CDW. Thus, it is natural that the phase transition between η-SC and CDW occurs at |J XY | = J Z , i.e. V = Jex 2 , for strong photo-doping. Interestingly, the phase boundary remains located near this value over a wide photo-doping range, see Fig. 1. As Fig. 2 shows, more than one order can be quasi-long ranged for a given set of parameters. When V is small, the SC correlations are dominant. The spin correlations are also quasi-long-ranged, while the charge correlations show no sign of CDW (alternation of signs). When V is increased, the exponent of the spin correlations remains almost unchanged, while the decay of the SC correlations becomes faster and the CDW correlation starts to develop around V 0.1 (= Jex 4 ). V = 0.2 (= Jex 2 ) is in a coexistence regime where CDW, SDW and η-SC orders are simultaneously quasi-long ranged [ Fig. 2a]. While the precise boundaries of the coexistence regime are difficult to determine (see SM), by V = 0.4 (= J ex ), the CDW correlations become dominant and the SC correlations decay exponentially. Origin and properties of photo-doped phases. The photo-doped states exhibit unique properties. 
Firstly, the η-SC is absent in equilibrium, since in chemicallydoped systems either doublons or holons are introduced and hence χ sc (r) vanishes. On the other hand, one expects that even in chemically-doped states, CDWs can develop due to the instability of the Fermi surface. Figures 3ab however show that the CDW correlations are much stronger in photo-doped than in chemically-doped states. Furthermore, the CDW correlations show incommensurate oscillations with q = 2n d π, see Fig. 2a. This indicates that holons and doublons do not bind in pairs. Instead, the holons (doublons) are located in the middle of neighboring doublons (holons), even though the doublon-holon interaction V dh ≡ Jex 4 − V is attractive, seeĤ dh,ex +Ĥ V . The absence of binding is also directly confirmed by the evaluation of the doublon-holon correlations (SM). This situation is in stark contrast with semiconductors, where the attractive interaction between photo-doped holes and electrons leads to condensation of electron-hole pairs (excitons) at low temperatures 20-22,47 . Let us discuss in more detail the physical origin of the CDW phase. In the extended Hubbard model the interaction between doublons (and between holons) is V dd ≡ − Jex 4 + V , and thus V dd ≡ −V dh . To investigate how the interactions among doublons and holons affects the CDW formation, we artificially add an interaction between neighboring doublons and holons,Ĥ V dh ≡ , so that the doublonholon interaction becomesṼ dh ≡ V dh + ∆V dh . We choose the parameters such that bothṼ dh and V dd are repulsive. Figure 3c shows that the relative magnitude of the doublon-doublon (holon-holon) and doublon-holon interaction controls the physics and that attractive doublonholon interaction is not essential for the CDW. Namely, oscillations in χ c appear if V dd Ṽ dh . This indicates that CDW correlations develop between the doublons and holons as if no singlons existed between them. The situation is analogous to the spin correlations in the one-dimensional t-J model, which can be explained by the squeezed Heisenberg chain without doublons and holons 60,61 . Underlying this phenomenon in the onedimensional t-J model is the conservation of the spin configuration in the J → 0 limit 60 . Since a singlon always encounters the same neighbors, the system favors the spin configurations described by the Heisenberg hamiltonian. The same situation is realized in the photo-doped case. In the limit of J ex → 0, the configuration of doublons and holons is also conserved due to their peculiar kinematics, seeĤ kin,holon +Ĥ kin,doub . (Note that in normal semiconductors, holes and electrons can switch position even in the one-dimensional case.) Thus, the configurations of doublons and holons are determined by the interaction termĤ dh,ex +Ĥ V , as in the case of spin configurations in the t-J model. To confirm the above scenario, we evaluate the correlations between the doublons and holons in terms of a reduced distance which ignores singlons. The corresponding correlation function is defined asχ ηz (r) = with l singlons between the 0th site and the (r + l)th site Fig. 3d]. Staggered correlations appear for V dd Ṽ dh , which supports the above argument. Note that, even when doublons and holons show the Ising-type order in the squeezed space, the correlations can still exhibit a power-low 62 . 
We thus conclude that the photo-induced CDW originates from the less repulsive doublon-holon interaction (compared to interactions between the same species), the peculiar kinematics of carriers and the one-dimensional configuration. Furthermore, the development of correlations between the doublons and holons in the squeezed system without singlons should also apply to systems with V Jex 2 . In these cases, the X and Y components ofĤ dh,ex +Ĥ V (we regardĤ dh,ex +Ĥ V as an XXZ model, as in the extreme photo-doing limit) is dominant and the η-paring phase emerges. This naturally explains the observation that the boundary between the η-paring phase and the CDW phase is close to V Jex 2 independent of the photodoping level. Single-particle spectra. We now focus on the singleparticle spectra, to clarify characteristic features of the different phases. Figure 4 shows the momentumintegrated spectrum A loc (ω) and the momentum-resolved spectrum A k (ω) for the η-SC phase and the CDW phase 46 . For the CDW phase, we use V = 1 to enhance the characteristic features. Unlike in equilibrium, but similar to photo-doped semiconductors, the photo-doped system exhibits two "Fermi levels" separating occupied (electron removal spectrum) from unoccupied (electron addition spectrum) states. The occupied states in the upper Hubbard band (UHB) region correspond to the removal of a doublon, while those in the lower Hubbard band (LHB) region correspond to adding a holon (see Methods for precise definitions). In the η-SC phase, within our numerical accuracy, no gap signature appears in A k (ω) around the new Fermi levels [ Fig. 4a], which is in stark contrast to a normal superconductor with a gap around the Fermi level. The absence of a gap is also found for η-paring states in higher dimensions 63 , and this suggests that the η-SC state is a kind of gapless superconductivity. On the other hand, in the CDW phase, gaps appear at the new Fermi levels [ Fig. 4b], as in the excitonic phase in photo-doped semiconductors 22 . Finally, we observe that in-gap states between the UHB and LHB develop with photo-doping, which are more prominent for larger V [Fig. 4]. These states may enable recombination processes suggesting that our assumption of approximately conserved doublon and holon numbers may become less valid as the excitation density increases. However, one needs to keep in mind the following points: i) For large enough U , the Mott gap remains clear and the doublon and holon numbers are approximately conserved. Since the CDW is driven by V , the value of U does not affect its existence and spectral features. ii) Even when in-gap states develop, the effective equilibrium description is meaningful. The recombination rate for a given state can be estimated by Fermi's golden rule, and if this rate is small compared to the intraband relaxation, the transient state can be described by (time-dependent) effective temperatures and chemical potentials 64 . Hence, our results show that the effective equilibrium description can be useful to study the closure/shrinking of a Mott gap via photo-doping, as a result of screened interactions and photo-induced spectral features 65 . Discussion We introduced a GGE-type effective equilibrium description for photo-doped strongly correlated systems. This provides a theoretical framework for systematic studies of nonthermal phases. Using this effective equilibrium description, we revealed emerging phases in the photodoped one-dimensional extended Hubbard model. 
The η-pairing phase is stabilized in the small V regime even when the SU c (2) symmetry that protects η-pairing in the pure Hubbard model is absent, and it is characterized by gapless spectra. The CDW phase emerges in the larger V regime, and it is characterized by gapped spectra. These states are unique to photo-doped strongly correlated system, where the peculiar kinematics of doublons and holons stabilizes them in a wide doping range. The similarity between the GGE-type description for strongly correlated systems and the pseudoequilibrium description for the photo-doped semiconductors allowed us to clarify some fundamental differences between these two systems. In particular, our results demonstrate that photo-doped strongly correlated systems and semiconductors exhibit qualitatively different phases due to the different nature of the injected carriers. Target systems to look for the characteristic Mott features include candidate materials of one-dimensional Mott insulators ranging from organic crystals, e.g., ET-F 2 TCNQ, to cuprates, e.g. Sr 2 CuO 3 , as well as cold-atom systems. We also note that further insights into our results for the one-dimensional system may be obtained by using equilibrium concepts such as Luttinger liquid theory and the exact wave function for J ex → 0 60 . In the nonequilibrium state, we expect at most three degrees of freedoms: spin, pseudo-spin (consisting of doublon and holon) and charge (position of singlons). Thus, unlike in the equilibrium case, the maximum value of the conformal charge c would be 3, which may be realized in the η-pairing phase. A systematic analysis in this direction is under consideration. The GGE-type description of photo-doped Mott states can be applied to various models and implemented with different equilibrium techniques such as slave boson approaches 66,67 and variational methods 51,68,69 . The description can provide useful insights into experimental findings, e.g. photo-induced SC-like state, as well as theoretical results. For example, a recently found metastable orbital order in a photo-doped multi-orbital system can be reasonably explained by an effective equilibrium picture 70 . Systematic explorations of nonequilibrium phases in higher dimensions and at nonzero effective temperatures with the GGE-type description should be undertaken. For example, the GGE-type description allows us to investigate important aspects of nonequilibrium states, such as the screening of the interactions 65 or the stability of photo-induced phases against light irradiation 71 . These questions are interesting topics for future investigations. Methods Infinite time-evolving block decimation. The infinite time-evolving block decimation (iTEBD) method expresses the wave function of the system as a matrix product state (MPS), assuming translational invariance 51 . iTEBD directly treats the thermodynamic limit and we use cut-off dimensions D = 1000 ∼ 3000 for the MPS to get converged results. We use the conservation laws for the numbers of spin-up and spin-down electrons at half-filling to improve the efficiency of the calculations. Away from half-filling, the trick with the conservation laws cannot be used, which makes the simulations less efficient. Thus, we use the exact diagonalization method in Fig. 3. Single-particle spectrum. The single-particle spectrum is defined as follows. The local spectrum is . Note thatĉ iσ (t) is the Heisenberg representation ofĉ in terms ofĤ eff . 
The occupied spectra correspond to A < loc (ω) ≡ 1 2π ImG < i (ω) and A < k (ω) ≡ 1 2π ImG < k (ω), where G < denotes the lesser part of the Green's functions. To evaluate these quantities using the effective equilibrium description and iTEBD, we employ the method proposed by some of the authors 46 . Namely, we evaluate G R (t) using an auxiliary band and perform the Fourier transformation with a Gaussian window, F Gauss (t) = exp(− t 2 2σ 2 ), and we use σ = 5.0. Thus, the broadening of the resultant spectrum is inevitable and a gap much smaller than the broadening cannot be captured. Still, we checked that no gap signature appears in the η paring state for an increased value of J ex , where we would expect an increase of the gap (if any). H eff for the U -V Hubbard model. The explicit expressions for the terms in Eq. (2) are as follows. The O(v) terms are given bŷ wheren = 1 − n andσ is the opposite spin to σ. For the O( whereŝ =ĉ † α σ αβĉ † β with σ denoting the Pauli matrices. The exchange term for a doublon and a holon on neighboring sites iŝ The shift of the local interaction is described bŷ Here, the superscript "(2)" indicates that the term is The 3-site term can be expressed asĤ 3−site ≡ H k,i,j ,σ n i,σ c j,σnj,σnk,σ c † k,σ + h.c. (10) Here, k, i, j means that both of (k, i) and (i, j) are pairs of neighboring sites. The sum is over all possible such combinations (without double counting), where we regard (k, i, j) = (j, i, k). In the evaluation of the physical quantities, we use the operators of the effective model. To be strict, if physical quantities for the original Hamiltonian are to be computed, one also needs to take account of corrections from the SW transformation to the operators. However, these corrections are not necessary to see the leading behavior. The same strategy is often used in the evaluation of physical quantities for the Heisenberg model or the t-J model. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. Code availability The source code for the calculations performed in this work is available from the corresponding authors upon reasonable request. Competing interests The authors declare no competing interests. Supplementary information Appendix A: General procedure In this section, we explain a general procedure to derive effective models and set the chemical potentials for photo-doped strongly correlated systems (SCSs). As a generic situation, we consider a Hubbard-type Hamiltonian consisting of a local part and a nonlocal part, H =Ĥ loc +Ĥ nonloc , whereĤ loc = iĤ loc,i andĤ nonloc includes nonlocal processes such as hoppings and nonlocal interactions. We represent the eigenstates of the local partĤ loc,i at site i by |α i , with α = 1, 2, · · · , N loc . For example, we have N loc = 4 in the single band Hubbard model. Next, we introduce pseudo-particle operatorsd † α,i for the local many-body states |α i , which are useful to formulate the generic and systematic procedure based on the Schrieffer-Wolff (SW) transformation. Here, d † α,i is a fermionic (bosonic) creation operator if |α includes an odd (even) number of fermions, and the physical space is defined by αd † α,id α,i = 1. With the help of these operators, one can express the local Hamiltonian asĤ loc,i = N loc α=1 E αd † α,id α,i . Let us assume that E α − E α can take large values of O(U ), while the remaining terms are O(t hop ), so that we can treatĤ nonloc perturbatively. 
To this end, we classify the local states into several groups, such that the energy difference of the states within the same group is less than O(U ), and the states in the same group have the same number of physical particles. For example, in the one band Hubbard model, we have three groups. The first one consists of holons, the second one consists of doublons and the third one consists of singly occupied states. The number operator for each group G is defined asN G = i α∈Gd † α,id α,i . H nonloc , written in terms of the pseudo-particle operators, readŝ Now, we apply the SW transformation,Ĥ = e iŜĤ e −iŜ withŜ = l≥1Ŝ is chosen such that, at each order,Ĥ does not include processes that change {N G }. The lowest order term ofŜ is chosen such that This is satisfied for In general, for any operator that is expressed aŝ we can definê and one finds that it satisfieŝ This fact can be used to obtain the higher order corrections forŜ. To summarize, the effective Hamiltonian up to second order becomeŝ (A7) Here, | 0 means that we only consider terms that do not change {N G }. In other words,Ĥ eff commutes withN G and recombination processes are explicitly removed. In a photo-doped system, the number of local states in each group can be considered as approximately conserved when the gap between different bands (O(U )) is large enough. On the other hand, a relaxation within the energetically separated bands occurs due to the scattering between local states and energy dissipation to other degrees of freedom such as phonons. For extended systems, the resulting steady state can be described by introducing "chemical potentials" µ G for each group, The properties of the nonequilibrium steady state may be described by this grand canonical HamiltonianK eff and an effective inverse temperature β eff , i.e., by the density matrixρ eff = exp(−β effKeff ). Now the problem becomes essentially equivalent to a conventional equilibrium problem, so that one can apply suitable equilibrium techniques to study the nonequilibrium state of SCSs. Strictly speaking, the above construction implicitly assumes that, within the same group, the ratio of the states can be changed. Such transitions may be caused by (i) hopping processes (i.e. effects of the effective model) or (ii) as a result of some assumed coupling with the environment. However, when neither of the two conditions applies, one needs to find a subgroup of G and introduce a chemical potential for each subgroup. (The remaining procedure is the same as above.) One may encounter such a case for example in multi-orbital problems with degenerated local states. The specific strategy will depend on the model and the likely properties of the environment. Appendix B: Response functions Static observables can be evaluated directly usinĝ ρ eff = exp(−β effKeff ). However, for response functions, one has to be careful since the time evolution involved is described byĤ eff and not byK eff . Specifically, to calculate response functions, we need to evaluate quantities of the typẽ where Z eff = Tr[ρ eff ]. To evaluate this, we express and B as where α (andB α ) changes the number of states in each group (G) by ∆N G,α . The operator hence changes the expectation value ofĤ µ ≡ − G µ GNG by λ α ≡ − G µ G ∆N G,α . Therefore, we have We thus obtain the following expression forF AB : Here, −α means that ∆N G,−α = −∆N G,α . The last equation implies that one can evaluate 1 Z eff Tr[ρ eff e iK eff t α e −iK eff tB −α ] using some equilibrium formalism and the "Hamiltonian"K eff . 
To obtain the response function, one needs to multiply these results by e iλαt and sum over α. In the main text, we use the above formalism to evaluate the single-particle spectra for the photo-doped systems with iTEBD 46 . This indeed yields consistent results with ED for finite systems. The window-Fourier transformed correlations also help us to identify the dominant "short-range" correlation by comparing the peak intensities of χ(q; σ), see illustration in Fig. 7. Compared to the phase diagram determined by the exponents of the correlation functions in the main text, the SDW region is extended and the CDW region is suppressed. We note that the relative size of peaks strongly depends on the choice of the width of the window σ. In Fig. 8, we show the critical exponents obtained by the fitting of χ(r) with C 1 /r 2 + C 2 cos(qr)/r a , where q = 2n d π, q = (1 − 2n d )π and q = π for charge, spin and SC correlations, respectively. It is clear that for V = J ex /2, the exponents of CDW, SDW and SC correlations are all less than 2 (quasi-long ranged). However, it turns out that determining the exact boundaries where the exponents exceed 2 is difficult within our present numerical scheme, where the correlation functions can be ) with C1/r 2 + C2 cos(qr)/r a , where q = 2n d π, q = (1−2n d )π and q = π for charge, spin and SC correlations, respectively. Here we use iTEBD forĤ eff2 at half filling with U = 10. We use r ∈ [6, 30] for the fitting range. converged up to r 30 against the cut-off dimension. Even though the coexistence phase is clearly extended, its boundary fluctuates depending on the fitting range. We however confirmed that the relative size of the exponents, which is used for Fig. 1 in the main text, is robust against the fitting range. In Fig. 9, we show supplemental data for the CDW states. Panel (a) shows the results of the dependence onĤ V dh , analogous to Fig. 3(c) in the main text which was obtained by ED. As in the ED case, the CDW correlations start to develop whenṼ dh < V dd . Panel (b) shows the correlation function between the doublons and holons, χ dh (r) = 1 N i n i+r,hni,d , in the CDW phase. When a doublon and a holon form a bound pair, the amplitude should be largest at r = 1. However, the result shows that the location of the holon is just in the middle of neighboring doublons, which indicates that the doublons and holons are unbound. Appendix D: Some remarks related to η operators In this section, we make some remarks related to the η operators and the effective Hamiltonian for the Hubbard model (V = 0). The following discussion implies that the homogeneous state with the η-type long range SC One can show that the SU c (2) symmetry holds also forĤ eff as well asĤ eff2 . We note that these are also true for the grand canonical Hamiltonian K eff at half-filling. Stability of the η states in the effective model For the effective model of the Hubbard model (V = 0) we have [Ĥ eff ,η ± ] = ±Xη ± , andĤ eff ,η 2 ,η z ,N d andN h commute with each other. Here, X is some number and it is zero in the present way of expressing Hamiltonian.η z , N d andN h are not independent, sinceη z = 1 2 (N d −N h ). (The following arguments apply toĤ eff2 as well.) Let us assume that a homogeneous state |α, η, N d , N h N is a ground state within a subspace specified by N d (= O(N )) and N h (= O(N )). We express the energy as E 0 = N 0 and assume η +η− = O(N 2 ), i.e., an η-pairing state. 
Now we introduce the following two states, These are also eigenstates ofĤ eff , whose energies are In this section, we present supplementary results obtained by exact diagonalization (ED) for finite size systems. Here we set the system size to N = 14 and apply the periodic boundary condition. We use U = 10 as in the main text. For any photo-doping, a homogeneous solution is obtained for finite size systems with ED. The ED calculations show that the energy surface is flat along the line N d + N h = constant for V = 0 (the pure Hubbard model) as we discussed in the previous section. Furthermore, they confirm the results from iTEBD in the thermodynamic limit. We note that for ED, one can explicitly specify the doublon number (N d ) and the holon number (N h ), hence there is no need to introduce µ U . Stability against phase separation In the normal equilibrium system, one may encounter a possible phase separation into a high density and low density region. In the present study of the photo-doped U -V Hubbard model, there are two types of possible phase separations: (i) separation into a high density and low density region, and (ii) separation into a doublon-holon rich region and a doublon-holon poor region. The stability against these phase separations can be discussed by expressing the energy (free energy for finite temperature) surface as a function of N d and N h , and checking whether the surface is concave or not. The analytical discussion in Sec. D implies that for V = 0, the energy is flat along the line of N d + N h = constant around the homogeneous solution with η 2 = 0 at half-filling. This means that the state is on the verge of phase separation into a high density and low density region. This is numerically confirmed in Fig. 10. We note that the energy surface is concave along N d = N h for small V , which means that the homogeneous solution is stable against the separation into a doublon-holon rich region and a doublon-holon poor region, see Fig. 11(a). On the other hand, when V is large, the function becomes convex, which is indicative of phase separation. We note that this unstable regime corresponds to the regime where the emergence of a doublon-holon cluster was previously reported 41 . Correlation functions and response functions In Figs. 12 and 13, we show the correlation functions for the photo-doped states evaluated with ED forĤ eff andĤ eff2 , respectively. We focus here on half-filling. First of all, one can see that the effect ofĤ 3−site is not essential since the spatial patterns obtained forĤ eff and H eff2 match nicely. Slight differences can be seen for the case of V = 0.4, where the charge (SC) correlation is slightly stronger (weaker) forĤ eff2 . In both cases, for V = 0, the system shows commensurate SDW correlations without photo-doping (N d = 0). As we photo-dope the system, the spin correlations χ s become weaker and their spatial pattern deviates from the commensurate one. While no clear pattern emerges for the charge correlations χ c , a staggered pattern appears in the SC correlations χ sc . The latter is the indication of η-pairing states, which are energetically favored due tô H ex,dh . The development of the SC pattern starts already for low photo-doping. When V is switched on, the behavior of the spin correlations is little affected. (Remember that U V .) On the other hand, some structure develops in χ c with increasing photo-doping and a commensurate CDW is found in the extreme photo-doping regime (N d = N/2). There the model becomes the XXZ model. 
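As noted in the main text, in this extreme photo-doping limit the effective model Ĥ_dh,ex + Ĥ_V maps onto an XXZ model with J_XY = −J_ex and J_Z = −J_ex + 4V, so the boundary between the XY-ordered (η-SC) and Ising-ordered (CDW) regimes follows from one line of algebra, reproduced here for reference.

```latex
% eta-SC / CDW boundary of the effective XXZ model, using the couplings quoted in the text:
%   J_XY = -J_ex ,  J_Z = -J_ex + 4V ,  transition at |J_XY| = J_Z
\[
  |J_{XY}| = J_Z
  \;\Longrightarrow\;
  J_{\mathrm{ex}} = -J_{\mathrm{ex}} + 4V
  \;\Longrightarrow\;
  V = \frac{J_{\mathrm{ex}}}{2}
  \; (= 0.2 \ \text{for } J_{\mathrm{ex}} = 0.4,\ U = 10),
\]
% in agreement with the nearly doping-independent eta-SC/CDW boundary at V ~ J_ex/2 in Fig. 1.
```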
The SC correlations are weakened by V , but the commensurate correlations do not vanish completely, in particular at intermediate dopings. These results are all consistent with the iTEBD simulations for the thermodynamic limit. One can furthermore see that the effects ofĤ 3−site is also small for dynamical properties such as singleparticle spectra. In Fig. 14 and Fig. 15, we compare the momentum-resolved spectral functions (A k (ω)) for different photo-dopings and interactions forĤ eff and H eff2 . The characteristic features in the spectral func- These features are all consistent with the iTEBD analysis. We next show additional results comparing photodoped and chemically doped systems. In Fig. 16, we show the results ofĤ eff andĤ eff2 for hole-doped cases with V = 0.4. Again one can see that the effects of the three site term are small. In Fig. 17, we show the results for different values of V . While the pattern of the spin correlations remains almost the same, one can see that the intensity of the charge correlation is slightly increased with increasing V . Finally, we discuss the behavior of the (linear) response functions 52 : Here,Ŝ z q = 1 √ N j e −iqjŝz j ,n q = 1 √ N j e −iqjn j , and ∆ q = 1 √ N j e −iqj∆ j . γ is a damping factor which we set to 0.15. Since one can confirm again that the effect of the three site term is minor for these quantities, we only show results forĤ eff2 . Figure 18 presents the response functions for V = 0. The charge response function N (q, ω) always shows a gapless mode at q = 0. In particular, for large doping, the energy scale of the dispersion of the collective mode becomes J ex . The spin re- sponse function S(q, ω) shows a massless mode at q = π without photo-doping, which is consistent with the development of quasi-long range commensurate SDW correlation. Also, the signals in S(q, ω) represent the spinon dispersion, whose energy scale is J ex for the undoped case. For large doping the energy scales become v and the signature resembles the free particle dispersion, see Fig. 18(h). In between, one can identify structures where the above two features are mixed. These presumably originate from the dynamics of the kink of the spin configuration and the kinetics of the particles that hold the spin (singlon). As we increase the doping, the massless mode shifts to some incommensurate value of q. As for the SC response function SC(q, ω), one can identify a gapless mode emerging from q = π and the intensity profile looks similar to that of N (q, ω) shifted by π. In Fig. 19, we show the results for V = 1. The behavior of S(q, ω) is similar to that for V = 0. On the other hand, N (q, ω) shows massless modes at incommensurate values of q, which is consistent with the development of quasilong range incommensurate CDW correlations. The SC response function shows a clear nonzero gap, which is consistent with the exponential decay of the SC correlations. In Fig. 20, we show the results for V = 1 for the hole-doped system. The behavior of S(q, ω) is similar to that for photo-doping. As for N (q, ω), there is no clear development of low-lying modes, which is consistent with weak CDW signals in the correlation function.
2021-05-31T01:16:02.656Z
2021-05-28T00:00:00.000
{ "year": 2021, "sha1": "9516db2fed706b030469a8bceddbbaa4bb9bd451", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9516db2fed706b030469a8bceddbbaa4bb9bd451", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250459802
pes2o/s2orc
v3-fos-license
Developing Graphene Grids for Cryoelectron Microscopy Cryogenic electron microscopy (cryo-EM) single particle analysis has become one of the major techniques used to study high-resolution 3D structures of biological macromolecules. Specimens are generally prepared in a thin layer of vitrified ice using a holey carbon grid. However, the sample quality using this type of grid is not always ideal for high-resolution imaging even when the specimens in the test tube behave ideally. Various problems occur during a vitrification procedure, including poor/nonuniform distribution of particles, preferred orientation of particles, specimen denaturation/degradation, high background from thick ice, and beam-induced motion, which have become important bottlenecks in high-resolution structural studies using cryo-EM in many projects. In recent years, grids with support films made of graphene and its derivatives have been developed to efficiently solve these problems. Here, the various advantages of graphene grids over conventional holey carbon film grids, functionalization of graphene support films, production methods of graphene grids, and origins of pristine graphene contamination are reviewed and discussed. INTRODUCTION The successful application of direct electron detection devices (Ruskin et al., 2013;McMullan et al., 2014;Wu et al., 2016) and well-developed imaging processing algorithms (Scheres, 2012;Li et al., 2013a;Punjani et al., 2017;Zheng et al., 2017;Caesar et al., 2020;Punjani et al., 2020;Kimanius et al., 2021;Nakane and Scheres, 2021;Punjani and Fleet, 2021) have greatly improved the resolution of cryoelectron microscopy (cryo-EM), transforming this method into an important approach for determining the structures of biological macromolecules at near-atomic resolution. Compared to the X-ray diffraction technique, cryo-EM does not require crystals and only requires a small amount of specimen in its physiological solution. Therefore, cryo-EM has unique advantages and has been successfully applied to the study of the near-atomic resolution structures of challenging protein complexes with high flexibility (Yan et al., 2015;Plaschka et al., 2017;Ramirez et al., 2019) and small proteins (Khoshouei et al., 2017;Fan et al., 2019;Zhang et al., 2019;Han et al., 2020;Nygaard et al., 2020). However, through cryo-EM studies, researchers have found that the conventional cryo-EM procedure does not always work and that there are obstacles in the specimen preparation procedure. The conventional plunge freezing procedure for cryo-EM sample preparation developed by Dr. Jacques Dubochet in the early 1980s (Dubochet et al., 1988) is still widely used. It comprises three steps. First, a drop of protein solution is applied to a holey carbon film grid that has been pretreated by plasma cleaning. Second, the excess solution is blotted using filter paper, resulting in a thin liquid film spanning across the grid holes. Third, the grid is rapidly plunged into a liquid cryogen, such as liquid ethane, which has been precooled using liquid nitrogen. After plunge freezing, protein particles are fixed in a thin vitrified ice layer ( Figure 1A). 
With this procedure, the distribution of protein particles is not always ideally uniform in sufficiently thin ice, and many problems could be encountered in different projects, including a high noise background due to thick ice, a nonuniform distribution of particles within holes (Snijder et al., 2017;Drulyte et al., 2018), beaminduced motion (Glaeser, 2016), air-water interface-induced specimen denaturation/degradation , and preferred particle orientation (Tan et al., 2017). These problems have become bottlenecks for high-resolution cryo-EM studies in many cases. FIGURE 1 | The potential advantages of pristine graphene grids in cryo-EM sample preparation. (A) the sample distribution using the holey carbon grid. Most protein particles are adsorbed onto the air-water interface. (B) the sample distribution using the graphene grid. Due to the interaction between the sample and the graphene-water interface, the protein particles can be kept away from the air-water interface. (C) normal single particle data collection using the graphene grid. The graphene grid can reduce the beam-induced motion. Thin and uniform ice using the graphene grid means that we can choose a smaller defocus without losing contrast. Most protein particles adsorbed onto the graphene layer are roughly in the same plane, which makes the subsequent contrast transfer function estimation more accurate. (D) application of the graphene grid for the tilt data collection strategy, which is a general solution to improve the map quality when the preferential orientation problem occurs (Lyumkis, 2019). In recent years, many efforts have been applied to developing various methods and techniques to solve the problems that can occur during cryo-EM specimen vitrification. One method is modifying the surface of the holey carbon support foil by manipulating glow discharging protocols (Isabell et al., 1999;Nguyen et al., 2015) or treatment with PEG (Meyerson et al., 2014) or detergents (Cheung et al., 2013). This type of approach can improve the particle distribution in the hole. A multiple blotting approach was proposed to increase the number of particles in the hole (Snijder et al., 2017). Holey metal support foils, including gold foil (Russo and Passmore, 2014b;Naydenova et al., 2020) and amorphous nickel-titanium alloy (ANTA) foil (Huang et al., 2021), were developed to decrease nonspecific interactions between particles and foils and reduce beam-induced motion. New types of cryo-EM sample preparation instruments have also been developed, such as Spotiton (Noble et al., 2018b), Vitrojet (Ravelli et al., 2020), and TED (Kontziampasis et al., 2019), to minimize the time of the vitrification procedure and to address the air-water interface problem. In this review, various advantages of graphene grids are discussed by comparing them with conventional holey carbon grids. In addition, to form an overall outline of the current development of graphene grids for cryo-EM, the recent progress in graphene film functionalization and the preparation of high-yield and clean graphene grids are discussed. If not specified, the phrase "graphene grid" represents both grids coated with graphene oxide film and grids coated with continuous pristine graphene film. ADVANTAGES OF GRAPHENE GRIDS A Brief Early History of Graphene Grid Development The successful exfoliation of 2D crystal monolayer graphene films was realized by Andre Konstantin Novoselov in 2004 (Novoselov et al., 2004). 
Since its discovery, pristine monolayer graphene has been applied in numerous scientific fields because of its superior qualities, including atomic-level thickness (3.4 Å), remarkable electrical (Geim and Novoselov, 2007) and thermal conductivities (Balandin et al., 2008), optical properties (Ghuge et al., 2017), chemical inertness (Bellunato et al., 2016), and good mechanical strength (Lee et al., 2008). According to a review of earlier work on graphene grid development, the first work using a pristine graphene film as a TEM specimen support for imaging light atoms and molecules was performed in 2008 (Meyer et al., 2008). Afterward, graphene grids began to be applied in the biological TEM field, such as the imaging of positively stained DNA (Pantelic et al., 2011), vitrified influenza virus (Sader et al., 2013), and frozen-hydrated apoferritin (Sader et al., 2013). However, unmodified graphene was not widely used, due to contaminant-induced hydrophobicity and degradation of image quality, until 2014, when Russo and Passmore et al. (2014a) adopted low-energy hydrogen-plasma treatment to render a pristine graphene film hydrophilic and found that the beaminduced motion could be efficiently decreased when using a graphene film-covered grid. Later, D'Imprima et al. (2019) proved that the denaturation effect of fatty acid synthase caused by the air-water interface (AWI) can be efficiently addressed using hydrophilized graphene-coated grids. Around the same time, Fan et al. (2019) developed their own high-yield and clean monolayer pristine graphene-coated grids, and they used these grids to determine the structure of streptavidin with a small molecular weight of 52 kDa to near-atomic resolution with cryo-EM. Graphene oxide (GO) was also introduced as a substrate material for cryo-EM experiments since it is naturally hydrophilic, nearly electron transparent, and easy to synthesize (Wilson et al., 2009;Pantelic et al., 2010). However, it is not easy to prepare a grid uniformly covered with a monolayer or a few layers of GO because GO is normally fragmented and easily selfaggregates. To address this problem, a simple and robust method to make GO-coated grids was reported and used to determine the 2.5-Å cryo-EM structure of the 20-S proteasome (Palovcak et al., 2018). The GO-coated grids can also be used to determine the structures of small protein particles with molecular weights lower than 100 kDa with cryo-EM (Patel et al., 2021). Improved Sample Distribution and Thinner Uniform Ice Vitrified particles in a hole may not look like what we expect. The ideal model of cryo-EM samples shows evenly distributed particles with random orientations, and the vitreous ice is uniformly thin. Owing to the interaction of particles with the air-water interface, support film, and neighboring particles, different situations can occur (Drulyte et al., 2018). Next, the vitrified ice thickness in the hole affects the final resolution that we can achieve. Thick ice increases the background and therefore reduces the image contrast. In addition, the problems of defocus gradient (Zhang and Zhou, 2011;Sun, 2018) and high-frequency information dampening (Voortman et al., 2011) become severe when the thickness of ice increases. Therefore, the ice thickness needs to be optimized to just cover the size of the particle to minimize the background. However, many specimens, especially membrane proteins, preferentially remain within thick ice (Han et al., 2020;Yokoyama et al., 2020). 
In addition, when the vitrified ice layer becomes too thin, the particles can be pushed toward the edge of the support film, causing aggregations of particles. Furthermore, some protein particles become deformed or denatured by the surface tension at the air-water interface (Unwin, 2013;Cheng et al., 2015;D'Imprima et al., 2019;Yokoyama et al., 2020). Thus, finding suitable areas that have both thin ice and a high density of evenly distributed particles in samples prepared on holey carbon grids remains challenging. Van de Put et al. (2015) achieved much thinner ice and uniform particle distribution by adsorbing specimens onto a GO support film. According to their results, an ultrathin vitrified ice layer with a thickness of 10 nm can be formed for high-resolution cryo-EM imaging of doublestranded DNA (300 bp), while the thickness of the ice layer is 130 nm using a conventional holey carbon grid. Han et al. (2020) determined a 2.6-Å resolution structure of streptavidin (52 kDa) using pristine graphene-coated grids. The ice thickness in their research was sufficiently thin, resulting in a very good contrast, even under a small defocus of −0.85 μm. In most cases, protein particles are not at the same Z-height when they are vitrified using a conventional holey carbon film grid ( Figure 1A), resulting in defocus variations for different particles. Although these variations can potentially be corrected during the image processing step using the contrast transfer function (CTF) refinement algorithm, using graphene grids, most protein particles are adsorbed onto the graphene support film and then kept roughly at the same Z-height ( Figure 1B), which makes defocus estimation more accurate and reduces the computational cost of CTF refinement. In some cases, when using a holey carbon film grid, the nonspecific interaction between protein particles and carbon film is nonnegligible, resulting in few particles found in the holes, while more particles are adsorbed on the carbon film. To obtain more particles in the holes, a high concentration of specimen would be necessary. However, using GO grids, Palovak et al. (2018) found that the concentration of 20-S proteasome could be 10 times lower than that using holey carbon film due to the interaction between the particles and the hydrophilic GO film. In addition, Han et al. (2020) reported that using pristine graphene grids, they observed a five times higher density of protein particles in the hole in comparison with that using a holey carbon film, and the particle distribution was more even. Therefore, graphene grids have the potential to yield a uniform distribution and high density of particles in the hole, which is expected to be important for studying membrane protein complexes reconstituted in liposomes (Yao et al., 2020). Protecting Specimens From the Air-Water Interface Many cryo-EM research groups have found that the quality of vitrified samples deteriorates in comparison with that of negativestained samples. This phenomenon was later explained by the interaction of protein particles with the air-water interface. When the protein particles are confined to the thin layer of the solution on the grid, the particles can diffuse and approach the air-water interface quickly. It was estimated that there were more than 1,000 collisions per second with the air-water interface in the thin ice (≤100 nm), which gives sufficient opportunity for adsorption of particles in preferential orientation (Taylor and Glaeser, 2008). 
Using the Stokes-Einstein equation, we calculated that the average time of particles (10-nm diameter) approaching the air-water interface in thin ice (40-nm thickness) was approximately 6 ms (Sun, 2018). After approaching the air-water interface, the proteins may adopt a preferential orientation or desorb away from the air-water interface (Glaeser, 2018). Next, Noble et al. (2018a) found that approximately 90% of particles were adsorbed at the air-water interfaces with a preferred orientation. In addition, the forces at the air-water interface may cause different degrees of denaturation and dissociation of protein complexes (Glaeser, 2018). This denaturation effect occurs frequently during conventional cryo-EM sample preparation . To minimize the effect of the air-water interface, many approaches, including adding the surfactants OG (Benton et al., 2018), amphipol (Owji et al., 2020), CHAPSO , and fluorinated Fos-choline-8 (Popot, 2018;Wang et al., 2019), collecting data from the thicker ice regions, utilizing affinity grids (Lahiri et al., 2019), and coating the holey carbon grid with a continuous thin carbon film (Thompson et al., 2016), were attempted and were successful for certain specimens. However, these approaches introduce a significant additional image background, necessitate exhaustive trials without a clear sign of success, or induce a new preferred orientation. In addition, certain new vitrification instruments were invented to address the air-water interface problem (Razinkov et al., 2016;Arnold et al., 2017;Ravelli et al., 2020). For example, Spotiton can minimize the spot-to-plunge time to 100-200 ms, which can reduce the number of particles adsorbed at the air-water interface (Razinkov et al., 2016;Noble et al., 2018b;Darrow et al., 2019). Using a time-resolved cryovitrification device (Kontziampasis et al., 2019), Klebl et al. (2020) further reduced the time taken to vitrify particles that adsorb at the air-water interface within 6 ms, which improved the cryovitrification quality of some specimens (Klebl et al., 2020). However, these instruments are expensive, contain many special consumable materials, are not easily accessible by most research groups, and can only partially address the air-water interface problem. In addition to our recent development of HFBI film-coated grids , the emergence of graphene grids provides a new solution to the air-water interface problem. With graphene grids, protein particles can adsorb at the graphene-water interface layer, thereby preventing protein particles from diffusing to the air-water interface ( Figure 1B), which can address the issues of the air-water interface-induced preferred orientation, denaturation, and dissociation effects (D'Imprima et al., 2019;Fan et al., 2019;Naydenova et al., 2019;Han et al., 2020;Joppe et al., 2020). Compared to the approach of using a continuous thin carbon film, graphene grids induce less extra background and are applicable to small particles. However, a potential new preferential orientation problem can arise due to the interaction between the protein particles and the layer of the graphene-water interface. In addition, if the ice is too thin, the risk of exposing the protein particles to the air-water interface still exists. Reducing Beam-Induced Motion When irradiated using an electron beam, the particles embedded in vitreous ice move, and this type of motion is called beam-induced movement (BIM). 
The BIM process involves two phases, including an initial rapid "burst" phase and a following slower phase (Glaeser, 2016). The burst phase may reflect the irradiation relieving stress that had been accumulated during plunge freezing, and the slower phase has three origins. The first origin is charging of the specimen, which has two subsequent effects, including electrostatic force-induced mechanical motion (Glaeser, 2016) and mini electrostatic lens-caused image deflection (Brink et al., 1998). A careful analysis has demonstrated that charging is not the dominant effect on image quality degradation (Russo and Henderson, 2018a;Russo and Henderson, 2018b). The second origin is radiation damage of proteins and amorphous ice, which can generate hydrogen gas, thereby causing a bubbling effect and introducing additional mechanical stress (McBride et al., 1986;Chen et al., 2008;Glaeser, 2008;Sun, 2018). The third origin is the beam-induced Brownian motion of water molecules (McMullan et al., 2015). The influence of this type of motion needs to be taken into account when the target resolution is smaller than 2 Å or a small particle size is studied (Sun, 2018). The BIM effect can cause blurring of images and limit the achievable resolution of cryo-EM. Regarding obtaining a 3-Å resolution reconstruction, the elimination of the BIM effect can significantly reduce the number of particles needed to reach the same resolution by~30-fold (Henderson, 2018). Owing to their high frame rate and detective quantum efficiency, the emergence of direct electron detectors facilitated video recordings and the possibility of motion correction (Mooney et al., 2011;Campbell et al., 2012;Bai et al., 2013;Li et al., 2013a;Zheng et al., 2017;Zivanov et al., 2019); therefore, the largest motions in the slower phase can be corrected, which marked the beginning of the resolution revolution in cryo-EM (Kuhlbrandt, 2014). However, the motion in the rapid "burst" phase is erratic and cannot be effectively corrected by the motion correction approach, which means that the first few frames, with less radiation damage and containing high-resolution information, cannot be effectively utilized (Grant and Grigorieff, 2015). The pristine graphene film-coated grids showed a unique advantage in reducing the BIM effect due to their high mechanical strength and electrical conductivity. Russo and Passmore (2014a) found that beam-induced motion could be reduced by a factor of~1.3 when adding a pristine graphene layer to a holey carbon grid ( Figure 1C). This motion could be further decreased by a factor of~3 when a holey gold-foil-coated gold grid was covered with a pristine graphene film, taking advantage of the high electrical conductivity and mechanical stiffness of both the pristine graphene and gold films (Russo and Passmore, 2014b;Naydenova et al., 2019). In addition, the graphene lattice can be used as a fiducial to potentially improve the movie alignment (Palovcak et al., 2018). We note that Naydenova et al. (2020) recently designed a new type of all-gold grid called HexAuFoil, which can decrease BIM to less than 1 Å by limiting the critical aspect ratio (hole diameter:ice thickness) to <11:1. However, this holey support cannot solve the air-water interface problem. The combination of the graphene film and this all-gold HexAuFoil support would be a new option to further improve the quality of cryo-EM sample preparation. 
Improving Particle Orientation Distribution The preferred orientation of particles is another common limiting factor in cryo-EM single particle analysis and can be induced by interaction between the protein particles and the air-water interface, the support film, or the neighboring particles. A preferred orientation causes a biased distribution of angular projections and yields an anisotropic resolution in the final reconstruction (Tan et al., 2017); this is especially severe when the particles have low or no symmetry. To address the issue of air-water interface-induced preferred orientation, in addition to the above approaches of minimizing the interaction between the particles and the air-water interface, methods for altering the properties of the supporting film or the air-water interface, including plasma treatment in the presence of N-amylamine (Miyazawa et al., 1999;da Fonseca and Morris, 2015;Nguyen et al., 2015), the addition of polylysine (Lander et al., 2012;Zang et al., 2016), and the use of self-assembled monolayers (Meyerson et al., 2014), have been tested on specific types of samples. As discussed above, the graphene grids can efficiently yield sufficiently thin ice; therefore, using graphene grids, more particles with different orientations can be selected, even for views with less contrast that are possibly ignored in holey carbon grid samples. In addition, graphene grids can efficiently keep particles away from the air-water interface and thus allow more orientations of particles. To address the existence of a preferred orientation, Tan et al. (2017) developed a data collection strategy of tilting the specimen to remedy the anisotropic resolution problem, and they successfully applied this strategy to determine the highresolution structures of the influenza hemagglutinin trimer and the large ribosomal subunit assembly intermediate. However, additional issues occurred for the dataset collected at the tilted angle, which included thicker ice, focus gradient, and increased specimen drift. These new problems, especially increased specimen drift, degrade both the quality of the micrographs and the speed of data collection, thereby limiting the potential of this data collection strategy for high-resolution structure determination. These problems of the tilt data collection strategy can be mitigated using graphene grids. First, a much thinner and uniform vitrified ice can be prepared, thus minimizing the concerns about the increase in the thickness of the ice for tilted specimens. Second, since most protein particles are adsorbed onto the layer of the graphene-water interface and sitting on the same plane, the focus gradient can be more accurately estimated, and CTF estimation can be performed more precisely. Third, more importantly, with their high electric conductivity and mechanical stiffness, graphene grids can effectively reduce BIM even under the condition of specimen tilting. Hence, a tilt data collection strategy using graphene grids would be a better solution for coping with preferred orientation problems in high-resolution structural studies ( Figure 1D); the work of Patel et al. (2021) showed that GO-coated gold foil grids could be used to collect tilted data of higher quality than that collected using a holey carbon grid. In particular, graphene grids can prevent the contact of protein particles with the air-water interface but cannot solve the preferred orientation problem due to the interaction between the particles and the graphene-water interface. 
The tilt data collection strategy with graphene grids would be necessary for many specimens. Recently, we developed a new type of grid based on a 2D crystal HFBI film that can adsorb protein particles by means of electrostatic interaction to protect particles from the air-water interface and play a role similar to that of a graphene grid, with minimal background, and to help form thin enough ice. FUNCTIONALIZATION OF GRAPHENE GRIDS Monolayer graphene is an ideal support for cryo-EM studies in comparison with holey carbon films due to the above advantages. To address the new potential for preferred orientation arising from the interaction between protein particles and the graphene-water interface and to increase the affinity of the graphene support for specific molecules, there have been many developments involving functionalizing graphene to regulate the interaction between the support and the protein particles. The first approach is functionalizing a graphene film using plasma treatment (Figure 2A). Naydenova et al. (2019) covalently functionalized graphene films with different organic molecules in low-energy helium plasma. The amine covalently modified graphene grid could efficiently improve the orientation distributions of 30-S ribosomal subunits in comparison with those associated with the hydrogen-plasma treated graphene grid. The second approach is to oxidize graphene to GO and then chemically modify GO (Figures 2B-E). Wang et al. (2019) established an affinity functionalization approach inspired by covalent bond formation between SpyCatcher and SpyTag (Figure 2B). They first anchored an amino-PEG-alkyne linker to the GO grid and then coupled an azide PEG spacer that was linked to SpyTag or SpyCatcher. Therefore, the chemically functionalized GO grid had a general affinity for biomolecules fused with either SpyCatcher or SpyTag. FIGURE 2 | Functionalization of graphene grids. (A) the graphene film can be covalently functionalized with different organic molecules in a low-energy helium plasma (Naydenova et al., 2019). (B) the chemically functionalized graphene oxide (GO) grid has a special and general affinity for biomolecules fused with either SpyCatcher or SpyTag. (C,D) GO grid functionalized with amino groups or PEG-amino groups (Wang et al., 2020b). (E) the bioactive graphene grid can selectively capture His-tagged samples with the introduction of Ni-Nα,Nα-dicarboxymethyllysine groups onto the graphene surface. (F) the antibody-coated grid has a high and specific affinity for the target protein samples (Yu et al., 2016b). In addition, the presence of a flexible PEG spacer not only kept particles away from any surface (air-water interface and GO-water interface) but also allowed enough freedom to yield particles with different orientations. Wang et al. (2020b) also reported that the GO surface can be functionalized with amino groups or PEG-amino groups (Figures 2C,D). They found that the amino-GO and PEG-amino-GO grids showed better hydrophilicity, more protein adsorption, and better orientation distribution than the original GO grids. Liu et al. (2019) modified monolayer graphene by introducing Ni-Nα,Nα-dicarboxymethyllysine (Ni-NTA) onto the surface, and this functionalized graphene grid could specifically capture His-tagged proteins (Figure 2E), which can adsorb purified protein particles directly from cell lysates (Benjamin et al., 2016). 
The third approach is to use reduced GO for high-resolution imaging . Compared to GO supports, reduced GO films were shown to have better electrical conductivity and a smaller interlayer space, which was proven to protect protein particles from the air-water interface and to facilitate the determination of the high-resolution structure of proteins with molecular weights smaller than 100 kDa . The fourth approach is similar to a previously developed affinity grid used for cryo-SPIM (cryosolid phase immune electron microscopy) (Yu et al., 2016b). The affinity grid is made by immobilizing antibodies on the support film of the grid ( Figure 2F) and has a high affinity for the target protein complexes based on the antigen-antibody interaction; this grid can be employed to adsorb purified protein particles directly from cell lysates. Yu et al. (2016b) demonstrated the feasibility of the affinity grid for studying various biological samples (including low abundance samples), whether purified or not. The affinity grid has also been successfully applied to study the morphology of pathogens such as human viruses (Lewis et al., 1988;Lewis, 1990;Lewis et al., 1995). The cryo-EM structure of Tulane virus with a low yield could be successfully determined to 2.6 Å using the affinity grid approach (Yu et al., 2016a). Considering the superior qualities of graphene supports (less background and high electrical conductivity) in comparison with that of carbon films, functionalizing graphene supports by immobilizing antibodies has great potential for wide application in studying the high-resolution structures of many challenging specimens. PRODUCTION OF GRAPHENE GRIDS With various advantages of graphene grids for cryo-EM study, many efforts to make reproducible and high-yield production of high-quality graphene grids have been performed in recent years. The nonuniform and low coverage as well as the surface contamination need to be efficiently solved during the production of graphene grids. Fabrication of Graphene Grids Naturally hydrophilic GO is easy to obtain at low cost and has already been applied in cryo-EM studies (Pantelic et al., 2010;Palovcak et al., 2018;Patel et al., 2021). However, its propensity for fragmentation and self-aggregation tends to produce nonuniform, mutilayered coverage of the grid. In contrast, a large area of continuous monolayer pristine graphene can be grown on a metal substrate, such as a copper foil, by chemical vapor deposition (CVD) (Li et al., 2009a;Novoselov et al., 2012). However, the lack of a method for transferring monolayer pristine graphene to a grid with a high coverage rate, while avoiding contamination, is a major bottleneck in the preparation of high-quality graphene grids. Three types of methods have been developed to make graphene grids. The first method is transfer-free; a holey carbon grid is placed on the top of the graphene film, isopropanol is used to facilitate the adherence of the graphene film to the holey carbon foil by solvent wetting, and then an etchant, such as FeCl 3 , is used to remove the unwanted copper support of the graphene monolayer ( Figure 3A) (Russo and Passmore, 2014a;de Martin Garrido et al., 2021). The second method uses an organic layer to assist the transfer of the graphene film ( Figure 3B and Table 1). D'Imprima et al. (2019) used polymethyl methacrylate (PMMA) to help transfer a graphene film to a grid. Han et al. (2020) chose a thin layer of methyl methacrylate (MMA) as the support during the transfer process. 
Warm acetone was used to dissolve and remove PMMA and MMA. PMMA contains carbonyl functional groups and has a strong noncovalent affinity with graphene (Leong et al., 2019), therefore resulting in a significant residue on the film. Compared to PMMA, less MMA was left on the graphene film after the transfer process due to its lower molecular weight. Naydenova et al. (2019) used collodion polymer to assist in the transfer of a graphene film onto a gold-foil-coated gold grid, where the collodion can be removed by dipping the grid into amyl acetate, 2-ethoxyethanol, chloroform, acetone, and isopropanol solvent in order. Compared to PMMA, the residual contamination, for example, nitrocellulose, could be circumvented by the combination of solvent cleaning and plasma treatment. Paraffin is a white or colorless, soft, solid wax made from saturated alkanes (Speight, 2020). Leong et al. (2019) developed an interesting approach that used a paraffin layer to achieve a residue-free and flattened transfer of a graphene film. Paraffin is adsorbed on the surface of graphene through noncovalent interactions and can be solubilized completely by organic solvents, such as hexane (Leong et al., 2019), or removed thermally (Qu et al., 2019). The difference in thermal expansion between graphene and its metal substrate is the source of the formation of graphene wrinkles (Deng and Berry, 2016;Wang et al., 2017), which can be efficiently avoided using paraffin-based transfer. Paraffin has a higher thermal expansion coefficient than PMMA (Ohashi and Paffenbarger, 1966). At the elevated temperature of 40°C, the paraffin film thermally expands and hence stretches the wrinkled graphene (Leong et al., 2019). Thus, in comparison with the PMMA-based transfer method, the paraffin-transferred graphene film is smooth and homogeneous and shows enhanced electric conductivity closer to its intrinsic characteristic (Leong et al., 2019). The third method is the use of direct etching to form the integral graphene grids without any transfer procedure (Figure 3C) (Aleman et al., 2010;Zheng et al., 2020). A large sheet of graphene was grown on a copper substrate using the CVD method, and the backside of the copper substrate was selectively etched using a photoresist-assisted method to make the copper foil. As a result, ultraclean graphene grids were prepared. Aleman et al. (2010) developed this method, but the resulting grids exhibited problems with amorphous carbon and iron oxide contamination. Zheng et al. (2020) adopted this method to remove amorphous carbon contamination, and they chose Na2S2O8 as the etchant to dissolve unnecessary parts and then prepared cleaner graphene grids. Clean graphene is intrinsically hydrophilic and can form a strong H-π interaction with water molecules (Voloshina et al., 2011;Hamada, 2012). However, the graphene surface can easily adsorb hydrocarbons from ambient conditions in a short period of time. Different kinds and concentrations of hydrocarbons (alkanes, alkenes, alcohols, and aromatics) exist in the air (Millet et al., 2005). Schweizer et al. (2020) found that a thin and continuous recontamination layer appeared on freshly cleaned graphene after it was exposed to air for 5 min. The wettability of graphene can be reflected using the water contact angle (WCA). 
FIGURE 3 | Fabrication of graphene grids. (A) a transfer-free method for preparing graphene grids. This method uses isopropanol to adhere a perforated carbon film onto a graphene film and then uses etchants, such as (NH4)2S2O8, Na2S2O8, and FeCl3 (Ullah et al., 2021), to etch away the copper substrate (Russo and Passmore, 2014a;de Martin Garrido et al., 2021). (B) organic molecule-assisted transfer method for preparing graphene grids. After the metal substrate, such as copper, is etched, different organic membranes, such as PMMA (D'Imprima et al., 2019), MMA (Han et al., 2020), collodion polymer (Naydenova et al., 2019), and paraffin (Leong et al., 2019;Qu et al., 2019), can be used to support the transfer of the graphene film to the grid and then can be dissolved using organic solvents, such as acetone. (C) direct etching method for fabricating graphene grids. The photoresist is used to make a pattern on the metal substrate, which is then selectively etched to complete the fabrication of graphene grids (Zheng et al., 2020). The WCA of monolayer graphene grown on a copper substrate using CVD and transferred to a SiO2 substrate was reported to be 90.4° (Kim et al., 2011), which is similar to the WCA (84°-86°) of graphite (Morcos, 1970;Werder et al., 2008). Li et al. (2013b) reported that as-prepared graphene grown on a copper substrate was surprisingly hydrophilic, with a WCA of 44°. When the graphene was exposed to air, its WCA quickly increased to 60° within 20 min and plateaued at 80° after 1 day. They concluded that hydrophobic molecules from the air could accumulate on the surface of graphene, resulting in a change in the WCA and making graphene hydrophobic. These airborne contaminants could be partially removed using thermal annealing, plasma treatment, and ultraviolet-O3 treatment. For thermal annealing, exposure to an elevated temperature of 550°C for a relatively long time was needed to reduce the hydrocarbon contaminants (Li et al., 2013b); these contaminants would damage the graphene by introducing defects (Cancado et al., 2011). Plasma treatment is commonly used to increase the hydrophilicity of TEM grids (Isabell et al., 1999). However, conventional plasma cleaning using air, oxygen, or argon would quickly destroy the monolayer graphene within seconds. Russo and Passmore (2014a) developed a low-energy hydrogen-plasma treatment to make a graphene monolayer hydrophilic without significant damage. Ultraviolet-O3 treatment can also effectively remove airborne contaminants at a slow and controllable rate using ozone gas oxidation (Han et al., 2020). To increase the hydrophilicity of graphene grids, D'Imprima et al. (2019) developed a noncovalent chemical doping method by coating the graphene surface with the compound 1-pyrene carboxylic acid (1-pyrCA) via π-π interactions. Their method preserved the pristine graphene structure without removing the hydrocarbon contaminants. In addition, when the concentration of 1-pyrCA was high, an extra background was introduced. Lin et al. (2019) found that another type of amorphous carbon contamination could be introduced during CVD growth, called CVD-induced contamination. Cu is the catalyst that decomposes hydrocarbons (carbon precursors), forming sp2 crystalline carbon. With increasing graphene coverage, the graphitization process slows, and the formation of amorphous carbon becomes preferential (Robertson, 1996;Li et al., 2009b). Therefore, sufficient Cu catalytic activity is important during CVD growth. 
Cu foam mediation, owing to its high specific area, can furnish enough Cu vapors to decompose hydrocarbons and consequently restrain the formation of amorphous carbon (Lin et al., 2019). When the formation of sp3 amorphous carbon is suppressed, clean graphene can be obtained. This contamination can also be removed by posttreatment with CO 2 developed by Zheng et al. (2020). Since the monolayer graphene is very thin (3.4 Å), an extra support layer is needed when the graphene surface is isolated from the metal substrate. The PMMA layer is still the most widely used material for graphene transfer (Gao et al., 2021). However, polymer contamination is a major problem affecting the intrinsic properties of graphene. Lin et al. (2012) developed a thermal annealing method to remove PMMA contamination with two annealing steps. First, the layer of PMMA-A (PMMA facing the air) is decomposed at~160°C, and then, the layer of PMMA-G (PMMA facing the graphene) is removed at~200°C. Although annealing is an easy method for removing polymer contamination, there is still extensive PMMA residue on the graphene surface, even when annealing up to 700°C is performed, with the risk of graphene breakage. More importantly, the decomposition of PMMA is a kind of complex radical chain reaction. The radicals generated during annealing might covalently interact with the graphene defects, making PMMA residuals harder to remove. Organic solvents, such as acetone, do not work in this situation. In the future, searching for new organic molecules that do not require postannealing to decompose and are easily removed using organic solvents for use in the polymer-assisted transfer method is important. CONCLUDING REMARKS The structure of apoferritin has been resolved at atomic resolution with the development of new hardware, including cold field emission guns, monochromators, aberration correctors, and the latest generation camera coupled to a new energy filter (Nakane et al., 2020;Yip et al., 2020), which indicates a new era of cryo-EM. However, sample preparation remains the major challenge for highresolution structural determination using cryo-EM. Although holey carbon grids are still commonly used, developing other types of better grids to solve the problems of poor particle distribution, preferred orientation, air-water interface, beam-induced motion, etc., has been increasingly important to making cryo-EM more successful and efficient. Monolayer graphene grids, with minimal background, have become a promising approach to solve these problems and offer the opportunity to reveal near-atomic structures of proteins with a small molecular weight (<100 kDa), low concentration, and even a transient intermediate state. For the samples that can be solved to a medium-high resolution using a holey carbon grid, the use of a graphene grid can significantly reduce the beam-induced motion, prepare uniform and thinner ice, and thereby increase the possibility of higher resolution. As of now, most graphene grids for cryo-EM studies are prepared by researchers themselves and thus often are associated with poor reproducibility, a lower coverage rate, cleanliness problems, etc. With the development of more scalable and robust methods for fabricating high-quality and ultraclean graphene grids, further commercialization will be possible, and thus, monolayer graphene grids as well as different functionalization treatments will become widely used in the cryo-EM community and make cryo-EM sample preparation more successful and reproducible. 
AUTHOR CONTRIBUTIONS HF wrote the manuscript and prepared the figures. FS edited the manuscript and figures, and supervised the work. All authors listed made a substantial, direct, and intellectual contribution to the work and approved it for publication. FUNDING This work was equally supported by grants from the National Natural Science Foundation of China (31925026), Ministry of Science and Technology of China (2021YFA1301500), and Chinese Academy of Sciences (XDB37040102). This work was also supported by the Key-Area Research and Development Program of Guangdong Province (2020B0303090003).
2022-07-13T13:14:14.360Z
2022-07-13T00:00:00.000
{ "year": 2022, "sha1": "014587e08c157796ebc51f35a5afbbb743b3c09a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "014587e08c157796ebc51f35a5afbbb743b3c09a", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
213405987
pes2o/s2orc
v3-fos-license
Learning Make A Match Using Prezi in Elementary School in Industry 4.0 This research is motivated by the development of Industry 4.0 demanding innovation in the learning process in the classroom, but learning outcomes are still low because the teacher still uses a conventional learning model. The purpose of this study was to determine the effect of make-a-match cooperative learning model using Prezi on students' cognitive abilities. This research is quasi-experimental in a non-equivalent control group design. The sampling technique uses cluster random sampling. This research was conducted in the fourth grade of Elementary School 13, Tanjung Barulak, Tanah Datar, Indonesia. The results showed that there is an influence of make-a-match cooperative learning model using Prezi in elementary school. Research implication is as an additional reference for teachers and education practitioners in developing learning models in the classroom during Industry 4.0. I. INTRODUCTION Industry 4.0 is an era of using technology in social life process [1] [2]. Individual must not only recognize technology in life but also must understand technology as a tool to help daily life process. Industry 4.0 has a huge impact on all human life. This impact covers all fields of life. This impact also affects education field [3]. Education in industry 4.0 gets a challenge to produce graduates with the ability to compete globally. This ability includes thinking ability such as the cognitive ability. To answer these challenges, it is necessary to transform education from conventional learning systems to modern learning systems. This learning transformation requires innovation by a teacher as the main actor in the learning process in the classroom [4]. Teachers must innovate to create interesting learning and improve the quality of learning in the classroom. Improved learning in the classroom can be done in various ways such as using learning models and instructional media [5] [6] [7] [8]. The learning model is a design of the procedure process of teaching and learning activities illustrated as a whole from beginning to end. The learning model is a conceptual framework that described a systematic procedure in organizing learning experiences to achieve learning goals [9]. A good learning plan should have an interesting learning model thus goals can be achieved. One interesting learning model is cooperative. Cooperative learning model emphasizes student activities in groups of 4-5 people to discuss or work on assignments given by the teacher. Cooperative learning model is a learning model where students learn and work in small groups of 4-6 people collaboratively thus it stimulates students to be more passionate in learning [10]. Each group member must have responsibility for their learning and motivate others in learning or group [11]. Cooperative learning combines several important elements that can improve learning outcomes namely: positive interdependence, individual accountability, faceto-face interaction, social skills, and group processing [12]. Besides, this student-centered cooperative learning is facilitated, guided, directed and trained by teacher as explained in the Law of the Republic of Indonesia No. 14 of 2005 concerning Teachers and Lecturers Chapter I Article 1 (2009) that teacher is a professional educator with the main task of educating, teaching, guiding, directing, training, and evaluating students [13]. 
The teacher becomes a facilitator in all student learning activities hence learning becomes more directed towards the goals to be achieved. Cooperative learning models are very diverse, including make-a-match learning model. Make-a-match means finding a partner. This learning model developed by Lorna Curran in 1994 that emphasizes student activities to find a partner while learning about a concept or topic in a pleasant atmosphere [14]. The advantages of a paired learning model (make a match) are: (a) increasing student participation, (b) more opportunities for the contribution of each group member, (c) interaction is easier and faster to create [15]. Implementation of make-a-match cooperative learning model is by using question cards and answer cards, where students are given question cards and answer cards [16]. This make-a-match model can be used in thematic learning in the 2013 curriculum. The 2013 curriculum is also called an integrated curriculum because it combines various elements of subjects [17]. A learning process in the 2013 curriculum is done by students themselves, not by teachers. The teacher only functions as a guide and facilitator. Cooperative learning provides opportunities for students to collaborate with other students in structured assignments guided and facilitated by the teacher [18]. Thematic learning is learning that uses one specific theme to provide meaningful experiences for students. Thematic learning integrates several specific subjects in one theme to provide meaningful experiences for students [19]. Even though the teacher has applied the 2013 curriculum, there are still many teachers who do not understand the 2013 curriculum itself. Besides, to the concept and understanding of applying the 2013 curriculum, teachers find it hard to develop learning with learning techniques and models [20]. Cognitive ability is a benchmark used to see a level of student understanding of learning materials [21]. Low learning outcomes are also influenced by teachers who teach on certain themes relying only on teacher books and student books. A teacher does not develop learning with innovative learning models. Students appear to lack participation in the learning process. Therefore, the selection of learning models is very important to support better learning outcomes. One of them is a make-a-match cooperative learning model. Learning using this model provides opportunities for students to actively involved in the thinking process and learning activities. This model can also build social relationships (positive interactions in contributing ideas, cooperative attitudes, respecting opinions of friends) in groups to work on tasks to achieve learning objectives. Also, to the learning model, instructional media play an important role in the process of improving learning quality. Learning media are tools for teaching and learning process [22]. Everything can be used to stimulate the mind, feelings, attention, and abilities of students thus it can encourage the learning process. The existence of media can facilitate learning in Industry 4.0. One of the learning media using technology is a Prezi application. Media Prezi is an internet-based software or software as a service (SaaS) used as a media presentation and a tool to explore ideas on a virtual canvas [23]. Prezi is an application that can display presentations virtually by sharing many features such as a more varied appearance, many choices of themes, using the ZUI method, easy to use and easy to share [24]. 
This makes Prezi a learning medium that is in harmony with Industry 4.0. Responding to the challenge that teachers must innovate in the classroom learning process, the researchers combined the make-a-match learning model with Prezi as the presentation medium. Therefore, this study aims to determine the effect of the make-a-match cooperative learning model using Prezi learning media on the cognitive abilities of elementary school students. II. METHOD This research uses a quantitative approach. Quantitative research collects data and analyzes them in numerical form [25]. This type of research aims to see the effect of a treatment given to the sample by using a quasi-experimental, control group design [26]. The sample consisted of fourth-grade students at Elementary School 13 Tanjung Barulak. The sample was selected using a probability sampling technique, specifically cluster sampling, in which samples are chosen not as individuals but as groups, regions, or groups of subjects who naturally gather together [27] [28] [29]. The research instrument is a test used to measure something under certain circumstances with predetermined rules [15]. The instrument was a test of 36 objective questions with 4 answer choices. The questions are arranged in the order in which the material is taught. The class is divided into two, namely a control class and an experimental class. The experimental class consisted of students who learned using the make-a-match cooperative learning model with Prezi media. The control class consisted of students who learned using the make-a-match cooperative learning model without Prezi media. III. RESULT AND DISCUSSION The study began with prerequisite tests, namely normality and homogeneity tests. The normality test aims to see whether the data are normally distributed or not [30]. The normality test results can be seen in Table 1. Based on the table, Lcount < Ltable, so it can be concluded that the two data sets are normally distributed. After that, a homogeneity test was performed. The homogeneity test aims to see whether the variances of the samples are homogeneous or not [31]. The results can be seen in Table 2. The homogeneity test results for the posttest values indicate that Lcount is smaller than Ltable, so it can be concluded that the experimental class and the control class have homogeneous variances. After the homogeneity test, a hypothesis test was performed using a t-test, since the data were quantitative interval data, normally distributed, and the two samples were independent. The criterion of the t-test is that if tcount ≥ ttable, then Ha is accepted and Ho is rejected; if tcount < ttable, Ha is rejected and Ho is accepted. The results can be seen in Table 3. The post-test hypothesis results show that tcount ≥ ttable, so it can be concluded that the make-a-match cooperative model using Prezi media has a significant influence on the cognitive abilities of elementary school students. After the hypothesis was tested, the N-gain test was carried out. The N-gain test aims to show the increase in students' cognitive abilities before and after treatment. The results can be seen in Table 4. The gain test results show that the average cognitive scores of students in both the experimental and control classes increased. 
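To illustrate the two statistics used above, the following is a minimal sketch in Python of an independent two-sample t-test on the post-test scores and of the normalized gain, assuming the common Hake convention <g> = (post − pre)/(max − pre) with a maximum score of 100; all scores are invented and are not the study's data. Note that the sketch reports a p-value, whereas the comparison above is made against a t-table value; these are two equivalent ways of applying the same test.

```python
# Sketch (invented scores): independent t-test on post-test results and N-gain per class.
import numpy as np
from scipy import stats

# Hypothetical pre-test and post-test scores (maximum score assumed to be 100).
exp_pre, exp_post = np.array([55, 60, 50, 65, 58, 62]), np.array([85, 88, 80, 90, 84, 86])
ctl_pre, ctl_post = np.array([54, 59, 52, 63, 57, 60]), np.array([70, 72, 68, 75, 71, 73])

# Independent two-sample t-test on the post-test scores (H0: equal class means).
t_value, p_value = stats.ttest_ind(exp_post, ctl_post)

def n_gain(pre, post, max_score=100):
    """Average normalized gain, <g> = (post - pre) / (max_score - pre)."""
    return float(np.mean((post - pre) / (max_score - pre)))

print(f"t = {t_value:.2f}, p = {p_value:.4f}")
print(f"N-gain (experimental) = {n_gain(exp_pre, exp_post):.2f}")
print(f"N-gain (control)      = {n_gain(ctl_pre, ctl_post):.2f}")
```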
Make-a-match learning is cooperative learning in which students work in heterogeneous groups [32]. Make-a-match learning asks students to look for matching pairs of cards before a deadline, and students who can match their cards are given points [33]. During the research, it could be seen that students were very enthusiastic, so the learning process became lively and far from dull. This is because make-a-match learning is carried out collaboratively between students and is a form of active learning, which can generate student excitement in the learning process [34] [35]. In addition, during the learning process students were seen working together to find a suitable pair of answers, supporting each other to find the matching pair. This is because make-a-match can increase collaboration between students [36]. Make-a-match is also effective in saving a teacher's time in delivering the lesson [37]. Students are active in exploring information related to the ongoing lesson. Another factor supporting the improvement of students' cognitive abilities is that, with active learning to find the right answer pair, students understand the material more easily, owing to the encouragement of students' motivation to understand correct information [38]. In addition, the use of Prezi is very helpful in delivering information in the learning process. Prezi is software that can display or present material in various ways, so it does not make students bored while learning. Prezi was very important in improving the cognitive abilities of students in this study. Prezi makes it easier for students to understand the material explained by the teacher [39]. This is because Prezi can present the material as a whole or in detail, so the material can be viewed simultaneously and thoroughly. In Prezi, the presentation is carried out as a whole on one screen [40]. This has a positive effect on helping students remember the material presented previously. In addition, Prezi features can combine text, images, and videos. This is very suitable for the concrete operational stage of child development, in which students learn from concrete or tangible objects [41]. Moreover, Prezi can present a variety of learning activities to increase student motivation [42]. Prezi is very appropriate for Industry 4.0 because it changes teacher-centered learning into student-centered learning. Therefore, the combination of make-a-match and Prezi can answer the challenges of the Industry 4.0 era, making learning innovative and interesting and improving the cognitive abilities of elementary school students. IV. CONCLUSION The results showed that there is a significant influence of the make-a-match cooperative learning model using Prezi on the cognitive abilities of elementary school students.
2020-01-02T21:47:39.057Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "8eef6039638b5ce089b33051e560d697098d4236", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/icet-19.2019.107", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "059f84d016b4ad4b2c23e1a34a1909fd4081c278", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Computer Science" ] }
220842154
pes2o/s2orc
v3-fos-license
Pyropheophytin a in Soft Deodorized Olive Oils Mild refined olive oil obtained by neutralization and/or by soft deodorization at a low temperature and its blending with extra virgin olive oil (EVOO) is not allowed and is difficult to detect. Chlorophyll derivatives, pheophytins and pyropheophytin, and their relative proportions were proposed as parameters to detect such processes. The objective of this study is to determine changes in EVOO, in terms of pheophytins and pyropheophytin, occurring after several well-controlled mild refining processes. The changes on those chlorophyll pigments due to the processes depend on the temperature, stripping gas, acidity and oil nature. The data obtained show that, at temperatures below 100 °C, the rate at which pyropheophytin a is formed (Ra) is lower than the rate at which pheophytins a+a’ disappear (Ra+a’). As a consequence, the Ra+a’ and Ra ratios are considered to be directly linked to pheophytins a+a’ decrease instead of to pyropheophytin a formation. Stripping gas very slightly affects the transformation of the chlorophyll pigments; actually both acidity and N2 enhance the increment in the Ra+a’ and Ra ratios. In relation to the oil nature, the higher the initial pheophytin a+a’ content, the higher the increase in the Ra+a’ and Ra relations. Introduction Olive tree (Olea europaea) is one of the most expanded crops in the world. This has repercussions not only regarding the nutritional point of view but also with respect to the economy of, mainly, Mediterranean countries. Both the International Olive Council (IOC) and the European Union consider virgin olive oil (VOO) as just the oil obtained from the fruit of the olive tree solely by mechanical or other physical processes under conditions, particularly thermal conditions, that do not lead to alterations in the oil, and which has not undergone any treatment other than washing, decantation, centrifugation, and filtration [1,2]. If the quality of the oil does not meet a number of standards [3], it cannot be considered as 'edible' and must be refined. Refined olive oil (ROO) is a flavorless, colorless product that cannot be sold by retail and that has to be mixed with genuine VOO. Controlled olive oil blends are available in the market under the designation of olive oil (OO) composed of refined and virgin olive oils. Blends of olive oil with other vegetable oils are available too [4]. However, the high market price that VOO can reach has made it target of both mislabeling and illegal blending, and much work has been done to uncover such practices, including that on the detection of the correct proportion of olive oil in legal blends with seed oils [5,6]. Regarding adulterations, they may consist, for instance, of the addition of hazelnut oil, of oil obtained from the second extraction of the olive paste (olive pomace oil), or of soft deodorized olive oil. Hazelnut oil can be detected within an interval of 20-25% through the determination of the difference between the actual and the theoretical content of triacylglycerols (TAG) with equivalent carbon number Chemicals All chemical reagents were of analytical grade. The standard of chlorophyll a was purchased at Sigma-Aldrich (Merck KGaA, Darmstadt, Germany). Sodium hydroxide pellets, phenolphthalein, and diethyl ether, were from Panreac Química, S.A.U. (Castellar del Valles, Barcelona, Spain). Acetone and methanol were from Romil Chemicals Ltd. (Waterbeach, Cambridge, GB, UK). 
The deionized water used was obtained from a Milli-Q 50 system (Millipore Corp., Burlington, MA, USA). Samples Four Spanish monovarietal olive oils (Hojiblanca, Picual and two Manzanilla samples) were purchased directly from producers. A second set consisting of six olive oils of several origins and qualities (L-1, M-1, H-1, L-2, M-2 and H-2), with no varietal specifications, was purchased from local markets or directly from producers. Qualitative Analysis of Chlorophyll Pigments Several methods have been proposed to determine chlorophyll pigments in olive oils, including rapid and routine techniques [28,30,31]. In this work, we follow the method described by the International Standard Organization [32] and the German Society for Fat Science [33], based on the procedure previously described by Gertz and coworkers [29]. This method is currently one of the most widely used for the determination of phy and pyphy. Briefly, 300 mg of the oil samples is weighed into a 4-mL vial and introduced, with the help of 1 mL n-hexane, into a 1-g silica solid phase extraction (SPE) column, previously activated with 5 mL hexane. Subsequently the vial is rinsed twice with 1 mL n-hexane and added onto the column. A first fraction is eluted with 5 mL of a mixture consisting of n-hexane:diethyl ether (90:10, v/v) and is discarded. A second fraction is eluted with 5 mL acetone and collected. Then, it is evaporated in a rotary evaporator and re-suspended in 0.5 mL acetone for its subsequent analysis using a high-performance liquid chromatography-diode array detector (HPLC-DAD). The HPLC analyses of the chlorophyll pigments were carried out with an HP Agilent 1100 Liquid Chromatograph (Agilent Technologies, Santa Clara, CA, USA) equipped with a DAD. Acquisition of data was done with the Agilent ChemStation for the HPLC System program. The conditions for the HPLC assays were: Waters Spherisorb ODS2 C18 column (250 × 4.6 mm internal diameter, 3-µm particle size) (Waters Ltd., Hertfordshire, UK), 20-µL injection volume through a Rheodyne Manual Sample Injector Valve (Idex Health & Science LLC, Rohnert Park, CA, USA), and isocratic elution conditions water:methanol:acetone (4:36:60, v/v/v), at a flow rate of 1 mL/min. Sequential detection was performed at 410 nm. Quantitative Analysis of Chlorophyll Pigments Pigments were quantified with a calibration curve obtained by least-squares linear regression analysis. The concentration range of the curve fitted the expected level of chlorophyll in VOO. We proceed as follows: From a 0.01% chlorophyll standard solution, we prepared five different diluted solutions in acetone (concentrations between 0.1 and 0.5 mg/kg) and we injected them, in duplicate, in the HPLC system. Sensitivity and Method Repeatability Tests to assess the repeatability of the method and trials to establish the limit of detection (LOD) were performed according to published procedures [34]. The LOD can be defined as the minimum concentration of an analyte that can be detected, although not necessarily quantified, with an acceptable confidence through a given analytical procedure. These concentration values should produce sharp, symmetrical analyte peaks with no tailing or shoulders and with a signal-to-noise ratio of at least 3. That is a concentration whose signal equals the blank signal (Y) plus three times (k = 3) its standard deviation (S): LOD = Y + k × S. 
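To make the quantification and detection-limit arithmetic concrete, the following is a minimal sketch in Python of the two steps just described: fitting the least-squares calibration curve from the five diluted standard solutions and estimating the LOD as the blank signal plus three times its standard deviation. All peak areas below are invented for illustration, and the helper names are ours, not taken from the original work.

```python
# Minimal sketch (invented numbers): least-squares calibration of the chlorophyll
# standard and a detection limit of the form LOD = Y + k*S with k = 3.
import numpy as np

# Hypothetical calibration points: HPLC peak area vs. standard concentration (mg/kg).
area = np.array([9.2, 18.5, 27.9, 36.4, 45.8])    # invented peak areas (arbitrary units)
conc = np.array([0.1, 0.2, 0.3, 0.4, 0.5])        # the five diluted standard solutions

# Fit concentration = intercept + slope * area, the same form as the curve in the text.
slope, intercept = np.polyfit(area, conc, 1)

def area_to_conc(peak_area):
    """Convert a measured peak area into a concentration in mg/kg."""
    return intercept + slope * peak_area

# Signal-domain detection limit from replicate injections of a low-level (blank-like)
# sample: LOD signal = mean signal (Y) + 3 * standard deviation (S).
blank_areas = np.array([0.9, 1.2, 1.0, 1.1, 0.8])  # invented replicate peak areas
lod_signal = blank_areas.mean() + 3 * blank_areas.std(ddof=1)

print(f"calibration: conc = {intercept:.5f} + {slope:.5f} * area")
print(f"LOD ~ {area_to_conc(lod_signal):.3f} mg/kg")
```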
In order to calculate the LOD, five olive oil solutions prepared at different dilutions from a sample with low chlorophyll pigments content were taken to the HPLC, their areas measured and the respective standard deviations calculated. The repeatability of the method was assessed with three VOO samples of different chlorophyll pigment concentrations (L-1, M-1 and H-1 with low, medium and high chlorophyll concentration, respectively). We determined the phy a+a' and of pyphy a concentrations (in mg/kg) together with the R a+a' percentage. Measurements were done in triplicate. The statistical analysis of the repeatability was carried out following the ISO 5725 Norm [35] and AOAC Regulation [36]. The statistical parameters used were: The statistical study of the results was carried out by one-way analysis of variance (one-way ANOVA) of a number of repeated samples. The minimum significant level was set at 5%. The analysis was performed using the SPSS 12.0 program (SPSS Inc., Chicago, WI, USA). Olive Oil Soft Neutralization Process We carried out the soft neutralization procedure using an aqueous sodium hydroxide solution at 12 % (w/v). In order to know the volume needed for the free fatty acid neutralization, we first determined the free acidity of the starting oil according to the method published by the IOC. This method drives to the calculation of the free acidity expressed as the percentage of oleic acid and its performance had already been tested according to the corresponding collaborative tests [37]. Next, we placed 10 ± 0.001 g of each olive oil sample in test tubes and added a volume of the 12 % (w/v) aqueous sodium hydroxide solution corresponding to the free acidity, plus a 5% excess (2 mL approximately). We shook the tubes for 20 min and then centrifuged them (10 min, 3000 rpm, 16 cm centrifugation diameter). On each case, we discarded the aqueous phase and washed the remaining oily phase with 5-6 portions distilled water for 5 min. We repeated this last step until we had made sure there were no free-soaps (the pink color of the phenolphthalein disappeared completely). Finally, we centrifuge them for 10 min in the described conditions. Olive Oil Mild Deodorization Procedure Soft deodorization is a technique utilized to eliminate unpleasant odors in olive oil, getting a matrix that keeps its chemical composition unaltered. We carried out the process under soft thermal conditions, vacuum, and a certain stripping agent (N 2 or Air), in a way that such gas passed for a given period of time through a volume of relatively hot oil at a low pressure. In order to do this, we prepared our own laboratory equipment, mimicking industrial conditions as much as possible. Such equipment consisted of the following parts: 2. Kitasato flask to prevent the sucking back of the sample. 3. Beaker with glycerine as thermal liquid and stirring magnet. 4. Vacuum pump with vacuum control. We took the olive oil samples through different mild deodorization processes (vacuum at 22.5 mmHg; 600 mL/min stripping gas), and studied the influence of the following factors (Table 1): 1. 4. Stripping gas (using hojiblanca variety: Treatment #3). Moreover, the effect of combining neutralization and deodorization was considered. In order to do that, three olive oils with a low, medium and high content of chlorophyll pigments (L-2, M-2, and H-2, respectively) were used. After neutralization with sodium hydroxide (Section 2.6) and filtering, oils were soft deodorized under N 2 , at 98 • C, for three hours. 
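As a rough illustration of the repeatability assessment described above, the sketch below computes ISO 5725-style parameters from one set of triplicate determinations: the repeatability standard deviation (s_r), its relative value (RSD_r), and the repeatability limit r, taken here as 2.8 × s_r, the usual convention. The triplicate values are invented and the exact parameter set is our assumption.

```python
# Sketch of ISO 5725-style repeatability statistics for one triplicate (invented values).
import numpy as np

replicates = np.array([10.8, 11.1, 10.9])   # hypothetical phy a+a' results, mg/kg

mean = replicates.mean()
s_r = replicates.std(ddof=1)                # repeatability standard deviation
rsd_r = 100.0 * s_r / mean                  # relative standard deviation of repeatability, %
r_limit = 2.8 * s_r                         # repeatability limit r (95% level convention)

print(f"mean = {mean:.2f} mg/kg, s_r = {s_r:.3f}, RSD_r = {rsd_r:.2f}%, r = {r_limit:.3f} mg/kg")
```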
Sensitivity and Method Repeatability The lowest detectable concentration of pyphy a was 0.07 mg/kg. The data obtained in two consecutive determinations of the same sample, using the same analytical method, did not differ by more than the value of 'r' (Table 2). From those data (RSDr = 0.34-6.59%, RSDr = 2.5-10%, and RSDr = 1.82-4.62% for phy (a+a'), pyphy a, and R a+a', respectively) the method may be considered to have good repeatability. Qualitative Analysis of Chlorophyll Pigments The selected conditions lead to the separation of the individual pigments. The HPLC chromatograms consist of a series of peaks, three of them well resolved, whose retention times appear within the range from 15 to 25 min (Figure 1). They correspond to phy b and b', phy a, phy a', and pyphy a. Afterwards, pyphy a' might also appear. Those peaks were identified according to the published bibliography [28,33]. Quantitative Analysis of Chlorophyll Pigments We calculated the analyte concentration corresponding to each peak (phy a, phy a', and pyphy a) using the chlorophyll calibration curve: concentration (mg/kg) = −0.00319 + 0.0111 × peak area. We used chlorophyll as a standard instead of pyphy a because pyphy a is not commercialized as such and its synthesis is laborious. Olive Oil Mild Deodorization Procedure As shown in Table 3, the olive oil samples under study presented a wide variation in their chlorophyll content and, as observed in previous studies on different olive varieties [38], phy a was always the dominant pigment, being particularly high in the case of Manzanilla 1. According to our experience over the last ten years, during which we have been analyzing 150 samples a year on average, such a value may be considered very high. However, we have to keep in mind that it is not possible to give an expected average value (and therefore a reference value) for this parameter, since the total content of chlorophyll compounds depends, among other things, on the characteristics of the starting samples and on the storage conditions [27]. Interestingly, the analyte concentrations seem to depend on the cultivar, which is the opposite of what previous researchers observed in studies in which a higher number of cultivars was considered [39]. Therefore, the small number of samples advises us to be cautious regarding such a statement. We are aware that our assertion may seem contradictory to what was observed in the cases of Manzanilla 1 and Manzanilla 2 (same cultivar but totally different results). In such a circumstance we have to take into consideration that the pigment composition and content of a certain oil is highly conditioned by the oil's initial quality, light exposure, temperature, etc., and not only by the cultivar. This is indeed a line of research to be focused on in our next endeavors, where a wider number of varieties is being systematically analyzed.
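For illustration, the calibration line quoted above can be applied and the R relations computed as in the following sketch. The peak areas are hypothetical, and the ratio definition used here (pyphy a as a percentage of the corresponding pheophytin pool, matching the usual pyropheophytin index) is an assumption, since the paper's exact formula for R a+a' and R a is not reproduced in this excerpt.

# Sketch: convert peak areas to concentrations with the calibration line quoted above and
# compute the R ratios. The ratio definition is an assumption made for illustration.

def conc_from_area(area):
    return -0.00319 + 0.0111 * area  # mg/kg, calibration curve from the text

def r_ratios(phy_a, phy_a_prime, pyphy_a):
    r_a_aprime = 100 * pyphy_a / (phy_a + phy_a_prime + pyphy_a)  # assumed R_a+a' (%)
    r_a = 100 * pyphy_a / (phy_a + pyphy_a)                       # assumed R_a (%)
    return r_a_aprime, r_a

# Hypothetical peak areas for phy a, phy a' and pyphy a.
phy_a, phy_ap, pyphy_a = map(conc_from_area, (950.0, 120.0, 80.0))
r1, r2 = r_ratios(phy_a, phy_ap, pyphy_a)
print(f"phy a={phy_a:.2f}  phy a'={phy_ap:.2f}  pyphy a={pyphy_a:.2f} mg/kg")
print(f"R_a+a'={r1:.1f}%  R_a={r2:.1f}%")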
Effect of Deodorization Time In order to consider the effect of the deodorization time, samples of monovarietal VOO hojiblanca, manzanilla (manzanilla 1 and manzanilla 2) and picual were subjected to different deodorization timespans, at 98 °C, using N 2 as a carrier gas (Table 1, Treatment #1), for which results are shown in Figure 2. Under such accelerated conditions, the rate of evolution per hour is around 50% for hojiblanca and 25% for the other cultivars, which is very high in comparison with the normal 5-6% evolution per year observed under non-accelerated conditions [39]. In all cases, there was a quick rise in R a+a' during the first hours of treatment, which slowed down later (Figure 2A). In the case of the hojiblanca cultivar, the R a+a' relation reaches 17% after around 2.5 h, whereas in the cases of picual, manzanilla 1 and manzanilla 2, at least 4.5 h are needed to exceed the 17% threshold. Such a 17% limit is the one proposed by the Australian and Californian regulatory bodies and corresponds to the maximum pyphy a content accepted for fresh EVOO [40,41]. Differences among cultivars (Table 3) may be due to the low initial pyphy a content, 0.70 mg/kg, in comparison with the phy a+a' presence, 10.90 mg/kg, observed in hojiblanca, which in turn gives an already higher initial R a+a' in comparison to the others. The evolution observed in our study agrees with that previously published, according to which the parameters of 100 °C and 60 min were considered the optima, since they allowed negative volatiles removal and low pyphy formation (11.83%) [21]. If the phy a' content is not taken into account, that is, only R a is calculated (Figure 2B), hojiblanca exceeds the 17% limit after around 1.5-2 h, whereas picual, manzanilla 1 and manzanilla 2 need 4-4.5 h. It is then clear that the R a relation may reveal the presence of soft deodorized oils better than the R a+a' relation does, meaning that phy a and pyphy a would emerge as key compounds to detect this kind of practice. In any case, the increase in the R a+a' and R a relations is due to the phy a+a' reduction and not so much to pyphy a formation. This may be due to the phy a+a' destruction caused by the deodorization conditions (98 °C), whereas pyphy a increases little after a certain time. We have to point out that we cannot expect an intensive phy a+a' destruction to be translated into an intensive pyphy a formation, since the concentrations of such derivatives do not keep a linear relationship. Actually, previous studies show how, after the thermal treatment of olive oils, the disappearance of phy a+a' did not only correspond to the formation of pyphy a (and therefore to a linear relationship) but also to that of three other products: 13²-OH-phy a, 15¹-OH-lactone-phy a, and a colorless derivative [27], giving a more exact glimpse of the fate of phy a+a'.
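The times quoted above for exceeding the 17% threshold can be estimated from a measured R series by simple linear interpolation, as in the following sketch; the time/R pairs below are hypothetical.

# Sketch: linear interpolation of the time at which an R series crosses the 17% threshold,
# as read off curves like those in Figure 2. The data points are illustrative only.

def time_to_threshold(times_h, r_values, threshold=17.0):
    for (t0, r0), (t1, r1) in zip(zip(times_h, r_values), zip(times_h[1:], r_values[1:])):
        if r0 < threshold <= r1:
            return t0 + (threshold - r0) * (t1 - t0) / (r1 - r0)
    return None  # threshold never reached in the measured window

times = [0, 1, 2, 3, 4, 5]                        # deodorization time, h
r_a_aprime = [6.0, 10.5, 14.8, 18.9, 21.0, 22.4]  # hypothetical R_a+a' (%)
print(f"17% exceeded after ~{time_to_threshold(times, r_a_aprime):.1f} h")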
Effect of Deodorization Temperature The effect of the deodorization temperature was studied with the monovarietal EVOOs hojiblanca, manzanilla 1 and manzanilla 2. In this case, oils were subjected to two-hour deodorizations at 50, 75, 100, 130, and 150 °C using N 2 as a stripping gas (Table 1, Treatment #2). Samples of hojiblanca and manzanilla 2 oils had relatively low initial concentrations of chlorophyll pigments (11.60 and 15.74 mg/kg, respectively), whereas, in the case of manzanilla 1, the pigment concentration was much higher (118.97 mg/kg). The results are shown in Figure 3. When one compares the R a+a' and R a relations between the three different samples, it is clear that the higher the initial pigment concentration, the higher the R a+a' and R a increments. Besides, increases in temperature lead to increases in both the R a+a' and R a proportions, the latter being steeper than the former, meaning that not all phy a+a' turns into pyphy a. Moreover, it is clear that there is not a linear correlation with temperature and that at temperatures below 100 °C the formation of pyphy a takes place slowly, as has already been observed before [21], although we demonstrate that a two-hour deodorization versus a one-hour timespan, as stated earlier [21], has no effect on pyphy a formation, temperature being the key factor. From 100 °C on, pyphy a formation goes up notably. Effect of Free Acidity The study of the influence of the free acidity on the chlorophyll pigments during deodorization was carried out on samples of VOO from the picual variety. This parameter was chosen because of its relationship with the oil's initial quality. Picual samples had 0.19% free acidity and were spiked with oleic acid in order to obtain aliquots with 2.0 and 5.0% free acidity. Samples were subjected to 2 to 5 h deodorizations at 98 °C, using N 2 as a stripping gas (Table 1, lines 11-13). According to the data obtained, the higher the acidity, the higher the increase in the R a and R a+a' relations, the effect being more pronounced when phy a' is left aside (Figure 4), since in this case the 17% limit is reached after 1.25-2 h from the highest to the lowest acidity, instead of 1.5-2.25 h, respectively. Therefore, it is clear that high acidity enhances phy a+a' losses and pyphy a formation. Furthermore, pyphy a formation is clearly bound to oil quality as expressed by its free fatty acid content, which contrasts with what is indicated in the literature, in which a prediction model focused on olive oil shelf life stated that even if pyphy a is strongly related to light exposure and storage temperature, it does not show any association with oil quality or its chemical composition [42].
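The preparation of the spiked aliquots mentioned above amounts to a simple mass balance; the following sketch computes the amount of oleic acid to add per 100 g of oil (the sample mass is an assumption made for illustration, and free acidity is treated as grams of oleic acid per 100 g of oil).

# Sketch of the oleic acid spike used to prepare the 2.0% and 5.0% acidity aliquots.

def oleic_spike(mass_oil_g, initial_acidity_pct, target_acidity_pct):
    # grams of oleic acid to add so the mixture reaches the target free acidity
    return mass_oil_g * (target_acidity_pct - initial_acidity_pct) / (100 - target_acidity_pct)

for target in (2.0, 5.0):
    grams = oleic_spike(100.0, 0.19, target)  # 100 g of picual oil at 0.19% free acidity
    print(f"add {grams:.2f} g oleic acid per 100 g oil to reach {target}% free acidity")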
Effect of the Stripping Gas The study of the influence of the carrier gas on the chlorophyll pigments was carried out on hojiblanca VOO. Those samples were subjected to 2 to 5 h deodorizations at 98 °C, using either N 2 or air as a stripping gas (Table 1, Treatment #3). The results are shown in Figure 5. When N 2 is used as a stripping gas, the R a+a' relation is around 4.6-5% higher than when air is chosen (Figure 5A), meaning that the 17% limit is exceeded after 2.0 h in the case of N 2 , and after 2.7 h if air is applied. After three hours, there is not a statistically significant difference in the R a+a' relation between the two stripping gases. The same tendency is observed for the R a relation, although the times to surpass the limit are 1.5 and 2.4 h, respectively (Figure 5B). Table 4 shows the results of applying neutralization, and neutralization followed by soft deodorization (3 h, 98 °C, N 2 stripping gas), together with the initial pigment contents in the samples under study. VOO sample L-2 possesses the lowest amount; therefore, pyphy a is not formed in a substantial way. Consequently, R a+a' and R a equal zero. After neutralizing VOO sample M-2, the phy a+a' and pyphy a contents decrease minimally, which translates into an increase in R a+a' and R a , although without substantial meaning. This is the other way round for sample H-2 but, as might be expected, the subsequent deodorization resulted in a phy a+a' decrease and a pyphy a increase, with the corresponding change in the R a+a' and R a relations. Table 4. Chlorophyll derivative concentrations (mg/kg) present in olive oil samples with low (L-2), medium (M-2) and high (H-2) pigment contents after different treatments: neutralization, filtration, and soft deodorization under N 2 , at 98 °C, for 3 h. R a+a' and R a (both in %) are also given. In no case do the R a+a' and R a relations exceed the 17% value established as the limit above which an oil may be suspected to be soft deodorized.
Conclusions In this pilot study, we observed that changes in the chlorophyll pigments due to the soft deodorization process depend on the temperature, the limit of which was 100 °C. Below this threshold, the rate at which pyphy a is formed is lower than the rate at which phy a+a' disappears. This indicates that, besides pyphy a formation, there exist parallel processes through which other non-detected compounds are formed. As a consequence, the R a+a' and R a relations are considered to be more directly linked to the phy a+a' decrease than to pyphy a formation. The stripping gas slightly affects the transformation of chlorophyll pigments; in fact, N 2 enhances the increment in the R a+a' and R a relations. Acidity also boosts the increment in the R a+a' and R a relations. Regarding the oil's nature, the higher the initial phy a+a' content, the higher the increase in the R a+a' and R a relations. If the initial phy a+a' presence is too low, the value of the R a+a' and R a relations will be zero. Finally, we are aware that the number of samples under study was too limited to draw definite conclusions, yet it is our intention through this approach to offer new insight into the detection of soft deodorized virgin olive oils. Indeed, we will continue developing this line of research to answer open questions such as the fate of phy (a+a') or the actual influence of the cultivar on the chlorophyll profiles.
2020-07-29T13:06:22.697Z
2020-07-23T00:00:00.000
{ "year": 2020, "sha1": "b47ffb0192099dc99eac475339378f3163a71f4d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/9/8/978/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e4ca81d553f63728a383f9ff6b2ee0386edcdb0f", "s2fieldsofstudy": [ "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
119664152
pes2o/s2orc
v3-fos-license
Semi-hyperbolic rational maps and size of Fatou components Recently Merenkov and Sabitova introduced the notion of a homogeneous planar set. Using this notion they proved a result for Sierpiński carpet Julia sets of hyperbolic rational maps that relates the diameters of the peripheral circles to the Hausdorff dimension of the Julia set. We extend this theorem to Julia sets (not necessarily Sierpiński carpets) of semi-hyperbolic rational maps, and prove a stronger version of the theorem that was conjectured by Merenkov and Sabitova. Introduction In this paper we establish a relation between the size of the Fatou components of a semi-hyperbolic rational map and the Hausdorff dimension of the Julia set. Before formulating the results, we first discuss some background. A rational map f : C → C of degree at least 2 is semi-hyperbolic if it has no parabolic cycles, and all critical points in its Julia set J (f ) are non-recurrent. We say that a point x is non-recurrent if x ∉ ω(x), where ω(x) is the set of accumulation points of the orbit {f^n(x)}_{n∈N} of x. In our setting, we require that the Julia set J (f ) is connected and that there are infinitely many Fatou components. Let {D_k}_{k≥0} be the sequence of Fatou components, and define C_k := ∂D_k. Since J (f ) is connected, it follows that each component D_k is simply connected, and thus C_k is connected. We say that the collection {C_k}_{k≥0} is a packing P and we define the curvature distribution function associated to P (see below for motivation of this terminology) by N(x) = #{k : (diam C_k)^{-1} ≤ x} (1.1) for x > 0. Here #A denotes the number of elements in a given set A. Also, the exponent E of the packing P is defined by E = inf{t ∈ R : Σ_{k≥0} (diam C_k)^t < ∞} (1.2), where all diameters are in the spherical metric of C. In the following, we write a ≍ b if there exists a constant C > 0 such that (1/C)a ≤ b ≤ Ca. If only one of these inequalities is true, we write a ≲ b or b ≲ a, respectively. We denote the Hausdorff dimension of a set J ⊂ C by dim_H J (see Section 3). We now state our main result. Theorem 1.1. Let f : C → C be a semi-hyperbolic rational map such that the Julia set J (f ) is connected and the Fatou set has infinitely many components. Then 0 < lim inf_{x→∞} N(x)/x^s ≤ lim sup_{x→∞} N(x)/x^s < ∞, where N is the curvature distribution function of the packing of the Fatou components of f and s = dim_H J (f ). In particular N(x) ≍ x^s. It is remarkable that the curvature distribution function has polynomial growth. As a consequence, we have the following corollary. Corollary 1.2. Under the assumptions of Theorem 1.1 we have lim_{x→∞} log N(x)/log x = E = dim_H J (f ), where N is the curvature distribution function, and E is the exponent of the packing of the Fatou components of f . This essentially says that one can compute the Hausdorff dimension of the Julia set just by looking at the diameters of the (countably many) Fatou components, which lie in the complement of the Julia set. The study of the curvature distribution function and the terminology are motivated by the Apollonian circle packings. An Apollonian circle packing is constructed inductively as follows. Let C_1, C_2, C_3 be three mutually tangent circles in the plane with disjoint interiors. Then by a theorem of Apollonius there exist exactly two circles that are tangent to all three of C_1, C_2, C_3. We denote by C_0 the outer circle that is tangent to C_1, C_2, C_3 (see Figure 1). For the inductive step we apply Apollonius's theorem to all triples of mutually tangent circles of the previous step. In this way, we obtain a countable collection of circles {C_k}_{k≥0}.
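As a toy illustration of the definitions of N(x) and E given above, the following Python sketch evaluates N(x) for a self-similar model packing (level n contributing 8^(n-1) pieces of diameter 3^(-n), the Sierpiński-carpet pattern used later in the introduction) and reads an exponent off the slope of log N(x) against log x, in the spirit of Corollary 1.2; the model packing and its truncation depth are assumptions made purely for illustration.

import math

LEVELS = 40  # truncation depth of the toy packing

def N(x):
    # N(x) = #{k : (diam C_k)^(-1) <= x}: level n contributes 8^(n-1) pieces of diameter 3^(-n),
    # and such a piece is counted exactly when 3^n <= x.
    return sum(8 ** (n - 1) for n in range(1, LEVELS + 1) if 3.0 ** n <= x)

# Read the exponent off the slope of log N(x) versus log x.
x1, x2 = 3.0 ** 5, 3.0 ** 20
slope = (math.log(N(x2)) - math.log(N(x1))) / (math.log(x2) - math.log(x1))
print("N(3^5) =", N(x1))
print("slope of log N / log x ~", round(slope, 4), " (log 8 / log 3 =", round(math.log(8, 3), 4), ")")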
We denote by P = {C_k}_{k≥0} the Apollonian circle packing constructed this way. If r_k denotes the radius of C_k, then r_k^{-1} is the curvature of C_k. The curvatures of the circles in Apollonian packings are of great interest in number theory because of the fact that if the four initial circles C_0, C_1, C_2, C_3 have integer curvatures, then so do all the rest of the circles in the packing. Another interesting fact is that if, in addition, the curvatures of all circles in the packing share no common factor greater than one, then there are infinitely many circles in the packing with curvature being a prime number. For a survey on the topic see [Oh]. In order to study the curvatures of an Apollonian packing P one defines the exponent E of the packing by E = inf{t ∈ R : Σ_{k≥0} r_k^t < ∞} and the curvature distribution function associated to P by N(x) = #{k : r_k^{-1} ≤ x} for x > 0. We remark here that the radii r_k are measured with the Euclidean metric of the plane, in contrast to (1.1) where we use the spherical metric. Let D_k be the open ball enclosed by C_k. The residual set S of a packing P is defined by S = D_0 \ ⋃_{k≥1} D_k (Figure 1 shows an Apollonian circle packing). The set S has a fractal nature and its Hausdorff dimension s = dim_H S is related to N(x) and E by the following result of Boyd. Theorem 1.3 ([Bo2]). If P is an Apollonian circle packing, then lim_{x→∞} log N(x)/log x = E = s. Recently, Kontorovich and Oh proved the following stronger version of this theorem: Theorem 1.4 ([KO, Theorem 1.1]). If P is an Apollonian circle packing, then there exists a constant c > 0 such that N(x) ∼ c·x^s as x → ∞. In [MS], Merenkov and Sabitova observed that the curvature distribution function N(x) can be defined also for other planar fractal sets such as the Sierpiński gasket and Sierpiński carpets. More precisely, if {C_k}_{k≥0} is a collection of topological circles in the plane, and D_k is the open topological disk enclosed by C_k, such that D_0 contains C_k for k ≥ 1, and D_k are disjoint for k ≥ 1, one can define the residual set S of the packing P = {C_k}_{k≥0} by S = D_0 \ ⋃_{k≥1} D_k. A fundamental result of Whyburn implies that if the disks D_k, k ≥ 1 are disjoint with diam D_k → 0 as k → ∞ and S has empty interior, then S is homeomorphic to the standard Sierpiński carpet [Wh1]. In the latter case we say that S is a Sierpiński carpet (see Figure 3 for a Sierpiński carpet Julia set). One can define the curvature of a topological circle C_k as (diam C_k)^{-1}. Then the curvature distribution function associated to P is defined as in (1.1) by N(x) = #{k : (diam C_k)^{-1} ≤ x} for x > 0. Similarly, the exponent E of P is defined as in (1.2). In general, the limit lim_{x→∞} log N(x)/log x does not exist, but if we impose further restrictions on the geometry of the circles C_k, then we can draw conclusions about the limit. To this end, Merenkov and Sabitova introduced the notion of homogeneous planar sets (see Section 4 for the definition). However, even these strong geometric restrictions are not enough to guarantee the existence of the limit. The following theorem hints that a self-similarity condition on S would be sufficient for our purposes. Theorem 1.5 ([MS, Theorem 6]). Assume that f is a hyperbolic rational map whose Julia set J (f ) is a Sierpiński carpet. Then lim_{x→∞} log N(x)/log x = E, where N is the curvature distribution function and E is the exponent of the packing of the Fatou components of f . The authors made the conjecture that for such Julia sets we actually have an analogue of Theorem 1.4, namely lim_{x→∞} N(x)/x^s ∈ (0, ∞), where s = dim_H J (f ).
Note that Theorem 1.1 partially addresses the issue by asserting that N(x) ≍ x^s. However, we believe that the limit lim_{x→∞} N(x)/x^s does not exist in general for Julia sets. Observe that the conclusion of Theorem 1.1 remains valid if we alter the metric that we are using in the definition of N(x) in a bi-Lipschitz way. For example, if the Julia set J (f ) is contained in the unit disk of the plane we can use the Euclidean metric instead of the spherical. On the other hand, the limit of N(x)/x^s as x → ∞ is much more sensitive to changes of the metric. The following simple example of the standard Sierpiński carpet provides some evidence that the limit will not exist even for packings with very "nice" geometry. The standard Sierpiński carpet is constructed as follows. We first subdivide the unit square [0, 1]^2 into 9 squares of equal size and then remove the interior of the middle square. We continue subdividing each of the remaining 8 squares into 9 squares, and proceed inductively. The resulting set S is the standard Sierpiński carpet and its Hausdorff dimension is s = log 8/log 3. The set S can be viewed as the residual set of a packing P = {C_k}_{k≥0}, where C_0 is the boundary of the unit square, and C_k, k ≥ 1 are the boundaries of the squares that we remove in each step in the construction of S. Using the Euclidean metric, note that for each n ∈ N the quantity N(3^n/√2) is by definition the number of curves C_k that have diameter at least √2/3^n. Thus, N(3^n/√2) = 1 + 1 + 8 + 8^2 + ··· + 8^{n−1} = 1 + (8^n − 1)/7 (note that we also count C_0). Since 3^{n·s} = 8^n, we have lim_{n→∞} N(3^n/√2)/(3^n/√2)^s = 2^{s/2}/7. On the other hand, it is easy to see that N(3^n/√2) = N(3^n), since there are no curves C_k with diameter in the interval [1/3^n, √2/3^n). Thus, lim_{n→∞} N(3^n)/(3^n)^s = 1/7, and this shows that lim_{x→∞} N(x)/x^s does not exist. In general, if one can show that there exists some constant c > 0, c ≠ 1, such that N(x) = N(cx) for large x, then the limit will not exist. We also note that in Theorem 1.1 one might be able to weaken the assumption that f is semi-hyperbolic, but the assumption that f has a connected Julia set is necessary, since there exist rational maps whose Fatou components (except for two of them) are nested annuli, and in fact in this case there exist infinitely many Fatou components with "large" diameters (see [Mc1, Proposition 7.2]). Thus, if N(x) is the number of Fatou components whose diameter is at least 1/x, we would have N(x) = ∞ for large x. The proof of Theorem 1.1 will be given in two main steps. In Section 3, using the self-similarity of the Julia set we will establish relations between the Hausdorff dimension of the Julia set and its Minkowski dimension (see Section 3 for the definition). Then in Section 4 we will observe that the Julia sets of semi-hyperbolic maps are homogeneous sets, satisfying certain geometric conditions (see Section 4 for the definition). These conditions allow one to relate the quantity N(x)/x^s with the Minkowski content of the Julia set. Using these relations, and the results of Section 3, the proof of Theorem 1.1 will be completed. Before proceeding to the above steps, we need some important distortion estimates for semi-hyperbolic rational maps that we establish in Section 2, and we will refer to them as the Conformal Elevator. These are the key estimates that we will use in establishing geometric properties of the Julia set. Similar estimates have been established for sub-hyperbolic rational maps in [BLM, Lemma 4.1].
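The two subsequential limits computed above for the standard Sierpiński carpet are easy to verify numerically from the closed-form count; the following sketch is only a sanity check of that arithmetic.

import math

# Along x = 3^n and x = 3^n/sqrt(2) the ratios N(x)/x^s approach different constants
# (1/7 and 2^(s/2)/7), so lim N(x)/x^s cannot exist.

s = math.log(8) / math.log(3)          # Hausdorff dimension of the standard Sierpinski carpet

def N_count(n):
    # closed-form count from the text: C_0 plus the 1 + 8 + ... + 8^(n-1) removed squares
    return 1 + (8 ** n - 1) // 7

for n in (5, 10, 15):
    x1 = 3.0 ** n
    x2 = 3.0 ** n / math.sqrt(2)
    print(n, round(N_count(n) / x1 ** s, 5), round(N_count(n) / x2 ** s, 5))

print("limits:", round(1 / 7, 5), "and", round(2 ** (s / 2) / 7, 5))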
Acknowledgements. The author would like to thank his advisor, Mario Bonk, for many useful comments and suggestions, and for his patient guidance. He also thanks the anonymous referees for their careful reading of the manuscript and their thoughtful comments. Conformal elevator for semi-hyperbolic maps The heart of this section is Lemma 2.1 and the whole section is devoted to proving it. Let f : C → C be a semi-hyperbolic map with J (f ) = C; in particular, by Sullivan's classification and the fact that semi-hyperbolic rational maps have neither parabolic cycles (by definition) nor Siegel disks and Herman rings ( [Ma, Corollary]), f must have an attracting or superattracting periodic point. Conjugating f by a rotation of the sphere C, we may assume that ∞ is a periodic point in the Fatou set. Furthermore, conjugating again with a Euclidean similarity, we can achieve that J (f ) ⊂ 1 2 D, where D denotes the unit disk in the plane. Note that these operations do not affect the conclusion of Theorem 1.1, since a rotation is an isometry in the spherical metric that we used in the definition of N (x), and a scaling only changes the limits by a factor. Furthermore, since the boundaries C k of the Fatou components D k have been moved away from ∞, the diameters of C k in spherical metric are comparable to the diameters in the Euclidean metric. This easily implies that the conclusion of Theorem 1.1 is not affected if we define N (x) = #{k : (diam C k ) −1 ≤ x} using instead the Euclidean metric for measuring the diameters. In this section the Euclidean metric will be used in all of our considerations. By semi-hyperbolicity (see [Ma, Theorem II(b)]) and compactness of J (f ), there exists ε 0 > 0 such that for every x ∈ J (f ) and for every connected component W of f −n (B(x, ε 0 )) the degree of f n : W → B(x, ε 0 ) is bounded by some fixed constant D 0 > 0 that does not depend on x, W, n. Furthermore, we can choose an even smaller ε 0 so that the open ε 0 -neighborhood of J (f ) that we denote by N ε0 (J (f )) is contained in D, and avoids the poles of f that must lie in the Fatou set. Then f is uniformly continuous in N ε0/2 (J (f )) in the Euclidean metric, and in particular, there exists δ 0 > 0 such that for any U ⊂ N ε0/2 (J (f )) with diam U < δ 0 we have diam f (U ) < ε 0 /2. Let p ∈ J (f ), 0 < r ≤ δ 0 /2 be arbitrary, and define B := B(p, r). Since for large N ∈ N we have f N (B) ⊃ J (f ) (e.g. see [Mil,Corollary 14.2]), there exists a largest n ∈ N such that diam f n (B) < ε 0 /2. By the choice of n, we have diam f n+1 (B) ≥ ε 0 /2. Using the uniform continuity and the choice of δ 0 , it follows that diam f n (B) ≥ δ 0 , thus We now state the main lemma. Lemma 2.1. There exist constants γ, r 1 , K 1 , K 2 > 0 independent of B = B(p, r) (and thus of n) such that: This lemma asserts that any ball of small radius centered at the Julia set can be blown up to a certain size, using some iterate f n , with good distortion estimates. For hyperbolic rational maps (i.e., no parabolic cycles and no critical points on the Julia set) the map f n would actually be bi-Lipschitz and part (c) of the above lemma would be true with instead of ≤. However, in the semi-hyperbolic case, the presence of critical points on the Julia set prevents such good estimates, but part (a) of the lemma restores some of them. In order to prepare for the proof we need some distortion lemmas. Using Koebe's distortion theorem (e.g., see [Po,Theorem 1.3]) one can derive the following lemma. Lemma 2.2. 
Let g : D → C be a univalent map and let 0 < ρ < 1. Then there exists a constant C ρ > 0 that depends only on ρ, such that We will be using the notation |g(u) − g(v)| ρ |g (0)||u − v|. We also need the next lemma. Lemma 2.3. Let g : C → C be a semi-hyperbolic rational map with J (g) = C and assume that J (g) is connected. Then there exists ε > 0 such that for all x ∈ J (g), each component of g −m (B(x, ε)) is simply connected, for all m ∈ N. Proof. As before, by conjugating, we may assume that ∞ is a periodic point in the Fatou set, and the Julia set is "far" from the poles of g. By semi-hyperbolicity (see [Ma, Theorem II(c)]), for each x ∈ J (g) and η > 0, there exists ε > 0 such that each component of g −m (B(x, ε)) has Euclidean diameter less than η, for all m ∈ N. By compactness of J (g), we may take ε > 0 to be uniform in x. We choose a sufficiently small η such that the 3η-neighborhood N 3η (J (g)) of J (g) does not contain any poles of g. We claim that each component of g −m (B(x, ε)) is simply connected. If this was not the case, there would exist an open component W of g −m0 (B(x, ε)), and a non-empty family of compact components Since V i , i ∈ I and W share at least one common boundary point, it follows that V i ⊂ N 2η (J (g)), and in particular V i does not contain any poles of g, i.e., ∞ / ∈ g(V i ) for all i ∈ I. By the choice of m 0 the set g(W ) ⊂ g −m0+1 (B(x, ε)) is a simply connected set in the η-neighborhood of J (g). Note that i∈I g(V i ) cannot be entirely contained in g(W ), otherwise W would not be a component of g −m0 (B(x, ε)). Thus, there exists some V i =: V and a point w 0 ∈ ( C \ g(W )) ∩ g(V ). We connect the point w 0 to ∞ with a path γ ⊂ C \ g(W ), and then we lift γ under g to a path α ⊂ C that connects a preimage z 0 ∈ V of w 0 to a pole of g (see [BM,Lemma A.16] for path-lifting under branched covers). The path α cannot intersect W , so it stays entirely in V . This contradicts the fact that V contains no poles. Now we are ready to start the proof of Lemma 2.1. Since diam f n (B) < ε 0 /2, for x = f n (p) ∈ J (f ) we have f n (B) ⊂ B(x, ε 0 /2), and for the component Ω of f −n (B(x, ε 0 )) that contains B we have that the degree of f n : Ω → B(x, ε 0 ) is bounded by D 0 . Lemma 2.3 implies that we can refine our choice of ε 0 such that Ω is also simply connected. Let ψ : Ω → D be the Riemann map that maps the center p of B to 0, and φ : B(x, ε 0 ) → D be the translation of x to 0, followed by a scaling by 1/ε 0 , so we obtain the following diagram: The proof will be done in several steps. First we prove that ψ(B) is contained in a ball of fixed radius smaller than 1. Second, we show a distortion estimate for ψ, namely it is roughly a scaling by 1/ diam B. In the end, we complete the proofs of (a),(b),(c), using lemmas that are generally true for proper maps. We claim that there exists ρ > 0, independent of B such that This will be derived from the following modulus distortion lemma. We include first some definitions. If Γ is a family of curves in C, we define the modulus of Γ, denoted by mod(Γ), as follows. A function ρ : where m 2 denotes the 2-dimensional Lebesgue measure, and the infimum is taken over all admissible functions. The modulus has the monotonicity property, namely if Γ 1 , Γ 2 are path families and Γ 1 ⊂ Γ 2 , then Another important property of modulus is conformal invariance: if Γ is a curve family in an open set U ⊂ C and g : U → V is conformal, then mod(Γ) = mod(g(Γ)). We direct the reader to [LV, for more background on modulus. 
If U is a simply connected region, and V is a connected subset of U with V ⊂ U , we denote by mod(U \ V ) the modulus of the curve family that separates V from C \ U . Lemma 2.4. Let U, U ⊂ C be simply connected regions, and g : U → U be a proper holomorphic map of degree D. ( A particular case of this lemma is [Mc2,Lemma 5.5], but we include a proof of the general statement since we were not able to find it in the literature. Proof. Using the conformal invariance of modulus we may assume that U and U are bounded Jordan regions. We first show (a). Using a conformal map, we map the annulus U \ V to the circular annulus D \ B(0, r), and by composing with g, we assume that we have a proper holomorphic map g : U \ V → D \ B(0, r), of degree at most D. We divide the annulus D \ B(0, r) into nested circular annuli centered at the origin A 1 , . . . , A k , k ≤ D such that each A i does not contain any critical value of g in its interior. Note that where we denote by mod(A i ) the modulus of curves that separate the complementary components of the annulus A i . We fix ε > 0. By making the annuli A i a bit thinner, we can achieve that ∂A i does not contain any critical value of g, and Let A i be a preimage of A i , so that A 1 , . . . , A k are nested annuli separating V from C \ U , and avoiding the critical points of g. Note that g : To see the first inequality, note that an admissible function ρ for mod(U \ V ) yields admissible functions ρ| Ai for mod(A i ). Combining (2.5) and (2.4) we obtain Letting ε → 0 one concludes the proof. The inequality in (b) follows from Poletskiȋ's inequality [Ri,Chapter II,Section 8]. Since holomorphic maps are 1-quasiregular (see [Ri, Chapter I] for definition and background), we have mod(g(Γ)) ≤ mod(Γ) (2.6) for all path families Γ in U . First we shrink the regions U and U as follows. Consider a Jordan curve γ 1 very close to ∂U such that γ 1 encloses a region U 1 that contains V and all critical values of g. Then U 1 := g −1 (U 1 ) is a Jordan region that contains V and all critical points of g. Let Γ be the family of paths in U 1 \V that connect ∂V to ∂U 1 and avoid preimages of critical values of g, which are finitely many. Also, note that g(Γ) ⊃ Γ , where Γ is the family of paths in U 1 \ V that connect ∂V to ∂U 1 , and avoid the critical values of g. To see this, observe that any such path γ has a lift γ ⊂ U 1 \ V that starts at ∂V and ends at ∂U 1 . Using monotonicity of modulus and (2.6) we have mod(Γ ) ≤ mod(g(Γ)) ≤ mod(Γ). IfΓ is the family of all paths in U 1 \ V that connect ∂V to ∂U 1 , thenΓ differs from Γ by a family of zero modulus. The same is true for the corresponding familyΓ in U 1 \ V . Thus, we have mod(Γ ) ≤ mod(Γ). By reciprocality of the modulus and monotonicity, it follows that Finally, observe that the path family separating V from C \ U can be written as an increasing union of families separating V from sets of the form C \ U 1 , where U 1 gets closer and closer to U . Writing mod(U \ V ) as a limit of moduli of such families, one obtains the desired inequality. Before proving part (a) of Lemma 2.1, we include a general lemma for proper self-maps of the disk. Lemma 2.5. Let P : D → D be a proper holomorphic map of degree D, with P (0) = 0, and fix ρ ∈ (0, 1). There exists a constant C > 0 depending only on D, ρ such that for each connected set A ⊂ B(0, ρ) one has Proof. Let A be a connected subset of D, and assume first that 0 ∈ A. Define ζ to be the furthest point of P (A), so P (A) ⊂ B(0, |ζ|), and diam P (A) |ζ|, since 0 ∈ P (A). 
Let W be the component of P −1 (B(0, |ζ|)) that contains A, and consider α to be the furthest point of W , so W ⊂ B(0, |α|). Using Grötzsch's modulus theorem and Lemma 2.4(a) we have The following lemma gives us the asymptotic behavior of µ as r → 0 (see [Ah,). However, since A, P (A) ⊂ B(0, ρ), it follows (e.g. by direct computation using the formulas of the Möbius transformations φ, ψ) that diam φ(A) diam A and diam ψ(P (A)) diam P (A) with constants depending only on ρ. In our case, let P = φ • f n • ψ −1 : D → D, which is a proper map of degree bounded by D 0 , that fixes 0. Now, let A ⊂ B be a connected set. Using (2.9), and Lemma 2.5 applied to ψ(A) ⊂ B(0, ρ), one has where in the end we used the fact that φ is a scaling by a fixed factor. For the proof of part (b), we will need again a lemma for proper maps of the disk. Proof. Let B(0, r 1 ) ⊂ P (B(0, r 0 )) be a ball of maximal radius, and let W be the component of P −1 (B(0, r 1 )) that contains 0. Note that W contains a point z with |z| = r 0 . Lemma 2.4(a) and Grötzsch's modulus theorem yield Monotonicity of µ now yields a uniform lower bound for r 1 . In our case, Koebe's distortion theorem in (2.9) implies that ψ( 1 2 B) contains a ball B(0, r 0 ) where r 0 is independent of B. Now, Lemma 2.7 applied to P = φ • f n • ψ −1 shows that P (ψ( 1 2 B)) contains some ball B(0, r 1 ), independent of B. Since φ is only scaling by a certain factor, we obtain that f n ( 1 2 B) contains some ball B(x, r 2 ), independent of B. Finally, we show part (c). We first need the following lemma. Lemma 2.8. Let P : D → D be a proper holomorphic map of degree D. Then for ρ ∈ (0, 1) the restriction P : B(0, ρ) → D is K-Lipschitz, where K depends only on D, ρ. Proof. Each proper self-map of the unit disk is a finite Blaschke product, so we can For u, v ∈ B by (2.3) one has ψ(u), ψ(v) ∈ B(0, ρ). Thus, applying Lemma 2.8 to P = φ • f n • ψ −1 , and using (2.9) we obtain This completes the proof of Lemma 2.1. Hausdorff and Minkowski dimensions For a metric space (X, d) and s ∈ [0, ∞) the s-dimensional Hausdorff measure of X is defined as where H s δ (X) = inf{ i∈I (diam U i ) s } and the infimum is taken over all covers of X by open sets {U i } i∈I of diameter at most δ. Then the Hausdorff dimension of (X, d) is The Minkowski dimension is another useful notion of dimension for a fractal set X ⊂ R n . For ε > 0 we define n(ε) to be the maximal number of disjoint open balls of radii ε > 0 centered at points x ∈ X. We then define the upper and lower Minkowski dimensions, respectively, as If the two numbers agree, then we say that their common value dim M X is the Minkowski, or else, box dimension of X. It is easy to see that the definition of the Minkowski dimension is not affected if n(ε) denotes instead the smallest number of open balls of radii ε > 0 centered at X, that cover X. The important difference between the Hausdorff and Minkowski dimensions is that in the Hausdorff dimension we are taking into account coverings {U i } i∈I with different weights (diam U i ) s attached to each set, but in the Minkowski dimension we are considering only coverings of sets with equal diameters. It easily follows from the definitions that we always have From now on, n(ε) will denote the maximal number of disjoint open balls of radii ε, centered at points x ∈ X. 
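As a numerical illustration of the box-counting quantities just defined, the following sketch estimates the Minkowski dimension of the standard Sierpiński carpet from a random point sample, counting grid boxes instead of balls of radius ε (which changes n(ε) only up to constants); the sampling scheme, sample size and depth are arbitrary choices made for illustration.

import math, random

def box_count(points, eps):
    # number of eps-grid boxes touched by the set
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in points})

def carpet_point(depth=12):
    # random point of the standard Sierpinski carpet via its digit expansion in base 3
    cells = [(i, j) for i in range(3) for j in range(3) if (i, j) != (1, 1)]
    x = y = 0.0
    for k in range(1, depth + 1):
        i, j = random.choice(cells)
        x += i * 3.0 ** (-k)
        y += j * 3.0 ** (-k)
    return x, y

random.seed(0)
pts = [carpet_point() for _ in range(200000)]
for eps in (1/3, 1/9, 1/27, 1/81):
    n = box_count(pts, eps)
    print(f"eps={eps:.4f}  boxes={n}  log n / log(1/eps) = {math.log(n)/math.log(1/eps):.3f}")
# The printed ratios should stay close to log 8 / log 3 ~ 1.893 until sampling noise kicks in.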
Based on the distortion estimates that we developed in Section 2, and using results of [Fal] and [SU] we have the following result that concerns the Hausdorff and Minkowski dimensions of Julia sets of semi-hyperbolic maps. Theorem 3.1. Let f : C → C be a semi-hyperbolic rational map with J (f ) = C and s := dim H J (f ). We have where n(ε) is the maximal number of disjoint open balls of radii ε (in the spherical metric), centered in J (f ). Proof. By considerations as in the beginning of Section 2, we may assume that J (f ) ⊂ D, and use the Euclidean metric which is comparable to the spherical metric. This will only affect the constant in part (c) of the theorem. The parts (a) and (b) follow from [SU,Theorem 1.11(e) and (g)]. Also, if B 1 , . . . , B n(ε) are disjoint balls of radius ε > 0 centered at J (f ) then the collection 2B 1 , . . . , 2B n(ε) covers J (f ), where 2B i has the same center as B i but twice the radius. Thus, we have H s ε (J (f )) ≤ n(ε)(2ε) s . Taking limits, and using (a), we obtain 0 < H s (J (f )) ≤ 2 s lim inf ε→0 n(ε)ε s which shows the left inequality in (c). For the right inequality in (c), we use the following result of Falconer. Theorem 3.2 ( [Fal,Theorem 4]). Let (F, d) be a compact metric space with s = dim H F < ∞. Suppose that there exist K 0 , r 0 > 0 such that for any ball B ⊂ F of radius r < r 0 there is a mapping ψ : F → B satisfying for all x, y ∈ F . Then lim sup ε→0 n(ε)ε s < ∞. We remark that the mapping ψ : F → B need not be continuous. It remains to show that this theorem applies in our case. To show the existence of ψ we will carefully use the distortion estimates of Lemma 2.1. Let r 0 be so small that for r < r 0 and p ∈ J (f ) the conclusions of Lemma 2.1 are true for the ball B = B(p, r). In particular, there exists r 1 , independent of B, such that B(f n (p), r 1 ) ⊂ f n (B) (3.1) for some n ∈ N. For each ball B(q, r 1 ), q ∈ J (f ), there exists m ∈ N such that f m : B(q, r 1 ) ∩ J (f ) → J (f ) is surjective (e.g. see [Mil,Corollary 14.2]). We choose the smallest such m. Compactness of J (f ) allows us to choose a uniform m ∈ N, independent of q ∈ J (f ). By the analyticity of f m , there exists a constant K 1 > 0 such that for all u, v ∈ J (f ) we have Also, by Lemma 2.1(c), there exists K 2 independent of B such that Thus, the hypotheses of Theorem 3.2 are satisfied with K 0 = 2/(K 1 K 2 ). Homogeneous sets and Julia sets Let P = {C k } k≥0 be a packing, as defined in the Introduction, where C k are topological circles, surrounding topological open disks D k (in the plane or the sphere) such that D 0 contains C k for k ≥ 1, and D k , k ≥ 1 are disjoint. Then the set S = D 0 \ k≥1 D k is the residual set S of the packing P. In the following, one can use the Euclidean or spherical metric, but it is convenient to consider C 0 = ∂D 0 as the boundary of the unbounded component of the packing P (see Figures 1 and 3), and use the Euclidean metric to study the other disks D k , k ≥ 1. Thus, we will restrict ourselves to the use of the Euclidean metric in this section. (1) Each D k , k ≥ 1 is a uniform quasi-ball. More precisely, there exists a constant α ≥ 1 such that for each D k there exist inscribed and circumscribed, concentric circles of radii r k and R k respectively with R k r k ≤ α. (2) There exists a constant β ≥ 1 such that for each p ∈ S and 0 < r ≤ diam S there exists a circle C k intersecting B(p, r) such that 1 β r ≤ diam C k ≤ βr. (3) The circles C k are uniformly relatively separated. 
This means that there exists δ > 0 such that for all j = k. (4) The disks D k , k ≥ 1 are uniformly fat. By definition, this means that there exists τ > 0 such that for every ball B(p, r) centered at D k that does not contain D k , we have where m 2 denotes the 2-dimensional Lebesgue measure. (Here one can use the spherical measure for packings on the sphere.) Condition (1) means that the sets D k look like round balls, while (2) says that the circles C k exist in all scales and all locations in S. Condition (3) forbids two "large" circles C k to be close to each other in some uniform manner. Note that this only makes sense when D k , k ≥ 1 are disjoint, e.g. in the case of a Sierpiński carpet. Finally, (4) is used to replace (3) when we are working with fractals such as the Sierpiński gasket, or generic Julia sets regarded as packings, where D k are not disjoint. We now summarize some interesting properties of homogeneous sets, that are not needed though for the proof of Theorem 1.1. A set E ⊂ R n is said to be porous if there exists a constant 0 < η < 1 such that for all sufficiently small r > 0 and all x ∈ E, there exists a point y ∈ R n such that A Jordan curve γ ⊂ C is called a K-quasicircle if for all x, y ∈ γ there exists a subarc γ 0 of γ joining x and y with diam γ 0 ≤ K|x − y|. The (Ahlfors regular) conformal dimension of a metric space (X, d), denoted by (AR) Cdim X, is the infimum of the Hausdorff dimensions among all (Ahlfors regular) metric spaces that are quasisymmetrically equivalent to (X, d). For more background see Chapters 10 and 15 in [He]. Proposition 4.1. Let S be the residual set of a packing P, satisfying (1) and (2). Proof. By (1), each D k contains a ball of diameter comparable to diam D k . Thus, summing the areas of the sets D k , and noting that they are all contained in D 0 ⊂ C, we see that for each ε > 0, there can only be finitely many sets D k with diam D k > ε. We conclude that S is locally connected (see [Mil,Lemma 19.5]). Condition (2) implies that for r ≤ diam S, every ball B(p, r) centered at S intersects a curve C k of diameter comparable to r. Let c < 1 and consider the ball B(p, cr) ⊂ B(p, r). Then B(p, cr) intersects a curve C k of diameter comparable to cr, and if c is sufficiently small but uniform, then C k ⊂ B(p, r). Thus B(p, r) contains a curve C k of diameter comparable to r. By (1), D k contains a ball of radius comparable to diam D k and thus comparable to r (note that here we use the Euclidean metric). Hence, B(p, r) \ S contains a ball of radius comparable to r. This completes the proof that S is porous. It is a standard fact that a porous set E ⊂ R n has Hausdorff dimension bounded away from n, quantitatively (see [Sa,Theorem 3.2]). Thus, (b) implies (c). For our last assertion we will use a criterion of Mackay [Mac,Theorem 1.1] which asserts that a doubling metric space which is annularly linearly connected has conformal dimension strictly greater than 1. A connected metric space X is annularly linearly connected (abbr. ALC) if there exists some L ≥ 1 such that for every p ∈ X, r > 0, and x, y ∈ X in the annulus A(p, r, 2r) := B(p, 2r) \ B(p, r) there exists an arc J ⊂ X joining x to y that lies in a slightly larger annulus A(p, r/L, 2Lr). It suffices to show that S is ALC. The idea is simple, but the proof is technical, so we only provide a sketch. Let x, y ∈ A(p, r, 2r) ∩ S, and consider a path γ ⊂ A(p, r, 2r) (not necessarily in S) that joins x and y. 
The idea is to replace the parts of the path γ that lie in the complementary components D k of S by arcs in C k = ∂D k and then make sure that the resulting arc stays in a slightly larger annulus A(p, r/L, 2Lr). The assumption that the curves C k are quasicircles guarantees that the subarcs that we will use are not too "large", and condition (3) guarantees that the "large" curves C k do not block the way from x to y, since these curves are not allowed to be very close to each other. Using (3), we can find uniform constants a, L 1 ≥ 1 such that there exists at most one curve C k0 with diam C k0 ≥ r/a that intersects B(p, r/L 1 ). We call a curve C k large if its diameter exceeds r/a, and otherwise we call it small. We enlarge slightly the annulus (maybe using a larger L 1 ) to an annulus A(p, r/L 1 , 2rL 1 ) so that B(p, 2rL 1 ) contains all small curves C k that intersect γ. We now check all different cases. If γ meets the large C k0 that intersects B(p, r/L 1 ), using the fact that C k0 is a quasicircle, we can enlarge the annulus to an annulus A(p, r/L 2 , 2rL 2 ) with a uniform L 2 ≥ 1, so that x can be connected to y by a path in A(p, r/L 2 , 2rL 2 )\D k0 . We call the resulting path γ. Note that here we have to assume that C k0 = C 0 , so that the path γ does not lie in the unbounded component of the packing and it passes through several curves C k on the way from x to y. The case C k0 = C 0 , which occurs only when x, y ∈ D 0 , is similar and in the previous argument we just have to choose a path γ that lies in A(p, r/L 2 , 2rL 2 ) ∩ D 0 . We still assume that B(p, 2rL 2 ) contains all small curves C k that intersect γ. If γ meets a small C k that does not intersect B(p, r/L 2 ), then we can replace the subarcs of γ that lie in D k with arcs in C k that have the same endpoints. The resulting arcs will lie in the annulus by construction. Next, if γ meets a small C k that does intersect B(p, r/L 2 ), we follow the same procedure as before, but now we have to choose the sub-arcs of C k carefully, so that they do not approach p too much. This can be done using the assumption that the curves C k are uniform quasicircles. The resulting arcs will lie in a slightly larger annulus A(p, r/L 3 , 2rL 3 ), where L 3 ≥ 1 is a uniform constant. Finally, if γ intersects a large C k which does not meet B(p, r/L 3 ) we can use the assumption that C k is a quasicircle to replace the subarcs of γ that lie in D k with subarcs of C k that have diameter comparable to r. Thus, a larger annulus A(p, r/L 4 , 2rL 4 ) will contain the arcs of C k that we obtain in this way. We need to ensure that this procedure indeed yields a path that joins x and y inside A(p, r/L 4 , 2rL 4 ). This follows from the fact that diam D k → 0. The latter fact follows from the assumption that the curves C k are uniform quasicircles, which in turn implies that each D k contains a ball of radius comparable to diam D k , i.e., (1) is true (for a proof of this assertion see [Bo,Proposition 4.3]). Next, we continue our preparation for the proof of Theorem 1.1. From now on, we will be using a slightly more general definition for a packing P = {C k } k≥0 , suitable for Julia sets, where the sets D k are allowed to be simply connected open sets and C k = ∂D k (so they are not necessarily topological circles). Making abuse of terminology, we still call C k a "curve". 
As we will see in Lemma 4.3, a homogeneous set has the special property that there is some important relation between the curvature distribution function N (x) and the maximal number of disjoint open balls n(ε), centered at S. Thus, considerations about the residual set S, which are reflected by n(ε), can be turned into considerations about the complementary components D k , which are comprised in N (x). The following lemma is proved in [MS] and its proof is based on area and counting arguments. Lemma 4.2 ( [MS,Lemma 3]). Assume that S is the residual set of a packing P = {C k } k≥0 that satisfies (1) and (3) (or (1) and (4)). For any β > 0, there exist constants γ 1 , γ 2 > 0 depending only on β and the constants in (1), (3) (or (1), (4)) such that for any collection C of disjoint open balls of radii r > 0 centered in S we have the following statements: (a) There are at most γ 1 balls in C that intersect any given C k with diam C k ≤ βr. (b) There are at most γ 2 curves C k intersecting any given ball in 2C and satisfying where 2C denotes the collection of open balls with the same centers as the ones in C, but with radii 2r. Using this lemma one can prove a relation between the curvature distribution function N (x) = #{k : (diam C k ) −1 ≤ x} (using the Euclidean metric) and the maximal number n(ε) of disjoint open balls of radius ε, centered at S. Namely, we have the following lemma. Lemma 4.3. Assume that the residual set S of a packing P satisfies (1), (2) and (3) or (1), (2) and (4). Then there exists a constant C > 0 such that for all small ε > 0 we have where β is the constant in (2). The proof is essentially included in the proof of [MS, Proposition 2] but we include it here for completeness. Proof. Let C be a maximal collection of disjoint open balls of radius ε, centered at S. For each ball C ∈ C, by condition (2) there exists C k such that C k ∩ C = ∅ and 1 β ε ≤ diam C k ≤ βε. On the other hand, Lemma 4.2(a) implies that for each such C k there exist at most γ 1 balls in C that intersect it. Thus Conversely, note that by the maximality of C, it follows that 2C covers S. Hence, if C k is arbitrary satisfying diam C k ≥ 1 β ε, it intersects a ball 2C in 2C. For each such ball 2C, Lemma 4.2(b) implies that there exist at most γ 2 curves C k with diam C k ≥ 1 β ε that intersect it. Thus Finally, we proceed to the proofs of Theorem 1.1 and Corollary 1.2. Proof of Theorem 1.1. By considerations as in the beginning of Section 2, we assume that J (f ) ⊂ D, and we will use the Euclidean metric since this does not affect the conclusion of the Theorem. Let C 0 be the boundary of the unbounded Fatou component, D k , k ≥ 1 be the sequence of bounded Fatou components, and C k = ∂D k . Then P = {C k } k≥0 can be viewed as a packing, and S = J (f ) is its residual set. Note, though, that the sets C k need not be topological circles in general, as we already remarked. This, however, does not affect our considerations, since it does not affect the conclusions of lemmas 4.2 and 4.3, as long as the other assumptions hold for C k and the simply connected regions D k enclosed by them. We will freely use the terminology "curves" for the sets C k . By Theorem 3.1 we have that the quantity n(ε)ε s is bounded away from 0 and ∞ as ε → 0, where s = dim H J (f ). If we prove that J (f ) is a homogeneous set, satisfying (1), (2) and (4), then using Lemma 4.3, it will follow that N (x)/x s is bounded away from 0 and ∞ as x → ∞, and in particular which will complete the proof. 
Julia sets of semi-hyperbolic rational maps are locally connected if they are connected (see [Yin,Theorem 1.2] and also [Mih,Proposition 10]), and thus for each ε > 0 there exist finitely many Fatou components with diameter greater than ε (see [Wh2,Theorem 4.4,). First we show that condition (1) in the definition of homogeneity is satisfied. The idea is that the finitely many large Fatou components are trivially quasi-balls, as required in (1), so there is nothing to prove here, but the small Fatou components can be blown up with good control to the large ones using Lemma 2.1. The distortion estimates allow us to control the size of inscribed circles of the small Fatou components. Let d 0 ≤ (1/4K 1 ) 1/γ , where K 1 , γ are the constants appearing in Lemma 2.1. We also make d 0 even smaller so that for r ≤ d 0 and p ∈ J (f ) the conclusions of Lemma 2.1 are true. Since there are finitely many curves C k with diam C k > d 0 /2, for these C k there exist concentric inscribed and circumscribed circles with radii r k and R k respectively, such that R k /r k ≤ α, for some α > 0. This implies that If C k is arbitrary with diam C k ≤ d 0 /2, then for p ∈ C k and r = 2 diam C k , by Lemma 2.1(a) there exists n ∈ N such that Note that the Fatou component D k is mapped under f n onto a Fatou component D k . Since f n is proper, the boundary C k of D k is mapped onto C k := ∂D k . Then the above inequality can be written as Hence, C k is one of the "large" curves, for which there exists a inscribed ball B(q , r k ) such that 2r k ≤ diam C k ≤ 2αr k . Observe that r k ≥ d 0 /2α. Let q ∈ D k ⊂ B(p, r) be a preimage of q under f n , and W ⊂ D k be the component of f −n (B(q , r k )) that contains q. For each u ∈ ∂W , by Lemma 2.1(c) one has Letting R k = diam C k , and r k = inf u∈∂W |q − w|, one obtains R k /r k ≤ αK 2 /2d 0 , so (1) is satisfied with α = max{α, αK 2 /2d 0 }. Similarly, we show that condition (2) is also true. Let r 1 be the constant in Lemma 2.1(b) and consider d 0 ≤ r 1 /2 so small that the conclusions of Lemma 2.1 are true for p ∈ J (f ) and r ≤ d 0 . Note that by compactness of J (f ) there exists β > 0 such that for d 0 ≤ r ≤ diam J (f ) and p ∈ J (f ) there exists C k such that C k ∩ B(p, r) = ∅ and Indeed, one can cover J (f ) with finitely many balls B 1 , . . . , B N of radius d 0 /2 centered at J (f ), such that each ball B j contains a curve C k(j) . This is possible because every ball B j centered in the Julia set must intersect infinitely many Fatou components, otherwise f n would be a normal family in B j . In particular, by local connectivity "most" Fatou components are small, and thus one of them, say D k(j) , will be contained in B j . Now, if B(p, r) is arbitrary with p ∈ J (f ), r ≥ d 0 , we have that p ∈ B j for some j ∈ {1, . . . , N }, and thus B j ⊂ B(p, r). Since r ∈ [d 0 , diam J (f )] lies in a compact interval, (4.1) easily follows, by always using the same finite set of curves C k(1) , . . . , C k(N ) that correspond to B 1 , . . . , B N , respectively. We may also assume that diam C k(j) < r 1 /2 for each of these curves. Now, if r < d 0 , p ∈ J (f ), by Lemma 2.1(b) we have B(f n (p), r 1 ) ⊂ f n (B(p, r)) for some n ∈ N. By the previous, B(f n (p), r 1 /2) intersects some C k = C k(j) with diam C k < r 1 /2, thus C k ⊂ B(f n (p), r 1 ). Hence, B(p, r) contains a preimage C k of C k , and by Lemma 2.1(a), (c) we obtain However, C k was one of the finitely many curves that we chose in the previous paragraph. 
This and the above inequalities imply that diam C k is comparable to diam B(p, r) = 2r with uniform constants. This completes the proof of (2). Finally, we will prove that condition (4) of homogeneity is satisfied. This follows easily from the fact that the Fatou components of a semi-hyperbolic rational map are uniform John domains in the spherical metric [Mih,Proposition 9]. Since we are only interested in the bounded Fatou components, we can use instead the Euclidean metric. A domain Ω ⊂ C is a λ-John domain (0 < λ ≤ 1) if there exists a basepoint z 0 ∈ Ω such that for all z 1 ∈ Ω there exists an arc γ ⊂ Ω connecting z 1 to z 0 such that for all z ∈ γ we have δ(z) ≥ λ|z − z 1 |, where δ(z) := dist(z, ∂Ω). Remark 4.4. Even when the Julia set of a semi-hyperbolic map is a Sierpiński carpet, the uniform relative separation of the peripheral circles C k in condition (3) need not be true. In fact, it is known that for such Julia sets condition (3) is true if and only if for all critical points c ∈ J (f ), ω(c) does not intersect the boundary of any Fatou component; see [QYZ,Proposition 3.9]. Recall that ω(c) is the set of accumulation points of the orbit {f n (c)} n∈N . Remark 4.5. In [QYZ,Proposition 3.7] it is shown that if the boundaries of Fatou components of a semi-hyperbolic map f are Jordan curves, then they are actually uniform quasicircles. If, in addition, they are uniformly relatively separated (i.e., condition (3) for all x > 0. Taking logarithms, one obtains $\frac{\log(1/C)}{\log x} + s \le \frac{\log N(x)}{\log x} \le s + \frac{\log C}{\log x}$. Letting x → ∞ yields lim x→∞ log N (x)/ log x = s, which completes part of the proof. Recall that the exponent E of the packing of the Fatou components of f is defined by $E = \inf\{t \ge 0 : \sum_{k \ge 0} (\operatorname{diam} C_k)^t < \infty\}$, and it remains to show that E = s. Note that for t = 0 the sum $E(t) := \sum_{k \ge 0} (\operatorname{diam} C_k)^t$ diverges. Also, since for semi-hyperbolic rational maps there are only finitely many "large" Fatou components, if E(t 0 ) = ∞, then E(t) = ∞ for all t ≤ t 0 . If t < s, then using (4.3) one sees that E(t) diverges. This implies that E ≥ s. Conversely, assume that t > s. Since there are only finitely many "large" Fatou components, we only need to take into account the sets C k with diam C k ≤ 1 in the sum $\sum_{k} (\operatorname{diam} C_k)^t$. Using again (4.3), and grouping the terms by the dyadic scales $1/2^{n} < \operatorname{diam} C_k \le 1/2^{n-1}$, one sees that this sum converges. Hence E ≤ s, which completes the proof.
Performance Evaluation of Widely Used Port Knocking Algorithms Port knocking is a technique by which only a single packet or a special packet sequence will cause the firewall to open a port on a machine where all ports are blocked by default. It is a passive authorization technique which offers firewall-level authentication to ensure authorized access to potentially vulnerable network services. In this paper, we present a performance evaluation and analytical comparison of three widely used port knocking (PK) algorithms: Aldaba, FWKNOP, and SIG-2. The comparative analysis is based upon ten selected parameters: Platforms (supported OS), Implementation (PK, SPA, or both), Protocols (UDP, TCP, ICMP), Out-of-order packet delivery, NAT (Network Address Translation), Encryption algorithms, Root privileges (for installation and operation), Weak passwords, Replay attacks, and IPv6 compatibility. Based upon these parameters, a relative performance score has been assigned to each algorithm. Finally, we conclude that FWKNOP, owing to its compatibility with Windows clients, is the most efficient of the chosen PK implementations. I. INTRODUCTION The Internet became part of our lives forty-one years ago, and from the beginning it has been a hostile place, so it is important to guard against unauthorized intrusions and other harmful attacks. The importance of security has increased because such serious risks did not exist before; the reason for this increased risk is the introduction of the Internet. The only fully secured system is one that has no connection with the outside world, but the Internet is, by definition, a connection, so it is impossible to have no connection with the world and have the Internet at the same time. The only thing that can be done is to limit the set of people and instructions that can access the computer. Many security schemes have been developed to fight these attacks and risks, but attackers have improved at the same pace. One way to limit access to selected users is to use an authentication method, but this is not a perfect solution: a user must first prove their identity, and access is then granted upon verification of authenticity. Many large and complicated systems have suffered from flaws in their authentication mechanisms, which make otherwise secure systems vulnerable to attackers. One usual method of limiting hosts is to use a firewall. With PK, all well-known ports of the secure server are closed, so when an attacker tries to connect directly to port 22, the firewall simply drops the packet and does not allow the attacker to access any secure port directly. A firewall selectively accepts and rejects network packets by considering their source address and other important characteristics. However, some dangerous attackers are capable of hiding the source of the packets they send, and users with unpredictable IP addresses can also easily pass through the firewall, so a firewall alone is not a complete solution either. Fig. 2. An authentic client who knows the predefined sequence of knocks, which acts as a key, sends TCP SYN packets to that predefined sequence of ports. Port knocking is a security mechanism installed on top of the firewall of a secure computer system. In essence, it provides an additional security layer over the protection already in place.
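The client side of a knock sequence (cf. Fig. 2) can be illustrated with a minimal Python sketch: the client simply attempts short-lived TCP connections to the secret ports, which is enough to emit the SYN packets the knock daemon watches for. The host address and port sequence below are hypothetical illustration values, and this sketch is not the client of any of the three implementations compared in this paper.

```python
import socket
import time

def knock(host, ports, delay=0.3, timeout=0.5):
    """Send one TCP SYN to each port in the secret sequence.

    The knock daemon only watches for the SYNs, so the connection attempts
    are expected to fail or time out; the failures are ignored on purpose.
    """
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))   # emits the SYN; usually dropped by the firewall
        except OSError:
            pass                      # no reply (or a reset) is the normal case
        finally:
            s.close()
        time.sleep(delay)             # keep the knocks ordered and spaced apart

if __name__ == "__main__":
    SERVER = "192.0.2.10"                   # placeholder address (TEST-NET-1)
    SECRET_SEQUENCE = [7000, 8000, 9000]    # hypothetical knock sequence
    knock(SERVER, SECRET_SEQUENCE)
    # After a correct knock the daemon opens the requested service port
    # (e.g. 22), so a normal TCP connection to that port should now succeed.
```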
PK closes all ports of the system on which it is implemented. It also works on the principle of least privilege, as it initially blocks all unauthorized users, so there is a visible enhancement in security compared with a system that has no port knocking mechanism. The PK scenario is explained through figures 1-a to 1-d. The steps involved in PK authentication can be clearly understood through the flowchart in figure 2, which provides a detailed, step-wise procedure of a general PK authentication mechanism. Since the advent of the PK authentication scheme, many PK algorithms with different characteristics have been presented. We have therefore carried out a performance evaluation and analytical comparison of three widely used PK algorithms against ten different parameters. Fig. 3. The PK daemon installed on the secure server firewall silently watches those packets; if the port knocks are found to be in the correct predefined order, the client is considered authentic and the PK daemon opens the client's requested port. Fig. 4. After successful authentication, the authentic client connects to the secure server through one of the well-known ports opened by the PK daemon. In section II, related work and motivation are described. Section III contains a description of all ten parameters on which the performance evaluation is based; the score limits are then defined and the performance evaluation is carried out by assigning each algorithm a score against the ten parameters according to its performance and compatibility with each parameter. Separate graphs are presented for all three algorithms, representing their scores against the ten parameters, and these three graphs are then combined into an overall comparison graph to demonstrate their scores and distinguish the best of the three; in the same section we also present some unique features of the three PK algorithms. Finally, in section IV we conclude this analytical comparison and identify the best algorithm against the ten parameters. II. RELATED WORK AND MOTIVATION Hussein Al-Bahadili [1] develops and evaluates the parameters of a newly built PK implementation, referred to as hybrid, which has the capability to defeat previous knocking techniques. This new technique uses concepts of PK, mutual authentication, and steganography. Rennie deGraaf, in Improved Port Knocking with Strong Authentication [2], studies existing PK implementations, improves existing PK techniques, and builds a new technique referred to as novel port knocking. The authors in [3] present improvements to existing PK and SPA techniques, such as a one-time password method using cellular networks (e.g. CDMA, GSM) to enhance security, and define protection against dictionary attacks. Muhammad Tariq et al., in Associating the Authentication and Connection Establishment Phases in Passive Authorization Techniques, Proceedings of the World Congress on Engineering 2008, London, U.K. [4], identify weaknesses such as the lack of a link between the authentication process and the establishment of a TCP connection, present another novel PK technique, and carry out simulations to evaluate the algorithms on the basis of overhead calculations.
Konstantinos Xynos and Andrew Blyth [5] propose implementing the port knocking technique over a gateway authentication layer, gateway authentication program, or network service program instead of the firewall, eliminating problems with firewalls and reducing brute-force attacks. Ben Maddock [6] defines port knocking and its benefits in detail, elaborates on the features of existing port knocking techniques, and finally offers an exploration of future work and conclusions on PK. Dawn Isabel, in Port Knocking: Beyond the Basics [7], provides three solutions for two basic problems with static PK, namely detection and replay, proposing dynamic knocks, covert knocks, and one-time knocks, and implementing these solutions over four PK techniques. Sebastien Jeanquier [7], in his MS thesis "An Analysis of Port Knocking and Single Packet Authorization", analyzes PK and SPA as network security mechanisms, assesses their suitability as firewall authentication schemes, and discusses drawbacks and shortcomings of current PK implementations, including a critical evaluation of FWKNOP that outlines its shortcomings and suggests some remedies. The work done by Sebastien is of great value, as it provides an evaluation of a single PK implementation over several parameters; however, there is also room for research comparing several widely used PK implementations against different parameters, so that a newcomer to this field can learn which implementation is best. We have done this work in this research paper by analytically evaluating several widely used PK implementations on ten parameters, and we present our data with the help of graphs to provide an even better view to the reader. III. PERFORMANCE EVALUATION OF PK ALGORITHMS In this paper we evaluate the performance of three PK algorithms under different scenarios and parameters, and then present their performance comparison. For this purpose we have selected the following ten performance parameters: Platforms (supported OS); Implementation (PK, SPA, or both); Protocols (UDP, TCP, ICMP); Out-of-order packet delivery; NAT (Network Address Translation); Encryption algorithms; Root privileges (for installation and operation); Weak passwords; Replay attacks; and IPv6 compatibility. We have plotted graphs of each algorithm against these parameters. The performance score ranges from 0 to 100: the better the performance of an algorithm against a parameter, the higher the score assigned to it. A score of 100 is awarded to an algorithm that fully supports the aspects of the given parameter and has solutions to all the issues related to that metric; if an algorithm is not robust against that parameter, it receives a correspondingly lower score. Platforms refers to the operating systems that are supported. The FWKNOP client supports both Windows and UNIX-based systems, but FWKNOP can only have a UNIX-based server, hence it is given a score of 80; Aldaba scores 50 due to having only a UNIX-based client and server; SIG-2 has the maximum score because both its client and server run on Windows and UNIX. Implementation refers to whether the port knocking scheme uses PK, SPA, or both. In this case Aldaba scores 100 due to supporting both PK and SPA, whereas FWKNOP and SIG-2 both score 50 because they use only one implementation.
FWKNOP supports three protocols, namely UDP, TCP, and ICMP, so it has been awarded the maximum score of 100; Aldaba supports UDP and TCP, so it has been awarded 70 points; SIG-2 supports only TCP, so it has been given 50 points. The problem of out-of-order packet delivery is inherent in contemporary networks. In FWKNOP this is not an issue, because it uses SPA, which comprises only a single packet, hence it has the maximum score in this regard. SIG-2 has no solution for this problem and therefore receives no score, while Aldaba handles it by using sequence numbers and so has the maximum score. NAT is widely used in present-day networks, and the problem is that the client has to know its public IP address before performing PK, as it cannot include its private IP address in the authorization packet. FWKNOP automatically obtains the public IP address, so it earns the maximum score. Aldaba resolves this issue by letting the client specify its public IP address, which places an extra burden on the client, so Aldaba has a score of 50. SIG-2 has no solution to this issue, so it receives no score. Encryption is a vital part of PK, and the more encryption algorithms a PK implementation supports, the better. Hence Aldaba has the maximum score because it supports 5 encryption algorithms, compared with 2 in FWKNOP and only 1 in SIG-2, which score 50 and 30 respectively. In FWKNOP and SIG-2, root privileges are required for installation only, which gives each of them a score of 80; Aldaba, on the other hand, requires them for both installation and operation, hence it gets only a score of 50. If the passphrases used to encrypt and decrypt the PK packets or the SPA packet are weak [7], i.e. vulnerable to dictionary and brute-force attacks, then an attacker in a man-in-the-middle position can easily capture the authorization packet and obtain the passphrase; once he has the passphrase, he can decrypt the packet and use the information to craft his own packet. Unfortunately, none of these three PK implementations has a solution to this problem, hence no score is given to any of the three algorithms. An eavesdropper inside the network [7] is an attacker who has the ability to watch traffic between client and server. Such an attacker can capture the authorization packet on its way from client to server and replay it at a later time. In this way the attacker can gain access to the server by replacing the IP address, as the server has no method of knowing whether the received packet is from a valid client or is just a packet replayed by an attacker. FWKNOP solves this problem by including a timestamp, accurate to the minute, inside the authorization packet, and it also includes some random data. The presence of the timestamp and random data ensures that the received packet is fresh and not an old replayed packet, so FWKNOP receives the maximum score for its thorough resolution of this issue. In Aldaba this issue is addressed by including the IP address of the client inside the authorization packet, but this is not a complete solution, because an attacker can still modify the IP address if he knows the passphrase, so we give Aldaba a score of 50 for this parameter. SIG-2 also uses a timestamp, but it does not include random data, so it also scores 50. Both Aldaba and FWKNOP are fully compatible with IPv6, which gives them the maximum score, whereas SIG-2 is not compatible with IPv6, so it receives no score.
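To make the replay-protection comparison above concrete, the following minimal Python sketch (standard library only) shows one way a timestamp plus a random nonce can be used to reject stale or duplayed authorization packets. It authenticates with an HMAC over a shared passphrase rather than the encryption schemes of the tools compared here, and the field names, tolerance window, and packet layout are illustrative assumptions, not FWKNOP's actual wire format.

```python
import hashlib
import hmac
import json
import os
import time

WINDOW = 120          # seconds of clock skew tolerated; illustrative value
seen_nonces = set()   # server-side replay cache (would need expiry in practice)

def build_request(passphrase: bytes, client_ip: str, port: int) -> bytes:
    """Client side: timestamp + random nonce make every packet unique."""
    body = json.dumps({
        "ip": client_ip,
        "port": port,
        "ts": int(time.time()),
        "nonce": os.urandom(16).hex(),
    }).encode()
    tag = hmac.new(passphrase, body, hashlib.sha256).digest()
    return tag + body

def accept(passphrase: bytes, packet: bytes) -> bool:
    """Server side: verify authenticity, freshness, and uniqueness."""
    tag, body = packet[:32], packet[32:]
    if not hmac.compare_digest(tag, hmac.new(passphrase, body, hashlib.sha256).digest()):
        return False                      # wrong passphrase / tampered packet
    msg = json.loads(body)
    if abs(time.time() - msg["ts"]) > WINDOW:
        return False                      # stale packet, possible replay
    if msg["nonce"] in seen_nonces:
        return False                      # exact replay of an earlier packet
    seen_nonces.add(msg["nonce"])
    return True
```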
A. FWKNOP Unique Features Besides these parameters, there are also some features that are unique to these PK implementations; in this section we discuss them. FWKNOP offers port randomization support both for the target port of SPA packets and for the port over which the secondary connection is made using iptables; the latter allows access to be granted to local sockets on the system running FWKNOP and such connections to be forwarded to internal services. FWKNOP also has a comprehensive test suite that runs a series of tests designed to validate whether both the client and server pieces are installed properly. The tests sniff SPA packets over the local loopback interface, build short-lived firewall rules that are verified against the particular access defined by the testing configuration, and then analyze the output of the fwknop client and server for the expected outcome of each test. These results can be used when communicating with third parties during an investigation. FWKNOP also supports multiple users at the same time; every user is assigned its own symmetric or asymmetric encryption key through the /etc/fwknop/access.conf file. It implements a versioned SPA protocol, which makes it convenient to extend the protocol with additional SPA message types while maintaining backwards compatibility with other FWKNOP clients. Fwknop also allows the execution of shell commands on the basis of valid SPA packets. B. Aldaba Unique Features An application that produces packets on its own must be careful in how it constructs them. Modern protocols require correct checksums, specific byte orders, particular field values, etc.; if packets do not follow these principles, routers, firewalls, and other network devices can have problems with them, resulting in the packets being discarded before reaching their host. Multiple simultaneous knocking attempts should be supported for systems that have two or more users, so that the server can handle multiple listening sessions at a time. If a system has deficiencies in its design, two clients sending different knocks at the same time can interfere with each other's knocks, which results in a denial of service (DOS) for both clients. Any PK implementation should associate a knock with its originating IP address. This can be done quite easily, which is why most PK systems do it. The actual problem is that an attacker can detect the start of a knock sequence and then forge his own packets with random data while spoofing the client's IP address, sending this information to the knocking server within an otherwise valid knock; this causes the server to evaluate incorrect data, resulting in the knock being discarded and a denial of service for the client. This problem is not present in Aldaba when the authentication protocol is SPA, because only a single packet is involved in that process; port knocking, however, is vulnerable to such an attack. So far Aldaba does not have a proper solution for this problem, so there is still a chance that a client will suffer a DOS attack if an attacker is able to guess the knocking sequence and detect the start of a knocking attempt made by the client. Port knocking also causes additional load while listening for incoming packets, due to the processing done by the knocking server as it scans ports. Whenever the start of a knocking attempt is detected, new data structures are created to handle it. Moreover, Aldaba keeps a complete record of all knocking attempts that are in progress or waiting to be processed.
A new data entry is created whenever the start of a knock is detected, so if an attacker comes to know the ports that form the knocking sequence, he will be able to create and send multiple packets with different source IP addresses. In such a case the knocking server treats the situation as multiple clients trying to send a knock, so a node is created for each different IP. If the attacker keeps sending false knocks, the system will eventually run out of memory and the ultimate result will be a crash of the server. C. SIG-2 Unique Features SIG-2 does not contain any unique features other than those already mentioned above. IV. CONCLUSION After the performance evaluation it is concluded that SIG-2 port knocking is an outdated and weak implementation, as can clearly be seen from the graphs. FWKNOP and Aldaba port knocking are good implementations with nearly the same features. The ability of FWKNOP to also use a Windows client gives it a slight edge over Aldaba port knocking.
Effects of facial biofeedback on hypomimia, emotion recognition, and affect in Parkinson’s disease Abstract Objectives: Facial expressions are a core component of emotions and nonverbal social communication. Therefore, hypomimia as a secondary symptom of Parkinson’s disease (PD) has adverse effects such as social impairment, stigmatization, under-diagnosis and under-treatment of depression, and a generally lower quality of life. Besides unspecific dopaminergic treatment, specific treatment options for hypomimia in PD are rarely investigated. This quasi-randomized controlled trial evaluated the short-term effects of facial electromyogram (EMG) based biofeedback on enhancing facial expression and emotion recognition as nonverbal social communication skills in PD patients. Furthermore, effects on affect were examined. Method: A sample of 34 in-patients with PD was allocated either to facial EMG-biofeedback as the experimental group or to non-facial exercises as the control group. Facial expression during posing of emotions (measured via EMG), facial emotion recognition, and positive and negative affect were assessed before and after treatment. Stronger improvements were expected in the EMG-biofeedback group in comparison to the control group. Results: The facial EMG-biofeedback group showed significantly greater improvements in overall facial expression, especially for happiness and disgust. Overall facial emotion recognition abilities also improved significantly more strongly in the experimental group. Positive affect increased significantly in both groups, with no significant differences between them, while negative affect did not change in either group. Conclusions: The study provides promising evidence for facial EMG-biofeedback as a tool to improve facial expression and emotion recognition in PD. Embodiment theories are discussed as a working mechanism. Introduction Parkinson's disease (PD) is one of the most common and disabling neurological disorders in advanced age. Besides primary symptoms such as tremor, bradykinesia, rigidity, and postural instability, a growing body of research has addressed hypomimia, an important secondary symptom (Bologna et al., 2013). Two anomalies contribute to the mask-like appearance of patients with PD. Firstly, spontaneous facial activities are reduced, among them blinking, emotional expressions, and pain expressions (Agostino et al., 2008; Priebe et al., 2015; Simons et al., 2003). Secondly, voluntary facial movements are abnormally weak, e.g., when patients are instructed to pose an emotion (Bologna et al., 2016; Bowers et al., 2006). Hypomimia is often regarded as a pure motor symptom of PD, yet because facial expression is inseparably involved in emotional processes, this symptom has many negative effects on patients' quality of life. Practitioners rated patients with facial masking as more depressed, less sociable, and less cognitively competent (Tickle-Degnen et al., 2011). As depression and dementia are common in patients with PD (Riedel et al., 2016), hypomimia can, in addition to stigma, also lead to misdiagnoses. Furthermore, drastic consequences of hypomimia on social life have been found. The more hypomimic an individual was, the less interest healthy adults showed in interacting with them (Hemmesch et al., 2009). Care partners' ratings of how much they enjoyed interacting with patients with PD were negatively correlated with their ratings of the patients' facial masking (Gunnery et al., 2016).
Despite this evidence for the detrimental influence, clinical practice lacks treatment of hypomimia.Beside dopaminergic treatment, literature research revealed scarcity of evidence-based treatments.One study showed reduced hypomimia scores after a specialized treatment of hypomimia consisting of facial proprioception, emotion recognition, and mimicking tasks (Ricciardi et al., 2016).Furthermore, enhanced facial expression parameters were found as side benefits of Lee Silverman Voice Treatment (Dumer et al., 2014), group music therapy (Elefant et al., 2012), or orofacial physiotherapy (Katsikitis & Pilowsky, 1996).Generalization of treatment effects outside the study context or in daily life were not investigated in any of these studies.The findings suggest two implications.Firstly, hypomimia seems to be treatable beside of pharmacological approaches, and secondly, further empirical research should be aimed on hypomimia treatment. Hypomimia and deficits in mimicry among patients with PD has also been associated to distinct deficits in decoding emotions from other peoples' faces (Livingstone et al., 2016).While hypomimia is the general reduction of voluntary and spontaneous facial expressions, mimicry describes the mainly spontaneous, subconscious, and unintentional imitation of the opposite's facial expression (Blairy et al., 1999;Dimberg et al., 2000).Embodiment theories suggest that the perceiver mimics the facial expression of the counterpart and the corresponding motor, sensory, cognitive, and affective processes are triggered (Dimberg et al., 2000;Hess & Blairy, 2001;Wood et al., 2016).Deficits in mimicry could have a mediating role in emotion recognition deficits in PD patients (Livingstone et al., 2016).Two recent studies have investigated the association between emotion recognition and facial expression deficits in patients with PD (Livingstone et al., 2016).Contradicting results were reported.In the study in which voluntary facial expressions were investigated, no relation between the impairments in recognition and expression were found (Bologna et al., 2016), whereas in the study investigating mimicry, significant correlations with emotion recognition deficits were shown (Livingstone et al., 2016).Addressing the inconsistent findings and to contribute to embodiment theories, the proposed association is also examined in this study. 
As a further aspect associated with hypomimia, facial muscle activity was shown to be a reliable part of the affective reaction (Cacioppo et al., 1986).In specific, patients with neuromuscular disorders were found to be more severely depressed when they show specific impairments in smiling (Van Swearingen et al., 1999).Reduced physiological feedback as well as impairment in social interactions are suggested as underlying factors (Gunnery et al., 2016).As also a considerable number of patients with PD are found to show depressive symptoms, enhancing the ability to smile is considered as supportive for (social) well-being (Yamanishi et al., 2013).Several studies in healthy subjects could already show that an activation of Musculus zygomaticus major (zygomaticus), associated with the expression of happiness, was linked to positive affect (Strack et al., 1988).Up to now, there are no specific treatments, targeting facial expression to improve neither positive affect nor emotion recognition in PD patients.In this regards, also trainings targeting the expression of negative affects like sadness, anger, fear, disgust, and the associated corrugator muscle (Cacioppo et al., 1986) seem of interest. So far, no previous study examined the effects of facial biofeedback on hypomimia.Biofeedback is a technique to assess and to provide feedback on usually involuntary physiological signals, and therefore seems to be a promising approach to improve PD patients' facial expressivity.We furthermore investigated potential effects on emotion recognition and affect.Owing to the positive influence on social interactions and affect we decided to focus on the training of happiness and the associated zygomaticus.Additionally, the expression of sadness, anger, fear, disgust, and the associated corrugator muscle are trained to examine broad effects.The effects of biofeedback are being compared to those of non-facial gymnastics, as a reliable intervention to improve mobility in PD patients.We hypothesize, that facial EMG-feedback in comparison to non-facial gymnastics achieves a significantly higher pre to post increase in (a) facial expressions, (b) facial emotion recognition, and (c) positive affect.Further exploratory research questions address effects on negative affect, emotion-specificity of training, and correlations between emotional expressions, recognition, and affect. Methods Experimental procedures were in line with the Declaration of Helsinki and approved by the ethical review committee of the University of Regensburg .The study is reported in accordance with the Consolidated Standards of Reporting Trials (CONSORT) statement (Schulz et al., 2010). Participants Thirty-four participants were recruited from inpatients of a neurological hospital.The hospitalization followed a planned admission for multimodal treatment for patients with movement disorders.Eligibility criteria included a clinical diagnosis of idiopathic PD, and hypomimia defined by a score > 0 on item 19 of the Unified Parkinson's Disease Rating (UPDRS; Goetz et al., 2008).Patients were excluded for cognitive impairment, defined by a Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005) score < 20, language unfamiliarity (<8 years experience of speaking German), facial botulinum toxin treatment in the last six months (assuming a reduced facial motility), and facial hyper-and dyskinesia as side effect of medication (to avoid EMG artifacts). 
Study design, outcomes, and randomization Within this quasi-randomized-controlled trial with a withinbetween-subject design, participants were assigned in equal number either to facial EMG biofeedback (experimental group) or non-facial gymnastics (control group).Group allocation was based on the order in which people were recruited, with alternating assignment to the experimental and control group by a person blind to the experimental design.As primary outcome for testing the hypotheses, the patients' (a) overall facial expression, (b) overall emotion recognition, and (c) positive affect were assessed pre and post treatment.As secondary outcomes, the facial expression during posing of and recognition of the emotions happiness, surprise, sadness, disgust, fear, and anger, as well as negative affect were assessed at pre and post treatment.The required sample size was calculated via an a-priori statistical power analysis using G*Power 3.1 (Faul et al., 2009).Because no literature on EMG-changes in the treatment of hypomimia was found, we referred to the feasibility study on hypomimia treatment (Ricciardi et al., 2016) which found medium to big effect sizes and expected a medium effect.Power was set to 80%, alpha to .05,r to .50 and the effect size was estimated as f = 0.25.The Analysis indicated a sample size of 34 patients to detect an interaction effect of Group × Time within a repeated-measures ANOVA.There was no blinding realized. Apparatus All stimuli were presented with the BioTraceþ Software (V2017A, Mind Media, Herten, Netherlands) on a notebook with a 17-in.LCD display.The NeXus-10 system (also Mind Media) was used for measuring and amplifying the surface EMG signal.Two bipolar electrodes were placed on two emotion specific facial muscles (Cacioppo et al., 1986), and as ground electrode on the clavicle to reduce effects of ground electrode placement on facial expressivity (Fridlund & Cacioppo, 1986).Signals were bandpass filtered (20-500 Hz), and acquired at 1,024 samples per second.For further analyses as well as for the biofeedback training, the EMG amplitudes, in terms of root mean square-voltage were calculated with 32 samples per second.To smooth the visual signal, the feedback-parameter was the averaged amplitude over the last epoch, sized 1/4 s. Facial expression For measuring facial expressivity, the muscular activity during an expression posing task was assessed via EMG.Zygomaticus activity was recorded to analyze the expression of happiness.Corrugator activity was recorded for the expression of anger, sadness, surprise, and fear; for being involved in the expression of negative valence (Cacioppo et al., 1986;Topolinski & Strack, 2015).As approximation to measure disgust, the mean amplitude of zygomaticus and corrugator was computed.The zygomaticus is located nearby the Musculus levator labii, which is the main contributor to the expression of disgust, and crosstalk can be expected (Fridlund & Cacioppo, 1986).We validated this procedure as proxy to measure disgust in healthy subjects before.During the task, the patients were presented words of emotions ("happy", "disgusted", "angry", "surprised", "sad", and "fearful") and were instructed to pose the facial expression for 10 s.All emotions were presented once, and in the same order in every participant.The mean EMG amplitude of the 10 s period was used as measured variable for each facial expression, for overall expression the mean over all emotions was used. 
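The EMG signal path described in the Apparatus and Facial expression sections (20-500 Hz band-pass, acquisition at 1,024 samples per second, RMS amplitude at 32 values per second, feedback averaged over the last 1/4 s epoch) can be illustrated with the following minimal Python/SciPy sketch. The filter order, zero-phase filtering, and non-overlapping windows are assumptions of this illustration rather than details reported by the authors.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1024                  # acquisition rate in Hz (as reported)
RMS_RATE = 32              # RMS values per second (as reported)
WINDOW = FS // RMS_RATE    # 32 raw samples per RMS epoch

def emg_rms_envelope(raw_emg: np.ndarray) -> np.ndarray:
    """Band-pass filter the raw EMG (20-500 Hz) and return its RMS envelope."""
    sos = butter(4, [20, 500], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, raw_emg)            # zero-phase filtering (assumption)
    n = len(filtered) // WINDOW
    windows = filtered[: n * WINDOW].reshape(n, WINDOW)
    return np.sqrt(np.mean(windows ** 2, axis=1))   # one RMS value per 1/32 s

def feedback_value(rms_envelope: np.ndarray) -> float:
    """Smoothed feedback parameter: mean amplitude over the last 1/4 s epoch."""
    return float(np.mean(rms_envelope[-RMS_RATE // 4:]))   # last 8 RMS samples
```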
Emotion recognition The patients' ability to recognize emotions from facial stimuli was assessed with a modified version of the Ekman 60 Faces Test (Ekman, 1976), shortened to 48 faces depicting the six basic emotions happiness, surprise, sadness, disgust, fear, and anger.A review on facial emotion recognition in PD indicates that differences in emotion recognition ability can be found using shortened versions of the 60 Faces Test (Assogna et al., 2008).After an exemplary screen and assuring the patients' comprehension of the task, each face was presented for three seconds, response time was limited to 30 s, and the patients were instructed to respond preferably fast.The response format was a six forced-choice identification task.The order of emotions was pseudorandomized in favor of no consecutive repetition and was the same for every participant.For overall emotion recognition the mean percentage of correct answers over all six emotions is used. Affect The patients' current affect was assessed with the German paperand-pencil version of the Positive and Negative Affect Schedule (PANAS) in the state version (Watson et al., 1988).It consists of two independent and internally consistent scales for positive and for negative affect (Krohne et al., 1996).Ten items (adjectives describing either positive or negative affect at the moment) are rated on a five-point scale. Facial EMG-biofeedback training The experimental group training consisted of an imitation task, which was assisted by facial EMG Biofeedback.Zygomaticus amplitude was assessed to feedback facial expressivity for happiness, corrugator amplitude for sadness, fear, and anger (displayed via bar chart); and the amplitude of both muscles for disgust (two bar charts).Starting with brief psychoeducation screens about reciprocity of communication, followed by instruction screens, patients' comprehension of the task was ensured.Figure 1 shows an exemplary screen.Facial stimuli were depicted from the Montreal Set of Facial Displays of Emotion (Beaupré & Hess, 2005).Caucasian models' pictures in the 100% intensity were used.A training sequence started with the participant mimicking the displayed emotional expression.A slow running average was used to determine the adaptive threshold.Whenever the amplitude surpassed the threshold (yellow marked at the bar chart) for more than 500 ms, a rewarding sound rang out.The participants were instructed to relax for a short moment, then try to reach the threshold again, until the period of 30 s was over.The threshold was consequently adaptively determined in relation to the individual participants' muscular tension during the interval.Three training blocks lasting on average 8-10 minutes with breaks of 2-4 minutes between were conducted.The whole training session including instructions and examples lasted about 40 minutes.Within each block happiness was trained eight times; sadness, anger, fear, and disgust respectively one time.Surprise was not included, because for the expression of a recognizable surprised facial reaction we considered corrugator amplitude as not sufficiently specific.For the assessment of surprise we used the muscle activity in the region of corrugator muscle as an approximation since raising and/or lowering of brow is mainly involved (Topolinski & Strack, 2015). 
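A rough sketch of the reward logic in the biofeedback training described above (adaptive threshold derived from a slow running average, reward when the amplitude stays above the threshold for more than 500 ms) is given below. The smoothing factor and the margin above the running average are illustrative assumptions, since the paper does not report those values.

```python
RMS_RATE = 32                    # feedback samples per second
HOLD = int(0.5 * RMS_RATE)       # amplitude must exceed the threshold for 500 ms

def reward_indices(rms_envelope, alpha=0.05, margin=1.1):
    """Return the sample indices at which the rewarding sound would be triggered."""
    threshold = rms_envelope[0]
    consecutive = 0
    rewards = []
    for i, amplitude in enumerate(rms_envelope):
        threshold = (1 - alpha) * threshold + alpha * amplitude  # slow running average
        if amplitude > margin * threshold:
            consecutive += 1
        else:
            consecutive = 0
        if consecutive >= HOLD:
            rewards.append(i)        # reward tone; participant relaxes, then tries again
            consecutive = 0
    return rewards
```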
Physical training (control) The control group training consisted of mild physical activation and stretching exercises (amplitude oriented therapy -LSVT-BIG, 2018) for non-facial muscles suggested for patients with PD.Starting with brief psychoeducation about the importance of physical activation, the training included torso mobilization, armshoulder-finger mobilization, and leg-feet mobilization.In line with the experimental training, the control treatment consisted of three blocks and breaks, resulting in a comparable length. Procedure Figure 1 illustrates the study procedure.Initial screening was conducted within the standard diagnostic procedure on the inpatient ward.The study took place in an examination room of the clinic and was applied by the same trainer in all participants.The patients were fully enlightened about the purposes of the study and gave written informed consent.Sociodemographic and clinical variables, among others depression screening (Lehr et al., 2008) and medication status (Table 1), were examined.Pre-test, intervention, and posttest were all conducted within one session. Statistical analyses Baseline differences in demographic and clinical variables, as well as in outcome variables were assessed using t-tests for continuous variables and χ 2 -tests for categorical variables.Mixed-design ANOVAs were conducted to test the three hypotheses, defining significant time × group interaction effects as confirmatory.Therefore, facial EMG data were standardized as z-scores within time series and electrode sites so that analysis across muscle sites is allowable.For better legibility we furtherly used T-scaled values.Alpha was set to .05.For tests of primary outcomes, no correction had to be applied.For secondary outcomes, Bonferroni's correction was applied to multiple comparisons (six comparisons: p = .05/6= .008).As effect size, partial eta-squared ( η 2 p ) values and Cohen's d were calculated.For further exploratory analyses, descriptive statistics, Pearson correlation coefficients as well as repeated-measures ANOVAs were calculated.All analyses were conducted in SPSS 26 (IBM Corporation, 2019).There was no missing data in any outcome variable. Sample Figure 2 shows the flow of participants through each stage of the study.The baseline sociodemographic and clinical variables of the final sample consisting of 34 inpatients with idiopathic PD are shown in Table 1.No significant differences between groups were found among those variables.Within the outcome variables, only in one out of 16 tests a significant pretest difference was found, zygomaticus amplitude while imitating happiness was lower in the experimental group than in the control group, t(32) = 2.369, p = .029,d = 0.81.After correction for multiple testing, no significant pretest differences remain.To conclude, the pretest difference of zygomaticus amplitude lies within the chance probability range.In Supplementary Table 1, the outcome variable values at pretest are reported separately by sex.Briefly summarized, while there were no differences in other primary variables, female participants perform significantly better in facial emotion recognition, t(32) = 2.51, p = .009,d = 0.89. 
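The Group x Time interaction tests reported in the following sections correspond to a mixed-design ANOVA of the kind sketched below. The use of the pingouin package, the column names, and the z-to-T transformation (T = 50 + 10z) are assumptions of this illustration, not part of the authors' reported toolchain (they used SPSS 26).

```python
import pandas as pd
import pingouin as pg

def t_scale(values: pd.Series) -> pd.Series:
    """Convert raw values to T-scores (mean 50, SD 10) via z-standardization."""
    z = (values - values.mean()) / values.std(ddof=1)
    return 50 + 10 * z

def interaction_test(df: pd.DataFrame) -> pd.DataFrame:
    """Mixed-design ANOVA: between-subject factor 'group', within-subject factor 'time'.

    df is expected in long format with one row per patient and time point,
    e.g. columns: subject, group (biofeedback/control), time (pre/post), score.
    """
    return pg.mixed_anova(data=df, dv="score", within="time",
                          subject="subject", between="group", effsize="np2")
```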
Facial expression Over all emotions, a significant time × group interaction effect for facial expression as primary outcome was found concerning changes from pre-to posttest between the facial EMG-feedback and the control group, F(1,32) = 10.07,p = .003,η 2 p = 0.24., see Figure 3 for an overview.Regarding the specific emotional expressions, significant time × group interaction effects were found for the muscular activity during the expression of happiness and disgust (Table 2).With respect to the mean values at pre-and posttest for both groups, this indicates a greater overall increase in muscular activity, and specifically during the expression of happiness and disgust in the experimental group in contrast to the control group.Including sex as a covariate has no significant influence.For the expressions of sadness, fear, anger, and surprise, no significant group*time interaction effects were found.In an additional analysis, in which the negative facial expressions are grouped into one category, significant group*time interaction was found, F(1,32) = 11.08,p = .002,η 2 p = 0.26. Facial emotion recognition For facial emotion recognition over all emotions as primary outcome, we found a significant time × group interaction effect, F(1,32) = 8.18, p = .007,η 2 p = 0.20, for an overview see Figure 3. Facial emotion recognition improved significantly stronger in the facial EMG-biofeedback than in the control group.Regarding the recognition of specific emotional expressions, no significant time × group effects were found.Including sex as a covariate has no significant influence.In an additional analysis, in which the negative emotion recognition values are grouped into one category, a significant group × time interaction was found, F(1,32) = 6.49, p = .02,η 2 p = 0.17, indicating a stronger improvement in the facial EMG-feedback group for the recognition of negatively valenced expressions in comparison to the control group. Positive and negative affect For positive affect, we found a significant effect of time, F(1,32) = 11.40,p = .002,η 2 p = 0.26, but no significant time × group interaction effect, F(1,32) = 3.16, p = .085,η 2 p = 0.09, indicating that positive affect increased in both groups, with no significant difference between groups (for an overview see Figure 3).For negative affect, no significant time effect and no significant time × group effect was found (Table 2).Including sex as a covariate has no significant influence. Emotion-specificity of training To show that the biofeedback training was emotion specific, we analyzed if only the relevant muscle (see Methods) responded.The dataset of one participant was removed, for exhibiting artifacts in the second training block.As expected, the amplitude of the emotion specific muscle (e.g., zygomaticus for happiness) was above the opponent muscle for all trained emotions (Figure 4).For disgust, the amplitude of corrugator was higher than of zygomaticus, although both were trained/displayed.Also mixed-design ANOVAs proved a significant main effect of the factor Muscle over all three blocks of the training, indicating that the training specifically addressed the expected muscle (Supplementary Table 2). 
Correlations of facial expression and emotion recognition To examine associations between muscular facial expression and emotion recognition as suggested by embodiment theories, correlations between sociodemographic and clinical variables at pretest were computed (Supplementary Table 3).Facial expression and emotion recognition scores were positively correlated at pretest, even when controlling for age and cognitive impairment (Supplementary Table 4).Age and cognitive impairment were correlated to emotion recognition.This means that patients with a higher muscle amplitude in the facial expression task at pretest also showed better facial emotion recognition abilities at pretest.A total emotion recognition score over all emotions, and sub scores for the specific emotions were assessed.c Mean scores for affect were measured using the PANAS (Positive and Negative Affect Schedule) subscales for Positive Affect and for Negative Affect (both range 10-50). Correlations of outcome variables and PD related clinical variables To examine whether facial expression, facial emotion recognition, and positive and negative affect are related to PD severity, correlations between years since diagnosis, UPDRS-III scale, levodopa dose equivalent, and facial expressivity at pretest were computed.No significant correlations between PD related clinical variables and outcome variables at pretest were found (see Supplementary Table 5).Not surprisingly, years since diagnosis correlated with levodopa dose equivalent and UPDRS-III scale. Discussion This quasi-randomized-controlled trial examined facial EMGbiofeedback training as clinical approach to reduce hypomimia in patients with idiopathic PD.The facial EMG-biofeedback compared to non-facial gymnastics as control condition resulted in significantly greater improvements concerning facial muscular activity during the expression of emotions, and emotion recognition abilities.Positive affect significantly increased in both groups, with no significant differences between them.In regard to the single emotional expressions, greater improvements from pre to post measure were specifically confirmed in facial muscular activity during the expression of happiness and disgust.The findings suggest that our one-session biofeedback training is feasibly and effective concerning voluntary emotional facial expressions.Regarding single emotional expressions, greater improvements from pre to posttest were specifically confirmed for the expression of happiness and disgust.Exploratory analysis showed that the biofeedback training specifically addressed the emotion-related facial muscles (Figure 4).The feasibility of a specific hypomimia rehabilitation program was only examined in one prior study (Ricciardi et al., 2016).This training focusing on facial proprioception training resulted in a stronger reduction of hypomimia scores (UPDRS-III, item 19), and a stronger increase in facial expression of fear (but not the other basic emotions) measured via a computerized video analysis, both in comparison to DVD-guided facial physiotherapy and no treatment.Changes in emotion processing and recognition were not examined.As happiness and therefore zygomaticus was trained eight times per block, whereas all other emotions were trained only once per block, this indicates a stronger effect for happiness that might result from it's more frequent training.This could also serve as explanation for the result concerning disgust.As the only emotion beside happiness, zygomaticus was also trained in the 
expression of disgust, in addition to corrugator.Alternatively, or additionally, one might speculate that zygomaticus is easier to be trained than corrugator.Future studies could examine a more frequent training for sadness, anger, and fear, and could develop possibilities to include a training for surprise. As potential working mechanisms of the effects of facial EMGbiofeedback training on facial expressivity, enhanced selfperception, and improvements in motor extent planning can be suspected.There is evidence that the basal ganglia network, which is known to be dysfunctional in PD, plays a part in the planning of the extent of movements (Desmurget et al., 2004).Dysfunctional feedback loops while executing motions are therefore hypothesized in PD.Thus, monitoring the patient's amplitude while mimicking an expression is suggested as supportive for motor extent planning.The hypothesized dysfunctional feedback loop could then partly be compensated by the visual presentation of the actual motion (Desmurget et al., 2004).Whether the reinforcement used in this study helped to increase motivation or treatment success cannot be disentangled with our study.Nonetheless, the emotion-specific amplitude increased over the course of the blocks (see Supplementary Table 2).This can be seen as an indication that the adaptive threshold may have supported a continuous increase in muscle effort.However, those expected mechanisms behind the effect of facial EMG biofeedback should be examined and validated in future studies.We suggest to use dismantling studies to further investigate necessary and sufficient parts of the facial EMGfeedback training.Furthermore, it would be helpful to know whether there are subgroups of patients who differentially benefit from biofeedback training or also from facial expression training.With a larger sample size interindividual differences, e.g.regarding predominance of either tremor or rigidity and Hoehn-& Yahr stages on treatment outcome should be investigated.Nonetheless, the focus of this study was on feasibility and efficacy of facial EMGfeedback training.First evidence is provided that facial EMGfeedback training can be used to improve facial expressivity in patients with idiopathic PD. 
To our best knowledge, this is the first study to use an intervention approach to shed further light on embodiment theories, which claim that emotion processing is multimodal and that the activation of one component (i.e.vision processing of an emotional face) often leads to co-activation of other components (i.e.emotion expression or also affect itself; Wood et al., 2016).As one indication, emotion recognition improved significantly stronger in the facial biofeedback in comparison to the control group.Nonetheless, no definite reply can be made whether this results from enhanced facial expressivity or from other explanations, such as an added value of processed emotional faces.However, in line with embodiment theories, our exploratory correlations over all participants revealed a robust association between facial expression and emotion recognition at pretest.As further point for discussion, only overall emotion recognition showed stronger improvements, while no significant time × group interaction effect was found for the single expressions.Additionally, with recognition rates of 94% for happiness, this sub value may underlie a ceiling effect.To clarify whether we could not find a training effect on happiness recognition due to nonexistence of effect or due to the ceiling effect, future studies could use happiness recognition items with higher level of ambiguity and therefore more difficult items.Nonetheless, the greatest effect sizes of the intervention on emotion recognition were found for happiness, supposable due to the more frequent training of zygomaticus.Causal relations between emotion recognition and facial expression in patients with PD could be tested in future studies using a dismantling design. The increase in positive affect in both groups, but no stronger increase in the EMG-biofeedback group could be due to unspecific factors as activation, care, and social interaction.Alternatively, two different mechanisms specific for the respective intervention are possible.The control group received physical exercises, which have constantly found to enhance mood (Berger & Motl, 2000).Zygomaticus training showed to improve mood via triggering the affective component of smiling (facial feedback hypothesis; Strack et al., 1988).Negative affect did not significantly change in both groups, which however can be interpreted as an indication that neither the biofeedback nor the control training had an aversive effect. 
In general, the investigation of mid-and long-term effects of a more frequent facial EMG training should be examined in future studies.Since PD is a progressive degenerative disease, the research questions could rather target the stability of impairments over a limited period of time instead of improvements, and could examine effects on the social environment.In this study, emphasized training of smiles was applied due to several reasons.A considerable amount of patients with PD were found to show depressive symptoms, which are related to their quality of life (Yamanishi et al., 2013).Patients with neuromuscular disorders were found to be more severely depressed, when they show specific impairments in smiling (Van Swearingen et al., 1999).This was suggested to be due to reduced physiological feedback as well as impairment in social interactions.Therefore, enhancing especially the ability to smile is considered as most supportive for (social) well-being in patients with PD.Furthermore, enhanced physiological feedback loops while smiling as well as social reciprocity and improved social interactions could be possible (Hess & Blairy, 2001;Van Swearingen et al., 1999).Therefore, systematic investigation of possible changes in external evaluation by relatives and care partners should be endeavored.While mood improvements and alleviation of depressive symptoms would be generally desirable for PD patients, also improvements in the perception and expression of situation-adequate negative emotions might be supportable (Likowski et al., 2011;Seibt et al., 2015).Reduced emotional reactivity and recognition abilities, concerning negative affects might equally impair patients' social integration and wellbeing like it is true for positive affect.Therefore, future studies could also target the impact of EMG-biofeedback training on situation specific affective states. 
Limitations As outlined above, one limitation concerns the absence of clinical assessments of effects of the treatment and of follow-up assessments, therefore, further research is demanded to examine longterm effects.A further limitation concerns the external validity of the operationalization of facial expressions via the conducted EMG measurement.In the often-used coding system for facial expressions, emotional expressions are characterized by many interacting facial movements (Ekman & Friesen, 1978).Due to our biofeedback device, the measurement and training was restricted to two muscles (zygomaticus and corrugator), which can only represent a rough approximation to the complex patterns of emotional expression.Especially our approach to measure disgust, which is characterized by activation of levator labii muscle, which was not recorded in this study, has to be reflected critically.In prior studies, computer algorithm-based video observations were used to measure facial expressions (Bandini et al., 2017;Bologna et al., 2016;Wu et al., 2014).In future trials, this technology could be used for facial biofeedback, and online therapy using webcams could be considered.Besides, future assessment of the trainings' effects should include external valid measures also for hypomimia.As our study only measured muscular activity while posing expressions, no prediction can be made regarding spontaneous expressions or appropriate application in social interactions.In this regards, also effects on the patients' social integration and quality of life should be examined.However, in this study emotional faces were used as training stimuli, whereas in the facial expression task emotional words were presented.This indicates that patients in the experimental group were able to carry over what they trained with facial stimuli (imitating) to the expression task (posing).Finally, it should be considered that it was no doubleblind randomized controlled trial.Participants were fully enlightened about the purposes of the study.Future studies could conceptualize a double-blind study for example with shambiofeedback as control treatment to rule out nonspecific factors for improvement.In a next step, the treatment should also be validated in outpatients and patients with stronger cognitive impairment.Nonetheless, the focus was on validation of feasibility and efficacy of facial biofeedback training.Furthermore, all assessments were computer-guided and also objective data (muscle amplitude) was collected.Hence examiner effects can be assumed to have been small. Conclusion This quasi-randomized, controlled trial provides first evidence on the feasibility and efficacy of facial EMG-feedback training in terms of improved facial expressivity and emotion recognition in patients with idiopathic PD.Furthermore, positive affect increased from pre-to posttest, yet also in the control group (unspecific muscular activation).Additionally, emotion recognition and facial expression capabilities of participants were robustly correlated.Overall, these results provide preliminary evidence that facial EMG-feedback training might provide a valuable component of multimodal treatment in patients with idiopathic PD and hypomimia. Supplementary material.The supplementary material for this article can be found at https://doi.org/10.1017/S1355617723000747Funding statement.Nothing to declare. Figure 1 . Figure 1.Study procedure (A), exemplary screen of the recognition task (B), exemplary screen of the biofeedback training (C). 
Abbreviations: MoCA, Montreal Cognitive Assessment; UPDRS-III, Unified Parkinson's Disease Rating Scale III (motor subscale); ADS-K, Allgemeine Depressionsskala - Kurzversion [English: General Depression Scale - Short Version]; LED, levodopa dose equivalent; PANAS, Positive and Negative Affect Schedule. Facial expressions are reported as T-scores. χ²-tests were conducted for categorical data; t-tests were conducted for continuous variables.
Figure 3. Primary outcome variables as a function of time point of assessment (pretest vs. posttest) and intervention group (facial EMG-feedback group vs. control group). (a) Mean facial muscle amplitude while posing different emotions during the facial expression task (assessed in mV; z- and T-transformed for analyses). The average amplitude over all emotions and the single amplitudes for the specific emotions were assessed (zygomaticus for happiness; corrugator for anger, fear, surprise, and sadness; the average of zygomaticus and corrugator for disgust). (b) Mean scores of facial emotion recognition were measured with a shortened version of the Ekman 60 Faces test. The scores are reported as the percentage of correct responses [range 0-100%].
Figure 4. Mean muscle amplitudes for zygomaticus and corrugator in PD patients from the biofeedback training group (N = 16) during the training blocks (T1-T3), assessed in mV and z- and T-transformed for analyses. Error bars indicate the standard deviation. The training was expected to specifically target the zygomaticus during the expression of happiness; the corrugator during anger, fear, and sadness; and both muscles during disgust.
Table 2. Analyses of the interaction of intervention and time on emotional expression, emotion recognition, and affect. Note. N = 34 PD patients. F-values and p-values for time and time × group interactions within a mixed repeated-measures ANOVA are reported; η²p values are additionally reported for the tests of hypotheses (group × time interactions). Bold values indicate significant results of hypothesis tests, Bonferroni-corrected, p < .008.
2023-11-30T06:17:32.481Z
2023-11-29T00:00:00.000
{ "year": 2023, "sha1": "b0d505372dba981afb9b60e0a50a0abb5c71a354", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/035CA410C5A5A4BC14A0E6FD56A322A3/S1355617723000747a.pdf/div-class-title-effects-of-facial-biofeedback-on-hypomimia-emotion-recognition-and-affect-in-parkinson-s-disease-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "2ae97975e799119c39560ae9dd44c1b8c68715f6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249644623
pes2o/s2orc
v3-fos-license
Large-magnitude (VEI ≥ 7) ‘wet’ explosive silicic eruption preserved a Lower Miocene habitat at the Ipolytarnóc Fossil Site, North Hungary
During Earth’s history, geosphere-biosphere interactions were often determined by momentary, catastrophic changes such as large explosive volcanic eruptions. The Miocene ignimbrite flare-up in the Pannonian Basin, which is located along a complex convergent plate boundary between Europe and Africa, provides a superb example of this interaction. In North Hungary, the famous Ipolytarnóc Fossil Site, often referred to as “ancient Pompeii”, records a snapshot of rich Early Miocene life buried under thick ignimbrite cover. Here, we use a multi-technique approach to constrain the successive phases of a catastrophic silicic eruption (VEI ≥ 7) dated at 17.2 Ma. An event-scale reconstruction shows that the initial PDC phase was phreatomagmatic, affecting ≥ 1500 km2 and causing the destruction of an interfingering terrestrial–intertidal environment at Ipolytarnóc. This was followed by pumice fall, and finally the emplacement of up to 40 m-thick ignimbrite that completely buried the site. However, unlike the seemingly similar AD 79 Vesuvius eruption that buried Pompeii by hot pyroclastic density currents, the presence of fallen but uncharred tree trunks, branches, and intact leaves in the basal pyroclastic deposits at Ipolytarnóc as well as rock paleomagnetic properties indicate a low-temperature pyroclastic event, which superbly preserved the coastal habitat, including unique fossil tracks.
The obtained BSE images were used for vesicularity analyses applying the nested image technique following Klug and Cashman (1994) and Shea et al. (2010). The BSE images were processed with the FIJI-ImageJ (Schneider et al. 2012) open-source image analysis software to create binary images using the built-in auto-thresholding function; when necessary, the automatic results were manually refined. The 2D (two-dimensional) area fraction of the glass was measured (Supplement 1, Table II). Klug and Cashman (1994) suggested that the 2D area fraction of the vesicles equals the volume fraction in three dimensions and yields clast vesicularity in the case of random vesicle orientation. The vesicularity index, which represents the mean value of the measured vesicularity, and the vesicularity range, which represents the total spread of the measured values, were calculated following Houghton and Wilson (1989).
Petrography and glass chemistry
Unit A shows two subfacies. The Unit A_1 subfacies is a pale greyish-yellow fine-grained tuff. The tuff is matrix-supported with 5% crystals and rounded white micropumice clasts. The crystals are quartz, feldspar, and dark mica (Supplement 1, Fig. 1A). The Unit A_2 subfacies is a whitish-grey, layered, coarse-grained tuff (Supplement 1, Fig. 1B). The matrix-supported tuff contains rounded, white pumice clasts and quartz, feldspar, and dark mica crystals (Supplement 1, Fig. 2A). Unit B is a dark brown fine-grained tuff containing mm-sized accretionary lapilli concentrated at the base of the unit (Supplement 1, Fig. 1C). The accretionary lapilli have a well-defined core and rim (Supplement 1, Fig. 2B). This unit has a diffuse transition and flame structure at the base. Unit C consists of whitish-grey pumiceous lapillistone (Supplement 1, Fig. 1D, E). Quartz, feldspar, and dark mica are present as phenocrysts. The pumices are angular and oriented. Unit D is a gray lapilli tuff with a high number of phytogenic clasts (Supplement 1, Fig. 1F).
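To make the image-based vesicularity measurement described above more concrete, the sketch below shows the core area-fraction step on a single binarised BSE image together with the index and range statistics of Houghton and Wilson (1989). It is an illustrative, assumption-laden sketch rather than the authors' FIJI-ImageJ workflow: the nested (multi-magnification) image step is omitted, and the threshold value and synthetic test images are invented placeholders.

```python
# Minimal sketch (not the authors' FIJI-ImageJ pipeline) of the single-image
# area-fraction step of a vesicularity measurement. Assumptions: each image is
# a 2-D uint8 array in which glass is bright and vesicles are dark; the
# threshold value and the synthetic test images are invented for illustration.
import numpy as np

def clast_vesicularity(image, threshold=128):
    """Vesicularity (%) of one clast from a binarised BSE image.

    Pixels >= threshold are counted as glass, pixels below it as vesicles.
    Vesicularity is taken as 100% minus the 2-D glass area fraction, using the
    2-D area / 3-D volume fraction equivalence cited in the text
    (Klug & Cashman 1994).
    """
    glass_fraction = (image >= threshold).mean()
    return 100.0 * (1.0 - glass_fraction)

def vesicularity_statistics(values):
    """Vesicularity index (mean) and range (total spread), after Houghton & Wilson (1989)."""
    v = np.asarray(values, dtype=float)
    return {"index": float(v.mean()), "range": float(v.max() - v.min())}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three synthetic "clasts" standing in for real BSE images of pumice.
    clasts = [rng.integers(0, 256, size=(256, 256), dtype=np.uint8) for _ in range(3)]
    per_clast = [clast_vesicularity(c) for c in clasts]
    print(per_clast, vesicularity_statistics(per_clast))
```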
Quartz, feldspar, and dark mica were observed as loose crystals in the matrix (Supplement 1, Fig. 2D). Supplement 1, Table I contains the glass chemistry results measured with the EDX detector of the AMRAY electron microscope. The measured glass composition was used only for a relative comparison of Unit A and Unit C. The SiO2/Al2O3 ratio of Unit A is slightly higher compared to Unit C, but this difference is negligible. The SiO2/Al2O3 vs Na2O/K2O ratios of the glass indicate homogeneous major-element melt geochemistry for these units.
Vesicularity
BSE image analysis was effective in characterizing Unit A and Unit C. Samples from these units contained appropriate pumice clasts for vesicularity analyses. The vesicles of the pumices from these samples were studied to understand the main conduit processes, such as degassing and fragmentation, during the eruption. Most of the studied clasts of Unit A and Unit C are highly and moderately vesicular (Supplement 1, Table II and Supplement 1, Fig. 4B) according to the Houghton and Wilson (1989) classification. The Unit A sample also contains a poorly vesicular clast population. The larger vesicularity range can indicate a heterogeneous and mature, partly degassed conduit at the time of fragmentation (e.g. Cashman 2004). However, the size-dependent vesicularity analysis of Unit A (Supplement 1, Fig. 4A) indicates a logarithmic correlation between clast size and vesicularity; in other words, the poorly vesicular clasts are only represented by small-sized platy and flaky ash, while the larger pumice clasts (> 500 µm) are highly to moderately vesicular, similar to the pumices of Unit C. Based on Walker (1980) and Houghton and Wilson (1989), the vesicularity of the clasts increases as the size of the clasts converges to the diameter of the vesicles. Therefore, the broad range of vesicularity in Unit A, especially the poor vesicularity, is only apparent, and the poorly vesicular clasts are interpreted as testifying to the strongly fragmented material of moderately/highly vesicular magma. This also suggests that the pre-fragmentation vesicularity of the Unit A and Unit C magma was similar, indicating a comparable decompression history for both units, but with more effective fragmentation in the case of Unit A. We propose that, similarly to the Askja 1875 (Carey 2009) or Grímsvötn 2011 (Liu et al. 2015) eruptions, in the case of Unit A the already vesiculated, expanding magma fragmented more efficiently, forced by the explosive magma-water interaction. Phreatomagmatic fragmentation occurs due to magma and water interaction in the conduit. The involvement of water during the fragmentation produces a fine-grained deposit, in contrast to the magmatic volatile-driven, dry fragmentation (e.g., Wolhetz 1986; Austin-Erickson et al. 2008; Németh & Kósik 2020). The lower vesicularity index and higher vesicularity range in the Unit A tuff indicate magma-water interaction during the early stages of the Ipolytarnóc eruption. The involvement of water is also supported by the high amount of fine ash in Unit A and the abundant presence of accretionary lapilli in Unit B (see Supplement 1, Fig. 2B), which is probably a co-PDC plume product deposited on top of the Unit A PDC (pyroclastic density current) deposit (Schumacher & Schminke, 1995). The relative abundance of highly vesicular clasts in Unit A suggests late-stage, explosive magma-water interaction of the already degassed, expanding magma, which was near to or probably just above its fragmentation threshold.
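The size-dependent check mentioned above (vesicularity rising logarithmically with clast size in Unit A) can be reproduced with a simple least-squares fit. The clast sizes and vesicularities below are invented placeholders; the real measurements are those plotted in Supplement 1, Fig. 4A.

```python
# Sketch of the size-dependent vesicularity check: fit vesicularity = a*ln(size) + b.
# All data points below are invented placeholders for illustration only.
import numpy as np

size_um = np.array([60, 90, 150, 250, 400, 600, 900, 1500], dtype=float)  # clast size (µm)
vesicularity = np.array([22, 30, 41, 50, 58, 64, 68, 71], dtype=float)    # vesicularity (%)

a, b = np.polyfit(np.log(size_um), vesicularity, deg=1)  # least-squares fit in ln(size)
predicted = a * np.log(size_um) + b
r_squared = 1.0 - np.sum((vesicularity - predicted) ** 2) / np.sum((vesicularity - vesicularity.mean()) ** 2)
print(f"vesicularity ~ {a:.1f} * ln(size) + {b:.1f}  (R^2 = {r_squared:.2f})")
```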
It should be noted that, in contrast to Units A and B, the Unit C vesicularity distribution and two-dimensional vesicle textures (Supplement 1, Fig. 3A-E) indicate dry fragmentation and fall into the range measured for large Plinian eruptions (Cashman 2004). As field observations suggest, Unit C was deposited directly on top of Unit B with a sharp boundary, without any signs of inter-eruptive erosion, indicating the lack of a longer quiescence (Supplement 1; Fig. 2 of the main text). Thus, during the Eger-Ipolytarnóc eruption, the initial phreatomagmatic phase (Units A and B) was followed by a dry magmatic phase represented by the Unit C fallout deposit. The transition between these phases was sharp. The sharp transition between the wet, phreatomagmatic and the dry, magmatic fragmentation modes can be interpreted as a result of (a) the depletion of the available water supply (e.g., a caldera lake) or (b) vent position shifting, similar to the eruptions of Askja in 1875 or Taupo in 232 (Carey et al. 2009).
2022-06-15T06:17:45.298Z
2022-06-13T00:00:00.000
{ "year": 2022, "sha1": "98ed414f487877d419bcd5c63133abc102d70f09", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-022-13586-3.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f8f20798f3ab2829488b979328282416f92b7e61", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
264604977
pes2o/s2orc
v3-fos-license
Somatic coding mutations in human induced pluripotent stem cells Defined transcription factors can induce epigenetic reprogramming of adult mammalian cells into induced pluripotent stem cells. Although DNA factors are integrated during some reprogramming methods, it is unknown whether the genome remains unchanged at the single nucleotide level. Here we show that 22 human induced pluripotent stem (hiPS) cell lines reprogrammed using five different methods each contained an average of five protein-coding point mutations in the regions sampled (an estimated six protein coding point mutations per exome). The majority of these mutations were non-synonymous, nonsense, or splice variants, and were enriched in genes mutated or having causative effects in cancers. At least half of these reprogramming-associated mutations pre-existed in fibroblast progenitors at low frequencies, while the rest were newly occurring during or after reprogramming. Thus, hiPS cells acquire genetic modifications in addition to epigenetic modifications. Extensive genetic screening should become a standard procedure to ensure hiPS safety before clinical use. Introduction hiPS cells have the potential to revolutionize personalized medicine by allowing immunocompatible stem cell therapies to be developed1,2. However, questions remain about hiPS safety. For clinical use, hiPS lines must be reprogrammed from cultured adult cells, and could carry a mutational load due to normal in vivo somatic mutation. Furthermore, many hiPS reprogramming methods utilize oncogenes that may increase the mutation rate. Additionally, some hiPS lines have been observed to contain large-scale genomic rearrangements and abnormal karyotypes after reprogramming3. Recent studies also revealed that tumor suppressor genes, including those involved in DNA damage response, have an inhibitory effect on nuclear reprogramming4-9. These findings suggest that the process of reprogramming could lead to an elevated mutational load in hiPS cells. To probe this issue, we sequenced the majority of the protein-coding exons (exomes) of twenty-two hiPS lines and the nine matched fibroblast lines from which they originated (Table 1). These lines were reprogrammed in seven laboratories using three integrating methods (four-factor retroviral, four-factor lentiviral, and three-factor retroviral) and two non-integrating methods (episomal vector and mRNA delivery into fibroblasts). All hiPS lines were extensively characterized for pluripotency and had normal karyotypes prior to DNA extraction (Supplementary Methods). Protein coding regions in the genome were captured and sequenced from the genomic DNA of hiPS lines and their matched progenitor fibroblast lines using either padlock probes10,11 or in-solution DNA or RNA baits12,13. We searched for single base changes, small insertions/deletions, and alternative splicing variants, and identified 12,000 -18,000 known and novel variants for each cell line that had sufficient coverage and consensus quality (Table 1). hiPS Cell Lines contain a High Level of Mutational Load We identified sites that showed the gain of a new allele in each hiPS line compared with their corresponding matched progenitor fibroblast genome. A total of 124 mutations were validated with capillary sequencing ( Figure 1, Table 2, Supplementary Figure S1), which revealed that each mutation was fixed in heterozygous condition in the hiPS lines. No small insertions/deletions were detected. 
For three hiPS lines (CV-hiPS-B, CV-hiPS-F, PGP1-iPS), the donor's complete genome sequence obtained from whole blood is publicly available14,15; we used this information to further confirm that all 27 mutations in these lines were bona fide somatic mutations. Because 84% of the expected exomic variants16 were captured at high depth and quality, the predicted load is approximately 6 coding mutations per hiPS genome (see Table 1 for details). The majority of mutations were missense (83/124), nonsense (5/124), or splice variants (4/124). Fifty-three missense mutations were predicted to alter protein function17 (Supplementary Table S1). Fifty mutated genes were previously found to be mutated in some cancers18,19. For example, ATM is a well-characterized tumor suppressor gene found mutated in one hiPS line, while NTRK1 and NTRK3 (tyrosine kinase receptors) can cause cancers when mutated20 and contained damaging mutations in three hiPS lines (CV-hiPS-F, iPS29e, FiPS4F-shpRB4.5) reprogrammed in three labs from different donors. Two NEK kinase genes, a family related to cell division, were mutated in two independent hiPS lines. In addition to cancer-related genes, fourteen of the twenty-two lines contain mutations in genes with known roles in human Mendelian disorders21. Three pairs of hiPS lines (iPS17a and iPS17b, dH1F-iPS8 and dH1F-iPS9, CF-RiPS1.4 and CF-RiPS 1.9) shared three, two, and one mutation respectively; these most likely arose in shared common progenitor cells prior to reprogramming. However, most hiPS lines derived from the same fibroblast line did not share common mutations (Table 2 and Supplementary Table S1). These data raise the possibility that a significant number of mutations are occurring during or shortly after reprogramming and then become fixed during colony picking and expansion. An alternative hypothesis is that the mutations we found are simply the result of age-accrued biopsy heterogeneity or in vitro fibroblast cell culture. The skin biopsies were collected from donors at ages varying from newborn to 82 years old; biopsy heterogeneity therefore does not appear to play a primary role, as the mutational load is not correlated (R 2 = 0.046) with donor age (Supplementary Figure S2). We attempted to grow clonal fibroblasts in order to obtain a control for single-cell mutational load, but a direct assessment was not possible due to technical difficulties in mimicking the exact culture conditions (Supplementary Methods). Assuming the skin biopsy is mutation-free, we can use previously published values for the typical mutation rate in culture to obtain an expectation of ten times fewer mutations per genome than we observed (p< 1.27 × 10 −53 ; Supplementary Methods), indicating that hiPS mutational load is high compared to normal culture mutational load. We define the term "reprogramming-associated mutations" to describe mutations observed after reprogramming. Reprogramming-associated mutations could be pre-existing at low frequencies in the fibroblast population, occurring during the reprogramming process, or occurring after reprogramming. All reprogramming-associated mutations have become fixed in the hiPS line population. 
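As a small worked illustration of the per-exome projection mentioned above (scaling the observed mutation count by the fraction of the exome effectively sampled), consider the sketch below; the input numbers are placeholders, and the actual per-line values are reported in Table 1.

```python
# Worked example of projecting a per-exome mutational load from the mutations
# observed in the sampled coding regions. The inputs are placeholder values;
# the real per-line counts and coverage fractions are reported in Table 1.
observed_mutations = 5          # coding mutations found in the regions sampled for one hiPS line
fraction_exome_covered = 0.84   # share of expected exomic variants captured at high depth/quality

projected_load = observed_mutations / fraction_exome_covered
print(f"projected coding mutations per exome ~ {projected_load:.1f}")  # ~ 6
```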
Reprogramming-Associated Mutations arise through Multiple Mechanisms To test whether some observed mutations were present in the starting fibroblasts at low frequency prior to reprogramming, we developed a new digital quantification assay (DigiQ) to quantify the frequencies of 32 mutations in six fibroblast lines using ultra-deep sequencing (Supplementary Figure S3-4). We amplified each mutated region from the genomic DNA of 100,000 cells with a high-fidelity DNA polymerase and sequenced the pooled amplicons with an Illumina Genome Analyzer at an average coverage of 10 6 . Although the raw sequencing error is roughly 0.1-1% with the Illumina sequencing platform, detection of rare mutations at a lower frequency is possible with proper quality filtering and careful selection of controls22. For each fibroblast line, we included the mutation-carrying hiPS DNA as the positive control and another "mutation-free" DNA sample as the negative control for sequencing errors (Supplementary Methods). Comparison of the allelic counts at the mutation positions between the fibroblast lines and the negative controls allowed us to distinguish rare mutations from sequencing errors, and estimate the detection limit of the assay. Seventeen of the 32 mutations were found in fibroblasts in a range of 0.3-1000 in ten thousand while 15 mutations were not detectable (Supplementary Table S2-3). In each fibroblast line with more than one detectable rare mutation, the frequency of each mutation was very similar, which suggests that a small sub-population of each fibroblast line appeared to contain all pre-existing hiPS mutations, while the rest of the cells lacked any of them. We extended this analysis by asking whether all of the hiPS mutations could have preexisted in the fibroblast populations. For the 15 mutations not detected with the DigiQ assay, the detection limits can be estimated (Supplementary Methods). The sequencing quality was sufficiently high at 7 of the 15 sites such that rare mutations at frequencies of 0.6-5 in 100,000 should be detectable with our assay (Supplementary Table S3). Since 30,000-100,000 fibroblast cells were used in the reprogramming experiments, we can rule out the presence of two mutated genes (NTRK3 and PLOR1C) in even one cell of the starting fibroblast population, while five others were present in no more than 1-2 cells. As another test of the hypothesis that all of the mutations pre-existed in fibroblasts prior to reprogramming, we examined the exomes of two hiPS lines derived from a fibroblast line dH1cf16, which was itself clonally derived from the dH1F fibroblast line and passaged the minimum amount to generate enough cells for reprogramming. The two hiPS lines derived from the non-clonal dH1F fibroblast line contained 8 and 3 new mutations not found in the fibroblasts respectively; we observed a very similar independent mutational load in the clonal lines (6 new mutations in the hiPS line dH1cf16-iPS1 and 2 new mutations in the hiPS line dH1cf16-iPS4). Together, these experiments establish that while some of the reprogramming-associated mutations were likely to pre-exist in the starting fibroblast cultures, the others occurred during reprogramming and subsequent culture. Specific distributions tend to vary across hiPS lines (Supplementary Table S3). Mutations occurring during reprogramming could be due in part to a significantly elevated mutation rate during reprogramming. It is also possible that selection could play an important role. 
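The DigiQ comparison against a negative control described above amounts to asking whether a site's minor-allele count exceeds what the position-specific sequencing error rate alone would generate; the methods summary further below states that a binomial test was used for this. The sketch here is a hedged illustration with invented counts, not the authors' exact statistical procedure, which is detailed in their Supplementary Methods.

```python
# Illustrative sketch of calling a rare pre-existing mutation in the DigiQ assay:
# compare the minor-allele count in the fibroblast library against the error
# rate estimated from the "mutation-free" negative control at the same position.
# All counts are invented; the exact filters and procedure are in the paper's
# Supplementary Methods. Requires SciPy >= 1.7 for binomtest.
from scipy.stats import binomtest

# Negative control: apparent minor-allele reads due to sequencing error alone.
control_minor, control_depth = 120, 1_000_000
error_rate = control_minor / control_depth          # ~1.2e-4

# Fibroblast library at the same position.
sample_minor, sample_depth = 600, 1_000_000

result = binomtest(sample_minor, sample_depth, p=error_rate, alternative="greater")
estimated_frequency = max(sample_minor / sample_depth - error_rate, 0.0)
print(f"p-value = {result.pvalue:.2e}, estimated mutation frequency ~ {estimated_frequency:.1e}")
```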
We tested the possibility that an elevated mutation rate might occur because the reprogramming process might be inducing transient repression of p53, RB1, and other tumor suppressor genes, which are known to inhibit reprogramming and are required for normal DNA damage responses. SV40 Large-T antigen, which inactivates tumor suppressor and DNA damage response genes (including p53 and p105/RB1)23, was expressed during reprogramming of three analyzed hiPS lines (DF6-9-9, DF19-11, and iPS4.7).24. Another hiPS line (FiPS4F-shpRB4.5) was generated while directly knocking down RB1 (Supplementary Figure S5). However, the observed mutational load was very similar in these lines compared to the others, indicating that reprogramming-associated mutations cannot be explained by an elevated mutation rate caused by p53 or RB1 repression. We also probed if additional mutations could become fixed during extended passaging by extending our analysis of one hiPS line. While most of our hiPS lines were sequenced at fairly low passage number (less than 20), to directly measure the effect of postreprogramming culture we also sequenced one hiPS line (FiPS4F2) at two passages (p9 and p40). We discovered that all seven mutations identified in the passage 9 line remained fixed in the passage 40 line, but that four additional mutations were found to be fixed in the passage 40 cell line. To test the possibility that selection is operating during hiPS generation, we performed an enrichment analysis to determine if reprogramming-associated mutated genes were more likely to be observed in cancer cells than random somatic mutation. We used the COSMIC database as a source of genes commonly mutated in cancer. We discovered that the reprogramming-associated mutated genes were significantly enriched for genes found mutated in cancer (p=0.0019, Supplementary Materials), which implies some mutations were selected during reprogramming. As an alternative test of the selection hypothesis, we asked whether mutations associated with reprogramming could be functional based on the nonsynonymous:synonymous (NS:S) ratio. Traditionally, the analysis of the NS:S ratio is applied to germline mutations evolved over a long period of evolutionary time, which is thus not directly applicable to somatic mutations. However, functional mutations are known to be positively selected in cancers, allowing us to make a direct comparison to mutation characteristics found in cancer genomes. Strikingly the NS:S ratio is very similar between mutations identified in three recent cancer genome sequencing projects25,26,27 and the reprogramming-associated mutations we found (2.4:1 and 2.6:1, respectively), indicating that a similar degree of selection pressure may be present. We also checked if reprogramming-associated mutations could be providing a common functional advantage using a pathway enrichment analysis through Gene Ontology terms28. No statistically significant similarity was identified, indicating that mutated genes have varied cellular functions. Again, identical results were found when performing the same analysis on mutations identified during the genome sequencing of melanoma, breast cancer, and lung cancer samples25,26,27. This lack of enrichment in cancer genomes is generally thought to be due to the presence of many passenger mutations in cancer cells, which could also be true for reprogramming-associated mutations. 
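The COSMIC enrichment result quoted above (p = 0.0019) comes from a procedure described in the paper's Supplementary Materials; one common way to run such a test, shown below purely as an assumption-labeled illustration, is a one-sided Fisher's exact test on a 2x2 table of genes split by "mutated in this study" and "listed in the cancer-gene set". All counts are invented.

```python
# Illustrative enrichment test (not necessarily the authors' exact procedure):
# one-sided Fisher's exact test for overlap between reprogramming-associated
# mutated genes and a cancer-gene list such as COSMIC. All counts are invented.
from scipy.stats import fisher_exact

total_genes = 20_000        # assumed universe of protein-coding genes
cosmic_genes = 600          # assumed size of the cancer-gene list
mutated_genes = 115         # genes carrying reprogramming-associated mutations
overlap = 14                # mutated genes that are also in the cancer-gene list

a = overlap                           # mutated and in the list
b = mutated_genes - overlap           # mutated, not in the list
c = cosmic_genes - overlap            # not mutated, in the list
d = total_genes - mutated_genes - c   # not mutated, not in the list

odds_ratio, p_value = fisher_exact([[a, b], [c, d]], alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.4g}")
```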
Nonetheless, these analyses suggest that selection of potentially functional mutations could play a role in amplifying rare mutation-carrying cells and, when coupled with the single-cell bottleneck in hiPS colony picking, could contribute to the fixation of initially low-frequency mutations throughout the entire hiPS cell population. Discussion Taken together, our results clearly demonstrate that pre-existing and new mutations during and after reprogramming all contribute to the high mutational load we discovered in hiPS lines. Although we cannot completely rule out the possibility that reprogramming itself is "mutagenic", our data argue that selection during hiPS reprogramming, colony-picking, and subsequent culture may be contributing factors. A corollary is that, if reprogramming efficiency is improved to a level such that no colony-picking and clonal expansion is necessary, the resulting hiPS cells could potentially be free of mutations. Despite the power of our experimental approach to accurately identify and characterize reprogramming-associated mutations, their functional significance remains to be shown. This issue parallels a general problem facing the genomics community: high-throughput sequencing technologies have allowed data generation rates to greatly outpace functional interpretation. Additionally, when considering the biological significance of reprogramming-associated mutations, there are two separate functional aspects to consider: whether some of these mutations contributed functionally to the reprogramming of cell fate, and whether some of these mutations could increase disease risk when hiPS-derived cells/ tissues are used in the clinic. These two aspects are not necessarily connected. Although the functional effects of the 124 mutations remained to be characterized experimentally, it is nonetheless striking that the observed reprogramming-associated mutational load shares many similarities with that observed in cancer. Furthermore, the observation of mutated genes involved in human Mendelian disorders suggests that the risk for diseases other than cancer needs to be evaluated for hiPS-based therapeutic methods. Future long-term studies must focus on functional characterization of reprogramming-associated mutations in order to further aid the creation of clinical safety standards. Because safe hiPS cells are critical for clinical application, just as previous findings of largescale genome rearrangements in hiPS lines led to the introduction of karyotyping as a standard post-reprogramming protocol, routine genetic screening of hiPS lines to ensure that no obviously deleterious point mutations are present must become a standard procedure. Complete exome or genome sequencing of hiPS lines might be an efficient way to screen out hiPS lines that have a high mutational load or that have mutations in genes implicated in development, disease, or tumorigenesis. Further rigorous work on mutation rates and distributions during in vitro culture and reprogramming of hiPS cells, and perhaps human embryonic stem cells, will be essential to help establish clinical safety standards for genomic integrity. Exome capture was performed with either a library of padlock probes, commercial hybridization capture DNA baits (NimbleGen SeqCap EZ), or RNA baits (Agilent SureSelect), and the resulting libraries were sequenced on an Illumina GA IIx sequencer. Putative mutations were rejected if they were known polymorphisms or contained any minor allele presence in the fibroblast. 
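The candidate-filtering rules stated in this methods passage (and in the Table 1 note further below: depth of at least 8, consensus quality of at least 30, rejection of known polymorphisms, and rejection of any candidate with minor-allele reads in the matched fibroblast) can be expressed as a simple predicate. The record layout and example values below are invented for illustration.

```python
# Sketch of the mutation-candidate filter described in the methods: the
# thresholds of depth >= 8 and consensus quality >= 30 follow the text; the
# field names and example records are invented placeholders.
from dataclasses import dataclass

@dataclass
class Candidate:
    site: str
    depth: int                   # sequencing depth at the site in the hiPS line
    consensus_quality: int       # consensus quality score in the hiPS line
    known_polymorphism: bool     # present in dbSNP (or otherwise known)?
    fibroblast_minor_reads: int  # reads carrying the candidate allele in the fibroblast

def passes_filter(c: Candidate, min_depth: int = 8, min_quality: int = 30) -> bool:
    return (c.depth >= min_depth
            and c.consensus_quality >= min_quality
            and not c.known_polymorphism
            and c.fibroblast_minor_reads == 0)

candidates = [
    Candidate("chr1:1000", depth=42, consensus_quality=55, known_polymorphism=False, fibroblast_minor_reads=0),
    Candidate("chr2:2000", depth=6,  consensus_quality=60, known_polymorphism=False, fibroblast_minor_reads=0),
    Candidate("chr3:3000", depth=35, consensus_quality=50, known_polymorphism=True,  fibroblast_minor_reads=0),
    Candidate("chr4:4000", depth=50, consensus_quality=48, known_polymorphism=False, fibroblast_minor_reads=3),
]
print([c.site for c in candidates if passes_filter(c)])  # only the first record passes
```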
All candidate mutations were confirmed using capillary Sanger sequencing. For digital quantification, mutations were PCR-amplified and sequenced using an Illumina GA IIx. These libraries were sequenced to obtain on average one million independent base calls for each location. A binomial test was then used to determine whether the observed minor allele frequency could be separated from error and to estimate the frequency of each mutation. Detailed methods are available in the Supplementary Materials.
Table 1. Sequencing statistics for mutation discovery. Quality-filtered sequence represents the total amount of sequencer data generated that passed the Illumina GA IIx quality filter. Number of high-quality coding variants is the number of variants found with a sequencing depth of at least 8 and a consensus quality score of at least 30. dbSNP percentage represents the percent of identified variants present in the dbSNP database. Shared coding region is the portion of the genome, in base pairs, that was sequenced at high depth and quality in both the iPS line and its progenitor fibroblast. The number of coding mutations lists both the number of identified coding mutations and a projection of the total number of mutations based on the fraction of CCDS variants (out of ~17,000 expected variants)16 successfully identified in both hiPS and fibroblast. *For DF-6-9-9 and FS, mutation calling was performed individually using both padlock probe data and hybridization capture data. Each method found five mutations, four of which were shared, leading to a total of six mutations. Padlock probes and hybridization capture have separate strengths (specificity vs. unbiased coverage); it appears these factors directly affect the ability to find separate mutations.
Table 2. List of genes found to be mutated in coding regions in hiPS cells. The full details of each mutation are in Supplementary Table 1.
2018-03-28T13:10:07.456Z
2011-01-20T00:00:00.000
{ "year": 2011, "sha1": "c6c794a00230001183dff044d359149a0b9d3d5e", "oa_license": "unspecified-oa", "oa_url": "https://europepmc.org/articles/pmc3074107?pdf=render", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "c0bb5d765ebe6f57627fd8fa60819811ad271c8b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
7986537
pes2o/s2orc
v3-fos-license
Identification and Molecular Characterization of Genes Coding Pharmaceutically Important Enzymes from Halo-Thermo Tolerant Bacillus 33364038, Fax: +98 41 33379420, Email: dastmalchi.s@tbzmed.ac.ir, siavoush11@yahoo.com 2016 The Authors. This is an Open Access article distributed under the terms of the Creative Commons Attribution (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, as long as the original authors and source are cited. No permission is required from the authors or the publishers. Adv Pharm Bull, 2016, 6(4), 551-561 doi: 10.15171/apb.2016.069 http://apb.tbzmed.ac.ir Advanced Pharmaceutical Bulletin Introduction Production of raw materials using enzymes is a rapidly expanding technology especially in the pharmaceutical and biotechnology industries.2][3] The chemical synthesis of compounds and pharmaceuticals has several disadvantages such as low catalytic efficiency, lack of chemo-, regio-and enantioselectivity, and needs for specific conditions in terms of temperature, pH and pressure, just to mention a few.Also, the use of organic solvents leads to environmental issues brought about by organic waste and pollutants.The ability of enzymes to catalyze chemical reaction with high speed and specificity under mild reaction conditions has made them appropriate alternatives to chemotherapeutic agents such as insufficient drug concentrations in tumors, systemic toxicity, lack of selectivity for tumor cells and drug-resistance problems in tumor cells. 8,9The enzyme-prodrug cancer therapy is achieved by delivering the drug-activating enzyme gene or functional protein from nonhuman or human origin in to the tumor tissues accompanied with the simultaneous administration of a prodrug. 10,11Pro-drugs are nontoxic chemically modified versions of the pharmacologically active agents that can be converted to the active and cytotoxic anticancer drugs with high local concentrations in tumors. 12he synthesis of biologically active compounds through enzymatic transformation can provide an efficient way of achieving novel antimicrobial agents required to overcome the global problem of antibiotic resistance development. 13urrently, out of almost 4000 enzymes, about 200 are used commercially.6][17] Microbial enzymes are preferred over plant or animal sources due to their economic production costs and simplicity of the modification and optimization process.They are relatively more stable than corresponding enzymes derived from plants or animals and they provide a greater diversity of catalytic activities. 18he special features of microbial enzymes which make them potential source for commercial and industrial applications include thermotolerance, thermophilic nature, and stability over broad range of temperature and pH, and other harsh reaction conditions. 1 Nature is the main source of enzyme producing microorganisms among them the great diversity of extremophiles supply the valuable enzymes with robust properties. 19xtremophile microorganisms produce enzymes with great stability under harsh conditions which are regarded as incompatible environmental properties for biological systems. 15he halo-thermotolerant Bacillus sp.SL-1 is a Grampositive, spore-forming bacterium that was isolated from Aran-Bidgol Saline Lake in central region of Iran as introduced in previous study. 
20Enzymes from halophiles tolerate such harsh environments by acquiring a large number of negatively-charged residues in their surfaces, which leads to their very low propensity for aggregation.Such a property has been taken as an advantage for these enzymes allowing them to function in non-aqueous media. 21Phylogenetic analysis based on 16S rDNA gene sequence comparisons revealed that the isolate Bacillus sp.SL-1 was closely related to Bacillus licheniformis with 97% similarity.The B. licheniformis is an important producer of exoenzymes and has been used for decades in large-scale manufacturing the industrial and pharmaceutical enzymes such as different proteases, αamylase, penicillinase, pentosanase, β-mannanase and several pectinolytic enzymes. 22,23Based on complete genome sequence of B. licheniformis, many new genes for enzymes with potential biotechnological and pharmaceutical applications were found. 23With the recent advances in biotechnology, various enzymes have been identified and designed or purposely engineered to produce more (and most likely new) chemicals and materials from cheaper (and renewable) resources, which will consequently contribute to establishing a bio-based economy. 15he halo-thermotolerant Bacillus sp.SL-1 as a new and locally isolated extremophile can be regarded as a resource for many useful pharmaceutical enzymes on commercial scales.In this study, based on the similarity observed for Bacillus sp.SL-1 and B. licheniformis (ATCC 14580), the two strains were considered closely related and hence the coding sequence for ten pharmaceutically and industrially important enzymes including laccase, l-asparaginase, glutamate-specific endopeptidase, L-arabinose isomerase, endo-1,4-β mannosidase, glutaminase, pectate lyase, cellulose, aldehyde dehydrogenase and allantoinases were inspected in the Bacillus sp.SL-1.The gene for these enzymes were identified, amplified, sequenced and compared with corresponding gene from the genome of the other Bacillus in database leading to the first analysis of enzymatic profile from Bacillus sp.SL-1.The results were indicative of opportunity for this organism as an industrial strain. Materials All reagents were of analytical grade.Tryptone and NaCl were purchased from Scharlau (Barcelona, Spain).Yeast extract, agar and glycerol were from Applichem (Darmstadt, Germany).DNA extraction kit was received from Qiagen (Germany).DNA ladders and Pfu PCR PreMix master mix were obtained from Fermentas (Russia) and Bioneer (South Korea), respectively.Safe stain was purchased from Thermo Scientific (USA).Primers used in this work were supplied from Bioron (Germany) ordered via FAZA Biotech (Tehran, Iran). Strains and culture media Halo-thermotolerant Bacillus sp.SL-1 was isolated from Aran-Bidgol Saline Lake in central region of Iran as introduced in previous study. 20This isolate deposited in Iranian Biological Resources Center (approved by World Federation for Culture Collections: WDCM950) for availability to scientific community (IBRC-M 11052).Bacillus sp.SL-1 was cultivated overnight in enriched Luria-Bertani (LB) medium at 35 °C and 180 rpm. Primer Design The design of primers for laccase (CotA), lasparaginase (ansA1, ansA3), glutamate-specific endopeptidase (blaSE), l-arabinose isomerase (araA2), endo-1,4-β mannosidase (gmuG), glutaminase (glsA), pectate lyase (pelA), cellulase (bglC1), aldehyde dehydrogenase (ycbD) and allantoinases (pucH) enzymes from Bacillus sp.SL-1 were based on the complete genome sequence of B. 
licheniformis (GenBank accession no.AE017333.1)and other closely related Bacillus, which permits the prediction of coding sequence for the ten selected enzymes according to the conserved regions in their sequences.Table 1 shows the list of designed primers for the enzymes.The PCR protocol adjusted for the amplification of the genes for all enzymes was performed using the following program: initial denaturation at 94 °C for 3 min, followed by 35 cycles of denaturation at 94 °C for 1 min, annealing at 56 °C for 90 s, and extension at 72 °C for 2 min with final extension at 72 °C for 10 min.PCR products were analyzed on 1% agarose gel stained with safe stain and visualized under ultraviolet transillumination (Syngene InGenius, USA). Sequence analysis For further information about the sequence of the amplified genes the PCR products were sent out for sequencing at Sequetech, USA.Database search for the homologous sequences was performed using BLAST program from National Center for Biotechnology Information.Sequence alignments of the gene sequences were performed using CLUSTALW. 24 Identification of putative Bacillus sp. SL-1 enzymes According to the higher similarity (97%) observed between the16S rDNA gene sequences for Bacillus sp.SL-1 and B. licheniformis (ATCC 14580), the complete genome sequence from later was used as a template for designing appropriate primers for amplifying the corresponding enzyme genes from the genome of the SL-1 strain.The B. licheniformis belongs to the B. subtilis group (group II) of the genus Bacillus together with other well-known species whose complete genome sequence has been determined. 23The genome of B. licheniformis ATCC 14580 is in the form of a circular chromosome of 4,222,336 base-pairs (bp), which was predicted to be consist of 4,208 protein-coding sequences (CDSs).Based on a broad investigation on B. licheniformis genome, at least 82 of the 4,208 genes are likely to encode secreted proteins and enzymes.In addition, there are 27 predicted extracellular proteins encoded by the B. licheniformis (ATCC 14580) genome that are not found in B. subtilis 168.Due to the saprophytic lifestyle, the B. licheniformis encodes several secreted enzymes that hydrolyze polysaccharides, proteins, lipids and other nutrients. 22ecause of the biotechnological importance of this group of organisms we characterized the genes for ten industrial enzymes from the isolated halo-thermo tolerant Bacillus SL-1 and present the first analysis of data derived from the annotated sequences.In this study selected enzyme genes were amplified from total genomic DNA of type strain SL-1 by using specific primer pairs and then the PCR products were visualized by gel electrophoresis on 1% agarose gel stained with safe stain.The gel image analysis indicated bands with sizes in the range of 860 to 1460 bp (Figure 1) that confirmed the presence of four industrially important classes of enzyme genes (EC1, EC3, EC4 and EC5) in Bacillus SL-1 genome.According to the Enzyme Commission the enzymes are divided into 6 class containing oxidoreductase (EC 1), transferase (EC 2), hydrolase (EC 3), lyase (EC 4), isomerase (EC 5) and ligase (EC 6). 25 The classification of the studied enzymes of Bacillus sp.SL-1 was shown in Table 2. Sequence analysis of Bacillus sp. SL-1 enzymes Search for the homologs of the selected enzymes was performed by BLAST algorithm using the DNA sequences of the query enzymes against the nucleotide collection database.The most similar homologs for the studied enzymes were from B. 
licheniformis (ATCC 14580) with similarities ranging from 99 to 100% shown in Table 3.The table also contains the comparisons between the genes for the selected enzymes and the corresponding genes from B. subtilis, which is another closely related strain, albeit with less similarities (51-75%) in the case of these enzymes. Although the SL-1 isolate is considered closely related to the B. licheniformis, based on sequence similarities for the target enzymes, however, they are distinctive strains according to the 16S rDNA sequencing and some physicochemical properties described previously. 20The comparative sequence analyses for the ten selected enzymes between Bacillus sp.SL-1 and B. licheniformis ATCC14580 were performed at the protein level by CLUSTAL-W pairwise alignment using the translated protein sequences (Table 4).More details regarding individual enzymes investigated in this work presented below. Oxidoreductases (EC 1) Oxidoreductase enzymes catalyze oxidation-reduction reactions where electrons are transferred.These electrons are usually in the form of hydride ions or hydrogen atoms.When a substrate is oxidized it acts as the hydrogen donor in the reaction and therefore the most common name used for the enzymes catalyzing this reaction is dehydrogenase.An oxidase is referred to when the oxygen atom is the electron acceptor.Laccases (EC 1.10.3.2),belonging to the superfamily of multicopper oxidases, catalyze the reduction of oxygen molecule into water molecule via transferring the electrons from substrates.Due to high capacity for the oxidation of wide range of phenols and polyphenols to highly reactive radicals, laccases have potential for applications in biotechnology, especially in the synthesis of new biologically active compounds and biomaterials. 3,27These radicals can undergo coupling reactions with various types of compounds, which can lead to the formation of products with new structures and properties. 3For example, laccase-catalyzed amination of dihydroxy aromatics is a new and promising method to synthesize novel antibiotics via enzymatic transformation. 13These laccase mediated reactions are low-cost reactions which are conducted under mild reaction conditions, in aqueous solvent systems, normal pressure, and room temperature. 28,29p to now, novel cephalosporins, penicillins, and carbacephems were synthesized by amination of amino-β-lactam structures using laccases. 13,28,30Other examples of the potential application of laccases for organic synthesis include the oxidative coupling of katarantine and vindoline to produce vinblastine. 31inblastine is an important anti-cancer agent, extensively used in treatment of leukemia. 31he CotA genes from Bacillus sp.SL-1 and B. licheniformis ATCC14580 are almost identical (99.94%), except for a single A948C nucleotide difference, which has led to K316N substitution as indicated in Table 3.Multiple sequence alignment of laccase from Bacillus sp.SL-1 with other laccase enzymes from different Bacillus strains indicated four conserved segments containing histidine-rich copperbinding sites which are characteristic for bacterial laccases.Moreover, the CotA (SL-1) gene from Bacillus sp.SL-1 shows 64.65% identity with CotA from B. 
subtilis.The most important advantages of using bacterial laccases are their higher activity as well as stability at various pH and temperature compared to the fungal laccase.Moreover, the expression level of laccase is anticipated to be higher in bacteria other microorganisms, which may provide added economic values for its use.Besides, the industrial processes are often conducted in harsh conditions such as extreme pH, temperature, or ionic strength and therefore such robust enzymes supply economically appealing materials. 32,33The isolation and comprehensively characterization of pure laccase from Bacillus sp.SL-1 revealed its high production yield and stability. 27ldehyde dehydrogenase (ALDH; aldehyde: NAD(P) + oxidoreductase, EC 1.2.1.5)constitute a group of enzymes that catalyze the conversion of aldehydes to the corresponding acids mediated by an NAD(P) +dependent virtually irreversible reaction making it potentially useful in an industrial settings.The ALDH is very unstable because of the spontaneous oxidation, 34 therefore, there is considerable interest in production of stable ALDH, which can be used more efficiently in the pharmaceutical and fine chemicals industries for the production of aldehydes, ketones, and chiral alcohols.The production of chiral compounds is particularly desired because this is an increasingly important step in the synthesis of chirally pure pharmaceutical agents. 35n the other hand, the ALDH family is the most important detoxifying enzyme due to its role in the removal of the accumulated aldehyde metabolits. 36any human diseases are associated with lack of ALDH enzymes and the increased level of aldehydes in the body contributes to the pathology of a variety of metabolic disorders.In this investigation, sequencing analysis of ycbD (SL-1) showed that this gene was composed of 1467 bp, corresponding to 488 amino acid residues with a molecular mass of 52,912 Da.The pairwise alignment of aldehyde dehydrogenase (SL-1) gene with DNA sequences of homologous enzymes from B. licheniformis (ATCC 14580) and B. subtilis 168 showed 100.0% and 74.23% identities, respectively. Hydrolases (EC3) Hydrolases catalyze hydrolysis using cleavage of substrates by water molecule.In biological systems, the reactions contain the cleavage of peptide bonds in proteins, glycosidic bonds in carbohydrates, and ester bonds in lipids.Generally, larger molecules are broken down to smaller fragments by hydrolases. 25he therapeutic enzyme L-asparaginase (EC 3.5.1.1;Lasparagine amidohydrolase) is an important antineoplastic agent primarily applied for management of acute lymphoblastic leukaemia (ALL).Eighty percentage of ALL type of leukemia affect children with only 20% of cases shown in adults. 37Lasparaginase catalyzes the conversion of l-asparagine to l-aspartic acid and ammonia.The antileukemic activity of L-asparaginase is due to depletion of the circulating L-asparagine concentration in the extracellular fluid and hence reduction of its availability for the tumor cells which lack asparagine synthetase required for Lasparagine intracellular synthesis.This leads to the inhibition of protein synthesis in tumor cells and induction of apoptosis.However, the normal cells are not affected significantly due to their intact system for asparagine biosynthesis. 38Currently, in the United States, three asparaginase formulations are widely used against ALL: native E. coli asparaginase, its pegylated form, and the product from cultures of Erwinia chrysanthemi. 
39Despite significant advancement in production of therapeutical forms of L-asparaginase, development of anti-asparaginase antibody in the patients is responsible for its major toxicity and resistance in asparaginase therapy and also reduces the therapeutic efficacy in some cancer cases.Considering that the patients do not develop cross reactivity, when a patient shows hypersensitivity to one type of Lasparaginase, it can be replaced with the enzymes obtained from different bacterial sources. 37Pairwise alignment of ansA3 (SL-1) and ansA3 gene from B. licheniformis (strain ATCC 14580) showed 99.90% identity with just a single T522A silent substitution (Table 4).The sequence alignment of genes for Lasparaginase from Bacillus sp.SL-1 and that of B. subtilis 168 indicated 72.04% identity (Table 3).Recently L-asparaginase from B. licheniformis with low glutaminase activity has been considered as a key therapeutic agent in the treatment of ALL. 40reliminary activity assay on recombinant Lasparaginase corresponding to ansA3 gene from Bacillus sp.SL-1in our lab showed that it is not functional.However, the gene for second homologous enzyme with asparaginase activity called ansA1 was also isolated from Bacillus sp.SL-1 with 99.07%similarity to ansA1 from B. licheniformis ATCC14580 (Table 4) and the corresponding recombinant protein was produced in high purity showing excellent enzymatic activity (unpublished data).Our findings are in contrast to the results reported by Sudhir et al where they showed that the enzyme encoded by ansA3 from Pharmaceutically important enzymes from Halo-thermotolerant Bacillus Advanced Pharmaceutical Bulletin, 2016, 6(4), 551-561 B. licheniformis MTCC 429 is active, while the enzyme from ansA1is highly unstable. 41llantoinases (pucH) are members of amidohydrolase superfamily, which are involved in purine metabolism and also catalyze the hydrolysis of a broad range of substrates containing amide or ester functional groups at carbon and phosphorus centers. 42Despite their importance in the purine catabolic pathway, sequences of microbial allantoinases with proven activity are scarce and only the enzymes from Escherichia coli has been studied in detail in this regard. 43It has been reported that allantoinase from B. licheniformis presents an inverted enantioselectivity towards allantoin (Renantioselective) that is not observed for other allantoinases, which makes it an interesting candidate for biotechnological applications. 42The pucH (SL-1) sequence analysis from Bacillus sp.SL-1 showed 99.66 % similarity to B. licheniformis allantoinase gene with three nucleotide substitution (G397A, C850A and C896T) that led to three amino acid modifications in the corresponding protein sequence (Table 4).Glutaminase (EC 3.5.1.2) is an oncolytic enzyme that catalyzes the deamination of L-glutamine to L-glutamic acid and ammonia with high specificity.L-glutaminase such as L-asparaginase is very significant anticancer enzyme in the treatment of acute lymphoblastic leukemia and other kinds of cancer through the Lglutamine amino acid depletion in cancerous cells. 44,45hese enzymes (i.e.L-glutaminase and L-asparaginase) are commonly used of therapeutic agents accounting for about 40% of the total worldwide enzyme sales. 45oreover, microbial glutaminases are enzymes with emerging potential in food industries. 44One of the potential application of recombinant B. licheniformis glutaminase is the bioconversion of glutamine to flavorenhancing glutamic acid in fermented food products. 
46here have been only a few reports on the characterization of recombinant glutaminases.The glsA (SL-1) from Bacillus sp.SL-1 with 99.70% similarity to glutaminase gene from B. licheniformis consists of 984 bp corresponding to 327 residues with A435G and A649T nucleotide substitutions leading to I145M and I217F residue changes in the protein sequence, respectively (Table 4).A glutamate-specific endopeptidase (GSE) (EC 3.4.21.19) has the ability to cleave peptide bonds preceded by Glu and/or Asp residues.Its activity to Asp containing substrates contributes only 0.3% of that towards Glu substrates, demonstrating its high specificity for peptide bonds formed by α-carboxyl groups of Glu amino acids 47 The GSE from B. licheniformis has been used to hydrolyze α-lactalbumin, and the hydrolysate formed nanotubes due to the specificity of GSE-BL.It has been suggested that these nanotubes can be used as drug carriers and viscosifying agents with the advantages that they are biocompatible and of low toxicity. 48Also, the biochemical properties of this type of enzyme are useful for its usage in protein structure analysis, solid phase peptide synthesis, and biochemistry industry. 49Sequencing results of blaSE (SL-1) showed 100.0% and 56.80% identity with B. licheniformis and B. subtilis, respectively.Cellulose is the major component of plant biomass, which originally comes from solar energy through the process known as photosynthesis and is the most abundant renewable energy on Earth. 50Microorganisms produce multiple enzyme components to degrade cellulose, known as the cellulase (EC 3.2.1.4)system.The main application of these enzymes are in food and detergent industries. 51Microbial cellulase applications have been widely studied in pharmaceutical industries, mainly because of its huge economic potential in the conversion of plant biomass into ethanol and other chemicals.Recently, improved novel tubular cellulose (TC), a porous cellulosic material has been produced by enzymatic treatment with cellolase in order to prepare nanotubes, which have gained broad attention as major nanomedicine tools in drug targeting and delivery systems. 52lso, mannan polysaccharide is one of the main polymers in hemicellulose, a major component of lignocellulose.The mannan endo-1,4-β-mannosidase (EC 3.2.1.78),commonly named β-mannanase, is an enzyme that can catalyze random hydrolysis of β-1,4mannosidic linkages in the main chain of mannans, glucomannans and galactomannans. 53This enzyme has several applications in many industries including food, feed, pharmaceutical, pulp/paper industries, and gas well stimulation and pretreatment of lignocellulosic biomass for the production of second generation biofuel. 54The application of mannan endo-1,4-βmannosidase for the production of prebiotic mannanoligosaccharides from byproducts of cheap agricultural sources has found increased interests. 55Additionally, the β-mannanase and mannosidase secreted from the microflora in colon environment can degrade the hydrogel-based therapeutics and release the drug molecule from a galactomannan-based hydrogel. 56,57he sequence analysis of bglC1 (SL-1) and gmuG (SL-1) genes of Bacillus sp.SL-1 showed 100.0%homology with both of these genes from B. licheniformis as well as 76.43% and 72.48% identities with those of B. subtilis 168, respectively. 
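The percent-identity figures quoted throughout this section were obtained with CLUSTAL-W pairwise alignments; the sketch below only illustrates how such a number is computed once an aligned (gapped) sequence pair is available. The two short sequences are invented placeholders, not the real SL-1 or ATCC 14580 sequences.

```python
# Illustration of deriving a percent-identity value from a pairwise alignment
# such as the CLUSTAL-W alignments used in the text. The gapped sequences are
# invented placeholders and do not correspond to any real gene or protein here.
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Identity over aligned columns, ignoring columns that are gaps in both rows."""
    columns = [(x, y) for x, y in zip(aligned_a, aligned_b) if not (x == "-" and y == "-")]
    matches = sum(1 for x, y in columns if x == y and x != "-")
    return 100.0 * matches / len(columns)

aligned_sl1  = "MKKILVTALA-GLSLTACSNQ"   # placeholder "SL-1" row of an alignment
aligned_atcc = "MKKILVTALAAGLSMTACSNQ"   # placeholder "ATCC 14580" row
print(f"{percent_identity(aligned_sl1, aligned_atcc):.2f}% identity")
```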
Lyases (EC4) Lyases add some groups to double bonds or form double bonds through the elimination of groups.Thus bonds are cleaved by a principle different from hydrolysis.These are often referred to as synthase enzymes which differ from other enzymes in that one substrate is required in the forward direction, whereas two substrates are needed for the backward reaction. 26ectin is the most complex polysaccharides widely found in plant cell wall, consist of a backbone of Dgalacturonic acid residues, which are partially methylesterified.Pectate lyase (EC 4.2.2.2) (pelA) cleaves the α-1,4 glycosidic bond of polygalacturonic acid (PGA) via a βelimination reaction and generates a unsaturated bond at the non-reducing end of the newly formed oligogalacturonide. 58These enzymes are of great commercial value for various industrial applications such as improving juice yields and clarity in fruit juice industry. 59Genes encoding microbial pectate lyase have been identified from many microorganisms, including different Bacillus strains, and the corresponding enzymes form a superfamily based on their amino acid sequence similarity. 60The sequence alignment of pelA from Bacillus sp.SL-1 revealed 100.0% and 51.12% homology with DNA sequences of B. licheniformis (ATCC 14580) and B. subtilis 168 pectate lyase coding genes, respectively. Isomerases (EC5) Isomerases mediate the transferring of groups from one position to another one in the same molecule.On the other hands, these enzymes change the structure of a substrate by rearranging its atoms. 25-arabinose isomerase (EC 5.3.1.4)catalyzes the reversible isomerization of L-arabinose to L-ribulose involved in either the pentose phosphate or the phosphoketolase pathway in carbohydrate metabolism. 61Isomerase enzymes play a crucial role in the synthesis of uncommon sugars, simply termed rare sugars.Due to their scarcity in the nature and uneconomical method of production, rare are available only in limited amounts and at a high cost. 62-Tagatose, a natural rare monosaccharide, is an isomer of D-galactose which can be manufactured by the chemical or enzymatic isomerization of Dgalactose.63,64 Among the biocatalysts, L-arabinose isomerase has been mostly applied for D-tagatose production because of the industrial feasibility for the use of D-galactose as a substrate.64 D-Tagatose has attracted a great attention in recent years for its low caloric diet and can be used as sweetener in several foods, beverages, and dietary supplements.There are numerous reports indicates the useful medical properties of this sugar such as prebiotic, antioxidant and tooth-friendly as well as reduction of symptoms associated with type 2 diabetes, anemia, hemophilia and hyperglycemia.63,65 L-arabinose isomerase from various bacteria have been identified and studied from a number of microbial sources, but little information is available about this enzyme from B. licheniformis, and L-arabinose isomerase specific towards only L-arabinose.61 DNA sequence analysis of araA2 (SL-1) revealed an open reading frame of 1425 bp, capable of encoding a polypeptide of 474 amino acid residues with a calculated isoelectric point of pH 4.8 and a molecular mass of 53,500 Da.Based on analysis of the genome sequence, araA2 gene from Bacillus sp. SL-1 howed 100.0% and 57.52% identity with l-arabinose isomerase gene from B. licheniformis (ATCC 14580) and B. subtilis 168, respectively. 
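Properties such as the calculated molecular mass (53,500 Da) and isoelectric point (pH 4.8) quoted above for the araA2 (SL-1) product can be computed from a translated coding sequence; the hedged sketch below uses Biopython's ProtParam module on a short placeholder fragment, so its output will not reproduce the published values.

```python
# Sketch of computing molecular mass and isoelectric point from a protein
# sequence with Biopython's ProtParam module. The sequence is a short invented
# placeholder, not the real 474-residue araA2 (SL-1) protein, so the printed
# numbers will not match the 53,500 Da / pI 4.8 quoted in the text.
# Requires Biopython (pip install biopython).
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_protein = "MTDLKQHEVWFITGSQHLYGEETLRQVAEHSQ"  # invented fragment
analysis = ProteinAnalysis(placeholder_protein)
print(f"molecular mass ~ {analysis.molecular_weight():.0f} Da")
print(f"isoelectric point ~ {analysis.isoelectric_point():.2f}")
```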
Conclusion

In the present study, the halo-thermotolerant Bacillus sp. SL-1 isolate was evaluated for molecular characterization of its potentially important pharmaceutical and industrial enzymes. Based on the sequence alignment results, 6 out of the 10 studied enzymes from Bacillus sp. SL-1 showed 100.0% similarity at the nucleotide level to the corresponding genes of these enzymes in B. licheniformis (ATCC 14580), demonstrating an extensive organizational relationship between the two strains. In the case of three of the studied enzymes (laccase, glutaminase, and allantoinase), their gene sequences showed more than 99% identity with B. licheniformis, and the nucleotide modifications translate into amino acid substitutions in the protein sequences. Asparaginase from Bacillus sp. SL-1 was the only enzyme with just a single silent nucleotide substitution. Molecular characterization of the industrial enzyme sequences of the newly isolated Bacillus sp. SL-1 provides useful information for comparative and evolutionary studies of different species within industrial microorganisms, including the B. subtilis and B. licheniformis group. In the meantime, these studies may offer new information regarding the evolution and application of these closely related species. Since most industrial processes are performed under harsh conditions such as extreme pH, temperature, or ionic strength, such robust enzymes provide economically appealing materials to be used under specific physicochemical situations.27,66 Thus, the halo-thermotolerant Bacillus sp. SL-1 could represent an excellent source of enzymes that can function at extreme conditions, working as biocatalysts in the medical, pharmaceutical, chemical and other industries.

Table 1. The primer pairs for amplification of enzyme genes from the Bacillus sp. SL-1 genome. The primers were designed according to the complete genome of B.

Table 2. Industrial enzyme classes of Bacillus sp. SL-1 and types of reactions (columns: enzyme commission number (EC), class of enzymes, industrial enzymes of Bacillus sp. SL-1).

Table 3. Comparison of the corresponding genes from the genome of Bacillus sp. SL-1 with B. licheniformis ATCC 14580 and B.
The Implementation of Personalised Learning to Teach English in Malaysian Low-Enrolment Schools

The implementation of personalised learning in teaching English has become one of the effective approaches to cater to learners' needs in education. Educators in Malaysia, as in many countries, are interested in exploring the implementation of personalised learning in their lessons. However, research on personalised learning to teach English in Malaysian schools is scarce. The prevalence of low academic achievement, especially in rural and low-enrolment schools, has led to teacher reluctance to adopt personalised learning approaches, potentially denying students the vital benefits of tailored education essential for their future success and development. Hence, this study aims to explore the implementation of personalised learning to teach English, particularly in Malaysian low-enrolment schools, as well as the challenges involved and ways to overcome them. In this study, personalised learning is implemented based on Self-Determination Theory. This qualitative research used a narrative approach via oral history. Purposive and convenience sampling were used to select a total of four participants. Online semi-structured interviews were conducted with the participants to collect the data. The analysis was threefold, comprising data transcription, data coding and data analysis to explore the research questions. Findings show that the implications of personalised learning in low-enrolment schools, grounded in the principles of Self-Determination Theory, are vast and promising. Personalised learning in English language education has the potential to transform Malaysia's educational landscape by nurturing intrinsic motivation, enhancing language skills, engaging students culturally, supporting teacher development, and fostering systemic changes, ultimately improving academic achievement, engagement, and well-being.

Introduction

The success of learning English depends on both internal and external elements, since it is a language that is taught across borders. As English has become a global lingua franca [1], the demand for English language proficiency has grown significantly. English teaching in the global context encompasses various settings, such as schools, language institutes, universities, and online platforms. Numerous aspects have been researched throughout the years in an effort to comprehend, pinpoint, and enhance language teaching techniques in language schools around the world. It is obvious that some language learners are able to pick up a language with ease, while others have less success in doing so.
Particularly in schools with low enrolment, the learning gap between urban and rural students is much more pronounced. Low-enrolment schools (LES), also known as schools with low student populations, are a global challenge that can be found in various regions and countries [2]. These schools typically have a smaller number of students compared to larger schools in the same area. The reasons for low enrolment can vary depending on the context and specific factors affecting the school and its surrounding community. Based on PPPM 2013-2025, low-enrolment schools (LES) are defined as schools with a small student enrolment. These schools consist of fewer than 150 students [3], and students from various age groups and academic levels are typically enrolled in the same class [4]. Teachers in these schools are expected to teach more than one subject, due to the shortage of teachers in the school [5]. Hence, LES have a small student-teacher population, and their school communities come from diverse backgrounds.

LES face a range of challenges that can negatively impact the quality of education the teachers could possibly provide. The associated problems include a lack of motivation in learning, limited resources, teacher burnout and shortages, and limited social opportunities [6]. Thus, this research is intended to highlight how low-enrolment school teachers could use personalised learning as an alternative method in the teaching and learning process, in order to increase students' motivation and interest in learning the language and to cater to their needs in completing their formal education.

Problem Statement

Personalised learning is an instructional approach that tailors learning experiences to the specific needs, interests, and abilities of individual students. It has been shown to be effective in improving student engagement, academic achievement, and retention in larger schools, and has shown promise in improving student outcomes in such settings [7], but its effectiveness in addressing the unique needs of students in LES is less clear. Research involving personalised learning in LES is scarce.

Due to scarce resources, a shortage of teachers, and a lack of course options, teaching English in Malaysian LES confronts severe difficulties. These schools often struggle to provide quality English language instruction and meet the diverse needs of students. The small student population results in larger multi-graded class sizes, limited opportunities for individualised instruction, and a lack of diverse language learning experiences [5,8,9]. Consequently, students may face difficulties in developing English language proficiency and achieving academic success. Addressing these challenges and finding effective strategies to teach English in LES is essential to ensure equitable access to quality education and enhance students' language skills and academic achievement.
The issue of low academic achievement has overwhelmed LES, especially rural schools. Because of this, teachers may be hesitant to adopt this approach, potentially depriving students of the benefits of tailored learning experiences that are crucial for their future success and development. However, the findings of this research can help address this challenge by developing personalised interventions that offer crucial support to both teachers and students in such schools. These interventions have the potential to promote fairness and impartiality in educational outcomes for all students, regardless of their socioeconomic background or geographical location.

Research Purpose

This study aims to find out the use and challenges of implementing personalised learning approaches on the English language proficiency of students in Malaysian LES, as well as ways to overcome those challenges. By investigating the implementation of personalised learning in LES, the study seeks to provide valuable insights into effective instructional strategies that can enhance the educational experience and English language proficiency of students in these unique educational settings.

Research Objectives

The research objectives of this study are:
1. To explore the implementation of personalised learning for teaching English in Malaysian LES.
2. To explore the challenges associated with implementing personalised learning to teach English in Malaysian LES.
3. To explore the ways to overcome the challenges associated with implementing personalised learning to teach English in Malaysian LES.

Research Questions

The research questions of this study are:
1. How is personalised learning being implemented for teaching English in Malaysian LES?
2. What are the challenges associated with implementing personalised learning to teach English in Malaysian LES?
3. How can the challenges associated with implementing personalised learning to teach English in Malaysian LES be overcome?

Conceptual Framework

The conceptual framework in this study demonstrates the interconnectedness between personalised learning, self-determination theory, implementation, challenges, and ways to overcome the challenges for teaching English in low-enrolment schools. The framework suggests that personalised learning, which aims to create a learning climate that supports student learning outcomes, can be optimised by applying self-determination theory. This theory emphasises that learners' innate needs for autonomy, competence, and relatedness [10] must be met in order to motivate and engage them in the learning process. The learners' abilities and interests are also factors in implementing personalised learning [11].

The framework highlights the implementation of personalised learning in teaching English. This is to find out how teachers in Malaysian LES carry out the approach in their English lessons, such as their preparation and their teaching and learning activities. Other than that, the challenges and ways to overcome the challenges for teaching English in low-enrolment schools will also be discussed, including the community's involvement. Overall, the conceptual framework offers a comprehensive approach to addressing the complexities of teaching English in low-enrolment schools while promoting personalised learning and self-determination theory.
Significance of the Study It is crucial that educators, policy-makers, and community members are aware of these issues and collaborate to find solutions.This research is hoped to provide more insightful thoughts from the experienced participants in assisting the success of students in these schools.The significance of this study can be highlighted in several ways.First and foremost, this study can help in addressing challenges in LES.LES often face resource constraints, limited course offerings, and reduced diversity, among other challenges.This study's focus on personalised learning approaches acknowledges the unique context of LES and seeks to address the specific challenges they face in teaching English. On the other hand, this study also can enhance student engagement and motivation.Personalised learning approaches are designed to cater to individual student needs, interests, and learning styles [12].By implementing personalised learning in LES, this study aims to enhance student engagement and motivation in the learning process, which can have a positive impact on academic performance and overall educational experience.Next, English Language proficiency can be improved as English proficiency is crucial for students in Malaysia to effectively participate in a globalised world [8].By implementing personalised learning strategies, this study seeks to enhance the teaching and learning of English in LES, potentially leading to improved English language proficiency among students. Other than that, education policies and practices can be informed through this study.The findings of this study can provide valuable insights into the feasibility and effectiveness of personalised learning approaches in LES in Malaysia.This can inform education policies and practices at the regional or national level, potentially leading to the adoption of personalised learning strategies in similar educational contexts.Last but not least, this study can contribute to research on personalised learning.The study contributes to the existing body of research on personalised learning, specifically in the context of LES in Malaysia.By exploring the implementation of personalised learning approaches in teaching English, this study adds to the knowledge and understanding of effective instructional strategies in low-enrolment settings. Overall, the significance of this study lies in its potential to address the unique challenges faced by LES, enhance student engagement and English language proficiency, inform education policies, and contribute to the broader research on personalised learning. Personalised learning is an approach of education in which instruction is personalised to the specific needs of each student.This technique has grown in favour in recent years as a way to increase student involvement, motivation, and accomplishment, particularly in lowenrolment schools (LES).The goal of this literature review is to explore the concept of personalised learning, the theories that underpin it, implications of prior research, and its promotion in LES. Concepts In this part, the concepts of the study will be explained in three areas including teaching of English, personalised learning and low-enrolment schools (LES). 
Teaching of English The teaching of English is the process of facilitating language learning and skill development in the English language.This typically involves the use of various methods and strategies, such as reading, writing, listening, and speaking, as well as the study of grammar, vocabulary, and pronunciation.Effective English language teaching requires an understanding of learners' needs, goals, and backgrounds, and the ability to employ a range of teaching approaches and resources to create a dynamic and engaging learning environment.English language teaching is an important aspect of global education due to the widespread use of English as an international language in business, academia, and communication across cultures.The importance of English language teaching as a means of promoting intercultural communication and understanding in an increasingly globalised world [13].The author argues that teaching English as an international language (TEIL) can help learners develop the language skills necessary to communicate effectively across cultures.An article examines the opportunities and challenges of English language teaching in the age of globalisation [14].The author argues that English language teaching is essential for preparing learners to compete in the global job market and to communicate effectively with people from diverse linguistic and cultural backgrounds. Teaching English in Malaysia can be a challenging yet rewarding experience.Malaysia is a diverse country with multiple languages and cultures, and English is one of the official languages alongside Malay.English language education in Malaysia is given great importance, and the country has implemented various policies and initiatives to improve the quality of English language teaching.English language teaching in Malaysia is predominantly focused on the Malaysian Education Blueprint, which outlines the government's vision for education in the country.The blueprint emphasises the importance of English language proficiency in preparing students for the global workforce and improving the country's economic competitiveness [3].Overall, teaching English in Malaysia can be a fulfilling experience, as teachers have the opportunity to make a positive impact on their students' language skills and prepare them for the global workforce [15].However, it requires a deep understanding of Malaysia's cultural and linguistic diversity and a willingness to adapt teaching strategies to meet the needs of students with varying language backgrounds and proficiencies. Personalised Learning Personalised learning is an approach to education that seeks to tailor instruction and learning experiences to the individual needs and interests of each learner.This approach recognises that students have unique individual needs, abilities, and interests, and aims to create a more individualised and student-centred learning experience.McCarthy [11] defines personalised learning as an approach to education that tailors instruction and learning experiences to the individual needs, interests, and abilities of students.The author emphasises that personalised learning is a student-centred approach that recognises the unique learning styles and paces of each student.Pane et al. [16] provides an overview of the history, definitions, and potential impact of personalised learning.They define personalised learning as an approach that tailors instruction, pace, and content to the individual needs, abilities, and interests of each student. 
One example of personalised learning in Malaysia is the implementation of the Dual Language Programme (DLP) in public schools [17].This programme allows students to learn certain subjects in both English and Malay, catering to the needs of students who are more proficient in one language than the other.Additionally, students can choose to take elective subjects according to their interests and career aspirations, giving them a personalised learning experience that aligns with their goals.Another example is the use of online platforms and digital resources to provide personalised learning opportunities for students.In Malaysia, most of the schools have implemented the Digital Educational Learning Initiative Malaysia (DELIMa), which allows students to access a range of educational materials and resources, including videos, quizzes, and interactive games, to support their learning.The platform also provides teachers with tools to track student progress and provide personalised feedback to support individual student needs [18]. Low-Enrolment Schools (LES) Low-enrolment schools are educational institutions that have fewer students enrolled than their intended capacity or a minimum enrolment threshold set by the government or education authorities.These schools often face various challenges, such as limited resources, difficulty in retaining teachers, and reduced funding.A few journals explore the impact of low enrolment on school outcomes, including academic achievement and teacher retention, as well as the characteristics and practices of successful small schools [19][20][21].Additionally, the challenges and potential solutions for low-enrolment schools are discussed in the literature.Perez [21] identifies several challenges faced by low-enrolment schools, including limited funding, limited resources, difficulty in hiring and retaining staff, increased workload for staff, limited course offerings and extracurricular activities, and the possibility of closure.The author emphasises that these challenges can lead to a negative impact on student achievement, school climate, and teacher morale. In the Malaysian context, low-enrolment schools refer to schools with student enrolments that fall below the Ministry of Education's target enrolment figures.These schools are often located in rural areas or areas with low population density, and they face unique challenges related to funding, staffing, and providing quality education to students.According to a report by the Malaysian government, low-enrolment schools in Malaysia face a range of challenges, including limited funding, difficulty in hiring and retaining qualified teachers, and limited resources for curriculum development and extracurricular activities [22].These schools may also struggle to provide a diverse range of courses and programs to their students, which can limit their opportunities for academic and career success.To address these challenges, the Malaysian government has implemented a range of initiatives aimed at supporting lowenrolment schools [6].These include increasing funding for these schools, offering incentives to teachers to work in rural areas, and developing targeted programs to improve educational outcomes for students in these schools. 
Self-Determination Theory (SDT) Self-Determination Theory (SDT) has been applied to the field of education, including personalised learning approaches.SDT is a theory of motivation and personality that addresses three fundamental and universal psychological needs: autonomy, competence, and relatedness [10].Individuals are more likely to be intrinsically motivated and engaged in activities that meet these three basic needs.SDT promotes the establishment of a learning atmosphere that can nurture the three elements crucial for sustaining elevated levels of intrinsic motivation, leading to better learning outcomes in educational settings. Several studies have explored the connection between SDT and personalised learning.Jang et al. [23] conducted a study to investigate how SDT relates to personalised learning in a university-level online course.The study revealed that personalised learning can be effective when it is consistent with the fundamental psychological needs for autonomy, competence, and relatedness outlined by SDT.It was found that students who received more autonomy support and had a greater sense of competence were more likely to engage in personalised learning. In schools with low enrolment, personalised learning based on SDT can be particularly beneficial for both teachers and students.SDT emphasises the importance of educators assisting students in feeling competent by offering suitable challenging activities and performance feedback [24]."Feelings of competence are promoted when learning environments differentiate tasks at the appropriate level of challenge for high ability students" (p.11) [25].For example, teachers can use personalised learning to tailor their instruction to the specific needs and interests of each student, which can help to create a more engaging and supportive learning environment.Students, in turn, are more likely to be intrinsically motivated to learn and to feel a sense of ownership and autonomy in their learning experiences. Personalised learning has become an increasingly popular approach in education, and by tailoring instruction to individual students' needs, abilities, and interests, personalised learning can enhance students' engagement, motivation, and learning outcomes.Moreover, when personalised learning is based on the principles of Self-Determination Theory (SDT), it can provide even greater benefits, particularly for schools with limited resources and low enrolment.By fostering students' sense of autonomy, competence, and relatedness, personalised learning based on SDT can create a more supportive and effective learning environment [24], ultimately leading to better educational outcomes. 
Past Research Among the critical factors in promoting personalised learning is student agency and autonomy.Personalised learning can only be achieved when students are given the opportunity and autonomy to choose their own learning paths [26].Teachers should act as facilitators rather than spoon-feeding their students.When students are empowered to make choices in their learning, they are more engaged and motivated, which then leads to more positive learning outcomes, such as improved self-regulation skills and increased motivation and engagement [7].This approach is particularly useful in promoting personalised learning in low enrolment schools (LES) where students may have limited options in their learning.Here, teachers can help them to pursue their interests and passions by encouraging them to take an active role in their learning and choosing the content, pace and mode of instruction that best meets their needs.This approach can also help to bridge the achievement gap and increase equity in education by ensuring that all students have access to learning opportunities regardless of their socio-economic status or background, while fostering a sense of belonging and purpose among students [7,27]. Another essential element of fostering personalised learning is the use of data to customise the teaching-learning process to meet the needs of individual students.This data may contain a variety of student characteristics, such as academic achievement, learning preferences, interests, and cultural backgrounds.Teachers, for example, may use formative assessment approaches such as tests, quizzes, and other assignments to collect data on students' knowledge, skills, and performance.They could also conduct diagnostic exams to identify specific strengths and weaknesses.Teachers can better understand each student's particular needs by examining this data and adapting the teaching-learning process accordingly [28].This objective is to offer each student a customised learning experience that can help them reach their full potential and succeed academically [28,29]. 
Additionally, project-based learning (PBL) demonstrated its potential as a useful tool for teachers to encourage individualised learning among LES students or to implement personalised learning [30,31].Oh and colleagues' research from 2020 found that PBL encourages student involvement and motivation by letting them work on projects that are pertinent to their interests and needs.Due to the fact that students are expected to apply their knowledge and understanding to real-world problems, this method also encourages critical thinking and problem-solving abilities.Teachers indirectly assist students in developing a sense of ownership over their learning while enhancing their academic performance by exposing them to PBL in personalising their learning [32].As highlighted in the fourth Sustainable Development Goal (SDG 4), indicator 4.4.1 [33], it also gives students the chance to develop 21st century skills like collaboration, communication, and creativity.These skills are regarded to be crucial for people to master through action based on personal experience and reflection [34].As a result, the development of the personalised learning idea promotes the value of the student in the learning process, the emphasis on deep learning, and the transforming power of learning.This is centred on putting the student at the centre of the process, enabling learning by giving them the autonomy they need and making learning a lifelong journey.Individual diversity should be acknowledged rather than ignored in this approach. Another issue that must be addressed is the difficulty of geographical isolation.Low enrolment schools may be the only educational option for students in rural or remote areas [35].Students who come from low enrolment schools generally confront resource limitations, such as financial resources, labour management, school infrastructure, and teaching-learning materials, due to the size of this type of school, which is typically small [36].Personalised learning would be the appropriate method for teachers in this type of school to utilise in order to deal with these limits.This will allow teachers and students to fully exploit the resources in order to respond to the specific needs and interests of each student [5,36].Limited resources can act as a spur for creativity and innovation in education, leading to more individualised and student-centred approaches to learning [5,9].Students could learn to cooperate among themselves, with teachers' assistance, to create their projects and foster a culture of cooperation and invention, resulting in a more effective learning environment for all students.This could help overcome the geographical isolation issues that low-enrolment institutions confront. Methodology This section will outline the research design that was used for this qualitative research, samples, instruments, data collection, and analysis that were carried out to explore effective personalised learning approaches practised in this context of research and to identify the challenges associated with implementing personalised learning in LES, as well as ways to deal with these challenges in teaching English Language to students in low-enrolment schools. 
Research Design This study took a narrative approach to the impact of personal and professional experiences on teaching practise [37][38][39][40], with a focus on personalised learning, which used to provide equitable quality education to students in low-enrolment schools.As Denzin [41] points out, as we live in stories, it is critical that the past and its stories are recounted in ways that provide us with tools to assess and understand problems in the past.It also allows us to recognise patterns that would otherwise be unseen in the present, giving us a critical viewpoint for understanding and solving current and future challenges.This is feasible due to the valuable information gained via oral history, which is a collection of recollections and subjective impressions with historical resonance collected from recorded interviews [42]. Oral history has the potential to produce powerful narratives through the flow of words in situational and interpersonal contexts.As a result, the individuals' identities and perspectives are made public, and other audiences may clearly understand the history [1,43].It allows researchers to record the voices, memories, and perspectives of historical people, in this case, four instructors with at least five years of experience teaching in low-enrolment schools (LES).When analysing the phenomenon under study and revealing its unique features, the researchers have flexibility, authenticity, depth, and transparency due to this qualitative research lens [44].This study is important because it will provide us all a tool to engage with and learn from the people we live and work with through an interview that captures their distinctive history and viewpoint in their own words. As indicated in the research introduction, the current socio-educational setting offers the framework for comprehending and contextualising narratives by emphasising the lived experiences of four English instructors who have taught students in LES in Sabah, Sarawak, and Pahang.They specifically highlight how individualised learning is implemented among students in these types of schools.Teachers' perspectives, assumptions, attitudes, actions, and self-concepts are inevitably evident in their narratives [37].The telling of these teachers' stories will allow us to collect, analyse, dissect, and deduce the significance of past, present, and future perspectives-all of which are critical to ensuring inclusive and equitable quality education and encouraging opportunities for lifelong learning in Malaysia. Samples Since the potential participants were chosen specifically from the researchers' network of friends and acquaintances in order for them to highlight their pertinent first hand experiences of teaching in low-enrolment school settings, this oral history method necessitates a combination of purposive and convenience sampling.The participants were chosen based on their willingness to participate and enjoyment of reflecting on their professional development as teachers.Additionally, participants were contacted online.Before conducting the interview online, participants were given a consent form.Confidentiality is protected by using pseudonyms. 
In terms of the participants' demographics, three are female and one is male. All participants have at least five years of experience teaching in low-population schools in rural Sabah, Sarawak, and Pahang. There are as few as 25 students and as many as 120 pupils enrolled in these schools, and because of their diverse backgrounds, English appears to be a foreign language to them. Each respondent has had experience teaching a variety of subjects in their schools, in addition to English. These teachers may also have to deal with less motivated students and administrators who might not place as much priority on teaching foreign languages, because low-enrolment schools typically lack instructional facilities and support [45].

Instruments

An online semi-structured interview was used as the research instrument for this qualitative study to help answer the research questions. This research tool was intended to elicit detailed data while also providing flexibility to researchers and participants and standardising interviewing techniques [46,47]. The questions were developed based on six elements for the online interview form. The participants' demographic backgrounds were covered in Part A, which has five items. Three major questions about LES students' characteristics were in Part B. Three questions made up Part C, which covered the participants' teaching experiences with integrating individualised instruction. Part D discussed the strategies for implementing personalised learning with four questions. Part E included two questions that address the difficulties in implementing personalised learning, and Part F contained three questions on ways to address those difficulties. These six components of the interview resulted in 15 different questions.

Data Collection

All interviews took place using video conferencing platforms such as Zoom and Google Meet, and with the participants' permission, the meetings were recorded. In case of an emergency, the researchers kept the questions in soft copy and sent them to the participants; alternatively, an interview was conducted via WhatsApp message. Independently, the researchers took the data from the recorded interviews and arranged it in accordance with the list of questions. The items were examined, compiled, and evaluated using qualitative research methods.

Data Analysis

The analysis of the qualitative data was threefold. The first part was data transcription, followed by data coding and, ultimately, data analysis to identify the answers to the study questions. Transcription converts audio or video data from semi-structured interviews to written form. The transcriptions were organised and analysed according to the question list. Following the completion of the data transcription, the oral history texts for each respondent were narrated. The information obtained was then subjected to a thematic analysis based on the five interview parts to uncover emergent codes based on common themes and keywords found in the oral history texts. Finally, the data were presented using thematic analysis.

Findings

This section features the oral history texts that were prepared based on the transcriptions of the interviews with the four respondents. Thematic analysis was then used to examine the oral history texts.

Thematic Analysis

The oral history texts were examined using thematic analysis. This section discusses the analysis based on the three research questions.

How is personalised learning being implemented for teaching English in Malaysian LES?
For research questions one, there are two aspects discussed, mainly participants' experiences and strategies in implementing personalised learning.Based on the transcription of the interviews, two themes were identified for the first aspect and three themes for the second aspect. (i) Participants' Experiences From the aspect of participants' experiences, two themes were identified from the transcript: identifying students' abilities and identifying students' interests.The theme of identifying students' abilities in the context of education delves into understanding and nurturing the unique potential of each learner.This theme encompasses various sub themes that highlight the essence of recognising and fostering students' talents and capabilities.One of these subthemes is focused on individual progress and achievements.In this regard, participants have observed remarkable improvements among their low-enrolment school (LES) students.These advancements encompass enhanced language skills, elevated academic performance, and a notable boost in self-confidence.The significance of such progress is exemplified in the transcript like, "Being able to write nicely and do what I asked was already something to be thankful for."These words reflect the heartfelt appreciation participants have for witnessing the incremental growth of their students.Moreover, there are instances where even the smallest accomplishments hold immense value, as demonstrated by the affirmation that "One small achievement in this low-enrolment school makes me happy."Such instances further emphasise the transformative power of acknowledging and celebrating students' achievements, irrespective of scale, fostering a positive learning environment. Another crucial aspect within the theme of identifying students' abilities is the subtheme of tailoring instruction to students' competency.This approach recognises the diversity in students' English language proficiency levels and adapts teaching methods accordingly.This tailored instruction ensures that each student's unique competency level is addressed effectively.This is particularly vital in the context of low-enrolment school (LES) students, as their English proficiency levels might vary widely due to various factors.Participants acknowledge the importance of adapting their lessons to match their students' English competency and adjusting their teaching techniques based on the smaller class sizes, as articulated by the statement, "My lessons are tailored to my students' competency and level of English as the size of the class is smaller."This personalised approach to teaching not only facilitates better understanding but also contributes to a more inclusive learning environment where each student's needs are met.Ultimately, this sub theme underscores the significance of employing differentiated strategies to cater to the diverse learning styles and competencies of LES students. 
Next, the theme of identifying students' interests within the realm of education revolves around recognising and nurturing the unique passions and curiosities of learners.Within this theme, three distinct sub themes emerge, shedding light on effective strategies to engage and empower LES students.The first subtheme highlights the significance of promoting engagement through real-world connections.Participants employ innovative strategies to captivate their students' attention, infusing the learning process with vibrancy and joy.They accomplish this by interweaving real-world topics, personal anecdotes, and captivating visuals into their lessons.The impact of this approach is vividly illustrated through statements like, "To make them engaged in the learning process, I would tell them my childhood stories...I showed them videos and pictures about Peninsular Malaysia."These accounts highlight the profound effect of relating the curriculum to students' lived experiences, resulting in heightened participation and a more enjoyable educational journey. Another pivotal facet within the theme of identifying students' interests is the subtheme of personalised learning activities.Participants recognise the significance of tailoring their teaching methods to align with the unique inclinations and understanding of LES students.This personalised approach encompasses a range of creative tools, such as flashcards, songs, puppets, and worksheets adorned with captivating imagery.Prioritising connection and understanding, educators invest time in familiarising themselves with their students' preferences.As one participant articulated, "I always try my best to get to know them first and their likings.This is essential to cater to my students' needs in my lesson, and it is easier for me to do so since there are fewer students."By integrating these personalised elements, participants cultivate an atmosphere where learning becomes an immersive and tailored experience, facilitating a deeper comprehension of the subject matter. Within the overarching theme of identifying students' interests lies the crucial subtheme of building relationships and motivation.Participants acknowledge the profound impact of forging connections with LES students, inspiring them to envision a future imbued with possibility.These relationships become pillars of support, fostering an environment where recognition and rewards serve as motivational forces.The transformative potential of such connections is vividly expressed through sentiments like, "My hope was for them to follow in my footsteps and become teachers...I wanted them to understand the impact we had on their lives."Additionally, participants play a vital role in nurturing students' aspirations.By understanding their dreams and aspirations, as articulated by one participant, "They wanted to change their lives and get a much better paying job...I wanted to help my LES students achieve their dreams," participants provide the guidance and encouragement needed to pave the way for students' success, creating a path toward a brighter future. 
(ii) Strategies in Implementing Personalised Learning The second aspect discussed in research question one is the strategies for implementing personalised learning.Three themes were identified from the transcripts.In the realm of education, a foundational theme takes centre stage -the creation of a positive and supportive learning environment.Within this overarching theme, sub themes emerge, focusing on crafting spaces that foster engagement, enjoyment, and a genuine passion for learning among LES students.One such sub theme centres around building an environment that is not only positive but also enjoyable.Participants take deliberate steps to infuse their classrooms with elements that captivate and energise their students.Through innovative methods and creative approaches, students become active learners in their own learning journey.This resonates in statements like, "They enjoyed and felt excited about learning English through these methods," underlining the impact of fostering an environment that sparks enthusiasm and curiosity.Further emphasising the connection between enjoyment and learning, participants tap into students' interests, such as incorporating singing into the English class for those who relish musical expression, as shown in the transcript like, "The LES students love to sing!So singing was a must in our English class."This not only makes learning more relatable but also showcases the participants' commitment to catering to the diverse preferences of their students.Moreover, the practice of encouraging students to contribute and share their favourite items or treasured possessions creates a sense of belonging and personal investment in the learning process. Another vital facet within the theme of fostering a positive and supportive learning environment is the sub theme of encouraging student participation and sharing.This element underscores the value of inclusivity and active involvement in the educational journey.Participants recognise the importance of providing platforms for LES students to express their ideas, interests, and experiences.By promoting open dialogue and discussion, students are empowered to contribute meaningfully to the classroom dynamic.As illustrated by the statement, "Some students really want to contribute to the learning session by talking about their interests and bringing their favourite item/treasured item to school," participants actively embrace and celebrate students' perspectives.This practice not only enhances students' self-esteem but also cultivates a sense of ownership over their learning experience.Furthermore, such participatory approaches serve as a bridge between educators and students, fostering relationships built on mutual respect and understanding.This subtheme, in essence, highlights the educators' dedication to nurturing an environment where every voice is valued, fostering a collaborative atmosphere that enriches the learning journey for all. 
The pursuit of developing competence and skills through engaging activities, which is the second theme, lies at the heart of effective education, where every effort is made to ensure that LES students receive tailored instruction.Within this overarching theme, a key sub theme emerges, highlighting the strategic adaptation of language and tasks to correspond with the proficiency levels of LES students.Participants employ a nuanced approach, recognising the diverse range of abilities within their classrooms.This involves a deliberate process of simplifying language structures and tasks, as expressed in statements such as, "I did not encourage them to construct complex sentences, but instead, I only use simple and compound sentences."By employing this method, participants provide students with an accessible starting point, ensuring comprehension and reducing potential frustration.Furthermore, the practice of preparing tasks and worksheets of varying difficulties underscores the commitment to individualised learning.As captured in the statement, "I always prepare a few tasks/worksheets of different levels of difficulties for my students since it is easier to monitor them in a smaller size," educators create an environment conducive to gradual skill development.This sub theme emphasises the participants' dedication to nurturing growth at each student's pace, ultimately fostering a foundation of competence and confidence. Within the broader theme of developing competence and skills, a significant subtheme comes to the fore -the integration of engaging activities that promote language skills and critical thinking among LES students.Participants understand that interactive and stimulating activities hold the potential to spark curiosity and enhance learning outcomes.These activities, thoughtfully curated to align with LES students' unique needs, play a crucial role in fostering linguistic and cognitive development.Notably, the mention of activities like word mazes, Scrabble, and puzzles illustrates this approach.The participant's perspective, "They love word mazes, Scrabble, and puzzles.These games are not only fun but also can develop their language skills and critical thinking especially when they only have fewer friends to mingle with," highlights the dual benefits of such activities.Not only do they create an enjoyable classroom atmosphere, but they also empower students to engage in meaningful language exploration and cultivate essential critical thinking skills.This sub theme underscores participants' commitment to providing LES students with opportunities that go beyond rote learning, encouraging them to become active learners in their own skill development journey. 
Promoting student autonomy and ownership of learning is a guiding principle that underscores the essence of education.Within this overarching theme, a significant sub theme emerges -the adept adaptation of teaching methods to the available resources and tasks, all while maintaining the autonomy of LES students.Participants display remarkable flexibility by tailoring their approaches to accommodate the unique circumstances and resources at hand.The fusion of cultural celebrations and language acquisition, as seen in statements like, "Another aspect was the celebration of different ethnicities.Sometimes, we organised fashion shows where they would wear their traditional attire," not only encourages language practice but also highlights the value of connecting with students' personal experiences.This multifaceted approach extends to practical, hands-on activities, where participants utilise everyday experiences like sandwich-making to introduce English, as recounted in, "....when making a sandwich, I would bring the ingredients and introduce them in English."Moreover, the integration of outdoor lessons and cooking activities by the river showcases a dedication to contextual and experiential learning.As encapsulated by, "sometimes conducting lessons by the river or engaging in cooking activities near my quarters," participants foster a dynamic environment that bridges classroom instruction with real-life situations, allowing students to take the reins of their learning journey. In the pursuit of promoting student autonomy and ownership of learning, the sub theme of employing technology and digital resources takes centre stage.This approach resonates with the evolving landscape of education and the digital era.Participants recognise the transformative potential of technology in elevating engagement and facilitating self-directed learning.By transitioning from physical books to digital resources, as articulated in the statement, "I change from physical books to digital resources for reading activities.I hope the digital resources will be more engaging for my LES pupils," participants provide LES students with innovative avenues for exploration.Through interactive platforms, students can navigate a wealth of information, exercising independent decision-making and taking charge of their educational experience.This sub theme underscores educators' commitment to harnessing technology's power to enhance engagement and facilitate a seamless blend of guided instruction and autonomous exploration.By integrating digital tools into the learning process, participants empower LES students to navigate a dynamic learning landscape and cultivate the skills necessary for self-driven success. What are the challenges associated with implementing personalised learning to teach English in Malaysian LES? For research question two, three themes were identified from the transcripts: challenges of teachers, challenges of students and challenges posed by geographical factors. 
(i) Challenges of Teachers Within the sphere of education, a pervasive challenge faced by teachers in the context of lowenrolment school (LES) is the occurrence of conflicts amongst their colleagues.This challenge emerges as a sub-theme, highlighting the discord that arises when personalised learning methods are introduced.While personalised learning aims to cater to individual students' needs, it's not always universally embraced.The statement, "Some teachers are really against the idea," vividly captures the resistance faced by participants seeking to implement innovative teaching approaches.The tension escalates when certain teachers perceive their own lessons to be disrupted by personalised methods, as indicated by, "It also creates tension between some teachers and me because they feel that their lesson is disrupted by the songs as the school has fewer pupils."This conflict not only strains professional relationships but also underscores the inherent challenges of introducing new teaching methodologies within a complex educational ecosystem. Another significant challenge teachers encounter within the theme of "Challenges of Teachers'' is the burden of an overwhelming workload.This sub theme underscores the intricate tapestry of tasks that participants must navigate alongside their core teaching responsibilities.Teachers in LES are burdened with numerous additional duties, often stretching their capacities thin.The sentiment, "Teachers in LES have a maximum workload.One person has to handle too many tasks," encapsulates the pressure felt by participants who are juggling a myriad of responsibilities beyond classroom instruction.This demanding workload can hinder their ability to fully engage in personalised learning initiatives, potentially impacting the quality and depth of preparation for such tailored approaches. The theme of challenges faced by teachers is further illuminated by the sub theme of time constraints.In the pursuit of educational excellence, participants often grapple with limited time to complete syllabi and adequately prepare personalised learning materials.The struggle to strike a balance between covering all necessary topics and adopting innovative methods is poignantly expressed in the statement, "One of the major challenges is the inability to finish the syllabus within the allocated time."The multifaceted responsibilities inherent in teaching also curtail participants' capacity to infuse creativity and real-life experiences into their lessons, as conveyed by, "I do not have ample time to prepare and bring in real-life or exciting realia for teaching."This sub-theme underscores the intricate dance teachers engage in, attempting to weave together effective pedagogical methods while adhering to strict time constraints imposed by curriculum demands and the educational calendar. 
(ii) Challenges of Students

Navigating the educational landscape presents students with a spectrum of challenges that influence their learning experiences and growth. One key sub-theme that emerges within this overarching theme is the inclination of students to compare lessons. In the context of LES, students often find themselves evaluating the approaches of different teachers, forming preferences based on the level of engagement and excitement they derive from the learning process. This trend is encapsulated by the statement, "It is also sad because the students from the LES cannot help but compare one teacher's lesson to another because they want to have fun in their classroom instead of a mundane chalk-and-talk learning session." While this inclination demonstrates their desire for captivating and dynamic educational encounters, it can simultaneously foster varying expectations, potentially leading to challenges in maintaining consistency across classrooms.

Alongside the tendency to compare lessons, another compelling sub-theme surfaces: the demotivation of students. This demotivation can stem from several sources, including a lack of confidence and proficiency in the language being taught. While personalised learning strategies aim to address individual needs, they may not uniformly resonate with every student's learning style. This discrepancy is highlighted by the concern shared in the statement, "As for Level 2 students in LES, some will have low self-esteem compared to other average students." Moreover, the variability in teaching methods and approaches across different grade levels, as indicated by, "Another challenge is the inconsistency in teaching methods and approaches between different grade levels," can lead to feelings of confusion and challenges in adapting, particularly when transitioning to new levels. This sub-theme underscores the complexity of addressing diverse student needs while fostering an environment that nurtures self-esteem, motivation, and a sense of inclusivity, all within the context of tailored learning experiences.

(iii) Challenges Posed by Geographical Factors

Amid the realm of education, a significant theme emerges: the challenges posed by geographical factors on the educational landscape. Within this theme, key sub-themes underscore the struggles that educators face in delivering effective personalised learning experiences. One such sub-theme is the presence of limited facilities and resources, which profoundly affects the execution of personalised learning strategies. The constraints become apparent when participants are faced with the scarcity or malfunctioning of essential tools and technology. The sentiment, "As for other facilities, such as LCD projectors, there was only one available," sheds light on the hurdles participants confront when they lack the necessary equipment to enhance classroom engagement. In addition, the wear and tear of facilities in low-enrolment schools can be prohibitive, as articulated in, "The facilities in the low-enrolment school are run-down and I cannot bring my projector to the classroom as the power/plug point in the classroom are out of order." This sub-theme emphasises the barriers participants encounter when attempting to implement personalised learning methods, highlighting the need for adequate infrastructure to support dynamic and effective teaching.
Another compelling sub-theme within the theme of challenges posed by geographical factors is the struggle participants face due to limited resources, particularly in the rural areas where LES are often situated. The disparity in resources becomes stark when participants are unable to access the materials needed to facilitate their personalised learning plans. This is underscored by the statement, "The resources were indeed limited. In terms of the internet, it was non-existent." Geographical remoteness exacerbates the issue, as participants face challenges in obtaining resources vital for a well-rounded education. The geographic isolation is emphasised in statements such as, "The distance to the mall is quite far, requiring a boat ride. Additionally, there are more options available there, in the peninsular." This sub-theme highlights the disparities that exist between urban and rural educational settings, underscoring the need for equitable access to resources to ensure that all students can benefit from comprehensive and effective personalised learning experiences.

Research Question 3: How to overcome challenges associated with implementing personalised learning to teach English in Malaysian LES?

For research question three, three themes were identified from the transcripts: government support, pedagogical practices and student learning, as well as support and engagement in education.

(i) Government Support

In the field of education, a crucial theme emerges - the role of government support in shaping the quality and accessibility of learning experiences. This theme is illuminated through sub-themes that underscore the significance of providing technological resources to schools in remote areas and the pivotal role of government initiatives in education. The first sub-theme emphasises the potential of technology to bridge educational gaps in underserved regions. Participants express a yearning for technological tools that can revolutionise teaching and learning. This sentiment is evident in statements such as, "The support and assistance that I greatly hoped for were in terms of technology because the rural areas and LES were still lagging behind." They recognise that technology can empower students to engage with the English language in innovative ways, as reflected in, "With the support of technology, such as stable phones and internet connection, it would greatly help the students of SKKT to learn and explore the English language." This sub-theme emphasises the need for technology to facilitate education, considering the affinity of the younger generation for digital devices and their potential to enhance language acquisition. The call for a reliable internet connection to access educational resources further underscores the importance of technological infrastructure in promoting effective teaching and learning.
The second sub-theme underscores the pivotal role of government initiatives in education.Participants recognise that government investment is essential to address resource gaps, enhance educational outcomes, and ensure equitable access to quality education, particularly in the context of LES.The sentiment, "The Government should provide adequate funding in order to provide necessary resources and materials for effective teaching and learning," highlights the participants' expectation of financial support to create a conducive learning environment.Additionally, the sub-theme underscores the necessity of strategic planning in teacher recruitment and placement.The idea that recruiting teachers from the local community can overcome communication barriers and address shortages in rural areas is encapsulated in, "Perhaps it is time to consider recruiting more teachers from Sabah so that they can be placed back in their home states."This sub theme further reinforces the importance of collaboration between local authorities and community leaders to promote teaching as a viable profession among the local population, contributing to the overall improvement of education in LES. (ii) Pedagogical Practices and Student Learning The realm of pedagogical practices and student learning, which is the second theme, is enriched by the subtheme of inculcating a human-centred teaching approach.This pedagogical shift acknowledges the need to prioritise the holistic development of students, addressing their unique needs, interests, and abilities.Participants recognise that an education system focused solely on content fails to fully nurture students in LES.The sentiment that "The institution needs a human-touch approach" underscores the desire to create an educational environment that extends beyond academic knowledge.This approach calls for a reformation of family institutions to address societal issues faced by the youth, as shared in, "I believe a reform in family institutions is a must because youngsters these days face major problems, especially abandonment and social issues."The participants' focus on understanding students' perspectives and fostering empathy is captured by, "By treating my students as equal humans, they will reciprocate the effort that we put in our teaching."This subtheme advocates for educational practices that prioritise students' overall well-beingmental, physical, and spiritual-ultimately shaping a well-rounded student equipped to navigate the complexities of life. Within the theme of pedagogical practices and student learning, the sub-theme of language integration and cultural exchange emerges as a bridge between education and community.Participants emphasise the power of language as a tool for communication and understanding across different cultures.By incorporating English into daily interactions, as exemplified in, "Sometimes, on Sundays, they would ask if I wanted to go to the river or join them fishing.It was during those moments that I would incorporate a little bit of English, so at least they knew, and hopefully, they could use it to communicate basic ideas," participants create opportunities for language to transcend mere academic instruction.This sub-theme underscores the potential of language to foster connection and mutual comprehension, not only within the classroom but also within the broader community. 
The sub-themes of administrative assistance and flexibility in learning weave together to shape the pedagogical landscape in LES.The subtheme emphasises the significance of streamlined administrative processes that alleviate teachers' burdens and enable them to focus on students.Participants express the need for support in data entry, as voiced in, "Please make the teacher assistant a reality so that we will have more time to focus on the pupils instead of doing all the data entries especially in LES."Additionally, the provision of specialised subject teachers and teaching staff is highlighted as essential to enrich the LES students' learning journey, as indicated, "To reduce clerical work and to provide sufficient teaching staff, including specialised subject teachers, would also greatly benefit the LES students' learning experience."The synergy between administrative support and skilled teaching staff forms a foundation for effective pedagogical practices. The final sub-theme underscores the value of flexibility in learning.Participants see the need to tailor learning experiences to the individual needs and preferences of LES students.The practice of personalised learning beyond the confines of traditional schedules, as expressed in, "I do the personalised learning out of the timetable.Eg.Early in the morning before class started," reflects a commitment to accommodating diverse learning styles and needs.This sub-theme underscores the importance of fluidity in instructional approaches, allowing students to flourish at their own pace while engaging in a dynamic learning process. (iii) Support and Engagement in Education The theme of support and engagement in education, which is the third theme of RQ3, is illuminated through sub-themes that emphasise the importance of collaboration between home and school, as well as the significance of acknowledgement, inspiration, and additional support in the educational journey.In the first sub-theme, the emphasis is on fostering active collaboration between parents, families, and teachers to elevate students' learning experiences within LES.Participants express the hope for increased involvement of parents and families in their children's educational development, as shared, "I hope the parents and family are more involved in the pupil's development."Recognising the pivotal role parents play, participants envision a scenario where parents take a proactive interest in their children's academic progress, ensuring tasks such as reading and homework completion are actively supported at home, as suggested by, "Maybe they will take the time to make sure their children read or do their homework."Moreover, the sub-theme underscores the potential of community support in facilitating additional educational opportunities, exemplified by the following, "Getting full support from the community could give more benefit for the students to join additional classes or tutoring sessions."This collaborative approach underscores the shared responsibility of teachers, families, and the broader community in shaping a conducive and enriching educational environment for LES students. 
Another compelling sub-theme within the theme of support and engagement in education is the emphasis on personalised strategies that acknowledge, inspire, and provide additional support to LES students.This sub-theme underscores the importance of recognising and responding to each student's unique needs, strengths, and interests.Participants implement a personalised approach to motivate students by acknowledging their efforts, as reflected in, "I reward and give recognition to my LES students for the little effort in learning."This practice not only reinforces positive behaviour but also cultivates a sense of achievement and selfworth.Furthermore, participants extend their commitment by offering additional support through initiatives like free tuition classes, as exemplified in, "I also conducted free tuition classes to further support the LES students."This personalised support acknowledges that education is not a one-size-fits-all endeavour and underscores the participants' dedication to tailoring their efforts to cater to individual learners.By nurturing a learning environment that respects and responds to each student's unique identity and learning journey, this sub-theme underscores the holistic growth of LES students, promoting their academic, emotional, and personal development. Discussion and Implications The findings of this research show that the respondents had a positive mindset towards the implementation of personalised learning to teach English in Malaysian low-enrolment schools.This can be understood and analysed through the lens of the self-determination theory. Discussion The foundation of self-determination theory (SDT) is the notion that humans are constantly involved in dynamic interactions with their surroundings, particularly their social surroundings.According to this theory, in order to develop and evolve, everyone needs autonomy (the need to feel free and in control), competence (the need to feel effective), and relatedness (the desire to build meaningful relationships with others) [10].This research where implementing personalised learning is closely connected to SDT since the English lessons were designed solely based on the students' needs, background, culture and proficiency. Autonomy Autonomy plays a significant role in personalised learning within a low enrolment school, as it enables students to establish specific language learning objectives that align with their individual interests, aspirations, and areas for improvement.In the context of the current study, the aim of English language acquisition is to equip students with effective communication skills that they can utilise beyond their school years, whether in higher education or employment settings [14,15]. 
The pursuit of intrinsically motivating endeavours that are consistent with personal goals has been found to enhance individuals' happiness and satisfaction.By allowing students to focus their time and energy on activities that genuinely captivate their interest, personalised learning fosters a sense of ownership and accountability for their learning outcomes [32].Within the framework of personalised learning in a low enrolment school, self-directed learning emerges as a prominent aspect.Students are empowered to assume control over their English language learning experience, determining the pace and depth of their learning journey by exploring the accessible resources [16].This approach capitalises on students' innate desire for growth and self-improvement, as it facilitates engagement through appropriate teaching and learning methodologies while establishing meaningful connections to real-world contexts [7,26]. By incorporating autonomy and self-directed learning principles, personalised learning in a low enrolment school provides students with an environment that not only supports their individual needs and aspirations but also encourages their active participation and responsibility in shaping their language learning progress.This approach acknowledges the importance of intrinsic motivation and personal agency in fostering meaningful educational experiences, ultimately contributing to students' overall development and future success. Competence Competence plays a crucial role in personalised learning, as it enables individuals to effectively communicate with others and possess the necessary skills for accomplishing their objectives.Feeling competent entails a sense of mastery over one's environment.However, feelings of competence may diminish when the tasks at hand are overly challenging or when individuals receive unfavourable criticism.Conversely, when there is a perfect match between the requirements of a task and an individual's skill level, or when individuals receive encouraging feedback, their sense of competence is enhanced [24]. In the context of personalised learning in rural locations, the utilisation of available resources becomes crucial, particularly in cases where technological limitations may exist.Teachers can employ a variety of resources such as books, workbooks, and tailored printed materials to cater to the diverse needs of learners.Additionally, teachers can design projects, exercises, and activities that align with students' language proficiency levels, fostering active engagement in the learning process [28,29].Moreover, personalised learning in rural regions can be enriched by incorporating and valuing the local language and culture to enhance English language proficiency.Teachers can integrate local traditions, stories, and other cultural elements into English language sessions, providing context and fostering a sense of community.By incorporating local cultural heritage, personalised learning not only promotes language skill development but also preserves and values the rich cultural legacy of the community [13]. 
In summary, the development of competence in personalised learning is vital for effective communication and successful goal attainment.By considering the match between task requirements and individuals' skill levels, providing encouraging feedback, utilising available resources, and incorporating local language and culture, personalised learning approaches in rural areas can empower learners, enhance language proficiency, and foster a sense of cultural identity and belonging. Relatedness Relatedness encompasses the sense of closeness, connection, and belonging that individuals experience within a social group.Establishing and maintaining meaningful connections is crucial for the achievement of self-determination, as it provides individuals with access to help, support, and a sense of community.Feelings of relatedness are strengthened when individuals are treated with respect and care and when they are part of an inclusive environment.Conversely, relatedness is undermined by factors such as competition, cliques, and criticism from others. Cultural relevance plays a significant role in fostering relatedness within personalised learning approaches.Integrating activities and tasks that align with students' exposure and knowledge, such as incorporating cultural practices, traditional foods, and childhood stories, enhances students' sense of relatedness and creates a bridge between their cultural identity and English language learning [11].In the context of low enrolment schools, personalised learning allows for stronger teacher-student relationships.With smaller class sizes, educators can provide individualised attention, mentorship, and guidance to each student, fostering a sense of relatedness.This close connection between teachers and students creates a supportive English language learning environment where students feel supported and connected to their teachers [26]. A supportive learning environment is essential in personalised learning within low enrolment schools.This includes promoting respect, empathy, and understanding through daily routine activities, the surrounding environment, and nature.By cultivating a safe space for students to take risks, express themselves, and collaborate with others, personalised learning fosters a sense of relatedness among students, contributing to their English language learning journey [17,29].In rural areas, ensuring equal access to technology is crucial for personalised learning.Providing digital tools, such as computer labs, laptops, and reliable internet connectivity, requires a concentrated effort, including collaborations with government agencies, non-governmental organisations, and private corporations.Bridging the technology gap between urban and rural communities enables personalised learning to be supported effectively [16]. To successfully implement personalised learning, a shift toward a human-centred teaching approach is necessary.Teacher training programs should incorporate modules on human-centred pedagogy, equipping educators with the skills and knowledge to establish close relationships with their students and create a supportive classroom climate.Incorporating student choice and autonomy in lesson planning empowers students and increases their motivation and engagement in English learning.Continuous professional development and mentorship opportunities for teachers ensure the effective implementation of personalised learning [7]. 
Reforming the educational system to prioritise student well-being requires comprehensive changes.Curriculum modifications should strike a balance between academic achievement and students' mental, physical, and emotional well-being.Integrating well-being education into the curriculum equips children with the necessary tools to manage stress, build resilience, and foster meaningful connections.Mindfulness practices, such as meditation or breathing exercises, contribute to students' emotional well-being.Additionally, providing counselling services in schools creates a supportive environment where children can seek assistance when needed.Engaging educational officials, administrators, and key stakeholders in discussions and research on the importance of student well-being drives effective improvements within the education system, placing student well-being at the forefront [14]. Parental involvement plays a vital role in facilitating personalised learning.Schools can actively engage parents through frequent parent-teacher meetings and workshops, fostering collaboration and providing parents with a better understanding of their child's growth and individual needs.Providing parents with resources and information on how to enhance their children's English language development at home promotes the school-family partnership.Involving parents in curriculum and school policy decision-making processes ensures that their perspectives are heard, leading to a more inclusive and effective personalised learning environment [28]. In conclusion, personalised learning has the potential to enhance student engagement, motivation, and language proficiency while nurturing their overall well-being.By recognizing the challenges and implementing appropriate strategies, personalised learning can lead to more effective and impactful English language education.Creating a sense of relatedness, cultural relevance, supportive learning environments, equal access to technology, human-centred teaching approaches, a focus on student well-being, and active parental involvement are key components in achieving these positive outcomes. Implications The implementation of personalised learning to teach English in Malaysian low-enrolment schools can have significant implications for students, teachers, and the education system as a whole. Students This research has more significant implications for students.First and foremost, it enhanced students' motivation.The implementation of personalised learning, aligned with the principles of SDT, has the potential to greatly enhance students' intrinsic motivation to learn English.By incorporating elements of autonomy, competence, and relatedness into the learning process, students are more likely to actively engage, set meaningful goals, and take ownership of their own learning journey.This heightened motivation can result in increased interest and enjoyment in learning the English language. Furthermore, this study also helps to improve language proficiency among the students.Personalised learning approaches offer tailored instruction that specifically targets individual students' language needs and abilities.By catering to diverse learning styles and preferences, students can make progress at their own pace, leading to notable improvements in language proficiency across various language skills, including speaking, listening, reading, and writing.The emphasis on autonomy and self-regulation further fosters independent language learning, which can yield long-term benefits. 
Other than that, this study is also significant for culturally relevant and meaningful learning experiences.Personalised learning provides an opportunity to incorporate culturally relevant content and experiences, thereby making the English language learning process more meaningful for Malaysian students.By integrating local contexts, literature, and examples into the curriculum, students can establish stronger connections with the language and develop a greater sense of identity and cultural appreciation. On the other hand, this study also helps to increase the engagement and participation of the students in English lessons.Personalised learning empowers students to actively participate in their own education.By offering choices and promoting student agency, the learning environment becomes more engaging and interactive.This increased engagement leads to heightened participation, collaboration, and the development of critical thinking skills among students.As a result, a positive classroom culture is fostered, creating a conducive atmosphere for effective learning. Last but not least, this research is significant in addressing individual learner differences.In low-enrolment schools with diverse student populations, personalised learning allows teachers to effectively address the unique needs and individual differences of each student.By recognising and accommodating various learning styles, abilities, and background experiences, teachers can provide targeted support and scaffolding, enabling students to overcome challenges and achieve success in their English language learning journey. Teacher The implementation of personalised learning strategies rooted in SDT necessitates continuous professional development for teachers.Teachers need to develop new instructional approaches, assessment methods, and classroom management techniques aligned with personalised learning principles.By investing in teacher training and providing necessary resources, educators can enhance their pedagogical skills, technological literacy, and ability to differentiate instruction.This ultimately benefits both teachers and students. Education The successful implementation of personalised learning approaches in low-enrolment schools can have far-reaching implications for the Malaysian education system as a whole.By showcasing the effectiveness of student-centred, individualised learning, policymakers and educational stakeholders may be inspired to integrate personalised learning practices more widely.This could lead to systemic changes and improvements in educational outcomes throughout the country, promoting a more student-centric and effective education system. 
Conclusion

In conclusion, the implications of personalised learning in low-enrolment schools, grounded in the principles of Self-Determination Theory, are vast and promising. By fostering intrinsic motivation, enhancing language proficiency, providing culturally relevant learning experiences, increasing student engagement, addressing individual learner differences, supporting teacher professional development, and potentially driving systemic changes in the education system, personalised learning has the potential to revolutionise English language education in Malaysia. These implications highlight the importance of creating learning environments that empower students, value their individuality, and promote meaningful connections. By acknowledging and catering to students' autonomy, competence, and relatedness, personalised learning approaches can unlock students' potential, leading to improved academic achievement, heightened engagement, and overall well-being. To fully harness the benefits of personalised learning, it is crucial for policymakers, educators, and stakeholders to invest in teacher training, provide necessary resources, and foster a supportive ecosystem that embraces innovative instructional practices. By prioritising personalised learning in low-enrolment schools, Malaysia can pave the way for a more inclusive, student-centred education system.
2024-02-08T16:18:04.208Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "eea79b4741cd7f8fe566ba9d98bda804db5dce42", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2024/02/shsconf_access2024_01011.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5d2f9dc989e2c02eae6112650ba1552d4158e9d9", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
52068800
pes2o/s2orc
v3-fos-license
Cloning, expression, and characterization of a porcine pancreatic α-amylase in Pichia pastoris Pancreatic α-amylase (α-1, 4-glucan-4-glucanohydrolase, EC.3.2.1.1) plays a primary role in the intestinal digestion of feed starch and is often deficient in weanling pigs. The objective of this study was to clone, express, and characterize porcine pancreatic α-amylase (PPA). The full-length cDNA encoding the PPA was isolated from pig pancreas by RT-PCR and cloned into the pPICZαA vector. After the resultant pPICZαΑ-PPA plasmid was transferred into Pichia pastoris, Ni Sepharose affinity column was used to purify the over-expressed extracellular recombinant PPA protein (rePPA) that contains a His-tag to the C terminus and was characterized against the natural enzyme (α-amylase from porcine pancreas). The rePPA exhibited a molecular mass of approximately 58 kDa and showed optimal temperature (50 °C), optimal pH (7.5), Km (47.8 mg/mL), and Vmax (2,783 U/mg) similar to those of the natural enzyme. The recombinant enzyme was stable at 40 °C but lost 60% to 90% (P < 0.05) after exposure to heating at ≥50 °C for 30 min. The enzyme activity was little affected by Cu2+ or Fe3+, but might be inhibited (40% to 50%) by Zn2+ at concentrations in pig digesta. However, Ca2+ exhibited a dose-dependent stimulation of the enzyme activity. In conclusion, the present study successfully cloned the porcine pancreatic α-amylase gene and over-expressed the gene in P.pastoris as an extracellular, functional enzyme. The biochemical characterization of the over-produced enzyme depicts its potential and future improvement as an animal feed additive. Introduction As a family member of retaining carbohydrases, a-amylase catalyzes the hydrolysis of a-(1, 4) glycosidic linkages in starch and related malto-oligosaccharides (Janecek, 1994). With this unique function and a broad distribution in microbes, plants, and animals (Muralikrishna and Nirmala, 2005), a-amylase has many applications in food, textile, paper, and feed industries (Eliasson, 1996;Gupta et al., 2003). It represents about 25% to 33% of the world enzyme market and is second to only proteases (Nguyen et al., 2002). Porcine pancreatic a-amylase (PPA) is a secreted 55.4 kDa glycoprotein. It is an endo-amylase and has a high efficiency in catalyzing the hydrolysis of a-(1, 4)-glucosidic bonds in both amylose and amylopectin through multiple attacks toward the non-reducing end (Darnis et al., 1999;Prodanov et al., 1984;Robyt and French, 1970). Because it plays a crucial role in the intestinal starch digestion (Andersson et al., 2002), insufficient production of PPA in the early life of weaning pigs can be a significant stress that causes sudden pause or reduction of growth rate, hence leads to major economic loss (Hedemann and Jensen, 2004). Thus, supplementing weanling piglets with amylolytic cultures of Lactobacillus acidophilus improved daily gain and feed use efficiency (Rincker et al., 2000). Likewise, supplementing amylase, along with xylanase, to a raw pea diet (Owusu-Asiedu et al., 2002) and supplementing amylase, along with glucanase and glucoamylase, to a barley-based diet (Inborr and Ogle, 1988) improved feed conversion ratio and (or) reduced incidence of diarrhea in newly-weaned pigs. These results indicate that amylase is limiting in the young pig and therefore, there may be an application for exogenous enzyme. However, current commercial PPA products are mainly isolated from animal pancreatic tissues. 
The high cost associated with the extraction and purification of PPA, the limited supply of pig pancreatic tissues, and the possibility of microbial contamination have prevented the application of PPA on a large scale in the animal feed industry. Because of these factors, there is a need to develop an efficient heterologous expression system for the economical, convenient, and safe production of large amounts of PPA as an affordable feed additive. Previous attempts have been made to produce amylases in heterologous systems; however, the yields were unsatisfactory (Kato et al., 2001; Li et al., 2011). Because the methylotrophic yeast Pichia pastoris has recently been used to manufacture feed enzymes such as phytase (Han and Lei, 1999), the objective of the present study was to determine whether PPA could be effectively expressed in this system and how the over-produced recombinant enzyme compared with the endogenous enzyme isolated from pig pancreas. After the PPA gene was successfully cloned and expressed as a recombinant PPA (rePPA) in the Pichia pastoris system, we found that the enzymatic properties of rePPA and its responses to intestinal metals were similar to those of natural PPA. Our findings suggest a feasible approach for producing PPA for the animal feed industry.

Strains, plasmids, and reagents

Escherichia coli TOP10 (Invitrogen, Beijing, China) was used for plasmid amplification. The plasmid pPICZαA (Invitrogen, Beijing, China) was used for the production of His-tagged PPA proteins, and the P. pastoris X-33 strain (Invitrogen, Beijing, China) was used as the protein expression host (Zhao et al., 2014). The E. coli TOP10 strain was grown in LB medium at 37°C, and the P. pastoris X-33 strain was grown in yeast extract-peptone-dextrose medium at 28 to 30°C (Zhao et al., 2014). The AMV reverse transcriptase, T4 DNA ligase, Taq DNA polymerase, pMD18-T vector, restriction enzymes (XbaI, KpnI, SacI), DL2000 DNA marker, and protein marker were purchased from TaKaRa (Dalian, China). The Plasmid Mini-prep Kit, Gel Extraction Kit, and Cycle-pure Kit were purchased from OMEGA (Chengdu, China). Ni-NTA His Binding Resin (GE Healthcare, Piscataway, NJ, USA) was used for the purification of recombinant PPA (Zhao et al., 2014). Other chemicals used in this experiment were of analytical grade and commercially available.

Cloning of the PPA gene and construction of the expression plasmid

Total RNA was isolated from porcine pancreas (Sus scrofa, Duroc × Large White × Landrace) using TRIzol reagent (Invitrogen, Beijing, China). The cDNA was generated by RT-PCR using the AMV reverse transcriptase. The forward primer was 5'-ATGAAGTTGTTTCTGCTGCTTTC-3' and the reverse primer was 5'-CAATTTGGATTCAGCATGAATTGCA-3'. After the amplified DNA fragment was purified using the Gel Extraction Kit, it was ligated into the pMD18-T vector and transformed into the E. coli TOP10 strain by calcium chloride activation (Dagert and Ehrlich, 1979). Positive colonies were identified by DNA sequencing (Invitrogen, Shanghai, China). The verified pMD18-T-PPA was then used as a template to amplify the cDNA fragment encoding the mature PPA protein (without the signal peptide) by PCR. The forward primer was 5'-GATCGGTACCCAGTATGCCCCACAAACC-3' (KpnI site underlined) and the reverse primer was 5'-TTTGTTCTAGACTTAATTTGGATTCAGCATG-3' (XbaI site underlined). The PCR product was purified, digested with XbaI and KpnI, and ligated into the expression vector pPICZαA.
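As a quick sanity check on the directional-cloning design described above, the following minimal Python sketch (not part of the original study) scans the two second-round primers quoted in the text for the standard KpnI (GGTACC) and XbaI (TCTAGA) recognition sequences. The primer strings are copied from the passage; the script itself is purely illustrative.

```python
# Minimal sketch: confirm that the cloning primers contain the expected
# restriction-enzyme recognition sequences (KpnI: GGTACC, XbaI: TCTAGA).
SITES = {"KpnI": "GGTACC", "XbaI": "TCTAGA"}

PRIMERS = {
    "forward": "GATCGGTACCCAGTATGCCCCACAAACC",    # from the text
    "reverse": "TTTGTTCTAGACTTAATTTGGATTCAGCATG",  # from the text
}

def find_sites(primer: str) -> dict:
    """Return the 0-based position of each recognition site found in the primer."""
    return {name: primer.find(seq) for name, seq in SITES.items() if seq in primer}

for label, seq in PRIMERS.items():
    print(label, find_sites(seq))
# forward {'KpnI': 4}
# reverse {'XbaI': 5}
```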
The pPICZαA-PPA plasmid was transformed into E. coli TOP10 (Dagert and Ehrlich, 1979), and positive transformants were selected by zeocin (25 µg/mL) resistance and restriction mapping (Invitrogen, USA), along with final verification by sequencing.

Transformation and expression of PPA in P. pastoris

The recombinant plasmid pPICZαA-PPA was transformed into P. pastoris X-33 by electroporation (Kim et al., 2006). Single colonies of the transformants were selected for expression according to the protocol of the EasySelect Pichia Expression Kit (Invitrogen, Beijing, China). After 3 days of methanol induction, total RNA was extracted from the cultured cells to screen for high-level expression transformants using real-time quantitative PCR analysis (Zhao et al., 2017). The expressed extracellular PPA protein samples were separated by 10% SDS-polyacrylamide gel electrophoresis (SDS-PAGE) and visualized by staining with Coomassie Brilliant Blue R-250 (Bio-Rad, Benicia, CA, USA). P. pastoris transformants containing the expression vector pPICZαA without the PPA gene insert were used as the negative control.

Purification of recombinant porcine pancreatic α-amylase

After 72 h of induction with methanol, cells were removed by centrifuging the fermentation broth at 14,000 × g at 4°C for 10 min (Zhao et al., 2014). The supernatant was then adjusted to 0.5 mol/L NaCl and pH 7.4, followed by filtration through a 0.45 µm filter. The supernatant was applied to a Ni Sepharose (GE Healthcare, Piscataway, NJ, USA) affinity column (Bio-Rad, Richmond, CA, USA) pre-equilibrated with binding buffer (20 mmol/L NaH2PO4, pH 7.4, 500 mmol/L NaCl, 20 mmol/L imidazole). After the column was washed with binding buffer to remove unbound proteins, the PPA was eluted with elution buffer (20 mmol/L NaH2PO4, pH 7.4, 500 mmol/L NaCl, 500 mmol/L imidazole). The harvested protein was stored at -20°C for subsequent analysis. Protein concentration was determined by the Bradford method (Bradford, 1976).

Characterization of rePPA and comparison with the native enzyme

Activities of rePPA were measured as described by Bogdanov (2002), using 100 mL of 2.0% soluble starch (Kelong, Chengdu, China) as the substrate (Anitha Gopala and Muralikrishnaa, 2009). One unit of α-amylase activity was defined as the amount of enzyme needed to hydrolyze 1.0 mg of starch per minute at pH 7.5 and 37°C. The pH-activity profile of rePPA was assayed at 37°C using acetate buffer (pH 3.0 to 5.0), phosphate buffer (pH 5.5 to 8.0), and Tris-HCl buffer (pH 8.5 to 9.5). The optimal temperature of rePPA was determined using the phosphate buffer (pH 7.5) from 20 to 80°C. The thermal stability of rePPA was determined from the residual activity after the enzyme was incubated at 40, 50 and 55°C for 30 min. The kinetic constants Km and Vmax were determined at pH 7.5 and 37°C using the Lineweaver-Burk method (Lineweaver and Burk, 1934). To test the functional response of rePPA under intestinal conditions, the purified enzyme was incubated with different concentrations of metal ions (Zn2+, Cu2+, Fe3+, Ca2+; added as chlorides) in the phosphate buffer at 37°C for 10 min, and changes in activity relative to the untreated control were measured.

Data analysis

Data were analyzed by SAS 8.2 (SAS Institute, Cary, NC, USA), and a simple t-test was used to compare mean differences. Significance was set at P < 0.05 (n = 3).
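To make the Lineweaver-Burk step described above concrete, the short Python sketch below fits 1/V against 1/S by linear regression and recovers Km and Vmax from the slope and intercept. The substrate concentrations and velocities are synthetic placeholders chosen only to be roughly consistent with the kinetic constants reported later in the text; they are not the measured data from this study.

```python
import numpy as np

# Illustrative only: hypothetical substrate concentrations (mg/mL soluble starch)
# and initial velocities (U/mg); not the measured data from this study.
S = np.array([5.0, 10.0, 20.0, 40.0, 80.0])          # substrate, mg/mL
V = np.array([260.0, 480.0, 820.0, 1270.0, 1740.0])  # velocity, U/mg

# Lineweaver-Burk linearization: 1/V = (Km/Vmax)*(1/S) + 1/Vmax
inv_S, inv_V = 1.0 / S, 1.0 / V
slope, intercept = np.polyfit(inv_S, inv_V, 1)

Vmax = 1.0 / intercept  # y-intercept gives 1/Vmax
Km = slope * Vmax       # slope equals Km/Vmax, so Km = slope * Vmax

print(f"Km   ~ {Km:.1f} mg/mL")
print(f"Vmax ~ {Vmax:.0f} U/mg")
```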
Cloning, expression, and purification of the PPA gene

A 1,533 bp cDNA fragment of the coding sequence was isolated from porcine pancreas and cloned into pMD18-T by RT-PCR (Fig. 1A). The cloned cDNA showed 99.3% DNA and 99.8% amino acid sequence homology to the porcine pancreatic α-amylase listed by NCBI (GenBank: AF064742.1, Appendix Fig. 1). After the cloned expression vector pPICZαA-PPA was digested with KpnI and XbaI, a 1,500 bp target gene band and a 3,600 bp expression vector band were visible on the 1% agarose gel (Fig. 1B). After the P. pastoris X-33 transformant had been induced with 0.5% methanol for 72 h, the target protein was purified by Ni Sepharose affinity chromatography. The purified rePPA showed a single band on a 12% SDS-PAGE gel with a molecular size of approximately 58 kDa (Fig. 2, lane 4). The yield of the recombinant protein in the medium supernatant was 65 mg/L after 72 h of fermentation.

Characterization of the rePPA relative to the native enzyme

As shown in Fig. 3, rePPA shared similar pH-activity and temperature-activity profiles with the natural form of the PPA enzyme. The optimal pH of the rePPA was 7.5; more than 50% of the enzymatic activity was maintained between pH 5.5 and 9.5, with a reduction (P < 0.05) to less than 30% at pH lower than 5 (Fig. 3A). The optimal temperature of the rePPA was 50°C, with 60% to 94% of activity retained at 30 to 55°C. The activity decreased sharply (P < 0.05) at temperatures over 55°C (Fig. 3B). Incubating the purified rePPA at 40°C for 30 min had little impact on its activity, but treating the enzyme at 50 or 55°C resulted in 60% or 90% activity loss, respectively (P < 0.05) (Fig. 3C). The purified rePPA showed a Km for soluble starch of 47.82 mg/mL and a Vmax of 2,783 U/mg protein, whereas the native form of PPA had a Km of 40.45 mg/mL and a Vmax of 2.3 U/mg protein (non-purified crude enzyme) (Fig. 4). As shown in Fig. 5, the activity of rePPA demonstrated a significant dose-dependent increase upon incubation with Ca2+. In contrast, the enzymatic activity showed a dose-dependent decrease (P < 0.05) upon incubation with Zn2+, Cu2+, or Fe3+.

Discussion

In the current study, we successfully cloned the porcine pancreatic α-amylase gene and expressed it in P. pastoris. The cloned PPA cDNA displayed 99.3% DNA sequence homology to the one reported by Darnis et al. (1999), with only one amino acid difference between the encoded proteins. The difference between the 2 clones might be due to breed variation (Duroc × Large White × Landrace vs. Large White). Notably, our approach was effective in that the α-factor signal peptide in the yeast expression vector guided secretion of the recombinant rePPA into the culture broth. This can simplify otherwise complicated purification procedures and lends itself to direct industrial application of the amylase (Romanos, 1995). The enzyme yield (65 mg/L) from the methanol-inducible Pichia expression system was higher than that of Rhizopus oryzae α-amylase (20 mg/L) produced in Kluyveromyces lactis (Li et al., 2011), but lower than that of mouse salivary α-amylase (240 mg/L) expressed in P. pastoris (Kato et al., 2001). Given that intracellular and extracellular production of heterologous proteins in the P. pastoris expression system has reached up to 3 and 12 g/L, respectively (Barr et al., 1992; Clare et al., 1991), the rePPA yield obtained in the present study was relatively low.
Therefore, further research will be required to maximize production of the rePPA. One possible reason for the modest yield is the limited or rare usage in P. pastoris of several codons within the PPA gene (Qiao et al., 2010; Teng et al., 2007); optimizing codon usage may improve rePPA production by the yeast host. In addition, increasing the plasmid copy number could increase expression of the recombinant protein (Romanos, 1995). Furthermore, optimizing fermentation conditions such as temperature, pH, and methanol concentration can effectively lead to better protein production (Muralikrishna and Nirmala, 2005). In the present study, the enzymatic properties of the rePPA over-expressed in P. pastoris were similar to those of the natural form, consistent with the findings of previous studies. Specifically, the rePPA and the natural form of PPA (Sigma) had Km values for soluble starch of 47.8 and 40.5 mg/mL, respectively. The estimated Vmax (2,783 U/mg), optimal pH (7.5), and optimal temperature (50°C) of the rePPA were similar to those identified for the natural enzyme by Anitha Gopala and Muralikrishnaa (2009) and Wakim et al. (1969). These similarities illustrate that the enzymatic properties were not altered by heterologous expression of the rePPA in the Pichia yeast. Practically, the recombinant amylase could be supplemented into the diet of young pigs as a replacement for, or enhancement of, the endogenous enzyme within the gastrointestinal tract. It is also notable that the rePPA was in fact a fusion protein with 13 additional amino acid residues at the N-terminus and 21 His-tag-associated amino acid residues at the C-terminus. Apparently, these additional residues had little effect on the enzymatic activity or catalytic function. This flexibility may open the door to further genetic or molecular engineering to improve the fermentation yield or to modify non-catalysis-related properties. Furthermore, the recombinant rePPA was tested for its heat tolerance and its response to metal ions, the two measurements most relevant to the animal feed industry.

Fig. 3. Effect of pH (A) and temperature (B) on rePPA activity. The thermostability of rePPA at different temperatures was determined by preincubating the enzyme at these temperatures in the absence of substrate for 5, 10, 15, 20, 25, and 30 min before measuring its activity (C). The rePPA activity prior to the preincubations at the different temperatures was taken as 100%. An asterisk indicates a significant difference (P < 0.05) between rePPA and PPA at each point of pH or temperature (n = 3). Different letters indicate a significant difference (P < 0.05) between the different preincubation temperatures at each time point (n = 3). PPA = porcine pancreatic α-amylase; rePPA = recombinant porcine pancreatic α-amylase.

Fig. 4. The Km value of rePPA (A) and native PPA (B) was determined by the Lineweaver-Burk method. R^2 denotes the coefficient of determination of the regression of 1/V on 1/S. The intercept of the fitted line with the x-axis represents -1/Km, while the intercept with the y-axis gives 1/Vmax. PPA = porcine pancreatic α-amylase; rePPA = recombinant porcine pancreatic α-amylase; Km = Michaelis constant, the substrate concentration at which the reaction velocity is 50% of Vmax; V = reaction velocity; Vmax = maximal reaction velocity; S = substrate concentration.
Because a large proportion of feed for monogastric animals (e.g., pigs) is used in pelleted form, exogenous enzymes must withstand the heat and steam of the pelleting process (Svihus and Zimonja, 2011). Although the purified rePPA was reasonably stable at 40°C and its optimal temperature was 50°C, most of its activity was lost after exposure to 50 to 55°C for 30 min. For that reason, if this enzyme is to be used at a large scale in the animal feed industry, its thermostability must be improved by approaches such as protein engineering (Zhang and Lei, 2008) or chemical coating (Chen et al., 2001). In the digesta of pigs, free ion concentrations (µmol/L) have been reported as follows: 5.5 to 31.6 for Cu, 3 to 29 for Fe, 44 to 132 for Zn, and 1,100 to 5,400 for Ca (Dintzis et al., 1995). According to the activity response curves of rePPA to the different ions (Fig. 5), the digesta concentrations of Cu or Fe would have only a minor inhibitory effect on rePPA activity, whereas the digesta concentrations of Ca would presumably enhance the enzymatic activity. This has been attributed to PPA binding Ca at its functional site (Buisson et al., 1987; Steer and Levitzki, 1973). However, the available Zn concentration in the digesta falls within the range that may inhibit rePPA activity by 40% to 50%. This inhibition by Zn may be attributed to Zn binding to the catalytic residues or displacing Ca2+ from the substrate-binding site of the enzyme (Anitha Gopala and Muralikrishnaa, 2009). Accordingly, it is important to improve the enzyme's resistance to inhibition by Zn and (or) to regulate dietary Zn concentration for efficient supplementation of the rePPA.

Conclusion

The present study has successfully cloned the porcine pancreatic α-amylase gene and demonstrated the feasibility of over-expressing it as an extracellular, functional enzyme in P. pastoris. Our biochemical characterization of the over-produced enzyme underscores not only its potential suitability but also the improvements needed for its application in animal feed.
2018-08-25T21:43:14.184Z
2018-01-02T00:00:00.000
{ "year": 2018, "sha1": "a965b1dcff274fef7225769196bfff8a0502796e", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.aninu.2017.11.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a965b1dcff274fef7225769196bfff8a0502796e", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
251349619
pes2o/s2orc
v3-fos-license
Hydroxygenkwanin suppresses proliferation, invasion and migration of osteosarcoma cells via the miR-320a/SOX9 axis Hydroxygenkwanin (HGK) has an anticancer effect in a variety of tumors, but its role in osteosarcoma has not been explored. The purpose of the present study was to investigate the therapeutic effect of HGK on osteosarcoma and its specific molecular mechanism. Osteosarcoma cells (MG-63 and U2OS) treated with various concentrations of HGK were assigned to the treatment group. MTT, clone formation, wound healing and Transwell assays were performed to assess the viability, proliferation, migration, and invasion of MG-63 and U2OS cells. RT-qPCR was conducted to quantify the expression levels of of microRNA (miR)-320a and SRY-box transcription factor 9 (SOX9) in MG-63 and U2OS cells. The binding sites of miR-320a and SOX9 were predicted by starBase database, and verified using the dual-luciferase reporter assay. The expression levels of SOX9 and EMT-related proteins (N-cadherin, E-cadherin and vimentin) were detected by western blot analysis. HGK inhibited cell proliferation, migration, invasion, but promoted the expression of miR-320a in MG-63 and U2OS cells. Downregulation of miR-320a reversed the effects of HGK on proliferation, migration and invasion of MG-63 and U2OS cells, while upregulation of miR-320a had the opposite effect. HGK inhibited the expression of SOX9 by promoting the expression of miR-320a. Upregulation of SOX9 could partially reverse miR-320a-induced migration and invasion of MG-63 and U2OS cells. In addition, upregulation of miR-320a promoted E-cadherin expression and inhibited the expression of N-cadherin and vimentin, and the effect of miR-320a was also reversed by SOX9. In conclusion, HGK inhibited proliferation, migration and invasion of MG-63 and U2OS cells through the miR-320a/SOX9 axis. Introduction osteosarcoma is a malignant tumor originating from mesenchymal tissues (1). it is one of the most common primary tumors, especially in adolescents aged 10 to 20 years (2). osteosarcoma is highly malignant, with pulmonary metastasis occurring in approximately 85-90% of patients with osteosarcoma (3,4). In recent years, with the in-depth research on the pathogenesis and improvement of osteosarcoma therapies, the 5-year survival rate of localized osteosarcoma has been increased to 60-70% (5), but the 5-year survival rate of metastatic osteosarcoma is only 20-30% (6). In recent years, with the development of tissue engineering, increasing attention has been paid to the development of bone substitutes with custom-made microarchitecture and physicomechanical properties comparable to native bone, such as bioactive three-dimensional (3D) porous polymer or ceramic scaffolds or its conjugated forms, which can mimic the host tissue to facilitate the transferring of nutrients to ensure defective bone restoration (7)(8)(9). Moreover, 3D-printed multifunctional polyetheretherketone bone scaffold was widely applied in the multimodal treatment of osteosarcoma and osteomyelitis (10). clinical results reveal that the efficacy of current chemotherapy drugs in patients with metastatic osteosarcoma is insufficient (11). Thus, the research and development of new therapeutic drugs are deemed significant for the treatment of osteosarcoma at this stage. it has become a top priority to clarify the mechanism of the occurrence and development of osteosarcoma and find effective drug treatments. Daphne genkwa Sieb. et Zucc. 
(Daphne genkwa), a traditional Chinese herb, was first recorded in Shennong's Classic Materia Medica (12). It is widely grown in China, Japan and other countries (13). Daphne genkwa has long been used as an anti-inflammatory, analgesic and sedative drug for edema and asthma (14,15). Hydroxygenkwanin (HGK) is a flavonoid compound extracted from the flower buds of Daphne genkwa and is considered one of the active ingredients in Daphne genkwa flowers (16). The pharmacological effects of HGK have attracted the attention of researchers, and HGK is now widely applied in the treatment of various tumors (17,18). Unfortunately, the study of HGK in osteosarcoma has not received much attention. MicroRNAs (miRNAs or miRs; endogenous non-coding small RNAs) can modify gene expression in eukaryotic cells at the post-transcriptional level (19). Several studies have revealed that ~3% of human genes encode miRNAs, and ~60% of human genes are regulated by miRNAs (20,21). In recent years, the importance of miRNAs in human diseases has been highlighted, and miRNAs have therefore become a hotspot in disease research. Similarly, in osteosarcoma, miRNAs act as regulators and influence disease progression (22). Among them, upregulation of miR-320a has been revealed to inhibit the proliferation and migration of osteosarcoma (23,24). SRY-box transcription factor 9 (SOX9) is a key transcription factor in chondrocytes (25). Current studies have revealed that SOX9 plays a critical role in the migration and invasion of osteosarcoma cells (26)(27)(28)(29). In addition, it is worth noting that, from a bioinformatics perspective, miR-320a appears to be associated with SOX9. A study on liver cancer from Chou et al revealed that HGK could inhibit the metastasis and invasion of hepatocellular carcinoma by promoting the expression of miR-320a (11). Therefore, it was surmised that HGK may regulate the proliferation, invasion and migration of osteosarcoma cells through the miR-320a/SOX9 axis. In the present study, the effect of HGK on osteosarcoma was assessed by treating osteosarcoma cell lines with HGK. The specific molecular mechanism of HGK in the treatment of osteosarcoma was further elucidated by exploring the regulatory effect of HGK on the miR-320a/SOX9 axis. The present study lays a theoretical foundation for the treatment of osteosarcoma by HGK, and also provides a potential targeted drug for the treatment of osteosarcoma.

Hydroxygenkwanin (HGK; purity >99%) was purchased from ChemFaces. Various concentrations (0, 10, 20, 40 and 60 µmol/l) of HGK were applied to hFOB 1.19 cells to test the safety of the drug. In addition, MG-63 and U2OS cells were treated with various concentrations of HGK (10, 20, 40 and 60 µmol/l) for 48 h and assigned as the treatment groups. Moreover, MG-63 and U2OS cells as well as hFOB 1.19 cells received 0 µmol/l HGK and were assigned as the control groups. Following brief gentle shaking, the optical density (OD) value of each well was measured at a wavelength of 490 nm with a microplate reader (Molecular Devices, LLC). Relative cell viability was calculated according to the following formula: relative cell viability (%) = OD(test)/OD(control) x 100%.
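As a worked example of the viability formula above, the short Python sketch below converts OD490 readings into relative viability percentages. The OD values and doses are hypothetical placeholders for illustration only, not measurements from this study.

```python
# Sketch of the stated formula: relative viability (%) = OD(test) / OD(control) x 100.
# OD490 values below are hypothetical placeholders, not data from this study.
def relative_viability(od_test: float, od_control: float) -> float:
    return od_test / od_control * 100.0

od_control = 1.25                                       # untreated (0 umol/L HGK)
od_treated = {10: 1.05, 20: 0.84, 40: 0.58, 60: 0.37}   # umol/L HGK -> OD490

for dose, od in od_treated.items():
    print(f"{dose} umol/L HGK: {relative_viability(od, od_control):.1f}% viability")
```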
Colony formation assay. MG-63 and U2OS cells were seeded in a 6-well plate at a density of 100 cells/well. Next, the cells in each well were treated with various concentrations of HGK (0, 10, 20 and 40 µmol/l), and the medium was changed every 2 days. After 14 days, the cells were fixed with 4% paraformaldehyde (cat. no. P0099; Beyotime Institute of Biotechnology) for 30 min at room temperature. Subsequently, the cells were stained with 0.1% crystal violet (cat. no. C0121; Beyotime Institute of Biotechnology) for 15 min at room temperature. Finally, cell clones were captured with a camera (EOS 3000D; Canon, Inc.), and clones containing >50 cells were counted.

Wound healing assay. Treated or untreated MG-63 and U2OS cells were seeded in a 6-well plate at a density of 2x10^5 cells/well. The wound healing assay was carried out when the cell confluence reached 90%. The specific procedure was as follows: cells were scratched vertically with a pipette tip, and the scratched cells were washed with phosphate-buffered saline (PBS; cat. no. C0221A; Beyotime Institute of Biotechnology). Subsequently, the cells were cultured in serum-free medium containing HGK (10, 20 and 40 µmol/l). At 0 and 48 h, images of the cells were captured with an inverted microscope at x100 magnification, and the width of the scratch was measured. The relative migration rate was calculated as follows: relative migration rate = (0 h scratch width - 48 h scratch width)/0 h scratch width x 100%.

Transwell assay. The culture plate used in the Transwell assay was an 8-µm pore Transwell plate (product no. 3428; Corning, Inc.). Prior to the assay, the upper chamber of the Transwell plate was coated with 1:4 diluted BD Matrigel Matrix (cat. no. 356234; BD Biosciences) at 37˚C for 4 h. Treated or untreated MG-63 and U2OS cells (2x10^4) were resuspended in serum-free medium and transferred to the upper chamber of the Transwell plate, while the corresponding dose of drug (10, 20 or 40 µmol/l HGK) was added. Medium containing 20% FBS was then placed into the lower chamber of the Transwell plate. The Transwell plate was cultured in an incubator for 48 h at 37˚C. After the upper chamber was removed, the invading cells were fixed with 4% paraformaldehyde for 15 min at 4˚C and then stained with 0.1% crystal violet staining solution at room temperature for 30 min. Following staining, the invaded cells were counted under an inverted microscope at x250 magnification and the results were recorded.

Reverse transcription-quantitative polymerase chain reaction (RT-qPCR). Total RNA from hFOB 1.19, MG-63 and U2OS cells was isolated using TRIzol reagent (cat. no. 15596-018; Invitrogen; Thermo Fisher Scientific, Inc.). The total RNA was subjected to reverse transcription (PrimeScript™ RT reagent kit; cat. no. RR037A; Takara Bio, Inc.), and the product was diluted three-fold with double-distilled water. RT-qPCR was then applied to detect the expression levels of SOX9 and miR-320a using the TB Green® Premix Ex Taq™ II kit (cat. no. RR820A) and the Mir-X miRNA qRT-PCR TB Green Kit (cat. no. 638314; both from Takara Bio, Inc.), respectively, on an ABI StepOnePlus™ system (Applied Biosystems; Thermo Fisher Scientific, Inc.). The primers used in this experiment were provided by Sangon Biotech Co., Ltd. and are listed in Table I. GAPDH was the internal reference for SOX9, and U6 was the internal reference for miR-320a. The reaction system was as follows: 2 µl cDNA, 10 µl SYBR, 0.8 µl primers and 6.4 µl double-distilled water. The thermocycling conditions were as follows: 95˚C for 30 sec, followed by 40 cycles at 95˚C for 3 sec and 60˚C for 30 sec. The RT-qPCR data were analyzed using the 2^-ΔΔCq method (30).
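To illustrate the 2^-ΔΔCq analysis mentioned above, the Python sketch below computes a relative expression value from Cq readings, using U6 as the reference for miR-320a as stated in the text. All numbers are invented placeholders, not data from this study.

```python
# Sketch of the 2^-ddCq calculation used for the RT-qPCR analysis.
# Cq values below are hypothetical placeholders, not data from this study.
def ddcq_fold_change(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control):
    dcq_treated = cq_target_treated - cq_ref_treated  # normalize to reference gene
    dcq_control = cq_target_control - cq_ref_control
    ddcq = dcq_treated - dcq_control
    return 2.0 ** (-ddcq)

# Example: miR-320a (reference U6) in HGK-treated vs. untreated MG-63 cells
fold = ddcq_fold_change(
    cq_target_treated=27.1, cq_ref_treated=19.8,   # treated: miR-320a, U6
    cq_target_control=28.6, cq_ref_control=19.9,   # control: miR-320a, U6
)
print(f"Relative miR-320a expression (treated vs. control): {fold:.2f}-fold")
```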
Xenograft assays. A total of 10 male, 6-week-old BALB/c nude mice (Weitong Lihua Laboratory Animal Technology Co., Ltd.) were used to study the antitumor activity of HGK against osteosarcoma. The BALB/c nude mice were kept at 24-26˚C and in 40-70% humidity, with a 12-h light/dark cycle and free access to food and water. BALB/c nude mice were divided into control and HGK groups (n=5, each group), and the animals were inoculated subcutaneously in the right flank with MG-63 cells (3x10^6) in 100 µl. Drug treatment was started on day 10, where the nude mice in the HGK group were intraperitoneally injected with 100 µl HGK (1.0 mg/kg body weight) every two days, and the control group was treated with an equal volume of PBS. The tumor volume was measured according to the following formula: tumor volume = length x width^2/2. After 4 weeks, these nude mice were sacrificed by cervical dislocation following anesthesia with an intraperitoneal injection of sodium pentobarbital (60 mg/kg), and then the tumors were photographed, and the weight was measured. The maximum allowed tumor size did not exceed 1,000 mm^3. The study was approved by the Animal Ethics Committee of Nanfang Hospital (Guangzhou, China; approval no. NFYY-2021-121).

Statistical analysis. GraphPad Prism 8.0 (GraphPad Software, Inc.) was employed for statistical analysis. Each experiment was repeated three times. Data are presented as the mean ± standard deviation. Unpaired t-test was utilized for comparison between two groups, while one-way analysis of variance (ANOVA) and Tukey's post hoc test were used for comparison among multiple groups. P<0.05 was considered to indicate a statistically significant difference.

HGK suppresses the proliferation, migration and invasion of osteosarcoma cells. The chemical structure formula of HGK is presented in Fig. 1A. In order to test the safety of HGK, hFOB1.19 (an immortalized human fetal osteoblastic cell line) was first treated with various concentrations of HGK (0, 10, 20, 40 and 60 µmol/l), and the results revealed that HGK had no significant effect on hFOB1.19 cells (P>0.05; Fig. 1B). This suggested that HGK within a concentration of 60 µmol/l exerted no toxic effect on normal osteoblasts. Similarly, osteosarcoma cells MG-63 and U2OS were treated with various concentrations of HGK as well. As demonstrated in Fig. 1C and D, HGK decreased the viability of MG-63 and U2OS cells in a concentration-dependent manner as compared with the control group (P<0.05 and P<0.001). From the results, it was determined that the half-maximal lethal concentration of HGK was ≤40 µmol/l, thus, the concentrations of 10, 20 and 40 µmol/l were selected for the subsequent experiments. The effects of HGK on the proliferation, migration and invasion of MG-63 and U2OS cells were then also evaluated. Colony formation assay revealed that the cell proliferation in the HGK groups was reduced compared with the control group (Fig. 1E-H; P<0.05, P<0.01 and P<0.001). In addition, the wound healing assay demonstrated that the migration of cells in the HGK groups was diminished compared with the control group (Fig. 2A-D; P<0.01 and P<0.001). Moreover, Transwell assays revealed that the invasion of cells in the HGK groups was decreased compared with the control group (Fig. 2E-H; P<0.01 and P<0.001). All these results indicated that HGK inhibited cell proliferation, migration and invasion in a concentration-dependent manner.
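For the group comparisons described in the statistical analysis section above, a minimal SciPy sketch (with hypothetical viability values, not the study's data) might look like the following; Tukey's post hoc test would then be applied to the ANOVA-significant comparisons.

```python
# Illustrative sketch: an unpaired t-test for two groups and one-way ANOVA across
# several HGK concentrations, mirroring the analysis plan described above.
from scipy import stats

viability = {               # % relative viability per HGK dose, hypothetical triplicates
    0:  [100.0, 98.5, 101.2],
    10: [88.4, 90.1, 87.6],
    20: [72.3, 70.8, 74.0],
    40: [51.6, 49.9, 53.2],
}

t_stat, p_two_groups = stats.ttest_ind(viability[0], viability[40])   # two groups
f_stat, p_anova = stats.f_oneway(*viability.values())                 # all doses at once
print(f"0 vs. 40 umol/l: p = {p_two_groups:.4f};  ANOVA across doses: p = {p_anova:.4f}")
```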
HGK inhibits the proliferation, migration and invasion of osteosarcoma cells by promoting the expression of miR-320a. The expression of miR-320a in osteosarcoma cells was lower than that in osteoblasts (Fig. 3A; P<0.001). Further detection revealed that HGK could promote the expression of miR-320a in MG-63 and U2OS cells in a concentration-dependent manner (Fig. 3B and C; P<0.001). To verify the effects of HGK and miR-320a on osteosarcoma cells, a miR-320a inhibitor was used to decrease the expression of miR-320a. The transfection efficiency of miR-320a inhibitor is presented in Fig. 3D and E (P<0.001). Moreover, it was also determined that miR-320a inhibitor could reverse the promotive effect of HGK on miR-320a expression (Fig. 3F and G; P<0.01 and P<0.001). Next, the effects of HGK and miR-320a inhibitor on proliferation, migration and invasion of MG-63 and U2OS cells were assessed. The results of the colony formation assays suggested that the proliferation of cells in the miR-320a inhibitor group was increased compared with the control group (Fig. 3H-J; P<0.001). Moreover, miR-320a inhibitor could reverse the inhibitory effect of HGK on cell proliferation (Fig. 3H-J; P<0.001). Wound healing assay revealed that cell migration in the miR-320a inhibitor group was increased compared with the control group (Fig. 4A-D; P<0.001). Similarly, miR-320a inhibitor could reverse the inhibitory effect of HGK on the cell migration (Fig. 4A-D; P<0.001). The Transwell assay demonstrated that cell invasion in the miR-320a inhibitor group was promoted compared with the control group (Fig. 4E-H; P<0.001). In addition, miR-320a inhibitor could also reverse the effect of HGK on cell invasion (Fig. 4E-H; P<0.001).

HGK inhibits the expression of SOX9 by promoting miR-320a expression. The targeted binding sites of miR-320a and SOX9 were first predicted by starBase v2.0 (Fig. 5A). Subsequently, a dual-luciferase reporter assay revealed that the fluorescence intensity of cells in the SOX9-WT + miR-320a mimic group was significantly lower than that in the mimic control group (Fig. 5B; P<0.01), while the fluorescence intensity in the SOX9-MUT + miR-320a mimic group displayed no significant change compared to that in the mimic control group (Fig. 5B). This indicated that miR-320a indeed targeted SOX9. Further study demonstrated that the expression of SOX9 was depleted in the HGK group compared with the control group (Fig. 5C-H; P<0.05 and P<0.01). However, miR-320a inhibitor could abolish the inhibitory effect of HGK on SOX9 expression (Fig. 5C-H; P<0.001).

MiR-320a inhibits the migration, invasion and epithelial-mesenchymal transition (EMT)-related protein expression levels of osteosarcoma cells by inhibiting the expression of SOX9. In order to further assess the effect of the miR-320a/SOX9 axis on osteosarcoma, miR-320a mimic was used to promote the expression of miR-320a (Fig. 5I and J; P<0.001). Concurrently, SOX9 overexpression plasmid was also employed to upregulate the expression of SOX9 (Fig. 6A and B; P<0.001). It was then determined that the expression of SOX9 was inhibited in the miR-320a mimic + NC group as compared with the MC + NC group (Fig. 6C-H; P<0.05 and P<0.01). Subsequently, the effect of miR-320a/SOX9 on the migration and invasion of osteosarcoma cells was examined. As demonstrated in Fig. 7A-D, the migration of MG-63 and U2OS cells was enhanced in the MC + SOX9 group compared with the MC + NC group (P<0.05, P<0.01 and P<0.001). In addition, SOX9 could reverse the effect of miR-320a mimic on the migration of MG-63 and U2OS cells (Fig. 7A-D; P<0.05 and P<0.01).
According to Fig. 7E-H, the invasion of MG-63 and U2OS cells was promoted in the MC + SOX9 group, compared with the MC + NC group (P<0.001). In addition, SOX9 could reverse the effect of miR-320a mimic on the invasion of MG-63 and U2OS cells (Fig. 7E-H; P<0.001). In order to further confirm the effects of the miR-320a/SOX9 axis on the migration and invasion of cells, EMT-related proteins were also detected. The results revealed that miR-320a mimic could increase the expression of E-cadherin while diminishing the expression levels of N-cadherin and vimentin, compared with those in the MC + NC group (Fig. 8A-D; P<0.001). The effects of SOX9 were the opposite of those obtained with the miR-320a mimic (Fig. 8A-D; P<0.001). Similarly, upregulation of SOX9 could also reverse the effects of miR-320a mimic on the expression of these EMT-related proteins.

HGK inhibits the osteosarcoma tumor growth of nude mice in vivo. A xenograft mouse model was constructed in the present study to evaluate the antitumor activity of HGK in vivo. As revealed in Fig. 9A-C, HGK inhibited the tumor volume and weight in the HGK group compared with the control group.

Discussion

The medicinal value of Daphne genkwa dates back to ancient times (31). Daphne genkwa is often used to treat edema and make expectoration easy (32). As one of the active ingredients of Daphne genkwa, HGK has also been identified to possess powerful biological functions in the treatment of assorted diseases, especially its antitumor effect. For example, Huang et al indicated that HGK inhibits the invasion and migration of oral squamous cell cancer cells by downregulating the expression level of vimentin (33). Chen et al (17) reported that HGK can inhibit the expression of HDAC to induce the expression of the tumor suppressor p21, and promote the acetylation and activation of p53 and p65, thus inhibiting the growth, migration and invasion of liver cancer cells and increasing cell apoptosis. Notably, the aforementioned study revealed that the antitumor effect of HGK was mainly demonstrated through the inhibition of the migration and invasion of tumor cells. It is important to note that the primary cause of mortality for osteosarcoma is pulmonary metastasis, and current chemotherapy regimens appear to be ineffective against metastasis of osteosarcoma (34). The outstanding ability of HGK to suppress migration and invasion makes it noteworthy for our research, and indicates that it may be a potential drug for the treatment of osteosarcoma. In the present study, it was revealed that HGK did not affect the viability of normal human osteoblasts, indicating that the safety of HGK was reliable. In addition, further study revealed that HGK could reduce the proliferation, migration and invasion of osteosarcoma cells. In our subsequent study, it was determined that HGK could promote the expression of miR-320a in osteosarcoma cells. In fact, the role of miR-320a in osteosarcoma has been extensively studied, involving doxorubicin resistance, and its effect on proliferation, migration, and invasion of osteosarcoma cells (23,24,35). In addition, it has been reported that overexpression of miR-320a promotes oxidative stress levels, while reducing the viability, proliferation and mineralization capacities of osteosarcoma cells (36). A previous study even proposed miR-320a as a possible biomarker for osteosarcoma (37). Therefore, it can be theorized that the role of HGK in osteosarcoma may be realized by regulating miR-320a.
Notably, a recent study by Chou et al revealed that HGK can inhibit tumor progression by promoting the expression of miR-320a in lung cancer (11). Their research adds to the credibility of our theory. In order to determine the association between miR-320a and HGK, osteosarcoma cells were treated with HGK and an increased expression of miR-320a was detected. This suggested that HGK had a regulatory effect on miR-320a, but whether it further affected the migration and invasion of osteosarcoma needed to be further explored. MiR-320a inhibitor was used to decrease miR-320a expression, and the aforementioned conjecture was verified by miR-320a inhibitor. The results clearly revealed that HGK-inhibited migration and invasion of osteosarcoma cells were partially counteracted by miR-320a inhibitor. The starBase database is commonly used to predict the downstream target molecules and targeted binding sites of miRNAs (38). In the present study, the targeted bindings of miR-320a and SOX9 were predicted through the starBase database. This result was also confirmed by dual luciferase reporter assay. Notably, SOX9 has been reported to be an essential transcription factor for normal differentiation of osteoblasts and also plays an important role in the progression of osteosarcoma (27,29). In fact, the results of the present study demonstrated that HGK could decrease the expression of SOX9. In addition, miR-320a inhibitor could partially offset the effects of HGK on SOX9 expression. This indicated that HGK can inhibit the expression of SOX9 by promoting miR-320a expression. Further exploration revealed that miR-320a alleviated the migration and invasion abilities of osteosarcoma cells by inhibiting SOX9 expression. These results were further confirmed using western blot analysis. Additionally, EMT has been demonstrated to be an important process of migration and invasion, during which the connexins (such as E-cadherin) are gradually decreased and the expression levels of mesenchymal marker proteins (N-cadherin and vimentin) are promoted in cells (39). The results of the present study indicated that upregulation of miR-320a could promote the expression of E-cadherin while inhibiting the expression levels of N-cadherin and vimentin. However, overexpression of SOX9 could reverse the regulatory effects of miR-320a on the expression of E-cadherin, N-cadherin and vimentin. Moreover, our results also confirmed that miR-320a decreased the cell migration and invasion of osteosarcoma cells by inhibiting the expression of SOX9. In addition, similar to a previous study (11), the present study revealed that HGK had no significant effect on hFOB1.19 cells, while inhibiting the viability of osteosarcoma cells. This suggested that HGK selectively killed tumor cells without significant toxicity to normal cells. The mechanism revealing how HGK could selectively kill tumor cells remains unclear and needs to be further explored. Collectively, the present experiments demonstrated that HGK could attenuate the proliferation, migration and invasion of osteosarcoma cells by regulating the miR-320a/SOX9 axis. Unfortunately, our study was only conducted in vitro, and further investigation in vivo and clinical trials are required. HGK is a potential drug for the treatment of osteosarcoma and is expected to provide a new direction for the clinical research on osteosarcoma.
2022-08-06T06:16:29.072Z
2022-08-05T00:00:00.000
{ "year": 2022, "sha1": "f2e12901ffbaf8da15e7b3756c9a3c6d23a6cc18", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "9e1d5ea05bbddfcc53a327a128208ec1a2ce2150", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244888755
pes2o/s2orc
v3-fos-license
Application of transcranial direct current stimulation in cricopharyngeal dysfunction with swallowing apraxia caused by stroke Abstract Rationale: Dysphagia is a common complication after stroke. The 2 types of dysphagia with cricopharyngeal dysfunction and swallowing apraxia after stroke are relatively rare and difficult to treat; however, there are few clinical case reports of cricopharyngeal dysfunction and swallowing apraxia after stroke. Patient concerns: A case of cricopharyngeal dysfunction and swallowing apraxia due to cerebral infarction caused by atrial fibrillation in a 63-year-old woman who was followed up for 1 year. Diagnoses: The patient was diagnosed with cricopharyngeal dysfunction and swallowing apraxia caused by stroke based on the clinical course and imaging findings. Interventions: Pharmacotherapy and rehabilitation therapy. Outcome: The patient's swallowing function returned to normal, and her nasal feeding tubes were removed, and oral feeding was resumed. Lessons: The 2 types of dysphagia with cricopharyngeal dysfunction and swallowing apraxia after stroke are relatively rare and difficult to treat after stroke. Only by improving swallowing apraxia can patients perform mandatory swallowing and balloon dilatation treatment. However, transcranial direct current stimulation has a good therapeutic effect on the primary motor and sensory cortex of the tongue in patients with cricopharyngeal dysfunction and swallowing apraxia. Introduction Dysphagia is a common complication after stroke and exhibits different symptoms because of the different stroke sites. The manifestations of cricopharyngeal dysfunction caused by medullary infarction include upper esophageal sphincter contraction, relaxation, and coordination dysfunction and the decrease of the contractile force of the pharyngeal constrictor, lack of larynx lift, residual, infiltration, and aspiration in the epiglottis valley and pyriform sinus. [1] Swallowing apraxia is characterized by uncoordinated lip, tongue, and mandible movements without sensory impairment or dyskinesia during oral swallowing. The patient's automatic and unconscious swallowing function is largely preserved, [2] but the absence of tongue movement or a significant decrease in the range of motion during autonomous and conscious swallowing results in delayed initiation of food delivery. [3] It is rare for patients to suffer from cricopharyngeal dysfunction and dysphagia caused by stroke. Dou et al [4] have used active balloon dilatation therapy to treat cricopharyngeal dysfunction and have achieved a good effect, but active balloon dilatation therapy requires patients to follow the swallowing instructions of doctors or rehabilitation therapists to achieve this effect. When patients suffer from swallowing apraxia, it is difficult to carry out swallowing instructions with active balloon dilatation therapy and basic deglutition training, which greatly increases the difficulty of treatment. Currently, there are limited clinical treatments for swallowing apraxia, and we present a rare case comparing the efficacy of transcranial direct current stimulation (tDCS) before and after treatment. Patient data The patient was a 63-year-old woman. She was right-handed, and she was hospitalized mainly because of "dizziness, walking instability with having difficulty in eating for 3 days." When admitted to the hospital, she could not consume any food through her mouth. 
Physical examination showed that the patient had clear consciousness, a poor mental state and unclear articulation; compared with the left side, the lift of the right soft palate was relatively weaker, and the pharyngeal reflex was absent. Cranial magnetic resonance imaging showed patchy acute infarction in the right medulla oblongata and right cerebellar hemisphere. Multiple small patches of ischemia, infarction, and softening lesions were located in the pons, beneath the bilateral frontoparietal cortex and centrum semiovale, mostly at the side of the lateral ventricle, and in the basal ganglia (Fig. 1A and B). Videofluoroscopic swallowing study (VFSS) showed that the cricopharyngeus muscle opened only partially during reflex swallowing and that the patient could not complete swallowing on command. Clinical diagnoses included cerebral infarction, swallowing apraxia, cricopharyngeal dysfunction, dysarthria, and atrial fibrillation.

Therapies

The patient was successively treated with therapy A (basic swallowing training and balloon dilatation) for 4 weeks and therapy B (tDCS, basic swallowing training, and balloon dilatation) for 4 weeks. The specific treatment therapies used were as follows.

2.2.1. tDCS treatment. An IS200-type transcranial direct current stimulator (Sichuan, China) was used, and a 6.0 x 4.2 cm^2 isotonic saline gelatin sponge electrode was chosen as the stimulating electrode. The projection area of the labiolingual cortex is located 75 mm below the Cz point of the international EEG 10-20 system, and the region 2.5 mm above and below this point is the area where the maximum activation site of the sensorimotor cortex lies in the left hemisphere during lip-tongue pronunciation. [5] During treatment, the patient was seated, the anode was placed on the scalp over the labiolingual cortex location described above, and the cathode was placed on the contralateral shoulder. The treatment was conducted alternately on the left and right sides for 15 min each with a 30-minute interval. The electrical stimulation intensity was 1.2 mA (current density approximately 50 µA/cm^2). Treatment was performed once a day, 5 days per week.

2.2.2. Basic swallowing training. This included breathing training, neck movement training, oral and facial muscle group movement training, sensory training, tongue movement training, swallowing coordination training, vocal cord adduction training, laryngeal up-lift training, empty swallowing and forced empty swallowing training, cough reflex training, and low-frequency electric stimulation treatment. Low-frequency electric stimulation treatment used a VitalStim low-frequency electric therapy apparatus from the United States. During treatment, the patient's head remained in a neutral position. Electrodes 1 and 2 of channel 1 were arranged horizontally just above the hyoid bone. Electrodes 3 and 4 of channel 2 were located along the anterior midline: electrode 3 was placed at the top notch of the thyroid cartilage, and electrode 4 was placed below the thyroid notch. The treatment parameters were as follows: bidirectional square wave, pulse width 300 µs, frequency of electrical stimulation 80 Hz, on/off ratio 300:100, and stimulus intensity 0 to 25 mA. The intensity criterion was a visible vibration of the swallowing muscles of the neck accompanied by swallowing movements, at a level the patient could tolerate. Each treatment lasted for 20 min. The basic swallowing training was performed once a day, 5 days per week.
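A quick arithmetic check of the tDCS electrode parameters above (my own calculation, not part of the original report): 1.2 mA spread over a 6.0 x 4.2 cm^2 sponge electrode corresponds to roughly 0.05 mA/cm^2, i.e. on the order of 50 µA/cm^2.

```python
# Current density check for the tDCS electrode described above.
current_ma = 1.2            # stimulation intensity, mA
area_cm2 = 6.0 * 4.2        # sponge electrode area, cm^2
density = current_ma / area_cm2
print(f"{density:.3f} mA/cm^2  (= {density * 1000:.0f} uA/cm^2)")   # ~0.048 mA/cm^2, ~48 uA/cm^2
```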
Balloon dilatation treatment was as follows. Active balloon dilatation was used. First, a No. 14 latex double-lumen catheter was inserted transnasally into the esophagus, ensuring that the catheter went down into the esophagus and completely through the cricopharyngeus muscle; then approximately 6 to 8 mL of water (water temperature approximately 0°C) was infused into the catheter, expanding the balloon diameter to 20-25 mm. Next, the catheter was slowly pulled outwards until it felt stuck or could not be moved, indicating that the balloon had arrived at the lower margin of the cricopharyngeus muscle, and this position was marked; then an appropriate amount of ice water was drawn out (judged according to the tension of the cricopharyngeus muscle: the amount was considered appropriate if the balloon could just pass through the muscle when pulled out), and the catheter was gently pulled out again and again. The patient was asked to swallow actively while the balloon was pulled upward. Once the patient had a feeling of slipping and the resistance decreased sharply, the ice water in the balloon was pulled out quickly; this procedure was repeated 8 to 10 times. Notably, when the pull-out resistance decreases sharply, it indicates that the balloon is outside the front part of the larynx, and the fluid in the balloon must be quickly evacuated to avoid suffocation. The balloon dilatation treatment was performed once a day for 30 minutes, 5 days per week.

Evaluation methods

Before the treatment, after 4 weeks of therapy A and after 4 weeks of therapy B, 2 uniformly trained and qualified swallowing and speech therapists evaluated the efficacy of the therapy, including the following items.

2.3.1. Tongue movement and facial-oral apraxia were assessed using the Psycholinguistic Assessment in Chinese Aphasia version 1.0, which was used to evaluate lip closure, mouth opening, laryngeal elevation, tongue movement, and facial-oral apraxia. The scoring criteria for facial-oral apraxia were as follows: unable to perform (imitate) the action, 0 points; barely performs (imitates) the action, 1 point; performs (imitates) the action slightly worse than normal, 2 points; and performs (imitates) the action normally, 3 points. [6]

2.3.2. Examination of swallowing function using VFSS was as follows: considering that barium is not easily absorbed due to malabsorption, we selected 76% meglumine diatrizoate and 50% glucose as the contrast agent, according to a ratio of 2:1, and used this to prepare different types of food in the following proportions with nutritive rice flour, including thin liquid (30 mL contrast agent and 3 g nutrition rice flour), thick liquid (30 mL contrast agent and 8 g nutrition rice flour), and paste meal (30 mL contrast agent and 14 g nutrition rice flour). Doctors observed the movement of food (1, 2, and 4 mL of food were given in turn and quantified by syringes and spoons) through the mouth cavity, pharynx, and esophagus when patients completed reflex swallowing (automatic and unconscious swallowing) and voluntary or command swallowing (the operator gave patients food through the mouth with a spoon, asked them not to swallow for a while, and then asked them to complete the swallow immediately after the verbal command "swallow"), and observed whether the contrast agent remained in the epiglottis valley and piriform fossa, whether there was aspiration, and whether the cricopharyngeus muscle opened.
If the patient aspirated by mistake, the examination would be stopped, and the aspirated contrast agent would be cleared in time. The oral stage scoring criteria were as follows: 0 points, food cannot be transported from the oral cavity to the pharynx, and food either falls out of the mouth or enters the pharynx only by gravity; 1 point, the patient is unable to stir food into a bolus and transport it to the pharynx, and food enters the pharynx in fragments; 2 points, some food remains in the mouth after one swallow; and 3 points, food passes completely into the pharynx after one swallow. The pharyngeal and laryngeal stage scoring standards were as follows: 0 points, decreased swallowing reflex, with poor laryngeal elevation and closure and poor elevation of the soft palatal arch; 1 point, considerable food remaining in the epiglottis valley and piriform fossa; 2 points, a small amount of food remaining in the epiglottis valley and piriform fossa, with all remaining food swallowed into the throat after repeated swallowing; and 3 points, food enters the esophagus completely after swallowing. The esophageal stage scoring standard was as follows: 0 points, a large amount of aspiration without cough; 1 point, a large amount of aspiration with cough; 2 points, a small amount of aspiration without cough; 3 points, a small amount of aspiration with cough; and 4 points, no aspiration and no cough. [7]

Results

The patient did not experience any maladjustment or intolerance during treatment. After 4 weeks of treatment with therapy A, the patient's tongue movement and facial-oral apraxia showed no improvement; after 4 weeks of treatment with therapy B, the patient's tongue movement was significantly improved, and the facial-oral apraxia score increased from 10 to 42 points. VFSS examination showed that the patient could perform mandatory swallowing, and she could complete food agitation and transportation when swallowing autonomously. There was no delay in the swallowing reflex, the laryngeal elevation was close to normal, and the cricopharyngeus muscle could open in a coordinated manner. Food was able to smoothly enter the esophagus and stomach. Thus, her nasal feeding tubes were removed, and oral feeding was resumed. The treatment results are presented in Tables 1-3.

Discussion

Human swallowing reflex activity is very complex; its neural control consists of 3 parts: the afferent and efferent systems composed of cranial nerves, the brainstem swallowing center, and a higher cortical swallowing center. [8,9] After stroke, lesions may involve the cortex, the cortical brainstem tracts or the brainstem, and the nuclei of the medulla oblongata, and the regulatory mechanism of the brainstem deglutition center becomes abnormal, which may easily lead to dysfunction of the lower jaw, lip, tongue, soft palate, pharynx, cricopharyngeus muscle, and esophagus, eventually affecting the patient's deglutition function. [8] At present, it is believed that dysphagia is related to lesions in the cerebral hemispheric cortex [3] or periventricular white matter. [10] Yuan et al [11] used EMG to record the changes in EMG activity in 6 normal subjects and 1 patient with swallowing apraxia after cerebral infarction in 3 states: quiet with closed eyes, reflex swallowing, and voluntary swallowing.
It was observed that when patients were swallowing voluntarily, the excitability of the left central, parietal, and posterior temporal cerebral cortex was lower than that during the quiet eyes-closed state and reflex swallowing. Related functional magnetic resonance imaging studies have shown that the lateral surfaces of the precentral and postcentral gyri are the most common activation areas of the swallowing cortex in normal subjects, while activation was also observed in the frontal lobe, cingulate gyrus, parietooccipital region, and temporal lobe. [12,13] In this study, the patient had a stroke induced by atrial fibrillation, whose lesions involved the medulla oblongata and cerebral cortex. The patient also developed cricopharyngeal dysfunction and swallowing apraxia, which is consistent with the above findings. Relevant studies have found that the central nervous system has strong plasticity and functional reorganization ability after stroke, and functional improvement can be achieved through repeated training. tDCS is a non-invasive transcranial stimulation technique that can regulate cortical excitability through microcurrents; its stimulation effect has polarity specificity: anodal stimulation leads to depolarization of the resting membrane potential and increases cortical excitability, whereas cathodal stimulation leads to hyperpolarization of the resting membrane potential and decreases cortical excitability. [14] Yuan et al [11] applied electroencephalography (EEG) to observe the changes in cortical excitability of patients with swallowing apraxia caused by stroke, and found that the excitability of the swallowing cortex significantly increased in patients with significant improvement in swallowing apraxia after treatment with tDCS. Lang et al [15] used tDCS to stimulate the left M1 region and used single-pulse transcranial magnetic stimulation to evaluate the potential amplitude, onset latency, and transcallosal inhibition time evoked by contralateral movement, which indicated that tDCS not only affects corticospinal circuits involved in the generation of motor-evoked potentials but can also inhibit transcallosal regulation of the interneurons to the contralateral hemisphere. Considering the above contralateral inhibitory factors, tDCS was used for patients with cricopharyngeal dysfunction and swallowing apraxia to stimulate the primary motor and sensory cortex of the tongue in both hemispheres. It was found that after tDCS treatment, tongue movement and orofacial apraxia were significantly improved in the patient, and her score increased from 10 to 42 points. VFSS examination after 4 weeks of therapy B treatment showed that the patient could perform mandatory swallowing, and she could complete food agitation and transportation when swallowing autonomously. There was no delay in swallowing reflex, the laryngeal elevation was close to normal, and the cricopharyngeus muscle was able to open in a coordinated manner. Food was able to smoothly enter the esophagus and stomach. Therefore, her nasal feeding tube was removed, and oral feeding was resumed. Yuan et al [16] applied tDCS to directly stimulate the bilateral primary sensory and motor cortex for swallowing in patients with swallowing apraxia. These researchers observed that the swallowing apraxia symptoms of patients after treatment were significantly improved in both voluntary and reflex swallowing.
Table 2. Analysis of examination results of the patient with oral and facial apraxia (each item scored 0-3, separately for performance on command and on imitation; scores given as perform/imitation).
Item: Cough | Nasal breathe | Blow out a match | Blow a straw | Blow a drum cheek | Pout
On admission: 0/2 | 0/1 | 0/0 | 0/1 | 0/1 | 0/1
After 4 weeks of therapy A: 0/2 | 0/1 | 0/0 | 0/1 | 0/1 | 0/1
After 4 weeks of therapy B: 1/2 | 1/2 | 2/2 | 1/2 | 1/2 | 2/3
Additional items scored in the same way: close lips, show teeth, stick out tongue, open mouth, throat; a total score was also recorded.
Table 3. Analysis of VFSS scores (oral period and subsequent stages).
The results of the EEG examination also suggested that the excitability of broad areas of the swallowing cortex was improved, which was consistent with the results of this study. In conclusion, the results of this study show that tDCS stimulation has a good therapeutic effect on the primary motor and sensory cortex of the tongue in patients with cricopharyngeal dysfunction and swallowing apraxia caused by stroke. Only by improving swallowing apraxia can patients perform mandatory swallowing. By further participating in basic swallowing training and active balloon dilatation therapy, patients can achieve satisfactory rehabilitation of swallowing disorders. This combined therapy warrants further study and development.
2021-12-05T16:08:38.163Z
2021-12-03T00:00:00.000
{ "year": 2021, "sha1": "2c92df69eeabdabae36bbe283cc487efb7b9af55", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1097/md.0000000000027906", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "cb4e5366a3f0365fb61a9271de0b1966c85afdb5", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
211532657
pes2o/s2orc
v3-fos-license
Fermionic minimal models We show that there is a fermionic minimal model, i.e. a 1+1d conformal field theory which contains operators of half-integral spins in its spectrum, for each $c=1-6/m(m+1)$, $m\ge 3$. This generalizes the Majorana fermion for $c=1/2$, $m=3$ and the smallest $\mathcal{N}{=}1$ supersymmetric minimal model for $c=7/10$, $m=4$. We provide explicit Hamiltonians on Majorana chains realizing these fermionic minimal models. INTRODUCTION AND SUMMARY The classification of the unitary minimal models of conformal field theory in 1+1 dimensions [1][2][3][4][5][6] is one of the triumphs of theoretical physics in the late 20th century. It was a milestone in our understanding of universality of critical phenomena in certain 2d classical statistical models [7][8][9] and 1+1d quantum systems [10,11]. As is well known, the central charge is of the form c = 1 − 6/m(m + 1) for an integer m ≥ 3. The simplest case m = 3 is the critical Ising model with c = 1/2 and the next case m = 4 is the tricritical Ising model with c = 7/10. Starting from m = 5, there are at least two distinct models, called the A-type (or the diagonal) modular invariant and the D-type modular invariant; for m = 5, they are the tetracritical Ising model and the critical 3-state Potts model, respectively. Finally, there are exceptions when m = 11, 12, 17, 18, 29 and 30. The operators of these models have integer spins. In this sense they can all be called bosonic. Let us recall that the critical Ising model can be transformed to a free massless Majorana fermion via the Jordan-Wigner transformation [12]. They are almost the same, so much so that careful distinctions were not routinely made in the old literature. We stress that they are distinct: The theory of Majorana fermion has an operator of spin 1/2, while the Ising model does not. Similarly, the unitary minimal models with N =1 supersymmetry were classified, and the smallest nontrivial example has the central charge c = 7/10, the same as the tricritical Ising model [2,13]. It is also known that this supersymmetric minimal model is obtained from the tricritical Ising model by the Jordan-Wigner transformation and can appear in a strongly interacting Majorana chain [14]. We can summarize these old observations as saying that we have fermionic minimal models when m = 3 and 4. It is then a natural question to ask whether there are fermionic minimal models with higher m. The purpose of this Letter is to answer this question positively. The existence of fermionic minimal models as 1+1d theories in the continuum should not really come as a surprise, although it was not widely appreciated [15]. This is because there is a general method developed a few years ago [16,17] which allows us to turn any 1+1d bosonic model with non-anomalous Z 2 symmetry into a fermionic model, and the bosonic minimal models have such a Z 2 symmetry. The method, however, is quite abstract. The main result of this Letter then is to make this construction more concrete by providing explicit lattice realizations of fermionic minimal models by presenting a systematic construction of Majorana chains from quantum spin chains which give rise to bosonic minimal models at criticality. General analysis We will review the argument of [16,17], which allows us to turn a 1+1 dimensional bosonic theory with nonanomalous Z 2 symmetry into a fermionic theory. This method is a simplified version of the ideas developed in 2+1 dimensions [18,19], and can be considered as a variant of orbifolding by the Z 2 symmetry. 
As such, we are going to recall the ordinary Z 2 orbifolding procedure first, and then discuss the fermionization procedure. Let us consider a 1+1d quantum field theory A with a non-anomalous Z 2 symmetry. We would like to study the Hilbert space of states on S 1 , which can be either untwisted or twisted, depending on whether we introduce a twist by the Z 2 symmetry around the spatial S 1 . The untwisted and the twisted states can then each be decom-posed into states even and odd under the Z 2 symmetry. We present this decomposition in Table I, where S, T , U and V are generic symbols for states in the respective sectors. Let us consider the theory D obtained by taking the orbifold, or performing the gauging, by this Z 2 symmetry. The untwisted sector of the theory D consists of the even sector of the original theory A, coming from both the untwisted and the twisted sector of A. We can also assemble the odd sector of the original theory A, from both the untwisted and twisted sector of A, into the twisted sector of the theory D. This means that the states on S 1 of the theory D are as shown in Table I. We easily see that the theory D also has a Z 2 symmetry, and the orbifold of the theory D by this Z 2 regenerates the theory A [20]. This Z 2 gauging is known to be a generalized abstract version of the Kramers-Wannier transformation. The next operation, which is a generalized abstract version of the Jordan-Wigner transformation, uses the low-energy limit of the nontrivial topological phase of the Kitaev chain [21]. This is a fermionic chain whose lowest energy state on S 1 is non-degenerate. Denoting the fermion parity as (−1) F , the ground state has (−1) F = +1 when the fermion is antiperiodic around S 1 , and has (−1) F = −1 when the fermion is periodic around S 1 [22]. For brevity, we call this topological phase of the Kitaev chain simply "the Kitaev chain". We now consider the theory A × Kitaev obtained by stacking the Kitaev chain to the original model A. We then take the orbifold by Z 2 , where the Z 2 action on the Kitaev chain is given by the fermion parity. The result is the fermionic model we denote by F. To find the decomposition of states of this theory, we study the four sectors of A × Kitaev, depending on whether it is untwisted (u = +1) or twisted (u = −1), and whether the fermion in the final theory F is periodic (s = +1) or antiperiodic (s = −1). It is important to keep in mind that the actual periodicity of the fermion of the Kitaev chain is given by the product su, since the Z 2 symmetry we use in the twisting also involves the fermion parity of the Kitaev chain. This means that (−1) F = −su, due to the property of the Kitaev chain. Let us denote the Z 2 charge of A by Q A = ±1. Then the total Z 2 charge is Q := Q A (−1) F . The theory F is obtained by keeping only the states with Q = +1. This means that, to find that the decomposition of states of F for each s = ±1, we simply consider both possibilities u = ±1 and take the states with Q A = −su, which also equals (−1) F . The result is summarized in Table I. There, we refer to states with (−1) F = +1 as bosonic and those with (−1) F = −1 as fermionic. We can also perform the same operation against the theory D, by considering the Z 2 -orbifold of D × Kitaev. The decomposition of states of the resulting theory, which we callF, is also shown in Table I. In the table, we note that F andF are related simply by exchanging the assignment of (−1) F in the periodic sector. Or equivalently, we haveF = F × Kitaev. 
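The bookkeeping in the preceding paragraphs can be spelled out mechanically. The sketch below is my own illustration (assuming the labeling S/T for the untwisted even/odd states of A and U/V for the twisted even/odd states): for each choice of fermion periodicity s and twist u it keeps the states with total charge Q = Q_A(−1)^F = +1 and prints which states of A end up in each sector of F, and with which fermion parity.

```python
# Enumerate how the states of A are redistributed into the fermionic theory F.
# Convention assumed here: S/T = untwisted even/odd, U/V = twisted even/odd states of A.
labels = {(+1, +1): "S", (+1, -1): "T", (-1, +1): "U", (-1, -1): "V"}

for s in (-1, +1):                      # s = -1: antiperiodic fermion, s = +1: periodic
    sector = "antiperiodic" if s == -1 else "periodic"
    for u in (+1, -1):                  # u = +1: untwisted in A, u = -1: twisted in A
        parity = -s * u                 # (-1)^F of the Kitaev chain in this sector
        q_a = -s * u                    # keep only states with Q = Q_A * (-1)^F = +1
        kind = "bosonic" if parity == +1 else "fermionic"
        print(f"F, {sector} sector: {labels[(u, q_a)]} states, {kind}")
```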
We can summarize the relation of the four theories A, D, F and F̃ in the diagram (1).

Application to the unitary minimal models

Let us now recall the well-known fact that the D-type modular invariants are obtained by a Z 2 orbifold, or equivalently a Z 2 -gauging, of the A-type modular invariants. This means that we can apply the general method explained above to produce fermionic minimal models. Explicitly, these models have the following operator content. We denote the irreducible Virasoro characters at c = 1 − 6/(m(m+1)) by χ r,s . We set p = m + 1 and q = m when m is even, and q = m + 1 and p = m when m is odd. This is to make p always odd and q always even. We then let 1 ≤ r ≤ q − 1 and 1 ≤ s ≤ p − 1. The conformal weight L 0 of χ r,s is then given by L 0 = ((pr − qs)^2 − 1)/(4pq). This set is redundant because of the twofold identification χ r,s = χ q−r,p−s . We remove this redundancy by restricting s ≤ (p − 1)/2. We then have the decompositions (2), where we abused the notation and identified a state space and its character; a ≡ b is the equality modulo 2. We note that the parity of r (≡ q/2 for U and ≡ q/2 + 1 for V) is correlated to the spin of the states being integral or half-integral.

TABLE II. Details of the m = 3, 4, 5 models. The upper part gives conventional names for the models when available. The lower left part lists the Virasoro content of the states S, T, U and V. The lower right part provides the mapping between the symbols ε, σ etc. and the characters χ r,s . For example, (1/10) for m = 4, r = 1, s = 2 means that we use the entry for χ 1,2 whose primary has L 0 = 1/10.

The expressions (2) can be obtained as follows. By definition, S + T and S + U are equal to the partition functions of the A-type and the D-type minimal model, which can be found in the standard textbooks on 2d conformal field theory, e.g. [23]. By performing a modular transformation τ → −1/τ , one then obtains U + V and T + V , respectively. From this information we can extract S, T, U and V individually. These expressions can also be obtained from a very general result of Ref. [24] as applied to the minimal models. The spectra for m = 3, 4, 5 are shown in Table II. We used ε for Z 2 -even primaries and σ for Z 2 -odd primaries; those with larger L 0 have more primes in the superscript. The operators in S, T, and U all have integer spins, while the operators in V all have half-integral spins. For m = 3, 4, we have U = T, meaning that there is no distinction between A-type and D-type models. For m = 5, U ≠ T, and the A-type model and the D-type model are distinct. For m = 3, this extra generator is the free fermion with spin 1/2; for m = 4, it is the supersymmetry generator in the fermionic model; for m = 5, it is the W 3 generator and exists in the untwisted sector of the D-type model. The pattern repeats itself. We find that the chiral algebra of the D-type model for m ≡ 5, 6 mod 4 has a W-generator of integer spin, and that the chiral algebra of the fermionic model for m ≡ 3, 4 mod 4 has a W-generator of half-integral spin, as was mentioned in Ref. [25].

General analysis

Let us begin by recalling the Jordan-Wigner transformation of a spin-1/2 chain [26]. We consider a circular chain with sites labeled by a positive integer i, each hosting the local Hilbert space C 2 . We denote the local Pauli matrices as σ (i) x,y,z , and consider the on-site Z 2 symmetry generated by σ z , so that the global Z 2 charge is given by ∏ i σ (i) z .
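Before continuing with the lattice construction, here is a quick numerical check (my own sanity check) of the continuum formulas quoted above, c = 1 − 6/(m(m+1)) and L_0 = ((pr − qs)^2 − 1)/(4pq); it reproduces c = 1/2, 7/10, 4/5 for m = 3, 4, 5, the free-fermion weight 1/2 at m = 3, and the weight 1/10 of χ 1,2 at m = 4 mentioned in the Table II caption.

```python
from fractions import Fraction

def central_charge(m):
    return 1 - Fraction(6, m * (m + 1))

def weight(m, r, s):
    # p odd, q even, following the convention stated in the text.
    p, q = (m + 1, m) if m % 2 == 0 else (m, m + 1)
    return Fraction((p * r - q * s) ** 2 - 1, 4 * p * q)

print(central_charge(3), central_charge(4), central_charge(5))  # 1/2, 7/10, 4/5
print(weight(3, 3, 1))   # 1/2: the weight of the free fermion at m = 3
print(weight(4, 1, 2))   # 1/10: the entry referenced in the Table II caption
```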
The Jordan-Wigner transformation is given by the following relation This is a non-local transformation, but maps local operators to local operators when restricted to Z 2 -even and/or bosonic operators. To see this, we note that any Z 2 -even operator can be generated from σ , and that they are mapped by the Jordan-Wigner transformation as follows: Let us now show that this mapping reproduces the general analysis in continuum theory when we consider a circular chain of N sites. If we impose the boundary condition ψ (2N +1) = sψ (1) where s = ±1, then the relation (4) is slightly modified when i = N to be The right-hand side should equal σ . This means that the periodicity of the original spin chain is given by σ x , where s is the sign determining the periodicity of the fermion chain as above and t is the global Z 2 charge i σ (i) z . It is also clear that the global Z 2 charge agrees with the fermion parity i (−iψ (2i−1) ψ (2i) ). This explains the mapping of states between the original theory A and the fermionized theory F. We now note that the relation between the theory F and the theoryF can be realized at the level of the fermion chain by the shift ψ (i) → ψ (i+1) . Indeed, when the boundary condition is given by ψ (2N +1) = sψ (1) where s = ±1, the fermion number operator after the shift is This means that the fermion number assignment gets reversed only in the periodic sector. Application to the unitary minimal models To obtain a Majorana chain realizing the fermionic minimal models, we simply need to take a realization of ordinary bosonic minimal models on the spin-1/2 chain with the manifest Z 2 symmetry, and perform the Jordan-Wigner transformation. This method is well known to work for the Ising model and the tricritical Ising model. There are two apparent obstacles to generalize this construction to higher minimal models: i) Some of these known bosonic models do not have manifest Z 2 symmetry, while the Z 2 symmetry emerges only in the long-range limit (see, e.g., [27]). ii) Most of the known bosonic models realizing the ordinary minimal models higher than these are defined on a chain of "spins" larger than 1/2. That is, they are realized on a generalized spin chain such that each site has the state space C k with k > 2. While we currently do not have any solutions to the first point, the second point can be easily circumvented. Suppose we are given a spin-chain Hamiltonian realizing a higher minimal model with an explicit Z 2 symmetry [28] such that the state space at each site is C k . We pick an integer so that we can embed C k ⊂ (C 2 ) ⊗ , i.e. we represent one site of the original spin chain in terms of a unit cell consisting of sites of the spin-1/2 chain. It is clear that this can be done in a way preserving the Z 2 symmetry. When k is not a power of two, we have 2 − k unnecessary states after the embedding, but they can be removed by adding to the Hamiltonian a local term which gives a very large energy to these unnecessary states. Then the low-lying states before and after the embedding into the spin-1/2 chain are effectively the same, and eventually we will have a local Hamiltonian on the spin-1/2 chain with a manifest Z 2 symmetry realizing the higher minimal model. Let us illustrate this procedure by taking the 3-state Potts model. 
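One standard form of the Jordan-Wigner relations referred to above (an assumption on my part; conventions differ by signs and orderings) is ψ^(2i−1) = (∏_{j<i} σ_z^(j)) σ_x^(i) and ψ^(2i) = (∏_{j<i} σ_z^(j)) σ_y^(i). The small NumPy check below verifies the Majorana anticommutation relations and that the fermion parity ∏_i (−iψ^(2i−1)ψ^(2i)) reproduces the global Z 2 charge ∏_i σ_z^(i).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

def site_op(op, i, N):
    # Pauli `op` acting on site i (0-indexed) of an N-site spin-1/2 chain.
    return kron_all([op if j == i else I2 for j in range(N)])

def majoranas(N):
    # psi[2i], psi[2i+1] correspond to psi^(2i+1), psi^(2i+2) in 1-indexed notation.
    psis = []
    for i in range(N):
        string = kron_all([sz if j < i else I2 for j in range(N)])
        psis.append(string @ site_op(sx, i, N))
        psis.append(string @ site_op(sy, i, N))
    return psis

N = 3
psi = majoranas(N)
dim = 2 ** N
for a in range(2 * N):                      # Majorana algebra: {psi_a, psi_b} = 2 delta_ab
    for b in range(2 * N):
        anti = psi[a] @ psi[b] + psi[b] @ psi[a]
        assert np.allclose(anti, (2.0 if a == b else 0.0) * np.eye(dim))
for i in range(N):                          # -i psi^(2i+1) psi^(2i+2) equals sigma_z at site i
    assert np.allclose(-1j * psi[2 * i] @ psi[2 * i + 1], site_op(sz, i, N))
print("Majorana algebra verified; fermion parity matches the global Z2 charge")
```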
The standard Hamiltonian realization of the 3-state Potts model is on a spin chain with each site having C 3 , with the basis |A , |B , and |C , acted on by the clock and shift operators Z : |A → |A , |B → ω |B , |C →ω |C , X : |A → |B → |C → |A where ω = e 2πi/3 . The Hamiltonian is then where J and f are parameters, and the Z 2 symmetry is generated by |A → |A , |B ↔ |C at every site. The model becomes critical when J = f . We now embed C 3 to (C 2 ) ⊗2 by choosing This preserves the Z 2 symmetry. We also note that we have one unnecessary state |D i := |↑ 2i−1 |↓ 2i . This state can be removed by adding to the Hamiltonian with a huge positive coefficient U . Using (9) and (10), we can rewrite H Potts in terms of Pauli matrices. The 3-state Potts model is thus translated to a model on a spin-1/2 chain constructed from σ z and bilinears of σ x,y , which can be Jordan-Wigner transformed into a chain of interacting Majorana fermions, with the following Hamiltonian · (2iψ (4i+1) ψ (4i+2) + iψ (4i+3) ψ (4i+4) + ψ (4i+1) ψ (4i+2) ψ (4i+3) ψ (4i+4) ) As the critical 3-state Potts model gives the m = 5 Dtype modular invariant, the Majorana chain (11) at criticality will give the m = 5 fermionic minimal model [29]. We numerically checked that the Hamiltonian (11) does give a conformal field theory with c = 4/5 when J = f [30]. It would also be interesting to study in detail e.g. the two-point functions of the fermionic operators, from which we could identify their scaling dimensions which have been indicated in TABLE II for the m = 5 fermionic minimal model. We leave it to a future work.
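As a concrete check of the clock and shift operators defined above, the following snippet (my own illustration) writes Z and X as 3 x 3 matrices in the basis |A>, |B>, |C>, together with the Z 2 swap |B> <-> |C>, and verifies Z^3 = X^3 = 1, the exchange relation ZX = ωXZ, and that the Z 2 action sends Z and X to their inverses.

```python
# Clock and shift operators of the 3-state Potts chain as explicit matrices
# (basis order |A>, |B>, |C>); a quick algebra check, not code from the paper.
import numpy as np

w = np.exp(2j * np.pi / 3)
Z = np.diag([1, w, w.conjugate()])                               # |A>->|A>, |B>->w|B>, |C>->wbar|C>
X = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=complex)   # |A>->|B>->|C>->|A>
C = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)   # Z2 symmetry: |B> <-> |C>

assert np.allclose(np.linalg.matrix_power(Z, 3), np.eye(3))
assert np.allclose(np.linalg.matrix_power(X, 3), np.eye(3))
assert np.allclose(Z @ X, w * X @ Z)                 # exchange relation ZX = w XZ
assert np.allclose(C @ Z @ C, Z.conj().T)            # Z2 conjugates Z into its inverse
assert np.allclose(C @ X @ C, X.conj().T)            # ... and X into its inverse
print("clock/shift algebra and Z2 action verified")
```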
2020-02-28T02:00:36.583Z
2020-02-27T00:00:00.000
{ "year": 2020, "sha1": "8b299cb06428d1ccdaff95f308b3ecd51b4d66df", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2002.12283", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "fbafbe94af64bcc346f08c187b3be520f7459ce8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
238991129
pes2o/s2orc
v3-fos-license
A cross‐sectional study of psychological burden in Chinese patients with pulmonary nodules: Prevalence and impact on the management of nodules Abstract Background Uncertainty after the detection of pulmonary nodules (PNs) can cause psychological burden. We designed this study to quantitatively evaluate the prevalence, severity and possible impact of this burden on the preference of patients for management of nodules. Methods The Hospital Anxiety and Depression Scale (HADS) was used to evaluate psychological burden in patients. An independent t‐test and a Mann–Whitney U test were used to determine the significance of differences between groups in continuous variables. A chi‐square test was used to determine the significance of difference between groups in categorical variables. Results A total of 334 inpatients diagnosed with PNs were included in the study. A total of 17.96% of the participates screened positive for anxiety and 14.67% for depression. Female patients had significantly higher positive rates of both anxiety and depression screenings than male patients (21.57% vs. 12.31%, p = 0.032 and 18.05% vs. 9.30%, p = 0.028, respectively). Among patients screened positive for anxiety, the proportion of those who chose more aggressive management was significantly higher (34/60 vs. 113/274, p = 0.029). The rate of benign or precursor disease resected was significantly higher in patients with more aggressive management (46.94% vs. 9.63%, p < 0.01). Conclusions Anxiety and depression are common in Chinese patients with PNs. Patients with positive HADS anxiety screening results are more likely to adopt more aggressive management that leads to a higher rate of benign or precursor disease resected/biopsied. This study alerts clinicians to the need to assess and possibly treat emotional responses. INTRODUCTION Detection of pulmonary nodules (PNs) has been reported to cause psychological burden in patients as a result of panic about lung cancer and death. 1-2 Whereas a mass detected in organs such as breast and colon have easy and instant access to biopsy and pathological diagnosis, the management of spots in the lung may cause extra cancer-related psychological burden due to "watch and wait" management. As nodules cannot be diagnosed immediately, patients have to undergo months, or even years, of surveillance in accordance with the guidelines of standard PN management before the final diagnosis, and this long wait places patients in a state of uncertainty 2-4 that is a powerful stressor and an important antecedent of anxiety. 5 Moreover, it has been reported in previous studies that because of lack of understanding of the etiology, malignancy risk and ramification of PNs, patients' inaccurate self-diagnosis of malignancy often precedes professional evaluation of their PNs, and this may also cause considerable anxiety. [6][7][8] Lung cancer has been one of the deadliest cancers in China since 2008. [9][10] The survival rate of Chinese lung cancer patients in 2012-2015 was 16.8% in men and 25.1% in women, which was classified as low survival, 11 hence screening programs for the early detection of lung cancer have now become widespread in China. Yet, although there is a possibility of stage shift in lung cancer, most people screened by chest computed tomography (CT) are diagnosed with nodules that do not lead to cancer-related death. 
[12][13] This burden following PN detection has considerably increased lately due to the control policy of the COVID-19 pandemic: increased chest CT screenings have led to more PN detection among people of all age groups without risk factors for lung cancer. The management of PNs remains controversial among scholars from different geographic regions and academic backgrounds, and it is revised regularly in accordance with latest studies on PNs. In general, the management recommended in Asian (including Chinese) guidelines is more aggressive than that in the US. For example, for groundglass nodules (GGNs) no larger than 10 mm, the interval between two follow-up CT scans recommended by the Fleischner Society (FS), American College of Chest Physicians (ACCP), and National Comprehensive Cancer Network (NCCN) is 6-12 months, 3,14-15 while the interval recommended by the Chinese Alliance Against Lung Cancer and the Clinical Practice Consensus Guidelines for Asia is 3 months. 16,17 In our department, we noticed that some patients complained of cancer-related psychological burden caused by the detection of PNs, some preferred more aggressive management due to this burden when noninvasive CT surveillance was still an appropriate choice, and some even suspended their normal living and working routines. We designed this study to quantitatively evaluate the psychological burden of Chinese PN inpatients and to explore its impact on the management of PNs in order to advocate the need to assess and possibly treat emotional responses. METHODS This was an observational single-center cross-sectional study conducted with the approval of the Peking University People's Hospital Medical Ethics Committee (Approval Number: 2018PHB021-01). Informed consent was obtained from the participants. The observational trial was registered at ClinicalTrials.gov (ID: NCT03498768). Data collection All inpatients diagnosed with PNs in the Department of Thoracic Surgery at Peking University People's Hospital from April 2018 to June 2019 were invited to complete self-administered questionnaires during inpatient education on the first day of hospitalization. Participation was voluntary and no incentives were offered. The inclusion criteria were as follows: (1) detection of noncalcified PNs with a diameter between 4-30 mm on chest CT, (2) aged between 18-80, (3) tolerant of surgery with accessible pathological diagnosis, (4) willing to receive follow-up phone calls to reevaluate their psychological status after discharge. The exclusion criteria included: (1) difficulty in reading and writing, (2) diagnosed mental disease, (3) other circumstances deemed inappropriate for enrollment by the researchers. At the time of enrollment, information on demographic characteristics (sex, age, education background, medical insurance, and occupational status), clinical characteristics (family history of lung cancer, history of malignant tumors, smoking history), and parameters to define more and less aggressive management (size and attenuation of the PNs, interval between two follow-up CT scans, and duration of CT surveillance until resection/biopsy) was recorded within the self-administered questionnaires. Validated self-rating scales: Hospital anxiety and depression scale (HADS) We used the Hospital Anxiety and Depression Scale (HADS) to evaluate patients' psychological burden. The HADS is a selfreport scale that measures anxiety and depression in physically ill subjects. 
There are 14 items on this scale: seven for anxiety assessment and seven for depression. Each item is scored between 0 and 3. 18 The HADS scale has been translated into Chinese, validated by several Chinese groups, and recommended by the Anxiety Disorders Collaboration Group of the Chinese Medical Association Psychiatry Branch as a screening tool for Chinese inpatients since 2012. 19 The positive threshold was set at nine points for both anxiety and depression in China, and anxiety/depression with a score of over 15 is considered severe. 18,20 Definition of more aggressive PN management According to recommendations in the most widely-used guidelines, 3,14-17 we defined management of PNs as more aggressive any of the following criteria were met: (1) for solid nodule (SN) ≤8 mm, mixed GGN (mGGN)/pure GGN (pGGN) ≤15 mm, biopsy/resection right after the first detection of PNs; (2) an interval of less than 3 months for SN and mGGN/6 months for pGGN between two follow-up CT scans; (3) for pGGN, discontinuation of CT surveillance and biopsy/resection with no evidence of growth. Study outcomes The primary outcome was the prevalence and risk factors of anxiety and depression in Chinese PN patients. The secondary outcome was the possible impact of psychological burden on the management of PNs. Statistical analyses First, we calculated the prevalence and severity of patients' anxiety and depression. Second, we used univariate analysis to identify which demographic or clinical variables were significant for the prediction of positive screenings of HADS in Chinese PN patients. Simple frequency, mean, standard deviation, median, and range were used to statistically describe the characteristics of participants according to type of variable. A chi-square test was used for bivariate analysis. For continuous variables, an independent t-test and a Mann-Whitney U test were used to determine the significance of differences between groups in continuous variables based on whether the data distribution is a parametric or nonparametric procedure, respectively. Then we analyzed the possible impact of psychological burden on the management of PNs. All participants were grouped into patients with more or less aggressive PNs management according to the definition above. We used a Mann-Whitney U test to compare the parameters of patients in different groups. A chi-square test was then used to compare the difference in the proportion of more or less aggressive management between patients with different HADS screening outcomes. Next, we compared the rates of benign or precursor disease resected in patients with more and less aggressive management using a chi-square test. A value of p < 0.05 was considered statistically significant and all p-values were two-tailed. All statistical procedures were conducted using IBM SPSS (v. 26.0) software for MAC. RESULTS A total of 451 patients completed the questionnaires. Among these, 58 patients were discharged before lung resection. Moreover, 59 patients had missing information in the questionnaires. In total, 334 patients participated in our study. Prevalence and severity of anxiety and depression In total, 17.96% (n = 60) of our participants screened positive for anxiety, 20.00% (n = 12) of which were severe. A total of 14.67% (n = 49) of the patients screened positive for depression, 12.24% (n = 6) of which were severe. 
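Based on the description of the HADS in the Methods above (two seven-item subscales, each item scored 0-3, a positive threshold of nine points, and scores over 15 regarded as severe), a simple scoring helper might look like the sketch below. The item responses are hypothetical, and the exact cut-off convention (taken here as scores of 9 or more counting as positive) is my reading of "threshold set at nine points".

```python
# Sketch of HADS subscale scoring as described in the Methods; values are hypothetical.
def score_subscale(items):
    assert len(items) == 7 and all(0 <= v <= 3 for v in items)
    total = sum(items)
    if total > 15:
        level = "severe"
    elif total >= 9:
        level = "positive"
    else:
        level = "negative"
    return total, level

anxiety_items = [2, 1, 2, 1, 2, 1, 2]       # hypothetical responses
depression_items = [1, 0, 1, 1, 0, 1, 1]
print(score_subscale(anxiety_items))        # (11, 'positive')
print(score_subscale(depression_items))     # (5, 'negative')
```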
Positive predictor of HADS-anxiety and the HADS-depression screening Demographic and clinical characteristics were analyzed to identify the positive predictors of anxiety screening results (Table 1). There was a significantly higher positive rate of anxiety screening in female patients (χ 2 = 4.621, p = 0.032). Patients with a higher education background were also found to be more vulnerable to anxiety (p = 0.069). The same data analysis process was used for the depression subscales ( Table 2). The results of the univariate analysis revealed that only sex was a risk factor for positive depression screening (χ 2 = 4.839, p = 0.028). Comparison of patients with more or less aggressive management According to the definition of more aggressive management, 44.01% (n = 147) of our patients adopted more aggressive management and 55.99% (n = 187) adopted less aggressive management. Our definition differentiates the two groups clearly: patients from the more aggressive group had statistically shorter durations from the detection of PNs to diagnosis (2 vs. 3 months, p = 0.017), shorter intervals between two follow-up CT scans (1 vs. 3 months, p = 0.008), and smaller diameters of PNs at the time of biopsy/resection (8 vs. 16 mm, p < 0.01) ( Table 3). Regarding the pathological diagnosis of PNs, 53.06% (n = 78) patients in the more aggressive management group had malignant nodules (including primary lung neoplasms and metastasis malignacy) and 46.94% (n = 69) had benign or precursor disease (including 14 adenocarcinomas in situ [AIS] and 14 atypical adenomatous hyperplasia [AAH]). In patients with less aggressive management, 90.37% (n = 169) had malignant nodules and 9.63% (n = 18) had benign or precursor disease (including 3 AIS and 2 AAH). The rate of benign or precursor disease biopsied/resected was significantly higher in patients with more aggressive management (46.94% vs. 9.63%, p < 0.01) ( Table 3). Impact of anxiety and depression on the management of PNs Among patients screened positive for anxiety, the proportion of patients with more aggressive PNs management was significantly higher than that among patients screened negative (34/60 vs. 113/274, p = 0.029). However, there was no statistical difference between the proportion of patients with more aggressive PN management in those screened positive or negative for depression (26/49 vs. 121/285, p = 0.167). DISCUSSION Psychological burden, including anxiety, depression, and cancer-related distress, is common in patients with screened or incidentally detected PNs worldwide. Byrne et al. 1 reported that state anxiety appeared in individuals with either indeterminate or suspicious screening results; Clark et al. 2 reported that in the Dutch-Belgian Randomized Lung Cancer Screening Trial (NELSON trial) increased cancer-specific distress appeared in those with indeterminate results. Our study revealed that anxiety and depression were also common in Chinese patients with PNs, and the positive rate of anxiety and depression screening in our PN patients was high compared with that reported in Chinese patients with other diseases. 21 Concerning the possible risk factors of psychological burden in Chinese PN patients, our study revealed that the only positive predictor of both anxiety and depression screenings was sex. As reported by the Anxiety and Depression Association of America, the prevalence of any anxiety disorder in women was twice as high as that in men (23.4% for women and 14.3% for men), which is consistent with the results of our study. 
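The association between positive anxiety screening and more aggressive management reported above can be re-derived from the published counts (34/60 vs. 113/274). The snippet below is illustrative only; the reported p = 0.029 matches a chi-square test without Yates' continuity correction, whereas SciPy applies the correction to 2×2 tables by default.

```python
from scipy.stats import chi2_contingency

# Rows: HADS-anxiety positive (n = 60) / negative (n = 274)
# Columns: more aggressive / less aggressive PN management
table = [[34, 60 - 34],
         [113, 274 - 113]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # roughly chi2 = 4.75, df = 1, p = 0.029
```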
22 In addition, Byrne et al. 1 reported that following lung cancer screenings, individuals with a higher level of education had significantly lower levels of overall state anxiety and trait anxiety than those with lower levels of education. The results of our study did not reveal any statistical difference between groups from different education backgrounds. However, patients with a higher education background were found to be more prone to anxiety (see Table 1), which contradicts the results of Byrne et al. The reason could be difference in the education system and cultural background. Our study examines the impact of anxiety and depression on PN management. We define management of PNs as more aggressive when patients choose biopsy/resection when both noninvasive CT surveillance and invasive diagnostic process are deemed appropriate as recommended in different guidelines, as demonstrated by the three criteria given in the definition section above. First, biopsy/resection right after detection of certain PNs is usually not recommended because it has been previously reported that 10.1%-26% of PNs may decrease in size, resolve or remain stable. [23][24] The median duration from detection of PNs to biopsy/resection ranges from 11-20 months based on the size and attenuation of PNs in previous studies. 23,25 Second, our definition considered the interval between two follow-up CT scans. Recent guidelines have updated in favor of a longer CT follow-up interval ranging from 3 to 12 months depending on the size and attenuation of the PNs and the risk factors of the patients. 3,[14][15] More importantly, it has been reported that a change in both solid and nonsolid PNs should be observed for at least 3 months. [23][24] Third, for subsolid nodules, biopsy/resection used to be recommended when the PNs did not resolve or decrease in the past; however, more recent guidelines have increasingly recommended that biopsy/resection should only be performed when the PNs grow, and CT surveillance should be prescribed for patients whose PNs decrease or remain stable. 23,[26][27] It is consistent with common sense that resection and CT scan "ahead-of-time" may lead to higher risk of over treatment of benign tumors, and frequent CT scans means more exposure to radiation and waste of medical resources. However, no exact number in Chinese PN patients has previously been published. In our study, the high proportion of benign or precursor disease resected/biopsied in patients with more aggressive management may provoke the controversial topic of over-diagnosis, which exists since the advocation of lung cancer screening by chest CT. In comparison, in the I-ELCAP the rate of benign disease in the surgical intervention group was 11% (54 out of 492) 28 ; in the NEL-SON, 15% (5 out of 33) resected nodules were benign 29 ; in the National Lung Screening Trial, this rate was 44% 25 ; and a retrospective cohort study published on JAMA internal medicine reported that 30.8% of the participants who underwent resection had a benign nodule. 25 The rate of benign nodules resected in patients with more aggressive management in our department is actually within the range reported above. However, with the development of the healthcare system in China, chest CT screening has been introduced into routine physical examination in aged people in order to improve the prognosis of this deadliest cancer. 
Moreover, under the current COVID-19 prevention policies, the amount of chest CT screening has increased sharply in people without risk factors for lung cancer. Without proper management of PNs, increased incidental detections may lead to more psychological burden in a larger population, as well as a significant waste of medical resources. These results call for a nationwide consensus on standardized management of PNs, and for systematic cooperation of different disciplines and medical centers in the management of PN patients, which is consistent with the demand in PN management worldwide. 2,25 In particular, the mental health of patients is not fully addressed in current PN management. The results of our study revealed that among patients screened positive for anxiety, the proportion of more aggressive PN management was significantly higher than that of those screened negative. Although the etiology of anxiety after PN detection remains unclear, it has been widely accepted that anxiety arises from intolerance of uncertainty. 5 At PN detection, most patients are prone to overestimate the cancer risk of PNs and to equate PNs with lung cancer, and these are stressful stimuli for anxiety. 1,6-7 Instead of invasive surgery to remove the stimuli, noninvasive methods to improve patients' tolerance of PNs could also be a choice to relieve anxiety. Koroscil et al. 8 reported that an easy-to-understand fact sheet on the etiology, malignancy risk and medical consequences of PNs would improve understanding and decrease patient anxiety. There are several limitations in our study. First, since this was a cross-sectional study, which is insufficient for establishing causation, we could only infer a possible cause-and-effect relationship between psychological burden and more aggressive PN management. The ideal design for this question is a prospective cohort study in a screening population in which the psychological status of all patients is evaluated after detection of PNs, and the percentages of more and less aggressive management are then compared between groups with positive and negative psychological screening outcomes. Further studies that dynamically evaluate the psychological status of PN patients are needed to determine whether surgical interventions could relieve PN patients' psychological burden, and to identify the characteristics of patients who may benefit psychologically from PN resection. Second, this study was single-center and only included eligible patients, which may have led to selection bias. Third, the psychological problems found in our interviews were only screened for, and remain undiagnosed and untreated; in the biopsychosocial medical model, we should cooperate with professional psychiatrists to provide multidisciplinary care for patients with PNs.
2021-10-16T06:16:36.748Z
2021-10-15T00:00:00.000
{ "year": 2021, "sha1": "3a544d621377c21fc716dce567d85ff5f91f96e0", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1759-7714.14165", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d130ddd9c48af4bcfd8de2edb938e4742115b8b2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
264997517
pes2o/s2orc
v3-fos-license
Facile synthesis of telechelic poly(phenylene sulfide)s by means of electron-deficient aromatic sulfonium electrophiles We report the facile synthesis of telechelic poly(phenylene sulfide) (PPS) derivatives bearing functional groups at both termini. α,ω-Dihalogenated dimethyl-substituted PPS were obtained in high yield with a high degree of end-functionalization by using soluble poly(2,6-dimethyl-1,4-phenylenesulfide) (PMPS) and 4,4′-dihalogenated diphenyl disulfide (X-DPS, X = Cl, Br) as a precursor and an end-capping agent, respectively. Further end-functionalization is achieved through cross-coupling reactions; particularly, the Kumada–Tamao cross-coupling reaction of bromo-terminated telechelic PMPS and a vinylated Grignard reagent afforded end-vinylated PMPS with thermosetting properties. This synthetic approach can be applied to the preparation of various aromatic telechelic polymers with the desired structures and functionalities. Scheme S1 In a 10 mL flask, Cl-DPS (0.82 g, 2.9 mmol) was dissolved in dichloromethane (6 mL), and DDQ (0.623 g, 2.7 mmol) and TFA (383 μL, 5 mmol) were added with stirring. Although the color of the solution turned deep green after starting the reaction (i.e., charge-transfer complexation of Cl-DPS and DDQ), no products were obtained after 15 hours of reaction, owing to the presence of p-substituted units as well as the electron-withdrawing properties of the chlorine groups, which prevented the electrophilic substitution reaction. Reduction of Cl-Terminated PMPS for end structure detection Disulfide-reduced Cl-terminated telechelic PMPS (Red-Cl-PMPS) was prepared following the procedure of our previous report 1 with several modifications (Scheme 2 in the main text). In a two-necked 50 mL flask, Cl-PMPS (0.16 g) was dissolved in THF (5.4 mL). After sodium borohydride (0.15 g) was added, methanol (0.6 mL) was added and the mixture was stirred at 70 °C under reflux for 15 hours. After the reaction, the solution was precipitated in methanol containing 5 vol% hydrochloric acid (200 mL), and the precipitate was collected by filtration, washed with methanol and water, and dried in vacuo to obtain Red-Cl-PMPS (0.15 g, yield: 92%). Determination procedure of end-functionalization degree In this study, the degree of end-functionalization was determined as follows. First, Mn was calculated by both 1H NMR (comparison of peak integrals for aromatic and methyl protons) and SEC in chloroform. As 1H NMR can detect both proton- and chlorine-terminated end groups, including their numbers, whereas SEC cannot distinguish them, the end-functionality (degree of end-functionalization) was finally determined according to equation (1) shown below. The degree of end-functionalization for V-PMPS' was also determined with a similar procedure (for V-PMPS', Mn (NMR) was determined by comparing the integrals of vinyl and methyl protons). Fig. S9 1H NMR spectrum of Br-DPS in chloroform-d.
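As a quick sanity check on the quantities quoted in Scheme S1, the molar amounts can be recomputed from the stated masses (or volume). This is only an illustrative calculation; the molecular weights and the TFA density are standard literature values assumed here, not figures taken from the excerpt.

```python
# Recompute the molar amounts quoted in Scheme S1.
# Molecular weights (g/mol) and TFA density (g/mL) are standard values (assumed).
MW_CL_DPS = 287.2   # 4,4'-dichlorodiphenyl disulfide (Cl-DPS), C12H8Cl2S2
MW_DDQ    = 227.0   # 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ), C8Cl2N2O2
MW_TFA    = 114.0   # trifluoroacetic acid, CF3COOH
D_TFA     = 1.49    # g/mL

mmol_cl_dps = 0.82  / MW_CL_DPS * 1000        # ~2.9 mmol, matching the quoted value
mmol_ddq    = 0.623 / MW_DDQ    * 1000        # ~2.7 mmol
mmol_tfa    = 0.383 * D_TFA / MW_TFA * 1000   # 383 uL of TFA corresponds to ~5.0 mmol

print(f"Cl-DPS {mmol_cl_dps:.2f} mmol, DDQ {mmol_ddq:.2f} mmol, TFA {mmol_tfa:.2f} mmol")
```

All three values reproduce the quoted molar amounts.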
2023-11-05T05:15:18.362Z
2023-10-31T00:00:00.000
{ "year": 2023, "sha1": "d0457fa12c372de77a6b948d2f0062b72847fa03", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "d0457fa12c372de77a6b948d2f0062b72847fa03", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
88647370
pes2o/s2orc
v3-fos-license
Pharmacological role of atorvastatin in myocardium and smooth muscle progenitor cells ABSTRACT INTRODUCTION Smooth muscle cells are essential for the function of the vasculature and myocardium. By contraction and relaxation, they modify the luminal diameter, which enables blood vessels to maintain a proper blood pressure. The increased growth potential of vascular smooth muscle cells represents one of the crucial anomalies responsible for the development of hypertension and atherosclerosis, which leads to cardiovascular disease (CVD). 1 Although effective statins are available, the prevalence of CVD remains high. 2 Atorvastatin therapy is an effective way of reducing cholesterol levels and could thus reduce the development of cardiovascular events by decreasing both inflammatory activity and atherogenic lipoproteins. 3 Statin-mediated anti-inflammatory effects may contribute to the ability of atorvastatin to reduce the risk of CVD. This review focuses on the benefits of atorvastatin related to smooth muscle proliferation and the myocardium. Vascular smooth muscle cells (VSMCs) Vascular smooth muscle cells (VSMCs) are the cellular components of the normal blood vessel wall that give it structural integrity and manage its diameter by contracting and relaxing dynamically in response to vasoactive stimuli. 4 VSMCs are also involved in vessel remodeling in physiological conditions such as pregnancy and exercise, or after vascular injury. 5 Atorvastatin and smooth muscle proliferation VSMCs are essential for maintaining vasculature homeostasis and function. Several studies have shown that statins attenuate vascular proliferative disease, for example, transplant-associated arteriosclerosis. 6 Chronic treatment with atorvastatin directly decreases mitogen-induced nuclear Ca2+ mobilization. 7 In aortic smooth muscle cells, atorvastatin and mevastatin notably inhibit the mRNA expression of endothelial ET(A) and ET(B) receptors. Furthermore, specific antagonists of ET(A) and ET(B) receptors significantly inhibited smooth muscle cell proliferation. It has been suggested that endothelial receptors and the mevalonate pathway are involved in smooth muscle cell proliferation induced by bFGF. 8 The findings of Bruemmer et al. revealed that minichromosome maintenance (MCM) proteins play a vital role during the proliferation of vascular smooth muscle cells. 9 Inhibition of MCM6 and MCM7 expression through the blocking of E2F function may contribute importantly to the inhibition of vascular smooth muscle cell DNA synthesis by atorvastatin. The results of Chandrasekar et al. indicate that the proatherogenic cytokine interleukin-18 (IL-18) induces human coronary artery smooth muscle cell migration in a matrix metalloprotease (MMP-9)-dependent manner.
10 Atorvastatin suppresses IL-18-mediated aortic smooth muscle cell migration and has therapeutic benefits for attenuating the development of atherosclerosis and restenosis. Erythropoietin directly stimulates the proliferation of vascular smooth muscle cells. Erythropoietin-induced proliferation in rat VSMCs was inhibited by statins through their inhibition of HMG-CoA reductase activity. 11 Lipophilic statins exert direct effects on distal human pulmonary artery smooth muscle cells, and these effects are likely to involve inhibition of Rho GTPase signaling. 12 Atorvastatin inhibition of periostin expression induced by transforming growth factor-β (TGF-β1) in VSMCs may be exerted by inhibition of the production of mevalonate and other isoprene compounds and by blocking the Rho/Rho kinase signaling pathway. 13 Leptin contributes to the pathogenesis of atherosclerosis. Angiotensin II increases leptin synthesis in cultured adipocytes. Statins decrease leptin expression in adipocytes and human coronary artery endothelial cells. Angiotensin II induces leptin expression in human VSMCs, and atorvastatin can suppress the leptin expression induced by angiotensin II. Rac, reactive oxygen species (ROS) and JNK pathways mediate the inhibitory effect of atorvastatin on angiotensin II-induced leptin expression. 14 Recently, it has been suggested that statins may also modulate VSMC activation through their influence on the renin-angiotensin system. Ang-(1-7) was identified as a major product of Ang I metabolism in VSMC culture. In this setting, tumor necrosis factor alpha (TNF-α) decreases the conversion of Ang I to Ang-(1-7). Interestingly, atorvastatin attenuated the effects of TNF-α on Ang-(1-7) production as well as reversing the influence of TNF-α on angiotensin converting enzyme and angiotensin converting enzyme 2 expression. Atorvastatin enhancement of the ACE2/Ang-(1-7) axis in VSMCs could signify a new and favourable mechanism of cardiovascular action. 15 Atorvastatin and its effects on the myocardium Cardiac hypertrophy is an adaptive response of the heart to pressure overload. In the myocardium, the small GTP-binding proteins Rho, Rac and Ras, as well as oxidative stress, are involved in the hypertrophic response. 16 Animal studies have emphasized that a phagocyte-type NADPH oxidase may be a significant source of ROS in the myocardium. 17,18 NADPH oxidase-dependent ROS production appears to be involved in cardiac hypertrophy in response to pressure overload, stretch, angiotensin II infusion and α-adrenergic stimulation. [19][20][21][22] Indeed, statins inhibit oxidative stress and cardiac hypertrophy in angiotensin II-treated rodents. 23 This has also been demonstrated in clinical studies, where statins inhibit cardiac hypertrophy in hypercholesterolemic patients. 24 ROS generated by NADPH oxidase are increased in the left ventricular myocardium of individuals with heart failure and correlate with an increased activity of Rac1 GTPase; treatment with statins decreases Rac1 function in the human heart. 25 Atorvastatin attenuates lethal reperfusion-induced injury in a manner contingent on the activities of PI3K and Akt as well as the presence and activity of eNOS. 26 The Scandinavian Simvastatin Survival Study (4S) suggests that statins reduce the incidence and morbidity of heart failure. 27 Patients with heart failure are characterized by augmented vascular tone as well as endothelial dysfunction, which may be improved by statin therapy.
Statins have been shown to maintain cardiac function in animal models of heart failure and myocardial hypertrophy. 28,29 The results of Chen et al. provide novel in vivo evidence for the key role of Connexin43 gap junctions in left ventricular hypertrophy and for a possible mechanism underlying the anti-hypertrophic effect of statins. 30 These findings suggest that statins have therapeutic benefits in patients with heart failure or atherosclerotic heart disease. CONCLUSION Atorvastatin exerts positive effects by restoring smooth muscle cell function, thus promoting normal vasculature homeostasis. It also improves cardiac function and contributes to the preservation of the myocardium, which helps decrease the risk of CVD.
2019-04-01T13:14:31.183Z
2016-12-30T00:00:00.000
{ "year": 2016, "sha1": "cd4c015adc43f1bee2df03e93b94d3e432e32455", "oa_license": null, "oa_url": "https://www.ijbcp.com/index.php/ijbcp/article/download/307/287", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "de0229225cadb8d1986b81e38f4d1a5af3d52216", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
16337664
pes2o/s2orc
v3-fos-license
Time Machines with Non-compactly Generated Cauchy Horizons and ``Handy Singularities" The use of"handy singularities"(i.e. singularities similar to those arising in the Deutch-Politzer space) enables one to avoid (almost) all known difficulties inherent usually to creation of time machines. A simple method is discussed for constructing a variety of such singularities. A few 3-dimensional examples are cited. It is 10 years now that time machines (TM's) are intensively studied, but the main question, of whether or not TM's can be created remains still unanswered. Most investigations so far have centered on TM's with compactly generated Cauchy horizons (compact TM's, or CTM's, for brevity) [1]. It has been understood that creation of such TM's is connected with at least two serious problems: (1) There are some "dangerous" null geodesics in the causal regions of CTM's. A photon propagating along such a geodesic would return infinitely many times (each time blue-shifted) in the vicinity of the Cauchy horizon [2]. This suggests that quantum effects could prevent the creation of the TM. It is still not clear whether they will [3,4], but they might. (2) Creation of a CTM inevitably involves violations of the Weak Energy Condition. In this connection it is common to refer to quantum effects, but here again some restrictions exist [5]. There is a little hope that any of these issues will be completely clarified in the foreseen future, since they both involve QFT in curved background which has a lot of its own unsolved problems. It well may be, however, that actually we need not clarify them. Indeed, why must we bound ourselves to compact TM's? The only answer I met in the literature is that noncompactness implies that either infinity or a singularity [6] are involved. So, "extra unpredictable information can enter the spacetime" [1] and we can no more completely control such a TM. This is true, indeed, but the point is that this in no way is the distinctive feature of "noncompact" TM's (NTM's). In any time machine we encounter extra unpredictable information as soon as we intersect the Cauchy horizon (by its very definition) and so for none of TM's the evolution can be completely controlled from the initial surface. So, we conclude that NTM's are not a bit worse than CTM's and preference given to the latter is a sheer matter of tradition [6]. Meanwhile, the example of the Deutsch-Politzer spacetime [7] (DPS) shows that to create an NTM we need neither "dangerous geodesics", nor "exotic matter". (Of course the problem remains of how to cause a spacetime to evolve in the appropriate manner but as noted above this does not depend on compactness.) The DPS is obtained as follows. A cut is made along a spacelike segment (i. e. disk D 1 ) on the Minkowski plane. A copy of the cut is made to the past from the original one. The boundaries of the segments (i. e. 4 points, or two copies of S 0 ) are removed from the spacetime and the banks of the cuts are glued, the upper bank of each cut is glued to the lower bank of the other cut. The four removed points cannot be returned back and form thus irremovable singularities. What enables us to render the abovementioned theorems harmless in the case of the DPS is the presence of these quite specific singularities and what makes them so handy is the following 1. Unlike Misner-type singularities, they make the relevant region noncompact, 2. They are absolutely mild (i. e. 
all curvature scalars are bounded) and so there is no need to invoke quantum gravity to explore these singularities, 3. And they are of laboratory, rather than of cosmological nature. That is, they are confined in such a region $R$ of the spacetime $M$ that $M - R$ = (a "good" spacetime) $-$ (a compact set). All the above suggests that singularities possessing these properties (we shall call them handy singularities) are worth studying. Twenty years ago Ellis and Schmidt [8] constructed several singularities satisfying (1) and (2), but not (3). Besides, those singularities occurring in Minkowski spaces quotiented by discrete isometries were too symmetric, which was interpreted as instability. Recently an $n$-dimensional analog of the DPS was obtained [9] by the replacement (1) in the procedure described above. So, we know that $n$-dimensional handy singularities exist, but that is all we know. And now I would like to propose a simple trick (just generalizing that from [9]) for constructing quasiregular singularities (including the previously known), which yields at the same time a variety of handy singularities. 2. Now make a cut along $S$ in $M$, that is, consider $M_S \equiv p(M)$. Note that $S_0 \equiv M_S - p(M)$ is a double covering of $S$: $\pi(S_0) = S$. If $S$ is orientable, $S_0$ is just a disjoint union of two copies of $S$ (the two "banks" of the cut): $S_+$ and $S_-$. The projection $\pi$ induces a nontrivial isometry $\sigma : S_0 \to S_0$ enabling one to return to $M$ from $M_S$ by "gluing the banks". Namely, $M_S/\sigma = M - C$. 3. As the third step take an isometry $\eta : R \to R'$, and repeat the above procedure with $M$ replaced by $M_S$ and $S$ replaced by $S' \equiv \eta(S)$. The resulting space $M_{SS'}$ is $M$ with two cuts made (along $S$ and along $S'$), each taken with its "banks". The desired spacetime $N$ can be obtained now by the appropriate identification (2), where $\xi \equiv \sigma \circ \eta$ (rigorously speaking, instead of $\eta$ we should have written some $\eta'$ in (2), where $\eta'$ is the continuous extension of $p \circ \eta$ on $M_{SS'}$; we shall neglect such subtleties for simplicity of notation). In the orientable case, (2) simply means that we must glue $S_\pm$ to $S_\mp$. When $\eta$ is nontrivial, we cannot return $C$ back, and $N$ thus contains a handy singularity "in the form of" $C$. We can also use in (2) any other isometry $\xi \neq \sigma \circ \eta$ (see example (d) below). If $S$ is chosen to be a disc $D^{n-1}$ in the Minkowski space and $\eta$ to be a translation, we obtain the DPS (cf. (1)); but applied to different $M$, $S$, and $\eta$, the same procedure will give us a lot of quite different "handy singularities". 3-dimensional examples. In what follows, $M$ for simplicity is taken to be the flat space $\mathbb{R}^3$ and $z, \rho, \varphi$ are the cylinder coordinates in it. (a) Let $C$ be an arbitrary knot (with $S$ being its Seifert surface) and $\eta$ be a translation. Then $N$ is what is called a "loop-based wormhole" in [10]. (b) Let $S$ be a disk $z = 0$, $\rho < \rho_0$ and $\eta$ be a rotation $\varphi \to \varphi + \varphi_0$ (note that $S = S'$ thus). In this case $N$ has quite a curious structure. It is diffeomorphic to $M - C$, any simply connected region of $N$ is isometric to some region of $M - C$ and vice versa, and still globally they are not isometric. Similarly to the conical case, one can think of $N$ as a space $\mathbb{R}^3$ having a delta-like Riemannian tensor with support in $C$ (in contrast to $M$, where the Riemannian tensor is zero even in the distributional sense). (c) Take a rectangular strip. Turn one of its ends through $n\pi$ and then glue it to the other (so that a cylinder is obtained if $n = 0$, and the Möbius band if $n = 1$). Take the resulting surface for $S$.
$S$ can be specified by the condition $0 < r < 1/2$, $\theta = \pm n\varphi$. Here $r$ and $\theta$ are new coordinates. For any point $A$, $r(A)$ is defined to be the distance from $A$ to $a$, and $\theta(A)$ to be the angle between the $z$-axis and the direction $(Aa)$, where $a$ is the point $z(a) = 0$, $\rho(a) = 1$, $\varphi(a) = \varphi(A)$. For a (local) isometry one can take $\eta : \varphi \to \varphi + \varphi_0$, $\theta \to \theta + n\varphi_0$. When $n = 3$, $C$ is a knot (trefoil). (d) Now change $\mathbb{R}^3$ to $\mathbb{R}^3 \setminus \{0\}$ in example (b) and take $\xi$ in (2) to be the reflection $r \to -r$. Thus obtained, $N$ has a singularity generated by the circle $C$ and two more corresponding to the removed $\{0\}$ (it also cannot be returned back). The latter, though being handy singularities, have locally the same structure as a singularity considered in [8].
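For concreteness, the two-dimensional Deutsch-Politzer gluing described in the introduction can also be written out explicitly. The block below is only a sketch: the coordinates $(t, x)$ and the parameters $T$ (the cuts sit at $t = \pm T$) and $L$ (half-length of the cut segments) are notational choices, not quantities fixed in the text.

```latex
% Minkowski plane with coordinates (t, x); cuts along the spacelike segments
% t = -T and t = +T, |x| < L, with the four endpoints (\pm T, \pm L) removed.
% The upper bank of each cut is glued to the lower bank of the other:
\begin{align*}
  (t = -T,\ x)_{\mathrm{upper}} &\ \sim\ (t = +T,\ x)_{\mathrm{lower}}, \\
  (t = +T,\ x)_{\mathrm{upper}} &\ \sim\ (t = -T,\ x)_{\mathrm{lower}},
  \qquad |x| < L .
\end{align*}
```

With this identification, a future-directed curve that reaches the lower bank of the upper cut re-enters the region just above the lower cut, which is what produces the closed timelike curves of the DPS.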
2014-10-01T00:00:00.000Z
1997-11-12T00:00:00.000
{ "year": 1997, "sha1": "67adba1dfc72a02ee5c2e3e9d4c45bbf29b8af45", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "67adba1dfc72a02ee5c2e3e9d4c45bbf29b8af45", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
229470445
pes2o/s2orc
v3-fos-license
Cribrilinid bryozoans from Pleistocene Mediterranean deep-waters, with the description of new species Abstract Cribrilinid bryozoans originating from Pleistocene deep-water sediments from two localities near Messina (Sicily, Italy)—Capo Milazzo (Gelasian) and Scoppo (Calabrian)—were examined. Five cribrilinid species were found, three in each locality and time interval, with only one species shared. Three species, Cribrilaria profunda n. sp., Glabrilaria transversocarinata n. sp., and Figularia spectabilis n. sp., are new to science. Of the two remaining species, Figularia figularis was already known from local fossil associations, whereas Glabrilaria pedunculata, a present-day Mediterranean species, is recorded for the first time as a fossil. New combinations are suggested for two species previously assigned to Puellina, Cribrilaria saldanhai (Harmelin, 2001) n. comb. and Cribrilaria mikelae (Harmelin, 2006) n. comb. The diagnosis of the genus Figularia was amended to include an erect growth morphology in addition to the encrusting form, and the occurrence of ooecia formed by the distal kenozooid. Following a literature revision of all species currently assigned to Figularia, the new combinations Vitrimurella capitifera (Canu and Bassler, 1929) n. comb. and Hayamiellina quaylei (Powell, 1967a) n. comb. are suggested, and problematic species are listed and briefly discussed. UUID: http://zoobank.org/b7b36152-bf7b-4e00-b6ec-2614b2a58f1b Introduction is an extremely large family of cheilostome bryozoans including 127 genera and more than 700 living and fossil species to date, accounting for ∼3% of total bryozoan diversity (Bock, 2020). First appearing ca. 100 Ma, in the Cenomanian, Cribrilinidae underwent a peak of diversification during the Santonian, greatly contributing to the radiation of cheilostomes in the Late Cretaceous (Cheetham, 1971;Jablonski et al., 1997 and references therein). This family is one of the most species-rich in the present-day Mediterranean (Rosso and Di Martino, 2016), as well as in other regions of the world (e.g., Gordon et al., 2019). Cribrilinids exhibit a typical and distinctive costate frontal shield, but also high morphological variability, including different types of heteromorphs (avicularia, kenozooids, articulated and non-articulated spines, etc.) and ovicell structures. A future subdivision of Cribrilinidae into several families or subfamilies is very likely. A more accurate definition of certain genera will, however, require a thorough re-examination of the original material, particularly of the numerous Cretaceous representatives (e.g., Taylor and McKinney, 2006;Rosso et al., 2018), as well as phylogenetic analyses. Genus and species identification are often based on subtle morphological characters, such as those associated with the zooidal orifice and the suboral bar (e.g., Harmelin, 1970Harmelin, , 1978Harmelin, , 2001Harmelin, , 2006Bishop and Househam, 1987), which require scanning electron microscopy (SEM), still lacking in the descriptions of numerous taxa. In fossil material, identification of taxa is also jeopardized by taphonomic filters, with abrasion, corrosion, partial dissolution and recrystallization obliterating fine diagnostic characters. This is particularly true for species introduced in old publications, normally including only brief descriptions and often lacking proper illustrations. 
Descriptions and revisions of fossil cribrilinids based on detailed illustrations are scarce in the modern literature, especially for specific stratigraphic intervals (Berning, 2006;Taylor and McKinney, 2006;Di Martino and Rosso, 2015). In this context, this paper aims to: (1) document cribrilinid associations from Pleistocene deep-water habitats of southern Italy; (2) illustrate fossil representatives of some established species; (3) describe three new species; (4) amend the diagnosis of the genus Figularia Jullien, 1886, and provide a comparative morphological analysis of species currently assigned to this genus; and (5) propose new *Corresponding author Vertino, 2003). At Scoppo, these sediments unconformably lie on Messinian brecciated evaporitic limestone. They consist of basal rudstones rich in fragments of cold-water corals (i.e., M. oculata, D. pertusum, and D. dianthus) that are overlain by poorly cemented white marls with sparse corals and plates of the cirriped Scillaelepas Seguenza, 1876. These macrofossils, and ostracodes, point to deposition in bathyal environments (Vertino et al., 2013;Sciuto, 2016) in the MNN19b-19c biozones (A. Baldanza, personal communication, 2015), corresponding to the early Calabrian (=Santernian). Materials and methods Studied material originates from deep-water sediments cropping out in two different localities near Messina in north-eastern Sicily: Capo Milazzo Peninsula (two outcrops: Cala Sant'Antonino and Punta Mazza) and Scoppo ( Fig. 1; see Geological setting for details). Additional material used for comparison derives from a present-day submarine sample collected at the Apollo Bank off Ustica Island in the Tyrrhenian Sea ( Fig. 1). At Capo Milazzo, cribrilinid bryozoans were found in "sample 1 (1999)" collected near the top of the layers exposed at Cala Sant'Antonino West; "sample 17 (2000)" and "sample 2015" collected in the central part of Cala Sant'Antonino outcrop; and "sample 4" and "sample 5" collected in biogenic layers near the base of Punta Mazza section, corresponding to "sample 12" and "sample 11" of Sciuto (2014b), respectively. Further information on these samples can be found in Sciuto (2014b) and Rosso and Sciuto (2019). At Scoppo, cribrilinids were found in a test sample associated with a Scillaelepas-rich layer, and in the sample "Scoppo 24 top" coming from uncemented marly sediment. Rosso et al.-Pleistocene deep-water cribrilinids At the Apollo Bank, coarse sediments associated with the kelp Laminaria rodriguezii Bornet, 1888 were collected at about 60 m depth. Living and dead bryozoan associations were characterized by high species richness, but delivered only one colony (now fragmented) of Figularia figularis (Johnston, 1847) (Di Geronimo et al., 1990). Sediment was routinely treated (washed, sieved, and dried) at the Paleoecological Laboratory of the University of Catania. All bryozoans were picked from residues larger than 0.5 mm. After preliminary identification under a stereomicroscope, selected uncoated specimens were mounted for scanning electron microscopy (SEM) using a TESCAN VEGA 2 LMU in backscattered-electron/low-vacuum mode at the Microscopical Laboratory of the University of Catania. For the attribution of the specimens to the genera Cribrilaria Canu andBassler, 1929 andGlabrilaria Bishop andHouseham, 1987, we followed the diagnoses in Rosso et al. 
(2018) summarized herein: Cribrilaria has totally calcified non-pseudoporous ooecia produced by the distal autozooid or kenozooid, interzooidal avicularia of variable size and shape, usually five (4-8) oral spines, and relatively large uncalcified windows of pore-chambers; Glabrilaria has non-pseudoporous ooecia that are exclusively produced by the distal kenozooid, erect or semi-erect avicularia, 6-7 (rarely five) oral spines, small to moderately sized uncalcified windows of pore-chambers. Measurements were obtained from SEM images using the image processing program ImageJ (Schneider et al., 2012). Measurements were tabulated and provided in micrometers. The complete range is given first, followed by the mean value plus/minus standard deviation and the number of measurements taken. In specimens of Glabrilaria, zooidal boundaries were obliterated by recrystallisation with bands of crystals filling the interzooidal grooves. To estimate zooidal size, length was measured from the distal end of the orifice to the mid-point of the crystal band located proximally, while width was measured from mid-point to mid-point of the crystal bands located laterally. Repositories and institutional abbreviations.-All specimens described and illustrated in this work are part of the Rosso Collection deposited at the Museum of Paleontology of the University of Catania (PMC) under the catalogue numbers reported in the "Systematic paleontology" section. Other abbreviations: MNHN, Muséum national d'Histoire naturelle, Paris; NHMUK, Natural History Museum, London; NMNH, National Museum of Natural History, Smithsonian Institution, Washington DC. Etymology.-From the Latin profundus, alluding to its deep-water distribution. Cribrilaria scripta and C. radiata, although similar in appearance to C. profunda n. sp., have smaller zooidal dimensions and larger interzooidal avicularia, and four oral spines occur in most zooids in the latter species (Harmelin, 1970;Bishop and Househam, 1987). Recent specimens of C. scripta sensu Harmelin and Aristegui (1988) from deep waters of the Ibero-Moroccan Bay and Gibraltar Strait, are here attributed to C. profunda n. sp. based on the measurements, the presence of generally five oral spines, and presence of a robust and smooth pair of suboral costae forming a median prominence. In addition, specimens from the early Messinian of Carboneras (SE Spain) identified by J.-G. Harmelin as Puellina (Cribrilaria) scripta and mentioned in Barrier et al. (1992), without description or illustrations, might belong to C. profunda n. sp. The Recent Cribrilaria pseudoradiata from the upper bathyal Atlanto-Mediterranean region is also similar to C. profunda n. sp., but has smaller dimensions and lacks interzooidal avicularia. Cribrilaria profunda n. sp. could possibly correspond to Lepralia planicosta Seguenza, 1880, a cribrimorph species reported from Plio-Pleistocene sediments of the Messina Strait area. Seguenza (1880) distinguished his species from C. scripta, adducing that autozooids were irregularly shaped, with a flat costate shield consisting of several costae, as in C. profunda n. sp. Unfortunately, Lepralia planicosta, supposedly corresponding to Lepralia scripta sensu Manzoni (1875) from the early Pliocene of Castrocaro, was not figured and the type material was lost in 1908 during the Messina earthquake. We refrain from selecting our material as the neotype of L. 
planicosta because the original description of this species seems insufficient to ensure their conspecificity, and the type localities, although geographically close, are not exactly the same, and neither are the geologic horizons. Seguenza (1880) abstained from illustrating his new species and referred to drawings of L. scripta sensu Manzoni (1875, figs. 25, 25a). Manzoni's specimens, held in the collection of the Museo di Storia Naturale, Geologia e Paleontologia of Florence, should be located and examined before selecting a neotype for this species. Remarks.-The available specimens are worn and recrystallized, preventing recognition of some diagnostic characters. However, the morphology and morphometrics of autozooids, ooecia, and kenozooids are closely reminiscent of Glabrilaria pedunculata Gautier, 1956, although with a few small differences. The present-day Mediterranean species invariably shows six oral spines and two median pores in the triangular shelf distal to the suboral costae (Bishop and Househam, 1987, fig. 97;Harmelin, 1988, fig. 17a, c;Rosso et al., 2019a, fig. 5e, f). However, both the variability in the number of oral spines and the presence/absence of median pores are considered to be in the range of intraspecific variability in cribrilinids (e.g., C. pseudoradiata Aristegui, 1988 andG. orientalis Harmelin, 1988). The longstalked (=pedunculate) avicularia, originating from basal pore chambers in both autozooids and kenozooids, which are typical of G. pedunculata, were not observed in our fossil specimens. This is likely a taphonomic bias, because such avicularia can be easily detached even in living colonies, as observed in Glabrilaria hirsuta Rosso in Rosso et al., 2018 from the Bahama Bank. In our fossil specimens, zooidal boundaries are mostly covered by neomorphic calcite crystals that prevent the detection of the basal pore chambers from which the pedunculate avicularia are budded. However, in Figure 4.4 and 4.5 (see arrows) the pores potentially producing the avicularia lateral to the ovicell are visible. Seven oral spines were described in Glabrilaria corbula Bishop and Househam, 1987 and Glabrilaria orientalis lusitanica Harmelin, 1988, two closely related extant species reported from the Atlanto-Mediterranean region and the Gibraltar Strait area, respectively. However, the former species shows an ooecium that is formed by a distal kenozooid which is not distinguishable in frontal view, has 4-6 costae-like ridges arranged in a radial pattern, a flatter autozooidal shield with somewhat carinate costae that are sometimes with a pelma, and two large pores in the suboral shelf (Bishop and Househam, 1987;Harmelin, 1988), while the latter species lacks midline pores in the suboral shelf (Harmelin, 1988). Glabrilaria orientalis lusitanica also has semi-erect interzooidal avicularia (Harmelin, 1988) backed against the ooecium. Six to seven oral spines also occur in Glabrilaria africana (Hayward and Cook, 1983), but this species has numerous variably sized pores in the suboral shelf in addition to semi-erect avicularia associated with the ooecium and squeezed between autozooids. Etymology.-From the Latin transversus, meaning transversely placed, and carina alluding to the typical median crest of the ooecium. Remarks.-The co-occurrence of a prominent transverse ridge on the ooecium and a bifid suboral mucro is distinctive of this species. Ooecia with a transverse ridge are known in a few species only. 
One is the extant Glabrilaria hirsuta Rosso in Rosso et al., 2018 from the Bahama Bank, in which the ridge is, however, very arched to subtriangular and equipped with prominent spine-like processes (Rosso et al., 2018). Furthermore, in G. hirsuta, the number of oral spines (six, four persisting in ovicellate zooids) occasionally increases to seven, the costae have more obvious spine-like processes at the periphery of the frontal shield, the suboral costae form a transverse spiny crest proximal to the orifice, and kenozooids arranged in rows or clusters are very common (Rosso et al., 2018). In the extant Glabrilaria cristata (Harmelin, 1978) from the Hyères and Meteor banks south of the Azores, the ooecial ridge is extremely protruding and situated more proximally towards the orifice, contributing to form a sort of spiny collar around the orifice together with the second pair of suboral costae. These costae bear cockscomb-like spines that are still present but smaller than those on the other pairs (Harmelin, 1978). Oral spines are invariably seven in this species. Measurements of Glabrilaria transversocarinata n. sp. generally overlap with those of G. cf. G. pedunculata from Capo Milazzo (Table 2), but tend towards the higher values, sometimes exceeding the upper limit. The only exception is the size of the kenozooid, which seems to be smaller, although only based on a single measurement. However, morphological differences, including the number of oral spines, shape of costae, suboral lacuna and ooecia, and the rarity of kenozooids, distinguish the two species. The two colony fragments available are detached from the substratum, a common feature for bryozoan specimens found in the Capo Milazzo "yellow marl." This may indicate either that the substratum was organic or that selective aragonitic dissolution took place during/before fossilization. Amended diagnosis.-Colony commonly encrusting, but erect, fan-shaped, or developing erect lobes in some species. Autozooids with variably developed gymnocyst, usually wider proximally; costate shield formed by few to numerous (up to 30) costae, each bearing a pelma (circular to drop-shaped or transversely elongated) varying in size and position. Orifice with well-developed poster and condyles, dimorphic and typically larger in ovicellate zooids. Oral spines absent. Avicularia, when present, vicarious, elongate, and often spatulate, with complete crossbar. Ovicells hyperstomial or subimmersed, cleithral. Ooecium formed by the distal autozooid or kenozooid (sometimes in the same colony), bilobate, consisting of two very large, modified costae, arched and meeting in the midline to form a suture and/or carina; each costa with a wide fenestra. Interzooidal communication via mural pore chambers in the transverse walls and multiporous septula in the lateral walls. Ancestrula only observed in the type species, wider than autozooids, subcircular, with narrow gymnocyst encircling an extensive opesia with differentiated orifice; no spines. Remarks.-The finding of a new species having morphological skeletal features fitting into the genus Figularia Jullien, 1886, but characterized by erect colony form and a very distinctive and large ooecium formed by a distal kenozooid, led to the examination of species currently placed in this genus (Tables 3, 4). Figularia was introduced by Jullien (1886, p. 
608) who designated Lepralia figularis Johnston, 1847, an Atlanto-Mediterranean extant species, as the type species of the genus, and included an additional fossil species Lepralia elegantissima based on the unique drawing available (Seguenza, 1880, p. 83, pl. 8, fig. 11). This latter species, depicted with oral spine bases, is more likely to be a species of Cribrilaria (see also Remarks on Cribrilaria profunda n. sp.). Oral spines are absent in the type species F. figularis (see Soule et al., 1995, fig. 45C), as well as in all living and fossil specimens found to date (e.g., Figs. 6, 7). The absence of oral spines has also been reported almost consistently in the diagnosis of the genus, with only a few exceptions (e.g., Gordon, 1984). Further diagnostic characters include a complete crossbar in the vicarious avicularia, and the presence of large, symmetrical ectooecial fenestrae and a median carina in the ooecium (see Soule et al., 1995;Hayward and Ryland, 1998;Kukliński and Barnes, 2009;Yang et al., 2018). Rosso et al.-Pleistocene deep-water cribrilinids The erect colony-form has never been mentioned in the generic diagnosis before. However, Busk (1884, p. 132) described Figularia philomela as "free; erect or decumbent (hemescharan)." Subsequently, Hayward and Cook (1979, p. 76) found a bilaminar fragment of F. philomela interpreted as part of an erect foliaceous colony possibly arising from an encrusting phase (var. adnata of Busk, 1884). Gordon (1989, p. 15, 16) recorded the occasional occurrence of an erect bilamellar lobe, arising from the adjacent encrusting zooids, in a colony of Figularia mernae Uttley and Bullivant, 1972 from Puysegur Bank, off the South Island of New Zealand. The fanshaped colonies of the newly discovered Figularia species from Capo Milazzo, although often fragmentary (Fig. 8), show a configuration comparable to that observed in F. mernae, with basal zooids elongated and arranged in back-to-back adjacent pairs ( Fig. 8.1, 8.2, 8.6). The lack of a costate frontal shield, with no obvious evidence of breakage, in several proximal/basal zooids, suggests that simplified polymorphs, reminiscent of those in Corbulipora MacGillivray, 1895 (see Bock and Cook, 2001) may occur. However, the raising of the erect fan-shaped portions from an encrusting phase is doubtful until encrusting colonies, or at least isolated encrusting zooids, are found. The ooecium in Figularia is generally described as bivalved/bifenestrate (Ostrovsky, 2013). In F. figularis, the prominent bilobate ooecium is formed by the distal autozooid, with two costae meeting in the midline leaving a suture and/or forming a slightly raised carina; each costa bearing a large, irregularly shaped and transversely elongate fenestra (membranous area in non-cleaned specimens). The colony fragment of F. figularis from the Apollo Bank (Tyrrhenian Sea, Mediterranean) shows that ooecia formed by the distal kenozooid can co-occur in the same colony in this species (Fig. 6). Though uncommonly reported, and here recorded in F. figularis for the first time, the co-occurrence of ooecia produced by the distal autozooid and kenozooid is known in other cribrilinids, such as Cribrilina punctata (Hassall, 1841), "Puellina" harmeri Ristedt, 1985 (see also discussion in Rosso et al., 2018), Cribrilaria innominata (Couch, 1844) (see Chimenz Gusso et al., 2014), The kenozooid producing the ooecium in F. 
figularis shows a crescent-shaped shield of short radial costae, each with a single pelma as in the autozooids, but also with a single intercostal pore (Fig. 6). The same structure is also evident in the fossil species from Capo Milazzo (Fig. 9). Ovicells with ooecia formed by the distal kenozooid also occur in other species currently assigned to this genus, based on examination of available SEM images and, to a lesser extent, drawings (see Table 3). Ostrovsky (2013, fig. 1.28A) illustrated sectioned decalcified ovicells of F. figularis in which most of the brood cavity is situated in the proximal part of the distal zooid predominantly below the colony surface, thus corresponding to endozooidal type. Whether this position of the brood cavity was an effect of decalcification of the skeleton (and, thus, sagging of the originally raised ooecium) during preparation for sectioning is currently not clear, but this contradicts most descriptions showing hyperstomial ovicells in this species (see references above). Still, a degree of the brood cavity immersion may vary, and, for example, both hyperstomial and subimmersed ovicells are known within the genus Figularia, and hyperstomial, subimmersed, and endozooidal ovicells are described in the different species of Puellina (Ostrovsky, 2013). Subimmersed ovicells were present in Recent colonies of F. figularis from the Mediterranean (A. Ostrovsky, personal observations). Ostrovsky and Taylor (2005) noted the occurrence of species of Figularia-F. clithridiata (Waters, 1887), F. tahitiensis Waters, 1923, andF. pulcherrima Tilbrook, Hayward andGordon, 2001-having costate ooecia (see also Ostrovsky, 2002). Winston et al. (2014) remarked that the occurrence of costate ooecia in F. pulcherrima possibly suggests a better allocation of this species in a distinct genus. Inclusion of costae in the Journal of Paleontology 95(2):268-290 construction of the ooecium has also been observed in Figularia hilli (Osburn, 1950), with two small costae similar to those of the frontal shield added proximally to the larger ooecial halves (see Table 3). Yang et al. (2018), while including pseudoporous ooecia in the diagnosis of Figularia, also suggested the examination of species with multiple ectooecial pseudopores in order to determine if they are genuinely congeneric. These species are here re-assigned to different genera (see also below and Table 4). A certain variability occurs in the presence/absence of pelmata in the frontal shield, and in their position along the costal length. Sometimes this variability was noted (e.g., Gordon, 1984). Nevertheless, all Figularia species lacking pelmata (i.e., not included in formal descriptions and/or undetectable in available images) are fossil, except "F. philomela var. adnata" (Busk, 1884), suggesting that their absence may be a preservation artefact. The ancestrula is generally not mentioned in species descriptions to our knowledge. In the amended diagnosis, we include characters of the ancestrula for the first time, based on the ancestrula found in a colony of F. figularis from the Mediterranean illustrated in Rosso et al. (2019b, fig . 5C). The large size of both autozooids and ancestrula (0.65 × 0.67 mm) and the absence of spines are rare and remarkable among cribrilinids, which usually have small, tatiform ancestrulae, and this may have implications on the systematics/phylogeny of this genus within the family Cribrilinidae. 
However, observation of ancestrulae in additional species is needed to confirm whether this morphology is constant among congeners, which has been proven not to be the case in other cheilostome genera, such as e.g., Escharina Milne Edwards, 1836 (see Berning et al., 2008). Several species previously assigned to Figularia were recently displaced in different genera of the families Cribrilinidae and Calloporidae (e.g., Vitrimurella, Reginella Jullien, 1886, Inferusia Kukliński and Barnes, 2009, Valdemunitella Canu, 1900; see Bock and Gordon, 2020), and Jullienula Bassler, 1953(Yang et al., 2018. Here, we suggest further displacements: both Figularia? ampla Canu and Bassler, 1928, only tentatively included in Figularia when first described, and Emballotheca? capitifera, Canu and Bassler, 1929, subsequently referred to his new genus Calyptotheca by Harmer (1957) and to Figularia by Di Martino and Taylor (2018), fit better in Vitrimurella, owing to the pseudoporous zooidal gymnocyst and ooecia, and the extremely reduced costate shield. Figularia ryukyuensis Kataoka, 1961 andF. jucunda Canu andBassler, 1929 also need to be revised, pending examination of the type material. These species have pseudoporous ooecia formed by the distal kenozooid without a visible frontal part. Figularia duvergieri Bassler, 1936 has an unusual denticulate proximal orifice margin, and lacks costal pelmata and fenestrae in the ooecium. A detailed revision based on SEM images is needed to confirm generic allocation for these problematic species (Table 4). This issue has been partially addressed by López Gappa et al. (in press). Occurrence.-Figularia figularis is widely distributed in the Atlanto-Mediterranean area since the middle Miocene (Moissette et al., 1993;Berning, 2006). This species has been commonly reported from shelf habitats, mostly from the deep shelf, often associated with deep coralligenous facies (Di Geronimo et al., 1990;Ballesteros, 2006), and at the shelf break in both the Mediterranean (110-145 m; see Harmelin and d'Hondt, 1992) and the eastern Atlantic as far north as the British Isles (Hayward and Ryland, 1998 Remarks.-Two fossil fragments were found, each consisting of a few zooids (Fig. 7). Zooidal morphological characters allow a reliable identification, even in the absence of ovicells and avicularia. Morphometrics fall within the ranges reported for this species. Inferred teratology in an autozooid resulted in a double-bifurcated frontal shield ( Fig. 7.1, 7.2). This unusual feature also occurs in the type specimen of F. tenuicosta MacGillivray, 1895 from the middle Miocene of Victoria, Australia (Bock, 2020). Although F. figularis exhibits a certain range of morphological variability, some historical records, mostly beyond its confirmed geographical range, proved to be different species (e.g., Brown, 1952). The conspecificity of the colony found on a rock at Armaçao de Pêra in Portugal (Souto et al., 2014) needs to be verified. This colony has an unusual triangular ooecial fenestra with narrow horizontal part and could represent a different species. Figures 8-11, Table 4 Holotype. Diagnosis.-Colony erect, bilaminar with fan-shaped fronds, the tapering proximal terminations possibly consisting of heteromorphs, likely rising from an encrusting phase. Zooidal frontal shield consisting of flat costae, each with a large, elongate drop-shaped pelma placed on its peripheral half; gymnocyst wider laterally and proximally, narrower distally, with faint striations. 
Vicarious avicularia elongate, spatulate, with extensive rostral palate and complete crossbar. Ovicell subimmersed, presumably cleithral. Ooecium formed by the distal kenozooid with frontally visible costate part, and consisting of two very large, wing-shaped costae merging in the midline producing a longitudinal suture, with two large fenestrae exposing wide areas of endooecium; the costae of the ooecium-producing kenozooid smaller, forming a distal, crescent-shaped crown, each costa with a small pelma. Description.-Available colony fragments bilaminar, fan-shaped (the largest ∼2 mm long by 3 mm wide); fragments diverging distally at variable angles from a subcylindrical proximal . 10.5). Costate shield extensive (∼75% of the frontal surface), gently convex, formed by 7-14 flat and smooth costae (maximum basal width 72-111 μm), varying from short and subtriangular proximally to long and parallel sided distally; the suboral pair often the largest (Fig. 10.1-10.5). Costae defined by grooves, connected by an uncertain number of intercostal bridges, presumably 3-4 ( Fig. 10.5), with small oval to subcircular intercostal pores in between. A longitudinal suture marking the costal fusion along zooidal midline (Fig. 11.1). Each costa bearing a single, elongate, drop-shaped pelma with the rounded base placed in correspondence with the base of the costa, while the acute vertex extends up to half to two thirds of costal length. Orifice oval to round, slightly longer than wide, concave proximally, gently arched distally, outlined by a rim of calcification (Fig. 10). Oral spines absent. Avicularia vicarious, infrequent, elongate and slightly asymmetrical, varying in size; rostrum long, spatulate, directed distally and slightly inclined, facing frontally (Figs. 9.1, 11); post-mandibular area short, palate wide, crossbar complete (Fig. 11.3). Ovicell subimmersed, presumably cleithral. A single observed ooecium formed by the distal kenozooid with frontally visible costate shield of 10 costae, longer than wide, wider and slightly more prominent than the ovicellate zooid (Fig. 9). Very large ooecium consisting of two flat, wing-shaped costae converging along the midline, the fusion marked by a longitudinal suture, distally with two small tubercle-like prominences. Large rhomboidal fenestra exposing finely granular endooecium. Orifice of the ovicellate zooid slightly larger than those of autozooids, rounded rectangular. Closure plates or calcified opercula sometimes occluding orifices ( Fig. 10.7). Etymology.-From the Latin spectabilis, meaning remarkable, exceptional, alluding to the distinctive architecture of the colony and ooecium. Remarks.-The morphology of the colony, zooids and ooecium distinguish Figularia spectabilis n. sp. from congeners. The flabellate to short, ribbon-like morphology of the colony, with putative heteromorphs placed basally, may suggest the occurrence of basal rhizoids for fixation to the substratum. Alternatively, the connection to an encrusting portion may develop through "sites of articulation" as in Bryobaculum carinatum Rosso, 2002a, occurring in the same sediment. Discussion Five species of cribrilinid bryozoans, three of which are new to science, namely Cribrilaria profunda n. sp., Glabrilaria cf. G. pedunculata, G. transversocarinata n. sp., Figularia figularis, and F. spectabilis n. sp., were found in Pleistocene deepwater sediments from north-eastern Sicily. Figularia figularis was already recorded from the area by Seguenza (1880) and Neviani (1900), while C. 
profunda n. sp. was possibly recorded as Lepralia planicosta (see Remarks above), while the remaining three species, including G. cf. G. pedunculata, represent new records. Focusing only on deep-water assemblages, cribrilinids are present with three species in both the Gelasian associations from Capo Milazzo and the Calabrian (MNN19b-19c biozones) of Scoppo. These figures are comparable to those found in present-day deep-water associations from the Mediterranean and Atlantic (Bahama Bank), in which cribrilinids usually occur with 2-3 species (Rosso et al., 2018). However, the Gelasian of Capo Milazzo includes at least 46 cheilostome species, and the cribrilinid relative percentage is ∼6%, which is lower than the 10-18% found in present-day assemblages (Rosso and Sciuto, 2019). No comparison can be made for Journal of Paleontology 95(2):268-290 the Calabrian of Scoppo whose bryozoans are still under investigation. Discovery of a new species of Figularia, F. spectabilis n. sp., led to the emendation of the genus diagnosis and the re-examination of the 32 species and one variety currently assigned to the genus, based on drawings and photographic material available from the literature. This preliminary survey allows us to confidently reassign two species based on published scanning electron micrographs of the type material. The newly proposed combinations are Vitrimurella capitifera (Canu and Bassler, 1929) n. comb. and Hayamiellina quaylei (Powell, 1967a) n. comb., as also suggested by Kukliński et al. (2015). Thirteen species remain doubtful and their assignment to more suitable genera requires examination of the type material (Table 4). At present, 18 species, including Figularia spectabilis n. sp., match the diagnosis of the genus. This figure will likely change further after a more detailed revision of some fossil species and species left in open nomenclature (see Berning, 2006 for F. haueri and F. figularis; Di Martino et al., 2017 andCook et al., 2018 for two different Figularia spp.) as well as cryptic species/species complexes (e.g., F. clithridiata and F. fissa). Based on our literature review, the diversity of Figularia is reduced by about one-half, from 33 (including F. spectabilis n. sp.) to 18 species, with a revision in the stratigraphic range, but only little variation in the geographic distribution of the genus. The genus possibly appeared in the Cenozoic of Europe and Australia, and commonly occurred in sediments in the European-Mediterranean area during the Miocene. Of the 12 species of Figularia living today, 10 species are found in the Pacific and Australasian region. Only two species, F. figularis and F. dimorpha, fall outside this area, being recorded in the Atlanto-Mediterranean and southwestern Atlantic regions, respectively. A twofold future investigation is sought. This includes an examination of the type material of all the species in the genus to confirm their status, prioritizing those that appear to remain problematic (see Table 4; issue partially addressed by López Gappa et al., in press), and an accurate re-examination of all species records to refine both the temporal and spatial distribution of the genus and reconstruct its diversification history, as well as disentangle species complexes.
2020-11-26T09:07:32.627Z
2020-11-23T00:00:00.000
{ "year": 2021, "sha1": "ce2c17f1c975a4c209b7354d2b7a1525f737c3f7", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/5E4F58BD4EDD78489640C629A0415CBB/S0022336020000931a.pdf/div-class-title-cribrilinid-bryozoans-from-pleistocene-mediterranean-deep-waters-with-the-description-of-new-species-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "483ae058f8c5bfdbaa0308b3b9c339052704231f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
235079284
pes2o/s2orc
v3-fos-license
Adoption of Good Agronomic Practices (GAP) Among Smallholder Rice Farmers in Nigeria Agricultural Transformation Agenda This study assessed the adoption rate and identified factors influencing adoption of rice technologies among participants of Agricultural Transformation Agenda across the targeted implementation zones of Adani-Omor, Bida-Badeggi, Kano-Jigawa and Kebbi-Sokoto. Multi-stage sampling procedure was used in selecting eighty respondents for the study. The data were collected with the aid of structured questionnaire. Descriptive statistics and Tobit regression model were employed in the analysis of data. The study revealed that majority of farmers participating in Agricultural Transformation Agenda Project (ATASP-1) are youths and still in their active age as indicated by the average age of 42 years. About 62% have secondary and tertiary education. On the gender distribution of the people engaged in ATASP-1 project, it was revealed that about 92% were male while only 8% were female. Substantial numbers of technologies were disseminated on rice being promoted under ATASP-1 project and the adoption rate of these technologies was very high. More than three-quarter of the respondents have adopted technologies introduced to them. Adoption of rice technologies among participating farmers is largely depends on socioeconomic characteristics of farmers such as age, education and gender of the respondents. The study recommends that there should be continuous training of farmers on the importance of these technologies as well as techniques behind their utilization to ensure continuous usage of the adopted technologies. Women should be encouraged to participate more in the project and to take up farming as a business. Also, adequate attention should be given to farmers socioeconomic characteristics as these are the determinants of technology adoption. Introduction Agricultural productivity in many developing countries is constrained by poor farming techniques and practices being used by most smallholder farmers. The poor performance of Nigerian farmers is attributed to their lack and use of good agricultural practices and poor value attached to improved farm practices. In Nigeria, unsustainable agricultural practices have led to poor agricultural productivity, which is a major determinant of food insecurity (Ayuk, 2001). The current agronomic practices by most farmers have no measures of reducing environmental and socioeconomic problems. This will put future food, fodder and fibre production and ecosystem services under additional risk and uncertainty. Poor soil fertility as a result of degradation has put crop production at a risk. Changes in the amount of rain, increased rainfall intensity and changes in rainfall patterns and recurrent droughts and/or floods would lead to decreased resource productivity and production. Most studies on the adoption of Good Agronomic Practices (GAPs) showed that adoption provides higher yields and income (Manda et al., 2016). Despite the benefits attributed to the adoption of sustainable farm practices, their adoption rates remain low in sub-Saharan Africa (Kassie et al., 2015;Teklewold et al., 2013). Understanding the factors that affect the adoption of GAPs can provide guidance into identifying key drivers and areas that could enhance the use of these practices. However, the majority of earlier studies on the adoption of GAPs have focused on a single technology (Mazvimavi and Twomlow 2009;Arslan et al., 2014). 
One of the initiatives of government to boost agricultural production is Agricultural Transformation Agenda (ATA). The overall objective was to increase agricultural production in order to increase domestic food production and generate employment. The Nigeria Agricultural Transformation Agenda (ATA) is an initiative by which the Federal Ministry of Agriculture and Rural Development envisions bringing agriculture back to the center of Nigeria's economy that it once occupied, and by so doing solving the problems of rural poverty, youth unemployment and over-reliance on imported foods. Further, it is the mechanism by which Nigeria can replicate the agriculture-driven economic success stories of countries such as Brazil, Thailand, China, Malaysia and Indonesia and, closer to home, Kenya and Malawi. In summary, the ATA is to bring about creation of more than 3.5 million jobs along the value chains of rice, cassava, sorghum, cocoa and cotton; and achieve food security by increasing production of key staple food of rice, cassava, and sorghum by 20 million metric tons (rice, 2 million metric tons; cassava, 17 million metric tons; sorghum, 1 million metric tons). While the production of staple foods has risen sharply over the last twenty-five years, production cannot yet cover the rising demand for staples, particularly grains. Nigeria alone grows about 50% of the total production in West Africa. As is the case in nearly all West African countries, rise in grain production is due largely to the expansion of cultivated land than to any significant improvement in yields. Meanwhile rice yield has stagnated at about 2 t/ha on the average since 1990 (Inter reseaux, 2015). Low agricultural productivity in Nigeria has been largely due to low inappropriate and inadequate application of good agronomic practices such as fertilizer, improved seed utilization, and a wide gamut of on-farm and post-farm activities related to food safety, food quality and food security, the environmental impacts of agriculture (FMARD, 2011). In order to take corrective measures and achieve the ATAPS-1 targets for staple crops, concerted efforts must be made to provide information on the adoption of GAPs among the farmers Regardless of this intervention and other initiatives by government to support rural livelihood and also boost agricultural productivity, over-exploitation and poor management practices will lead to reduced fertility and availability of natural resources (Global Environment Facility, (GEF), 2010). Therefore, good agronomic and management practices remain a practical pathway for farmers to enhance the productivity and resilience of agricultural production systems while conserving the natural resource base (Teklewold et al., 2013;Kassie et al., 2013). The importance of the key staple food crops especially rice for which ATASP-1 is to bring about increase in production cannot be over emphasized. Rice is the staple food in many countries of the world. Over the years, the crop's demand has risen steadily and its growing importance is evident, given its important place in the strategic food security planning policies of many countries (Saka et al., 2005). In Nigeria, the rising import bills on rice coupled with the increasing demand for the commodity has led successive Nigerian governments to step up policies aimed at remedying the country's supply deficit for the commodity. Despite various interventions to boost rice production, Nigeria still depend on the importation of rice. 
This could be attributed to poor production technologies characterized by low productivity. This study is therefore essential because of the need to increase agricultural productivity, especially in rice farming, through the use of improved agricultural technologies and practices by smallholder farmers in the face of acute food shortage and worsened living conditions. In order to take corrective measures and achieve the ATASP-1 targets for food crop production, concerted efforts must be made to provide information on the adoption of GAP among farmers. This will provide farm-level feedback for appropriate policies targeted at improving the status of crop production. The study therefore examined the rate of adoption of agronomic practices among farmers and the underlying factors that influence farmers' adoption decisions. Methodology The study was conducted in the four staple crop processing zones (SCPZs) across the country. These zones are: Adani-Omor, covering Anambra and Enugu States in the south east; Bida-Badeggi SCPZ in Niger State in the north central; and Kano-Jigawa SCPZ and Kebbi-Sokoto SCPZ, both located in the north western part of the country. There is variation in climatic conditions across the zones, with Adani-Omor located in the tropical rainforest in the South East, Bida-Badeggi located in the Guinea Savanna, while Kano-Jigawa and Sokoto-Kebbi are located in the Sudan Savanna ecological zone of the country. The study focused solely on the program beneficiaries in ascertaining the rates of adoption of technologies disseminated on rice production by ATASP-1. Sampling Procedure and Sample Selection A multi-stage sampling procedure was used in selecting respondents for the study. Given the preponderance of production-based value chain actors, a sample of 80 respondents was used, comprising 20 respondents randomly selected from each SCPZ. These zones were Kebbi-Sokoto, Kano-Jigawa, Bida-Badeggi and Adani-Omor SCPZ. The sampling was done to cover all the 33 local government areas (LGAs) where the program is being executed across the country. Methods of Data Collection Primary data were collected with the aid of a structured questionnaire administered to participating farmers under the programme. Data collected covered background information, institutional information, technologies disseminated, mode of practicing technology, rates of adoption and constraints to adoption of GAPs. Method of Data Analysis The study employed descriptive statistics and a Tobit regression model in the analysis of the data collected. Descriptive statistics such as mean, frequency, standard deviation and count were used. The Tobit model was used to ascertain the factors influencing the adoption of good agronomic practices among ATASP-1 farmers. The model is specified in implicit form as: Y_o* = X_o β + μ_o, where Y_o* is the latent (hidden) dependent variable for the o-th farm, X_o is the vector of independent variables for the o-th farm, β is the vector of unknown parameters to be estimated that are associated with the independent variables, and μ_o is an independently distributed error term assumed to be normally distributed with zero mean and constant variance. The independent variables considered were gender, age, marital status, education, household size and farm size. Other variables in the model were farmers' experience, extension visits, credit and association.
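To make the estimation step concrete, the sketch below fits a Tobit model of this form by maximum likelihood in Python. It is only an illustration: the paper does not state which software was used, the covariates and the zero censoring limit are assumptions, and the synthetic data merely stand in for the survey responses.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y, lower=0.0):
    """Negative log-likelihood of a Tobit model censored from below at `lower`.
    Latent model: y* = X @ beta + mu, mu ~ N(0, sigma^2); observed y = max(y*, lower)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                      # keeps sigma positive during optimisation
    xb = X @ beta
    censored = y <= lower
    ll = np.empty_like(y, dtype=float)
    ll[censored] = norm.logcdf((lower - xb[censored]) / sigma)   # P(y* <= lower)
    ll[~censored] = norm.logpdf(y[~censored], loc=xb[~censored], scale=sigma)
    return -ll.sum()

def fit_tobit(X, y, lower=0.0):
    """Estimate beta (with intercept) and sigma by maximum likelihood."""
    X = np.column_stack([np.ones(len(X)), X])
    start = np.zeros(X.shape[1] + 1)               # betas = 0, log(sigma) = 0
    res = minimize(tobit_negloglik, start, args=(X, y, lower), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])

# Synthetic example: an adoption index regressed on three placeholder covariates
# (e.g. age, education, farm size), censored at zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))
y_latent = 0.5 + X @ np.array([0.8, 0.3, -0.2]) + rng.normal(size=80)
y = np.clip(y_latent, 0.0, None)
beta_hat, sigma_hat = fit_tobit(X, y)
print("beta:", np.round(beta_hat, 2), "sigma:", round(sigma_hat, 2))
```

In the study itself the dependent variable is the adoption of each disseminated practice, so the exact censoring scheme and covariate coding may differ from this sketch.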
Socio-economic characteristics of the respondents The socioeconomic variables of the farmers are very important for good planning and decision making by policy makers. The composition of these variables can also make or mar the ability of farmers' effectiveness in production process. The age structures of rice farmers under ATASP-1 as presented in Table 1 showed that about 14% of the farmers are within 21-30 years of age. Those within 31-40 years constitute 21%. The average age was 43 years. The implication is that the project has encouraged youth participation in food production as a business and this has a lot of significant for future food production and food security for the country. On the gender distribution of the people engaged in ATASP-1 project, it was revealed that about 92% were male while only 8% were female. This could be due to the fact that women are mostly found in processing and value addition activities. As shown in Table 1, majority (97.5%) of the respondents were married. The significance of marital status among rural communities with respect to farm business and livelihood activities can be explained in terms of the supply of agricultural family labour. It is expected that family labour would be more available where the household heads are married. It is very significant that participating farmers had one form of education or the other and surprisingly, of the total participants, about 62% have secondary and tertiary education. The implication of this is that these participants are better positioned to understand and adopt innovations that will facilitate productivity and better livelihood as opposed to when we have illiterate participants. About 20% of the household members have family size of between 1-5 persons per household. The average family size was 11 people per household. The implication of fairly large family size is that in some cases where the members are over 18 years old, they could give helping hands in farming particularly during planting, weeding and harvesting of crops. The study also found that total land areas between 0.1-0.5 available for 6% of the respondents, those with land area of 0.51-1.0 represents about 38% of the participants, those with total land area of 1.1-1.5 represents 10%, while those with over 2 ha of total land area represents 35%. The mean land area for the respondents was 2.1ha. From this finding it is very clear that farmers participating in the project are mostly small scale farmers. The average year of experience in rice cultivation was 18 years. About 98% of rice farmers claimed they have contact with extension agents while about 2% claimed they do not. Technologies disseminated on rice production There were many technologies disseminated to rice farmers under the project as shown in Table 2. It was revealed that all the farmers were aware of improved rice varieties while site selection for production was known to about 89% of the farmers under the project and all the farmers-100% were aware of field preparation method for rice production. More so, about 99% were aware of the right planting season for rice production while about 98% were aware of crop establishment. Other technologies these farmers are familiar with are weed management and fertilizer application method of by all the participating farmers and finally, pest control were known to about 99% of the participating farmers. It is hoped that these level of awareness will stimulate improved adoption by these farmers and consequently improved level of productivity. 
Rate of adoption of technologies disseminated to rice farmers As observed from Table 3, there was an impressive level of technology adoption under ATASP-1 project. It was found that improved seed and weed management recorded 100% adoption levels while field preparation, determination of appropriate planting period and crop establishment all recorded about 99% levels of adoption while site preparation and fertilizer application recorded about 98% levels of adoption and the least which was pest and diseases control was 95% level of adoption. This is an indication of the high interest these farmers have dedicated in joining the train of rice production. This adoption rate has raised the hope that we are on course towards becoming a major rice producing nation and able to produce enough to meet national demands thus conserving our scarce foreign reserve for better use. Factors Influencing Adoption of rice Technologies As shown in Table 4, gender and educational level of the respondents were significantly influencing all the technologies introduced to the farmers. The estimated parameters obtained for these variables were positive and significant at 1% level of probability. Educated farmers tend to know and understand the importance of using improved technologies for increased output in rice production so the more educated a farmer is the more he will be willing to use improved farming techniques. Age of the farmers was found to be positively related to adoption of site/land preparation, field preparation, seed preparation, determining planting season, crop establishment and weed management. The significant and positive coefficients obtained for age is an indication that older farmers tends to adopt rice technologies compared to young farmers. This could be attributed to the accumulation of experience in rice production. Marital status was equally found significant among the variables influencing adoption of rice technologies asides crop establishment and weed management. The estimated parameters for marital status were positive. The implication of this is that married farmers tend to adopt adoption of rice technologies compared to those that are still single. This could be due to the fact that married individual are known to be very responsible willing to get maximum returns from whatever investment they engaged in. Farming experience is another variable that significantly influence rice technology adoption. The more experience a farmer is the more his understanding of rice technology will be and ability to boost crop production through adoption of improved technologies. Other important factor is credit. Access to credit was found influencing adoption of some technologies introduced to the farmers. Credit could facilitate adoption of technologies as credit is needed to buy necessary input like fertilizer, so the more access to credit a farmer has the more inputs he will use. Finally, membership of association was significantly related to adoption of fertilizer application in rice production. Membership of association is known to facilitate farmers' access to inputs like seeds, fertilizers and agrochemicals. So being a member of association will facilitate adoption of recommended fertilizer in rice production. Conclusion and Recommendations The study revealed that majority of ATASP-1 participating farmers are youths and still in their active productive age. This has a lot of significant for future food production and food security for the country. 
A substantial number of technologies on rice were disseminated under the ATASP-1 project, and the adoption rate of these technologies was very high. Adoption of rice technologies among participating farmers largely depends on socioeconomic characteristics of the farmers, such as the age, education and gender of the respondents. The study recommends continuous training of farmers on the importance of these technologies, as well as the techniques behind their utilization, to help these farmers continue to adopt them. Women should be encouraged to participate more in the project and to take up farming as a business.
2019-09-17T02:40:49.281Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "76dfa812ef3fd795f02c57d46b7ff0deabd6b68f", "oa_license": "CCBY", "oa_url": "https://www.iiste.org/Journals/index.php/JEDS/article/download/49287/50916", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "a333f53ccdc0f687ab8230d104fa4d0d59fbea35", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
237698810
pes2o/s2orc
v3-fos-license
ReTraCk: A Flexible and Efficient Framework for Knowledge Base Question Answering We present Retriever-Transducer-Checker (ReTraCk), a neural semantic parsing framework for large scale knowledge base question answering (KBQA). ReTraCk is designed as a modular framework to maintain high flexibility. It includes a retriever to retrieve relevant KB items efficiently, a transducer to generate logical form with syntax correctness guarantees and a checker to improve transduction procedure. ReTraCk is ranked at top1 overall performance on the GrailQA leaderboard and obtains highly competitive performance on the typical WebQuestionsSP benchmark. Our system can interact with users timely, demonstrating the efficiency of the proposed framework. Introduction Knowledge base question answering (KBQA) is an important task in natural language processing that aims to satisfy users' information needs based on factual information stored in knowledge bases. Over the years, it has attracted a great deal of research attention from academia and industry. Early KBQA systems are generally rule-based. They rely on predefined rules or templates to parse questions into logical forms (Cabrio et al., 2012;Abujabal et al., 2017), suffering from coverage and scalability problems. Recently, researchers usually focus more on neural semantic parsing approaches. These data-driven parsing methods (Yih et al., 2015;Jia and Liang, 2016;Dong and Lapata, 2016;Liang et al., 2017;Gu et al., 2021) significantly improve the state-of-the-art (SOTA) performance on KBQA tasks. Although various neural semantic parsing methods have been proposed for KBQA, there are few works investigating how to leverage the advantages of SOTA models to build a comprehensive system, and how to fit the system with practical application purpose (e.g., balancing effectiveness and efficiency). To investigate, we identify two key issues hindering the development of KBQA systems. On the one hand, there is a lack of a generic and extensible framework for KBQA. For example, the popular SEMPRE 3 toolkit (Berant et al., 2013) provides infrastructures to develop statistical semantic parsers for KBQA with rich features, but its performance and scalability are inferior to recent neural semantic parsing methods. The TRANX toolkit 4 (Yin and Neubig, 2018) employs a transition-based neural semantic parser to model the logical form generation procedure as a sequence of tree-constructing actions under grammar specification. However, TRANX does not include the essential retriever components used in grounding, and thus does not support KBQA by now. On the other hand, recent neural semantic parsing methods mostly emphasize performance on benchmark datasets while neglecting the efficiency (speed) dimension. This limits the understanding of how designed approaches fit into real applications. For example, the popular query graph generation methods generate and rank a set of query graphs (Yih et al., 2015;Maheshwari et al., 2019;Lan and Jiang, 2020). Since all query graph candidates keep in line with the knowledge base (KB) structure, these methods take full advantage of the KB. However, they suffer from poor efficiency due to the large number of candidates and heavily querying on KB. To verify that, we performed a preliminary study on available SOTA models 5,6,7,8,9,10 . According to our study, these models either have difficulties in supporting interactive online services, or limit the candidate space for specific datasets, which makes them difficult to apply in practice. 
To this end, we present ReTraCk, a practical framework for large scale KBQA. We hope Re-TraCk can help standardize the KBQA model design process and lower the barrier of entry for new practitioners. ReTraCk is designed with the following principles in mind: • Flexibility ReTraCk employs a modular architecture, which decouples the dependencies among components as much as possible to enable quick integration of novel components. For example, our system supports two different kinds of schema retrievers, namely dense schema retriever and neighbor schema retriever 11 . • Efficiency ReTraCk falls into the transduction family, which is fast during the generation process. Besides, we retrieve entities and relevant schema items (relations and types) in parallel by leveraging the recent advance of entity linking (Orr et al., 2021) and dense retrieval (Wu et al., 2020;Karpukhin et al., 2020). Our system can interact with users timely, demonstrating the efficiency of the proposed ReTraCk framework. • Effectiveness ReTraCk is designed to enhance the controllability of transduction-based methods in both syntax level and semantic level. It first employs a grammar based decoder (Yin and Neubig, 2018) to guarantee the syntax correctness. Then it leverages a checker to alleviate the semantic inconsistency issues. Inspired by previous work, four checking mechanisms are proposed and implemented in the checker: instancelevel checking (Liang et al., 2017), ontologylevel checking (Chen et al., 2018), real execution (Wang et al., 2018) and the novel virtual execution. The experimental results verify the significant effectiveness of our proposed checker. Notably, the checker is also flexible enough to be easily extended with new mechanisms. Finally, ReTraCk achieves state-of-the-art performance on GrailQA and achieves highly competitive performance on WebQuestionsSP. ReTraCk Framework Given an input question q, ReTraCk parses the question into a logical form which can be deterministically converted into a SPARQL query to retrieve answers from the knowledge base K. Generally K consists of two parts: an ontology O ⊆ T ×R×T , which defines the schema structure, and the fact triples F ⊆ E × R × (E ∪ T ∪ L). Here, T is the set of types, R is the set of relations, E is the set of entities, and L is the set of literals. As shown in Fig. 1, ReTraCk consists of three components: retriever, transducer and checker. The retriever consists of an entity linker, which links explicit entity mentions to corresponding entities, and a schema retriever, which retrieves relevant schema items (types and relations) mentioned either explicitly or implicitly in the question. Given the retrieved KB items (entities, types, and relations), the transducer employs a grammar-based decoder to generate the logical form with syntax correctness guarantees. Meanwhile, the transducer interacts with the checker to discourage generating programs that are semantically inconsistent with KB. To make ReTraCk more accessible and interpretable for end users, we build a user interface. As shown in Fig. 2, users can type a question in the text box. The interface then displays retrieved KB items, a graph visualization of predicted logical forms, generated SPARQL query and predicted answer (s). The schema items selected by our transducer are shaded. Besides, users can refer to more information of any KB item by clicking on the subsequent "Detail". Next, we will introduce each component in detail. 
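The modular design just described can be summarised with a small interface sketch. None of the names below come from the ReTraCk codebase; they only illustrate the idea that the retriever, transducer and checker interact through narrow interfaces, so individual components can be swapped without touching the rest of the pipeline.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Protocol

@dataclass
class KBItems:
    """KB items handed from the retriever to the transducer."""
    entities: List[str] = field(default_factory=list)
    types: List[str] = field(default_factory=list)
    relations: List[str] = field(default_factory=list)

class Retriever(Protocol):
    def retrieve(self, question: str) -> KBItems: ...

class Checker(Protocol):
    def accept(self, partial_logical_form: List[str]) -> bool: ...

class Transducer(Protocol):
    def generate(self, question: str, items: KBItems, checker: Checker) -> str: ...

def answer_question(question: str,
                    retriever: Retriever,
                    transducer: Transducer,
                    checker: Checker,
                    to_sparql: Callable[[str], str],
                    run_sparql: Callable[[str], list]) -> list:
    items = retriever.retrieve(question)                 # entity linker + schema retriever
    logical_form = transducer.generate(question, items, checker)
    return run_sparql(to_sparql(logical_form))           # deterministic conversion, then KB query
```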
Retriever Entity Linker The entity linker used in this work follows the entity linking pipeline described in Gu et al. (2021). It first detects entity mentions using a BERT-based NER system, then generates candidate entities along with their prior score based on an alias map mined from the KB and FACC1 (Gabrilovich et al., 2013). As for entity disambiguation, we implement a prior baseline which selects the most popular entity based on the prior score. Besides, we also implement an alternative model by leveraging BOOTLEG (Orr et al., 2021) enriched with the prior features 12 . Due to space limitations, the model details and its comparison with the entity linker used in Gu et al. (2021) are put in the Appendix. Schema Retriever As schema items are not always mentioned explicitly in the question and their vocabularies are much smaller than that of entities 13 , we leverage the dense retriever framework (Mazaré et al., 2018; Humeau et al., 2020; Wu et al., 2020) to obtain the related types and relations. To be specific, we train a bi-encoder architecture (Wu et al., 2020) such that related schema items are close to the question embedding. This architecture allows for fast real-time inference, as it is able to cache the encoded candidates. We use two independent BERT-base encoders (Devlin et al., 2019) to represent the input question e_q and candidate schema items e_s by extracting the uppermost layer representation corresponding to the [CLS] token. The matching score for each pair (q, s_i) is calculated by the dot-product score(q, s_i) = e_q · e_{s_i}. Given a question q, we retrieve the top k schema items with the highest scores during inference time. Transducer Following previous work (Guo et al., 2018, 2019), especially the s-expression design principle (Gu et al., 2021), we design a set of grammar rules for the logical form. As shown in Table 1, there are two kinds of grammars in our definition: knowledge-agnostic grammar and knowledge-specific grammar, with rules such as set → argmax(set, rel) and set semantics such as {e1 | (e1, e2) ∈ rel and e2 ∈ set}. To incorporate these predefined grammar rules, we introduce a question encoder and a grammar-based decoder (Liu et al., 2020). Grammar-based Decoder Once the question representation is prepared, the grammar-based decoder starts to produce the target logical form step by step with attention on the question. Our decoder regards each logical form as a structure and outputs its corresponding grammar rule/action sequence a = (a_1, · · · , a_K); we use grammar rule and action interchangeably. At each decoding step, a nonterminal (e.g., set) is expanded using one of its valid grammar rules. For example, at time step k, the LSTM decoder accepts the embedding of the previous output φ_a(a_{k−1}) together with the context vector c_{k−1} as input and updates its hidden state as h^D_k = LSTM_D([φ_a(a_{k−1}); c_{k−1}], h^D_{k−1}), where c_{k−1} is the context vector obtained by attending on each encoder hidden state h^E_i. As for φ_a, it behaves differently for knowledge-agnostic grammar rules and knowledge-specific grammar rules. For knowledge-agnostic grammar rules, φ_a returns a trainable global embedding. For knowledge-specific grammar rules, φ_a returns its related KB item representation, obtained by averaging over all word representations. When predicting a_k, the probability of selecting an action γ is computed with a softmax over scores comparing the action embedding φ_a(γ) with the current decoder state h^D_k. BERT Encoding Motivated by the success of pretrained language models on cross-domain text-to-SQL tasks (Hwang et al., 2019), we augment our model with BERT (Devlin et al., 2019).
First, we concatenate the questions with all retrieved KB items as input for BERT to strengthen the connection between them. Then, we replace the word embeddings mentioned above with deep contextual representations from the last layer of BERT of each question token and each KB item, respectively. In a case where the total number of words in the retrieved KB items exceeds the maximum length constraint of BERT, we split these KB items into different blocks and encode them with the question separately (Gu et al., 2021). Checker Inspired by previous work (Liang et al., 2017;Chen et al., 2018;Wang et al., 2018), we design a pluggable module named checker to improve the decoding process by leveraging semantics of KB. Instance-level Checking relies on the KB linkage information at the instance level (i.e., entities and their connected relations), which means that instance-level checking only deals with cases where the current action is a child node of action set→ join ent (rel, ent) in the abstract syntax tree (AST). As illustrated in Fig. 4, when expanding the nonterminal ent, any retrieved KB entity can return a valid grammar rule such as ent→m.04bmk or ent→m.04vd3. However, only m.04vd3 can pass the instance-level checking, since other candidates do not share direct links with the decoded relation tv.tv episode segment.subjects. Ontology-level Checking performs checking with the help of KB linkage information at the ontology level (i.e., types and bridging relations). Taking the right subtree presented in Fig. 4 as an example, when expanding the second rel, we employ ontology-level checking to determine its valid semantic scope. According to the semantics of the grammar rule set→ join rel (rel 1 , rel 2 ), the type set of the head entity in rel 2 must overlap with the type set of the tail entity in rel 1 , by which the candidate rel→tv.tv program.number of episodes is selected. Although ontology-level checking applies to more situations than instance-level, it is weaker in terms of checking effectiveness and needs constraints of high coverage. Real Execution When decoding reaches the end, an action sequence can be converted into a logical form, and finally into a SPARQL query. As depicted in Fig. 4, the real execution simply takes the final SPARQL query and tries to execute it over KB. If the query cannot be executed successfully, or the result is empty, it means that the corresponding action sequence cannot meet the executable requirement. In practice, we utilize the real execution to check all complete action sequence candidates searched by the beam search procedure, until an action sequence passes checking. Virtual Execution The real execution cannot intervene in the middle of program generation, which leads to candidates of low quality in the final beam (e.g., no candidate can be executed). Meanwhile, since real execution relies on SPARQL, it is relatively slow as SPARQL queries are executed over tremendous (e.g., millions) entities with multi-hop relations. Instead, we propose virtual execution to alleviate these issues. As illustrated in Fig. 4 with previous work, we use F1 and Hits@1 as evaluation metrics on WebQSP. Implementation Details We implemented our model based on PyTorch (Paszke et al., 2019) and AllenNLP (Gardner et al., 2018). With respect to BERT, we utilize the uncased BERT-base model from the Transformers library (Wolf et al., 2020). In training, we employed the Adam optimizer (Kingma and Ba, 2015). The learning rate is set to 1e-3, except for BERT, which is set to 2e-5. 
Our model training time on a single Tesla V100 is approximately 20h 16 . As for dense retriever, on GrailQA dataset, we retrieve top-100 type items and top-150 relation items. On WebQSP dataset, we retrieve top-200 type items and top-500 relation items. Baseline Models We compare our model with previous state-of-theart models on GrailQA (Lan and Jiang, 2020;Gu et al., 2021) and WebQSP (Liang et al., 2017;Sun et al., 2019;Saxena et al., 2020;Lan and Jiang, 2020). Notably, both TRANSDUCTION and RANKING models proposed by Gu et al. (2021) on GrailQA can be based on either GloVe (Pennington et al., 2014) or BERT (Devlin et al., 2019). We compare with them under all settings. Results We test ReTraCk with two configurations, with or without Checker. As shown in Table 2, Re-TraCk significantly outperforms the previous SOTA model BERT + RANKING (F1 +7.3, EM +7.5 ) and achieves an improvement (F1 +28.5, EM + 24.8) over the previous best transduction-based model BERT + TRANSDUCTION on GrailQA. Table 3 shows model performance on WebQSP. Given predicted entities, our model outperforms previous models (except for QGG (Lan and Jiang, 2020)) and even outperforms these models with oracle entities: GRAFT-Net, PullNet, and Embed-KGQA. Given oracle entities, the performance of our model further boosts to 74.7 F1, which shows the potential gains with a better entity linker. While most SOTA models constrain their answer space by assuming a fixed number of hops, we conduct experiments on both datasets without such assumptions, which simulates real world scenarios. QGG works well on WebQSP by accessing the KB via SPARQL when generating the query graph at each step. However, as noted in Gu et al. (2021), extending QGG to consider 3-hops relations on GrailQA will take a few months to train, which is time consuming. It works poorly on GrailQA under 2-hop assumption. By removing the checker module, the performance drops 21.1 and 14.1 F1 points on GrailQA and WebQSP respectively, which demonstrates the significant effectiveness of the checker. Except for QGG mentioned above, GrailQA RANKING model takes an average 115.5 seconds 17 to process one query, which is not applicable for online systems. In contrast, ReTraCk takes only 1.62 seconds per query on average at its current implementation which demonstrates its efficiency. Case Study To demonstrate ReTraCk's capability, we show three typical examples from the development set of GrailQA dataset in Table 4. In the first case, ReTraCk accurately links two mentions (don slater and editor in chief ) in the query to corresponding entities (m.05ws t6 and m.02wk2cy) in Freebase. It also retrieves all necessary schema items (three relations and one type) via schema retriever. The transducer equipped with checker accurately understands the meaning of query and compose the complex logical form with five operators. The predicted logical form is exactly the same as the golden logical form. As for the second case, Re-TraCk parses the query to a logical form which is semantically equivalent to the golden logical form, which demonstrates the existence of program alias. As for the third case, ReTraCk ignores the seman-17 Data are derived from https://github.com/dki-lab/ GrailQA tics conveyed by the word surface in the query, and selects wrong schema item unit of density instead of unit of surface density. This example shows that our model sometimes only captures part of the semantics in the query and misses some span information. 
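To make the checker's pruning behaviour in cases like these concrete, the toy function below reproduces the instance-level check described in the Checker section: entity candidates that share no direct KB link with the already-decoded relation are discarded. The kb_links index is a hypothetical stand-in for whatever KB access layer the system actually queries; the identifiers come from the Fig. 4 example discussed above.

```python
from typing import Dict, List, Set

def instance_level_check(decoded_relation: str,
                         entity_candidates: List[str],
                         kb_links: Dict[str, Set[str]]) -> List[str]:
    """Keep only entities that are directly linked to the decoded relation in the KB."""
    return [e for e in entity_candidates
            if decoded_relation in kb_links.get(e, set())]

# kb_links maps an entity id to the set of relations it participates in (toy data).
kb_links = {
    "m.04bmk": {"people.person.nationality"},
    "m.04vd3": {"tv.tv_episode_segment.subjects"},
}
print(instance_level_check("tv.tv_episode_segment.subjects",
                           ["m.04bmk", "m.04vd3"],
                           kb_links))          # -> ['m.04vd3']
```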
Conclusion We present ReTraCk, a semantic parsing framework for KBQA. ReTraCk is flexible and efficient, achieving strong results on two distinct KBQA datasets. We hope that ReTraCk will be beneficial for future research efforts towards developing better KBQA systems. A Entity Linker The entity linker used in this paper follows the typical pipeline that consists of three sub-modules: mention detection, candidate generation and entity disambiguation. Following the previous work Gu et al. (2021), we use a BERT-based NER system 18 to detect the entity mentions and literals (e.g., numerical values and datetime) in the question. Then we generate candidate entities along with their prior probability using an alias map mined from the KB and FACC1 (Gabrilovich et al., 2013), a large entity linking corpus. For entity disambiguation, we adopt the state-ofthe-art neural entity disambiguation model BOOT-LEG (Orr et al., 2021) 19 which shows decent generalization performance over long-tail entities. In BOOTLEG, each entity is represented with three levels information: its unique entity embedding, attached types' embedding and relations' embedding, and leverage BERT (Devlin et al., 2019) to encode the context. Besides, we also combine the prior score from the candidate generation step and the context compatibility score from BOOTLEG with two fully connected layers of 100 hidden units and ReLU non-linearities. Note that existing KBQA datasets do not provide the mention boundary annotations. We generated the distantly supervised training data for both named entity recognition and entity disambiguation by aligning the natural language question with entities' observed aliases mined from the candidate generation step. We evaluate the performance of our entity linker on GrailQA dev set and WebQSP test set. We compare its performance with the following baselines: 1) Aqqu (Bast and Haussmann, 2015) which is a rule based entity linker using linguistic and entity popularity features. 2) GrailQA (Gu et al., 2021) which is a prior baseline. 3) Prior which is a prior baseline implemented by us. 4) BOOTLEG (Orr et al., 2021) which is trained using distantly aligned question answering data. 5) BOOTLEG + Prior which is the full disambiguation model used in this paper. As you can see from Table 5, our Prior performs slightly better than the GrailQA (Gu et al., 2021)'s Prior by 0.8 F1 points on GrailQA. What's interesting is that the BOOTLEG trained with GrailQA data is even inferior than Prior baseline by 4.8 F1 points. However, BOOTLEG + Prior improves over BOOTLEG and Prior by 4.4 F1 points and 9.2 F1 points respectively. The above experiment results show that the prior feature is very important and orthogonal to the BOOTLEG model in the question entity linking. As shown in Table 6, similar conclusions can be derived from the experiment results on WebQSP dataset. Compared with experiments on GrailQA, the performance of BOOTLEG is lower with only 58.5 F1 score and the improvement of BOOTLEG + Prior over Prior is reduced by 1.7 F1 points. This is mainly because the size of training data of WebQSP (3,098 instances) is much smaller than GrailQA (44,337 instances) which limits the learning of BOOTLEG model. B Dense Schema Retriever In principle, the encoders can be implemented by any neural networks (Karpukhin et al., 2020). We use two independent BERT-base encoders (Devlin et al., 2019). 
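A minimal sketch of the encoding and scoring step is shown below, using the uncased BERT-base models from the Transformers library that the paper reports. The candidate relation names and the top-k value are invented for illustration, and a real deployment would pre-compute and cache the schema-item embeddings rather than encode them per query, as noted earlier.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
question_encoder = BertModel.from_pretrained("bert-base-uncased")
schema_encoder = BertModel.from_pretrained("bert-base-uncased")   # independent copy of the weights

@torch.no_grad()
def encode(encoder, texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]    # top-layer [CLS] representation

def retrieve_topk(question, schema_items, k=2):
    e_q = encode(question_encoder, [question])          # (1, hidden)
    e_s = encode(schema_encoder, schema_items)           # (num_items, hidden)
    scores = (e_q @ e_s.T).squeeze(0)                    # dot-product matching scores
    top = torch.topk(scores, k=min(k, len(schema_items)))
    return [(schema_items[i], scores[i].item()) for i in top.indices]

candidates = ["tv.tv_program.number_of_episodes",        # toy schema vocabulary
              "tv.tv_episode_segment.subjects",
              "music.album.release_date"]
print(retrieve_topk("how many episodes does the tv program have", candidates))
```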
Training The goal of training the encoders is to create a vector space such that relevant schema items get higher scores with the given question. For each pair of question and schema item (q i , s i ) in a batch of size B, the loss is computed as: In-batch negatives have shown to be effective for learning a bi-encoder architecture (Karpukhin et al., 2020). To use in-batch negatives, we separate relevant schema items of the same question into different mini-batches. In this way, there are B training instances in each batch and B − 1 negative candidates for each question. Dense Schema Retriever v.s. Neighbor Schema Retriever To prune the decoding vocabulary space, Gu et al. (2021) retrieves schema items that are reachable by anchor entities within 2-hops in KB, which is named after neighbor schema retriever. In this section, we compare the performance of dense schema retriever proposed in this work with the neighbor schema retriever. Fig. 5 shows the recall of the schema items with respect to top-k retrieved candidates on GrailQA dev set. Neighbor schema retriever obtains 69.2% type recall with an average of 112.1 candidate items while dense schema retriever achieves 73.3% recall with only 2 candidates and 98.5% recall with 100 candidates. Similar trends can be found in the relation recall curve in Fig. 5. Dense schema retriever not only improves the recall of schema items, but also reduces the candidate size, which benefits the downstream transducer model. C Checking Procedure The usage of 4 functions (instance checking, type checking, virtual execution and real execution) are explained in the paper. Here we present an algorithm to introduce the checking procedure better, as show in Algorithm 1. D Detailed Hyper-parameter Setting Entity Linker For the BERT-based NER model, we use the uncased BERT-base model from the Transformers library trained with AdamW optimizer (learning rate: 5e-5) for 5 epochs. For the entity disambiguation model, we use the default parameters from BOOTLEG. On GrailQA dataset, we use the uncased BERT-base model trained with SparseDenseAdam optimizer implemented by BOOTLEG (learning rate: 1e-4) for 5 epochs. We add two fully connected layers of 100 hidden units and ReLU non-linearities to combine BOOTLEG and the prior score feature. The entity embedding size is set to 256, type and relation embedding size is set to 128. The entity embedding mask percentage is set to 0.8. On the smaller dataset WebQSP, except training with a larger number of epochs (50), and the embedding size is set to 64 to avoid overfitting, everything is the same as the model on GrailQA. 
Through our experiments, we select the best model based on the F1 score on the dev set of each dataset. We pass top-3 and top-5 candidate entities per entity mention to the downstream transducer model on the GrailQA and WebQSP datasets, respectively.
Algorithm 1: Checking Process
Input: valid action candidates C, decoded logical form beam L, knowledge base K
Output: logical form beam for the next step L̂
L̂ = ∅
Procedure static checking(C, L, K):
    for each action sequence s in L do
        for each valid action candidate c in C do
            if not instance checking(s, c) then continue
            if not ontology checking(s, c) then continue
            (novel checking techniques can be added here)
            ŝ ← (s_1, s_2, · · · , s_|s|, c)
            L̂ ← L̂ ∪ {ŝ}
    L̂ = kbest beam(L̂, k)    (keep top-k scoring candidates in L̂)
Procedure dynamic checking(L̂):
    for each action sequence ŝ in L̂ do
        τ = ŝ_|ŝ|
        while τ corresponds to a full sub-program do
            r = virtual execution(τ)
            if not r then
                L̂ ← L̂ \ {ŝ}; break
            τ ← parent node of τ in the AST tree
        if ŝ arrives at the end then
            r = real execution(ŝ)
            if r then
                L̂ ← {ŝ}    (only keep the first executable ŝ)
                break
    return L̂
Dense Schema Retriever We use the uncased BERT-base model from the Transformers library trained with the AdamW optimizer (learning rate: 1e-5) for 10 epochs. We select the best model based on the recall of schema items on the dev set of each dataset. On the GrailQA dataset, we retrieve top-100 type items and top-150 relation items. On the WebQSP dataset, we retrieve top-200 type items and top-500 relation items. Parser We implement our model based on PyTorch and AllenNLP. With respect to BERT, we use the uncased BERT-base model from the Transformers library. In training, we employ the Adam optimizer. The learning rate of our model is set to 1e-3, except for BERT, which is set to 2e-5. The training time of our model on a single Tesla V100 is approximately 20 hours. We select the best model based on the exact match ratio between the predicted logical form and the golden logical form.
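The loss expression for the bi-encoder training in Appendix B did not survive extraction from the PDF. As a point of reference, the snippet below shows the standard in-batch negative objective of Karpukhin et al. (2020), which the authors cite; it should therefore be read as a presumed reconstruction rather than the exact loss used in ReTraCk.

```python
import torch
import torch.nn.functional as F

def in_batch_negative_loss(question_emb: torch.Tensor,
                           schema_emb: torch.Tensor) -> torch.Tensor:
    """question_emb, schema_emb: (B, hidden). Row i of schema_emb is the positive
    schema item for question i; the other B - 1 rows serve as in-batch negatives."""
    scores = question_emb @ schema_emb.T                      # (B, B) dot-product similarities
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)                   # softmax cross-entropy per row

# Toy usage with random embeddings, batch size 8 and hidden size 768.
loss = in_batch_negative_loss(torch.randn(8, 768), torch.randn(8, 768))
print(loss.item())
```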
2021-08-27T16:44:16.603Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "ebc64974e9e0021984a0158b3c04b60327730a88", "oa_license": "CCBY", "oa_url": "https://aclanthology.org/2021.acl-demo.39.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "35ed18a2dfe9f3608009188d84ea1a82f1c5ea0e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
131817760
pes2o/s2orc
v3-fos-license
Evaluation of antidiarrheal effect of the aqueous extract of the leaves of Chromolaena odorata L (King and Robinson) This study aimed to evaluate the antidiarrheal effect of the aqueous extract of the leaves of C odorata (400 and 800 mg/kg). The antidiarrheal effect was evaluated on the diarrhoea induced by the castor oil, the charcoal test (intestinal transit time) and on the accumulation of the intestinal fluid induced by the castor oil (Enteropooling).The results obtained show that the aqueous extract at the doses used significantly decrease (p<0,001) the frequency of emission, the quantity and the onset of appearance of the faces induced by the castor oil. The aqueous extract of C odorata (400 and 800 mg/kg) does not decrease significantly the intestinal transit (p>0.05) but on the other hand significantly decrease (p<0.01) the accumulation of the fluid in the intestine induced by the castor oil. In conclusion the aqueous extract of C odorata (400 and 800 mg/kg) has an antidiarrheal effect who could be explained by interference with the mechanisms of secretion of the electrolytes. These results would justify the use of plant on the traditional treatment of the diarrhoea. INTRODUCTION In developing countries, people in rural areas often resort to traditional medicine to treat their diseases, including problems related to diarrhea. Indeed, diarrhea is a major cause of mortality and infant morbidity. Diarrheal diseases cause an estimated 1.8 million deaths each year in the world, of which 90 % are children under five years, most of whom live in developing countries 1 . 17 % of children admitted to pediatrics die of diarrhea. They are the third leading cause of death for infectious diseases of all ages 2, 3 and the fifth leading cause of premature death in the worldwide 4 . During the last 15 years, research has been undertaken to discover new drugs. It is becoming increasingly clear that plants can be a source of cheaper new products, especially for developing populations, and effective against diarrhea 5 . For example, WHO in the Africa region encourages African countries to undertake research on medicinal plants and to promote their use in health care systems 6 . C. odorata (Asteraceae) one of the medicinal plants much used in the American traditional herbal, Asian and African for the treatment of several pathologies. In Congo, it is used as a disinfectant and healing wounds 7 . A leaf decoction is used to treat colds, flu, asthma, fever 8 , skin infections, conjunctivitis 9, 10 , diabetes and malaria 11 , diarrhea 12 . The phytochemical study of the aqueous extract revealed the presence of saponins, alkaloids, glycosides-cardiotonics, steroids, tannins and flavonoids 13 . Previous pharmacological work on this plant shows that it has several pharmacological properties such as: analgesic and anti-inflammatory 14 , antibacterial 15, 16 , antioxidant 17 and antiulcer 18 . This study aims to evaluate the antidiarrheal effect of the aqueous extract of leaves of C. odorata. Plant material The leaves of C. odorata collected in Brazzaville were used. Botanical identification of the plant material was done by Mousamboté, botanist systematist of Higher Normal School of Agronomy and Forestry (HNSAF) and confirmed at the botanical laboratory of Research Institute in Exact and Natural Sciences (RIENS) of Brazzaville where the samples of C. odorata was compared with the reference samples of the herbarium at No. 1183, July 1965. Plant material were dried and pulverized with a mortar. 
The aqueous extract of leaves of C. odorata was prepared by decoction: 250 g of powder of dry C. odorata leaves were mixed in 2500 ml of distilled water. Animal material Albino rats (200 to 250 g) and albino mice (20 to 30 g) of either sex obtained from the Faculty of Science and Technology of Marien NGOUABI University were used. They were fed with a standard feed and water ad libitum. They were acclimatized during one week before experimentation and were housed under standard conditions (12 h light and 12 h dark) at a temperature of 27 ± 1°C. The rules of ethics published by the International Association for the Study of Pain 19 were followed. Castor oil induced diarrhoea in rats The method reported by Elion Itou (2018b) 20 was used. The animals were divided into groups of 5 rats each. The different doses of the aqueous extract of leaves of C. odorata (400 and 800 mg/kg), loperamide (reference molecule, 10 mg/kg) and physiological saline (control, 0.5 ml/100 g) were administered orally to the groups, one hour prior to castor oil administration (2 ml/rat). After castor oil administration, the animals were placed in metabolism cages to evaluate the frequency and the quantity of the faeces emitted, as well as the onset of appearance of the diarrheal faeces (soft or liquid). The frequency and the quantity of faeces were noted at 2, 4 and 6 hours after administration of castor oil. Intestinal transit Intestinal transit was determined by the charcoal method 20 . The various doses of the aqueous extract of leaves of C. odorata (400 and 800 mg/kg), loperamide (reference molecule, 10 mg/kg) and physiological saline (control, 0.5 ml/100 g) were administered orally to the groups, one hour prior to administration of 10% charcoal (10 ml/kg). 30 minutes after administration of the charcoal, the animals were sacrificed by cervical dislocation, the abdomen opened, and the small intestine removed and placed on blotting paper. The small intestine was inspected, and the distance traveled by the charcoal was measured using a scale and expressed as a percentage of intestinal transit according to the formula: % T = d / D × 100, where d = distance traveled by the charcoal and D = total length of the small intestine. Enteropooling This study involves assessing the net quantity of fluid accumulated in the small intestine. The various doses of the aqueous extract of leaves of C. odorata (400 and 800 mg/kg), loperamide (reference molecule, 10 mg/kg) and distilled water (control, 0.5 ml/100 g) were administered orally to the groups, one hour prior to administration of castor oil (2 ml/rat) 21 . 2 hours after castor oil, the rats were sacrificed by cervical dislocation. The small intestine was removed and weighed (P1). Subsequently, it was emptied of its contents and weighed again (P2), and its length (L) was measured. The difference between the weights divided by the length gives the net quantity (Q) of fluid accumulated: Q = (P1 − P2) / L. The intestinal contents of each group were collected in a tube and sent to the laboratory to determine the concentrations of Na+ (sodium) and K+ (potassium) ions using a flame photometer (Micro Touch Biochemistry Analyser). Statistical Analysis All values were expressed as mean ± ESM. Analysis of variance followed by the Student-Fischer "t" test was performed. The significance level was set at p<0.05. Effect on diarrhea induced by castor oil The administration of castor oil induced the excretion of diarrheal faeces.
The results obtained show that loperamide and the aqueous extract at the doses used significantly reduce (p<0.001) the emission frequency as well as the quantity of faeces excreted (Tables 1 and 2) during the 6 hours of observation compared to the control group. The maximum decreases are observed during the first two hours, when loperamide and the aqueous extract at the dose of 800 mg/kg cause almost complete constipation of the animals. In addition, loperamide and the aqueous extract at the doses used significantly delay (p<0.001) the appearance of the first diarrheal faeces compared to the control group (Table 3). The onset of the first diarrheal faeces is 65.80 ± 1.01, 259.44 ± 1.44, 170.12 ± 0.79 and 224.04 ± 1.02 min respectively for physiological saline, loperamide and the aqueous extract at doses of 400 and 800 mg/kg. Each value represents the mean ± SEM; ***p<0.001 (Student t-test), versus control group. The results of the effect of the aqueous extract of C. odorata leaves on intestinal fluid accumulation are shown in Fig. 2. They show that loperamide and the aqueous extract (800 mg/kg) significantly decrease (p<0.01) fluid accumulation in the intestine compared to the control group. However, no significant decrease (p>0.05) was observed with the aqueous extract at 400 mg/kg. The masses of the fluid collected are 10.17 for the control group, 4.18 for loperamide, 10.02 for the 400 mg/kg dose of the aqueous extract and 4.81 for the 800 mg/kg dose of the aqueous extract. Moreover, the control, loperamide and the aqueous extract at the doses used (400 and 800 mg/kg) eliminate more Na+ ions than K+ ions (ratio >1). Figure 2: Effect of the aqueous extract of C. odorata on intestinal fluid accumulation. ***p<0.001 (Student t-test), versus control group; ns = not significant (p>0.05) versus control group. DISCUSSION Oral administration of castor oil caused diarrhea in rats. This resulted in an increase in the frequency of emission and the amount of faeces excreted, which are two important parameters in the definition of diarrhea. In fact, castor oil contains ricinoleic acid, which induces irritation and inflammation of the intestinal mucosa, leading to the release of prostaglandins which, in turn, modify the mucous fluid and the transport of electrolytes, thus preventing the reabsorption of NaCl and water and leading to a hypersecretory response and diarrhea 22 . Ricinoleic acid also stimulates peristaltic activity in the small intestine and modifies the permeability of the mucosa to electrolytes (Na+, K+) by inhibiting intestinal Na+/K+-ATPase activity. Inhibition of intestinal Na+/K+-ATPase activity reduces normal fluid absorption through activation of adenylate cyclase 23 . In this study, the aqueous extract of C. odorata (400 and 800 mg/kg) significantly reduced the frequency of faeces emission and the amount of faeces excreted, and delayed the onset of the diarrhea induced by castor oil, as did loperamide (the reference molecule). Indeed, loperamide is one of the most widely used and best known antidiarrheal agents because of the absence of central effects in adults and its preferential binding to the μ and δ receptors of intestinal tissue. It antagonizes castor oil-induced diarrhea, and its action is due to its antisecretory and anti-motility properties 24 . Loperamide reduces intestinal motility by a direct effect on the circular and longitudinal muscles of the intestinal wall.
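The between-group comparisons reported above (treated groups versus the saline control, Student t-test) can be illustrated with a few lines of SciPy. The numbers below are hypothetical onset times for five rats per group, not the data of this study.

```python
from scipy import stats

# Hypothetical onset times (min) of the first diarrheal faeces, 5 rats per group
control = [64.5, 66.2, 65.1, 67.0, 66.3]
extract_800 = [222.9, 225.1, 223.8, 224.6, 223.7]

t, p = stats.ttest_ind(extract_800, control)
print(f"t = {t:.2f}, p = {p:.2e}")  # p < 0.001 indicates a significant delay vs control
```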
The fact that the extract counteracts castor oil-induced diarrhea suggests that the aqueous extract at the doses used (400 and 800 mg/kg) acts like loperamide, which was used as the reference molecule. In addition, the reduction of motility and of gastrointestinal secretions is one of the mechanisms by which many antidiarrheal agents act 25 . Therefore, in this study, the effect of the aqueous extract was evaluated on intestinal transit as well as on the accumulation of intestinal fluid induced by castor oil. The results obtained show that the aqueous extract of leaves of C. odorata does not significantly decrease (p>0.05) intestinal transit, in contrast to loperamide (p<0.001), compared to the control group. This result suggests that the observed antidiarrheal effect does not proceed through a reduction of intestinal transit. In addition, the studies conducted on enteropooling (accumulation of intestinal fluid) revealed that the aqueous extract of C. odorata significantly inhibits the accumulation of intestinal fluid (enteropooling) induced by castor oil, with a reduction in the weight and volume of the intestinal contents. From these results observed with the aqueous extract, it can be said that the reduction of intestinal fluid accumulation would be due to a stimulation of the absorption of electrolytes from the intestinal lumen, comparable to an inhibition of hypersecretion. Depending on the physiopathological conditions of diarrhea, hypermotility characterizes diarrhea in which the secretory component is not the responsible factor. It is possible that the extract may reduce diarrhea by increasing the reabsorption of electrolytes and water or by inhibiting the induction of intestinal fluid accumulation. However, it is known that prostaglandins stimulate gastrointestinal motility and the secretion of water and electrolytes. The mechanism involved would be associated with dual effects on intestinal motility and on the transport of water and electrolytes (decreased absorption of Na+ and K+) across the intestinal mucosa 21, 26, 27 . In this study, it was shown that with the aqueous extract of C. odorata at the doses used more Na+ than K+ is excreted, as with the control and loperamide, which seems to support our hypothesis on the probable mechanism of the observed antidiarrheal effect. Other authors have already demonstrated the antidiarrheal effect of plant extracts 20 . Indeed, these authors demonstrated the antidiarrheal effect of the aqueous extract (400 and 800 mg/kg) of the stem bark of Ceiba pentandra. The phytochemical study carried out previously had shown the presence of saponins, alkaloids, flavonoids, cardiotonic heterosides, steroids and terpenoids as well as tannins 13 . It has been reported by various researchers that tannins, saponins and flavonoids can be responsible for antidiarrheal actions 28 . CONCLUSION The objective of this work was to evaluate the antidiarrheal effect of the aqueous leaf extract of C. odorata. It appears from this study that the aqueous extract of C. odorata has an antidiarrheal effect. This effect could be achieved by reducing intestinal secretions rather than by slowing intestinal transit. These results could explain the traditional use of C. odorata in the treatment of diarrhea.
2019-04-26T13:59:02.005Z
2019-03-30T00:00:00.000
{ "year": 2019, "sha1": "f31c07f5033a11ca83728b4f99538b949b2f055e", "oa_license": "CCBYNC", "oa_url": "http://jddtonline.info/index.php/jddt/article/download/2392/1887", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "591ce6401178f331e861440c2783afad8471df37", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
239269724
pes2o/s2orc
v3-fos-license
Numerical Simulation of Disturbance Evolution in the Supersonic Boundary Layer over an Expansion Corner The linear and nonlinear stages of disturbance development in the supersonic boundary layer over a 10° expansion corner are investigated numerically within the framework of the Navier-Stokes equations for Mach number 3. The effect of sudden flow expansion on the disturbance evolution is analyzed. The flow stabilization effect observable in aerodynamic experiments is also discussed. NOMENCLATURE x, y, z = longitudinal, vertical, and lateral Cartesian coordinates composing the right-hand triple; u, v, w = Cartesian components of the velocity vector; p = pressure; l = coordinate along the surface: l = x at x < 0 and l = x/cos(ε) at x ≥ 0; dw = distance to the wall; p'w = pressure disturbance on the wall; M = Mach number; Re∞L = Reynolds number based on the scale length L; β = lateral wavenumber; δ = boundary layer thickness according to the u(δ) = 0.99ue criterion; ω = cyclic frequency; ε = expansion corner angle in degrees. Subscripts: e = at the outer edge of the boundary layer; ∞ = in the freestream. Abbreviations: FP = flat plate, ε = 0°; EC = expansion corner, ε = 10°; WP = wave packet (linear regime); TS = turbulent spot (essentially nonlinear regime). Flow past the elements of supersonic flight vehicles is associated with the formation of flow acceleration and deceleration zones. Within these zones the boundary layer can interact with shocks, be separated, and form zones of enhanced heat transfer on reattachment to the surface. Boundary layer turbulization can considerably increase this effect. In practice, flow acceleration zones with a favorable (negative) pressure gradient are encountered no less often than deceleration zones. Despite this fact, the fundamental problem of the influence of a favorable pressure gradient accelerating the flow on laminar boundary layer stability and the return of turbulent flow to the laminar state (relaminarization, see, for example, [1]) has been studied considerably less. The problem of relaminarization has been studied since the middle of the last century. An investigation of subsonic turbulent flows in the presence of a large negative pressure gradient showed the possibility of complete boundary layer relaminarization [2][3][4][5], which is connected with the curved nature of the streamlines and favorable longitudinal and normal pressure gradients that lead to a rapid diminishing of the turbulent fluctuation scale in the flow acceleration region [6]. The longitudinal gradient of the static pressure and the freestream Reynolds number were noted as the basic flow parameters influencing the generation and development of the relaminarization process. In the supersonic flow regime relaminarization leads to a considerable decrease in viscous friction forces and heat transfer from the warm gas to the surface in a flow. It is known that in the outer region of the boundary layer the compressibility effects are predominant over the other effects (see, for example, [7,8]). The compressibility effects include a weak decay and enlargement of large-scale eddy structures in an expansion flow at Mach number 3 [7]. As noted in [7], the Reynolds shear stresses are considerably suppressed, with the result that large eddies grow weak downstream. It was also noted that small-scale structures are considerably suppressed immediately behind a rarefaction fan.
These inferences were confirmed in experiments at Mach number 4.9 [9,10], where it was found that the intermittence region of the boundary layer shortens in an acceleration flow being displaced toward the boundary layer edge. This fact makes difficult the mixing of the gas from the external flow in the boundary layer. In [11] the possibility of partial boundary layer relaminarization was demonstrated for Mach number 4 (in its wall region consisting up to 40% of its total thickness). It was noted that an increase in the Reynolds number leads mainly to the stretching of the relaminarized flow region, while the greater flow acceleration (greater negative pressure gradient) in the interaction region results in a progressive decrease of turbulent fluctuations. It was noted that the relaminarization criteria obtained at subsonic velocities can also be applied at large supersonic velocities. The experiments [12] performed in a shock tunnel at Mach numbers from 5 to 8 confirm indirectly the stabilizing effect of the favorable pressure gradient on the "ogive-cone-cone-cylinder" body of revolution and also indicate the relaminarization of a turbulent wedge behind an isolated surface roughness. The experimental studies, such as [10], provide a good foundation for developing and testing the computational models of flows of this type, which can be applied together with the Reynolds equations and large eddy simulation. However, owing to computational complexity, there are only few publications concerning direct numerical simulation of laminar flow stability and turbulent flow relaminarization at supersonic velocities. Of these few studies we can note [13] performed at the Mach number M = 2.9 and [14] at M = 2.7. In these two studies turbulent flow around an expansion corner was considered. The two-layer structure of the accelerated flow near the corner was found out; it confirms experimental observations. The flow in the upper layer is characterized by strong suppression of turbulent fluctuations, which are slowly recovered further downstream. In the lower layer the fluctuations are suppressed only in a small vicinity of the flow turn region and are rapidly recovered further downstream. The mechanisms of disturbance growth and flow relaminarization control the laminar-turbulent transition process. It is known that in the case of a low level of external disturbances, which is inherent in flight conditions, the transition in a supersonic boundary layer over a smooth surface includes successive formation, growth, and coalescence of individual turbulent spots. A spot is formed at the nonlinear stage of wave packet development, whose evolution is determined by the undisturbed flow parameters, the disturbance background, and the temperature of the surface in a flow. In accordance with the linear stability theory, it is the oblique first-mode waves that are predominant over a hot (thermally insulated) surface, while the plane second-mode waves predominate over a cooled surface at large Mach numbers. Although a turbulent spot is an essentially nonlinear object, its development can be influenced by the linear stability mechanism. The calculations [15] establish the relation between the characteristics of turbulent spots and linear wave packets. There are certain experimental and numerical investigations that confirm this assertion (their detailed review is given in [15]). 
It should be noted that there are only a few calculated data on the development of wave packets and turbulent spots in supersonic boundary layers over expansion corners. In this study, the evolution of a wave packet (linear regime) and a turbulent spot (nonlinear regime) in the supersonic boundary layer over an expansion corner is investigated at Mach number 3. The numerical approaches used are described in Section 1 and an analysis of the numerical results is made in Section 2, where the mean flowfield, the disturbance development in the physical plane, and the spectral characteristics of the disturbances are presented. The conclusions are given in the Summary. METHODOLOGY OF THE CALCULATIONS The numerical simulation of the flowfields was performed using the in-house software package HSFlow [16], in which the Navier-Stokes equations are discretized by means of a second-order finite-volume TVD scheme. The reconstruction of the convective flux terms on the cell sides is performed using the WENO-3 scheme. The mean stationary flowfield is obtained using a second-order method of steady-state attainment with application of structured multiblock computation grids. The calculations were carried out for a perfect gas (air) with a constant adiabatic exponent γ = 1.4 and the Prandtl number Pr = 0.72. The viscosity coefficient is calculated from the Sutherland formula with the characteristic temperature Tμ* = 110.4 K; in dimensionless form, μ = T^1.5 (1 + Tμ)/(T + Tμ), where Tμ = Tμ*/T∞*. The flow velocity, temperature, density, and pressure are nondimensionalized as follows: (u, v, w) = (u*, v*, w*)/u∞*, T = T*/T∞*, ρ = ρ*/ρ∞*, and p = p*/(ρ∞* u∞*^2); all the dimensional parameters are marked by an asterisk as a superscript. The no-slip condition is imposed on the wall: (u, v, w) = (0, 0, 0); the wall temperature is constant and is equal to the flow recovery temperature calculated on the basis of the given stagnation temperature T0* = 290 K for the freestream Mach number M∞ = 3: Tw* = Tr* = Te*(1 + Pr^0.5 · 0.5(γ - 1)M∞^2) ≈ 261.8 K; Te* = T0*(1 + 0.5(γ - 1)M∞^2)^-1 ≈ 103.57 K; Tw/Te ≈ 2.53. The simplified boundary condition of extrapolation is preassigned for the pressure: (∂p/∂n)w = 0. The dimensional quantities can be assessed using the scale length L* = 0.1 m and the time scale τ* = L*/u∞* ≈ 1.64 × 10^-4 s. The dimensionless frequency and wave vector components are ω = ω*L*/u∞* and (α, β) = (α*, β*)L*, respectively. We will consider flow over a flat plate (ε = 0°, abbreviated FP) and over an expansion corner (ε = 10°, abbreviated EC). We consider the development of a wave packet (linear regime, abbreviated WP) and a turbulent spot (nonlinear regime, abbreviated TS) over both geometries, four cases altogether. The expansion corner is located at x = l = 0, where l is the coordinate along the surface (x = l at x < 0 and x = l·cos(ε) at x ≥ 0). The calculation procedure consists of five steps and ensures the same initial disturbance fields within the WP and TS groups. First, the mean flow at the plate at -7.5 ≤ x ≤ 0.2 is calculated using the method of steady-state attainment. The freestream conditions are imposed on the entry boundary (left and upper sides), while on the exit (backward) boundary all the variables of the problem are linearly extrapolated from within the computation domain. Secondly, a subdomain is separated out under the bow shock wave generated by viscous-inviscid interaction; its left entry boundary is located at l = -7.2, 0 ≤ y ≤ ∼0.12. The flow is fixed on the new entry boundaries based on the data obtained at step 1.
Then the mean flow is refined again using the steady-state attainment method, up to the moment when the calculated flow parameters change by less than 10^-11 per unit of computational time. Thirdly, the mean flowfield is doubled in the third direction for 0 ≤ z ≤ 0.7; in this direction, the boundary conditions of symmetry are applied. Disturbances are introduced in the boundary layer using a generator of the "injection-suction" type through a square orifice on the wall (-6.9923 = xs ≤ x ≤ xe = -6.6623, -0.036 = zs ≤ z ≤ ze = 0.036), which is modeled by means of a time-dependent boundary condition for the gas flow rate in the direction normal to the surface: qw(t, x, z) = A cos(π(x - (xs + xe)/2)/(xe - xs)) cos(πz/(ze - zs)) sin(ω0t) sin(2ω0t). Here, ω0 = 10.021 is the baseline generator frequency and A is its amplitude. To form the linear wave packet we take A = 5 × 10^-7 and to generate the turbulent spot we take A = 5 × 10^-4. The generator excites disturbances near the lower branch of the neutral curve for the fundamental harmonic ω0. The generator works during the time period 0 < t < π/ω0, which provides a near-uniform initial disturbance spectrum for ω < ω0, as perceived by the boundary layer. The computation grid in the subdomain has an excessive resolution Nx × Ny × Nz = 2023 × 262 × 330, corresponding to approximately 90 points on the longitudinal wavelength of the fundamental harmonic ω0. The temporal resolution amounts to about 125 points on the fundamental harmonic period 2π/ω0. On this grid the disturbances are calculated up to the moment t = 7.5, when their leading front approaches the section l = 0. Fourthly, in accordance with the procedure described above, the mean flow is calculated in a domain extended in both x and z: 0 ≤ z ≤ 1.5; the length along the wall is l ≤ 6; the left boundary of the subdomain begins at l = -3.5, 0 ≤ y ≤ 0.55; the grid dimensions are Nx × Ny × Nz = 1317 × 262 × 470. The extended grid resolves the baseline disturbance wavelength on 45 points. The transverse boundary layer resolution, normal to the surface, does not change compared with the original grid and amounts to about 100-120 grid lines. At l > 6 the extended computation domain is closed by a buffer zone with cells strongly enlarged in both x and y, intended for suppressing disturbances arriving at the exit boundary and capable of causing instability of the numerical method in the case of an intense turbulent spot. Fifthly, the disturbance field is determined by means of subtracting the mean field from the disturbed field obtained at steps 1 to 3. It is added to the mean flow over the flat plate or the expansion corner in the extended domain obtained at step 4, with application of first-order interpolation to transfer the disturbances between non-coinciding computation grids. Thus, the initial disturbance fields in the extended computation domain are the same for all the cases considered in this study. It should be noted that the procedure of the disturbance transfer from one grid to another does not introduce any spurious disturbances into the solution. The calculations are continued until the disturbances leave the computation domain, that is, until the greatest correction to the dependent variables of the problem on a time step amounts to 10^-7 (tmax = 20) for the wave packets and 10^-4 (tmax = 26 for the expansion corner and tmax = 30.5 for the flat plate) for the turbulent spots throughout the entire flowfield. Thereupon, the pressure disturbance field p'w(t, l, z) on the surface is analyzed.
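As a quick consistency check of the boundary conditions specified above, the sketch below evaluates the edge static temperature, the recovery (wall) temperature and the non-dimensional Sutherland viscosity for the stated parameters (M∞ = 3, T0* = 290 K, Pr = 0.72, γ = 1.4, Tμ* = 110.4 K). It reproduces the quoted values of about 103.6 K, 261.8 K and Tw/Te ≈ 2.53. This is an illustrative script, not part of the HSFlow solver, and the choice of the freestream static temperature as the reference for Tμ is an assumption.

```python
import math

gamma, Pr, M_inf, T0, T_mu_dim = 1.4, 0.72, 3.0, 290.0, 110.4

# Static temperature at the boundary-layer edge from the stagnation temperature
T_e = T0 / (1.0 + 0.5 * (gamma - 1.0) * M_inf**2)                    # ~103.57 K
# Recovery temperature with recovery factor r = sqrt(Pr) (laminar flow)
T_w = T_e * (1.0 + math.sqrt(Pr) * 0.5 * (gamma - 1.0) * M_inf**2)   # ~261.8 K

def sutherland_nondim(T, T_mu):
    """Non-dimensional Sutherland law: mu = T^1.5 (1 + T_mu) / (T + T_mu)."""
    return T**1.5 * (1.0 + T_mu) / (T + T_mu)

T_mu = T_mu_dim / T_e            # characteristic temperature scaled by the assumed reference
print(T_e, T_w, T_w / T_e)       # 103.57..., 261.8..., ~2.53
print(sutherland_nondim(T_w / T_e, T_mu))   # non-dimensional viscosity at the wall
```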
The extended computation domain and the steady flowfield in it are presented in Fig. 1 for the case of the expansion corner in the full computation domain (the detailed discussion is given in Section 2). The experience of the previous calculations of the authors has shown that a grid resolution of 90 (correspondingly, 47) points per wavelength leads to a smaller than 0.01% (correspondingly, 0.27%) reduction in the disturbance amplitude over one length of a monochromatic acoustic wave with respect to the natural level of its viscous decay. A preliminary analysis within the framework of the linear stability theory has shown that this numerical decay is small compared with the physical disturbance growth or decay in boundary layers. The total error of the calculations of the disturbance field for the case of the wave packet can be assessed as less than 10% at the end of the computation domain. In the case of the turbulent spot the generated disturbances have even smaller scales. They are subjected to a greater numerical decay; it is supposed that this fact cannot influence the conclusions made in this paper. Mean Flow The calculated mean flowfield over the expansion corner is presented in Fig. 1. The inviscid approximation of this class of flows is known as the Prandtl-Meyer flow [17]. The flow parameters change rapidly in a small vicinity of the corner l = 0; viscosity smoothens the distributions of the gasdynamic parameters in this region. The pressure on the surface varies on the scale of the boundary layer thickness calculated directly ahead of the bend. The temperature and velocity profiles are recovered over a greater distance. The boundary layer thickness increases rapidly within several δ of the corner. In Fig. 2 the calculated profiles of the absolute value of the velocity, the Mach number M, and the static temperature T are plotted ahead of and behind the corner; here, dw is the distance to the wall. Ahead of the corner the profiles are in good agreement with those for the case of the flat plate and begin to experience distortions only in the immediate proximity of the corner, l > -0.1; that is, the upstream effect of the corner is small. With the passage through the corner the flow velocity increases only slightly; the new absolute value of the velocity at the boundary layer edge Ve ≈ 1.056 is about 6% greater than the corresponding value upstream of the corner. The flow acceleration, regarded as an increase in the Mach number, is realized at the expense of flow cooling (Fig. 2b). As can be seen in Fig. 2, the flow parameters at the outer edge of the boundary layer become steady already at l < 0.3, which is of the order of 10δ. In this case, the expansion fan is located fairly high above the surface and the boundary layer is formed beneath it. With increase in the Mach number the fan becomes more gently sloping, with the result that the interaction region behind the corner lengthens. It should be noted that the flow parameters at the boundary layer edge behind the corner are in good agreement with the values predicted by Prandtl-Meyer theory: Me ≈ 3.58 and Te ≈ 0.79. For this reason, this theory can be used for rapidly estimating the parameters of the flow over the expansion corner. For example, we will assess the ratio of the boundary layer thicknesses over the corner and the flat plate under the assumption of a constant gas flow rate through the boundary layer: δ2/δ1 = ρ1u1/(ρ2u2) = (M1/M2)[(1 + 0.5(γ - 1)M2^2)/(1 + 0.5(γ - 1)M1^2)]^((γ+1)/(2(γ-1))). (2.1) Substituting M1 = M∞ = 3 and M2 = Me2 ≈ 3.58 we obtain δ2/δ1 ≈ 1.7.
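The estimate in Eq. (2.1) can be reproduced with a few lines of code. The sketch below computes the edge Mach number behind the 10° turn by inverting the Prandtl-Meyer function numerically and then evaluates the thickness ratio under the constant-mass-flow assumption; it returns M2 ≈ 3.58 and δ2/δ1 ≈ 1.7, in line with the values quoted above. This is an illustrative calculation, not the authors' code.

```python
import math
from scipy.optimize import brentq

gamma = 1.4

def prandtl_meyer(M):
    """Prandtl-Meyer angle nu(M) in radians."""
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    return a * math.atan(math.sqrt((M**2 - 1.0)) / a) - math.atan(math.sqrt(M**2 - 1.0))

M1, turn = 3.0, math.radians(10.0)
nu2 = prandtl_meyer(M1) + turn
M2 = brentq(lambda M: prandtl_meyer(M) - nu2, 1.01, 20.0)    # ~3.58

def thickness_ratio(M1, M2):
    """Eq. (2.1): delta2/delta1 = (rho1 u1)/(rho2 u2) for isentropic edge conditions."""
    f = lambda M: M * (1.0 + 0.5 * (gamma - 1.0) * M**2) ** (-(gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return f(M1) / f(M2)

print(M2, thickness_ratio(M1, M2))   # ~3.58, ~1.7
```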
In the direct calculations of the Navier-Stokes equations the corresponding value is 1.5. It should be noted that the rapid boundary layer broadening must be accompanied by a rapid variation in its stability characteristics. In particular, assuming that the phase velocities of the disturbances for the FP and EC cases are similar in value, we obtain that the unstable frequency range must be scaled together with the boundary layer thickness: f2/f1 ≈ λ1/λ2 ≈ δ1/δ2, where f is the frequency of the predominant disturbance harmonic. Because of this, it might be expected that the disturbances growing ahead of the corner will transform into disturbances decaying downstream of the corner. This supposition is discussed below. Disturbance Evolution in the Physical Plane We will consider the disturbance evolution with reference to the example of the pressure disturbance p'w on the wall for several successive moments of time (Fig. 3). In accordance with the linear stability theory for the flat-plate boundary layer, for the flow parameters under consideration there exists only one unstable mode, namely, the first mode according to Mack's terminology [18,19]. The first-mode disturbances are oblique waves propagating with an inclined wave front. As can be seen in Figs. 3a-3d, the wave packet FP is amplified downstream monotonically, increasing in proportion in the longitudinal and lateral directions. The wave packet EC develops in the same fashion up to the bend of the surface at l = 0 and decays monotonically and rapidly further downstream. It should be noted that, as the EC packet passes over the corner, no new disturbances appear in the p'w(x, z) field in the boundary layer. The quantitative comparison of the amplitudes of the wave packets FP and EC is given in Fig. 4a, where the distributions of the greatest-in-z value of p'w(x, z) are presented in each grid section x = const. The packet FP amplitude increases downstream almost exponentially, whereas the EC packet decays exponentially at l > 0. The turbulent spot on the flat plate evolves in a different way (Figs. 3e-3h): it enlarges monotonically downstream but its amplitude remains at approximately the same level (Fig. 4b). On the spot periphery, particularly in its forward region, there is a region of lowered pressure, while the spot core is a region of elevated pressure. In the vicinity of the spot, where the nonlinear interaction is almost absent, oblique wave packets are formed; their geometric parameters are characteristic of the first-mode wave packet (Figs. 3a-3d). As distinct from the case of the wave packet, the turbulent spot EC is attenuated when moving over the expansion corner, but this attenuation is only local in nature and the spot continues to grow, similarly to the case of FP at l > 1. The presence of the expansion corner delays the spot development. This observation is confirmed in Fig. 4b: the level of the maximum disturbance amplitude in the spot decreases sharply behind the corner and is slowly recovered further downstream, approaching the corresponding disturbance level over the flat plate. Frequency-Amplitude Analysis of Disturbances We will consider the disturbance evolution in the spectral plane. For this purpose, we will take the field of the two-dimensional fast Fourier transform p'w(ω, β) of the quantity p'w(t, x, z) with respect to the variables t and z; here, x is a parameter. In the case of the wave packet the spectra contain two symmetric maxima determining the angle of inclination of the wave front (Fig. 5a).
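The frequency-wavenumber analysis described here amounts to a two-dimensional discrete Fourier transform of the wall-pressure disturbance over time and the spanwise coordinate at a fixed streamwise station. A minimal NumPy sketch of that post-processing step is shown below; the array shapes, sampling steps and the demonstration signal are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def wall_pressure_spectrum(p_wall, dt, dz):
    """p_wall: 2-D array of the pressure disturbance p'(t, z) at one station x.
    Returns |p'(omega, beta)| together with the frequency and wavenumber axes."""
    nt, nz = p_wall.shape
    spec = np.fft.fftshift(np.fft.fft2(p_wall)) / (nt * nz)
    omega = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(nt, d=dt))
    beta = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(nz, d=dz))
    return np.abs(spec), omega, beta

# Hypothetical demonstration signal: an oblique wave pair at (omega0, +/-beta0)
t = np.linspace(0.0, 12.0, 512, endpoint=False)
z = np.linspace(0.0, 0.7, 128, endpoint=False)
T, Z = np.meshgrid(t, z, indexing="ij")
p = 1e-4 * (np.cos(10.0 * T - 40.0 * Z) + np.cos(10.0 * T + 40.0 * Z))
amp, omega, beta = wall_pressure_spectrum(p, dt=t[1] - t[0], dz=z[1] - z[0])
print(omega[amp.max(axis=1).argmax()], beta[amp.max(axis=0).argmax()])  # ~±10, ~±40
```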
Further downstream the frequencies and the wave numbers of these maxima decrease slowly, which is in agreement with the results of the linear stability theory and will not be discussed here in detail; as for the maximum values, they increase downstream. Ahead of the corner (l < 0) the wave packet spectra are identical for the FP and EC cases. Behind the corner the disturbance spectrum in the EC packet decays monotonically and uniformly throughout the entire spectral range, except in the near vicinity of zero: β ∼ 2, ω ∼ 10. Here, a weak growth at the level of the background noise is observable. Supposedly, this is a new wave packet which is generated from background disturbances in the restructured boundary layer. Owing to the restricted dimensions of the computation domain, a more detailed analysis on the basis of the calculations performed seems purposeless. The corresponding spectra for the turbulent spot are qualitatively different. Due to the large amplitude of the disturbance generator, the initial spectrum of disturbances generated in the boundary layer is not simple, although two maxima of the first-mode disturbances can again be separated out; they correspond to the maxima in Fig. 6. The nonlinear interaction leads to the appearance of harmonics with multiple frequencies and wave numbers; the spectrum is rapidly filled and breaks into small parts; the (ω, β) = (0, 0) harmonic appears and is amplified, which indicates an increasing variation of the mean flow within the spot. Again, the spectra turn out to be identical for l < 0, while differences appear directly at l ≥ 0. The main difference is that the EC spectrum amplitude is rapidly reduced throughout the entire frequency-wavenumber range and the spectrum becomes less filled, but further downstream it is slowly filled and recovered with respect to the amplitude. Nevertheless, the saturation region is not recovered and remains smaller than in the FP case (Figs. 6e and 6f). The spectral behavior of the wave packet and the turbulent spot described above is clearly illustrated by the distributions of the maximum Fourier amplitude on the surface presented in Fig. 7. It is clear that the growth of the strongest harmonic of the packet is near-exponential and is the same for the FP and EC cases at l < 0; the decay of this harmonic behind the expansion corner is also near-exponential. The growth of the strongest harmonic in the turbulent spot, (ω, β) ≈ (0, 0), proceeds most actively at the spot formation stage, l < -2, and then decelerates considerably. Directly behind the expansion corner this harmonic attenuates jumpwise at 0 ≤ l ≤ 0.5, but its growth is then renewed, the growth rate (slope of the curve) being similar in value to that for the flat plate. Delay of the Turbulent Spot As noted above, the presence of the expansion corner delays the development of the turbulent spot. We will illustrate this thesis. In Fig. 8a the turbulent spot is visualized on the plate when it is completely located at l < 0 (that is, it coincides with the EC case), and after the passage through the line l = 0 for both the plate and the expansion corner. In both cases (Figs. 8b and 8c) the spot shape is near-triangular. However, in the FP case (Fig. 8b) the spot is greater than in the EC case (Fig. 8c), although the spot structures seen in profile differ only slightly. It might be expected that due to viscous friction the contribution of the greater spot will be greater.
The contribution of the spot can be calculated as follows: ΔFv,x = 0.5 ρ∞ u∞^2 ∫S Δcf,x dS, (2.2) where S is the area of the surface in a flow and Δcf,x is the excessive friction coefficient, compared to the case of undisturbed laminar flow. The friction coefficient is determined in the standard way, cf,x = μ(∂V/∂n)w / (0.5 ρ∞ u∞^2), (2.3) where ∂V/∂n is the derivative of the absolute value of the velocity vector with respect to the normal to the surface. The location of the application of the excessive force ΔFv,x can be determined from simple geometric considerations, similarly to the location of the body center-of-mass: lc = ∫S l Δcf,x dS / ∫S Δcf,x dS. (2.4) In Fig. 9 the evolution of the turbulent spot contribution to the viscous drag force is presented. On passage through the corner (EC case) the quantity lc < 0. This is due to the fact that the part of the turbulent spot that has penetrated into the l > 0 region rapidly loses its intensity, compared with the other part of the spot. When the EC spot has penetrated completely into the l > 0 region, its contribution to the viscous friction force begins to increase again. In this case, the growth rates are similar in value for the FP and EC cases, which confirms the supposition about the delaying effect of the expansion corner on the turbulent spot development; the delay can be estimated from Fig. 9 and is approximately unity. SUMMARY Within the framework of the Navier-Stokes equations, the development of wave packets and turbulent spots in a supersonic boundary layer (Mach number 3) over a 10° expansion corner is investigated numerically. The occurrence of the expansion corner leads to flow stabilization. It is shown that the wave packets of the first unstable mode decay monotonically behind the expansion corner. This is due to the sudden flow restructuring, the boundary layer broadening, and, as a consequence, a variation in the boundary layer stability characteristics. The unstable region is scaled with respect to the frequency together with the boundary layer thickness, whose variation may be estimated on the basis of Prandtl-Meyer theory. It is also shown that the localized regions of turbulent flow, or turbulent spots, are suppressed on passage through the expansion corner only locally. The occurrence of the expansion corner delays the spot development on a scale of the order of 40 thicknesses of the undisturbed boundary layer ahead of the corner; the disturbance amplitude decreases and the frequency-wavenumber spectrum becomes less filled. Downstream of this region the turbulent spot grows similarly to the spot in the gradient-free flow over the flat plate. For this reason the turbulent flow relaminarization effect observable in the experimental heat-transfer patterns can actually indicate only local attenuation of turbulence rather than its complete suppression. An investigation of the second unstable mode of the boundary layer, which appears at large Mach numbers, and of the behavior of viscous friction and heat transfer to the expansion corner surface is the theme of further studies of the authors. DECLARATION OF CONFLICTING INTERESTS The Authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. OPEN ACCESS This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
2021-10-19T15:57:30.250Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "4eab04db2c6569df73fff43cdbe06450f0f02b99", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1134/S0015462821050025.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "e8249fa0f0c3c6d4cd29971ca7aea820f67572e7", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [] }
255857865
pes2o/s2orc
v3-fos-license
Correlation analysis of epicardial adipose tissue thickness, C-reactive protein, interleukin-6, visfatin, juxtaposed with another zinc finger protein 1, and type 2 diabetic macroangiopathy To investigate the correlation between the thickness of epicardial adipose tissue (EAT), C-reactive protein (CRP), interleukin (IL) -6, visfatin, juxtaposed with another zinc finger protein 1 (JAZF1) and type 2 diabetic mellitus (T2DM) macroangiopathy. The study enrolled 82 patients with T2DM with macroangiopathy (the Complication Group), and 85 patients with T2DM (the Diabetes Group) who were admitted to Shandong Provincial Third Hospital from February 2018 to February 2020. In addition, 90 healthy people who underwent physical examination at the same hospital during the same period were enrolled (the Healthy Control Group). Age, gender, height, weight, waist circumference (WC), hip circumference (HC), diabetic course and therapeutic drugs, waist hip ratio (WHR), and body mass index (BMI) were recorded and calculated. The baseline characteristics of the three groups were comparable, and the diabetic course of the Complication Group and the Diabetes Group was not significantly different (P > 0.05). The WHR of the Complication Group was higher than that of the Diabetes Group and the Healthy Control Group, with statistical significance (P < 0.05). The FPG, 2hPG, HbA1C, CRP, IL-6, Visfatin, JAZF1, HOMA-IR, EAT thickness, and baPWV of the Complication Group were all higher than those of the Diabetes Group and the Healthy Control Group (P < 0.05, respectively). The JAZF1 and FIns of the Complication Group and Diabetes Group were lower than those of the Healthy Control Group, and JAZF1 of the Complication Group was lower than the Diabetes Group with statistical significance (P<0.05, respectively). Pearson correlation analysis showed that the EAT thickness was positively correlated with CRP, IL-6, visfatin, and JAZF1 (r = 0.387, 0.451, 0.283, 0.301, respectively, all P<0.001). Pearson correlation analysis showed that baPWV was positively correlated with EAT thickness, CRP, IL-6, visfatin, and JAZF1 (r = 0.293, 0.382, 0.473, 0.286, respectively, all P < 0.001). Multivariate stepwise regression analysis showed that FPG, 2hPG, HbA1C, CRP, IL-6, visfatin, JAZF1, and EAT thickness were independent risk factors that affected T2DM macroangiopathy. Clinical monitoring and treatment of T2DM macroangiopathy can use CRP, IL-6, Visfatin, JAZF1, and EAT thickness as new targets to delay the progression of the disease. Further research on the relationship between the above factors and the pathogenesis of T2DM macroangiopathy may be helpful provide new treatment strategies. Introduction Type 2 diabetes mellitus (T2DM) is a common metabolic disease in endocrinology department. Its main feature is chronic hyperglycemia, accompanied by insulin resistance and islet β-cell damage. With continuous improvement of the socio-economic level, the prevalence of T2DM has been increasing globally in recent years and is showing a younger trend [1]. An epidemiological study showed that the number of patients with T2DM in the world reached 285 million in 2009, and is estimated to reach 552 million in 2020 [2]. On this trend, the number of T2M patients may rise to 629 million by 2045 [3]. 
A study showed that patients with T2DM were often accompanied by complications such as macroangiopathy, eye disease, and renal failure, the most common of which was macroangiopathy, which accounted for about 75% of T2DM complications [4] Macroangiopathy is not only the main cause of disability in patients, but also can lead to death of patients, posing a serious threat to the life and health of patients. Therefore, how to effectively prevent and treat type 2 diabetic macroangiopathy is one of the key clinical issues that need to be solved urgently. Macroangiopathy includes coronary heart disease, hypertension, cerebrovascular disease, and vascular disease of lower extremity. The main pathological basis is atherosclerosis. The thickness of epicardial adipose tissue (EAT) is closely related to coronary atherosclerosis and can reflect plaque severity [5]. A recent study has confirmed that inflammation may play an important role in the pathogenesis of type 2 diabetic macroangiopathy. A variety of inflammatory factors, including C-reactive protein (CRP) and interleukin (IL) -6, not only regulate themselves and other tissues, but also are associated with insulin resistance and islet cell dysfunction, promoting the occurrence of diabetes [6]. CRP and IL-6 are important markers of local vascular injury, which can reflect the severity of inflammation. The EAT is located between the visceral pericardium and the myocardium. The thickness of EAT is closely related to coronary atherosclerosis and can reflect the severity of atherosclerosis [6]. Juxtaposed with another zinc finger protein 1 (JAZF1), also known as TAK1-Interacting Protein 27, is closely related to various diseases such as atherosclerosis and T2DM [7]. Visfatin, a cytokine found in visceral fat, binds and activates insulin receptors and produces insulin-like effects. Ming et al. [8] indicated that JAZF1 promoted the expressions of visfatin, peroxisome proliferators-activated receptor (PPAR) α, and PPARβ/δ in adipocytes but simultaneously inhibited the expressions of TAK1 and PPARγ. There are few studies on the relationship between EAT thickness, CRP, IL-6, visfatin, JAZF1 and type 2 diabetic macroangiopathy. Therefore, this study aimed to explore the relationship between these factors and type 2 diabetic macroangiopathy, in order to provide a theoretical basis for the clinical treatment of type 2 diabetic macroangiopathy. Clinical data A total of 167 T2DM patients who were hospitalized from February 2018 to February 2020 were consecutively enrolled in the study. Among them, 82 patients with diabetes with macroangiopathy were assigned to the Complication Group, and 85 patients with simple T2DM were assigned to the Diabetes Group. In addition, 90 healthy people who underwent physical examination during the same period were enrolled as the Healthy Control Group. The flow chart is shown in Fig. 1. The study protocol was approved by the Ethics Committee of Shandong Provincial Third Hospital. The formulation of this research protocol complies with the relevant requirements of the Declaration of Helsinki of the World Medical Association. Inclusion and exclusion criteria Inclusion criteria: (1) All patients with T2DM met the 2019 World Health Organization (WHO) criteria for diabetes diagnosis and classification [9]. 
(2) The diagnostic criteria of diabetic macroangiopathy were brachial-ankle artery pulse wave velocity (baPWV) > 1400 cm/s on either side or both sides of the lower limbs in T2DM patients [10,11], and the patients met one of the following criteria: ① magnetic resonance imaging (MRI) or computed tomography (CT) scan of the brain revealed ischemic lesions, confirming cerebral infarction; ② coronary CT or coronary angiography confirmed coronary heart disease (more than 70% lumen stenosis in a major epicardial vessel or more than 50% in the left main coronary artery); ③ a history of coronary heart disease, old cerebral infarction or other cerebrovascular diseases; ④ Doppler ultrasonography revealed extensive irregular stenosis or segmental occlusion of lower extremity arteries; (3) normal functions of major organs such as the liver and kidneys on biochemical tests; (4) the clinical data were complete; (5) the patient and his/her relatives agreed and signed informed consent. Exclusion criteria: (1) patients with type 1 or other types of diabetes; (2) malignant tumor; (3) gastrointestinal lesions; (4) acute or chronic infection; (5) acute complications of diabetes; (6) hypertension (systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg); (7) a history of lower limb gangrene. Methods Age, gender, height, weight, waist circumference (WC), hip circumference (HC), and diabetic course in the 3 groups were recorded. WHR and BMI were calculated. Venous blood was collected via the elbow vein from the 2 groups of diabetic patients who had fasted for more than 12 h, and 2-h postprandial blood was also collected. The fasting plasma glucose (FPG) and 2-h postprandial plasma glucose (2hPG) were detected by the glucokinase method. Hemoglobin A1C (HbA1C) was detected by high pressure liquid chromatography [12]. Total cholesterol (TC), triglyceride (TG), high density lipoprotein cholesterol (HDL-C), low density lipoprotein cholesterol (LDL-C) and CRP were detected using a Hitachi 7080 fully automated analyzer (Hitachi, Tokyo, Japan). IL-6, visfatin and JAZF1 were detected with a TSZ ELISA kit (Biotang Inc./TSZ ELISA, Waltham, MA, USA). An electrochemiluminescence method was used to detect FIns. Homeostasis model assessment of insulin resistance (HOMA-IR) = FPG × FIns/22.5. The baPWV was measured using a Colin VP-1000 fully automatic arteriosclerosis detector (Colin Medical Technology Co., Komaki, Japan) and the OMRON HEM-9000AI device (Omron Healthcare, Kyoto, Japan). To ensure accuracy, the mean value of 5 consecutive measurements was taken. Measurement of EAT thickness [13]: Vivid 7 (GE Healthcare, Milwaukee, WI, USA) and EPIQ5 (Philips, Amsterdam, the Netherlands) echocardiography machines were used with a 2-4 MHz cardiac probe. The cardiac probe was connected to the electrocardiogram (ECG), and the end of the P wave was judged as the end of diastole. The patients were asked to lie on the left side. The thickest point at the anterior wall of the right ventricle, along the parasternal long axis of the left ventricle, was measured at the end of diastole (Fig. 2). The above operations were performed by physicians who had worked in the imaging department for more than 8 years. To ensure accuracy, the operations were performed 5 consecutive times and the average value was taken. Statistical analysis Data were statistically analyzed using the statistical software SPSS 23.0. Normally distributed data were represented by (mean ± standard deviation) and analyzed using the t-test.
Non-normally distributed measurement data were represented by median and interquartile range [M (Q L , Q U )], and analyzed using Wilcoxon rank-sum test. One-way ANOVA was used for comparison among multiple groups using post-hoc test. Categorical data were expressed as counts and percentages and analyzed using χ 2 test. Pearson analysis was used for correlation analysis. Spearman test was used to analyze the correlation of non-normally distributed data. The determinants of T2DM macroangiopathy were analyzed by logistic regression. The level of statistical significance for all the above tests was defined at a probability value of less than 0.05 (P < 0.05). Demographic characteristics The demographic data of the three groups were as follows: The Complication Group: The gender distribution was 39 males, and 43 females. The age range was 47-75 years, with an average age of 55.4 ± 5.2 years. The BMI was 25.02 ± 2.28 kg/m 2 . The Diabetes Group: The gender distribution was 45 males and 40 females. The age range was 48-76 years, with an average age of 55.9 ± 5.3 years. The BMI was 24.79 ± 2.12 kg/m 2 . The Healthy Control Group: The gender distribution was 47 males, and 43 females. The age range was 47-78 years, with an average age of 56.1 ± 6.2 years. The BMI was 24.02 ± 2.39 kg/m 2 . The baseline characteristics were comparable among the three groups (P > 0.05). The WHR of the Complication Group was higher than that of the Diabetes Group and the Healthy Control Group, with statistical significance (P < 0.05) ( Table 1). Comparison of laboratory test results among the three groups There was no statistical difference in TC, HDL-C, and LDL-C among the 3 groups (P > 0.05). The FPG, 2hPG, HbA 1 C, CRP, IL-6, visfatin, JAZF1, HOMA-IR, baPWV, and EAT thickness of the Complication Group were all higher than those of the Diabetes Group and the Healthy Control Group, and the JAZF1 and FIns of the Complication Group and the Diabetes Group were lower than those of the Healthy Control Group, with statistical significance (P < 0.05, respectively), JAZF1 of the Complication Group was lower than that of the Diabetes Group (P < 0.05). The FPG, 2hPG, TG, HbA 1 C, CRP, IL-6, visfatin, JAZF1, HOMA-IR, baPWV, and EAT thickness of the Diabetes Group were all higher than those of the Healthy Control Group, with statistical significance (P < 0.05). There was no statistical difference in TG and FIns between the Complication Group and the Diabetes Group (P>0.05) ( Table 2). Logistic regression analysis of factors associated with macroangiopathy in T2DM As can be seen from Table 3, EAT was correlated with multiple indicators. In order to control the influence of confounding factors and possible collinearity between independent variables, multiple logistic regression analysis was performed of risk factors of T2DM macroangiopathy patients. BaPWV> 1400 cm/s [14] was assigned as the dependent variable, and FPG, 2hPG, HbA1C, TC, TG, HDL-C, LDL-C, CRP, IL-6, visfatin, JAZF1, FIns, HOMA-IR, and EAT thickness were assigned as the independent variables. The results showed that FPG, 2hPG, HbA1C, CRP, IL-6, visfatin, JAZF1, and EAT thickness were all factors that associated with T2DM macroangiopathy. The R2 value of this regression model was 0.892. Discussion T2DM has become the third chronic disease affecting human life after cardiovascular diseases and tumors. Macroangiopathy is one of the main complications of T2DM. It can involve medium sized or large blood vessels, leading to stenosis and occlusion of the lumen. 
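The multivariable analysis described earlier in this section, with baPWV > 1400 cm/s as the binary outcome and the laboratory indices as predictors, can be set up along the following lines. This is a generic scikit-learn sketch with assumed column names and a hypothetical data file; it does not reproduce the SPSS model or the reported R² of 0.892.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical spreadsheet with one row per participant (column names assumed)
df = pd.read_csv("t2dm_cohort.csv")
df["HOMA_IR"] = df["FPG"] * df["FIns"] / 22.5        # HOMA-IR = FPG x FIns / 22.5
predictors = ["FPG", "2hPG", "HbA1c", "TC", "TG", "HDL_C", "LDL_C",
              "CRP", "IL6", "visfatin", "JAZF1", "FIns", "HOMA_IR", "EAT_thickness"]
y = (df["baPWV"] > 1400).astype(int)                 # macroangiopathy indicator
X = StandardScaler().fit_transform(df[predictors])

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(predictors, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")                    # direction and size of each standardized effect
```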
In severe cases, it can cause plaque rupture and shedding, and induce embolism or bleeding. The mechanism may be related to factors such as glucose and lipid metabolism disorder, blood hypercoagulability, microcirculation disorder, and decline of vascular endothelial function. The pathological basis of T2DM macroangiopathy is atherosclerosis, and its mechanism may be related to factors such as glucose and lipid metabolism disorder, blood hypercoagulability, microcirculation disorder, and vascular endothelial function decline [15]. Umemura et al. [16] revealed that the prevalence of macroangiopathy in Caucasian T2DM patients is twice that of microvascular disease, and the mortality rate is 76 times that of microvascular disease. In clinical practice, controlling blood glucose alone cannot reduce the risk of macroangiopathy. Therefore, prevention of macroangiopathy in T2DM is a difficult disease of global concern and one of the important issues to be solved urgently. The EAT is located on the surface of the myocardium. It is a special visceral fat between the epicardium and the visceral pericardium. It is an important endocrine organ of the body and a storage warehouse of body fat energy. It is mainly distributed in the free wall of the right ventricle, the apex of the left ventricle and the free wall of the right ventricle. It can release a variety of biologically active molecules at a high rate, and communicate through signal transduction between the heart, liver, vascular endothelial cells, adipose tissue, skeletal muscle and pancreatic islet cells, forming a complex regulatory network [17]. Chen et al. [18] indicated that the EAT thickness of T2DM patients is significantly higher than that of normal subjects. CRP is the most significant clinical marker of inflammation. It has the function of recognizing and regulating immunity. It can also enhance the reactivity of leukocytes, play a firm role in the fixation of complement, and strengthen the ability to remove cell debris in inflammation sites. By activating complement, inflammatory mediators such as histamine are released [19]. Shen et al. [20] suggested that CRP is present in atherosclerosis and produces proinflammatory and atherogenic pathways, suggesting that CRP can be used not only as an inflammatory marker, but also as an independent risk factor in the pathologic formation of atherosclerosis. IL-6 is synthesized by fibroblasts, vascular endothelial cells, activated monocytes and other cells. It can affect inflammation and host defense through cellular and humoral immune functions and is the main circulating substance in vivo that links systemic immune response with local vascular injury [21]. Ziegler et al. [22] suggested that T2DM may be a cytokine-mediated inflammatory disease, and T2DM and atherosclerosis are both inflammatory diseases. Inflammation plays an important role in the occurrence and development of chronic vascular complications and atherosclerosis and has been considered as one of the important factors in the occurrence and development of atherosclerosis. Since IL-6 can stimulate liver cells to synthesize CPR, the changes in the levels of the two in patients also have a certain correlation. Deng et al. [23] proved that hs-CRP and IL-6 have diagnostic significance for patients with T2DM vascular disease. Li et al. [24] confirmed that the human JAZF1 gene sequence has extremely high homology with mouse gene sequence. JAZF1 is located in the nucleus and its mRNA is common in human tissues. 
TAK1 is an orphan nuclear receptor that plays a role in multiple metabolism-related genes and has a regulatory effect on liver lipid metabolism. Lack of TAK1 in mice can reduce adipose tissue inflammation, loss of mitochondrial function, and the formation of CD36 and foam cells. It was found in [26] that PPARα-mediated transcriptional activation is inhibited by TAK1 in liver cells, and PPARα regulates gene expression at multiple points in the liver, which indirectly suggests that JAZF1 can improve lipid metabolism. An animal experiment by Zhou et al. [27] indicated that JAZF1 gene overexpression can improve lipid metabolism and inhibit the accumulation of macrophages in plaques, thus reducing or delaying the formation of atherosclerosis. Therefore, it is speculated that JAZF1 may play an important role in diabetic macroangiopathy, hyperlipidemia and glycolipid metabolism. Visfatin is a factor present in visceral fat cells, which can bind to and activate the insulin receptor, producing insulin-like effects, and is closely related to vascular smooth muscle maturation, atherosclerosis, immune regulation and inflammatory reactions. Ran et al. [28] have shown that inhibition of JAZF1 reduces the expression level of visfatin. However, there are few clinical reports on the correlation between EAT thickness, CRP, IL-6, visfatin, JAZF1 and T2DM macroangiopathy. The results of this study showed no statistical difference in baseline characteristics among the three groups, and the diabetic course was comparable between the Complication Group and the Diabetes Group (P > 0.05). The WHR, FPG, 2hPG, HbA1C, CRP, IL-6, visfatin, JAZF1, HOMA-IR, and EAT thickness were all higher in the Complication Group than in the Diabetes Group and the Healthy Control Group (P < 0.05, respectively), and the FIns of both the Complication Group and the Diabetes Group was lower than that of the Healthy Control Group (P < 0.05). This suggests that the above indicators could predict T2DM with macrovascular lesions, especially CRP, IL-6, visfatin, JAZF1, HOMA-IR, and EAT thickness. According to the changes in the above indicators, early intervention in patients with T2DM can prevent the occurrence of disability and death to a certain extent and has important clinical significance for the treatment of T2DM macroangiopathy. CT and MRI are the main methods used to measure EAT thickness. However, due to the high price of CT and MRI, the radiation of CT and the noise of MRI, the large-scale use of CT and MRI is limited to some extent. Uygur et al. [29] have confirmed that the measurement of EAT thickness of the anterior wall of the right ventricle by chest ultrasound is consistent with the measurement results of CT and MRI. Therefore, in this study, chest ultrasound was used to measure the EAT thickness of the anterior wall of the right ventricle in the enrolled cases and healthy controls. The measurement site was the hypoechoic and anechoic region between the epicardium of the right ventricular wall and the visceral pericardium. Because of the difference in shape, the thickest part of the anterior wall of the right ventricle was measured. The results demonstrated that repeated measurements of EAT thickness at the end of diastole gave stable results, and the measuring method was simple and reliable. Ultrasound measurement of EAT thickness has the following advantages in predicting T2DM macroangiopathy.
First, compared with the conventional method of evaluating macroangiopathy, ultrasound can measure the thickness of EAT and examine the structure and function of the heart at the same time, combining the examination to evaluate cardiac and macroangiopathy. Second, compared with CT or MRI, ultrasound examination is cheaper and easier to repeat. Diabetes can easily lead to atherosclerosis. As atherosclerosis progresses, plaques can block the lumen and cause cardiovascular and cerebrovascular diseases. PWV is the rate of pulse conduction from the proximal end to the distal end of the arterial wall due to the expansion and retraction of the arterial wall, which can reflect the elasticity of the artery. The higher the PWV is, the harder the blood vessel wall is [30][31][32]. Studies have confirmed that PWV is an independent predictor of cardiovascular events [33,34], and also pointed out that EAT is an independent risk factor for cardiovascular disease, which can affect the process of atherosclerosis through regulating inflammation. As the lesion area of coronary heart disease expands, the thickness of epicardial tissue also increases. As the human body ages, the expansion of elastic arteries decreases while the compliance of muscular arteries increases. The baPWV measurement includes elastic arteries and muscular arteries, which more comprehensively reflects the condition of arteriosclerosis. Pearson correlation analysis found that EAT thickness was positively correlated with CRP, IL-6, visfatin, and JAZF1 (P < 0.001), and baPWV was positively correlated with EAT thickness, CRP, IL-6, visfatin, and JAZF1 (P < 0.001), suggesting that in T2DM macroangiopathy patients, EAT thickness is closely related to inflammation and lipid metabolism. With increase in EAT thickness, the levels of CRP, IL-6, visfatin, and JAZF1 also increase, which can induce hyperglycemia, insulin resistance, and vascular endothelial dysfunction, etc., promoting the occurrence and development of atherosclerosis, leading to macroangiopathy. A previous study found a stronger link between pericardial adipose tissue and visceral abdominal adipose tissue than other cardiovascular risk factors. Vascular calcification was associated with intrathoracic and pericardial adipose tissue, probably due to a local toxic effect on the vasculature [35]. In addition, excessive serum free fatty acids (FFA) can increase glycogen and basal insulin secretion and reduce liver insulin inactivation, resulting in hyperglycemia and insulin resistance. Diabetic hyper-FFAemia can cause vascular endothelial dysfunction through inflammation, oxidative stress pathways, and mitochondrial dysfunction, and vascular endothelial dysfunction is the initiating factor leading to atherosclerosis [36]. In the pathophysiology of atherosclerosis, inflammatory mechanisms play an important role. Persistent chronic inflammatory responses lead to damage to blood vessels, causing atherosclerosis, and plaque rupture and thrombosis [37]. He et al. [38] indicated that inflammatory factors such as TNF-α and CRP are involved in the pathophysiological process of vascular disease in patients with T2DM in plateau areas. Zhuo et al. [39] found that serum JAZF1 combined with fasting C-peptide has certain value in the diagnosis of T2DM macroangiopathy. Another study has also found that increased levels of visfatin are closely related to the severity of atherosclerotic peripheral arterial obstructive disease [40]. Chang et al. 
[41] confirmed that visfatin is elevated in the serum of Uyghur diabetic patients. Study strengths and limitations This study prospectively observed the relationship between indicators such as EAT thickness and the incidence of T2DM macroangiopathy, and found some valuable positive indicators with a certain predictive value for the incidence of T2DM macroangiopathy. This study also has certain limitations. First, this is a single-center trial, and selection bias cannot be completely eliminated. Second, due to the limited duration of this study, it is still unclear whether there is cross-regulation between serum CRP, IL-6, visfatin, JAZF1, and EAT thickness, and the specific regulatory mechanism remains unelucidated and needs to be confirmed by further study. Third, there may be inter- or intra-operator differences in the measurement, which could bias the EAT thickness results. Conclusion CRP, IL-6, visfatin, JAZF1, and EAT thickness are closely related to the clinical progression of patients with T2DM and are independent risk factors for T2DM macroangiopathy. However, how these factors affect T2DM macroangiopathy and the underlying mechanism remain to be further explored. Therefore, the above indicators, as new targets for delaying disease progression, can provide valuable references for clinical treatment strategies.
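The Pearson correlation analysis reported in the discussion above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example of computing correlation coefficients between EAT thickness and the markers discussed (CRP, IL-6, visfatin, JAZF1, baPWV); the file and column names are illustrative placeholders, not the study's actual data.

```python
# Minimal sketch of the Pearson correlation analysis described above.
# File name and column names are illustrative placeholders, not study data.
import pandas as pd
from scipy import stats

df = pd.read_csv("t2dm_cohort.csv")  # hypothetical table with one row per participant

markers = ["CRP", "IL6", "visfatin", "JAZF1", "baPWV"]
for m in markers:
    r, p = stats.pearsonr(df["EAT_thickness"], df[m])
    print(f"EAT thickness vs {m}: r = {r:.2f}, P = {p:.3g}")
```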
A High-performance Approach for Irregular License Plate Recognition in Unconstrained Scenarios I. INTRODUCTION Recognizing license plates is a crucial area of research due to its numerous practical applications, including monitoring road traffic, collecting tolls automatically, enforcing traffic laws, and more. A license plate recognition pipeline for recognizing irregular license plates typically includes four stages: license plate detection, perspective correction, character segmentation and character recognition. License plate detection aims to extract license plate regions from input images. The accuracy of the entire system heavily relies on the accuracy of license plate detection, as the extracted regions are utilized in subsequent stages. As real-world images containing license plates are captured under different viewpoints, license plates may have arbitrary orientations. As a result, perspective correction is performed to align the detected license plates. For recognizing characters, a segmentation approach is first used to decompose the aligned license plate image containing a sequence of characters into sub-images of individual characters. Then, a character recognition approach is employed to classify each character. Classical approaches based on computer vision for license plate recognition [1], [2] primarily focus on extracting features of license plates based on the background color, contours and edges, and use these hand-crafted features for locating license plates and decomposing characters. Recently, numerous CNN-based approaches for license plate recognition have been proposed, leading to significant advancements. These methods first adopt CNN architectures to extract discriminative feature representations from input images. A network is then used to locate license plates. With the detected license plates, a classifier is proposed to search for license plate characters and classify them. Since character segmentation is a challenging problem due to the effects of lighting conditions, shadows, and noise, various approaches [3], [4] have been proposed to directly recognize license plate characters without segmentation. With the success of CNNs and text recognition, CNN-based license plate recognition methods have achieved strong results in both accuracy and efficiency. However, previous methods still depend on high-end GPUs or controlled environments such as specific viewing angles or simple backgrounds. Furthermore, with the growing number of license plate designs, license plate recognition systems that concentrate on single-line plates or frontal plate detection and recognition face increasing challenges. In view of these issues, a novel framework for detecting and recognizing irregular license plates in real-world complex scene images is designed in this paper. The proposed model can locate and recognize various types of license plates with arbitrary shooting angles in difficult conditions. There are three stages in the proposed model: license plate detection, perspective correction, and character recognition. For license plate detection, this paper employs a state-of-the-art object detector and extends it for predicting four corner points of license plates, which are then used to rectify distorted license plates. For license plate recognition, this paper designs a segmentation-free model based on a fast and efficient object detection architecture for predicting license plate characters. 
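The rectification step mentioned above (warping a plate from its four detected corner points onto an axis-aligned rectangle) is a standard four-point perspective transform; a minimal OpenCV sketch is given below. The corner ordering, function name and output-size rule are assumptions for illustration, not the paper's exact implementation.

```python
# Minimal sketch of four-point perspective rectification with OpenCV.
# Assumes `corners` are the detected top-left, top-right, bottom-right,
# bottom-left points of one plate, in that order (illustrative only).
import cv2
import numpy as np

def rectify_plate(image, corners):
    tl, tr, br, bl = [np.array(p, dtype=np.float32) for p in corners]
    # Output size from the maximum horizontal / vertical corner distances.
    width = int(max(np.linalg.norm(br - bl), np.linalg.norm(tr - tl)))
    height = int(max(np.linalg.norm(tr - br), np.linalg.norm(tl - bl)))
    src = np.array([tl, tr, br, bl], dtype=np.float32)
    dst = np.array([[0, 0], [width - 1, 0],
                    [width - 1, height - 1], [0, height - 1]], dtype=np.float32)
    # Homography mapping the distorted plate onto an axis-aligned rectangle.
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (width, height))
```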
The results of the experiments conducted on two extensive datasets show that the proposed model boasts both a high recognition accuracy and rapid inference speed. This paper is structured as follows: Section II presents a literature review of license plate recognition. Section III offers an overview of the proposed approach. The details of the proposed pipeline are described in Section IV. The discussion of experimental results can be found in Section V. Lastly, the conclusion is outlined in Section VI. II. LITERATURE REVIEW This section provides a brief literature review on the topic of license plate recognition. This paper focuses on recent methods that are based on deep learning for end-to-end license plate recognition. For studies involving traditional image processing strategies or focused on license plate detection, please refer to [1], [2], [5], [30]. Since license plates usually occupy small portions of input images, various methods have been proposed to first detect vehicles and then locate license plate regions to improve license plate detection performance. For this purpose, Sergio and Claudio [6] proposed a novel CNN model that includes a YOLO-based network for vehicle detection and license plate detection and an optical character recognition module for character recognition. The model can detect and rectify multiple distorted license plates before feeding the rectified license plates to the optical character recognition module to obtain results. In [7], the authors presented an end-to-end license plate recognition system utilizing the YOLO detector [8], [9]. This approach first locates the vehicles in the input image by a YOLO detector and then detects their respective license plates in the vehicle patches by another YOLO detector. Afterward, the model detects and recognizes all license plate characters simultaneously by forwarding the license plate region into the CR-NET model [10]. The results showed that this approach obtains high accuracy and fast inference speed. However, the model only recognizes single-line license plates taken from a frontal angle. Due to the effects of environmental conditions, character segmentation is a challenging problem. Moreover, any incorrect character location produced by character segmentation will lead to misrecognition of the license plate characters. To solve this problem, various methods have been proposed to avoid character segmentation. Wang et al. [11] introduced a cascade approach (i.e., VSNet) for irregular license plate recognition. VSNet consists of a license plate detection network that makes predictions using multiple feature levels produced by a fusion network and a license plate recognition network that features an encoding layer for left-to-right feature extraction and a weight-sharing classifier for character recognition. In addition, a vertex-estimation branch is proposed to rectify distorted license plate images. In [12], the authors presented an end-to-end convolutional neural network for license plate recognition that eliminates the need for character segmentation. The network is implemented on FPGA with very fast processing speed. To enhance the accuracy of license plate recognition in unrestricted conditions, Zou et al. [13] proposed a robust license plate recognition framework that uses a combination of Bi-LSTM and contextual position information of license plate characters to locate the characters in the license plate. 
The authors utilized deep separable convolutions and a spatial attention mechanism for license plate feature extraction to activate the character feature regions and thoroughly extract the features of license plates. In summary, although the above methods have achieved some significant accomplishments, they have not fully addressed the issue of irregular license plate recognition in unconstrained scenarios. Furthermore, these methods mostly require hardware with high-end GPUs, which is difficult to implement in practical applications. III. OVERVIEW OF THE PROPOSED FRAMEWORK The proposed method includes three stages as outlined in Fig. 1. Specifically, the proposed method takes images as inputs and sequentially undergoes license plate detection, perspective correction, and character recognition to produce the final license plate characters. Both stage 1 and stage 3 are based on simple and efficient deep CNN architectures for fast inference speed. An overview of each stage is described below. Stage 1: License plate detection. As shown in Fig. 1, license plate detection aims to locate the four corner points of each license plate. For this purpose, this paper employs a lightweight deep CNN structure used for human pose estimation [14] and modifies it for predicting the four corner points of license plates. Stage 2: Perspective correction. Perspective-deformed images are corrected by applying a perspective transformation. First, four corner points are predicted by the license plate detection network. Then, the homography between the camera and the license plate is recovered. Finally, the homography is used to warp the detected license plate into a rectified image as shown in Fig. 1. A. License Plate Detection This paper uses CenterNet [14] for extracting license plate regions and the corresponding corner points from input images. CenterNet considers the center point of a bounding box as an instance and uses this keypoint to predict the dimensions and offsets of the box. CenterNet strikes a desirable balance between precision and speed and is highly customizable and extensible. It can be easily extended to multiple computer vision tasks including 3D object detection, object tracking, human pose estimation, and many others. In this paper, CenterNet is used to predict the four corner points of license plates (i.e., top-left, bottom-left, top-right and bottom-right corners). Based on the predicted corner points, perspective correction is performed to get rectified license plate images. For this purpose, this paper employs the CenterNet structure used for human pose estimation [14] and modifies it for corner point estimation. The detailed pipeline of the license plate detection model based on CenterNet is shown in Fig. 3. Consider an input image I, where W and H represent the width and height of the input image, respectively. A fully convolutional encoder-decoder architecture is first used to produce feature representations from the input image and generate the output results. Three heads are produced after one forward pass from the feature extraction network as shown in Fig. 3 (i.e., keypoint heatmap head, corner location head, and corner offset head). All the heads are predicted with the same dimensions (i.e., height and width) of (H/R, W/R), where R represents the output stride. 1) Feature extractor. This paper adopts ResNet-50 [15] for feature extraction. ResNet blocks are then augmented with three up-convolutional layers to incorporate higher resolution output feature maps. 
In addition, a 3×3 deformable convolutional layer is used before each up-sampling layer. 2) Keypoint heatmap head. The keypoint heatmap head is used for predicting the center point of license plates. In this paper, the keypoint heatmap has one channel since only one class is predicted by the license plate detection network. After one forward pass, a Sigmoid layer is applied to the keypoint heatmap, and the calculated value at each keypoint is viewed as the certainty score for it being the center of the license plate. 3) Corner location head. The corner location head predicts the four corner locations of the license plate (i.e., top-left, bottom-left, top-right, and bottom-right corners). Each corner is considered as a 2-dimensional property of the center keypoint and parameterized by an offset to the center keypoint, so this head predicts two offset values for each of the four corners. 4) Corner offset head. The corner offset head is employed to rectify the quantization error resulting from the downsampling of the input. After one forward pass, the coordinates of predicted center keypoints are mapped back to the higher-resolution input image. This results in a deviation in values because the output coordinates are whole numbers, whereas the actual center points ought to be decimal numbers. As a result, local offsets are predicted for each center point to recover the discretization error. B. Perspective Correction As license plates can sometimes be difficult to read due to the viewpoint, perspective correction is performed to align the detected license plates. Based on the four corner points generated by the license plate detection network, the homography between the camera and the license plate is first recovered. Then, the homography is used to warp the detected license plate into a rectified image as shown in Fig. 1. To be more specific, based on the detected corner points from the input image, this paper first identifies the maximum horizontal and vertical distances between the corner points. Then the four corresponding vertices of the rectified image are calculated from these maximum distances, representing the top-left, top-right, bottom-left, and bottom-right corners of the rectified license plate. Following [16], the perspective transformation matrix is calculated from the detected corner points and the corresponding vertices, and the rectified license plate region is finally formed by warping the detected license plate with this matrix. C. Character Recognition Character recognition aims to identify each character on the rectified license plates. For this purpose, this paper considers character recognition as a character detection problem and designs a lightweight character detection network that predicts each license plate character without depending on license plate layouts (i.e., license plates of single-line or double-line text). Specifically, the lightweight character detection network is trained to detect 35 classes (i.e., "A-Z" and "0-9", with the digit "0" recognized jointly with the letter "O"), taking the rectified license plates together with the bounding box and class of each character as inputs. In the case of Chinese license plates, the initial symbol is a Chinese character that signifies the province. As stated in [17] and [7], the character detection network proposed in this work has not been trained to identify Chinese characters because assigning the category to such characters is not a straightforward task. Table I showcases the design of the suggested lightweight character detection network. 
The design of the network is influenced by the Fast-YOLOv4 model [18], which is a tiny deep neural architecture that obtains very fast detection speed without sacrificing much accuracy. As shown in Table I, 3×3 convolution layers are used to extract features from previous layers followed by 1×1 convolution layers for reducing the feature channels. In addition, max pooling layers are used to decrease the feature dimensions. The number of channels is multiplied by two following each max pooling layer. Following [9], [19], this paper applies detection head at different scales to predict license plate characters. Specifically, character prediction is performed at layer 13 and layer 20, where the output size is 24×8 and 48×16, respectively. This detection approach is crucial for character recognition because the characters on the license plate may take up either a small or large area of the license plate region, as depicted in Fig. 4. It is worth mentioning that the proposed license plate character recognition system accurately detects and identifies license plates with either single-line or double-line text as it predicts all characters on the rectified license plate. All experiments were carried out on a computer equipped with an Intel Core i7-10700 CPU, a single NVIDIA GeForce GTX 1080Ti GPU, and 64GB of RAM. All models are designed and evaluated under the framework of PyTorch [20] and mmdetection [21]. A. Dataset and Evaluation Metrics To assess the proposed method, this paper evaluates experiments on two extensive public datasets: CCPD [22] and AOLP [23]. CCPD [22] consists of 290k images captured under diverse illuminations, environments, and weather conditions. This dataset is more challenge than other datasets for license plate recognition since each image is captured from different positions and angles, which makes license plates have arbitrary direction. The dataset provides sufficient annotations for training the proposed model, including bounding boxes, vertices of each license plate, and license plate characters. Images in the dataset have the resolution of 720 (width) × 1160 (height) × 3 (channels). Following [22], this paper employs 100k images of CCPD-Base subset for training both detection and character recognition network. The remaining 100k images from the CCPD-Base subset and the 80k images from the CCPD-DB, CCPD-FN, CCPD-Rotate, CCPD-Tilt, CCPD-Weather, and CCPD-Challenge subsets are utilized for testing. Additionally, the CCPD dataset also includes the CCPD-Characters subset, consisting of over 1000 individual images for every possible license plate character. This paper utilizes the CCPD-Characters subset for further training the license plate recognition network. AOLP (Application-Oriented License Plate database) [23] contains 2049 images. The images in this dataset are classified into three categories based on the capturing conditions: access control (AC), traffic law enforcement (LE), and road patrol (RP). The AC subset consists of 681 images of license plates captured in scenarios where vehicles move through a fixed passage at a slower pace or come to a complete stop. The LE subset consists of 757 images of license plates captured by a roadside camera during instances of traffic law violations. The RP subset includes 611 images of license plates captured from vehicles with random viewpoints and distances, making it more challenging for license plate recognition due to the heavily distorted license plates. 
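Stepping back to the character-recognition stage described above: because the network simply detects every character box on the rectified plate, the final string still has to be assembled from those boxes. The snippet below is a minimal, illustrative assembly rule (splitting single- and double-line plates by the vertical spread of the box centres and then reading each line left to right); the paper does not specify its exact rule, so the threshold and names here are assumptions.

```python
# Illustrative assembly of a plate string from per-character detections.
# Each detection is (label, x_center, y_center); the 0.4 threshold is an assumption.
def assemble_plate(detections, plate_height):
    if not detections:
        return ""
    ys = [d[2] for d in detections]
    # If the vertical spread of box centres is large, treat it as a two-line plate.
    two_lines = (max(ys) - min(ys)) > 0.4 * plate_height
    if two_lines:
        mid = (max(ys) + min(ys)) / 2.0
        top = sorted([d for d in detections if d[2] <= mid], key=lambda d: d[1])
        bottom = sorted([d for d in detections if d[2] > mid], key=lambda d: d[1])
        ordered = top + bottom
    else:
        ordered = sorted(detections, key=lambda d: d[1])
    return "".join(d[0] for d in ordered)

# Example with hypothetical values for a double-line plate.
dets = [("A", 10, 8), ("B", 30, 8), ("1", 12, 30), ("2", 28, 30)]
print(assemble_plate(dets, plate_height=40))  # -> "AB12"
```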
Since the AOLP dataset only provides annotations for the coordinates of license plate bounding boxes and numbers, this paper manually annotates the four corners of each license plate. In line with [11], this paper trains on the LE and RP subsets and uses the RP subset for testing. For the evaluation metric, this paper calculates the accuracy of license plate recognition by dividing the number of correctly recognized license plates by the total number of license plates in the test set. A recognition is considered correct only if the IoU between the detected and ground-truth license plate regions is greater than 0.5 and all characters have been correctly recognized. Here, the IoU is calculated as IoU = area(P_d ∩ P_g) / area(P_d ∪ P_g), where P_d is the detected polygon of the license plate and P_g is the ground-truth polygon of the license plate. P_d and P_g are calculated based on the detected corner points and ground-truth corner points, respectively. Table II provides recognition results of the proposed approach and recent approaches on the CCPD dataset. The results demonstrate that the proposed approach obtains the best recognition accuracy on most of the subsets. To be more specific, the proposed model obtains recognition accuracies of 99.7%, 99.2%, 99.1%, 99.6%, and 99.6% on CCPD-Base, CCPD-DB, CCPD-FN, CCPD-Rotate, and CCPD-Tilt, respectively, which outperform all previous methods, including the method proposed by Zhang et al. [3]. For the CCPD-Challenge subset, the method proposed by Zhang et al. [3] obtains the best recognition accuracy. Since the CCPD-Challenge subset comprises the most difficult images for the recognition of license plates, the simple and efficient license plate detection network cannot locate some license plates (Fig. 6), which leads to wrong recognition results by the character recognition network. In the future, this paper will investigate more effective fusion strategies to enhance the feature representation of the license plate detection network, which would improve detection results. It is noteworthy that the majority of the comparison methods in Table II determine the recognition results by setting the IoU threshold to 0.6. Additionally, it is observable that the proposed method attains the most significant improvements in the CCPD-Rotate and CCPD-Tilt subsets. Specifically, the proposed network improves recognition accuracy by 3.2% and 2% on the CCPD-Rotate and CCPD-Tilt subsets compared with the model proposed by Zhang et al. [3]. Given that the CCPD-Rotate and CCPD-Tilt subsets contain images with significant perspective distortion, these results show that the proposed model excels at detecting and recognizing license plates that have undergone distortion or rotation. For recognition speed, since the proposed model is designed based on fast and efficient architectures, it obtains the fastest recognition speed among the compared methods. To be more specific, the proposed model needs 8.3 ms to process an image on a single NVIDIA GeForce GTX 1080Ti GPU. The results indicate that the proposed model is efficient and well-suited for real-time applications. As depicted in Fig. 5, this study showcases the recognition results of the proposed approach on the CCPD dataset, including the detection of the four corners of the license plates, the rectified license plates after perspective correction, and the recognition of the characters on the license plates. It is evident that the proposed model performs effectively under various conditions. 
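As a side note to the evaluation protocol above, the IoU between the detected and ground-truth plate polygons can be computed directly from the corner points; a minimal sketch using the shapely library is shown below (the function and variable names are illustrative).

```python
# Minimal sketch of the polygon IoU used as the correctness criterion above.
from shapely.geometry import Polygon

def plate_iou(det_corners, gt_corners):
    """det_corners / gt_corners: the four (x, y) corner points of each plate."""
    det, gt = Polygon(det_corners), Polygon(gt_corners)
    inter = det.intersection(gt).area
    union = det.union(gt).area
    return inter / union if union > 0 else 0.0

# A detection counts as correct if plate_iou(...) > 0.5 and all characters match.
```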
Fig. 6 shows some failure cases where the proposed model cannot locate license plates in challenging environments or fails to recognize some similar license plate characters. C. Results on AOLP For the AOLP dataset, the AOLP-RP subset is employed to evaluate the proposed model since this subset is more challenging (most of the images contain license plates with severe perspective deformation). Table III presents the comparison of recognition accuracy on the AOLP dataset. The proposed model emerges as the top performer in terms of recognition accuracy on the AOLP dataset, outperforming other methods. Specifically, the recognition accuracy of the proposed model surpasses that of the method proposed by Sergio et al. [6] by 0.4%. This result further strengthens the claim of the proposed model's capability in recognizing license plates that have an irregular shape. VI. CONCLUSION This study presents a CNN-based approach for detecting and recognizing license plates with irregular shapes in complex real-world images. The proposed model employs a CenterNet-based CNN structure to predict the four corners of the license plates, followed by perspective correction to align the detected license plates. For character recognition, a YOLO-based segmentation-free model is designed to predict the characters on the license plate. The effectiveness of the proposed method is verified through experiments on the CCPD and AOLP datasets. Specifically, experimental results on the two datasets show that the proposed method obtains the best recognition accuracy with the fastest recognition speed. This result demonstrates that the proposed method is highly suitable for intelligent traffic management applications that require real-time processing. In the future, the study intends to investigate additional fusion techniques for extracting more discriminative features from the input images, which can enhance the accuracy of the license plate detection network.
Understanding COVID: Collaborative Government Campaign for Citizen Digital Health Literacy in the COVID-19 Pandemic The strategy “Understanding COVID” was a Public Health campaign designed in 2020 and launched in 2021 in Asturias-Spain to provide reliable and comprehensive information oriented to vulnerable populations. The campaign involved groups considered socially vulnerable and/or highly exposed to COVID-19 infection: shopkeepers and hoteliers, worship and religious event participants, school children and their families, and scattered rural populations exposed to the digital divide. The purpose of this article was to describe the design of the “Understanding COVID” strategy and the evaluation of the implementation process. The strategy included the design and use of several educational resources and communication strategies, including some hundred online training sessions based on the published studies and adapted to the language and dissemination approaches, that reached 1056 people of different ages and target groups, an accessible website, an informative video channel, posters and other pedagogical actions in education centers. It required a great coordination effort involving different public and third-sector entities to provide the intended pandemic protection and prevention information at that difficult time. A communication strategy was implemented to achieve different goals: reaching a diverse population and adapting the published studies to different ages and groups, focusing on making it comprehensible and accessible for them. In conclusion, given there is a common and sufficiently important goal, it is possible to achieve effective collaboration between different governmental bodies to develop a coordinated strategy to reach the most vulnerable populations while taking into consideration their different interests and needs. COVID-19 Pandemic and Vulnerable Populations A viral outbreak of an unknown coronavirus (SARS-CoV-2) was declared a pandemic by the World Health Organization (WHO) in March 2020. The increasing rate of incidence and mortality from the associated disease (COVID-19) challenged and stressed healthcare institutions and the global economy and had an impact on the physical and mental health of people around the world. The effects of this pandemic forced the adoption of drastic collective prevention measures throughout the world. In Spain, a state of alarm was decreed, followed by a series of agreements and resolutions on preventive measures Decision-Making in the COVID-19 Crisis and Public Health Strategies During the early phases of the COVID-19 pandemic, healthcare professionals worked under high levels of uncertainty. Soon a pressing need emerged to translate knowledge into practice more efficiently, with rapid assessment and dissemination of scientific evidence to guide decision-making [6]. Some studies found that bringing together experts from academia, science and clinical practice to search for and summarize information of high scientific quality was effective for informed decision-making [7]. However, knowledge was not only needed by clinical and public health decision-makers as the general population also had a compelling information need to make the best choices for their health. In addition, an 'infodemic' led to confusion and distrust in health workers weakening public health responses [8]. 
Moreover, the continuous demand for efforts from the population to comply with the requirements and recommendations to improve the epidemiological and health situation (for example, confinements, social distancing, travel restrictions, cuts in benefits, vaccination, etc.) also required efforts to convey information clearly and understandably. However, with the passage of time, the transmission of information and the training of citizens from the public administrations became increasingly complex, a circumstance that especially affected the most vulnerable groups [9], and the first signs of "pandemic fatigue" in the general population began to show. This is why the WHO urged the inclusion of four recommendations in dissemination campaigns and actions summarized in Figure 1 [10].
Figure 1. WHO: Pandemic fatigue: proposal of four key strategies for governments to maintain and reinvigorate public support for protective behaviors.
The "Understanding COVID" Strategy For all these reasons, in most regions, it became necessary to implement information strategies tailor-made for the general population, especially for the most vulnerable people. "Understanding COVID" was a strategy that began to be developed in March 2020 at the General Directorate of Care, Humanization and Socio-Health Care of Asturias (Spain). Asturias is the most aged region in Spain and one of the most aged in Europe. In addition, although a large part of its population lives in diffuse urban environments [11,12], a large number of small and dispersed rural centers that are difficult to access also exist. Therefore, the rural area of Asturias presents some particular problems leading to a risk of poverty or social exclusion: the demographic situation (shortage of population, exodus of inhabitants and aging of the population in rural areas); the difficulties for the mobility of the population (lack of infrastructures and basic services, lack of adequate transport connections); or problems related to the labor market (lower employment rates and long-term unemployment).
The general objective of the "Understanding COVID" strategy was to increase the offer of digital information on prevention and protection against COVID-19, individual care and emotional approach to both the largest number of citizens, prioritizing vulnerable groups and the rural environment of Asturias. The specific objectives were to:
1. Listen to citizens' voices to redesign training actions, keeping in mind the suggestions from the community and acknowledging the difficulties and successes in carrying out the recommended protection measures against COVID-19.
2. Search, simplify, and adapt in a more comprehensive way to the community all the information and evidence available to increase people's protection against COVID-19.
3. Promote accessible information on protection measures for citizens in general and for people with hearing or visual disabilities.
4. Adapt digital health literacy (or (d)HL) to the particular needs of the population groups to which it is directed (adult population, young people, fathers and mothers, etc.).
5. Design specific campaigns for sectors of activity that are particularly exposed, such as workers in poorly ventilated places and/or environments with a large influx of people.
6. Work with children and adolescents to increase safety in the school environment.
The "Understanding COVID" strategy had various target population groups, which included citizens of rural areas, citizens of urban areas, municipal technical professionals, people of Roma ethnic origin, immigrants, and citizens with impaired vision and/or hearing. Another relevant focus to work with was the job sectors most exposed to COVID-19: local commercial activities carried out in indoor areas like hairdressers, beauty salons, and places of worship. In particular, tourism outlets and hotels developed training to protect their own activity, increasing their own security measures and "how to use the facilities" materials for their customers. Finally, in the design of the "Understanding COVID" strategy, two groups were also taken into account in a differentiated way, schoolchildren and also their families, since children and adolescents were left out of the pedagogical approach used for the adult population, and other participation and information methodologies were incorporated. Given that the "Understanding COVID" strategy was a population-based public health campaign, no sampling method was used, but it was disseminated throughout the target population in order to reach the largest possible number of vulnerable people and later study the scope and effectiveness of the strategy under real conditions, within the broad framework of implementation science.
Therefore, the main aim of this article was to describe the design of the "Understanding COVID" strategy of the Ministry of Health of the Principality of Asturias (Spain), which helped to offer and to adapt information on protection measures against COVID-19 for the entire population of Asturias, including how specific criteria were incorporated for the design of interventions in rural areas with the active participation of their inhabitants. Secondarily, the article also shows the preliminary evaluation of the implementation process of the strategy under real conditions. Ultimately, this work allows us to understand the Spanish public health sector's capacities to deal with crises and seeks to generate learning toward a more effective and equitable response in the future. Hearing Citizen's Voice The first step in designing the strategy was an analysis through a survey to evaluate weaknesses, perceived strengths, and topics and contents of interest. This was sent by phone using the WhatsApp application, adapting to the technology available at the time of confinement. The survey met the requirements of Organic Law 3/2018 [14] in terms of data analysis and dissemination. Participants were asked to answer six questions on a 6-point Likert scale (0: Strongly disagree; 1: Disagree; 2: Slightly disagree; 3: Slightly agree; 4: Agree; 5: Strongly agree) and an additional open-ended question to capture proposals on COVID-19 prevention and control training ("What else would you like us to include in that training? Write down what you deem important in these times of pandemic"). Table 1 shows the responses to this first online survey incorporating the opinion of citizens between 9 March 2020 and 13 March 2020 (106 responses in one week). The questions were not mandatory, so not all of them were answered by all respondents, and the average response rate was 93%. The percentage of agreement was greater than or equal to 80% for the answer "strongly agree" in questions 1, 3, 5, and 6 and greater than or equal to 65% for the same answer in questions 2 and 4. (For example, for the question "Does it seem appropriate to you that this training includes information on how to act when you feel a lot of stress or are overwhelmed at work?", the responses were 0%, 2%, 1%, 5%, 10% and 82% for scale points 0, "Strongly disagree", through 5, "Strongly agree".) In the open-ended question, respondents answered that their greatest interest was training focused on the use of "personal protective equipment", information on face masks and on dealing with emotions in times of pandemic. The continuous collection of information from participants was considered a priority due to the relevance of incorporating real doubts and needs, as well as adapting the strategy to all target audiences. Therefore, in a later phase, a survey was designed and sent to all participants who took part in the training so that they could anonymously and voluntarily evaluate the usefulness, accessibility, contents, teaching methodology and satisfaction, as shown in Table 2.
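The percent-agreement figures reported for these surveys are simple tabulations of the Likert responses; the snippet below is a minimal, hypothetical pandas sketch of how such a summary could be produced. The file and column names are placeholders, not the actual survey export.

```python
# Minimal sketch of summarising Likert-scale satisfaction responses per activity.
# File name and column names ("activity", "usefulness") are illustrative placeholders.
import pandas as pd

responses = pd.read_csv("training_survey.csv")  # one row per returned questionnaire

# Share of "strongly agree" (value 5 on the 0-5 scale) for usefulness, per activity.
strongly_agree = (
    responses.assign(top=(responses["usefulness"] == 5))
             .groupby("activity")["top"]
             .mean()
             .mul(100)
             .round(0)
)
print(strongly_agree)  # e.g. "First Quality Air"  88.0
```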
A total of 472 responses were collected from the continuous information and satisfaction survey. Of these, 65 were from participants in the activity aimed at the school families' associations, 111 from those aimed at secondary schools, 256 from the "Gota a Gota" (Drop by Drop) action training designed for individual citizens, and 40 from "First Quality Air", the specific training offered to the hotel industry. The percentages of agreement for usefulness, time and accessibility based on the Likert scale are shown in Table 3. In summary, when asked about the usefulness of the training, the activity with the best result was the training for catering, "First Quality Air", with 88% "strongly agree", followed by the training of secondary schools with 86%, family associations with 78% and "Gota a Gota" with 77%. In relation to the duration of the training, all four pieces of training had high scores of "yes", with the secondary schools' training reaching 100% agreement, followed by the Family Associations' training with 97% and "Gota a Gota" and "First Quality Air" with 95% each. In the section related to accessibility, the "strongly agree" scores ranged from 86% for training in secondary schools, through 83% for Family Associations and "Gota a Gota", to 80% for "First Quality Air". The percentages of agreement with the satisfaction-based question on the Likert scale are shown in Table 4. In this table, it can be seen that satisfaction was measured with a broader Likert scale (0-10), with the highest score in all cases being 10, given by 49% in families' associations, 85% in "First Quality Air", 86% in secondary schools and finally 89% in "Gota a Gota". Along with the previous results, information was also collected from the participants in the training sessions (e.g., opinions or testimonials), thanks to which new content was also developed and work was done with groups or sectors that were valued as relevant at different times. Informative Content In order to prepare the contents of the "Understanding COVID" strategy, the needs expressed by the citizens and identified in the previous phase were taken into consideration. These needs were articulated around four core themes:
• Self-protection and collective protection measures against COVID-19: including frequent hand washing, interpersonal distance and coughing into the cubital fossa, use of masks, cleaning of domestic environments, collective protection measures and specific environments, and the disinfection of physical spaces.
• Identification and containment of the sources of contagion: early diagnosis of people with symptoms, isolation of cases and tracing and quarantine of close contacts. Therefore, it was important to publicize the symptoms and the protocols for reporting them (e.g., to health personnel in the area).
• Content related to emotional management: assertiveness, managing emotions in difficult times, such as facing fear, leaving home after confinement, love, learning to trust, positive thinking, guided visualization, etc.
• Contents related to maintaining healthy habits: healthy diet, physical exercise, maintenance of routines, communication and sleep.
Next, personnel trained in documentation and communication conducted a search for all the information available at that time in different sources: scientific literature (PubMed and Web of Science), gray literature, expert information (documents and explanatory videos), documentation of official health agencies and web resources. 
Finally, the relevant information was analyzed and synthesized by a group of experts and distributed in the communication channels of the "Understanding COVID" strategy: one channel carried the informative content on the web (in text, in video or in infographics) and the other was used for online training. The contents were continuously reviewed to ensure that the rapidly changing information of that time remained as up to date as possible, to adapt it to understandable language for the different target audiences, and to align the messages with the prevention and protection policies required by the authority at any given time. As a result, more than 150 documentary sources were consulted. A total of 10 sections containing the most relevant information were grouped together, with 50 subsections of key information (Figure 2) and 55 infographics and videos. Digital Health Literacy The central methodological pillar of the "Understanding COVID" strategy was the live training sessions. The sessions were virtual, using the office tool "Microsoft Teams", which allows access via smartphone (preferred), PC or tablet without the need to install any type of software. The sessions lasted 60-90 min and were structured into two well-differentiated parts: a first part where updated information on the pandemic was presented (30-45 min), and a second part where there was free time for questions from attendees to resolve doubts and explore their needs and barriers to implement protection measures (30-45 min). During the design phase of the training sessions, special emphasis was placed on adapting the content, images and language to vulnerable populations (e.g., residents of rural areas, Roma, caregivers, etc.). In addition, attention was paid to adapting the scheduling of the sessions to the working hours of the professional groups. For the development of the online training sessions and the recruitment of attendees, there was a collaboration from 54 municipalities of Asturias (out of a total of 78, or 69.2%) that adhered to the strategy. Likewise, the Ministry of Education, the Ministry of Tourism and Sports, third-sector associations (hotels, patients, neighbors, etc.), family associations and Roma associations also collaborated. Thanks to this collaborative work, a total of 100 interventions were carried out with an overall attendance of 1056 people. A summary of the population groups reached in the training sessions can be seen in Table 5. 
Lastly, in the open-ended questions of the questionnaire, the most repeated messages correspond to the following codes: "gratitude", "appreciation of the live session for questions and needs", "request to repeat the training to update knowledge", and "verification of the need for training for the entire population". Communicative Materials and Accessibility of Information In order to achieve the objectives related to reaching the maximum number of citizens, making adaptations for different groups, achieving accessibility of language and content of materials, breaking the digital divide and making information accessible to citizens with accessibility and equity, the "Understanding COVID" strategy designed, coordinated and produced the following communication materials, which were made available in March 2021 to the target population. Logo and Graphic Identity The design of the logo and the graphic identity of the campaign were part of the methodology of the "Understanding COVID" strategy (Figure 4). Reaching the public, involving them in their health decisions and reflecting on the available evidence were the core starting elements for the design of the logo and the graphic identity of the strategy. Web Page An independent web page (www.entendercovid.es, accessed on 1 February 2023) with free and open access was developed, which provided access to all the informative material of the strategy, including documents, ad hoc infographics and videos. The web page also had an interactive virtual space for solving doubts, as well as suggestions and actions for the campaign. The accessibility of the page was reviewed from the design stage by the Spanish National Organization for the Blind to guarantee inclusiveness. In addition, the videos were designed to include sign language for the deaf community. The following principles were taken into account when designing the website:
• Didactic vocation: Present the information in an orderly, clear and attractive way. Carry out positive communication, avoiding contributing to general pandemic fatigue.
• Usability: Simple and intuitive navigation. Increase click efficiency (relevant information in the minimum number of clicks). Prioritize information in plain text.
• Accessibility: Information intended for the whole of society. Visual codes understandable by all. Its simple structure and adaptation for people with visual disabilities aim to increase the friendliness of its reading, as well as its possible use from mobile phones. Respect for the Web Content Accessibility Guidelines (WCAG). 
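The accessibility principles listed above (alternative formats, simple structure, WCAG conformance) are the kind of requirement that can be partially spot-checked automatically; the snippet below is a minimal, hypothetical sketch of such a check with BeautifulSoup, not the review process actually carried out for the campaign, which was done with the Spanish National Organization for the Blind.

```python
# Minimal, illustrative accessibility spot-check of an HTML page (not the
# campaign's actual review process).
import requests
from bs4 import BeautifulSoup

html = requests.get("https://www.entendercovid.es", timeout=10).text
soup = BeautifulSoup(html, "html.parser")

issues = []
if not soup.html or not soup.html.get("lang"):
    issues.append("missing lang attribute on <html>")
issues += [f"image without alt text: {img.get('src')}"
           for img in soup.find_all("img") if not img.get("alt")]

print("\n".join(issues) or "no basic issues found")
```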
According to the Google Analytics web service, during the study period (March 2021-January 2022) the web page received 7080 visits from 5842 users (1.20 sessions per user). On each visit, users viewed an average of 1.70 pages from the main web page. The bounce rate, that is, the percentage of visitors who left the web page without taking any action, was 76%. Accesses to the web page were highest immediately after its creation, with a maximum of 500 weekly visitors between April and May 2021, and especially in December 2021, the week before the Christmas period, with more than 1000 weekly entries. Of all the users of the web page, the age group most represented was that of 25-34 years (33.5%), followed by the group of 18-24 (24.5%), the group of 35-44 (15.5%) and that of 45-54 years (12.5%). Finally, those over 55 years of age accounted for 11% of accesses. Regarding sex, the percentage of men (54.2%) was slightly higher than that of women (45.8%). In the analysis by country, accesses from Spain stood out (82.0%). Overall, Spanish-speaking countries accounted for more than 89.9% of accesses. Of the remaining percentage, the United States stood out with 2.08% and China with 1.90%. Finally, the devices used to access the website are shown in Figure 5. Actions for the Child and Adolescent Population Following the WHO recommendations in times of pandemic fatigue, co-creation and participatory actions for the underage population were designed. First, a creative contest for adolescents (from 12 to 18 years old) was run. In addition to the previously described adapted training sessions, this creative contest was held in collaboration with the Asturian Ministry of Education for the involvement of adolescents. An email was sent in April 2021 to all educational centers that provide secondary education, high school, and vocational training with an invitation and instructions for participation. The email contained a training video to be viewed in class with the students, which was ultimately shown in 954 classrooms. The teachers encouraged debate and reflection on its content, and later the students voluntarily created a creative product to compete in one of the following modalities: audiovisual, written, poster, and free creative. Campaign promoters received 111 creations across the four modalities, each one from a classroom of students between 12 and 16 years of age. In May 2021, the awards were delivered in a collaborative virtual ceremony organized by the Departments of Health and Education. Second, a handwashing campaign called "Bichos fuera" (Bugs out!) was carried out in the Early Childhood and Primary Education Centers (from 3 to 11 years old). 
The teachers wrote the lyrics of a song in the Asturian language, designed the music, and devised the choreography and staging of the song "Bugs out!". A video was recorded where the protagonists were the children (https://www.youtube.com/watch?v=vU1Kphiukdw, accessed on 1 February 2023). Classroom materials were also designed, such as cards and games. All the material was summarized in a guide for teachers (Supplementary Figure S1). Complementary Actions to Highly Exposed Workers/People Some complementary actions were also carried out for specific activity sectors with high exposure to COVID-19 infection. For instance, a protocol for safe actions against COVID-19 was carried out during the celebration of Catholic masses indoors, with its corresponding posters ("Safe churches help us stop the COVID and reduce risks"). Moreover, a campaign called "First Quality Air" (Aire de Primera) was specifically designed for the hotel industry. Taking advantage of the Asturias tourist campaign "Asturias, Natural Paradise", which promotes its pure and clean natural environment, posters of "First Quality Air" were prepared with COVID-19 protection measures to be used in hospitality and tourism establishments (Supplementary Figure S2). Other Dissemination Actions To reinforce as well as to make room for consultation and reminders of the above elements, we also designed: (1) visual presentations with educational material adapted to different groups; (2) co-design of materials and recruitment mailings; (3) dissemination through social networks (Facebook, Twitter and WhatsApp) and mailing; and (4) a YouTube channel to host educational videos with simultaneous recording in sign language. The channel definitively hosted 10 training videos that had a total of 3,866 views. The "Understanding COVID" Strategy The government of Asturias launched a public health campaign to improve the population response to the COVID-19 crisis and to fight against pandemic fatigue and infodemia. The main novelty of the "Understanding COVID" strategy consisted of specifically targeting a population selected on the basis of vulnerability criteria and identifying the topics on which they should be trained in. In addition, the training actions were delivered through a wide variety of methodologies tailor-made for recipients. For example, online training for all interested vulnerable citizens, posters for the productive sectors with the highest risk of transmission, pedagogical contests and educational games for schools, presence in social networks, etc. Definitively, 100 training actions were carried out for 21 subgroups of the vulnerable population, reaching more than 1000 individuals, but also students from almost 1000 classrooms and countless users from the hostelry industry and church sector. A large number of graphic and audiovisual materials were developed that supported a positive and preventive discourse in the face of the COVID-19 pandemic. In addition, some of the materials disseminated in the school environment were co-created by children and adolescents since including them in the design was considered to increase acceptability. 
One of the main challenges of the strategy was to reach as many vulnerable people of different ages as possible, as has been done in similar studies and interventions [15][16][17][18][19][20][21][22][23][24][25], but at the same time minimizing the technological gap that could leave someone behind, which is a common problem when trying to reach a vulnerable population using information technologies [9,26,27]. To do this, everyday technology tools already existing in homes, such as tablets and smartphones, were used, with no need for additional installation of complex programs. The design of the communication and dissemination strategy through digital technologies was in line with similar studies [28]. The good reception of the strategy "Understanding COVID" reinforced the choice of the method of dissemination and implementation, bringing this information accessible also to the population with hearing disabilities (with the support of professional translators in sign language) and visual (adaptation of audiovisual media). Of all the actions of the "Understanding COVID" strategy, the ones that generated the most participation were those carried out in schools and in the catering sector since educational activities were added to the online training sessions in the centers and schools, and the distribution of posters occurred in restaurants. Gray et al. described the need to develop protection strategies within the school community and responded to an important need to provide information and support both to the teaching community and to families and students [29]. In our strategy, creativity and horizontal and ascending training were encouraged: from some students to others and from students to their parents. The information strategy in the hospitality sector through the "First Quality Air" campaign allowed commercial establishments to display posters with recommendations for the population, as well as to have a certificate accrediting the training received, thus promoting confidence and security among customers. As the restaurant industry is particularly sensitive to disasters, specific campaigns were run in some countries to encourage people to go out for lunch or dinner. Campaigns such as "Go to Eat" in Japan or "Eat Out to Help Out" in the United Kingdom applied discounts for dining in restaurants and simultaneously achieved an increase in sales and a rebound in cases [30,31]. The "Understanding COVID" strategy focused more on security and less on the economy because it was understood that by pursuing the first goal, it would achieve the second one. Finally, although the results referring to visits to the website are difficult to measure, it was relevant that the highest volume of unique visitors occurred two weeks before Christmas 2021, a time when the restrictions had been modified, and the population was looking for information to safely carry out trips, family reunions and other recreational activities. This increase in the number of hits to the page may reflect the confidence that the population has in seeking accurate, verified, accessible and adapted information, as was the objective of this strategy. The "Understanding COVID" strategy contemplated some key elements that the scientific literature identifies for a campaign to be successful [32]. These include (1) messages that focus on the identity of the population, (2) the use of visual aids, and (3) the use of social networking features to encourage interaction. 
In addition, although it used online resources (web pages, webinars, social networks, etc.) to be consistent with the message of limiting physical and social contacts, other more appropriate resources were also used to reach the vulnerable population (posters, songs, etc.), which was somewhat less common in campaigns from other countries. In addition, it has been shown that the high penetration of mobile devices and technology in the younger population [33] opened a very interesting door to their inclusion in schools as a means to achieve early health literacy [34]. Additionally, parents of students can benefit from health literacy strategies from schools in collaboration with government health policies [17,[35][36][37], as has been appreciated throughout this strategy. Other Strategies and Campaigns Most countries in the world disseminated information and prevention campaigns for COVID-19 through official statements and other mass media. In Spain, the national government developed four population campaigns exclusively on the internet in order to fight against the spread of the pandemic, reinforcing individual security measures and community action [38,39]. The campaigns were disseminated via Twitter, and the analysis of their design and implementation allowed some interesting conclusions to be drawn. Although the campaigns promoted the dissemination of health security measures, they did not serve to encourage debate and interaction between governments/public institutions and citizens [39]. In addition, the campaigns generated polar responses, with very positive visions that were faced with other very negative ones, which did not help to improve union and community action [38]. However, a similar campaign carried out in Italy through Facebook, the #I-am-engaged campaign, was built around a community perspective, with a participatory process that favored co-creation among peers. In addition, the campaign adopted a positive tone of voice by focusing on the promotion of good practices [40]. In these respects, the Italian campaign was similar to the "Understanding COVID" campaign, although the latter included a wide range of actions to be carried out beyond the digital world on the basis of trying to reach as many vulnerable people as possible. Other campaigns carried out in various countries also tried to address the vulnerable population. For example, in the USA, an alliance of institutions launched a multifaceted national campaign whose objective was to increase confidence in vaccines and decrease misinformation within Hispanic communities. They successfully used social networks, webinars, radio and newsletters, with the participation of volunteers, key people for the Hispanic community and influencers [41]. In Maryland (USA), another regional campaign was developed through social networks and a web page to promote testing for COVID-19 and acceptance of the vaccine among Latinos with limited English proficiency [42]. Also, in the USA, campaigns were created on social networks to promote scientific information on the risks of COVID-19 in pregnancy and the benefits of vaccination, such as the "One Vax Two Lives" campaign in Seattle [43]. In Sydney, Australia, there were also efforts to engage culturally and linguistically diverse communities in the effective and appropriate public health response to COVID-19 [44]. 
A novel and rapid inter-agency campaign was established that included tailored public education and testing, the establishment of a local clinic, and inspections of local businesses to achieve a safe environment. Lessons Learned and Limitations An important lesson learned from the "Understand COVID" strategy was the importance of various public institutions working in a coordinated manner in pursuit of a common goal, something common to other similar campaigns [43,44]. It was also learned that in vulnerable populations, the public health response in crises must be adapted and react to their needs since, in these population groups, the information channels and conventional health messages are often insufficient. It was particularly interesting to see the acceptance of the campaign in the education sector, perhaps because teachers are very used to introducing transversal content into the academic curriculum, especially when the topic is linked to a problem in the real environment. Another lesson provided by the implementation of this strategy is that in order to achieve successful health communication, the adoption of a participatory approach is essential where the stakeholders participate in the training and change process. In general, health communication based on evidence, culturally relevant and acceptable to the recipients is essential to educate and involve the population in situations that require a rapid and forceful response, either to educate about practical aspects or to combat the infodemic. The lessons learned in this strategy can be applied to other public health programs that seek to engage vulnerable communities. The "Understanding COVID" strategy also presented some weaknesses. First, the campaign was implemented in 2021, when pandemic fatigue was already becoming chronic. Bringing its launch back a few months might have been more successful in preventing fatigue. In addition, the execution deadlines for some activities to adapt to the environment where they were carried out (for example, actions in schools) and the evolution of the pandemic itself forced decisions to be made with little time for reflection. Second, although most of the activities were always evidence-based and oriented towards infection prevention and management in a pandemic setting [45], other activities and groups, such as the promotion of physical activity [46], college students [47], and the 'emotional well-being' intervention [48], could have been taken more into account. On the contrary, it was decided not to focus solely on encouraging vaccination, as was done in many countries [49][50][51][52][53][54][55][56], since in Spain, the public response to the vaccine was very favorable, probably due to high confidence in the vaccination and in the health system [57]. Third, no data on the effectiveness of the campaign was obtained. This is a common limitation of public health campaigns, especially if they are launched under the pressure of an emergency. Evaluating the impact of public health strategies disseminated in an uncontrolled environment is a methodological challenge due to the many factors involved that can influence the results. In any case, at least one study based on surveys could have been carried out. It would have allowed us to know the impressions of people about the strategy. Although several opinion surveys were conducted, these were only used to tailor the strategy and not to explore the satisfaction of the participants in detail. 
Conclusions The "Understanding COVID" strategy was a public health campaign launched by the government of Asturias to improve the population's response and adaptation to the COVID-19 crisis and to combat pandemic fatigue and infodemia. The main innovation of the campaign was to target a population selected on the basis of vulnerability criteria, whose voices were taken into account to identify training topics. Capacity building was achieved through a variety of tailor-made methodologies, such as online activities, posters for hotels and catering establishments, educational quizzes and games for schools, social media presence, etc. More than 100 training activities were conducted for 21 subgroups of the vulnerable population, reaching more than 1000 people, as well as students from almost 1000 classrooms and users of various hospitality establishments and vulnerable populations. The strategy faced the challenge of reaching as many vulnerable people of different ages as possible while minimizing the technological gap, which was addressed by using technologies accessible to the population, such as tablets and smartphones, that did not require large technological features. The "Understanding COVID" strategy was well-received and reinforced the choice of dissemination and implementation method, making the information inclusive for the deaf and visually impaired population (with the support of professional sign language translators and adaptations of audiovisual materials). The most participatory actions were those carried out in the school environment and in the hospitality sector, where educational activities were added to the online training sessions and posters were displayed in restaurants. The information campaign in the hospitality sector, "First Air Quality", allowed commercial establishments to display posters with recommendations for the public and to have a certificate of the training received, promoting confidence and safety among customers. Overall, the collaboration between different government agencies with the ultimate goal of reaching the population most vulnerable to the COVID-19 pandemic is possible if a coordinated strategy is developed that takes into account the citizens and their interests and adapts to their different needs.
A more reliable species richness estimator based on the Gamma–Poisson model Background Accurately estimating the true richness of a target community is still a statistical challenge, particularly in highly diverse communities. Due to sampling limitations or limited resources, undetected species are present in many surveys and observed richness is an underestimate of true richness. In the literature, methods for estimating the undetected richness of a sample are generally divided into two categories: parametric and nonparametric estimators. Imposing no assumptions on species detection rates, nonparametric methods demonstrate robust statistical performance and are widely used in ecological studies. However, nonparametric estimators may seriously underestimate richness when species composition has a high degree of heterogeneity. Parametric approaches, which reduce the number of parameters by assuming that species-specific detection probabilities follow a given statistical distribution, use traditional statistical inference to calculate species richness estimates. When species detection rates meet the model assumption, the parametric approach could supply a nearly unbiased estimator. However, the infeasibility and inefficiency of solving maximum likelihood functions limit the application of parametric methods in ecological studies when the model assumption is violated, or the collected data is sparse. Method To overcome these estimating challenges associated with parametric methods, an estimator employing the moment estimation method instead of the maximum likelihood estimation method is proposed to estimate parameters based on a Gamma-Poisson mixture model. Drawing on the concept of the Good-Turing frequency formula, the proposed estimator only uses the number of singletons, doubletons, and tripletons in a sample for undetected richness estimation. Results The statistical behavior of the new estimator was evaluated by using real and simulated data sets from various species abundance models. Simulation results indicated that the new estimator reduces the bias presented in traditional nonparametric estimators, presents more robust statistical behavior compared to other parametric estimators, and provides confidence intervals with better coverage among the discussed estimators, especially in assemblages with high species composition heterogeneity. INTRODUCTION Species richness is the most commonly used diversity index and a key metric in ecological research. Due to the rapid expansion of the human population and the disturbance induced by human activities that increasingly eliminates ecological habitats, an increasing number of species become extinct before even being discovered (Costello, May & Stork, 2013). Assessment and long-term monitoring of species diversity in a target area has become an urgent task for conservation biologists. To collect and identify all species in a target area, researchers need to conduct a census of all species in the area. However, generating species inventories of a target area requires enormous investigation efforts and is often impractical due to resource limitations. Therefore, most biodiversity studies are based on the sampled data from the target area or assemblage. However, since species sampling data represent only a partial collection of the entire assemblage, it is hard to detect all species of the assemblage, especially when the sample size is small. 
Because the true number of species in an area equates to the number of species observed in the sample plus the number of species not appearing in the sample, using the recorded number of species in the sample as an estimator will lead to underestimating the true richness of the target area or assemblage. In general, the number of undetected species in a sample depends on sampling effort and sample completeness (Hortal, Borges & Gaspar, 2006). Accurate estimation of the species richness in an area is still a statistical challenge, especially in a highly heterogeneous assemblage (Bunge, Willis & Walsh, 2014). Researchers in different disciplines have developed methods for estimating the number of species according to different sampling schemes or model assumptions (Bunge & Fitzpatrick, 1993;Chao & Chiu, 2012;Colwell & Coddington, 1994;Gotelli & Colwell, 2011;Lanumteang & Böhning, 2011;Norris & Pollock, 1998). The proposed estimators are generally classified as nonparametric or parametric. As nonparametric richness estimators do not impose model assumptions on species detection probability, they are more robust and more frequently used by ecologists or conservationists. Among the nonparametric approaches, the Chao1 lower bound estimator (Chao, 1984) and jackknife estimator (Burnham & Overton, 1978;Burnham & Overton, 1979) are the most commonly used methods. Based on the concept that rare species in a sample contain most of the information about undetected species, these nonparametric estimation methods use the number of species observed only once or twice in the sample to estimate the number of undetected species. However, the nonparametric species estimators often considerably underestimate the true number of species, particularly when the sample size is small or when the assemblage has a high degree of heterogeneity (Chao, 2005). Conversely, parametric methods treat the species detection rate as a random variable following a specific probability distribution. Under this assumption, estimating the number of species becomes a matter of estimating the parameters of the probability distribution, and traditional statistical inference approaches may be applied. Generally, a computationally expensive, iterative numerical algorithm is required to solve this parameter estimation problem by using the maximum likelihood method. When the distribution of the true species detection probability is similar to the hypothetical distribution, a parametric richness estimator provides a more accurate estimate. However, when the community is highly heterogeneous and the sample size is not sufficiently large, the parametric estimator frequently fails to converge or requires additional computing time, especially when the sampled data are sparse. Therefore, parametric methods are less frequently adopted to assess species diversity in ecological studies. This study proposes a parametric estimation method in which the species detection probability is assumed to be a random variable following a probability distribution. In addition, the Good-Turing frequency formula (Good & Toulmin, 1956;Good, 1953) reveals that the rare species in a sample contain most of the information concerning undetected species. In this case, the moment method is used to estimate the parameters of the distribution instead of employing the time-consuming maximum likelihood method. 
Consequently, the proposed method overcomes the problems of statistical divergence and time-consuming parameter calculations encountered in applying the maximum likelihood method. Furthermore, similar to the nonparametric approach, the proposed richness estimator estimates the number of undetected species on the basis of only the sample's rare species data (i.e., the number of singletons, doubletons, and tripletons). Thus, fieldwork may be substantially reduced because researchers are only required to record the number of rare species, not the exact individual number of abundant species in the field. In the following section, the hypothetical model of species composition and the theoretical framework of the proposed estimation method are detailed. Subsequently, in the simulation analysis section, the statistical performances (e.g., bias, the estimated standard error (s.e.), and the coverage rate of the 95% confidence interval) of the proposed estimator are analyzed in various common species composition models. Across different real data sets, the proposed approach is analyzed and compared with commonly used estimators. In the final section, the findings of this research are discussed. MATERIALS & METHODS For individual-based abundance data, the sampling unit is an individual, and one individual is randomly sampled and identified at a time. Assume there are S species in the target area or assemblage, where S is an unknown parameter. Let X_i be the number of individuals of the i-th species observed in the sample. When the assemblage is sampled for a fixed period of time, X_i follows the Poisson model with discovery rate λ_i, i = 1, 2, ..., S. To simultaneously reduce the number of unknown parameters and account for the heterogeneity of the species detection rates, let λ_i be a random variable following a specific distribution. In ecological studies, a mixed Poisson model assumes that the count X_i follows a Poisson distribution P(λ_i), where λ_1, λ_2, ..., λ_S are independent and identically distributed random variables from a probability density function g(λ) with a few parameters. Here, I presume g(λ) is a gamma distribution with two parameters (α, β), where α is a shape parameter and β is a scale parameter. When α = 1, g(λ) is equal to the exponential distribution, which corresponds to the well-known broken-stick model. When α tends to infinity, the detection rates become essentially identical across species, which corresponds to a homogeneous model in ecological studies. Therefore, the Poisson-Gamma model is a flexible model, and many estimators and richness estimators have been proposed based on this model assumption (Sanathanan, 1972; Sanathanan, 1977; Chao & Bunge, 2002; Lanumteang & Böhning, 2011). On the basis of the Gamma-Poisson mixture model assumption, the marginal distribution of the species count in the sample can be expressed as follows:

p_k = P(X_i = k) = [Γ(k + α) / (k! Γ(α))] · β^k / (1 + β)^(k + α),  k = 0, 1, 2, ...   (1)

Let the species frequency count f_k denote the number of species observed exactly k times in the sample; that is, f_k = Σ_{i=1}^{S} I(X_i = k), k = 0, 1, 2, ..., where I(A) is an indicator function: I(A) is equal to 1 if event A occurs, and I(A) is equal to 0 otherwise. Thus, f_0 is the number of undetected species in the sample, and S_obs = Σ_{k=1}^{n} f_k is the observed richness in the sample. In this case, the observed frequency counts {f_k : k ≥ 1} in the sample follow a multinomial distribution with total sum S_obs and cell probabilities {p_k / (1 − p_0) : k ≥ 1}.
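To make the sampling model concrete, the sketch below simulates abundance counts under the Gamma-Poisson mixture just described and tabulates the frequency counts f_k. The function name, parameter values, and the use of NumPy are illustrative assumptions and are not taken from the original study.

```python
# Minimal sketch: draw X_i ~ Poisson(lambda_i) with lambda_i ~ Gamma(shape=alpha,
# scale=beta), then tabulate the frequency counts f_k described above.
from collections import Counter

import numpy as np


def simulate_frequency_counts(S, alpha, beta, rng):
    """Return (f, S_obs), where f[k] = number of species observed exactly k times.

    f[0] counts the species missed by the sample (the undetected richness).
    """
    lam = rng.gamma(shape=alpha, scale=beta, size=S)   # species discovery rates
    counts = rng.poisson(lam)                          # X_1, ..., X_S
    f = Counter(int(c) for c in counts)                # f_k = #{i : X_i = k}
    s_obs = sum(v for k, v in f.items() if k >= 1)     # observed richness
    return f, s_obs


if __name__ == "__main__":
    rng = np.random.default_rng(seed=1)
    f, s_obs = simulate_frequency_counts(S=1000, alpha=0.8, beta=2.0, rng=rng)
    print("observed richness:", s_obs, "undetected:", f[0])
    print("f1, f2, f3 =", f[1], f[2], f[3])
```

Running the sketch with a small shape parameter (a heterogeneous assemblage) makes the gap between S_obs and the true S = 1,000 visible, which is exactly the gap the estimators below try to close.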
Therefore, the likelihood function can be expressed as The maximum likelihood estimator of α and β can then be obtained by using an iterative numerical procedure (Sanathanan, 1972;Sanathanan, 1977). The mean detection probability of the in probability, which is equivalent to the assertion that S ×θ converges to k≥2 f k in probability; they subsequently proposed another richness estimator expressed as: Compared with the traditional maximum likelihood estimator,Ŝ CB ispresented as a closed formula which is instantaneous to compute. However, these two parametric approaches are less commonly used in ecological studies because of the divergence problem which can arise when sparsely sampled data is applied. As indicated by the estimator formulas, the two parametric estimators use all observed species data in the sample, including abundant and rare species, to estimate unseen richness. However, the Good-Turing frequency formula indicates that observed rare species contain most of the information about unobserved species. According to this concept, abundant species in the sample mostly do not contain information about the undetected richness and may generate nuisance statistics in estimations of undetected species richness, resulting in high variance and instability. Therefore, abundant species in the sample usually are excluded from richness estimation to obtain more robust estimators (Chao, 1984;Chao, 1987;Chao & Yang, 1993;Chao et al., 2000). Herein, a new richness estimator is derived that employs a simple moment approach to sampled rare species data. According to the Gamma-Poisson mixture model, the expected species frequency count in the sample can be expressed as follows: The unseen richness as well as the numbers of singletons, doubletons, and tripletons can be derived as follows: (3d) According to Eqs. (3a), (3b), (3c) and (3d), the following equalities can be obtained as: According to Eqs. (4b) and (4c), the scale parameter α in the mixed Poisson model is identical to the following: Then, the estimator of α can be obtained by: To ensureα > 0, 3f 1 f 3 should be within the range of 2f 2 2 ,4f 2 2 . According to Eqs. (4a) and (4b), the expected unseen richness can also be presented as follows: Since the Cauchy-Schwarz inequality could show , 1984), it is implied that α isthe bias of f 2 1 2f 2 . Therefore, f 2 1 2f 2 1 + 1 α is an unbiased estimator of undetected richness in the Gamma-Poisson mixture model. However, like all parametric estimators, this unbiased estimator is also unreliable when data is sparse. To obtain a more stable estimator, this unbiased estimator is modified to propose a new estimator. By the Cauchy-Schwarz inequality, the following inequality is held: 1 α can be obtained as: To obtain a more stable (less RMSE) estimator of unseen richness and ensureα > 0, the proposed richness estimator is as follows: is the lower bound estimator of unseen richness (Chao, 1984), the newly proposed estimation method can be treated as a bias-corrected estimator of Chao1 under the Gamma-Poisson mixture model. Furthermore, in Appendix S1A, it is shown that the newly proposed estimator can also be directly derived by correcting the bias of Chao1 based on the Good-Turing frequency formula without any model assumptions. Notably, when the homogeneity of species composition is met, we have This implies that the proposed estimator will be approximately identical to the Chao1 lower bound estimator and both are unbiased estimators for a homogeneous model (proved in Appendix S2B). 
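The moment logic just described can be sketched in a few lines of Python. This is an illustration of the relations stated above, not a drop-in implementation of the published Ŝ_GP: the fall-back to the plain Chao1 term when α̂ falls outside its valid range is an assumption made here for robustness, and the article's final estimator additionally applies the Cauchy-Schwarz-based stabilisation, which is not reproduced in this sketch.

```python
# Sketch of the moment-based bias correction of the Chao1 term under the
# Gamma-Poisson mixture model (illustrative; see the caveats in the text above).

def chao1_f0(f1, f2):
    """Chao1 lower-bound estimate of the number of undetected species."""
    if f2 > 0:
        return f1 * f1 / (2.0 * f2)
    return f1 * (f1 - 1) / 2.0          # usual modification when f2 = 0


def alpha_moment(f1, f2, f3):
    """Moment estimate of the Gamma shape parameter alpha.

    Positive only when 2*f2^2 < 3*f1*f3 < 4*f2^2; returns None otherwise.
    """
    r = 3.0 * f1 * f3
    if 2.0 * f2 ** 2 < r < 4.0 * f2 ** 2:
        return (4.0 * f2 ** 2 - r) / (r - 2.0 * f2 ** 2)
    return None


def corrected_f0(f1, f2, f3):
    """(1 + 1/alpha_hat) * f1^2/(2 f2); falls back to Chao1 if alpha_hat is invalid."""
    base = chao1_f0(f1, f2)
    alpha = alpha_moment(f1, f2, f3)
    if alpha is None:
        return base
    return base * (1.0 + 1.0 / alpha)
```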
Under the assumption of the Gamma-Poisson mixture model, the marginal distribution of the species count (Eq. (1)) is identical to a negative binomial distribution. Lanumteang & Böhning (2011) reformulated the parameters by using the Taylor expansion and derived a richness estimator, denoted Ŝ_LB, that inflates the Chao1 term f_1² / (2 f_2) by a correction factor based on the first three frequency counts. Since 3 E[f_1] E[f_3] ≥ 2 (E[f_2])² holds by the Cauchy-Schwarz inequality, Ŝ_LB was interpreted as a bias-corrected Chao1 estimator (Lanumteang & Böhning, 2011). Here, Ŝ_LB and Ŝ_GP, both interpreted as bias-corrected Chao1 estimators, were derived by using the moment method based on the assumption of the Gamma-Poisson mixture model, and both use the numbers of singleton, doubleton, and tripleton species to estimate the unseen richness in the sample. However, Ŝ_LB and Ŝ_GP have quite different formulaic expressions due to differences in how they adjust to correct the negative bias of Chao1; their statistical performances are also quite different, as will be presented in the following simulation section. Using an asymptotic approach, we can derive the estimators of variance of the discussed richness estimators that use rare species frequency counts to estimate undetected richness, based on the assumption that (f_0, f_1, ..., f_n) approximately follows a multinomial distribution with total size S and cell probabilities (p_0, p_1, ..., p_n) (Chao & Lee, 1992; Chao & Yang, 1993). Consequently, we have the following estimated variance:

var̂(Ŝ) ≈ Σ_i Σ_j (∂Ŝ/∂f_i)(∂Ŝ/∂f_j) côv(f_i, f_j), where côv(f_i, f_j) = f_i (1 − f_i/Ŝ) for i = j and côv(f_i, f_j) = −f_i f_j/Ŝ for i ≠ j.

To derive the 95% confidence interval (95% CI) of species richness and to ensure that the lower bound of the 95% CI of species richness is larger than the observed richness, assume Ŝ − S_obs follows a log-normal distribution (Chiu et al., 2014). Then the 95% CI of species richness is obtained as

[S_obs + (Ŝ − S_obs)/R, S_obs + (Ŝ − S_obs)·R], where R = exp{1.96 [ln(1 + var̂(Ŝ)/(Ŝ − S_obs)²)]^(1/2)}.

According to this derivation, the proposed richness estimator has the following properties: (i) Instead of requiring all sample data as in existing parametric approaches, the proposed estimator uses the numbers of singletons, doubletons, and tripletons to estimate the unseen richness. (ii) Inefficiency and divergence problems encountered in solving the maximum likelihood function of parameters through iterative calculation methods in sparse data are avoided. (iii) The new estimator is asymptotically unbiased when the sample size n is sufficiently large. (iv) The proposed estimator provides a lower bound estimator under the species composition assumption of the Gamma-Poisson mixture model, which is a flexible ecological model that incorporates the broken-stick model and the homogeneous model. (v) The newly proposed estimator can be directly derived by correcting the bias of Chao1 based on the Good-Turing frequency formula without any model assumptions. (vi) The new richness estimator can be interpreted as a bias-corrected Chao1 estimator and stays unbiased when the species detection probability is homogeneous. Simulation study and results We investigated the performance of the proposed estimator (Ŝ_GP) and compared it with that of the previously described estimators, namely two nonparametric approaches (the Chao1 lower bound estimator, denoted Ŝ_Chao1, and the first-order jackknife estimator, denoted Ŝ_Jack1) and two parametric approaches (Ŝ_CB and Ŝ_LB) derived under the Gamma-Poisson mixture model. Herein, other parametric approaches are excluded because the divergence problem in calculating the maximum likelihood estimates (MLEs) of their parameters makes a fair comparison with the other estimators difficult.
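Before turning to the simulation settings, the log-normal interval construction just described can be sketched as follows. The variance input is assumed to be whatever asymptotic variance estimate accompanies the point estimate; the degenerate-case handling and the names are illustrative.

```python
# Sketch of the log-transformed 95% CI built on D = S_hat - S_obs, so that the
# lower limit cannot fall below the observed richness.
import math


def lognormal_ci(s_hat, s_obs, var_hat, z=1.96):
    d = s_hat - s_obs
    if d <= 0 or var_hat <= 0:
        return s_obs, s_obs            # degenerate case: nothing to correct
    r = math.exp(z * math.sqrt(math.log(1.0 + var_hat / d ** 2)))
    return s_obs + d / r, s_obs + d * r


# Example with made-up numbers: S_obs = 150, S_hat = 190, var_hat = 400.
print(lognormal_ci(190.0, 150.0, 400.0))
```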
The simulation study was conducted using two assemblage settings: one setting involved generating species compositions from seven theoretical models, and the other involved treating three real data sets as the entire assemblage. Species composition generated from the theoretical abundance model The simulation results were obtained from seven commonly used ecological species abundance models. The number of species in each model was set to S = 1,000. The species detection probabilities or species relative abundances (p_1, p_2, ..., p_S) = (c·a_1, c·a_2, ..., c·a_S) in each model are given with the corresponding tables, where c is a normalizing constant such that Σ_{i=1}^{S} p_i = 1; the models include, among others, the homogeneous, broken-stick, log-normal, Zipf-Mandelbrot, and power-decay models (see the captions of Tables 1-7). We also present the coefficient of variation (CV) of (p_1, p_2, ..., p_S) to indicate the degree of heterogeneity of the detection probabilities. The CV values in these seven models ranged from 0 to 4 and covered the majority of practical scenarios in real cases. Four different sample sizes were considered: 1,000, 2,000, 4,000 and 8,000. Therefore, a total of 28 model-size combinational scenarios were produced. For each model and sample size, 1,000 simulated data sets were generated, and the five estimators described above (Ŝ_Chao1, Ŝ_Jack1, Ŝ_CB, Ŝ_LB, and Ŝ_GP) were used to derive estimates. For each estimator, the estimate and the corresponding estimated s.e. were averaged over the 1,000 simulated data sets to derive the average estimate and the average estimated s.e. The sample s.e. and root-mean-square error (RMSE) over the 1,000 estimates were also obtained. The percentage of data sets in which the 95% confidence intervals covered the true value is presented in Tables 1-7. The average richness observed in the 1,000 samples is also listed in the Tables. Using data sets as true assemblages Three large biological survey data sets were used as the true assemblages, and separate samples were generated from these three assemblages. For each data set, the observed species relative abundance was treated as the true species relative abundance, and a sample of size n was generated through sampling with replacement. The average bias and RMSE obtained using the 1,000 generated data sets as a function of sample size are illustrated in the figures to evaluate the statistical behavior of the discussed richness estimators. [Table 1. Comparison of five richness estimators based on 1,000 simulation data sets under a homogeneous model with S = 1,000 and CV = 0. The five estimators are the Chao1 estimator (Chao, 1984), denoted Ŝ_Chao1, the first-order jackknife estimator (Burnham & Overton, 1978), denoted Ŝ_Jack1, and the estimators Ŝ_CB, Ŝ_LB, and Ŝ_GP.] The first data set includes vascular plant species from the central portion of the Southern Appalachian region (Miller & Wiegert, 1989). This data set has a total of 188 species with 1,008 individuals; the species frequency counts are presented in Table 8, and the corresponding degree of heterogeneity is CV = 1.562. For each discussed estimator, the patterns of bias and RMSE as a function of sample size (from 200 to 1,000) are displayed in Figs. 1A and 1B. The second data set comprises butterfly survey data collected from Malaya (Fisher, Corbet & Williams, 1943) and contains a total of 620 species with 9,031 individuals. The species frequency counts are presented in Table 8, and the corresponding degree of heterogeneity is CV = 1.435. The simulation settings were the same as those used in the abundance models. The patterns for the average bias and average RMSE as a function of sample size (from 500 to 6,000) are displayed in Figs. 1C and 1D.
The third data set contains data on ground-dwelling invertebrate species collected from northwest Tasmania (Bonham, Mesibov & Bashford, 2002) and has a total of 84 species with 2,050 individuals. The species frequency counts are presented in Table 8, and the corresponding degree of heterogeneity is CV = 2.07. The patterns for the average bias and RMSE as a function of sample size (from 100 to 1,000) are illustrated in Figs. 1E and 1F. DISCUSSION In general, a good estimator should be designed so that its bias and its RMSE (which quantifies accuracy), the two most essential properties of an estimator, decrease as the sample size increases. Furthermore, the coverage rate of the 95% confidence interval should tend towards 0.95 as the sample size increases. Another required property of a richness estimator is that the estimator should be nearly unbiased in the homogeneous model. Since it is impossible to fit all ecological communities by using a single statistical model, there is no existing uniformly unbiased richness estimator for all ecological communities. Therefore, developing a more robust estimator is the most essential goal in species richness estimation. Furthermore, based on the Cauchy-Schwarz inequality, we have the inequality of undetected richness E[f_0] ≥ (E[f_1])² / (2 E[f_2]), which is an essential property of undetected richness for all random samples, and equality holds when the community is homogeneous. Therefore, a richness estimator should be approximately unbiased when the species composition or species detection rate follows a homogeneous model (i.e., the simplest model with only one parameter). That is why most commonly used nonparametric robust richness estimators were derived on the basis of this framework (Chao, 1984; Chao, 1987; Chao & Lee, 1992; Lee & Chao, 1994), and most parametric assumed models also include the homogeneous model as a special case, despite the fact that the homogeneous model is very rare in practice. On the basis of these essential criteria, the following conclusions can be drawn according to the simulation results. For all simulation cases, the observed richness in the sample substantially underestimates the true richness, especially when the sample size is small or the assemblage is highly heterogeneous (see Tables 1-7). The jackknife estimator typically results in underestimation when the sample size is small and overestimation when the sample size is large. Consequently, the jackknife estimator is unbiased only in a limited range of sample sizes (see Tables 1-7). Additionally, the jackknife estimator does not meet the fundamental requirement that the bias, RMSE, and coverage rates of the 95% confidence interval should improve as the sample size increases. [Table 4. Comparison of five richness estimators based on 1,000 simulation data sets under a broken-stick model with S = 1,000 and CV = 1.01. See Table 1.] In some simulation scenarios, compared to the other estimators, the jackknife estimator has the lowest RMSE due to its lower variance; however, its bias and coverage rates do not improve as the sample size increases. Although only the first-order jackknife was discussed in the manuscript, the widely used second-order jackknife estimator has similar statistical behavior to the first-order jackknife estimator (see Chiu et al. (2014) for details).
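For reference, the two nonparametric benchmarks discussed here have simple closed forms, and the bias, RMSE, and coverage criteria used above are straightforward to compute over simulation replicates. The sketch below uses the standard textbook forms with illustrative names; the study itself may use bias-corrected variants of these estimators.

```python
# Closed forms of the two nonparametric benchmarks and the evaluation metrics
# used in the discussion (a sketch; names and exact variants are illustrative).
import math


def chao1(f1, f2, s_obs):
    """Chao1 lower-bound richness estimate (Chao, 1984)."""
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0


def jackknife1(f1, s_obs, n):
    """First-order jackknife richness estimate (Burnham & Overton, 1978)."""
    return s_obs + f1 * (n - 1) / n


def bias_rmse_coverage(true_s, estimates, intervals):
    """Average bias, RMSE, and 95% CI coverage over simulation replicates."""
    errors = [e - true_s for e in estimates]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    coverage = sum(lo <= true_s <= hi for lo, hi in intervals) / len(intervals)
    return bias, rmse, coverage
```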
Since Chao1 was developed as a lower bound estimator of richness, it underestimates the true richness in most models (see Tables 2-7), especially in those with high heterogeneity. Nevertheless, Chao1 is nearly unbiased in the homogeneous model (Table 1), and its bias and RMSE decrease as the sample size increases in all discussed models (shown in the Tables and Figures). Accordingly, the Chao1 estimator has the fundamental characteristics of a valuable species richness estimator. However, the estimator's coverage rate of the 95% confidence interval derived by log-normal transformation (Chao, 1984) is generally much lower than 0.95, particularly when the sample size is small or the species composition is highly heterogeneous (Tables 3-7). The Chao-Bunge parametric richness estimator (Ŝ_CB) is unreliable in sparse samples, resulting in overestimation and high variation when the sample size is small (shown in the Tables and Figures). [Table 5. Comparison of five richness estimators based on 1,000 simulation data sets under a log-normal model with S = 1,000 and CV = 1.35. See Table 1.] When the sample size is small, the Chao-Bunge estimator provides severely overestimated estimates in some cases, causing an overall increase in the average estimate. However, the Chao-Bunge estimator performs well when the sample size is large enough, which is consistent with the conclusion that "a sufficiently high overlap fraction is required to produce a reliable estimate of the species richness" (Chao & Bunge, 2002). Basically, it is an approximately unbiased estimator when a homogeneous model is assumed, and its bias and RMSE decrease as the sample size increases. The parametric estimator Ŝ_LB has been shown to be an unbiased estimator in the homogeneous model (Lanumteang & Böhning, 2011). Although the absolute value of its bias and its RMSE decrease as the sample size increases in all discussed models, the simulation results present an inconsistent pattern in that Ŝ_LB has negative bias in the models with low heterogeneity (Tables 2-4) and positive bias in the highly heterogeneous models (Tables 5-6). When the model assumption is met (i.e., a negative binomial distribution or homogeneous model), Ŝ_LB has good performance in terms of bias (Tables 1 and 7). However, like the parametric estimator Ŝ_CB, Ŝ_LB usually presents an unstable estimate when the sample size is small and the assemblage is highly heterogeneous (Tables 5-6). [Table 6. Comparison of five richness estimators based on 1,000 simulation data sets under a Zipf-Mandelbrot model p_i ∼ C/(i + 10) with S = 1,000 and CV = 1.87. See Table 1.] Compared to the Chao1 estimator, the bias, RMSE, and coverage rate of the 95% confidence interval improve more for the newly proposed richness estimator as the sample size increases (Tables 1-7). The proposed estimation method provides a nearly unbiased estimator in the homogeneous model (see Table 1; also proved in Appendix S2B) and a lower bound estimator in the other discussed models (Tables 2-7). The new estimator presents a consistent pattern in all simulation cases in that the mean of the estimate is always lower than the true richness and tends towards the true richness as the sample size increases. Compared to the other two discussed parametric estimators (Ŝ_CB and Ŝ_LB), the new parametric approach presents a more stable estimate, especially at small sample sizes (Tables 5-6).
In most cases, the new estimator presents a higher variance, lower bias, and a more accurate 95% confidence interval than the two discussed nonparametric estimators (Ŝ_Chao1 and Ŝ_Jack1). When the survey data are treated as true assemblages, the simulation results are consistent with those in the seven hypothetical models, and the new estimator has less bias and lower RMSE in most cases compared to the other discussed estimators (Fig. 1). [Table 7. Comparison of five richness estimators based on 1,000 simulation data sets under a power decay model p_i ∼ 1/i^0.9 with S = 1,000 and CV = 4. See Table 1.] It is worth noting that although Ŝ_LB and Ŝ_GP are both derived under the Gamma-Poisson mixture model assumption by the moment estimating method and both use the numbers of singletons, doubletons, and tripletons to estimate undetected richness, the newly proposed estimator provides a lower but more stable estimate (i.e., lower RMSE), especially at small sample sizes in highly heterogeneous assemblages (Tables 5-6), and provides a more accurate 95% confidence interval in most simulation models. On the basis of the asymptotic approach, the estimated s.e.s for the discussed estimators perform well in most simulation scenarios, except for the estimate for the Chao-Bunge estimator at small sample sizes. CONCLUSIONS In the literature, a plethora of approaches have been proposed for estimating total species richness in a target area. These approaches are classified as parametric or nonparametric estimators. Parametric estimators employ distribution assumptions on species compositions, and computationally expensive calculation procedures are required to solve the likelihood functions. Moreover, parametric estimators frequently fail to achieve convergence during iterative numerical procedures or result in high variance at small sample sizes. Therefore, parametric estimators are not suitable for small sample sizes and are less frequently employed in ecological studies. Conversely, nonparametric estimators, with simple closed formulas and no assumptions on species composition, are more robust in most simulation cases and are thus widely used in ecological studies. However, nonparametric estimators substantially underestimate total species richness when the sample size is small or when the species composition has a high degree of heterogeneity, resulting in a low coverage rate of the 95% confidence interval. Accordingly, a new species richness estimator was proposed in this study based on the Gamma-Poisson mixture model, which takes the species detection rate as a random variable to reduce the number of parameters. According to the concept of the Good-Turing frequency formula, rare species in a sample contain most of the information about undetected species. In contrast to the traditional maximum likelihood approach, the new estimator uses a simple moment method to estimate unseen richness based on observed rare species. The newly proposed estimator can also be directly derived by correcting the bias of Chao1 based on the Good-Turing frequency formula without any model assumptions. Similar to nonparametric approaches, the proposed estimator uses only the numbers of singletons, doubletons, and tripletons to estimate the number of undetected species in the sample. Compared with other widely used estimators, simulation results reveal that the proposed estimator has less bias and a lower RMSE in highly heterogeneous assemblages.
The asymptotic-approach-based estimator of the proposed estimator's variance performs well in all simulation scenarios. Overall, the newly proposed estimator uses a simplified formula and is thus more computationally efficient than other parametric approaches. In addition, the newly proposed estimator retains the flexibility of stochastic models and eliminates the divergence problem encountered in other parametric estimators. However, even though the newly proposed estimator performed well in the seven artificial models and three real data sets, it must be applied to more real data sets in the future to further demonstrate its value.
The Protective Role of Antioxidants in the Defence against ROS/RNS-Mediated Environmental Pollution Overproduction of reactive oxygen and nitrogen species can result from exposure to environmental pollutants, such as ionising and nonionising radiation, ultraviolet radiation, elevated concentrations of ozone, nitrogen oxides, sulphur dioxide, cigarette smoke, asbestos, particulate matter, pesticides, dioxins and furans, polycyclic aromatic hydrocarbons, and many other compounds present in the environment. It appears that increased oxidative/nitrosative stress is an often neglected mechanism by which environmental pollutants affect human health. Oxidation of and oxidative damage to cellular components and biomolecules have been suggested to be involved in the aetiology of several chronic diseases, including cancer, cardiovascular disease, cataracts, age-related macular degeneration, and aging. Several studies have demonstrated that the human body can alleviate oxidative stress using exogenous antioxidants. However, not all dietary antioxidant supplements display protective effects, for example, β-carotene for lung cancer prevention in smokers or tocopherols for photooxidative stress. In this review, we explore the increases in oxidative stress caused by exposure to environmental pollutants and the protective effects of antioxidants. Introduction Many environmental pollutants are sources of several reactive species (RS). RS is a collective term that includes both oxygen radicals and other reactive oxygen and nitrogen species (ROS/RNS). Free radicals important for living organisms include the hydroxyl (OH•), superoxide (O2•−), nitric oxide (NO•), thiyl (RS•), and peroxyl (RO2•) radicals. Peroxynitrite (ONOO−), hypochlorous acid (HOCl), hydrogen peroxide (H2O2), singlet oxygen (1O2), and ozone (O3) are not free radicals but can easily lead to free radical reactions in living organisms. The term reactive oxygen species (ROS) is often used to include not only free radicals but also the nonradicals (e.g., 1O2 and H2O2). There is strong evidence that RS is involved in oxidative/nitrosative stress (O/NS) as a common mechanism by which several environmental pollutants induce damage. Oxidative stress can be defined as an excessive amount of RS, which is the net result of an imbalance between production and destruction of RS (the latter is regulated by antioxidant defences). Oxidative stress is a consequence of an increased generation of RS and/or reduced physiological activity of antioxidant defences against RS. Environmental pollutants stimulate a variety of mechanisms of toxicity at the molecular level, and oxidative stress seems to be the common denominator leading to damage to cellular membrane lipids, DNA, and proteins [2], as well as to modulation of antioxidant enzymes. RS are, due to their high reactivity (e.g., hydroxyl radical formation), prone to cause damage to any type of molecule within the cell, for example, polyunsaturated fatty acids, glutathione, certain amino acids, and so forth. When the antioxidant defence in the human body becomes overwhelmed, oxidative stress to the cellular components often occurs, inducing inflammatory, adaptive, injurious, and reparative processes [3]. On the other hand, lifestyle and nutrition might play an important role against environmental oxidant exposure and damage.
Protection against O/NS-mediated environmental pollutants can generally occur at two levels: (i) physiochemical protection to lower the dose of exposure, which typically cannot be accomplished by individuals living in polluted areas, or (ii) physiological protection to increase the antioxidative defence of the organism. There is growing scientific evidence that low-molecular-weight antioxidants are involved in the prevention of or the decrease in the damage caused by certain environmental pollutants. Because we have little influence on the levels of endogenous antioxidants, it would be reasonable to increase the amount of exogenous antioxidants (mainly through ingestion) to strengthen the defensive properties of organisms against environmental oxidative stress. The current evidence suggests that increased consumption of fruits and vegetables or certain dietary supplements can substantially enhance the protection against many common types of environmentally induced O/NS. Purpose This review aims to determine whether antioxidants can modulate the toxicity of environmental pollutants, thereby influencing health and disease outcomes associated with oxidative stress-induced insults. Evidence will be presented that environmental pollution increases oxidative stress and that dietary supplementation with antioxidants may play a role in the neutralization or buffering of the effects of pollutants with oxidizing properties. The recommendation for the use of dietary antioxidants in areas of increased environmental pollution will be discussed. This review summarises the most common and health-relevant sources of oxidative stress, such as air pollution, radiation, pesticides, noise, and household chemicals. Due to space constraints and the breadth of the scientific data, not all studies could be covered in this review. The reader is thus referred to the provided references (and references therein) for further details on a selected environmental pollutant or antioxidant. Air Pollution-Induced Oxidative Stress and Protection against It. The health effects of air pollution range from minor irritation of the eyes and the upper respiratory system to chronic respiratory disease, heart and vascular disease, lung cancer, and death. The studies presented in Table 1 demonstrate increased oxidative stress/damage due to air pollutant exposure and show that antioxidants can offer a certain level of protection [4][5][6][7]. Oxygen could be presented as the leading air pollutant with regard to oxidative stress formation. Molecular O2 itself qualifies as a free radical because it has two unpaired electrons with parallel spin in different π* antibonding orbitals. This spin restriction accounts for its relative stability and paramagnetic properties. O2 is capable of accepting electrons to its antibonding orbitals, becoming "reduced" in the process, and, therefore, functioning as a strong oxidizing agent [76]. The diatomic molecule of oxygen contains two uncoupled electrons and can therefore undergo reduction, yielding several different oxygen metabolites, which are collectively called ROS. Mitochondria are the main site of intracellular oxygen consumption and the main source of ROS formation [8,10,13,77].
Once ROS are produced, they are removed by cellular defenses, which include the enzymes superoxide dismutase (Mn-SOD, Cu/Zn-SOD, and extracellular (EC)-SOD), catalase, glutathione peroxidase, and peroxiredoxins, and the nonenzymatic antioxidants, like glutathione (GSH), thioredoxin, ascorbate, α-tocopherol, and uric acid [9,78]. Since oxidative damage of cells increases with age, the increased intake of exogenous antioxidants may support the endogenous antioxidative defense. Clinical studies imply that eating a diet rich in fruits, vegetables, whole grains, legumes, and omega-3 fatty acids can help humans in decreasing oxidative stress and postponing the incidence of degenerative diseases [79]. Ozone is formed from dioxygen by the action of ultraviolet light and atmospheric electrical discharges. Ozone is a very reactive gas whose uptake depends on the availability of antioxidants in the lining fluids [17,18,52]. The surface of the lung is covered with a thin layer of fluid that contains a range of antioxidants that appear to provide the first line of defence against air pollutants. Mudway et al. [17] studied the interaction of ozone with antioxidants and found that the hierarchy of reactivity toward ozone in human epithelial lining fluid was ascorbic acid followed by uric acid and then glutathione. Wu and Meng [34] analysed the effects of sea buckthorn seed oil on the protection against sulphur dioxide inhalation. They found that buckthorn seed oil contributed antioxidant effects. Furthermore, a study by Zhao et al. [33] revealed the protective effect of salicylic acid and vitamin C on sulphur dioxide-induced lipid peroxidation in mice. Tobacco smoke is one of the most common air pollutants and generates high amounts of various ROS/RNS. Cigarette-induced oxidative stress was found to be affected by the protective effects of vitamin C, glutathione, and other antioxidants, mainly as quenchers of ROS/RNS (Table 1) [36][37][38][39][40][41]. Kienast et al. [54] demonstrated that alveolar macrophages and peripheral blood mononuclear cells become activated following exposure to nitrogen dioxide. Several studies have demonstrated that certain antioxidants might play a beneficial role against nitrogen dioxide-induced toxicity. Guth and Mavis [55] and Sevanian et al. [56,80] examined the effect of vitamin E content in the lungs. Furthermore, a study by Böhm et al. [62] revealed that dietary uptake of tomato lycopene protects human cells against nitrogen dioxide-mediated damage. The possible influence of dietary antioxidants, especially vitamin C, on the increasing prevalence of asthma was explored by Hatch [81]. Particulate matter can also cause oxidative stress via direct generation of ROS from the surfaces of soluble compounds, altering the function of mitochondria, reducing the activity of nicotinamide adenine dinucleotide phosphate (NADPH) oxidase, inducing the activation of inflammatory cells to generate ROS and RNS, and mediating oxidative DNA damage [63,82]. Antioxidants could also provide protection against particulate matter-induced toxicity. Indeed, lung lining fluid antioxidants (urate, glutathione, and ascorbate) were demonstrated to be effective in a study by Greenwell et al. [83]. Luo et al. [70] detected an inhibitory effect of green tea extract on the carcinogenesis induced by the combination of asbestos and benzo(a)pyrene in rats drinking 2% green tea extract throughout their lives.
As the diet is the main source of antioxidant micronutrients, a plausible link now exists between the exposure to air pollution and the quality of food consumed. Radiation-Induced Oxidative Stress and Protection against It. Ionising radiation consists of highly energetic particles which can generate ROS. These ROS can either be generated primarily via radiolysis of water or they may be formed by secondary reactions. Extensive doses of ionizing radiation have been shown to have a mutating effect; for example, Sperati et al. [84] concluded that indoor radioactivity appears to affect the urinary excretion of 8-OHdG among females, who are estimated to exhibit a higher occupancy in the dwellings measured than males (Table 2). Many compounds have been demonstrated to protect against cell injury caused by radiation-induced ROS formation. One of these compounds is ebselen, a selenoorganic compound [85]. Another compound is N-acetylcysteine, which reduces nitrosative damage during radiotherapy [86] as well as oxidative damage [87]. The radioprotective effects of quercetin and the ethanolic extract of propolis in gamma-irradiated mice were also detected [88]. The radioprotective and radiosensitising activities of curcumin were demonstrated in a study by Jagetia [89]. Aside from ionising radiation, nonionising radiation also causes oxidative stress. Magnetic fields can affect biological systems by increasing the release of free radicals. Several studies indicate a relationship between electromagnetic fields, ROS levels, and oxidative stress, which can exert toxic effects on living organisms [90]. Because it is unlikely that electromagnetic fields can induce DNA damage directly due to their low energy levels, most studies have examined their effects on the cell membrane, general and specific gene expression levels, and signal transduction pathways [91]. Musaev et al. [92] indicated that decimetric microwaves exert oxidant effects at a high intensity of irradiation (specific absorption rate of 15 mW/kg) and antioxidant effects at a low intensity (specific absorption rate of 5 mW/kg) (Table 2). The protective effects of melatonin and caffeic acid phenethyl ester against retinal oxidative stress during the long-term use of mobile phones were reported [93]. Jajte et al. [94] concluded that melatonin provides protection against DNA damage to rat lymphocytes. Another investigation revealed that Ginkgo biloba prevents mobile phone-induced oxidative stress [95]. Guney et al. [96] found that vitamins E and C reduce phone-induced endometrial damage. Visible and UV light are insufficient to ionize most biomolecules. Nevertheless, human exposure to ultraviolet radiation has important public health implications. Although the skin possesses extremely efficient antioxidant activities, during aging, the ROS levels rise and the antioxidant activities decline. [Table 2. Studies demonstrating increased oxidative stress/damage due to ionising and nonionising radiation exposure and the protective effects of antioxidants.] In addition, UV exposure of the skin results in the generation of ROS [118], such as singlet oxygen, peroxy radicals, the superoxide anion, and hydroxyl radicals, which damage DNA and non-DNA cellular targets [113][114][115][116] and accelerate the skin aging process. UV radiation alters endogenous antioxidant protection; for example, in a study by Shindo et al.
[127], after UV-irradiation, the epidermal and dermal catalase and superoxide dismutase activities were greatly decreased. With respect to the protective role of antioxidants, many studies (Table 2) investigated the effect of vitamin C on ultraviolet-radiation-(UVR-) induced damage. Radiation Oral vitamin C supplements resulted in significant increases in plasma and skin vitamin C content [118]. In the study by Aust et al. [134], the photoprotective effects of synthetic lycopene after 12 weeks of supplementation were examined, and significant increases in the lycopene serum and total skin carotenoid levels were detected. Studies of animals and humans suggested that green tea polyphenols are photoprotective and can be administered to prevent solar UVB lightinduced skin disorders [137]. A review of the research reveals that polyphenols or other phytochemicals, such as green tea polyphenols, grape seed proanthocyanidins, resveratrol, silymarin, genistein, and others, exert substantial photoprotective effects against UV-induced skin inflammation, oxidative stress, DNA damage, and so forth. Presently, we are exposed to various sources of radiation, both ionising and nonionising. The results of many studies indicate that the human body can cope with radiationinduced oxidative stress to a certain degree by consuming an appropriate antioxidant diet. Pesticide-Induced Oxidative Stress and Protection against It. Pesticides have become an integral constituent of the ecosystem due to their widespread use, distribution, and the stability of some of the pesticides in the environment. Pesticide exposure may play a major role in increased oxidative stress of the organisms and may result in altered disease susceptibility. Bagchi et al. [145] demonstrated that pesticides induce the production of ROS and oxidative damage to tissues. de Liz Oliveira Cavalli [146] found that exposure to glyphosate causes oxidative stress and activates multiple stress-response pathways leading to Sertoli cell death in prepubertal rat testis. The role of oxidative stress in immune cell toxicity induced by the pesticides lindane, malathion, and permethrin was examined by Olgun and Misra [147]. Hassoun et al. [148] reported that chlordane produces oxidative tissue damage based on the levels of hepatic lipid peroxidation and DNA damage (Table 3). Bus et al. [149] reported that paraquat pulmonary toxicity results from the cyclic reduction and oxidation of paraquat. The results of a study performed by Pérez-Maldonado et al. [150] demonstrated the induction of apoptosis by DDT. Hassoun et al. [148] reported that lindane, DDT, chlordane, and endrin exposure resulted in significant increases in hepatic lipid peroxidation and DNA damage. Another study by Senft et al. [151] found out that dioxin increases mitochondrial respiration-dependent ROS production. On the other hand, Ciftci et al. [152] reported a protective effect of curcumin on the immune system of rats intoxicated with 2,3,7,8-tetrachlorodibenzo-p-dioxin. Additionally, Hung et al. [153] suggested that tea melanin might be a potential agent against the development of tetrachlorodibenzodioxininduced oxidative stress. Gultekin et al. [154] examined the effects of melatonin and vitamins E and C on the reduction of chlorpyrifos-ethyl. Another group of pesticides are polychlorinated biphenyls (PCBs), which also induce increased intracellular ROS production. Zhu et al. 
[155] indicated that different PCB compounds (Aroclor 1254, PCB153, and the 2-(4-chlorophenyl)-1,4-benzoquinone metabolite of PCB3) increase the steady-state levels of intracellular O2•− and H2O2 in breast and prostate epithelial cells. Many antioxidants also showed protection against PCB-induced oxidative stress and damage. Ramadass et al. [156] tested the hypothesis that flavonoids modify PCB-mediated cytotoxicity and found that flavonoids inhibit PCB-induced oxidative stress. Zhu et al. [155] demonstrated that treatment with N-acetylcysteine significantly protected cells against PCB-mediated toxicity. Red ginseng, which displays a variety of biological and pharmacological activities, including antioxidant, anti-inflammatory, antimutagenic, and anticarcinogenic effects, was found to protect the body against oxidative stress/damage induced by PCB exposure [157]. Sridevi et al. [158] also reported that alpha-tocopherol counteracted PCB-induced neurotoxicity, resulting in decreased oxidative stress. Another study reported the synergistic effects of vitamins C and E against PCB- (Aroclor 1254) induced oxidative damage [159]. Dioxins and furans are byproducts of chemical production. Dioxins may be released into the environment through the production of pesticides and other chlorinated substances. Both dioxins and furans are related to a variety of incineration reactions and the use of a variety of chemical products. Ciftci and coworkers reported that dioxin (2,3,7,8-tetrachlorodibenzo-p-dioxin; TCDD) causes an oxidative stress response in the rat liver. The subcellular sources and underlying mechanisms of dioxin-induced reactive oxygen species, however, are not well understood. TCDD increases the formation of thiobarbituric acid-reactive substances. It also causes a significant decline in the levels of glutathione, catalase, GSH-Px, and Cu-Zn superoxide dismutase in rats [160]. The impact of 2-furan-2-yl-1H-benzimidazole on vitamins A, E, C, and Se, malondialdehyde, and glutathione peroxidase levels in rats was analysed in a study by Karatas et al. [161]. The results showed that vitamin A, E, C, and Se levels were lower than in the control groups, while the serum MDA level and GSH-Px activity increased to varying degrees depending on the injection days. The observed decreases in vitamin A, E, C, and Se levels in the blood might be causally related to the increased amount of ROS. The potential protective effect of quercetin on TCDD-induced testicular damage in rats was studied by Ciftci et al. [160]. The results showed that exposure to TCDD induces testicular damage and that quercetin prevents TCDD-induced testicular damage in rats. The antioxidative effects of resveratrol against dioxin toxicity were also investigated in a study by Ishida et al. [162]. The results suggested that oral resveratrol is an attractive candidate for combating dioxin toxicity. Türkez et al. [163] analysed the effects of propolis against TCDD-induced hepatotoxicity in rats and found that propolis alleviates the pathological effects and prevents the suppression of antioxidant enzymes in the liver. It can be concluded that the stimulation of ROS production, the induction of lipid peroxidation and oxidative DNA and protein damage, and the disturbance of the total antioxidant capacity of the body are mechanisms of the toxicity induced by most pesticides, including organophosphates, bipyridyl herbicides, and organochlorines.
Antioxidant nutrients and related bioactive compounds common in fruits and vegetables, as well as food additives, can protect against pesticide-induced oxidative stress/damage from environmental exposure (Table 3).

Household Chemical-Induced Oxidative Stress and Protection against It. The predominant use of industrial resins, such as urea-formaldehyde, phenol-formaldehyde, polyacetal, and melamine-formaldehyde resins, can be found in domestic environments in adhesives and binders for wood products, pulp products, paper products, plastics, synthetic fibres, and in textile finishing. Formaldehyde was demonstrated to increase oxidative stress (Table 4), primarily as lipid peroxidation, as found in a study performed by Chang and Xu [193]. In the case of household chemical-induced oxidative stress, certain antioxidants also showed protection. In a recent study, Köse et al. [194] reported that rose oil inhalation protects against formaldehyde-induced testicular damage in rats. Zararsiz et al. [195] demonstrated that exposure to formaldehyde increased the free radical levels in rats and that omega-3 fatty acids prevented this oxidative stress. The protective effect of melatonin against formaldehyde-induced renal oxidative damage in rats has also been reported [196]. Many studies have been performed on carbon tetrachloride because it is a well-known model for inducing chemical hepatic injury in mice. Carbon tetrachloride exposure also increases oxidative stress/damage in tested model organisms, and carbon tetrachloride-induced damage has been reversed by many of the antioxidants examined. Thus, the antioxidant and hepatoprotective effects of many antioxidants and plant extracts against oxidative stress induced by carbon tetrachloride have been reported [198]. For example, chlorella-mediated protection against carbon tetrachloride-induced oxidative damage in rats was demonstrated in a study by Peng et al. [224]. Ozturk et al. [201] found that apricot (Prunus armeniaca L) feeding exerted beneficial effects. The potency of vitamin E to enhance the recovery from carbon tetrachloride-induced renal oxidative damage in mice was revealed in a study by Adaramoye [202]. The protective effects of Curcuma longa Linn were reported by Lee et al. [205]. The protective effect of blackberry extract against oxidative stress in carbon tetrachloride-treated rats was reported by Cho et al. [207]. Chemicals found in common household and personal care goods are major sources of oxidant exposure that can lead to oxidative stress. Many antioxidants, such as melatonin, vitamin E, ascorbate, and extracts from various plants, for example, rose, green tea, and blackberry, were reported to decrease oxidative stress and/or damage in vivo and in vitro.

Disinfection Byproducts (DBP) and Other Waterborne Pollutants. The beneficial role of water ingestion can be diminished by the formation of disinfection byproducts. Chlorination and ozonation in the water treatment process are believed to produce various active oxygen species, which seem to participate in reactions with humic acid, pollutants, and bacteria. Drinking water disinfection byproducts have been shown to induce oxidative stress and cellular death [212].
Similar observations were reported by Leustik et al. [214]. Studies suggest that Cl2 inhalation damages both airway and alveolar epithelial tissues and that these damaging effects were ameliorated by the prophylactic administration of low molecular-weight antioxidants. Trolox was reported to be protective against oxidative injury induced by HOCl to the Ca-ATPase in the sarcoplasmic reticulum of skeletal muscle [220]. Ascorbic acid might also play a protective role (Table 4), especially in individuals consuming supplements containing this vitamin. Thioallyl compounds and S-allylcysteine (both garlic-derived compounds), melatonin, glutathione, glutathione disulfide, S-methylglutathione, lipoic acid, and dihydrolipoic acid were also reported to protect against hypochlorous acid- and peroxynitrite-induced damage [217][218][219][222]. Additionally, the following plant extracts display a protective effect against HOCl-induced oxidative damage: Agaricus campestris, Cynara cardunculus, Thymus pulegioides, and Vicia faba [223]. When resolving the problem of DBP, the causes of their formation should first be addressed with different engineering approaches, for example, by moving the point of chlorination downstream in the treatment train, reducing the natural organic matter precursor concentration, replacing prechlorination by peroxidation, and so forth. The use of antioxidants as compounds which ameliorate DBP-induced toxicity should be only the last alternative, when all other approaches dealing with DBP formation in the drinking water fail. Research in the past two decades has pointed out that redox-active metals like iron (Fe), copper (Cu), chromium (Cr), cobalt (Co), and other metals present in water possess the ability to produce ROS such as the superoxide anion radical and nitric oxide. Disruption of metal ion homeostasis may lead to oxidative stress, a state where the increased formation of reactive oxygen species overwhelms the body's antioxidant protection and subsequently induces DNA damage, lipid peroxidation, protein modification, and other effects [225]. Pollutants in water such as the heavy metals As, Cd, Cu, Fe, Pb, and Zn can cause oxidative stress in fish [226]. On the other hand, Yang and coworkers [227] report that water spinach, which contains chlorophyll and lycopene, has the potential to reduce cytotoxicity and oxidative stress induced in the liver by heavy metals. Besides heavy metals, pesticides in water can also represent sources of oxidative stress. Atrazine and chlorpyrifos are the most common pesticides found in freshwater ecosystems throughout the world. Xing et al. [228] investigated the oxidative stress responses in the liver of common carp after exposure to atrazine and chlorpyrifos and found that exposure to either pesticide or their mixture could induce a decrease in antioxidant enzyme activities and an increase in MDA content in a dose-dependent manner. Eroğlu et al. [229] reported that organophosphate pesticides produce oxidative stress through the generation of free radicals, which alters the antioxidant defence system in erythrocytes, and that vitamins C and E can play a protective role.

The Role of Oxidative Stress in Noise-Induced Hearing Damage. Noise is a disturbing and unwanted sound. Exposure to noise causes many health problems such as hearing loss and sleep disturbance, and it impairs performance as well as affecting cognitive performance.
It also increases aggression and reduces the processing of social cues seen as irrelevant to task performance, as well as leading to coronary heart disease, hypertension, higher blood pressure, increased mortality risk, serious psychological effects, headache, anxiety, and nausea ([230] and references within). Prolonged exposure to noise can also cause oxidative stress in the cochlea, which results in the loss (via apoptotic pathways) of the outer hair cells of the organ of Corti. Increased noise exposure results in increased levels of reactive oxygen species formation that play a significant role in noise-induced hair cell death [231]. Acute as well as long-term exposure to noise can produce excessive free radicals and alter endogenous antioxidative enzymes such as superoxide dismutase, catalase, and glutathione peroxidase [232,233]. In a study by Demirel et al. [230], the effect of noise on oxidative stress parameters in rats was analysed by measuring malondialdehyde and nitric oxide levels and glutathione peroxidase activity. The results showed an elevation in the MDA level, an indicator of lipid peroxidation, as well as in the NO level and GSH-Px activity through noise exposure, suggesting that the presence of oxidative stress may have led to various degrees of damage in the cells. Additionally, increases in oxidative stress parameters, such as the MDA level, and decreases in CAT and SOD activities in textile workers exposed to elevated levels of noise support the hypothesis that noise causes oxidative stress [234]. It seems that noise might cause damage not only in the ears but also across the entire body, leading to oxidative stress [230]. In a study by van Campen et al. [235], the time course of ROS damage following exposure was assessed. Based upon the oxidative DNA damage present in the cochlea following intense noise, the researchers postulate that the first 8 h following exposure might be a critical period for antioxidant treatment. Thus, the ROS-quenching properties of antioxidants and medicinal plants are attracting more and more research to counteract noise-induced oxidative stress. Manikandan and Devi [232] investigated the antioxidant property of alpha-asarone against noise stress-induced changes in different regions of the rat brain, and their data proved that the antioxidant property of alpha-asarone acts against noise stress-induced damage. The aim of a study performed by Manikandan et al. [233] was to evaluate the protective effect of both the ethyl acetate and the methanolic extract of Acorus calamus against noise stress-induced changes in the rat brain. Both the ethyl acetate and the methanolic extract of Acorus calamus protected against most of the changes in the rat brain induced by noise stress. N-acetylcysteine also offered protection against noise-induced hearing loss in the Sprague Dawley rat [236]. The study by Ewert et al. [237] determined whether administration of a combination of the antioxidants 2,4-disulfonyl α-phenyl tertiary butyl nitrone (HPN-07) and N-acetylcysteine could reduce both temporary and permanent hearing loss. The results showed that a combination of the antioxidants HPN-07 and NAC can both enhance the temporary threshold shift recovery and prevent permanent threshold shift by reducing damage to the mechanical and neural components of the auditory system when administered shortly after blast exposure. Additionally, carboxy alkyl esters (esters of quinic acid found in fruits and vegetables) have been shown to improve the DNA repair capacity of spiral ganglion neurons in response to noise stress [238].
The problem of oxidative stress in the production of hearing loss is even worse when synergistic effects take place, since a broad range of environmental and occupational contaminants, for example carbon monoxide and acrylonitrile, can interact with noise to enhance noise-induced hearing loss [239].

Adverse or Insignificant Effects of Antioxidant Treatment after Exposure to Environmental Pollutants. Administration of antioxidants in cases of environmentally induced oxidative stress does not always demonstrate protection (Table 5). Hackney et al. [240] analysed whether vitamin E supplementation protected against O3 exposure and found no significant differences between the vitamin E- and placebo-treated groups. Another study demonstrated that in a high-risk group, such as smokers, high doses of beta-carotene increased the rate of lung cancer [241]. Additionally, the results of large, controlled trials of an intervention of beta-carotene supplementation did not support the detected beneficial associations or a role for supplemental beta-carotene in lung cancer prevention; instead, they provided striking evidence for its adverse effects among smokers [242]. McArdle et al. [118] investigated the effects of oral vitamin E and beta-carotene supplementation on ultraviolet radiation-induced oxidative stress to the human skin. The results revealed that vitamin E or beta-carotene supplementation displayed no effect on the sensitivity of the skin to UVR. A study by Stahl et al. [122] investigated the antioxidant effect of carotenoids and tocopherols based on their ability to scavenge ROS generated during photooxidative stress. The antioxidants used in this study provided protection against erythema in humans and may be useful for diminishing the sensitivity to ultraviolet light (Table 5). Iron and copper have been reported to aggravate the toxicity of paraquat in E. coli. Treatment with ferrous iron in a study by Korbashi et al. [248] led to an enhancement of bacterial killing by paraquat, whereas treatment with chelating agents, such as nitrilotriacetate and desferrioxamine, markedly reduced, up to complete abolishment, the toxic effects. Some compounds contribute to the antioxidant defence by chelating transition metals and preventing them from catalysing the production of free radicals in the cell. Metal-chelating antioxidants, such as transferrin, albumin, and ceruloplasmin, ameliorate radical production by inhibiting the Fenton reaction, which is catalysed by copper or iron. Latchoumycandane and Mathur [250] investigated whether treatment with vitamin E protects the rat testis against oxidative stress induced by tetrachlorodibenzodioxin and revealed that the activities of antioxidant enzymes and the levels of hydrogen peroxide and lipid peroxidation did not change in the animals coadministered tetrachlorodibenzodioxin and vitamin E. Although several studies have demonstrated the protective effect of antioxidant administration against oxidative stress, it is important to note that not all antioxidants exert health benefits.

What Could Be the Reason? The inappropriate use of dietary supplements may lead to "antioxidative stress." A detailed description of the negative effects of antioxidants can be found in the publications by Poljsak et al. [253], Poljsak and Milisav [254], and references therein. Briefly, the intake of only one antioxidant may alter the complex system of endogenous antioxidative defence of cells or alter the cell apoptosis pathways [255].
The beneficial physiological cellular use of ROS is being demonstrated in different fields, including intracellular signalling and redox regulation, and synthetic antioxidants cannot distinguish between the radicals that have a beneficial role and those that cause oxidative damage to biomolecules. If administration of antioxidant supplements decreases total ROS/RNS formation, it may also interfere with the immune system's ability to fight bacteria and with essential defensive mechanisms for the removal of damaged cells, including those that are precancerous and cancerous [256]. When large amounts of antioxidant nutrients are taken, they can also act as prooxidants by increasing oxidative stress [257,258]. None of the major clinical studies using mortality or morbidity as an end point has found positive effects of antioxidant supplementation, such as vitamin C, vitamin E, or β-carotene. Some recent studies demonstrated that antioxidant therapy displays no effect and can even increase mortality (The Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study Group, 1994; [259][260][261]). On the other hand, antioxidant supplements do appear to be effective in lowering an individual's oxidative stress if his/her initial oxidative stress is above normal or above his/her set point of regulation [262,263]. Thus, antioxidant supplements may help the organism to correct elevated levels of oxidative stress when these cannot be controlled by the endogenous antioxidants.

Conclusions
There is substantial evidence that environmental pollution increases oxidative stress [264] and that dietary antioxidant supplementation and/or increased ingestion of fruits and vegetables may play a role in neutralising or buffering the effects of pollutants that display oxidising properties. In vitro and in vivo studies suggest that antioxidant nutrients and related bioactive compounds common in fruits and vegetables can protect against environmental toxic insults. It is important to emphasise that antioxidants as dietary supplements can provide protection against ROS-induced damage under conditions of elevated oxidative stress to the organism. It could be postulated that antioxidants would be therapeutically effective under circumstances of elevated oxidative stress or in aged mammals exposed to a stressor that generates exacerbated oxidative injury. Evidence is presented demonstrating that synthetic antioxidant supplements cannot provide appropriate or complete protection against oxidative stress and damage under "normal" conditions and that the administration of antioxidants to prevent disease or the aging process is controversial under conditions of "normal" oxidative stress. Many clinical trials in which individuals received one or more synthetic antioxidants failed to detect beneficial effects (reviewed in [253]). Thus, the results of clinical trials of exogenous antioxidant intake are conflicting and contradictory. These findings indicate that other compounds in fruits and vegetables (possibly flavonoids) or a complex combination of compounds may contribute to the improvement in cardiovascular health and the decrease in cancer incidence detected among individuals who consume more of these foods [265,266]. It must be understood that the use of synthetic vitamin supplements is not an alternative to regular consumption of fruits and vegetables.
Cutler explains that most humans maintain stable levels of oxidative stress and that, no matter how much additional antioxidant individuals consume in their diet, no further decrease in oxidative stress occurs. However, antioxidant supplements do appear to be effective in lowering an individual's oxidative stress if his/her initial oxidative stress level is above normal or above his/her stably regulated level [262,263]. Thus, antioxidant supplements may only provide a benefit to an organism if it was necessary to correct a high level of oxidative stress that could not be controlled by endogenous antioxidants. All of this evidence indicates the need to determine an individual's oxidative stress level prior to the initiation of antioxidant supplement therapy. Both the ROS/RNS formation and the antioxidative defence potential should be measured in a person in order to determine his/her oxidative stress status. Multiple methods of oxidative stress measurement are available today, each with its own advantages and disadvantages (reviewed in [253]). In the end, it should be stressed that more research should be performed to strengthen the evidence for dietary supplements as modulators of the adverse effects caused by increased exposure to environmental pollution.
2016-05-04T20:20:58.661Z
2014-07-20T00:00:00.000
{ "year": 2014, "sha1": "531762f645c3e46538bea8140949b41bbc4f14ad", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/omcl/2014/671539.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "61934a518f1ac42f45dd411f915cb6dc6c8cfd06", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
523608
pes2o/s2orc
v3-fos-license
Current Therapeutic Strategies for Invasive and Metastatic Bladder Cancer
Background: Bladder cancer is one of the most common cancers in Europe, the United States, and Northern African countries. Muscle-invasive bladder cancer is an aggressive epithelial tumor, with a high rate of early systemic dissemination. Superficial, noninvasive bladder cancer can most often be cured; a good proportion of invasive cases can also be cured by a combined modality approach of surgery, chemotherapy, and radiation. Recurrences are common and mostly manifest as metastatic disease. Those with distant metastatic disease can sometimes achieve partial or complete remission with combination chemotherapy. Recent developments: Better understanding of the biology of the disease has led to the incorporation of molecular and genetic features along with factors such as tumor grade, lympho-vascular invasion, and aberrant histology, thereby allowing identification of 'favorable' and 'unfavorable' cancers, which permits a more accurate, informed, and objective selection of patients who would benefit from neoadjuvant and adjuvant chemotherapy. Gene expression profiling has been used to find molecular signature patterns that can potentially be predictive of drug sensitivity and metastasis. Understanding the molecular pathways of invasive bladder cancer has led to clinical investigation of several targeted therapeutics such as anti-angiogenics, mTOR inhibitors, and anti-EGFR agents. Conclusion: With improvements in the understanding of the biology of bladder cancer, clinical trials studying novel and targeted agents alone or in combination with chemotherapy have increased the armamentarium for the treatment of bladder cancer. Although the novel biomarkers and gene expression profiles have been shown to provide important predictive and prognostic information and are anticipated to be incorporated in clinical decision-making, their exact utility and relevance call for larger prospective validation.

Introduction
Bladder cancer occurs mostly in men. An estimated 386,300 new cases and 150,200 deaths from bladder cancer occurred in 2008 worldwide. 1 Its incidence varies widely internationally, with the highest incidence rates found in Europe, the United States, and northern African countries, while the lowest rates are found in the countries of Melanesia and middle Africa. Smoking and occupational exposures (dye, arsenic, aromatic amines, rubber or leather industries) are the major risk factors in Western countries, whereas chronic infection with Schistosoma hematobium accounts for about 50% of the total burden in developing countries. 1,2 It is the fourth most common malignancy diagnosed in the US, with an estimated 70,530 new cases (52,760 men and 17,770 women).

Staging considerations in bladder cancer
Clinical assessment of the primary tumor includes bimanual examination under anesthesia before and after endoscopic biopsy or resection and histological verification of the presence of tumor. Findings of bladder wall thickening or a fixed mass suggest the presence of invasive disease. Appropriate imaging studies such as computed tomography or magnetic resonance imaging should be incorporated into clinical staging to assess the extravesical extension of the tumor and for lymph node evaluation, but one should exercise caution, as there is a potential for overestimation or even underestimation of the stage of the tumor.
The ability of these studies to determine the degree of muscle invasiveness preoperatively is modest, and pathologic staging is usually needed to confirm the extent of the disease. Detailed staging information is not discussed here; 10 in the context of this article, according to the 2010 American Joint Committee on Cancer (AJCC) cancer staging system, muscle-infiltrating disease is considered T2. It is further subdivided into T2a (inner half) or T2b (outer half), but with disease still confined within the bladder. T3 lesions extend beyond muscle into the perivesical fat. T4 lesions are those extending into adjacent organs - tumors invading the prostatic stroma, vagina, uterus, or bowel are classified as T4a, while those fixed to the abdominal wall, pelvic wall, or other organs are classified as T4b. A single lymph node metastasis in the true pelvis is considered N1 disease, while multiple nodal involvement in the true pelvis is N2 disease, and involvement of the common iliac nodes is staged as N3. The presence of distant metastatic disease (eg, lung, liver, bones) is classified as M stage. The number of lymph nodes examined from the operative specimen and the number of positive lymph nodes have been reported to be associated with survival. [11][12][13] In addition, the size of the largest tumor deposit and the presence of extra-nodal extension may independently affect survival. 14 Adequate lymph node sampling should include an average of >12 lymph nodes. 10

Prognostic and predictive markers for bladder cancer
The most important prognostic determinants in bladder cancer are the tumor grade and stage (whether the tumor is organ-confined or non-organ-confined). However, conventional histopathologic evaluation criteria are limited in their ability to accurately predict tumor behavior. A number of clinical and molecular characteristics are correlated with the response to chemotherapy and survival. Poor performance status and the presence of visceral (eg, pulmonary, liver, skeletal) metastatic disease are correlated with decreased survival. This was demonstrated in the intergroup trial that compared cisplatin alone with methotrexate, vinblastine, doxorubicin, and cisplatin (M-VAC) in patients with metastatic disease. 15 The median survival of the group with favorable features was 18.2 months vs 4.4 months for the group with unfavorable features. In the long-term follow-up results of this trial, no patients with liver or bone metastases and only one patient with a Karnofsky performance status <80 survived 6 years. 16 Subsequent reports have also confirmed the relationship between shortened survival and poor performance status or the presence of visceral metastases. 17,18 Prediction tools, also known as 'nomograms', developed on the basis of retrospective multivariate analyses to predict the probability of extravesical extension or nodal metastasis at radical cystectomy, estimate the risk of recurrence and survival after cystectomy in bladder cancer and are currently available for clinical use. [19][20][21] One such nomogram, developed by the International Bladder Cancer Nomogram Consortium (http://www.mskcc.org/applications/nomograms/bladder), is based on a retrospective multivariate analysis of more than 9000 patients from twelve centers of excellence worldwide. 20 This nomogram estimates the probability of remaining disease free at 5 years after cystectomy based on patient age, sex, time from diagnosis to surgery, pathologic tumor stage and grade, tumor histologic subtype, and regional lymph node status. The predictive accuracy of the constructed international nomogram (concordance index, 0.75) is significantly better than that of standard AJCC TNM staging (concordance index, 0.68; P < 0.001) or standard pathologic subgroupings (concordance index, 0.62; P < 0.001).
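Because the comparison above is expressed entirely in concordance indices, a brief reminder of what that statistic measures may help. The formula below is the generic definition of Harrell's concordance index (c-index) for survival data, given for orientation rather than taken from the cited report:

\[ c = \frac{\text{number of comparable patient pairs ranked correctly by the model}}{\text{total number of comparable patient pairs}} \]

A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, so a c-index of 0.75 versus 0.68 means that the nomogram correctly orders a randomly chosen comparable pair of patients about 75% of the time, compared with 68% for AJCC TNM staging.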
These nomograms do not make treatment recommendations, but simply provide a means to predict an advanced stage and to assess an individual patient's risk of disease recurrence and survival - all key factors in deciding the need for additional treatments in the form of neoadjuvant or adjuvant therapies. These predictive tools can be accessed online at http://www.nomograms.org. Several studies of molecular alterations as markers for prognostication have been reported, with the goal of using these molecular markers of an individual tumor to help select appropriate therapy. P53 is the most widely investigated molecular marker in bladder cancer. Overexpression of P53 detected by immunohistochemistry (IHC), which infers mutation of the TP53 gene, has been demonstrated to be a predictor of poor survival in patients with advanced bladder cancer. [22][23][24] In a report of 90 patients undergoing neoadjuvant M-VAC chemotherapy, those who harbored mutant P53 were three times more likely to die from their disease than those with wild-type P53. 25 The Ki-67 index is also significantly greater in high-grade tumors and in those overexpressing P53; a high Ki-67 index (>32% staining on IHC) is predictive of poor prognosis. 23 Positive staining for the pro-apoptotic markers Bax and CD40L has been shown to predict improved survival, while positive staining for the anti-apoptotic marker Bcl-2 is correlated with poor survival. 26,27 However, there have been inconsistencies in these findings, which may be due to arbitrary cut-off levels for positive or negative expression based on the level of IHC staining. Molecular profiling and proteomics can provide better indicators of tumor behavior and may become available for routine clinical practice. 28,29 A recent report from a German group 30 on the gene expression analysis of the chemotherapy response modifiers multidrug resistance gene 1 (MDR1) and excision repair cross-complementing 1 (ERCC1), performed on tumor samples from patients undergoing adjuvant chemotherapy for locally advanced bladder cancer, showed that expression of MDR1 and ERCC1 was independently associated with overall progression-free survival (PFS), with relative risks of 2.9 and 2.24, respectively. In another study, of 57 patients with advanced bladder cancer treated with a cisplatin-based regimen, the median survival was significantly longer in patients with low ERCC1 levels. 31 The MDR1 gene product P-glycoprotein (Pgp) is an energy-dependent efflux pump which, among other actions, reduces intracellular concentrations of certain chemotherapeutic drugs, including anthracyclines and vinca alkaloids, both of which are components of the M-VAC regimen. Although cisplatin is not considered a de novo substrate of Pgp, studies have suggested an altered expression of MDR1 after cisplatin administration, possibly resulting in decreased cytotoxic efficacy. 32 The ERCC1 gene is involved in DNA repair and may mediate resistance to alkylating agents.
Recently, a 20-gene expression model was reported to be effective in predicting pathological nodal status, thereby allowing the selection of high-risk patients for neoadjuvant chemotherapy on the basis of the risk of node-positive disease, while sparing others from toxic side-effects and the delay to cystectomy. 33 Currently, research on molecular prognostication of invasive bladder cancer is still in its infancy and the data generated from this work are preliminary, but this research certainly holds promise for future personalization of therapy by better understanding the biology of the disease, matching the appropriate group of patients with the right drug combination, and estimating the efficacy of those chemotherapeutic and biologic agents.

Management of muscle-invasive bladder cancer
The standard treatment approach for patients with localized muscle-invasive bladder cancer is radical cystectomy with urinary diversion. Reconstructive techniques such as the ileal conduit, catheterizable pouch, and neo-bladder eliminate the need for external drainage devices in some male patients and provide improved quality of life for those who undergo radical cystectomy. Radical cystectomy requires removal of the bladder, adjacent organs, and regional lymph nodes. In men, it generally includes removal of the prostate and seminal vesicles along with the urinary bladder, and in women, removal of the uterus, cervix, ovaries, and anterior vagina is usually performed en bloc with the bladder. Despite undergoing such 'radical' surgery, several patients are at risk of developing distant metastases and also loco-regional recurrence with second primary urothelial tumors in the renal pelvis, ureters, or urethra. Multimodality approaches in the form of neoadjuvant or adjuvant chemotherapy have been evaluated in randomized trials and are currently applied clinically to decrease relapses and increase cure rates. Results of such contemporary clinical trials in both the neoadjuvant and adjuvant settings will be discussed in this section.

Neoadjuvant therapy
In patients with muscle-invasive bladder cancer, the most important treatment-related issues include the identification of those who can be cured with radical cystectomy alone and of those who, due to a high risk of recurrence or metastasis, require a multimodality approach to achieve cure. For a long time, radical cystectomy has been considered the standard approach for patients with muscle-invasive bladder cancer. Despite a curative surgery, about half of these patients develop metastatic disease within 2 years, with a high mortality among those developing metastatic disease. 6,34 Administering cisplatin-based systemic chemotherapy either before (neoadjuvantly) or after (adjuvantly) cystectomy has the potential to eradicate micrometastatic disease and thereby improve survival in this group of patients. Hence, neoadjuvant chemotherapy followed by radical cystectomy is now considered by many as the new standard of care for this disease. The advantages of neoadjuvant therapy include delivery of chemotherapy through intact vasculature, which is often affected by surgery, and downsizing of the tumor prior to cystectomy, thereby increasing the rate of complete resection with a likelihood of long-term remission and/or survival in such patients. Patients often tolerate greater dose intensity and more cycles of chemotherapy preoperatively than postoperatively.
35 The disadvantage of such therapy is the delay of definitive local therapy in patients who do not respond to neoadjuvant chemotherapy, which could potentially be associated with disease progression. The results of several randomized clinical trials [36][37][38][39][40] and a meta-analysis of all neoadjuvant studies in bladder cancer 41 have favored this approach of platinum-based, multiagent, neoadjuvant chemotherapy followed by cystectomy over cystectomy alone (Table 1). The largest neoadjuvant chemotherapy trial (BA06 30894) was conducted jointly by the Medical Research Council (MRC), the European Organization for Research and Treatment of Cancer (EORTC), and several international collaborators. 36 In total, 976 patients with high-grade T2-T4a urothelial bladder cancer, accrued over 5.5 years from 106 institutions, were randomly assigned to three cycles of neoadjuvant cisplatin, methotrexate, and vinblastine (CMV) chemotherapy (n = 491) or no chemotherapy (n = 485), followed by the institution's choice of definitive therapy with radical cystectomy and/or radiation therapy. Of the patients in the chemotherapy and no-chemotherapy groups, 42% and 43%, respectively, received radiation therapy alone as definitive therapy. The pathologic complete response (pCR) rate with neoadjuvant chemotherapy was 33%. Overall survival (OS) at 3 years in the two groups was 55.5% vs 50%, respectively, an absolute survival benefit of 5.5% favoring the chemotherapy group. However, the prespecified statistical aim to detect an absolute survival improvement of 10% (from 50% to 60%) was not met. The most recent update, 42 after 8 years of follow-up, showed a statistically significant 16% reduction in the risk of death in patients who received neoadjuvant CMV prior to radiotherapy and/or cystectomy; this corresponds to an increase in 3-year survival from 50% to 56%, an increase in 10-year survival from 30% to 36%, and an increase in median survival time of 7 months (from 37 to 44 months) in CMV-treated patients compared with those treated with local therapy only. In a US Intergroup trial (INT 0080), 38 307 of the 317 enrolled patients with T2-T4a urothelial bladder cancer were randomized to three cycles of neoadjuvant M-VAC (n = 154) or no chemotherapy (n = 153) followed by cystectomy. The study took almost 13 years to complete accrual. The pCR rate with neoadjuvant M-VAC chemotherapy was 38%. Median follow-up was 8.7 years. Patients who received M-VAC showed a trend towards improvement in median OS (77 vs 46 months, P = 0.06). A subsequent retrospective analysis showed that, after adjustment for pathologic factors and neoadjuvant chemotherapy use, an optimal cystectomy with thorough pelvic node dissection, defined as negative resection margins and at least 10 lymph nodes in the surgical specimen, was associated with longer survival (>80% at 5 years). 43 Another recent secondary analysis of this study showed that the presence of squamous or glandular differentiation in locally advanced urothelial bladder cancer does not confer resistance to M-VAC and, in fact, may be an indication for the use of neoadjuvant chemotherapy before radical cystectomy. 44 Randomized clinical trials (Nordic 37,39 and GISTV, 40 Table 1) did not demonstrate a survival difference with neoadjuvant chemotherapy. The majority of the randomized clinical trials have not demonstrated a survival benefit with the addition of neoadjuvant chemotherapy.
Inadequate sample size, suboptimal chemotherapy, premature closure, and/or inadequate follow-up have all been proposed as explanations for these negative results. Hence, meta-analyses have been performed to interpret these data. An update of a systematic review and meta-analysis of clinical trials of neoadjuvant chemotherapy in invasive bladder cancer was published by the Advanced Bladder Cancer meta-analysis collaboration. 41 In patients who received cisplatin-based combination chemotherapy prior to cystectomy, a 5% absolute benefit, improving survival from 45% to 50% at 5 years (P = 0.003), was observed. No information was reported about the quality of life or the toxicities of the various chemotherapeutic regimens used.
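To put that figure in perspective, the 5% absolute survival benefit can be re-expressed as a number needed to treat (NNT). This is a generic back-of-the-envelope reading of the reported absolute risk reduction, not a statistic reported by the meta-analysis itself:

\[ \mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.50 - 0.45} = 20 \]

That is, roughly 20 patients would need to receive cisplatin-based neoadjuvant chemotherapy for one additional patient to be alive at 5 years, which helps explain why the clinical community continues to debate how broadly this approach should be applied.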
Most patients from the EORTC/MRC, INT 0080, and Nordic studies were young, with a median age of 63 to 65 years, excellent performance status, and good renal function; hence, there remains a question as to whether these results can be applied to the elderly patients who form the major proportion of the bladder cancer population. Regimens such as gemcitabine plus cisplatin (GC), which in the metastatic setting has been shown to be less toxic while achieving similar response rates and survival, 45 have not been tested prospectively in the neoadjuvant setting. A recent single-institution retrospective study by Dash et al 46 showed a pCR of 26% with GC, which is comparable to other cisplatin-based regimens. A combination of a taxane, nab-paclitaxel, along with gemcitabine and carboplatin in a neoadjuvant setting was recently reported. 47 In this Phase II trial, 27 eligible patients with T2-T4, N0, or any T, N1-3 bladder cancer were treated with three cycles of nab-paclitaxel along with gemcitabine and carboplatin, followed by cystectomy. Of the 27 patients, 25 completed all three cycles. Grade 3-4 neutropenia was seen in all patients. pCR, the primary endpoint, was seen in 30% of the patients, with 25% demonstrating CIS. This combination appears to be an active regimen and could be of potential benefit in patients who are not candidates for cisplatin-based therapy. Another group 48 reported a retrospective study of 80 patients who underwent accelerated M-VAC therapy administered at 2-week intervals with granulocyte colony-stimulating factor (G-CSF) support in an attempt to minimize the delay to definitive therapy and improve the efficacy of neoadjuvant chemotherapy. All planned cycles of chemotherapy were completed by 84% of patients, and the median duration of chemotherapy was 34 days. All 80 patients received their planned definitive therapy (cystectomy in 60 patients; radiotherapy in 20 patients). pCR was seen in 43% of patients treated with surgery, with an objective radiological response in 75% of patients. There were no treatment-related deaths, and the incidence of grade ≥3 toxicities was 11%. Accelerated M-VAC appears to be a safe and well-tolerated regimen that needs to be prospectively evaluated. Although these newer regimens are promising, there are no data yet from well-powered randomized trials supporting their use. For those patients with intercurrent illnesses that prohibit the use of M-VAC, GC may constitute a reasonable alternative. Several clinical trials are evaluating biologic agents along with chemotherapeutic combinations in the neoadjuvant setting, as listed in Table 2. The addition of radiation therapy to chemotherapy in the neoadjuvant setting has been investigated in randomized studies with equivocal results; hence this approach is not considered a standard of care. The results of these trials will not be discussed here but are available for review elsewhere. [49][50][51][52]

The primary goal of muscle-invasive bladder cancer treatment is cure, and bladder preservation is a secondary consideration. Organ-sparing approaches are considered as an alternative, particularly in frail and very elderly patients and those with significant medical co-morbidities or those who will not accept the side-effects and risks associated with surgery. Avoidance of radical cystectomy as a reasonable approach in those patients who have a complete response to neoadjuvant therapy has been investigated in a few clinical trials. Herr et al 53 have reported a nonrandomized study of 111 patients with T2-3, N0, M0 urothelial cancer who received neoadjuvant M-VAC chemotherapy. Forty-three of the 60 patients who achieved a complete clinical response (cT0) underwent bladder-sparing surgery (transurethral resection of bladder tumor [TURBT] alone in 28 patients; partial cystectomy in 15 patients), while 17 underwent radical cystectomy. At 10 years, 32 of the 43 patients (75%) who underwent bladder-sparing surgery were alive. These results were similar to those of the group who underwent radical cystectomy (65% survival at 10 years). However, the bladder remained at risk for new invasive tumors (24 patients, 56% relapse), most requiring salvage cystectomy. In a similar study reported by Sternberg et al, 54 104 patients with T2-T4a urothelial cancer received neoadjuvant therapy with M-VAC; based on the degree of response to chemotherapy, 65 patients then underwent bladder-sparing surgery (TURBT alone in 52 patients; partial cystectomy in 13 patients) while 39 patients had radical cystectomy. The estimated 5-year survival in the group undergoing bladder-sparing surgery was 67% compared with 46% in the group who underwent radical cystectomy. However, a more recent Phase II clinical trial reported by deVere et al 55 showed that though the complete clinical response rate (cT0) by TURBT following neoadjuvant therapy (with gemcitabine, carboplatin, and paclitaxel) was 46%, there was an unacceptably high rate (60%) of persistent cancer at cystectomy in patients presumed to have pT0 status. The authors concluded that patients completing neoadjuvant chemotherapy should strongly consider definitive local therapy regardless of post-chemotherapy cT0 status. Based on these studies, one can infer that a considerable number of patients whose invasive tumors are significantly downsized with combination chemotherapy may be curable by conservative surgery, such as partial cystectomy, rather than radical cystectomy; however, downsizing with neoadjuvant therapy does not necessarily ensure complete local control of disease, especially given the high risk of metachronous bladder cancer in these patients.

Adjuvant therapy
Unlike neoadjuvant treatment, adjuvant chemotherapy can be tailored according to the pathologic staging prior to administration of systemic therapy, thereby limiting the toxicity associated with such treatment and also avoiding any delay in potentially curative surgery in those patients whose tumor is not responsive to cytoreductive chemotherapy. The availability of adequate tissue for analysis of molecular prognostic and predictive markers may be another advantage.
The disadvantage of adjuvant therapy is that there could be a delay in initiating systemic therapy for occult metastatic disease while treating the primary focus; in some surgically debilitated and elderly patients, it can be very challenging and sometimes not possible to administer adequate systemic chemotherapy following cystectomy. As in the neoadjuvant setting, several randomized clinical trials have been reported in the adjuvant setting, with conflicting results and with caveats such as inadequate sample size, flawed clinical trial design, and poor accrual leading to early termination. The older trials have been reviewed elsewhere. 56 A systematic review and meta-analysis of individual patient data from those trials was published in 2005. 57 The results, based on 491 patients from six trials, representing 90% of all patients randomized in cisplatin-based combination chemotherapy trials and 66% of patients from all eligible trials, suggested a 25% relative reduction in the risk of death for chemotherapy compared with control, with an overall hazard ratio for survival of 0.75 (P = 0.019). It concluded that there was insufficient evidence on which to reliably base treatment decisions.
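The meta-analysis states the same result in two interchangeable forms. The identity below is a generic illustration of how the text converts between them, not a calculation reported by the cited publication, and it reads 1 − HR loosely as a relative reduction in the risk of death:

\[ \mathrm{RRR} \approx 1 - \mathrm{HR} = 1 - 0.75 = 0.25 \;(25\%) \]

By the same reading, the 16% reduction in the risk of death quoted earlier for the updated MRC/EORTC neoadjuvant trial corresponds to a hazard ratio of roughly 0.84.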
The contemporary cooperative trials will be reviewed in this section. In an Italian multicenter randomized Phase III trial, 58 patients with pT2G3, pT3-4, N0-2 transitional cell bladder carcinoma were assigned, after radical cystectomy, to four cycles of GC or to observation followed by the same chemotherapy at progression. Only 194 patients were enrolled (32% of the target) and the trial was stopped due to poor accrual. At a median follow-up of 32.5 months, relapse rates were similar in both groups (43% vs 45%), with no difference in disease-free survival (DFS). The 3-year OS was 67% for the chemotherapy arm and 48% for the observation arm, and the 3-year DFS was 47% and 35%, respectively, suggesting no improvement in OS or DFS with adjuvant GC in these patients. In a clinical trial conducted by the Southwest Oncology Group, 59 499 patients post-radical cystectomy for urothelial cancer with pT1-T2, N0 disease were assessed for P53 expression. Those positive for P53 expression, with ≥10% nuclear reactivity by IHC, were randomly assigned to observation vs three cycles of adjuvant M-VAC. The primary endpoint was recurrence-free survival. The trial was terminated after a planned interim analysis due to futility. Among the 114 patients with P53-positive tumors who were randomized to observation or adjuvant chemotherapy, there were no differences in time to recurrence or OS. In the entire cohort, the study did not confirm the prognostic value of P53 expression by IHC for either recurrence or OS. In the randomized Phase III Spanish Oncology Genitourinary Group trial 99/01, 60 patients with high-risk muscle-invasive bladder cancer (pT3-4 and/or node-positive disease) were assigned to four courses of chemotherapy with the paclitaxel, gemcitabine, and cisplatin combination or to observation. The primary objective was OS. The study was opened in July 2000 and prematurely closed in July 2007 due to poor recruitment, with 142 patients randomized (74 to observation and 68 to chemotherapy). At a median follow-up of 51 months, there was a statistically significant increase in OS with chemotherapy compared with observation. Five-year OS was 60% vs 31%, respectively. Secondary endpoints such as DFS, time-to-progression (TTP), and disease-specific survival were also superior in the chemotherapy arm. Importantly, this abstract reports a post-hoc review of a study that was closed early to accrual, and further follow-up and peer review will be required before it can be viewed as definitive. A large Phase III trial by the EORTC (protocol #30994), evaluating observation vs adjuvant chemotherapy with one of three chemotherapy regimens (GC, M-VAC, or high-dose M-VAC) in high-risk bladder cancer (pT3-4 and/or node-positive disease), was also prematurely closed in August 2008 due to poor accrual after enrolment of 278 patients; another large 800-patient trial by the Cancer and Leukemia Group B, evaluating the role of high-dose intensity chemotherapy vs standard chemotherapy in the adjuvant setting, also suffered from poor accrual that led to its early closure. Results of these trials are currently not available. Based on the older clinical trials, the meta-analysis, and the contemporary clinical trials in the adjuvant setting, there appears to be no clear evidence for the role of adjuvant chemotherapy in locally advanced bladder cancer. Patients are encouraged to participate in such clinical trials whenever possible. In patients with pT2, N0 urothelial bladder cancer, observation following cystectomy seems to be a rational approach, while for those patients with pT3-4 and/or node-positive disease, four cycles of chemotherapy with M-VAC or GC following cystectomy appears reasonable, since these regimens have shown significant activity in the metastatic setting.

First-line therapy
The standard approach for patients with inoperable locally advanced or metastatic disease is systemic chemotherapy. Urothelial bladder cancer is highly responsive to cisplatin-based chemotherapy; however, the median survival even with aggressive chemotherapy is only about 15 months. Several chemotherapeutic drugs such as cisplatin, methotrexate, adriamycin, ifosfamide, docetaxel, and gemcitabine have been shown to have single-agent activity in either first-line or subsequent therapy of metastatic bladder cancer, but with low overall response rates (ORR) and short durations of response. [61][62][63][64][65][66] This led to the development of cisplatin-based combination regimens. In a randomized trial of 108 patients comparing cisplatin with cisplatin plus methotrexate, 67 the combination demonstrated a response rate of 45% vs 31% for single-agent cisplatin, which was not significantly different. There was an improved TTP but no difference in survival. In a 58-patient cohort with metastatic transitional cell carcinoma, the combination of cisplatin, methotrexate, and vinblastine showed an ORR of 56% with a complete response rate (CR) of 28%. Patients who achieved CR showed a prolonged DFS of 11 months. The M-VAC regimen, in a nonrandomized clinical trial 68 of 133 patients with advanced urothelial tract cancer, showed tumor regression in about 72% of cases, and 36% of those achieved CR; 3-year survival was 55% among patients who had a CR. Further, in a prospective randomized international cooperative group trial, M-VAC was compared with single-agent cisplatin. 15 Patients (269) were assigned to M-VAC or cisplatin, with cycles repeated every 28 days until tumor progression or a maximum of six cycles. The M-VAC regimen was associated with greater toxicity, particularly leukopenia, mucositis, neutropenic fever, and drug-related mortality.
Response rates were superior in the M-VAC arm compared with the single-agent cisplatin arm (39% vs 12%), and PFS (10.0 vs 4.3 months) and OS (12.5 vs 8.2 months) were significantly greater for the combined-therapy arm. In another randomized trial with 110 patients, 69 M-VAC was compared with a regimen consisting of cisplatin, cyclophosphamide, and doxorubicin (CISCA); the M-VAC arm showed a significantly higher response rate (65% vs 46%) and median survival (48 vs 36 weeks) compared with CISCA. In attempts to translate response rates into improved survival rates, high-dose intensity M-VAC was evaluated in an EORTC Phase III clinical trial (protocol #30924) 70 with a recent 7-year update of the results. 71 Patients (263) were randomly assigned to high-dose M-VAC given at 2-week intervals with growth factor support or to standard M-VAC given every 4 weeks. ORR (63% vs 50%), CR (21% vs 9%), and PFS (9.1 vs 8.2 months) were improved, but there was no difference in OS, which was the primary endpoint (15.5 vs 14.1 months). In the subsequent update with more than 7 years of follow-up, high-dose M-VAC showed a borderline statistically significant relative reduction in the risk of death, with 5-year survival of 21.8% vs 13.5% (hazard ratio = 0.76) compared with standard M-VAC. Toxicity is a major consideration with M-VAC, particularly myelosuppression, neutropenic fevers, sepsis, and mucositis, with significant toxicity-related deaths reported in most clinical trials evaluating M-VAC. High-dose M-VAC is considered standard of care at some centers, but not all. In Phase II clinical trials, gemcitabine in combination with cisplatin has shown response rates of about 50% with a median survival of around 14 months. 72,73 Based on these results, this combination was evaluated in a randomized Phase III trial of 405 patients, comparing it with M-VAC. 45 Chemotherapy was administered every 4 weeks for a maximum of six cycles. More patients in the GC arm completed the planned six cycles of therapy with fewer dose adjustments, and significantly fewer patients had neutropenia and related complications or toxicity-related deaths. The ORR (49% vs 46%), TTP (7.4 vs 7.4 months), and median survival (13.8 vs 14.8 months) were similar in both groups. This study demonstrated that GC had a better safety profile and tolerability while providing a similar survival benefit compared with M-VAC. An updated analysis showed similar 5-year survival rates between the two regimens. 17 Based on its similar efficacy and lower toxicity, GC rather than M-VAC is considered by many to be the standard first-line regimen for patients with advanced urothelial bladder cancer.
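Because 'high-dose intensity' M-VAC refers to compressing the schedule rather than raising individual doses, a simple dose-intensity calculation helps in reading the comparison above. The figures below are purely illustrative and assume the per-cycle doses are left unchanged when the cycle length is halved with growth factor support:

\[ \mathrm{DI} = \frac{\text{dose per cycle}}{\text{cycle length}}, \qquad \frac{\mathrm{DI}_{2\,\text{weeks}}}{\mathrm{DI}_{4\,\text{weeks}}} = \frac{D/2}{D/4} = 2 \]

Under that assumption, the 2-week schedule delivers roughly twice the dose intensity of the standard 4-week schedule, which is the rationale behind both the high-dose M-VAC and the accelerated neoadjuvant M-VAC regimens discussed above.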
Addition of paclitaxel to GC was evaluated in a Phase III clinical trial by the EORTC (protocol #30987), 74 which enrolled 627 chemotherapy-naïve patients with advanced urothelial carcinoma, 81% of whom had primary bladder tumors. Chemotherapy was administered for a maximum of six cycles. Both regimens were well tolerated overall. Results showed that the triplet combination had a higher rate of ORR (57% vs 46%) and CR (15% vs 10%); though survival was 3 months longer (15.7 vs 12.8 months) in the 3-drug arm, it was not statistically different from GC. The combination of docetaxel and cisplatin (DC) has been compared with M-VAC in a multicenter Phase III clinical trial by the Hellenic Co-operative Oncology Group, 75 in which 220 patients were enrolled. Two Phase II studies have reported antitumor efficacy of the combination of gemcitabine with pemetrexed, a folate antimetabolite, in patients with untreated metastatic urothelial cancer, demonstrating moderate antitumor activity at the expense of significant myelosuppression. In the ECOG study (E4802), 79 with a cohort of 46 patients treated for a maximum of six cycles, the ORR was 31.8%; median TTP was 5.8 months with a median OS of 13.4 months. The most common grade ≥3 toxicity was neutropenia (75%), with 11% febrile neutropenia. In an earlier study of 64 patients, 80 the reported ORR was 20% in the intention-to-treat population (28% among the 47 patients evaluable for response); median OS was 10.3 months. Significant grade ≥3 toxicity included neutropenia (38%), febrile neutropenia (17%), and anemia (19%). Eribulin, currently approved by the US Food and Drug Administration (FDA) to treat patients with metastatic breast cancer based on the results of the Phase III EMBRACE trial, 81 is a synthetic analog of halichondrin B and a potent inhibitor of microtubule dynamics. Preliminary results from an ongoing Phase II trial evaluating eribulin in patients with urothelial cancer and no prior cytotoxic therapy for advanced disease (neo/adjuvant therapy allowed) were recently reported. 82 Results for the 37 evaluable patients demonstrated an ORR of 38%, with a RR of 34% in patients who had received prior neo/adjuvant therapy. At a median follow-up of 19.8 months, the PFS was 3.9 months with a median OS of 9.4 months, suggesting promising activity of eribulin in this group of patients. The most common grade ≥3 toxicity reported was neutropenia (54%). Its safety and efficacy in combination with GC are currently being evaluated in a Phase I/II study (Table 2). With a better understanding of tumor biology, including a few upregulated/dysregulated signaling pathways (Figure 1) in urothelial cancer, 83 several agents that act against specific targets among these signaling pathways, particularly the vascular endothelial growth factor receptor (VEGFR) (eg, bevacizumab), the epithelial growth factor receptor (EGFR) (eg, cetuximab), and the mammalian target of rapamycin (mTOR) (eg, everolimus), are currently being tested in first-line therapy in combination with cytotoxic chemotherapy for patients with advanced bladder cancers, with some of the agents showing promising results (Table 3; Figure 1). Bevacizumab has been studied in combination with GC as first-line therapy for metastatic urothelial carcinoma in a Phase II trial by the Hoosier Oncology Group. 84 In this single-arm study, 43 patients received GC along with bevacizumab 15 mg/kg every 3 weeks. Known antiangiogenic treatment-related toxicities (bleeding, thromboembolism) were common. The ORR was 72% with a CR of 21%, and another 16% of patients had stable disease. At a median follow-up of 27.2 months, PFS was 8.2 months with an OS of 20.4 months, suggesting that the combination of GC and bevacizumab is an active first-line regimen in metastatic bladder cancer. This is now being tested in a large Phase III trial (Table 3). Experience with trastuzumab-based first-line therapy of recurrent and/or metastatic Her2/neu-positive urothelial cancers was reported in a Phase II study. 85 Expression of Her2/neu in urothelial cancers can be variable, ranging from 8.5% to 81%, and in this study 52.3% of the tumors were positive (57 of the 109 screened cases). Her-2/neu-positive patients had more metastatic sites and visceral metastasis than did Her-2/neu-negative patients.
Forty-four of the 57 Her-2/neu-positive patients were treated with a combination of trastuzumab, paclitaxel, carboplatin, and gemcitabine. The median number of chemotherapy cycles administered was six. The ORR was an impressive 70%; median TTP was 9.3 months and median OS was 14.1 months. The most common grade 3-4 toxicities were myelosuppression and sensory neuropathy; grade 3 cardiac toxicity was reported in two patients (4.5%). Though the results are very promising, there appears to be no consensus on routinely screening for Her-2/neu expression on all bladder cancer specimens. Based on these results, a prospective Phase III clinical study with paclitaxel, carboplatin, and gemcitabine, with or without trastuzumab, is clearly warranted. Second-line therapy An effective salvage therapy for relapsed urothelial cancer following first-line chemotherapy has remained an unmet need despite several research efforts. Frequently there is a significant deterioration in the overall clinical condition, often associated with renal impairment, after progression following first-line therapy, which makes it difficult to enroll these patients in clinical trials, or even to administer systemic chemotherapy off-protocol. In several clinical trials, the reported response rates with single agents such as paclitaxel, 79 ifosfamide, 86 docetaxel, 65 and gemcitabine 64 have been about 20% or less. Combinations such as paclitaxel with gemcitabine, 70,87 oxaliplatin with 5-fluorouracil (FOLFOX), 88 or gemcitabine (GEM-OX) 89 after failing M-VAC have demonstrated response rates in the range of 20% to 27%, but with significant toxicities such as neutropenia, thrombocytopenia, and peripheral neuropathy. Currently, there is no defined standard second-line therapy for metastatic bladder cancer. Some of the more recent trials with promising results will be reviewed in this section. Vinflunine is a novel, bifluorinated, third-generation vinca alkaloid antimitotic agent that has demonstrated superior antitumor activity to other agents in its class. 90 The efficacy of vinflunine as a second-line therapy for patients with relapsed or refractory advanced urothelial cancer after first-line platinum-containing chemotherapy has been evaluated in 3 open-label, multicenter studies. [91][92][93] In the two Phase II studies, vinflunine demonstrated moderate antitumor activity with an RR of 15% 93 and 18%. 92 The Phase III trial compared vinflunine plus best supportive care (BSC) with BSC alone. Patients (370) were randomly assigned in a 2:1 ratio to receive vinflunine plus BSC (n = 253) or BSC alone (n = 117). Both arms were well balanced, except that there were more patients with a poorer performance status (10% difference) in the BSC arm. The most common grade ≥3 toxicities in the vinflunine arm were neutropenia (50%), febrile neutropenia (6%), anemia (19%), fatigue (19%), and constipation (16%). In the intent-to-treat population, the objective of a median 2-month survival advantage (6.9 months for vinflunine plus BSC vs 4.6 months for BSC) was achieved but was not statistically significant (P = 0.287). Multivariate Cox analysis adjusting for prognostic factors showed a statistically significant effect of vinflunine on OS (P = 0.036), reducing the risk of death by 23%. ORR (8.6% vs 0%), disease control (41.4% vs 24.8%), and PFS (3.0 vs 1.5 months) were all statistically significant, favoring vinflunine.
With an acceptable safety profile, vinflunine appears to be a reasonable second-line therapy option for patients with bladder cancer who have relapsed following cisplatin-based therapy. In a recent randomized Phase III trial by the German Association of Urological Oncology (AB 20/99), 94 short-term (maximum of six cycles every 3 weeks) vs prolonged therapy (treatment continued until disease progression) with a combination of gemcitabine with paclitaxel was evaluated as second-line chemotherapeutic treatment for patients with metastatic urothelial cancer after failure of cisplatin-based first-line therapy. Of the 102 enrolled patients, 96 were eligible for analysis. The results showed that there was no difference in OS (7.8 vs 8.0 months), PFS (4 vs 3.1 months), or ORR (37.5% vs 41.5%) between the short-term and prolonged therapy. More patients had severe anemia (26% vs 6.7%) in the prolonged treatment arm. The high response rate (∼40%) suggests that the combination of gemcitabine and paclitaxel is a reasonable option as second-line therapy in this group of patients. Activity of single-agent pemetrexed as a second-line therapy in patients with urothelial cancer was reported by the Hoosier Oncology Group. 95 Forty-seven patients were enrolled and included in the intention-to-treat efficacy analysis. The ORR was 27.7%, median TTP was 2.9 months, median duration of response was 5 months, and median OS was 9.6 months, with fatigue and myelosuppression accounting for the most common grade 3-4 toxicities. This study supports pemetrexed as a reasonable second-line therapy option in this patient population. Results of a Phase II study evaluating single-agent nab-paclitaxel, the albumin-bound nanoparticle formulation, in a cohort of 48 patients with urothelial cancer who had progressed or relapsed after cisplatin-based chemotherapy were recently presented. ORR in 47 evaluable patients was 32%. With an additional 21% of patients having stable disease, the clinical benefit rate (CBR) was 53%, representing one of the highest reported RRs in the second-line therapy of urothelial cancer. Nab-paclitaxel was well tolerated, and the most frequent grade ≥3 adverse events reported were pain (45%), hypertension (14%), and fatigue (8%). Ixabepilone, an epothilone B analog, which binds to β-tubulin and stabilizes microtubules, has shown promising activity in several solid tumors and is currently approved by the FDA for the treatment of metastatic breast cancer. 96 Its efficacy in urothelial cancer was evaluated in a Phase II trial by ECOG (E3800). 97 In this study of 45 patients, the ORR was a dismal 11.9% with a median survival of 8 months. Toxicity was moderate, with granulocytopenia, fatigue, and sensory neuropathy being the most common side effects reported. Signaling through VEGFR and EGFR pathways is thought to play a critical role in the growth and progression of urothelial cancers. 83 Several molecularly targeted approaches are currently under investigation as second-line therapies in recurrent/refractory bladder cancers (Table 4). A recent report of a multicenter, noncomparative randomized Phase II study 98 of cetuximab with or without paclitaxel in patients with previously treated metastatic urothelial cancer suggests that EGFR inhibition with cetuximab enhances the antitumor activity of paclitaxel in this setting. Thirty-nine evaluable patients were enrolled. The single-agent cetuximab arm was closed after nine of the first eleven patients progressed by 8 weeks.
ORR was 28.5%, and median PFS for the cetuximab-paclitaxel arm was 3.8 months with a median OS of 9.5 months. Pazopanib, a second-generation multitargeted tyrosine kinase inhibitor (TKi) of VEGFR-1, 2, and 3, platelet-derived growth factor receptor, and c-kit, has shown promising results as a single agent in an ongoing Phase II trial in heavily pretreated patients with relapsed or refractory urothelial cancer. 99 In total, 18 patients had been enrolled by July 2010, 10 of whom had a primary bladder tumor; 22% of patients had a partial response and 61% had stable disease, for a CBR of 83%. The drug was well tolerated overall, with grade ≥3 nausea or anorexia reported in two patients and hypertension in one patient. More patients need to be enrolled and longer follow-up is required. A Phase II study evaluating single-agent aflibercept, a soluble receptor for VEGF also known as VEGF Trap, in urothelial cancer patients who have failed cisplatin-based therapy has completed accrual, but results have not yet been reported. 100 Clinical trials with other targeted agents such as lapatinib (HER-2 and EGFR TKi), erlotinib (HER-1/EGFR TKi), sunitinib (multiple receptor TKi), and everolimus (mTOR inhibitor) are ongoing (Table 4). Management of variants and nonurothelial cell malignancies of the bladder Primary nonurothelial bladder malignancies are rare, representing less than 10% of all bladder cancers. The recent World Health Organization classification of urothelial cancers lists 13 different histologic variants of urothelial cancer 101 (Table 5). Divergent differentiation patterns such as squamous, glandular (adenocarcinoma), micropapillary, nested, plasmacytoid, and carcinosarcoma/sarcomatoid variants should be identified because of their potential for an unfavorable prognosis despite aggressive surgical management, related both to aggressive biological behavior and, often, to an advanced stage at the time of diagnosis. Squamous cell carcinoma (SCC) is the second most prevalent epithelial neoplasm of the bladder, accounting for approximately 3% to 5% of bladder tumors in Western countries. 102 While the pathogenesis of SCC of the bladder has been only partly understood, it is thought to involve factors that result in chronic bladder infection and irritation. SCC of the bladder in countries of the Middle East and Egypt has a distinct pathogenesis that is linked to chronic infections with Schistosoma haematobium. In regions where this water-borne parasitic pathogen is endemic, SCC not only represents the most common histological type of bladder tumor, but also the most prevalent form of cancer in men overall, accounting for 30% of cancers. Preoperative radiation has been shown to decrease pelvic recurrence in a single-institution study, but this remains of uncertain benefit. 103 Standard chemotherapy regimens appear to have limited impact on the disease due to the relative chemoresistance of SCC. The use of chemotherapy regimens, such as the combination of paclitaxel, carboplatin, and gemcitabine, which have demonstrated efficacy in patients with SCC of other sites such as the lung and the head and neck, may offer better outcomes. 85 Standard treatment of Schistosoma-associated SCC is radical cystectomy and urinary diversion. A potential role for neoadjuvant or adjuvant radiation and chemotherapy remains poorly defined.
Pure adenocarcinoma of the bladder represents the third most common type of epithelial tumor comprising 0.5% to 2.0% of all bladder tumors. 102 In advanced cases of adenocarcinoma of the bladder, conventional chemotherapy (eg, M-VAC) is not effective, and hence the use of chemotherapy or radiotherapy should be individualized and may be of potential benefit in select patients. A recent SEER-based analysis showed that while patients with adenocarcinoma of the bladder undergo radical cystectomy at more advanced disease stage, the stage-and grade-adjusted cancer-specific mortality is the same among patients with adenocarcinoma and urothelial carcinoma of the bladder. 104 Primary small cell or neuroendocrine carcinoma of the bladder is an extremely uncommon, aggressive, poorly differentiated neoplasm that is similar to small cell carcinoma of the lung in clinical behavior and accounts for less than 0.7% of all bladder tumors. A report from Mayo Clinic suggest that more than half the patients had metastatic spread to the loco-regional lymph nodes, liver, or bone at the time of presentation. 105 Chemotherapy regimens similar to those used in small cell lung cancer of the lung have been employed and shown to be of benefit in several retrospective studies. 105,106 Sarcoma, a malignant mesenchymal tumor, and carcinosarcoma, a biphasic mixture of carcinoma and sarcoma, have very rare occurrence in the bladder with only a few case series reported to date. [107][108][109] Metastatic sarcomas and carcinosarcomas are frequently treated by employing multimodality protocols including resection, radiation, and chemotherapy. Doxorubicin and ifosfamide appear to be the most active single agents. 108 A case report suggested benefit using gemcitabine with cisplatin in a patient with metastatic sarcomatoid carcinoma. 110 Overall, for patients with metastatic nonurothelial bladder cancers, patient management should be based upon the histology of the primary tumor. Given the absence of data showing a survival or quality-of-life benefit from chemotherapy for these diseases, palliative care as an alternative to chemotherapy should be offered. Those electing to receive chemotherapy should be encouraged to consider enrolling in a clinical trial if an appropriate trial is available. Concluding remarks Bladder cancer comprises a variety of diseases. While most patients with superficial cancers do not encounter a lifethreatening condition, several patients with invasive disease do. In this group of patients, choosing the appropriate systemic regimen and timing of institution of such therapy is crucial. Based on the review of the multiple randomized clinical trials and meta-analysis, the treatment paradigm for muscle-invasive bladder cancer has shifted from cystectomy alone towards the use of cisplatin-based neoadjuvant chemotherapy. Further, development of gene expression models (eg, 20-gene GEM) will allow patients who would benefit from such therapy to be identified more accurately. Understanding the biology and various pathways involved in development of invasive bladder cancer has led to evaluation of targeted therapy (eg, VEGFR and EGFR pathways and use of multityrosine TKi) in combination with conventional cytotoxic chemotherapy, and the results from such clinical trials are promising. Though the progress in the field of bladder cancer has been slow, the future looks bright. 
In view of the multitude of questions still unanswered, every patient with advanced bladder cancer should be considered for enrollment in a clinical trial whenever an appropriate trial is available.
2014-10-01T00:00:00.000Z
0001-01-01T00:00:00.000
{ "year": 2011, "sha1": "58320a0eb8f171ed052d4c0ebfcd7ba1a72b43c5", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=10513", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "58320a0eb8f171ed052d4c0ebfcd7ba1a72b43c5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
2371505
pes2o/s2orc
v3-fos-license
Recurrent Pneumonia and a Normal Heart: Late Complication after Repair of Hemianomalous Pulmonary Venous Drainage—A Cautionary Tale Hemianomalous pulmonary venous drainage with intact atrial septum is a rare congenital anomaly and reports of its surgical repair and the long-term complications related to the correction are only infrequently encountered in the literature. We report the case of a patient with hemianomalous pulmonary venous drainage and intact atrial septum who underwent surgical repair using a pericardial baffle and creation of an "atrial septal defect" aged 15 years. Dyspnoea and recurrent chest infections started 7 months after surgery when he was seen by a respiratory physician without cardiac followup. He presented again aged 28 years with a recurrent pneumonia investigated over 6 weeks and heart pronounced normal from examination and echocardiography. Correct diagnosis was made in Grown Up Congenital Heart (GUCH) clinic stimulating review of data and catheterisation with pulmonary artery angiography which confirmed it. We feel that this case highlights the importance of specialist care and followup for GUCH patients. Case Presentation A gentleman born in 1979 had an asymptomatic murmur during childhood. Investigations confirmed hemianomalous pulmonary venous drainage to the right atrium without an atrial septal defect. The parents were Jehovah's witnesses and refused cardiac surgery with use of blood products. When aged 15 years, an adult cardiologist advised surgery and the patient was operated on by a general cardiac surgeon unused to congenital heart disease but willing not to use blood. The atrial septum was confirmed to be intact at operation and the oval fossa was enlarged to allow redirection of blood from the right pulmonary veins using a baffle of autologous pericardium to the left atrium. Recovery was uncomplicated. Seven months later he complained of increasing dyspnoea on effort and chest infections. He was reviewed once by a respiratory physician without further investigations or referral to cardiology. He trained and worked as a stonemason. He presented again aged 28 years, with a three-week history of right-sided pleuritic chest pain, high fever and a dry cough. On examination there was reduced air entry, coarse crepitations and a pleural rub at the right base. Chest radiography showed a dense patchy consolidation of the right lower lobe, probably also involving the middle lobe (Figure 1).
He was treated with intravenous cefuroxime and oral clarythromycin but failed to respond as symptoms continued over several weeks with febrile exacerbations. An infectious disease specialist was involved and several alterations in the antibiotic regime were made. Tuberculosis was excluded on serial sputum cultures and infective endocarditis was considered possible but a transthoracic echocardiogram was reported as normal. Contrast computerised tomography scan of the thorax reported extensive consolidation within the right lung and probable chronic posterior pleural effusion. Bronchoscopy showed hyperaemic right main bronchial mucosa with inflammation especially around the [R] [L] right upper lobe bronchus. No lesions or pus were found to explain the situation. Despite the "normality" of the heart but because of past cardiac surgery, he was referred to the monthly Grown Up Congenital Heart (GUCH) clinic. A diagnosis of obstructed right pulmonary venous drainage was made from the chest X-ray ( Figure 1). This stimulated review of his previous investigations. Computerised tomography scan cuts at the level of the heart and pulmonary vessels showed completely obstructed right pulmonary veins (Figure 2). No pericardial baffle was identified upon review of cross-sectional transthoracic echocardiographic images. There was also no flow to be seen across the atrial septum at the level of the created "atrial septal defect" on colour Doppler. A repeat transthoracic echocardiogram with spectral Doppler interrogation of the pulmonary arteries showed normal antegrade flow from the main pulmonary artery into the left pulmonary artery but none into the right pulmonary artery. A cardiac catheter study further confirmed no forward flow to the right lung at pulmonary artery wedge angiography, with normal flow to superior caval vein and left pulmonary artery. Pulmonary artery systolic pressure was 26 mmHg. After retrograde catheterisation of the left atrium, a baffle stump was seen but was found to be blindending and could not be crossed. Discussion of management with the GUCH Unit team at The Heart Hospital London concluded that it was unsuitable for intervention or operative reconstruction. The only treatment recommended was right pneumonectomy. The patient and family adamantly refused operation with use of blood and no surgeon agreed to perform it without blood as it is likely to be a very vascular operation. The patient has not had further infective recurrences but still complains of exertional dyspnoea. The family is searching for a surgeon who accepts their conditions. Discussion Partial anomalous pulmonary venous drainage with associated atrial septal defect is not an uncommon congenital malformation [1]. On the other hand, hemianomalous pulmonary venous drainage with intact atrial septum is a rare congenital anomaly [2,3] often presenting as a simple atrial septal defect [1] and, as such, has only been reported in the literature on few occasions. This case demonstrates points of clinical importance. The general cardiac surgeon who undertook the repair may not have been familiar with such a rare anomaly. Pulmonary vein obstruction is a known complication when pericardium is used in this way [4]. New chest symptoms and a history of cardiac surgery for congenital heart disease should have been correlated earlier. Presence of pulmonary venous obstruction is likely to have already been evident 7 months after his repair but unfortunately there was failure to have the patient adequately investigated. 
At this stage, it is likely that the lung was still salvageable and the obstruction might have been relievable. Yet at the time, the patient was not seen by any consultant with experience in congenital heart disease, particularly in adults or adolescents. The chest radiograph showed obvious Kerley lines as well as fluid in the fissures in keeping with pulmonary venous obstruction on the right, pointing to where and what to look for on computerised tomography of the thorax and on the echocardiogram. This patient highlights the importance of having GUCH patients seen by specialists familiar with congenital heart problems [5]. This was an uncommon lesion repaired by a surgeon probably unfamiliar with the special techniques needed in such cases and not followed up with any informed evaluation. We caution those consultants in medicine, cardiology and general practice to seek specialist advice about patients with congenital heart disease.
2014-10-01T00:00:00.000Z
2010-03-09T00:00:00.000
{ "year": 2010, "sha1": "31f128bcbadafb77140a1f1cc8822574fb16371b", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/crim/2010/930589.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aea42099b2aa9ed96b06e3e3779857ffc75fd4c1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
126812361
pes2o/s2orc
v3-fos-license
Static and semi-static hedging as contrarian or conformist bets In this paper, we argue that, once the costs of maintaining the hedging portfolio are properly taken into account, semi-static portfolios should more properly be thought of as separate classes of derivatives, with non-trivial, model-dependent payoff structures. We derive new integral representations for payoffs of exotic European options in terms of payoffs of vanillas, different from the Carr-Madan representation, and suggest approximations of the idealized static hedging/replicating portfolio using vanillas available in the market. We study the dependence of the hedging error on the model used for pricing and show that the variance of the hedging errors of static hedging portfolios can be sizably larger than the errors of variance-minimizing portfolios. We explain why exact semi-static hedging of barrier options is impossible for processes with jumps, and derive general formulas for the variance-minimizing semi-static portfolio. We show that hedging using vanillas only leads to larger errors than hedging using vanillas and first touch digitals. In all cases, efficient calculations of the weights of the hedging portfolios are performed in the dual space using new efficient numerical methods for the calculation of the Wiener-Hopf factors and Laplace-Fourier inversion. Introduction There is a large literature 1 studying static hedging and replication of exotic European options, and semi-static hedging and replication of barrier and other types of options. What this literature ignores, however, is the cost of maintaining the hedging position, which can drive the payoff of the overall portfolio negative. In this paper, we argue that, once the costs of maintaining the hedging portfolio are properly taken into account, semi-static portfolios should more properly be thought of as separate classes of derivatives, with non-trivial, model-dependent payoff structures. Depending on the structure of the option being hedged and the model, the semi-static hedging portfolio may function either as a contrarian bet (small losses with high probability and large gains with low probability) or as a conformist bet (small gains with high probability and large losses with low probability). We suggest new versions of static and semi-static hedging, provide a qualitative analysis of the errors of different static and semi-static procedures, explain why, in the jump-diffusion case, exact replication of barrier options by European options, and hence model-independent replication, is impossible, and produce numerical examples to demonstrate how different sources of hedging errors depend on the model. In the main body of the paper, we consider European and down-and-in barrier options in Lévy models, and then indicate the directions in which the approach of the paper can be generalized and extended to cover options of other types, in more complicated models. Pricing barrier options and the calculation of the variance of the hedging portfolio at expiry are based on new efficient numerical procedures for the calculation of the Wiener-Hopf factors and Laplace-Fourier inversion. These procedures can be useful in other applications as well. The underlying idea of the static hedge [30] of European options with exotic payoffs is simple. One replicates the payoff of an exotic European option by a linear combination of payoffs of the underlying stock and vanillas, and uses the portfolio of the stock and options to replicate or hedge the exotic option.
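For orientation, the classical spanning formula of Carr and Madan, to which the representations derived below are an alternative, writes any twice continuously differentiable payoff as a static portfolio of cash, a forward position and a continuum of vanillas: G(S_T) = G(F) + G'(F)(S_T - F) + \int_0^F G''(K)(K - S_T)^+ dK + \int_F^\infty G''(K)(S_T - K)^+ dK, for an arbitrary expansion point F > 0 (commonly the forward). The representations of Section 2 differ in that they are derived in the dual space and, depending on the behavior of the payoff near zero and at infinity, use puts only or calls only.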
In Section 2, we start with the derivation of integral representations for an exact static hedging portfolio. Contrary to [30], we work in the dual space, and derive a representation in terms of vanillas only; this representation is different from the one in [30]. By construction, the portfolios we construct and the portfolio in [30] are model-independent, which looks very attractive. However, the continuum of vanillas does not exist, and even if it did, the integral portfolio would have been impossible to construct anyway. Hence, one has to approximate each integral by a finite sum. The hedging error of the approximation is inevitably model-dependent. We design simple constructions of approximate hedging portfolios and study the dependence of the static hedging error on the model using a portfolio of available vanillas. We derive an approximation in an almost C(R)-norm, and then calculate the weights of the variance-minimizing hedging portfolio. In both cases, the calculations are in the dual space using the sinh-acceleration technique [17]. We believe that both approximate hedging procedures have certain advantages as compared to the two-step procedure in a recent paper [42], where one first projects the payoff of the security to be hedged and the payoffs of the securities in the hedging portfolio onto the space of model payoffs, and then calculates the weights. This more complicated procedure does not help to decrease the hedging error, and, similarly to the approximate static hedging that we construct, cannot produce smaller variances than the variance-minimizing hedging portfolio. In Section 3, we outline the general structure of the semi-static variance-minimizing hedging of barrier options; in the paper, we consider down-and-out and down-and-in options. The initial version of the semi-static hedging portfolio for barrier options was suggested in [33]: put options with strikes equal to the barrier, with different expiry dates, are added to the portfolio in such a way that the portfolio value is zero at the barrier. Assuming that, at the moment the barrier is breached, the underlying is exactly at the barrier, the weights of the portfolio can be calculated backwards. It is clear that if the underlying can cross the barrier with a jump, the procedure cannot be exact, and the implicit error is inevitably model-dependent. A different semi-static hedging of barrier options is developed in [25,27,29], but the underlying assumption is the same as in [33]. For a given barrier option, an exotic European payoff G_ex is constructed so that, at maturity or at the time of early expiry (the case of "out" options) or activation (the case of "in" options), the price of the hedging portfolio for the barrier option equals the price of the European option. When the barrier is reached (the presumption is that it cannot be crossed by a jump), the portfolio is liquidated. The European option being exotic, an approximate static hedging portfolio for the latter is presumed to be used. Hence, in the presence of jumps, the hedging errors are model-dependent even if one believes that an auxiliary exotic option can be hedged exactly using a portfolio of vanillas, and the question of the interaction of the two types of errors naturally arises. The option with the payoff G_ex is more exotic than the usual exotic options (the structure of the payoff is more complicated), and the more exotic the option is, the larger the hedging errors are.
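The backward construction of [33] is easy to state in code. The sketch below is an illustration only (it is not taken from [33] or from the present paper): it uses the Black-Scholes model purely as a stand-in pricer, and all function names and parameter values are ours. It builds the hedge of a down-and-out call from the vanilla call plus puts struck at the barrier with staggered maturities, choosing each weight so that the portfolio value at the barrier is zero at the corresponding date; as discussed above, the construction only controls the value exactly at the barrier level, which is why it fails to be exact as soon as the underlying can jump across the barrier.

```python
import numpy as np
from scipy.stats import norm

def bs_price(S, K, tau, r, sigma, kind):
    # Black-Scholes price, used here only as an illustrative pricing model
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    if kind == "call":
        return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - S * norm.cdf(-d1)

def dek_weights(K, H, T, r, sigma, n_dates):
    """Backward construction of a semi-static hedge of a down-and-out call:
    puts struck at the barrier H with staggered maturities are added so that
    the portfolio value at the barrier is zero at each monitoring date."""
    dates = np.linspace(0.0, T, n_dates + 1)            # t_0 = 0, ..., t_n = T
    legs = [("call", K, T, 1.0)]                        # the target vanilla call
    for i in range(n_dates - 1, 0, -1):                 # t_{n-1}, ..., t_1
        t = dates[i]
        value_at_barrier = sum(w * bs_price(H, k, m - t, r, sigma, kind)
                               for kind, k, m, w in legs if m > t)
        put_value = bs_price(H, H, dates[i + 1] - t, r, sigma, "put")
        legs.append(("put", H, dates[i + 1], -value_at_barrier / put_value))
    return legs

for leg in dek_weights(K=100.0, H=80.0, T=1.0, r=0.02, sigma=0.3, n_dates=6):
    print(leg)                                          # (type, strike, maturity, weight)
```

Between the monitoring dates, and for any model in which the barrier can be overshot, the portfolio value at liquidation is not zero, and the resulting error is model-dependent.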
Even in the case of diffusion models, the errors can be quite sizable, and the approximation is justified only under a certain rather restrictive symmetry condition on the parameters of the model. See [61]. The paper [46] uses the approximate semi-static hedging of [29] and an approximation of the exotic European option which approximates the barrier option; this leads to at least two sources of model-dependent errors, which can be large if the jump component is sizable; in addition, the symmetry condition is more restrictive than in the case of diffusion models. In the introduction of [46], it is claimed that Carr and Lee [29] rigorously justified the semi-static procedure for jump-diffusion models; the picture is more complicated. In Section 3.1, we explain that the standard semi-static construction has numerous sources of errors, and even an approximation can be justified only under additional, rather restrictive conditions. In particular, in the presence of jumps, the semi-static procedure is never exact. The variance-minimizing hedging portfolio has certain advantages. We can directly construct the hedging portfolio using the securities traded in the market provided that a pricing model is chosen, and one can calculate the option prices V_j in the portfolio and the products of the prices as functions of (t, x), 0 ≤ t ≤ T, where T is time to maturity. Accurate and fast calculations are possible for wide classes of options (barrier options, lookbacks, American options, Asians, etc.), and many popular pricing methods working in the state space can be applied. However, to calculate the weights of the hedging portfolio, we need to calculate the expectations of the products of the discounted prices at time τ ∧ T, where τ is the first entrance time into the early exercise region. Hence, one needs to approximate the products of prices by functions which are amenable to the application of efficient option pricing techniques, which are, typically, based on the Laplace-Fourier transform. In the paper, we suggest and use new efficient methods for the numerical Fourier-Laplace inversion and the calculation of the Wiener-Hopf factors; these methods are of general interest. We work in the dual space; calculations in the dual space are also necessary to accurately address the following practically important effect. There is an additional source of errors of hedging portfolios consisting of vanillas only. In all popular models used in finance, the prices of vanilla options are infinitely smooth before the maturity date and up to the boundary, but prices of barrier options in Lévy models are not smooth at the boundary, the exceptions being the double-exponential jump-diffusion model, the hyper-exponential jump-diffusion model, and other models with rational characteristic functions. For wide classes of purely jump models, it is proved in [21,9] that the price of an "out" barrier option near the barrier behaves as c(T)|x − h|^κ, where κ ∈ [0, 1) is independent of the time to maturity T, c(T) > 0, and |x − h| is the log-distance from the barrier. For finite variation processes with the drift pointing away from the boundary, κ = 0, and the limit of the price at the barrier is positive. Similarly, the price of the first touch digital behaves as 1 − c_1(T)|x − h|^γ.
Even if the diffusion component is present, the prices of the barrier and first touch options are not differentiable at the barrier [21], and if the diffusion component is small, then essentially the same irregular behavior of the price will be observed outside a very small [53], where it is demonstrated that Carr's randomization method [10], which relies on the time randomization and interpolation in the state space, underprices barrier options in a small vicinity of the barrier. From the point of view of the qualitative composition of the hedging portfolio, one should expect that an accurate hedging of barrier options is impossible unless the corresponding first touch digitals are included. Fig. 1 clearly shows that the first touch digital is much closer to the down-and-in option than a put option, and the first-touch options with the payoffs (S/H) γ , γ > 0, would be even better hedging instruments. We calculate hedging portfolios consisting of vanillas only and of vanillas and the first-touch option in in Sections 5 and 6 using the Wiener-Hopf factorization technique. We recall the latter in Section 4, and introduce the new efficient method for the calculation of the Wiener-Hopf factors based on the sinh-acceleration technique [17]. The numerical examples for static hedging and calculation of the Wiener-Hopf factors and expectations related to barrier options are discussed at the end of the corresponding Sections; a numerical example for hedging of barrier options is in Section 7. In Section 8 we summarize the results of the paper and outline natural extensions. The outline of Gaver-Stehfest-Wynn method, and Tables and Figures (with the exception of Fig. 1 above and Fig. 2 in Section 3) are relegated to Appendices. Static hedging of European options 2.1. Static hedging in the ideal world. Let X be the Lévy process, and let G(X T ) = G(ln X T ) be the payoff at maturity. We assume that G is continuously differentiable, the measure dG is a sum of a finite number of atoms (equivalently, G has a finite number of kinks), and, under additional weak regularity conditions, prove that if G(S) polynomially decays as S → +0 (call-like options), then Assumption (G). G is continuously differentiable function satisfying the following conditions (i) dG is a signed measure, without the singular component; (ii) G has only a finite number of points of discontinuity; (iii) the measure dG at = Y >0 (G (Y + 0) − G (Y − 0))δ Y is finite; (iv) ∃ β ∈ (−∞, −1) ∪ (0, +∞) s.t. S → S −β−1 G(S), S → S −β−1 G (S) are of the class L 1 . Consider the case β > 0. Set k = ln K, and calculate the Fourier transform of the RHS of (2.2) w.r.t. x := ln S, for ξ on the line {Im ξ = β}. Using Fubini's theorem, we obtain where ∆G (k) = G (k+0)−G (k−0). Using K d dK = d dk and K 2 d 2 dK 2 = d 2 dk 2 − d dk , then integrating by parts and taking into account that e −ikξ G 1 (k) and e −ikξ G 1 (k) tend to 0 as k → ±∞, we continue Thus, in the case G = G 1 , the Fourier transforms of the LHS and RHS of (2.2) coincide on the line Im ξ = β, which proves (2.2). If β < −1, then, in the proof above, we replace (K − S) + with (S − K) + , and modify all the steps accordingly. Remark 2.1. If G has the compact support and no atoms, then both representations, in terms of puts and calls, are valid, with integration w.r.t. the same measure. This (mildly surprising) fact can be verified using the put-call parity and the following calculation Example 2.2. 
The payoff function of the powered call of order β > 1 with the strike K 0 is G(S) = ((S − K 0 ) + ) β . Since β > 1, there are no kinks, dG (K) has no atoms, and Clearly, it suffices to construct the hedging portfolio for the option with the payoff function G 1 , which vanishes above K 1 . At K 1 , G 1 has a kink, and 2.2. Approximate static hedging. In the real world, only finite number of options are available, hence, one has to approximate the measure dG (K) using an atomic measure, typically, with a not very large number of atoms. For instance, it is documented in [31] that static hedging with 3-5 options produces good results. Hence, the static hedging will be approximate. Furthermore, as seen from time 0, the hedging error will depend on the choice of the model used; although the idealized static hedge is model independent, the approximate one is model-dependent, and the quality of the approximation depends (naturally) on the choice of the approximation procedure. We will approximate the payoff of an exotic option by linear combinations of payoffs of vanillas, in the norm of a Sobolev space with an exponential weight. To be more specific, we minimize the difference between the payoff of an exotic option and the portfolio payoff G(x) := G(e x ) in the norm of the Sobolev space H s,ω (R) of order s, with an appropriate exponential weight e ωx (a.k.a. dampening factor). The Plancherel theorem allows us to do the calculations in the dual space. The integrals are calculated accurately and very fast using the sinh-acceleration techniques [17]. If s > 1/2, H s (R) is continuously embedded into C(R), hence, we can estimate the error in the C-norm (with the corresponding weight), which is natural for the approximate static hedge: if the error 0 is not achievable, we control the maximal error. We study the dependence of the variance of the hedging error on the model and s, ω, and demonstrate that the variances of errors in cases s = 1/2 and s > 1/2 close to 1/2 are comparable (and essentially independent of ω in a reasonable range), the differences being smaller than the differences between the variances of errors of approximate static hedging portfolios and variance-minimizing portfolio. Let ω, s ∈ R. The Sobolev space H s,ω (R) of order s, with weight e ωx , is the space of the generalized functions u such that u ω := e ω· u ∈ H s (R). The scalar product in H s,ω (R) is defined by (u, v) s;ω = (u ω , v ω ) H s (R) . Thus, By one of the Sobolev embedding theorems (see, e.g., Theorem 4.3 in Eskin (1973)), if s > 1/2, H s (R) is continuously embedded into C 0 (R), the space of uniformly bounded continuous functions vanishing at infinity, with L ∞ -norm. Hence, for any ω ∈ R and s > 1/2, an approximation in the H s,ω -topology gives a uniform approximation over any fixed compact K. Consider an exotic option whose payoff vanishes below K, which we normalize to 1. For practical purposes, we may assume that the strikes of European options used for hedging are close to 1, and the spot is close to 1; hence, the log-spot x is close to 0, and if ω is not large in modulus, the differences among the hedging weights for different omegas are not large. Likewise, ifû(ξ) decay fairly fast at infinity, the norms of u in H s,ω (R) will be close if s ∈ [1/2, s 0 ] and s 0 is close to 1/2. Assume that β < −1. Thus, we have a call-like options, which is hedged using a portfolio of call options. We fix ω ≤ β as discussed above, s ≥ 1/2, and the set of call options with the payoff functions G j := G(K j ; ·). 
Set G 0 = G. We look for the set of weights n = (n 1 , . . . , n N ) (numbers of call options in the portfolio) which minimizes StHG(n) := −G 0 + N j=1 n j G j in the H s;ω (R) norm. Denote G s;ω jk = (G j , G k ) s;ω ; these scalar products can be easily calculated with a sufficiently high precision since the integrands in the formula for G s;ω jk decay as |ξ| −4+s . Furthermore, ifĜ 0 (ξ) is of the form e −ik 0 ξĜ 00 (ξ), whereĜ 00 (ξ) is a rational function, and k 0 ∈ R, then the integrands in the formulas for (G j , G k ) s;ω are of the form e −ik jk ξĜ jk,0 (ξ)(1 + (ξ − iω) 2 ) s , where k jk ∈ R andĜ jk,0 (ξ) are rational functions. Hence, the scalar products can be calculated with almost machine precision and very fast using the sinh-acceleration technique [17]. After an appropriate change of variables of the form ξ = iω 1 +b sinh(iω +y), the simplified trapezoid rule with a dozen of terms typically suffices to satisfy the error tolerance of the order of 10 −10 and less. 2.3. Variance minimizing hedging portfolio. The hedging error is the random variable is reducible to the Fourier inversion. As it is explained in [14,54,17], if (1)Ĝ j is of the form G j (ξ) = e −ik j ξĜ j0 (ξ), where k j ∈ R andĜ j0 is a rational functions, and (2) ψ is of the form ψ(ξ) = −iµξ + ψ 0 (ξ), where µ ∈ R and Re ψ 0 (ξ) → +∞ as ξ → ∞ remaining in a cone around the real axis, then it is advantageous to represent V j (T, x) in the form where the set of admissible ω ∈ R depends onĜ j0 , and x j = x + µT − k j . Then we use a conformal deformation of the contour of integration in (2.6) and the corresponding change of variables, and apply the simplified trapezoid rule. The most efficient change of variables (the sinh-acceleration) suggested in [17] is of the form ξ = iω 1 + b sinh(iω + y), where ω is of the same sign as x ; the upper bound on admissible |ω | depends on ψ 0 andĜ j0 . The variance can be calculated using the equality To calculate E x Err(n; x, X T ) 2 , we need to calculate V j (T, , j, = 0, 1, . . . , N , which is the "price" of the European option with the payoff G j (S T )G k (S T ) at maturity T . We calculate the Fourier transform G j G (ξ) of the product G j G , multiply by the characteristic function e −T ψ(ξ) , and apply the inverse Fourier transform. For typical exotic options and vanillas, G j G (ξ) is of the form e −ik j ξĜ j ;0 (ξ), whereĜ j ;0 (ξ) is a rational function, and k j ∈ R. Hence, The integral on the RHS of (2.7) can be calculated accurately and fast using the sinh-acceleration technique [17]. The numbers of options in the variance minimizing portfolio are given by In the hedging portfolio, we use put options with strikes Construction of an approximate static hedging portfolio. We take ω > (1 − β) + . Typically, β < 0, hence, ω > 1. We have . where k = ln K. We can calculate the integral accurately and fast making the simplest sinhchange of variables ξ = b sinh y, and applying the simplified trapezoid rule. See [17] for explicit recommendations for the choice of b and the parameters ζ, N of the simplified trapezoid rule. Next, for j = 1, 2, . . . , N , we set k j = ln K j and calculate . If k j > k , we deform the wings of the contour down, equivalently, use the sinh-acceleration with ω ∈ (−π/2, 0). If k j < k , we use ω ∈ (0, π/2). Finally, if k j = k , we may use any ω ∈ [−π/2, π/2]; the choice ω = 0 is the best one. After the scalar products are calculated, we apply (2.5) to find the approximate static hedging portfolio. 
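A minimal sketch of this least-squares construction is given below; it is ours and is only a crude state-space stand-in for the dual-space computation described above (the scalar products are approximated by quadrature on a grid in the log-price with an ad hoc exponential weight instead of the H^{s,ω} inner products, and the grid and parameter values are illustrative). The exotic payoff and the strikes mirror the ones used in the numerical experiments of the next subsection.

```python
import numpy as np

# grid in the log-price x; a crude stand-in for the dual-space computation
x = np.linspace(-0.5, 0.5, 4001)
S = np.exp(x)
w = np.exp(-2.0 * np.abs(x))                 # ad hoc weight (illustration only)

H, K0, beta = 1.0, 1.02, -3.0                # beta < 0 is arbitrary here
target = (S / H)**beta * np.maximum(H**2 / S - K0, 0.0)    # exotic payoff G_ex
strikes = [H**2 / K0 - 0.02 * j for j in range(5)]         # K_j = H^2/K_0 - (j-1)*0.02
puts = np.stack([np.maximum(k - S, 0.0) for k in strikes])

def inner(u, v):
    # weighted L2 inner product approximated by the trapezoid rule
    return np.trapz(u * v * w, x)

gram = np.array([[inner(gi, gj) for gj in puts] for gi in puts])
rhs = np.array([inner(gi, target) for gi in puts])
weights = np.linalg.solve(gram, rhs)         # normal equations for the best fit
print(np.round(weights, 4))
```

The dual-space computation of the paper replaces the grid quadrature by integrals of products of the Fourier transforms of the payoffs, evaluated with the sinh-acceleration, but the linear-algebra step is the same.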
Below, we will use a modification of this scheme when the hedging portfolio has the fixed amount H −β K 0 K β−1 1 = (H/K 0 ) β−2 of put options with strike K 1 , and the weights of the other put options in the hedging portfolio are calculated minimizing the hedging error. 2.4.2. Construction of the variance-minimizing hedging portfolio. Calculating the integral on the RHS of (2.6), we use (2.9) for j = 0; for j = 1, 2, . . . , N ,Ĝ j (ξ) = K 1−iξ j /(iξ(iξ − 1). To calculate the integral on the RHS of (2.7), we need to calculate the Fourier transforms of the products of the payoff functions. The straightforward calculations give (2) for j = 1, 2, . . . , N and ξ in the half-plane Im ξ > (1 − β) + , Numerical experiments. In Tables collected in Section E.1, we study the relative performance of the static hedging and variance-minimizing hedging. We consider the exotic European option with the payoff G ex = (S/H) β (H 2 /S − K 0 ) + ; the hedging portfolios consist of put options with the strikes K j = H 2 /K 0 −(j −1)0.02, j = 1, . . . , #K, where #K = 3, 5. For different variants of hedging, we list numbers n j of the options with strikes K j in the hedging portfolio. Static portfolios are constructed minimizing the hedging error in the H s;ω norm; the results are essentially the same for ω = 2(1 − β) + + 0.1, ω = 2(1 − β) + + 0.2, and weakly depend on s = 0.5, 0.55. The static portfolios are independent of time to maturity T and the process but nStd does depend on both as well as on the spot S. We study the dependence of the (normalized by the price V ex (T, x) of the exotic option) standard deviations nStd of the static hedging portfolio and variance minimizing portfolio on the process, time to maturity T and x := log(S/K 1 ) ∈ [−0.03, 0.03]. For the static hedging portfolios, for each process and time to maturity, we show the range of nStd as the function of x ∈ [−0.03, 0.03]. In the case of the variance-minimizing hedging, n j depend on x by construction, and we show n j and nStd for each x in a table. We consider two variants of the variance minimizing portfolios: V M 1 means that n 1 (the same as for static portfolios) is fixed, V M 2 means that all n j may vary. In all cases, H = 1, K 0 = 1.02, the underlying S t = e Xt pays no dividends, X is KoBoL; in two tables, the BM with the embedded KoBoL component. The second instantaneous moment is m 2 = 0.1 or 0.15, and c is determined by m 2 , λ + , λ − and σ. The riskless rate is found from the EMM condition ψ(−i) + r = 0; β found from ψ(ξ) = ψ(−ξ − iβ). If X is KoBoL, then β exists only if µ = 0, and then β = −λ − − λ + . If the BM component is present, then we choose σ and µ so that β = −λ + − λ − = −µ/(2σ 2 ). The results for two cases when such a β exists are presented in Tables 1-5; in Tables 3 and 5, KoBoL is close to BM (ν = 1.95), and in Tables 1, 2, 4, close to NIG (ν = 1.2). The BM component is non-trivial in Tables 4 and 5. The reader may notice that the parameter sets are not very natural; the reason is that it is rather difficult to find natural parameter sets which ensure that (1) β satisfying ψ(ξ) = ψ(−ξ − iβ), ∀ ξ ∈ R, exists; (2) the EMM condition holds. In Tables 6 and 7, we present results for a sizably asymmetric KoBoL with µ = 0, and consider the same exotic option with β = −λ − − λ + . Naturally, this exotic option cannot be even formally used to hedge the barrier option but can serve for the purpose of the comparison of two variants of hedging of exotic European options. 
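The existence of β in these experiments can also be verified directly. The short check below is ours; it assumes the standard KoBoL (CGMY-type) parameterization of the characteristic exponent with λ_- < 0 < λ_+ and ν ∈ (0, 2), ν ≠ 1, and confirms numerically that ψ(ξ) = ψ(−ξ − iβ) on the real line for β = −λ_+ − λ_- when µ = 0; the parameter values are arbitrary.

```python
import numpy as np
from scipy.special import gamma

def kobol_psi(xi, nu, c, lam_plus, lam_minus, mu=0.0):
    # KoBoL (CGMY-type) characteristic exponent, lam_minus < 0 < lam_plus, nu != 1
    return (-1j * mu * xi
            + c * gamma(-nu) * (lam_plus**nu - (lam_plus + 1j * xi)**nu
                                + (-lam_minus)**nu - (-lam_minus - 1j * xi)**nu))

nu, c, lam_p, lam_m = 1.2, 1.0, 9.0, -8.0
beta = -lam_p - lam_m                              # beta = -lambda_+ - lambda_-
xi = np.linspace(-40.0, 40.0, 2001)
lhs = kobol_psi(xi, nu, c, lam_p, lam_m)
rhs = kobol_psi(-xi - 1j * beta, nu, c, lam_p, lam_m)
print(np.max(np.abs(lhs - rhs)))                   # ~1e-12: the symmetry holds for mu = 0
```

With a non-zero drift the identity fails, which is one reason the parameter sets in the tables look somewhat artificial.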
Tables illustrate the following general observations. (1) If the number of vanillas in a static hedging portfolio is sufficiently large, the portfolio provides uniform (approximate) hedging over wide stretches of spots and strikes. Hence, if the jump density decays slowly, one expects that, far from the spot, the static hedging portfolio will outperform the variance minimizing portfolio. Table 1 demonstrate that even in cases when the rate of decay of the jump density is only moderately small, and the process is not very far from the BM, the variance of the static portfolio differs from the variance of the variance minimizing portfolios (constructed separately for each spot from a moderate range, and using the information about the characteristics of the process) by several percent only; if the jump density decays slower and/or process is farther from the BM, the relative difference is smaller. Hence, if the rate of decay of the jump density is not large and the density os approximately symmetric, the static portfolio is competitive for hedging risks of small fluctuations. It is clear that the hedging performance of the static portfolio in the tails must be better still. (2) However, if the jump density decays moderately fast, then the variance of the static portfolio can be sizably larger than the one of the variance minimizing portfolios (Tables 2-3), and if a moderate BM component is added, then the relative difference becomes large (Tables 4-5). (3) In Tables 1-5, the jump density is approximately symmetric. In Tables 6-7, we consider the pure jump process with a moderately asymmetric density of jumps. In this case, the variance of the static portfolio is much larger than the variance of variance minimizing portfolios. (4) The quality of variance minimizing portfolios VM1 and VM2 is essentially the same in almost all cases when 5 vanillas are used although the portfolio weights can be rather different. Hence, as a rule of thumb, one can recommend to use vanillas associated with the atomic part of the measure in the integral representation of the ideal static portfolioprovided these vanillas are available in the market. The implication of observations (1)-(2) for semi-static hedging of barrier options is as follows. If the the variance of the BM component makes a non-negligible contribution to the instantaneous variance of the process, the ideal semi-static hedging using a continuum of options improves but the quality of an approximation of the integral of options by a finite sum decreases. Hence, one should expect that the variance minimizing hedging of barrier options would be significantly better than an approximate semi-static hedging in all cases. 3. Hedging down-and-in and down-and-out options 3.1. Semi-static hedging. Carr and Lee [29] formulate several equivalent conditions on a positive martingale M under the reference measure P, call the class of these martingales PCS processes, and design semi-static replication strategies for various types of barrier options. Since the proof of Theorem 5.10 in [29] strongly relies on the assumption that at the random time τ when the barrier H is breached, the underlying is exactly at the barrier, in Remark 5.11 in [29], the authors state that these strategies replicate the options in question for all PCS processes, including those with jumps, provided that the jumps cannot cross the barrier. 
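The role of the no-jump-over-the-barrier assumption is easiest to see in the classical put-call symmetry hedge. Under a driftless diffusion (zero cost of carry), holding K/H puts struck at H^2/K replicates a down-and-in call with strike K and barrier H < K: the puts expire worthless if the barrier is never touched, and at the first touch they can be exchanged for the call at zero cost. The sketch below, which is ours and uses arbitrary numbers, verifies the exchange value at the barrier under Black-Scholes with r = q; with jumps, the underlying overshoots the barrier and the two values no longer coincide at liquidation.

```python
import numpy as np
from scipy.stats import norm

def bs_zero_carry(S, K, tau, sigma, kind):
    # undiscounted Black-Scholes value with zero cost of carry (r = q),
    # the setting in which the put-call symmetry below is exact
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    if kind == "call":
        return S * norm.cdf(d1) - K * norm.cdf(d2)
    return K * norm.cdf(-d2) - S * norm.cdf(-d1)

H, K, sigma, tau = 80.0, 100.0, 0.3, 0.75          # barrier, strike, vol, time left
call_at_barrier = bs_zero_carry(H, K, tau, sigma, "call")
puts_at_barrier = (K / H) * bs_zero_carry(H, H**2 / K, tau, sigma, "put")
print(call_at_barrier, puts_at_barrier)            # the two values coincide
```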
[29] generalize to various asymmetric dynamics, but the property that jumps in one direction are impossible means that the results for semi-static replication of barrier options under these asymmetric dynamics are valid only if there are no jumps. Carr and Lee [29] give additional conditions which will ensure the super-replication property of the semi-static portfolio for "in" options; but the corresponding portfolio for "out" options recommended in [29] will under-replicate the option. For an additional clarification of these issues, in Section A, we derive the generalized symmetry condition for the case of a Lévy process X in terms of the characteristic function ψ: ∃β ∈ R s.t. ψ(ξ) = ψ(−ξ − iβ) for all ξ in the domain of ψ, and show that this condition implies that either X is the Brownian motion (BM) and the riskless rate equals the dividend rate or there are jumps in both directions, and asymmetry of the jump component is uniquely defined by the volatility σ 2 and drift µ. Furthermore, if σ = 0, then µ = 0 as well. For the case of the down-and-in option with the payoff G(X T ) = G(e X T ) and barrier H, we rederive the formula for the payoff of the exotic European option, which, in the presence of jumps, replicates the barrier option only approximately: The numerical examples above show that the variance of the hedging error of the static portfolio for the exotic option with the payoff (3.1) is close to the variance of the variance-minimizing portfolio if the BM component is 0, the jump density does not decrease fast and is approximately symmetric; if the BM component is sizable and/or the density of jumps is either asymmetric or fast decaying, then the variance of the static portfolio is significantly larger than the variance of the variance-minimizing portfolio. Hence, the static portfolio is a good (even best) choice in cases when the idealized semi-static replicated exotic option is a bad approximation fo the barrier option. 3.2. General scheme of variance minimizing hedging. We consider one-factor models. The underlying is e Xt , there is no dividends, and the riskless rate r is constant. Let V 0 (t, X t ) be the price of the contingent claim to be hedged, of maturity T , under an EMM Q chosen for pricing. Let V j (t, X t ), j = 1, 2, . . . , N , be the prices of the options used for hedging. We assume that the latter options do not expire before τ ∧ T , where τ = τ − h is the first entrance time into the activation region U of the down-and-in option (in the early expiry region of the down-and-out option). As in the papers on the semi-static hedging, we assume that, at time τ ∧ T , the hedging portfolio is liquidated. Let (−1, n 1 , . . . , n N ) be the vector of numbers of securities in the hedging portfolio. The portfolio at the liquidation date τ ∧ T is the random variable and the discounted portfolio at the liquidation date is P (τ ∧T, X τ ∧T ) = e −rτ ∧T P 0 (τ ∧T, X τ ∧T ). One can consider the variance minimization problem for either P 0 (τ ∧ T, X τ ∧T ) or P (τ ∧ T, X τ ∧T ), and we can calculate the variance under either the EMM Q used for pricing or the historic measure P. We consider the minimization of the variance of P (τ ∧ T, X τ ∧T ) under Q. and find the minimizing n = n(x) as To calculate C 0 j (x), it suffices to calculate V j (0, x) and V (0, x). We decompose V j (0, x) into the sum of the first-touch option V j f t (x) with the payoff V j (τ, X τ )1 τ ≤T , and no-touch option V j nt (x) with the payoff V j (T, X T )1 τ >T . 
Given a model for X, we can calculate the prices of the no-touch and first touch options. Similarly, we can decompose C j (x) into the sum of the first-touch option C f t;j (x) with the payoff V j (τ, X τ )V (τ, X τ )1 τ ≤T , and no-touch The no-touch options can be efficiently calculated if the Fourier transforms of the payoff functions G j and G j G of options V j and V can be explicitly calculated; then several methods based of the Fourier inversion can be applied. First, one can apply apply Carr's randomization method developed in [21,10,8,11] for option pricing in Lévy models. A simple generalization is necessary in the case of no-touch options because, in the setting of the present paper, the payoff functions depend on (t, X t ) and not on X t only as in [21,10,8,11]. Apart from the calculation of the Wiener-Hopf factors, which, for a general Lévy process, must be done in the dual space, the rest of calculations in [21,10,8,11] are made in the state space. The calculations in the state space [10,8,11] can be efficient for small ∆t and ν if the tails of the Lévy density decay sufficiently fast. In Section 4.2, we design new efficient procedures for the calculation of the Wiener-Hopf factors. These procedures are of a general interest. The second method is a more efficient version of Carr's randomization. The calculations at each step of the backward procedure bar the last one are made remaining in the dual space. These steps are of the same form as in the Hilbert transform method [39] for barrier options with discrete monitoring, with a different operator used at each time step. In [39], the operator is (1 − e −r∆t ) 1 (I − e ∆t(−r+L) ), where L is the infinitesimal generator of X under the probability measure used; in the Carr's randomization setting, the operator is q −1 (q − L), where q = r + 1/∆t, and ∆t is the time step. If ∆t is not small and/or the order of the process ν is close to two 2 , then (1 − e −r∆t ) 1 (I − e ∆t(−r+L) ) can be efficiently realized using the fast Hilbert transform [39]. Otherwise too long grids may be necessary. An efficient numerical realization of q −1 (q −L) in the dual space requires much longer grids than an efficient numerical realization of (1 − e −r∆t ) 1 (I − e ∆t(−r+L) ) if the fast Hilbert transform is used. In the result, this straightforward scheme can be very inefficient. Instead, we can apply the Double Spiral method [56] calculating the Fourier transform of the option price at two contours, at each time step. In [56] discretly sampled Asian options were considered, and a complicated structure of functions arising at each step of the backward induction procedure required the usage of the flat contours of integration. In application to barrier options, the contours in the Double Spiral method can be efficiently deformed and an efficient sinh-acceleration technique developed in [17] applied. Namely, we can use the change of variables of the form ξ = iω 1 + b sinh(iω + y) and the simplified trapezoid rule. In the result, an accurate numerical calculation of integrals at each time step needs summation of 2-3 dozen of terms in the simplified trapezoid rule. We leave the design of explicit procedures for hedging using both versions of Carr's randomization to the future. In the present paper, we directly apply the general formulas for the double Fourier/Laplace inversion. 
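As a naive cross-check of the weights produced by this scheme (and an alternative to the Fourier/Laplace machinery used in the paper), the variance-minimizing portfolio can also be estimated by simulation: generate paths, record the discounted values of the target claim and of the hedging vanillas at the liquidation time τ ∧ T, and solve the resulting least-squares problem. The sketch below is ours; it uses geometric Brownian motion only as a stand-in model, discrete monitoring of the barrier, and illustrative contract data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def bs_put(S, K, tau, r, sigma):
    # Black-Scholes put; tau is clipped to avoid division by zero at expiry
    tau = np.maximum(tau, 1e-12)
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - S * norm.cdf(-d1)

S0, H, K0, T, r, sigma = 100.0, 80.0, 90.0, 1.0, 0.02, 0.3   # illustrative data
strikes = np.array([75.0, 80.0, 85.0, 90.0])                 # hedging puts, maturity T
n_paths, n_steps = 20_000, 100
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = np.exp(logS)

hit = (S <= H).any(axis=1)                                   # barrier touched before T?
first = np.where(hit, (S <= H).argmax(axis=1), n_steps - 1)
t_liq = (first + 1) * dt                                     # liquidation time tau ^ T
S_liq = S[np.arange(n_paths), first]
disc = np.exp(-r * t_liq)
tau_left = T - t_liq

# discounted liquidation values of the activated down-and-in put (target) and the hedges
target = disc * np.where(hit, bs_put(S_liq, K0, tau_left, r, sigma), 0.0)
hedge_hit = np.stack([bs_put(S_liq, k, tau_left, r, sigma) for k in strikes], axis=1)
hedge_no_hit = np.maximum(strikes[None, :] - S[:, -1][:, None], 0.0)
hedges = disc[:, None] * np.where(hit[:, None], hedge_hit, hedge_no_hit)

# variance-minimizing weights: least squares on demeaned liquidation values
Xc = hedges - hedges.mean(axis=0)
yc = target - target.mean()
weights = np.linalg.lstsq(Xc, yc, rcond=None)[0]
print(dict(zip(strikes.tolist(), np.round(weights, 3).tolist())))
```

Such a Monte Carlo estimate converges slowly and cannot resolve the irregular behavior of the prices near the barrier discussed above; for that, the double Fourier/Laplace inversion formulas mentioned above are needed.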
These formulas are the same as the ones in [13] in the case of no-touch options, with the following improvement: instead of the fractional-parabolic changes of variables, the sinh-acceleration is used. In the case of the first touch options, an additional generalization is needed because, contrary to the cases considered in [13], the payoff depends on (t, X t ) rather than on X t only. One can use other methods that use approximations in the state space. Any such method has several sources of errors, which are not easy to control. Even in the case of pricing European and barrier options, serious errors may result (see [14,12,54] for examples), and, typically, very long and fine grids in the state space are needed. The recommendations in [43,45,44] for the choice of the truncation parameter rely on the ad-hoc recommendation for the truncation parameter used in a series of papers [37,38,36]. As examples in [14,32] demonstrate, this ad-hoc recommendation can be unreliable even if applied only once. In the hedging framework suggested in the present paper, the truncation needs to be applied many times, for each t j used in the time-discretization of the initial problem, hence, the error control becomes almost impossible. 3.3. Conditions on processes and payoff functions. We consider the down-and-out case; H = e h is the barrier, T is the maturity date, and G(X T ) is the payoff at maturity. The most efficient realizations of the pricing/hedging formulas are possible if the characteristic exponent admits analytic continuation to a union of a strip and cone and behaves sufficiently regularly at infinity. For the general definition of the corresponding class of Lévy processes (called SINHregular) and applications to pricing European options in Lévy models and affine models, see [17]. In the present paper, for simplicity, we assume that the characteristic exponent admits analytic continuation to the complex plane with two cuts. Assumption (X). (2) For KoBoL, VG and NTS, ψ 0 admits analytic continuation to an appropriate Riemann surface. This extension can be useful when the SINH-acceleration is applied to calculate the Wiener-Hopf factors, and less so for pricing European options. [19], we constructed more general classes of Lévy processes, with the characteristic exponents of the form with modifications in the case ν + = 1 and/or ν − = 1. For these processes, the domains of analyticity and bounds are more involved. In particular, in general, the coni are not symmetric w.r.t. the real axis. Note that in the strongly asymmetric version of KoBoL, in the spectrally one-sided case in particular, the condition c 0 ∞ > 0 does not hold, and should be replaced with arg c 0 ∞ ∈ (−π/2, π/2). 1. G is a measurable function admitting a bound where is well-defined on the half-space {Im ξ < −β}, and admits the representationĜ(ξ) = e −iaξĜ 0 (ξ), where a ∈ R andĜ 0 (ξ) is a rational function decaying at infinity. Note that only the values G(x), x > h, matter, hence, we may replace G with 1 (h,+∞) G, and then there is no need to consider the case (i) separately. With K j = K , we have the payoff function of a powered call option.Ĝ 0 has three simple poles at 0, −i and −2i. has two simple poles at 0 and −i. 3.4. More general payoff functions and embedded options. In the case of embedded options, the simple structure ofĜ formalized in Assumption (G) is impossible. The following generalization allows us to consider payoffs that are prices of vanillas and some exotic options. 
For γ ∈ (0, π/2), define cones in the complex plane has a finite number of poles and decays as |ξ| −δ or faster as ξ → ∞ remaining in S (λ − ,λ + ) ∪ C γ . [20,21,10,13,53]. Several equivalent versions of general pricing formulas for no-touch and first touch options were derived in [20,21,10,13,53], in terms of the Wiener-Hopf factors. In this subsection, we list the notation and facts which we use in the present paper. 4.1.1. Three forms of the Wiener-Hopf factorization. Let X be a Lévy process with the characteristic exponent ψ. The supremum and infimum process are defined by X t = sup 0≤s≤t X s and X t = inf 0≤s≤t X s , respectively. Let q > 0 and let T q be an exponentially distributed random variable of mean 1/q, independent of X. Introduce functions Wiener-Hopf factorization: basic facts used and derived in and normalized resolvents (the EPV operators under X, X and X, respectively) and its operator analog E q = E − q E + q = E + q E − q is proved similarly to (4.5) (see, e.g., [10,11]). Finally, introducing the notation φ + q (ξ) = E e iξX Tq , φ − q (ξ) = E e iξX Tq and noticing that E e iX Tq ξ = q q+ψ(ξ) , we can write (4.5) in the form Equation (4.6) is a special case of the factorization of functions on the real line into a product of two functions analytic in the upper and lower open half planes and admitting the continuous continuation up to the real line. This is the initial factorization formula discovered by Wiener and Hopf [65] in 1931, for functions of a much more general form than in the LHS of (4.6). Depending on the situation, we will need the Wiener-Hopf factors either on a curve L(ω 1 , ω, b) with the wings pointing up (ω > 0), or on a curve L(ω 1 , ω , b ) with the wings pointing down (ω < 0). In the former case, we deform the contour of integration (in the formula for the Wiener-Hopf factors that we use) so that the wings of the deformed contour L(ω 1 , ω , b ) point down (ω < 0), and in the latter case -up (ω > 0). This straightforward requirement is easy to satisfy, as well as the second requirement: the curves do not intersect. Indeed, if ω ∈ (0, π/2] and ω ∈ [−π, 2, 0), the curves do not intersect if and only if the point of intersection of the former with the imaginary axis is above the point of the intersection of the latter with the same axis: The last condition on L(ω 1 , ω , b ) is that for q of interest, the function L(ω 1 , ω , b ) η → 1 + ψ(η)/q ∈ C (or η → q + ψ(η)) is well-defined, and, in the process of the deformation of the initial line of integration into L(ω 1 , ω , b ), image does not intersect (−∞, 0]. If the parameters of the curve are fixed, this requirement is satisfied if Re q ≥ σ and σ > 0 is sufficiently large. For details, see [53], where a different family of deformations (farctional-parabolic ones) was used. In cases of the sinh-acceleration and fractional-parabolic deformation, at infinity, the curves stabilize to rays, hence, the analysis in [53] can be used to derive the conditions on the deformation parameters if q > 0. If the Gaver-Stehfest method is applied, then we need to use the Wiener-Hopf factorization technique for q > 0 only, and the analysis in [53] suffices. An alternative method used in [13] is to deform the contour of integration in the Bromwich intergral; in this case, the deformation of the latter and the deformation of the contours in the formulas for the Wiener-Hopf factors must be in a certain agreement. We outline the restrictions on the parameters of the deformations in Section C. 
For positive q, the maximal (in absolute value) σ ± (q) are easy to find for all popular classes of Lévy processes used in finance. As it is proved in [21] (see also [53]), the equation q + ψ(ξ) = 0 has either 0 or 1 or two roots in the complex plane with the two cuts i(−∞, λ − ] and i[λ + , +∞). Each root is purely imaginary, and of the form 4.2. Calculation of the Wiener-Hopf factors using the sinh-acceleration. If for the Laplace inversion the Gaver-Stehfest method or other methods utilizing only positive q is used, then we can take any ω ∈ [−π/2, π/2] and ω = −ω. If ψ admits analytic continuation to an appropriate Riemann surface (as it is the case for the some classes of Lévy processes), then ω ∈ [−π/2, π/2] can be used. The curve lies in the complex plane but the "conical" region around the curve, in the y-coordinate, is a subset of the Riemann surface. See [14,13,53,17] for details. In [14,13,53], fractional-parabolic deformations of the contour of integration into a Riemann surface significantly increased the rate of the decay of the integrand as compared to deformations into the complex plane with the two cuts; the number of terms in the simplified trapezoid rule decreased by a factor of 10-1000 and more (see [54] for a detailed analysis). If the sinh-acceleration is used, the gain is not large if any: the number of terms in the simplified trapezoid rule decreases by a factor of 1.5-2, at best, but the analytic expressions that one has to evaluate become more involved. Hence, in the present paper, we use ω, ω ∈ [−π/2, π/2], of opposite signs. The sinh-acceleration has another advantage as compared to the fractionalparabolic deformations. As it is shown in [53], if q > 0 is small (which is the case for some terms in the Gaver-Stehfest formula if T is large), then one of the σ ± (q) in Subsection 4.1.2 or both are small in absolute value, and then the strip of analyticity of the integrand is too narrow. Hence, if the fractional-parabolic change of variables is applied, the size ζ of the mesh in the simplified trapezoid rule necessary to satisfy the desired error tolerance is too small, and the number of terms N too large. A rescaling in the dual space can increase the width of the strip of analyticity but then the product Λ := N ζ must increase, and the decrease in N is insignificant. If the sinh-acceleration is used, then the rescaling (using an appropriately small b) does not increase Λ significantly. Roughly speaking, in the recommendation for the choice Λ, 1/ should be replaced with 1/ + a ln(1/b), where a is a moderate constant independent of , b and other parameters. As numerical examples in [16] demonstrate, this kind of rescaling is efficient even if the initial strip of analyticity is of the width 10 −6 and less. Hence, in this paper, we will use the Gaver-Stehfest method with the Rho-Wynn acceleration. For each q, we use the following versions of (4.8)-(4.9): (i) for ξ ∈ L(ω 1 , ω, b), Each integral is evaluated applying the simplified trapezoid rule. Numerical examples. In Table 8, we apply the above scheme to calculate φ ± q (ξ) in KoBoL model of order ν = 1.2. If the factors are calculated at 30 points, then approximately 1.5 msec is needed to satisfy the error tolerance = 10 −15 , and 1.0-1.2 msec are needed to satisfy the error tolerance = 10 −10 . The number of terms is in the range 350-385 in the first case, and 159-175 in the second case. 
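For concreteness, the following sketch (an illustration only, not the authors' code) writes out a KoBoL-type characteristic exponent in the usual KoBoL/CGMY parametrization and locates the purely imaginary zeros of q + ψ(ξ) = 0 on the segment between the two cuts, i.e. the quantities expressed through β ± q above; apart from the order ν = 1.2, the parameter values are invented rather than taken from Table 8.

```python
# Illustrative sketch (not the authors' code).  Assumed standard KoBoL/CGMY form:
#   psi(xi) = (sigma^2/2) xi^2 - i*mu*xi
#             + c*Gamma(-nu)*( lam_p**nu - (lam_p + i*xi)**nu
#                              + (-lam_m)**nu - (-lam_m - i*xi)**nu ),
# nu in (0, 2), nu != 1, lam_m < 0 < lam_p.  Parameter values are made up.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

sigma, mu, c, nu, lam_p, lam_m = 0.0, 0.05, 1.0, 1.2, 9.0, -8.0

def psi(xi):
    xi = np.asarray(xi, dtype=complex)
    jumps = c * gamma(-nu) * (lam_p**nu - (lam_p + 1j * xi)**nu
                              + (-lam_m)**nu - (-lam_m - 1j * xi)**nu)
    return 0.5 * sigma**2 * xi**2 - 1j * mu * xi + jumps

def imaginary_roots(q, eps=1e-9):
    """Real eta with q + psi(i*eta) = 0, eta in (lam_m, lam_p).  Since psi(0) = 0,
    the function is positive at eta = 0, and there is at most one root in each
    of the two subintervals, located here by Brent's method."""
    g = lambda eta: float((q + psi(1j * eta)).real)
    roots = []
    for a, b in [(lam_m + eps, -eps), (eps, lam_p - eps)]:
        if g(a) * g(b) < 0:
            roots.append(brentq(g, a, b))
    return roots

print(imaginary_roots(q=10.0))   # one negative and one positive root for these values
```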
To satisfy the error tolerance of the order of 10 −20 , about 500 terms would suffice but, naturally, high precision arithmetics will be needed. 5. Calculation of no-touch options and expectations of no-touch products 5.1. General formulas for no-touch options. In [10,13,53], it is proved that the Laplace . The result is proved under conditions more general than Assumptions (X) and (G emb ). Let F denote the operator of the Fourier transform, and set Π + h := F1 (h,+∞) F −1 . The operator Π + arises systematically in the theory of the Wiener-Hopf factorization and boundary value problems. See [35] for the general setting in applications to multi-dimensional problems. Using F and Π + h and taking into account that is analytic in the half-plane {ξ | Im ξ < σ + }, and can be defined by any of the following three formulas: a) for any ω + ∈ (Im ξ, σ + ), where v.p. denotes the Cauchy principal value of the integral. Proof. a) Applying the definition of Π + h and Fubini's theorem, we obtain We push the line of integration in (5.6) down. On crossing the simple pole at η = ξ, we apply the Cauchy residue theorem, and obtain (5.4). c) Let ω := Im ξ and a = Re ξ. We deform the contour Im ξ = ω into and then pass to the limit ↓ 0. The result is (5.5). Remark 5.1. a) Let ω = h = 0. Then (5.5) can be written in the form Π + 0 = 1 2 I + 1 2i H, where I is the identity operator, and H is the Hilbert transform. Therefore, a realization of Π + 0 is, essentially, equivalent to a realization of H. b) Similar formulas are valid in the multi-dimensional case, and for Π + h acting in appropriate spaces of generalized functions. In particular, the condition on the rate of decay of f at infinity can be relaxed. Under appropriate regularity conditions on f , equation (5.5) can be proved for f defined on the line iω + R, the result being a function on the same line that admits analytic continuation to the half-plane below this line. See [35]. The following theorem is a part of the proof of the general theorem for pricing no-touch options in [20,21]; we will outline the proof based on (5.2). Denote by M the set of q's in the Gaver-Stehfest formula and by Q(q) the weights. We have an approximation It remains to design an efficient numerical procedure for the evaluation ofṼ 1 (G; q, x), q > 0. As η → ∞,Ĝ 0 (η) → 0. If the order ν ∈ [1,2] or µ = 0, then φ ± q (ξ) → 0 as ξ → ∞ in the domain of analyticity; if ν < 1 and µ = 0, then one of the Wiener-Hopf factors stabilizes to constant at infinity, and the other one decays as 1/|ξ|. Since x − h > 0, it is advantageous to deform the outer contour on the RHS of (5.10) so that the wings of the deformed contour point upward: where ω > 0. At this stage, we deforms the contour so that no pole of the integrand (if it exists) is crossed, and consider the cases of crossing later. In all cases, in the process of deformations, the curves must remain in holds, then we may take any γ ∈ (0, π/2); in our numerical experiments, we will take γ = π/4. The type of the deformation of the inner contour depends on the sign of a − h. If a ≥ h, which, in the case of puts and calls means that the strike is above or at the barrier, then we may deform the contour downward. The deformed contour is of the form L − := L(ω 1 , ω , b ) = χ(ω 1 , ω , b ; R), where ω < 0; as in the case of the deformation of the outer contour, we choose the parameters of the deformation so that, in the process of deformation, no pole of the inner integrand is crossed, and consider the cases of crossing later. 
The parameters of the both contours are chosen so that L + is strictly above L − . The case a < h (the strike is below the barrier) is reducible to the case hence, Assumption (G) is valid with a = h. If G is the value function of an embedded option, then a < h is possible (e.g., the strike of the embedded European put or call is below the barrier). We consider this case separately, in Section 5.4. The type of deformation of the contour of integration on the RHS of (5.11) depends on the sign of x − a. If x − a > 0, we use L + , and if x − a < 0, then L − . If x − a = 0, then either deformation can be used. We conclude that, for the majority of applications when embedded options are not involved, we may use deformed contours of the form L + in the outer integral and of the form L − in the inner integral, and, for x > h, writẽ where the type of the deformed contour L depends on the sign of x − a: To evaluate the integrals numerically, we make the changes of variables ξ = iω 1 + b sinh(iω + y) and η = iω 1 + b sinh(iω + y ), and apply the simplified trapezoid rule w.r.t. y and y . Crossing poles. For simplicity, we consider the case q > 0. Hence, the results in this Subsection can be applied only if the Gaver-Stehfest method or other similar methods are applied. 3 Assume that both solutions −iβ ± q exist. the condition x ≥ a means that the corresponding European call is ITM or ATM, and put OTM or ATM. Recall that the initial lines of integration and the deformed contours are in the strip of analyticity of the integrand, around the real axis. It follows that the intersection of L + (resp., L − ) with the imaginary line iR is below −iβ − q (resp., above −iβ − q ). On the RHS of (5.12), we move L + up (resp., L − down), cross the simple pole at −iβ − q (resp., at −iβ + q ), and stop before the cut i[λ + , +∞) (resp., i(−∞, λ − ]) is reached. On crossing the pole, we apply the residue theorem and the equalities , which follow from (4.11)-(4.12). Denote the new contours L ++ and L −− . In the last integral on the RHS of (5.12), we deform L into L ++ (or a contour with the properties of L ++ ). In the process of the deformation, we may have to cross the poles ofĜ 0 that are above the line {Im ξ = ω − } but below the contour L ++ . Let Z(Ĝ 0 ; ω − , L ++ ) the set of these poles, and let the poles be simple and different from −iβ − q ; theñ Note that if −iβ ± q do not exist or not crossed in the process of the deformations, than (5.14) is valid after all terms with β ± q are removed. In applications to pricing call options, G(η) = e −iaηĜ 0 (η), where a = ln K, andĜ 0 (η) = − Ke −rT η(η+i) has two simple poles at −i, 0, it is advantageous to cross these two poles even if the poles −iβ ± q are not crossed: In typical examples G(x) = (e x − e a ) + or G(x) = (e a − e x ) + , the corresponding European call is OTM, and put ITM. In the case x ≥ a, the last two terms on the RHS of (5.14) appear as we push up the line of integration in the last integral on the RHS of (5.12). Now x ≥ a, hence, we deform this line down, and obtain (5.14) with the following modification: the last three terms on the RHS are replaced with the sum Approximate formulas in the case of a wide strip of analyticity. 
If −λ − , λ + are large, q is not large, so that −iβ ± q are not large in absolute value, and x − h and x − a are nor very small in absolute value, then we can choose the deformations L ++ and L −− so that the integrals over L ++ and L −− are small, and can be omitted (we do not copy-paste the result in order to save space); a small error results. The resulting formulas are given by simple analytical expressions [53]). E.g., if h < x < a, theñ Case of G the value function of an embedded put option, with the strike below the barrier. Let a < h; then x − a > 0. We deform both contours on the RHS of (5.10) up. First, we deform the outer contour into L + = L(ω 1 , ω, b), and then the inner one, into L + 2 = L(ω 12 , ω 2 , b 2 ). We choose the parameters of the latter(ω 12 , ω 2 , b 2 ) so that L + 2 is strictly below L + 2 , and, furthermore, so that the angles between the asymptotes of the two contours are positive: 0 < ω 2 < ω. Instead of (5.12), we havẽ . The crossing of possible poles can be done similarly to the case a ≥ h. 5.5.1. Pricing no-touch options and first touch digitals, down case (Table 9). Let V nt (T, x) and V f t (T, x) be the prices of the no-touch and first touch digital option, respectively. Then, for any ω ∈ (σ − (q), 0) and ω + ∈ (0, σ + (q)),Ṽ nt (G; q, x) is given bỹ A similar formula for V f t (T, x) (see [53]) is where ω + ∈ (0, σ + (q − r)). The integrals on the RHSs of (5.19) and (5.20) differ by scalar factors in front of the integrals, hence, both options can be evaluated simultataneously. If −β − q exists and λ + + β − q > 0 is not small, it may be advantageous to move the line of integration up and cross the simple pole at −iβ − q . In both cases, the line of integration is deformed into a contour L + := L(ω 1+ , ω + , b + ) in the upper half-plane, with wings pointing up. To evaluate φ − q (ξ), ξ ∈ L + , we use (4.8) to calculate φ + q (ξ) (the line of integration is deformed into a contour in the lower half-plane with the wings pointing down), and then (4.11). We calculate prices of no-touch options and first touch digitals in the same KoBoL model as in Table 2. Our numerical experiments show that, for integrals for the Wiener-Hopf factor and the Fourier inversion, the relative error of the order of E-8 and better (for prices) can be satisfied using the general recommendations of [17] for the error tolerance = 10 −10 ; if = 10 −15 is used, the pricing errors decrease insignificantly. Hence, in this example, the errors of GWR method are of the order of 10 −10 − 10 −8 . The CPU time (for 7 spots) is less than 10 msec for the error tolerance = 10 −6 , and less than 25 msec for the error tolerance = 10 −10 . The bulk of the CPU time is spent on the calculation of the Wiener-Hopf factor at the points of the chosen grids; this calculation can be easily paralellized, and the total CPU time significantly decreased. The CPU time can be decreased further using more efficient representations for the Wiener-Hopf factors. 5.5.2. Pricing down-and-out call option (Tables 10-11). We consider the same model, and the call options with the strikes K = 1.04, 1.1 In both cases, the barrier H = 1, and T = 0.1, 0.5. In (5.15), we make the corresponding changes of variables and apply the simplified trapezoid rule. In Table 10, we see that even in cases when the spot is close to the barrier, the error tolerance of the order of E-05 and smaller can be satisfied at a moderate CPU-time cost: about 0.1 msec for the calculation at 7 spots. 
In Table 11, we show the prices V call (H, K; T, S) for S very close to the barrier H (x := ln(S/H) ∈ [0.0005, 0.0035] and the prices divided by x ν/2 . We see that the ratio is approximately constant which agrees with the asymptotics of the price of the down-and-out option Pricing first-touch options and expectations of first-touch products Conditions on processes and payoff functions are the same as in Section 3.3. We consider the down-and-in case; H = e h is the barrier, T is the maturity date, and G(τ, X τ ) is the payoff at the first entrance time τ into (−∞, h]. We need to calculate V (G; T ; x) = E x [G(τ, X τ )1 τ <T ]. 6.1. The simplest case. Let G(τ, X τ ) = e −rτ +βXτ , where β ∈ [0, −λ − ). For β = 0, V (G; T ; x) is the time-0 price of the first-touch digital, for β = 1 of the stock which is due at time τ if τ < T . The case of the down-and-in forwards obtains by linearity. If we need to calculate the expectation of the product of two payoffs of this form, β may assume value β = 2; in this case, we need to replace r with 2r. Since β ∈ [0, −λ − ), φ − q (−iβ) = E e βX Tq is finite for any q in the right half-plane. Furthermore, for any σ > 0, there exists ω > 0 such that, for any q in the half-plane Re q ≥ σ, φ − q (ξ) admits analytic continuation to the half-plane {Im ξ < ω}. For such σ and ω, the general formula for the down-and-out options derived in [20,10] (see also [13,53]) is applicable: for x > h, Assuming that we use the Gaver-Stehfest method to evaluate the Bromwich integral (the outer integral on the RHS of (6.1)), we need to calculate the inner integral for q > 0. As in the case of no-touch options, we deform the contour upward and cross the simple pole at −iβ − q if β − q exists and −β − q is not close to λ + : . If x − h > 0 is not very small, q not too large and λ + and −λ − are large, then a good approximation can be obtained using the last term on the RHS above and omitting the integral over L ++ . 6.2. General case. Even in a simple case of the down-and-in option which, at time τ , becomes the call option with strike K and time T − τ to maturity, G(τ, X τ ) = e −rτ V call (K; T − τ, X τ ), the payoff at time τ , is more involved than the payoff functions in [20,10] because the pricing formulas in [20,10] were derived under assumption that the dependence of G(τ, X τ ) on τ is of the simplest form e −rτ G 1 (X τ ). If we calculate the expectation of the product of a discounted down-and-in call option e −rτ V call (K; T − τ, X τ ) and e −rτ +βXτ , then G(τ, X τ ) = e −2rτ +βXτ V call (K; T − τ, X τ ). In the case of the expectation of the product of discounted prices of European options, the structure of G(τ, X τ ) is even more involved. First, we calculate the expectation V (G; T ; x) = E x [1 τ <T G(τ, X τ )] in the general form, and then make further steps for the special cases mentioned above. We repeat the main steps of the initial proof in [20] omitting the technical details of the justification of the application of Fubini's theorem; they are the same as in [20,9]. Let L = −ψ(D) be the infinitesimal generator of X. Recall that the pseudo-differential operator ψ(D) with the symbol ψ is the composition of the Fourier transform, multiplication operator by the function ψ, and inverse Fourier transform. Ifû is well-defined and analytic in a strip, ψ admits analytic continuation to the same strip, and the product ψ(ξ)û(ξ) decays sufficiently fast as ξ → ∞ remaining in the strip, an equivalent definition is (ψ(D)u)(ξ) = ψ(ξ)û(ξ), for ξ in the strip. 
For details, see [35,21,20]. The function V 1 (G; t, x) := V (G; T − t; x) is the bounded sufficiently regular solution of the boundary value problem Making the Laplace transform w.r.t. t, we obtain that if σ > 0 is sufficiently large, then, for all q in the half-plane {Re q ≥ σ},Ṽ 1 (G; q, x) solves the boundary problem (6.6) in the class of sufficiently regular bounded functions. If {Ṽ 1 (G; q, ·)} Re q≥σ is the (sufficiently regular) solution of the family of boundary problems (6.5)-(6.6) on R, then V 1 (G; t, x) can be found using the Laplace inversion formula. Finally, The family of problems (6.5)-(6.6) is similar to the one in [20]; the only difference is a more involved dependence ofG on q (in [20],G(q, x) = G(x)/(q − r)). Hence, we can apply the Wiener-Hopf factorization technique as in [20] and obtain Let ω ∈ (0, λ + ). If σ > 0 is sufficiently large, then, for q in the half-plane {Re q ≥ σ}, φ − q (ξ) is analytic in the half-plane {Im ξ ≤ ω}. For ξ in this half-plane, the double Laplace-Fourier transform of V 1 (G; t, x) w.r.t. (q, x) is given by . We take ω < −β, and, similarly to the proof of Lemma 5.1, calculate The result is: for (q, ξ) s.t. Re q ≥ σ and Im ξ > ω , Applying the inverse Fourier transform, we obtaiñ dη. (6.10) Since x − h > 0, we deform the outer contour upward, the new contour being of the type L + (meaning: of the form L(ω 1 , ω, b), where ω > 0): dη. (6.11) If −iβ − q exists and −β − q is not close to λ + , we push the contour up, cross the simple pole at ξ = −iβ − q , and obtaiñ . (6.13) An admissible type of the deformation of the inner integral on the RHS of (6.12) and the integral on the RHS of (6.13) depends on the properties of e ihηĜ (q, η). If G is the price of a vanilla option or the product of prices of two vanilla options, then an admissible deformation is determined by the relative position of the barrier and the strikes of the options involved. Hence, we are forced to consider several cases. 6.3. Down-and-in call and put options. If −iβ + q exists, we can cross the pole at −iβ + q , and obtaiñ 6.3.2. Put option, the strike is at or above the barrier. In the case of the put option, we have (6.16) with ω ∈ (0, ω). Hence, deforming the contours of integrations w.r.t. η down into L −− , we cross not only a simple pole at −iβ + q but simple poles 0, −i as well. Hence, to the RHS of (6.17), we need to add 6.3.3. Call option, the strike is below the barrier. We start with the contour {Im η = ω }, where ω ∈ (−β + q , −1). Since a < h, we need to deform the inner contour of integration on the RHS of (6.12) and the contour of integration in (6.13) up. Since φ + q (η) is analytic and bounded in the half-plane {Im η ≥ ω }, the inner integrand on the RHS of (6.16) has three simple poles at η = −i, 0, ξ, and the 1D integrand on the RHS of (6.16) has three simple poles at −i, 0, −iβ − q ; after the poles are crossed, we can move the line of integration up to infinity, and show that the integrals after crossing are zeroes. Hence, we obtaiñ Remark 6.1. If −iβ ± q either do not exist or not crossed, in all cases above the contours of the types L ± are used, and, in all formulas above, all the terms that contain β ± q should be omitted. 6.3.4. Put option, the strike is below the barrier. Evidently, the price is the same as the one of the European put. 6.3.5. The case of the product of discounted a European call or put option and e Xτ . In the case of the call option, we have G(t, x) = e −2rT e x+2rt V call (T, K; T − t, x). 
Assuming that λ − < −2, we take ω ∈ (λ − + 1, −1), and write dη. If the strike is at or above the barrier, the remaining steps are essentially the same as in Subsections 6.3.1-6.3.2. The poles of the integrands are at −2i and −i rather than −i and 0, and the contour L − must be below −2i rather than −i. Also, due to a different factor q −r+ψ(η +i) in the denominator, we have an additional restriction on σ and L −− . Furthermore, instead of the equality φ − q (η)(q + ψ(η)) = q/φ + q (η) we have a more complicated equality φ − q (η)(q − r + ψ(η)) = q(q − r + ψ(η))/(φ + q (η)(q + ψ(η)), hence, some of the poles and the corresponding residue terms are different. In the case of the put, the calculations are the same only ω ∈ (0, ω) must be chosen at the first step, and in the process of the deformation of the contours of integration w.r.t. η down, tho poles at η = −i, −2i are crossed rather than at η = 0, −i. If the strike is below the barrier, the above argument is modified similarly to the modification in Subsection 6.3.3. 6.4. The case of the product of two European call or put options. Since x − h > 0, we deform the outer contour into a contour of the form L ++ , crossing the pole at ξ = −iβ − q if it exists: . If the first option is a call and the other is a put, then ω 1 ∈ (−β + q , −1) and where ω 1+ , ω + , b + are the parameters that define the curve L ++ (recall that the lowest point of the latter is i(ω 1+ − b + sin(ω + ))). 6.4.2. Reductions. All three cases are reducible one to another. We start with the product of two call options. We move the lines of integration {Im η j = ω j }, ω j ∈ (−β + q , −1), j = 1, 2, up, and, on crossing the poles, apply the residue theorem. Let ω 1 , ω 2 > 0, ω 1 + ω 2 < Im ξ. Then, moving the innermost line of integration up, we obtain The first term on the RHS is the integral for the case of a call and put, the remaining two terms can be calculated similarly to the integrals in formulas for put and call options. If a 1 − h ≤ 0, the line {Im η 1 = ω 1 } is deformed into a contour of the form L − , and if a 1 < h, then of the form L + or L ++ . The new contour must be strictly below the contour of integration L ++ w.r.t. ξ, and the angles between the asymptotes of the two contour must be non-zero. Pushing the lines of integration w.r.t. η 1 up, we obtain a repeated integral which arises when the product of puts is considered (plus 4 one-dimensional integrals): Below, we consider the calculation of the repeated integrals above, denote them J(ξ; ω 1 , ω 2 ), for strikes below or above the barrier. Depending on a case, it will be convenient to calculate either J(ξ; ω 1 , ω 2 ) or J(ξ; ω 1 , ω 2 ) or J(ξ; ω 1 , ω 2 ), and, then, if necessary, use (6.21) and (6.22). Since the choice of the parameters of the contours of integration are simpler if the contours of integration w.r.t. η 1 , η 2 are in the lower half-plane, we would recommend the reduction to the case of two calls unless β + q − 1 is much smaller than −β − q . If the characteristic exponent is a rational function, then we may push the contour L ++ to infinity upwards, and L − downwards. After all the poles are crossed, instead of the triple integral, we will obtain a triple sum expressible in terms of a 1 , a 2 , h, the parameters of the characteristic exponent and its roots and poles. The same can be done in the beta-model [50]; the resulting sums will be infinite, though, and one will have to solve a rather non-trivial problem of a sufficiently accurate truncation of these infinite sums. 6.4.4. 
The case when one of the strikes is below the barrier. Let K 2 < H ≤ K 1 . Then a 2 − h < 0 ≤ a 1 , and the sinh-deformation of the line of integration w.r.t. η 2 is impossible, hence, we apply the simplified trapezoid rule to the initial integral w.r.t. η 2 (flat iFT). The integrands decay very slowly, hence, the number of terms N 2 in the simplified trapezoid rule is too large. The number N 2 can be significantly decreased using the summation by parts in the infinite trapezoid rule if f (y) decreases faster than f (y) as y → ±∞ (as is the case in the setting of the present paper). Indeed, then the finite differences ∆f j = f j+1 − f j decay faster than f j as j → ±∞ as well. The summation by parts formula is as follows. We choose ζ > 0 so that 1 − e −iaζ is not close to 0. Then If each differentiation increases the rate of decay, then the summation by part procedure can be iterated. In the setting of the present paper, the rate of decay increases by approximately 1 with each differentiation. In the numerical examples, we apply the summation by parts 3 times, which decreases the number of terms many times, and makes the number comparable to the number of terms when the sinh-acceleration can be applied. 6.4.5. The case when both strikes are below the barrier. In this case, the sinh-acceleration in the integrals w.r.t. η 1 and η 2 is impossible, hence, we apply the simplified trapezoid rule to the initial integrals w.r.t. η 2 and η 1 . The numbers N 1 , N 2 can be significantly decreased using the summation by parts in the infinite trapezoid rule applied to evaluate the repeated integrals on the RHS of (6.23). Semi-static hedging vs Variance minimizing hedging of down-and-in options: a numerical example and qualitative analysis In this section, we present and discuss in detail several important observations and practically important conclusions using an example of a down-and-in call option. The process (KoBoL) is the same as in Table 2 , maturity is T = 0.1, but the strike K = 1.04 is farther from the barrier than in Table 2 . The reason for that is two-fold: 1) to show that if the restrictive formal conditions for the semi-static hedging are satisfied, then the semi-static procedure works reasonably well for jump-processes with a rather slowly-decaying jump component even if the distance from the barrier to the support of the artificial exotic payoff is sizable (about 4 percent); 2) if this support is too close to the barrier, then the summation-by-part procedure can be insufficiently accurate unless high precision arithmetic is used. In the paper, we do calculation with double precision. We consider the standard situation: an agent sells the down-and-in call option, and invests the proceeds into the riskless bond. We assume that the spot S 0 = e 0.04 is almost at the strike. That is, the agent makes the bet that the barrier will not be breached during the lifetime of the option. Fig. 3 shows that the probability of this happy outcome is not very large; even the probability that the barrier will be breached before τ = 0.05 is about 42%. However, with the probability about 50%, at the time the portfolio is breached, the portfolio value is positive. This is partially due to the fact the bond component in the portfolio increases fairly fast. 4 Nevertheless, if the barrier is breached at time close to 0, the loss in the portfolio value can be rather sizable (see Fig. 4). Hence, it is natural for the agent to hedge the bet. 
Assume that the agent uses the semi-static hedging constructed in the paper. The standard static and semi-static arguments construct model-and spot-invariant portfolios, which make it impossible to take into account that a hedging portfolio needs to be financed. If we consider the portfolio of 3 put options, and ignore the riskless bonds borrowed to finance the position, then, at any time τ the barrier is breached, and at any level S τ ≤ H, the portfolio value is positive or very close to zero (Fig. 5). Thus, the semi-static hedge seems to work very well even if the portfolio consists of only 3 options. Furthermore, if only one option is used, then the portfolio value decreases except in a relatively small region far from the barrier and close to maturity (Fig. 6). Since the probability of the option expiring in that region is very small, a naive argument would suggest that increasing the number of options in the hedging portfolio would increased the overall hedging performance of the portfolio. Recall, however, that the agent borrows riskless bonds to finance the put option position. If the barrier is breached, this short position in the riskless bond has to be liquidated alongside the other positions in the portfolio, complicating the overall picture. Fig. 7 demonstrates that, when S τ is close to the barrier (a high probability event conditional on breaching the barrier), the value of the hedging portfolio is negative and large. Thus, the hedge is far from perfect. Furthermore, in Fig. 8 we see that if τ > 0.037 and S τ is not too far from the barrier, then the semi-static portfolio consisting only of a put option whose strike is at the kink of the exotic option has a larger value then the value of the semi-static portfolio consisting of 3 put options. The value of the hedging portfolio with one put option is shown in Fig. 9. Table 12 shows that, if the barrier is not breached during the lifetime of the options, both portfolios have negative value but the losses on the one with 3 options is twice as large. Hence, it is better to use fewer options unless the agent is betting on a very low probability event realizing: if a larger fraction of wealth is invested in options, the cost of these options is higher, eroding the advantage of a more accurate semi-static hedging. Fig. 10 demonstrates that the variance-minimizing portfolio of the same three put options hedges the risk of the down-and-in option, but only partially: if τ < 0.06 (approximately), and S τ is not far from the barrier, the value of the portfolio is negative; for τ closer to maturity, the portfolio value becomes sizably positive. Comparing Fig. 12 and Fig. 10, we observe that the inclusion of the first-touch digital increases the hedging performance of the portfolio somewhat. Fig. 11, 13 and 14 show that portfolios with one put option or one first touch digital perform approximately as well as portfolios with 3 options, and the portfolio with the first touch digital as the only hedging instrument is better than portfolios with one put option (with strikes either at the barrier or at the kink). We emphasize once again that, since the hedging portfolio is not costless, proper analysis of the efficiency of hedging should incorporate both the payoff to the short position in the riskless bond and the value of the hedging portfolio at maturity in case the barrier is not breached. 
In the case of the formally perfect semi-static hedging with 3 put options, the loss of the portfolio when the barrier is not breached is quite sizable (about 0.6 of the value of the hedged option at time 0); in contrast, if only one option is used, then the loss is about 0.3 of the value of the hedged option. Instead, if we use a variance-minimizing portfolio with 3 or 1 put options, the gain when the barrier is not breached is sizable (about the value of the hedged option); if the first touch digital is used, then the gain is close to 0 (positive if, in addition to the first touch digital, two put options are used, and negative if only the first touch digital is used). To sum up: any realistic hedging portfolio replaces the initial non-hedged bet with another one, which can be more risky than the initial bet, once we take the investment in the riskless bond into account. In particular, the semi-static hedging portfolio with 3 put options, which seems perfect from the point of view of the replication of the payoff of the down-and-in call option, ignoring the upfront payments for the hedging instruments, is, in fact, another and much riskier bet: only if the barrier is breached, and the underlying is sizably below the barrier at that moment, is the hedging portfolio in the black; otherwise, it is in the red, and, with large probability, significantly so. Thus, semi-static hedging portfolios can be regarded as contrarian bets: if the realization of the underlying is very bad (which occurs with small probability), the portfolio gains a lot, but loses a lot otherwise. The portfolios with first touch digitals have the most concentrated profile of the payoffs. Table 12 illustrates the bet structure implicit in different hedging portfolios, showing the approximate payoffs in the vicinity of the barrier when the barrier is breached and the payoff at maturity if the barrier is not breached. 5 We see that the replacement of the initial bet (the naked short down-and-in call option) with the portfolios based on the semi-static argument may lead to a sizable loss with probability more than 90%. At the same time, there is a small probability of a significant gain, if the barrier is breached by a large jump. In our opinion, Table 12 demonstrates that the most efficient hedge is with the first touch digital. In Table 13, we show the normalized standard deviation of different portfolios, computed at 1%-7% from the barrier. The irregular structure of the payoff to the semi-static hedging portfolio implies that the portfolio volatility must be high, as demonstrated in Table 13. The volatilities of other portfolios are of the same order of magnitude, although the volatilities of portfolios that contain the first touch digital are sizably smaller, which is an additional indication of the advantages of the first touch digitals. In Table 14, we show the variance-covariance matrix which is used to construct several hedging portfolios; the reader may check that the matrix is rather close to a matrix of a smaller rank (2-3 rather than 5). This explains why the variances of different portfolios that we construct are very close, and why, as far as the hedging of small fluctuations is concerned, there is no gain in using several options; and the cost of using several options can be uncomfortably high. 8. Conclusion. 8.1. Hedging: results and extensions.
We developed new methods for constructing static hedging portfolios for European exotics and variance-minimizing hedging portfolios for European exotics and barrier options in Lévy models; in both cases, the calculations are in the dual space. In particular, we constructed approximate static portfolios for exotic options approximating an exotic payoff with linear combinations of vanillas in the norm of the Hölder space with an appropriate weight; the order of the Hölder space is chosen so that the space of continuous functions with the same weight is continuously embedded in the weighted Hölder space, hence, we obtain an approximation in the C-norm. The weights are easy to calculate because the weighted Hölder space is a Hilbert space, and the scalar products of the elements of this space can be easily calculated by evaluating integrals in the dual space. (Footnote 5: Payoffs in the vicinity of the barrier when the barrier is breached are shown for several small time intervals. For omitted time intervals, approximate probabilities and payoffs can be reconstructed using interpolation; the probabilities of realizations of S τ farther from the barrier are small because the tails of the Lévy density decay exponentially, and the rate of decay is not small.) We have discussed the limitations of the static hedging/replication of barrier options, and, in applications to Lévy models, listed the rather serious restrictions on the parameters of the model under which the approximate replication of a barrier option with an appropriate European exotic option can be justified. We explained why in the presence of jumps a perfect semi-static hedging is impossible, and, using an example of the down-and-in call option in the KoBoL model, demonstrated that (a) the formal semi-static procedure of hedging barrier options based on the approximation of the latter with exotic European options and the static hedging algorithm of the present paper produce a good super-replicating portfolio even if only 3 put options are used; (b) however, if the borrowed riskless bond needed to finance the hedging portfolio is taken into account, then the hedging portfolio consisting of the short down-and-in call option, 3 put options and the riskless bond has a payoff structure resembling contrarian bets. With a very high probability, the portfolio suffers sizable losses (several dozen percent and more of the value of the down-and-in option at initiation). This happens if the barrier is not breached during the lifetime of the option or, at the moment of breaching, the spot does not jump too far below the barrier. Only if the jump down is large will the hedging portfolio turn a profit, and the profit can be very large; (c) if only one put option is used for hedging, then the performance of the portfolio significantly improves, although the super-replicating property becomes imperfect, naturally. This observation undermines the general idea behind semi-static hedging/replication. An accurate replication needs more options in the hedging/replicating portfolio, but the associated costs may outweigh the formal advantage of the model-free replication/hedging; (d) in the example that we considered, variance-minimizing hedging portfolios are less risky than the semi-static hedging portfolios, and the ones with the first touch digitals are the best.
This observation implies that, surprisingly enough, for barrier options, the variance-minimizing objective leads to smaller losses even at a scale which is not expected to be characterized by the variance; (e) in the case of the short down-and-out call option, the natural hedging portfolio is a short position in a European call; similarly, the natural hedging portfolio for the down-and-in option is a long position in a European call (Fig. 15). The results of the paper suggest that it might be natural to consider semi-static hedging portfolios as separate classes of derivative securities, with non-trivial payoff structures, which are model-dependent. The numerical examples in the paper demonstrate that the properties of the payoffs of semi-static hedging portfolios for short down-and-out options are fundamentally different from the properties of semi-static hedging portfolios for short down-and-in options. For the process and down-options considered in the paper, the semi-static hedging portfolios for down-and-out options do have properties close to the properties of good hedging portfolios: with high probability, the payoff at expiry (or maturity) is positive, although with small probability the payoff is negative. However, for the down-and-in options, the properties are the opposite: small losses with high probability and large gains with small probability (essentially, contrarian bets). For up-options under the same process, the semi-static hedging portfolios for "out" options are good but the ones for "in" options are contrarian bets. The properties change with the change of the sign of the drift as well. For barrier options, the picture becomes even more involved. To conclude, the formal semi-static argument is applicable only under rather serious restrictions, cannot be exact in the presence of jumps, may lead to very risky portfolios, and the hedging errors of the variance-minimizing portfolios can be sizable (although smaller than the errors of the formal semi-static hedging portfolios). The deficiencies of both types of hedging portfolios stem from the fact that both are static. If we agree that model-independent hedging is seriously flawed (the semi-static hedging disregards the cost of hedging), and that the variance-minimizing one does not take into account the payoff of the portfolio at the time of breaching explicitly, then a natural alternative is a hedging portfolio which is rebalanced after a reasonably short time interval, so that the profile of the payoff at an uncertain moment of breaching is approximately equal to the profile at the moment of the next rebalancing. The portfolio can be calculated using the approximation in the Hölder space norm, and the following versions seem to be natural: I. Myopic hedging, when the hedging instruments expire at the moment of the next rebalancing. II. Quasi-static hedging, when the first hedging portfolio is constructed using options of the same maturity as the barrier option, and the other options are added to the existing portfolio at each rebalancing moment. III. Hedging using first touch digitals only. Possible versions: each period, buy the digital which expires at the end of the rebalancing period; each period, buy first touch digitals which expire at the maturity date. As Fig. 1 suggests, and our numerical experiments confirm, the use of the first touch digitals has certain advantages, and first-touch options with the payoffs (S/H) γ , γ > 0, would be even better hedging instruments.
We leave the study of these versions of hedging to the future. 8.2. The technique used in the paper to price options with barrier features and its natural extensions. The construction of the variance minimizing portfolio for barrier options is based on the novel numerical methods for evaluation of the Wiener-Hopf factors and pricing barrier options that are of a more general interest than applications to hedging. In particular, the Wiener-Hopf factors can be evaluated with the relative error less than E-12 and barrier options with the relative error less than E-08 in a fraction of a msec. and several msec., respectively. The efficient evaluation of the Wiener-Hopf factors, and, in many cases, the numerical realization of the inverse Fourier transform in the option pricing formulas are based on the sinh-acceleration method of evaluation of integrals of wide classes, highly oscillatory ones especially, developed in [17]. In some cases, the sinh-acceleration method is not applicable. These cases are the ones when the standard Fourier inversion techniques may require the summation of millions of terms and more; the same problem arises when the Hilbert transform method is applied in Lévy models of finite variation. In the present paper, we suggested and successfully applied the summation by parts trick in the infinite trapezoid rule. In the result, the rate of the convergence of the infinite sum significantly increases, and it suffices to add thousands of terms in the infinite trapezoid rule instead of millions. Note that the same summation by parts can be applied when the Hilbert transform is used to evaluate discrete barrier options, and in other cases. Similarly to [53], it is possible to derive simplified asymptotic formulas, which are fairly accurate if the spot is not very close to the barrier and the tails of the jump density decay fast. Similar exact formulas with larger number of terms are valid in models with rational exponents ψ, if the roots and poles of the characteristic equation q +ψ(ξ) = 0 can be efficiently calculated, and the number of zeros and poles is not large. This is the case in the double-exponential jump diffusion model (DEJD model [47]) used in [58,57,47,48,49,63,59], and its generalization, the hyper-exponential jump-diffusion model (HEJD model) introduced and studied in detail in [52] (and independently outlined in [57]), and used later in [41,26,7,8] and many other papers. In the case of the beta-model [50], similar formulas with series instead of finite sums can be derived; but it is unclear how to truncate repeated series in order to satisfy the desired error tolerance. The approach of the paper can be applied to construct hedging portfolios for lookbacks [13], American options, barrier options and Asians with discrete monitoring [32], Bermudas, where the Fourier transform of the option price can be efficiently calculated. To this end, one can reformulate the backward induction procedures in the discrete time models or Carr's randomization method making the calculations in the dual space as in [56], where Asian options were priced making calculations in the dual space. The double-spiral method introduced in [56], together with the sinh-acceleration, should be used to decrease the sizes of arrays one works with at each time step. 
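A minimal sketch of the summation-by-parts trick mentioned above follows; this is an illustration, not the authors' code, and the test integrand and parameters are arbitrary. The elementary identity used, valid when f_j → 0 as j → ±∞, is Σ_j e^{iajζ} f_j = (1 − e^{iaζ})^{−1} Σ_j e^{iajζ}(f_j − f_{j−1}); each application replaces f by its first difference, which decays faster, so the truncated sum needs far fewer terms.

```python
# Minimal sketch (not the authors' code): summation by parts for slowly
# converging sums of the form  S = zeta * sum_j exp(i*a*j*zeta) * f(j*zeta).
# Identity used (assuming f decays at +-infinity):
#   sum_j e^{i a j zeta} f_j = (1 - e^{i a zeta})^{-1} sum_j e^{i a j zeta} (f_j - f_{j-1}).
import numpy as np

def oscillatory_sum(f, a, zeta, J):
    j = np.arange(-J, J + 1)
    return zeta * np.sum(np.exp(1j * a * j * zeta) * f(j * zeta))

def oscillatory_sum_by_parts(f, a, zeta, J, iterations=3):
    j = np.arange(-J - iterations, J + iterations + 1)
    vals = f(j * zeta).astype(complex)
    factor = 1.0
    for _ in range(iterations):
        vals = np.diff(vals)      # (f_j - f_{j-1}), indexed by the shifted j below
        j = j[1:]
        factor /= (1.0 - np.exp(1j * a * zeta))
    return zeta * factor * np.sum(np.exp(1j * a * j * zeta) * vals)

# Toy example: slowly decaying f (|f| ~ 1/|y|) times an oscillating factor.
f = lambda y: 1.0 / np.sqrt(1.0 + y**2)
a, zeta = 3.0, 0.05
print(oscillatory_sum(f, a, zeta, J=2_000_000))       # brute force, 4e6 terms
print(oscillatory_sum_by_parts(f, a, zeta, J=20_000)) # ~100x fewer terms
```

The two printed values agree up to the truncation error of the brute-force sum, while the differenced sum uses roughly a hundredth of the terms.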
The sinh-acceleration allows one to make calculations fast in regime switching models as well (calculation of the Wiener-Hopf factors in the matrix case is possible using the sinh-acceleration technique), hence, approximations of models with stochastic interest rates and stochastic volatility can be considered, similarly to [23,24,18], where American options were priced using Carr's randomization in the state space. The double-barrier options can be treated by reformulating the method of [11] in the dual space. Appendix A. Standard semi-static hedging under Lévy processes We consider the down-and-in option (with the value function) V (G; H; t, X t ) with the barrier H = e h . Let τ := τ h be the first entrance time by the Lévy process X with the characteristic exponent ψ into (−∞, h]. If τ < T , then, at time τ , the option becomes the European option with the payoff G(X T ) at maturity date T . The standard semi-static hedge is based on the assumption that there exists β ∈ R such that, for any stopping time τ , If (A.1) holds, we consider the European option V (G ex ; t, X t ) of maturity T , with the payoff function If τ > T, the down-and-in option and the European option expire worthless. At time τ , the option values coincide on the strength of (A.1) provided X τ = h because But the assumption X τ = h means that there are no jumps down. In a moment, we will show that then there are no jumps up as well. Let G(x) = (e x − K) + or, more generally, letĜ(ξ) be well-defined in the half-plane {Im ξ < −1} and decay as |ξ| −2 as ξ → ∞ remaining in this half-plane. Let ψ be analytic in a strip S (λ − ,λ + ) , where λ − < −1. Take ω ∈ (λ − , −1), denote x = X τ , and represent the LHS of (A.1) in the form The RHS of (A.1) can be represented as the repeated integral if ω ∈ R can be chosen so that the repeated integral converges (this imposes an additional condition on ψ, which will be made explicit in a moment). Changing the variable 2x − y = y , and then −ξ − iβ = ξ , we obtain We see that ω and β must satisfy −ω − β ∈ (λ − , −1), and, comparing with (A.3), we see that the characteristic exponent ψ and β must satisfy Given a class of Lévy processes, the equation (A.4) imposes one condition on the diffusion part, and the second condition on the jump part; hence, the dimension of the admissible parameter space drops by 2 if there are both diffusion and jump component, and by one if there is only one of these components. If X is the BM with drift µ and volatility σ, then (A.4) is equivalent to β = −µ/(2σ 2 ). Equivalently, ψ(−i) = 0 (the stock is a martingale), hence, δ = r (the dividend rate equals the riskless rate). In the case of the BM with embedded KoBoL component: the conditions become: 1) c + = c − (hence, either there are no jumps or there are jumps in both directions), and ν + = ν − ; and 2) β = −µ/( We see that if there is no diffusion components, then the "drift" µ = 0, and if there is a diffusion component, and µ > 0 (resp., µ < 0), then λ + > −λ − (resp., λ + < −λ − ), which means that the density of jumps decays slower in the direction of the drift. We finish this section with a discussion of a possible size of hedging errors induced by the assumption that the process does not cross the barrier by a jump when, in fact, it does. In the case of the down-and-in options the expected size of the overshoot decreases when λ + increases; in the case of up-and-in options, −λ − increases in absolute value. 
Hence, if the diffusion component is sizable: σ 2 > 0 is not small, then the condition β = −µ/(2σ 2 ) = −λ + − λ − implies a strong symmetry λ + ≈ −λ − of the positive and negative jump components. If σ 2 is small, then a strong asymmetry is possible; but then µ must be very large in absolute value. To sum up: the conditions on the parameters of the model which allow one to formally apply the semi-static hedging procedure to Lévy processes with jumps are rather restrictive. Appendix C. Sinh-acceleration in the Laplace inversion formula Let σ, ω , b > 0 and σ − b sin ω > 0. Introduce the function and denote by L 0 = L 0 (σ, ω , b ) be the image of R under the map χ 0 (σ, ω , b , ·). We fix ω ∈ (0, π/4), k d ∈ (0, 1), and set d = k d |ω |, γ ± = ω ± d . If the contour L(ω 1 , ω , b) with ω > 0 is used to calculate the Wiener-Hopf factors, we set d = k d |ω | and choose ω ∈ (0, π/4). Denote by S the image of the strip S (−d ,d ) under the map y → χ 0 (σ, ω , b , y) and by S the image of the strip S (−d ,d ) under the map y → ψ(χ(ω 1 , ω , b , y)). We choose the parameters of the deformations so that the sum S + S = {q ∈ S, η ∈ S } does not intersect (−∞, 0] in the process of deformation of the two initial lines of integration; then the initial formulas (4.8)-(4.9) can be applied (provided ξ is below S ). The other formulas for the Wiener-Hopf factors above can be applied under weaker conditions. See [13,53] for details in the similar case of the fractional-parabolic deformations. The necessary condition on the pair ω and ω , which ensures that the intersection of S + S with the exterior of a sufficiently large ball in C does not intersect (−∞, 0]. At infinity, S stabilizes to the cone C ∪C, wherez denotes the complex conjugation, and S also stabilizes at infinity to a cone of the form C ∪C , but the description of C is more involved than that of C. We need to consider 3 cases: (1) ν ∈ (1, 2] or ν ∈ (0, 1] and µ = 0; (2) ν = 1 and µ = 0; (3) ν ∈ (0, 1) and µ = 0. (3) Formally, we have the same condition as in (2), with |ϕ 0 | = π/2. Clearly, this condition fails for any positive |ω | and ω . Hence, in this case, it is impossible to use the sinh-acceleration w.r.t. q and η and apply (4.8)-(4.9). However, it is possible to choose the deformations so that S+S 0. If, in addition, the parameters are chosen so that, for q, η of interest, |1−Ψ(q, η)| < 1, then we can apply the sinh-acceleration in (B.2)-(B.5), and the sinh-acceleration in the integral for the Laplace inversion. Appendix D. Gaver-Stehfest method Iff (q) is the Laplace transform of f : R + → R, then the Gaver-Stehfest approximation to f is given by where M is a positive integer, and a denotes the largest integer that is less than or equal to a. Iff (q) can be calculated sufficiently accurately,then, in many cases, the Gaver-Stehfest approximation with a moderate number of terms (M ≤ 8) is sufficiently accurate for practical purposes even if double precision arithmetic is used (sometimes, even M = 9 can be used). However, it is possible that larger values of M are needed, and then high precision arithmetic becomes indispensable. The required system precision is about 2.2 * M , and about 0.9 * M significant digits are produced for f (t) with good transforms. "Good" means that f is of class C ∞ , and the transform's singularities are on the negative real axis. If the transforms are not good, then the number of significant digits may not be so great and may not be proportional to M . See [2].
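A minimal sketch of the Gaver-Stehfest inversion in its classical Stehfest form follows; this is an illustration, not the authors' code, and the weights below are the standard Stehfest weights rather than a transcription of the paper's formula: f(t) ≈ (ln 2/t) Σ_{k=1}^{2M} a_k F(k ln 2/t), where F is the Laplace transform, evaluated at positive q only.

```python
# Minimal sketch (not the authors' code): classical Stehfest form of the
# Gaver-Stehfest inversion, f(t) ~ (ln2/t) * sum_{k=1}^{2M} a_k * F(k*ln2/t).
# With double precision, M <= 7-8 is the practical range; larger M needs
# high-precision arithmetic, as noted in Appendix D.
from math import exp, factorial, log

def stehfest_weights(M):
    a = []
    for k in range(1, 2 * M + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, M) + 1):
            s += (j ** M * factorial(2 * j)
                  / (factorial(M - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        a.append((-1) ** (k + M) * s)
    return a

def gaver_stehfest(F, t, M=7):
    """Approximate f(t) from its Laplace transform F, using q > 0 only."""
    a = stehfest_weights(M)
    ln2_t = log(2.0) / t
    return ln2_t * sum(a[k - 1] * F(k * ln2_t) for k in range(1, 2 * M + 1))

# Sanity check on a transform with known original: F(q) = 1/(q+1), f(t) = exp(-t).
F = lambda q: 1.0 / (q + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, gaver_stehfest(F, t), exp(-t))
```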
2019-02-07T13:37:38.000Z
2019-02-06T00:00:00.000
{ "year": 2019, "sha1": "43bf261322b5d4f5619f691bbdc32d4971f1a5b6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1902.02854", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "43bf261322b5d4f5619f691bbdc32d4971f1a5b6", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Mathematics", "Economics" ] }
8566737
pes2o/s2orc
v3-fos-license
A user defined taxonomy of factors that divide online information retrieval sessions Although research is increasingly interested in session-based retrieval, comparably little work has focused on how best to divide web histories into sessions. Most automated attempts to divide web histories into sessions have focused on dividing web logs using simplistic rules, including user identifiers and specific time gaps. This research, however, is focused on understanding the full range of factors that affect the division of sessions, so that we can begin to go beyond current naive techniques like fixed time periods of inactivity. To investigate these factors, 10,000 log items were manually analysed by their owners into 847 naturally occurring web sessions. During interviews, participants reviewed their own web histories to identify these sessions, and described the causes of divisions between sessions. This paper contributes a taxonomy of six factors that can be used to better model the divisions between sessions, along with initial insights into how the divided sessions manifested in web logs. The factors in our taxonomy provide focus for future work, including our own, for finding practical ways to more intelligently divide and identify sessions for improved session-based retrieval. INTRODUCTION Recent research has moved beyond trying to provide optimal results for a current or evolving set of queries, towards trying to model and support a "search session" [25]. Bailey et al, for example, identified a number of sessions that typically last longer than 5 minutes, including: adult, how-to, and entertainment sessions [2]. Most current approaches to detecting the start and end of sessions, however, have used simplistic techniques, such as identifying users in search engine logs, and separating their activities by 25 minutes of inactivity [12]. While other papers have investigated alternative methods of identifying sessions in logs, such as modeling clear changes in the focus of queries [10], these papers have typically used an artificial corpus of uniform search sessions from TREC. Real sessions, however, are rarely uniform and human web behaviour is highly dynamic, and so this work focuses on understanding the full range of factors that relate to the boundaries of sessions. Much research has shown that people interleave many different activities within single web episodes [28,29,15,32], such as email, social networking, and information gathering. Further, research has shown that users also spread more notable tasks, such as vacation planning, research, and complex purchasing, across multiple sessions [13,19,16]. Despite aims to support these multi-session tasks, systems have struggled with multitasking [19] or have been retired [7]. 
Consequently, this research has sought to build a richer understanding of the factors that cause sessions to start and stop, by analysing 847 real web sessions, self identified by their owners in their own terms. In particular, our research questions were: RQ1) What factors affect the end of a session? RQ2) What factors relate to the start of new sessions? RQ3) What factors divide apparently single sessions? RQ4) What factors join two seemingly separate sessions? To better understand the boundaries of search sessions against the sessions they occured between, the study investigated all web sessions, including non-search sessions, from personal browser histories. We define "web sessions" as sessions of general web history from participants, "search sessions" as those web sessions that involve web search queries, and "browse sessions" as web sessions without web search. The following sections first present an overview of how sessions have previously been determined, analysed, and supported. Our interview study is then described in Section 3, and the results are presented as a taxonomy of six factors of session boundaries in Section 4. We conclude with a discussion about better modeling web session boundaries, regardless of their temporal relation to each other. RELATED WORK The notion of sessions started in the form of query sequences represented in search systems. Early work on DIA-LOG [31], for example, kept track of a searcher's queries and allowed them to reuse them by reference. Such systems were about supporting longer tasks within specific collections of documents, rather than web search, which is an aim still held by recent research (e.g. [24]) to support extended episodes of Exploratory Search [34] and sensemaking [27]. Investigations into web sessions, however, can be dated back to the mid 90s (e.g. [5]). Despite this history, there is increasing focus on web sessions, where search engines are keen to better support searchers who continue to search for more than a few queries or minutes [35,33,2]. Queries can be disambiguated, for example, given a user's query history, but more specifically against current queries if the bounds of the current session are known. Ozmutlu (2006) found about 28% of queries were reformulations of previous queries [23], while Jansen et al (2007) reported that about 37% of search queries were reformulations when repeated queries were not considered [12]. Similarly, query analysis in user experiments has also found that users are more likely to submit reformulations in more complex search tasks [18]. Despite these ideas, we still know very little about what constitutes a session, nor how to determine the start and end amongst the highly dynamic behaviours we exhibit online [29,28]. Determining sessions A number of researchers have generated definitions of a session using different delimiters such as cutoff time, query context, or even the status of the browser windows (e.g. [19]). In 1995, Catledge and Pitkow suggested a 25.5 minute "timeout", the time between two adjacent activities, was best to divide logs into sessions [5]. Although their research was focused on identifying contiguous periods of general web activity, rather than homogenious search sessions, their 25.5 minutes timeout has been used by many others. He and Goker later aimed to find the optimal interval that would divide large sessions, whilst not affecting smaller sessions [11]. Their analysis found that optimal timeout values vary between 10 and 15 minutes. 
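The fixed-timeout heuristics discussed above reduce to a very small amount of code, which is part of why they remain the default baseline. The sketch below is our illustration only, with an invented log format: it splits one user's time-ordered history at any gap longer than Catledge and Pitkow's 25.5 minutes.

```python
from datetime import datetime, timedelta

def split_by_timeout(events, timeout_minutes=25.5):
    """Naive sessionization: start a new session whenever the gap between
    consecutive events from the same user exceeds the timeout.
    `events` is a time-ordered list of (timestamp, url) pairs for one user."""
    timeout = timedelta(minutes=timeout_minutes)
    sessions, current, last_ts = [], [], None
    for ts, url in events:
        if last_ts is not None and ts - last_ts > timeout:
            sessions.append(current)
            current = []
        current.append((ts, url))
        last_ts = ts
    if current:
        sessions.append(current)
    return sessions

# Illustrative log: a 40-minute gap splits this history into two sessions.
log = [
    (datetime(2014, 3, 1, 9, 0), "https://www.google.com/search?q=matlab+error"),
    (datetime(2014, 3, 1, 9, 3), "http://stackoverflow.com/questions/123"),
    (datetime(2014, 3, 1, 9, 43), "https://mail.example.org/inbox"),
]
print(len(split_by_timeout(log)))  # 2
```

This is exactly the kind of single-threshold rule that the remainder of the paper argues is too simplistic for real, interleaved web behaviour.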
Spink et al [29] defined a session as the entire series of queries submitted by a user during one interaction with a search engine, and one session may consist of single or multiple topics. Their approach focused on topic changes rather than temporal breaks, yet "one interaction" was determined as a contiguous period. Going beyond simplistic time divisions, google defines a session boundary based on three issues: 1) 30 minutes interval, 2) end of a day, and 3) traffic source value change [6]. To summarise the different approaches used to define sessions, Jansen et al. provided a summary of the three most representative strategies [12]. As IP and cookies were utilised to identify a user, the most frequent strategies involve temporal cutoffs and topic change. Other surveys of session boundary detection methods have been provided by Wolfram [36]. Gayo-Avello [9] provided a comprehensive summary of previous search session detection methods involving both temporal and lexical clues based on query logs, however it only focused on the "consecutive" search activity without considering interleaving. Understanding sessions Taking a user-focused approach, Sellen et al [28] investigated the different activities that people perform online, including information gathering, browsing, transacting, communicating, and housekeeping. Many others have tried to categorise the types of activities, and thus perhaps sessions, that people engage in online (e.g. [15,32]). Broder divided web search behaviour into three main categories: navigational, informational, and transactional [3]. Although these types of taxonomies help us to understand the types of things people do online, they do not practically help search engines to identify and support real web sessions, because they are highly interleaved and dynamic in nature. Consequently, researchers resort to the techniques described above to divide search engine logs and investigate them. Understanding the nature of longer sessions, however, can help provide results relevant to the current session. In analysing Bing logs, Bailey at al identified several key examples of sessions that typically lasted more than a few minutes, or involved more than a short sequence of queries [2]. Their analysis showed, for example, the nature of adult search sessions, and other long sessions types including: researching how to do something, and finding pictures or watching entertaining videos. Elsweiler et al [8] also investigated these latter casual-leisure sessions, highlighting a) their tendency to be long, b) that participants continue to search despite already finding good results, and c) that participants typically stop when they cannot find good results. Further, Kotov et al analysed multi-session search tasks [16]. Such findings highlight the importance of providing relevant results for a whole session [25]. Supporting sessions While research contiunes trying to identify and determine sessions, researchers have used the available techniques to collate examples of sessions and find ways to better support them. The aim of the TREC session track, for example, is to improve retrieval accuracy over an entire session [14], rather than optimising for one query at a time, by taking into account recent query history and other logged behaviours. To do this, a series of real sessions were extracted from search engine logs, however they were identified using similar timeout techniques described above, and are typically homogenous in topic or style. 
Such corpuses of sessions, however, have allowed researchers to determine how to use query change to find possible session boundaries [10]. Conversely, Adeyanju et al [1] aimed to determine which pages people typically end up in during sessions. By identifying the likely session motivating the query, they can try to return key results earlier in the search, despite not being relevant to the earlier queries. Similarly, Raman et al [25] identified patterns for "how-to" searches, and aimed to return results that matched the likely phases of the sessions. Research has also produced systems that try to support searchers during their sessions. For a while, Yahoo! developed SearchPad, which provided searchers with a note-taking facility for use during longer sessions [7]. Work by Mackay and Watters aimed to support people in tasks that span multiple sessions, by allowing them to explicitly specify their current sessions in a tool bar [19]. Alternative approaches have tried to break web history into sessions in order to make them easier to review. SearchBar, for example, let people manipulate their search histories as being related to certain topics or sessions [20]. Many other browser extensions exist for sessions management and alternative views of web history. The research above reinforces that real web sessions are highly dynamic and that using notions of time gaps in search engine logs are likely to be too simplistic for automatic detection of session boundaries. Our research, therefore, is focused on understanding how real human web sessions, and their boundaries, relate to each other, and what factors must be considered to (semi-)automatically identify them. EXPERIMENT DESIGN To understand and characterise real web sessions, we employed similar interview methods to Sellen et al. [28]. 20 participants engaged in a 90-120 minute interview about their own web histories. To ground the interviews in real data, participants focused on printouts of their own web history, and we used the card sorting technique [26] to probe their mental models of sessions. Although these methods do not allow us to analyse web sessions at a large scale, they are conducive with building a better, richer understanding of web sessions and their boundaries, and so we can focus on insights rather than scale. The procedure was approved by our school's ethics committee and pilot tested. Procedure Preparation. Participants began by providing their web history and were advised to edit it in advance should they wish to keep some logged activities private 1 . These logs were gathered by importing their web histories into Firefox (if not already there), and creating an XML export using "History Export 0.4" 2 . This log was structured and pre-processed using a) automatic detection of search URLs, and b) manual identification of periods of interest to discuss in the interview. Examining History Logs. After providing demographic information, participants spent around 20 minutes examining the structured printout of their history, using a pen to mark out "sessions". During the study, the term "session" was left to be ambiguous as possible in order to avoid influencing their mental modal. The only precaution taken was to make sure participants did not simply categorise entries rather than identify sessions, i.e. simply classifying all social networking entries into one large 'social networking' session that spanned their entire log. 
Participants comprehensively identified sessions from the most recent 500 entries in their web history, which varied between 2 to 5 days of history, depending on the individual. Consequently, 10,000 history items were manually analysed into 847 sessions for later analysis. All participants also put around 10 sessions onto individual cards, unless single queries or similar in nature to previously carded sessions, for later sorting. Each card had a number, a title, activity purpose, included history items, and whether it had been completed successfully or not. Interview. After participants marked their own web histories, the interview began by discussing participants' session boundaries. Participants were given the chance to review each session boundary, however this discussion typically focused on unclear boundaries or sessions that the participants or researchers found interesting or worth discussing. This phase provided three benefits: 1) allowing the participants to review and revise session demarcations, 2) allowing the researchers to begin to understand the ways that participants understood sessions, and 3) supporting the participants to begin producing criteria for the subsequent card sorting. Card Sorting. The remainder of the interview involved first open, and then closed repeated single-criteria card sorting [26]. Open card sorting allowed the participants to classify and group the sessions according to their own ideas, whilst closed card sorting allowed us to make sure the following dimensions were considered: duration, difficulty, importance, and frequency. The interviews were audio recorded, and physical copies of the card sorts were kept for analysis.
1 Although this means we have likely missed common web sessions, like the lengthy adult sessions observed by Bailey et al [2], it was considered an important ethical provision.
2 addons.mozilla.org/en-us/firefox/addon/history-export/
Participants The 20 participants were recruited broadly from across a university in the United Kingdom, including students and staff from both non-technical and technical backgrounds. 9 were male and 11 female; all were aged between 18 and 30. 18 out of 20 said they search online every day, while the remaining two participants indicated they search online every 3 to 5 days. Participants were given £15 remuneration for the time they gave to the study. Analysis Three types of data were collected and analysed during the study: logs, interview data, and card sorts. Quantitative Analysis. We were able to produce summative data about 847 sessions, such as average size of sessions, temporal gaps between sessions, number of queries, and so on. Some dimensions of card sorts were also summarised, and used to summatively analyse the carded sessions. Qualitative Analysis. Interview data was transcribed from the audio recordings and was analysed using an open inductive form of Grounded Theory [30]. Initially, one interview was coded using open coding by two researchers, such that the process and focus of the coding could be discussed and compared. After discussing and reaching agreement on the focus of the coding process, the remainder of the interviews were analysed using open, axial, and selective coding, which were reflected upon at multiple stages as coding progressed. Codes were collected, given definitions, and associated with sample pieces of text, and then considered collectively. According to the Grounded Theory process, these codes were assessed in order to produce categories and then themes within the data. 
Disagreements were discussed carefully and codes were merged or divided as their definitions, and the definitions of the categories and themes, developed. A final taxonomy of the factors involved in differentiating between sessions is presented in the results. To assess the stability and reliability of the taxonomy, a copy was provided to an independent researcher, alongside a sample of 58 quotations from the text. The indepenent researcher firstly spent ten minutes reading and discussing the taxonomy, until they felt comfortable with each part. The independent researcher then categorised the 58 samples according to the taxonomy, which was compared to the categorisations chosen by one of the primary researchers. A Cohen's Kappa score of 0.796 was reached between the independent and primary researcher, which is considered as Substantial Agreement [17]. RESULTS The 20 participants identified an average 42.35 sessions each, creating a total of 847, which are summarised in Table 1. Sessions involved an average of 13.3 mins of active web behaviour (not including gaps), with a standard deviation of 31.25, but 33.9% of them lasted for less than 1 min and 76.9% of them lasted for less than the average length. 5.3% of them lasted for more than 60 mins, where the longest recorded session of web activity (excluding breaks) lasted for 303 mins. Sessions included an average of 10.1 history log entries, which we take loosely to be pageviews -although some dynamic updates may not have been captured. We divided these sessions into three sets: short -less than 15 mins, medium -between 15 mins and 60 mins (inclusive) and long -more than 60 mins; these are the median numbers in the duration definition of session given by participants. 601 of our sessions did not include a search query, which we call browse sessions. They were shorter than the average length of all web sessions and the vast majority (83.2%) were short, indicating a large proportion of short navigational episodes in our dataset. Notably, 3.7% of the browse sessions lasted longer than 60 mins, without a single query. 246 of the sessions involved search URLs and those were longer than the average of all sessions; we call these search sessions. Although it seems hard to have a long session without a query, the average number of queries for long sessions was 12.4, at around one for every 9 mins, as the average length was 104.1 mins. The longest search session lasted for 246 mins (4.1 hours), but had only 15 queries, which was one every 18.9 mins. Conversely the session with the largest number of queries, 42, lasted for only 190 mins (3.2 hours), which was one query every 4.5 mins. Only 21 sessions had more than one query per minute, and 19 of these we classed as short; none were classed as long. The time of the day for each session was also studied. As shown in the Figure 1, between 1-3am, people had more pageviews and queries than other times in search sessions. However, the duration of each single search sessions was lower than the average 12.0 mins. Therefore, the search sessions "before bed" involved more queries and more pageviews but took shorter time. The longer search sessions always happened in the morning between 8-9am, which also involved 6 queries per search session. The number of pages viewed in the morning was much lower than late at night. Participants spent longer viewing pages in the mornings, which Nettleton et al [21] said is indicative of 'good quality' search. Many of our participants took breaks during sessions. 
77 sessions involved inner-breaks longer than 10 mins, with an average length of 288.3 mins. 62 had inner-breaks longer than 1 hour and 3 of them even had day-break that was longer than 24 hours. Further, between sessions, 456 involved breaks of inactivity of less than 10 mins, leaving 378 that included more notable breaks. 302 of those had breaks lasting for more than 26 mins, indicating that simple divisions of logs, using 25.5 mins as proposed by papers like Catledge and Pitkow [5], would have only divided 35% of our sessions. In addition, more than 30% of our sessions involved discontinuous activities, with interruptions from other web activities or real-life (e.g. cooking), indicating that session identification is not only important for consecutive activities but also interleaving activities. Understanding Session Boundaries Our qualitative analysis identified 6 key factors that are involved in determining different sessions: Topic, Task, Phase, Group of People involved in the activity, Time gap, and Multitasking. Table 2 summarises these key factors, with detail about when they cause a change in session, when there is an exception, and when they override other factors. The Topic, Task, and Phase refer to the lexical clues about the activities grouped into sessions. The differences among them can be presented as: 1) Topic refers to the broad aim of a series of activities, which may consist of one or more specific tasks/phases; 2) Tasks related to one topic are contentrelavant, such as the different (task) questions search about (topic) "Java Programming", but not neccessarily sequential; 3) Phases related to one topic are sequential, e.g. booking a flight ticket (topic) may firstly browse cross sites (phase 1 -info gathering) to get the info before making the final decision and doing the transaction (phase 2 -transaction). These 6 factors are interrelated, and they represent common themes discussed by participants, rather than rules that can directly applied. It is not, however, a matter of how often each factor applies, but instead how much each factor applies at different times. Topic Change The main topic was found to be one of the primary session delimiters in this study. Topic typically refers to the main idea of a user's intention, or their higher-level Work Task [4]. It may consist of one or multiple specific Tasks or Phases. Creates a Session Boundary. Most participants discussed the topic of their work when marking boundaries in their web log. P14, in Table 2, said: "session 7 is about online shopping, and session 8 is related to my academic study, they are topical difference". Exception. However, if the topic is too trivial to be identified as a single session, topic-change may fail in causing session change and participants may just group trivial activities from different topics into one session instead: P8 said "I grouped all of these [free-browse online shopping, social network] into one session because they are just free browsing, I don't have any particular purpose, just to relax." In addition, if the topic is broad and its tasks are easily dividable, the tasks may be put into separate sessions rather than grouped together; described further in the "Task" section below. Overrides another Factor. Sometimes, topic may become dominant and override other factors. P14 said:"all of info search in trip plan to europe before 1st August should be put into one session, including accommodation, ticket, and places of interests searching." 
In this case, the session was expanded through days, and the large time gap was not considered as important as the connectivity of the larger topic of trip planning. Further, P14 bridged different specific tasks (accommodation, ticket booking, and places-of-interest searching) into one session because of the one topic, "trip plan", which overrides the Tasks "accommodation" and "ticket booking" discussed below.

Table 2. Summary of the six factors, with examples of when each factor creates a session boundary, exceptions, and when it overrides other factors.

Content-relevance factors:
• Topic change. Topic refers to the user's main intention, or higher-level work task, and may consist of one or more tasks or phases. Creates a boundary: users may start a new session when the topic shifts; P14: "session 7 is about online shopping, and session 8 is related to my academic study, they are topical difference." Exception: topic change may not lead to session change when the topic is too trivial or too broad; P8: "I grouped all of these [free-browse online shopping, social network] into one session because they are just free browsing, I don't have any particular purpose, just to relax." Overrides: topic may override other factors and join seemingly separate sessions, e.g. overriding timeout and task change; P14: "all of information search in trip plan to europe before 1st of August should put into one session, including the accommodation, ticket, and places of interests searching."
• Task change. One topic may span several specific tasks, e.g. corresponding to distinct specific questions related to a big topic. Creates a boundary: a big topic-based web activity may have multiple specific tasks, which are relevant but different to each other; P15: "all of the specific problems searching related to the topic 'Matlab' are put into separate sessions." Exception: task change may not lead to session change when the task is closely integrated or too small; P17: "these [topical-related] tasks are for different questions, but I want to group them together because some of them are just quick search and have only one query." Overrides: the task or phase may override other factors and bridge seemingly separate sessions together, e.g. overriding a time gap; P16: "when I did some search yesterday and continue doing some more search on the same thing, I will put them into one session. Even if they have longer time gap."; or overriding topic, as in the Matlab example from P15.
• Different phases. One topic may be made up of phases, which are more sequentially dependent than tasks. Creates a boundary: a topic-based web activity may be identified as multiple phases; P4: "In the flight ticket booking, Looking for information and final purchase are two different phases, because from checking price to purchase, I need a decision making and it takes time." Exception: phase change may not lead to session change when the phases are too small to have a separate session; P1: "Searches on 'Burn a DVD' has two parts: 'How to burn it' and 'a software resource searching', and I put them into one session, because they are relevant and I didn't spread many sites."

Other factors:
• Different people. The group of people involved in the activity, e.g. different collaborators or clients for different projects. Creates a boundary: when the group of people involved in the activity changes, it can indicate a session change; P11: "The gmail and uni emil should be put into different session, because I use the uni one to contact with my classmates and colleagues, and use the gmail for friends and family." Exception: some participants grouped all adjacent activities across different social networks together; P3: "so while waiting, I will [...] either to check my personal emails, or because I use google chat a lot, or facebook, chat with my uni friends, and all of these should be put into one session, because they are just a break for me". Overrides: people may override other factors and bridge seemingly separate sessions, e.g. overriding topics, task, and time; P6: "I put all of the web activities from the same mailbox into one session, because the people I contacted with via the same mailbox are from same group."
• Time gap. The time gap between web activity, traditionally the main technique used to divide sessions. Creates a boundary: when the time gap is big enough, depending on other factors such as task size and type of interruption; P6: "For the video, the acceptable time interval is less than 45mins. and for facebook, probably 1-2mins, and in academic search is less than 1h"; P15: "I put these two activities on one specific questions into one session, even they have more than 2 hours gap and it exceeds my acceptable time gap, but I knew it is interrupted by lunch." Exception: the time gap is not considered a factor in session division, especially for bigger, more important activities; P10: "I don't mind the time, because they are for the same purpose, it is the same duty. So I put them together."; P14: "because some of my information search may spread over days, for example, the information gathering on schengen visa takes me about three days, I will put all of them into one session even with days break." Overrides: the time gap may override other factors and bridge seemingly separate sessions together, such as the comments from P6 under the Multitasking factor.
• Multitasking. Sometimes, users may do multiple things concurrently. Creates a boundary: enough characteristically-diverse behaviour creates a session of "diverse activity", or a multi-tasking session, i.e. a session of unconnected web behaviour; P6: "I may feel borded when doing some task, so I probably stop and then go through my facebook, emails, and or stream to have a break. I will put all of these during that period into one session - break session." Exception: when the scale of interleaving activities is not trivial and they can be easily divided; P19: "my initial aim is to do academic info search then I switched to browse property info, and go back to academic again after a while. The property viewing in the middle should be put into a different session." Overrides: N/A.

Task Change
Participants often divided periods of activity into different tasks, where descriptions indicated that this was when these were more easily dividable, or larger in size. Creates a Session Boundary. Specific tasks can be used to divide web activities into separate groups, even if related to a big topic, and shifting between tasks may create a session boundary. In a big topic-based web activity, there may be multiple specific tasks which are topically relevant but different to each other, as in the "Matlab" example from P15 above. Tasks like searching "what does error XXX mean in Matlab" and "how to declare a variable in matlab" are both related to "Matlab" but were grouped into different sessions, because he thought they were two different "how-to" tasks. Exception. When the scale of the tasks for one topic is relatively small, participants were less likely to divide tasks into sessions, treating them instead as complementary or supporting "missions" to the main Topic.
A comment from P17 described that several difference but relevant tasks within single search query should be put into one session, because he thought a session with single item was meaningless. Overrides another Factor. Task is clearly related to main topic in some form, and so projects that try to model common tasks in sessions would help to determine thresholds and task detection. Task may override topic when they are easily dividable such as specific tasks in the "Matlab" example above, and it overwrites the rule of "putting topicrelated activities into one session". Task also has association with specific collaborators and time impacts, especially as they grow to the size of smaller topic sessions. Similarly, larger task sessions can also begin to tolerate brief divergent web activity or temporal gaps. P16 decided to group the continuous searching on one technical problem solving accross multiple days into one session, despite spanning overnight breaks, because the task was unchanged. The challenge in delineating between small but similar tasks, means that when trying to model human web sessions, systems may need to retrospectively consider relative thresholds before deciding if they were in the same session. Different Phases Some types of activities have clear phases [25], for which progression can be predictable. There may be multiple phases related to one topic, for example. Compared with "Tasks", they are more sequentially dependent with each other. Our participants also reported this behavior, adding weight to the idea of whole-session relevance. These phases can be hierarchical or sequential, and participants noted that one phase may affect the activity in another one. Creates a Session Boundary. One common example of this type was participants dividing periods of research and option comparison, as a separate session to then finding the best place to buy a product and then purchasing it. P4 said: "In the flight ticket booking, Looking for information and final purchase are two different phases, because from checking price to purchase, I need a decision making and it takes time." Another described a two contiguous activities involving banking and bill paying as separate phases and thus separate sessions. Exception. Not all phase-shifting lead to session changes. Like with Task, some participants said that although there were clear phases in the process, they were too small to be considered as separate sessions, as P1 said:"Searches on 'Burn a DVD' has two parts: 'How to burn it' and also 'a software resource searching', and I put them into one session, because I think they are relevant and I didn't spread many sites." The phases in this are easily identifiable, however, P1 thought the size of each phase was not big enough to warrant a separate session. P1 also said that if the downloading of software had involved learning and researching, that they would have become separatable phases, highlighting the importance of size and delineatable aims for phases as well as tasks. Overrides another Factor. It is feasible that phases are simply a sequential instance of tasks, but this was not easy to determine from the qualitative data collected. The finding does have implications for projects looking at supporting sessions with phases [25], which if grow to contain phases across separate sessions would have to adapt. People The group of people involved in the activity was also a common theme in the interviews, although heavily related to others like task. 
It is mainly applied in the online communication, e.g. the group of people a user communicates with via their email or social network. Related people could help identify a topic/task, but other contiguous periods of web activities were divided simply by the collaborators alone. Creates a Session Boundary. 70% participants preferred to put activities from different mailboxes and social networks into different sessions as they utilised them for contacting different groups of people. P11 described how contiguous use of email could be divided by people involved:"The gmail and uni emil should be put into different sessions, because I use the uni one to contact with my classmates and colleagues, and gmail for friends and family." Exception. A small number of participants grouped all of the activities across different social networks into one session when they were adjacent to each other. For example, P3 said: "so while waiting, I will [...] either to check my personal emails, or because I use google chat a lot, or facebook, chat with my uni friends, and all of these should be put into one session, because they are just a 'break' for me". Overrides another Factor. Sometime, the group of people may override other factors, such as talking to specific people about multiple topics or tasks. P6 grouped all of the web acitivities from one mailbox even within a big time gap or activity-differentiation into one session as P6 said: "I put all of the web activities from the same mailbox into one session, because I think the people I contacted with via the same mailbox are from same group." Time Gap As with most research in this area, Time gap has clearly been associated with methods to divide sessions. Large time gaps were repeatedly mentioned as separating sessions in our interviews, but the findings most notably highlight that they vary dramatically according to context. In addition, the type of web activity can affect the tolerance of temporal gaps. Creates a Session Boundary. Temporal gaps between activities were a common cause of separating sessions, whereas topics and tasks were frequently cited as anchoring sessions over a time gap. Large gaps, such as overnight breaks, usually divided sessions, P5 said:"I did academic search about the "Learning enviroment" in different days, and I put them into seperated sessions.". Further, the acceptable time gap varies from types of web activity and the length of invested time, and some people even suggested non-web-activity gaps from a real life interruption may need to be as long as a few hours to divide a session. P6 said:"For video, the acceptable time interval is less than 45mins. For facebook, probably 1-2mins, and in academic search is less than 1h", and P15 said:"I put these two activities on one specific questions into one session, even they have more than 2hs gap and exceeds my acceptable time gap, but I knew it is interrupted by lunch." Exception. In relation to other factors above, time did not divide temporally distant web activity, when another factor became the overriding one. P14 highlighted how a topic can tie over a large period of activity: "[...] because some of my information search may spread over days, for example, the information gathering on schengen visa takes me about three days, I will put all of them into one session even with days break." Overrides another Factor. 
The time gap may override other factors and bridge some separate sessions together, even when they are unrelated to each other, such as the "break period activities" from P6 above and he grouped all of the unrelated casual activities happened in the break period into one session because of the short time gap and trivial tasks. As a result, scaling the acceptable size of break in accordance with the size of the session determined so far might be a better way to model inactivity periods as a factor. Multitasking Participants frequently described activity in their logs as being caused by multi-tasking. Multi-tasking often accounted for divergent behaviour amongst larger sessions, however participants also entered states of multitasking. Creates a Session Boundary. Enough characteristicallydiverse behaviour creates a session of "diverse activity", or a multi-tasking session; a session of unconnected web behaviour. P6 said he may also check his facebook and email simultaneously to have a break, during the serious working period. In this case, he preferred to put the break activity inside of the working session. The model of this situation is similar with "one mainstream activity with some other trivial activities". Exception. When the scale of interleaving activities are not trivial and they can be easily dividable, e.g. P19 said "My initial aim is to do some academic information search, then I switched to browse property information, and go back to academic again after a while. The property viewing in the middle should be put into a different session." The model of this situation seems to be "two or more mainstream activities interleave with each other", and the topical difference causes the session division. Overrides another Factor. To handle multi-tasking, during other sessions (created by main Topic), some approaches have simply ignored them and focused on things that match the current topical focus of the session (e.g. [10]). People may multi-task in natural breaks, like between Parallel tasks. These approaches to avoiding session changes seem relevant, but for sessions that are identified for multi-tasking it would be important to learn to model them to avoid unwanted incorrect support. The multi-tasking factor perhaps best highlights the risks for supporting sessions. These comments and findings highlight that these 6 interrelated factors have effects on determining a user's session boundaries in different situations. The first three mainly focused on the content of the web activities and make the decison based on the lexical relavance. Telling the scale difference between them, however, is still a big challenge for deciding when tasks should become separate phases. The People factor is added mainly for the activities involving other people, such as the online communication via email or social networks, but because People can be closely related to different work activities, it can become a good indicator of web content. Time gap is a temporal technique applied in most existing research, but we find that a "fixed time gap rule" may be insufficient without modeling the size of sessions that precede them. Scale varies dramatically according to the feature of the activities themselves and also individual preferences. The final factor, Multitasking, reflects that human behaviour is also related to the session division. 
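Because the six factors are interrelated themes rather than rules, a first practical step toward using them is simply to record, for every candidate boundary, which factors applied and which were overridden. The sketch below shows one possible annotation structure for that purpose; the field names and example values are hypothetical illustrations, not drawn from the study data.

```python
from dataclasses import dataclass
from enum import Enum

class Factor(Enum):
    TOPIC = "topic change"
    TASK = "task change"
    PHASE = "phase change"
    PEOPLE = "people involved"
    TIME_GAP = "time gap"
    MULTITASKING = "multitasking"

@dataclass
class BoundaryAnnotation:
    """One annotated division (or non-division) between two stretches of web history."""
    participant: str                      # e.g. "P14"
    gap_minutes: float                    # inactivity between the two stretches
    is_boundary: bool                     # did the owner treat this as a session break?
    created_by: frozenset = frozenset()   # factors that produced the break
    overridden: frozenset = frozenset()   # factors that pointed the other way but lost

# Example: a multi-day gap that was nevertheless judged to be one session,
# i.e. the topic overrode the time gap (as in the visa-planning case described above).
joined = BoundaryAnnotation(participant="P14", gap_minutes=3 * 24 * 60,
                            is_boundary=False,
                            overridden=frozenset({Factor.TIME_GAP}))
print(joined.is_boundary, [f.value for f in joined.overridden])
```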
Understanding Sessions with Card Sorts To understand what people thought about different types of sessions, we first asked participants to sort their cards, or sessions, according to their own criteria. Table 3 shows the range of critieria chosen to use by participants in open card sorting. Nearly every participant began by using purpose to divide their sessions, creating groups like: work, entertainment, and social networking. From the open card sorting, we received some unexpected dimensions that differentiated sessions, such as Willingness to do the activity, and whether sessions involved refinding via bookmarks. Interestingness was an unexpected but commonly used dimension. Although interestingness was defined differently to willingness, the separation of sessions was similar. Although difficult to utilise directly, these different dimensions may help us to investigate other factors of session boundaries in the future. During closed card sorting, if not already used in the open process, we asked participants to divide their sessions according to the following 4 critiera as shown in Table 4: Importance, Frequency, Difficulty, and Perceived Length, in order to obtain the relation between people's perception on the session scale and those four dimensions. Further, we noticed that some classified sessions were longer or shorter than objective measurements. Importance, Frequency, and Difficulty Importance. Search sessions in the High Importance group were longer and had notably more queries but fewer pageviews than other groups. This indicates that the query number and the length of single pageviewing time may be an indicator of search session importance. Conversely, browse sessions in the High Importance group had many more pageviews than other groups, indicating that the pageviews is related to the importance of browse sessions. Frequency. Search sessions in the Low Frequency group had more queries, indicating that frequent searches had fewer queries. 35 out of 46 sessions in High Frequency group were browse sessions with longer than average length, perhaps because of some daily casual-leisure sessions like video streaming. Difficulty. The majority of High and Medium Difficulty groups were search sessions, which implicitly indicated that query input may lead to higher difficulty. The search session in the High Difficulty group lasted much longer, and had more queries and pageviews. However, longer length and more pageview,s in browse sessions, did not often lead to higher Difficulty, compared to the Browse sessions in the Medium Difficult group, which lasted longest and had far more pageviews. As a result, search session with more queries input may lead to higher importance and difficutly, but happen less frequently. Browse sessions with more pageviews may lead to higher importance, and their length does not seem to have notable differences. Perceived Length We built two main categories to analyse perceived length: sessions that were actually the type that they specified (Actual Short (AS) and Actual Long (AL)), and categories that were perceived to be long or short when objectively not in those categories (Perceived Short (PS) and Perceived Long (PL)), as shown in Table 4. Participants were more likely to over-estimate (45 PL) rather than under-estimate (8 PS) the session length. PS and AS. The sessions in PS were percevied as Short but their actual length were located in either the Medium or Long groups. 
First, their average length and pageviews were much higher than the sessions in the AS group. However, the query number in PS was lower than in AS. This indicates that query number in Search Sessions may affect the perceived length and lower query numbers may lead to under-estimation. PL and AL. The sessions in PL were percevied as Long but their actual length were located in either Medium or Short Length group. The sessions in AL laster much longer and had more pageviews, and had many more queries if it was a search session. There were no clear indicators for the reason causing the over-estimation of shorter sessions. PS and PL. In the search session comparison between these two, the query number and pageviews in PL were more than twice of these in PS, which indicates that query number and pageview may have effects on the length perception, and more queries and pageview may lead to over-estimation. Combining our Results Our main taxonomy presents the factors that were associated with the boundaries of sessions, which indicate that several factors may be relevant depending on the scale of the sessions being divided. Further, using objective analyses of these sessions, the Importance, Difficulty, and Frequency analysis indicated the query number, pageviews, and length may have some effects on the perception of activity scale. The "over-estimation" and "under-estimation" in the length of sessions, however, indicates that we should estimate perceived scale of session, combining time and activity as indicators of importance and difficulty, rather than objectively measure them directly. There are several insights that can be drawn from the taxonomy and session features. When thinking about the appropriate factors for different situations, the features of the sessions play an important role. For example, the tolerance of time gap for dividing sessions varies from types of web activity, and it was much higher for bigger and more important activities. In Search Sessions, we found a relationship between the number of queries and Importance, where more queries may lead to higher Importance and probably over-estimation on length. In Browse Sessions, more pageviews lead to higher Importance. It seems that search sessions with more queries, or browse sessions with more pageviews, should have higher tolerance for time gaps. In addition, from the study on time of the day in Figure 1, the search sessions that happened "before bed" typically had more queries, and may also lead to the higher tolerance of time gap. Similarly, number of queries was also a good indicator of frequent search sessions, where High Frequency sessions had fewer queries and Low Frequency sessions had more queries. This may help to identify routine activities that can be more easily bounded as common sessions. DISCUSSION -APPLYING THE TAXONOMY The results above have provided three perspectives on how to determine the start and end of web sessions: 1) a taxonomy of factors that relate to the boundaries of sessions, along with notable exceptions and overrides, 2) insights into how those sessions manifested in web logs, and 3) how users perceived and categorised these sessions. Core to our contribution is that these factors have not been determined by researchers, but elicited from the users who created them. In relation to the four RQs set out in the Introduction, we discovered that the triggers cause that start, end, divide, and join session are dependent on 6 inter-related factors. 
The priority of each factor involved in different session boundaries needs further study, as it is highly related to the scale of web activities themselves. Determining the scale of the activity could be one of the more challenging parts in future work. Although, for example, common triggers like large time gaps are typically considered to divide sessions, we saw sessions spanning overnight periods, if the nature of the work task was large or important enough. Secion 4.3 also presented some insights into, for example, what make a session important; search sessions seem to be considered important if they included more queries according to the data in Table 4, and more page views were indicative of important browse sessions. As the factors in our taxonomy were drawn from qualitative methods, and are abstract themes that each relate to the boundaries discussed by our participants, the subsequent challenge is to put the factors from our taxonomy into practice. It is not the case that simple rules can be derived from our six factors, as each factor may play some amount of influence on a session ending. Consequently, the challenge for applying the taxonomy is to learn how to measure and track each of these factors, and then to discover how their thresholds, in combination, create a boundary. This challenge is both the reason we cannot yet quantify the importance of each factor in our taxonomy, and thus the primary motivator of our on going and future work. Below, however, we provide an initial discussion of potentially applying our taxonomy. Implications for Systems As the taxonomy captures factors, rather than specific trigger events, putting our results into practice means not detecting events in specific factors, but monitoring each factor in combination. Time, for example, has been commonly modeled in research [5,11], but our findings highlight that timeouts are closely related to other factors, such as the size or importance of a task; the analysis of card-sorted sessions provided insights into the nature of important search and browse sessions. Conversely, however, many sessions changed without a notable time gap, based upon topic, phase, or task change. A more intelligent time gap calculation is required, as the initial finding from our study is that acceptable time gaps varied by query numbers in search sessions, and pageviews in browse sessions. Recent work has also studied topic change as a means to detect the end of sessions [10,23], but these only focused on consecutive search activity and did not consider any activity resumption and multitasking. From our study, session boundaries usually occured when the activity was either completed or interrupted. Participants reported being interrupted by a number of triggers, including: 1) non-web demands, such as sleeping or cooking; 2) internal demands, such as feeling bored and needing a break (e.g. social network); and 3) interruption from concurrent tasks. Different factors should be considered for each. For example, in 1), longer time gaps could be accepted around normal meal times and over night. This may be especially true if the system has identified the current task as being especially large, and detects the following morning's web activity as being related. In 2), multitasking, time gap, and size of task should all be considered, as users often grouped lots of small diverse trivial tasks activities during a break into one session. 
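One way to read the "more intelligent time gap calculation" suggested above is as a gap tolerance that grows with the activity already invested in the session: queries for search sessions, pageviews for browse sessions. The sketch below illustrates that idea only; the base tolerance and the per-query and per-pageview weights are invented placeholders, not values estimated from the 847 sessions.

```python
def gap_tolerance_minutes(num_queries, num_pageviews, base=10.0,
                          per_query=2.0, per_pageview=0.5, cap=12 * 60):
    """Illustrative adaptive timeout: the more invested the session so far,
    the longer a gap it is allowed to absorb before a boundary is declared."""
    tolerance = base + per_query * num_queries + per_pageview * num_pageviews
    return min(tolerance, cap)

def is_boundary(gap_minutes, num_queries, num_pageviews):
    return gap_minutes > gap_tolerance_minutes(num_queries, num_pageviews)

# A 30-minute gap ends a two-page navigational visit but not a query-heavy search session.
print(is_boundary(30, num_queries=0, num_pageviews=2))   # True
print(is_boundary(30, num_queries=8, num_pageviews=15))  # False
```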
The scale measurement of "Task", "Topic" and "Phase" is also challenging as it is highly dynamic and subjective, such as our user who was searching for matlab related content, but these spanned across separate tasks. Detecting the change of people involved in the activity as a factor may be one of challenging parts of applying the taxonomy, especially if this information is not accessible for search providers. The most obvious examples in our dataset were from users who engaged with notably separate groups via work and then personal email, even though the nature of the web activity appeared to be similar. Both of these two sessions may have covered several topics, and involved some web search as part of the response to emails. There are, however, means to determine a notable change of group of people. First, network analytics [22] may indicate when users notably switch between sections of their network, even within a single service. Further, document editors like Google Docs, may list specific collaborators in the permissions, and so it may be simple, in some cases, to detect notable switches. The largest challenge, in relation to People as a factor, however, is tying these people to other factors like topic, task, or phase. These may reinforce boundaries, where a user moves from one topic and set of collaborators to another. Conversely, they may conflate each other, with users covering several small tasks while working in their email client. Achievability One concern for the taxonomy is that some factors require access to data that services may not have, especially because it was based upon client data. Conversely, we argue that modern services should have access to each of these factors. Nearly all major search engines provide browsers, email services, social networks, document repositories, toolbars, etc that would allow them to monitor our six factors in combination. In fact, current major search engines are well placed to model sessions according to our factors and provide relevant results to dynamically evolving sessions. Individual Differences Personalisation may be an important aspect of applying the taxonomy. Within our study, we interviewed a range of technical and non-technical participants, where some users had much less overall web activity than others. Less-frequent web users, for example, typically indicated a higher tolerance of longer gaps, than those users with dense web history. Similarly, frequent technically-minded participants more commonly had smaller multitasking activities within or in parallel to larger tasks. The potential implication for systems, or search services, is that session detection needs to be relative to each user's normal web behaviour, however a much larger sample should be investigated to see whether these differences can be consistently and automatically identified. This issue highlights, though, that simplistic time-gap dividers, for example, have notable implications for session studies. CONCLUSIONS Supporting users with more session-relevant results is a common shared objective for IR (e.g. [7]), but with a lack of effective approaches to automatically determine sessions, most research has either focused on systems that let people explicitly label their sessions (e.g. [19]) or by presuming, for now, that sessions have been well determined (e.g. [10]). 
This research has focused on trying to develop our understanding of how real human web sessions relate to each other, such that sessions can be better identified, instead of using single naive measures like average timeouts. Our primary contribution is a taxonomy of six key factors that relate to the boundaries of sessions, with insights into how they relate, exceptions, and when they override each other. Beyond our primary contribution, we have also contributed an objective analysis of the 847 real human web sessions that were analysed when discussing the factors relating to session boundaries. Finally, by analysing our card sorting data, we have identified additional categorisations of sessions that are classified as difficult, important and frequent, and that participants perceived some types of tasks as longer. There are several avenues of future work that can build on our work, aside from the development of systems that attempt to implement a model based upon our taxonomy. First, our resource of 847 real sessions can be examined more comprehensively according to our taxonomy, and taxonomies from other papers like web activity [28] and casual leisure [8]. This process would help us to quantify and examine both the prevalence and the interrelation of our factors on a larger dataset. Further, it would be extremely valuable to investigate much larger search engine logs based upon our taxonomy, to detect their prevalence across many more users than we could study qualitatively in our interviews. ACKNOWLEDGEMENTS This work was partially supported by the EPSRC ORCHID project (EP/I011587/1).
Dynamic top-down biasing implements rapid adaptive changes to individual movements

Complex behaviors depend on the coordinated activity of neural ensembles in interconnected brain areas. The behavioral function of such coordination, often measured as co-fluctuations in neural activity across areas, is poorly understood. One hypothesis is that rapidly varying co-fluctuations may be a signature of moment-by-moment task-relevant influences of one area on another. We tested this possibility for error-corrective adaptation of birdsong, a form of motor learning which has been hypothesized to depend on the top-down influence of a higher-order area, LMAN (lateral magnocellular nucleus of the anterior nidopallium), in shaping moment-by-moment output from a primary motor area, RA (robust nucleus of the arcopallium). In paired recordings of LMAN and RA in singing birds, we discovered a neural signature of a top-down influence of LMAN on RA, quantified as an LMAN-leading co-fluctuation in activity between these areas. During learning, this co-fluctuation strengthened in a premotor temporal window linked to the specific movement, sequential context, and acoustic modification associated with learning. Moreover, transient perturbation of LMAN activity specifically within this premotor window caused rapid occlusion of pitch modifications, consistent with LMAN conveying a temporally localized motor-biasing signal. Combined, our results reveal a dynamic top-down influence of LMAN on RA that varies on the rapid timescale of individual movements and is flexibly linked to contexts associated with learning. This finding indicates that inter-area co-fluctuations can be a signature of dynamic top-down influences that support complex behavior and its adaptation.

Introduction
Complex behaviors depend on the coordinated activity of neural ensembles in distinct brain areas, yet much remains to be resolved about the specific behavioral functions of such multi-area coordination. Recent studies of trial-by-trial activity co-fluctuations between neuronal populations in connected brain areas have noted a link between changes to such coordinated activity and behavioral performance (Cowley et al., 2020; Koralek et al., 2013; Koralek et al., 2012; Lemke et al., 2019; Makino et al., 2017; Perich et al., 2018; Sawada et al., 2015; Veuthey et al., 2020; Wagner et al., 2019). Although some slower activity co-fluctuations may reflect task-independent, shared changes in a global internal state (Cowley et al., 2020), more dynamic co-fluctuations, varying rapidly on a moment-by-moment basis and linked flexibly to specific behavioral contexts, could be a signature of task-relevant influences of one area on another (Semedo et al., 2019).
One intriguing role for such dynamic inter-area influences has been hypothesized in the context of the learning and adaptation of skilled reaching behavior (Perich et al., 2018;Veuthey et al., 2020).Specifically, the joint evolution of inter-area activity co-fluctuations and behavioral performance over the course of learning has led to the suggestion that top-down inputs from higher-order areas dynamically implement learning-related changes in activity in primary motor areas.In principle, such topdown modulation during learning could operate to control adaptive behavioral modifications on the fine timescale of individual movements.However, the difficulty in identifying and quantifying discrete movement components during skilled reaching leaves unresolved whether such top-down influences operate on this rapid timescale during learning. Here, we take advantage of error corrective learning of adult birdsong-a complex learned behavior with quantifiable, rapidly varying discrete movements-to investigate whether inter-area activity co-fluctuations change during learning with the appropriate specificity to constitute such dynamic top-down commands.Previous work has led to the hypothesis that such error corrective learning depends on the moment-by-moment shaping of activity in primary motor areas by dynamic, top-down control.In particular, pharmacologically inactivating either the song-specific higher-order area LMAN or its synapses in the primary motor cortical analog RA (Figure 1A) transiently eliminates recently learned modifications to pitch (fundamental frequency, FF) of the individual 50-200 ms syllables that constitute adult song (Figure 1B, Andalman and Fee, 2009;Charlesworth et al., 2012;Tian and Brainard, 2017;Warren et al., 2011).This raises the possibility that LMAN provides topdown drive to RA to implement adaptive changes to behavior (Ali et al., 2013;Andalman and Fee, 2009;Aronov et al., 2008;Bottjer et al., 1984;Charlesworth et al., 2012;Doya and Sejnowski, 1998;Fee and Goldberg, 2011;Kao et al., 2005;Kearney et al., 2019;Nordeen and Nordeen, 2010;Olveczky et al., 2005;Scharff and Nottebohm, 1991;Tian and Brainard, 2017;Troyer and Bottjer, 2001;Troyer and Doupe, 2000a;Warren et al., 2011).Moreover, studies in sleeping and anesthetized birds have identified co-fluctuations in LMAN and RA activity (Hahnloser et al., 2006;Kimpo et al., 2003)-measured as increases in the cross-covariance between LMAN and RA activitywhich peak at a short LMAN-leading temporal lag, consistent with the possibility of a dynamic topdown influence of LMAN on RA.However, it remains unclear whether such LMAN-RA co-fluctuations are present during singing and, if so, whether they change during learning in a manner that could support a top-down role of LMAN in the adaptive adjustment of song parameters. 
LMAN-leading co-fluctuations of LMAN and RA activity are present during singing To determine whether the co-fluctuations in neural activity between LMAN and RA previously identified in sleeping and anaesthetized birds are present during singing, we first examined neural activity in both nuclei during baseline singing before the onset of training (Figure 1A-D).Consistent with prior studies in which LMAN and RA were recorded in separate birds (Aronov et al., 2008;Chi and Margoliash, 2001;Hessler and Doupe, 1999;Kao et al., 2008;Leonardo and Fee, 2005;Yu and Margoliash, 1996;McCasland, 1987;Olveczky et al., 2005;Sober et al., 2008), average activity in both areas exhibited consistent temporal structure aligned to ongoing song, peaking within 50 ms prior to syllable onsets (e.g., Figure 1D and Figure 1-figure supplement 1A-D).Additionally, we observed a close alignment between the patterns of activity in the two areas (Figure 1-figure supplement 1C-F).Despite such consistent temporal structure in average activity, we observed rendition-by-rendition variation in the patterning of activity in both nuclei, raising the possibility that this variation reflects co-fluctuations in LMAN and RA activity. We assessed co-fluctuations of LMAN and RA activity by measuring cross-covariance, a metric previously used in sleeping and anaesthetized birds (Kimpo et al., 2003;see Methods).For a given pair of LMAN and RA recordings (example in Figure 1C-E), LMAN-RA cross-covariance is a measure of the similarity of moment-by-moment fluctuations of LMAN and RA spike patterns away from their respective across-rendition means, computed as a function of the time lag between these patterns (Perkel et al., 1967).We computed the cross-covariance within a premotor window for each syllable, which we defined as a 100ms period centered 50 ms preceding syllable onset (Figure 1D), chosen based on empirical estimates of when LMAN and RA activity patterns maximally influence acoustic structure of the upcoming syllable (Fee et al., 2004;Giret et al., 2014;Kao et al., 2005;Kojima et al., 2018;Yu and Margoliash, 1996;Sober et al., 2008).In birds singing spontaneously at baseline ('undirected' singing produced in isolation), the LMAN-RA cross-covariance was on average positive during syllable premotor windows (example pair of sites in Figure 1E, cross-covariance computed by normalizing data to shuffled controls; summary across pairs of recordings and syllables in Figure 1F), similar to cross-covariance measurements in anesthetized and sleeping birds (Hahnloser et al., 2006;Kimpo et al., 2003).Although there was variation in the magnitude and time lag of cross-covariance peaks across syllables and recording sites (Figure 1F, light gray curves) there was a dominant positive peak in the average cross-covariance with LMAN leading at a short time lag (~3 ms, Figure 1F; Kimpo et al., 2003).This putative influence of LMAN on RA is consistent with the known presence of a direct excitatory projection from LMAN to RA (Bottjer et al., 1989;Kubota and Saito, 1991;Mooney and Konishi, 1991;Nottebohm et al., 1982) and a hypothesized role of LMAN in driving motor variability in RA during baseline singing (Giret et al., 2014;Kao et al., 2005;Kao and Brainard, 2006;Kojima et al., 2018;Olveczky et al., 2005;Stepanek and Doupe, 2010).This indication of LMAN-RA interactions during baseline singing raises the question of whether similar top-down signals from LMAN to RA operate during learning to instantiate rapid adaptive adjustments to individual 
syllables. LMAN-RA co-fluctuations are enhanced during learning To determine whether adaptive biasing of pitch during learning is accompanied by changes to co-fluctuations in LMAN and RA activity, we examined LMAN-RA cross-covariance while birds learned modifications to the pitch of individual 'targeted' song syllables in response to pitch-contingent white noise (WN) feedback (Figure 2A, B, see Methods; Ali et al., 2013;Andalman and Fee, 2009;Charlesworth et al., 2012, Charlesworth et al., 2011;Tian and Brainard, 2017;Tumer and Brainard, 2007;Warren et al., 2011).Across experiments, we found that LMAN-RA cross-covariance was enhanced following training (3-10 hr) with pitch-contingent reinforcement (Figure 2C-F, 'Target syllable'; also Figure 2figure supplement 1A) and this enhancement was maximal at a short LMAN-leading time lag (~2 ms, Figure 2D).Moreover, LMAN-RA cross-covariance increased with a similar time course to pitch modification (Figure 2G, H).In contrast, there were no detectable changes in LMAN-RA cross-covariance for non-targeted syllables (Figure 2E, F, 'Non-target syllables'), including those directly preceding the target syllable (Figure 2-figure supplement 1B).There was also no detectable learning-related difference between target and non-target syllables in changes in the overall level of activity within each nucleus (Figure 2-figure supplement 1C).The observations that increases in LMAN-RA crosscovariance are restricted to the target syllable and develop in parallel with pitch modification indicate that these changes are linked specifically to learning, and not to other processes such as arousal, nonstationarity or 'drift' of recorded units, or circadian variation in activity patterns. The strength of LMAN-RA co-fluctuations and the strength of adaptive motor bias are associated on a rendition-by-rendition basis If changes to LMAN-RA cross-covariance reflect top-down commands that adaptively bias RA activity and pitch during learning, then rendition-by-rendition variation in the strength of this signal should be associated with rendition-by-rendition variation in the magnitude of pitch shifts in behavior.To test this prediction, we took advantage of the rendition-by-rendition variation in pitch shifts that birds naturally exhibit (Figure 3A, note that this variation is reflected in the large spread of pitch values in both 'baseline' and 'trained' periods).We divided interleaved syllable renditions into two groups, based on whether each rendition's pitch was shifted in the adaptive direction by more or less than the median amount (Figure 3A).At the peak of learning ('trained' period) we found that renditions exhibiting stronger learning-related pitch shifts (stronger bias) also exhibited greater learning-related enhancement of cross-covariance (example experiment in Figure 3B; summary in Figure 3C; also see Figure 3-figure supplement 1A), regardless of whether birds had been trained to adaptively shift pitch up or down (Figure 3-figure supplement 1B).This finding that pitch modifications and LMAN-RA cross-covariance are linked on a rendition-by-rendition basis suggests that LMAN-RA co-fluctuations reflect a rapidly varying top-down signal that adaptively biases motor performance during learning. 
Learning-related increases in LMAN-RA co-fluctuations are context specific Prior studies revealed that pitch modifications for a given syllable are dependent on sequential context, or the sequence of syllables in which it is embedded (Hoffmann and Sober, 2014;Tian and Brainard, 2017); this prompted us to examine whether increases in LMAN-RA cross-covariance are similarly context specific.To do so, we took advantage of the specificity with which learning can be driven; when a syllable (e.g., B) is targeted in a specific sequential context (e.g., AB) and not in others (e.g., XB or YB), the resulting pitch modification is greatest in the targeted context (Hoffmann and Sober, 2014;Tian and Brainard, 2017).In a subset of experiments, we therefore provided WN feedback in such a context-specific manner, with target and non-target contexts naturally interleaved from rendition to rendition (Figure 4A, B).We found that average LMAN-RA cross-covariance increased over the course of training specifically for the target context, with no detected change for the same syllable in non-target contexts (Figure 4C; see Figure 4-figure supplement 1).Enhanced LMAN-RA cross-covariance is therefore flexibly linked to sequential contexts associated with learning. Adaptive bias is eliminated by disrupting LMAN activity in a narrow premotor window The close relationship between changes to LMAN-RA cross-covariance and behavior is consistent with a model in which LMAN provides a temporally localized top-down command that has a transient, adaptive influence on the immediately upcoming syllable.This model has remained untested in prior studies driving learning and then perturbing LMAN activity to causally probe its contributions to pitch shifts, as these studies have used pharmacological manipulations that act on the timescale of minutes, and thus lack the temporal resolution to assess the moment-by-moment relationship between neural activity and behavior (Andalman and Fee, 2009;Tian and Brainard, 2017;Warren et al., 2011). We reasoned that electrical microstimulation of LMAN could potentially enable a more temporally precise disruption of neural activity and thus test of LMAN's causal influence on RA.Stimulation of LMAN (Giret et al., 2014;Kao et al., 2005;Kojima et al., 2018) and interconnected song system nuclei (Ashmore et al., 2005;Fee et al., 2004;Vu et al., 1994) during baseline singing perturbs song acoustic features in a manner that is rapid and transient (timescale of 10 s of milliseconds).LMAN microstimulation-with intensity titrated to minimize overt behavioral effects during baseline singing (see Methods)-thus has the potential to disrupt neural activity that may convey a motor-biasing signal during learning. 
To evaluate the specificity of the top-down bias reflected in increased LMAN-RA cross-covariance, we stimulated LMAN within the target syllable's premotor window during a randomly interleaved 50% of renditions ('Stim'; see Methods) in 1-to 3-hr long blocks (Figure 5A, B).Compared to prior studies using spatially localized, unilateral microstimulation of LMAN (Giret et al., 2014;Kao et al., 2005), our stimulation experiments were designed to cause more global, bilateral disruption of LMAN (see Methods).We measured the effect of stimulation by comparing stimulated renditions to randomly interleaved control renditions for which stimulation was withheld.Before the onset of pitch-contingent WN training, stimulation caused modest changes to pitch that varied across experiments, resulting in only small changes to average pitch (example experiment in Figure 5C, D; summary in Figure 5E; see also Giret et al., 2014;Kao et al., 2005;Kojima et al., 2018).In contrast, during learning, stimulation caused pitch to revert systematically toward its baseline value (Figure 5C-E).This reversion is consistent with the interpretation that stimulation removes an adaptive biasing signal that develops during learning, with the remaining expressed pitch shifts reflecting learning that has transferred to the downstream motor pathway to become independent of LMAN (Andalman and Fee, 2009;Warren et al., 2011).Indeed, this occlusion of pitch modification via temporally precise disruption of LMAN activity was quantitatively indistinguishable from that caused by longer lasting pharmacological inactivation of LMAN in the same species and training paradigm (Warren et al., 2011;compare Figure 5E, F).Thus, our findings indicate that appropriately patterned LMAN premotor activity is required for adaptive biasing of vocal output during learning. To precisely localize the temporal window in which LMAN contributes to adaptively biasing motor output, we systematically varied the timing of stimulation relative to the target syllable, applying short stimulation trains (10 ms) at times ranging from −65 to +5 ms relative to the timepoint of maximal pitch modification (Figure 5G, H, average time of WN delivery labeled 'WN time'; Charlesworth et al., 2011).Stimulation caused reversion of learning only if applied within a narrow premotor window prior to pitch measurement (e.g., 25-45 ms before pitch measurement for the example experiment in Figure 5G, H).Across four experiments, the latency between stimulation and detectable pitch reversion ranged from 18 to 26 ms (Figure 5H, arrowheads), consistent with prior estimates of the premotor delay between LMAN activity and song (Giret et al., 2014;Kao et al., 2005;Kojima et al., 2018).Combined with prior findings from pharmacological perturbations within this circuit and of changes to LMAN-RA cross-covariance during learning, these microstimulation experiments indicate that LMAN exerts a moment-by-moment, top-down influence on RA to implement adaptive motor bias. 
Discussion Using paired recordings from LMAN and RA in singing birds, we identified a neural signature of a top-down influence of LMAN on RA, quantified as a short latency, LMAN-leading peak in the crosscovariance of neural activity.This LMAN-RA cross-covariance peak is present even at baseline prior to learning reinforcement-driven pitch modifications (Figure 1), consistent with an ongoing role for LMAN in driving rendition-by-rendition exploratory variation in behavior (Kao et al., 2005;Olveczky et al., 2005).Strikingly, during learning LMAN-RA cross-covariance strengthens in a premotor window closely linked to the individual movement (syllable, Figure 2), rendition-by-rendition variation in the magnitude of adaptive pitch modifications (Figure 3), and sequential context (Figure 4) associated with learning.Moreover, temporally localized perturbation of LMAN activity specifically within this premotor window causes rapid and transient occlusion of learned changes to pitch (Figure 5).Combined, these results indicate that LMAN enables learning by conveying a dynamic top-down command to RA that varies on the timescale of individual movements and is flexibly linked to contexts associated with learning. Adaptive bias through dynamic top-down influence on primary motor area output Our finding that LMAN-RA cross-covariance is a temporally precise neural correlate of adaptive behavioral modifications suggests a similar interpretation for previous work that identified enhanced interarea co-fluctuations during learning (Koralek et al., 2013;Koralek et al., 2012;Lemke et al., 2019;Makino et al., 2017;Sawada et al., 2015;Veuthey et al., 2020;Wagner et al., 2019).In particular, prior studies left unresolved whether such co-fluctuation signals reflect the moment-by-moment implementation of adaptive movements instead of other slower modulatory processes related to motivation, vigor, attention, motor preparation, or other kinds of global brain state changes (Cowley et al., 2020;Hikosaka et al., 2006;Ignashchenkova et al., 2004;Jaffe and Brainard, 2020;Kawashima et al., 2016;Müller et al., 2005;Noudoost et al., 2010;Sawada et al., 2015;Stavisky et al., 2017).Our results indicate that inter-area co-fluctuations in these and other systems may similarly reflect dynamic top-down shaping of primary motor output in driving behavioral adaptation.Momentby-moment and context-specific links between measures of inter-area co-fluctuations and behavior, similar to those we found between LMAN-RA cross-covariance and pitch shifts during learning, may be detected most readily for forms of learning that involve modification of specific movement parameters with high temporal (Kawai et al., 2015;Medina et al., 2005;Medina et al., 2000;Narayanan and Laubach, 2006;Rueda-Orozco and Robbe, 2015) or contextual specificity (Howard et al., 2012;Rochet-Capellan et al., 2012;Rochet-Capellan and Ostry, 2011;Wainscott et al., 2005), similar to song learning and adaptation (Charlesworth et al., 2011;Lipkind et al., 2017;Ravbar et al., 2012;Tchernichovski et al., 2001;Tian and Brainard, 2017;Tumer and Brainard, 2007). 
Neural mechanisms underlying an adaptive top-down influence of LMAN on RA Our finding of learning-related increases in LMAN-RA cross-covariance raises the question of what neural mechanisms instantiate this top-down bias during learning.A prevalent hypothesis is that topdown LMAN bias results from the reinforcement of LMAN activity patterns associated with successful behavioral variants (Andalman and Fee, 2009;Brainard and Doupe, 2000;Charlesworth et al., 2011, Charlesworth et al., 2012;Doya and Sejnowski, 1998;Fee and Goldberg, 2011;Gadagkar et al., 2016;Kao et al., 2005;Kearney et al., 2019;Singh Alvarado et al., 2021;Troyer and Bottjer, 2001;Troyer and Doupe, 2000a;Tumer and Brainard, 2007;Warren et al., 2011).Such reinforcement could lead to a variety of changes to LMAN activity patterns that could alter the influence of LMAN on RA, including population-wide changes in firing rate, altered correlational structure across LMAN neurons (Brown and Raman, 2018;Darshan et al., 2017;Kumar et al., 2010;Riehle et al., 1997;Woolley et al., 2014;Zandvakili and Kohn, 2015), or more complex or distributed changes to the temporal structure of LMAN neurons' firing patterns (Kao et al., 2008;Kao et al., 2005;Kojima et al., 2013;Olveczky et al., 2005;Palmer et al., 2021).We did not detect any systematic learning-related increases or decreases in average LMAN firing rates during learning (Figure 2-figure supplement 1C).Moreover, we did not find evidence that enhancement in crosscovariance during learning was specific to LMAN and RA sites that exhibited the highest baseline correlation with pitch (Figure 2-figure supplement 2).Both negative findings-although requiring corroboration using larger datasets-suggest that any changes to LMAN activity during learning may occur in a heterogeneous and complex fashion across neurons. That increases in cross-covariance can be observed at arbitrarily selected recording locations in RA and LMAN suggests that the associated changes in these areas are distributed across neurons.At least for RA, the possibility that activity changes are correlated on a rendition-by-rendition basis across neurons is supported both by the known architecture of RA in which the inhibitory inputs to multiple projection neurons are highly correlated (Miller et al., 2017;Spiro et al., 1999) and by an understanding that correlated activity of RA projection neurons supports the ability of these neurons to drive downstream structures and influence song (Sober et al., 2008). An alternative model is that top-down bias is implemented by altering the efficacy of LMAN synapses in RA-potentially in the absence of changes within LMAN-either by plasticity at these synapses, or by changes to neuromodulatory inputs to RA that act to alter the gain of an appropriate set of LMAN synapses in RA.Indeed, prior work has supported a role for noradrenergic (Sheldon et al., 2020;Solis and Perkel, 2006) and cholinergic (Puzerey et al., 2018) inputs to RA in modulating song.Overall, distinguishing between possible mechanisms for learning-related changes to LMAN's top-down bias will require elucidation by studies monitoring larger numbers of neurons during learning. 
A functional architecture with distinct substrates for fast and slow learning Our findings suggest that motor skill learning is generally supported by biasing signals generated by frontal circuits (such as the Anterior Forebrain Pathway that contains LMAN) separate from those that drive the core motor program.An advantageous feature of this hierarchical architecture may be to enable rapid behavioral modifications in early phases of learning via top-down signals that flexibly update in response to changing goals, contexts, and reward contingencies, while allowing for the stable representation of core motor programs in primary motor areas (Kim and Hikosaka, 2013;Miller and Cohen, 2001;Tian and Brainard, 2017). Animal subjects We used adult male Bengalese finches [Lonchura striata domestica, N = 8 (4 for recordings, 4 for microstimulation)] that were bred in our colony and housed with their parents until at least 60 days of age.During experiments, birds were housed individually in sound-attenuating chambers (Acoustic Systems) on a 14/10 hr light/dark cycle with food and water provided ad libitum.All experiments were performed on 'undirected' song (i.e., with no female present).All procedures were in accordance with protocols approved by the University of California, San Francisco Institutional Animal Care and Use Committee (Approval #: AN185512-02E). Pitch training paradigm using closed-loop reinforcement We used a modified version of EvTaf (Charlesworth et al., 2012, Charlesworth et al., 2011;Tian and Brainard, 2017;Tumer and Brainard, 2007;Warren et al., 2011), a custom-written Labview program (National Instruments), to monitor song and deliver WN feedback in a closed-loop fashion during training (EvTaf is available from the authors upon request) (Ali et al., 2013;Andalman and Fee, 2009;Charlesworth et al., 2012, Charlesworth et al., 2011;Tian and Brainard, 2017;Tumer and Brainard, 2007;Warren et al., 2011).Briefly, song was recorded with an omnidirectional lavalier microphone (Countryman), bandpass filtered between 75 Hz and 10 kHz, and digitized at 32 kHz.To detect a specific segment of song (i.e., within a targeted syllable) for targeted reinforcement, the spectrum of each successive 8ms segment of ongoing song was tested online for a match to a spectral template constructed to discriminate the targeted segment from all other song segments.Successful match was based on threshold crossing of the Euclidean distance between this song segment and template.A match signaled detection of the targeted syllable.FF, or pitch, of the matching segment was compared to an FF threshold (Tumer and Brainard, 2007).If FF was below threshold (in experiments driving upwards shifts in FF), or above threshold (in experiments driving downwards shifts in FF), WN feedback [WN, 40-60 ms at 90-95 dB(A)] was delivered with <1 ms latency.Feedback renditions were termed 'hits'. 
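As a concrete illustration of this feedback rule, the following Python sketch mirrors the decision logic described above. It is not the actual implementation (EvTaf is a LabVIEW program); the function names, the template-matching details, and the playback call are hypothetical stand-ins.

```python
import numpy as np

def template_match(segment_spectrum, template, dist_threshold):
    """Flag a detection of the target syllable when the Euclidean distance
    between the spectrum of the current 8-ms song segment and the spectral
    template falls below a threshold."""
    return np.linalg.norm(segment_spectrum - template) < dist_threshold

def should_deliver_wn(segment_ff, ff_threshold, train_direction):
    """Deliver white noise (WN) when the measured pitch (FF) fails to escape in
    the adaptive direction: below threshold when training upward pitch shifts,
    above threshold when training downward shifts."""
    if train_direction == "up":
        return segment_ff < ff_threshold
    return segment_ff > ff_threshold

# Hypothetical use inside the online loop over successive 8-ms segments:
# if template_match(spec, target_template, dist_threshold):
#     ff = estimate_ff(segment_audio)            # see the offline pitch analysis below
#     if should_deliver_wn(ff, ff_threshold, "up"):
#         play_white_noise(duration_ms=50)        # rendition counted as a 'hit'
```

The sketch only captures the decision logic; the sub-millisecond feedback latency quoted above depends on the real-time implementation rather than anything shown here.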
For context-dependent training, this paradigm was modified so that pitch-contingent WN was delivered only if the Target syllable was sung in a specific Target context, defined by the identity of the sequence of syllables directly preceding the Target syllable.Sequential context was defined for each rendition of the Target syllable and detected online by extending the spectral template matching algorithm (above) to detect conjunctions of the current and preceding syllables, using methods described in Tian and Brainard, 2017.All training trajectories for neural recordings were performed within a single day of continuous recording of neural and singing data.WN was turned on after sufficient baseline data were collected (15-40 song bouts).The average training duration was 6.4 hr (min, 3.0; max, 10.0) starting from when WN was turned on. For LMAN microstimulation experiments, the training duration was extended to up to 4 days with the goal of eliciting large shifts in pitch that would allow more robust measurement of behavioral effects of stimulation.This approach was chosen to match that of previous pharmacology experiments (Warren et al., 2011) which enabled us to directly compare behavioral effects of these different manipulations. For all pitch training experiments, we initially set the WN pitch threshold to a hit rate of ~70%.Successful learning leads to a progressive decrease in the hit rate; therefore, the pitch threshold was updated over the course of training to maintain a hit rate of 70%. Offline analysis of song For analysis of song recorded simultaneously with neural recordings, we used song acoustic data recorded using the same Intan acquisition system used for collecting neural data (see below), to ensure temporal alignment of neural and singing data.Audio signals were acquired with an electret microphone (CUI), amplified (MAX4466, Adafruit), and digitized at 30 kHz.For analysis of song during LMAN stimulation, we analyzed data saved using the Labview training program described above. Syllable pitch was calculated in the following manner (Charlesworth et al., 2011).For each syllable rendition, we calculated a spectrogram using a Gaussian-windowed (σ = 1 ms) short-time Fourier transform (window size = 1024 samples; overlap = 1020 samples; sampling rate = 30 kHz).Within each time bin, FF was defined as the frequency corresponding to peak power of the first harmonic, estimated using parabolic interpolation.FF for the rendition was then calculated as the mean FF across time bins for a fixed window defined relative to syllable onset.We similarly excluded introductory notes and call-like syllables, which both consist largely of broadband noise and lack well-defined pitch. 
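A minimal offline sketch of this pitch measurement is given below, assuming a mono waveform that starts at syllable onset and is sampled at 30 kHz. The harmonic search band (f_lo, f_hi) and the analysis window are illustrative parameters rather than values from the paper, and the published analysis was implemented in MATLAB.

```python
import numpy as np
from scipy.signal import stft
from scipy.signal.windows import gaussian

def syllable_ff(audio, fs=30000, win_ms=(20, 60), f_lo=1000, f_hi=4000):
    """Estimate fundamental frequency (FF) of a syllable rendition as the mean,
    over a fixed window relative to syllable onset, of the frequency of peak
    power of the first harmonic, refined by parabolic interpolation."""
    nperseg = 1024
    window = gaussian(nperseg, std=0.001 * fs)        # Gaussian window, sigma = 1 ms
    f, t, Z = stft(audio, fs=fs, window=window,
                   nperseg=nperseg, noverlap=nperseg - 4)
    power = np.abs(Z) ** 2

    # restrict to the band expected to contain the first harmonic
    # (f_lo and f_hi are illustrative; in practice set per syllable)
    band = (f >= f_lo) & (f <= f_hi)
    fb, pb = f[band], power[band, :]
    df = fb[1] - fb[0]

    ff_per_bin = []
    for col in pb.T:                                  # one column per time bin
        k = int(np.argmax(col))
        shift = 0.0
        if 0 < k < len(col) - 1:
            # parabolic interpolation around the peak frequency bin
            den = col[k - 1] - 2 * col[k] + col[k + 1]
            if den != 0:
                shift = 0.5 * (col[k - 1] - col[k + 1]) / den
        ff_per_bin.append(fb[k] + shift * df)

    # average FF over time bins falling inside the analysis window
    t_ms = t * 1000.0
    in_win = (t_ms >= win_ms[0]) & (t_ms <= win_ms[1])
    return float(np.mean(np.array(ff_per_bin)[in_win]))
```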
Electrode array microdrives for neural recordings Custom-built microdrives, inspired by Vandecasteele et al., 2012, were constructed to hold a custom-built array of tungsten electrodes.For one bird we used two 0.5 MOhm and two 6.0 MOhm electrodes in each array organized in a diamond formation (see below for spatial spread).For other birds we used four (diamond) or five (pentagon) 0.5 MOhm electrodes.In all cases, we used tungsten electrodes from Microprobes (WE30010.5Ffor 0.5 MOhm, and WE30016.0F for 6.0 MOhm).Microdrives consisted of a movable shuttle onto which electrodes were affixed, allowing manual adjustment of the position of electrodes along the z-axis with a resolution of 26.5 µm (one-eighth turn of a 00-120 screw).Electrodes were stabilized in the horizontal (x-y) plane by passing them through tightfitting polyimide tubes (0.0056″ ID, 0.00075″ WD) glued to the static parts of the drive.Silver wires (diameter, 0.003″ bare, 0.0055″ teflon-coated) connected each electrode to its pin on an Omnetics connector (A79042-001).Low impedance reference electrodes were made by cutting tungsten electrodes described above to a blunt tip.Electrodes for the two microdrives (LMAN and RA) were wired to different channels on the same connector. Electrodes were positioned in the array such that their tips were in the same horizontal plane.Electrodes were positioned so that there was greater spread along the anterior-posterior axis (0.4-0.5 mm) than the medial-lateral axis (0.2-0.3 mm) to account for the expectation of greater variability in targeting along the anterior-posterior axis, since position in this axis was further from stereotaxic zero than position in the medial-lateral axis (see below for coordinates). Implantation of recording microdrives Implants were performed in the left hemisphere in the following order within a single surgical session with birds under anesthesia: RA microdrive, LMAN microdrive, reference electrode, shared connector.All wiring of electrodes to connectors was completed before implantation. The location of RA was confirmed by electrophysiological targeting.Single carbon-fiber electrodes (Kation Scientific, Carbostar-1) were lowered gradually at candidate x-y locations to depths where RA was expected.RA was detected based on the presence of its characteristic tonic spiking activity (10-20 Hz).The x-y location of the center of RA was determined as where the extent of tonic activity extended over a depth of at least 400 µm; we performed up to three penetrations at different x-y locations to determine the best estimate of RA's center, which we found to be 2.05-2.15mm lateral, 0.04 mm posterior to Y 0 (i.e., caudal point of the intersection of the midsagittal and transverse sinuses), 2.75-3.15mm ventral to the brain surface, with beak angle at 42° [by definition, vertical (beak pointing down) was 0° and horizontal was 90°]. 
A microdrive was then implanted with electrode tips 1.2 mm above the dorsal edge of RA.Next, a microdrive was implanted over LMAN using stereotaxic coordinates: 1.37 mm lateral, 5.18-5.27mm anterior, 1.25 mm ventral, at a 50° beak angle.This depth was expected to place the electrode tips ~0.5 mm above LMAN.The reference electrode was implanted directly underneath the skull but with minimal penetration of brain, either over the cerebellum or directly between the LMAN and RA implants.The connector was fixed to the skull over the right hemisphere.Dental cement (Coltene Hygenic) was used to secure all implants to the skull; small holes were made in the upper layer of the skull into which dental cement could flow before curing to increase implant stability.A plastic tube, glued to the dental cement base of the implant and surrounding the implant, was used as a protective cap. Electrophysiological recordings After subjects recovered and were singing (1-2 days), they were tethered and then handled a few times a day for acclimatization.Recording sessions began a week or two after surgery after birds consistently sang within tens of minutes after being handled.Starting immediately after lights were turned on in the morning, we slowly lowered (<20 µm/s) the electrodes from their resting positions toward LMAN and RA.Localization within these nuclei was assessed by evaluating tonic activity (RA), songlocked firing rate modulation (LMAN and RA), and depth.Post hoc histological verification confirmed that recordings were within LMAN and RA (see below).At the end of each session, electrodes were raised to a position with the tip at >300 µm above the dorsal edges of RA and LMAN to minimize potential tissue damage within those areas.Recording sessions were separated by multiple days, and different depths were targeted, with no selection criteria (e.g., firing rate or baseline correlation with pitch) for units that were recorded. Voltage signals were measured using a homemade lightweight headstage (Intan RHD2132 amplifier chip) and the Intan RHD2000 Amplifier Evaluation System.Signals were amplified, filtered (1-12,000Hz pass band), and multiplexed on the headstage, then stored on hard disk for offline analysis. 
A total of seven birds received dual LMAN and RA implants. For three of these birds, we were unable to obtain any appropriate data due to song deterioration after surgery (likely due to damage of HVC, RA, or the HVC-RA tract). For analyses of LMAN and RA activity during baseline singing, we obtained recordings from the remaining four birds (including recordings performed separately in LMAN and RA), resulting in a total of N = 30 LMAN and 52 RA multi-unit sites (ranging from 3 to 10 LMAN and 5-20 RA sites per bird). Analysis of LMAN-RA cross-covariance in learning experiments included only concurrent LMAN and RA recordings collected over a day of learning. We excluded one of the four birds, for which we were only able to collect concurrent recordings from LMAN and RA during baseline singing, as signal strength had degraded (to the point where spikes were not detectable) before the rate of songs per day had recovered to a level sufficient for training pitch modification. This resulted in a total of N = 38 pairs of LMAN and RA sites across 11 learning sessions in 3 birds. We varied the target syllable and training direction across the experiments (learning sessions) performed for each bird. Neural analyses excluded the two syllables directly following the target syllable, to avoid potential acute startle or feedback effects that may occur due to WN (Sakata and Brainard, 2008; Sakata and Brainard, 2006), and syllables that were associated with movement artifacts either during baseline or training renditions (N = 3.3/10.8 syllables). In each training session we recorded 1-2 sites in LMAN and 1-5 sites in RA, and considered all pairs of these LMAN-RA sites for LMAN-RA cross-covariance.

Spike detection
Spikes were detected using the spike clustering software Wave_clus (Chaure et al., 2018) run on MATLAB. Briefly, we detected putative spikes by amplitude threshold crossing (threshold of 2.5-3.5 × SNR, minimum refractory period 0.2 ms), mapped those spikes onto a feature space defined by wavelet coefficients, and then clustered spikes in this feature space. The noise cluster was discarded, and all spike clusters were merged into a single multi-unit cluster.

Prior literature suggests that a substantial portion of the spikes we recorded was from excitatory projection neurons. First, for LMAN, the ratio of projection neurons to interneurons has been estimated at around 3:1 (Bottjer et al., 1998) to 15:1 (Livingston and Mooney, 1997). Further evidence that excitatory neurons are more easily detected than inhibitory ones comes from reports of difficulty finding interneurons in slice preparations (Boettiger and Doupe, 1998; Livingston and Mooney, 1997). One study reported that LMAN projection neurons verified as projecting to RA via antidromic stimulation have similar spiking activity to unverified ones, suggesting that extracellular recordings in LMAN tend to be dominated by projection neuron activity (Olveczky et al., 2005). For RA, the tonic firing we found in the multi-unit activity is characteristic of projection neurons (Leonardo and Fee, 2005; Sober et al., 2008; Spiro et al., 1999). Moreover, other studies isolating single neurons have reported a low probability of finding putative interneurons in RA (Leonardo and Fee, 2005; Sober et al., 2008); indeed, in slice preparations the ratio of projection neurons to interneurons in RA was found to be around 30:1 (Spiro et al., 1999).
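The amplitude-threshold stage of this spike detection can be sketched roughly as follows. The actual pipeline used Wave_clus, which additionally extracts wavelet features and clusters them; the median-based noise estimate below is a common convention and is assumed here, not taken from the paper.

```python
import numpy as np

def detect_spikes(filtered_trace, fs=30000, k=3.0, refractory_ms=0.2):
    """Detect putative spike times by amplitude threshold crossing on a
    spike-band-filtered voltage trace, enforcing a minimum refractory period."""
    # robust noise estimate via the median absolute deviation (assumed convention)
    sigma = np.median(np.abs(filtered_trace)) / 0.6745
    threshold = k * sigma                       # k of roughly 2.5-3.5 in the text

    above = np.abs(filtered_trace) > threshold
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1

    refractory = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -np.inf
    for idx in crossings:
        if idx - last >= refractory:            # keep only non-refractory events
            spikes.append(idx)
            last = idx
    return np.asarray(spikes) / fs              # spike times in seconds
```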
Analysis of temporal structure of LMAN and RA activity during singing To assess the temporal structure of activity separately in LMAN and RA, we computed the average firing patterns aligned to song motifs (stereotyped sequences of syllables), as shown in Figure 1figure supplement 1. Motifs were identified by visual inspection of song bout spectrograms (N = 8 motifs across 4 birds).For a given motif, the timing of syllable onsets and offsets can vary slightly across renditions and recording sessions.To account for this variation and temporally align activity, we linearly time warped all renditions to a 'reference' motif, constructed with syllables and gaps matching the median value of syllables and gaps for that specific motif across all of its renditions.This alignment was performed by shifting each spike so that its fractional time within its containing segment (i.e., syllable or gap) remained unchanged after time warping (Figure 1-figure supplement 1A). To compute a smoothed firing rate function, activity on each rendition was smoothed by first binning spikes (1 ms) then convolving with a Gaussian kernel (5 ms SD).Rendition-averaged activity was then z-scored to facilitate comparison across recording sites, which may have different firing rates, relative to the mean and SD over the entire motif. To assess the similarity of LMAN and RA activity patterns, we combined all recordings across sessions into separate LMAN and RA datasets, one for each motif, where activity was time warped, smoothed, and averaged over all sites (without first z-scoring) and then mean-subtracted before computing cross-correlation. For analysis of average LMAN and RA activity during baseline singing, to assess whether the average time lag of maximum LMAN-RA cross-correlation was significantly different from zero, we performed a Monte Carlo permutation test (Figure 1-figure supplement 1F).We computed a null distribution under the null hypothesis that LMAN and RA activity patterns have the same temporal profile relative to song.On each shuffle iteration, we randomized the assignment of rendition-averaged neural activity to brain region.We first generated a dataset consisting of average activity patterns, one for each combination of motif and brain area.On each shuffle iteration, we randomly reassigned the brain area labels; this was done independently for each motif, which ensures that the number of LMAN and RA sites assigned to each motif (and therefore bird) remained unchanged.LMAN-RA cross-correlations were computed with this shuffled dataset for 10,000 random permutations.The probability of finding in the shuffled dataset an absolute time lag equal to or greater than the absolute time lag in the real dataset was taken as the two-sided p-value. Normalized cross-covariance Normalized cross-covariance was computed for each pair of LMAN and RA sites for each syllable's premotor activity (neural activity extracted from 100 to 0 ms preceding syllable onset).This calculation was done separately for baseline and training.For baseline, we used the last half of renditions, to minimize potential drift in recordings from lowering electrodes into position at the start of the session.For training, we used the last quarter of the renditions to take the window of maximal learning. 
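As a rough illustration of the rendition alignment and firing-rate smoothing described in the preceding subsection, the following Python sketch assumes hypothetical arrays of segment (syllable and gap) boundary times for a rendition and for the reference motif; it is not the authors' code.

```python
import numpy as np

def warp_spikes(spike_times, rendition_edges, reference_edges):
    """Linearly warp spike times from one rendition onto a reference motif.
    Each spike keeps its fractional position within its containing segment
    (syllable or gap). Both edge arrays list segment boundary times and must
    have the same length."""
    warped = []
    for s in spike_times:
        i = np.searchsorted(rendition_edges, s, side="right") - 1
        if i < 0 or i >= len(rendition_edges) - 1:
            continue                                    # spike outside the motif
        frac = (s - rendition_edges[i]) / (rendition_edges[i + 1] - rendition_edges[i])
        warped.append(reference_edges[i]
                      + frac * (reference_edges[i + 1] - reference_edges[i]))
    return np.asarray(warped)

def smoothed_rate(warped_spikes, duration_s, bin_s=0.001, sigma_s=0.005):
    """Bin spikes at 1 ms and convolve with a Gaussian kernel (5 ms SD),
    returning an estimated firing rate in spikes per second."""
    edges = np.arange(0.0, duration_s + bin_s, bin_s)
    counts, _ = np.histogram(warped_spikes, bins=edges)
    t_k = np.arange(-4 * sigma_s, 4 * sigma_s + bin_s, bin_s)
    kernel = np.exp(-0.5 * (t_k / sigma_s) ** 2)
    kernel /= kernel.sum() * bin_s                      # normalize to spikes/s
    return np.convolve(counts, kernel, mode="same")
```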
Normalized cross-covariance was calculated in a similar manner to previous birdsong studies (Hahnloser et al., 2006; Kimpo et al., 2003). We first calculated the average cross-correlation:

$$CC(\tau) = \left\langle \frac{1}{T} \sum_{t=1}^{T} L_n(t)\, R_n(t+\tau) \right\rangle_n$$

where $L_n(t)$ and $R_n(t)$ are binned spike counts at time bin $t$ for LMAN and RA, respectively, $T$ is the number of time bins in a single rendition, $\tau$ is the lag between LMAN and RA data in units of time bins (2.5 ms), and $\langle \cdot \rangle_n$ indicates the average over all renditions. The cross-covariance was computed to estimate the extent to which deviations of firing rates in LMAN and RA from their respective means are associated:

$$CV(\tau) = CC(\tau) - \frac{1}{T} \sum_{t=1}^{T} \bar{r}_L(t)\, \bar{r}_R(t+\tau)$$

where $\bar{r}_L$ and $\bar{r}_R$ are the rendition-averaged firing rates in LMAN and RA. The second term on the right-hand side, $\frac{1}{T} \sum_{t} \bar{r}_L(t)\, \bar{r}_R(t+\tau)$, is the cross-correlation of the average firing rates, since it measures the average similarity of LMAN and RA activity while removing the contribution of shared, within-rendition variation in LMAN and RA. It can be estimated by calculating a shuffled cross-correlation (Brody, 1999; Hahnloser et al., 2006; Kimpo et al., 2003; Perkel et al., 1967). The shuffled cross-correlation (or 'shift predictor') is computed using data shuffled such that rendition $n$ for one recording site (either LMAN or RA) is compared to a temporally adjacent rendition (i.e., $n+1$ or $n-1$) for the other site:

$$SP_{L-R}(\tau) = \frac{1}{N-1} \sum_{n=1}^{N-1} \frac{1}{T} \sum_{t=1}^{T} L_n(t)\, R_{n+1}(t+\tau)$$

where $n = 1 \ldots N$ indexes the rendition, and the $L-R$ subscript indicates that each LMAN rendition ($n$) is chosen to be the one directly preceding the RA rendition ($n+1$). We also computed the $R-L$ shift predictor:

$$SP_{R-L}(\tau) = \frac{1}{N-1} \sum_{n=1}^{N-1} \frac{1}{T} \sum_{t=1}^{T} L_{n+1}(t)\, R_{n}(t+\tau)$$

The final shift predictor was the average of these two:

$$SP(\tau) = \frac{1}{2}\left[ SP_{L-R}(\tau) + SP_{R-L}(\tau) \right]$$

Subtracting the shift predictor from the average cross-correlation gives the cross-covariance:

$$CV(\tau) = CC(\tau) - SP(\tau)$$

in units of spikes$^2$. To rescale cross-covariance in units more easily comparable across the dataset, we normalized cross-covariance relative to the standard deviation of the cross-covariance across all the data points (i.e., each combination of rendition $n$ vs. rendition $n+1$) in the shuffled dataset. This effectively z-scores the cross-covariance relative to the mean and standard deviation of the shuffled distribution:

$$nCV(\tau) = \frac{CV(\tau)}{\sigma_{\mathrm{shuff}}(\tau)}$$

where

$$C^{\mathrm{shuff}}_n(\tau) = \frac{1}{T} \sum_{t=1}^{T} L_n(t)\, R_{n+1}(t+\tau)$$

and where $\sigma_{\mathrm{shuff}}(\tau)$ is the standard deviation of $C^{\mathrm{shuff}}_n(\tau)$ over all $n$ (from 1 to $N-1$), for the time bin $\tau$ for the shuffled data; the mean of $C^{\mathrm{shuff}}_n(\tau)$ over renditions $n$ (from 1 to $N-1$) is the shift predictor itself. In practice, shuffled renditions can be computed with either LMAN renditions preceding RA renditions or vice versa. We combined both kinds of shuffled renditions into a single set $\{C^{\mathrm{shuff}}_n(\tau)\}$. Cross-correlation functions were first linearly interpolated to 1 ms resolution and then smoothed with a kernel (SD = 5 ms) before converting to normalized cross-covariance. Calculation of cross-correlations was implemented using the MATLAB function XCORR (with SCALEOPT = 'unbiased'). Normalized cross-covariance was computed separately in 60-ms windows, sliding over 5 ms timesteps, in each syllable's premotor window (such that the earliest window spanned from 100 to 40 ms preceding syllable onset, and the latest window from 60 to 0 ms). The cross-covariance functions over all 60-ms windows were then averaged to generate one cross-covariance function for a given 100 ms premotor window.

To summarize the strength of the LMAN-leading peak in a scalar value, we took the average within a 15-ms window centered at the time lag of peak normalized cross-covariance at baseline across all syllables (3 ms, LMAN leading).
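For concreteness, the shift-predictor-based computation described above can be sketched in numpy as follows. The published analysis used MATLAB's XCORR; this is a rough re-expression of the same quantities, not the authors' code, and it omits the interpolation, smoothing, and sliding-window averaging steps.

```python
import numpy as np

def xcorr_pair(x, y, max_lag):
    """Cross-correlation of two binned spike-count vectors as a function of lag,
    normalized by the number of overlapping bins ('unbiased' scaling)."""
    T = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.empty(len(lags))
    for i, lag in enumerate(lags):
        if lag >= 0:
            a, b = x[:T - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:T + lag]
        cc[i] = np.dot(a, b) / len(a)
    return lags, cc

def normalized_cross_covariance(L, R, max_lag=20):
    """L, R: arrays of shape (n_renditions, n_bins) of binned spike counts for
    LMAN and RA within a premotor window (2.5-ms bins in the text). Returns
    lags and the cross-covariance z-scored against the shuffled (shift
    predictor) distribution."""
    n = L.shape[0]
    same = np.array([xcorr_pair(L[i], R[i], max_lag)[1] for i in range(n)])

    # shift predictor: pair each rendition with a temporally adjacent one,
    # in both directions, and pool all shuffled pairs into a single set
    shuf = []
    for i in range(n - 1):
        shuf.append(xcorr_pair(L[i], R[i + 1], max_lag)[1])
        shuf.append(xcorr_pair(L[i + 1], R[i], max_lag)[1])
    shuf = np.array(shuf)

    lags = np.arange(-max_lag, max_lag + 1)
    cross_cov = same.mean(axis=0) - shuf.mean(axis=0)
    return lags, cross_cov / shuf.std(axis=0)
```

In this sketch a positive lag means that RA activity follows LMAN activity, i.e., LMAN leading.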
For the analysis, splitting interleaved renditions by pitch into two groups ('Stronger' and 'Weaker' bias), we split renditions by comparing their pitch to the median pitch.If pitch deviated from the median pitch in the adaptive direction (i.e., escaping WN), then the rendition was considered to express stronger behavioral bias; if pitch was in the other direction, then weaker bias.This grouping was performed separately for baseline and WN renditions relative to their respective median pitch values, ensuring that the resulting groups consisted of interleaved renditions.We used the last half of the baseline and training renditions.For the 'Baseline' dataset, the adaptive direction was set the same as that for the 'Trained' dataset so that we could specifically measure learning-related change in the relationship between pitch and neural activity by subtracting baseline from training measurements.Normalized cross-covariance was computed as before, separately for each dataset group.Shuffled renditions were constrained to be only those pairs of renditions overlapping with the renditions included in a given group.For example, if rendition m for LMAN is included in the group, then the corresponding shuffle renditions will be LMAN rendition m vs. RA rendition m − 1 and LMAN rendition m vs. RA rendition m + 1.At dataset edges, when m = 1, then RA rendition N was used instead of m − 1 (which does not exist), and when m = N, RA rendition 1 was used instead of N + 1 (which does not exist). Electrical microstimulation of LMAN We used a modified version of EvTaf that enabled electrical stimulation to be delivered independently of WN feedback.In order to stimulate at a controlled time relative to the syllable targeted for learning, we detected 'predictor' syllables that consistently preceded the targeted syllable.We then delivered stimulation trains (60 ms duration, 200 Hz bilateral stimulation, centered at 50 ms preceding syllable onset) at a fixed delay from this detection so that stimulation began at a premotor latency prior to the WN trigger time.Stimulation renditions were randomly interleaved with catch renditions in which no stimulation was delivered.We stimulated a randomly interleaved 50% of renditions of the targeted syllable only during specific 1-to 3-hr intervals on specific days at baseline and training (1-3 days). Custom-made 4-wire Pt/Ir microwire arrays (Microprobe; 25 µm diameter, 400-800 kOhm impedance wires) were surgically implanted bilaterally.The arrays were targeted using stereotaxic coordinates for the center of LMAN.Wires were electrically connected to a male Omnetics connector (A8391-001) that enabled electrical connection to an external lead.The four wires in each array were arranged in a rectangular pattern (250-500 µm separation on the rostro-caudal axis, 250-500 µm separation on the medial-lateral axis) for all but one bird.Wire pairs at the same rostral/caudal or medial/lateral level were separated in depth by 0-250 µm.In one bird, the four wires were laid out in a linear array in which wires were separated by 250 µm along the medio-lateral axis, and neighboring wires were separated by 250 µm in depth.In the rectangular configuration, stimulation was between diagonal wires; in linear configuration, stimulation was between the two inner wires. 
After recovery from surgery, microstimulation trains (biphasic pulses, total biphasic pulse duration of 0.4 ms, 200 Hz frequency, 30-100 µA) were delivered to LMAN bilaterally by two separate microstimulators (A-M systems Model 2100).We adjusted various microstimulation parameters to globally disrupt LMAN activity without inducing the large, rapid deflections in pitch previously reported following unilateral LMAN microstimulation (Kao et al., 2005).First, we bilaterally stimulated LMAN to induce a more global activity disruption than effected with unilateral stimulation (Kao et al., 2005).Second, to perturb a large volume of LMAN rather than a specific area, we passed current between pairs of identical wires placed within LMAN (distance between electrodes ≥350 µm), rather than from a single electrode to ground; We selected pairs of arrays for stimulation which elicited minimal baseline pitch deviations.Finally, we set the amplitude of stimulation at a current level below the threshold at which song stoppages or degradation of syllable structure occurred. We calculated the time latency from short-duration LMAN microstimulation [three pulses for 10 ms (N = 3) or four pulses for 15 ms (N = 1)] to pitch reversion by comparing randomly interleaved unstimulated and stimulated syllables.For each syllable, we made a continuous measurement of pitch at a millisecond timescale (Charlesworth et al., 2011).We then calculated the time-varying pitch difference (or residual) of each stimulated syllable from the mean of the unstimulated syllables.We aligned these residuals according to stimulation onset to obtain the latency to pitch reversion, defined as the duration until the beginning of a 10-ms (or longer) time window in which the residuals were significantly shifted toward baseline pitch for all time bins. Post-mortem localization of recording and stimulation sites For both recording and microstimulation experiments, we marked the location of electrodes by first lesioning brain tissue and then performing histology to map those lesions relative to sites of recording (LMAN and RA) or microstimulation (LMAN) (see Figure 1-figure supplement 2).Lesions were performed by passing 100 µA current for 4 s.After lesions, birds were deeply anesthetized and perfused with 4% formaldehyde.Brains were removed and post-fixed for a few hours to overnight.We performed histology on sectioned tissue (40 µm thick, coronal).Electrode tips were localized by identifying lesions and tracts by identifying tissue damage.LMAN and RA were visualized by immunostaining for calcitonin gene-related peptide (Sigma, RRID: AB_259091, 1:5000 to 1:10,000) (Bottjer et al., 1997).For microstimulation experiments, we confirmed that lesions were in LMAN.For neural recording experiments, two lesion sites were made, one immediately dorsal and another immediately ventral to LMAN and RA, in order to retain the integrity of tissue within each area for histology.We confirmed in histology that lesions were indeed positioned dorsal and ventral to LMAN and RA, such that electrodes would be expected to be within these regions when at stereotaxic depths used during recordings. 
Statistical tests The main recording results were analyzed using mixed effects modeling to capture potential hierarchical effects based on experimental session, because a given experimental session may contribute multiple pairs of sites, which are not completely independent; in such cases we modeled responses (changes in LMAN-RA cross-covariance) with fixed effects for the covariate of interest, with random effects for intercept and the covariate of interest grouped by experiment ID. Figure 1 . Figure 1.LMAN-leading co-fluctuations of LMAN and RA activity are present during singing.(A) Schematic of song system circuitry implicated in the production and learning of birdsong.This study focuses on hypothesized top-down signals from LMAN (lateral magnocellular nucleus of the anterior nidopallium), the output nucleus of the Anterior Forebrain Pathway (blue), to RA (robust nucleus of the arcopallium) in the Motor Pathway (red).RA sends projections to brainstem motor nuclei.Recordings of multi-unit activity were made using multi-electrode arrays chronically implanted in LMAN and RA.DLM, medial dorsolateral nucleus of thalamus.HVC and Area X are used as proper names.(B) Spectrogram of a single bout of a motif 'ABBCD' A motif is a specific sequence of syllables that is consistently sung across bouts and is unique to a bird (see Methods).Scale bar, 500 ms.(C) Example paired recordings in LMAN and RA across four renditions (numbered to match those in panel D) aligned to the onset of a single syllable, filtered in the spike band (300-3000 Hz).(D) Raster plot representing spikes for the same pair of LMAN and RA sites shown in panel C, across 30 renditions, ordered chronologically and aligned to syllable onset.Above the raster plot are the mean (± SEM) smoothed firing rates.'Premotor window' refers to the time window of neural activity used for calculating the cross-covariance between LMAN and RA spike trains.(E) Calculation of normalized cross-covariance for the example sites and syllable in panels C and D. Top: cross-correlation between LMAN and RA was calculated using spike trains for both concurrent trials (Same trial) and a control dataset in which LMAN and RA activity patterns were shuffled between adjacent renditions ('Shuffled trials', mean ± standard error of the mean [SEM]).We used this shuffling procedure to estimate the cross-correlation between the mean activity patterns in LMAN and RA (i.e., eliminating the contribution of rendition-by-rendition variation that is shared between LMAN and RA, see Methods).Bottom: the mean of the shuffle-computed cross-correlation functions (Shuffled trials) was subtracted from the actual cross-correlation function (Same trials) to compute cross-covariance.We then divided the cross-covariance by the standard deviation of the shuffled cross-correlations to compute a normalized cross-covariance (measured in z-score).(F) LMAN-RA cross-covariance across all syllables and pairs of LMAN and RA sites during baseline singing.The light gray curves represent individual syllables (N = 27), averaged over all simultaneously recorded pairs of LMAN and RA sites (N = 38 total LMAN-RA site pairs, 3 birds).Mean ± SEM cross-covariance (across syllables, with each syllable contributing its mean over recording site pairs) is shown for individual birds (dark gray) and all data (orange).The online version of this article includes the following figure supplement(s) for figure 1:Figure supplement 1. 
Figure 2. LMAN-RA co-fluctuations are enhanced during learning. (A) Pitch training paradigm: white noise (WN) feedback is delivered during renditions of a specific target syllable when its pitch is below or above a threshold, depending on whether the objective is to train pitch shifts up or down; over the course of training (3-10 hr, mean ~6 hr), birds progressively modify pitch in the direction that escapes WN, so that the 'Trained' pitch distribution is shifted relative to 'Baseline'. (B) Summary of the magnitude of pitch change across experiments (mean ± SEM, N = 11 experimental trajectories over 3 birds); individual points represent the mean for individual birds; *p < 0.05, Wilcoxon signed-rank test. (C) Spectrogram of a song bout in an example experiment, with the syllable 'F' targeted with pitch-contingent WN. Scale bar, 500 ms; y-axis, 0.5-7.0 kHz. (D) Change in cross-covariance during learning for target syllables: mean ± SEM cross-covariance for the 'Baseline' and 'Trained' periods (last quarter of renditions during the training session) and mean ± SEM change over the course of training (N = 38 LMAN-RA site pairs, 11 experiments, 3 birds); black bars indicate time bins with values significantly different from zero (thin, p < 0.05; thick, p < 0.005, Wilcoxon signed-rank test). (E) Same as (D), but for non-target syllables; each experiment had one target syllable but multiple non-target syllables (mean, 4.2), so data were first averaged across non-target syllables for each LMAN-RA site pair (N = 38 pairs, 11 experiments, 3 birds). (F) Summary of the change in LMAN-RA cross-covariance during training for target and non-target syllables, computed for each site pair-syllable combination as the average change (Trained - Baseline) in a 15-ms window centered at the peak of the average end-of-training cross-covariance (−3 ms) [N = 38 (target) and 158 (non-target) combinations across 11 experiments in 3 birds]; *p < 0.05, mixed effects model (fixed intercept and effect of syllable type; random effect of intercept and syllable type grouped by experiment ID); # p < 0.05, mixed effects model (fixed intercept and random effect of intercept grouped by experiment ID). (G) Time course of pitch change: each training trajectory was binned into four stages (quartiles) with equal numbers of renditions, and the average pitch change across experiments is plotted per stage (N = 10 experiments in 3 birds, excluding one experiment with neural data recorded only during baseline and the end of training); spacing along the x-axis maintains the relative timing of stages (median rendition of stages 1-4 at 1.02, 2.35, 3.73, and 5.46 hr after the median baseline rendition); *p < 0.05 vs. 0, Wilcoxon signed-rank test. (H) Time course of the change in LMAN-RA cross-covariance for the target syllable for the same experiments (N = 37 LMAN-RA site pairs, 10 experiments, 3 birds); *p < 0.05 vs. 0, Wilcoxon signed-rank test; ## p < 0.005, last two vs. first two training quartiles. Figure supplement 1: Specificity of enhancement of LMAN-RA co-fluctuations during learning. Figure supplement 2: Lack of detected relationship between baseline activity-pitch correlations and learning-related changes in LMAN-RA cross-covariance.

Figure 3. The strength of LMAN-RA co-fluctuations and the strength of adaptive motor bias are associated on a rendition-by-rendition basis. (A) Example experiment plotting the pitch of individual renditions during baseline and at the end of training, in an experiment in which WN feedback targeted lower-pitch renditions (i.e., training an upward pitch shift). Renditions are split into 'stronger bias' and 'weaker bias' groups based on pitch deviation from the median, using the last half of baseline and training renditions; renditions in the two groups are interleaved in time and taken from periods with relatively stable pitch, so that any differences in LMAN-RA cross-covariance reflect rendition-by-rendition variation rather than slower learning-related drift. Each data point represents a single syllable rendition. (B) For the same experiment, LMAN-RA cross-covariance for individual site pairs, with each pair contributing a mean value for stronger-bias and weaker-bias renditions in both the 'Baseline' and 'Trained' periods (same renditions as in panel A). (C) Summary of the change in LMAN-RA cross-covariance at the peak of training relative to baseline (Trained − Baseline), measured separately for renditions expressing stronger or weaker bias (N = 38 LMAN-RA site pairs, 11 experiments, 3 birds); *p < 0.05, Trained − Baseline modeled with fixed intercept and random intercept grouped by experiment ID; **p < 0.005, Stronger − Weaker modeled with fixed intercept and random intercept grouped by experiment ID. Figure supplement 1: Robustness of the rendition-by-rendition relationship between LMAN-RA co-fluctuations and adaptive motor bias.

Figure 4. Learning-related increases in LMAN-RA co-fluctuations are context specific. (A) Schematic of context-dependent training: in this example experiment, pitch-contingent WN was provided for renditions of the target syllable (C) only in the 'Target' context (BC), and never when C was sung in 'Non-target' contexts (e.g., AC). (B) Spectrogram for the same experiment, illustrating pitch-contingent WN delivered only for syllable C in the target context BC; the first rendition of BC escaped WN because its pitch was higher than the threshold defined for this experiment. (C) Summary of the change in LMAN-RA cross-covariance during training (N = 24 LMAN-RA site pairs for the target syllable in the target context, 35 pairs for the target syllable in non-target contexts, and 112 pairs for non-target syllables; 7 experiments, 3 birds); **p < 0.005, mixed effects model (fixed effect of intercept and syllable type; random effect of intercept and syllable type grouped by experiment ID). Figure supplement 1: Learning-related increases in LMAN-RA co-fluctuations are context specific.

Figure 5. Adaptive bias is eliminated by disrupting LMAN activity in a narrow premotor window. (A) Schematic of electrical microstimulation in LMAN, used to disrupt LMAN activity at precise times during singing. (B) Experimental design: to test for a causal contribution of LMAN premotor activity to pitch modifications during learning, stimulation was either delivered ('Stim', 60 ms duration centered 50 ms prior to syllable onset) or withheld ('Catch') on randomly interleaved renditions; 'WN' marks the average time of WN feedback for this experiment. (C) Example experiment in which pitch was shifted away from baseline over 3 days, with LMAN microstimulation performed during 1-3 hr blocks on the baseline day and 3 subsequent training days, depicting pitch (mean ± SEM) for stimulated (Stim) and non-stimulated (Catch) renditions. (D) Scatterplot showing the pitch of individual renditions from the experiment in panel C on randomly interleaved stimulated and catch renditions during baseline and training day 3 (arrowheads, mean pitch); the magnitude of LMAN bias is estimated as the difference between Stim and Catch renditions. (E) Summary of effects of LMAN microstimulation on pitch across all experiments (N = 14 training sessions in 4 birds), plotted for baseline and training (mean over training days 1-3) for experiments training pitch up or down; *p < 0.05, t-test comparing Catch to Stim. (F) Same as panel E, but for experiments in which LMAN was inactivated pharmacologically using muscimol or lidocaine (Drug); pharmacological perturbation had quantitatively similar effects on pitch to electrical microstimulation. Data were previously published (Warren et al., 2011). (G) Schematic of experiments using short-duration microstimulation [durations 10 ms (n = 3 experiments) or 15 ms (n = 1)] applied at varying timepoints in the target syllable's premotor window, with stimulation onsets varied from −65 to +5 ms relative to the expected timepoint of WN delivery; three example microstimulation trains are shown, with onsets at −45, −35, and −25 ms. (H) Analysis of the temporal latency between short-duration microstimulation and pitch reversion: pitch reversion for the example experiment in panel G, measured in a single 8-ms time bin, as a function of stimulation onset time relative to the time of pitch measurement; arrowheads depict, for each experiment, the minimum latency between stimulation and significant pitch reversion, calculated from continuous pitch contours.
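For readers who want to reproduce the kind of mixed effects analysis described under 'Statistical tests' and in the Figure 2F and 4C legends, the sketch below shows one way such a model can be fit in Python with statsmodels. Column names (delta_cov, syllable_type, experiment_id) and the input file are hypothetical placeholders; this is an illustrative sketch, not the authors' actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per LMAN-RA site pair x syllable, with the change
# in cross-covariance (Trained - Baseline), the syllable type (target vs.
# non-target), and the experiment the observation came from.
df = pd.read_csv("cross_covariance_changes.csv")  # assumed file name

# Mixed effects model: fixed intercept and fixed effect of syllable type, with a
# random intercept and random slope for syllable type grouped by experiment ID,
# mirroring the model described in the Figure 2F legend.
model = smf.mixedlm(
    "delta_cov ~ syllable_type",
    data=df,
    groups=df["experiment_id"],
    re_formula="~syllable_type",
)
result = model.fit()
print(result.summary())
```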
2023-09-22T06:17:31.850Z
2023-09-21T00:00:00.000
{ "year": 2023, "sha1": "a60d07a9ca93e3f5de1e9cdb160c7ecc0a194722", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8d77a8cfc5f532e5fcdd65802affd1ad0fc71156", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
259588958
pes2o/s2orc
v3-fos-license
GOVERNMENT SUPPORT AND POLICY DESIGN TO IMPROVE MSME’S PERFORMANCE This study aims to analyze the effect of financial support and non-financial support from the Government of Pesawaran Regency on the performance of MSMEs in Pesawaran Regency. Furthermore, this study also analyzes the proposed policy model for the Pesawaran Regency Government to improve the performance of MSMEs. This study uses mixed methods, namely a combination of quantitative and qualitative analysis. Data in the quantitative analysis in this study came from primary data obtained through the distribution of questionnaires given to MSMEs in Pesawaran Regency, Lampung Province. The results of the questionnaire will be analyzed using the Structural Equation Model method using AMOS software. While qualitative data were obtained from interviews with informants which were then analyzed using the coding method with the N-Vivo software. The results of the analysis show that financial and non-financial support have an effect on the performance of MSMEs in Pesawaran Regency, Lampung. The most influential financial support is related to giving awards and appreciation to MSMEs that excel. The most influential non-financial support is related to digital technology coaching and training in business and entrepreneurship. This study proposes 2 policy models, the first model includes MSME training, capital assistance and incentives for MSME achievements, collaboration and building and developing marketing centers for MSME products in pesawaran. The second model namely INDIRA covers MSME Integration, Digitalization and MSME Regeneration. INTRODUCTION One of the drivers of the country's economy that has great potential to excel in Indonesia is Micro, Small and Medium Enterprises (MSMEs) (Madanchian et al. 2015;Matt & Rauch, 2020). Not only in Indonesia, MSMEs have an important role in the economy in various parts of the world, such as in Europe (Eurostat, 2018) in the United States (Eggers, 2020) to Africa (Abisuga-Oyekunle et al. 2020). MSMEs are a significant sector because they absorb the majority of the workforce and boost community entrepreneurship. In ASEAN (Association of Southeast Asian Nations includes Singapore, Brunei, Malaysia, Thailand, Philippines, Indonesia, Vietnam, Laos, Cambodia, and Myanmar) MSMEs control 97% of the company population (Matt & Rauch, 2020). So, it is not surprising that MSMEs have become a determining sector in the development of the country's and even the world's economy. Muliadi et al. (2020) stated that MSMEs have great potential in improving the economy due to several things, namely absorbing a large workforce, cultivating the entrepreneurial spirit of the community, ease in adopting new technology and business innovation, simple bureaucratic system and not too many employees so as to facilitate business management and high flexibility so as to survive in a dynamic business era. One proof of the important role of MSMEs in improving and developing the economy is that in several cases of the economic crisis in Indonesia, MSMEs were industries that were able to survive. The phenomenon of the ability of MSMEs to survive the crisis shows that their entities have opportunities and can trigger the development of other sectors to accelerate more (Tambunan, 2020). Even so, not all MSMEs can survive in the midst of a crisis, and there are several obstacles faced by MSMEs, especially after the Covid-19 pandemic (Athaide & Pradan, 2020). 
To optimize the development of MSMEs, the government must participate and create policies that support the progress of MSMEs (Kapur et al., 2023). The role of government support for MSMEs is evidenced in several empirical studies conducted by previous researchers. Alkahtani et al. (2020) found that government support in the form of financial and capital assistance could increase the competitive advantage of MSMEs. Another study by Razumovskaia et al. (2020) found that government support consisted of 4 types, namely tax support, capital loan support, administrative support and public funding. If the 4 types of support can be implemented properly, MSMEs will develop more rapidly. Furthermore, Arshad et al. (2020) and Nakku et al. (2020) divided government support for MSMEs into 2 types of support, namely financial support and non-financial support. Financial support includes funding, land and business premises as well as working capital. Meanwhile, non-financial support includes R&D, coaching, business assistance, distribution of raw materials and ready-to-sell products, marketing and networking. Several types of support have been empirically proven to be able to improve MSME performance. Although many previous studies have proven the significant influence of government support in developing MSMEs (Alkahtani et al. 2020;Arshad et al. 2020;Razumovskaia et al. 2020;Nakku et al. 2020), several studies have also found contradictory results. Nugroho (2015) analyzes the role of government support in increasing the readiness of MSMEs to compete and develop amid the development of technology and business innovation. The results show that government support has no effect on improving MSME performance. There is no influence from government support because the human resources in MSMEs themselves are still not ready and qualified to accept and learn new technologies and innovations. Therefore, in providing support to MSMEs, the government must ensure the readiness and capacity of MSME business actors first. Another study by Seo & Kim (2020) also found that government support in the form of export policies was not able to improve the performance of MSMEs. The research emphasizes that guidance and assistance to MSMEs is more important than policy. Likewise with research by Tende (2014) who found that the government's capital loan policy for MSMEs had no effect on improving MSME performance. There is an inconsistency in research results related to empirical studies of the effect of government support on MSME performance where most of the literature states that there is an effect of government support on MSME performance (Alkahtani et al. 2020;Razumovskaia et al. 2020;Arshad et al. 2020;Nakku et al. 2020), will but some other literature states that there is no effect of government support on the performance of MSMEs (Nugroho, 2015;Seo & Kim, 2020). This shows that there should be further studies related to this topic, so this study seeks to fill this research gap by further analyzing the effect of government support on the performance of MSMEs. Based on a literature review, it was found that the role of government support in improving the performance of MSMEs is quite large and there are still inconsistencies in the results of several previous studies. On the other hand, from the point of view of the reality that occurs in the in-depth field, related to the concept of government support that can optimally improve the performance of MSMEs also needs to be studied further. 
Therefore, this study seeks to analyze the aspects of government support that can improve the performance of MSMEs and to analyze in depth the concept of appropriate policies for developing MSMEs after the Covid-19 pandemic.

METHOD
This study uses mixed methods, a combination of quantitative and qualitative analysis. In the quantitative analysis, this study examines the effect of government support, which consists of two aspects, financial support and non-financial support, on the performance of MSMEs in Pesawaran Regency, Lampung. The qualitative analysis is used to identify government policy concepts for supporting MSMEs so that they develop more rapidly and are able to improve the community's economy. The data in the quantitative analysis came from primary data obtained through questionnaires distributed to MSMEs in Pesawaran Regency, Lampung Province. The questionnaire results were analyzed using the Structural Equation Model (SEM) method with AMOS software, while the qualitative data were obtained from interviews with informants and analyzed using the coding method with N-Vivo software. The population in this study consisted of MSMEs in Pesawaran Regency, Lampung. The sample was determined using a purposive sampling technique with the following criteria: 1. the MSME has been registered with the Pesawaran Regency Government; 2. the MSME is at least 1 year old; 3. the MSME has employees. According to Hair et al. (2017), if the sample size is too large it will be difficult to obtain a suitable model, and an appropriate sample size of between 100-200 respondents is recommended so that the estimates can be interpreted with SEM. For this reason, the number of samples was determined based on a minimum sample calculation. The minimum sample size for SEM according to Hair et al. (2017) is (number of indicators + number of latent variables) x (5 to 10). Based on this guideline, the minimum sample size for this study is (6 + 2) x 10 = 80 respondents; the researchers used a sample of 170 MSME managers. Furthermore, the qualitative data in this study were obtained through interviews with informants. The informants are people who have the authority to adopt MSME development policies as well as MSME actors themselves. The sources in this study are: 1. 2 people from the Office of Cooperatives and SMEs; 2. 11 Village Heads in Karawang Regency; 3. 11 MSME association managers from each village.

RESULTS AND DISCUSSION
The first stage in this study was a quantitative analysis to determine the effect of government financial and non-financial support on the performance of MSMEs in Pesawaran Regency. The analysis used to test the hypotheses is the Structural Equation Model (SEM) estimated with AMOS 24 software.

Validity Test
Validity indicates the accuracy of the data in representing the variables or indicators in the research. In an SEM analysis, Hair et al. (2011) categorize an indicator as valid if it has a loading factor value of > 0.5; indicators with values below 0.5 are excluded from the analysis. The results showed that there were 3 invalid indicators, namely FS3, FS5, and NFS12, so they had to be dropped from the research model.
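As a small illustration of the sample size rule quoted above, the sketch below computes the minimum sample for this model. The counts of indicators and latent variables are the ones used in the paper's own calculation; the function name is ours.

```python
def minimum_sem_sample(n_indicators: int, n_latent: int, multiplier: int = 10) -> int:
    """Minimum sample size for SEM per the rule cited from Hair et al. (2017):
    (number of indicators + number of latent variables) x (5 to 10)."""
    if not 5 <= multiplier <= 10:
        raise ValueError("multiplier should be between 5 and 10")
    return (n_indicators + n_latent) * multiplier

# Figures used in the paper's own calculation: (6 + 2) x 10 = 80 respondents,
# while the study ultimately surveyed 170 MSME managers.
print(minimum_sem_sample(n_indicators=6, n_latent=2, multiplier=10))  # -> 80
```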
After the 3 invalid indicators were dropped, the validity test results are shown in Table 1. Table 1 shows that all remaining indicators in this study have a loading factor value of more than 0.5, so all indicators can be considered valid.

Reliability Test
Reliability is the accuracy of a measuring instrument and its consistency in making a measurement. Construct reliability is considered good if the construct reliability value is >0.7 and the variance extracted value is >0.5 (Yamin & Kurniawan, 2009). The reliability test results (Table 2) are as follows: Financial Support, construct reliability 0.9, variance extracted 0.5, reliable; Non-financial Support, construct reliability 0.9, variance extracted 0.5, reliable; SME Performance, construct reliability 0.9, variance extracted 0.5, reliable.

Goodness of Fit
Furthermore, the fit of the confirmatory model was tested using the Goodness of Fit Index. The GoF test aims to determine how closely the observed frequencies match the expected frequencies. In this study, several criteria were taken from each type of GOFI, namely Chi-square, probability, RMSEA, and GFI representing absolute fit indices; CFI and TLI representing incremental fit indices; and PGFI and PNFI representing parsimony fit indices. The Goodness of Fit scores met all the criteria except the PGFI score, which had a marginal fit rating. However, according to Hair et al. (2010), a marginal fit value can still be tolerated.

Hypothesis Test
The next analysis is the full Structural Equation Model (SEM) analysis to test the hypotheses developed in this study. The results of the regression weight test are shown in table 5. The results of hypothesis testing can be assessed from the Critical Ratio (CR) and probability (P) values of the data processing results. The direction of the relationship between variables can be seen from the estimate value: if the estimate is positive then the relationship between the variables is positive, whereas if the estimate is negative then the relationship is negative. Furthermore, if the test results show a CR value above 1.96 and a probability value (P) below 0.05 (5%), the relationship between the exogenous and endogenous variables is significant. In detail, the research hypotheses are tested in stages according to the hypotheses that have been proposed. The results of the analysis in Table 4.21 show that: 1. Government financial support (FS) has a positive and significant effect on MSME performance (SP). These results are evidenced by a positive estimate value of 0.070, a t statistic above 1.96 (2.183), and a probability value below 0.05 (0.029), so H1 in this study is supported. 2. Government non-financial support (NFS) has a positive and significant effect on MSME performance (SP). These results are evidenced by a positive estimate value of 0.858, a t statistic above 1.96 (7.358), and a probability value below 0.05 (0.000), so H2 in this study is supported. Furthermore, testing the effect of the independent variables is used to determine the magnitude of the influence between variables, shown in table 4. Based on table 4, the direct effect of FS on SP is 0.070 while the direct effect of NFS on SP is 0.858. These results indicate that, to improve the performance of SMEs in Pesawaran Regency, Lampung, non-financial assistance is more influential and has a more positive impact than financial assistance.
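Since the text applies the standard cut-offs (loading factor > 0.5 for validity, construct reliability > 0.7 and variance extracted > 0.5 for reliability), the sketch below shows how these quantities are conventionally computed from standardized factor loadings. It is an illustrative calculation with made-up loadings, not the paper's AMOS output.

```python
import numpy as np

def composite_reliability(loadings):
    """Construct (composite) reliability from standardized loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2          # error variance of each indicator
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

# Hypothetical loadings for one construct after dropping indicators below 0.5.
loadings = [0.72, 0.81, 0.69, 0.75]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR = {cr:.2f} (cut-off 0.7), AVE = {ave:.2f} (cut-off 0.5)")
```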
These conditions indicate that what is most needed by SMEs today is non-financial support. The next analysis is related to the identification of indicators that have the highest points in the respondent's data. The analysis aims to describe which aspects are most needed and cared for by respondents and which aspects are not really needed and cared for by respondents. This analysis includes two variables, namely financial support and non-financial support. Financial support consists of 11 indicators and non-financial support consists of 13 indicators. The results of the analysis show that the indicator with the greatest influence value is FS10 and the indicator with the lowest influence is FS2. The lowest influence is shown in the socialization aspect, these results show that socialization is not the most urgent thing in the financial assistance provided. The greatest influence is shown in the aspect of awarding and appreciation for outstanding MSMEs which are included in the aspect of government incentives for MSMEs in Pesawaran Regency. These results indicate that the government needs to increase the awards and appreciation for MSMEs that excel. This can increase the motivation of MSMEs in developing their business and can create healthy competition for MSMEs. Non-financial support consists of 12. The results of the analysis show that the indicator with the greatest influence value is NFS6 from the R&D aspect and NFS10 from the coaching aspect. Then the indicator with the lowest influence is NFS3. The lowest influence is shown in the aspect of priority policies, these results show that priority policies are not the most urgent thing in the financial assistance provided. The biggest influence is shown in aspects of technology and entrepreneurship training for MSMEs. These results indicate that the government needs to plan and make policies related to the process of digitizing MSMEs and increasing entrepreneurial capacity for MSMEs. Qualitative Analysis Result This study proposes 2 policy models. The first policy model is training, capital assistance and incentives, collaboration with universities and private retailers and building a sales center for MSME products in pesawaran. Training The ongoing training program is training for micro-entrepreneurs who do not yet have a Household Industry Food Production Certificate (SPP-IRT). The activity was divided into 2 sessions, the first or 1st session was held at the Hanura Village Hall, Teluk Pandan District, where the 1st session involved business actors from 5 Districts (Teluk Pandan, Padang Cermin, Marga Punduh, Way Ratai, and Punduh Pedada). The second session is planned for July 2022 covering 6 Districts (Gedong Tataan, Negeri Katon, Tegineneng, Way Lima, Kedondong, and Way Khilau), the location will be adjusted. Furthermore, it is hoped that training for SMEs will be intensified again. This study proposes a training center for MSMEs in Pesawaran District. The training center is a training program for MSMEs that is carried out regularly and continues consistently. The focus of training topics for MSMEs is digitization and product marketing. Currently, Pesawaran MSMEs really need support from the government, especially from a non-financial aspect. Non-financial aspects include training, competency and insight development, collaboration with experts and other parties who can develop MSMEs as well as motivation instilled in MSMEs so that they have persistence in trying and maximum entrepreneurial spirit. 
This point is in line with the FGD conclusion expressed by the facilitator that "The problem is. From. The entrepreneurial spirit of Indonesian MSMEs needs to be improved. So if Jenengan's mother is persistent. The production has been maximized, the quality is maximum, the market will come. So the market will chase what products are really good. We know that there are many stalls in remote areas, but if it's good. Everyone is visited by products that are sold at home, they don't even have a shop. If it's good people will look for it. So indeed, from the side of SMEs from the side of the first SMEs. Improve yourself first (FGD Facilitator)". The results of the quantitative analysis in this study also support training for MSMEs. The greatest influence was shown in aspects of technology and entrepreneurship training for MSMEs. These results indicate that the government needs to plan and make policies related to the process of digitizing MSMEs and increasing entrepreneurial capacity for MSMEs. Capital Support The next policy is government capital assistance for MSMEs in Pesawaran. Access to financial support has basically been provided by the government as stated by the Head of the MSME Office that "The government has provided a lot of assistance to MSMEs, firstly related to capital assistance, secondly related to licensing, we have collaborated with several parties to make licensing easier. Furthermore, my input is that the mindset must also change so that capital is for business, not personal needs so that businesses are more organized and can be analyzed properly (Head of the MSME Office)". The statement from the head of the MSME office shows that the government has provided a lot of support in the form of capital and access to loans to MSMEs in Pesawaran Regency. However, the fundamental problem with MSMEs is not the capital aspect but the mindset and capital management that has been accepted. Some MSMEs still mix business and personal or family affairs so that the mixed financial management causes business stability to decrease. This policy covers two types of assistance provided to MSMEs, namely financial capital assistance and non-financial capital assistance. The emphasis on financial assistance is to continue existing policies, including directives to obtain bank loans and financial capital assistance programs from the government directly. On the other hand, policies that are more emphasized are related to non-financial capital assistance. One of the deficiencies in MSMEs is the weak entrepreneurial spirit, so the government will provide guidance and training to improve the entrepreneurial spirit and spirit of MSME business actors. Non-financial aspects include training, competency and insight development, collaboration with experts and other parties who can develop MSMEs as well as motivation instilled in MSMEs so that they have persistence in trying and maximum entrepreneurial spirit. This point is in line with the FGD conclusion expressed by the facilitator that "The problem is. From. The entrepreneurial spirit of Indonesian MSMEs needs to be improved. So, if Jenengan's mother is persistent. The production has been maximized, the quality is maximum, the market will come. So the market will chase what products are really good. We know that there are many stalls in remote areas, but if it's good. Everyone is visited by-products that are sold at home, they don't even have a shop. If it's good people will look for it. So indeed, from the side of SMEs from the side of the first SMEs. 
Improve yourself first (FGD Facilitator)". Incentive The next policy is incentives for MSMEs that excel in order to grow motivation and enthusiasm in developing their business. MSMEs from Hanura stated that as long as the incentives were still in the form of certificates only "Already in the form of certificates" (MSMEs Hanura), it was hoped that in the future the incentives provided would have a higher value so that MSMEs could be enthusiastic and able to compete healthily. The importance of incentives for MSMEs is also supported by the results of an interview conducted with the Head of the Office of Cooperatives, MSMEs and Employment of the Pesawaran Regency on September 25 2022 at the Hanura Village Office that "These MSME actors are happy to get awards, so they should be given appreciation if they are successful or there are achievements. Yes, the goal is to create healthy business competition, it can spur the development of MSMEs." The incentive policy for MSMEs aims to produce MSMEs that excel and are brave enough to go national and international. Currently, programs from both the central government and local governments are very supportive in the development of MSMEs. This has resulted in tighter MSME competition even though MSMEs in Pesawaran Regency still have weaknesses in persistence and seriousness in managing business. Therefore incentive policies are expected to be a driving force for MSMEs to have optimal motivation and competitiveness. Collaboration The next policy in developing MSMEs in Pesawaran is collaboration. The collaboration in question is collaboration or cooperation between MSMEs and other parties with the government as a facilitator. Other parties proposed in this study are universities and private retailers. The government can become a facilitator who collaborates MSMEs with strategic private parties and creates mutually beneficial cooperation between the two parties. The importance of collaboration was also stated by the Pesawaran Regency UMKM Representative who was interviewed on September 25 2022 at the Hanura Village Office, in the interview the informant stated that "One of our weaknesses is in human resources, especially for digitization, our human resources are very limited and we are already very busy with the production process so we don't have time for digital learning, if possible, the government will partner with campuses or other parties whose students can be sent to help MSMEs, especially for marketing" The first collaboration is collaboration or cooperation with universities in Lampung. Lampung has several universities or colleges in which there are students, lecturers and academics who have great potential in channeling ideas and innovations for MSME development. Therefore, the government should be a bridge that can unite the competencies of universities with MSME business people. The form of cooperation is carried out with the concept of mentoring, events or bazaars and marketing. The mentoring program is carried out by sending several delegations from tertiary institutions continuously and periodically to assist MSMEs starting from the production process, packaging to marketing and post-sales. From this program, academics can gain experience while MSME players gain knowledge and theoretical knowledge in doing business. On the other hand, academics are the right party to help MSMEs in digitizing. The second collaboration is collaboration with private retailers in Pesawaran. 
This collaboration has been carried out by Pesawaran and Indomart and Alfamart SMEs with the government as the facilitator. However, weaknesses are still found where the form of cooperation that is carried out is still detrimental to MSMEs. This was explained by the Head of Hanura Village that "What's extraordinary now is that we help with marketing to Indomart and Alfamart. But it turns out that there must be a binding MOU, yes, we are not equipped with a clear MOU so that is detrimental to MSMEs. For example, regarding payment, the payment process and system must be clear, the goods come in but are not paid for, it's the same thing, that's what we have to address together (Head of Hanura Village)". The problem is that the MOU is not clear or even that there is no binding MOU. So that in the future the government must review the forms of cooperation that are carried out and it is hoped that the cooperation that is formed can be mutually beneficial. Development of the Pesawaran Shopping Center One of the complaints from MSME actors is that Pesawaran Regency has many tourist destinations that attract many tourists, unfortunately, shopping centers, especially souvenirs, are still dominated in Bandar Lampung so tourists who travel in Pesawaran do shopping in Bandar Lampung even though Pesawaran UMKM have the same product. with what is sold there. Therefore, the next breakthrough is the construction of a shopping center in Pesawaran Regency. Shopping centers, especially souvenir centers that accommodate various MSME products, are expected to be a source of increased sales for MSMEs in Pesawaran Regency. MSME is a business sector that still has many weaknesses in management and competition aspects. Therefore, the best concept for MSMEs is a collaboration between fellow MSMEs. The development of a shopping center by the government is the right place to carry all MSME products in Pesawaran Regency and has become an icon of Pesawaran Regency. The second proposed policy model is INDIRA, namely MSME Integration, MSME Digitalization and Regeneration. MSMEs Integration This study proposes the integration of MSMEs in Pesawaran district. Integration is the unification of small components into a larger system to work together and achieve common goals. Good integration and coordination from all parties is an important key to supporting the implementation of policies and programs that have been planned by the Government so that Indonesian MSMEs can upgrade and become the driving force of the national economy. The integration proposed in this study is integration in several aspects. The first is in the production aspect. The government becomes a facilitator in integrating MSMEs with the right raw material providers and in accordance with economic conditions. The second is the integration between MSMEs and the private sector in product sales. In this aspect, the government has actually facilitated MSMEs in collaborating with private retailers as stated in an interview with the Head of the Hanura Village on 25 September 2022 at the Hanura Village Office that "The government has facilitated MSMEs by making an agreement with Indomart to sell MSME products, but the problem is that there is no clear MOU between MSMEs and retail, so there are often cases that are detrimental to MSMEs such as long and unclear payments, returns for damaged goods and so on." This collaboration still has problems or deficiencies, namely in the MOU which still has the potential to harm MSMEs. 
Therefore, it is hoped that the government will be able to evaluate the ongoing cooperation and improve the cooperation MOU so that it can benefit both parties. Furthermore, integration is carried out between fellow SMEs in Pesawaran Regency. MSME integration aims to increase joint branding and develop a wider market share. One form of integration between MSMEs can also be done by holding a bazaar. The results of the FGD also show that MSMEs are still trying to find solutions by carrying out selective production, so they produce products that are really easy to sell and delay the production of products that are difficult to sell. On the other hand, the government has also taken the initiative to hold MSME bazaars to increase sales. "Yes, then we only produce products that sell well and we market them ourselves, yes, that's the steps we did at that time, then we also made bazaars (Pesawaran SMEs)" Digitalization The development of digital technology has accelerated, especially since the Covid-19 pandemic. This also encourages people's behavior to shop online. Not surprisingly, electronic trading platforms are selling well as people's choice for shopping and transactions. Digitization of MSMEs is a change from conventional to digital systems as an effort to increase the effectiveness and efficiency of MSME business processes and operations. The digitization of MSMEs has made MSME business actors change their business management from conventional to modern practices. Digitalization is an innovation and breakthrough that can increase market share widely. On the other hand, digitalization is also a must to keep pace with the times and meet market demand. There are two digitalization models that can be implemented in UMKM in Pesawaran Regency, namely: 1. Digitization in the form of a platform created by the Pesawaran Regency government to accommodate, develop and sell products from MSMEs in Pesawaran Regency. Its implementation can be carried out by the government hiring experts in the field of technology and information to create a platform that accommodates all MSME products in Pesawaran Regency which are registered and properly selected. This concept is very helpful for MSMEs in marketing and product development. 2. Digitization independently by each MSME. In this case the government provides guidance, assistance and monitoring facilities in the digitalization process. The digitization process includes product preparation, product packaging, digital account creation, product photos, product sales, consumer feedback and increased traffic on digital platforms. This implementation is indeed more complicated, but can be assisted by collaboration with other parties such as universities who can send students or lecturers who are more experienced in digitalization to assist MSMEs. For academics, this collaboration can be used as research or community service programs that can benefit both parties. MSMEs Regeneration The next policy proposal is the regeneration of MSMEs. MSME products are a reflection of culture for an area. Likewise, in Pesawaran Regency, MSMEs in Pesawaran Regency have many products including culinary business products, agricultural machine tools, apparel, crafts, laundry soap, cosmetics, floor cleaning fluids, as well as honey and bee products other than honey. Some of these products are part of the culture in Pesawaran Regency which must be preserved. One of the efforts to preserve the culture that is reflected in MSME products is to regenerate MSME actors. 
MSMEs regeneration is carried out with skills and business education for youth in Pesawaran Regency, especially for the sons and daughters of MSME owners. Pesawaran Regency should pay attention to the future of young people as the next generation, one of which is by providing educational facilities, mentoring and coaching in producing and marketing typical Pesawaran products. The next generation must be involved from an early age to understand business processes and participate in developing MSMEs. MSMEs in Pesawaran Regency are generally managed by people who are not young enough, so one of their difficulties is adapting to technology. This is in line with the results of an interview with the Hanura Village Head on 25 September 2022 at the Hanura Village Office that "We always work together to change the mindset of MSME actors so that they struggle even more with higher quality and the current breakthrough that is suitable is digitalization but the human resources are not yet qualified, so for me the solution is to partner with universities so that their students want to help MSMEs, especially in digital marketing (Head of Hanura Village)". The Head of Hanura Village stated that the solution to help HR for MSMEs in Pesawaran Regency was to collaborate or collaborate with universities to assist and assist MSMEs in marketing and improving product quality. This proposal is quite interesting and in line with one of the governance concepts in government, namely collaboration. This breakthrough can also be a solution in overcoming the next problem, namely improving product quality. The government can become a facilitator who collaborates MSMEs with strategic private parties and creates mutually beneficial cooperation between the two parties. CONCLUSION This research analyzes the effect of financial support and non-financial support from the government of Pesawaran Regency, Lampung on the performance of MSMEs in Pesawaran Regency, Lampung. Furthermore, this research also proposes the right concept or policy model in developing MSMEs in Pesawaran Regency, Lampung. The results of the analysis show that financial support has a positive and significant effect on the performance of MSMEs in Pesawaran Regency, Lampung. The magnitude of the impact of financial support on the performance of MSMEs is 0.070. Furthermore, the most influential financial support is related to giving awards and appreciation to MSMEs that excel. Furthermore, non-financial support has a positive and significant effect on the performance of MSMEs in Pesawaran Regency, Lampung. The impact of non-financial support on MSME performance is 0.858. Furthermore, the most influential non-financial support is related to digital technology coaching and training in business and entrepreneurship. The results of this study propose 2 models of policies for the Government of Pesawaran Regency. The first policy model is dubbed TABIK PUN, namely MSME Training, Capital Assistance and Incentives for MSME achievers, Collaboration with universities and other retail and private businesses and building and developing MSME product marketing centers in Pesawaran. The second proposed policy model is INDIRA, namely MSME Integration, MSME Digitalization and Regeneration.
2023-07-11T18:20:16.544Z
2023-06-14T00:00:00.000
{ "year": 2023, "sha1": "a21d050cc550afed11827ceb5b4a33310d9a315d", "oa_license": "CCBYSA", "oa_url": "https://ijsr.internationaljournallabs.com/index.php/ijsr/article/download/1008/797", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "bdd5a43fefbcd3fa66dcb59d274bdd16cfd2e61f", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
259778798
pes2o/s2orc
v3-fos-license
Blood viscosity and inflammatory indices in treatment-resistant schizophrenia: A retrospective cross-sectional study Objective: Alterations in blood flow and inflammation may be associated with the treatment response of psychotic disorders. However, changes in blood viscosity in patients with treatment-resistant schizophrenia (TRS) have yet to be studied. We examined whether blood viscosity and systemic inflammatory status varied between patients with TRS, remitted schizophrenia, and healthy subjects. Method: Forty patients with TRS, 40 remitted schizophrenia patients, and 43 age-and gender-matched healthy controls were enrolled in this retrospective file review study. Whole blood viscosity (WBV) was calculated according to de Simone’s formula at low and high shear rates (LSR and HSR, respectively). Complete blood count (CBC) markers of inflammation were recorded through screening data at admission. Results: In patients with TRS, WBV at both LSR and HSR was significantly decreased, whereas all CBC markers of inflammation were significantly increased compared to controls. Remitted patients had significantly decreased WBV at HSR than controls. There was no significant correlation between blood viscosity and CBC markers in patients. According to the regression models, the systemic immune-inflammation index (β=0.578) and monocyte-to-lymphocyte ratio (β=1.844) were significantly associated with WBV at LSR in multivariate analyses, whereas the Positive and Negative Syndrome Scale (PANSS) Positive subscale (β=-0.330) was significantly associated with WBV at HSR in univariate analyses in the patient sample. Conclusion: TRS, associated with decreased blood viscosity and increased inflammatory status, may not fully explain such a relationship. Prospective studies would help establish the extent to which hemorheological and inflammatory characteristics reflect the pathophysiological process underlying treatment responsiveness as well as cardiovascular morbidity. INTRODUCTION Antipsychotics are the mainstay of treatment for schizophrenia, but over a third of patients fail to respond significantly to appropriate pharmacotherapy with antipsychotics (1). Such patients are generally defined as having treatment-resistant schizophrenia (TRS), which is considered a distinct, more severe, and homogenous subtype of the illness (2). The clinical importance of TRS stems from the fact that patients with TRS have poor outcomes, including worse achievement of social and occupational functioning milestones, as well as persistent positive, negative, and cognitive symptoms that lead to reduced quality of life (3). Despite the significant variability in inclusion criteria for defining treatment-resistant patients, previous studies have focused on identifying biomarkers of TRS to aid in early prediction, enhance our understanding of the biological basis of TRS, and inform the development of future treatments (4). Alterations in redox homeostasis and immune architecture (5)(6)(7), polymorphisms or mutations in specific molecules, altered expression of certain proteins (8,9), and changes in the endocrine system (10,11) have recently been studied as potential biological interfaces of treatment resistance or responsiveness in schizophrenia. Specific immune-inflammatory biomarker profiles have been associated with TRS, where elevated levels of inflammatory markers leading to neuronal damage may contribute to treatment resistance in this patient group (4). 
Conversely, treatment resistance is linked to increased all-cause morbidity and mortality, independent of clozapine's side-effects (12). Chronic inflammation is considered a common physiological process involved in the pathogenesis of both schizophrenia and cardiometabolic-vascular diseases (13). Blood viscosity, which is influenced by proinflammatory status, is another variable associated with an increased risk of cardiovascular diseases. Taken together, a substantial body of evidence supports the notion that both increased proinflammatory status and blood viscosity are associated with an increased risk of cardiovascular diseases (14). Parameters related to blood circulation, such as blood viscosity, are influenced by inflammation-induced changes in the surrounding milieu, psychophysiological alterations, and metabolic abnormalities (15). Psychophysiological stress can cause changes in hemorheology measures such as hemoglobin, hematocrit (Hct), total protein (TP), and blood viscosity (16,17). TRS is associated with impaired functions of endothelial neurotrophic proteins, such as vascular endothelial growth factor (VEGF) and brain-derived neurotrophic factor (BDNF), whose decreased levels lead to disrupted functions of monoamine receptors located on neuronal membranes (18). Altered endothelial growth factors may contribute to pathophysiological processes of psychotic disorders by reducing synaptic plasticity and modifying treatment responses to antipsychotics. They also have the potential to influence blood viscosity through alterations in endothelial functions. Thus, changes in blood viscosity may reflect changes in receptor functions in neuronal membranes. Viscosity is defined as the thickness and stickiness of the blood and is one of the major determinants of local blood flow. Blood viscosity is relatively high at low shear rates (LSR), such as when the blood is moving at a low velocity during diastole, and is relatively lower during systole at high shear rates (HSR) (19). Whole blood viscosity (WBV), a primary determinant of endothelial shear stress, is a physiological parameter that is considered a reliable tool for the assessment of blood fluidity in various patient groups (20). A recent study by our group reported that initial and subsequent episodes of schizophrenia are associated with decreased blood viscosity (21). This relationship could be attributed to psychotic relapses and their effect on biological systems. Furthermore, although heightened inflammation is associated with altered blood viscosity, cardiovascular morbidity, which patients with schizophrenia suffer from, might be related to distinct contributory pathways led by changes in both blood viscosity and inflammation. To our knowledge, no clinical studies have investigated WBV in patients with TRS, leading to a lack of clear and sound postulations on the association between inflammatory indices, hemorheology, and treatment responsiveness of schizophrenia. Therefore, we examined both blood viscosity and complete blood count (CBC) markers of inflammation in both treatment-resistant and remitted schizophrenia patients. In the current study, WBV was calculated according to the de Simone formula. Based on previous studies (21), we hypothesized that blood viscosity would be decreased in patients with TRS compared to remitted patients and healthy controls, while increased inflammatory status would be found more prominently in TRS compared to remitted schizophrenia patients and healthy subjects. 
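Because WBV in this study is estimated from routine laboratory values rather than measured directly with a viscometer, a small sketch of the calculation may be useful. The coefficients below follow the form in which de Simone's formula is commonly quoted in clinical studies (hematocrit in %, total protein in g/L); they are not taken from this paper and should be verified against the original reference before any reuse, so treat this purely as an illustration.

```python
def whole_blood_viscosity(hematocrit_pct: float, total_protein_g_l: float):
    """Estimate whole blood viscosity (WBV) from hematocrit and total protein.

    Returns (WBV at high shear rate, WBV at low shear rate). The coefficients
    reproduce the commonly quoted form of de Simone's formula and are an
    assumption here; confirm them against the original publication.
    """
    hsr = (0.12 * hematocrit_pct) + 0.17 * (total_protein_g_l - 2.07)   # ~208 s^-1
    lsr = (1.89 * hematocrit_pct) + 3.76 * (total_protein_g_l - 78.42)  # ~0.5 s^-1
    return hsr, lsr

# Hypothetical values: hematocrit 42%, total protein 72 g/L.
print(whole_blood_viscosity(42.0, 72.0))
```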
Study Design, Sample and Procedure This retrospective file review study included data from male and female patients (aged 18-65 years) with schizophrenia who were admitted to either the psychiatric inpatient or outpatient units providing mental health services at Bakirkoy Prof Mazhar Osman Training and Research Hospital for Psychiatry, Neurology, and Neurosurgery (Istanbul, Turkiye) between October 2022 and March 2023. All patients were diagnosed with schizophrenia according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) by senior attending psychiatrists. Within the study timeframe, 110 patients with schizophrenia in remission were identified. The systemic operational remission criteria, as conceptualized by Andreasen et al. (2005), (22) were followed to determine remission during outpatient follow-up or at predischarge evaluation. This criterion is based on a clinical examination by a senior psychiatrist who considers specific items of the Positive and Negative Syndrome Scale (PANSS). The criteria posit that all eight symptoms (P1, P2, P3, N1, N4, N6, G5, G9) in the PANSS should score three or lower for at least six months for remission to be considered. Additionally, during the same period, 68 patients with schizophrenia,who met the criteria for treatment-resistant schizophrenia defined by the Treatment Response and Resistance in Psychosis (TRIPP) working group consensus (23) were identified. According to the consensus, the determination of TRS requires a mitigation of symptoms by <20% despite the use of at least two antipsychotic drugs with a total daily dose equivalent to 600 mg of chlorpromazine for 12 weeks. Following previous work (24), we considered an increase in positive psychotic symptoms and global, behavioral or functional deterioration at a moderate to high level determined through clinical evaluation, as well as the requirement of hospitalization, as a proxy of relapse. Patients with a relapse at the index admission in which blood sampling was performed were excluded. Other exclusion criteria for patients included the presence of a comorbid neurological or psychiatric illness, substance use disorder which was excluded by a urine testing and psychiatric evaluation, the presence of a systemic disease that may influence rheological properties of blood and inflammatory state such as previous cardiovascular diseases, diabetes mellitus, hepatic or renal failure, hypertension, acute infection, acute or chronic immuno-inflammatory disease or pregnancy, heavy smoking (>20 cigarettes per day) since it affects inflammatory parameters, use of antiinflammatory or immunosuppressive medication, documented laboratory findings of liver or renal pathology, abnormal blood screening results such as neutropenia and electrolyte imbalances, nutritional deficiencies, and not having a laboratory screening at admission. Despite no clear evidence existing that antipsychotics modulate plasma proteins (25), patients with no change in the antipsychotic treatment regimen within the last month were recruited to rule out any possible indirect effects on circulating blood proteins. After applying the exclusion criteria, we enrolled 40 treatment-resistant schizophrenia patients (35 males, 5 females) and 40 patients with remitted schizophrenia (33 males, 7 females). 
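To make the operational definitions above concrete, the sketch below encodes the remission rule (the eight PANSS items all scoring 3 or lower, sustained for at least six months) and the TRIPP-style resistance screen (<20% symptom reduction despite at least two antipsychotic trials at a dose equivalent to 600 mg of chlorpromazine for 12 weeks). Field names and the data layout are hypothetical; this is only a restatement of the criteria as code, not a clinical tool.

```python
REMISSION_ITEMS = ["P1", "P2", "P3", "N1", "N4", "N6", "G5", "G9"]

def meets_remission(panss_items: dict, months_sustained: float) -> bool:
    """Symptomatic remission: all eight core PANSS items score 3 or lower,
    sustained for at least six months."""
    return all(panss_items[item] <= 3 for item in REMISSION_ITEMS) and months_sustained >= 6

def meets_treatment_resistance(symptom_reduction_pct: float, adequate_trials: int) -> bool:
    """TRIPP-style screen: <20% symptom reduction despite >=2 antipsychotic trials,
    where 'adequate_trials' is assumed to count only trials meeting the dose
    (>=600 mg/day chlorpromazine equivalents) and duration (12 weeks) requirements."""
    return symptom_reduction_pct < 20 and adequate_trials >= 2

# Hypothetical patient record.
example_items = {"P1": 2, "P2": 3, "P3": 1, "N1": 2, "N4": 3, "N6": 2, "G5": 2, "G9": 1}
print(meets_remission(example_items, months_sustained=8))
print(meets_treatment_resistance(symptom_reduction_pct=12.0, adequate_trials=2))
```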
A comparison group of healthy controls consisted of 43 individuals (34 males, 9 females) who visited our outpatient unit for pre-employment health check-ups or employee medical examinations during the study period and were subjected to the same exclusion criteria as the patients. Control subjects were matched by gender and smoking status with both patient groups. The study protocol was reviewed and approved by the Scientific Research Ethics Committee of the University of Health Sciences, Hamidiye Faculty of Medicine [IRB: 10.03.2023-5/21], and was conducted according to the principles stated in the Helsinki Declaration. Since the data of the individuals were retrieved anonymously without any accessible personal identifying information and the file review was made retrospectively by the researchers, informed consent was not applicable.

Variables of Interest
The sociodemographic and clinical characteristics of the patients, such as age, gender, duration of illness, chlorpromazine equivalent dose, and PANSS scores at the time of admission, were recorded. PANSS is a 30-item clinician-rated tool for scoring symptom severity in psychotic disorders (26). It consists of 30 items, with 7 on the positive symptoms subscale, 7 on the negative symptoms subscale, and 16 on the general psychopathology subscale, and each item has a 7-point Likert-type assessment. The total score is calculated by adding the scores of all items. Its Turkish validity and reliability were assessed by Kostakoglu et al. (1999) (27). At our institution, PANSS is administered by a senior psychiatrist or trained psychiatry resident at admission.

Statistical Analysis
The minimum required sample size (N=111) to achieve statistical significance in WBV at HSR between groups was calculated with G*Power software V. 3.1.9.2, considering an α-error of 0.05, power of 0.80, and effect size of 0.3. Statistical Package for Social Sciences Software for Mac OS, Version 25.0 (Armonk, NY: IBM Corp.) was used to analyze the study data. The Kolmogorov-Smirnov test was used to determine the normality of the distribution of the numeric data before performing further analyses. Chi-Squared, Mann-Whitney U, Independent Samples t Test, Kruskal-Wallis Test, and Analysis of Variance (ANOVA) were used for comparisons of categorical and continuous variables between the groups. The Bonferroni-corrected Mann-Whitney U Test was used for post hoc pairwise comparisons of Kruskal-Wallis Test results. Tukey's Honestly Significant Difference (HSD) was used as a post hoc analysis for pairwise comparisons of ANOVA results. Pearson's correlation coefficient was used to examine the relationship between blood viscosity parameters and Complete Blood Count (CBC) markers of inflammation. Univariate and multivariate linear regression analyses using the enter method were used to identify potential predictors of WBV at both LSR and HSR in the patient sample consisting of both TRS and remitted schizophrenia patients (n=80). Potential predictors were determined as independent variables that are predicted to have a clinical impact on WBV at both LSR and HSR. A p-value<0.05 was considered significant.

RESULTS
Descriptive characteristics and comparisons of laboratory parameters between the study groups are presented in Table 1. There was no significant correlation between WBV at both LSR and HSR and SII, SIRI, NLR, MLR, and PLR (r=-0.05 to 0.15) in all patients.
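The CBC-derived indices referred to throughout the results are ratios of routine differential counts. The paper does not restate their formulas, so the sketch below uses the definitions conventionally applied in this literature; treat the exact definitions as an assumption to be checked against the index papers cited in the original studies.

```python
def cbc_inflammation_indices(neutrophils, lymphocytes, monocytes, platelets):
    """CBC-derived inflammation indices, using the definitions conventionally
    used in this literature (all counts in the same units, e.g. 10^3/uL)."""
    return {
        "NLR": neutrophils / lymphocytes,                # neutrophil-to-lymphocyte ratio
        "MLR": monocytes / lymphocytes,                  # monocyte-to-lymphocyte ratio
        "PLR": platelets / lymphocytes,                  # platelet-to-lymphocyte ratio
        "SII": platelets * neutrophils / lymphocytes,    # systemic immune-inflammation index
        "SIRI": neutrophils * monocytes / lymphocytes,   # systemic inflammation response index
    }

# Hypothetical counts (10^3/uL): neutrophils 4.8, lymphocytes 2.1, monocytes 0.55, platelets 260.
print(cbc_inflammation_indices(4.8, 2.1, 0.55, 260.0))
```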
Univariate and multivariate linear regression analyses were performed to identify potential predictors of WBV at each LSR and HSR in the patient sample consisting of TRS and remitted schizophrenia subjects (Table 3). Initially, age, gender (male vs. female), patient group (treatment-resistant vs. remitted), PANSS subscales, SII, SIRI, NLR, MLR, and PLR were entered into univariate analyses as independent variables. DISCUSSION To date, little is known about the relationship between treatment resistance, blood viscosity, and CBC markers as a proxy of peripheral inflammatory status in schizophrenia. This retrospective study revealed that blood viscosity was significantly decreased, and CBC indices of inflammation were significantly increased in patients with treatment-resistant schizophrenia compared to healthy controls. On the other hand, blood viscosity and inflammatory markers did not discriminate TRS and remitted schizophrenia patients. This suggests that biological differences between treatmentresistant and treatment-responsive patients are likely to be explained by mechanisms other than immunoinflammatory processes. Since increased inflammation is associated with impaired cardiometabolic and cardiovascular outcomes, our results support previous findings that schizophrenia patients may be at shortand long-term risk for cardiovascular diseases (29), irrespective of treatment responsiveness. A few studies have evaluated blood rheology in psychiatric disorders such as bipolar disorder (30), major depressive disorder (31), neuroleptic malignant syndrome (32), panic disorder (33), and first-episodes and clinical exacerbations in schizophrenia (21). These studies argue that blood fluidity is affected in psychiatric disorders in both the short and long term. Labile groups in plasma proteins and the erythrocyte's cytoskeleton can be affected by systemic inflammation and related oxidative stress. The subsequent modification of plasma and membrane proteins and lipids may increase blood viscosity and erythrocyte aggregation and decrease microcirculation (14,34). In our study, decreased blood viscosity observed in patients with TRS does not seem to be completely attributable to the increased systemic inflammation, which requires further investigation. Blood viscosity may be affected by multiple components (21), which may interact with each other to maintain homeostasis (28). Acute and persistent psychophysiological stress have been reported to alter fluid balance in the body (35). Thus, an imbalance in fluid homeostasis may also contribute to changes in blood viscosity in patients with schizophrenia. In this study, the severity of positive psychotic symptoms is related to lower whole blood viscosity at a high shear rate in the univariate analysis. Moreover, WBV at HSR seems to be a trait marker for schizophrenia rather than WBV at LSR according to pairwise comparisons. These findings suggest that psychophysiological mechanisms have an impact on blood viscosity, particularly during systolic endothelial shear. Chronic, persistent, and severe psychopathology in schizophrenia may trigger biological mechanisms, such as increased vascular permeability and extravasation of solid plasma ingredients (which may also be associated with increased inflammation), escalated catabolism of circulating proteins due to oxidative stress, decreased serum lipids, and reduced negative acute phase reactants, all of which entail decreased blood viscosity. In this study, all patients were taking antipsychotic drugs. 
Antipsychotics were associated with inhibited platelet aggregation, increased clot formation time, and decreased clot firmness through adenosine diphosphate receptors (36), all of which are associated with changes in blood viscosity (37). CBC markers of inflammation in schizophrenia patients were significantly decreased in clinical remission compared to acute exacerbation (38), whilst such a decrease does not seem to be mediated simply by the effects of receiving antipsychotic medication (39). We found that inflammatory indices were not different between the TRS and remission groups, suggesting that these indices may be a trait marker of the illness rather than of treatment responsiveness. On the other hand, previous work has revealed a link between an increased inflammatory state and poorer treatment outcomes in schizophrenia (6,40). Mondelli et al. (2015) (41) reported that patients who did not sufficiently respond to antipsychotics had higher inflammatory cytokines following treatment compared to responsive patients, suggesting TRS as a more severe and distinct biological subtype of schizophrenia. Follow-up data from large patient samples are required to clarify the role of CBC markers of inflammation as a proxy for treatment responsiveness. The results of our study should be considered in the context of the following limitations. Due to the retrospective and cross-sectional design of the study, we were unable to obtain follow-up data on patients with subsequent cardiovascular diseases; hence, we were unable to establish a causal association between blood viscosity and future adverse cardiovascular events. Although there was no significant difference between patient groups in chlorpromazine equivalent doses, blood viscosity and inflammatory markers could be confounded by the effect of specific antipsychotics, which we did not examine. Biochemical parameters examined in this study might be affected by numerous factors such as nutrition, exercise, and a sedentary lifestyle. A relatively small sample size might not be adequate for statistically significant results for WBV. Although de Simone's formula is widely acknowledged for the determination of blood viscosity, a viscometer is more sensitive and would provide more accurate results. In conclusion, TRS may be associated with decreased blood viscosity, and replication of this study with larger patient samples and a prospective design can reflect the pathophysiological processes and their influence on cardiovascular risk in treatment-resistant schizophrenia. Researchers may focus on the extrapolation of whole blood viscosity through a feasible evaluation tool using hematocrit and total protein level to demonstrate how blood viscosity may reflect endothelial dysfunction involved in pathophysiological processes in schizophrenia. Such studies would also help establish to what extent hemorheological and inflammatory characteristics reflect biological interfaces of treatment resistance or responsiveness in schizophrenia. Determination of the alterations in blood viscosity and inflammatory status may help facilitate the development of personalized or precision clinical approaches to schizophrenia by helping stratify patients and implement biologically tailored pharmacological and psychological interventions to reduce cardiovascular and cardiometabolic risk in both treatment-resistant and treatment-responsive patients with schizophrenia.
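The closing suggestion — estimating whole blood viscosity from hematocrit and total protein — can be prototyped as below. This is a sketch only: the linear coefficients of de Simone's formula are deliberately left as parameters, and the numbers shown are placeholders rather than the published constants, which should be taken from the original reference.

```python
# Sketch of a de Simone-type linear WBV estimate from hematocrit and total
# protein. Coefficients are placeholders; substitute the published values.
from dataclasses import dataclass

@dataclass
class WBVModel:
    slope_hct: float   # weight applied to hematocrit (%)
    slope_tp: float    # weight applied to total protein
    offset_tp: float   # protein offset used by the formula

    def estimate(self, hematocrit_pct: float, total_protein: float) -> float:
        """Linear estimate of the form slope_hct*Hct + slope_tp*(TP - offset_tp)."""
        return self.slope_hct * hematocrit_pct + self.slope_tp * (total_protein - self.offset_tp)

# Separate placeholder models for the high- and low-shear-rate estimates.
hsr_model = WBVModel(slope_hct=0.1, slope_tp=0.2, offset_tp=2.0)
lsr_model = WBVModel(slope_hct=2.0, slope_tp=4.0, offset_tp=80.0)

print("WBV@HSR ~", round(hsr_model.estimate(42.0, 7.2), 2))
print("WBV@LSR ~", round(lsr_model.estimate(42.0, 72.0), 2))
```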
2023-07-12T17:02:28.923Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "aafe55c91992367fdebf7d0df32b1c449e9773ba", "oa_license": null, "oa_url": "https://doi.org/10.14744/dajpns.2023.00210", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "9450c65a99ff96eadd02278cccb7f4af70cd640d", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [] }
264449163
pes2o/s2orc
v3-fos-license
Top-Down Proteoform Analysis by 2D MS with Quadrupolar Detection

Two-dimensional mass spectrometry (2D MS) is a multiplexed tandem mass spectrometry method that does not rely on ion isolation to correlate the precursor and fragment ions. On a Fourier transform ion cyclotron resonance mass spectrometer (FT-ICR MS), 2D MS instead uses the modulation of precursor ion radii inside the ICR cell before fragmentation and yields 2D mass spectra that show the fragmentation patterns of all the analytes. In this study, we perform 2D MS for the first time with quadrupolar detection in a dynamically harmonized ICR cell. We discuss the advantages of quadrupolar detection in 2D MS and how we adapted existing data processing techniques for accurate frequency-to-mass conversion. We apply 2D MS with quadrupolar detection to the top-down analysis of covalently labeled ubiquitin with ECD fragmentation, and we develop a workflow for label-free relative quantification of biomolecule isoforms in 2D MS.

■ INTRODUCTION
High-resolution mass analyzers such as the Orbitrap or Fourier transform ion cyclotron resonance mass spectrometers (FT-ICR MS) enable the top-down tandem mass analysis of large biomolecules with complex fragmentation patterns.[10] The development of fragmentation methods that result in high sequence coverage and favor backbone fragmentation, such as electron capture dissociation (ECD) or ultraviolet photodissociation (UVPD), increases the accuracy of the location of the modifications induced by the chemical probing method.[11,12] Choosing top-down over bottom-up analysis reduces the number of experimental steps and the risk of losing the labels introduced by the probing methods.[13] Nevertheless, top-down analysis comes with its own set of limitations. Because of the complexity and number of accessible dissociation pathways, ECD and UVPD often yield low-abundance fragments. As a result, they usually require the accumulation of approximately 10−100 measurements to obtain a satisfactory signal-to-noise ratio (SNR).[14,15] ECD and UVPD are therefore difficult fragmentation methods to couple with liquid chromatography (LC), which does not allow for the accumulation of much more than 10 scans for each analyte because of the rate of change of the elution profile, even when using very fast and relatively low-resolution individual measurements.[12] In addition, standard tandem mass spectrometry techniques require the isolation of a single ion species to enable correlation between the precursor and fragment ions, most often with a quadrupole mass filter.[16] This method of isolation creates a competition between the accuracy of the isolation and precursor ion abundances. The method also depends on the analytes of interest, thereby making data-independent acquisition difficult.[17,18] Moreover, for the analysis of protein modifications, no quadrupole-based isolation can separate overlapping isotopic distributions, although adding an ion mobility step has shown advantages.[19,20] Separation between isobaric ion species and coeluting species is therefore a limitation that all existing data-independent acquisition methods have in common.[21] Two-dimensional mass spectrometry (2D MS) is a data-independent method for tandem mass spectrometry that does not require ion isolation or separation before fragmentation to correlate precursor and fragment ions.[22]
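To make the modulation idea concrete, here is a toy numerical sketch (not the instrument's acquisition or processing chain): a fragment transient whose amplitude is modulated along the encoding delay at the precursor's frequency yields, after a 2D Fourier transform, a peak located at (fragment frequency, precursor frequency). All frequencies and grid sizes are illustrative; real abundance modulation is non-negative and non-sinusoidal, which is why harmonic lines appear in practice, but a zero-mean cosine keeps the toy spectrum simple.

```python
# Toy 2D MS demonstration: the fragment signal in t2 is amplitude-modulated
# along the encoding delay t1 at the precursor frequency, so a 2D FT places a
# cross-peak at (fragment frequency, precursor frequency). Illustrative values.
import numpy as np

f_precursor = 150e3   # Hz, toy precursor cyclotron frequency
f_fragment = 400e3    # Hz, toy fragment cyclotron frequency

n1, dt1 = 512, 1e-6       # encoding steps and t1 increment
n2, dt2 = 2048, 0.25e-6   # detection points and dwell time

t1 = np.arange(n1)[:, None] * dt1
t2 = np.arange(n2)[None, :] * dt2

signal = np.cos(2 * np.pi * f_precursor * t1) * np.cos(2 * np.pi * f_fragment * t2)

spectrum = np.abs(np.fft.rfft2(signal))        # full FFT along t1, real FFT along t2
i1, i2 = np.unravel_index(np.argmax(spectrum), spectrum.shape)
f1_axis = np.fft.fftfreq(n1, dt1)              # precursor (vertical) frequency axis
f2_axis = np.fft.rfftfreq(n2, dt2)             # fragment (horizontal) frequency axis

print(f"cross-peak at fragment ~{f2_axis[i2] / 1e3:.0f} kHz, "
      f"precursor ~{abs(f1_axis[i1]) / 1e3:.0f} kHz")
```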
In a 2D FT-ICR MS experiment, ion radii are modulated in the ICR cell according to their cyclotron frequencies (which are inversely proportional to their mass-to-charge ratios, or m/z) before fragmentation with a radius-dependent fragmentation method such as infrared multiphoton dissociation (IRMPD), ECD, or UVPD.[23,24] The resulting fragment ion abundances (and therefore intensities) are modulated according to the cyclotron frequencies of the precursor ions.[25] The data set acquired in 2D MS experiments can be Fourier transformed to yield a two-dimensional mass spectrum (2D mass spectrum) that shows the fragmentation pattern of each precursor ion species analyzed in the ICR cell.[24−32] One application of label-free quantification by 2D MS is the top-down analysis of covalently labeled proteins. New developments in ICR cells have enabled increased resolving power and SNR in FT-ICR MS, which have improved top-down approaches for protein footprinting techniques.[7,33] In mass spectrometers equipped with dynamically harmonized ICR cells, quadrupolar 2ω detection can be optimized with the appropriate electronics. By detecting ion signals at the 2ω harmonic, the resolving power can be doubled for a given transient length, or the transient length can be halved for a given resolving power.[34] In this study, we perform 2D MS for the first time on a dynamically harmonized ICR cell with quadrupolar detection to determine the protein's solvent-accessible surface area. We then compare our results with a previously published study performed using standard tandem mass spectrometry on FT-ICR MS by isolating the [M + 10H]10+ charge state of ubiquitin with increasing concentration of an acetylation reagent and fragmenting the ions by collision-induced dissociation (CID).[35] In this study, we discuss the benefits of quadrupolar 2ω detection in 2D MS and our adapted data processing pipelines for the analysis of different proteoforms. We acetylated ubiquitin with a fivefold molar excess of N-hydroxysuccinimidyl acetate (NHSAc), and reaction products were analyzed with top-down 2D MS with ECD fragmentation. We show how 2D MS can be used for the analysis of the covalently labeled protein and what analytical information can be gleaned from 2D MS that cannot be obtained by isolating precursor ions before fragmentation.

■ EXPERIMENTAL METHODS
Sample Preparation. The acetylation of ubiquitin (50 μg) was achieved by diluting the sample in 50 mM triethylamine/bicarbonate (pH 7.6, Sigma-Aldrich, Saint Louis, MO) buffer at 0.5 mg/mL and adding the solution to a fivefold molar excess of NHSAc (Tokyo Chemical Industry Co Ltd., Tokyo, Japan) at room temperature for 1 h. The sample was desalted on an OPTI-TRAP macrotrap column (Optimize Technologies, Oregon City, OR) using an aqueous solution with 0.1% formic acid and eluted using an 80% acetonitrile/20% water solution with 0.1% formic acid. The solution was diluted to a 2 μM final protein concentration in an aqueous solution of 1% acetic acid and 50% methanol for analysis (all solvents were LC-MS grade and obtained from Merck, Darmstadt, Germany).

Instrument Parameters. All experiments were performed on a 12 T solariX FT-ICR mass spectrometer (Bruker Daltonik, Bremen, Germany) with an electrospray ion source operated in positive mode and direct infusion at a flow rate of 108 μL/h.[36]
Ions were accumulated for 0.5 s before being transferred to the dynamically harmonized ICR cell (2XR Paracell). The one-dimensional mass spectrum was acquired over an m/z range of 196.51−3000 in quadrupolar detection mode at the 2ω harmonic as described by Nikolaev et al., with a 1 M data point transient and 64 averaged scans.[37,38] The pulse sequence for the 2D MS experiment is shown in Scheme 1. The two pulses in the encoding sequence (precursor detection and modulation) were set at 5.02 dB attenuation with 1.0 μs per excitation frequency step (frequency decrements were 625 Hz). The corresponding amplitude was estimated at 250 Vpp, with a 1.9% sweep excitation power for an amplifier with a maximum output of 446 Vpp. The encoding delay t1 was increased 4096 times with a 3 μs increment, which corresponds to a 166.67 kHz frequency range. No phase-cycled signal averaging was employed in the experiment. Because of the digital clock in the Bruker electronics in quadrupolar 2ω detection, the minimum cyclotron frequency for the modulated precursor ions was 122.8 kHz for a maximum m/z of 3000 during excitation, leading to an m/z 808.1−3000 mass range for precursor ions. Captured ions were fragmented by ECD using the following parameters: the hollow cathode current was 1.3 A, the ECD pulse length 10 ms, the ECD lens 7 V, and the ECD bias 1.0 V.[39] Finally, in the horizontal fragment ion dimension, the excitation pulse in the detection sequence was set at 2.60 dB attenuation with a 15 μs/frequency step (frequency decrements were 625 Hz). The corresponding amplitude was estimated at 330 Vpp, with a 37% sweep excitation power for an amplifier with a maximum output of 446 Vpp. The horizontal mass range was m/z 196.51−3000 (corresponding to a frequency range of 1875.0−122.8 kHz). Transients were acquired over 0.559 s with 1 million data points. The total duration of the experiment was 68 min.

Data Processing. The two-dimensional mass spectrum was processed and visualized using the Spectrometry Processing Innovative Kernel (SPIKE) software (available at www.github.com/spike-project, version 0.99.27, accessed on June 1, 2021) developed by the University of Strasbourg (Strasbourg, France) and CASC4DE (Illkirch-Graffenstaden, France) in the 64-bit Python 3.7 programming language on an open-source platform distributed by the Python Software Foundation (Beaverton, OR).[40] Processed data files were saved using the HDF5 file format. The 2D mass spectrum was apodized with the Kaiser apodization, zero-filled once, denoised with the SANE algorithm (with a rank of 30), and visualized in magnitude mode.[41] The size of the resulting data sets was 1 048 576 data points horizontally (fragment ion dimension) by 4096 data points vertically (precursor ion dimension). Frequency-to-mass conversion was quadratic in both the vertical precursor ion dimension and the horizontal fragment ion dimension.[42] However, due to the quadrupolar 2ω detection, the parameters of the conversion equation were specific to each dimension, as will be discussed in the next section.[34]
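The quoted frequency windows can be sanity-checked with the ideal (unperturbed) cyclotron relation f = zeB/(2πm); the instrument's calibrated quadratic frequency-to-m/z conversion contains additional terms, so the back-of-the-envelope sketch below is only meant to show where the 61.4−937.5 kHz, 122.8−1875.0 kHz, and 166.67 kHz figures come from.

```python
# Consistency sketch for the quoted frequency limits, using the ideal
# cyclotron relation f = z*e*B / (2*pi*m) at B = 12 T.
import math

E_CHARGE = 1.602176634e-19         # C
ATOMIC_MASS_KG = 1.66053906660e-27
B_FIELD = 12.0                     # T

def cyclotron_freq_hz(mz: float, harmonic: int = 1) -> float:
    """Ideal cyclotron frequency (Hz) for an ion of given m/z, at a chosen harmonic."""
    return harmonic * E_CHARGE * B_FIELD / (2.0 * math.pi * mz * ATOMIC_MASS_KG)

for mz in (196.51, 3000.0):
    f1 = cyclotron_freq_hz(mz) / 1e3
    f2 = cyclotron_freq_hz(mz, harmonic=2) / 1e3
    print(f"m/z {mz:7.2f}: 1w ~ {f1:7.1f} kHz, 2w ~ {f2:7.1f} kHz")

# Nyquist limit of the encoding (precursor) dimension given the 3 us increment,
# matching the 166.67 kHz range quoted above.
dt1 = 3e-6
print(f"encoding Nyquist ~ {1.0 / (2.0 * dt1) / 1e3:.2f} kHz")
```

Running this reproduces the 61.4/937.5 kHz fundamental limits and the 122.8/1875.0 kHz second-harmonic limits to within roughly 0.03%.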
For each precursor ion species, five fragment ion scans were added up to cover the entire precursor isotopic distribution and obtain complete isotopic distributions for all fragment ions. The resulting one-dimensional fragment ion patterns were peak-picked in SPIKE. Peak assignments were performed using the Free Analysis Software for Top-down Mass Spectrometry (FAST-MS) developed by the University of Innsbruck (Innsbruck, Austria) in the 64-bit Python 3.7 programming language.[43] FAST-MS generated theoretical c/z and y fragment lists for ubiquitin variably modified with 4−6 acetylations located on lysine and methionine residues.

■ RESULTS AND DISCUSSION
In this study, the 2D MS experiment is performed in a dynamically harmonized ICR cell with quadrupolar 2ω four-plate detection.[34,44] The ICR cell was "shimmed" to ensure that the precursor ions were centered at the start of the pulse sequence (see Scheme 1).[38] The frequency range of the broadband pulses for precursor ion excitation and modulation covers the reduced cyclotron frequencies of the precursor and fragment ions (61.4−937.5 kHz). The frequencies measured during the transient cover the second harmonic of the reduced cyclotron frequencies of the precursor and fragment ions (122.8−1875.0 kHz). In addition, the digital modulation frequency was set by the instrument at twice the frequency of the highest m/z in the excitation pulse, instead of its cyclotron frequency as in detection of the fundamental frequencies.[24] The first consequence of using quadrupolar detection is that, for an equivalent resolution and m/z range, each transient duration is halved, resulting in 2D MS experiments that are less time- and sample-consuming. The resolving power in the horizontal fragment ion dimension remains theoretically unchanged, while the SNR in quadrupolar 2ω detection is typically reduced compared to that in standard detection.[45,46] Second, the coefficients required in the frequency-to-mass conversion equation of 2D mass spectra recorded with quadrupolar 2ω detection are doubled in the horizontal fragment ion dimension compared to the coefficients for the frequency-to-mass conversion in the vertical fragment ion dimension. Finally, the digital modulation frequency set by the instrument electronics is doubled in quadrupolar 2ω detection compared to that in the detection of the fundamental frequencies (see Scheme 1). The modulation frequency for a precursor ion is defined as f_ICR − f_min, where f_ICR is the reduced cyclotron frequency of the ion and f_min is the digital modulation frequency set by the instrument electronics. Doubling f_min increases the lowest precursor m/z, which corresponds to a cyclotron frequency of f_N + f_min, where f_N is the Nyquist frequency, or reduces the necessary Nyquist frequency.[22] In the 2D MS experiment, the Nyquist frequency in the vertical dimension corresponds to the cyclotron frequency range of the precursor ions. With all other parameters remaining equal, reducing the frequency range increases the theoretical resolving power of the 2D mass spectrum in the vertical dimension.[30]
Figure 1a displays the 2D ECD mass spectrum of acetylated ubiquitin. Fragment m/z values are plotted horizontally, and precursor m/z values are plotted vertically. The autocorrelation line, (m/z)_precursor = (m/z)_fragment (i.e., the identity line), results from the modulation of precursor ion radii and abundances with their own reduced cyclotron frequencies and shows all the precursor ions observed in the 2D MS analysis. Horizontally, fragment ion scans show the fragmentation pattern of each precursor ion. Vertically, precursor ion scans show all the precursors of a given fragment ion. The horizontal resolving power (m/Δm, where Δm is the full-width at half-maximum of the fragment ion peak) was measured to be 200 000 at m/z 400, and the vertical resolving power was 1300 at m/z 874 (corresponding to 2800 at m/z 400). We can also extract electron capture lines, which follow (m/z)_precursor = [(n − 1)/n] (m/z)_fragment (eq 2), where n is the charge state of the precursor ions. In Figure 1a, electron capture lines for the capture of one electron by the 7−10+ charge states are plotted in green. As shown in eq 2, their slopes are 6/7, 7/8, 8/9, and 9/10. The 2D ECD mass spectrum also shows harmonics of the autocorrelation line as curved lines. The presence of harmonic peaks is caused by the nonsinusoidal modulation of the precursor ions.[22,25] Scintillation noise, which is caused by the fluctuation of the number of ions in the ICR cell from scan to scan, manifests as vertical streaks along the m/z of the precursor ions and can be filtered out by the use of a denoising algorithm during data processing.[41] Figure S1 in the Supporting Information shows the complete 2D mass spectrum, including harmonics of the autocorrelation line. Most harmonics are similar to the ones obtained in 2D MS with standard detection at 1ω. One noticeable difference between detection at 1ω and quadrupolar detection at 2ω is the presence of the 1ω subharmonic frequency (at double the measured m/z). In the 2D mass spectrum, we observe the subharmonic peak of the autocorrelation line at a 1/2 slope at approximately 15−20% of the intensity of the autocorrelation line.[24] Here, the 2D mass spectrum is shown as a contour plot, but we cannot see enough detail to show the fragmentation patterns of the 7−10+ charge states of acetylated ubiquitin. Because of the multiplicity of dissociation channels for the fragmentation of proteins in ECD, relative intensities of fragment ions in the 2D mass spectrum can be equivalent to the intensity of signals caused by harmonics or noise, and plotting one without the other is difficult.[47] Nevertheless, discriminating analytically useful signal from noise is readily achieved because, due to distinctly different frequency relationships, they are in different areas of the spectrum. The zoomed-in view of the fragmentation patterns shown in Figure 1b illustrates how the fragmentation patterns can be easily distinguished. The red lines highlight various dissociation lines to illustrate how they can be used to locate modifications. Figure 2a shows the extracted autocorrelation line (m/z 850−1300) of the 2D ECD mass spectrum. The charge states of acetylated ubiquitin that are modulated and fragmented in this 2D mass spectrum are 7−10+, each of them bearing 4−6 acetylations, which is consistent with the level of acetylation under similar labeling conditions presented by Novák et al.[35] The inset shows the isotopic distribution of the [M + 10H + 4Ac]10+ precursor ion species on the autocorrelation line. The signal from precursor ions is modulated by the radius (during the pulse-delay-pulse sequence in Scheme 1) and by their abundance (during the ECD irradiation), followed by Fourier transformation over 4096 scans. Therefore, the SNR on the autocorrelation line is typically very high.
48In the case of the isotopic distribution of [M + 10H + 4Ac] 10+ , the SNR for the most intense peak is 720.The SNR for the monoisotopic peak is 20.For comparison, Figure 2b shows the 1D mass spectrum of acetylated ubiquitin.Both the mass spectrum and the autocorrelation line show similar charge state ranges and acetylation numbers for each charge state.However, the relative intensities of the peaks are different between Figure 2a and Figure 2b: while the relative intensities in the mass spectrum reflect ion abundance and charge state, the relative intensities on the autocorrelation line also reflect the fragmentation efficiency of each ion species, which, for ECD, depends greatly on charge state. 24,49The SNR for the monoisotopic peak of [M + 10H + 4Ac] 10+ in the mass spectrum is only 2−3, which is about 10× smaller than that for the same monoisotopic peak extracted from the autocorrelation line in Figure 2a.With 4096 scans instead of 64, the SNR would be 8× higher. One issue in the top-down analysis of large biomolecules is their accurate mass determination.Typically, deconvolution algorithms based on the averagine method are used because the SNR of the monoisotopic peak is often below the level of detection. 50Although most biomolecules for which this issue arises are much larger than ubiquitin, this result suggests that using the autocorrelation line in 2D mass spectra may offer more accurate analytical information by offering higher SNRs for monoisotopic peaks of biomolecules.The process of peak assignment and sequence coverage determination using FAST-MS is illustrated in Figure 3 for each ubiquitin isoform.Figure 3a shows the summed fragment ion scans of m/z 1098 ([M + 8H + 5Ac] 8+ ).Five fragment ion scans were extracted from the 2D mass spectrum to cover the precursor ion peak of [M + 8H + 5Ac] 8+ at m/z 1098 and co-added to obtain the resulting fragment ion scan shown in Figure 3a.In Figure 3b, we illustrate why the fragment ion scans were added up (individual extracted scans are shown in red).Since the resolving power in the vertical precursor ion dimension is insufficient to distinguish between precursor ion isotopes, the overlap between precursor ion isotopic peaks is not complete.The relative intensities in fragment ion isotopic distributions in a single fragment ion scan can therefore be distorted; to recover the full isotopic distribution for fragment ions, we summed up the fragment ion scans before analysis.FAST-MS compares experimental and theoretical relative intensities to gauge the quality of peak assignments, peak-picking the fragmentation pattern for the full isotopic distribution of each protein isoform, then improves the accuracy of the sequence coverage assignment, which provides an optional advantage of adding-up adjacent scans in 2D MS.Because the fragment ion scans are adjacent, noise signals are correlated between them and the SNR is only marginally affected. 
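The scan-number argument used in this section (for example, the 8× SNR gain expected in going from 64 to 4096 accumulations) is the usual square-root averaging law, valid when the noise is uncorrelated between scans:

$$ \frac{\mathrm{SNR}_{4096}}{\mathrm{SNR}_{64}} \;\simeq\; \sqrt{\frac{4096}{64}} \;=\; 8 . $$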
The information fed into FAST-MS was the ubiquitin sequence, the molecular formula of the acetylation, the number of modifications, and the location of the modification (M and K residues). The software then generated a library of theoretical isotopic distributions of the a, b, c, y, and z fragments. Figure 3c shows the sequence coverage of [M + 8H + 5Ac]8+. All peak assignments were validated manually, reaching a sequence coverage of 86%. For comparison, a one-dimensional tandem mass spectrum of [M + 8H + (0−6)Ac]8+ in similar conditions with 2 M data points and 200 accumulated scans yielded a cleavage coverage of 84% (see Table S12 and Figure S2 in the Supporting Information). The lists of peak assignments can be found in Tables S1−S11 in the Supporting Information. Table 1 summarizes the sequence coverage for each proteoform and charge state of acetylated ubiquitin. Each fragmentation pattern has a different sequence coverage, which depends on both the abundance of each precursor ion and the charge state, because the fragmentation efficiency of ECD is charge state-dependent.[11] The last column shows the sequence coverage for each ubiquitin proteoform after combining the results for all charge states. Because different fragments are produced for each charge state, the total sequence coverage is higher than the sequence coverage of each charge state. Figure 4 shows the acetylation rate vs the residue index for proteoforms with four, five, and six acetylations, for c and z fragments. Each plot combines the peak assignments for all charge states (7−10+) with M/K acetylation sites assigned by FAST-MS. Figure 4a shows the extent of acetylation for ubiquitin with four acetylations from c fragments and z fragment ions, respectively. Ubiquitin has eight possible acetylation sites, namely, M1, K6, K11, K27, K29, K33, K48, and K63. From the N-terminus, the acetylation sites are M1, K6, K48, and K63. From the C-terminus, the acetylation sites are K63, K48, K33, and K6. The most easily accessible sites can therefore be located at K63, K48, and K6. Residues M1, K11, K27, K29, and K33 are less solvent-accessible. The sequence coverage for ubiquitin with four acetylations is not sufficient to distinguish between K27, K29, and K33. From these results, we can conclude that the most accessible acetylation sites are K63 and K48; followed by K6, M1, and K33; and finally K29, K27, and K11. This conclusion is congruent with the conclusions by top-down CID MS/MS found by Novák et al.[35] We should note that we observe a loss of acetylation in Figure 3a. However, despite this result, all acetylation sites for each isoform could be accounted for. One advantage of broadband-mode 2D MS over individual MS/MS spectra is the ease with which the interactions between the charge state and protein modifications can be measured. Since lysine, which is the main residue carrying the acetylation, also carries the charge, and since acetylation is known for reducing positive charges in proteins, we hypothesized that the charge state of ubiquitin would be affected by acetylation.[51] We calculated the average charge state of ubiquitin for each number of acetylations using the intensities on the autocorrelation line and the mass spectrum.
Legend (Table 1): Ac = acetylation, N/A = not annotated.
Since measured intensities in FT-ICR MS are proportional to the abundance and the charge of each ion species, we calculated the average charge state for each proteoform using the following equation, where ⟨z⟩(n) is the average charge state for n acetylations and I(z, n) is the intensity of the [M + zH + nAc]z+ peaks. The results are plotted in Figure S3 in the Supporting Information. The average charge state decreases with the number of acetylations, both in the mass spectrum and in the autocorrelation line, which is consistent with acetylation reducing the number of positive charges on a protein. The results also show that the average charge state is higher in the autocorrelation line than in the mass spectrum, which is due to the factors determining the intensity of a peak in FT-ICR MS. In the mass spectrum, peak intensities are determined by the ion abundance and the charge state. On the autocorrelation line of a 2D ECD mass spectrum, peak intensities are determined by the ion abundance, the charge state, and the capacity to capture electrons, which increases with charge state in positive ionization mode.[24] Therefore, the average charge state for each isoform is higher in the autocorrelation line of the 2D mass spectrum than in the 1D mass spectrum. In Figure 5, we seek to determine whether the acetylation of both lysine and the N-terminus methionine reduces the charge state of ubiquitin. Therefore, we extracted the vertical precursor ion scans from the 2D mass spectrum for the c3 (m/z 390.21790, blue) and c3+Ac (m/z 432.22714, red) fragments, which, in turn, enables us to quantify the acetylation of only the M1 residue in ubiquitin. Figure 5 shows the c3 fragment ion (blue) alongside its acetylated form (c3 + Ac, red) for charge states 10−7+ in Figure 5a−d, respectively. Figure 5a shows that ubiquitin with four acetylations produces the c3 fragment and that ubiquitin with five and six acetylations produces the c3 + Ac fragment in the 10+ charge state. Therefore, the fifth most favored acetylation site is M1. In Figure 5b, for 9+ charged precursors, the c3 + Ac fragment is only produced from the ubiquitin with six acetylations, which means that M1 is the sixth most favored acetylation site. In Figure 5c and d, we see that only c3 is produced from the 7+ and 8+ charge states, which means that M1 is, at best, the seventh most favored acetylation site. As a result, we can say that ubiquitin with an acetylation on the M1 residue skews toward higher charge states. This result suggests that the acetylation of the methionine residue may not reduce the charge state of ubiquitin like the acetylation of the lysine residues does.

■ CONCLUSION
Stable protein covalent labeling coupled to 2D MS analysis and ECD fragmentation has yielded information about solvent accessibility at individual residues, particularly the N-terminus methionine and the lysine residues.[35] For the first time, 2D MS was applied with quadrupolar detection on a dynamically harmonized ICR cell. The detection at the 2ω harmonic led to a shorter experimental duration and an increase in resolving power in the vertical precursor ion dimension.[34] Because of the multiplexing inherent to the 2D MS experiment, we were able to obtain in parallel the ECD fragmentation pattern of four charge states of ubiquitin with up to six acetylations each.[24]
The resolving power in the vertical precursor ion scan was sufficient to confidently correlate precursor and fragment ions without unwanted contributions from different proteoforms and without a loss of precursor ion abundance due to quadrupole isolation. We used the FAST-MS software and defined a workflow to assign all fragment ions generated from each charge state by ECD and quantify the extent of acetylation of methionine/lysine residues, which was consistent with previously published results.[30,43,35] 2D MS showed the advantages of having the fragmentation patterns of multiple isoforms and charge states in a single spectrum. First, the sequence coverage from the combined fragmentation patterns of all observed charge states was higher than the sequence coverage obtained from the charge state with the highest fragmentation efficiency. Second, the 2D mass spectrum enabled the observation that acetylation reduces the gas-phase charge state of ubiquitin and, more specifically, that the acetylation of lysine residues reduces the charge state to a higher degree than the acetylation of the N-terminus M1 residue. This study shows the potential for 2D MS coupled with ECD fragmentation to yield comprehensive analytical information for the top-down analysis of proteoform mixtures. 2D ECD MS can further be applied to the quantitative analysis of post-translational modifications of proteins and to the structural analysis of covalently labeled proteins.

Scheme 1. Pulse sequence for the 2D MS experiment with frequency and m/z range for quadrupolar detection.
Figure 1. (a) 2D ECD mass spectrum of acetylated ubiquitin; an asterisk (*) indicates electron capture lines (green). (b) Zoom-in on the fragmentation pattern of [M + H]9+ with 4−6 acetylations; the red lines indicate dissociation lines for the various c and z fragments listed around the periphery.
Figure 2. (a) Extracted autocorrelation line from the 2D mass spectrum; the inset shows a zoomed-in view of the isotopic distribution of [M + 10H + 4Ac]10+, with the arrow marking the monoisotopic peak (MI). (b) Mass spectrum of acetylated ubiquitin; the inset shows the corresponding zoomed-in isotopic distribution of the [M + 10H + 4Ac]10+ species, with the arrow marking the monoisotopic peak (MI).
Figure 4. Acetylation rate vs residue index for ubiquitin modified with (a) four, (b) five, and (c) six acetylations (c fragments on top, z fragments at the bottom in each panel).
2023-10-26T06:16:58.887Z
2023-10-25T00:00:00.000
{ "year": 2023, "sha1": "2d7e329732d7715f0851015b8e904a200ad1bd59", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5644b7eb22934a2a9e27e5fa42976591d5f5de31", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
118877227
pes2o/s2orc
v3-fos-license
Holography and Conformal Anomaly Matching We discuss various issues related to the understanding of the conformal anomaly matching in CFT from the dual holographic viewpoint. First, we act with a PBH diffeomorphism on a generic 5D RG flow geometry and show that the corresponding on-shell bulk action reproduces the Wess-Zumino term for the dilaton of broken conformal symmetry, with the expected coefficient aUV-aIR. Then we consider a specific 3D example of RG flow whose UV asymptotics is normalizable and admits a 6D lifting. We promote a modulus \rho appearing in the geometry to a function of boundary coordinates. In a 6D description {\rho} is the scale of an SU(2) instanton. We determine the smooth deformed background up to second order in the space-time derivatives of \rho and find that the 3D on-shell action reproduces a boundary kinetic term for the massless field \tau= log(\rho) with the correct coefficient \delta c=cUV-cIR. We further analyze the linearized fluctuations around the deformed background geometry and compute the one-point functionsand show that they are reproduced by a Liouville-type action for the massless scalar \tau, with background charge due to the coupling to the 2D curvature R. The resulting central charge matches \delta c. We give an interpretation of this action in terms of the (4,0) SCFT of the D1-D5 system in type I theory. Introduction The proof of the a-theorem in D=4 CFT and the alternative proof of c-theorem in D=2 CFT [1], given in [2,3], inspired by the anomaly matching argument of [4], has prompted several groups to address the issue of a description of the corresponding mechanism on the dual gravity side [5,6]. While a sort of a(c)-"theorem " is known to hold for RG-flows in the context of gauged supergravity [7,8], as a consequence of the positive energy condition, which guarantees the monotonic decrease of the a(c) function from UV to IR [9] 1 , one of the aims of the renewed interest on the topic has been somewhat different: the field-theoretic anomaly matching argument implies the existence of an IR effective action for the conformal mode, which in the case of spontaneous breaking of conformal invariance is the physical dilaton, whereas for a RG flow due to relevant perturbations is a Weyl mode of the classical background metric ("spurion"). In any case, upon combined Weyl shifting of the conformal mode and the background metric, the effective action reproduces the conformal anomaly of amount a U V − a IR (c U V − c IR ), therefore matching the full conformal anomaly of the UV CFT. This effective action therefore is nothing but the Wess-Zumino local term corresponding to broken conformal invariance. So, one obvious question is how to obtain the correct Wess-Zumino term for the dilaton (or spurion) from the dual gravity side. One of the purposes of the present paper is to discuss this issue offering a different approach from those mentioned above. In known examples of 4D RG flows corresponding to spontaneous breaking of conformal invariance on the Coulomb branch of N = 4 Yang-Mills theory [13][14][15][16], indeed the existence of a massless scalar identifiable with the CFT's dilaton (see also [6,17]) has been shown. However, the background geometry is singular in the IR, so that one does not have a full control on the geometry all along the RG flow. It would be therefore desirable to have an explicit example which is completely smooth from UV to IR, and indeed we will discuss such an example in the AdS 3 /CF T 2 context. 
Before analyzing a specific example in detail, we will first ask, in general terms, which bulk mode represents the spurion field of the CFT. The spurion couples to field theory operators according to their scale dimension and transforms under conformal transformations by Weyl shifts. These properties point towards an identification of this mode with PBH (Penrose-Brown-Henneaux) diffeomorphisms, which are bulk diffeomorphisms inducing Weyl transformations on the boundary metric, parametrized by the spurion field τ. This identification was first adopted in [18,19] to study holographic conformal anomalies and also recently in [5,6,20] to address the anomaly matching issue from the gravity side. As will be shown in section §2, for the case of a generic 4D RG flow, by looking at how PBH diffeomorphisms act on the background geometry at the required order in a derivative expansion of τ, we will compute the regularized bulk action for the PBH-transformed geometry and show that it contains a finite contribution proportional to the Wess-Zumino term for τ, with proportionality constant given by a_UV − a_IR. In the case where conformal invariance is spontaneously broken, when D > 2, one expects to have a physical massless scalar in the boundary CFT, the dilaton, which is the Goldstone boson associated with the broken conformal invariance. As stressed in [6], one expects on general grounds that the dilaton should be associated with a normalizable bulk zero mode, and therefore it cannot be identified with the PBH spurion, which is related to a non-normalizable deformation of the background geometry. In section §3 we will follow a different approach to the problem: starting from an explicit, smooth RG flow geometry in 3D gauged supergravity [21], we will promote some moduli appearing in the solution to space-time dependent fields. More specifically, we will identify a modulus which, upon lifting the solution to 6D, is in fact the scale ρ of an SU(2) Yang-Mills instanton. We will then find the new solution of the supergravity equations of motion up to second order in the space-time derivatives of ρ. We will find that demanding regularity of the deformed geometry forces us to switch on a source for a scalar field. We will then compute the on-shell bulk action and verify that this reproduces the correct kinetic-term boundary action for the massless scalar field τ = log ρ, with coefficient δc = c_UV − c_IR. The computation of the CFT effective action is done up to second order in the derivative expansion. Namely, only the leading term in the full IR effective action is computed, and our procedure is similar to the one followed in [22] for the derivation of the equations of hydrodynamics from AdS/CFT. In section §4 we reconsider the problem from a 6D viewpoint [23]: the 6D description has the advantage of making more transparent the 10D origin of our geometry in terms of a configuration of D1 and D5 branes in type I string theory. Here we take one step further: not only do we determine the deformed background involving two derivatives of ρ, but we also solve the linearized equations of motion around it to determine the on-shell fluctuations. This allows us to compute one-point functions of the boundary stress-energy tensor ⟨T_µν⟩, from which we deduce that the boundary action for τ is precisely the 2D Wess-Zumino term of broken conformal invariance, i.e. a massless scalar coupled to the 2D curvature R^(2), with overall coefficient δc.
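For orientation, the 2D action referred to here has the familiar Liouville-type form; schematically (the overall normalization depends on how τ is defined and is not fixed by this passage, so the prefactor should be read as indicative only):

$$ S_{\rm WZ}[\tau] \;\propto\; \delta c \int d^2x \,\sqrt{g_{(0)}}\, \Bigl( g_{(0)}^{\mu\nu}\,\partial_\mu\tau\,\partial_\nu\tau \;+\; R^{(2)}\,\tau \Bigr)\,, \qquad \delta c = c_{UV} - c_{IR}\,, $$

i.e. a free massless scalar with a background-charge coupling to the two-dimensional curvature, whose Weyl variation reproduces the central charge mismatch δc.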
An obvious question is what the field τ and its action represent on the dual CFT. We will argue that the interpretation of the effective field theory for τ is a manifestation of the mechanism studied in [26], describing the separation of a D1/D5 sub-system from a given D1/D5 system from the viewpoint of the (4, 4) boundary CFT. There, from the Higgs branch, one obtains an action for the radial component of vector multiplet scalars which couple to the hypermultiplets, in the form of a 2D scalar field with background charge, such that its conformal anomaly compensates the variation of the central charge due to the emission of the sub-system. In our case we will see that in the limit ρ → ∞, the gauge five-brane decouples, whereas in the limit ρ → 0 it becomes a D5-brane: these two limits correspond in turn to the IR and UV regions of the RG flow, respectively. The effective action for τ = log ρ accounts, in the limit of large charges, precisely for the δc from the UV to the IR in the RG flow. We will give an interpretation of the action for τ in terms of the effective field theory of the D1-D5 system in presence of D9 branes in type I theory. We stress that the above procedure, although, for technical reasons, implemented explicitly in the context of an AdS 3 /CF T 2 example, we believe should produce the correct Wess-Zumino dilaton effective action even in the D = 4 case, had we an explicit, analytic and smooth RG flow triggered by a v.e.v. in the UV. Of course, in this case we should have pushed the study of equations of motion up to fourth order in the derivative expansion. The Holographic Spurion The aim of this section is to verify that the quantum effective action for the holographic spurion in 4D contains the Wess-Zumino term, a local term whose variation under Weyl shifts of the spurion field reproduces the conformal anomaly 4 , with coefficient given by the difference of UV and IR a-central charges, in accordance with the anomaly matching argument. We start by characterizing a generic RG flow background and the action of PBH diffeomorphisms on it. The action of a special class of PBH diffeomorphisms introduces a dependence of the background on a boundary conformal mode which will play the role of the spurion. Indeed, we will verify that the corresponding on-shell Einstein-Hilbert action gives the correct Wess-Zumino term for the conformal mode introduced through PBH diffeo's. We then study the case of a flow induced by a dimension ∆ = 2 CFT operator, and check that boundary contributions coming from the Gibbons-Hawking term and counter-terms do not affect the bulk result. A derivation of the Wess-Zumino action has appeared in [20], studying pure gravity in AdS in various dimensions: the spurion φ is introduced as deformation of the UV cut-off boundary surface from z constant to z = e φ(x) , z being the radial coordinate of AdS. In appendix A.5 we present a covariant approach to get the same result for the WZ term. Holographic RG flows We start by characterizing a generic RG flow geometry. For the sake of simplicity, we are going to work only with a single scalar minimally coupled to gravity. In the next section we will consider a specific example involving two scalar fields. 
The action comprises the Einstein-Hilbert term, the kinetic and potential terms for a scalar field φ, and the Gibbons-Hawking extrinsic curvature term at the boundary of the space-time manifold M : where K is the trace of the second fundamental form, and γ is the induced metric on the boundary of M , ∂M , L n is the Lie derivative with respect to the unit vector field n normal to ∂M . The metric has the form: which is an AdS 5 metric for constant l(y) and g µν (y) (µ, ν = 0, 1, 2, 3.). A RG flow geometry is then characterized by the fact that the above geometry is asymptotic to AdS 5 both in the UV and IR limits, y → 0 and y → ∞, respectively. We assume that the potential V (φ) has two AdS 5 critical points that we call φ U V (IR) and the background involves a solitonic field configuration φ(y) interpolating monotonically between these two critical points: Around each critical point there is an expansion: where δφ(y) = φ(y) − φ U V (IR) . By using (2.6) in the asymptotic expansion of the equations of motion: one sees that the constants Λ U V (IR) play the role of cosmological constants and fix also the radii of the two AdS 5 's. We discuss here the possibility to work in a gauge that makes easier to appreciate how only the boundary data is determining the spurion effective action. Consider a RG flow geometry of the form (2.3). Poincaré invariance of the asymptotic value of the metric implies g µν (y) = g(y)η µν . This is going to be an important constraint later on. The scale length function l 2 (y) has the following asymptotic behaviour: Notice that there is still the gauge freedom: where h = h(y) is any smooth function with asymptotic values 1 in the UV/IR fixed points. This gauge freedom allows to choose positive integers n U V and n IR as large as desired. In particular it is always possible to choose n U V > 2. This gauge choice does not change the final result for the effective action because this is a family of proper diffeomorphisms leaving invariant the Einstein-Hilbert action (we will comment on this fact later on). Its use is convenient in order to make clear how only leading behaviour in the background solution is relevant to our computation. At the same time it allows to get rid of any back-reaction of δl U V and δl IR in the leading UV/IR asymptotic expansion of the equations of motion. The metric g µν has the following UV expansion, for y → 0: and a bulk scalar field dual to a UV field theory operator of conformal dimension ∆ = 2 that we denote as O (2) , behaves like: where the ... stand for UV subleading terms. From the near to boundary expansion of the Klein-Gordon equations one reads the useful relation between the conformal weight of O (2) and the mass of φ on dimensional AdS d+1 : In this critical case we have the standard relation between asymptotic values of bulk fields and v.e.v.'s or sources for the dual CFT operators: namely φ (0) is the v.e.v. andφ (0) the source. We have chosen the case ∆ = 2 to take a particular example, but one can easily generalize the results to any other value of ∆ ≤ 4. In the remaining of the section we refer only to relevant perturbations. On the PBH diffeomorphisms The PBH diffeomorphisms transform, by definition, the line element (2.3) into: withg µν given by an UV asymptotic expansion of the form (2.9): and h 1,2 and g (i) , with i = 2, 4, determined in terms of the boundary data by the near to boundary expansion of the equations of motion (A.10). 
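The mass–dimension relation invoked above is not displayed in this extraction; it is the standard AdS_{d+1} formula,

$$ \Delta(\Delta - d) \;=\; m^2 L^2\,, \qquad \Delta_\pm \;=\; \frac{d}{2} \pm \sqrt{\frac{d^2}{4} + m^2 L^2}\,, $$

so that in d = 4 the Δ = 2 operator O^(2) considered here corresponds to m²L² = −4, saturating the Breitenlohner–Freedman bound.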
For the static RG flow geometry at hand, (2.3) a PBH transformation has the following structure in terms of derivatives of τ : where the ... stand for higher derivative in τ dependence. Covariant indices are raised up with the metric g µν (y) = g −1 (y)η µν . Notice we have written the most general boundary covariant form and that this IR expansion of the full transformation is valid along the full flow geometry up to the IR cut off, not only near to boundary. The constraints implied by preserving the form (2.12) allow to determine the form factors a (i) and b (i) in terms of the scale length function l. To begin with, it is immediate to see that : where z = e τ y, which can be readily integrated. Some of these form factors can be settled to zero without lost of generality, since they are solution of homogeneous differential equations. Let us study the following one b (1) . We can look at second order in derivatives contribution of δx µ ≡ x τ µ − x µ to the (y, y) component of the metric, which is ∼ (∂τ ) 2 . The contributions coming from δy ≡ y τ − y contains a linear order in y term proportional to that does not match any contribution from δx µ and also a term proportional to (∂τ ) 2 . This implies b (1) has to be taken to vanish. Consequently a (2) would vanish. In the same fashion one can prove b (2) can be taken to vanish and b (3) can be found to obey the following inhomogeneous first order differential equation: which can be solved asymptotically to give: Notice that so far, we have always taken the trivial homogeneous solution. In fact we are going to see that this choice corresponds to the minimal description of the spurion. The choice of different PBH representative 5 would translate in a local redefinition of the field theory spurion. In the same line of logic one can find that: From these we can infer that b (4) , b (7) and b (8) obey homogeneous differential equations provided a (4) is taken to vanish, so we set them to zero too. The following constraints: give the UV/IR asymptotic expansions for the form factors: where the ... stand for subleading contributions. In appendix (A.2) we extend these results to the case of non static geometries. We use those non static cases in section §3 to check out the general results of this section in a particular example. Before closing the discussion let us comment about a different kind of PBH modes. To make the discussion simpler we restrict our analysis to the level of PBH zero modes i.e. τ is taken to be a constant. Then, is easy to see that one can take the transformation This arbitrary function h constitutes a huge freedom. In particular we notice that one can choose a PBH which does not affect the UV boundary data at all, but does change the IR side, namely such that: respectively, or vice versa. This kind of PBH's are briefly considered in appendix A.2. Besides acting on the metric the change of coordinates also changes the form of the scalars in our background. We focus on the UV asymptotic. So, for instance the case of the dual to a ∆ = 2 operator: Notice the source transforms covariantly, but not the v.e.v.. This asymptotic action will be useful later on when solving the near to boundary equations of motion. As already mentioned, we assume smoothness of the scalar field configurations in the IR. It is interesting however to explore an extra source of IR divergencies. The original 5D metric is assumed to be smooth and asymptotically AdS in the IR limit, y → ∞: This AdS limit assumption implies that g (0) µν = η µν . 
Non trivial space time dependence for g (0) sources an infinite tower of extra contributions that break AdS limit in the IR. For instance, a Weyl shifted representative will alter the IR AdS behaviour. The change is given by: in (2.23). Clearly AdS IR behaviour, y → ∞, is broken in this case. This is related with the fact that PBH diffeomorphisms are singular changes of coordinates in the IR. These modes alter significantly the IR behaviour of the background metric. Let us comment on a different approach that will be employed in the following to study the effect of PBH diffeo's. Clearly PBH diffeo's map a solution of the EoM into another solution. By knowing the UV and IR leading behaviours, one could then use near to boundary equations of motion to reconstruct next to leading behaviour in both extrema of the flow. Namely we can find the factors g (2) , g (4) and h (4) 's in (2.9) in terms of the Weyl shift of the boundary metric e τ g (0) . We can then evaluate the bulk and boundary GH terms of the action with this near to boundary series expansion. Some information will be unaccessible with this approach, concretely the finite part of the bulk term remains unknown after use of this method. In appendix A.4.1 we compute the divergent terms of the bulk term and find exact agreement with the results posted in the next subsection. We will use this procedure to evaluate the GH and counter-terms indeed. Wess-Zumino Term Given its indefinite y-integral S[y], the bulk action can be written as: The divergent parts of the bulk action come from the asymptotic expansions of the primitive S: For a generic static RG flow solution a U V /IR , will depend on the specific matter content of the bulk theory at hand. As for our particular choice of ∆'s in the UV/IR, the a U V /IR [η µν ] dependence on the parameters of the flow, so in order to keep the discussion as general as possible until the very end of the section we keep the static limit of both a (2) U V /IR as arbitrary. As for the expansions of the primitive S in a generic static case, one gets thence: The terms a (4) uv,ir [η µν ] are the contributions to the Weyl anomaly coming from the matter sector of the dual CFT, they must be proportional to the sources of the dual operators. The order one contribution is completely arbitrary in near to boundary analysis. Notice that we have freedom to add up an arbitrary, independent of y functional, d 4 x C, in the expansions. The difference of both of these functionals carries all the physical meaning and it is undetermined by the near to boundary analysis. To determine its dependence on the parameters of the flow, full knowledge of the primitive S is needed. Next we aim to compute the change of the bulk action introduced before, under an active PBH diffeomorphism. The full action is invariant under (passive) diffeomorphisms x µ = f µ (x ′ ), under which, for example, the metric tensor changes as: 27) and similarly for other tensor fields. Here the transformed tensors are evaluated at the new coordinate x ′ . On the other hand by an active diffeomorphism, the argument of a tensor field is kept fixed, i.e: The infinitesimal version of this transformation above is given by the Lie-derivative acting on g. The difference between the two viewpoints becomes apparent on a manifold M with boundaries. Let us take a manifold with two disconnected boundaries to be time-like hypersurfaces. 
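For reference, the transformation rules invoked in (2.27)–(2.28), whose displayed equations are not reproduced in this text, take the standard form (conventions may differ from the paper's):

$$ g'_{\mu\nu}(x') \;=\; \frac{\partial f^{\rho}}{\partial x'^{\mu}}\,\frac{\partial f^{\sigma}}{\partial x'^{\nu}}\; g_{\rho\sigma}\bigl(f(x')\bigr)\,, \qquad \delta_\xi g_{\mu\nu} \;=\; \mathcal{L}_\xi\, g_{\mu\nu} \;=\; \nabla_\mu \xi_\nu + \nabla_\nu \xi_\mu\,, $$

the first for a passive diffeomorphism x = f(x') and the second for its infinitesimal active counterpart generated by the vector field ξ.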
An integration of a scalar density over this manifold is invariant in the following sense: where the boundaries are denoted by B U V (IR) . By f −1 (B U V ) we mean the shape of the boundaries in the new coordinates x ′ = f (x). On the other hand, under an active transformation we have the change: (2.32) By using (2.31), the variation of the corresponding functional under an active diffeomorphism can be written as: where in the last step we have used the invariance under the passive diffeomorphism induced by the inverse map f −1 . Of course, if the maps f or f −1 leave invariant the boundary conditions then the functional S is invariant even under the active transformation induced by them. From now on in this section we specialize to D = 5 with x 5 ≡ y. We take as diffeomorphism the PBH diffeomorphism discussed earlier. The aim is to compute the on-shell action of the PBH mode τ . From the last discussion we found all we need is the on-shell action in terms of the background, namely the solution before performing the PBH transformation, and a choice of time-like boundary surfaces, which we take to be: Under a generic PBH GCT this region transforms into: with y τ U V and y τ IR given by the action (2.14) on y U V and y IR respectively. In virtue of (2.33) we compute the transformed bulk action: where y τ is given in (2.14). Given the near to boundary expansion of the bulk action for boundary metric g (0) = η: with cut off surface at y = y U V , we can then compute the leading terms in the PBH transformed effective action by using (2.14) and (2.37): Similar contribution comes from the IR part of the primitive S. Should we demand IR smoothness of every background field, the static coefficients a IR [η µν ] will vanish automatically (See last paragraph in appendix A.4). So finally, we get the following form for the regularized bulk action: The . . . stand for logarithmic divergent terms that are going to be minimally subtracted. Notice that the gravitational Wess-Zumino term comes out with a universal coefficient ∆a, independent of the interior properties of the flow geometry. Specific properties of the flow determine the normalization of the kinetic term and the Wess-Zumino term corresponding to the matter Weyl Anomaly. Next, we have to check whether this result still holds after adding the GH term and performing the holographic renormalization. So, from now on we restrict the discussion to the case of ∆ = 2. The finite Gibbons-Hawking contribution can be computed with the data given in appendix A.4.1. One verifies that the contributions of both boundaries are independent of derivatives of τ . The difference S GH | U V IR gives in fact a finite contribution proportional to d 4 xφ 0φ(0) which after a PBH tranformation reduces to a potential term for τ . Notice that this term vanishes for a v.e.v. driven flow, so in this case no finite contribution at all arises. We will crosscheck this in the particular example studied in the next sections. In the case of a source driven flow, the finite contribution d 4 xφ 0φ(0) give a potential term which is not Weyl invariant, as one can notice from the transformation properties (2.22). In fact its infinitesimal Weyl transformation generates an anomalous variation proportional to the source square δτ From the passive point of view, the GH term presents an anomaly contribution log(y U V ) (φ (0) ) 2 that after the cut off redefinition originates a matter Wess-Zumino term d 4 x 19)). 
Next, we analyze the counter-terms that are needed in order to renormalize UV divergencies. Covariant counter-terms involve the boundary cosmological constant and curvatures for g (0) and the boundary values of the scalar field, namely v.e.v. and sources: where . . . stand for logarithmic dependences that at the very end are going to be minimally substracted. We take g (0) to be conformally flat and then use the Weyl transformation properties of the boundary invariants to compute the Weyl factor dependence of counter-terms. The "volume" counter-term (2.40) is used to renormalize the infinite volume term of an asymptotically AdS 5 space. One then needs to use the R term to cancel the next to leading divergent term. In the process one remains with a finite potential contribution that even for a v.e.v. driven flow gives a non vanishing energy-momentum trace contribution. The usual procedure [13,27] is then to use the finite covariant counter-term (2.42) to demand conformal invariance in the renormalized theory, when the source is switched off. The counter-term action satisfying this requirements is: This action will provide an extra finite contribution to (2.3) proportional to: Finally the renormalized action takes the form: We should notice that no second derivative term, (∂τ ) 2 , is present in this particular case, just as in the similar discussion of [20]. However, there is a source of higher derivative terms: due to the fact that the PBH diffeomorphism is singular in the IR, and in fact, the higher orders in derivatives come with the higher order IR singularities. So, the higher derivative terms are counted by powers of the IR cut off. We do not address here the issue of renormalizing these terms. The main idea here was to show the presence of a Wess-Zumino term compensating the anomaly difference between fixed points. The term O(1) stands for possible finite contributions (4D cosmological constants) in the static on shell action plus GH term and CT. As for the GH term this contributions vanish for v.e.v. driven flows. RG Flow in N = 4 3D Gauged Supergravity In this section we consider a particular, explicit and analytic example of a Holographic RG flow in 3D gauged supergravity. The reason to analyze this particular example is twofold: first, it is relatively simple and analytic, and, second, it is completely smooth, even in the infrared region. Indeed smoothness will be our guiding principle in deforming the background geometry in the way we will detail in this section. We will promote some integration constants (moduli) present in the flow solution to space-time dependent fields and identify among them the one which corresponds to a specific field in the boundary CFT. To get still a solution of the equations of motion we will have to change the background to take into account the back reaction of spacetime derivatives acting on the moduli fields. This will be done in a perturbative expansion in the number of space-time derivatives. The starting point is one of the explicit examples of RG flows studied in [21], where domain wall solutions in N = 4 3D gauged supergravity were found. These solutions are obtained by analyzing first order BPS conditions and respect 1/2 of the bulk supersymmetry. They describe holographic RG flows between (4, 0) dual SCFT's. It turns out that the solution we will be considering admits a consistent lift to 6D supergravity, which will be reviewed and used in the next section. In this section the analysis will be purely three-dimensional. 
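Two standard AdS3/CFT2 relations will be used repeatedly in what follows; we quote them here for reference in a common normalization (they are generic statements, not results specific to this paper). For a Poincaré-invariant domain wall
\[
ds^{2}=dr^{2}+e^{2A(r)}\,\eta_{\mu\nu}dx^{\mu}dx^{\nu},
\qquad
c(r)\;\equiv\;\frac{3}{2\,G_{N}\,A'(r)},
\]
the function c(r) is monotonic along the flow (non-increasing towards the IR) by the null energy condition and reduces to the Brown-Henneaux values c UV/IR = 3 L UV/IR /2G N at the fixed points. For a bulk scalar of mass m in AdS3 of radius L, the dimension of the dual operator obeys
\[
\Delta(\Delta-2)\;=\;m^{2}L^{2}
\quad\Longrightarrow\quad
\Delta_{\pm}\;=\;1\pm\sqrt{1+m^{2}L^{2}}\,.
\]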
We start by writing the action and equations of motion for the three dimensional theory at hand. In this case the spectrum reduces to the metric g, and a pair of scalars A and φ, which are left over after truncating the original scalar manifold. The action is: with potential for the scalar fields given by: The corresponding set of equations of motions is then given by: The domain wall solution and its moduli In this subsection we review the domain wall solution describing the RG flow on the dual CFT and identify its moduli. Let us choose coordinates x ν = t, x, r and the 2D (t, x)-Poincaré invariant domain wall ansatz for the line element: and the scalar field profiles A B (r) and φ B (r). The equations of motion reduce then to the following set: where the primes denote derivative with respect to r. It is then straightforward to show that the following field configuration: with y(r) = e 2g 1 F (r) is the most general solution of (3.7),(3.4), (3.3), provided: We can solve this equation explicitly for r(F): Notice the presence of three moduli τ , s p , ρ. The first one corresponds to a freedom in shifting the radial coordinate by a constant amount τ , r → r + τ . This mode is a PBH rigid diffeomorphism in the domain wall coordinates. As mentioned a rigid PBH in domain wall coordinates becomes a warped one in the Fefferman-Graham coordinates. The second modulus s p can be identified with a rigid conformal transformation in the boundary coordinates (t, x). The third modulus ρ is an internal mode respecting the boundary conditions for the metric in both UV and IR limits but changing the scalar modes and It corresponds to a normalizable zero mode. In the next section we will see this mode is basically the instanton size modulus in the 6D description of the RG flow. But can be also thought of as a linear combination of a PBH and s p mode. In order to have a flavor of the properties of the flow geometry it is useful to make a change of coordinates, from (t, x, r) to (t, x, y) with y = e 2g 1 F (r) . In this coordinates the metric becomes: This geometry approaches AdS 3 in both the UV(y → ∞) and the IR (y → 0) limits, with corresponding radii: (3.15) These radii determine the central charges of the (4,0) CFT's at the fixed points, through the expression c = 3L/2G N , G N being the 3D Newton's constant 6 . Additionally the limit: recovers AdS 3 space with radius L given by L 2 4 = c 2 1 g 4 1 . An additional transformation in the boundary metric is needed to keep it finite in the limit, η → 2 η. The scalar fields go in the UV and IR to different fixed points (extrema) of the potential V (A, φ). In particular, in the UV, A → 0 and φ → log 2c 1 g 1 . Expanding the potential (3.2) around the extremum we find out the masses of the bulk fields A(r), φ(r) at the UV fixed point: (3.17) The allowed conformal dimension of the corresponding dual boundary operators are: respectively. By looking at (3.10) we can read off their asymptotic expansions near the UV boundary (y → ∞): These are "normalizable" excitations, and in the standard quantization, which adopts ∆ = ∆ + , they would correspond to a vacuum state in the dual CFT, where the dual operators O A and O φ acquire a v.e.v.. This clashes with the fact that in D = 2 we cannot have spontaneous breaking of conformal invariance 7 . Notice that the problem arises also in the well known case of the D1-D5 system in IIB, when one deforms the AdS 3 × S 3 background by going to multi-center geometries. 
Most probably this is a feature of the supergravity approximation, or dually, of the leading large-N expansion on the CFT side. It would interesting to see how the picture is modified in going beyond the supergravity approximation, as discussed, in a different context, in [29]. At the IR, In particular, the background is completely smooth. Now we notice a property of the metric (3.14) : the UV/IR AdS limits of the geometry are independent of ρ, and, as mentioned earlier, this modulus corresponds to a normalizable zero mode. It is instructive to look at how can be represented a PBH diffeomorphism zero-mode of the form y → e 2σ P BH y in terms of the moduli appearing in the background geometry: it amounts to take the combined set of transformations ρ → ρe 2σ P BH and s p → sp + σ P BH . Conversely, the ρ modulus can be thought of as a combination of a PBH mode mentioned before plus a suitable choice of s p such that the boundary metric remains unchanged. We should stress that the PBH zero-modes τ and σ P BH aren't precisely the same. The difference will come about in the next subsection. But we can already say that there is a choice of τ and s p for fixed ρ = 1 that preserves normalizability. We can explore then two possibilities, either we analyze the combined pair of moduli (τ, s p ) ρ=1 or the single modulus ρ. In the next subsection we analyze both cases. We will also check the geometrical procedure discussed in section §1. Fluctuations Analysis In this subsection we are going to analyze a deformation of the background geometry which arises when one gives a non trivial (t, x) dependence to some of the moduli introduced in the previous subsection. Specifically, we will promote the integration constants s p and τ to functions of t and x, s p (t, x) and τ (t, x). In doing so, of course, we have to take into account the back reaction due to the (t, x) derivatives acting these fields. The equations of motion will involve therefore inhomogeneous terms containing derivatives of s p (t, x) and τ (t, x). We will work in a perturbative expansion in the number of t and x derivatives. For that purpose it is convenient to introduce a counting parameter q, whose powers count the number of t, x derivatives. As for the metric, we keep the axial gauge condition and therefore start with the expression: where x 0 = t and x 1 = x, and µ, ν = 0, 1. For the background deformations, at second order in (t, x) derivatives, we adopt the following ansatz for the scalar fields: whereas for the metric components: and we redefine g (2) tx → e 2f g (2) tx . The homogeneous part of the equations of motion will involve an ordinary linear differential operator in the r variable acting on the fluctuations and this will be sourced by an inhomogeneous term involving two t, x derivatives acting on s p and τ , which represents the moduli back reaction to the original background. Now we have five unknown functions and eight equations, (3.3), (3.4), (3.5), so that we need to reduce the number of independent equations. It is a long but straightforward procedure to find out the general solutions to the system. We are going to sketch the procedure we followed to solve them. Details are given in appendices. Specifically the equations of motions at order q 2 are given in appendix B.1. A change of coordinates is useful to render the system of partial differential equations simpler. 
We perform a change from the domain wall coordinates (t, x, r) to the Poincaré like coordinates (t, x, y) already introduced in the previous subsections: where, Notice that if we are using a non fluctuating cut off surface r = r U V in the original coordinates, in the new coordinates the same surface will be fluctuating at a pace dictated by τ (t, x). We can however use a different choice of coordinates: It is then easy to show based on (3.13), that cut offs shapes in the y-system and y-system are related as follows: The set of equations, (3.3), (3.4), (3.5) provides a system of second order differential equations for the fluctuations in terms of the inhomogeneities produced by derivatives acting on s p (t, x) and τ (t, x). We are going to denote the five Einstein equations (3.5) by (t, t), (x, x), (t, x), (r, r), (t, r), (x, r), with obvious meaning. Equations (t, t), (x, x) and (r, r) form a set of second order equations in the η-trace part of the metric parametrized by g (2) (t, x, r) and the traceless part parametrized by T (t, x, r), together with the scalar fluctuations, which only appear up to first order in radial derivatives. It turns out that the combination (t, t) − (x, x) gives an equation for the trace part and scalar fluctuations, but the traceless part decouples in the combination (t, t) + (x, x). Namely it gives the equation: whose general solution is: where C 3 and C 2 are integration constants promoted to be arbitrary functions of t and x. Let's focus then on the set of equations (t, t) − (x, x) and (r, r). This is a coupled system for the trace part and the scalars which can be solved in many different ways, here we present one. First of all (r, r) can be integrated to get: where, with an integration constant C 5 . Then, one can notice that Eq. (t, t) − (x, x) only contains derivatives of the trace part of the metric fluctuations, so we can use (3.30) and its derivative to eliminate this function. The remaining equation will contain the scalar fluctuations up to first order in "radial" derivatives: Under the conditions already found the remaining equations (3.3), (3.4) reduce to the final algebraic equation for φ (2) in terms of y-derivatives of A (2) up to second order. By solving it and plugging the result in (3.32) we obtain the third order differential equation: where the inhomogeneous part takes the form: The R (i) A (2) and F (i) are rational functions in the radial coordinate y ( They are given in the appendix B.2). We solve this equation by Green's function method (See appendix B.3). The (t, x) equation: can be solved to get: As for the mixed equations, (t, r) and (x, r), they involve odd number of (t, x) derivatives and one needs to go to third order, were in fact they reduce to differential constraints for the integration constants C 2 , C 5 and C 6 sourced by second derivatives of the moduli τ and s p . Before solving for these constraint equations it is convenient to analyze the constraints that IR regularity imposes on the modulus C 5 . At this point we should comment about an important issue. We have nine integration functions C i (t, x) and our general on shell fluctuations develop generically infrared singularities and/or UV non-normalizabilty, in the latter case representing source terms on the dual CFT. We have two ways to deal with possible IR divergencies in our deformed background geometry: we could allow infrared singularities of the geometry and put a cut off at the IR side, or demand IR-smoothness. 
This last option will spoil full normalizabilty of all fluctuations, as we will see. This is something perhaps we could allow because at q 0 order the modulus which could be associated to the "dilaton" is still a normalizable bulk mode. The first option will guarantee full normalizability to order q 2 , but will require the presence of an IR Gibbons Hawking (GH) term (3.55). In any case we will see that the GH term will give no contribution to the boundary effective action of the moduli. In this paper we take the first point of view and demand full smoothness of the deformed geometry. By demanding regularity in the IR side for our spectrum of matter fluctuations A (2) and φ (2) we get the following set of relations for the integration functions: At this point we could solve the (t, r) and (x, r) fluctuation equations for the moduli: (3.40) According to the AdS/CFT dictionary, a state in the boundary CFT should correspond to a normalizable bulk mode, whereas non normalizable modes correspond to source deformations of the CFT. In our case, the UV boundary metric in the Fefferman-Graham gauge looks like e 2sp+ g 2 1 c 1 τ η. So, assuming the "standard" quantization, if we don't want to turn on sources for the trace of the boundary energy momentum tensor we need to take: This is not the case in the IR boundary where the induced metric picks up a shifting factor that we cant avoid by staying in the axial gauge (g rr = 1). By requiring not to turn on sources, even at second order in the derivative expansion for other components of the UV boundary CFT stress tensor, we see that: At this point of the nine integration constants at our disposal, after requiring regularity and normalizability of the metric fluctuations, two are left over, C 8 and C 9 . Together with τ they determine the CFT sources inside the matter fluctuations φ (2) and A (2) . This remaining freedom can be used just to require normalizability of either φ (2) or A (2) , but not both of them. From here onwards we choose to make φ (2) normalizable but for our purposes the two choices are equivalent. Finally we get: This choice turns on a source for the CFT operator dual to A. Indeed the UV expansion for A-fluctuation reads: To summarize, requiring IR regularity forces us to turn on a source term for one of the scalar fields. Notice that under the condition (3.41) the traceless and off-diagonal modes T and g (2) are IR divergent. They go as 1 y in the IR limit. Nevertheless the IR limit of the metric is not divergent because of the extra warp factor, which is proportional to y. Notice that The AdS IR limit is in fact broken by q 2 order fluctuations, as already argued in section §1. Evaluating the on-shell Action The regularized boundary Lagrangian coming from the bulk part is obtained by performing the integral over the radial coordinate with IR and UV cut-offs y IR , y U V respectively: First we present the result for the presence of both the moduli s p and τ . We can write down the 3D lagrangian as: tt + l (5) g tt + l (6) A (2) + l (7) φ (2) ) . (3.46) After integration and evaluation at the cut off surfaces we arrive to a boundary regularized action: where the ... stand for infinitesimal contributions and a total derivative term which is irrelevant for the discussion. Notice that the logarithmic divergent part is a total derivative, as it should be. Moreover the coefficient in front of It is proportional to the difference of central charges at the UV and IR fixed points. 
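As an aside, it is useful to recall the 2D counterpart of the Wess-Zumino/anomaly-matching action against which these coefficients are being compared. With the parametrization g = e^{2σ} ĝ and trace anomaly ⟨T^a_a⟩ = (c/24π) R, integrating the anomaly gives the familiar Liouville-type action (a textbook statement, whose normalization may differ from the conventions used in this paper):
\[
S_{L}[\sigma;\hat g]\;=\;\frac{c}{24\pi}\int d^{2}x\,\sqrt{\hat g}\;\Big(\hat g^{ab}\,\partial_{a}\sigma\,\partial_{b}\sigma+\hat R\,\sigma\Big),
\qquad g_{ab}=e^{2\sigma}\hat g_{ab}.
\]
For a flow, the dilaton/spurion is therefore expected to pick up a kinetic term proportional to ∆c = c UV − c IR, which is exactly the normalization check performed below for (3.59) and (3.69).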
The contribution of the homogeneous part of the solutions to the onshell bulk action can be written as: tt + l (5) g (2) tt + l (6) A (2) + l (7) φ (2) . (3.48) As we will show in a while, this contribution does not affect the finite value of the moduli τ and s p effective action at all! In next section we will see this will not be the case if we work in Fefferman-Graham gauge since the beginning. In that case, the solution of homogeneous equations do affect the final result but upon regularity conditions the contributions are total derivatives of the moduli and hence irrelevant. The explanation in this mismatch comes from the fact the coordinate transformation from one gauge to the other is singular at q 2 order. After using (3.30) on (3.48) we get: Now, we asymptotically expand L hom . For this we need to use the most general form of the solutions to g (2) tt , A (2) and φ (2) . After a straightforward computation one gets: The only integration constant entering the boundary data is given by C 5 (t, x). However [L hom ] y U V y IR vanishes, and the boundary effective action for the moduli s p and τ coming from the bulk action is independent of all the integration constants, namely, any particular solution of the inhomogeneous system of differential equations gives the same final result, so far. We say so far, because still we have not commented about the GH and CT contributions. This is an interesting outcome, since the result holds independently of the IR regularity and normalizability conditions imposed on the fluctuations discussed earlier. The GH term will not affect this observation, but the CT contribution does it. In any case, we choose integration constants in order to satisfy our cardinal principle: IR regularity. Gibbons-Hawking contribution Let us discuss now the GH contribution: in the domain wall coordinates, (4.6), it reads: where so far g rr = 1, but for later purposes it is convenient to write the most general form above. In the (t, x, y) coordinates and after using (3.30) it is simple to show that: The UV and IR asymptotic expansions are thence given by: Even though we are not taking the approach of cutting off the geometry in the IR side, we present the IR behaviour of GH term just for completeness of analysis. Notice there is not finite contribution coming from them and again the independence on integration constants mentioned previously. Regularized Action At this point we can write down the regularized Lagrangian for the "normalizable" modulus s p . We first make the change to the Fefferman-Graham gauge at q 0 order, y →ỹ, make use of the normalizability condition (3.41) and the final result becomes: where the ... stand for subleading contributions in term of the cutoffs and finite total derivative terms. Notice that there is no logarithmic divergence at the UV cutoff. This is because this modulus is not affecting the UV boundary metric. On the other hand the IR side does have a logarithmic divergent factor, which however is a total derivative. Now, we discuss possible contributions coming from covariant counterterms. Let us start by gravitational countertems. In the asymptotically AdS 3 geometries the leading divergence in the on-shell action is renormalized by using the covariant term Other possible counterterms are: where δA, δφ denote the fluctuations around the UV stationary point of the potential. 
Notice that after imposing the normalizability condition (3.41) the finite contributions of this counterterm disappear except for the δA fluctuation which is a total derivative contribution. The remaining IR logarithmic divergence is minimally subtracted. Finally the renormalized action takes the form: The coefficient in front of this action is not the difference of central charges of the UV/IR fixed points. Although we can always rescale the field, this mismatch is unpleasant, because a rigid shifting in the spurion mode τ (not on s p ) rescales the CFT metric (UV side) in accordance with the normalization used in [3], and the mode s p only contributes through total derivatives to the boundary Lagrangian. So, the QFT side is saying that once fixed the proper normalization, the corresponding coefficient of the kinetic term of the spurion should coincide with the difference of central charges. This, points towards the conclusion the modulus τ seems not to be the optimal description for the QFT spurion. In fact the PBH modulus τ looks like a warped PBH in the Fefferman-Graham gauge, see A.2, so the outcome of the 2D version of the computation done in section §1 will change. We will show the result in the next subsection. The appropriate description of the spurion from the bulk side seems to be associated to a rigid PBH in Fefferman-Graham gauge. As we already said the modulus ρ could be seen as a combination of a PBH of that kind and the mode s p . So, following our line of reasoning ρ seems to be the most natural bulk description of the dilaton. In fact in the 6D analysis to be discussed in section 4 this identification will become even more natural. Checking the PBH procedure. There is an equivalent way to arrive to (3.59). We present it here because it gives a check of the procedure we used to compute the spurion effective action in a 4D RG flow. As was already noticed the modulus τ can be related to a family of diffeomorphisms. To check the procedure we take as starting point the bulk on-shell action of the modulus s p without turning on τ : Notice that the PBH transformations do not affect the boundary conditions of the matter field (2.22), provided we take the restriction (3.41). So all the IR constraints and normalizability conditions we imposed before will still hold in this second approach provided they were imposed at τ = 0. In Finally, after applying the same previous procedure to the GH term and to the counterterms, namely transforming the metric (4.6) at vanishing τ -modulus, gives (3.54) and (3.57) respectively. The ρ-branch analysis We can repeat the same computations done before but using the ρ modulus instead of the pair (τ, s p ). The trace and off-diagonal modes T and g (2) tx can be solved from the decoupled equations (t, t) + (x, x) and (t, x) to be: (3.62) In the same manner, we can then solve for all fluctuations in terms of A (2) by integrating the (t, t) − (x, x) and (r, r) equations: with: which is also found to obey a third order linear differential equation of the form: where The rational functions F (1) , F (2) and F (3) are given in the second paragraph of appendix B.2. We solve this equation by the Green's function method (see second paragraph appendix B.3). As for the case before we use the nine integration constants to demand IR regularity and as much normalizability as possible. In this case we are able to turn off UV sources except for one of the two corresponding to ∆ = 2 and ∆ = 4 CFT operators. 
We choose to allow source of the A scalar field, namely at the UV boundary, y = y U V : We compute then the full renormalized boundary action S ren = S bulk + S GH + S CT . The result up to total derivatives and without ambiguity in renormalization (as for the previous case) is: where s = log(ρ). Notice the coefficient in front of this kinetic term is proportional to the difference of holographic central charges among the interpolating fixed points, which in 2D can be identified with the difference of AdS 3 radii ∆L = 2c 1 Notice that we have a freedom in normalization of s. We have chosen the normalization to agree with [2,3]. Namely, the associated PBH diffeo shifts the UV/IR metric from η → e −2σ P BH η. As we mentioned the ρ modulus is a combination of a PBH mode with s p . So we can again check the procedure used in section §1 via (3.69). We can see the rigid ρ modulus as a combination of a P BH mode y → e 2σ P BH y and the s p = −σ P BH mode. This last constraint guarantees not to turn on sources for the CFT's energy momentum tensor (nor for the hypothetical IR one). To obtain the bulk contribution we perform the PBH transformation (A.7)-(A.8), on the on-shell action with only s p turned on (3.60). Before performing the PBH transformation, explicit solutions in terms of s p are demanded to be IR regular and as normalizable as possible. As usual, we choose to let on the source of the dimension ∆ = 2 CFT operator, which we can read from (3.44). As in previous cases. The GH and Counterterms contributions are evaluated by explicit use of the transformed metric and fields. The GH term does not contribute to the final result for the regularized action at all. As for the CT's, they contribute with total derivatives to the final result of the effective action which, under the identification σ P BH ≡ s, coincides with (3.69). A last comment about the relation between bulk normalizability and the identification of (3.69) as quantum effective action for s: we notice that demanding normalizability of the mode s amounts to impose the on-shell condition s = 0, in both equations (3.44) and (3.68). This is in agreement with holographic computations of hadron masses, where normalizability gives rise to the discreteness of the spectrum and indeed puts on-shell the states corresponding to the hadrons 8 . On the other hand, the on shell supergravity action, as already mentioned in the paragraph below (??), is independent of A (2) . Also, as shown in (3.58), the contributions coming from counter terms which depend on A (2) give contributions that are linear in the source for the operator dual to A, at order q 2 , but at the end, these contributions reduce to a total derivatives in (3.69). Notice that no other sources, apart from the one corresponding to the operator dual to A are turned on. Therefore (3.69) has no source dependence and can be interpreted as the (off-shell) effective action for the massless mode s. 6D Analysis Six dimensional supergravity coupled to one anti-self dual tensor multiplet, an SU (2) Yang-Mills vector multiplet and one hypermultiplet is a particular case of the general N = 1 6D supergravity constructed in [23] and admits a supersymmetric action. The bosonic equations of motion for the graviton g M N , third rank anti-symmetric tensor G 3M N P , the scalar θ and the SU (2) gauge fields A I M are: The three-form G 3 is the field strength of the two form B 2 modified by the Chern-Simons threeform, , with the SU (2) gauge field strength F = dA + A 2 . 
As a result there is the modified Bianchi identity for the 3-form: We are going to consider all the fields depending on coordinates u, v and r where u and v are light-cone coordinates given by u = t + x, v = t − x, and r is a radial coordinate. For the metric we take the following SO(4) invariant ansatz: where dΩ 2 is the SO(4) invariant metric on S 3 : and f , g uu , g uv , g vv are functions of (u, v, r), from now on we will not show this dependence. As for the SU (2) one-form A, we take it to be non trivial only along S 3 , preserving a SU (2) subgroup of SO(4), where σ k are Pauli matrices and ω k left-invariant one-forms on S 3 , and s is a function of (u, v, r). For the three-form G 3 , we take it to be non trivial only along u, v, r and along S 3 , where the functions G only depend on (u, v, r) . Finally we will have a non trivial scalar field θ(u, v, r). Deforming the RG flow background The aim of this section is to look for a solution of the above equations of motion which deforms the RG flow solution of [21], with the appropriate boundary conditions to be specified in due course (In order to demand IR regularity). To be more specific, this background is actually BPS. It preserves half of the 8 supercharges and interpolates between two AdS 3 × S 3 geometries for r → ∞, the UV region, and r → 0, the IR region, with different S 3 and AdS 3 radii. It describes a naively speaking, v.e.v. driven RG flow between two (4, 0) SCFT's living at the corresponding AdS boundaries parametrized by the coordinates u, v. The solution involves an SU (2) instanton centered at the origin of the R 4 with coordinates r, φ, ψ, χ, corresponding to s = ρ 2 /(r 2 + ρ 2 ). The scale modulus ρ enters also in the other field configurations, as will be shown shortly. Our strategy here is to promote ρ to a function of u, v, ρ = ρ(u, v). So, the starting point will be given by the field configurations: (4.10) Notice that s (0) goes like ρ 2 /r 2 in the UV. As for the three-form, it turns out that the following expressions for G 3 and G 3 solve identically the Bianchi identity and equations of motion: where det(g) = −g uu g vv + g 2 U V and f , θ and s are functions of (u, v, r). As explained in [21,30], the positive constants c and d are essentially electric and magnetic charges, respectively, of the dyonic strings of 6D supergravity. More precisely we have: where we see that the instanton contributes to Q 5 with one unit as a consequence of the modified Bianchi identity (4.5). The constants c and d determine the central charges of the UV and IR CFT's, respectively: c U V = c(4 + d), c IR = cd [21]. These fields solve the equations of motion only if ρ is constant (apart from G (1,2) 3 which solve them identically). We will then deform the above background to compensate for the back reaction due to the u, v dependence of ρ. In this way one can set up a perturbative expansion in the number of u, v derivatives. For the purpose of analyzing the equations of motion keeping track of the derivative expansion, again it is convenient to assign a counting parameter q for each u, v derivative. The first non-trivial corrections to the above background will involve two u, vderivatives of ρ(u, v). i.e. linear in two derivatives of ρ(u, v) or quadratic in its first derivatives. From now on we will not write down the coordinate dependence of the modulus ρ. Therefore we start with the following ansatz for the deformed background: uv (u, v, r), g buu (u, v, r) = q 2 g (2) uu (u, v, r), g bvv (u, v, r) = q 2 g (2) vv (u, v, r). 
(4.13) Our first task is to determine these deformations as functions of ρ and its derivatives. The structure of the resulting, coupled differential equations for the deformations is clear: they will be ordinary, linear second order differential equations in the radial variable r with inhomogeneous terms involving up to two derivatives of ρ. Due to the symmetry of the problem, there is only one independent equation for the gauge field, with free index along S 3 , say φ, and the non trivial Einstein's equations, E M N , arise only when M, N are of type u, v, r and for M = N along one of the three coordinates of S 3 , e.g. φ. The traceless part of the Einstein equations E uu and E vv involve only g (2) uu and g (2) vv respectively and these differential equations can be solved easily. The equations E uv , E φφ , E rr , the gauge field equation and the θ equation involve only g (2) uv , s (2) (u, v, r), f (2) and θ (2) . Since a constant scaling of u and v in the zeroth order background solution is equivalent to turning on a constant g (2) uv , the latter enters these equations only with derivatives with respect to r at q 2 order. Therefore we can find three linear combinations of these equations that do not involve g (2) uv . To simplify these three equations further, it turns out that an algebraic constraint among the fields f , θ and s, dictated by consistency of the S 3 dimensional reduction of the 6D theory down to 3D, gives a hint about a convenient way to decouple the differential equations by redefining the field θ in the following way. . (4.14) Note that for the reduction ansatz, ϕ = 0. In general the new field ϕ will also have an expansion in q of the form: For the zeroth order solution defined above one can see that ϕ (0) = 0. The reduction ansatz indicates that at order q 2 one can find a combination of the linear second order differential equations which gives a decoupled homogeneous second order equation for ϕ (2) . This equation can be solved for ϕ (2) , which involves two integration constants denoted by a 1 and a 2 (that are functions of u and v) 48r 6 (r 2 + ρ 2 ) 2 log( r 2 +ρ 2 r 2 ) − 48r 6 ρ 2 − 24r 4 ρ 4 + (12 + d)r 2 ρ 6 + dρ 8 r 4 ρ 2 ((4 + d)r 4 + 2(4 + d)r 2 ρ 2 + dρ 4 ) , (4.16) and after substituting this solution, we get two second order differential equations for s (2) and f (2) . In general one can eliminate f (2) from these two equations and obtain a fourth order differential equation for s (2) . 
However, it turns out that in these two equations f (2) /r 2 appears only through r-derivatives 9 and this results in a third order decoupled differential equation for s (2) where A 3 (r) = r 3 (r 2 + ρ 2 ) 6 ((4 + d)r 4 + 2(4 + d)r 2 ρ 2 + dρ 4 ) 2 , A 2 (r) = r 2 (r 2 + ρ 2 ) 5 (11(4 + d) 2 r 10 + 51(4 + d) 2 r 8 ρ 2 + 2(4 + d)(128 + 47d)r 6 ρ 4 + 2(4 + d)(24 + 43d)r 4 ρ 6 + d(80 + 39d)r 2 ρ 8 + 7d 2 ρ 10 ), A 1 (r) = r(r 2 + ρ 2 ) 4 (21(4 + d) 2 r 12 + 130(4 + d) 2 r 10 ρ 2 + (4 + d)(948 + 311d)r 8 ρ 4 + 4(4 + d)(100 + 91d)r 6 ρ 6 + (−192 + 456d + 211d 2 )r 4 ρ 8 + 10d(−8 + 5d)r 2 ρ 10 + d 2 ρ 12 ), A 0 (r) = 16ρ 2 (r 2 + ρ 2 ) 3 (4(4 + d) 2 r 12 + (4 + d)(72 + 19d)r 10 ρ 2 + (4 + d)(72 + 35d)r 8 ρ 4 + 2(16 + 54d + 15d 2 )r 6 ρ 6 + 2d(6 + 5d)r 4 ρ 8 − d 2 r 2 ρ 10 − d 2 ρ 12 ), The three independent solutions of the homogeneous part of the above equation are Using the most general solution of the homogeneous equation one can construct the Green's function for the third order differential equation and obtain a particular solution of the full inhomogeneous equation Substituting the general solution for s (2) in the remaining equations one gets first order linear differential equations for f (2) and g (2) uv which can be solved easily resulting in two more integration constants. Moreover, E uu and E vv give two decoupled second order differential equations for the traceless part of the metric g (2) uu and g (2) uu that can also be readily solved giving another four integration constants. In all there are eleven integration constants as compared to nine integration constants in the 3D case discussed in the previous sections. This is to be expected since the S 3 reduction ansatz from 6D to 3D sets ϕ = 0. Finally E ru and E rv at order q 3 give first order partial differential equations in u and v variables on the integration constants. The full homogeneous solution and a particular solution for the inhomogeneous equations are given in Appendix C. Now we turn to the analysis of the IR and UV behaviour of the general solutions. The general solution for s (2) is a sum of the particular solution (4.20) and the homogeneous solution (4.19). Near r = 0 this solution has divergent 1/r 4 and 1/r 2 terms that can be set to zero by choosing: Similarly analyzing the general solution for ϕ (1) one finds that it has also IR divergent 1/r 4 and 1/r 2 terms that can be set to zero by setting a 1 (u, v) = 0. With these choices we have checked that Ricci scalar and Ricci square curvature invariants are non-singular at r = 0. Finally, the Einstein equations E ur and E vr give certain partial differential equations with respect to v and u on the integration constants b 1 and c 1 respectively and these are solved by: With these conditions even the metric functions g uu , g vv and g uv have no power like singularities in r near r → 0. Thus we have a smooth solution near IR up to q 2 order. In the UV region, r → ∞, the source terms behave as O(r 2 ) for ϕ and f , and O(1) for the metric g uv , g uu and g vv . By making an asymptotic expansion of the homogeneous solutions one can see that a 2 , a 4 , a 7 , b 2 and c 2 control these source terms. Since in our background we do not want to turn on any sources, we set these integration constants to zero. Finally, the UV behaviour of the gauge field s (2) is: It turns out though that IR regularity forces us to allow a source term for the s b (u, v, r) field: this is a term of order r 0 for r → ∞, of order q 2 : as r → ∞. 
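Since the Green's-function construction is invoked several times above (for the 3D fluctuation equation in section 3 and for the third-order equation for s (2) here), the following toy sketch illustrates the variation-of-parameters recipe behind it. The equation, source and homogeneous solutions below are deliberately simple placeholders, not the ones appearing in the paper:

```python
# Illustrative sketch (not from the paper): variation of parameters for a
# third-order linear ODE  a3 y''' + a2 y'' + a1 y' + a0 y = f,
# mirroring the Green's-function strategy described in the text.
# Toy choice: y''' - y' = sin(x), with homogeneous solutions {1, e^x, e^-x}.
import sympy as sp

x, t = sp.symbols('x t')
y_h = [sp.Integer(1), sp.exp(x), sp.exp(-x)]   # homogeneous solutions (assumed known)
f = sp.sin(t)                                  # toy source term
a3 = 1                                         # leading coefficient of the ODE

# Wronskian matrix of the homogeneous solutions, evaluated at the integration variable t
W_mat = sp.Matrix([[yi.subs(x, t).diff(t, k) for yi in y_h] for k in range(3)])
W = W_mat.det()

y_p = 0
for i in range(3):
    # Cramer-style Wronskian with the i-th column replaced by (0, 0, 1)
    Wi = W_mat.copy()
    Wi[:, i] = sp.Matrix([0, 0, 1])
    integrand = sp.simplify(Wi.det() * f / (a3 * W))
    y_p += y_h[i] * sp.integrate(integrand, (t, 0, x))

y_p = sp.simplify(y_p)
print(y_p)
# Residual of the toy ODE; expect 0
print(sp.simplify(y_p.diff(x, 3) - y_p.diff(x) - sp.sin(x)))
```

The same recipe applies to the actual third-order operator above once its three homogeneous solutions (cf. (4.19)) are known; the integration constants multiplying the homogeneous solutions are then fixed by IR regularity and UV (non-)normalizability, as described in the text.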
Notice that here, like in the 3D case, discussed at the end of section §3, the source term for the operator dual to s is proportional to the EoM for the massless scalar log ρ, and therefore vanishes on-shell. Finding linearized fluctuations around the deformed background Having determined the background corrected by the leading terms involving two space-time derivatives of the modulus ρ, we could compute the regularized on shell action, as was done in the 3D case. We find it more convenient to compute directly one-point functions of dual operators (especially of the stress energy tensor). To this end we need to switch on corresponding sources and therefore to solve the linearized equations of motion of the various fields on the deformed background. This is done again in a derivative expansion starting with the following ansatz for the fields fluctuations: δs = δ (0) s + q 2 δ (2) s, δg uu = δ (0) g uu + q 2 δ (2) g uu , δg vv = δ (0) g vv + q 2 δ (2) g vv , (4.25) where δ (0) stands for the zeroth order in space-time derivatives, and δ (2) stands for fluctuations coming at second order in space time derivatives and this is why is weighted by q 2 . The general solution for δ (0) is the homogeneous solution given in Appendix C. We fix the integration constants so that δ (0) f = 2ρ 4 r 2 (ρ 2 + r 2 ) (dρ 4 + (4 + d) r 4 + 2 (4 + d) ρ 2 r 2 ) a 5 (u, v), (4.28) where h uu , h vv and h uv are the integration constants b 2 (u, v), c 2 (u, v) and a 7 (u, v) respectively. Consequently they are the sources for the boundary stress energy tensor components T uu , T vv , and T uv . These h's are small fluctuations around the flat boundary metric, g (0) = η + h, and the corresponding linearized curvature is We have also kept the integration constant a 5 for reasons that will become apparent later on. The next step is to solve the equations of motion at order q 2 for the δ (2) fields. The equations for δ (2) fields contain also inhomogeneous terms that involve δ (0) fields and their derivatives, up to second order with respect to u and v. The procedure is the same as the one employed in solving for the corrected background. As the differential equations are inhomogeneous, the general solution will be the sum of the homogeneous solution and a particular solution of the inhomogeneous one, which can be obtained using Green's functions once we have the homogeneous solutions. The integration constants in the homogeneous part of solution can be partially fixed by requiring IR smoothness and absence of sources for δ (2) θ and δ (2) f . Moreover some sources can be reabsorbed in the already existing sources at zeroth order. Finally, the mixed u, v and r Einstein's equations result in differential constraints among the integration constants. Concerning the IR behaviour, the metric components go, for r → 0, as: (4.32) The apparent 1/r 2 singularity is presumably a coordinate singularity: we have verified that both the 6D Ricci scalar and Ricci squared are finite both at the IR and UV. The other fields are manifestly regular at the IR. We have seen that the there is a physical fluctuation for the operator O s proportional to ρ 2 at order q 0 and that at order q 2 there is a source, J s , which couples to it, proportional to log(ρ)/ρ 2 . Therefore we expect that, at order q 2 , the corresponding term O s J s in the boundary action will not give any contribution being a total derivative. 
So, this type of term will not contribute to the dilaton ρ effective action if we were to compute it, as it was done in the 3D case, by evaluating the regularized bulk action on the background together with boundary GH and counter-terms. We close this subsection by writing down the full source term J s for the operator O s dual to the bulk field s, i.e. the sum of the source in the background s b plus the one in the fluctuation δs: Next, we go to compute the contribution of the term g (0) J s O s to the 2D boundary action. While J s is the coefficient of r 0 in the UV expansion of s, < O s > is proportional to the coefficient of 1/r 2 . We will determine this proportionality constant in the following by studying the dependence of the regularized bulk action on a 5 . Note that J s is already of order q 2 , therefore we need only q 0 term in the coefficient of 1/r 2 in s, which can be seen from (4.10) and (4.27) to be 10 < O > s ∼ ρ 2 (1 + a 5 ). (4.34) Using the fact that g (0) at order q 0 is 1/2(1 − 2h uv ), it can be shown that g (0) J s < O s > up to the order we are working at, is a total derivative and therefore the corresponding integral vanishes. Boundary Action Here, we will determine the boundary action in presence of sources for the dual stress energy tensor T µν , which will allow to compute its one-point functions. We will expand the bulk action around the determined background to linear order in the fluctuation fields, at order q 2 . First of all, we need to point out a subtlety concerning the bulk action. Recall that the bosonic equations of motion of (1, 0) 6D supergravity, (4.4), can be derived from the following action: where the equations of motion are obtained by varying with respect to all the fields, including the two form B M N . The 6D equations of motion have been shown in [21] to reduce consistently to the 3D equations discussed earlier. In particular the 3D flow solution discussed before has a 6D uplift. For convenience, we give the map of the 6D fields and parameters in terms of 3D ones used in the previous sections: (4.36) In the 6D action (4.35) above, (G 3 ) 2 equals (G 3 ) 2 + (G 3 ) 2 . However the 3D gauged supergravity action is not the reduction of S bulk 6D . The difference lies in the fact that in reducing to 3D, one eliminates G 3 by using its 6D solution in terms of the remaining fields. The 3D action is constructed by demanding that its variation gives the correct equations for the remaining fields. From the explicit solutions for G (1) 3 and G 3 in (4.11), one can easily prove that the modified actionS bulk 6D , obtained by replacing ( 3 ) 2 in S bulk 6D , reproduces the correct equations of motion for all the remaining fields. From the AdS/CFT point of view, it seems reasonable to useS bulk 6D , since the two-form potential in 3D is not a propagating degree of freedom and does not couple to boundary operators. We should point out that the boundary action that we will compute in the following is not the same for S bulk 6D andS bulk 6D . Only the latter reproduces the results of the 3D analysis. The flow solution studied in this paper can be described in the 3D gauged supergravity, however there are many solutions describing flows in 2D or 4D CFTs that cannot be described in 3D or 5D gauged supergravities. Instead one has to directly work in higher dimensions. 
In such cases, we think, that the bulk action that should be used in the holographic computations, is the one that reproduces the correct equations for the fields that couple to the boundary operators, after having eliminated 2-form and 4-form fields respectively. As promised at the beginning of this subsection our goal will be to evaluate S bulk 6D , with the modification just mentioned, on the field configurations which are sums of the background fields plus the δ fields, at first order in the latter and to order q 2 . Since the background solves the equations of motion, the result will be a total derivative and there will be possible contributions from the UV and IR boundaries, i.e. r → ∞ and r → 0, respectively. It is simpler to give the sum, S 1 , of the boundary term coming from the bulk action and the Gibbons-Hawking term, which in our case is dudv ∂r (e 2f det(g)) √ (−detg) : − r 2 (g buv (−6 + 4r∂ r f b ) − r∂ r g buv )δg uv /4 + r 2 (g bvv (−6 + 4r∂ r f b ) − r∂ r g bvv )δg uu /8 (4.37) By looking at the solutions for the various fields one can see that this expression has a quadratic divergence for r → ∞ at order q 0 , which can be renormalized by subtracting a counterterm proportional to the boundary cosmological constant: The final term S f = S 1 − S CT , at order q 2 , for r → ∞ is obtained using the explicit solutions: For r → 0 one can readily verify that there is no finite contribution left over. Before coming to the computation of < T uu >, < T vv > and < T uv >, let us analyze more precisely O s . This can be obtained by comparing J s from (4.33), after setting to zero the sources of T µν , with the corresponding term in S f , which gives g (0) < O s > J s . Setting the sources of T µν to zero, i.e. keeping only a 5 , S f is 2c(∂ u ρ∂ v ρ − ρ∂ u ∂ v ρ)/ρ 2 a 5 which by the holographic map is equal to g (0) < O s > J s . Using the expression for J s given in (4.33) one finds: Notice that using the fact that < O s > is proportional to ρ 2 (4.34), the term proportional to a 5 in J s is a total derivative. The above equation actually gives the proportionality constant in (4.34) so that including the first order fluctuation: One-point function of T µν The one-point functions of the stress energy tensor, < T uu >, < T vv > and < T U V >, are determined as the coefficients of h vv , h uu and h uv , respectively, in S f . After performing a partial integration one obtains the result : This stress energy tensor can be derived from an effective action for the field ρ: Note that the coefficient that appears in S ρ is c which is proportional to c U V − c IR . Under the Weyl transformation and therefore S ρ precisely produces the anomalous term. Finally note that J s in (4.33) transforms, up to the linearized fluctuation that we have computed here, covariantly as J s → e 2σ J s under the Weyl transformation. Finally, using (4.42), (4.33) and (4.41), we find that the conservation of stress tensor is modified by the source terms as: which is the Ward identity for diffeomorphisms in the CFT in the presence of a source term j s O s . Now we would like to interpret (4.43) from the dual (4, 0) SCFT point of view. It is useful to recall some facts from the better understood type IIB (4, 4) SCFT describing bound states of Q 1 D1-branes and Q 5 D5 branes [26,31]. 
If one wants to study the separation of, say, one D1 or D5 brane from the rest, one has to study the effective action for the scalars in the vector multiplets, V , in the relevant branch of the 2D (4,4) gauge theory, which is the Higgs branch, where (semiclassically) the hypermultiplet scalars H acquire v.e.v., whereas for the vector multiplet scalars, which carry dimensiion 1, < V >= 0. One can obtain an effective action for V either by a probe supergravity approach [26] or by a field theory argument [31][32][33], i.e. by integrating out the hypermultiplets and observing that in the 2D field theory there is a coupling schematically of the form V 2 H 2 . This can be shown to produce for log | V | a lagrangian of the form (4.43) with the correct background charge to produce a conformal anomaly which matches the full conformal anomaly, to leading order in the limit of large charges. In our case, where we have a D1-D5 system in presence of D9 branes in type I theory, the role of the vector multiplet scalars is played by the field ρ, the instanton scale in the background geometry. The "separation" of one D-brane corresponds geometrically to the limit ρ → ∞, where the gauge 5-brane decouples, making a reduction in the central charge from an amount proportional to Q 1 Q 5 in the UV to Q 1 (Q 5 − 1) in the IR, where, as shown earlier, Q 1 = c/4 and Q 5 = d/4 + 1. Therefore the variation of the central charge, δc, is proportional to Q 1 . On the other hand, from the D-brane effective field theory point of view the instanton scale corresponds to a gauge invariant combination of the D5-D9 scalars, h, with h 2 ∼ ρ 2 . The h's are in the bifundamental of Sp(1)× SO(3), Sp(1) being the gauge group on the D5-brane and SO(3) that on the D9-branes. The h's couple to D1-D5 scalars H which are in the bifundamental of SO(Q 1 ) × Sp(1) and belong to (4,4) hypermultiplets. In the Higgs branch, which gives the relevant dual CFT, again H's can have v.e.v. semiclassically, while < h >= 0. In the 2D effective action there is a coupling of the form H 2 h 2 and upon 1-loop integration of H's one gets a term (∂h) 2 /h 2 [32], with coefficient proportional to Q 1 . The presence of the background charge term can be justified along the lines of [26,31]. Conclusions and Open Problems This article consists of two parts. In the first part, section §2, we have shown how Weyl anomaly matching and the correspondig Wess-Zumino action for the "spurion" is reproduced holographically, from kinematical arguments on the bulk gravity side: there, its universality comes from the fact that only the leading boundary behaviour of bulk fields enters the discussion. The PBH diffeomorphisms affect the boundary data and consequently the gravity action depends on them, in particular on the field τ . The regulated effective action is completely fixed by the kinematical procedure detailed in section §1. For a specific representative in the family of diffeomorphisms the Wess Zumino term takes the minimal form reported in literature. In appendix A.5 we present a different way to approach the same result (We do it for an arbitrary background metric). We then moved on in sections §3 and §4 to analyze an explicit 3D holographic RG flow solution, which has a "normalizable" behaviour in the UV. In section §3 we studied the problem in the context of 3D gauged supergravity. We started by identifiying the possible moduli of the background geometry: out of the zero modes (τ, s p , ρ), there come out two independent normalizable combinations. 
We promoted these integration constants to functions of the boundary coordinates (t, x) and solved the EoM up to second order in a derivative expansion. In a first approach we used a combination of (τ, s p ) dictated by normalizability; in a second approach we used ρ. In both cases we find a boundary action for a free scalar field with the expected normalization. As argued in section §3, agreement with the QFT arguments in [3] points towards ρ as the right description for the would-be-dilaton scalar field. For possible extensions to higher dimensional computations, it could be helpful to keep in mind that this mode ρ can be seen as the normalizable combination of a rigid PBH in Fefferman-Graham gauge and the mode s p . Then, in section §4, we moved to elucidate the QFT interpretation of this normalizable mode by lifting the 3D theory to the 6D one: we promoted the modulus ρ, the SU (2) instanton scale, to a boundary field, ρ(u, v), and solved the EoM in a derivative expansion both for the background geometry and for the linearized fluctuations around it, up to second order. This allowed us to compute < T µν > and determine the boundary action for log ρ: this is the action of a free scalar with background charge, and its conformal anomaly is c U V − c IR , therefore matching the full c. We identified τ = log ρ with a D5-D9 mode in the (4, 0) effective field theory of the D1-D5 system in the presence of D9 branes in type I theory. Finally, as an open problem, it would be interesting to apply the procedure followed in sections §3 and §4 to a v.e.v. driven RG flow in a 5D example, where we would give space-time dependence to the moduli associated, say, to the Coulomb branch of a 4D gauge theory: in this case no subtleties related to spontaneous symmetry breaking arise, and we should be able to obtain a genuine dilaton effective action.

A.1 Conventions

We use the mostly positive convention for the metric, namely signature (−, +, +, +) in 4D and (−, +) in 2D. The Riemann tensor is defined as: with the Christoffel symbols: The 4D Euler density and Weyl tensors are defined as:

A.2 Non-static domain wall ansatz

Consider the domain wall form for the metric: The PBH diffeomorphism, up to second order in derivatives of τ , can be written by symmetry arguments as: where index contractions and the raising of covariant indices are performed with the metric g µν = g µν (r, x µ ). The gauge-preserving conditions on the form factors are where z = r + τ . Notice that if we go to the Fefferman-Graham gauge this mode will look like a "warped" diffeomorphism. Namely, the induced y-transformation at zeroth order in derivatives of τ will look like: with h some function of y interpolating between constant values. This is the technical cause behind the fact that the coefficient of the kinetic term (3.59) does not coincide with the difference of holographic central charges. Namely, if we choose the right normalization in the UV, h(∞) = 1, then in general h(0) ≠ 1, and so the IR kinetic contribution is not properly normalized to the IR central charge.

A.3 Non-static Fefferman-Graham gauge

Let us suppose we are in the Fefferman-Graham gauge, namely: where g yy and g µν go like a constant and a space-time function times η µν in both the UV and IR limits, respectively. Next, we can ask for the 3D diffeomorphisms preserving the form above. We write it as where the covariant form factors obey the following constraints which can be solved easily for a given RG flow metric in this gauge.
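For comparison with the gauge-preserving conditions written above, one may recall the familiar PBH solution in the standard Fefferman-Graham parametrization (a standard result of the literature, quoted in units where L = 1; the radial coordinate and normalizations differ from the y and r used in this paper):
\[
ds^{2}\;=\;\frac{d\rho^{2}}{4\rho^{2}}+\frac{1}{\rho}\,g_{\mu\nu}(x,\rho)\,dx^{\mu}dx^{\nu},
\qquad
\rho\;\to\;\rho\,e^{-2\sigma(x)},
\qquad
x^{\mu}\;\to\;x^{\mu}+a^{\mu}(x,\rho),
\]
\[
a^{\mu}(x,\rho)\;=\;\frac{1}{2}\int_{0}^{\rho}d\rho'\;g^{\mu\nu}(x,\rho')\,\partial_{\nu}\sigma(x),
\]
valid to first order in σ: the radial shift is fixed by the Weyl rescaling of the boundary metric, while a µ is determined by requiring that the mixed components g ρµ remain zero.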
A.4 Near To Boundary Analysis We use the near to boundary analysis to reproduce the results for the bulk action in presence of a PBH mode and to compute the GH and counterterm contribution. We start by writing the near to boundary expansion of the equations of motion. We then evaluate the onshell bulk contribution and finally the onshell contributions from GH and counterterm. A.4.1 Near to boundary expansion of the EoM The near to boundary expansion of the equations of motion in the Fefferman-Graham gauge choice (4.6) comes from: where the primes denote derivative with respect to the flow variable y and V f p is the potential at the corresponding fixed point. For the case of boundary V f p = V [0]. Another useful relation that is going to be helpful in computing the spurion effective action is the following form for the onshell action: Solutions We can solve the equations of motions for a generic potential of the form (2.6). Let us start by the UV side. The UV side We can check now the result (2.3), for the bulk action after a τ :PBH (except for the finite part of course) by using near to boundary analysis. As said before we take the near to boundary expansion of the scalar field to be: where the φ (0) and φ (0) are identified with the source and vev of a dimension ∆ = 2 CFT operator, respectively. The terms in the near to boundary expansion (2.9) of the metric are solved to be: g T r(g (4) ) = 1 4 tr(g 2 (2) ) − 14) The volume measure expansion: is used to evaluate the near to boundary expansion of bulk lagrangian in (A.10). The result for the UV expansion of the onshell action (2.24), is evaluated by use of the following result for a conformally flat g (0) = e −τ η a (0) Use of Weyl transformations properties of the Ricci scalar in 4D was used in getting this result. GH term contribution In the UV side we can expand the Gibbons Hawking term in a near to boundary series: where, The finite contribution b f inite is proportional to d 4 x T r(h 1 ) which by (A.11) is proportional to the product of the vev and the source φ (0) andφ (0) respectively. Namely, for a vev driven flow the GH term does not contribute at all to the finite part of the regularized onshell action. In the case of a source driven flow, the finite contribution gives a potential term which is not Weyl invariant, as one can notice from the transformation properties (2.22). In fact its infinitesimal Weyl transformation generates an anomalous variation proportional to the source square δτ (φ (0) ) 2 . This fact can be notices by simple eye inspection one just need to analyse the transformations properties (2.22) on the static case. The IR side In this case we can do the same. As already said, we assume IR regularity in the corresponding background, namely, We start by writing the IR asymptotic expansion of the GH term in the IR: We compute the factors b in terms of the components of the near to IR expansion of the metric: By using the near to IR expansion of the equations of motions (A.10) at second order we get: and additionally: 2 ) = 0, T r(g (4) ) = 1 4 tr(g 2 (2) ) − 2 ) = 1 4 tr(g 2 (2) ). It is then easy to see how the IR GH term does not contribute to the finite part of the regularized action! provided the background solutions are smooth in the IR. A.5 Anomaly matching from PBH transformations In this appendix we present an alternative way to compute the gravitational WZ term. 
The approach is covariant in the sense that it works with an arbitrary boundary background metric g_(0) and shows how the 4D anomaly matching argument of [2,4] is linked to the 5D PBH transformation properties. The relevant terms in the cut-off expansion of the bulk action are: after a finite PBH transformation parameterized by τ is performed. Here S_finite[τ] stands for the cut-off independent contribution to the bulk action, and ĝ_(0) = e^(−τ) g_(0) and φ̂_(0) stand for the PBH-transformed boundary data. The leading "matter" boundary data φ̂_(0) (UV/IR need not be the same) do not transform covariantly, unlike the background boundary metric g_(0). Next, one can perform a second infinitesimal PBH, δτ_1, and think about it in two different ways: • Keep the cut-off fixed and transform the fields (I). • Keep the fields fixed and transform the cut-offs (II). In approach I, by virtue of the additivity of PBH transformations: In approach II, one needs the generalization of (2.14) for a linear parameter δτ_1 and an arbitrary boundary metric g_(0). An important point is that (2.14) is not a near-to-boundary expansion, but rather an IR expansion valid along the full flow geometry. Notice also that, in principle, some contribution proportional to δτ_1, δτ_1, ..., could come out of the cut-off powers in (A.5). As discussed for (A.5), these terms can be completely gauged away. Then approach II gives: Equating (A.24) and (A.25) we get: Now we can expand the gravitational contribution to a_(4) by using the Weyl expansions: Hence, from (A.26) and (A.27) one can integrate out the gravitational contribution to S_finite: (A.28) where in the case we are considering c = a. Notice that in the above derivation we implicitly assumed the group property of the PBH transformations on fields, that is: where L represents the transformation thought of as an operator acting on the fields (boundary data). As for the case of matter contributions, a problem arises when a v.e.v. or source transforms non-covariantly, φ_(0) → e^τ φ_(0) + τ e^τ φ̂_(0). So it is not clear to us how to use this procedure to compute "matter" contributions to the Weyl anomaly. An efficient procedure to compute anomalies for generic backgrounds (in a spirit similar to the approach presented here) appeared in [34] (section §3.1). In this subsection we write down the rational functions appearing in the equations in section §3.
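For orientation, the standard form of the 4D trace anomaly and the leading piece of the Wess-Zumino action obtained by integrating the a-anomaly along τ are, up to overall normalization conventions,

\langle T^{\mu}{}_{\mu} \rangle = \frac{1}{16\pi^2}\left( c\, W_{\mu\nu\rho\sigma}W^{\mu\nu\rho\sigma} - a\, E_4 \right),
\qquad
S_{\rm WZ}[\tau; g_{(0)}] \supset \frac{a_{\rm UV}-a_{\rm IR}}{16\pi^2} \int d^4x \sqrt{g_{(0)}}\;\, \tau\, E_4 + \mathcal{O}(\partial\tau),

with the higher-order terms in ∂τ fixed by Wess-Zumino consistency; in the case c = a considered here, the c-dependent Weyl-invariant terms complete the matching of the full anomaly.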
2013-11-20T17:21:28.000Z
2013-07-14T00:00:00.000
{ "year": 2013, "sha1": "64129d3388c34a16d4f423d064c38fea413c9cf2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1307.3784", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "64129d3388c34a16d4f423d064c38fea413c9cf2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
227252902
pes2o/s2orc
v3-fos-license
Nutrition Support in Liver Transplantation and Postoperative Recovery: The Effects of Vitamin D Level and Vitamin D Supplementation in Liver Transplantation Vitamin D plays an important role in the arena of liver transplantation. In addition to affecting skeletal health significantly, it also clinically exerts immune-modulatory properties. Vitamin D deficiency is one of the nutritional issues in the perioperative period of liver transplantation (LT). Although vitamin D deficiency is known to contribute to higher incidences of acute cellular rejection (ACR) and graft failure in other solid organ transplantation, such as kidneys and lungs, its role in LT is not well understood. The aim of this study was to investigate the clinical implication of vitamin D deficiency in LT. LT outcomes were reviewed in a retrospective cohort of 528 recipients during 2014–2019. In the pre-transplant period, 55% of patients were vitamin-D-deficient. The serum vitamin D level was correlated with the model for end-stage liver disease (MELD-Na) score. Vitamin D deficiency in the post-transplant period was associated with lower survival after LT, and the post-transplant supplementation of vitamin D was associated with a lower risk of ACR. The optimal vitamin D status and vitamin D supplementation in the post-transplant period may prolong survival and reduce ACR incidence. Introduction Vitamin D plays an important role in bone metabolism, regulating gene expression in multiple tissues, and increasing the intestinal absorption of calcium. Recently, in addition to the well-known effects on musculoskeletal metabolism, it has been reported that vitamin D has anti-inflammatory and immune-modulatory properties [1][2][3]. Clinically low serum levels of vitamin D have been associated with a higher prevalence of infections, cancer, cardiovascular, and autoimmune disorders [4,5]. Vitamin D deficiency is one of nutrition issues that is addressed in liver transplantation (LT) patients [6]. Due to the end-stage liver disease (ESLD) of the LT patients, malabsorption, inadequate dietary intake, and impairment in hepatic activation of vitamins are major issues [7,8]. While LT has been reported to have positive effects in increasing serum vitamin D concentrations as well as the percentage of patients with sufficient vitamin D levels, immunosuppression-related metabolic disturbances cause vitamin D Demographical Characteristics of Patients A total of 528 patients were included in the analytic cohort ( Table 1). The median recipient age at the time of LT was 58 years (IQR: 52-64), and more than half of the recipients were male (n = 350, 66.2%). Cause of cirrhosis was mainly alcohol (n = 154, 29.1%), NASH/Cryptogenic (n = 136, 25.9%), and viral hepatitis (n = 103, 19.5%). Half of the recipients received previous abdominal surgery (n = 266, 50.3%), and a minority of recipients had portal vein thrombosis at the time of LT (n = 104, 19.7%). The median laboratory MELD-Na score was 19 (IQR: 13-28). The median waiting time was 2.9 months (IQR: 0. 8-7.4). The majority of the donor graft type was donor after brain dead (DBD) (n = 458, 86.8%). The median cold ischemic time was 6.2 h (IQR: 5.3-7.4). Figure 1A shows the distribution of vitamin D status prior to LT, showing 55% were vitamin-D-deficient. The characteristics of vitamin-D-deficient and -sufficient patients were compared (Table 2). 
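As an illustration of the deficiency classification used above, a minimal sketch follows; the 20 ng/mL cutoff is the one used in the study, while the file and column names (lt_cohort.csv, vitd_pre, meld_na) are hypothetical and not taken from the actual database.

import pandas as pd

CUTOFF_NG_ML = 20  # 25(OH)D deficiency threshold used in the study

def classify_vitd(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["vitd_status"] = out["vitd_pre"].apply(
        lambda x: "deficient" if x < CUTOFF_NG_ML else "sufficient"
    )
    return out

cohort = classify_vitd(pd.read_csv("lt_cohort.csv"))        # hypothetical export
print(cohort["vitd_status"].value_counts(normalize=True))   # ~55% deficient reported
print(cohort.groupby("vitd_status")["meld_na"].median())    # MELD-Na by vitamin D status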
Recipient characteristics including age >60 years, presence of HCC, alcohol consumption rate, MELD-Na score, and serum albumin level had a significant difference (p < 0.05). The relationship between the MELD-Na score and serum levels of vitamin D in the pre-transplant period was analyzed (Figure 1B). The correlation coefficient was −0.254 (p < 0.01; 95% CI: −0.34-−0.17). Influence of Preoperative and Postoperative Serum Vitamin D Levels on Overall Survival Differences in the long-term survival between patients who had vitamin D deficiency and sufficiency at pre- and post-transplant were compared (Figure 2A,B). There was no significant difference between the patients who had vitamin D deficiency and sufficiency in the pre-transplant period (p = 0.64) (Figure 2A). However, there was a significant difference between the two groups in the post-transplant period (Figure 2B). A bivariate and multivariable Cox regression analysis was performed to assess the risk factors associated with the five-year OS (Table 3). Older age (>60 years old) (HR 3.47; 95% CI, 1.38-8.68, p < 0.01) and post-transplant vitamin D sufficiency (HR 0.31; 95% CI, 0.13-0.75, p < 0.01) were associated with five-year OS. Influence of Preoperative and Postoperative Serum Vitamin D Levels on Acute Cellular Rejection The incidence of ACR in LT recipients with pre- and post-transplant vitamin D deficiency was 19.1% and 22.0%, respectively. There was no significant difference in the cumulative incidence of ACR between vitamin D deficiency and sufficiency in both pre- and post-transplant periods (Figure 3A,B). Comparison of Patient Characteristics Based on Vitamin D Supplementation Status In examining the effect of vitamin D supplementation, four groups were investigated: (1) patients who did not receive supplementation (No Supplement), (2) patients who received supplementation during only pre-transplant (Pre), (3) patients who received supplementation during only post-transplant (Post), and (4) patients who received supplementation during both pre- and post-transplant (Pre/Post). The characteristics of the four groups were compared (Table 4). Among the four groups, there was a significant difference in the ratio of sex and 25(OH)D level at pre-transplant (p < 0.05). Effect of Vitamin D Supplementation on Overall Survival Differences in the long-term survival among the four patient groups (No Supplement, Pre, Post, Pre/Post) were compared using the Kaplan-Meier curve (Figure 4A). Regardless of the supplementation status, there were no significant differences in the OS (p = 0.60).
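The correlation and Cox regression analyses reported earlier in this section could be reproduced along the following lines; this is an illustrative sketch using Python's scipy and lifelines rather than the statistical software actually used, and all variable names are assumed.

import pandas as pd
from scipy import stats
from lifelines import CoxPHFitter

cohort = pd.read_csv("lt_cohort.csv")  # hypothetical export

# Correlation of MELD-Na with pre-transplant 25(OH)D (reported r = -0.254, p < 0.01)
r, p = stats.pearsonr(cohort["meld_na"], cohort["vitd_pre"])
print(f"r = {r:.3f}, p = {p:.3g}")

# Multivariable Cox model for overall survival (age > 60, post-transplant sufficiency)
cph = CoxPHFitter()
cph.fit(
    cohort[["os_months", "death", "age_over_60", "vitd_sufficient_post"]],
    duration_col="os_months",
    event_col="death",
)
cph.print_summary()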
Effect of Vitamin D Supplementation on Acute Cellular Rejection The cumulative incidence of ACR among the four groups (No Supplement, Pre, Post, Pre/Post) (Figure 4B) was compared. Interestingly, the incidence rate of ACR showed a significant difference based on the vitamin D supplementation status. The cumulative incidence was high in the Pre group and was low in the Post group. The proportional subdistribution hazard model of the Fine and Gray method was used for ACR in the No Supplement group and Pre group (Table 5). From the bivariate and multivariable analysis, age (>60 years) was the only variable that was significant (sHR 0.30; 95% CI, 0.12-0.77, p = 0.01). In the same manner, the Fine and Gray method was used for ACR in the No Supplement group and Post group (Table 6). Among all variables, calculated MELD-Na score >30 (sHR < 0.01; 95% CI, <0.01-<0.01, p < 0.01) at the time of LT and vitamin D supplementation during post-transplant (sHR 0.09; 95% CI, 0.01-0.72, p = 0.02) were significant. Discussion Although previous studies have reported that vitamin D plays an important role in solid organ transplantation including kidney and lung [12,13], the clinical impact of vitamin D on LT outcomes and vitamin D supplementation is still unknown. The current study included 528 patients which is larger than any other previous studies of vitamin D in LT. Our study investigated the correlation of the pre-transplant vitamin D level and the MELD-Na score. Moreover, the current study is important because we were able to reveal how the perioperative vitamin D levels and vitamin D supplementation status affect long-term outcomes, such as OS and ACR all in the same cohort. Using the cutoff of 20 ng/mL of 25(OH)D [18], 55% of the patients had vitamin D deficiency before LT. The MELD-Na score and serum levels of 25(OH)D before LT showed a negative linear relationship. While there was no survival difference based on the vitamin D level during pre-transplant, patients who had vitamin D deficiency at post-transplantation had worse survival compared with vitamin-D-sufficient patients. The cumulative incidence of ACR was not affected by the perioperative 25(OH)D level. There was no difference in the OS when it was assessed based on the vitamin D supplementation status. However, the accumulated incidence of ACR was high in the Pre group and low in the Post group, showing a significant difference. Importantly, the risk factor of ACR in the Pre group and Post group was younger age and no vitamin D supplementation and higher MELD-Na score at the time of LT, respectively. Vitamin D3 is taken in the body by diet (20%) or is synthesized by the skin (80%) from 7-dihydrocholesterol following UVB exposure. Vitamin D3 becomes biologically active after hydroxylation in the liver by the enzymes cytochrome P450 2R1 and cytochrome P450 27 becoming 25-hydroxyvitamin D3. The fully active metabolite 1,25-dihydroxyvitamin D3 is hydroxylated in the kidney [21]. ESLD patients have both impaired liver and kidney function that can alter calcium and vitamin D homeostasis [22,23].
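A crude sketch of the group-wise ACR comparison is given below; note that it is not the Fine and Gray sub-distribution hazard model used in the study (typically fit with dedicated competing-risks software), only a naive one-year ACR proportion per supplementation group, and the column names are assumed.

import pandas as pd

cohort = pd.read_csv("lt_cohort.csv")   # hypothetical export
# Flag an ACR event within the first 12 months after transplant
cohort["acr_within_1y"] = (cohort["acr"] == 1) & (cohort["months_to_acr"] <= 12)

# Crude proportion and group size per supplementation group
summary = cohort.groupby("supplement_group")["acr_within_1y"].agg(["mean", "size"])
print(summary)   # groups: No Supplement / Pre / Post / Pre-Post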
Even when there were patients taking supplemental vitamin D during pre-transplant, more than half of the patients had vitamin D deficiency ( Figure 1A). In addition, we found that there was a negative correlation between MELD-Na scores and serum levels of 25(OH)D in the pre-transplant period, which was compatible with the vitamin D physiology since the MELD-Na score incudes both hepatic and renal components [15]. Vitamin-D-sufficient status in the post-transplant period was associated with five-year survival after LT. The optimal vitamin D status prolonged survival. This indicated that post-transplant nutritional support including the correction of vitamin D deficiency will support better OS. These results are consistent with a study from Lowery et al., showing that the mortality of lung transplant recipients who remained vitamin-D-deficient at one-year post-transplant was higher than that of recipients who maintained normal vitamin D levels [13]. In the current study, the six-month mortality rate of vitamin-D-deficient patients in the post-transplant period was 8.0% while that in vitamin-D-sufficient patients was 0.63% (p < 0.01) ( Figure 2B). Post-transplant vitamin-D-deficient status had larger effects on mortality in the early period after LT compared with the late period. Even though early mortality after LT can be caused by different conditions such as early allograft dysfunction and infections [24], these complications might be related to decreased immune-modulatory properties because of the low vitamin D level [1][2][3]. On the other hand, low vitamin D can be a result of the malnourished condition of LT recipients. Malnutrition itself could also negatively affect mortality [25,26]. Thus, serum levels of 25(OH)D in the post-transplant period can be a prognostic factor or a predictive factor for survival. Further investigation to determine how the vitamin D status contributes to OS is needed. Pre-transplant vitamin D status was not associated with the long-term accumulated incidence of ACR after LT in this study ( Figure 3A). Although focusing on the short term after LT, the vitamin D deficiency group had a higher rate of ACR compared to the sufficiency group. Similar findings were seen in a paper from Zhou et al. showing that high pre-transplant 25(OH)D level (>25 ng/mL) prior to LT significantly decreased the incidence of ACR in 30 days after LT [27]. Another report from Bitetto et al. also confirmed that low pre-transplant 25(OH)D level (<5 ng/mL) was independently associated with moderate to severe ACR episodes within two months after LT [28]. After the early period of LT, the incidence of ACR in the vitamin D sufficiency group tended to be higher than the deficiency group. This implies that the vitamin D sufficiency in the pre-transplant period contributed to reduced risks for ACR in the early post-transplant period and smoothly boosted the immune system of recipients through the recovery phase from LT-related surgical procedures in the late post-transplant period. The vitamin D status during the post-transplant period had no significant relationship with the incidence of ACR ( Figure 3B). As such, pre-transplant vitamin D levels may be associated with ACR in the early period after LT, and it can be hypothesized that optimization of vitamin-D-deficient status by supplementation in the pre-transplant period may contribute to reducing the incidence of ACR. The clinical effects of vitamin D supplementation on LT outcomes remained unclear. 
We demonstrated that vitamin D supplementation in the post-transplant period has positive effects in decreasing the incidence of ACR during one year after LT. A previous study showed similar findings that vitamin D supplementation for the first one month significantly decreased the incidence of ACR in 30 days after LT [27]. Vitamin D supplementation is assumed to increase the components of suppressor T cells/T memory cells, decrease the C3 co-stimulatory molecule expression (HLA-DR, CD28), and expand T naïve cells/cytotoxic T cells [27,29]. Our comparison between the No Supplement and the Post groups clarified the anti-rejection effects of the post-transplant vitamin D supplementation. From these results, we propose that vitamin D supplementation should be considered, especially to reduce the incidence of ACR. Yet, our result does not demonstrate that vitamin D supplementation in the pre-transplant period could reduce the incidence of ACR. Thus, prospective studies should be conducted to elucidate the importance of vitamin D supplementation in LT, including the relationship between vitamin D supplementation and modification of vitamin D levels. This study had several limitations. Since this was a retrospective study, information bias was possible given that the data were manually abstracted from the medical records. Additionally, there was potential for unmeasured confounders of the relationship between vitamin D status and hepatic graft status. The multivariable analysis of all four groups (No Supplement, Pre, Post, Pre/Post) showed that sex is not a significant factor regulating the incidence of ACR (Tables 5 and 6) or five-year survival (data not shown), although there were gender differences among groups (Table 4). In this study, more female patients tended to be on vitamin D supplementation compared with male patients. We assume it was because of the higher incidence of bone-related disorders in female groups, such as osteoporosis. Future research is needed to elucidate the gender effects on outcomes after LT. Finally, it should be noted that there are no guidelines for screening or supplementation of vitamin D in LT recipients. Further investigation is needed to clarify the importance of screening for vitamin D status and the effectiveness of vitamin D supplementation for patients in the peri-transplant period. Conclusions Vitamin D deficiency in the post-transplant period was associated with lower survival after LT, and the post-transplant supplementation of vitamin D was associated with a lower risk of ACR. Vitamin D levels in the pre-transplant period may be an important factor of ACR. Nutritional support with vitamin D supplementation might be contributing to improving LT outcomes.
2020-12-03T09:05:17.392Z
2020-11-28T00:00:00.000
{ "year": 2020, "sha1": "191826a38f2a882c7a0ba7dbe49ab28a30733304", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/nu12123677", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aeda7c16d337864135803556a4908fa9091925db", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7177714
pes2o/s2orc
v3-fos-license
A Quantitative View of Short Utterances in Daily Conversation: A Case Study of That's right, That's true and That's correct

Short utterances serve a multitude of different communicative functions in interactive speech and have attracted due attention in recent research on dialogue acts. This paper presents a quantitative description of three short utterances, i.e. that's right, that's true and that's correct, and their variations based on the Switchboard Dialogue Act Corpus. In particular, it offers an overview of how they are deployed by native speakers in daily conversation. At the same time, it attempts to provide a comparative account of that's right and that's true, showing that while almost 75% of them are mutually exchangeable, they nonetheless exhibit preferences in interactive speech. This insight is expected to form a useful approach towards automatic dialogue act tagging.

Introduction Dialogue act (DA), defined as "communicative activity of a dialogue participant, interpreted as having a certain communicative function and semantic content" (ISO 24617-2, 2012: 2), plays a key role in the interpretation of the communicative behaviour of dialogue participants and offers valuable insight into the design of human-machine dialogue systems (Bunt et al. 2010). With the goal of facilitating automatic DA tagging, this paper describes a corpus-based investigation into that's right, that's true, that's correct and their variations in the Switchboard Dialogue Act (SWBD) Corpus, in order to answer questions about the communicative functions they mainly perform in daily conversation. These utterances deserve our particular attention in research considering that, like other brief responses (e.g. Oh, Uh huh, Mm, Okay), they serve as important feedback to the main speaker and they usually occur as overlapping speech. They are particularly problematic to interpret because they demonstrate a drastically different functional or pragmatic meaning from the semantic meaning of the component tokens. Consider Example 1. This is one excerpt retrieved from the targeted corpus, which will be further illustrated in section 2. A and B, two speakers, are talking about books and literature, where B is describing one of her daughter's books as "real easy to follow". The last utterance that's right can be interpreted as serving both assessment/appreciation and agreement functions. Speaker B considers that what has been stated by A is right, not false, in which right is used as the evaluative adjective. B also implies agreement with the interlocutor, where that's right is used as a whole. Therefore, on the one hand, the semantic meaning of that's right makes it much closer to personal judgments and assessments, that is, the opinion is "right, not false". On the other hand, it is often used as a whole, indicating the speaker's agreement, which goes beyond lexical meanings. However, past studies rarely specify the various uses of that's right, that's true and that's correct in a systematic fashion, and just sporadically describe one or two cases to illustrate one or two facets of them, without capturing a full picture of how they are used with empirical evidence. To be more exact, for the studies that do discuss usage of that's right, Gardner (2001) believes that that's right is exactly the same as right when responding to a preceding question, the synonym for "that's correct".
This point has been further elaborated in that right is deemed "a truncated version of that's right" when acting as "an epistemic confirmation token", "in a sense close to one of its dictionary meanings, namely 'correct'" (Gardner, 2004: 4). The studies indicate that that's right, right, that's correct, and correct are similar and can be alternatively used as the confirmation token oriented to a prior question. At this regard, however, Stenström (1987: 104) asserts that that's right is much stronger than right in degree of emphasis and involvement when severing as a response move to the same type initiating move. In addition, when responding to a previous declarative, that's right has been considered to realize the functions of seeking confirmation (Tui, 1994), showing agreement (Stenström, 1987;Tui, 1994;Gardner, 2001) as well as making assessments (Tao, 2003). Therefore, that's right has been considered to indicate a wide variety of intentions in interaction. With regard to that's true, it has received little attention, and only McCarthy (2003) makes brief description that as a syntactically independent token, true seems to prefer the clausal option (that's true) to independent occurrence (true). In terms of that's correct, it has been left largely unexamined and unspecified regarding the usage. Considered semantic meanings of the three short utterances (i.e. that's right, that's true and that's correct), they largely embody in their key words right, true and correct. As is shown in dictionaries, the three words have similar lexical meanings and are often used to paraphrase each other. For instance, Longman Dictionary of Contemporary English (2009, fifth edition) defines them as follows: Correct: having no mistakes; right (p.379) Right: true/correct (p.1504) True: not false, based on facts and not imagined or invented (p.1891-1892) Thus, this paper aims to bring together the disparate findings on the uses of the three short utterances as well as their variations, attempting to depict an overview of them: how they are deployed by native speakers in daily conversation. At the same time, a comparative view has been concentrated on that's right/true, to seek to the circumstance in which they are mutually exchangeable and in which they are distinct. In this way, it is expected to form a useful approach towards automatic detection of DAs. This paper is structured as follows. Section 2 briefly introduces the SWBD DA corpus, then section 3 presents how the data has been processed before statistical analysis. Section 4 is related to general figures for the three short utterances and their variations, followed by a comparative study (section 5). Section 6 draws conclusions to this paper. Corpus Resource This study uses the Switchboard Dialogue Act Corpus 1 , which comprises 1,155 transcribed telephone conversations, totaling in 223,606 utterances or 1.5 million word tokens (Fang et al., 2011). In this corpus, the segmented unit for utterances is defined as "slash-unit", which can be complete or incomplete, ranging from "a sentence" to "a smaller unit" (Meteer et al., 1995: 16). Moreover, all these segmented utterances have been annotated with DA information, such as "aa" (accept), "ba" (assessment/appreciation), etc., to denote functions of particular utterances according to the SWBD- PACLIC 28 ! 
380 DAMSL coding scheme (Jurafsky et al., 1997 As can be seen, the first utterance has been coded with "sv", a DA tag for statement-opinion, while the second one has been labeled as "aa", a code for accept. In the current study, investigation of various functions will be conducted based on the DA tags which have been coded for each utterance. Data Pre-processing For the benefit of the current work, that's right, that's true and that's correct, and their variations are retrieved from the corpus accordingly. Variations in the current study are defined with a series of factors taken into account. Firstly, variations of the same token share the key words and present in similar patterns, for instance, it's true, this is true and true are all considered as variations of that's true, since they contain the same key word true with similar patterns. Consequently, the whole utterances have similar semantic meanings. Secondly, cases (e.g. it's true) embedded with adverbs and formulaic terms are still regarded as variations, because adverbs and formulaic terms are often used to enhance or emphasize emotions or attitudes, but not to change the meaning of the whole utterance. That's really true and I think that's certainly true are cases in point, where really and certainly are adverbs, and I think is the formulaic term. They are used to emphasize the attitude of the speaker. Formulaic terms refer to expressions such as "I think" and "I believe", which display in the form of "I + predicate", to express the speaker's subjectivity in spoken discourse (Baumgarten and House, 2010). Also, they have been recognized as one type of "engagement", dealing with "sourcing attitudes and the play of voices around opinions in discourse" in the appraisal framework (Martin and White, 2005: 35). Thirdly, the negative form and interrogative form, e.g. that's not true, is that true? are excluded, since their meanings and primary functions are apparently distinct from those of that's true. Fourthly, cases subsequently followed by thatclauses or prepositional phrases are excluded from the current work either, for instance It's true, followed by a that-clause, is not used independently any more. Such cases are not concerned with at this moment. Finally, it is necessary to reconsider the independent token right since it is often used as acknowledging token in the literature (e.g. Gardner, 2004;2007), different from that's right. As a consequence, right is not treated as a variant of that's right in this stage, which will be verified by the statistical information later. Thus, the final list can be identified as shown in Table 1, where similar patterns take one-to-one correspondence. Apparently, that's true has more different types of variations than the other two. Descriptive Statistics That's right, that's true and that's correct are in effect synonymous concerning the dictionary meaning, while in the corpus, they do vary regarding their frequency information. (1) That's right and variations (2) That's true and variations (3) That's correct and variations Total 911 920 21 (1), (2) and (3) in the following will be used to stand for the three sets of utterances respectively. (1) and (2) are almost the same, both of which far exceed that of (3). Beyond this, a range of functions have been identified for each of them, of which "aa", "ba", "s", "na" and "b" 3 are the most significant ones, all together accounting for over 98% in each set. Table 3 sets out these functions and their relative frequencies in performing each of them. 
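The retrieval and filtering rules described in the data pre-processing step can be sketched as a simple pattern match. The snippet below is illustrative only: the utterance format is an assumption rather than the actual corpus files, and bare tokens such as true or correct, as well as freely moving adverbs, would need additional rules.

import re

KEY = r"(right|true|correct)"
PRONOUN = r"(that|it|this)"
# matches e.g. "that's true", "this is true", "that's really right", "i think that's certainly true"
PATTERN = re.compile(
    rf"^(i (think|believe) )?{PRONOUN}('s| is)( \w+ly)? {KEY}[.!]?$",
    re.IGNORECASE,
)
NEGATIVE = re.compile(r"\bnot\b|\?\s*$")   # exclude negative and interrogative forms

def is_variant(utterance: str) -> bool:
    text = utterance.strip().lower()
    return bool(PATTERN.match(text)) and not NEGATIVE.search(text)

print(is_variant("That's really true."))   # True
print(is_variant("Is that true?"))         # False (interrogative excluded)
print(is_variant("That's not right."))     # False (negative form excluded)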
Table 3 Top five functions of three sets 3 In the coding scheme SWBD-DAMSL, there are very specific definitions for each of them. Accept (aa), one subtype of agreement, indicates the speaker explicitly accepts a proposal, or makes agreements with previous opinions (Jurafsky et al., 1997: 37). Assessment/appreciation (ba) is defined as "a backchannel/continuer which functions to express slightly more emotional involvement and support than just 'uh-huh'" (Jurafsky et al., 1997: 48). Statement (s) divides into "descriptive/narrative/personal" statements (sd) and "other-directed opinion statements" (sv), both with the primary purpose of making claims about the world (including answers to questions) (Allen and Core, 1997: 10). Affirmative answer (na) is one subclass of answers, which indicates affirmative answers that are not "yes" or a variant (Jurafsky et al., 1997: 50). Acknowledgement (b) is usually "referred to in the CA literature as a 'continuer'" (Jurafsky et al., 1997: 42). A glance at the table establishes that these top five functions together account for a large proportion among a series of functions performed by each particular set. In particular, accept overwhelmingly occurs in all the three sets, followed by assessment/appreciation. However, set (3) displays some slight distinction from (1) and (2) in the way that its proportion of assessment/appreciation is around 10% higher than that of sets (1) (2), but approximately 10% lower in accept. In the description to follow, the major concern is to seek similarities and distinctions within each set. That's right and its variations That's right and its variations frequently occur in daily speech, which can be seen in Table 4 (1) It is perceptible that the simple token that's right overwhelmingly occurs compared to a range of variations, which may be indicative of the significance of economy in casual talk. By contrast, formulaic terms and adverbs are not so often attached with that's/it's right, accounting for less than 3% (2.2%+0.6%) and 4% (2.9%+0.6%+0.4%) respectively, which implies that such additional emphasis of stance and attitudes is not common in daily conversation. Noticeably, it's right appears 4 times, and this is right never occurs in the corpus. Hence that, it and this are similar lexical items but they have their own particular preference in some circumstance: when prefacing "be + right", that is more often used than it and this. Regarding a variety of functions they serve, that's right and its variation totally perform twelve different functions in the corpus, but the top five are extremely significant which can be seen in Table 5, together constituting over 60% in each row. Strikingly, that's right does exhibit some slight distinction from its variations in that that's right can respond to a prior question and acknowledge to (1) Moreover, when formulaic terms are attached previously, the whole utterance has greater likelihood to function as statement. It is noted that the top three functions of that's right are exactly those functions analyzed and discussed in the literature, that is, agreement, assessments and affirmative answers. But with the empirical evidence, it can be further observed that agreement is much more remarkable than the other two. In addition, it's right is a special token in the table in that it clearly prefers statement to accept, which should not have been counted as a variant of that's right. 
Yet, considered the limited occurrence (4 times), it is not pervasive enough to determine what kind of functions it exactly serves, so it remains in this set. In the future, a larger spoken corpus will be in demand for examining such tokens. That's true and its variations Likewise, Table 6 exhibits basic frequency information of set (2). Different from set (1), that's true has much more variations than that's right in terms of types and tokens, which is illustrated by the statistics that variations of that's true make up 29% of set (2) while those of that's right just accounts for 6.5% of set (1). It needs to be noted that the symbol "*" in Table 6 means that the adverb in that's + adverb + true is able to move freely, not restricted to the middle position, such as "probably that's true", or "that's true also". This, however, has not been perceived for that's right. Table 6 Statistical information of set (2) Yet still, set (2) is consistent with set (1) in two respects. On the one hand, that's true occurs more frequently than it's true and this is true, which is correspondingly close to set (1). On the other hand, formulaic terms and adverbs do not show high frequency in set (2) Concerning a range of functions they perform, that's true and its variations totally have nine different functions in the corpus, among which the top five are displayed in Table 7. Overall, the distribution here shares a large number of similarities with that of set (1) in Table 5. In particular, accept, assessment/appreciation and statement are considerably significant, while affirmative answer and acknowledgement are comparatively less crucial, only occurring in that's true, that's + adverb + true and true. When that's true is attached with formulaic terms, the likelihood to function as accept declines accompanying with greater proportion in statement. The exceptional token is it's true, which itself prefers both accept and statement. In this sense, it's true is distinguished from that's true which overwhelmingly deals with accept. By contrast, this is true is relatively consistent with that's true in primary functions they serve. Thus, in the pattern "THAT/IT/THIS + BE + TRUE", that, it and this indicate their particular preference as well. That's correct and its variations That's correct and its variations are used infrequently, with a total occurrence of 21 in the whole corpus. As a consequence, there are far less variations in this set. Table 8 shows the basic frequency information, and 0 0 0 0 1 100% aa = accept; ba = assessment/appreciation; s = statement; na = affirmative answer; b = acknowledgement/backchannel Table 9 All functions performed by set (3) As can be seen in Table 8, that's correct occurs more frequently than its variations, accounting for 62% in set (3), which is lower than that of that's right (93%) and that's true (71%). Moreover, formulaic terms and adverbs are not so frequent, either, which suggests bare tokens such as that's correct and correct are preferred by native speakers. Considered a range of functions performed Table 9, accept and assessment/appreciation are remarkable compared to statement, affirmative answer and acknowledgement with one occurrence for each. To summarize, an overview of utterances in the three sets has presented with empirical evidence. Generally, they share quite a lot of similarities in terms of primary functions they serve. In addition, two points need to be further elaborated. 
One is that, right is assumed to be used in a way different from that's right in conversation, which is further confirmed by the evidence that 73% of right serve acknowledgement while that's right prefers accept with 76% of its total occurrence in the corpus. This can be observed in Table 10, where their top five functions have been listed respectively. Also, 16% of that's right can be used as assessment/appreciation, whereas the single right only occurs 11 times (0.2%) as assessment/appreciation. Hence, in general, right and that's right are two different cases in interactive speech. The second point is that, that, it and this have their particular preference to the pattern "THAT/IT/THIS+BE+RIGHT/TRUE/CORRECT" , which can be summarized as follows. That > Ø > it > this It means that the ones on the left side take priority over those on the right: that more likely occurs than this, and the symbol Ø signals no pronoun occurs. This is highly consistent with Tao's finding (2003: 202) "that is more likely to be used as a turn initiator than this". A Comparative Study According to the previous statistical analysis, it is noted that that's right, that's true and that's correct account for quite a large proportion in each particular set. The previous observation has also shown that the total occurrence of that's correct is much fewer than the other two, and therefore, a comparative study will concentrate on that's right and that's true, and examine the condition where they are mutually exchangeable with each other and where they are distinct from each other. Figures 1 and 2 respectively fill out their primary functions and their preceding contexts 4 . By Figure 1, apparently that's right and that's true both exhibit considerable preference to accept and assessment/appreciation which together make up over 90% for both cases. It is meant that over 90% of their tokens perform the two same functions. aa = accept; ba = assessment/appreciation; s = statement; na = affirmative answer; b = acknowledgement/backchannel Figure 1 Primary functions of that's right and that's true However, some slight difference between them can be perceived as well. That's right is used to perform all these five functions, while that's true cover four of them and cannot be not used to answer a question. At the same time, that's true shows far greater likelihood to serve statement compared to that's right. By contrast, that's right is almost ten times more likely than that's true to function as acknowledgement. In order to see whether their previous contexts could offer useful cues to differentiate the occurrence of that's right and that's true, a specific view is taken into the previous contexts when they act as accept and assessment/appreciation, because the two functions together make up a large proportion of the total occurrence. Figure 2 depicts the salient previous context when they act as the two functions. aa = accept; ba = assessment/appreciation; s = statement Figure 2 Previous contexts of aa and ba It is clear that statement is the most overwhelming previous function, accounting for over 80% previous contexts of that's right/true when they act as accept and assessment/appreciation. It seems that the previous contexts offer little cues to differentiate them, since both are so often preceded by statement. 
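The preceding-context analysis in Figure 2 amounts to recording the DA tag of the utterance immediately before each target token; a schematic version, with a simplified conversation representation, is sketched below.

from collections import Counter

def previous_context_distribution(conversations, targets=("that's right", "that's true")):
    """conversations: list of conversations, each a list of (text, da_tag) pairs."""
    dist = Counter()
    for conv in conversations:
        for i in range(1, len(conv)):
            text, tag = conv[i]
            if text.strip().lower().rstrip(".!,") in targets and tag in {"aa", "ba"}:
                dist[conv[i - 1][1]] += 1      # DA tag of the preceding utterance
    return dist

demo = [[("it's a good book", "sv"), ("that's right", "aa")]]
print(previous_context_distribution(demo))     # Counter({'sv': 1})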
According to Figures 1 and 2, it is possible that almost 75% of that's right/true are mutually exchangeable since over 90% of their occurrence contributes to accept and assessment/appreciation, in which over 80% of the previous contexts are statement. This can be further validated by the chi-square test, which aims to test if that's right and that's true have no difference in the distribution of different functions. Table 11 shows the frequency distribution of that's right/true in accept, assessment/appreciation and other functions. Table 12 Chi-Square Tests In Table 12, the value of pearson chi-square is 0.130, and the p-value is 0.937 which is larger than 0.05. It manifests that the difference between that's right and that's true is not significant in the distribution of primary functions according to the frequency information observed in the corpus. In summary, the statistical analysis above demonstrates that that's right and that's true are used almost the same in interactive speech, in which nearly 75% of their total occurrence are interchangeable. This is further confirmed by the significant test which explicitly shows no significance in the distribution of primary functions, and their previous contexts supply little cues for the distinction. In some cases, however, they have their own preference and differ from each other. For instance, that's true has never been found to answer a previous question in the corpus, while 3% of that's right can perform this function. Moreover, that's true shows much greater likelihood to serve statement whereas that's right is almost ten times more likely than that's true to be acknowledgement. Specifically, when the preceding utterance is a statement or a question, the current utterance is more likely to serve statement if it is realized by that's true; it has greater possibility to be acknowledgement or an affirmative answer if it is realized by that's right. This kind of preference is expected to facilitate DA tagging. Conclusions This paper presented a quantitative investigation of three short utterances (i.e. that's right, that's true, that's correct) and their variations in the Switchboard Dialogue Act Corpus. Particularly, it offered an overview to account for how they are used in daily conversation with empirical evidence. By the current investigation, it has been observed that that's right/true and their variations much more frequently occur than that's correct and its variation. In terms of primary functions served in interactive speech, they consistently exhibit great preference to accept, assessment/appreciation, statement, affirmative answer and acknowledgement, among which, accept and assessment/appreciation together account for quite a large proportion. Regarding their variations, that, it and this are similar lexical items but they indicate their particular preference to this pattern "THAT/IT/THIS+BE+RIGHT/TRUE/CORRECT" . Moreover, formulaic terms and adverbs are not so frequently embedded. When formulaic terms are attached, the whole utterances have greater likelihood to be statement. Also, we have specified some crucial issues for that's right and that's true, which are clearly useful to the detection of DAs. It has been discovered that almost 75% of that's right and that's true are mutually exchangeable, which has been verified by the chi-square that their difference is not significant in the distribution of primary functions. Moreover, the previous contexts offer little cues to differentiate that's right and that's true. 
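The chi-square comparison reported in Table 12 can be reproduced as follows; the cell counts below are placeholders (the exact frequencies of Table 11 are not reproduced here), so the statistic will not match the reported value of 0.130 (p = 0.937) exactly.

from scipy.stats import chi2_contingency

#                 accept  assessment  other
observed = [
    [650, 135, 67],   # that's right  (illustrative counts, not Table 11)
    [495, 105, 55],   # that's true   (illustrative counts, not Table 11)
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")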
In this sense, they are two short utterances with similar meanings and uses. In some cases, however, they display their own particular preferences: that's right has fewer variations than that's true and covers a wide range of functions in the corpus; that's true has never been found to answer a previous question in the corpus, while 3% of the occurrences of that's right do so. Moreover, that's true shows a much greater likelihood of serving as statement, whereas that's right is more likely to be acknowledgement. This kind of empirical analysis provides insights and a basis for automatic DA tagging. In addition, we believe it also tells second language learners how to use these three short utterances in specific contexts.
2016-01-24T08:34:15.539Z
2014-12-12T00:00:00.000
{ "year": 2014, "sha1": "f5f81fa27379fd4dcc3bbf0971ce132572383334", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "f5f81fa27379fd4dcc3bbf0971ce132572383334", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
248369022
pes2o/s2orc
v3-fos-license
Laparoscopic-assisted vs open transhiatal gastrectomy for Siewert type II adenocarcinoma of the esophagogastric junction: A retrospective cohort study BACKGROUND The studies of laparoscopic-assisted transhiatal gastrectomy (LTG) in patients with Siewert type II adenocarcinoma of the esophagogastric junction (AEG) are scarce. AIM To compare the surgical efficiency of LTG with the open transhiatal gastrectomy (OTG) for patients with Siewert type II AEG. METHODS We retrospectively evaluated a total of 578 patients with Siewert type II AEG who have undergone LTG or OTG at the First Medical Center of the Chinese People’s Liberation Army General Hospital from January 2014 to December 2019. The short-term and long-term outcomes were compared between the LTG (n = 382) and OTG (n = 196) groups. RESULTS Compared with the OTG group, the LTG group had a longer operative time but less blood loss, shorter length of abdominal incision and an increased number of harvested lymph nodes (P < 0.05). Patients in the LTG group were able to eat liquid food, ambulate, expel flatus and discharge sooner than the OTG group (P < 0.05). No significant differences were found in postoperative complications and R0 resection. The 3-year overall survival and disease-free survival performed better in the LTG group compared with that in the OTG group (88.2% vs 79.2%, P = 0.011; 79.7% vs 73.0%, P = 0.002, respectively). In the stratified analysis, both overall survival and disease-free survival were better in the LTG group than those in the OTG group for stage II/III patients (P < 0.05) but not for stage I patients. CONCLUSION For patients with Siewert type II AEG, LTG is associated with better short-term outcomes and similar oncology safety. In addition, patients with advanced stage AEG may benefit more from LTG in the long-term outcomes. INTRODUCTION In recent decades, the global incidence of gastric cancer has declined annually while the incidence of adenocarcinoma of the esophagogastric junction (AEG) has presented an upward trend, especially in Asian countries [1][2][3][4][5]. Although there are many controversies concerning the optimal treatment for AEG patients, surgery is still the cornerstone of therapeutic strategies [6]. According to the results of the nationwide clinical trial (JCOG 9502) in Japan, the transhiatal approach is recommended for Siewert type II/III AEG patients with esophageal invasion within 3 cm [7,8]. Since the first report of laparoscopic-assisted transhiatal gastrectomy (LTG) by Kitano et al [9] in 1994, LTG has developed rapidly worldwide. With the improvement of laparoscopic technology and the optimization of equipment, a large number of countries have successively carried out LTG for gastric cancer because it provides not only better short-term outcomes but also comparable oncologic safety and survival in comparison with open transhiatal gastrectomy (OTG), especially in early-stage and distal gastric cancer[10-13]. Conversely, due to the lack of scientific evidence, the feasibility of LTG in proximal gastric cancer is still controversial. Moreover, peripheral lymphatic drainage pathways of Siewert type II AEG are more complicated as the particularity of the anatomical location, and LTG surgery with D2 lymphadenectomy remains more challenging than other gastric cancer sites [14,15]. At present, the studies on the short-term and long-term clinical effects of Siewert type II AEG regarding LTG and OTG are limited [16][17][18][19][20]. 
Thus, this study retrospectively analyzed the clinical data of Siewert type II AEG patients in our hospital, compared the short-term and long-term outcomes of LTG and traditional OTG and aimed to explore the feasibility of LTG treatment of Siewert type II AEG. Patients This work retrospectively reviewed patients with Siewert II AEG who have undergone gastrectomy at the First Medical Center of Chinese PLA General Hospital in China from January 2014 to December 2019. The inclusion criteria contained: (1) Histologically proven Siewert type II AEG; (2) Surgery via either OTG or LTG with total or proximal gastrectomy with D2 lymphadenectomy; (3) Staging T1-4a, N0-3, M0 (according to the 8 th edition of the TNM staging system of the American Joint Committee on Cancer) [21]; and (4) Esophageal invasion < 3 cm. The exclusion criteria were presented as following: (1) Patients with a secondary malignancy within 5 years; (2) American Society of Anesthesiologists physical status score > 3; (3) Only underwent palliative resection or combined organ resection; and (4) Received preoperative chemotherapy of radiotherapy. Finally, a total of 578 patients were pooled into the study (LTG = 382, OTG = 196). This study has been registered on Clinical-Trial.gov (ChiCTR2100053647) and approved by the Ethics Committee of Chinese PLA General Hospital. LTG: The patient was placed in a supine position and given general anesthesia by employing a 5-hole method. After exploring the relevant positions of various tissues in the abdominal cavity and the location and size of the tumor, a radical total and proximal gastrectomy was performed in this study. Gastrectomy and D2-lymphadenectomy were completed. Then, a small incision was made in the middle of the abdomen to reconstruct the digestive tract. Gastric tube construction and esophagogastrostomy were often performed after proximal gastrectomy. After total gastrectomy, most patients underwent esophagojejunostomy and jejunojejunostomy (Roux-en-Y reconstruction). OTG: The positioning and anesthesia of the patients remained the same as those of the LTG group. An incision was made in the middle of the abdomen to enter the abdominal cavity. Other operative details such as gastrectomy, lymphadenectomy and reconstruction were the same as those in the LTG group. Clinical parameters and follow-up We retrospectively collected the following clinical and pathological factors available in our clinical database: Age, sex, body mass index, smoking/drinking history, American Society of Anesthesiologists score, tumor size, histopathological grade, TNM stage, operation time, intraoperative blood loss, length of abdominal incision, length of proximal margin, number of harvested lymph nodes (LNs), number of positive LNs, resection status (R-status) of margin, postoperative recovery (the time to liquid diet, ambulation, first flatus or defecation and discharge) and postoperative complications (anastomotic leakage, anastomotic stenosis, abdominal abscess, pneumonia, arrhythmia and wound infection). All postoperative complications were classified with the application of the Clavien-Dindo grading system [22]. In addition, postoperative patients were periodically followed up with blood tests, physical examinations and chest/abdominal computed tomography scans through outpatient visits. The follow-up interval was every 3-6 mo for the first 2 years and every 6-12 mo for the subsequent 3 years. All surviving patients were followed up annually thereafter until death. 
Overall survival (OS) was calculated from the time of surgery to death due to any cause or latest follow-up. Disease-free survival (DFS) was calculated as the time from surgery to first recurrence or death because of any reason. Statistical analysis Continuous data were presented as mean ± standard deviation with t test if normally distributed or as the median (interquartile range) with Mann-Whitney U test if not normally distributed. Dichotomous variables were compared with the χ 2 test or Fisher test. Survival analysis was performed by the Kaplan-Meier curves based on the log-rank test. Statistical analysis was done by IBM SPSS (version 26.0.0.0). The figures were plotted with RStudio (version 1.4.1717). Bilateral P < 0.05 was considered to be statistically significant. Clinicopathological characteristics As shown in Figure 1, a total of 578 patients were eligible (512 male and 66 female) for our study, of which 382 (66.1%) patients underwent LTG and 196 (33.9%) patients underwent OTG. The demographic information of the participants was presented in Table 1 Perioperative outcomes Perioperative outcomes are shown in Postoperative complications occurred in 5.0% of patients after LTG and in 4.6% of patients after OTG (P = 0.840). There existed no significant difference between the two groups in terms of anastomotic leakage, anastomotic stenosis, abdominal abscess, pneumonia, arrhythmia or wound infection (P > 0.05). Furthermore, the complications of Clavien-Dindo grade III or higher were comparable in both groups (P = 0.729). No mortality existed within 30 d postoperatively in either group. Further details are presented in Table 2. According to the histopathological analysis, the rate of complete tumor resection (R0) could be achieved in 99.5% in the LTG group and 99.0% in the OTG group (P = 0.879). The number of the harvested LNs was significantly higher in the LTG groups (28.81 ± 12.16 vs 26.20 ± 12.23, P = 0.015). In addition, the number of positive LNs was similar in the two groups (P > 0.05). Apart from that, the length of the proximal margin was also comparable between the two groups (P = 0.597). Survival The median follow-up time was 38.94 mo (Interquartile range: 23.28-59.93) for all patients. In comparison with the OTG group, the LTG group showed a better 3-year OS (88.2% vs 79.2%, P = 0.011) (Figure 2A). Then, we performed a stratified analysis of survival according to the TNM stage. For patients with stage I, there existed no significant difference in 3-year OS between the two groups, but patients in the LTG group with stage II and stage III had a better 3-year OS compared with that of the OTG group [Stage II: hazard ratio (HR): 0.126, 95% confidence interval (CI): 0.027-0.584, P = 0.008; Stage III: HR: 0.361, 95%CI: 0.134-0.967, P = 0.043] (Figure 2B-D). Recurrence The rate of recurrence presented no significant difference in the LTG and OTG groups (12.8% vs 10.7%, P = 0.547). The patterns of recurrence were listed in Table 3. Distributions of recurrence for LTG were similar to that for OTG, and there existed no differences in organ metastasis (liver, lung, bone, brain, pancreas), anastomotic recurrence, peritoneal dissemination, lymph node metastasis or others (P > 0.05). April The 3-year DFS was significantly better in the LTG group than that in the OTG group (79.7% vs 73.0%, P = 0.002) ( Figure 3A). After stratification by TNM stage, the 3-year DFS was similar between the two groups in stage I patients. 
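The Kaplan-Meier estimation and log-rank comparison described in the statistical analysis subsection above can be sketched in Python as follows; this uses the lifelines package rather than the SPSS/RStudio workflow of the study, and the data file and column names are assumed.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("aeg_cohort.csv")              # hypothetical export
ltg, otg = df[df.group == "LTG"], df[df.group == "OTG"]

kmf = KaplanMeierFitter()
kmf.fit(ltg["os_months"], ltg["death"], label="LTG")
print(kmf.survival_function_at_times(36))       # 3-year OS, reported 88.2% for LTG

res = logrank_test(ltg["os_months"], otg["os_months"],
                   event_observed_A=ltg["death"], event_observed_B=otg["death"])
print(res.p_value)                              # reported P = 0.011 for overall survival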
However, for stage II and stage III patients, the 3-year DFS was better in the LTG group compared with that of OTG group with significant difference (Stage II: DISCUSSION Recently, the prevalence of Siewert type II AEG has risen rapidly, and most patients are diagnosed as an advanced stage with a poor prognosis at the first visit [23]. Complete removal of the tumor and adequate regional LN resection remains the only curative treatment for AEG [6]. Since the first report of laparoscopic-assisted gastrectomy, laparoscopic techniques have developed quickly in gastrointestinal tumors Comparison of disease-free survival rates between the LTG and OTG groups for stage I patients; C: Comparison of disease-free survival rates between the LTG and OTG groups for stage II patients; D: Comparison of disease-free survival rates between the LTG and OTG groups for stage III patients. CI: Confidence interval; HR: Hazard ratio. [9,24]. However, due to the lack of scientific evidence, the safety and feasibility of LTG in the treatment of Siewert type II AEG still remain controversial [16,17]. In the present study, LTG for Siewert type II AEG showed longer operation times but less blood loss, shorter abdominal incision and faster recovery compared with OTG. The obtained results were similar to the previous studies [17,18,20]. A large number of studies have demonstrated that LTG was comparable for morbidity and mortality to OTG for gastric cancer while few of them were focused on AEG [25][26][27][28]. In this study, no significant difference was observed in postoperative complications between the LTG group and OTG group for Siewert type II AEG. Apart from that, the complications of Clavien-Dindo grade III or higher were comparable in both groups. These results suggested that LTG can be safely performed and provide better short-term outcomes for patients diagnosed with Siewert type II AEG. Ensuring the safety of oncology is critical to the choice of surgical strategy. Shi et al [17] compared 132 patients with LTG and 264 patients with OTG. After propensity score matching, the number of harvested LNs showed no significant difference for AEG. By contrast, Sugita et al [18] suggested an increased number of dissected LNs in the LTG group compared with OTG for Siewert type II AEG [18]. In the current work, there existed a higher number of harvested LNs in the LTG group than that in the OTG group. The previous studies reported that the number of harvested LNs is an important prognostic factor for patients with AEG [29,30]. In addition, other oncological parameters in terms of length of proximal margin, R0 resection and the number of positive LNs were comparable between the two groups. As a result, the oncological safety of LTG is equivalent to OTG. Regarding the long-term outcomes, we found that the distribution of recurrence patterns was similar in the two groups. Shi et al [17] reported that there existed no significant difference for OS between the LTG and OTG groups [17]. Nevertheless, their study population included not only Siewert type II but April 27, 2022 Volume 14 Issue 4 also type III AEG. In addition, Huang et al [19] and Sugita et al [16] suggested that Siewert type II patients in the LTG group had significantly better OS than that in the OTG group [16,19]. The existing limitations included short observation period and small population, respectively. We observed a better 3-year OS and DFS of LTG for Siewert type II AEG patients compared with those treated with OTG. 
Moreover, we conducted a stratified analysis based on TNM stage. Patients with stage I disease exhibited no survival benefit from LTG, while patients with stage II and III disease showed better survival outcomes in the LTG group. Undoubtedly, our study has some limitations. First, this study was a single-center, retrospective cohort study. In addition, the follow-up compliance of patients was limited, and the specific cause of death and the patterns of recurrence of some patients remain unknown. Thus, prospective randomized controlled studies are still needed.

CONCLUSION
In conclusion, LTG is a safe and feasible treatment for Siewert type II AEG. Meanwhile, patients with advanced stage AEG may benefit more from LTG in terms of long-term outcomes.

Research background
Due to the lack of scientific evidence, the feasibility of laparoscopic-assisted transhiatal gastrectomy (LTG) in patients with Siewert type II adenocarcinoma of the esophagogastric junction (AEG) is still controversial.

Research motivation
To compare the feasibility of LTG with the traditional open transhiatal gastrectomy (OTG) in patients with Siewert type II AEG.

Research objectives
We retrospectively evaluated and compared the short-term and long-term outcomes of patients with Siewert type II AEG treated with LTG and OTG, aiming to explore the feasibility of LTG for the treatment of Siewert type II AEG.

Research methods
We retrospectively evaluated 578 patients with Siewert type II AEG who underwent LTG or OTG at the First Medical Center of the Chinese People's Liberation Army General Hospital from January 2014 to December 2019. The short-term and long-term outcomes were compared between the LTG (n = 382) and OTG (n = 196) groups.

Research results
Compared with the OTG group, the LTG group had less surgical trauma and a faster recovery after surgery. No significant difference was present between the two groups regarding oncological safety. The 3-year overall survival and disease-free survival were better in the LTG group than in the OTG group (88.2% vs 79.2%, P = 0.011; 79.7% vs 73.0%, P = 0.002, respectively). In the stratified analysis, both overall survival and disease-free survival were better in the LTG group than in the OTG group for stage II/III patients (P < 0.05) but not for stage I patients.

Research conclusions
For patients with Siewert type II AEG, LTG is associated with better short-term outcomes and similar oncological safety. In addition, patients with advanced stage AEG may benefit more from LTG in terms of long-term outcomes.
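The survival comparisons reported above (Kaplan-Meier estimates compared with the log-rank test, plus stage-stratified hazard ratios) can be illustrated with a short analysis sketch. The study itself used IBM SPSS and RStudio; the Python version below is a hypothetical re-implementation assuming the lifelines library, and the file name and column names are assumptions for illustration only.

```python
# Hypothetical sketch of the survival comparisons described above (Kaplan-Meier
# curves, log-rank test, per-stage hazard ratios). Toy data layout assumed.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Assumed per-patient table:
#   time_months : follow-up time from surgery
#   event       : 1 = death (OS) or recurrence/death (DFS), 0 = censored
#   group       : "LTG" or "OTG"
#   stage       : "I", "II" or "III"
df = pd.read_csv("aeg_cohort.csv")  # hypothetical file

ltg = df[df["group"] == "LTG"]
otg = df[df["group"] == "OTG"]

# Kaplan-Meier estimates for each arm, read off at 36 months (3-year survival)
kmf_ltg = KaplanMeierFitter().fit(ltg["time_months"], ltg["event"], label="LTG")
kmf_otg = KaplanMeierFitter().fit(otg["time_months"], otg["event"], label="OTG")
print(kmf_ltg.survival_function_at_times(36))
print(kmf_otg.survival_function_at_times(36))

# Log-rank comparison of the two arms
lr = logrank_test(ltg["time_months"], otg["time_months"],
                  event_observed_A=ltg["event"], event_observed_B=otg["event"])
print(f"log-rank P = {lr.p_value:.3f}")

# Stage-stratified hazard ratios (LTG vs OTG) from a Cox model fitted per stratum
for stage, sub in df.groupby("stage"):
    sub = sub.assign(ltg=(sub["group"] == "LTG").astype(int))
    cph = CoxPHFitter().fit(sub[["time_months", "event", "ltg"]],
                            duration_col="time_months", event_col="event")
    print(stage, cph.summary.loc["ltg", ["exp(coef)", "p"]])  # HR and P value
```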
Medical Management of Post-Operative Abdominal Infection: A Case of Well Management and Appropriate Medications

Introduction
Intra-abdominal infections are among the most complicated infections to diagnose and treat. A successful outcome depends upon early diagnosis, rapid and appropriate surgical intervention, and the selection of the most appropriate antibiotics [1]. Tertiary infection is a relatively new term referring to those patients who require more than one operation for infection source control [2]. Complicated intra-abdominal infection is a common problem, with appendicitis alone affecting about 300,000 patients per year and consuming more than one million hospital days. Intra-abdominal infection is the second most common cause of infectious mortality in the intensive care unit. The requirement for intervention in most cases, and the controversies surrounding the choice and nature of the procedure performed, add another layer of complexity to the management of these patients [3]. Possible complications of abdominal infections include return of the abscess, rupture of an abscess, spread of the infection to the bloodstream (sepsis), and widespread infection in the abdomen [4]. Surgeons commonly deal with intra-abdominal infections that are the result of perforation of a hollow viscus, which can lead to three potential outcomes: clearance of the bacteria by the host, abscess formation, or peritonitis [2]. Infection is established if the quantity and virulence of the bacteria overrun local peritoneal host defenses, which include resident peritoneal macrophages, early neutrophil recruitment, and activation of the coagulation and complement cascades. If host defenses are completely overwhelmed, diffuse peritonitis results. An abscess forms after fibrin deposition [2]. The present case study was designed to highlight how good management of surgical complications, with appropriate selection of medications and accurate diagnosis, can save a patient from life-threatening conditions.

Case Report
A 55-year-old man presented with abdominal pain and abdominal distension and was admitted from the clinic for persistent pus discharge from the wound suture one and a half months after a surgical procedure. On history, it appeared that the antibiotics used had not been of broad spectrum (first-generation cephalosporins, ampicillins).
He was a known case of hypertension, colon cancer, familial adenomatous polyposis and stage-four liver metastatic changes (post panproctocolectomy with ileoanal anastomosis, ileal pouch and defunctioning ileostomy). The patient underwent a second suturing 20 days after the first surgery. The previous operation had been performed for multiple liver nodules and tumor of the sigmoid and descending colon, while his small bowel and stomach were normal. Based on the laboratory data received, the patient's haemoglobin was below the normal value for males and his white blood cell count was above the normal range. The doctor therefore started haematinic agents to improve the patient's haemoglobin level, together with IV cefoperazone (a third-generation cephalosporin) 1 g and IV metronidazole 500 mg. A CT scan done after the second surgical procedure showed multiple intra-abdominal collections. Physical examination showed that the patient's abdomen was soft and non-tender. The pus showed a mixed growth of gram-negative bacilli, and the lumbar drain fluid was yellow to greenish in colour. After seven days of therapy, pus examination further confirmed the presence of a mixed growth of gram-negative bacilli and gram-positive cocci. A CT scan done as a successive diagnostic procedure showed the possibility of leakage at the ileoanal anastomosis and an enterocutaneous fistula. It was suspected that the leaking part had not yet settled.

Abstract
A 55-year-old man presented with abdominal pain and distension. The patient was admitted from the clinic for persistent pus discharge from the wound suture. The present case is an established case of hypertension, colon cancer, familial adenomatous polyposis and stage-four liver metastatic changes (post panproctocolectomy with ileoanal anastomosis, ileal pouch and defunctioning ileostomy). The operation had been performed for multiple liver nodules and tumor of the sigmoid and descending colon, while the small bowel and stomach were normal. Two months after the surgical procedure, the patient's pus discharge was confirmed to contain a mixed growth of gram-negative bacilli and gram-positive cocci. To improve the prognosis of patients with intra-abdominal infection, monitoring of wounds, examination of tissue or pus discharge, and proper selection of antibiotic treatment must be practiced. The bacterial inoculum must be controlled and diminished in the most effective manner depending on the patient's condition. A mechanistic approach and professional attitude of the surgeon may retard the accelerated progression of post-operative intra-abdominal infections in cancer patients.

Currently the patient was conscious, alert and afebrile. His blood pressure was normal and his blood glucose level was slightly above the normal range. The albumin level in this patient was normal. The patient was discharged and advised to attend the clinic every day for drainage and to return after two weeks. The patient recovered after two weeks of daily dressing and aggressive use of antibiotics.

Discussion
Generally, the diagnosis of intra-abdominal infection is made on physical examination. Before abdominal CT was readily available, it was much more difficult to diagnose intra-abdominal infections and the diagnosis was often delayed. The treatment of intra-abdominal infections is predicated on restoration of normal homeostasis. The principles of treatment include: restoration of fluid and electrolyte balance; physiologic support of organ systems; administration of appropriate empiric antimicrobial therapy; and control of the source of the infection [5].
Regarding the present case, an IV drip was maintained to correct the electrolyte imbalance: five pints of IV fluids per day, comprising four normal saline drips (0.9% NaCl) and one of 5% dextrose. Haematinic agents such as folic acid tablets, ferrous fumarate tablets, vitamin B complex tablets and multivitamin tablets were given to the patient to improve the haemoglobin level. As a counseling point, the patient needed to be made aware of the possible appearance of black-coloured stool. Antibiotic treatment included metronidazole and cefoperazone; a number of regimens with comparable efficacy are available [6]. According to the literature, metronidazole has been used by many investigators, particularly in Europe, and has excellent activity against most anaerobic organisms [7]. It is of great concern that this elderly patient has familial adenomatous polyposis and stage-four liver metastatic changes, and the use of metronidazole in patients with severe liver impairment may lead to potential accumulation. Thus, liver function testing is highlighted for this patient, and a reduced dose is recommended. Apart from that, metronidazole may interfere with laboratory investigations such as glucose and LDH testing [8]. Bacterial resistance is common in healthy isolates and in persons with community-acquired infections in developing countries, and given the prevalence of highly infectious diseases the need for antibiotics is inevitable [9]. The use of a third-generation cephalosporin (cefoperazone) together with metronidazole is an alternative when microbial resistance or nephrotoxicity is a concern. Cefuroxime tablets were prescribed; the most common adverse reactions of cefuroxime are nausea and vomiting (4%-11%) [8]. A choice from the lincosamide group, such as lincomycin or clindamycin, combined with metronidazole in the discharge medication may reduce post-operative complications, as the combination therapy will cover gram-positive and gram-negative anaerobes and aerobes. Clindamycin is generally employed in infections caused by anaerobic bacteria such as Bacteroides fragilis, which often causes abdominal infections associated with trauma [10]. The patient's blood glucose level was higher than normal, so there is susceptibility to complications in wound healing. Clindamycin has the ability to penetrate areas of the body with a poor blood supply and can be considered a drug of choice in these cases; therefore, combination therapy with clindamycin and metronidazole should be recommended in serious infections [10]. In addition, another point of concern was that the patient's blood glucose level was above the normal range (< 7 mmol/L); at the time of discharge his blood glucose level was 10.1 mmol/L, so counseling on diet is needed. On the other hand, the patient is a known case of high blood pressure and had been prescribed amlodipine 10 mg daily, but he had stopped taking the antihypertensive agent. This may be because his blood pressure had normalized (< 130/90 mmHg).

Conclusion
In conclusion, to improve the prognosis of patients with intra-abdominal infections, monitoring, culture examination of tissue or discharge, and proper selection of antibiotics [12][13][14][15][16] must be practiced. Furthermore, the bacterial inoculum must be controlled and diminished in the most effective manner depending on the patient's condition. During the treatment, a number of complicating factors such as age, malignancy, impaired liver, pus leakage from the peritoneum, hypertension and diabetes were handled with a mechanistic and professional approach. This case report therefore serves as a form of medical education on the successful handling of a complicated case with a number of comorbidities.
A deep point of concern was that the abdominal leakage had not yet settled when the patient was discharged. The high white blood cell count is also a complication in this patient: the elevated white blood cell value did not improve from the day of admission until the day of discharge. It can therefore be perceived that the present case is at high risk of developing sepsis. Based on previous studies, advanced age, comorbidity, the degree of organ dysfunction, inability to achieve adequate debridement or source control, a low albumin level, poor nutritional status and the presence of malignancy all increase the rate of treatment failure [11] [Tables 1-3]. The patient was counseled to attend the clinic every day for drainage and to always keep his body clean and healthy.
Foxf2: A Novel Locus for Anterior Segment Dysgenesis Adjacent to the Foxc1 Gene Anterior segment dysgenesis (ASD) is characterised by an abnormal migration of neural crest cells or an aberrant differentiation of the mesenchymal cells during the formation of the eye's anterior segment. These abnormalities result in multiple tissue defects affecting the iris, cornea and drainage structures of the iridocorneal angle including the ciliary body, trabecular meshwork and Schlemm's canal. In some cases, abnormal ASD development leads to glaucoma, which is usually associated with increased intraocular pressure. Haploinsufficiency through mutation or chromosomal deletion of the human FOXC1 transcription factor gene or duplications of the 6p25 region is associated with a spectrum of ocular abnormalities including ASD. However, mapping data and phenotype analysis of human deletions suggests that an additional locus for this condition may be present in the same chromosomal region as FOXC1. DHPLC screening of ENU mutagenised mouse archival tissue revealed five novel mouse Foxf2 mutations. Re-derivation of one of these (the Foxf2 W174R mouse lineage) resulted in heterozygote mice that exhibited thinning of the iris stroma, hyperplasia of the trabecular meshwork, small or absent Schlemm's canal and a reduction in the iridocorneal angle. Homozygous E18.5 mice showed absence of ciliary body projections, demonstrating a critical role for Foxf2 in the developing eye. These data provide evidence that the Foxf2 gene, separated from Foxc1 by less than 70 kb of genomic sequence (250 kb in human DNA), may explain human abnormalities in some cases of ASD where FOXC1 has been excluded genetically. Introduction Anterior segment dysgenesis covers a spectrum of disorders affecting the iris, cornea, trabecular meshwork and Schlemm's canal of the eye, which can result in abnormal aqueous humor drainage from the eye leading to raised intraocular pressure and glaucoma [1]. These abnormalities result from a primary defect in the migration and differentiation of neural crest cells that contribute to the development of the anterior segment structures [2]. Malformation of tissue specifically at the iridocorneal angle (iridogoniodysgenesis anomaly) or in the anterior stroma of the iris -contribute to the glaucoma phenotype [3,4]. Anterior segment dysgenesis (ASD) phenotypes are inherited as autosomal dominant traits with variable expressivity and incomplete penetrance, pointing to a complex etiology [5,6]. Nine different human genes have been associated with ASD or congenital glaucoma including FOXC1, PITX2, PITX3, FOXE3, PAX6, MAF, CYP1B1 and LMX1B. Mutations in the FOXC1 gene [7], or dosage effects due to deletions [8] or duplications [9,10] in the 6p25 region that surrounds FOXC1 can all cause iridogoniodysgenesis; as can mutations in the PITX2/RIEG1 gene [11]. Patients with FOXC1 mutations have a milder average prognosis for glaucoma development than do patients with any one of the known PITX2 mutations [12]. One common link between these genes, other than their expression in the neural crest cells of the periocular mesenchyme [13,14]; is that their upregulation can be triggered by Tgfb2 activity. Inactivation of this growth factor in mouse neural crest cells results in malformed trabecular meshwork, ciliary body and corneal endothelium cells [15]. 
Genetic evidence suggests that other genes near FOXC1 may also be involved in the underlying etiology of iridogoniodysgenesis and other eye abnormalities associated with glaucoma. For example, deletion of 6p24-p25 proximal to the FOXC1 locus causes anterior segment abnormalities [16,17,18]. Recombination mapping in families linked to 6p25 excluded FOXC1 as the causative gene [19]. Furthermore, a patient with an unbalanced translocation between 6p25 and 4p14 was disomic for FOXC1 but may have been monosomic for FOXF2 [20]. To investigate whether the nearby Foxf2 gene could be involved in anterior segment development and dysgenesis, we took advantage of an ENU mutagenised DNA archive [21,22] that allowed recovery of identified Foxf2 mutant lineages. We describe the genetic analysis of an identified Foxf2 mutation and the phenotypic features of the affected animals. These analyses suggest that Foxf2 is essential for normal anterior segment development, and that the FOXF2 gene should be considered as an additional candidate for anterior segment dysgenesis in humans.

Identification of Foxf2 sequence variants from archival DNA
Archival DNA from tail biopsies of the F1 progeny of mice that had undergone ENU mutagenesis was screened by DHPLC analysis, followed by sequencing of samples that produced heteroduplexes. This protocol identified 5 sequence variants in the Foxf2 genomic DNA (Table 1). Two base changes did not alter the amino acid sequence and are therefore silent variants. The individual mouse GSK 14H3 carried a T→A transversion at position 821 of the Foxf2 transcript (Figure 1A). This change results in a W174R amino acid substitution in the forkhead DNA binding domain of the protein. In mouse MRC 18C1, a G→T transversion at position 1535 of the Foxf2 transcript resulted in a conservative V412F amino acid substitution in the third sub-region of the AD2 transactivation domain [23]. An A→G transition was identified in mouse MRC 31H8 at the third base of the intron. The six-base region following the end of exons is generally highly conserved between eukaryotic 5′ splice donors, but this third base is the least conserved of these positions. In an analysis of intron–exon boundaries within 1446 genes, 35% of splice sites contain an adenosine at this position and 60% a guanosine, whereas all of the other positions showed much greater levels of conservation [24], so interference with normal splicing could be considered unlikely. However, 106 disease-associated A→G splice site mutations at the equivalent position (IVS+3) in the donor regions of 79 genes are present in the human gene mutation database (HGMD) [25]. Thus, the possibility remains that this mutation could result in aberrant splicing. The Foxf2 mutation rate within the ENU archives that was previously determined during the discovery of the Foxf2 W174R mutation and one of the silent mutations [22] can now be updated to 5 mutations in 1340 bp of 7990 individuals.

Recovery of the Foxf2 W174R mouse lineage
Analysis of inter-species conservation, the physico-chemical implications of the amino acid substitutions and the position of the mutations in the protein structure suggested that Foxf2 W174R was the mutation most likely to disrupt the function of the gene product. The tryptophan residue is conserved in all genes with a forkhead domain (Figure 1B) and occurs within a β-sheet structure. This mouse line was therefore re-derived for further examination.
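At the codon level, a single T→A substitution can convert a tryptophan (W) codon into an arginine (R) codon, which is consistent with the W174R change described above. The short sketch below illustrates this with Biopython; the assumption that the substitution hits the first base of a TGG codon is ours for illustration, since the paper only reports the transcript position and the protein change.

```python
# Illustrative only: a T>A change turning a tryptophan codon into an arginine
# codon, matching the W174R substitution. Codon position is assumed, not taken
# from the paper.
from Bio.Seq import Seq

wildtype_codon = Seq("TGG")   # tryptophan (W)
mutant_codon = Seq("AGG")     # assumed T>A at the first codon position -> arginine (R)

print(wildtype_codon.translate())  # W
print(mutant_codon.translate())    # R

def classify(ref: str, alt: str) -> str:
    """Label a single-nucleotide change as a transition or a transversion."""
    purines = {"A", "G"}
    same_class = (ref in purines) == (alt in purines)
    return "transition" if same_class else "transversion"

print(classify("T", "A"))  # transversion, as reported for mouse GSK 14H3
print(classify("A", "G"))  # transition, as reported for mouse MRC 31H8
```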
Homozygous mutants die within 14 days of birth and in 16 individuals, none showed evidence of malformation in either the primary or secondary palate. This is in contrast to earlier findings in the homozygous Foxf2 knockout mice which die within 18 hours with cleft palates and gas distended guts [26]. Foxf2 W174R homozygotes appear normal at birth but fail to thrive and by 3 days are noticeably smaller than their wildtype littermates ( Figure 2). As in the knockout, microscopic analysis did not reveal any lung defects despite the gene's intense expression in the lung [23] which, in common with the eye but not any of the other tissues that express Foxf2, continues to express the gene into adulthood [27]. Heterozygous mice appear to thrive normally and are fertile, as was the case for knockout mice. Foxf2 W174R eye phenotype analysis The eyes from ten Foxf2 W174R heterozygous mice that were 45 days of age were examined by light microscopy. The iris stroma showed irregular thinning of the tissue (compared to wildtype (Figure 3)) and a loss of structural organization. A number of unusual features were observed in the irido-corneal angle of all mice analysed ( Figure 4). The canal of Schlemm was smaller in most of the mice (7/10) and was not seen at all in others (2/10); the trabecular meshwork showed signs of hypoplasticity (7/10); one individual had a hypoplastic ciliary muscle. In some mice the angle between the cornea and iris was significantly reduced (6/10) and in one individual the two tissues were adherent. The phenotype variability that was seen between different animals was also apparent between the eyes of individuals, although to a lesser degree. This type of variation is also seen in Foxc1 heterozygous mice [28] as well as in human disease [29] and may be dependent on genetic background. Although this variability could be attributed to genetic modifiers it is also likely to be influenced by the presence of normal and abnormal tissue, probably reflecting stochastic events in which the spatiotemporal regulation of Foxf2 downstream targets is critical to anterior segment development. Nevertheless, all mice exhibited two or more defects. Histological analysis showed no signs of damage to the cornea, optic nerve or retinal nerve fibres at 45 days of age (data not shown). To investigate the effect of the W174R mutation in older mice (6 months), the retina, cornea and optic nerve of heterozygous mice were examined to determine if there was any apparent glaucomatous damage. Histological analysis of 18 mice showed a range of anterior segment defects as previously seen in younger mice. In addition two mice appeared to have bulging eyes that can be associated with raised intraocular pressure. However, on histological investigation there appeared to be extraneous amorphous tissue between the retina and the lens. Histological analysis in the majority of Foxf2 W174R heterozygous mice (16/18) revealed no substantial damage to the optic nerve, retina or cornea ( Figure 5). In two mice there was swelling of the optic nerve, which disrupted the outer nuclear layer of the retina ( Figure 5B&C). In mice that were 12 months of age, the optic nerve appeared to be normal because there was no optic nerve cupping as would be expected at this age if glaucomatous damage had occurred [30]. To investigate whether homozygote Foxf2 W174R embryos displayed iridocorneal defects, E18.5 embryos were examined by histology. 
Evagination of tissue from the anterior optic cup begins at E14 to form the future iris and ciliary body. At E18.5 finger-like projections of tissue forming the ciliary body processes are clearly visible in wildtype littermates ( Figure 6A). However, there was no evidence of tissue evagination in the homozygote embryos ( Figure 6B). Heterozygous mice at this stage appear indistinguishable from wildtype mice. Mouse subjects were from several divergent lineages that had been outcrossed to between G5 and G8 from the single mutagenised founder. This meant that the likelihood that the observed phenotype resulted from mutations in ASD associated genes on other chromosomes was negligible. However, due to the close genetic linkage of Foxc1 to Foxf2 and the similarity between the Foxf2 W174R and Foxc1 mutant and knockout eye phenotypes, it was important to ensure that no mutations in Foxc1 were responsible for the observed phenotype. The Foxc1 coding sequence of the Foxf2 W174R mouse was therefore sequenced. No differences between this sequence in the Foxf2 W174R mouse and the Foxc1 mouse reference sequence were present. Discussion Chromosome 6p25 is a major locus for anterior segment dysgenesis (ASD). Previous reports of cytogenetic abnormalities are consistent with the notion that the eye is exquisitely sensitive to both reduced and increased dosage in this chromosomal region. Although FOXC1 dosage is a major contributor to eye defects localised to this region, we now provide evidence that Foxf2 is a novel locus for anterior segment dysgenesis. Heterozygous mutation of the forkhead binding domain of the Foxf2 gene is associated with anterior segment defects in the iridocorneal angle of mice, whereas homozygous defects are lethal. At E18.5 the development of the ciliary body is defective, suggesting that Foxf2 is essential for normal ciliary body formation. These data support the role of Foxf2 in normal anterior segment development. Data from the characterisation of a 200 kb deletion located 1.2 Mb upstream of FOXC1 [31] suggested that mutations could induce a phenotype via long-range effects. It is therefore feasible that the observed phenotype in Foxf2W174R individuals could be the result of a mutation within a Foxc1 regulatory region. However, The domain structure is shown as described for mouse [23] including two activation domains at the 59 end, but overlayed by the activation domain structure that was described for the human gene [49] other evidence to support the involvement of Foxf2 in anterior segment dysgenesis, including the patterning of its ocular expression [27,32], the high level of conservation and physicochemical changes of the mutagenised amino acid and the absence of Foxc1 coding mutations; in combination with the observed physical phenotype -all contribute towards a greatly strengthened candidacy of Foxf2. Previous studies have shown that targeted deletion of Foxf2 caused palate malformations and an abnormal tongue [26]. Analysis of Foxf2 knockout mice subsequently revealed megacolon, colorectal muscle hypoplasia and agangliosis [33]. However, the colon was not analysed in the present study and therefore the effects of this mutation on the gut would seem like a promising focus of future investigations into the effects of the Foxf2 W174R mutation. Foxf2 is expressed in the absence of its closest paralogue (Foxf1) in the CNS, ear, and limb buds as well as the eye [27] so these systems are also worth prioritising in the search for other potential Foxf2-associated phenotypes. 
The effect on eye development was not examined in previous analyses of Foxf2 knockouts [33]. Interestingly however, one study did demonstrate normal Foxf2 expression in the periocular mesenchyme of the developing eye at about E12.5 [32]. Furthermore, in situ hybridisation established that there was continued Foxf2 expression from E13 through to adult stages [27]. High levels of Foxf2 expression at E17 were observed in the developing ciliary body and choroid. These data support the abnormal morphological finding in the developing ciliary body in homozygous Foxf2 W174R embryos. The difference in phenotype that was identified between targeted knockout and homozygous missense mutation could suggest that Foxf2 W174R is a hypomorphic allele, However it is also possible that the differences are due to genetic background and that the mutation causes a complete loss of function. Molecular modelling of FOXC1 in a previous study revealed that a tryptophan residue (Trp152) -the direct homologue of Trp174 in Foxf2, is one of nine critical intramolecular interaction residues that maintain structural integrity of the forkhead winged helix structure [34]. It therefore seems likely that disruption of Trp174 in Foxf2 would lead to protein instability. Another example of an unstable forkhead transcription factor with a mutation in the DNA binding region is the I87M variant of FOXC1 [34]. Cos7 cells transfected with this mutant plasmid demonstrated markedly reduced levels of the protein at only 5% of levels observed for the wildtype, but the molecule retained its nuclear localisation function. A drastic reduction but not complete destruction of protein functionality, could explain the reduced severity of phenotype that is observed in association with the Foxf2 W174R mutation and would be consistent with the hypothesis that haploinsufficiency plays a key role in the pathogenesis of Fox associated anterior segment anomalies. The ocular abnormalities found in Foxf2 W174R mice are variable in eyes from different individuals, recapitulating the variable expressivity observed in human patients with ASD. Schlemm's canal was often smaller than typically seen in wild-type eyes and trabecular meshwork was either missing or was underdeveloped, suggesting abnormal migration of mesenchymal cells into the iridocorneal angle. The ciliary body malformations may affect aqueous humor production and secretion of antioxidant proteins into the aqueous humor [35]. Aqueous humor is drained through the trabecular meshwork, therefore alterations in aqueous humor homeostasis are likely to occur when these tissues are malformed and could contribute to changes in intraocular pressure [36]. The iridocorneal abnormalities observed in the FoxF2 W174R mice are very reminiscent of those seen in mice that are heterozygous for Foxc1 or Foxc2 mutations [28]. Since Foxc1, Foxc2 and Foxf2 are all expressed in the developing periocular mesenchyme, this suggests that this tissue is particularly sensitive to gene dosage [9,37]. Despite the high level of conservation in their DNA binding domain, forkhead transcription factors are an extraordinarily diverse group of genes with roles as varied as development, homeostasis, stress response and cell cycle control [38]. Intriguingly, mutations in a number of forkhead genes can result in a variety of disorders affecting the eye. Mutations of the FOXE3 gene affect lens development and can be inherited as either an autosomal dominant or recessive trait [39]. 
The more severe recessive trait is associated with bilateral microphthalmia, aphakia, corneal defects and glaucoma, whereas the milder autosomal dominant trait is associated with iris hypoplasia, Peters' anomaly, and isolated cataract. Mutations of the FOXC2 gene cause lymphedema-distichiasis syndrome [40] -characterised by double rows of eyelashes, ptosis, photophobia and anterior segment anomalies reminiscent of those caused by FOXC1 [7]. FOXL2 mutations cause blepharophimosis-ptosis-epicanthus inversus syndrome (a complex eyelid malformation) [41] and in some patients lacrimal duct anomalies, amblyopia, strabismus, and refractive errors. In addition, expression of three other forkhead genes; Foxg1 [42], Foxd1 [43] and Foxn4 [44], has also been shown in the developing retina. It is clear that forkhead transcription factors play a critical role in the developing eye, and now the Foxf2 gene can be added to this growing list. The 6p25 region contains a forkhead cluster (FOXC1/FOXF2/ FOXQ1) in which FOXC1 is separated from FOXF2 by less than 250 kb of genomic DNA, and FOXQ1 is 470 kb proximal of FOXC1. Because duplication and deletions of this region in human disease often contain more than one of these genes, confirmation of pathogenicity has relied on specific mutations in animal models. Although gene knockouts [28,45] and naturally occurring mutations [14] recapitulate FOXC1 deletions or mutations, no model carrying an additional functional copy of FOXC1 has been developed to explore gain-of function effects seen in interstitial gene duplication events. Since our data provides evidence that Foxf2 in mice is also critically involved in anterior segment development, then duplications or deletions containing both FOXF2 and FOXC1 in patients may contribute to the phenotype. This is supported by clinical observations where interstitial duplication of FOXC1 alone causes an iris hypoplasia phenotype, whereas duplications containing both genes (plus several others depending on the extent of the duplication) cause microcornea and ptosis, without iris hypoplasia [46]. This suggests that different combinations of transcription factor gene dosage within cytogenetic abnormalities influence how eye development is affected. DHPLC mutation scanning We used ENU archival DNA that was generated as previously described [21,22] as a template for Foxf2 mutation scanning using DHPLC [47]. DNA concentrations were determined with a Spectramax 190 spectrophotometer (Molecular Devices). Five of six overlapping sets of primers were used for amplification of Foxf2 (Table 2). For each PCR reaction, 10 ul of pooled archive DNA (4 samples) was added at a concentration of 5 ng/ml. Following amplification of the DHPLC targets, thermal cycling using the WAVE TM DNA Fragment Analysis System (Transgenomic, Cheshire, UK), was used to denature and then re-anneal PCR products with the following parameters: 95uC for 4 min, 45 cycles of 93.5uC for 1 min with a reduction of 1.5uC per cycle down to 25uC. Sequencing PCR amplification products from pooled DNA that exhibited evidence of heteroduplexes in their DHPLC profiles, were individually PCR-amplified and screened by DHPLC. The single DNA heteroduplex that was identified was sequenced on both strands to determine the mutation. PCR products were purified using a QIAquick PCR purification kit (Qiagen) and sequencing was carried out using BigDye 3.1 terminator chemistry on an ABI prism 377 DNA sequencer. 
Sequences were aligned and compared with consensus data obtained from the mouse genome database (http://genome.ucsc.edu).

Mutant mouse recovery and genotyping
Recovery of the mutant mouse lineage was achieved by in vitro fertilisation with archival sperm and C3H/HeH females using standard methodology. Genotyping of the Foxf2 W174R mice was performed by SfcI (which cuts the mutant locus) and BsrI (wildtype locus) restriction digestion of the exon1c PCR product to distinguish Foxf2 W174R heterozygotes from homozygotes and wildtypes. Because the C3H mice carry a Pde6b rd1 retinal mutation affecting the eye, the identified Foxf2 W174R mice were outcrossed to C57BL/6 mice for 2 generations. To exclude rd1 carriers, genotyping was performed with the following two primers, F: 5′-ACCTGAGCTCACAGAAAGGC-3′ and R: 5′-GCTTCTAGCTGGGCAAAGTG-3′, as described previously [48]. The mutation was detected by DdeI restriction digest (which cuts the Pde6b rd1 mutant locus) and SnaBI (wildtype locus), thus allowing differentiation between Pde6b rd1 heterozygotes, homozygotes and wildtypes. All subsequent analyses were carried out on mice with only the Foxf2 mutation. Primers for sequencing the Foxc1 gene are in Table 3. All animal work was carried out in accordance with the UK Animals (Scientific Procedures) Act, 1986. The Harwell ethical committee approved the study and the work was performed under UK Home Office project licence numbers 30/1517, 30/2049 and 30/2228.

Histological analysis
Eyes were enucleated and placed in 50% Karnovsky's fixative for 45 minutes. Eyes were then washed 3 × 30 min in PBS, dehydrated through a graded ethanol series (50%, 70%, 90% and
A geometric interpretation of the multiplication of complex numbers In complex analysis courses, it is common to use physical interpretations as a didactic tool for teaching complex numbers. In the case of operations between complex numbers, the geometric interpretation of addition and subtraction is well known; however, many authors avoid the interpretation of the multiplication of complex numbers. In this paper, using the physical concepts of rotation and scaling, we will explain the multiplication of complex numbers through visualization in the Argand plane. In addition, we use visual representations in order to obtain proofs without words for some identities. Introduction Teaching mathematics and physics in engineering represents an opportunity to improve didactic methods; mathematical competencies must be developed for the student to perform applications in their study, keeping the mathematical rigor. There are many methods to facilitate teaching and learning of advanced mathematics courses, history as a didactic resource and didactic modeling are essential for the understanding of the origin of mathematical and physical concepts and its relevance in real life [1]; in the case of complex analysis, geometric interpretations of complex numbers are the main didactic method, but specific contents are not fully utilized [2]. Here we use visualization as a didactic tool, to explain the physical steps of the multiplication of complex numbers. Following [3], visualization is the product and the process of creation, interpretation and reflection upon pictures, images and diagrams. We verify that in the course of advanced mathematics or complex analysis, in the majority of the cases, the study texts present complex numbers as objects that have an algebraic and geometric representation [4], next they present the geometric interpretation for sum and subtract of complex numbers. However, when defining the complex number product, they focus on numerical aspects and avoid the geometric interpretation [5,6]. Comparing the geometry of the real number line and the Argand diagram creates conceptual connections across various mathematical objects; however, researches that study geometrical representations for teaching complex numbers avoid the multiplication of complex numbers [7]. For this reason, it is important to review the texts that we usually find as sources of associated research, which show a disarticulation of the product of complex numbers and their geometric representation [8]. In this paper, we present a didactic proposal for teaching the multiplication of complex numbers through rotations and scaling on the complex plane; also, we present proofs without words for some identities, using Geogebra as a visualization software tool. Similarly, considering complex numbers as pairs of real numbers, the Equation (2) shows the multiplication of complex numbers. For any couple (a, b) it is equivalent to the complex number a + bi, cf. [12]. Complex numbers can be represented as points on the complex plane, also called Argand diagram [13], where the complex number z = a + bi is associated with the point (a, b). Thus, Re(z) = a is associated with points on the x-axis and Im(z) = b correspond to points on the y-axis; in this context, x-axis and y-axis are called real and imaginary axis respectively. The Figure 1, shows the geometric interpretation of the sum of complex numbers, the Figure 2 shows the geometric interpretation of substraction. 
Subtraction of complex numbers can be expressed in terms of a sum, z1 − z2 = z1 + (−z2). From Figure 1, we may deduce the triangle inequality |z1 + z2| ≤ |z1| + |z2| for each z1, z2 ∈ C. Similar to the direction of a vector, we define the argument of a complex number z, denoted by arg(z); we then define the principal value of the argument, Arg(z), between −π and π. The complex conjugate of z = a + bi is defined by z̄ = a − bi; from the geometric interpretation of z and z̄, it is easy to check that Arg(z̄) = −Arg(z). From Figure 1 and Figure 2 we may deduce the parallelogram law.

Multiplication of complex numbers through the Argand diagram
Since the product of complex numbers is not related to scalar multiplication, it is not easy to obtain a geometric interpretation. Here we present the following process for multiplication through the geometrical representation of complex numbers. (i) Perform a rotation of the complex plane such that the complex number z1 lies on the x-axis of the rotated plane. (ii) Scale the rotated plane such that z1 is (1, 0) in that plane. (iii) Locate z2 in the rotated complex plane and mark this point. (iv) The marked point, viewed from the original complex plane, represents z1z2. For instance, we calculate z1z2 with z1 = 1 + i and z2 = 1 − 3i following the described process. We start by performing a rotation of arg(z1) in the complex plane. We then perform a scaling of the rotated complex plane in such a way that z1 = 1 + i represents (1, 0). In Figure 3, we see the rotated and scaled complex plane in blue. Starting from the origin, in Figure 4 we locate z2 = 1 − 3i on the rotated complex plane. In Figure 5, we locate the obtained point in the initial complex plane; therefore, we conclude that z1z2 = (1 + i)(1 − 3i) = 4 − 2i. Multiplying by 1 requires neither rotation nor scaling of the plane, thus 1w = w for each w ∈ C; multiplying z by i^n, with n a positive integer, means a counterclockwise rotation of (π/2)n. Let us verify the identity zz̄ = |z|² considering z = 1 + i; in Figure 6 we use the rotated plane presented in Figure 3. Let z and w be complex numbers with z = 1 + i and w = 2 − 2i; since w = 2z̄ we get zw = 2zz̄ = 2|z|², and from the above observation we may deduce |αz| = |α||z|. In Figure 7 we perform the rotation and the scaling of the complex plane by the complex number z = 1 + i; in Figure 8 we locate w in the rotated plane; thus, by Figure 9 we conclude that (1 + i)(2 − 2i) = 4. In Figure 10 we perform the rotation and the scaling of the complex plane by the complex number w = 2 − 2i; in Figure 11 we locate z in the rotated plane, thus obtaining the result of wz in Figure 12. Since Figure 9 and Figure 12 show the same result, we verify the commutative law zw = wz. Since zw is calculated by rotating the complex plane by arg(z) and locating w in the rotated plane, we may deduce that arg(zw) = arg(z) + arg(w) and |zw| = |z||w|; hence we may introduce the polar form of complex numbers.

Conclusions
The use of physical concepts facilitates the visualization of the multiplication of complex numbers, thus turning an abstract object into one with a representation in the complex plane. To explain the visualization process, we follow steps described through physical concepts such as rotation, scaling, and locating. Proofs without words for some identities were presented through visualization; they can be considered less elegant than the formal proofs, but for students they are easier to understand than formal proofs, owing to the chance to perform rotations and scaling with software tools.
The didactic proposal that we have presented is a useful method for teaching complex numbers through visual representations of their operations. It was carried out at Universidad ECCI, generating better results than the formal proofs alone; students used smartphones and other devices to interact with an app developed in GeoGebra, investigating the identity z(w + v) = zw + zv for v, w, z ∈ C.
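The rotation-and-scaling account of multiplication, together with the distributive identity mentioned above, can also be checked numerically. The sketch below uses Python's built-in complex type and the cmath module rather than the GeoGebra app from the paper; the value of v is an arbitrary choice of ours, while the other numbers are the worked examples from the figures.

```python
# Numerical check of the geometric multiplication rules discussed above.
# Illustrative sketch only; not the GeoGebra construction used in the paper.
import cmath
import math

z1, z2 = 1 + 1j, 1 - 3j

# Multiplication as "rotate by arg(z1), scale by |z1|" applied to z2
product = z1 * z2
rotated_scaled = abs(z1) * z2 * cmath.exp(1j * cmath.phase(z1))
assert cmath.isclose(product, rotated_scaled)
print(product)                       # (4-2j), i.e. z1*z2 = 4 - 2i

# Worked examples from the figures
z, w = 1 + 1j, 2 - 2j
print(z * w)                         # (4+0j): (1+i)(2-2i) = 4
print(z * z.conjugate(), abs(z)**2)  # z*conj(z) = |z|^2 = 2 (up to floating point)

# arg(zw) = arg(z) + arg(w) (mod 2*pi) and |zw| = |z||w|
lhs, rhs = cmath.phase(z * w), cmath.phase(z) + cmath.phase(w)
assert math.isclose(math.cos(lhs), math.cos(rhs))
assert math.isclose(math.sin(lhs), math.sin(rhs), abs_tol=1e-12)
assert math.isclose(abs(z * w), abs(z) * abs(w))

# Distributivity z(w + v) = zw + zv, the identity the students explore
v = -2 + 5j   # arbitrary illustrative value
assert cmath.isclose(z * (w + v), z * w + z * v)
```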
Quantitative Proteomic Analysis Provides Novel Insights into Cold Stress Responses in Petunia Seedlings Low temperature is a major adverse environmental factor that impairs petunia growth and development. To better understand the molecular mechanisms of cold stress adaptation of petunia plants, a quantitative proteomic analysis using iTRAQ technology was performed to detect the effects of cold stress on protein expression profiles in petunia seedlings which had been subjected to 2°C for 5 days. Of the 2430 proteins whose levels were quantitated, a total of 117 proteins were discovered to be differentially expressed under low temperature stress in comparison to unstressed controls. As an initial study, 44 proteins including well known and novel cold-responsive proteins were successfully annotated. By integrating the results of two independent Gene Ontology (GO) enrichment analyses, seven common GO terms were found of which “oxidation-reduction process” was the most notable for the cold-responsive proteins. By using the subcellular localization tool Plant-mPLoc predictor, as much as 40.2% of the cold-responsive protein group was found to be located within chloroplasts, suggesting that the chloroplast proteome is particularly affected by cold stress. Gene expression analyses of 11 cold-responsive proteins by real time PCR demonstrated that the mRNA levels were not strongly correlated with the respective protein levels. Further activity assay of anti-oxidative enzymes showed different alterations in cold treated petunia seedlings. Our investigation has highlighted the role of antioxidation mechanisms and also epigenetic factors in the regulation of cold stress responses. Our work has provided novel insights into the plant response to cold stress and should facilitate further studies regarding the molecular mechanisms which determine how plant cells cope with environmental perturbation. The data have been deposited to the ProteomeXchange with identifier PXD002189. INTRODUCTION Garden petunias (Petunia hybrida) are very popular bedding plants around the world. This popularity is due, at least in part, to recent breeding achievements which have combined novel characteristics such as prostrate growth habits with increased robustness in the face of environmental stresses (Griesbach, 2006). However, P. hybrida is native to warm habitats, originating from South America. Low temperatures are a crucial limiting factor for the horticultural success of petunia varieties, impacting on their geographical distribution and the length of their display period. Consequently, in northern climates including those of the United States of America, Europe and China, petunia growth is necessarily restricted to environmentally-controlled greenhouses during the late winter and early spring months (Warner and Walworth, 2010), and this inevitably results in considerable expenses for labor and heating. Therefore, a prime target for breeding efforts is the increased cold tolerance of petunia plants. In order to develop sustainable petunia plants cultivated under low temperature conditions, the molecular response of petunia to cold stress needs to be fully understood. This knowledge should identify candidate genes for direct gene manipulation or conventional breeding strategies that will enhance cold hardiness. 
Groups of differentially expressed regulators of the petunia response at the transcriptional level have previously been described in the context of cold-stress responses, indicating the validity of the transcriptome approach in obtaining meaningful biological information (Li et al., 2015). Nevertheless, a range of studies have demonstrated that transcript levels do not invariably correlate well with the levels of the corresponding proteins (Chen et al., 2002;Tian et al., 2004). This poor correlation is primarily due to the effects of post-translational modifications including ubiquitinylation, phosphorylation, glucosylation and sumoylation (Mann and Jensen, 2003), many of which are pivotal for the regulation of protein function. Therefore, it is necessary to study at the protein level the cellular changes in petunia plants under low temperature stress and, thus, complement the transcriptomic studies in order to further reveal the molecular mechanisms underlying the cellular response to adverse environmental perturbations. After decades of relatively slow progress, partially because of the greater difficulties encountered in sample preparation of plant tissues, the pace of research into the analysis of protein abundance in plants is beginning to quicken, and this can be attributed to various advancements in proteomic technologies (Thelen and Peck, 2007;Jorrín-Novo et al., 2009). In particular, the translational profiling of diverse plant species under cold stress has attracted much research interest, leading to the identification of differentially expressed proteins (DEPs) which have significantly improved our understanding of the cold response. For example, a proteome study was performed to analyze the cold-stress response of Arabidopsis plants by the application of the two-dimensional electrophoresis (2-DE) DIGE technique. The results revealed that, with the approach of proteome, a comprehensive set of proteins related with cellular responses to cold stress could be detected (Amme et al., 2006). Temporal changes in the profile of total proteins in rice leaves after a chilling treatment, and their subsequent recovery, were analyzed based on a 2-D gel electrophoresis technique. From this, 85 DEPs, including many novel cold-responsive proteins were identified; further classification demonstrated that the largest functional category was proteins involved in photosynthesis (Yan et al., 2006). Thus, the study of the influence of cold stress on the proteomes of model plants, cereal crops, woody plants, and other important crop plants, is an actively emerging research area, with publications available for the proteome of Arabidopsis (Bae et al., 2003;Amme et al., 2006), rice (Yan et al., 2006;Hashimoto and Komatsu, 2007), spring wheat (Rinalducci et al., 2011), poplar (Renaut et al., 2004), peach (Renaut et al., 2008), pea (Dumont et al., 2011), soybean (Cheng et al., 2010), and chicory (Degand et al., 2009). By contrast, investigations into petunia responses to cold at the protein level are still lacking, thereby restricting our capacity to fully dissect the molecular mechanisms associated with this species' cold stress response. Traditional 2-DE techniques have been used as the core method in studies to detect the protein expression patterns in diverse plant species. However, in view of certain limitations of these techniques, a number of higher throughput alternatives with improved sensitivity, linearity and reproducibility have been developed and have been applied in plant research. 
Isobaric tags for relative and absolute quantitation (iTRAQ) coupled to liquid chromatography-quadrupole mass spectrometry (LC-MS/MS) describes a recently developed technique which provides a fast proteomic analytical method for the identification and quantification of expressed proteins with a high degree of efficiency and accuracy (Evans et al., 2012), and is currently being widely used for the quantitative comparative analysis of plant proteomes (Owiti et al., 2011;Zheng et al., 2014). In this study, in order to identify the candidate proteins that are intimately associated with the cold response of petunia, we applied a quantitative proteomic approach combining iTRAQ with LC-MS/MS to detect the DEPs between cold-stressed petunia seedlings and the unstressed controls. Probable biological functions and potential effects of these proteins on cold tolerance are discussed with the aim of determining their roles in cold resistance in petunia. This analysis developed a comprehensive inventory of petunia cold-responsive proteins and highlighted the antioxidation mechanism as well as epigenetic factors in the regulation of the cold stress response. Plant Material, Growth Conditions, and Cold Treatment Petunia hybrida inbred line H has been previously described (Li et al., 2015). In vitro seedlings of line H were grown in plastic pots at 25 • C under long-day conditions (14/10 h light/dark cycle, 2000-2500 lux light intensity) in the laboratory's tissue culture room for 1 month. Plants of a uniform growth status at the developmental stage 4-5 pairs of true leaves were subsequently transferred to the cold-stress conditions (2 • C, 500-1000 lux light intensity). After 5 days of treatment, the stressed plants were harvested and immediately frozen in liquid nitrogen, and then held at −80 • C until required for further processing. Untreated plants (0 h cold stress) were used as controls. Four individual plants were harvested and pooled for each sample, and this collection was repeated four times to provide biological replicates. Protein Extraction and Digestion Seedlings from cold-treated plants and control plants were ground into powder with liquid nitrogen and suspended in a 10× volume of pre-cooled acetone (−20 • C) containing 10% (v/v) TCA. After thorough mixing, proteins were precipitated at −20 • C overnight. Proteins were then collected by centrifuging at 10,000 rpm (Eppendorf5430R; Eppendorf Ltd., Hamburg, Germany) at 4 • C for 45 min. The supernatant was carefully removed, and the protein pellets were washed twice with cold acetone. Protein pellets were dried by lyophilization and then extracted using a 10× volume of SDT buffer composed of 4% (v/v) sodium dodecyl sulfate (SDS) (Bio-Rad, Hercules, CA, USA), 1 mM dithiothreitol (DTT) (Bio-Rad, Hercules, CA, USA) and 150 mM TrisHCl (pH 8.0), with incubation in a boiling water bath for 5 min. Protein extracts were subsequently dispersed by ultrasonication (80 w: 10 times for 10 s each, with 15 s intervals in between). After heating in a boiling water bath for 5 min, the final protein pellets were obtained by passing the extract through a filter tube (0.22 µm diameter). The resulting protein concentration was determined using BCA Protein Assay Kit (Pierce, Thermo Scientific, Rockford, IL, USA). For each sample, 300 µg of proteins were incorporated into 30 µL of STD buffer, composed of 4% (v/v) SDS, 100 mM DTT and 150 mM Tris-HCl (pH 8.0). 
Removal of DTT and other low-molecular-weight components was achieved by repetitive ultrafiltration using UA buffer composed of 8 M Urea (Bio-Rad, Hercules, CA, USA) and 150 mM TrisHCl (pH8.0). Subsequently, by adding 100 µL of 0.05 M iodoacetamide (IAA) (Bio-Rad, Hercules, CA, USA) in UA buffer, the samples were incubated in darkness for 20 min in order to block the reduced cysteine residues. To wash the filters we used 100 µL of UA buffer (three times), followed by 100 µL of DS buffer (50 mM triethylammonium bicarbonate pH 8.5) (two times). Finally, 2 µg trypsin (Promega, Madison, WI, USA) in 40 µL of DS buffer was used to digest the protein suspensions by incubation at 37 • C overnight, and the digested peptides were collected as a filtrate. An extinction coefficient of 1.1 (0.1% g/L solution, calculation based on the frequency of tryptophan and tyrosine in vertebrate proteins) was used to evaluate the peptide content by UV light spectral density at 280 nm. iTRAQ Labeling and Peptide Fractionation Peptide samples were labeled with 8-plex iTRAQ reagents (Applied Biosysterms) according to the manufacturer's protocol. Four samples from cold-treated seedlings were labeled with reagent 113, 114, 115, and 116, respectively. Four control samples from untreated seedlings were labeled with reagent 117, 118, 119, and 121, respectively. iTRAQ labeled peptides were combined and further fractionated with the AKTA Purifier system (GE Healthcare) by strong cation exchange (SCX) chromatography. In brief, the dried peptide mixtures were reconstituted and acidified with buffer A (10 mM KH 2 PO 4 in 25% of ACN pH 3.0), then, loaded onto a polysulfethyl 4.6 ×100 mm column (5 µm, 200 Å) (PolyLCInc, Maryland, U.S.A.). The peptides were eluted with a gradient buffer B (10 mM KH 2 PO 4 , 500 mM KCl in 25% of ACN pH 3.0) (0-10% for 7 min, 10-20% for 10 min, 20-45% for 5 min, and 45-100% for 5 min) at a flow rate of 1 mL/min. The absorbance at 214 nm was monitored and a total of 10 final fractions were collected. Each final fraction was desalted on C18 cartridges (Sigma, Gillingham, UK) and concentrated by vacuum centrifugation. All samples were stored at −80 • C. LC-MS/MS Measurement The peptide mixtures were loaded onto a packed capillary tip (C18-reversed phase column with 15 cm long, 75 µm inner diameter) with RP-C18 5 µm resin, washed in buffer A (0.1% formic acid), and subsequently separated with a linear gradient of buffer B (0.1% formic acid and 84% acetonitrile) at a flow rate of 250 nL/min over 120 min: 0-100 min with 0-45% buffer B; 100−108 min with 45-100% buffer B; 108-120 min with 100% buffer B. The Q-Exactive (Thermo Finnigan, San Jose, CA, USA) mass spectrometer was used to acquire data in the positive ion mode, with a selected mass range of 300-1800 mass/charge (m/z). Survey scans were acquired at a resolution of 70,000 at m/z 200, and the resolution for HCD spectra was set as 17,500 at m/z 200; MS/MS data were acquired using a data-dependent "top10" method to capture the most abundant precursor ions. The normalized collision energy was 30 eV; the underfill ratio was defined as 0.1% on the Q-Exactive; and the dynamic exclusion duration was 40 s. Protein Identification and Quantification Protein identification and quantification were simultaneously performed with MASCOT 2.2 (Matrix Science, London, U.K.) 
embedded into Proteome Discoverer 1.4 (Thermo Electron, San Jose, CA, USA), searching against the Uniport database of combined protein sequences of solanaceae, Solanum lycopersicum and Solanum tuberosum (uniprot_solanaceae_108653_20130709.fasta, uniprot_Solanum lycopersicum_36345_20130710.fasta, uniprot_Solanum tuberosum_55352_20130710.fasta, downloaded from: http:// www.uniprot.org/) and the decoy database. Search parameters were set as follows: trypsin as the enzyme; monoisotopic mass; a permitted maximum of two missed cleavages; peptide mass tolerance at ±20 ppm and fragment mass tolerance at 0.1 Da. Lysine and N-term of peptides labeled by iTRAQ 8-plex and carbamidomethylation on cysteine were specified as fixed modifications, while variable modifications were defined as oxidation of methionine and iTRAQ 8-plex labeled tyrosine. False discovery rate (FDR) of both proteins and peptides identification was set as: FDR ≤ 1%. Protein identifications were supported by a minimum of one unique peptide identification. DEPs Identification, Annotation and Subcellular Localization Prediction The normalization of the ratios for the iTRAQ labels was performed according to the user's guide of the Proteome Discoverer (Version 1.3). The final ratios of proteins were normalized by the median average protein ratio of the equal mix of different labeled samples. iTRAQ ratios were log-transformed before being analyzed mathematically. Only proteins detected in all runs (every biological replicate) were included in the data set. To identify the DEPs, the "t.test" function in R program (http://www.r-project.org/) with default settings (alternative= "two.sided, " var.equal=FALSE) were used to calculate the P-values of the students' t-Test, and the P < 0.05 was applied. The higher average in cold stressed plant than control was labeled as up-regulated proteins, and the lower in treatment group was regarded as down-regulated. Differentially abundant proteins were further functionally annotated using Blast2Go. GO enrichment analysis was performed using the singular enrichment analysis (SEA) under agriGO toolkit (Du et al., 2010), and the Arabidopsis thaliana (TAIR9) as well as Solanum lycopersicum (Tomato Affymetrix array) were used as backgrounds in combination with Fisher's test and Yekutieli multiple-test with a threshold of FDR = 0.05. Using the tool of Plant-mPLoc predictor, prediction of subcellular localizations of DEPs was performed. Plant-mPLoc is becoming widely used for the prediction of plant protein subcellular localization as it has the capacity to deal with multiple-location proteins, which is beyond the capability of other existing predictors specialized for identifying plant protein subcellular localization (Chou and Shen, 2010). Transcriptional Validation by Real-Time PCR Analysis Total RNA was extracted from whole plantlets (taken from the same treatment samples as used for protein extraction) by using the EASYspin Plant RNA Mini kit according to the manufacturer's protocol (Aidlab, Beijing, China). RNA concentration and integrity estimation, reverse transcription, and real-time PCR were performed according to previous descriptions (Li et al., 2015). The primers were designed according to the corresponding nucleotide sequences of Petunia hybrida in GenBank. Gene-specific primers for real-time PCR analysis are presented in Table S1 in the Supplementary Material. 
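The DEP-calling step described above (log-transformed iTRAQ ratios, a two-sided t-test with unequal variances, P < 0.05, and direction assigned by comparing group means) can be sketched as follows. The authors used R's t.test; the sketch below is a Python equivalent, and the table layout and column names (CK1-CK4 for controls, E1-E4 for cold-treated seedlings, matching the labels quoted later for the relative quantification) are assumptions, as is the use of log2 since the base of the log transform is not stated.

```python
# A minimal Python sketch of the DEP-calling step, not the authors' script.
# Assumed input: one row per protein, normalized iTRAQ ratios in columns CK1-CK4 and E1-E4.
import numpy as np
import pandas as pd
from scipy import stats

CTRL = ["CK1", "CK2", "CK3", "CK4"]
COLD = ["E1", "E2", "E3", "E4"]

def call_deps(ratios: pd.DataFrame, alpha: float = 0.05) -> pd.DataFrame:
    """Welch's two-sided t-test per protein on log-transformed iTRAQ ratios."""
    data = ratios.dropna(subset=CTRL + COLD)          # keep proteins detected in every replicate
    log_r = np.log2(data[CTRL + COLD])                # ratios are log-transformed before testing
    _, p = stats.ttest_ind(log_r[COLD], log_r[CTRL],
                           axis=1, equal_var=False)   # equivalent to R t.test(var.equal=FALSE)
    result = data.copy()
    result["p_value"] = p
    result["direction"] = np.where(
        log_r[COLD].mean(axis=1) > log_r[CTRL].mean(axis=1), "up", "down")
    return result[result["p_value"] < alpha]          # P < 0.05 as the significance cut-off
```

Proteins with a higher average in the cold-stressed group are labeled "up" and the rest "down", mirroring the rule given in the text.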
Activity Assay of Anti-Oxidative Enzymes Three hundred milligram fresh leaves were frozen in liquid nitrogen and then ground in 3 ml solution containing 100 mM phosphate buffer (pH 7.8) and 1% (w/v) polyvinylpolypyrrolidone. The homogenate was centrifuged at 3500 rpm for 15 min, and the supernatant was collected for enzyme assays. All operations above (until analysis) were carried out at 4 • C. The enzyme activities of catalase (CAT), superoxide dismutase (SOD), and glutathione peroxidase (GPX) were determined by using the kits according to the manufacturer's protocol (Nanjing Jiancheng Institute of Biotechnology, China). Genome-Wide Proteomics Identification and Evaluation In order to investigate the proteomic changes associated with petunias exposed to a low temperature treatment (2 • C), iTRAQ analysis was conducted to compare the DEPs between the control and cold-treated plants. In this study, we used high accuracy LC-MS/MS to quantitatively detect and map proteins in the petunia seedlings. The protein concentration of samples was determined by BCA (Table S2). For the purposes of quality control, 20 µg protein aliquots from each sample were evaluated by SDS-PAGE analysis ( Figure S1). The abundance of digested peptides was quantified based on UV-absorption at 280 nm ( Table S3). The combined iTRAQ labeled peptides were fractionated by strong cation exchange (SCX) chromatography ( Figure S2). The mass spectrometry proteomic data of the present study have been deposited to the proteomics data repository -PRIDE Archive (http://www.ebi.ac.uk/pride). Project Accession: PXD002189; http://www.ebi.ac.uk/pride/archive/projects/PXD002189. A total of 8066 unique peptides (FDR<= 0.01) were obtained ( Table S4) and 2862 proteins were ultimately identified ( Table S5). Amongst all of the detected proteins, 2430 common proteins were detected in each replicate of all samples, and their relative quantifications (Table S6) were used for further analyses. The relative abundance levels within each group showed high degrees of positive correlation (P < 2.2E-16; Figure 1), thereby indicating that the overall experimental process and the quantification methods were reliable. By contrast, the genomewide protein abundance in each sample was highly variable ( Figure S3; Table S6), so illustrating the complexity of the regulatory picture in active cells. Thus, as expected, the predicted molecular weights and pIs of the various identified proteins also showed high degrees of variation (Table S6), with molecular weights ranging from 1.19 to 1445.78 kDa with a median of 37.72 kDa, and pIs ranging from 3.92 to 12.38 with a median of 6.8. Together, these results revealed an abundance of diverse proteins in the sampled petunia seedlings and confirmed the effectiveness of the high-throughput methodology used in this study. Identification, Functional Annotation and Subcellular Localization of DEPs Since the draft genome sequence of petunia was not publicly available at the time of this study, we used other closely related species that had already been sequenced, such as Solanum lycopersicum, in order to annotate all of the identified proteins. Statistical t-test analysis was used to identify the possible candidate proteins that are involved in the petunia cold stress response. Differential expression data are summarized in Table S6. 
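Before turning to the DEP counts, the consistency checks mentioned above (within-group replicate correlation, and the spread of molecular weights and pIs) can be illustrated with a short sketch. The column names (CK1-CK4, E1-E4, mw_kda, pI) are assumptions and do not reflect the actual supplementary file format.

```python
# Illustrative consistency checks corresponding to the correlation and MW/pI summaries above.
import pandas as pd

CTRL = ["CK1", "CK2", "CK3", "CK4"]
COLD = ["E1", "E2", "E3", "E4"]

def replicate_correlations(quant: pd.DataFrame) -> dict:
    """Pairwise Pearson correlations among biological replicates within each group."""
    return {"control": quant[CTRL].corr(method="pearson"),
            "cold": quant[COLD].corr(method="pearson")}

def protein_property_summary(annot: pd.DataFrame) -> pd.Series:
    """Median molecular weight (kDa) and isoelectric point over identified proteins."""
    return annot[["mw_kda", "pI"]].median()
```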
Of the 2430 proteins that were quantitated, a total of 117 unique proteins showed differential expression whereas, 2313 proteins were unchanged by cold stress or did not meet the criteria for statistical significance. Of these 117 DEPs, 67 were found to be up-regulated, with the other 50 DEPs down-regulated in treatment lines when compared with the controls. Although a large portion of DEPs were found to either share homology with putative proteins of unknown function or shared no significant homology with any of the database accessions, the remaining 44 identified proteins were successfully annotated and are listed in Table 1. GO enrichment analyses of the cold-responsive DEPs against the genome-wide databases of tomato and Arabidopsis (Table S7) showed enrichment for 20 and 18 GO terms, respectively. Relative quantification of treatment. The "AveCK" represents the average quantification among control samples, and the "AveE" means the average quantification among experimental samples, and the "CK1∼4" and "E1∼4" are the corresponding different individuals. These enriched groups included various biological processes and molecular functions, including cellular biosynthetic process, various binding activities and catalytic activities, and major cellular components integral to membranes. Notably, seven GO terms were found to be common to the results of these two independent GO enrichment analyses ( Table 2), including the terms: oxidation reduction, cation binding, ion binding and intrinsic to membrane etc. These conserved enriched groups offer insights into the biological pathways important to the petunia response to cold stress. The subcellular localizations of the identified cold-responsive proteins were determined using the Plant-mPLoc predictor (Chou and Shen, 2010). The results of these predictions showed that DEPs were typically located in various organelles such as chloroplasts, the nucleus, peroxisomes, Golgi apparatus and also in the cytoplasm. Notably, as many as 47/117 (i.e., 40.2%) of the cold-responsive proteins were predicted to be targeted to chloroplasts (Table S8), and these 47 proteins were comprised of 19 down-regulated proteins (i.e., 40.4%) and 28 up-regulated proteins (i.e., 59.6%). This finding suggests that chloroplasts are significant cellular organelles with regard to the cold stress response in petunia. Comparison between mRNA and Protein Levels of Selected Proteins To investigate the transcript levels of DEPs, real-time PCR analysis was performed using the same plant materials as employed for iTRAQ. Eleven proteins, either up-or down-regulated under cold stress, were selected for the design of gene-specific primers (Table S1) for use in real-time PCR. Of the selected proteins, nine different enzymes such as superoxide dismutase (Q43779), alcohol dehydrogenase 2 (Q84UY3) and pyruvate decarboxylase 1 (Q5BN14), and also one ribosomal protein (M1D6S9) and one putative expansin (Q8S346), were included. The results of real-time PCR demonstrated that only two genes (corresponding to P48498 and Q07346) displayed accordant change tendency as the results of iTRAQ. By contrast, six genes (corresponding to B6EWX5, Q1A531, Q43779, M1D6S9, Q84UY3, and Q8S346) showed no significant changes at the transcript level, despite the detection of differential expression patterns at the protein level, as indicated by iTRAQ data. 
Interestingly, the remaining three genes (corresponding to Q40878, Q5BN14, and Q6T2D5) showed completely contrary trends between transcriptome and proteome levels. On the basis of these patterns of association between the mRNA and protein levels, the eleven selected DEPs could be clustered into five groups (Figure 2), i.e., group I, up-regulated at both transcript and protein levels (Figure 2A); group II, up-regulated at transcript level while down-regulated at protein level ( Figure 2B); group III, down-regulated at transcript level while up-regulated at protein level ( Figure 2C); group IV, no change at transcript level while up-regulated at protein level ( Figure 2D); group V, no change at transcript level while down-regulated at protein level ( Figure 2E). These analyses showed that both parallel and independent correlations existed between the mRNA and protein expression profiles among cold-responsive proteins, which indicates the existence of a highly complex regulatory network in petunia seedlings exposed to cold. Effect of Cold Stress on Anti-Oxidative Enzymes Five anti-oxidative enzymes were affected by cold stress (Table 1). In order to gain more in-depth insights into the change of anti-oxidative enzymes under cold, activities of three antioxidative enzymes (CAT, SOD, and GPX) were investigated (Figure 3). Results showed that the activities of CAT and GPX were increased, while SOD activity was decreased in cold treated petunia seedlings, further suggesting association between antioxidation mechanisms and cold stress response in petunia. DISCUSSION Our transcriptome analyses have identified several candidate genes which may be involved in the cold stress response in petunia plants (Li et al., 2015). This initial proteomic analysis of petunia seedlings identified several cold-responsive proteins and revealed a complex cellular network affected by the cold stress treatment. Perhaps unsurprisingly, these candidate proteins included some with previously well recognized roles as general stress-inducible proteins, such as catalase and dehydrin. In addition, some of the petunia cold-responsive proteins identified here have been previously verified in other plants using the proteomic approach to cold stress; these proteins include carbonic anhydrase (Gao et al., 2009), beta-hexosaminidase (Yang et al., 2012), and fructose-bisphosphate aldolase (Yang et al., 2012), etc. Furthermore, some novel proteins were also identified in the petunia cold response. These results have demonstrated that, by applying the iTRAQ technology, a comprehensive set of proteins correlated with cellular responses to cold stress in petunia can be detected. These findings support the reliability and robustness of the iTRAQ approach for determining differentially regulated protein responses. The possible biological significance of key DEPs in cold stress adaptation and their associated metabolic pathways are discussed below. Association between Antioxidation Mechanisms and Cold Stress Response in Petunia By integrating the results of two independent GO functional enrichment analyses, the petunia cold-responsive proteins were found to be enriched for seven common GO terms ( Table 2). Of these common terms, "oxidation-reduction process" was the most notable, from which we tentatively suggest that antioxidation mechanisms may contribute to the adaptive response to low temperatures in petunia plants. 
These findings are consistent with the current understanding that a single environmental stress may simultaneously trigger multiple stress responses at an intracellular level. Cold stress, along with other abiotic stress types, is known to induce the production of reactive oxygen species (ROS). These can perturb cellular redox homeostasis and result in oxidative damage to membrane lipids, proteins and nucleic acids, ultimately leading to stress injuries in plants. To counterbalance this ROS accumulation, plants subjected to cold stress conditions can induce and activate scavenging systems, and trigger the expression of proteins able to protect cell machinery. For example, detoxifying enzymes such as CATs and SODs are induced by cold stress in Arabidopsis and rice (Goulas et al., 2006;Guo et al., 2006). Likewise, we observed higher levels of one petunia CAT (Q6T2D5) which was predicted to be located in the peroxisome (Table S8). In contrast, one molecular form of SOD, i.e., Cu/Zn-SOD (Q43779), was down-regulated by cold stress. Because the altered patterns of CAT and Cu/Zn-SOD at the mRNA and protein levels were not consistent under cold stress (Figure 2), it is suggested that these two enzymes are possibly regulated by post-transcriptional mechanisms. SODs catalyze the dismutation of superoxides into O 2 and H 2 O 2 (Apel and Hirt, 2004). Although SODs are recognized as general stress-inducible proteins, the effect of cold on the expression of Cu/Zn-SOD in this current study was actually not a surprise. It was reported that Cu/Zn-SOD was down-regulated in rice leaf sheaths exposed to 5 • C (Hashimoto and Komatsu, 2007). Similarly, a decrease in levels of Cu/Zn-SOD was found in strawberry plants after a cold treatment (Koehler et al., 2012). The accord of these published results together with our own findings suggests specificity of certain Cu/Zn-SODs in the plant response to cold stress. CAT, which is mainly localized within peroxisomes, catalyzes the decomposition of H 2 O 2 to oxygen and water via the CAT pathway. Our results are consistent with previous studies and further implicate CAT in the response to low temperature, thereby prompting us to speculate that ROS scavenging through the effects of the CAT pathway may contribute to the adaptation of petunia plants coping with adverse ambient conditions. More recently, in addition to those well characterized antioxidant enzymes such as CATs and SODs, the role of glutathione S-transferase (GST) and GPX during various stress conditions in plants has been reported by an increasing number of publications. For instance, overexpression of a cDNA encoding an enzyme with both GST and GPX activity has been reported to enhance the growth of transgenic tobacco seedlings under cold and salt stresses (Roxas et al., 1997). GSTs catalyze the conjugation of glutathione (GSH) to a wide variety of hydrophobic and electrophilic compounds to form non-toxic, or at least less toxic, peptide derivatives (Marrs, 1996;Frova, 2003). In addition, diverse isoforms of GSTs isolated from different plant species also showed significant GPX activity toward lipid hydroperoxides, catalyzing their reduction to the less toxic alcohols (Bartling et al., 1993;Cummins et al., 1999). GPXs are ubiquitously occurring enzymes in plant cells which use GSH to reduce H 2 O 2 and organic and lipid hydroperoxides (Milla et al., 2003;Navrot et al., 2006). Therefore, it was not surprising that both GST (P32111) and GPX (M1AWZ7) were up-regulated in petunia seedlings exposed to cold. 
We assume that GST and GPX activities are involved in the alterations of GSH and ascorbate metabolism that lead to reduced oxidative damage and enhanced tolerance to stresses, alongside the scavenging of peroxides (Roxas et al., 2000). Moreover, phospholipid hydroperoxide glutathione peroxidase (PHGPx, Q9FXS3) was also up-regulated at the protein level. PHGPx is a unique antioxidant enzyme responsible for reducing lipid hydroperoxides directly, which is generally considered the principal enzymatic defense against oxidative biomembrane destruction in animals (Imai and Nakagawa, 2003). In plants, however, the role of PHGPx has so far remained largely unexplored. Investigations of tissue expression and induction expression profiles at the protein level under a wide range of abiotic stresses ) have highlighted the likelihood of a specific role for PHGPx in ROS scavenging. Our results suggest that the GSH cycle enzymes might also play a significant role as part of an antioxidant protection system in the petunia response to cold stress. Furthermore, activity assay confirmed that anti-oxidative enzymes were regulated in cold treated petunia seedlings (Figure 3), suggesting that they also suffered from oxidative stress. Taken together, these results linked antioxidation mechanisms with cold stress response. Epigenetic Factors Involved in Cold Stress Response of Petunia In order to adapt to environmental challenges, it is of great importance for sessile plants to dynamically control gene expression patterns. This is particularly vital for stress responses, which are controlled through a myriad of signal transduction pathways. For example, when environmental cues are perceived and transmitted, specific transcription factors in the nucleus are turned on and a cascade of downstream gene expressions is triggered. In recent years, it has become evident that the biogenesis of small RNAs and dynamic changes in chromatin properties also contribute to the regulation of gene expression. Studies have indicated that these epigenetic mechanisms are crucial to appropriate plant reactions to stress (Borsani et al., 2005;Angers et al., 2010;Kumar and Wigge, 2010). In our study, the abundance of two epigenetic factors namely, the histone H3 (M1BEC3) and a member of the Argonaute protein family AGO4-2 (Q2LFC1), were decreased after cold stress. It was previously reported that histone H1 was up-regulated at the transcript level in Arabidopsis in response to cold-, salt-and drought-stress (Kreps et al., 2002). Recently, a quantitative proteomic analysis in rice has revealed several coldresponsive histones. Among them, H4, H2B.9, H3.2, and linker histones H1 and H5, were found to be down-regulated (Neilson et al., 2011). Histones are prone to reversible post-translational modifications such as acetylation, methylation, phosphorylation, ubiquitination, and glycosylation, which allow the proteins to respond flexibly to stimuli (Neilson et al., 2011). In rice, submergence of the seedlings under water induced histone H3 acetylation and H3K4 trimethylation in pyruvate decarboxylase 1 (PDC1) and alcohol dehydrogenase 1 (ADH1) genes. These histone modifications were associated with enhanced expression of PDC1 and ADH1 at the transcript level under stress (Tsuji et al., 2006). In fact, histone modification is a critical regulator of gene expression and has been implicated in plant stress responses, including the response to low temperature (Zhu et al., 2008;Kim et al., 2010a). 
Intriguingly, our proteomic data showed that cold stress also affected the expression levels of PDC1 (Q5BN14) and ADH2 (Q84UY3). It is not clear whether there was any direct relationship between the varied expression pattern of histone H3 and that of PDC1 or ADH in this study; however, a quantitative study of histone posttranslational modifications could provide valuable information as to the role in the regulation of cold hardiness. AGO4 is one of the crucial components in the transcriptional genesilencing pathway correlated with siRNA which directs DNA methylation at specific loci, a phenomenon referred to as RNA-directed DNA methylation (RdDM) (Agorio and Vera, 2007). As a small RNA biogenesis factor that is involved in the biogenesis of heterochromatic siRNAs (hc-siRNAs) and in the pathway of RdDM, AGO4 is essential for antibacterial resistance; in addition, it plays an important role in plant resistance to viruses (Agorio and Vera, 2007;Bhattacharjee et al., 2009). As shown by our iTRAQ data, AGO4-2 expression was altered in response to low temperature, which would offer the possibility that AGO4 is also involved in defensive reactions against abiotic stresses. Since plant epigenetics has recently attracted unprecedented interest, not only as a subject of basic research but also as a potential new source of advantageous characters for plant breeding (Mirouze and Paszkowski, 2011), the identification of these two epigenetic regulators in this work might suggest a new direction for research into the cold tolerance of petunia. Other Cold Responsive Proteins in Petunia Several DEPs identified in this study were predicted to be involved in antioxidative/detoxifying reactions and epigenetic regulation, as discussed above, whereas others that formed the focus of this study were associated with several primary and secondary metabolic processes, such as protein synthesis, energy metabolism and phenylpropanoid biosynthesis. Six DEPs, including three ribosomal proteins (RPs) and three elongation factors, were related to protein synthesis. RPs are essential for protein synthesis and have been revealed to play an important role in metabolism, cell division and growth (Wang et al., 2013). Besides their housekeeping functions, it is interesting to note that there is an increasing awareness of the function of some RPs in other roles. For instance, ribosomal protein S3 (RPS3), a component of the eukaryotic 40S ribosomal subunit, has been proposed to play a central role in regulating numerous aspects of host-pathogen interactions (Gao and Hardwidge, 2011). Although to date there is little evidence for direct links between RPs and cold stress, we shouldn't ignore the decreased levels of three RPs (Q2XPW5, Q3HRW8, and M1D6S9) observed in our experiments. We speculate that RPs participate as regulatory components in the response to stress, although the regulation mechanism remains to be elucidated. Elongation factor Tu (EF-Tu) is an organelle protein playing a central role in the elongation phase of protein synthesis. EF-Tu gene expression has been extensively studied in plants in response to various environmental challenges, especially high temperature stress (Bhadula et al., 2001;Bukovnik et al., 2009). Moreover, there is growing evidence regarding abiotic stress-related EF-Tu expression, which has been acquired using proteomics approaches. For example, proteomic analyses of cold-and heat-stress responses in rice identified plastid EF-Tu as an up-regulated protein (Cui et al., 2005;Lee et al., 2007). 
On the contrary, in the present work we found two EF-Tu proteins (K4CUX6 and M1B641) that displayed decreased levels in cold-treated petunia seedlings, although a rationale for these findings couldn't be deduced yet. Nevertheless, EF-1α (Q9ZWH9), the cytosolic homolog of EF-Tu in plants which is also pivotal in the regulation of translation under abiotic stresses, was present at a higher level in seedlings at 2 • C. The differential regulation of various components of the translation machinery implies that a complicated mechanism governing protein synthesis exists in response to cold stress. In addition to their roles in translation, within bacteria, mammalian cells and in plants such as Arabidopsis, EF-Tu and EF-1α seem to display chaperone-like activities in protein folding, in protection against thermal denaturation and in interaction with unfolded proteins (Suzuki et al., 2007;Shin et al., 2009). We tentatively suggest that these factors, which are implicated in cold adaptation, may also have similar functions in petunia. Among the cold responsive proteins, three proteins were identified to be energy-related enzymes, including cyanate hydratase (K4CW69), carbonic anhydrase (M1D227) and ATPdependent Clp protease proteolytic subunit (M1APP3). The functions of these enzymes under cold stress conditions are still unknown. Cyanate hydratase, which catalyzes the bicarbonatedependent breakdown of cyanate to ammonia and bicarbonate in cyanogenic glycosides, was induced by cold stress. Along with earlier results from tomato and grapevine (Parker et al., 2013;Liu et al., 2014), a probable role was suggested for cyanate hydratase in plant responses to both biotic and abiotic stresses. A chloroplast-localized carbonic anhydrase, which facilitates CO 2 movement across the chloroplast envelope, was found to decrease in abundance in Thellungiella rosette leaves after 5 days of cold treatment (Gao et al., 2009). However, this enzyme, which was predicted to be also localized in the chloroplasts (Table S8), showed the opposite expression pattern in our experiments. The different results were likely due to the distinctness of plant materials and differences in experimental conditions. ATP-dependent Clp protease proteolytic subunit was shown to decrease in abundance in cultured rice cells at 44 • C (Gammulla et al., 2010). In such a context, our finding that ATP-dependent Clp protease proteolytic subunit increased at 2 • C further supported its involvement in the regulation of temperature stress response. Two DEPs correlated with the plant cell wall (CW) were found to be induced by cold stress. One was a cinnamyl alcohol dehydrogenase (CAD) (P30360) and the other was a putative expansin (Q8S346). The plant CW forms a barrier against pathogen attack and interconnects cells, and thereby plays a variety of distinct, sometimes opposite, roles. It is interesting to note that the CW may play a critical role in plant cold resistance (Tao et al., 1983). Pronounced thickening of CW has been observed in cold-acclimated plants of diverse species, suggesting an increased lignin production during cold acclimation. Lignin is a class of complex organic polymers of phenylpropanoid compounds and is particularly important in the formation of the plant CW. It is believed that CW thickening could provide resistance against cell collapse and, thus, may provide protection against mechanical stresses induced by cold (Wei et al., 2006). 
CAD is a major rate-limiting enzyme that catalyzes the final step of the lignin biosynthesis, the conversion of cinnamyl aldehydes to alcohols, with NADPH acting as the cofactor (Sattler et al., 2010). While CAD is multifunctional, what interests us is the relationship between the CAD gene family and stress responses. For instance, in sweet potato, a CAD gene transcript has been found to be highly induced by cold (Kim et al., 2010b). Our observation of the up-regulated CAD protein in petunia seedlings at 2 • C offers another line of evidence for this relationship, implicating positive translational regulation of lignin biosynthesis as part of the petunia response under cold stress conditions. The resultant change in lignin content may alter water permeability and/or CW rigidity in cold-exposed petunia plantlets and, thereby, influence their ability to cope with the adverse cold conditions. Expansins are plant CW-remodeling proteins. The published data demonstrate that they are mainly involved in the pHdependent extension of plant CWs and expansins are also involved in CW modifications and cell enlargement induced by plant hormones. Interestingly, the expansin-like gene EXLA2 has been reported to be induced by cold and salinity, as well as by abscisic acid (ABA) treatment. Furthermore, the exla2 mutant exhibited a hypersensitive response to increased cold and salt which was mediated by ABA (Abuqamar et al., 2013). Along with other reports (Abuqamar, 2014), the data indicate that it is not unreasonable to consider the possibility that certain expansins contribute significantly to abiotic stress responsiveness and impact signaling pathways that regulate gene expression. Even though the precise role of these two CW related proteins under low temperature stress is still obscure, future detailed analysis of them at the molecular, biochemical and physiological levels may provide an insight into the signaling pathway involved in the regulation of cold stress adaption. In addition, within the list of cold-responsive proteins, a dehydrin (E5F371) and a methionine aminopeptidase (K4ASP6) were identified. Dehydrins, also known as the D-11 family of late embryogenesis abundant proteins, are formed during stresses which cause dehydration of the cells such as drought, salinity, heat and cold. The expression of dehydrins is closely related with cold resistance in numerous plant species such as Arabidopsis (Kawamura and Uemura, 2003), Citrus (Hara et al., 2003), strawberry (Houde et al., 2004), rice (Lee et al., 2005b), and Rhododendron (Peng et al., 2008), and such accumulated dehydrins are believed to be key components of the cold acclimation process. Consistent with these findings, we identified a dehydrin candidate which was up-regulated almost 2.3-fold in cold-treated lines when compared with controls. In Arabidopsis, the levels of some dehydrins have been reported to be regulated by CBF transcription factors of the cold-response pathway (Lee et al., 2005a). However, it is noteworthy that the expression of individual dehydrin proteins may exhibit particular patterns of tissue specificity which vary according to the different plant species (Koehler et al., 2012) and, thus, it may be useful to conduct further studies of this dehydrin in Petunia. Methionine aminopeptidases (MAPs) are well-known to be required for correct plant development, as demonstrated in Arabidopsis (Ross et al., 2005). It seems to have no relevance to cold stress resistance in plants. 
However, a recent study of the barley DNAbinding MAP, whose localization changes from the nucleus to the cytoplasm under low temperature treatments, suggested that this novel MAP could also function in conferring freezing tolerance by facilitating protein maturation (Jeong et al., 2011). This result leads us to tentatively suggest that the responsiveness of the petunia MAP to cold stress, observed in our study here, might not be an incidental response, but may deserve further investigation. In summary, this work offers a global perspective of the petunia proteomic profile under cold stress, achieved through the use of the iTRAQ technique. A total of 117 cold-responsive proteins were revealed which are therefore potential candidates to be involved in the overall plant response aimed at achieving the beneficial equilibrium of physiological homeostasis. Our study provides not only novel insights into the plant response to cold stress, but also a promising starting point for further investigations into the functions of candidate proteins as part of the petunia cold response. Nevertheless, it should be noted that a significant proportion of the revealed DEPs remained unidentified with respect to probable function, in large part due to the lack of high quality functional annotations for many plant genomes. The experimental validation of these un-annotated proteins will make an important contribution to bridge the gap between proteomic discoveries of stress-responsive proteins and the selection of target proteins with strong potentiality for the improvement of cold tolerance by genetic engineering in plants (Gong et al., 2015). AUTHOR CONTRIBUTIONS WZ and MB designed the experiments. WZ, HZ, LN, and BL performed the experiments. WZ analyzed the data and drafted the manuscript. MB thoroughly revised the manuscript and finalized the manuscript. All the authors read and approved the manuscript.
2016-06-18T00:33:01.455Z
2016-02-25T00:00:00.000
{ "year": 2016, "sha1": "564c450bacd196808d79e36774b0130ab5912fac", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2016.00136/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "564c450bacd196808d79e36774b0130ab5912fac", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
240478523
pes2o/s2orc
v3-fos-license
An Internal Standard High-Performance Liquid Chromatography Method for Simultaneous Quantification of Candesartan and Hydrochlorothiazide in Combined Formulations

The internal standard method is a versatile procedure that avoids misleading results caused by instability of the chromatographic system or by inexperienced workers, and it is an effective way to judge the accuracy of any obtained data. As the detector responses of chlorzoxazone (CZN) resemble those of candesartan (CDZN) and hydrochlorothiazide (HCTZ), CZN was employed as an internal standard. Herein, a simple chromatographic method was established for the quantification of CDZN and HCTZ. Isocratic elution was conducted using premixed acetonitrile/1% formic acid (7:3 v/v) at a 0.8 mL/min flow rate. The separation of the three components was maintained using the universal 20 µL loop and, for further simplicity in application, the analysis was optimized at 25°C. CDZN, HCTZ, and CZN were simultaneously monitored and quantified at 270 nm. The method developed here complies with all the validation limits according to the British Pharmacopoeia (BP), the United States Pharmacopoeia (USP), and the guidelines of the International Council for Harmonisation (ICH). The method proved to be linear in the ranges of 6.4-25.6 µg/mL and 5.0-20 µg/mL for CDZN and HCTZ, respectively, while the detection and quantitation limits were less than 1.0 µg/mL for both.

INTRODUCTION

Candesartan (1-hydroxyethyl 2-ethoxy-1-[p-(o-1H-tetrazole-5-ylphenyl)benzyl]-7-benzimidazole carboxylate, cyclohexyl carbonate) (CDZN) and hydrochlorothiazide (6-chloro-3,4-dihydro-2H-1,2,4-benzothiadiazine-7-sulphonamide 1,1-dioxide) (HCTZ) are, respectively, an angiotensin II receptor blocker and a diuretic prescribed for hypertension (Fig. 1) [1-3]. Currently, researchers are interested in developing assay methods to validate their effectiveness; although several methods have been developed for HCTZ and CDZN, most test for these compounds separately, and only a few examine both substances simultaneously. Some researchers use spectrophotometric methods because of their ease of use 4,5, whereas others use LC-MS/MS, which would be ideal if not for the high cost of the equipment 3. High-performance liquid chromatography-ultraviolet (HPLC-UV) can be used in place of spectrophotometric procedures as it includes a separation step, and it is more affordable than LC-MS/MS. For these reasons, an increasing volume of work has explored the use of HPLC-UV in assaying pharmaceutical formulations [6-11].

Method validation is a valuable topic in quantitative analysis because it is essential for demonstrating the reliability of any innovative analytical methodology 12,13. Because of the importance of this process, many agencies have set up standard procedures [14-16]. Method validation is a protocol that tests the accuracy, specificity, precision, reproducibility, limit of detection (LOD), and limit of quantification (LOQ) of a method to establish its suitability for a given purpose 17. The internal standard method is a powerful technique that avoids the generation of misleading results caused by the detector or the separation system 18. This research was aimed at developing an HPLC-photodiode array (HPLC-PDA) method for simultaneously quantifying CDZN and HCTZ in pharmaceutical formulations. The muscle relaxant chlorzoxazone (5-chloro-3H-1,3-benzoxazole-2-one) 19 (CZN) presented in Fig. 1c was used as an internal standard substrate to ensure the production of reliable results. As a result, this study presents a simple and fast analytical approach for application in quality control and research analysis.

Chemicals and Solvents

Working standards of CDZN, HCTZ, and CZN active pharmaceutical ingredients were supplied by Ranbaxy Laboratories Limited, India. The combined tablets containing 16 mg CDZN and 12.5 mg HCTZ were purchased from a local market in Riyadh, KSA. CZN, a muscle relaxant, was employed as an internal standard. HPLC-grade acetonitrile was purchased from Sharlau, Spain. The placebo of the drug formulation was purchased from the Tabuk factory for pharmaceutical industries in Riyadh.

Preparation of Solutions

For preparation of the mobile phase solution, 700 mL acetonitrile and 300 mL formic acid were combined, and the mixture was vacuum filtered, sonicated for 30 min, and cooled to 25°C. CDZN (0.16 g) and HCTZ (0.125 g) were transferred to a 100 mL volumetric flask, which was half-filled with the mobile phase solution, sonicated for 30 min, and cooled to 25°C; it was then filled to the mark with the same solvent. The CZN stock solution (300 mg/L), to be utilized as the internal standard substrate, was prepared in the same manner. The appropriate stock solution volume was mixed with an aliquot of CZN stock solution and then diluted to 100 mL to attain concentrations of 16 µg/mL CDZN, 12.5 µg/mL HCTZ, and 30 µg/mL CZN. The average weight of 20 tablets (0.2596 g) underwent similar procedures to prepare a stock; subsequent dilution was carried out, and the appropriate volume of CZN was added before the solution was adjusted to the final volume. Placebo powder (0.2311 g) was transferred to a 100 mL volumetric flask and processed similarly for sample preparation. A 0.22 µm nylon filter, which does not affect the concentration of the solutions, was employed to filter all solutions prior to their injection into the Shimadzu autosampler.

Chromatographic Conditions and Method Validation

The validation was conducted using an HPLC/UV-Vis system (Prominence, Shimadzu, Japan). Several columns were tested, including phenylhexyl, octyl (C8), and octadecyl (C18) columns. To determine the best elution, the mobile phase composition was varied in terms of the formic acid/acetonitrile ratio. Furthermore, column temperatures ranging from 20 to 40°C were tested, as were flow rates ranging from 0.5 to 2.0 mL/min. CDZN, HCTZ, and CZN were monitored at 270 nm.

The previously described standard solution was employed as a test for system suitability. The linearity and accuracy tests were performed using serial concentrations in the range of 40-160% of the target concentration. Typically, CDZN concentrations were 6.4, 9.6, 12.8, 16, 19.2, 22.4, and 25.6 µg/mL, while concentrations of 5.0, 7.5, 10, 12.5, 15, 17.5, and 20 µg/mL were used for HCTZ. The solutions for the linearity tests were prepared as a mixed standard, and each was injected with an internal standard aliquot to a final concentration of 30 µg/mL CZN. The detection and quantification limits were calculated statistically from the linearity results 16. The standard, sample, and placebo solutions at the optimized chromatographic conditions were used to investigate the procedure's selectivity. Synthetic samples with 80 and 120% of the target concentrations were employed to test for precision through intraday and interday studies. The three solutions were analyzed five times on the same day and on three consecutive days for the interday determination.
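The concentration series quoted above follows directly from the target levels (40-160% of target in 20% steps); a minimal check of that arithmetic is sketched below, with a hypothetical helper name.

```python
# Illustrative arithmetic only, not the authors' worksheet.
def linearity_levels(target_ug_per_ml: float) -> list[float]:
    """Serial standard concentrations from 40% to 160% of the target level."""
    return [round(target_ug_per_ml * f / 100, 2) for f in range(40, 161, 20)]

print(linearity_levels(16.0))   # CDZN: [6.4, 9.6, 12.8, 16.0, 19.2, 22.4, 25.6]
print(linearity_levels(12.5))   # HCTZ: [5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0]
```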
The method robustness was tested by examining the effects of slight alterations to the optimized chromatographic conditions: the detection wavelength was varied by ±5 nm, the mobile phase composition was altered by ±5%, and the column temperature was varied by ±5°C. Each parameter was changed separately, the analysis was conducted, and the recovery percentage was then calculated.

RESULTS

The separation of CDZN, HCTZ, and the internal standard CZN was accomplished using a mobile phase consisting of acetonitrile/formic acid (7:3 v/v) with a pH of 2.8. A flow rate of 0.8 mL/min was the best within the tested range, and the phenylhexyl column performed better than C8 and C18 in separating this combination. Furthermore, the PDA detector revealed that the best response for the three components was at 270 nm. Moreover, the commonly used 20 µL loop volume was selected to ease the procedure's applicability. To validate this method, the USP protocol was followed. A system suitability test using a mixed standard solution at the target concentration was conducted. As shown in Table 1, the statistical parameters of CDZN and HCTZ were within the acceptance criteria, implying the method's repeatability.

The peak area of each analyte was divided by that of the internal standard to obtain the corrected response. Fig. 2 illustrates the regression lines derived from the plot of corrected peak area versus concentration. The results show that the method was linear within the examined concentrations, and the correlation coefficient for both pharmaceutical ingredients was 0.999. Detection and quantitation limits were determined as an integral part of the validation protocol. The LOD and LOQ were statistically calculated from the linear regression results via Equations (1) and (2), respectively. The root mean square error (RMSE) was derived using the LINEST function (Microsoft Excel 2019): 0.0064 and 0.0006 for CDZN and HCTZ, respectively. For CDZN, the LOD was found to be 1.9 × 10−2 µg/mL, while the LOQ was 6.4 × 10−2 µg/mL; the corresponding values for HCTZ were 0.2 × 10−2 and 0.6 × 10−2 µg/mL.

Fig. 2. The linearity relationships of the concentrations and their corrected areas for (a) hydrochlorothiazide and (b) candesartan.

The selectivity of the procedure was examined using the standard, sample, and placebo solutions, in which the sample and standard solutions were injected with the internal standard. The results are shown in Fig. 3. Fig. 3b,c show that the excipients caused no alteration to the retention times of the active ingredients or the internal standard. Fig. 3a shows two peaks with negligible areas at the retention times of HCTZ and CZN; these peaks could be attributed to cross-contamination from sample or standard injections. These results were considered acceptable since the areas were insignificant compared to the sample peak areas 14,16. Furthermore, the accuracy of the method was investigated, revealing excellent consistency in the recovery values over the concentration range of 80-120% for both drugs. The average recovery for HCTZ and CDZN was 99.6 and 100.1%, respectively. The RSD values for these recoveries ranged between 0.8 and 1.6%, showing that the results for both drugs were within the acceptance criteria (RSD ≤ 2%). These findings were comparable to, or better than, some results reported in the literature for earlier methods [18-20]. Moreover, the precision study was performed using the solutions prepared for the accuracy test.
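The internal-standard correction, calibration fit, and LOD/LOQ estimation described above can be sketched as follows. Since Equations (1) and (2) are not reproduced in this text, the widely used ICH expressions LOD = 3.3σ/S and LOQ = 10σ/S are assumed, with σ taken as the residual standard error of the calibration line and S its slope; this is a plausible reading rather than the authors' exact formulas, and the function name is hypothetical.

```python
# A hedged sketch of the linearity/LOD/LOQ computation, assuming the ICH 3.3σ/S and 10σ/S rules.
import numpy as np

def calibrate(conc, analyte_area, istd_area):
    """Fit corrected response (analyte area / internal-standard area) versus concentration."""
    y = np.asarray(analyte_area) / np.asarray(istd_area)   # internal-standard correction
    x = np.asarray(conc)
    slope, intercept = np.polyfit(x, y, 1)                  # ordinary least-squares line
    resid = y - (slope * x + intercept)
    sigma = np.sqrt(np.sum(resid**2) / (len(x) - 2))        # residual standard error
    r = np.corrcoef(x, y)[0, 1]                             # correlation coefficient
    return {"slope": slope, "intercept": intercept, "r": r,
            "LOD": 3.3 * sigma / slope, "LOQ": 10 * sigma / slope}
```

Applied to the seven-level mixed standards described earlier, this returns the calibration parameters and detection limits in the same concentration units as the inputs.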
The repeatability of the internal standard method was inspected by conducting five consecutive assays of the test solutions. The procedure gave assay results in the ranges of 98.7-101.5% and 98.2-100.7% for HCTZ and CDZN, respectively. The RSD values of the intraday investigations were within the acceptance criteria for both drugs, and the reproducibility of the procedure was tested by conducting the assay on three different days. The results consistently reflected the method's reliability (Table 2) 16,20.

The robustness of the internal standard method was investigated by changing various parameters of the optimized conditions separately. The method performance was relatively consistent even with slight changes in the chromatographic parameters, as shown in Table 3. The RSD value for the assay at the optimized and altered parameters was less than 2%. In addition, the average assay results of 99.7 and 100.8% for HCTZ and CDZN, respectively, demonstrate the robustness of this method.

Assay of local-market tablets by the developed method

The labeled content of CDZN and HCTZ in the tablets was 16 and 12.5 mg, respectively. The sample solution was injected with the internal standard (CZN), completed to 50 mL with the same solvent, and analyzed at the optimized chromatographic conditions. The content of the HCTZ and CDZN tablets was determined to be within the acceptable range, with assays of 100.2 and 103.7%, respectively.

CONCLUSION

A reversed-phase HPLC-PDA procedure was developed for the simultaneous quantification of HCTZ and CDZN combined in pharmaceutical formulations, in which CZN was employed as an internal standard. Compared to conventional HPLC methods, the method developed here has the advantage of allowing self-correction via inclusion of the internal standard. The validated internal standard method successfully bypassed the obstacles associated with fluctuations in the detector, temperature, pump flow, and mobile phase composition, thereby showing great potential for application in routine analysis and research.
2021-11-03T15:14:28.378Z
2021-10-30T00:00:00.000
{ "year": 2021, "sha1": "202a6cc5117b406ded275d9fd8fbfb64f118a87a", "oa_license": "CCBY", "oa_url": "http://www.orientjchem.org/pdf/vol37no5/OJC_Vol37_No5_p_1077-1082.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f6538cb5dd71ac15242b8c92a9bf3d157ca0c050", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [] }
168804874
pes2o/s2orc
v3-fos-license
THE ISSUE OF EVALUATING THE PERSONNEL POTENTIAL OF INDUSTRIAL ENTERPRISES The article notes that the activities of industrial enterprises in modern conditions should be based on a high level of human resources. The model of estimation of personnel potential of industrial enterprises as one of the most important factors of influence on economic and financial results of enterprise activity is offered. The conclusion is made on the necessity of attracting, in order to increase the level of personnel potential of enterprises, to the staff of graduates of higher educational institutions, able to give optimal solutions for innovation and investment development of Ukrainian enterprises. Introduction. Activities of modern industrial enterprises in the conditions of a dynamically changing economy of the country should be based on high personnel potential, in close interaction of individuals as potential employees, enterprises of various forms of ownership as potential employers, educational institutions, employment centers, local authorities as forming entities quantitative and qualitative characteristics of the human and labor potential of the enterprise. Such interaction will ensure the development of a joint decision on identifying the most promising directions for the development of a modern enterprise. In connection with the necessity to solve the abovementioned problems, the construction of an appropriate model for assessing the potential of the staffing of industrial enterprises and solving problems with its application in the practical activities of industrial enterprises in Ukraine becomes relevant. Analysis of recent research and publications. Significant contribution to the development of theoretical and general methodological problems of improving the system of personnel management of industrial enterprises made by domestic and foreign scientists, in particular, V. Heyets, A. Kolot, S. Kalinina, S. Bandur, D. Bohynya, N. Volgin, L. Lisogor, V. Bulanov T. Zayats [1][2][3][4][5][6][7][8][9] etc. At the same time, issues of strengthening the personnel potential of industrial enterprises and their estimation remain insufficiently defined. The purpose of the article is to solve the problems of formation and evaluation of personnel potential of industrial enterprises as one of the most important factors influencing the economic and financial performance of the enterprise. Presentation of research results. Formation of staffing and estimation of personnel potential of an enterprise is one of the most important administrative decisions, since the financial stability and competitiveness of industrial enterprises of Ukraine depends on the timely and complete solution of socio-economic and organizational tasks. The term "staffing" reflects a set of qualifications (knowledge, skills) and individual values of the productivity of employees and employees involved, through which the latter exercise a certain set of works and services. The potential of the personnel community is the existing set of employees of the enterprise, which is considered together with their qualitative characteristics. This concept makes it possible to estimate the degree of use of potential opportunities as a separate employee (individual staffing capacity) and their aggregate (capacity of personnel) in order to ensure the economic development of the enterprise. The proposed model of the estimation of the potential of the staffing of the enterprise makes it possible to compare the real level of staffing with its ideal value. 
Such a model for estimating the potential of an enterprise's personnel is based on statistical data on staff composition and structure. The sequence of actions for evaluating staffing potential with this model is as follows:

- Assess the staffing level of the i-th functional unit of the enterprise by the corresponding formula, where R_i is the estimate of the staffing level of the i-th functional subdivision of the enterprise; f_i is the functional subdivision of the enterprise, i = 1..12; r_j is the indicator of the enterprise's staffing in points, j = 1..6; p_jk is the sub-indicator of the enterprise's staffing in points; a_ijk is the number of employees of the i-th functional unit corresponding to the k-th sub-indicator of the j-th staffing indicator; and r_jk is the weight of the k-th sub-indicator of the j-th staffing indicator. The staffing indicators in points (r_j) take j = 1..6. The staffing sub-indicator in points was estimated by a group of experts on a scale from 1 to 60 points, depending on the criterion of the given component.

- Determine the ideal value of the staffing level of the i-th functional unit of the enterprise according to the corresponding formula, where R_i* is the ideal value of the staffing level of the i-th functional unit of the enterprise, calculated for the actual number of employees of that functional unit; r_j max_k denotes the maximum weight of the k-th sub-indicator of the j-th indicator; and max_k is the maximum value of the sub-indicators for each staffing indicator of the enterprise.

- Evaluate the potential of the personnel of the i-th subdivision of the enterprise by comparing the real staffing level with its ideal value (formula (4)), where П_i is the estimate of the personnel potential of the i-th subdivision of the enterprise.

The results of the analysis of the real and ideal staffing levels of the i-th subdivision of the enterprise, as well as the assessment of the potential of the subdivision's personnel, are summarized in a general table to support subsequent management decisions regarding the compliance of the personnel with the applicable requirements. Analysis of the obtained data indicates negative tendencies that are characteristic of all structural divisions of the enterprise.

The degree of coincidence between the qualification requirements of the enterprise and the corresponding capabilities of the personnel can be regarded as an assessment of staffing capacity. Let K_i denote the skill level (individual staffing capacity) of the i-th personnel unit; V_fi the required potential (the level of capabilities of the i-th personnel unit demanded by the requirements of the unit with the number j); and V_ki the available potential (the level of capabilities of the i-th personnel unit according to the requirements of the unit with the number j of the enterprise). The personnel capacity of a subdivision is then obtained from these individual capacities, where C_j is the personnel capacity of the unit with the number j and m is the number of employees of the unit with the number j. If C is adopted as the assessment of the capacity of the personnel and n is the number of divisions of the enterprise, then a target function with respect to staffing capacity can be constructed. Unlike the estimation of the staffing potential of the enterprise, the assessment of staffing capacity takes into account the relationship between the requirements and the available capabilities of the personnel.
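The formulas referred to above are not reproduced in this extracted text, so the sketch below encodes only one plausible reading of the verbal definitions: a weighted sum of expert scores per employee for the real level, an ideal level in which every employee scores the maximum of 60 points on every sub-indicator, and the potential as the real-to-ideal ratio. The function names and the aggregation rule are assumptions, not the authors' model.

```python
# A hedged illustration of the staffing-potential evaluation steps, under the stated assumptions.
def staffing_level(a_ijk, p_jk, r_jk):
    """Real staffing level R_i of one functional unit.
    a_ijk[j][k]: number of employees matching sub-indicator k of indicator j;
    p_jk[j][k]:  expert score of that sub-indicator (1-60 points);
    r_jk[j][k]:  weight of that sub-indicator."""
    return sum(a_ijk[j][k] * p_jk[j][k] * r_jk[j][k]
               for j in range(len(p_jk)) for k in range(len(p_jk[j])))

def ideal_staffing_level(n_employees, r_jk, max_score=60):
    """Ideal level R_i*: every employee scored at the assumed maximum on every sub-indicator."""
    return n_employees * sum(max_score * w for row in r_jk for w in row)

def staffing_potential(real_level, ideal_level):
    """Potential of the unit's personnel as the ratio of real to ideal staffing level."""
    return real_level / ideal_level
```

Under this reading, a potential of 1.0 would mean the unit's personnel fully meet the expert-defined ideal, and lower values flag the subdivisions where development measures are most needed.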
In order to increase the level of personnel potential of the enterprise, it is necessary to involve the personnel of graduates of higher educational institutions in a personnel collectivity, able to accumulate a considerable amount of information and issue optimal solutions for innovation and investment development of industrial enterprises of Ukraine. Most large companies understand the relevance of this issue and are already working with higher education institutions on the training of young professionals. The most widespread model of interaction is the targeted training of specialists, which is funded by the future employer. In some cases, employers and institutions of higher education jointly develop programs aimed at meeting the needs of a particular enterprise. Both parties are interested in establishing close contacts that enable higher education institutions to track the changing requirements of enterprises from different sectors to professionals and to quickly adjust educational programs, which in turn contributes to increasing the competitiveness of an educational institution. At the same time, it is possible for industrial enterprises to influence the training process, to obtain specialists trained for a "special order", as well as to directly participate in their preparation. The quality of the training of specialists is one of the main indicators that determine the competitiveness of both the higher educational institution and the enterprise. Therefore, the positioning of a higher educational institution in the local labor market depends to a large extent on the effectiveness of its interaction with enterprises-consumers of graduates of higher educational institutions. The company, which wants to achieve and maintain a leading position in the market, needs such services constantly, which requires long and stable contacts with higher educational institutions of Ukraine. The interest of the enterprise and higher education institutions in the cooperation is obvious, and the sides of the touch and even the penetration of education and industry are so many that there is an urgent need to create a special structure for their coordination, which can combine the financial resources of the enterprise and the intellectual potential of higher educational institutions, provide a favorable environment for solving educational problems. The criteria for integrating cooperation between higher education institutions and business structures are: maximum employment of graduates of a specific institution of vocational education; the number of long-term cooperation agreements; availability of additional sources of funding and alternative ways to compensate for the costs of maintaining an institution of vocational education; co-ordination of business structures, research projects and educational programs; creation of basic educational and scientificproduction centers for the provision of personalized programs and technologies for the training of young professionals; improvement of educational process and development of innovation-investment technologies. Conclusions On the basis of the conducted research, a statistical model was developed for estimating the potential of the staffing of the enterprise and a theoretical model for estimating the capacity of the staffing population was constructed. 
The conducted studies make it possible to assess the actual state of the staffing of the enterprise, to identify priority ways of developing and effectively using the personnel community, and to determine the nature of the managerial decisions needed for developing the personnel community, taking into account its capacity for industrial enterprises. Managing the appropriate number of specialists from different categories can help to optimize wages and achieve a certain level of economy, thereby providing additional opportunities for addressing the economic and social problems of industrial enterprises. Thus, a focus on long-term and mutually beneficial relations with the actors who participate in the mechanism for balancing the local labor market, as partners who recognize common goals and are ready to work together to achieve them, requires a new approach: the improvement of the forms and methods of managing their activities in this area and the introduction of modern means of evaluating and analyzing managerial situations in the interaction of higher education institutions with business structures and local self-government bodies in the local labor market of Ukraine.
2019-05-30T13:21:20.218Z
2017-12-06T00:00:00.000
{ "year": 2017, "sha1": "511eb5e8fe02d8e6e9d7dd0c9a670f6cdf26da96", "oa_license": "CCBYNC", "oa_url": "http://skhid.kubg.edu.ua/article/download/117241/111651", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9f05ab13d63163790f823762149b7c86163cf878", "s2fieldsofstudy": [ "Economics", "Business", "Education" ], "extfieldsofstudy": [ "Business" ] }
126308289
pes2o/s2orc
v3-fos-license
Research on Arterial Stiffness Status in Type 2 Diabetic Patients Based on Pulse Waveform Characteristics For patients with type 2 diabetes, the evaluation of pulse waveform characteristics is helpful to understand changes in arterial stiffness. However, there is a lack of comprehensive analysis of pulse waveform parameters. Here, we aimed to investigate the changes in pulse waveform characteristics in patients with type 2 diabetes due to increased arterial stiffness. In this study, 25 patients with type 2 diabetes and 50 healthy subjects were selected based on their clinical history. Age, height, weight, blood pressure, and pulse pressure were collected as the subjects’ basic characteristics. The brachial-ankle pulse wave velocity (baPWV) was collected as an index of arterial stiffness. Parameters of time [the pulse wave period (T), the relative positions of peak point (T1) and notch point (T2), and pulse wave time difference between upper and lower limbs (T3)] and area [the total waveform area (A), and the areas of the waveform before (A1) and after (A2) the notch point] were extracted from the pulse wave signals as pulse waveform characteristics. An independent sample t-test was performed to determine whether there were significant differences between groups. Pearson’s correlation analysis was performed to determine the correlations between pulse waveform parameters and baPWV. There were significant differences in T3, A, A1, and A2 between the groups (p<0.05). For patients with type 2 diabetes, there were statistically significant correlations between baPWV and T3, A, A1, and A2 (p<0.05). This study quantitatively assessed changes in arterial pulse waveform parameters in patients with type 2 diabetes. It was demonstrated that pulse waveform characteristics (T3, A, A1, and A2) could be used as indices of arterial stiffness in patients with type 2 diabetes. Subjects This study was based on the "Study on Evaluation Method of Cardiovascular System Based on Noninvasive Detection of Blood Pressure and Pulse Wave of Limbs" [Song, Li, Qiao et al. (2016)], which recruited over 400 subjects and determined their pulse wave and cardiovascular parameters. A total of 25 patients with type 2 diabetes and 50 healthy subjects registered at Beijing University of Technology Hospital in 2015 were included in this study. The measurements of baPWV and pulse waveform, and recording of basic information such as gender, height, and weight were also performed at the time. A total of 75 subjects fulfilled the following criteria: Exclusion of a diagnosis of limb disability, hypertension, arteriosclerosis, congenital heart disease, heart failure, and a history of artery intervention based on medical interviews, physical examinations, and screening examinations. Additionally, previous studies revealed a correlation between the arterial stiffness and the subjects' basic characteristics such as age and BMI [Mitchell, Parise, Benjamin et al. (2004)]. To avoid an influence of these factors on the results, there was no significant difference between the healthy subjects and the diabetics in this study in terms of age, height, weight, and other basic characteristics, which ensured that the change of pulse waveform was mainly caused by type 2 diabetes. The study protocol was approved by the Committee on the Ethics of Human Research of Beijing University of Technology. All participants provided written informed consents on the basis of a detailed understanding of the content of this study. 
Pulse wave and baPWV measurement Four limb pulse waves and blood pressure were measured using a Fukuda VS-1500A blood pressure and pulse measuring device (Fukuda Company, China) with the assistance of experienced doctors. All subjects were told the detection time several days in advance and were asked not to consume stimulant food or drinks before the collection was completed. After resting for 15-20 min, each subject lay in a supine position with their hands placed at both sides of the body. A phonocardiogram sensor was fixed in the second intercostal space on the sternum. Cuffs were fixed on the upper arms and ankles, and electrodes were fixed at the left and right wrists. The device automatically obtained the blood pressure and pulse waveform data of the four limbs and automatically calculated the baPWV using the height of the subjects. The obtained data were stored in a database. Pulse characteristics determination 2.3.1 Pulse waveforms denoising and normalization First, off-line signal processing was applied to the pulse waves to remove the various noise signals introduced during the signal acquisition process [Chowienczyk, Kelly, Maccallum et al. (1999)]. Next, all the pulses of the four limbs were averaged to obtain a single reference pulse (the averaged raw pulse) for every limb, as shown in Fig. 1a. In accordance with the Nyquist theorem and the sampling frequency of the device, the averaged raw pulse was calibrated so that each period contained 100 sampling points. This study focused on the change in pulse shape; thus, the pulse amplitude was calibrated to 0-100. An example of the processing result for one limb is shown in Fig. 1b. Following this, the pulse waveform characteristics were extracted. Figure 1: The pulse waveform characteristic extraction from the averaged raw pulse waveform (a) and the normalized pulse waveform (b). Pulse waveforms characteristics The pulse wave period (T), which was determined by the number of sampling points contained in the original waveform, was extracted from the averaged raw pulse as shown in Fig. 1a. The relative positions of the peak point (T1) and notch point (T2), the total waveform area (A), and the areas of the waveform before (A1) and after (A2) the notch point were extracted from the calibrated pulse waveform as shown in Fig. 1b. These parameters can be used to characterize the arterial compliance of the subject as well as the arterial stiffness [Weber, Wassertheurer, Rammer et al. (2012); Li, Yang, Zhang et al. (2007); Zhang, Wang, Zhang et al. (2005); O'Rourke (1982)]. During one cardiac cycle, the blood flow to the lower-extremity arteries takes longer than that to the upper limbs. Consequently, when the starting point of the pulse wave period of the upper limb is used as a baseline, the pulse waveform of the lower-extremity arteries is delayed. As shown in Fig. 2, the delay time depends on the PWV in the lower-extremity arteries, which is directly influenced by the lower-extremity artery stiffness. Lower-extremity amputation and diabetic foot disease, which also have a direct relationship with the change in lower-extremity artery stiffness, are common complications of diabetes mellitus [Rith-Najarian and Reiber (2000)]. Therefore, the delay time (T3), which is based on the number of original waveform acquisition points, was extracted as one of the characteristic parameters. The T3 value directly affects the calculated baPWV value. Compared with baPWV, the measurement of T3 is simpler and more accurate.
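As an illustration of the feature definitions above, the following Python sketch extracts T, T1, T2, T3, A, A1 and A2 from one normalized pulse period. It assumes the dicrotic notch index and the upper-/lower-limb onset indices have already been located; all names are illustrative rather than the authors' implementation.

import numpy as np

def pulse_features(pulse, notch_idx, upper_onset_idx, lower_onset_idx):
    """Sketch of the waveform parameters for one normalized pulse
    (amplitude scaled to 0-100, one period long). notch_idx is the dicrotic
    notch sample; the onset indices are the start-of-period samples of the
    upper- and lower-limb pulses (assumed known from prior processing)."""
    T = len(pulse)                            # pulse wave period (sampling points)
    T1 = int(np.argmax(pulse))                # relative position of the peak point
    T2 = notch_idx                            # relative position of the notch point
    T3 = lower_onset_idx - upper_onset_idx    # upper-/lower-limb time difference
    A = float(np.trapz(pulse))                # total waveform area
    A1 = float(np.trapz(pulse[:notch_idx]))   # area before the notch point
    A2 = float(np.trapz(pulse[notch_idx:]))   # area after the notch point
    return {"T": T, "T1": T1, "T2": T2, "T3": T3, "A": A, "A1": A1, "A2": A2}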
This study analyzed the correlation between the T3 parameters and the corresponding baPWV values to determine the typicality of the selected samples. Statistical analyses The parameters were analyzed using the SPSS15.0 statistical software. The mean±SD of the parameters (baPWV, blood pressure, and pulse pressure; and T, T1, T2, T3, A, A1, and A2) was calculated for the healthy and diabetic subjects. An independent sample t-test was performed to determine whether there were significant differences in the parameters that we chose between healthy and diabetic subjects. Pearson's correlation analysis was used to determine the degree of correlation between pulse waveform parameters and baPWV, which was used as an index of arterial stiffness. A p value less than 0.05 was considered statistically significant. Result of independent samples t-test Tab. 1 shows the basic characteristics of healthy and diabetic subjects. Compared with the healthy subjects group, the diabetic group differed significantly in terms of variables including the pulse pressure and systolic pressure (except at the left arm). An independent sample t-test was also conducted on the two groups to determine the pulse waveform parameters that differed significantly between them. The results are presented in Tab. 2. On comparing the pulse waveform parameters between these two groups, significant differences between these parameters were observed, including in A, A1, and A2, which were separated by the dicrotic notch point, T3 (p<0.05). For the diabetic group, the values of A, A1, and A2 were higher than those of the healthy group, while the value of T3 was lower than that of the healthy group. The remaining waveform parameters (pulse wave cycle T, the pulse peak position T1, dicrotic notch position T2) did not significantly differ between the two groups. Pearson's correlation test Previous studies demonstrated that baPWV is directly related to the incidence of type 2 diabetes, which can be used to assess the risk factors for diabetic complications. The baPWV can be used to characterize the arterial stiffness of patients with diabetes. The basic characteristics (pulse pressure and systolic pressure except for the left arm) and pulse waveform parameters (A, A1, A2 and T3) were selected based on the results of the independent sample t-test, which showed that they differed significantly between these two groups. Next, Pearson's correlation test was performed between those parameters and the corresponding baPWV. The correlation between waveform parameters and the baPWV values was examined to investigate the correlation between the waveform parameters and the arterial stiffness. Pearson's correlation test results are presented in Tabs. 3 and 4. The baseline characteristics of pulse pressure were significantly positively correlated with baPWV (p<0.05). The systolic pressure except the left arm were significantly positively correlated with baPWV (p<0.05). The results showed that A and A1 were significantly positively correlated with baPWV (p<0.05). However, only A2 of the left arm and left ankle was significantly positively correlated with baPWV (p<0.05). The delay time T3 was significantly negatively correlated with baPWV (p<0.05). Discussion and conclusion Pulse waveform analysis is an effective method to monitor and evaluate arterial vascular functions. Herein, the pulse waveform data from patients with type 2 diabetes and from healthy subjects were collected and characteristic parameters were extracted for statistical analysis. 
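Referring back to the statistical-analyses section above, the group comparison and correlation analysis (independent-samples t-test, Pearson correlation, significance at p < 0.05) can be reproduced with standard tools; the sketch below uses synthetic stand-in values, not the study's measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical data standing in for one extracted parameter (e.g. A1) and baPWV.
a1_healthy = rng.normal(5200, 300, 50)       # 50 healthy subjects
a1_diabetic = rng.normal(5500, 300, 25)      # 25 patients with type 2 diabetes
bapwv_diabetic = rng.normal(1700, 150, 25)   # corresponding baPWV values

# Independent-samples t-test between groups (p < 0.05 taken as significant).
t_stat, p_group = stats.ttest_ind(a1_diabetic, a1_healthy)

# Pearson correlation between the parameter and baPWV within the diabetic group.
r, p_corr = stats.pearsonr(a1_diabetic, bapwv_diabetic)

print(f"t = {t_stat:.2f}, p = {p_group:.3f}; r = {r:.2f}, p = {p_corr:.3f}")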
Next, the correlations between baPWV used as an index of arterial stiffness and pulse waveform parameters were analyzed to obtain the correlation between waveform parameters and arterial stiffness. Pulse area parameters (A, A1, and A2) showed significant differences (p<0.05) between groups and had significant correlations (p<0.05) with arterial stiffness. T3 as a time parameter also provided the same statistical results. Thus, the pulse waveform parameters (T3, A, A1 and A2) can be used as an index of arterial stiffness for patients with type 2 diabetes. This was the first study to comprehensively investigate the pulse wave shape and its characteristic differences between patients with type 2 diabetes and healthy subjects. The waveform parameters selected in this study could have definitive physiological significance. The differences in pulse area parameters (A, A1 and A2) between patients with type 2 diabetes and healthy subjects was mainly determined by the difference in wave reflection timing. Previous studies reported that the wave reflection timing was primarily determined by arterial stiffness [Hirata, Kawakami and O'Rourke (2006);Mitchell, Parise, Benjamin et al. (2004)]. Therefore, it could be asserted that the differences in pulse area parameters between patients with type 2 diabetes and healthy subjects were caused by the changes in arterial stiffness. Pulse waves are the superposition of the pressure wave generated by the heart and the pressure wave reflected from the body in a cardiac cycle (Fig. 3). The reflected pressure wave can be divided into two types. Wave 1 was mainly reflected from the arterial branch during late systole or early diastole, whereas wave 2 was mainly caused by the collision between the blood and the closed aortic valve during diastole [Huang, Chang, Kao et al. (2010); Hirata, Kawakami and O'Rourke (2006) ;Mitchell, Parise, Benjamin et al. (2004)]. A1 from the beginning to the notch point indicated the physiological characteristics of the cardiovascular system during systole. A1 primarily depended on the pressure wave generated by the heart and the timing of wave 1. A2 after the notch point depended on the timing of wave 1 and wave 2. In this study, increased pulse area parameters were observed for type 2 diabetic patients. The increase of A can be explained by the increase of A1 and A2, whereas the increase of A1 and A2 could be explained by the difference of wave reflection timing caused by a change in arterial stiffness. In healthy subjects, because of the low arterial stiffness resulting in a small baPWV value, wave 1 usually occurred near the notch point [Mitchell, Parise, Benjamin et al. (2004)]. Wave 1 had little influence on the amplitude and width of the main pulse wave, but influenced the dicrotic wave. In contrast, for patients with type 2 diabetes, the baPWV value was higher because of increased arterial stiffness. Wave 1 appeared early, and its relative position was close to the pressure wave generated by the heart, even with the superposition, which made A1 higher due to the amplitude and width of the main pulse wave being increased. For patients with diabetes, although the high amplitude of wave 2 lead to high A2, wave 1 made no contribution to the increase of A2. Furthermore, previous studies reported that the changes in peripheral resistance and blood viscosity also had a great influence on A2 [Huang, Chang, Kao et al. (2010)]. This could be the reason for no significant correlation between baPWV and A2 of the right arm and ankle. 
In this study, although A2 could reflect the physiological characteristics during diastole, it had limitations as an index for characterizing the arterial stiffness. (a) (b) Figure 3: Determination of the pressure wave generated by the heart, reflection wave1 and reflection wave2. The arrow indicates the timing of the three waves. Compared with healthy people (a), the pulse area parameters of patients with type 2 diabetes patients (b) increased significantly We observed that pulse time parameters (T, T1 and T2) had no significant correlation (p>0.05) with baPWV. This result was in agreement with those reported by previous studies describing that those time parameters were mainly determined by the condition of the heart function within one cardiac cycle, not the arterial stiffness change [Lacey and Lacey (1978)]. As for T3, it was the time difference between the pulse wave of upper and lower limbs. T3 showed statistically significant differences between groups and had a strong correlation with arterial stiffness. Previous studies reported that type 2 diabetic patients had a higher risk of peripheral arterial disease [Carmona, Hoffmeyer, Herrmann et al. (2005); Rith-Najarian and Reiber (2000)], which meant a significant increase in arterial stiffness of the lower limbs. This led to an increase in baPWV and reduction in T3. Therefore, T3 could reflect the change in arterial stiffness of the lower extremities in patients with type 2 diabetes. Combined with clinical data of lower-extremity vascular complications in type 2 diabetic patients, such as diabetic foot disease, a comprehensive analysis of T3 and the development of lower-extremity vascular complications should be carried out. This study had several limitations, namely, the relatively limited number of patients with type 2 diabetes and the incomplete information on complications. A comprehensive comparison using additional clinical data on type 2 diabetes is warranted. The selected subjects have limitations in the basic physiological characteristics such as age, height and weight. The selected population cannot reflect the situation of patients with type 2 diabetes mellitus in all age groups and various somatotype. In this study, the statistical analysis results of blood pressure, pulse pressure, and baPWV from 75 subjects were consistent with those in previous studies. The results indicated that although the number of patients was not large, the participants' characteristics could represent the typical characteristics of the arterial vascular system in patients with type 2 diabetes. Expectedly, the pulse pressure and blood pressure of the diabetic group were higher than those of the healthy subjects. Elevated blood pressure and pulse pressure are more common in people with type 2 diabetes than in the general population [Schram, Kostense, Van Dijk et al. (2002); Adler, Stratton, Neil et al. (2000)]. The statistical analysis results of pulse pressure and blood pressure revealed that pulse pressure and systolic blood pressure could be used to determine the physiological status of arteries including arterial stiffness in patients with type 2 diabetes, while diastolic blood pressure had some limitations in this regard, which is in agreement with a previous study [Cockcroft, Wilkinson, Evans et al. (2005)]. The high baPWV value of patients with type 2 diabetes showed increased arterial stiffness compared with that in healthy subjects [Woolam, Schnur, Vallbona et al. 
(1962)], which is in line with the characteristics of arterial physiological and pathological changes in diabetic patients and was the theoretical basis of this study. In conclusion, this study quantitatively demonstrated a significant change in the pulse waveform characteristics of patients with type 2 diabetes and analyzed the correlation between waveform parameters and arterial stiffness. This study showed that pulse waveform characteristics (T3, A, A1 and A2) could be used as indices of arterial stiffness in patients with type 2 diabetes.
2019-04-22T13:13:05.267Z
2018-11-28T00:00:00.000
{ "year": 2018, "sha1": "b3b3148ff6237bf0fb1714c4544ac64961378523", "oa_license": "CCBY", "oa_url": "https://doi.org/10.31614/cmes.2018.04100", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0d4868e6b83f7f758c23afd7d645d7860e23a3c3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
136177569
pes2o/s2orc
v3-fos-license
Qualitative analysis of SBS modifier in asphalt pavements using field samples A series of tests was implemented to analyze the relevant characteristics of common asphalt and unknown asphalt, mainly using Fourier Transform Infrared spectroscopy (FTIR) and a Dynamic Shear Rheometer (DSR) for the chemical composition and rheological properties of the asphalt, respectively. In addition, a series of mechanical tests was performed on the asphalt mixtures, including the indirect tensile strength test and the three-point bending test at low temperature. Experimental results indicated that, compared with the common asphalt, the characteristic absorption peaks of the unknown asphalt appear at 966 cm-1 and 699 cm-1, which are consistent with the SBS modifier. The DSR results indicated that the unknown asphalt's complex modulus is higher and its phase angle is lower. The mechanical tests indicated that several properties of the unknown mixture samples, such as the indirect tensile strength, the low-temperature bending strain and the indirect tensile resilient modulus, are increased by 24.7%∼41.8% compared with the common pavement sample. Comprehensive analysis indicates that SBS modifier exists in the unknown asphalt pavement. Introduction Styrene-butadiene-styrene block copolymer (SBS) is a thermoplastic elastomer rubber that allows for greater performance than that possible with asphalt alone [1]. Due to this property, SBS can improve the asphalt's resistance to rutting, thermal cracking, fatigue damage, stripping, and temperature susceptibility, which makes SBS one of the most promising and widely used polymer modifiers for asphalt. The content of SBS plays an important role in the performance of modified asphalt, so the content of SBS modifier is essential to quality control in construction. In the last decades, several research efforts have investigated the use of SBS modifiers in asphalt. Some studies focused on the stability improvement of the SBS modified asphalt binder to ensure a good compatibility between SBS and asphalt [2] [3]. Other researchers studied the rheological properties of SBS modified asphalt binder and the performance of the mixture, and illustrated the mechanism of SBS modified asphalt [4] [5]. Because polymer content is an essential parameter of the asphalt binder, some researchers investigated methods for the quantification of polymer content in SBS modified asphalt [6] [7]. There are few studies on the analysis of SBS modifier content for pavements after long service. This study is part of the research work on service pavement. The main objectives of this study are: i) to qualitatively analyze SBS modifier in asphalt pavements using field samples, and ii) to investigate the performance of an asphalt mixture that was subjected to normal traffic for 8 years. Sample Preparation Two types of asphalt mixture samples were obtained from Jinnan road in Zhejiang province for the performance tests: common asphalt mixture, which is pure asphalt cement without modifiers such as SBS, and unknown asphalt mixture, for which it needs to be determined whether SBS modifier is present. The core samples (100 mm in diameter) and slab samples (400 mm long × 400 mm wide) were taken from the constructed wearing course. The field test section was subjected to normal traffic for 8 years. After the mechanical testing, they were further used for the extraction process to separate binder from aggregates.
FTIR Fourier transform infrared spectroscopy (FTIR) was recorded on a TENSOR 27 spectrometer (Bruker, Germany) from 4000 to 400 cm-1 by the KBr disk method, at a resolution of 4 cm-1. Dynamic shear rheological characteristic Dynamic shear properties were measured with an MCR102 dynamic shear rheometer (MCR102, AntonPaar, Austria) in a parallel plate configuration with a gap width of 1 mm. Measurements were performed at a fixed frequency of 10 rad/s and temperatures from 46 to 82 °C under controlled strain conditions. Indirect tensile test The indirect tensile strength of specimens was measured at 25 °C and at a loading rate of 50 mm/min by using a Universal Testing Machine (UTM-100). The peak load at failure is recorded and used to calculate the IDT strength of the specimen. Indirect tensile resilient modulus was measured at 40 °C according to ASTM D4123. The frequency of load application was 1 Hz with a load duration of 0.1 s, in order to represent field conditions, and a resting period of 0.9 s. Four samples were made for each of the two kinds of evaluated mixtures and their averages represented the resilient modulus for each mixture. Three point bending test The slabs were first cut from the highway pavement. Then the slab samples were cut in the lab into beam specimens with a length of 250±2.0 mm, a height of 35±2.0 mm and a breadth of 30±2.0 mm. The tests were carried out on the UTM-100 at a temperature of -10 °C with a loading rate of 50 mm/min. In order to reduce the scatter in the mechanical properties of the samples and ensure the reliability of the test results, there were six beams in every group. Chemical compositions analysis by FTIR FTIR tests were conducted on the asphalt extracted from the common asphalt mixture and the unknown asphalt mixture, respectively. The testing results are shown in Fig. 3. Two major bands at 2924 cm-1 and 1460 cm-1 were identified as the stretching vibration and deformation vibration of the C-H bond of hydrocarbon. However, two major bands at 966 cm-1 and 699 cm-1 were observed only for the unknown asphalt, whereas these peaks were absent for the common asphalt. The 966 cm-1 band corresponds to the C=C bond in butadiene, while the 699 cm-1 band identifies the existence of styrene. Previous studies [7] [8] on SBS polymer modification found that the 1375 cm-1 and 810 cm-1 bands of asphalt and the 966 cm-1 and 699 cm-1 bands of SBS could be used for quantification of SBS content. Therefore, the two major bands, 966 cm-1 and 699 cm-1, can be used to detect the presence of SBS in the asphalt binder. This means there were SBS modifiers in the unknown asphalt. As a viscoelastic material, asphalt exhibits either elastic or viscous behavior. The dynamic shear test can be used to characterize the viscous and elastic behavior of the asphalt using the complex modulus (G*) and the phase angle (δ) at different temperatures. Fig. 2 shows the temperature dependency of the complex modulus G* and phase angle δ for the two asphalts over the temperature range 46-76 °C. The results also showed that the unknown asphalt binder has a lower phase angle and higher complex modulus over the temperature range 46-76 °C compared to the common asphalt, especially at relatively high temperatures. This indicates that the common asphalt mixture is more susceptible to rutting. The complex modulus of the unknown asphalt at 46 °C is 155.99 kPa, compared with the common asphalt's 82.39 kPa. The results above indicate that the deformation resistance of the unknown asphalt is superior. The phase angle of the unknown asphalt increases from 61.7° to 73.1° over the temperature range 46-76 °C, compared with the common asphalt's 64.5° to 79.5°. This means that the common asphalt binder has a significantly higher phase angle over the temperature region. The increase of the complex modulus and reduction of the phase angle contribute to the high-temperature stability of asphalt binders and mixtures. Measurements of δ are generally considered to be more sensitive to chemical structure than the complex modulus [9]. The phase angle of the unknown asphalt at 64 °C is 68.4°. These data are in good agreement with other published results. Ma's study [10] indicated that the phase angle of asphalt modified by SBS after the pressure aging vessel test at 64 °C was about 70.3°. The complex modulus and phase angle of the unknown asphalt in Figure 2 are consistent with the rheological properties of SBS modified asphalt. A rheological parameter, G*/sinδ, has been regarded as the rutting parameter of asphalt binder in the American Strategic Highway Research Program (SHRP). A high G*/sinδ value was found to correlate with high rutting resistance. Figure 3 illustrates G*/sinδ measured over the temperature range 46-76 °C. As has been indicated in this study, the modulus of the unknown asphalt was higher and the phase angle was lower, so the G*/sinδ values of the unknown asphalt binder were increased. The unknown asphalt shows an increased rutting parameter (G*/sinδ) at high temperatures, indicating a higher rutting resistance of the binder. Tensile strength testing is used for evaluating the asphalt mixture's fatigue potential and moisture susceptibility. Previous studies have indicated that the tensile strength of asphalt mixture is related to cracking performance [11] [12]. Higher tensile strength means that the asphalt pavement can tolerate higher strains before failing. As shown in Table 1, the average indirect tensile strength of the unknown asphalt mixture is about 24.7% higher than that of the common mixture. Here the unknown asphalt mixture shows 1.97 MPa and the common mixture shows 1.58 MPa. The unknown asphalt mixture exhibits a relatively higher indirect tensile strength, which indicates better cracking resistance compared to the common asphalt mixture. Mechanical property Three point bending tests were performed to evaluate the low temperature cracking of the asphalt concretes. As shown in Table 1, the failure strain of the unknown asphalt mixture is about 34.0% higher than that of the common mixture. In general, asphalt mixtures that exhibit greater strain at failure in the low temperature range have better flexibility, which leads to superior resistance to cracking. Resilient modulus is a measure of a material's response to load and deformation. Generally, a higher modulus indicates greater resistance to deformation. The results also show that the unknown asphalt mixture yields a higher resilient modulus value compared to the common mixture at 40 °C. The resilient modulus is 5741 MPa for the unknown asphalt mix and 4050 MPa for the common mixture, which indicates greater resistance to deformation compared to the common asphalt mixture. For comparing the mechanical properties obtained from different tests, the data from indirect tensile strength, bending strain, and resilient modulus for the common and unknown asphalt mixtures, respectively, were ranked, as summarized in Table 1.
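For reference, the SHRP rutting parameter G*/sinδ discussed in the rheological results can be computed directly from the measured complex modulus and phase angle. In the sketch below, the pairing of the cited 46 °C modulus values with the lowest reported phase angles is an assumption made purely for illustration.

import math

def rutting_parameter(g_star_kpa, delta_deg):
    """G*/sin(delta), the SHRP rutting parameter, in kPa."""
    return g_star_kpa / math.sin(math.radians(delta_deg))

# Illustrative pairings at 46 degC (modulus and phase angle pairing assumed):
print(rutting_parameter(155.99, 61.7))  # unknown (SBS-suspected) asphalt binder
print(rutting_parameter(82.39, 64.5))   # common asphalt binder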
Test results clearly identified that the unknown asphalt mixture was better than the common asphalt mixture. These properties of the unknown asphalt mixture are consistent with the effects of an SBS modified asphalt mixture. Conclusion The following results can be concluded: (1) The chemical composition analysis of the unknown asphalt shows two major bands at 966 cm-1 and 699 cm-1 in the FT-IR spectra that are absent for the common asphalt, indicating that the unknown asphalt contains SBS modifier. (2) The rheological behavior of the unknown asphalt investigated using the Dynamic Shear Rheometer shows a significantly higher complex modulus and lower phase angle compared to the common asphalt, resulting in increased high-temperature stability. (3) The mixture test results demonstrated up to 24.7% higher indirect tensile strength at 25 °C, up to 34.0% higher bending strain, and 41.8% higher indirect tensile resilient modulus at 40 °C than the common asphalt mixture. These properties of the unknown asphalt mixture are consistent with the effects of an SBS modified asphalt mixture. (4) Based on the analysis of the chemical composition of the asphalt materials and the confirmation by rheological and mechanical properties, it is found that the asphalt used in the unknown asphalt mixture section contains SBS modifier.
2019-04-29T13:17:06.670Z
2017-06-01T00:00:00.000
{ "year": 2017, "sha1": "f9e7dd659b734808df4404333f61495819c516e4", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/207/1/012100", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "97541c1c7539db2c672da3539bd4bc83ec10ef1d", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
71477
pes2o/s2orc
v3-fos-license
Biological in-vivo measurement of dose distribution in patients' lymphocytes by gamma-H2AX immunofluorescence staining: 3D conformal- vs. step-and-shoot IMRT of the prostate gland Background Different radiation-techniques in treating local staged prostate cancer differ in their dose- distribution. Physical phantom measurements indicate that for 3D, less healthy tissue is exposed to a relatively higher dose compared to SSIMRT. The purpose is to substantiate a dose distribution in lymphocytes in-vivo and to discuss the possibility of comparing it to the physical model of total body dose distribution. Methods For each technique (3D and SSIMRT), blood was taken from 20 patients before and 10 min after their first fraction of radiotherapy. The isolated leukocytes were fixed 2 hours after radiation. DNA double-strand breaks (DSB) in lymphocytes' nuclei were stained immunocytochemically using the gamma-H2AX protein. Gamma-H2AX foci inside each nucleus were counted in 300 irradiated as well as 50 non-irradiated lymphocytes per patient. In addition, lymphocytes of 5 volunteer subjects were irradiated externally at different doses and processed under same conditions as the patients' lymphocytes in order to generate a calibration-line. This calibration-line assigns dose-value to mean number of gamma-H2AX foci/ nucleus. So the dose distributions in patients' lymphocytes were determined regarding to the gamma-H2AX foci distribution. With this information a cumulative dose-lymphocyte-histogram (DLH) was generated. Visualized distribution of gamma-H2AX foci, correspondingly dose per nucleus, was compared to the technical dose-volume-histogram (DVH), related to the whole body-volume. Results Measured in-vivo (DLH) and according to the physical treatment-planning (DVH), more lymphocytes resulted with low-dose exposure (< 20% of the applied dose) and significantly fewer lymphocytes with middle-dose exposure (30%-60%) during Step-and-Shoot-IMRT, compared to conventional 3D conformal radiotherapy. The high-dose exposure (> 80%) was equal in both radiation techniques. The mean number of gamma-H2AX foci per lymphocyte was 0.49 (3D) and 0.47 (SSIMRT) without significant difference. Conclusions In-vivo measurement of the dose distribution within patients' lymphocytes can be performed by detecting gamma-H2AX foci. In case of 3D and SSIMRT, the results of this method correlate with the physical calculated total body dose-distribution, but cannot be interpreted unrestrictedly due to the blood circulation. One possible application of the present method could be in radiation-protection for in-vivo dose estimation after accidental exposure to radiation. Introduction In radiotherapy, high doses have to be delivered to the tumour. However, sparing of healthy tissue and organs at risk is essential. Variations can be made by increasing the number of radiation beams, which leads to differences in dose distribution between two radiation-techniques: the three dimensional conformal (3D) and the Step-andshoot-IMRT (SSIMRT). According to the number of beams, the irradiated volume as well as the dose-distribution can change. Smaller volume has to be compensated by higher dose to reach the prescribed target dose inside the tumor. In our prostate radiotherapy protocol, the 3Dconformal therapy contains 4 beams, whereas in SSIMRT, dose is distributed within 7-9 beams. The distribution of low doses is broader in a larger volume in SSIMRT. 
Using the gamma-H2AX stain to detect DNA-double strand breaks (DSB) in human lymphocytes is known as an established method [1]. Localized near or at irradiation induced DSB, the H2AX histones are phosphorylated sensitively to provide signalling within the DNA DSB-repair. As one DSB represents one gamma-H2AX focus, it is possible to visualize DSB immunocytochemically using a fluorescence microscope [2,3]. The number of foci can be used as a reliable parameter to estimate the delivered dose, since it increases linearly with the induction of DSB [4]. These cellular responses are equally efficient at different doses. But there is an evidence, that the activation of DNA-repair needs a certain level of DNA damage; approximate 1 mGy [5]. It has to be considered, that gamma-H2AX foci are an indirect marker and that equalization with the exact number of DSB, especially after repair, is currently a debate [6,7]. Lymphocytes can easily be taken from the patient's peripheral vein and, due to the described method, used as biological dosimeters. The focus of the study lies on the dose distribution within the lymphocytes measured indirectly by gamma-H2AX foci in patients undergoing radiotherapy in the prostate region. Whether the results can serve as a surrogate for dose distribution in the irradiated body volume and therefore for a new method of biological dosimetry must be discussed critically. Limitations have to be taken into consideration, e. g. circulation of the lymphocytes in the body during irradiation [4]. The purpose of this study is to visualize the cellular effect of ionizing radiation during prostate cancer treatment, by evaluating the dose-distribution using the gamma-H2AX immunodetection in human lymphocytes. If possible, we want to verify the differences in dose distribution between 3D conformal and SSIMRT with biological methods. Patients and Irradiation Individuals analyzed in this study were all males, with a median age of 71.4 years (range 51.1 -83.6), and had an indication for irradiation of the prostate region. This selection was made, because the DNA damage level depends on the anatomic region [8]. Exclusion criteria were a prior radiation in the patients' medical history (so no exposition in advance could interfere with the test) or the additional radiation of lymphatic regions of the pelvis. For either treatment method (3D, SSIMRT), 20 patients were recruited. All patients gave their informed consent. The study was approved by the ethics committee of the University hospital of Heidelberg. The patients' treatment was not influenced by the study and indications for the different modalities were made clinically. Further patient data comparing 3D with SSIMRT is shown in Table 1. The body volume was calculated by the formula as it is published for male patients [9]: The radiation was performed by a department's linear accelerator (Oncor, Siemens). Table 2 contents the technical parameters of the two irradiation modalities. To calibrate absolute doses to the investigated number of gamma-H2AX foci, blood of 5 volunteers was irradiated in-vitro for 3 independent measurements on different days. Utilization of volunteers was necessary because of intended test repetition, not suitable for patients. Interindividual differences were considered by investigating 5 subjects. The venous blood was irradiated with doses of 0.02, 0.1, 0.5, 1 and 2 Gy by the same linear accelerator used for the irradiations of the patients. The object-tofocus distance was 1.58 m, the radiation field 10 × 5 cm. 
Radiation absorbing plates were stacked to a 20 cm tower to allow very low dosage; so the beam on time reaches the operating range of the linear accelerator after the stabilization phase. By varying the time of radiation, different doses were applied. Dose was measured by relative online dosimetry (DIN 6800-2) by using an ionization chamber (thimble 0,3 cm 3 , PTW, Freiburg, Germany). Lymphocyte separation and immunofluorescence analysis 7.5 ml of patient's blood were taken from a peripheral vein 10 min after the first fraction of the treatment. The blood circulation was given 10 minutes after fraction to mix the radiated lymphocytes with the rest that hadn't been exposed to radiation. Non-exposed controls were also taken before radiation. The protocol of staining gamma-H2AX by indirect immunofluorescence is published in many papers and its purpose for detecting DNA DSB validated [10, 11, 12, 13, 14 and 15]. Lymphocytes were separated from the blood by layering 5 ml of heparinized, venous blood onto 3 ml of Ficoll and centrifuging at 2300 rpm for 20 min at 37°C. The lymphocytes were washed in 6 ml of PBS-buffer and centrifuged at 1500 rpm for 10 min (37°C). After aspirating the buffer, the cell-pellet was re-suspended in a 1:15 ratio. 200 μl of this suspension, containing about 300,000 lymphocytes, were spread onto a clean slide by means of the Cytospine Centrifuge at 22 rpm for 4 min (room temperature). Fixating the lymphocytes took 10 minutes (room temperature) in fixation buffer (3% paraformaldehyde, 2% sucrose in PBS). For all experiments, this step was performed 2 hours after finishing radiation to allow comparability between the samples. In order to allow the antibodies getting inside the nucleus, the cells were permeabilized for 4 min at 4°C (permeabilisation buffer: 20 mM HEPES (pH 7.4), 50 mM NaCl, 3 mM MgCl 2 , 300 mM sucrose, and 0.5% Triton X-100). Samples were incubated with anti-gamma-H2AX antibody (Anti-Phospho-Histone-gamma-H2AX Monoclonal IgG-mouse-Antibody (# 05-636), Upstate, Charlottesville, VA) at a 1:500 dilution for 1 h, washed in PBS 4 times, and incubated with the secondary antibody (Fluoresceiniso-thiocyanat (FITC)-conjugate, Alexa Fluor 488 Goat-anti-mouse-IgGconjugate, Molecular Probes, Eugene, OR) at a dilution of 1:200 for 0.5 h. Both incubations took place at 37°C. Cells were then washed in PBS four times at room temperature and mounted by using VECTASHIELD mounting medium including the nucleus stain DAPI (Vector Laboratories). Thus, the gamma-H2AX foci could be correlated with the nuclei. The slides were viewed with an × 100 objective (fluorescence-microscope Laborlux S, Leica Microsystems CMS GmbH, Wetzlar, Germany). The spots inside the nucleus were counted by eye because of the possibility to focus manually through the whole nucleus by microscope to detect each focus in the 3D-room. All experiments were counted by one and the same, trained person. For each of the samples, 300 lymphocytes were analyzed within the patient samples with its heterogeneous dosedistribution. All nuclei were morphologically considered by eye (cell form and size) to be properly shaped and in G0/1-phase with haploid chromosome-set. Due to their homogenous radiation, in-vitro samples and controls were investigated by counting 50 cells each experiment and measuring point. Three independent experiments were done. Data and statistical analysis For every patient, gamma-H2AX foci of the lymphocytes were counted. 
For every count of gamma-H2AX foci per nucleus, the averaged relative number of cells was calculated from the 20 patients of each group (3D and SSIMRT). The calibration curve involved five subjects irradiated at six different doses in three independent measurements. Background foci levels were subtracted. As the relationship between dose application and irradiation-induced gamma-H2AX foci formation is linear [4], a linear regression curve was generated, which implies the following general formula: Y = a * X (Y = number of gamma-H2AX foci per nucleus, X = dose in Gy, a = slope of the regression line). This linear regression curve was used to calculate an equivalent dose for every count of irradiation-induced gamma-H2AX foci per nucleus in patients' lymphocytes. Background foci were subtracted again (controls before irradiation). In addition, the values of gamma-H2AX foci were converted into relative doses, where 100% corresponds to the given dose of 2.0 Gy (3D) and accordingly 2.17 Gy (SSIMRT). The calibration concerns only the single lymphocyte, irrespective of body site or blood flow. In a further integral diagram, the relative number of lymphocytes with gamma-H2AX foci was plotted against the relative applied dose in %. Each point shows the cumulative number of lymphocytes exposed to a certain dose, or more. This visualization of the distribution of irradiated lymphocytes was defined as the dose-lymphocyte-histogram (DLH). The original dose-volume-histograms (DVH) were modified in order to compare them to our generated DLHs: in general, the volume percentage in the DVH refers to the contoured volume of the CT-scanned part of the body (aortic bifurcation to the thigh). The data were standardized by referring them to the individual's total body volume, allowing an interpretation equivalent to the DLH. With the rule of proportion, the values of the contoured volumes can be transferred to values of total body volumes. Formula: % total body volume = % contoured volume × [contoured volume (l) / total body volume (l)] The statistics were done with Sigma Plot 10.0®. The level of significance was set at p < 0.05 using a Student's t-test. In-vitro measurements for calibration curve The relation of dose and mean number of gamma-H2AX foci per nucleus (see also Figure 1) of all 5 subjects' lymphocytes follows the same characteristic without significant differences (p > 0.05), which confirms the absence of inter-individual differences [16]. The estimated regression line is used as a calibration curve (Figure 2) and its formula is: Y = 7.859877 * X (Y = number of gamma-H2AX foci per nucleus, X = dose in Gy). For example, 0.5 Gy correlates with a mean number of gamma-H2AX foci per nucleus of 4.9, 1 Gy with 8.6 and 2 Gy with 16 foci, 2 hours after irradiation. In-vivo measurements of patients' lymphocytes Across the investigated lymphocytes of 20 patients per group, the mean number of gamma-H2AX foci per nucleus is 0.49 (3D) and 0.47 (SSIMRT) in the irradiated samples (Figure 3), while the non-irradiated controls show 0.06 (3D) and 0.05 (SSIMRT). The number of foci in the samples after irradiation was, for all patients, larger than the number of foci in the non-irradiated control samples. The bars show a significant difference between irradiated samples and the control (p ≤ 0.05). The mean number of gamma-H2AX foci in both radiation modalities is the same (p > 0.05). Dose-lymphocyte histogram (DLH) The DLH is a cumulative histogram; each point shows the cumulated number of lymphocytes that have been exposed to a certain dose, or more (Figure 4).
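A minimal sketch of how such a DLH can be assembled from per-cell foci counts, using the calibration line above and subtracting a background level; the foci counts generated here are hypothetical and only stand in for the 300 scored cells of one patient.

import numpy as np

SLOPE = 7.859877          # foci per nucleus per Gy (calibration line above)
PRESCRIBED_DOSE_GY = 2.0  # 100% corresponds to the applied fraction dose (3D)

def dose_lymphocyte_histogram(foci_counts, background_foci, dose_grid_pct):
    """Cumulative percentage of lymphocytes exposed to at least each relative dose.
    foci_counts holds one integer per scored nucleus; the background level is
    subtracted before converting foci to dose via the linear calibration."""
    corrected = np.clip(np.asarray(foci_counts, float) - background_foci, 0, None)
    dose_pct = corrected / SLOPE / PRESCRIBED_DOSE_GY * 100.0
    return [float(np.mean(dose_pct >= d)) * 100.0 for d in dose_grid_pct]

# Hypothetical foci counts for 300 scored cells of one patient.
rng = np.random.default_rng(1)
foci = rng.poisson(0.5, 300)
dlh = dose_lymphocyte_histogram(foci, background_foci=0.06,
                                dose_grid_pct=range(0, 101, 10))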
Background foci-levels have been subtracted, since they were also subtracted in the calibration line. The curves cross at about 20% of the described dose, while the SSIMRT curve lies above the 3D curve at lower doses and below it at higher doses. The significant difference is obvious between 40% and 90% of the delivered dose: here, the SSIMRT curve lies significantly below the 3D curve (p ≤ 0.05). There is no difference in relative number of lymphocytes, which get more than 95% of the applied dose. The percentage of lymphocytes exposed to more than 50% of the prescribed dose is 1.8% in 3D technique, compared to 0.9% in SSIMRT. Dose-volume histogram (DVH) The curves' crossing point in the DVH takes place at just below 20% of the described dose, whereas the SSIMRT lies above the 3D at 0%-20% and significantly (p ≤ 0.05) below it between 30%-95% ( Figure 5). The percentage of volume exposed to more than 50% of the prescribed dose is 1.7% in 3D technique, compared to 0.4% in SSIMRT. Discussion Lymphocytes of patients receiving irradiation for the treatment of prostate cancer have been analyzed by scoring gamma-H2AX foci. A distribution of delivered dose to the lymphocytes is shown and visualized in the graphics above. Similarity between DLH (dose-lymphocyte-histogram) and DVH (dose-volume-histogram) has been found. The biological measurement on behalf of the human lymphocytes corresponds to the distribution calculated by the physicists: more low-dose-delivery is observed for the SSIMRT compared to the 3D. At the same time, a lower distribution of 30%-90% of the applied dose can be reported for the SSIMRT. The advantage of this method is an easy and fast access to the required material without any massive medical interventions. The method allows an in vivo estimation respectively proof of the dose distribution calculated by the therapy planning system. The challenge is that every patient has to be irradiated at a comparable volume and same site of the body. Attention also has to be paid to the repair kinetics and withdraw of gamma-H2AX foci, which make it necessary to stop cell metabolism after a certain duration post irradiation. Due to this context, we fixed all cells 2 h after irradiation (in-vivo and in-vitro) to allow comparability between the samples. However, the determination of the probability of lymphocytes' presence in the body tissue is difficult, due to the lymphocytes' kinetics (circulation in the blood vessels), migration and adhesion to the vessel wall. These circumstances have been described by Sak et al. in detail [4]. It has to be considered, that lymphocytes in in-field capillaries move slower and receive more dose, than fast moving lymphocytes in larger vessels. Sak et al. described differences in mean numbers of gamma-H2AX foci in lymphocytes depending on irradiated target sites, e.g. brain and thorax. In our study, target site was no variable parameter, since we compared 3D and SSIMRT only in prostate cancer treatment. The SSIMRT's beam-on-time differed from the 3D's by a factor 5 ( Table 2). Assuming a blood circulation time of one minute, this fact causes inaccuracy while measuring the actual dose distribution. On the other hand, table time in both modalities differs by factor 1.4. During 11.5 vs. 16.3 min of table time, lymphocytes in both groups have the chance of being radiated more than one time. The cumulative formation of gamma-H2AX foci can lead to a false high result in evaluating dose distribution. 
In order to attempt a correction towards real dose distribution in SSIMRT, one would expect even less cells exposed to higher levels of dose. This correction would amplify the differences between 3D and SSIMRT, which again correspond with the physical model. Statement implying an absolute dose in Gy used for dosimetry, cannot be recommended without doubts, due to the following issues: in the DLH (Figure 4) higher lymphocyte-percentages are plotted, compared to the DVH ( Figure 5). The DLH shows a radiation dose of 5% in 7-9% of lymphocytes (DLH), whereas only about 5% of the body volume receives the same dose (DVH). Doses of above 100% can be observed in the DLH, too. This phenomenon can be explained by the possibility of repeated dose exposure of some lymphocytes as explained above. The linear correspondence between induction of γH2AX foci and the delivered dose has already been verified and practiced especially for low doses [4,17]. Exceptions from this rule are described and due to different irradiation conditions or different kinds of ionizing irradiation [18]. The visualization, which is shown for computed tomography examinations of different sites (1), was now extended to the doses of one fraction of radiotherapy for different techniques. Flow cytometry has also been performed in order to measure delivered dose by γH2AX stain [16], however, in our case it didn't seem appropriate: The intensity of the gamma-H2AX foci varied and could have led to errors while measuring the background level of fluorescence. In our opinion, a concrete number of foci per nucleus is needed to compare dose distribution exactly. Jucha et al. evaluated 2-dimentional pictures of the stained lymphocytes using special software [19], but we set great store by being able to zoom through the slide under the microscope and looking at the complete 3-dimentional nucleus in order to detect every gamma-H2AX foci. For this reason in our experiments foci were counted manually by eye with a fluorescence-microscope. By creating a dose-lymphocyte histogram (DLH), the gamma-H2AX staining method allows the estimation of the dose distribution after irradiation. One possible application of the present method could also be in radiation-protection for in-vivo dosimetry after accidental exposure to radiation. In case of accidental irradiation, background foci level cannot be determined and therefore cannot be subtracted in the DLH. In this situation background foci level should also not be subtracted in the calibration line. In this manner the error due to background foci level can be reduced, however individual differences of background foci levels remain unconsidered. Another possibility to deal with this limitation is to take blood for background foci level examination several weeks after the exposure, when the circulating lymphocytes have been substituted naturally. Conclusion Measurement of γH2AX foci in patients' lymphocytes after prostate irradiation has been performed and dose distribution within the lymphocytes shown. SSIMRT delivers more doses below 20% and less between 30%-90% than 3D. This new biological in-vivo method confirmed the reduction of medium-dose-exposure for normal tissue by SSIMRT. The relation between actually distributed dose (DVH) and distribution of gamma-H2AX foci in lymphocytes (DLH) shows similarity but cannot be interpreted unrestrictedly due to the blood circulation. Author details 1 Department of Radiation Oncology, University of Heidelberg, Heidelberg, Germany. 
2 Clinical Cooperation Unit Radiation Oncology, DKFZ, Heidelberg, Germany. Authors' contributions FZ conceived of the study, carried out patients' mentoring and experiments, and drafted the manuscript. BS carried out the gamma-H2AX experiments and helped to draft the manuscript. CT helped to draft the manuscript. FS, GM, KW, PH and JD participated importantly in the conception of the study and provided informatics and statistical support for data analysis. KH participated importantly in the conception and design and helped to draft the manuscript. All authors read and approved the final manuscript.
2017-06-23T22:56:15.325Z
2011-06-07T00:00:00.000
{ "year": 2011, "sha1": "267020b242432220e1b1fe9d5d62e254c1b73818", "oa_license": "CCBY", "oa_url": "https://ro-journal.biomedcentral.com/track/pdf/10.1186/1748-717X-6-62", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bb2303a2007c07695b0abf1d824012694be46971", "s2fieldsofstudy": [ "Medicine", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
267390497
pes2o/s2orc
v3-fos-license
A Novel Approach to Simulating Realistic Exoskeleton Behavior in Response to Human Motion: Simulation models are a valuable tool for exoskeleton development, especially for system optimization and evaluation. They allow an assessment of the performance and effectiveness of exoskeletons even at an early stage of their development without physical realization. Due to the close physical interaction between the exoskeleton and the user, accurate modeling of the human–exoskeleton interaction in defined scenarios is essential for exoskeleton simulations. This paper presents a novel approach to simulate exoskeleton motion in response to human motion and the interaction forces at the physical interfaces between the human and the exoskeleton. Our approach uses a multibody model of a shoulder exoskeleton in MATLAB R2021b and imports human motion via virtual markers from a digital human model to simulate human–exoskeleton interaction. To validate the human-motion-based approach, simulated exoskeleton motion and interaction forces are compared with experimental data from a previous lab study. The results demonstrate the feasibility of our approach to simulate human–exoskeleton interaction based on human motion. In addition, the approach is used to optimize the support profile of an exoskeleton, indicating its potential to assist exoskeleton development prior to physical prototyping. Introduction Simulation models are useful tools for designing and optimizing exoskeletons by providing insight into their performance and effectiveness. Multibody and control system simulations are used primarily to evaluate the mechanical, actuator, and control behavior of the exoskeleton. Biomechanical simulation based on a human musculoskeletal model helps to understand the effect of the exoskeleton on the human body [1][2][3][4][5][6][7][8]. Due to the close interaction with the user, the user test is a conventional way to evaluate the performance of the exoskeleton. Compared to costly and time-consuming user testing, simulation models allow testing of extreme system configurations without compromising user health. This is especially important when developing rehabilitative exoskeletons for fragile patients. Moreover, simulation-based virtual prototyping reduces physical prototyping costs and iterations. Exoskeletons interact closely with the human body and their movement is determined by human motion. Therefore, dynamic simulation of the human-exoskeleton interaction, especially the reaction of the exoskeleton to human motion, is crucial for mechanical and control design, as well as for optimization of the exoskeleton. Two groups of approaches can be identified in the literature to model the physical interaction between the human and the exoskeleton.
The first group models the physical human-exoskeleton interaction by importing the 3D model of the exoskeleton into a musculoskeletal human model. These approaches are often used in parametric studies to optimize exoskeleton hardware design [3][4][5][6][7][8], where the exoskeleton segments move with the coupled human model. The design parameters most commonly analyzed are spring stiffness [3][4][5], range of motion (ROM) [6], and material properties [8]. Additionally, the interaction between the exoskeleton and the musculoskeletal human model is also used for user-oriented control optimization [9][10][11]. Several studies model a rigid connection between the exoskeleton and the human model [6,8], where relative movement at the physical interface is not possible. These approaches assume that the human and exoskeleton joints are well aligned [6] or that there is some flexibility in the exoskeleton segments [8]. In other studies, the exoskeleton model is linked to the human model with additional degrees of freedom (DOF) [1,2,4,5,7]. This allows for relative motion between human and exoskeleton segments, thus approximating their connection at the physical interface in real operation. The interaction force at the physical interface is typically determined by contact models between two bodies, with the force calculated by a spring-damper law [1,5]. The analysis of this group of approaches mostly focuses on the biomechanical impact of exoskeletons on the human body and helps improve the exoskeleton's behavior concerning the user. However, using only musculoskeletal simulation has limitations in exploring the technical performance of exoskeletons, such as the robustness of the mechanical components and the computational efficiency of the controller.

The second group of approaches focuses on the technical behavior of the exoskeleton in interaction with humans. Mosconi et al. use human joint movements directly as the joint movements of the exoskeleton under the assumption of perfect joint alignment between humans and exoskeleton [12]. Chen et al. use a mathematical model to calculate the trajectory of the human foot for the dynamic simulation of a lower extremity exoskeleton [13]. Simplified 2D planar human models with rigid segments are often constructed for the dynamic simulation of the lower extremity [14,15]. Most approaches in this group build a 3D kinematic human model with rigid segments and couple it to the exoskeleton multibody model [16][17][18][19][20]. To reduce computational costs, these human models mostly use simplified kinematics and have a limited ability to mimic real human motions. The literature shows that it is important to simulate the behavior of the exoskeleton in response to human motion since the movement of the exoskeleton depends primarily on the motion of the user. However, building a human model within the exoskeleton simulation to mimic real human motion requires extensive effort.
The challenge of merging two models into a unified, monolithic simulation model without compromising their detailed representations is common for both groups of approaches discussed above. Therefore, there is a strong need for approaches that allow the simulation of exoskeleton behavior and human motion in a distributed but coordinated manner. In the domain of distributed simulation, the "gluing perspective" has demonstrated efficacy in establishing cohesive interactions between individual simulations of multibody systems [21][22][23]. To extend this to simulating exoskeleton behavior separate from the human model, an approach is needed that connects human motion to exoskeleton simulation and incorporates the physical interaction between the exoskeleton and the human body.

The present study addresses this need and introduces a novel approach to the dynamic simulation of exoskeleton behavior in response to realistic human motion. Our approach seamlessly integrates human motion into the exoskeleton simulation, avoiding the need to embed a human model. Furthermore, our approach allows us to simulate the realistic movements of the exoskeleton arms and interaction forces resulting from the relative motion between the user and the exoskeleton. A real shoulder exoskeleton is used as an example for the development and validation of this approach.
Human-Motion-Based Approach for Exoskeleton Simulation The presented approach is developed for simulative evaluation and optimization of exoskeleton behavior based on human motion, especially for the early stage of exoskeleton development, when a physical prototype is not available. As mentioned in the introduction, the motion of an exoskeleton is highly dependent on the motion of the user, and it is beneficial to simulate the behavior of the exoskeleton in response to human motion. Our approach constructs a human-motion-based exoskeleton simulation model that consists of three key elements (see Figure 1): a multibody model to describe the technical details of the exoskeleton, an interface to import human motion in the form of marker positions, and a human-exoskeleton interaction model to simulate the physical contact between the user and the exoskeleton as well as their interaction forces. In this way, no additional digital human model is required within the exoskeleton simulation model for human motion generation to realize human-exoskeleton interaction. The required human motion can be acquired by a motion capture system or simulated in a musculoskeletal human model. The multibody model is derived from the CAD design of the exoskeleton.

To validate the feasibility of this approach, this work uses a real shoulder exoskeleton Lucy [24] as an example to develop the human-motion-based simulation model, so that experimental data from user testing can be collected for comparison with the simulated data. The validation method is described in Section 3. The procedure for evaluating and optimizing the exoskeleton's support profile for a specific task using this human-motion-based simulation approach is demonstrated as an example in Section 4. The results of the validation and the task-specific optimization are presented in Section 5.
Multibody Model of a Shoulder Exoskeleton The shoulder exoskeleton Lucy is developed to assist users in raising and holding the arm for work at head level or above. The multibody exoskeleton model is built in MATLAB Simscape R2021b (MathWorks Inc., Natick, MA, USA), based on the CAD model of the exoskeleton Lucy built in SolidWorks 2022 SP 5.0 (SolidWorks Corporation, Concord, MA, USA). The key mechatronic features of Lucy are replicated in the multibody model, such as the joints' range of motion (ROM), the possible mechanical adjustments, and control settings. The multibody model retains one active joint and two passive joints (red and blue curved arrows in Figure 2) in each exoskeleton arm, as well as three possibilities for individual adjustments in mechanical structure (yellow arrows in Figure 2). The model uses human anthropometric data such as upper arm length, shoulder width, and torso length to parameterize the position of the armrests, the shoulder bars, and the lap belt. Virtual sensors are added to the two active joints to measure the angular movements of the exoskeleton arms, which are used by the control model to determine the pressures for actuating the two pneumatic cylinders. A mathematical model derived from the technical characteristics of the cylinder is used to calculate the cylinder force. Cylinder forces create supporting moments at the active joints of each exoskeleton arm, helping to lift the human arm. Meanwhile, passive joints increase the degree of freedom of the shoulders. The control model contains the parameters for the support level and the support profile in relation to arm movement and the work task. The support level is the percentage of the maximum available cylinder force: 0% means that the exoskeleton is not providing any support, while 100% means that the exoskeleton is operating at full power of 7.8 N•m peak support torque with 6 bar supply pressure.
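The cylinder force model itself is not reproduced here, but the relation between supply pressure, support level, and peak support torque can be illustrated with a simplified static calculation. The piston area and lever arm below are assumed values, chosen only so that the numbers line up with the stated 7.8 N•m peak at 6 bar; they are not taken from the Lucy design.

```matlab
% Simplified static sketch (not the authors' cylinder model): cylinder force
% from supply pressure and an assumed effective piston area, scaled by the
% support level, then converted to a joint torque via an assumed lever arm.
supplyPressure = 6e5;          % Pa (6 bar supply pressure)
pistonArea     = 1.3e-4;       % m^2, assumed effective piston area
leverArm       = 0.10;         % m, assumed moment arm at the active joint
supportLevel   = 1.0;          % 1.0 corresponds to 100 % support level

cylinderForce = supportLevel * supplyPressure * pistonArea;   % N
supportTorque = cylinderForce * leverArm;                     % N*m
fprintf('Support torque: %.1f N*m\n', supportTorque);         % ~7.8 N*m
```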
Interface to Human Motion To simulate the movements of the exoskeleton arm, the upper arm movements of the user are required. The movements of the upper arm are imported into the exoskeleton multibody model through the positions of four virtual markers (humerus_rup, humerus_rdn, humerus_lup, humerus_ldn in Figure 3a) from a digital human model (DHM) [25] in OpenSim 4.3 (OpenSim, Stanford, US). It is a generic full-body model built from partial models that specify the general musculoskeletal geometry of different parts of the human body. Its upper extremities are modeled by Holzbaur et al. [26] with 15 degrees of freedom for the shoulder, elbow, forearm, wrist, thumb, and index finger. The kinetic simulation of the DHM is based on the motions recorded by a motion capture system (Vicon Bonita, Oxford Metrics Ltd., Oxford, UK) during a previous lab study [27]. The marker placement follows the Vicon guide for the full-body model [28].

The global coordinate systems (GCSs) of the human and exoskeleton models are different. In addition, the exoskeleton moves and rotates with the user, while the exoskeleton model is fixed to its GCS. Thus, the marker positions described in the GCS of the human model need to be transformed into a defined local coordinate system (LCS) for the exoskeleton model. Two reference points are defined on Lucy (see Figure 3b) to define the LCS: Lucy_MC is the origin of the LCS, and Lucy_Mleft defines the direction of the x-axis of the LCS. Lucy_MC is on the middle line of Lucy, which should align with the spine of the user. Lucy_Mleft and Lucy_MC build a line along the shoulder bar, which is parallel to the user's shoulder. In the previous study mentioned above, two markers are physically placed on the reference points on Lucy, and their positions are recorded by the motion capture system and then imported into the DHM. Alternatively, as shown in Figure 3b, they can be virtually placed in the DHM by simply attaching the 3D model of the exoskeleton as a whole to the DHM without modeling each individual joint of the exoskeleton.
The marker positions from OpenSim are transformed from the GCS in the human model to the defined LCS in the exoskeleton model in three steps. First, the marker positions are translated from the origin of the GCS to the origin of the LCS:

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} x_{GCS} \\ y_{GCS} \\ z_{GCS} \end{pmatrix} - \begin{pmatrix} x_{MC} \\ y_{MC} \\ z_{MC} \end{pmatrix}$$

where (x_MC, y_MC, z_MC) is the position of Lucy_MC in the GCS of the DHM. Second, the rotation of the LCS following the upper body movements, e.g., rotation and lateral flexion, is determined by calculating the rotation matrix R_1 that aligns the vector v (from Lucy_MC to Lucy_Mleft, the orange vector in Figure 4a) with the x-axis of the LCS. R_1 is computed by two MATLAB functions: (1) the function vrrotvec determines the rotation axis u and the angle α of the rotation around u (see Figure 4a, left); (2) the function vrrotvec2mat converts the rotation axis u and angle α into the rotation matrix R_1. Finally, to ensure that the y-axis of the LCS in the exoskeleton model is parallel to the y-axis of the GCS in the human model when the human is standing upright in the neutral position, a rotation about the x-axis of the LCS is required (see Figure 4b). To determine the rotation angle θ (see Figure 4b) around the x-axis, an additional marker T10 placed on the 10th thoracic vertebra is introduced. The second rotation matrix R_2 is given by

$$R_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}$$

The final transformation is summarized as follows:

$$\begin{pmatrix} x_{LCS} \\ y_{LCS} \\ z_{LCS} \end{pmatrix} = R_2\,R_1 \left[ \begin{pmatrix} x_{GCS} \\ y_{GCS} \\ z_{GCS} \end{pmatrix} - \begin{pmatrix} x_{MC} \\ y_{MC} \\ z_{MC} \end{pmatrix} \right]$$

where (x_GCS, y_GCS, z_GCS) is the marker position in the GCS of the DHM, and (x_LCS, y_LCS, z_LCS) is the transformed marker position in the LCS of the exoskeleton model. The transformed positions of all the markers mentioned above are imported into the exoskeleton model in Simscape using bushing joints with the LCS at Lucy_MC as the base frame.
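As a minimal sketch (not the authors' implementation), the three transformation steps can be written in MATLAB as follows. The function and variable names are illustrative, and the exact derivation of θ from the T10 marker is an assumption made for the sketch; vrrotvec and vrrotvec2mat are the MATLAB functions named above (Simulink 3D Animation toolbox).

```matlab
function markerLCS = gcs2lcs(markerGCS, lucyMC, lucyMleft, t10)
% Sketch of the three-step GCS-to-LCS marker transformation.
% Inputs are 3x1 marker positions in the GCS of the DHM (illustrative names).

% Step 1: translate so that Lucy_MC becomes the origin of the LCS
p = markerGCS - lucyMC;

% Step 2: rotate the vector Lucy_MC -> Lucy_Mleft onto the x-axis of the LCS
v  = (lucyMleft - lucyMC)';
r1 = vrrotvec(v, [1 0 0]);      % rotation axis u and angle alpha
R1 = vrrotvec2mat(r1);

% Step 3: rotate about the x-axis so that the y-axis of the LCS is vertical
% in the neutral standing posture; deriving theta from the T10 marker in
% this way (and its sign convention) is an assumption of this sketch
w     = R1 * (t10 - lucyMC);
theta = -atan2(w(3), w(2));
R2    = [1 0 0; 0 cos(theta) -sin(theta); 0 sin(theta) cos(theta)];

markerLCS = R2 * R1 * p;        % marker position in the LCS
end
```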
Human-Exoskeleton Interaction Model Because the human-exoskeleton interaction at the arm interfaces, including the armrest and arm strap (see Figure 2, at right, with label), is critical for the movements of the exoskeleton arm and the support torque, the interaction model focuses on modeling the physical contact between the human upper arm and the exoskeleton arm interface, as well as the resulting interaction force. For this purpose, the entire arm interface is modeled as a tube, while the part of the human upper arm connected to the exoskeleton is modeled as a sphere inside the tube (see Figure 5). The contact force between the upper arm and the armrest is then implemented by the Sphere to Tube Contact Force model from the Simscape Multibody Contact Force Library [29]. The tubes are attached to the armrests by rigid transform blocks in Simscape. The motions of the spheres are driven by the transformed positions of markers humerus_rdn and humerus_ldn on the upper arms (see Figure 5). To mimic a proper connection between the upper arm and the exoskeleton without compressing the skin, the inner radius of the tube is set equal to the radius of the sphere. Due to the softness of the muscles and fabrics at the arm interface, small movements of the upper arm are still allowed inside the arm interface. Thus, a Cartesian joint with three linear degrees of freedom (DOFs) is connected between the sphere (human upper arm) and the tube (arm interface of the exoskeleton), with no specified limits in each DOF.
The static and kinetic friction coefficients are set at 1 and 0.8, respectively, thus simulating fabric-to-fabric friction [30] between the cloth and the fabric surfaces of the arm interface. The contact force law of the model is set to linear with a stiffness of 10³ N/m and a damping of 10 N/(m/s) after tuning according to library recommendations [29]. The model calculates the contact force between the sphere and the tube caused by their relative motion and applies the force to both, resulting in changes in their motion. This allows the exoskeleton arm to follow the movement of the upper arm in the simulation, with or without the support of the cylinders, just as the system Lucy does in the real world. The coordinate system of the interaction force acting from the tube to the sphere is defined in Figure 5. The z-component of the force is regarded as the interaction force in this paper for the following validations and analysis. When the user raises the arms and the cylinders are off, i.e., no support, the simulated interaction force is negative because the user must work against the weight of the exoskeleton arm. When the arms are raised with support from the cylinder, a positive interaction force occurs in the simulation.
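The Sphere to Tube Contact Force block encapsulates the contact computation inside Simscape. As a rough illustration of the linear force law quoted above, a penalty-style spring-damper contact can be sketched as follows; this is a simplified stand-in, not the library block's actual implementation.

```matlab
% Generic penalty-style sketch of a linear contact force law at the arm
% interface (stiffness k, damping c as tuned above); simplified stand-in
% for the Sphere to Tube Contact Force block, not its implementation.
k = 1e3;   % N/m,     contact stiffness
c = 10;    % N/(m/s), contact damping

% d: penetration depth (m), dDot: penetration rate (m/s) between the
% sphere (upper arm) and the tube (arm interface)
contactForce = @(d, dDot) max(0, k*d + c*dDot) .* (d > 0);

% Example: 2 mm penetration, closing at 5 mm/s -> about 2.05 N
F = contactForce(0.002, 0.005);
```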
Validation of Human-Motion-Based Simulation Approach Two groups of simulations are conducted, summarized in Table 1, for the validation of the human-motion-based simulation approach. Motion data of three participants (P1-P3) from the previous lab study of the exoskeleton Lucy [27] are used as input for the simulations. In the lab study, two overhead tasks (T1 and T2) with a screwdriver are defined. Task T2 is more demanding on the shoulder and wrist than task T1. Both tasks are carried out under three conditions: (S0) do not wear the exoskeleton Lucy, (S50) set Lucy at 50% of its maximum power, and (S100) set Lucy at 100% of its maximum power [27]. According to this setting, each group of exoskeleton simulation contains 12 simulation cases: 3 participants × 2 tasks × 2 supported conditions (S50, S100). The only difference between the two groups is the motion input of the simulation. Simulation group 1 uses the human motion from the same supported conditions (S50/S100), while simulation group 2 uses only the human motion from the unsupported condition (S0). For example, case P1T1S100 simulates the behavior of the exoskeleton in response to the motion of participant P1 while performing task T1. The support level of the exoskeleton model is set to Lucy's full power. Simulation group 1 uses P1's motion captured under condition S100 in the lab study, while simulation group 2 uses P1's motion captured under condition S0.

The validation of the human-motion-based simulation approach is conducted on two levels, summarized in Table 2. The first level aims to verify whether this approach can reproduce the exoskeleton behavior in response to the human motion recorded with the support of Lucy. At this level, simulated exoskeleton arm motions and the interaction forces from simulation group 1 are compared with the measured data from the same supported condition (S50 or S100) of the lab study. Besides the qualitative comparison of the curves, Mean Absolute Error (MAE) and Root Mean Square Deviation (RMSD) are calculated to quantitatively measure the deviation between the simulated and measured variables. To ensure comparability of the simulated and measured data, all simulations use the system setup documented in the lab study for the exoskeleton model, including the length of the shoulder bar, the position of the armrest, and the air pressure of the power supply. The elevation angle of the exoskeleton arm is recorded by the datalogger on Lucy during the lab study. The interaction force between the dominant upper arm and the arm interface is measured by a pressure sensing mat (X3 Pro System, LX205:50.100.10, Xsensor Technology Corporation, Calgary, AB, Canada). The screwdriver is held by the dominant hand, which is on the right for P1 and on the left for P2 and P3. Additionally, the plausibility of passive joint movements is also verified by comparing the simulation animation with the corresponding video recording from the lab study of the same motion cycle at the same time stamp.

The second validation level aims to evaluate the feasibility of the presented approach to simulate human-exoskeleton interaction based on human motion recorded without the use of the exoskeleton. This is important for simulative evaluation and optimization of exoskeletons in the absence of a physical prototype. For this purpose, the simulated variables from simulation group 1 are compared with those of the corresponding conditions from simulation group 2. The comparison is made qualitatively in terms of the shape and progression of the curve. For a quantitative comparison, the average differences between the results of simulation groups 1 and 2 are calculated. The key question for this level of validation is whether the simulation results are affected by the motion input from different conditions. If so, how critical is this to the simulative analysis of the exoskeleton's behavior? To answer this question, the simulated shoulder elevation angles from the DHM are additionally considered in the comparison.
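For reference, the two deviation metrics used in the first validation level can be computed as below; this sketch assumes the simulated and measured signals have already been brought onto a common time base.

```matlab
function [mae, rmsd] = deviationMetrics(simulated, measured)
% Deviation metrics between a simulated and a measured signal of equal length.
    err  = simulated(:) - measured(:);
    mae  = mean(abs(err));       % Mean Absolute Error
    rmsd = sqrt(mean(err.^2));   % Root Mean Square Deviation
end
```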
Simulative Optimization of the Support Profile To demonstrate the application potential of the human-motion-based simulation approach, this section presents the methodological procedure for evaluating and optimizing the exoskeleton's support profile for specific tasks using this approach, exemplified by the Lucy system and the two tasks described in the previous section. A solution for task-specific optimization is derived from the evaluation analysis and virtually implemented in the exoskeleton simulation model. The results of the optimization are presented in the next section.

First, the simulated support torques (T_supp.) in both tasks (T1 and T2) under condition S100 are compared to the shoulder elevation torques (T_shld.) from the DHM (see Figure 6). The shoulder elevation torques are simulated in the DHM based on the human motions captured in T1S100 and T2S100 with task-related loads but without the support of the exoskeleton. The task-related loads refer to the weight of the screwdriver and the force the user applies to the screwdriver while screwing [31]. The curves show that the shoulder elevation torques (T_shld.) increase in proportion to the shoulder elevation angles (shld.elv.) and jump to even higher values in the screw-in phase on the dominant side holding the screwdriver. Higher workloads in T2 than in T1 are confirmed by the shoulder elevation torques of the dominant side during the screw-in phase. Compared to the simulated support torque, an increased support torque for the dominant side, especially during the screw-in phase, has a high potential to reduce further physical stress for the user, which is confirmed by the perceptions of the participants in the lab study [27]. This requires a more powerful cylinder, as the current one allows a peak support torque of 7.8 N•m at the active joint with 6 bar supply pressure. Considering force, weight, dimensions, and technical feasibility, a new cylinder providing a peak support torque of 12.8 N•m at 6 bar is selected and implemented in the exoskeleton model. Since the kinematic change caused by the new cylinder is small and this study focuses on the optimization potential of the support profile, the 3D models of the exoskeleton arm and the cylinder are not updated in the current multibody model of the exoskeleton.
To detect the work phases and adjust the exoskeleton support according to the detected work phases, a state machine is integrated into the control model (see Figure 7). The arm lifting and lowering phases are detected by the elevation angle and angular velocity of the exoskeleton arm. The screw-in phase is detected by the current of the screwdriver [27] and imported as an external signal into the exoskeleton model. A Support Factor is used to adjust the exoskeleton support. It is a percentage of the maximum support level set via the user interface. The motion detection thresholds and Support Factor are determined based on simulation tuning. The results of the optimized simulation tuning and the support profiles for each task are presented in the next section.

Besides the workload variation between the two tasks and between the working phases, the curves of shoulder elevation torque in Figure 6 also show a difference in amplitude between individuals, related to differences in arm weight and individual push force on the screwdriver. The optimization of the support profile for each individual is possible but not addressed in this work, as the optimization of this work focuses on the workload pattern of each task in terms of the human motion and effort required to tighten the screws. However, it is still possible to scale the overall amplitude of the support profile through the personal setting of the support level. A minimal sketch of the phase-switching logic described above is given after this paragraph.
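The sketch below illustrates such phase-switching logic in MATLAB. The threshold values and the exact transition conditions are placeholders, not the tuned values or the actual state machine of the control model.

```matlab
function phase = detectPhase(phase, elv, velv, screwOn)
% Illustrative phase detection from the exoskeleton arm elevation angle elv
% (deg), its angular velocity velv (deg/s), and the external screwdriver
% signal screwOn. Thresholds are placeholders, not the tuned values.
    angleUp   = 20;   % deg,   assumed threshold for leaving the neutral pose
    velThresh = 10;   % deg/s, assumed threshold for lifting/lowering
    switch phase
        case "neutral"
            if elv > angleUp && velv > velThresh, phase = "lifting"; end
        case "lifting"
            if screwOn,               phase = "screw_in";
            elseif velv < -velThresh, phase = "lowering"; end
        case "screw_in"
            if ~screwOn && velv < -velThresh, phase = "lowering"; end
        case "lowering"
            if elv < angleUp && abs(velv) < velThresh, phase = "neutral"; end
    end
end
```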
Results The findings of the two validation studies as well as the optimization study are presented in three subsections. First, a comparative analysis of simulated and measured variables assesses the ability of our approach to represent exoskeleton behavior in response to realistic human motion. Subsequently, the feasibility of our approach for virtual prototyping of exoskeletons prior to physical realization is examined by comparing simulation outcomes based on human motion supported by the exoskeleton and without wearing the exoskeleton. Lastly, the results of the task-specific optimization of the support profile are presented, illustrating the practical use of our approach in the refinement of the exoskeleton control design.

Comparison of Simulated and Measured Variables The feasibility of the present model as well as the human-motion-based simulation approach is first verified in two aspects: (1) the exoskeleton motion in response to human motion in the respective supported conditions S50/S100; (2) the interaction force between the dominant arm and the arm interface.

Exoskeleton Motion The two passive joints on each exoskeleton arm are critical to the range of motion of the user's shoulder and, therefore, affect the user's comfort. Their movements also influence the elevation angle of the exoskeleton arm (active joint). Thus, the passive joint movements are examined first. In response to the human motions in both supported conditions (S50, S100), the simulated passive joint movements from the animation are very similar to the movements in the corresponding videos from the lab study. An example of comparing frames at the same moment of human motion in the animation and the corresponding video is shown in Figure 8. A video demonstration, including simulation animations and the video recording of the lab study, is available in the Supplementary Materials.

The curves of the simulated elevation angles of the exoskeleton arm match those measured during most of the motion (see Figure 9), especially during the work phases of arm lifting, screw-in, and lowering, where support is desired. In some cases, e.g., T1S50 and T1S100 of P1, there is a noticeable difference between the simulation and the measurement when the upper arm is near neutral and not elevated. The MAE and RMSD between the simulated and measured elevation angles of both exoskeleton arms are 4.8° and 7.1° for all simulation cases. Considering only the movements that require support in all cases, from the beginning of arm lifting to the end of arm lowering, the MAE and RMSD are 3.7° and 6.0°, respectively. These MAE and RMSD values are acceptable for the overhead position with shoulder elevation above 90°.
Interaction Force at Arm Interface The simulated and measured curves of the interaction forces are similar in shape, and both are proportional to the arm elevation angle during the work phases of arm lifting, screw-in, and arm lowering (see Figure 10). The MAE and RMSD are 4.3 N and 5.6 N, respectively, between the simulated and measured interaction force on the dominant side. An outlier is observed in the T1S50 measurement of P3, indicating poor contact between the upper arm and the armrest, which cannot be replicated by the simulation model. The presented simulation of human-exoskeleton interaction is performed under the assumption that the upper arms are in good contact with the arm interfaces. In certain cases, e.g., T1S50 and T2S50 of P1, the measured interaction force is clearly higher than the simulated one during the screw-in phase (between the two black lines in Figure 10). This may be relevant to the participant's push force on the screwdriver to tighten the screw, which is not part of the simulation here. At the beginning of the arm lowering phase, when the user pushes down the exoskeleton arm, a jump in interaction force is observed in both simulated and measured values. However, the jump in the measured force is more significant than the jump in the simulated force, e.g., T1S100 and T2S50 of P3. A large difference between simulated and measured force is also observed in some cases, e.g., T1S50 and T2S50 of P2, when the upper arm is close to its neutral position and the cylinder is inactive. The interaction model assumes that the contact force between the upper arm and the arm interface is zero when the arm elevation is zero. However, this is not always the case in practice, as shown by the measurement data.
Figure 10. Simulated interaction force (solid red lines) between the dominant arm and the arm interface compared to the measured force (dashed blue lines). Shoulder elevations in dotted yellow lines as reference for the user's arm movement, as well as solid and dash-dotted black lines for the start and end of the screw-in phase, respectively. For graphic titles, P: Participant; T: Task; S: Support Level.

Comparison of Unsupported and Supported Motion Simulation The core of the second validation study is to verify the feasibility of the present simulation approach for virtual prototyping of an exoskeleton with human motion recorded without wearing the exoskeleton. In comparison to simulations using human motion from the supported conditions (S50/100), those based on human motion from the unsupported condition (S0) provide equivalent insights into the exoskeleton's behavior. The same relationship with shoulder elevation is seen in the curves of the exoskeleton arm elevations and the interaction forces in Figures 11 and 12.
Since the duration of each recorded motion cycle is slightly different, the results are time-normalized for comparison. The variance in simulation outcomes between supported and unsupported human motions can be attributed to differences in shoulder elevations observed in these respective conditions. These differences are most apparent during the screw-in phase. Since the elevation angles and interaction forces are nearly constant during the screw-in phase, the average differences in their mean values are calculated. During the screw-in phase, the average difference in exoskeleton arm elevations simulated by supported and unsupported motion is 8.3°. This is almost equal to the variance in shoulder elevation in these respective conditions at 8.2°. What is striking in Figure 11 is that the user's shoulder reaches a higher elevation angle than the exoskeleton arm under the same conditions, with an average difference of 13.8° during the screw phase. This highlights the importance of simulating the exoskeleton in response to human motion, rather than copying human joint motion directly to the exoskeleton joint.

Interestingly, the interaction forces at the arm interfaces in response to human motion in the conditions S0 and S50/100 are not as obviously different as in the elevations of the exoskeleton arms (see Figure 12). The average difference in the mean interaction force on both sides during the screw phase is 0.66 N between the simulations with supported and unsupported human motion. The difference in interaction force is most noticeable in cases P1T1S100, P1T2S100, and P3T1S100, where the difference in exoskeleton arm elevation between conditions is visibly greater than the difference in shoulder elevation.
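The time normalization mentioned above can be realized, for example, by resampling each motion cycle onto a common 0-100% cycle axis. The following sketch uses interp1 and assumes monotonically increasing time stamps; it is an illustration, not the authors' processing script.

```matlab
function yNorm = normalizeCycle(t, y, nSamples)
% Resample a signal y(t) covering one motion cycle onto a common
% 0-100 % cycle axis so that cycles of different duration can be compared.
    if nargin < 3, nSamples = 101; end            % 0 %, 1 %, ..., 100 %
    cyclePct = linspace(0, 100, nSamples);
    tPct     = (t(:) - t(1)) / (t(end) - t(1)) * 100;
    yNorm    = interp1(tPct, y(:), cyclePct);     % time-normalized signal
end
```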
Task-Specific Support Profile After validation, the current simulation model is used to optimize the support profile for two specified tasks to demonstrate its application potential in the exoskeleton development process. As a result of the simulation tuning, the Support Factor switching according to motion and screw detection is shown in Figure 13, using P3 as an example. The implemented state machine has the capability to recognize the beginning and end of the arm lifting and lowering phases according to the elevation angle and angular velocities of the Lucy arms. The task-specific Support Factors for the key work phases and both sides of the arm are summarized in Table 3. The Support Factors are percentages in decimal form. For example, a factor of one means 100% power of the current support level. The optimizations here are simulated at 6 bar supply pressure, providing a maximum force of 482 N for the new cylinder. At the same pressure, the current cylinder force is 60% of the new cylinder force. The Support Factor for lifting the dominant arm is set higher than for lifting the non-dominant arm due to the weight of the screwdriver. A higher Support Factor is established for the screw-in phase when an increase in shoulder elevation torque is observed. It should be noted that the pressure force in T2 to tighten the screw is applied by both hands, although an increase in the simulated shoulder elevation torque is only seen in the dominant arm. In the DHM simulation, the entire push force is applied to the dominant side because the distribution of the push force between the two hands could not be determined in the previous lab study due to the limitations of the measurement system. For the arm lowering phase, the factor is reduced following the evaluation results from the laboratory study, as explained in Section 4.

As exemplary results of the task-specific optimization, the simulated supporting torques for both tasks based on the motion of P3 are shown in Figure 14. The optimized support torque curves (T_opt.supp.) show a closer match to the curves of shoulder elevation torque (T_shoulder) in terms of work phases when compared to the support torque before the optimization (T_supp.). This is consistent with the design expectations outlined in Section 4. The effect of the optimized support profile on the human body compared to the previous one needs to be investigated in future work in a DHM.
Comparison of Simulated and Measured Variables The similarity between simulated and measured data confirms that the presented exoskeleton model and the simulation approach can reproduce plausible exoskeleton motion and the interaction forces at the arm interfaces, responding to human motions from the same supported condition. The MAE (4.8° for the full motion cycle and 3.7° only for work phases where support is desired) and RMSD (7.1° for the full motion cycle and 6.0° only for work phases where support is desired) between simulated and measured exoskeleton arm elevations are acceptable for the selected shoulder exoskeleton because its primary target support posture is with the shoulder elevated above 90°. This posture has been identified as a high potential risk for muscle fatigue [32][33][34]. Moreover, the resulting variation in support force caused by this deviation in exoskeleton arm elevation is barely perceived by the user. The deviation between the simulated and measured elevation angle of the exoskeleton arm when the upper arm is close to neutral is allowable for this work because the support profile and range of motion are not affected. If the movements of passive joints need to be validated, additional joint sensors can be added to the exoskeleton in the future.

Considering the theoretical support force, the amplitude of the simulated interaction force and its relationship to shoulder elevation are reasonable. The difference between the simulated and measured interaction forces is influenced by the limitations of both the measurement and the model. The measurement is sensitive to inadequate contact at the arm interface and external environmental forces, such as the participant's push force on the screwdriver during the screw-in phase. The influence of the push force is not addressed here, as the primary focus of the current study is on the exoskeleton's behavior in response to human motion. In this work, the interaction at the arm interface is modeled as a point contact between solid bodies. It does not accurately reproduce the deformation of the fabric padding at the arm interface and the soft tissues of the human arm, potentially influencing the interaction force. Furthermore, the point contact model assumes constant contact between the upper arm and the physical interface of the exoskeleton. In practice, however, the contact area varies, and the center of the contact area shifts as the user moves. To address these issues, the contact force model of soft bodies with the full contact force should be considered in future work. Moreover, appropriate measurement technology is expected in future work to validate friction and shear forces at the arm interface, which are crucial to user comfort.
Comparison of Unsupported and Supported Motion Simulation The comparison of the simulation results with unsupported and supported human motion proves the feasibility of the human-motion-based simulation approach in replicating realistic exoskeleton behavior using human motion recorded without wearing it. This allows not only the pre-development of a new exoskeleton concept prior to its physical implementation, but also the virtual evaluation of an exoskeleton's application potential for new use cases. The differences between simulation outcomes using unsupported and supported human motion are highly dependent on the variances in the input human motions. It is important to note that the current model cannot simulate changes in human motion caused by exoskeleton support. This is not critical for the Lucy exoskeleton, as a previous study with a sample size of 30 showed that there were no significant changes in shoulder elevation with its support [35]. If the effect of the exoskeleton on human kinematics is unknown, it should be noted that a change in human motion may be observed during the physical implementation and testing of the exoskeleton. Predicting changes in human motion due to exoskeleton support is a potential challenge to be addressed in future work, especially if the changes are crucial to the control design. Dynamic reflex or response models that describe the human response to external forces while attempting to maintain stability could be explored as valuable candidates [10,36,37]. The results also reveal differences between shoulder elevation and the elevation of the exoskeleton arm, emphasizing the positive aspect of the proposed approach in achieving more realistic exoskeleton motion compared to directly applying human joint motion to the exoskeleton joint.

Task-Specific Support Profile The simulatively optimized support profiles demonstrate the potential of the current exoskeleton model and simulation approach to adapt the exoskeleton support profile for desired tasks using simulated user physical effort (e.g., shoulder elevation torque) and task process information (e.g., screw events). This allows for human-centered and task-specific optimization of the exoskeleton before its physical implementation, eliminating the risk and hassle of testing with human participants. The effect of the optimized support profile on the human body can first be investigated using a DHM. Nevertheless, achieving a smooth user experience requires individualized fine-tuning, a process that cannot be replicated through simulation alone. In addition, motion detection thresholds may need to be adjusted to account for the difference in noise between the simulation results and real sensor data.
Additionally, there are some other limitations of the current model, which can affect the results of the simulative optimization. Kinesthetic and cognitive user responses to the exoskeleton support are difficult to predict and cannot be addressed in the current simulation model. However, kinetic motion prediction can be considered as a compromise in future work. The current exoskeleton model does not simulate body sway, which could affect control decisions for tasks involving forward bending. An inverted pendulum model [38][39][40] could be considered to incorporate upper body motion into the exoskeleton model. In addition, there are typical deviations when translating real human motion to the model, whether it is the scaling of the model, the position of the markers on the model, or the relative motion caused by the influence of soft tissue. These deviations between the real human motion and the recorded or simulated human motion can affect the simulation results of the present model, since it is based on the recorded and simulated human motion.

Conclusions This work has presented a novel approach to simulate exoskeleton behavior in response to human motion imported through marker positions. The main advantage of the presented approach is the ability to simulate the physical human-exoskeleton interaction without building a DHM in the exoskeleton model and the ability to simulate realistic exoskeleton movements based on human motion. In this paper, the simulations are performed using marker positions from a DHM. However, the presented approach also works with marker positions directly from a motion capture system if the markers are placed at the required positions. The results indicate that the presented modeling and simulation of the shoulder exoskeleton and its interaction with the human body are promising for the simulation analysis of exoskeleton behavior. This verified simulation approach is suitable not only for the task-specific evaluation and optimization of existing exoskeletons but also for the pre-development of new exoskeleton concepts in advance of physical realization.

To improve the capability of this approach for simulating human-exoskeleton interaction, further research should address the challenge of predicting changes in human motion caused by exoskeleton support and extending the current contact model to surface contact and soft bodies. The friction and shear forces should also be validated in the future to address user comfort in the simulation. The next step in the simulation-based optimization of the exoskeleton Lucy is to update the new cylinder in the multibody model to analyze the potential changes to the exoskeleton kinematics and its interaction with humans. In addition, the effect of the optimized support profile on the human body will be investigated in a co-simulation model combined with a DHM and compared with the current one.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/robotics13020027/s1, Video S1: A video demonstration of an exoskeleton simulation example, along with corresponding human motion simulations and recordings from the lab study. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Figure 3. Virtual markers for importing the upper arm movements from DHM into the exoskeleton model: (a) four markers on the humerus for motion input, the suffix "rup" stands for right-up, "ldn" for left-down, and so on; (b) two markers on the exoskeleton Lucy as reference points for the LCS.

Figure 4. (a) The first rotation (R1) to align the vector v to the x-axis of the LCS; (b) the second rotation (R2) around the x-axis of the LCS.

Figure 5. Contact modeling between exoskeleton arm interface and human upper arm, left side as an example, with the coordinate system of the interaction force acting from the tube to the sphere.

Figure 6. Torques for shoulder elevation (dashed lines) without support of the exoskeleton and the supporting torques (solid lines) from the exoskeleton with 100% support level, red for the right and blue for the left side. Shoulder elevations in dotted lines as reference for the user's arm movement, as well as solid and dash-dotted black lines for the start and end of the screw-in phase, respectively. For graphic titles, P: Participant; T: Task.

Figure 7. State diagram for adjusting the exoskeleton support according to the work phases.

Figure 8. Comparison between simulation animation and lab video recording of case P1T1S100, lateral views of the exoskeleton's two passive joint movements at two different positions of the right upper arm: (a) shoulder elevation angle of 39° at 1 s of the motion cycle, (b) shoulder elevation angle of 129° at 6 s of the motion cycle.

Figure 9. Simulated elevation angles (solid lines) of the exoskeleton arms in comparison to the measured ones (dashed lines), red for the right and blue for the left side. For graphic titles, P: Participant; T: Task; S: Support Level.

Figure 11. Comparison of simulated elevation angle of exoskeleton (Exo.) based on unsupported (S0) and supported (S50/100) motion, paired with shoulder elevation angle (Shld.) from unsupported and supported motion. Solid lines represent angle curves for the S0 condition and dashed lines for the S50/100 condition. Red and blue indicate the right and left arm elevations of the exoskeleton, black and yellow indicate the right and left shoulder elevations of the user. For graphic titles, P: Participant; T: Task; S: Support Level.

Figure 12. Comparison of simulated interaction force at the arm interface based on unsupported (S0) and supported (S50/100) motion. Solid lines represent angle curves for the S0 condition and dashed lines for the S50/100 condition. Red for the right arm, blue for the left. For graphic titles, P: Participant; T: Task; S: Support Level.

Figure 13. Switching the Support Factor (dashed lines) according to the work phases in T1 and T2, in the example of P3, blue for the left and red for the right side. Elevation angle of the exoskeleton arm (elv.) in dashed line, angular velocity (V_elv.) in solid line. For graphic titles, P: Participant; T: Task.

Figure 14. Optimized support torques (T_opt.supp.) for both tasks compared to shoulder elevation torques (T_shoulder) and support torques before the optimization (T_supp.), in the example of P. For graphic titles, P: Participant; T: Task.

Table 1. An overview of the two simulation groups.

Table 2. The two validation levels.

Table 3. Task-specific Support Factor (percentage in decimal form) for different work phases and arm sides.
2024-02-03T16:07:56.815Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "00dcc2cd1a334f32d69561702b314aef93e38de8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-6581/13/2/27/pdf?version=1706750324", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "39d853a3431f7a1f88093d458a91c689dc287895", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
187642667
pes2o/s2orc
v3-fos-license
FRIB cryogenic system status Construction and installation of the FRIB 4.5 K helium refrigeration system is nearing completion, with compressor system commissioning and 4.5 K refrigerator commissioning on schedule to occur in late 2017. The LINAC 4.5 K helium distribution system, all major process equipment, and the cryogenic distribution for the sub-systems have been procured and delivered. The sub-atmospheric cold box fabrication is planned to begin the summer of 2017, which is on schedule for commissioning in the spring of 2018. Commissioning of the support systems, such as the helium gas storage, helium purifier, and oil processor is planned to be complete by the summer of 2017. This paper presents details of the equipment procured, installation status and commissioning plans. Introduction The cryogenic system at Michigan State University's (MSU's) Facility for Rare Isotope Beams (FRIB) is on schedule for a cool-down to 4.5 K at the end of 2017, as reported previously [1]. All of the cryogenic sub-systems required for operation at 4.5 K have been delivered from the suppliers and are in place at FRIB. The sub-systems for 2-K operation are in the final stages of assembly (e.g., 2-K cold box, guard vacuum skid) at FRIB. The majority of cryogenic and warm piping are installed. The overall cryogenic plant layout is shown in Figure 1. Presently, the commissioning and integration of the refrigeration and support sub-systems are on schedule and progressing to support the scheduled cooldown to 4.5 K. Design basis The FRIB cryogenic system reflects the cumulative project execution and technical experience gained from similar systems, such as at the National Superconducting Cyclotron Lab (NSCL) at MSU [2], the Spallation Neutron Source (SNS) at Oak Ridge National Lab [3], the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Lab (BNL) [4], NASA's Johnson Space Center (JSC) [5], and Jefferson Lab (JLab) [6]. The acquired knowledge has been applied with respect to the planning, design, fabrication and delivery of the major sub-systems while coordinating the funding profile and civil construction. The team members industrial and 24/7 operational experience enabled them to develop standardized designs for the major sub-systems, such as the compressor skids, oil removal, and gas management. Several of these have already been operating successfully for many years at other facilities. The compressor skid design, initially developed for NASA-JSC and further developed into a standardized design for JLab's 12 GeV upgrade, is improved and being used for this project (FRIB) and for others [7,8]. The previously mentioned projects, just to cite a few, have implemented the 'Ganni Cycle Floating Pressure Technology' [9], which enables a cryogenic system to efficiently, automatically, and stably adjust to different load capacities and changes in operating modes (i.e., mixed modes of liquefaction, refrigeration, and cold compressor, with varying shield loads). However, it was not until the development of the standardized compressor skid for NASA-JSC and JLab's 12 GeV upgrade with new features for variable pressure operation that the full potential of this floating pressure technology was realized [5,10]. The FRIB cryogenic distribution to the cryo-modules is different from JLab and SNS due to the 4.5 K magnets in the cryo-modules [11,12]. The cryogenic distribution from the cryogenic plant to the cryomodules are standardized to the extent practical for simplicity and economy. 
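As a rough illustration of the floating-pressure idea referred to above — letting the operating pressures (and hence the mass flow) follow the actual load instead of being held at fixed design values — the toy sketch below computes scaled pressures from a load fraction while keeping the pressure ratio near its design value. This is only a conceptual illustration under that simplified assumption, not the FRIB or JLab control logic, and the numeric design values and turndown range are placeholders.

```python
# Toy illustration of a load-following ("floating") pressure concept.
# Design-point values below are placeholders, not FRIB parameters.

DESIGN_DISCHARGE_BAR = 18.0   # compressor discharge pressure at full load (placeholder)
DESIGN_SUCTION_BAR = 1.05     # compressor suction pressure at full load (placeholder)

def floating_pressures(load_fraction: float):
    """Scale suction and discharge pressures with the load, keeping the
    pressure ratio (and thus the assumed machine operating point) constant."""
    load_fraction = max(0.3, min(1.0, load_fraction))  # clamp to a placeholder turndown range
    discharge = DESIGN_DISCHARGE_BAR * load_fraction
    suction = DESIGN_SUCTION_BAR * load_fraction
    return suction, discharge

for load in (1.0, 0.7, 0.5):
    s, d = floating_pressures(load)
    print(f"load={load:.0%}: suction={s:.2f} bar, discharge={d:.2f} bar, ratio={d/s:.1f}")
```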
The modularized and subdivided procurement approach, which was used for the FRIB, SNS, JLab and NASA-JSC cryogenic systems, has been recognised to be the lowest technical risk and is more cost-effective than a single turnkey approach [13]. This is the case when the user has experience and expertise in each sub-system and understands how to integrate these together. The main technical design differences in the cryogenic plant and distribution system between FRIB and the other previously mentioned installations are discussed in literature [13,14]. The technical team which will be operating the cryogenic system is integrally involved in the design, procurement, installation, and commissioning stages; which has proved to be productive and efficient from the previous operational and maintenance experience gained from similar systems. Close coordination and planning with the civil construction streamlined the installation of the electrical feeds of all the cryogenic sub-systems. This provided an integrated approach during the final connection of equipment to the electrical system. The controls systems have been baselined on experience obtained from other laboratories with additional improvements such as incorporating dual feeds and power supplies for the main control cabinets. The dedicated cryo-control room and cryonetwork are fully operational at the present time. Background The layout of the FRIB accelerator was driven by the requirement to integrate the existing cyclotron and experimental areas while keeping it on the MSU campus within the FRIB/NSCL site. Of course, this directed the civil construction requirements and the accelerator sub-system layouts including the cryogenics. In addition to conforming to these requirements, the equipment and sub-systems for the cryogenic plant were located and oriented to maximize operability, access, and anticipated future expansion needs. As agreed upon between FRIB and the general contractor at the beginning of the cryogenic system planning in 2013, it was possible to install the major sub-systems before the building beneficial occupancy date (BOD) in March 2017. The scheduling of the sub-system procurements was very successfully coordinated with the fabrication schedule while matching both the funding profile and building construction progress. The design and ordering had to be coordinated such that the delivery dates of all major components, were 'just in time' when the final location spaces in the building were ready. This eliminated unnecessary storage, maintenance, and double handling of these large subsystems. The cryogenic distribution system encompasses: the tunnel cryogenic distribution transfer lines that include 49 standardized sections to interface with the cryo-modules; three tunnel-shaft cryogenic transfer lines that connect the cold box room to the tunnel distribution; and the cold box room cryogenic transfer lines that connect the refrigeration system. Most warm interconnecting piping, the critical parts of the tunnel-shaft cryogenic transfer lines, and the tunnel cryogenic distribution were coordinated with the building construction and completed well before the BOD. bay building, with the interconnecting piping routed via a pipe-bridge between the SRF high-bay and the FRIB building (which are adjacent to each other). These tanks are commissioned containing clean helium and are planned to be operated remotely. The upper and lower 4.5 K cold boxes [13] are installed having their overall configuration shown in Figure 2. 
The six main compressors and main oil removal vessels are installed as shown in Figure 3. Auxiliary equipment, such as the compressor oil processing system, the purifier compressors, and the purifier cold boxes, is shown in Figure 4. Table 1 summarizes the present status of all the sub-systems. Figure 5 shows bayonet cans and tunnel cryogenic line relief valve skids that are fabricated but not presently attached to the tunnel-shaft transfer lines. The centrifugal cryogenic (cold) compressor units have been delivered and are in the process of being assembled into the 2-K cold box at FRIB. The 2-K (sub-atmospheric) cold box is under construction (shown in Figure 5, as it is anticipated to look when complete).

Distribution system status

The original FRIB cryogenic distribution system plan in 2013 to divide the LINAC into three segments was previously presented [12] and the progress described [14]. Presently, the three 9.1 m long tunnel-shaft transfer lines, the three lines connecting these tunnel-shaft lines to each LINAC segment (LS) at the "T" section, and each "T" section are installed. The first LINAC segment, LS1, is shown in Figure 6 and consists of multiple standard transfer line sections which are installed and connected to the "T" section. All of the standard transfer line sections which interface to the cryo-modules are in the tunnel. The transfer line from the 4.5 K refrigerator to the bayonet cans in the refrigeration room is presently being installed. Similarly, another transfer line connecting the 4.5 K refrigerator to the 10,000 liter liquid helium dewar and the 2-K cold box is being installed. Table 2 summarizes the cryogenic distribution system status.

Conclusions

Most of the FRIB cryogenic plant sub-systems have been installed and are in various stages of commissioning. The sub-system design, fabrication, acquisition, and installation followed the initial plan and schedule closely. It is anticipated that the 4.5 K refrigerator will be commissioned by the end of 2017 and that the 2-K cold box will be operational in 2018.
2019-06-13T13:12:17.419Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "e1c67635004154dc61470d546c04ec7d0b6fefc1", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/278/1/012102", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "845d57a2563f3cc02cb6f7715a50a3e68f3d0f2b", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
270704325
pes2o/s2orc
v3-fos-license
Prediction of geothermal temperature field by multi-attribute neural network Hot dry rock (HDR) resources are gaining increasing attention as a significant renewable resource due to their low carbon footprint and stable nature. When assessing the potential of a conventional geothermal resource, a temperature field distribution is a crucial factor. However, the available geostatistical and numerical simulations methods are often influenced by data coverage and human factors. In this study, the Convolution Block Attention Module (CBAM) and Bottleneck Architecture were integrated into UNet (CBAM-B-UNet) for simulating the geothermal temperature field. The proposed CBAM-B-UNet takes in a geological model containing parameters such as density, thermal conductivity, and specific heat capacity as input, and it simulates the temperature field by dynamically blending these multiple parameters through the neural network. The bottleneck architectures and CBAM can reduce the computational cost while ensuring accuracy in the simulation. The CBAM-B-UNet was trained using thousands of geological models with various real structures and their corresponding temperature fields. The method’s applicability was verified by employing a complex geological model of hot dry rock. In the final analysis, the simulated temperature field results are compared with the theoretical steady-state crustal ground temperature model of Gonghe Basin. The results indicated a small error between them, further validating the method’s superiority. During the temperature field simulation, the thermal evolution law of a symmetrical cooling front formed by low thermal conductivity and high specific heat capacity in the center of the fault zone and on both sides of granite was revealed. The temperature gradually decreases from the center towards the edges. Introduction Geothermal energy stands out as a novel form of renewable energy known for its high stability and low vulnerability to external influences when compared to other renewable sources such as tidal, wind, and solar energy (Zhao and Wan 2014;Wang et al. 2020;Qiu et al. 2022).Being a clean energy alternative with minimal carbon dioxide emissions, geothermal energy has gathered considerable attention from researchers and governments globally (Zhu et al. 2015;Xia and Zhang 2019;Yang et al. 2022).Understanding the temperature field distribution within geothermal areas is paramount for assessing their reserves (Bassam et al. 2010;Forrest et al. 2005).The temperature field plays a critical role in identifying the optimal drilling location and depth before commencing geothermal drilling operations (Vogt et al. 2010).Given the substantial costs associated with measuring temperature fields in geothermal regions, it becomes imperative to adopt methods that can accurately simulate these temperature distributions based on available data. Currently, various methods are employed for simulating geothermal temperatures, including geostatistical methods (Williams and DeAngelo 2011;Siler et al. 2016) and numerical simulations (Song et al. 2018;Aliyu and Archer 2021;Salinas et al. 2021;Lv et al. 
2022).These methods have been successfully applied to conventional geothermal fields.Fabbri (2001) utilized post-processing indicator Kriging outcomes to derive probability maps, which highlighted areas with high probabilities of temperatures above 80 ℃, between 70 ℃ and 80 ℃, and lower than 70 ℃.Sepúlveda (2012) used Kriging to predict drill-hole temperatures and stratigraphic data sets in the Wairakei geothermal field, New Zealand.Cheng et al. (2019) developed a conceptual numerical model that employs a fully coupled thermos-poroelastic finite-element model with Discrete Fracture Network (DFN) to simulate the response of naturally fractured geothermal reservoirs to water injection.Akbar and Fathianpour (2021) devised a computational model utilizing geological, geophysical, and structural data to enhance the understanding of high-enthalpy geothermal reservoirs.The model incorporates a Curie depth map to estimate heat sources and employs finite element methods to solve governing equations.Lesmana et al. (2021) studied and compared two development strategies, full-scale and stepwise development, for the Tompaso field in North Sulawesi, Indonesia, based on numerical and thermodynamic simulations.Conducting numerical simulations, Li et al. (2022) investigated the impact of geological layering on the thermal energy performance of underground mines, evaluating the influence of geological stratification on heat storage capacity and performance using heat storage and insulation materials.However, these methods have a few weaknesses.Both numerical simulation and geostatistical methods need detailed geological information and data quantity to complete the task, which limits the application of the two methods. Machine learning has emerged as a promising area of research and development in geothermal exploration, with a rise in its widespread application across multiple research fields within geothermal energy.Currently, traditional machine learning techniques are predominantly employed in the exploration, reservoir characterization, petrophysics, and drilling aspects of the geothermal energy industry.In contrast, the deep learning algorithms are primarily utilized in reservoir engineering, seismic activity, and production/injection engineering (Moraga, et al. 2022;Okoroafor et al. 2022).Esen et al. (2007;2008a, b, c, d, e;2015) have conducted extensive research on ground coupled heat pump (GCHP), ground heat exchanger (GHE), and ground source heat pump (GSHP).They utilized various machine learning techniques such as ANFIS, ANN, and SVM to model and predict the performance of GCHP, GHE, and GSHP, providing diverse tools to enhance the modeling and predictive capabilities of machine learning.Rezvanbehbahani et al. (2017) applied the Gradient Boosted Regression Tree (GBRT) model to predict the heat flux distribution in Greenland using the simplified global geothermal heat flow (GHF) data set.Assouline et al. 
(2019) proposed a new methodology that combined the results random forest algorithm with GIS data processing and physical modeling to assess Switzerland's shallow geothermal potential via the geothermal gradient, ground thermal conductivity, and ground thermal diffusivity.In an effort to forecast the temperature of geothermal reservoirs based on selected hydro geochemistry data, Fusun and Mehmet Haklidir (2020) developed a Deep Neural Network (DNN) model, demonstrating promising results.Lösing and Ebbing (2021) suggested a machine learning-based method that employed the gradient-boosting regression technique to count geothermal heat flow (GHF) in Antarctica.Gudala and Govindarajan (2021) improved the mathematical model through the dynamic variations in the rock, fracture and fluid properties, and checked the geothermal performance through the recently developed integrated machine learning-response surface model-ARIMA model.Ishitsuka et al. (2021) developed two methods: one based on Bayesian estimation and the other based on neural network to estimate the temperature distribution of geothermal field.Xiong et al. (2022) compared the deep learning GoogLeNet model with Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbor (KNN) and other traditional machine learning to recognize geothermal surface manifestations.Yang et al. (2022) used the deep belief network (DBN) to identify the formation temperature field, and successfully applied the network to the identification of stratum temperature field of the southern Songliao Basin, China.Kiran et al. (2022) used FORGE well-logging data to synthesize the evolution of dynamic data, and analyzed and compared K-Nearest Neighbor, Random Forest, Decision Tree, Gradient Boosting and Deep Learning model with hidden layers. The above researchers' work on geothermal based on machine learning and deep learning fully demonstrates the practical significance of using deep learning to predict the geothermal temperature field.Therefore, we propose a novel network called CBAM-B-UNet for simulating temperature fields in complex hot dry rocks.Specifically, CBAM-B-UNet takes key parameters including density, specific heat capacity, and thermal conductivity as inputs, and these parameters are adaptively fused by the neural network to simulate temperature fields of hot dry rocks.The data set used in this study was generated using the finite element method, and the numerical results were compared with logging data to verify the accuracy of the proposed model.Our findings indicate that CBAM-B-UNet is more effective for simulating the temperature field of hot dry rocks.Furthermore, the use of CBAM-B-UNet in complex models allows for the analysis of the evolution of fracture temperature fields, lithology, and other related factors. 
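As a concrete, hypothetical illustration of the input representation implied above, the sketch below stacks gridded maps of density, thermal conductivity, and specific heat capacity into a three-channel tensor of the kind a UNet-style network consumes. The grid size, value ranges, and array names are illustrative assumptions, not values taken from this study.

```python
# Illustrative sketch: packing the three rock-parameter fields into a
# 3-channel input for a UNet-style network. Grid size is a placeholder.
import numpy as np
import torch

H, W = 160, 160  # placeholder grid resolution of the geological model

density = np.random.uniform(2300.0, 2800.0, size=(H, W))        # kg/m^3 (placeholder range)
conductivity = np.random.uniform(1.5, 3.5, size=(H, W))          # W/(m*K) (placeholder range)
heat_capacity = np.random.uniform(800.0, 1100.0, size=(H, W))    # J/(kg*K) (placeholder range)

def zscore(a: np.ndarray) -> np.ndarray:
    """Z-score standardization, as used later for the training data."""
    return (a - a.mean()) / a.std()

# Stack standardized parameter maps into (batch, channels, height, width)
x = np.stack([zscore(density), zscore(conductivity), zscore(heat_capacity)])
x = torch.from_numpy(x).float().unsqueeze(0)   # shape: (1, 3, H, W)
print(x.shape)  # torch.Size([1, 3, 160, 160])
```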
Methodology The challenge of simulating a temperature field with multiple rock parameters can be reframed as a nonlinear regression problem.In tackling such nonlinear regression challenges, neural network processing emerges as an effective solution.Thus, UNet is leveraged to address the complex task of temperature field simulation.To enhance the UNet capability in accurately simulating the temperature field of hot dry rocks and mitigating uncertainties tied to single-parameter simulations, a combination of bottleneck architectures and Convolutional Block Attention Module is employed with UNet.By utilizing a network to integrate three parameters and merging them with geological structure and additional information, a more precise geothermal temperature field can be generated. CBAM-B-UNet architecture The architecture of CBAM-B-UNet is based on UNet (Ronneberger et al. 2015).The UNet's efficient representation capabilities allow for the accurate simulation of temperature fields from rock parameters.Specifically, the encoder component of the CBAM-B-UNet architecture extracts both rock parameters (density, thermal conductivity, and specific heat capacity) and temperature field data.The decoder subsequently generates a corresponding functional relationship between the two data types, enabling the deep simulation of temperature fields that have been trained through the UNet. The modified UNet features a contraction path on the left side with four downsampling blocks and an expansion path on the right side with four upsampling blocks, in line with the original UNet structure.Each downsampling block in the left path comprises a bottleneck architecture, ReLU activation function, Convolutional Block Attention Module (CBAM), sigmoid activation function, and downsampling operation.The bottleneck architecture reduces the number of network parameters, thereby accelerating the training process.Incorporating the CBAM enhances the robustness and generalization capabilities of the UNet.Both ReLU and sigmoid activation functions are utilized as nonlinear transformations to increase network nonlinearity (Krizhevsky et al. 2012).The downsampling operation involves a 2 × 2 maxpooling layer with a stride of 2, reducing the feature map size by half while retaining the maximum value.The intermediate bottlenecks include the bottleneck architecture, ReLU activation function, CBAM, and sigmoid activation function.Each upsampling block includes an upsampling operation (2 × 2 bilinear interpolation with a stride of 2), concatenation to merge left path features, bottleneck architecture, ReLU activation function, CBAM, and sigmoid activation function.The upsampling operation uses an upsampling layer to double the input image size.Finally, the temperature field is generated by a 1 × 1 convolution layer.By adjusting the number of output channels to 1, the network can accurately map rock parameters to the temperature field during the contraction and expansion learning phases, thereby achieving multi-parameter fusion.The structure of CBAM-B-UNet is shown in Fig. 1. Convolutional block attention module (CBAM) The convolutional block attention module (CBAM) was introduced by Woo et al. 
(2018). The module includes two sequential submodules, namely the channel and spatial submodules, and serves as a straightforward and efficient attention mechanism for feedforward convolutional neural networks. CBAM aims to direct the attention to the crucial features while also enhancing the representation capability of the neural network. By leveraging attention mechanisms, CBAM effectively focuses on informative features while suppressing redundant ones. In this study, the channel attention module and the spatial attention module are sequentially applied, facilitating the transmission of information in the neural network by learning to reinforce or suppress relevant characteristic information. Figure 2 shows the architecture of CBAM.

The channel attention module represents a distinctive type of attention module that aims to address the information loss commonly associated with single pooling operations. This is achieved by performing two pooling operations (maxpooling and averagepooling) in order to obtain two different feature maps. These maps are then processed by multilayer perceptron filters and added together. Finally, the sigmoid activation function is applied to obtain the channel attention. Figure 3 shows the channel attention module. The channel attention is as follows:

M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))

where M_c(F) is the channel attention module; σ(·) denotes the sigmoid function; MLP(·) is a multilayer perceptron; AvgPool(·) is avgpooling; MaxPool(·) is maxpooling.

The spatial attention module principally reflects the importance of input values in the spatial dimension. The attention module is obtained through maxpooling and averagepooling, concatenation, convolution by a standard convolution layer, and finally the sigmoid activation function. Figure 4 shows the spatial attention module. The spatial attention can be expressed as:

M_s(F) = σ(f([AvgPool(F); MaxPool(F)]))

where M_s(F) is the spatial attention module; f(·) denotes the standard convolution layer; [·; ·] denotes concatenation along the channel dimension.

The input feature map is first multiplied by the channel attention, then multiplied by the spatial attention, and ultimately the final feature map is obtained after CBAM processing. This process is as follows:

F_c = M_c(F′) ⊗ F′
F″ = M_s(F_c) ⊗ F_c

where ⊗ represents the element-wise multiplication of matrices; F′ is the input feature map; F_c is the intermediate feature map refined by the channel attention; F″ is the output feature map.

Bottleneck architectures

The bottleneck architecture is a crucial component of ResNet, featuring a distinctive bottleneck design (He et al. 2016). This module utilizes three convolutional kernels of sizes 1 × 1, 3 × 3, and 1 × 1 to reduce network parameters and accelerate network training. The introduction of the CBAM enhances the network's robustness and generalization capabilities, albeit at the cost of increased network parameters, heightened computational complexity, and slower training speeds. To address this challenge, integrating the bottleneck architecture into UNet helps expedite network training. Figure 5 shows the bottleneck architecture.

Data set preparation

As a data-driven algorithm, the performance of CBAM-B-UNet is contingent upon the quality of the training data set. In this study, the training data set builds upon the simulation of the temperature field of the strata after 600,000 years utilizing the finite element method. The rock parameters are derived from high-temperature and high-pressure petrophysical experiments and previous research (Zhang et al. 2021). To simplify the finite element calculation, this study posits several assumptions as the basis for establishing the labeled data:

(1) The rock matrix is considered to be homogeneous and isotropic, particularly with regard to thermal conductivity, which is assumed to be temperature independent.
(2) This study solely accounts for deep heat sources and does not take into consideration the generation of radioactive heat by rocks.
(3) Hydrothermal geothermal activity is not accounted for in the geological model, and the rock mass is assumed to be of the hot dry geothermal type. As water is not a part of the geological model, only heat conduction and energy transfer are considered.
(4) The dimensions of the geological model are 16 km × 16 km, with the heat source temperature set at 800 ℃ and the ground temperature set at 20 ℃.

The rock mass parameter ranges for all training data sets analyzed in this study are shown in Table 1. The geological structure models for the training data subset are shown in Fig. 6. Density, thermal conductivity, and specific heat capacity in the geological model are shown in Fig. 7. Subsequently, solving the temperature field was carried out using the finite element method, and the annotated results are shown in Fig. 8.

Data processing

The purpose of preprocessing data is to modify it to align with the requirements of the model and ensure compatibility between the data and the model. Variations in values may result in the over-representation of attributes with greater values and increase training time for the neural network. Algorithms based on sample distance are sensitive to the magnitude of the data. In this research, three parameters (density, thermal conductivity, and specific heat capacity) were selected to simulate the temperature field, and their values varied significantly, necessitating data preprocessing. To address this issue in the present study, a Z-score standardization technique was employed:

x* = (x − µ) / σ

where x* is the normalized data; x is the original data; µ is the mean of the sample data; σ is the standard deviation of the sample data.

Training CBAM-B-UNet

The CBAM-B-UNet parameters are optimized using the Adam optimizer (Kingma and Ba 2014), with an initial learning rate of 0.001. To prevent overfitting and improve model generalization, the learning rate is attenuated using the cosine annealing algorithm with warm restart (Loshchilov and Hutter 2016). A batch size of 4 is used for training, and the model is trained for a total of 60 epochs.

Result test model

In this section, in order to verify the effectiveness and generalization ability of the neural network on the data set, the trained CBAM-B-UNet is applied to geological model 1. This section establishes a geological model, as shown in Fig. 9, comprising heat conduction channels, non-thermal conductors, and high-temperature granite conductors. The temperature field is simulated using CBAM-B-UNet and compared with the results obtained from a finite element method simulation, as shown in Fig.
10.Although the CBAM-B-UNet training relies on the data set generated by the finite element method, there exist slight disparities between the output of CBAM-B-UNet and the temperature field generated by the finite element method.Specifically, in the granite conductor, the temperature should be higher than in the periphery.However, the isotherms from the finite element method do not exhibit characteristics consistent with theoretical expectations.In contrast, the neural network simulation aligns more closely with the expected theoretical behavior.Consequently, it is concluded that the method proposed in this study offers enhanced performance in accurately predicting the temperature field, especially in scenarios where the granite conductor's temperature behavior differs from that observed in the finite element method simulations, thus showcasing the method's improved predictive capabilities. Study area The Gonghe Basin, located in Qinghai Province, is a rhombic basin that has developed during the Cenozoic Era.It lies on the northeast edge of the Qinghai-Tibet Plateau (Fig. 11A) and has been formed through tectonic movements of the Qilian and Kunlun Mountains (Fig. 11B) (Zeng et al. 2018;Zhang et al. 2018a, b).The basin boundary fault activity has resulted in uplift and rising of surrounding mountains; as a result, the basin has remained relatively stable, and an extensive set of Cenozoic sediments have been deposited.These sediments comprise primarily Quaternary alluvial-diluvial deposits, fluvial-lacustrine deposits, and Neogene and Paleogene lacustrine deposits.The base of the basin is mainly composed of Triassic strata and intrusive rocks, consisting of granite, granodiorite, and porphyry granite (Fig. 11C) (Wang et al. 2015;Li et al. 2015).In recent years, a series of wells (e.g., DR3, DR4, GR1, and GR2) have been organized and implemented by the China Geological Survey and Qinghai Provincial Department of Land and Resources, revealing the occurrence of high temperatures in Gonghe Basin (Fig. 11D) (Zhang et al. 2018a, b;Yan et al. 2015).This highlights the significant development potential of hot dry rocks in the Gonghe Basin. Gonghe model To further validate the applicability of CBAM-B-UNet, a 12 km × 20 km Gonghe geological model was developed in this section, referencing the geological model of the Gonghe Basin established by Gao and Zhao (2024).This model (Fig. 12) comprises various geological structures with complex lateral geological conditions.It is structured into four layers: two uppermost caprocks, a middle geothermal reservoir, and a lower heat source.Tectonic activities have led to the development of numerous faults and cracks horizontally within the model, serving as conduits for geothermal energy underground.Subsequently, the trained CBAM-B-UNet is utilized to simulate the temperature field of the Gonghe geological model (Fig. 13).A comparison is made between the temperature field simulated by CBAM-B-UNet and those simulated by 3D-UNet and the finite element method (Fig. 13) (Gao and Zhao 2024).The comparison results indicate that the performance of CBAM-B-UNet aligns more consistently with theoretical expectations.Specifically, areas with a higher concentration of cracks at depths of 2-3 km, 6 km, 9 km, and 18-19 km exhibit increased temperatures compared to the surrounding regions.In order to verify the accuracy of the simulation, the actual logging temperature measurement curve are compared with the temperature field simulated by CBAM-B-UNet, as shown in Fig. 
14.The results reveal a high degree of consistency with the actual geological conditions, thus underlining the reliability and feasibility of CBAM-B-UNet in accurately predicting the temperature field in complex geological settings. Figure 14a shows the theoretical steady-state crustal geotherms of the Gonghe and the temperature curve obtained by CBAM-B-UNet simulation temperature field.The output value indicates a difference of less than 20 ℃, suggesting an error rate of less than 2% for CBAM-B-UNet (Fig. 14b).These results demonstrate the high superiority of our method. Discussion Based on the aforementioned points, we remain confident of the potential success of our proposed approach in simulating the temperature distribution of hot dry rocks.Consequently, this segment of the study emphasizes on scrutinizing the impact of various factors such as CBAM, bottleneck architectures, and cosine annealing algorithm with the warm restart method on the performance of UNet.Additionally, we carry out an indepth analysis of the limitations and possible future directions of our study.The following discussion is based on geological model 1. Effect of CBAM This study enhances the capability of neural networks in handling regression problems by incorporating Convolutional Block Attention Modules (CBAM) into the original Unet architecture.The impact of CBAM on neural network performance is assessed by comparing the simulated temperature field by CBAM-B-UNet and the original UNet.The experimental outcomes reveal that, although the training duration of CBAM-B-UNet needs to be extended, the integrated attention mechanism of CBAM significantly enhances the simulation accuracy of the temperature field (Fig. 15).Notably, CBAMenhanced models outperform original UNet in simulating the effects of heat conduction channels, aligning more precisely with contemporary geological insights.These results demonstrate the efficacy of integrating CBAM into neural network structures to enhance the precision of regression modeling. Effect of bottleneck architectures To mitigate the time overhead induced by merging CBAM, a bottleneck architecture has been introduced into the neural network.Furthermore, the time taken for an epoch by CBAM-B-UNet, CBAM-UNet, and original UNet under identical conditions, as well as the overall training duration, were compared.The experimental findings demonstrate that CBAM-B-UNet exhibits reduced time consumption, underscoring the benefits of incorporating the bottleneck architecture.Table 2 shows a comparative analysis of the time consumed by these three methods. 
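To make the building block under comparison concrete, the sketch below shows one possible PyTorch implementation of a bottleneck block (1×1–3×3–1×1 convolutions) followed by channel and spatial attention and a sigmoid, following the block description in the Methodology. The channel counts, reduction ratio, and spatial-attention kernel size are illustrative assumptions rather than the exact settings of CBAM-B-UNet.

```python
# Hypothetical sketch of a bottleneck block with CBAM, following the
# description in the Methodology; hyperparameters are placeholders.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP for both pooled vectors
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)                 # M_c(F)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)       # average over channels
        mx, _ = torch.max(x, dim=1, keepdim=True)      # max over channels
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # M_s(F)

class BottleneckCBAM(nn.Module):
    """1x1 -> 3x3 -> 1x1 bottleneck followed by channel and spatial attention."""
    def __init__(self, in_ch, out_ch, mid_ch=None):
        super().__init__()
        mid_ch = mid_ch or out_ch // 4
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.ReLU(inplace=True),
        )
        self.ca = ChannelAttention(out_ch)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = self.bottleneck(x)
        x = self.ca(x) * x                             # channel refinement
        x = self.sa(x) * x                             # spatial refinement
        return torch.sigmoid(x)                        # trailing sigmoid, as in the described block

x = torch.randn(1, 3, 160, 160)                        # 3 rock-parameter channels
print(BottleneckCBAM(3, 64)(x).shape)                  # torch.Size([1, 64, 160, 160])
```

In the full network, such blocks would sit between the 2 × 2 maxpooling and bilinear upsampling operations of the contraction and expansion paths described earlier.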
Effect of cosine annealing algorithm with warm restart

Hyperparameters are a set of free parameters that provide a means of controlling the entire algorithm. In this study, the learning rate was identified as a critical hyperparameter. To investigate the impact of the learning rate, two distinct learning rate adjustment strategies were compared in CBAM-B-UNet training, while keeping all other hyperparameters of CBAM-B-UNet constant. Figure 16 shows the loss curves resulting from the two different learning rate adjustment strategies. In the conventional algorithm, the loss value initially decreases gently, then gradually stabilizes around the 30th epoch at a high loss value. Conversely, the cosine annealing algorithm with warm restart exhibits only mild fluctuations during the attenuation process and stabilizes around the 30th epoch at a lower loss value. Notably, when the cosine annealing algorithm with a warm restart and the conventional algorithm complete training simultaneously, the conventional algorithm still exhibits underfitting, while the cosine annealing algorithm with a warm restart trains the neural network better. Consequently, this study suggests that the cosine annealing algorithm with a warm restart performs better than the conventional algorithm in training the neural network.

Limitations and future work

As an AI algorithm that is reliant on data, the training data set plays a critical role in determining the generalization capability of the neural network and is instrumental in establishing the functional relationships used in this study. The training data set for this study was created through the use of the finite element method to simulate the temperature field. As a result, the performance of our neural network is dependent, to some extent, on the finite element method. When utilizing the finite element method to establish the labels, a significant number of preconditions must be taken into account. These include the initial temperature of the heat source, the heat source's location, the boundary conditions (e.g., non-thermal conduction boundary), and the specific time at which the temperature conduction occurs. These preconditions limit the neural network's ability to simulate the temperature field to certain circumstances only. Consequently, it is not possible to simulate the temperature field of hot dry rocks over the time dimension. Moreover, the process of setting up the labels is time-consuming.

Conclusions

This study utilizes the CBAM-B-UNet method to simulate the temperature field of hot dry rock. The main findings are as follows:

1. Based on the simulated temperature field: The cover layer has a significant impact on the regional temperature field due to its low thermal conductivity. This results in the temperature field above the cover layer being lower than the surrounding temperature field. Compared to granite and the crust, thermal conductive channels exhibit higher heat transfer rates, with temperatures in the conductive channels also higher than in the surrounding layers. The temperature field inside granite is higher than the surrounding geothermal field, indicating a faster heat transfer speed compared to the surrounding layers.

2. Based on training the neural network: By incorporating attention mechanisms, a better calculation of the weights of the three parameters and fitting of the spatial geological model have been achieved. Integration of bottleneck architectures enhances the training speed of the network and significantly reduces the time required for network training. The cosine annealing algorithm with warm restarts can improve the network's fitting efficiency. Utilizing a multi-parameter fusion network to simulate the temperature field can effectively leverage multiple parameters, leading to more accurate results.

Fig. 2. Convolutional block attention module. CBAM-B-UNet takes the geological model containing rock parameters R (density, thermal conductivity, and specific heat capacity) as input and the temperature field T as expected output; that is, the relationship between R and T is established as T = CBAM-B-Net(R; θ), where CBAM-B-Net(·) denotes the CBAM-B-UNet and θ = {W, b} are learnable parameters, with W the weight matrix and b the bias matrix. In the training process of CBAM-B-UNet, the optimization and adjustment of the objective function are iterative.

Fig. 6. Four representative samples from 10,000 simulated training data sets: (a) pleated structure model; (b) horst model; (c) horizontal structure model; (d) horizontal structure model. The different geological structure models enrich the training data set.

Fig. 8. The simulated temperature field in the context of (a) pleated structure model, (b) horst model, (c) horizontal structure model, and (d) horizontal structure model. The finite element method is used to simulate the heat conduction of the model, the temperature field after 600,000 years is simulated, and the labeled data are established.

Fig. 11. Sketch map of regional geology and geothermal geology in Gonghe basin, northeastern Tibetan Plateau (modified from Zhang et al. 2021).

Fig. 13. Comparison of geological models in the Gonghe Basin using CBAM-B-UNet, 3D-UNet, and the finite element method to simulate the temperature field: (a) temperature field simulated by CBAM-B-UNet; (b) temperature field simulated by 3D-UNet (Gao and Zhao 2024); (c) temperature field simulated by the finite element method (Gao and Zhao 2024).

Fig. 14. (a) The theoretical steady-state crustal geotherms of the Gonghe and the temperature curve obtained from the CBAM-B-UNet simulated temperature field. (b) The temperature difference between the actual well logging temperature curve and the simulated temperature curve.

Fig. 16. The loss curves of two different learning rates: the traditional algorithm is the blue curve, and cosine annealing is the red curve.

Table 1. Rock mass parameters.

Table 2. Time consumed for the training and
2024-06-25T15:02:04.676Z
2024-06-23T00:00:00.000
{ "year": 2024, "sha1": "e4a28fe36289551d89d61ec4cd30b233af7ac0cc", "oa_license": "CCBY", "oa_url": "https://geothermal-energy-journal.springeropen.com/counter/pdf/10.1186/s40517-024-00300-x", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3d290cc9a25dcd178b484672a5d59c80e8995404", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
265503166
pes2o/s2orc
v3-fos-license
The genes significantly associated with an improved prognosis and long-term survival of glioblastoma Background and purpose Glioblastoma multiforme (GBM) is the most devastating brain tumor with less than 5% of patients surviving 5 years following diagnosis. Many studies have focused on the genetics of GBM with the aim of improving the prognosis of GBM patients. We investigated specific genes whose expressions are significantly related to both the length of the overall survival and the progression-free survival in patients with GBM. Methods We obtained data for 12,042 gene mRNA expressions in 525 GBM tissues from the Cancer Genome Atlas (TCGA) database. Among those genes, we identified independent genes significantly associated with the prognosis of GBM. Receiver operating characteristic (ROC) curve analysis was performed to determine the genes significant for predicting the long-term survival of patients with GBM. Bioinformatics analysis was also performed for the significant genes. Results We identified 33 independent genes whose expressions were significantly associated with the prognosis of 525 patients with GBM. Among them, the expressions of five genes were independently associated with an improved prognosis of GBM, and the expressions of 28 genes were independently related to a poorer prognosis of GBM. The expressions of the ADAM22, ATP5C1, RAC3, SHANK1, AEBP1, C1RL, CHL1, CHST2, EFEMP2, and PGCP genes were either positively or negatively related to the long-term survival of GBM patients. Conclusions Using a large-scale and open database, we found genes significantly associated with both the prognosis and long-term survival of patients with GBM. We believe that our findings may contribute to improving the understanding of the mechanisms underlying GBM. Introduction Glioblastoma multiforme (GBM) is the most common and devastating primary brain tumor, which is characterized by infiltrative growth and resistance to treatment and leads to an extremely poor prognosis.Despite aggressive treatment strategies against GBM, including chemotherapy, radiotherapy, immunotherapy, and surgical resection, only a few patients survive 2.5 years, and less than 5% of patients survive 5 years following their diagnosis [1]. Extensive studies have focused on the genetics of GBM to improve the understanding of the underlying mechanisms of GBM and to contribute to an improved prognosis of patients with GBM [2].We also previously identified a DKK3 gene from the Wnt/β-catenin pathway and 12 genes from 10 oncogenic signaling pathways associated with GBM prognosis using The Cancer Genome Atlas (TCGA) database [3,4].It is well known that TCGA is the world's largest publicly accessible genomic database.It includes information on digital pathologic slides, mRNA expression data, clinicopathological information, and DNA methylation and mutation data.However, there has not been a study aiming to identify the genes significantly related to the prognosis of GBM by assessing the direct association between the gene expression levels in GBM tissue and both the lengths of the overall survival (OS) and the progression-free survival (PFS) in patients with GBM, using large gene expression datasets of GBM.In addition, we hypothesize that if genes related to long-term survival in patients with GBM are found, it may help predict the future prognosis or treatment of patients with GBM. 
Therefore, this study aimed to investigate specific genes, using the TCGA database, whose expressions are significantly related to both the lengths of OS and PFS in patients with GBM.Next, we aimed to classify the identified genes significantly associated with the prognosis of GBM, according to the Gene Ontology (GO) terms using bioinformatics.Finally, this study aimed to identify which genes, among the identified genes significantly related to the GBM prognosis, were significantly associated with the long-term survival of patients with GBM.A schematic flow chart depicting the steps involved in this research is presented in Study patients We obtained 1,149 glioma cases, consisting of 619 GBM cases and 530 low-grade glioma cases with mRNA gene expression data from the TCGA database (https://gdc.cancer.gov/about-data/publications/pancanatlas and https://www.cbioportal.org/)[5].We initially selected 594 GBM cases with virtual histopathological slides and clinical data out of 619 GBM cases.We excluded 594 GBM cases with significantly incomplete mRNA gene expression information and clinical data.Therefore, the 525 GBM cases with complete virtual histopathological slides, mRNA expression data, and clinical information were finally included in the study as described elsewhere [3,4].Log 2 (x + 1) transformation normalized all mRNA gene expression values before analysis [6]. Informed consent was not required because the data were obtained from the publicly available TCGA database. Study design In Fig 1 , the study design is shown as follows: (1) we initially observed a dataset from the TCGA database containing mRNA expression information for 12,042 genes from 525 GBM tissues; (2) then excluded 11,187 genes whose expressions showed no significant association with the lengths of the OS or PFS in the study's patients, according to Pearson correlation analysis (p � 0.01); (3) excluded 819 genes with a low strength of correlation: Genes showing a Pearson coefficient absolute value of less than 0.2, according to a previous study [7]; (4) after adjusting for clinical variables, three genes whose expressions were not significantly associated with the lengths of the OS or PFS were further excluded (Table 1); (5) A total of 33 genes whose expressions showed significant independent associations with both the lengths of the OS and PFS in patients with GBM were finally enrolled for the study.We also present the results of the univariate linear regression analysis of the lengths of the OS and PFS according to the 36 significant gene expressions in patients with GBM in the S2 Table .The raw data related to the study design can be found in the S1 Data. In silico flow cytometry As previously reported, we analyzed tumor-infiltrating lymphocytes in GBM tissues using CIBERSORT (https://cibersort.stanford.edu),a versatile computational method for quantifying the immune cell-type fractions.This method relies on a validated leukocyte gene signature matrix containing 547 genes and 22 human immune cell subpopulations [3,4].The gene expression profiles of the GBM tissues from the TCGA were entered into CIBERSORT for analysis, and the algorithm was run using the LM22 signature matrix at 100 permutations. 
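As a rough illustration of the correlation-based screening laid out in the Study design above (log2(x + 1)-normalized expression, Pearson correlation against log-transformed survival times, p ≤ 0.01 and |r| ≥ 0.2), a minimal sketch is given below. The data frames and column names are hypothetical placeholders, not the actual TCGA processing code used in this study.

```python
# Minimal sketch of the correlation-based gene screening (illustrative only;
# arrays and column names are hypothetical placeholders).
import numpy as np
import pandas as pd
from scipy import stats

# expr: genes x samples matrix of mRNA expression; os_months: per-sample overall survival
expr = pd.DataFrame(np.random.rand(100, 525) * 1000)      # placeholder expression data
os_months = pd.Series(np.random.rand(525) * 60 + 1)       # placeholder survival times (months)

expr_log = np.log2(expr + 1)            # log2(x + 1) normalization of expression
os_log = np.log(os_months)              # natural-log transform of survival months

selected = []
for gene, values in expr_log.iterrows():
    r, p = stats.pearsonr(values, os_log)
    if p <= 0.01 and abs(r) >= 0.2:     # significance and minimum correlation strength
        selected.append((gene, r, p))

print(f"{len(selected)} genes pass the OS screening step")
```

The same screening would be repeated against PFS, with only genes passing both steps carried forward to the adjustment for clinical variables.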
CD8+ T-cells are major drivers of antitumor immunity, and elevated CD8+ T-cell counts in the tumor microenvironment are related to a good prognosis in cancer [8]. In addition, as we have previously described, CD4+ T-cells, CD8+ T-cells, regulatory T-cells (Tregs), B-cells, and antigen-presenting cells are reported to play an important role in the immune microenvironment of GBM [3]. Therefore, we included the following eight representative immune cells for the study to evaluate the relationships between the status of the GBM immune microenvironment and the significant gene expressions.

Bioinformatics analysis

We performed bioinformatics analysis using Cytoscape (version 3.9.1) software (https://cytoscape.org/). We used the ClueGO and CluePedia plugins, which enable functional Gene Ontology and pathway network analyses in Cytoscape, to interpret the biological roles and interactions of the 33 selected significant genes in GBM [9]. We analyzed the biological function annotated pathways based on the 33 significant genes related to the prognosis of GBM. We also activated the cerebral view function in the ClueGO application of Cytoscape to estimate the approximate location of any significant proteins in the cell. We also conducted pathway-based network analysis using the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database.

Statistical analysis

Heatmap analyses of the 33 significant gene expressions and immune cell infiltrations in 525 GBM tissues were performed using the "pheatmap" package of R software (version 4.1.2). Pearson correlation coefficients and significance levels were calculated to evaluate the associations between the 33 significant gene expressions, the lengths of the OS and PFS in patients with GBM, and the immune cell infiltrations in GBM tissues. We used the "corrplot" package of R software with the clustering technique (R code: corrplot, M, order = "hclust", sig.level = 0.01, method = "square") to visualize the correlations. A scatterplot with a linear regression line was used to visualize the relationship between several significant gene expressions and the lengths of the OS and PFS in patients with GBM. The OS and PFS months were transformed to the natural log scale to normalize the distributions for the analysis. We calculated the OS and PFS rates using Kaplan-Meier analysis based on the gene expression quartiles in patients with GBM. Receiver operating characteristic (ROC) curve analysis was performed to determine the genes significant for predicting the 2.5-year and 5-year survivals in patients with GBM, with the optimal point defined as showing the shortest distance from the upper left corner (where sensitivity = 1 and specificity = 1). A p-value < 0.05 was considered statistically significant. All statistical analyses were performed using R software version 4.1.2 and SPSS for Windows version 24.0 (IBM, Chicago, IL).

Characteristics of the study patients

A total of 525 patients with GBM from the TCGA database were included in this study. The mean patient age at the diagnosis of GBM was 57.7 years, and 39.0% of patients were female. A total of 435 (82.9%) patients underwent radiation treatment, and further detailed information, including immune cell fractions in GBM tissues, is shown in the S1 Table.

Expression patterns of the 33 significant genes and immune cells in GBM

The heat map shows different mRNA expression patterns between the 33 significant genes in 525 GBM tissues (Fig 2A).
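As a brief aside before the expression results, the "shortest distance to the upper-left corner" ROC criterion defined in the Statistical analysis above can be computed as sketched below; the expression vector and 5-year survival labels are illustrative placeholders rather than study data.

```python
# Sketch of the ROC criterion: distance of each ROC point to the ideal
# corner (sensitivity = 1, specificity = 1). Data below are placeholders.
import numpy as np
from sklearn.metrics import roc_curve

expression = np.random.rand(525)                  # placeholder gene expression values
survived_5y = np.random.randint(0, 2, size=525)   # placeholder 5-year survival labels

fpr, tpr, thresholds = roc_curve(survived_5y, expression)
distance = np.sqrt((1 - tpr) ** 2 + fpr ** 2)     # distance to (FPR = 0, TPR = 1)
best = np.argmin(distance)
print(f"min distance {distance[best]:.3f} at threshold {thresholds[best]:.3f}")
```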
GBM: glioblastoma multiforme; OS: overall survival; PFS: progression-free survival.

There were three genes whose mRNA expression levels were noticeably increased in GBM: ATP5C1, CHI3L1, and TIMP1 (Fig 2B). Among the five genes associated with a good prognosis of GBM, the expressions of DHRS2, ADAM22, RAC3, and SHANK1 were relatively reduced in GBM tissues. The heatmap also showed differences in the eight immune cell fractions between the 525 GBM tissues (Fig 2C and 2D).

Correlations between the expressions of the 33 genes, the lengths of the OS and PFS, and the immune cells in GBM

We visualized the correlations between the mRNA expressions of the 33 significant genes and the lengths of the OS and PFS in patients with GBM (Fig 2E). The expressions of 5 genes (ADAM22, ATP5C1, DHRS2, RAC3, and SHANK1) showed positive correlations with the lengths of the OS and PFS, with Pearson coefficients greater than 0.2. The remaining 28 genes showed negative correlations with the lengths of the OS and PFS, with Pearson coefficients less than -0.2. When we estimated the correlations between the expressions of the 33 significant genes and the infiltrations of the eight immune cells in the 525 GBM tissues, there were significant correlations (p < 0.01) between the expressions of 32 genes and the CD8+ T-cell infiltrations, the exception being the ATP5C1 gene (an x in the box indicates a p-value ≥ 0.01) (Fig 2F). We also found that the expressions of C13orf18, CHI3L1, CHL1, and CHST2 showed significant correlations with all eight immune cell fractions in GBM.

Associations between the expressions of the selected genes and the lengths of the OS and PFS in patients with GBM

We observed significant positive linear associations between the expressions of ADAM22, ATP5C1, RAC3, and SHANK1 and the lengths of the OS and PFS in patients with GBM (Fig 3A). Using the Kaplan-Meier survival analysis, the fourth quartiles of ADAM22, ATP5C1, RAC3, and SHANK1 expressions showed significantly greater OS and PFS rates than the first, second, and third quartiles, except for the fourth-quartile analysis of RAC3 for PFS (p = 0.1) (Fig 3B). Among the 28 genes associated with a poor prognosis of GBM, we observed that the expressions of C13orf18, CHI3L1, CHL1, and CHST2, which were associated with all eight immune cell fractions, showed significant negative linear associations with the lengths of the OS and PFS in patients with GBM (Fig 3C). The first quartiles of C13orf18, CHI3L1, CHL1, and CHST2 expressions were significantly associated with greater OS and PFS rates compared with the other quartile groups (Fig 3D). We also analyzed the OS and PFS in patients with GBM according to the quartile groups of the remaining 25 gene expressions, which are not included in the main figures (S1 and S2 Figs). The OS and PFS differences were statistically significant for all the remaining genes except DHRS2 and SWAP70.
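The quartile-based survival comparisons and the ROC-based selection of predictive genes reported above can be outlined in Python as follows. This is a hedged sketch, not the authors' analysis code (the study used R and SPSS): the inputs (one gene's expression values, survival times in months, an event indicator, and a 0/1 indicator of surviving past the 2.5- or 5-year landmark) are hypothetical stand-ins for the TCGA variables, and the lifelines and scikit-learn libraries are assumed to be available.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from sklearn.metrics import roc_curve

def km_by_quartile(expr: pd.Series, time_months: pd.Series, event: pd.Series):
    """Kaplan-Meier curves for the four expression quartiles of a single gene."""
    quartile = pd.qcut(expr, 4, labels=["Q1", "Q2", "Q3", "Q4"])
    kmf, ax = KaplanMeierFitter(), None
    for q in ["Q1", "Q2", "Q3", "Q4"]:
        mask = (quartile == q).values
        kmf.fit(time_months[mask], event_observed=event[mask], label=q)
        ax = kmf.plot_survival_function(ax=ax)
    return ax

def roc_distance_to_corner(expr: pd.Series, survived_landmark: pd.Series):
    """Shortest distance of the ROC curve from the (sensitivity=1, specificity=1)
    corner, the criterion used to pick genes predictive of 2.5- or 5-year survival.
    For genes negatively associated with survival, pass -expr as the score."""
    fpr, tpr, thresholds = roc_curve(survived_landmark, expr)
    dist = np.sqrt(fpr ** 2 + (1.0 - tpr) ** 2)
    i = int(np.argmin(dist))
    return float(dist[i]), float(thresholds[i])
```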
Functional gene ontology and pathway network analyses

The ClueGO and CluePedia plugins of Cytoscape were used to identify the enriched pathways and to investigate the functionally grouped networks of the 33 significant proteins in GBM. We found three significant GO terms, namely 'neuromuscular process controlling balance', 'mitochondrial proton-transporting ATP synthase complex, catalytic sector F(1)', and 'carbonyl reductase (NADPH) activity', among the five significant proteins (ADAM22, ATP5C1, DHRS2, RAC3, and SHANK1) associated with an improved prognosis of GBM (Fig 4A and 4B). When protein-protein interaction was analyzed using STRING, only RAC3 and SHANK1 demonstrated a significant interaction (Fig 4C). There were 14 significant GO terms for the genes associated with a poor prognosis in patients with GBM (Fig 4D and 4E). Among the 14 GO terms, the top four significant GO terms were 'negative regulation of myeloid cell apoptotic process', 'formation of fibrin clot (clotting cascade)', 'regulation of extracellular matrix organization', and 'T-cell aggregation' (Fig 4D and 4E). Following further analysis of the protein-protein interactions between the 28 genes associated with a poor prognosis in patients with GBM, we found that the genes were roughly divided into two clusters (Fig 4F). These findings, and the possible mechanisms by which the 33 significant genes affect the OS and PFS in patients with GBM based on previous studies, are summarized in Table 2.

Discussion

In this study, we identified 33 independent genes, among 12,042 genes from the TCGA database, whose expressions were significantly associated with the prognosis of 525 patients with GBM. Among them, the expressions of five genes were independently associated with an improved prognosis of GBM, while the expressions of the other 28 genes were independently related to a worse prognosis of GBM. Moreover, the genes associated with long-term survival were identified in GBM patients. Among the five genes associated with an improved prognosis of GBM, the genes whose expressions were significantly associated with the long-term survival of GBM patients were ADAM22, ATP5C1, RAC3, and SHANK1. In contrast, among the 28 genes that were associated with a worse prognosis in GBM patients, the expressions of AEBP1, C1RL, CHL1, CHST2, EFEMP2, and PGCP were negatively related to the long-term survival of GBM patients. When bioinformatics analysis was performed, there were three significant GO terms among the genes associated with an improved prognosis of GBM, whereas 14 significant GO terms were found among the genes associated with a worse prognosis of GBM. We classified the 33 significant genes according to their GO terms and the possible roles of those proteins in the prognosis of GBM based on the GeneCards database (www.genecards.org) and previous studies (Table 2). GeneCards is known as a comprehensive, authoritative compendium of annotative information about human genes, which is automatically mined and integrated from over 80 digital sources, resulting in a web-based deep-linked card for each of more than 73,000 human gene entries [45].
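For readers unfamiliar with how enrichment tools such as ClueGO rank GO terms, the core idea is an over-representation test of the gene list against the genes annotated to each term. The Python sketch below shows a generic one-sided hypergeometric test; it is a simplified stand-in rather than ClueGO's actual algorithm, and the three gene sets it takes are hypothetical inputs.

```python
from scipy.stats import hypergeom

def go_overrepresentation_p(gene_list: set, term_genes: set, background: set) -> float:
    """One-sided hypergeometric p-value that `gene_list` is enriched for a GO term.

    background : all genes considered in the analysis (for example, the screened genes)
    term_genes : background genes annotated with the GO term of interest
    gene_list  : the query list (for example, the prognosis-associated genes)
    """
    M = len(background)                              # population size
    n = len(term_genes & background)                 # annotated genes in the population
    N = len(gene_list & background)                  # number of draws (query size)
    k = len(gene_list & term_genes & background)     # annotated genes in the query
    # P(X >= k) under sampling without replacement
    return float(hypergeom.sf(k - 1, M, n, N))
```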
Consequently, we found that the expression of the genes involved in the GBM immune microenvironment most commonly influences the GBM prognosis.To support this, our study showed significant correlations between the expressions of all 32 significant genes (except ATP5C1) and CD8+ T-cell infiltrations in the 525 GBM tissues.A recent study also reported that GBM cases with high-risk scores were involved in immune and inflammatory processes or pathways [46].Based on our investigation, among the 33 significant genes, there were 12 significant genes that appeared to be related to the GBM immune microenvironment and may affect the prognosis of GBM: C1RL, CCL2, CHI3L1, CLEC5A, EMP3, FBXO17, MSN, SERP-ING1, STEAP3, SWAP70, TIMP1, and TMEM22.According to our findings, these 12 genes were associated with a worse prognosis for GBM; therefore, we hypothesized that they might be involved in the immunosuppression of the GBM microenvironment.Our findings support this hypothesis since all of these 12 genes were negatively correlated with CD8+ T-cell infiltrations in the GBM tissues.Moreover, we observed that these 12 genes are almost identical to the genes belonging to the red cluster in Fig 4F .The immune microenvironment of GBM is highly immunosuppressive due to the lack of a number of tumor-infiltrating lymphocytes and other Table 2. Classification of the 33 significant genes according to their GO terms alongside the possible mechanisms of the 33 significant proteins affecting the OS and PFS in GBM patients. ADAM22 Neuromuscular process controlling balance ADAM22, a brain-specific cell surface protein, mediates glioma growth inhibition using an integrin-dependent pathway. RAC3 Although it is known as an oncogene, it has been reported that it plays the opposite role in glioma.RAC3 interacts with the integrin-binding protein and promotes integrin-mediated adhesion and spreading.Some integrins can promote the entry of adenoviral complexes into glioma stem cells and produce killing effects.Although the exact mechanism is unclear, we speculate that RAC3 may have tumor suppressive effects in an integrin-dependent manner in glioblastoma. Cell adhesion, structural and extracellular matrix [11][12][13] SHANK1 SHANK1 acts as a negative regulator of integrin activity and consequently interferes with cell adhesion, spreading, migration, and invasion. Cell adhesion, structural and extracellular matrix [14] ATP5C1 Mitochondrial proton-transporting ATP synthase complex, catalytic sector F(1) A common event in tumor cells is the metabolic switch from respiration (in the mitochondria) to glycolysis (in the cytosol), often referred to as "the Warburg effect".The increased expression of ATP5C1 may be associated with maintaining the activities of ATP synthase and cellular respiration leading to the inhibition of tumor progression. DHRS2 Carbonyl reductase (NADPH) activity DHRS2 is known as a tumor-suppressor gene that belongs to the short-chain dehydrogenase/reductase family.DHRS2 decreases the NADP/NADPH ratio and induces ROS clearance in mitochondria.In addition, DHRS2 is reported to bind MDM2 and lead to the attenuation of MDM2-intermediated p53 degradation. Regulation of extracellular matrix organization The AEBP1 activates MAP kinase in adipocytes, leading to adipocyte proliferation and reducing adipocyte differentiation.AEBP1 may promote GBM cell proliferation, migration, and invasion by activating the classical NF-κB pathway, which stimulates the activity and expression of the MMP-9. 
Structural and extracellular matrix [15] EFEMP2 EFEMP2 is a member of fibulins, which are a family of extracellular matrix glycoproteins.EFEMP2 may promote tumor invasion in glioma by regulating MMP-2 and MMP-9. Structural and extracellular matrix [24] PDPN PDPN is associated with cell elongation, cell adhesion, migration, and tube formation by promoting the rearrangement of the actin cytoskeleton.PDPN may promote invasive capacity, migration, and the radio-resistance of GBM cells. Cell adhesion, structural and extracellular matrix [31] SLC2A10 Both regulation of extracellular matrix organization and negative regulation of myeloid cell apoptotic process SLC2 genes encode glucose transporters.SLC2A10 is significantly highly expressed in GBM with a poor prognosis. Transporter [37] CCL2 Negative regulation of myeloid cell apoptotic process CCL2 is a potential candidate chemokine to regulate the chemoattraction of Treg to glioma.CCL2 recruits Tregs and myeloid-derived suppressor cells as major contributors to the potently immunosuppressive glioma microenvironment. Immune system process [18] CLEC5A CLEC5A is a myeloid specific gene and may promote immunosuppression, tumor angiogenesis and cancer cell invasion in GBM. Immune system process [22] TIMP1 TIMP1 is a specific inhibitor of MMP.TIMP1 shows aberrant upregulation in different types of cancers.TIMP1 levels are positively related to increased immune infiltration levels of tumor-infiltrating lymphocytes and correlate with cancer progression in GBM. F3 Formation of fibrin clot (clotting cascade) F3 encodes coagulation factor III, which is a cell surface glycoprotein promoting hypercoagulation status.The hypercoagulation status both increases the risk of thromboembolic events and influences the brain tumor biology, thereby promoting its growth and progression by stimulating intracellular signaling pathways. Blood coagulation cascade [26] SERPING1 SERPING1 encodes plasma protein involved in the regulation of the complement cascade, C1 inhibitor, and immune cell response.The C1 inhibitor can inactivate plasmin and tissue plasminogen activators to promote clot formation.SERPING1 might also drive the hypoxic phenotype of peri necrotic GBM leading to hypoxia-induced glioma stemness. CBR1 PGE2 is converted to PGF2a by CBR1 CBR1 inactivates highly reactive lipid aldehydes and may play a meaningful role in preserving cells from oxidative stress.Inhibition of CBR1 induces accumulation of intracellular ROS levels leading to an increase in mitotic catastrophe and mitotic arrest.Among patients treated with radiation, patients with low CBR1 expression showed an improved prognosis.CBR1 may be crucial for the survival of cancer cells after radiation and can be a good target for developing radiosensitizers. CHI3L1 Chitin catabolic process CHI3L1 is associated with the inflammatory response and promotes the progression of GBM by secreting cytokines released from immune cells.CHI3L1 may contribute to the immunosuppressive microenvironment of GBM.Inhibition of CHI3L1 may reduce immunosuppression and overcome immunotherapy resistance in GBM. Immune system process [19] CHL1 CHL1 interacts with contactin-6 CHL1 is a member of the cell adhesion molecule L1 family and plays a fundamental role in the development and progression of cancers.CHL1 is associated with promoting the survival of glioma cells while inhibiting apoptosis of glioma cells via the PI3K/AKT signaling pathway. 
CHST2 Keratan sulfate biosynthetic process CHST family has been reported as an oncogene in various cancers.However, the role of CHST2 in GBM is largely unknown.CHST family significantly increases GBM cell proliferation through the WNT/β-catenin pathway. DYNLT3 Mitotic spindle astral microtubule DYNLT3 is a component of the cytoplasmic dynein complex and binds with the mitotic protein to control mitosis and meiosis progression.It was reported that the low expression of DYNLT3 was associated with longer survival in female patients. Cell cycle [23] MSN T-cell aggregation MSN is a link between the actin cytoskeleton and the plasma membrane and controls T-cell differentiation via the TGF-β receptor.Upregulation of MSN expression in glioblastoma cells might be correlated with increases in cell proliferation, invasion, and migration through the Wnt/β-catenin pathway. Immune system process [28,29] NSUN5 rRNA (cytosine-C5)-methyltransferase activity NSUN5 is an enzyme with tumor-suppressor properties that undergoes epigenetic loss in gliomas leading to an overall depletion of protein synthesis.NSUN5 epigenetic inactivation is a hallmark of glioma patients with long-term survival. Embryonic development [30] PPCS 2xPPCS ligates PPanK with Cys PPCS catalyzes the pathway in which phosphopantothenate reacts with ATP and cysteine to form phosphopantothenoylcysteine. Phosphopantothenoylcysteine is an intermediate in the biosynthetic pathway that converts pantothenate (vitamin B5).Vitamin B5 is the key precursor for the biosynthesis of coenzyme A (CoA) and CoA may act as an acyl group carrier to form acetyl-CoA. Acetyl-CoA promotes glioblastoma cell adhesion and migration through Ca 2+ -NFAT signaling. Metabolism [33,34] (Continued ) immune effector cells in the GBM microenvironment [21].This immunosuppressive GBM microenvironment results in resistance to immunotherapy and promotes a poor prognosis in GBM patients.Among the 12 significant genes involved in the immunosuppression of GBM, CCL2 recruits Tregs and myeloid-derived suppressor cells, which play a critical role in the immunosuppressive glioma microenvironment [20].High levels of CHI3L1 are positively related to the infiltration of Tregs, neutrophils, and resting NK cells, which induces limitations in the effective anti-tumor immune response to GBM [21].In addition, EMP3 is an important immunosuppressive factor for recruiting tumor-associated macrophages in GBM, which induces suppression of T-cell infiltration and leads to tumor progression [27].Furthermore, C1RL may play an immunosuppressive role in the pathogenesis of glioma by triggering the activation of haptoglobin and complement component 1 [18]. 
The second most common possible mechanism related to the effect these 33 significant genes could produce on the prognosis of GBM was through cell adhesion or structural and extracellular matrix.According to our findings, 10 genes including ADAM22, AEBP1, CHL1, EFEMP2, PDPN, PGCP, RAC3, SHANK1, SWAP70, and TRIP6 appeared to influence the prognosis of GBM through mechanisms involving cell adhesion or structural and extracellular matrix.Among the genes associated with a good prognosis in GBM patients, ADAM22, RAC3, and SHANK1 are thought to inhibit GBM progression in an integrin-dependent manner [10,[13][14][15][16].Meanwhile, based on our investigation, AEBP1, EFEMP2, and PGCP, which were negatively related to long-term survival in GBM patients are thought to affect the prognosis of GBM through matrix metalloproteinases (MMPs)-related mechanisms [17,26,34].Low expressions of MMP9 in GBM tissues are associated with a good response to temozolomide and longer survival of patients with GBM [47].In addition, CHL1, which is also negatively associated with long-term survival in GBM patients, promotes the survival of glioma cells by inhibiting the apoptosis of glioma cells via the phosphatidylinositol 3-kinase (PI3K)/AKT signaling pathway [22]. Meanwhile, among the 33 independent and significant genes, CHST2, PPCS, and FBXO17 were considered to influence the prognosis of GBM in relation to metabolism [23,29,35,36].The role of CHST2 in GBM is largely unknown, however, it is thought to have a negative influence on long-term survival in GBM patients in our study.Moreover, it has been previously reported that the CHST family may cause GBM cell proliferation through the WNT/β-catenin pathway [23].Furthermore, according to our study, the genes related to the blood coagulation cascade, such as F3 and SERPING1, may affect the prognosis for GBM.F3 encodes coagulation factor III, which promotes hypercoagulation status.The hypercoagulation status increases the risk of thromboembolic events and promotes the growth and progression of brain tumors by stimulating intracellular signaling pathways [28].In addition, according to our study, an increased expression of ATP5C1, which is involved in mitochondrial ATP synthesis, was significantly associated with the long-term survival of GBM patients.A metabolic switch from respiration (in the mitochondria) to glycolysis (in the cytosol) is a common feature in tumor cells.However, increased expression of ATP5C1 may also be related to maintaining the activities of ATP synthase and cellular respiration, which leads to the inhibition of tumor progression [11]. 
In summary, the overexpression of C1RL, CCL2, CHI3L1, CLEC5A, EMP3, FBXO17, MSN, SERPING1, STEAP3, SWAP70, TIMP1, and TMEM22 genes appears to influence the prognosis of patients with GBM by causing an immune-suppressive GBM microenvironment.Immunotherapy holds tremendous promise for revolutionizing cancer therapies, but the significant immunosuppression seen in patients with GBM inhibits the effectiveness of immunotherapy.Therefore, reversing this GBM-mediated immune suppression is critical to increase the effectiveness of immunotherapy for GBM [48].Consequently, we believe it is meaningful to validate whether blocking the above 12 genes, which are associated with immunosuppression in GBM, affects the prognosis of GBM in this study.Secondly, ADAM22, AEBP1, CHL1, EFEMP2, PDPN, PGCP, RAC3, SHANK1, SWAP70, and TRIP6 genes may impact the prognosis of GBM through mechanisms involving cell adhesion or structural and extracellular matrix.ADAM22, RAC3, and SHANK1 were associated with a favorable prognosis in patients with GBM, and the expression of the remaining genes was associated with a poor prognosis.Focal adhesion is at the center of signaling pathways crucial for tumor development and may mediate radioresistance, chemotherapy, and resistance to targeted therapy in glioma [49].Consequently, we believe that the above cell adhesion-related genes associated with the GBM prognosis identified in this study may have clinical implications for the future treatment of GBM.Finally, our results demonstrate that CHST2, PPCS, and FBXO17 may influence the prognosis of GBM through metabolism pathways.CHST2 could impact the WNT/β-catenin pathway, F3 and SERPING1 through blood coagulation cascade, and ATP5C1 through mitochondrial ATP synthesis.Therefore, based on our findings, we are planning future in vitro and/or in vivo experiments to validate the relationship between the identified genes and GBM prognosis.We expect that future experimental studies may contribute to improving the treatment of GBM. This study has several limitations: Firstly, we obtained all clinical and mRNA expression data from the TCGA database, which is retrospective.Thus, further planned studies are required to verify these results.However, since public TCGA data was used and all the raw data is presented as Supplementary Data 1, our results can be evaluated and validated by other researchers.Secondly, the fraction of immune cells in GBM was estimated using in silico flow cytometry-based analysis, although this may not accurately reflect the actual number of immune cells.Thirdly, the current findings were not verified through experimental analyses; therefore, further in vitro and/or in vivo studies are required.Fourth, there are missing clinical and mRNA expression data that were unavailable in the TCGA dataset, potentially influencing the results of the statistical analyses in the study.Lastly, this study is subject to potential bias because it only used data from a single TCGA database.Therefore, verifying the results in future studies using different databases is necessary. 
Conclusion Overall, we investigated significant genes related to both length of OS and PFS in patients with GBM using a large-scale, open database.According to our findings, there were 33 independent genes among 12,042 human genes whose expressions were significantly associated with the prognosis of GBM.Among these 33 significant genes, the expressions of five genes were associated with an improved prognosis of GBM, while numerous other genes were related to a worse prognosis in patients with GBM.In addition, expressions of ADAM22, ATP5C1, RAC3, SHANK1, AEBP1, C1RL, CHL1, CHST2, EFEMP2, and PGCP genes were either positively or negatively related to the long-term survival of GBM patients.Although our findings are required to be validated in the future, we believe that they may contribute to improving the understanding of the mechanisms underlying the pathophysiology of GBM. Fig 1 . Fig 1.Schematic diagram detailing the process of selecting the independent genes significantly associated with the prognosis of GBM for our study.https://doi.org/10.1371/journal.pone.0295061.g001 Fig 2 . Fig 2. Gene expression patterns of the 33 independent and significant genes with comparisons of the immune cell fractions in GBM.The correlations between the 33 significant genes, the OS and PFS lengths, and fractions of representative immune cells in GBM.(A) A hierarchically clustered heatmap showing the expression patterns of the 33 significant genes related to the prognosis of GBM.Gene expression levels were log2 transformed, and a color density indicating levels of log2 fold changes is presented.Red and blue represent up-and downregulated expression, respectively, in GBM; (B) a bar plot indicating average expression levels of the 33 significant genes in GBM tissue; (C) a hierarchically clustered heatmap showing the expression patterns of eight representative immune cells in GBM; (D) boxplots showing the differences in eight representative immune cell fractions in GBM; (E) Pearson correlation coefficients and significance levels were calculated between the expressions of the 33 significant genes and lengths of the OS and PFS in patients with GBM; (F) Pearson correlation coefficients and significance levels were calculated between the expressions of the 33 significant genes and fractions of representative eight immune cells in GBM.The color-coordinated legend indicates the value and sign of the Pearson correlation coefficient.The number in the box indicates the Pearson correlation coefficient.The 'x' in the box indicates a p-value � 0.01.https://doi.org/10.1371/journal.pone.0295061.g002 Fig 3 . Fig 3. 
Scatter plot with linear regression line between several significant gene expressions and log2-transformed lengths of the OS and PFS in patients with GBM.Kaplan-Meier analysis showing the OS and PFS rates based on several significant gene expression quartiles in patients with GBM.The ROC curves to identify significant genes associated with 2.5-year and 5-year survivals in patients with GBM.(A) Linear regression lines showing the associations between ADAM22, ATP5C1, RAC3, and SHANK1 expressions and the lengths of the OS and PFS in patients with GBM; (B) Kaplan-Meier curves showing the OS and PFS rates according to ADAM22, ATP5C1, RAC3, and SHANK1 expression quartiles in patients with GBM; (C) linear regression lines showing the associations between C13orf18, CHI3L1, CHL1, and CHST2 expressions and the lengths of the OS and PFS in patients with GBM; (D) Kaplan-Meier curves showing the OS and PFS rates according to C13orf18, CHI3L1, CHL1, and CHST2 expression quartiles in patients with GBM; (E) ROC curves showing the significant genes both positively and negatively associated with a 2.5-year survival in patients with GBM; (F) ROC curves showing the significant genes both positively and negatively associated with 5-year survival in patients with GBM.https://doi.org/10.1371/journal.pone.0295061.g003 Fig 4 . Fig 4. Bioinformatic analysis of the significant genes associated with the prognosis of GBM using Cytoscape with ClueGo and CluePedia plugins and STRING database.(A)Grouping of the networks of the significant genes associated with an improved prognosis of GBM based on functionally enriched GO terms and pathways using the ClueGo and CluePedia plugins of Cytoscape; (B) functionally grouped networks based on the GO terms of the genes significantly associated with an improved prognosis of GBM, showing three significant GO terms.The cerebral view shows the approximate location of those significant proteins in the cell; (C) a protein-protein interaction network was constructed among the genes associated with an improved prognosis of GBM; (D) grouping of the networks of the genes significantly associated with a poorer prognosis of GBM, based on functionally enriched GO terms and pathways using the ClueGo and CluePedia plugins of Cytoscape; (E) functionally grouped networks based on the GO terms of the genes significantly associated with a poorer prognosis of GBM, showing 14 significant GO terms.The cerebral view shows the approximate location of the significant proteins in the cell; (F) a protein-protein interaction network was constructed among the genes associated with a poorer prognosis of GBM, showing that they were roughly divided into two clusters.https://doi.org/10.1371/journal.pone.0295061.g004
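As a rough illustration of how the correlation panels summarized in the Fig 2E and 2F captions could be computed, the sketch below builds a gene-by-immune-cell Pearson correlation matrix together with a mask of non-significant pairs (the cells that would receive an 'x'). The DataFrame layout and the 0.01 threshold are assumptions taken from the methods description; the study itself used the R corrplot package, so this Python version is only an outline of the same computation.

```python
import pandas as pd
from scipy.stats import pearsonr

def gene_cell_correlations(genes: pd.DataFrame, cells: pd.DataFrame, alpha: float = 0.01):
    """Pearson correlations between each gene expression column and each
    immune-cell fraction column, plus a mask of pairs not significant at alpha."""
    r_rows, mask_rows = {}, {}
    for g in genes.columns:
        r_row, mask_row = {}, {}
        for c in cells.columns:
            rv, pv = pearsonr(genes[g], cells[c])
            r_row[c] = rv
            mask_row[c] = pv >= alpha      # True -> would be marked with an 'x'
        r_rows[g] = r_row
        mask_rows[g] = mask_row
    # rows = genes, columns = immune cell types
    return pd.DataFrame(r_rows).T, pd.DataFrame(mask_rows).T
```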
2023-12-01T05:07:11.077Z
2023-11-29T00:00:00.000
{ "year": 2023, "sha1": "7d48c4e8f9e15b16e71a515378999216fcd1f402", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0295061", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7d48c4e8f9e15b16e71a515378999216fcd1f402", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
234828137
pes2o/s2orc
v3-fos-license
Cybercrimes during COVID -19 Pandemic : COVID-19 pandemic has changed the lifestyle of all aspects of life. These circumstances have created new patterns in lifestyle that people had to deal with. As such, full and direct dependence on the use of the unsafe Internet network in running all aspects of life. As example, many organizations started officially working through the Internet, students moved to e-education, online shopping increased, and more. These conditions have created a fertile environment for cybercriminals to grow their activity and exploit the pressures that affected human psychology to increase their attack success. The purpose of this paper is to analyze the data collected from global online fraud and cybersecurity service companies to demonstrate on how cybercrimes increased during the COVID-19 epidemic. The significance and value of this research is to highlight by evident on how criminals exploit crisis, and for the need to develop strategies and to enhance user awareness for better detection and prevention of future cybercrimes. Introduction During the last quarter of 2019, the world started new form of life due to the COVID-19 pandemic. The increasing number of affected socials to nearly 60 million and over one million deaths [1]. Countries had to impose prohibitions and separation between people to decrease infections to save lives and reduce the spread of COVID-19 virus. As a result, computer system and virtual world become essential communication between people. For example, most companies requested their employees to work from home, students moved to online studies, online shopping increased, and social networking activity increased, leading to an increase in Internet users significantly [2]. Although traditional crime rates decreased due to curfews' imposition, cybercrime rates have witnessed a remarkable increase since the beginning of the pandemic. However, cybercriminals exploit the COVID-19 pandemic. As such, they have increased their attacks and focused on campaigns related to COVID-19, such as online selling of unlicensed drugs as cure for the disease, sharing fake news on social media, and sending phishing emails to victims [3]. The aim is to deceive victims in order to get their money or steal their confidential information. Furthermore, they exploit the difficulty of protecting enterprise employees' devices by IT staff when working remotely, especially that personal computers are often less protected than company devices. Attack campaigns also included governments and organizations, for example, the World Health Organization on March 13th recorded attack attempts by an organized group to steal information from the organization employees [4]. The purpose of this paper is to analyzes and compare the data collected from the first half of 2020 with the same period during 2019 to highlight different type of cyberattacks that increased or decreased during the epidemic. The study demonstrates on how criminals can exploit crisis to achieve different type of cybercrimes. In addition, the research articulates on how lack of awareness at user level contribute to the increase of cybercrimes. This paper is organized as per the following. Next section discusses related cybercrimes and how cybercriminals take advantage of calamities, natural disasters, and epidemics to increase the number of attacks and targets. Subsequently, details and analysis on data collected from companies in the first half of 2020 were discussed. 
Afterwards, presentation of the types of cybercrimes that have increased dramatically during COVID-19 and comparing them with the first half of 2019 followed by discussion and concluding remarks. Background Work With the widespread embrace of digital technology, the number of Internet users increased for different purposes, including social interactions, business, and marketing, leading to the emergence of new crimes known as cybercrime. It is expected that losses due to these crimes will reach 10.5$ trillion by 2025 [5]. Cybercrime, also known as "computerrelated crime, is any criminal activity that involves a computer either as an instrument, target, or a means for perpetuating further crimes that come within the ambit of cybercrime" [6]. According to Shinder & Cross in [7], cybercrime, like traditional crime, needs three factors simultaneously for crime to occur. These factors are victim, motivation, and opportunity [8]. The victim is the object of the attack, such as the user on social media. Sometimes there is a group of victims, for example, when the attack is on a group of employees in a particular company. The motivation that encourage criminal to commit crime and the opportunity is the criminal exploits to commit crimes. Routine activity theory (RAT) use similar factors to describe the crime [8]. The difference is the absence of a capable guardian, clear cybercrime laws and legislations, and the difficulty of arresting criminals. With the passage of time and the rapid increase in technology develpoment, the attacker became more experienced and sophisticated in selecting victims in a coordinated manner to increase the likelihood of the attack success. Therefore we have a term known as opportunistic attacks [9]. Opportunistic attacks can be assumed as attacks focus on selection of victims based on the most vulnerable victim [9,10]. In other words, attackers search for gaps or weaknesses in the company's system, making it more vulnerable to attack. Moreover, they are looking for victims who have weaknesses to sustain successful cyber attacks. This weakness may be due to the victim's psychological state (such as anxiety, panic), which helps the attackers succeed. Therefore, they are using social engineering to be able to lure and deceive the victim. For example, when employees are under pressure at work, they become more anxious and fearful. As such, when receiving phishing emails by impersonating their manager, they may not be able to verify geniun emails; this triggers most malicious links and confidential information misuse. There are other factors that opportunistic attackers take advantage to increase their profits and campaigns' success, including exploiting ongoing crises, important public events, natural disasters and epidemics [9,11]. In natural disaster attacks, opportunistic attackers pretend to be legitimate or trusted bodies such as charities or government organizations to communicate with disaster victims to deceive them and give them confidential information or volunteers to persuade them to donate money [11]. For example, during 2005 Hurricane Katrina caused significant damage in New Orleans and nearby areas in the States [12]. Later, Hurricane Katrina websites appeared inviting people to donate to the disaster victims, which later turned out to be fraudulent sites aimed at deceiving volunteers and stealing their money [13]. 
More in 2014, fraudsters took advantage of Ebola pandemic outbreak to open phishing emails to lure people or donate money to fake organizations [14]. In November of 2014, more than 700,000 spam emails were revealed asking people to donate to fight Ebola by pretending they were donation campaigns led by Indiegogo fundraiser [15]. As we have seen, attackers exploit any incident or epidemic for advantage in oder to make their campaigns successful. As such, it was not surprising that they used the COVID-19 pandemic as advantage by exploiting people's fear of infection. For instance, nearly 200,000 coronavirus-related threats were recorded in seven days during March 2020, some related to email phishing [16]. Next section discuss on cyberattacks and cybercrimes that increased during COVID-19 pandemic and how successful campaigns increased by cybercriminals as result of psychological exploitation and pressures exposed dure to the pandemic. Cybercrimes during COVID-19 Pandemic Due to the emergence of noticeable increase in cybercrimes during the COVID-19 pandemic, most information security companies and international organizations have embarked on studying and identifying cybercrime types that have increased, such as Trend Micro company, Interpol, and Anti-Phishing Working Group (APWG) [17, 3,19]. We have classified Cybercrime in COVID-19 pandemic into threats related to COVID-19, the most vulnerable countries, and cybercrime types that emerged during the pandemic. Threats Related to COVID-19 According to Trend Micro [17], 8,840,336 threats related to the COVID-19 were detected in the first half of 2020, with most of the discoveries occurring in April, coinciding with the pandemic's peak in most countries around the world. As shown in Figure 1, these threats consist of emails threats, URLs threats, and Malware threats, which refer directly to the epidemic (such as websites that publish fake news about the epidemic) or indirect (such as emails that indicate delay in the order due to the curfew). Emails are considered as one of the most used tools, as it is used in phishing campaigns, spear phishing, spamming, spreading fake news, fraud, and Fake donation campaigns [17]. Moreover, emails are considered as the most official communication media between companies and employees, so cybercriminals take advantage of this circumstance to increase their campaigns. Figure 2 is an example of an email impersonating the World Health Organization (WHO) intended to request fraudulent donations to COVID-19 patients. Most of the URLs that were registered as a threat belong to phishing scams, such as exploiting people sitting at home and posting offers for a free Netflix subscription on social media App (Facebook or Twitter). The post contains a malicious link; when clicked; the victim will be transferred to a fake Netflix login page designed to capture their login credentials [17]. They also use websites to promote applications that they claim to protect their users from the Coronavirus. It has been shown that they infect users' devices with a hypothetical virus called: BlackNET RAT. This tool adds the affected device to a botnet used for DDoS attacks, stealing the Firefox cookies, saved passwords, and Bitcoin wallets [18]. As discussed, malware threats were detected during the first half of 2020. One of the most common examples is the appearance of a trojan called QNodeService sent via a fake email shown as tax exemption notice due to COVID-19 from United States government to deceive the victim. 
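As a toy illustration of how the COVID-themed phishing URLs described above might be flagged, the snippet below applies a simple keyword heuristic. It is deliberately simplistic and uses made-up example URLs; real phishing detection relies on far richer signals such as domain reputation, certificate data, and page content.

```python
import re
from urllib.parse import urlparse

# Minimal illustrative heuristic, not a production phishing filter: flag URLs
# whose host mixes pandemic keywords with common lure words, mirroring the
# pattern reported for COVID-themed phishing domains.
PANDEMIC = re.compile(r"(covid|corona|vaccine|pandemic)", re.I)
LURE = re.compile(r"(login|verify|free|donate|relief|gift|netflix)", re.I)

def looks_like_covid_phish(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return bool(PANDEMIC.search(host)) and bool(LURE.search(url))

if __name__ == "__main__":
    print(looks_like_covid_phish("http://corona-free-netflix.example.com/login"))  # True
    print(looks_like_covid_phish("https://www.who.int/emergencies/diseases"))      # False
```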
Countries Most Vulnerable to Threats United States is one of the countries that has been observed as the most threatened by different forms of cyberattacks. According to the INTERPOL report [3], the most reported cybercrimes are fraud and theft of sensitive information through phishing campaigns that target employees who work remotely. Campaigns includes ransomware targeting small and medium companies, exploitation focuses on the increasing use of social media, child sexual exploitation and more. The United Kingdom, Germany, France, and European countries, have reported a noticeable increase in the malicious domain related to the COVID-19 epidemic. For instance, the word corona in the second-level domain to deceive people searching for information about the epidemic. The spread of ransomware campaigns in health and government sectors, impersonation of websites related to government agencies, and using it in phishing campaigns [3]. MENA region which is considered among others in Figure 3 has recorded a noticeable increase in disseminating fake news about COVID-19 epidemic in social media. In addition to the emergence of malicious domains that refer to counterfeit statistical sites about coronavirus and the increase in fraud campaigns, online selling of unlicensed drugs as a cure for the disease [3]. Types of Cybercrimes The first half of 2020 witnessed noticeable increase in various types of cybercrimes contain phishing, ransomware, spread of misinformation, distributed denial of service, and trojans. Phishing Phishing is considered one of the most common cybercrimes. Phishers take advantage of fear of the virus and the curiosity to find out information about it such as the number of confirmed cases and mortality, disease symptoms, and possible treatment methods to established successful phishing campaigns [3]. According to APWG report [19], 267,372 phishing campaigns were reported in H1 of 2020, increasing (19.06%) over 2019 during the same period. As shown in Figure 4, these campaigns targeted different sectors such as SaaS/email, financial institutions, payment, and social media. Victims were deceived by pretending that the message was from the national or global health authorities, governments, offers of vaccines and medical supplies, urging charitable donations related to COVID-19 [3]. As example, Figure 5 presents a phishing email which has been sent to specific employees pretending to be from company management. As illustrated in Figure 6 below, March witnessed the highest number of phishing sites discover [20]. The domains of some phishing sites that were detected contained "COVID" and "Corona" [2], thus deceiving people who are looking for sites designed to spread information on coronavirus. Ransomware The first half of 2020 witnessed significant decrease in detecting ransomware attacks, including files, URLs, and email. Despite its decline, there was an increase in losses due to higher ransom demands, and the cost of growing remediation [21]. According to the coalition report [21], "the average ransom increased (100%) from 2019 through Q1 2020 and increased 47% from Q to Q2 2020". Therefore, due to the targeted organizations sensitivity, most organizations were forced to pay ransom, even if it was high. As per Trend Micro reports [17] and as demonstrated in Figure 7 below, healthcare was the second most targeted organization. It is the most fragile component of the infrastructure in countries where the COVID-19 pandemic is prevalent. 
For example, some hospitals have been forced to pay ransoms to cybercriminals to avoid losing patient lives [9]. People's fear of infection and of the spread of the epidemic has led to new ransomware families: 68 new families have been uncovered [22]. Most of the discovered malware belongs to campaigns that target users searching for websites or applications related to coronavirus. For example, the DomainTools security research team discovered a website that lures users into downloading an Android app that claims to track the COVID-19 heatmap; the app was later found to be ransomware that locks the user's screen and demands a ransom [23].

Spread of Misinformation

Since the beginning of the epidemic in late 2019, misinformation has spread in various media, including traditional media, websites, and social media. Social media had the most significant impact, spreading misinformation and fake news more quickly. Facebook reported posting warning labels on nearly 50 million misinformation posts related to COVID-19 in April; Twitter warned that more than 1.5 million users were spreading misinformation and fake news related to the epidemic during the same month [24]. The fake news included misinformation about treatments, miracle cures, claims that coconut oil kills the virus, and drugs with no proven clinical benefit that were nonetheless described as useful. It also spread conspiracy theories, such as the claim that the virus was created as a biological weapon, and other fake and misleading news [25,26].

Distributed Denial of Service

February 2020 witnessed the largest distributed denial of service (DDoS) attack recorded at the time. Amazon Web Services reported that an unidentified customer on their network was hit by a 2.3 Tbps attack that lasted up to three days and used a reflection technique based on the Connectionless Lightweight Directory Access Protocol (CLDAP) [27,28]. This technique abuses vulnerable third-party servers and raises the volume of data sent to the victim's IP address by 56 to 70 times [27,28]. According to the Neustar report [27], there was an increase in the number of DDoS attacks detected in H1 2020 as compared to the same period in 2019. Similarly, the Netscout report [29] showed that the number of DDoS attacks detected in H1 of 2020 was 4.83 million, an estimated increase of 27.1% over the same period in 2019. The epidemiological situation and the infrastructure's weakness in the health, educational, and economic sectors worldwide were exploited to increase the attacks. As shown in Figure 8, detected attacks increased in North America (NA), Latin America (LATAM), and the Europe, Middle East, and Africa (EMEA) region. Turkey was one of the countries that faced a high number of attacks during H1. Although the increase was general, the number of detected attacks decreased in the Asia Pacific (APAC) region; nevertheless, attacks there focused on the eCommerce and health sectors.

Sunburst Trojans

By the end of the year 2020, FireEye, a major cybersecurity company, discovered what has been described as the largest and most dangerous hacking operation globally [30,32]. FireEye reported that it had been breached and that hacking tools it used to test computer defenses had been stolen [30]. It later emerged that the FireEye hack was part of a larger attack carried out by professional hackers. On December 13, FireEye revealed that a compromised update of SolarWinds' Orion software was the source of the attack [30,31].
SolarWinds customers were asked to update the Orion software through the company's page; the hackers later removed the page after news of the attack spread. More than 18,000 customers updated the software, thinking it came from the original company. According to FireEye, the hack was carried out by a group of Russian hackers (APT29) who breached the infrastructure of the SolarWinds system [31,32]. Once a customer signed in to request an update, APT29 could deliver an update containing a Trojan, later named Sunburst. To avoid detection, the attackers temporarily replaced a legitimate utility on the Orion system with malicious components that masqueraded as the legitimate component, and restored the legitimate utility afterwards. The size of the losses, the sectors, and the targeted countries have not been entirely determined. The FireEye report stated that "the victims included governmental, advisory, technical, communications and extractive entities in North America, Europe, Asia, and the Middle East" [31]. Nevertheless, the development of proactive monitoring systems and best practices contributes to minimizing threats and defending against cyber-attacks [33]. In addition, risk assessment and cybercrime laws help establish controls and defend organizations in combating cybercrimes at the national and international levels [34,45].

Concluding Remarks

The analyses indicate a clear and noticeable increase in cyber-attacks and cybercrimes at the peak of the COVID-19 epidemic worldwide. Governments imposed bans and stay-at-home orders, which increased Internet use and, in turn, gave cybercriminals the opportunity to expand their campaigns. As shown in Table 1 above, phishing increased because phishers exploited the pandemic and expanded their campaigns related to COVID-19, whether directly (such as messages soliciting donations for the benefit of COVID-19 patients) or indirectly (such as emails citing order delays due to the curfew). DDoS attacks also increased significantly as attacks on the government, health, and economic sectors intensified. Although a significant decrease was clearly noticed in recorded ransomware attacks, losses increased due to higher ransom demands and the growing cost of remediation. In addition, a remarkable increase in the spread of misinformation and fake news found a fertile environment in social media. Furthermore, towards the end of 2020, the world witnessed the discovery of what was then the largest and most dangerous hacking operation, in which the Sunburst Trojan was distributed while pretending to be legitimate updated software. The outcome of this research demonstrates, with evidence, an increase in cybercrimes in the government and private sectors during COVID-19, due in part to weak baseline security measures and a lack of user awareness. Therefore, governments and organization leaders should consider thoughtful measures to raise the level of cybersecurity during any abnormal conditions; the development of more sophisticated proactive cyber-attack detection software and the activation of firm ICT monitoring during any unprecedented and emergency conditions are indispensable.
2021-05-21T16:57:30.576Z
2021-04-08T00:00:00.000
{ "year": 2021, "sha1": "74c38b329b6d81a030d3a73d9080c4d91bcda1c5", "oa_license": null, "oa_url": "http://www.mecs-press.org/ijieeb/ijieeb-v13-n2/IJIEEB-V13-N2-1.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e99b037234fedb3b73673e3cbe0889ea19424aeb", "s2fieldsofstudy": [ "Computer Science", "Psychology", "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
198562279
pes2o/s2orc
v3-fos-license
On Problems and Countermeasures of the Employment of Postgraduates from Crop Science —Taking Tianjin Agricultural University as an Example With the reform of China's economic system, especially its deepening on agricultural economy, as well as a variety of employment needs emerged in the social market, limitations of crop science postgraduate students from agricultural college make them unable to meet the social demands for all-round talents, thus making the employment situation more and more serious. In order to better understand the employment status of postgraduate students from crop science in our school, this study designed a questionnaire on the employment status of postgraduate students from crop science, mainly involving difficulties and problems encountered in the employment process, and suggestions for postgraduate education in our school after employment. Through the analysis, we found that crop science postgraduates have suffered much pressure from employment due to their pessimistic employment status. Thus, some suggestions are put forward to solve their employment problems and improve their employment status in crop science. Keywords—crop science, postgraduate students, employment I. INTRODUCTION China has entered the stage of higher education popularization. From the perspective of social development, the cultivation of a large number of high-quality talents is compatible with the rapid economic development of China. With the deepening of the reform of education system in our country, however, our country's higher education has turned to the educational model adapted to the market economic system from the past educational model under the planned economic system, and employment distribution of college graduates has been featuring with unified division mode under planned economy system into independent employment and two-way choice model under market economy system. The employment distribution of university postgraduates is no exception. In order to adapt to the development of our society, economy, culture as well as education, our country has adopted the policy of expanding the enrollment of postgraduates in time and achieved a great success during the process. To understand our school postgraduate employment situation for nearly three years, problems existing in the process of graduate education shall be grasped, and suggestions for the improvement of quality of postgraduate education in the school shall be put forward. Thus, we have designed a questionnaire named" employment status of crop science postgraduates from college of agronomy and resources and environment". In June 2008 postgraduates that graduated 3 years ago did the online questionnaire, through investigation, and more comprehensive and accurate results have been made. In this regard, we hereby make a report on the employment situation and problems of postgraduate students involved in this survey. A. Basic employment status of postgraduates Crop science is one of the core disciplines of agricultural science, and is also the traditional discipline of our university. Generally, two main secondary disciplines of postgraduate major are crop breeding and crop cultivation. Table 1 shows the number and employment of postgraduates in the school of agriculture, resources and environment in 2016-2018. It can be seen that the number of postgraduates in the past three years is relatively stable, with a one-time signing rate of only 34%, and the number of students who choose to study for a doctoral degree accounts for 11%. 
FIG. 1 and FIG. 2 according to their working locations and directions. Based on the results in the figure, most students choose to stay in economically developed cities, accounting for 72% of the total number. Among them, the choice of employment in the field related to these major accounts for 30%, and the choice of enterprises and institutions accounts for 32% due to the influence of social concepts, treatment and welfare and other factors. In general, remote western regions are still not attractive enough for postgraduate students, most of whom prefer to stay in big coastal cities to work. Most local universities are confronted with the situation of small scale of postgraduate students and great pressure of scientific research from tutors. As a result, most postgraduate students spend much time doing experiments and writing academic papers, then causing a lack of training and exercise in production practice. There are many general courses for postgraduates, which are disjointed from local agricultural needs, have no obvious characteristics, and are unable to expand students' knowledge level. Compared with other 211 and 985 postgraduates, they have no competitive advantages, and are often hard to serve local agricultural economy [1]. Crop science is an applied discipline. Most of the postgraduate training programs of local universities have set social practice, teaching practice and other courses. However, in the specific implementation process, these practical courses are often formalized and fictitious, which fail to achieve the established training objectives. In terms of the hardware facilities for students' internship, local colleges and universities also lack the practice bases for graduate students and the internship as well as employment units of school-enterprise cooperation. In the social practice 60% of the students did not participate in social practice or participated in only one item. In the questionnaire survey, 60% of students considered internship experience as the most important factor for finding a job, which is bound to result in the basic lack of specialty with local characteristics made by colleges and universities for local economic and social development, and the cultivation of students' industrial ability is difficult to meet the needs of local characteristics, thus resulting in the disconnection between talent cultivation and local needs. Fig 3. shows that how many internships participants in during the postgraduate period. Due to the small scale of postgraduate education, it has not received the same attention as undergraduate education. The employment of postgraduate students is not equipped with the guidance from professional teachers of the employment office and generally, and the head teachers in the college are appointed to conduct unified management, which cannot provide special guidance on employment problems. It lacks professional and systematic employment guidance knowledge and the ability to develop and recruit postgraduate students. There is little recruitment information at the postgraduate level. Employers invited by the school to participate in campus recruitment from agriculture-related enterprises and institutions are mostly undergraduates, and there is a lack of student information feedback mechanism at the postgraduate employment unit. Graduate tutors are the first persons responsible for postgraduate education. 
At present, most of students' cultivation by tutors in colleges and universities is still in the initial stage of assisting students to complete their studies, and the profound ideological and political education, emotional exchange and employment training for graduate students need to be further improved. According to the results from which channels the final work was implemented in the questionnaire, mentor recommendation and social recruitment websites accounted for 62%. C. Insufficient career planning education Crop science postgraduate students in local colleges and universities take up a certain proportion of the transferred admission. Most of these students take the postgraduate entrance examination to avoid employment pressure or simply improve their academic qualifications in order to find a more stable job, lacking the goal of postgraduate study. At the same time, due to inferior country mark line of agronomy, its enrollment quality differs, and often causes students not to understand their major advantages, also not to know their conditions according with the requirement of which post after graduation. Agronomy is a discipline with strong practicality and due to heavy study and scientific research tasks, postgraduates do not have enough time to participate in social practice and production practice, so they are not familiar with the employment requirements of enterprises in the future. At the same time, many local colleges and universities do not have career planning courses at the postgraduate level, and the training of postgraduates' employment teachers is not enough. The employment guidance for postgraduates is often limited to the release of employment information and the submission of employment data, lacking of systematic guidance in psychological counseling, employment policies, employment skills and other aspects. Although some schools have set up elective courses such as "career planning", their teaching is of a mere formality and the courses are not well targeted. Students do not really form conscious thinking or then put it into target action because of elective courses. D. Inaccurate positioning of crop science postgraduate students in employment Crop science postgraduates do not actually agree with the employment prospects of their major. It's because the employment of agriculture majors cannot leave "farming", and their working places are either remote areas, or urban suburbs. Despite the higher education has been accepted, the employment idea of crop science postgraduate students is still relatively conservative, and their job objectives are still in economically developed cities, and the eastern coastal area, rather than the basic positions of employment since units at the grass-roots level are relatively backward, and their treatment is not high. Employment expectations remain high and the concept of employment cannot keep pace with The Times. Urban students will not give up superiority and convenience after long accustomed to the city life, while rural students, not always easy to enter colleges, which means out of the ground floor, and into the "palace", are still of a great burden after entering, and they are also unwilling to return to rural employment. Therefore, no matter urban students or rural students, they are not willing to settle down to "farming", and would rather choose to be employed in the cities, regardless of professional counterparts. 
In other words, the study of agriculture-related majors serves merely as a "springboard" for them, and the low matching rate between majors and employment creates a hidden risk of secondary unemployment.

IV. EMPLOYMENT COUNTERMEASURES OF CROP SCIENCE POSTGRADUATE STUDENTS FROM AGRICULTURAL COLLEGES AND UNIVERSITIES

A. Optimize the discipline structure and improve the quality of talent training

Optimizing the discipline structure is the premise of improving the quality of postgraduate education and employment. Higher agricultural colleges and universities, especially local ones, should enhance their adaptability to local economic development. They should conduct regular surveys, fully understand the local economic situation, take stock of their own teachers and students, position their professional curricula and talent training objectives correctly, give play to the comparative advantages of their disciplines, strengthen their distinctive characteristics, and improve the quality of talent training. At present, given the general shortage of local agricultural technical personnel and rural economic management talents, local colleges and universities should actively respond to national education policy and work to expand the scale of professional degree programs. At the same time, they should consciously align with the development needs of national postgraduate education according to current social and economic trends, adjust disciplines and specialties accordingly, dynamically adjust enrollment, and reduce specialties that are less competitive and less adaptable to social needs [2]. In terms of the talent training model, it is necessary to establish a compound training mode that integrates research and application and cultivates high-level talents suited to the needs of the market economy. Colleges should attend to traditional classroom teaching while strengthening practical teaching; in terms of hardware facilities, they should expand postgraduate production practice bases and social practice projects while improving scientific research platforms; and in cultivating innovation ability, they should attach importance to academic research, deepen the implementation of social and production practice, and combine scientific research innovation with social practice in depth.

B. Combine the characteristics of the major, cultivate students' innovative consciousness, and improve their comprehensive ability

With the development of China's economy, enterprises and institutions place ever higher requirements on college students, and graduates face a grim employment situation. In the job-hunting process, employers prefer students with strong comprehensive quality. Therefore, in training agricultural professionals, agricultural colleges and universities should take into account both the characteristics of agricultural students and the changing social demand for talents so as to comprehensively improve students' overall quality. On the one hand, professional education should be combined with ideological and political education to strengthen the moral cultivation of agricultural students.
Through role models, themed education, social practice, and other means, students' noble sentiments and lofty aspirations should be cultivated; through subtle, sustained influence, their interest in learning should be stimulated and their enthusiasm and initiative fully mobilized. On the other hand, the cultivation of the innovative and practical abilities of agricultural students should be highlighted. Starting from reality, building on professional knowledge, and relying on scientific and technological innovation activities, students' innovative and practical consciousness should be strengthened so that they grow into high-quality, application-oriented talents with strong professional, practical, and innovative abilities.

C. Further strengthen employment and entrepreneurship guidance, and guide students to establish a correct view of employment

Career planning for postgraduates is not only an important means of promoting their employment but also an important way to help them set correct goals for postgraduate study, lead them to complete their study and research tasks, exercise their production and practice abilities, and thus promote their overall development. Career planning education is particularly important in agricultural colleges and universities, which serve as a highland for cultivating applied technical talents in crop science. Most students in this field come from the countryside, received career planning education only during their undergraduate years, and have an insufficient or even missing concept of career design. Therefore, in view of students' different characteristics, agricultural colleges and universities should actively publicize the policies of the Party and the state, deepen agronomy students' understanding of their specialty and of China's agricultural development, raise their interest in learning, actively guide them to establish a correct outlook on life and correct values, adjust their employment psychology, position themselves accurately, and plan soundly for their future careers. Secondly, the employment system and a professional team of employment guidance teachers should be established and improved in order to continuously enhance the ability of these teachers, promote the professionalization of employment guidance services, and actively encourage students to exercise and grow at the grassroots level. Agricultural colleges and universities should also open up employment channels, strengthen entrepreneurship education, and cultivate students' entrepreneurial awareness and ability. Internship opportunities at the grassroots level should be increased so that students gain more grassroots experience and come to see the countryside as a promising field for agricultural talents. Students who are ready to start businesses of various forms in the countryside should be actively guided to create a world of their own.

V. CONCLUSION

In a word, improving the employment quality of agronomy postgraduates in local colleges and universities is a complicated, long-term project that requires the joint efforts of schools, society, and families, as well as the support of national policies and systems. Only by innovating management ideas and committing to the reform of education and teaching can graduate students improve their employability and update their employment concepts.
Only in this way can crop science postgraduates be transformed into agricultural talents full of energy and vitality, with strong appeal and influence, thereby promoting the better development of agricultural disciplines in agricultural colleges and universities.
Differences in perinatal health between immigrant and native-origin children : Evidence from differentials in birth weight in Spain Hector Cebolla-Boado OBJECTIVE This paper explores perinatal inequality between migrants and natives in Spain, or, more specifically, differences in birth weight. BACKGROUND We re-examine the logic of the ‘healthy immigrant paradox’, according to which the children of immigrant mothers have superior birth outcomes. DATA Using the universe of births in Spain in 2013, we go beyond the standard approach of using a dichotomous variable for estimating the risk of low birth weight (LBW) and high birth weight (HBW). METHODS We estimate quantile regression to explore migrant-native differentials in their children’s birth weight across the range of observed values and also focus on the impact of migrant status among babies weighing more than 4,000 and 4,500 grams ‒ two thresholds which, in a similar way to LBW, are associated with certain pathological characteristics and problematic future development. RESULTS Our paper not only confirms that the well-known epidemiological regularity of immigrant-origin babies having an advantage in avoiding LBW applies to Spain, but also, at the other extreme, it shows that when birth weight is above 4,000 or 4,500 grams, migrant-origin babies weigh significantly more than those of native origin. 1 Authors in alphabetical order; both contributed equally to the paper. Universidad Nacional de Educación a Distancia (UNED), Spain. E-Mail: hector.cebolla@gmail.com. 2 Authors in alphabetical order; both contributed equally to the paper. Universidad Nacional de Educación a Distancia (UNED), Spain. Cebolla-Boado & Salazar: Differences in perinatal health among immigrant and native-origin children 168 http://www.demographic-research.org CONTRIBUTION In sum, we contribute to the literature by showing that the higher average weight of newly born babies from immigrant mothers is not always a source of perinatal advantage. We provide access to the data and the syntax used, so that our results can be replicated (our dataset is publicly available). 1. The relevance of birth weight, and its determinants in the Spanish context Birth weight has been the object of extensive research in various fields of scientific enquiry, from medicine to social epidemiology, sociology, and demography. The study of the adverse consequences of unhealthy weights at birth (<2,500 and >4,000/4,500 grams) has mainly focused on health and educational outcomes. Because of the huge amount of evidence linking Low Birth Weight (LBW) to adverse health and cognitive outcomes, social epidemiology has tried extensively to assess the prevalence of LBW in different settings and different subsamples of the population (see, for instance, Reichman 2005, Teitler et al. 2007, Buekens et al. 2013). Although scholars have traditionally privileged the study of Low Birth Weight (LBW), research on High Birth Weight (HBW) is gaining momentum. In this review we briefly summarize both the determinants and consequences of deviation from healthy weights. On the one hand, the World Health Organisation defines LBW as less than 2,500 grams, irrespective of the gestational age of the infant. In the specialized literature it is interpreted as one of the most straightforward indicators of perinatal health and of infant health more generally. 
According to the American Academy of Paediatrics, LBW has different origins, ranging from the most obvious ‒ those associated with genetic factors (foetal chromosomal abnormalities), the mother's health (high blood pressure, heart or kidney disease), and the mother's lifestyle (incorrect nutrition during gestation, smoking, and the consumption of other substances) ‒ to problems with the development of the placenta (intra-amniotic infection, placental abruption, and placental insufficiency). LBW correlates with infant morbidity and mortality. Smaller babies are more likely to experience severe health risks after birth, and the effects of this early disadvantage are long-lasting: they are more prone to report generally worse health later in life (Johnson and Schoeni 2007) and to suffer from a higher incidence of specific conditions such as diabetes, asthma, coronary disease, metabolic syndrome, and high blood pressure (Barker 1995, Johnson and Schoeni 2011). The negative impact of LBW on cognitive development and educational outcomes (Hack et al. 1995) has been shown to be similarly enduring. These children show poorer school readiness (Reichman 2005), evidence of increased school difficulties and hyperactivity until the age of 18 (McCormick et al. 1990), lower chances of completing high school at the standard age, lower educational attainment (Conley and Bennet 2000), and even lower earnings as adults (Black et al. 2007). However, socio-economic factors tend to mediate these relationships. Large differences in the incidence of LBW have consistently been reported across socioeconomic groups in different countries (Kramer et al. 2000). Whereas more maternal resources ‒ whether a higher educational level (Boardman et al. 2002), social class (Pattenden et al. 1999), or a supportive social and emotional climate (Hohmann-Marriott 2009) ‒ all tend to improve birth outcomes, pregnancy later in life (Luke and Brown 2007) and non-marital birth (Castro-Martín 2010) are associated with an increased risk of LBW. Interestingly, according to the literature, in a number of affluent countries immigrant women tend to experience better birth outcomes than native women (see Guendelman et al. 1999), a result that will be discussed in the next section. Spain is no exception in this general picture. There are significant traces of inequality in perinatal health according to social background. Castro-Martín (2010) showed that the children of unmarried mothers suffer a higher risk of low birth weight, suggesting that the health disadvantage of children of non-marital couples is significant, even though recent social acceptance of non-marital unions and the selection of couples into this new form of cohabitation have helped to reduce it over time. Juárez and Revuelta Eugercios (2013) showed that the risk of low birth weight is more pronounced among children born in more vulnerable households, both in terms of occupation and education. In addition, Spain's incidence of LBW is systematically higher than either the OECD or European average, and, although the prevalence of LBW has intensified in most European nations since the mid-1990s, Spain has experienced an increase in LBW unmatched by any country for which data are available (OECD 2009, 2014).
The increase in the proportion of births to mothers at older ages due to postponement of maternity (Luque Fernández 2008), the spread of fertility treatments and the consequent higher incidence of multiple births (Blondel et al. 2002), and the increased survival of vulnerable babies resulting from improved technology are surely factors accounting for this remarkable trend. Other factors promoting the aggregate result of higher LBW rates are the larger proportion of unmarried women in the population (Castro-Martín 2010), the increased labour participation of women, and the expansion of occupations that might entail risk during pregnancy (Ronda et al. 2005). HBW is defined as the weight of a newborn of more than either 4,000 grams or 4,500 grams at any gestational age (see Frank et al. 2000 for a discussion of the various thresholds used), and is also known medically as 'macrosomia'. HBW has not received as much attention as LBW. Similarly to LBW, the range of determinants of excessive foetal growth includes genetic factors (such as the Beckwith-Wiedemann syndrome), lifestyles (insufficient pre-gestational physical activity), some maternal characteristics such as advanced age or obesity, and conditions such as diabetes and hypertension. Analysis of the consequences of HBW has also tended to focus on health and educational outcomes. In the health domain, children born with high birth weight give rise to more complications during delivery for both mother and baby. Mothers of large babies are exposed to increased rates of caesarean section, infections such as chorioamnionitis, perineal lacerations, and postpartum haemorrhage (Stotland et al. 2004), and tend to need longer hospitalization periods (Weissmann-Brenner et al. 2012). Heavy babies are more prone to experience conditions related to oxygen deprivation during delivery (Hawdon 2011), shoulder dystocia, neonatal hypoglycemia (Weissmann-Brenner et al. 2012), and higher risks of morbidity and mortality compared to those within the healthy range of weights (Zhang et al. 2007). They are also more prone to suffer from a number of conditions in the mid- and long-term. Recent research has shown that HBW is associated with increased probability of experiencing type-2 diabetes in young male adults, and obesity in both men and women (Johnsson et al. 2015). The link with some types of cancer like leukaemia has also been documented for the Nordic countries (Hjalgrim et al. 2004). However, as regards cognitive outcomes, previous consensus on the lower IQ of children born with heavy weights (see, for instance, the early contribution by Record et al. 1969) has been recently disputed in research using sibling analysis with adult samples, supporting the interpretation that most of the association conventionally found is actually due to confounders from family characteristics (Kristensen et al. 2014). The socio-economic determinants of HBW have also been addressed less often. Advanced (35+) maternal age and lower levels of education appear to be associated with increased likelihood of macrosomia (Frank et al. 2000).
Clearly more research is needed to examine the role of other maternal and family characteristics, including ethnic origin and migrant status, a task that we undertake in this paper.

At least four factors make Spain an appropriate test case for our research objectives. First, the unparalleled rise in low birth weight, documented above. Second, the very high incidence of overweight and obesity, both for the adult population ‒ one of the highest in Europe (World Health Organization 2013) ‒ and for infants (National Institute of Statistics 2012). Third, the specificities of immigration to Spain, which took place in a very short period of time, at unprecedentedly high rates, and with a very homogeneous age profile. These features suggest that the vast majority of the children in our analyses are first-generation, or so-called 1.5 generation. Fourth, the increase in the share of total births accounted for by mothers of immigrant origin, a status that has traditionally been associated with superior birth outcomes, which our findings challenge.

2. The healthy immigrant paradox in birth outcomes re-assessed

There is ample evidence of an epidemiologic regularity suggesting that the children of immigrant families have better health outcomes upon birth or arrival. This regularity has been described as further proof of the 'healthy immigrant paradox', most often referring to the adult population: that despite the low average socio-economic status of many immigrant groups, the risks and loss of human capital associated with migration, and their inferior access to health care, immigrants in advanced societies are generally healthier than natives in the host country. This phenomenon has been researched in many countries and migrant groups, and there is extensive research on the mortality gap between North Americans and Hispanics in the US (Palloni and Arias 2004). Different explanations have been given for this phenomenon (Abraído-Lanza et al. 1999). Health behaviours, genetic factors, and culture and more protective social networks have been used ex post to account for this regularity. However, a large body of the literature has questioned the very existence of such a paradox by focusing on migratory aspects (Palloni and Morenoff 2001), specifically on two processes (see Jasso et al.
2004).The first is the positive selection of migrant populations, an argument that suggests that migrants are not a representative sample of the population of origin from which they came, but rather are selected and, consequently, more able and predisposed to success in different realms.The second is selective return rates, known in the literature as the 'salmon bias'.The testing of this hypothesis is hindered by data quality issues, which result in an underestimation of certain conditions and the associated mortality rates.The literature on mortality has found conflicting evidence regarding the existence of the 'salmon bias' effect and therefore the validity of the Hispanic mortality paradox.Palloni and Arias (2004) argue that return migration exists among Mexicans but is not so evident among other foreign-born Hispanics. This paradox has been documented -and in some cases challenged -not only among adult migrants but also among their children (Mendoza 2009).In parallel with the discussion of adult mortality among Hispanic-origin migrants, Hummer et al. (2007) confirmed the lower mortality rates of babies born to Mexican mothers, a research setting in which outmigration is likely to be negligible.Internationally, scholars specifically analysing perinatal inequalities by migrant status have concluded that immigrant children are at lower risk of having LBW.More broadly speaking, there is systematic evidence suggesting superior birth outcomes (lower incidence of pre-term birth [<37 weeks] and LBW [<2,500 grams]) among immigrants in the US regardless of their ethnic and racial background.In the US this has been labelled the 'Mexican Paradox' because migrant Mexican mothers appear to have better birth outcomes than immigrant-origin Mexican mothers born in their host country (Cervantes et al. 1999).Comparisons of African Americans with those born outside the US have also concluded that migrant status represents a source of advantage in terms of birth outcomes.Howard et al. (2006) found substantial variation in risks of premature birth and LBW, with USborn African Americans exhibiting worse outcomes than the foreign-born.Comparisons between the US and European countries have confirmed the finding that immigrantorigin newborns experience superior birth outcomes (Guendelman et al. 1999).The perinatal status of children born to immigrants in Spain has also been reported to be generally better than that of children born to native mothers (Varea et al. 2012, but see Fuster et al. 2014 for counter evidence using stillbirths).This occurs despite the lower socio-economic profile of migrants settling in Spain (Cebolla-Boado and González Ferrer 2013).Differences by migrant status as regards high birth weight have only more recently started to be addressed.One of the first contributions assessing differentials in the prevalence of macrosomia across a large number of ethnic origins in the United States showed that the only group with a higher risk of HBW than non-Hispanic Whites were Native Americans (Frank et al. 2000). Explanations for this early childhood advantage are diverse, just like those for general differentials in birth weight: migrant mothers are known to have healthier lifestyles, smoke less, and tend to be generally healthier (Reichman et al. 
2008).Yet immigrant mothers are less likely to start prenatal care during the first term of pregnancy, although the impact of prenatal care on this indicator is much disputed (Green 2012).As regards the impact of residence in the country on perinatal health, evidence suggests that the relationship tends to be curvilinear, i.e., low birth weight declines in the first few years after migration and then increases.Interestingly, time of residence is not related to increased alcohol or drug consumption or smoking, which suggests that convergence with the natives' lifestyle does not account for the subsequent decline in the perinatal health gap (Teitler et al. 2012). Following these previous findings, in this paper we look at differences in birth weight between children born to immigrant and native mothers in Spain.We have a twofold objective.Firstly, we intend to update existing evidence for Spain using the most recent available empirical material (data from 2013).Secondly, we seek to use more innovative research techniques in order to evaluate the impact of migrant status on birth weight.More specifically, we use quantile regression to explore whether the better perinatal health status of children of immigrant mothers that has been consistently documented for a number of countries actually applies all along the range of values of our dependent variable (birth weight).Quantile regression allows us to explore the effect of migrant status not only on the risk of avoiding unhealthily low birth weight but also at the right end of the distribution.By doing so, we contribute both methodologically and substantively to the existing literature.Although the literature on birth weight has tended to focus on LBW, large babies experience more complications during delivery, increased morbidity and mortality (Boulet et al. 2003), and associated conditions later in life. Data and method In this paper we used data from the Population Movement Statistics (Estadística del Movimiento Natural de la Población, EMNP) in the Childbirth Statistics Bulletin (Boletín Estadístico del Parto) provided by the National Statistics Office (Instituto Nacional de Estadística) for the most recent available year, 2013.The data is easily accessible online. 3For the sake of transparency, the Appendix includes the Stata syntax, elaborated for the replication of our results.This is a longstanding dataset, available since 1996, compiling information for the universe of births registered in Spain.Parents are asked to fill in an administrative questionnaire at the time they register their babies in the civil registers.The parents or other relatives registering the child are obliged by law to provide information about the delivery and the context of the birth and are also asked to provide basic socio-economic information about the parents.The dataset does not use a probabilistic frame but instead includes the universe of births occurring in Spain in every single year. 
In 2013 our dataset contained information from 417,999 individuals across all areas of Spain.Since our analysis focuses on the risks of low and high birth weight, we restricted the sample by excluding multiple birth deliveries and stillbirths.We include both pre-term and full-term births in order to account for the possibility that the relationship between pre-term and birth weight might be mediated by migrant status.In other words, ignoring pre-term births in this context might overestimate native-migrant differentials in birth weight if these groups have different probabilities of having preterm babies or if pre-term is a more common route to LBW for one of the subsamples.In 2013, 409,008 mothers gave birth to a single baby, 23,278 of which were premature.Out of this initial sample of valid births for our analysis, which excludes multiple births and stillbirths (1,264), 22.42% correspond to children born to immigrant mothers (90,870). Table 1 below provides descriptive information, the distribution of the variables used, and the number of cases available in the analytic sample.The migrant status of the mother is a dummy variable, adopting the value of 1 if the mother was born outside Spain and 0 otherwise.Since we also want to allow for some within-variation in the migrant group due to biological differences in the various origins that correlate with the mothers' and children's health, we also used the mother and father's country of birth to build each of the five (0/1) nativity categories, namely Chinese, Colombian, Ecuadorean, Moroccan, and Romanian.Note that we separate Ecuadorean and Colombian origins because of the radically different shares of indigenous population in the two countries and the well-known differences in important factors that correlate with health and birth outcomes, such as the prevalence of certain conditions, mortality rates, and response to illness in indigenous versus non-indigenous communities (Montenegro and Stephens 2006). The models also control for three maternal characteristics known to be determinants of birth weight: education, which has been transformed into four broad levels (0 No education; 1 Primary; 2 Secondary; 3 University degree), age, and number of children born by the mother prior to the observed delivery.Finally, the model also considers whether the newborn is male or female (since boys are known to be larger than females).Unfortunately, our register-based dataset contains no information about the mother's lifestyle, which would allow controlling for other behavioural determinants of our outcome of interest. 
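The authors provide Stata replication syntax in their Appendix. Purely as an illustration of the sample restriction and variable construction described above, a rough Python/pandas sketch could look as follows; the column names (e.g. multiple_birth, mother_birth_country) are hypothetical placeholders, not the actual EMNP codebook names.

```python
import pandas as pd

# Illustrative sketch only: column names are hypothetical, not the EMNP codebook names.
births = pd.read_csv("emnp_2013.csv")

# Keep singleton live births, as described in the text (multiple births and stillbirths excluded).
sample = births[(births["multiple_birth"] == 0) & (births["stillbirth"] == 0)].copy()

# Migrant status: 1 if the mother was born outside Spain, 0 otherwise.
sample["migrant_mother"] = (sample["mother_birth_country"] != "Spain").astype(int)

# Five origin dummies; the paper builds these from both parents' countries of birth,
# while this simplified sketch uses only the mother's country.
for origin in ["China", "Colombia", "Ecuador", "Morocco", "Romania"]:
    sample[f"origin_{origin.lower()}"] = (sample["mother_birth_country"] == origin).astype(int)

# Controls used in the models: maternal education (0-3), maternal age, parity, and sex of the baby.
controls = ["mother_education", "mother_age", "previous_children", "female_baby"]
print(sample[["birth_weight", "migrant_mother"] + controls].describe())
```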
The literature on perinatal health has extensively used dichotomous recodifications of birth weight, using the consensual threshold of weight <2,500 grams as an indicator of LBW. Other meaningful thresholds include weight <1,500 grams (very low birth weight) and weight <1,000 grams (extremely low birth weight). Although there is less consensus on the cut-off points for HBW, the convention is to use either the 4,000 or the 4,500 gram threshold. Table 2 presents the distribution of birth weight, expressed in grams, separately for migrants and natives, together with the average age of mothers in each cell. Native mothers in Spain, as in other advanced democracies, have been increasingly putting off motherhood, and maternal age is associated with increased risks of adverse birth outcomes in a U-fashion, with the youngest (under 15) and oldest (over 40) mothers experiencing higher risks (Reichman and Pagnini 1997). Descriptively, immigrant mothers are more likely to have children at the extremes of the birth weight distribution, and slightly less likely to have children with a normal weight (88.8% among native mothers versus 86.3% among migrant mothers). In addition, they tend to be younger than their native counterparts in all birth weight categories. Besides, as Table 3 shows, among those cases falling into the LBW category, pre-term births are much more frequent among immigrant mothers (60.8%) than they are among native mothers (52.3%). In fact, and interestingly, pre-term birth appears to be the most usual reason for newborns of migrant-origin women falling into the LBW category. Figure 1 presents overlapped histograms for the two populations under scrutiny in order to provide additional descriptive information about the unconditional differences in the distribution of weights of the newborn children of migrant and native mothers. The two most salient differences between the two distributions are a slightly more pronounced incidence of very low birth weights and a notably more intense concentration of births above the median in the immigrant group. These differences, however, might not be statistically significant once relevant factors for perinatal outcomes other than the native vs. migrant status of mothers are controlled for in a multivariate context.
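To make these cut-offs concrete, the small helper below (an illustration only, not part of the paper's replication code) recodes a weight in grams into the categories discussed in the text.

```python
def classify_birth_weight(grams: float) -> str:
    """Illustrative recoding using the thresholds cited in the text:
    LBW < 2,500 g (with <1,500 g very low and <1,000 g extremely low) and
    HBW above the conventional 4,000 g (or 4,500 g) cut-off."""
    if grams < 1000:
        return "extremely low (<1,000 g)"
    if grams < 1500:
        return "very low (<1,500 g)"
    if grams < 2500:
        return "low (<2,500 g)"
    if grams > 4500:
        return "high (>4,500 g)"
    if grams > 4000:
        return "high (>4,000 g)"
    return "normal (2,500-4,000 g)"

# Example: a 4,200 g newborn falls above the 4,000 g macrosomia threshold.
print(classify_birth_weight(4200))
```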
In our analysis we use a continuous version of our dependent variable: birth weight. As an innovation in the literature, we propose estimating the effect of being born to an immigrant mother using quantile regression (Hao and Naiman 2007) rather than logistic regression, the standard way of estimating native-migrant gaps in birth weight. Logistic regression is usually applied in order to estimate the average association between being the child of an immigrant mother and the risk of being born with a weight below 2,500 grams or above 4,000/4,500 grams (the most often used thresholds to define LBW and HBW, respectively). This standard procedure does not allow us to consider whether this differential might vary in size (and even sign) across different ranges of the dependent variable. The same holds for other relevant covariates. Alternatively, quantile regression allows us to consider the relationship between our main regressor and the outcome of interest (birth weight) using the following conditional quantile function: y_q(y | x) = β_0 + β_n x_n + ε, where q denotes the quantile of the empirical distribution. The quantile q ranges from 0 to 1 (0 to 100, if expressed as percentiles), and results from splitting the data into equal shares of the distribution. Quantile regression produces estimates for the effect of each regressor on the specific range of values of the dependent variable delimited by quantiles. Table 4 shows the average birth weight for the entire analytic sample (selected percentiles) and some basic descriptives of our outcome variable. Since our substantive interest lies in measuring the differentials between immigrants and natives in the extremes of the distribution of the birth weight variable, where the consensus has set the thresholds for defining LBW (<2,500 grams) and HBW (>4,000 and 4,500 grams), we selected the percentiles corresponding to these benchmark values, namely percentiles 5.87, 94.10, and 99.34. As a robustness check, the Appendix also includes standard logistic regression models exploring the impact of being born to a migrant mother on the specific risk of a baby being below or above these three theoretically relevant thresholds. Our quantile regression also allows for more variation in the effect of migrant status in the range of the dependent variable comprised between those benchmark values, by specifying in addition the effects for percentiles 25%, 50%, and 75%. Since we use the entire universe of births in 2013 (excluding stillbirths and multiple births), we estimated our standard errors using bootstrapping. Specifically, our estimates are the average effect obtained from a set of 20 repetitions of the estimating protocol on subsamples of our dataset. By doing so, we allow variation in our estimates and provide meaningful significance tests.
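The estimation itself was carried out in Stata (the syntax is in the paper's Appendix). As a hedged illustration of the same strategy, quantile regression at the selected percentiles with effects averaged over repeated resamples, a Python sketch using statsmodels could look roughly as follows; the variable names are hypothetical, and a plain bootstrap stands in for the authors' exact subsampling protocol.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch of the estimation strategy described in the text (the paper uses Stata).
quantiles = [0.0587, 0.25, 0.50, 0.75, 0.9410, 0.9934]
formula = ("birth_weight ~ migrant_mother + female_baby + mother_age"
           " + C(mother_education) + previous_children")

def bootstrap_quantile_effects(df: pd.DataFrame, reps: int = 20, seed: int = 0) -> pd.DataFrame:
    """Average the migrant-mother coefficient over `reps` bootstrap resamples, per quantile."""
    rng = np.random.default_rng(seed)
    rows = []
    for q in quantiles:
        coefs = []
        for _ in range(reps):
            resample = df.sample(n=len(df), replace=True, random_state=int(rng.integers(1 << 31)))
            fit = smf.quantreg(formula, resample).fit(q=q)
            coefs.append(fit.params["migrant_mother"])
        rows.append({"quantile": q, "effect": np.mean(coefs), "se": np.std(coefs, ddof=1)})
    return pd.DataFrame(rows)
```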
2008).In our results this gap increases as we move towards higher values of our distribution.Older mothers (above the age of 35) have a greater chance of having a baby with adverse birth weight at the two extremes (with the exception of Q94.19, where differences are not significant).The mother's socioeconomic status (reflected by her level of education) is consistently related to birth weight: within the LBW category, less-educated mothers tend to have smaller babies than those with more education, while at the other extreme they have, on average, heavier babies in the highest quantile that is specified in the regression (Q99.34).Education is therefore a source of systematic advantage.The children of more-educated mothers are heavier even when categorized as LBW, and they maintain a positive differential with the reference category in all other quantiles except Q99.34, where more-educated mothers are associated with smaller babies.Having had more children before the observed delivery is associated with consistently larger babies.The graphic illustration of the effects of these controls is shown in Figure A-1 in the Appendix.However, in our estimation the most important finding is the changing size of the immigrant effect across different parts of the distribution of the dependent variable.Migrant origin is associated with an advantage of 29 grams when the baby falls into the group considered to have LBW.In other words, immigrant babies, even when their weight is below 2,500 grams (in Q5.87), have a relative advantage compared to children born to native mothers.The size of the immigrant mother effect grows larger at Q25, Q50, and Q75.Within the limits set by the distribution of weights in these percentiles, immigrant babies have some systematic advantage, since they are 63, 75, and 88 grams heavier, respectively, than native children.Importantly, the higher the average weight of the newborn, the larger the immigrant mother effect becomes.This also applies to babies born above the threshold of 4,000 grams where the impact of our variable of interest grows to 104 grams, and above 4,500 grams, with a migrant-native differential of 140 grams in favour of the former.In other words, and crucially -as large babies suffer from a number of risks associated with their birth weight -the advantage in terms of heavier babies born to immigrant mothers turns into a marked disadvantage among the largest babies. A summary of this changing impact of migrant origin across quantiles is provided in Figure 2.This representation further allows for the comparison of this effect with the impact of the migrant status of the mother on birth weight when estimated using a standard OLS regression (dashed horizontal line).Note that the estimates of OLS and quantile regression overlap in the median of the distribution of weight, where the immigrant mother effect represents an average weight gain of some 69 grams.In all other cases, the children of immigrants surpass the average birth weight of the children of natives.Legend: Effect estimated from the model shown in Table 5.The dashed line reports the OLS estimate (and associated confidence intervals).The upward sloping line corresponds to the changing effect of immigrant mothers across pre-defined quantiles (and confidence intervals). 
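Figure 2 overlays the quantile-specific immigrant-mother estimates on the single OLS estimate. As an illustrative sketch, again with hypothetical variable names and reusing the objects from the snippets above, a plot in that spirit could be produced as follows.

```python
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

# Illustrative plot in the spirit of Figure 2: quantile effects versus the single OLS estimate.
effects = bootstrap_quantile_effects(sample)            # from the earlier sketch
ols_effect = smf.ols(formula, sample).fit().params["migrant_mother"]

plt.plot(effects["quantile"], effects["effect"], marker="o", label="quantile regression")
plt.fill_between(effects["quantile"],
                 effects["effect"] - 1.96 * effects["se"],
                 effects["effect"] + 1.96 * effects["se"], alpha=0.2)
plt.axhline(ols_effect, linestyle="--", label="OLS estimate")
plt.xlabel("Quantile of birth weight")
plt.ylabel("Immigrant-mother effect (grams)")
plt.legend()
plt.show()
```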
The results shown visually in this figure confirm our interpretation.Immigrant babies are better off when they fall into the category of low birth weight.They are also systematically heavier than native newborns in the healthy weight groups (quantiles 25, 50, and 75).However, this consistent advantage turns into a substantial disadvantage at the highest quantiles, where immigrant babies are also larger than babies born to native mothers.These results support Hamilton and Choi's conclusion (2015) that broadening the scope of the examined indicators of infant health provides more nuanced evidence than conclusions taking a narrow approach and focusing only on LBW and mortality. Finally, note that according to our evidence some origin residuals remain unexplained.Babies of Chinese origin are importantly advantaged.They appear to be systematically better off (198 grams more) than the average baby born weighing below 2,500 grams and their advantage remains significant at quantiles 25, 50, and 75.Interestingly, children of Chinese mothers do not appear to be additionally overweight compared to those born to Spanish mothers when they fall above the HBW thresholds (differences with natives are non-significant).Ecuadorean and Colombian babies exhibit a very similar pattern relative to natives, as do children born to Romanian mothers.Finally, the children of Moroccan mothers are larger than native-born children across all selected quantiles.Further research to explain these differences should be pursued in the future. As a robustness check, one further analysis shown in the Appendix confirms the validity of our findings using the more standard approach of estimating the risk of LBW and, at the other extreme, of having babies with high birth weight (using both the 4,000 and the 4,500 grams thresholds) versus children within a healthy range of birth weights using logistic regressions (see Table A-1).In order to ease the interpretation of the immigrant mother estimate, we have also plotted the specific impact that immigrant and native mothers have on each of these two risks (Figures A-2 and A-3). 
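The logistic-regression robustness check reported in Table A-1 can be sketched in the same illustrative spirit: binary indicators for the three thresholds regressed on migrant status and the controls (variable names remain hypothetical placeholders).

```python
import numpy as np
import statsmodels.formula.api as smf

# Illustrative sketch of the robustness check: logit models for the risk of falling
# below 2,500 g or above 4,000 g / 4,500 g, using the hypothetical `sample` from above.
sample["lbw"] = (sample["birth_weight"] < 2500).astype(int)
sample["hbw_4000"] = (sample["birth_weight"] > 4000).astype(int)
sample["hbw_4500"] = (sample["birth_weight"] > 4500).astype(int)

rhs = "migrant_mother + female_baby + mother_age + C(mother_education) + previous_children"
for outcome in ["lbw", "hbw_4000", "hbw_4500"]:
    fit = smf.logit(f"{outcome} ~ {rhs}", sample).fit(disp=False)
    odds_ratio = float(np.exp(fit.params["migrant_mother"]))
    print(outcome, "odds ratio for migrant mother:", round(odds_ratio, 3))
```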
Discussion Birth weight is known to be a very powerful indicator of perinatal health, as well as a strong predictor of mid-and long-term health-related outcomes and, especially in the case of LBW, cognitive development and behavioural problems.Immigrants in Spain, in line with the pattern in other affluent countries, tend to occupy positions in the labour structure that are more vulnerable than those enjoyed by natives.However, there is no consistently significant disadvantage in terms of their health status.The estimation of differences between native and immigrant-born babies in terms of birth weight shows that the latter have a systematically higher weight across all levels of average weight.The gains obtained from having an immigrant mother are larger as we move towards higher average birth weights.The use of quantile regression has allowed us to decipher that as this higher weight for the children of immigrants also holds among the category of large babies, what appears to be general advantage turns into a marked disadvantage at the highest end of the weight distribution, a finding that conventional OLS and logistic regression would disregard.This result represents a substantive contribution to the literature on differentials in perinatal health between natives and migrants, as it shows that there are instances in which migrant status is not advantageous, and it demonstrates that the use of quantile regression is a methodological improvement in the study of the determinants of birth weight. These results are evidence of the inadequacy of the 'healthy immigrant paradox' in the case of Spain, a paradox that would have been clearly confirmed had we not analysed the whole range of the weight distribution (see, for instance, Farré 2016, who obtains results consistent with the paradox).Our paper thus lends support to the idea that HBW should be considered more in analyses of perinatal health, since disregarding it actually biases the substantive interpretations of native-migrant health differentials.Fortunately, research showing the crucial relevance of both ends of the weight distribution has become more common recently, and the implications of the reexamined evidence for debates on public health have started to emerge (Hamilton and Choi 2015).The higher prevalence of macrosomia in certain ethnic groups and/or specific migrant communities, together with the well-known health-related risks such as metabolic disease and obesity associated with HBW, have recently led to increased awareness of the potential role of migration processes in the spread of infant and adult obesity in both sending and receiving countries (Riosmena et al. 2013). 
Finally, we need to stress that our findings, even if drawn from analyses using the universe of cases and not a random sample of births, are subject to important limitations. Unfortunately, our register data do not allow us to directly measure the effect of lifestyle. In a country where health care, including prenatal care, is universally available and state-funded, the possibility that the higher weight at birth of migrant-origin children might be due to lifestyle cannot be ruled out (the last tables and figures in the Appendix, using evidence on body mass index and smoking habits from the National Health Survey, show mixed evidence regarding the healthier lifestyles of migrants). Nor do the data allow accounting for unobservable characteristics affecting non-random selection, which is likely to be behind the higher propensity of immigrant mothers to somehow promote the transmission of HBW to their babies. Selectivity, which would be one of the most straightforward explanations of the slight advantage immigrant mothers have in preventing LBW in their children, is only presented as an ex-post explanation. Future research needs to explore the explanations of the regularities that we detect here.

Acknowledgement

We are grateful for funding received from the Spanish Ministry of Economy and Competitiveness (Project Ref. CSO2014-58941-P).

Figure 1: Histogram: birth weight by migrant status.
Figure 2: Changing effect of immigrant mother on birth weight (size of the differentials between immigrants and natives).
Table 1: Descriptive statistics of the variables used in the analysis. Source: our estimation from EMNP.
Table 4: Distribution of birth weight. Source: our estimation from EMNP.
Probability of HBW by immigrant status (with 95% confidence intervals). Source: our estimation from EMNP; estimated from models 3 and 4 in Table A.1.
4: Kernel density distribution of body mass index for native and foreign-born females. Source: our elaboration from the 2011 National Health Survey (Encuesta Nacional de Salud).
Finite-time consensus control for heterogeneous mixed-order nonlinear stochastic multi-agent systems

This study investigates the finite-time consensus control problem for a class of mixed-order multi-agent systems (MASs) with both stochastic noises and nonlinear dynamics. The sub-systems of the MASs under consideration are heterogeneous and are described by a series of differential equations with different orders. The purpose of the addressed problem is to design a control protocol ensuring that the agents' states achieve the desired consensus in finite time with probability 1. By using the so-called adding a power integrator technique in combination with Lyapunov stability theory, the required distributed consensus control protocol is developed and the corresponding settling time is estimated. Finally, a simulation example is given to demonstrate the correctness and usefulness of the proposed theoretical results.

Introduction

In recent years, along with the fast development of network communications, multi-agent systems (MASs) have been attracting considerable research interest due to their broad practical applications in various fields, ranging from autonomous vehicles to sensor networks (Chen et al., 2015; Ge & Han, 2016; Li et al., 2017; Ma et al., 2017a; Oh et al., 2015; Tariverdi et al., 2021; Wang & Han, 2018; Yousuf et al., 2020; Zhang et al., 2011; Zou, Wang, Dong et al., 2020; Zou, Wang, Hu et al., 2020). MASs consist of a multitude of agents (sub-systems) that interact with their neighbours in order to achieve common goals collectively. It should be mentioned that the consensus control problem, which seeks a control law/protocol enabling the agents' states to reach certain common values, is one of the most fundamental yet active research topics in the study of MASs (Ma et al., 2017b). Many other tasks (e.g. Ma, Wang, Han et al., 2017) can be equivalently converted into the consensus control issue, and therefore the consensus control of MASs has been extensively investigated, with a large number of results reported in the literature; see, e.g., Wang and Wang (2020), Xu and Wu (2021), and Herzallah (2021) for some recent publications. Among the aforementioned works, most algorithms are designed to reach consensus in the asymptotic sense. In other words, the required consensus might be achieved only as time approaches infinity rather than within a finite time interval (Zhu et al., 2014; Zou et al., 2019). It is widely known that the convergence rate, which measures the speed of attaining consensus, is critical, since in many practical systems a faster convergence speed indicates better performance. Consequently, the finite-time consensus control issue for MASs, whose purpose is to reach consensus within a limited/required time interval, has started to gain research interest. So far, much effort has been devoted to the investigation of finite-time consensus control, resulting in many results available in the literature; see, e.g., Li et al. (2011), Wang et al. (2016), Lu et al. (2017), and Li et al. (2019) and the references therein. Note that, to date, almost all investigations of the finite-time consensus control of MASs have been concerned with the homogeneous case, where the considered MASs are comprised of sub-systems with identical dynamics. In engineering practice, however, quite a few types of MASs consist of agents with different parameters, dynamics and/or structures.
In such cases, the existing results on homogeneous MASs cannot be employed directly to deal with heterogeneous ones. This motivates the study of the consensus control problem for heterogeneous MASs (Shi et al., 2020). For instance, for a class of heterogeneous linear MASs, the consensus control problem has been solved in Wieland and Allgower (2009), where the sub-systems are of the same structure but with different parameters. Not only the parameters but also the structures can differ in a heterogeneous MAS. A quintessential example is the multi-vehicle system composed of unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), which finds wide use in areas such as patrol, search, and rescue (Luo et al., 2016). Notice that UGV agents are usually modelled by second-order differential equations, whereas UAV agents are modelled by fourth-order ones. Unfortunately, the aforementioned literature concerning heterogeneous MASs is mainly focused on agents with relatively low-order dynamics (Du et al., 2020; Zheng & Wang, 2012), and therefore the corresponding algorithms are inapplicable to high-order MASs (Du et al., 2017; Li et al., 2019; Sun et al., 2015; You et al., 2019). Limited research has been carried out, and the obtained results can be applied only to relatively simple dynamics; see, e.g., Zhou et al. (2015). As for more complex cases, such as general nonlinear high-order MASs, the corresponding research has been far from adequate and remains challenging. On the other hand, in real-world applications, all systems are inherently nonlinear (Ma, Fang et al., 2020; Ma, Wang et al., 2020; Zhang et al., 2020) and subject to stochastic disturbances (Hu & Feng, 2010; Li & Zhang, 2010; Wen et al., 2012; Zhao & Jia, 2015). Accordingly, it is of vital importance to take nonlinearities and stochasticity into account when handling the consensus problem of MASs, which gives us the main motivation for conducting the current research. In response to the above discussions, this paper tackles the finite-time consensus control problem for a class of mixed-order stochastic nonlinear MASs. The main difficulties of the addressed problem can be identified as follows: (i) how to develop an appropriate methodology to design a consensus control protocol ensuring that the agents' states of the same order reach common values within a finite time interval; (ii) in addition to the mixed-order agent dynamics (which contain high-order components), both random noise and nonlinearities are taken into consideration in the system model, which makes the design of the consensus protocol more complicated. The main contributions of this study can be outlined as follows: (1) The model of the heterogeneous MASs discussed in this paper is comprehensive: not only the nonlinearities but also the orders of the agents can be different. (2) Stochastic noises, nonlinear terms, high-order and mixed-order dynamics are considered simultaneously, providing a comprehensive yet realistic reflection of real-world engineering complexities. In comparison with algorithms in the existing literature, the advantage of the consensus approach proposed in this paper lies mainly in its capability of dealing with finite-time consensus in probability for mixed-order and high-order dynamics within a uniform framework.
(3) By resorting to the adding-a-power-integrator technique in combination with Lyapunov theory, the addressed finite-time consensus control protocol is proposed and the desired settling time is formulated. The rest of this paper is organized as follows. The consensus control problem of the mixed-order heterogeneous stochastic nonlinear MAS is formulated in Section 2. The design of the consensus control protocol and the analysis of the settling time are presented in Section 3. To demonstrate the usefulness of the proposed protocol, a simulation is given in Section 4. Section 5 draws our conclusion. Notation: The notation used in this paper is quite standard except where otherwise stated. R^n refers to the n-dimensional Euclidean space. | · | denotes the absolute value. The superscript T denotes the transpose and trace(A) means the trace of matrix A. λ(A) denotes the eigenvalue of matrix A. 1_n refers to an n-dimensional column vector with all ones. The notation P{A} denotes the probability of event A, while E{A} stands for the mathematical expectation of random variable A.

Preliminaries and problem formulation In this paper, we use an undirected graph G(V, E, H) to describe the interaction among N_2 agents. Denote, respectively, V = {1, . . . , N_2}, E and H = [h_ij] as the set of N_2 agents, the set of edges, and the adjacency matrix of G. If there is an edge between agent i and agent j, it means the two agents can communicate with each other; in this case, h_ij = h_ji > 0. Specially, we set h_ii = 0. The matrix L = [l_ij] represents the Laplacian matrix, where l_ii = Σ_{j=1}^{N_2} h_ij and l_ij = −h_ij for i ≠ j. If a path can be found between any two nodes, then the graph is connected. Here, we suppose G(V, E, H) is connected. Suppose that the addressed heterogeneous MAS is composed of agents whose dynamics are described by kth-order (k = 2, . . . , n) differential equations. The total number of agents is N_2. Agents with nth-order dynamics are labelled as i = 1, . . . , N_n, and agents with mth-order (2 ≤ m < n) dynamics are labelled as i = N_{m+1} + 1, . . . , N_m, where N_n ≤ N_{n−1} ≤ · · · ≤ N_2. The model of agent i is given by the stochastic differential equations in (1). In the following, we apply abbreviated notations and rewrite the equations in (1) in the compact form (2) (Zhao & Jia, 2015), where W ∈ R denotes the standard Wiener process. Remark 2.1: It is worth mentioning that MASs composed of agents of different orders are quite common. The system model considered in this paper can represent both multi-agent systems with different-order dynamics and multi-agent systems with the same-order dynamics. Here are two examples to help readers understand the system model proposed above. For instance, when n = 4 and N_3 − N_4 = 0, the MAS can be described as a combination of N_4 fourth-order agents and N_2 − N_3 second-order agents. When n = k (k ≥ 2) and N_k = N_{k−1} = · · · = N_2, the MAS is composed only of kth-order agents. The following assumption and definition are needed in our subsequent development. Assumption 2.1: There exist constants p_i1 ≥ 0 and p_i2 ≥ 0 such that the nonlinear terms satisfy a Lipschitz-type growth condition. Definition 2.1: The mixed-order MAS (2) is said to achieve finite-time consensus in probability 1 if the following condition holds, where T is the settling time.

Main results Before establishing the main results, the following lemmas are first introduced.
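As a brief numerical aside before those lemmas, the graph-theoretic quantities introduced in the preliminaries (the adjacency matrix H, the Laplacian L, and connectivity via the second smallest eigenvalue λ_2(L)) can be illustrated with a short sketch. The code below is ours and purely illustrative; the adjacency values and function names are not taken from the paper.

```python
# Illustrative sketch: building the Laplacian of an undirected communication
# graph and checking connectivity via its second-smallest eigenvalue.
import numpy as np

def laplacian(H):
    """L = D - H, i.e. l_ii = sum_j h_ij and l_ij = -h_ij for i != j."""
    H = np.asarray(H, dtype=float)
    return np.diag(H.sum(axis=1)) - H

def is_connected(H, tol=1e-9):
    """An undirected graph is connected iff the second-smallest eigenvalue
    of its Laplacian (the algebraic connectivity) is strictly positive."""
    eigvals = np.sort(np.linalg.eigvalsh(laplacian(H)))
    return eigvals[1] > tol

# Example: four agents on a line graph 1-2-3-4 with unit weights (h_ij = h_ji > 0).
H = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
L = laplacian(H)
print(L @ np.ones(4))   # the all-ones vector lies in the kernel of L
print(is_connected(H))  # True
```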
Consider the following system: dx(t) = f(t, x(t)) dt + g(t, x(t)) dw(t), (10) where x ∈ R^n is the system state and w(t) is an m-dimensional standard Wiener process (Brownian motion) defined on a complete probability space (Ω, F, P) with the augmented filtration {F_t}_{t≥0} generated by {w_t}_{t≥0}. In (10), f(·, ·) and g(·, ·) are continuous functions of appropriate dimensions with f(t, 0) = 0 and g(t, 0) = 0 for all t ≥ 0. The differential operator of a Lyapunov function V with respect to (10) is defined as LV(t, x) = ∂V/∂t + (∂V/∂x) f(t, x) + (1/2) trace{g^T(t, x) (∂²V/∂x²) g(t, x)}. Definition 3.1: The trivial solution of (10) is said to be finite-time stable in probability if the following two requirements are met simultaneously: (1) System (10) admits a solution (either in the strong sense or in the weak sense) for any initial data x_0 ∈ R^n, denoted by x(t; x_0); moreover, for every initial value x_0 ∈ R^n \ {0}, the stochastic settling time τ_{x_0} = inf{t ≥ 0 : x(t; x_0) = 0} is finite almost surely. (2) For every pair of ε ∈ (0, 1) and r > 0, there exists δ = δ(ε, r) > 0 such that P{|x(t; x_0)| < r for all t ≥ 0} ≥ 1 − ε whenever |x_0| < δ. Next, we introduce a lemma regarding the finite-time stability in probability of the nonlinear stochastic system (10). Lemma 3.4 (Yin et al., 2015; Yin & Khoo, 2015): Suppose that system (10) has a solution for each initial value and that there exists a positive-definite Lyapunov function V satisfying LV(x) ≤ −c(V(x))^γ for some c > 0 and γ ∈ (0, 1); then the origin of (10) is finite-time stochastically stable and the stochastic settling time τ_{x_0} can be estimated by E{τ_{x_0}} ≤ V(x_0)^{1−γ}/(c(1 − γ)). The eigenvector of L corresponding to eigenvalue 0 is 1_{N_2}; when the graph corresponding to the Laplacian matrix L is connected, the second smallest eigenvalue λ_2(L) is positive. Design of the finite-time consensus protocol In this section, we shall design the finite-time consensus protocols for the considered mixed-order MASs. To this end, we first present three lemmas that play vital roles in the establishment of our main results. To begin with, define the exponents q_{j+1} = q_j + α with α = −p_1/p_2 ∈ (−1/n, 0) and q_1 = 1, where p_1 > 0 is a known even integer, p_2 > 0 is a known odd integer, and β_j > 0, j = 1, . . . , n, are suitable constants. In this paper, the proposed distributed control protocol is of the following form, where θ_{n_i,1}, θ_{n_i,2}, θ_{n_i,3}, θ_{n_i,4} and β_{n_i} are positive parameters to be determined later. Defining two functions V_1 and V_2, with x_1 = [x_{1,1}, . . . , x_{N_2,1}]^T, we present the following lemma, which is pivotal in the design of the control parameters. By following a similar line and defining a function V_k, we give the following lemma. Initial step: For k = 2, it is known directly from Lemma 3.7 that inequality (34) holds. Inductive step: Given that at step k − 1, 3 < k ≤ n − 1, the following inequality is true, it remains to show that inequality (34) still holds at step k. The following theorem gives the main results of this paper. The proof is now complete. Remark 3.1: So far, the finite-time consensus problem has been solved for a class of mixed-order stochastic nonlinear systems. With a recursive design method, the controller of agent i has been designed at step n_i. By assuming that the nonlinear terms satisfy Lipschitz-type conditions and with the help of the adding-a-power-integrator technique, the consensus control problem has been solved for the case where both mixed-order and high-order dynamics are involved. Remark 3.2: Note that in this paper, we have proposed an algorithm to drive the agents to consensus in a finite time interval. It should be mentioned that the provided design framework cannot be directly used to deal with fixed-time control, which requires the MAS to reach consensus in a pre-specified time interval.
However, it is worth pointing out that, on the basis of our obtained results, it is not difficult to extend the provided theory and techniques to deal with the fixed-time consensus problem, which is indeed an interesting direction for our future research. Another research topic would be the consideration of communication threats/attacks occurring in the data propagation among the agents (Liu et al., 2021; Ma et al., 2021). Conclusion In this paper, the finite-time consensus control problem has been solved for mixed-order heterogeneous stochastic MASs with non-identical nonlinear dynamics. By assuming that the nonlinear terms satisfy Lipschitz-type conditions, a novel consensus control protocol has been provided by resorting to the adding-a-power-integrator technique. Then, finite-time consensus in probability has been proven and the boundedness of the settling time has been formulated. Finally, a simulation example has been given to verify the usefulness of the proposed control law. Disclosure statement No potential conflict of interest was reported by the authors.
Iconic Gestures for Robot Avatars, Recognition and Integration with Speech Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. INTRODUCTION Based on the idea that embodiment leads to stronger social engagement than a screen (Adalgeirsson and Breazeal, 2010;Hossen Mamode et al., 2013), we wondered whether a viable alternative for telecommunication is to use a tele-operated humanoid robot as an embodied avatar in a remote location. In previous work with robot avatars they have been shown to improve social presence of a remote operator (Tanaka et al., 2015), and their salience to people in the robot's presence (Hossen Mamode et al., 2013), relative to more traditional telecommunication media (audio and video). In order for a robot avatar to be a viable communication method it must be capable of transmitting human interactive behavior. In everyday communication people can be observed performing arm gestures alongside their verbal communications (McNeill, 1992;Kendon, 2004). Though there is much debate on whether such gestures have a communicative value for observers, a recent meta-analysis of the literature concluded that they are of communicative value (Hostetter, 2011). Indeed, a number of studies in the human communication literature demonstrate observers of co-verbal gestures comprehend information from them (Cassell et al., 1999;Kelly et al., 1999;Shovelton, 2005, 2011;Cocks et al., 2011;Wang and Chu, 2013). Hence, we are motivated to investigate the use of gesturing on a humanoid robot avatar to capitalize on the reported benefits (salience and social presence), while still maintaining multi-modal communication efficacy. To transmit the multi-modal communications of a human operator, we have developed a tele-operation interface that uses motion tracking of the operators arms, and audio streaming, to replicate their communication on a NAO robot (Aldebaran Robotics, Gouaillier et al., 2009). By using this implicit control method we aim to allow an operator to communicate as they would face-to-face. Before being able to investigate the benefits of embodiment over video in telecommunication, and interaction benefits of gestures, we first need to demonstrate the capability of the system to reproduce comprehensible gestures on the robot; thus, this is the first aim of the work presented here. 
Which kind of gestures are particularly important in humanhuman communication, and how they can be shown to add communicative value, underpins our approach to evaluating multi-modal communication on a robot avatar. Within the literature on gestures in human interaction a number of schemes have been proposed to classify them according to their form and function (Ekman, 1976;McNeill, 1992;Kendon, 2004). Iconic gestures are a key class of gestures from the classification scheme proposed by McNeill (1992). Iconic gestures are those that have a distinct meaning, they are of a form that either reiterates or supplements information in the speech they accompany. They typically convey information that is more efficiently and effectively conveyed in gesture than in speech, such as spatial relationships and motion of referents (Beattie and Shovelton, 2005), or the way in which an action is performed (termed manner gestures) (Kelly et al., 1999). Hence, multimodal communication can be said to be more effective and efficient at conveying information between speaker and listener than uni-modal communication, i.e., taking less time to convey the desired message, and in a clearer way (Beattie and Shovelton, 2005). Given the high communicative value of iconic gestures, here we investigate their use in robot avatar communication. For human-human communication, a number of approaches have been taken to establish the communicative value of iconic gestures, by examining whether the information understood by observers of multi-modal communication differs from unimodal communication. One suggested value of gestures is that they improve how memorable the speech they accompany is. Hence, participants' ability to recall details of speech delivered with and without different gestures has been tested (e.g., Cassell et al., 1999;Kelly et al., 1999). Analysis of results for such experiments is non-trivial, and depends strongly on how easy the stimulus material content is to remember. An alternative approach was suggested by Beattie and Shovelton (2005), whereby participants were asked questions about short multi-modal vignettes, the answers to some of which were only contained in the gestural channel. However, in such an approach it might be difficult to distinguish between speech and gesture integration, and contextual inferences (Beattie and Shovelton, 2011). To avoid confounds such as the ones potentially inherent in the approaches described above, we decided to base our experiments on a seminal study presented by Cocks et al. (2011). We adapted their design for use with the NAO robot and our tele-presence control scheme (see Section 2). In their study, participants were presented with a series of actions conveyed either through speech alone, gesture alone, or an iconic (manner) gesture accompanying speech, and asked to select, from a set of images of actions one that best matches what was communicated. The authors were able to clearly distinguish and compare understanding of actions both in uni-modal and multi-modal communication. Hence, their method was able to evaluate integration of information from the two communication channels, a process vital for the utility of co-speech iconic gestures (Cocks et al., 2011). One of the aims of the work presented here is to investigate whether the integration of speech and gesture occurs for a non-human agent, such as a robot, in the same way that it does for a human. Knowledge in this regard is as yet very limited. 
Speech and gesture integration for robot-performed pointing (deictic) gestures has been investigated (Ono et al., 2003;Cabibihan et al., 2012b;Sauppé and Mutlu, 2014), this showed that relative locations of referents could be better understood by using gestures to supplement speech information. While these studies provide some evidence for speech and deictic gesture integration, iconic gestures have yet to be examined. Moreover, to the best of our knowledge, it has never been investigated whether this integration process is as reliable in robots as it is in people. A key issue in robot gesturing, is joint coordination and motion timing. Work on how the human brain processes gestures suggests this may be of importance to gesture recognition, and hence in studying speech and gesture integration. In their recent meta-analysis of studies concerning the neural processing of observed arm gestures Yang et al. identified three brain functions associated with gesture processing: mirror neurons, biological motion recognition, and response planning (Yang et al., 2015). Of particular relevance here are mirror neurons, part of the brain associated with performing actions that fire when those actions are recognized. Gazzola et al. showed that mirror neurons still fire when observing some robot motion (Gazzola et al., 2007). However, they suggested that this depends on identification of the goal of the motion. With gesture, the motion goal is often not clear, and so mirror neuron based gesture recognition may instead rely upon identification of motion primitives, component parts of gestural motion based upon muscle synergies in the arm (Bengoetxea et al., 2014). A potential advantage in our study is we might overcome any scripting-related issues by using our tele-operation control scheme to copy both the shape, timing and joint coordination of human movement. Note, however, that even a tele-operation control system is limited by the design and the degrees of freedom of the robotics system used. Moreover, the non-biological appearance of the robot may interfere with identification of the gestures. Hence, we included testing conditions that allowed us to evaluate the comprehensibility of the gestures produced with our system when presented on their own. In this paper we aim to address the following research questions: (1) can iconic gestures performed with our teleoperation system be identified?; (2) is performance comparable to when the same gestures are performed by a person?; (3) are iconic gestures performed using our tele-operation system integrated with speech?; and (4) is integration as efficient for robot performed multi-modal communication as human performed multi-modal communication? In detail, we pre-recorded a set of communications consisting of verb phrases and appropriate iconic gestures produced by the robot using our tele-operation system, and a matching set by a human actor. The same actor was used for producing the robot stimuli and the human stimuli (recorded on video) to make the conditions as closely matched as possible. The recorded stimuli were then used in an experimental study adapted from the human-human communication literature (Cocks et al., 2011) to investigate whether hand gestures on their own were comprehensible for both robot and human, and whether they could be integrated with speech. To evaluate integration, we established whether the understanding of the observers' was changed as compared to speech or gesture alone. 
Understanding was also directly compared for the human (on video) and the robot (embodied replay of recorded communications) within the same observers. We sought to establish the extent of integration benefit achievable with robotic communication, relative to the one observed for a human communicator. We used videos of human gestures in our study to ensure identical stimuli for all participants. We reasoned they would be as efficient as live performances, given high recognition and integration rates (close to ceiling) were observed using video stimuli, in the study on which our work is based (Cocks et al., 2011). An additional motivation for our comparison of human video communication with a physically present robot is that it allows us to evaluate the differences between these two modes of telecommunication for multi-modal communication. If the performance of gesture understanding and integration for the robot avatar is comparable to video communication, it will enable further work on the salience and utility of these gestures in an interactive context. Beyond the application of the results to the utility of the NAO robot as an avatar, the tele-operated approach allows us to make more general inferences for the design of autonomous communicative robots. Directly comparing participants' comprehension of iconic gestures and their integration with speech for human and robot performers (in a single experiment) allows us to eliminate a range of confounds that make it difficult to compare findings within the literature. To the best of our knowledge we are the first to make this direct comparison. This paper is an extended version of our work published in Bremner and Leonards (2015a). We extended our previous work by adding in depth analysis of the gestures used, and the performance of the tele-operation system in reproducing these gestures. Additionally there is far more detailed discussion of our results, including implications of related work in neuroscience on human gesture processing. MATERIALS AND METHODS We conducted an experimental study with 22 participants (10 female, 12 male), aged 18-55 (M = 34.80 ± 10.88SD), all of whom were Native English speakers. Participants gave written informed consent to participate in the study, in line with the revised Declarations of Helsinki (2013), and approved by the Ethics Committee of the Faculty of Science, University of Bristol. Stimuli consisted of a series of pre-recorded communications, these were either speech alone, gesture alone, or speech and gesture. Each communication was performed by either the human actor (on video) or the NAO robot (physically present). Video was used for the human stimuli to ensure repeatability, and to allow direct comparison of data obtained for speech and gesture integration in dependence of the type of communicator: human or tele-operated robot. Hence, the experiment used a 2 (performer) × 3 (communication mode) within-subjects design. Tele-Operation System To reproduce gestures performed by a human actor on the NAO humanoid robot platform from Aldebaran Robotics (see Figure 1, for specifications see Gouaillier et al., 2009), we designed a motion capture based tele-operation system. The system was built using the ROS framework. Architecturally, ROS can be described as a computation graph made up of software modules (termed nodes), communicating with one another over edges (Quigley et al., 2009). 
Communication is built on a publisher/subscriber model where a node sends a message by publishing it, and nodes using that message subscribe to it. ROS offers a number of advantages that make it well suited to our system. Firstly, its communication architecture means that the system is inherently modular, so if one node fails the others can keep running while the failed node is restarted. Secondly, this modularity means nodes can be easily modified independently, only needing to adhere to correct message structure, making the system easily extensible. Thirdly, nodes can be written in different programming languages, here some nodes use C++ and some Python. Finally, ROS is well documented with a large library of existing nodes on which to base our work, speeding development time. Hence its use over viable alternatives such as YARP (Metta et al., 2006) or URBI (Baillie et al., 2008). In our tele-operation system we have developed separate nodes to gather kinematic information of the human tele-operator from several sensor systems. Each sensor node then publishes its data as ROS messages, a NAO control node subscribes to these message streams and then calculates the required commands that are then sent to the robot. Figure 1 shows the system architecture schematic. Audio streaming was handled separately from ROS using the GStreamer media framework to develop a NAO module and corresponding PC application to allow streaming of audio to the robot. In order to ensure that gestures are reproduced on the robot as closely as possible to the original human motion, hand trajectories, joint coordination and arm link orientations must be maintained. To this end arm link end points (i.e., shoulder, elbow and wrist) are tracked using a Microsoft Kinect sensor; the Nite skeleton tracker API from OpenNI is used to process the Kinect data and produce the needed body points. A Kinect node was written with the Nite API that uses the arm link end points provided by the skeleton tracker to calculate unit vectors for the upper and lower arm in the operator's torso coordinate frame 1 , these were then published as ROS messages. Sensor update rate was 30 Hz. The arm unit vectors are then used by the NAO control node to calculate robot arm joint values that align the arm links of the robot with those same unit vectors in the torso coordinate frame of the robot 1 . An example mapping between human and robot arm positions is shown in Figure 2. Data from the Kinect were subject to high levels of noise, consequently the joint angles were smoothed using a moving average filter with a 10 frame window. The filtering process added undesirable delay to the robot commands. Consequently, each filtered value is then modified by adding a trend term, calculated for each joint as a 10 frame moving average of the change in position each frame, then scaled by a factor of 4 (empirically determined) to produce a command similar to, but slightly ahead of, the raw value. To prevent overshoot due to sudden changes in velocity the filtered output was limited to deviate from the un-filtered value by an empirically determined maximum threshold value (0.04 rad). The NAO control module executed these commands to ensure the joints are still in motion when new commands are received, to do this it sent motor demands to execute the motion over a longer period than the update rate would require, so the controller doesn't decelerate more than demanded by the control node. 
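The smoothing and prediction steps just described can be sketched as follows. This is a minimal illustration in Python; the class and variable names are ours, and only the 10-frame window, the trend scaling factor of 4 and the 0.04 rad threshold are taken from the description above.

```python
# Illustrative sketch of the joint-angle filtering described above: a 10-frame
# moving average, plus a scaled trend term to compensate for filter lag,
# clamped to stay within 0.04 rad of the raw (unfiltered) value.
from collections import deque
import math, random

class PredictiveSmoother:
    def __init__(self, window=10, trend_gain=4.0, max_dev=0.04):
        self.values = deque(maxlen=window)   # recent raw joint angles (rad)
        self.deltas = deque(maxlen=window)   # recent frame-to-frame changes
        self.trend_gain = trend_gain         # empirically determined scaling
        self.max_dev = max_dev               # overshoot threshold (rad)

    def update(self, raw_angle):
        if self.values:
            self.deltas.append(raw_angle - self.values[-1])
        self.values.append(raw_angle)

        smoothed = sum(self.values) / len(self.values)
        trend = self.trend_gain * (sum(self.deltas) / len(self.deltas)
                                   if self.deltas else 0.0)
        command = smoothed + trend
        # Limit deviation from the unfiltered value to prevent overshoot.
        lower, upper = raw_angle - self.max_dev, raw_angle + self.max_dev
        return max(lower, min(upper, command))

# Example: filter one joint of a noisy 30 Hz angle stream for one second.
smoother = PredictiveSmoother()
for frame in range(30):
    raw = 0.5 * math.sin(2 * math.pi * frame / 30) + random.gauss(0, 0.02)
    command = smoother.update(raw)
```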
This process utilized the inbuilt NAO position controllers to counteract commands being ahead of the raw value (resulting from the trend term in the filter), and thus allowed smooth handling of the stream of position demands. Due to the limited resolution of the Kinect when viewing the full body, it is not able to provide all the degrees of freedom (DoF) required, specifically finger flexion and extension, and hand rotation relative to the forearm (pronation/supination). To overcome these limitations additional sensors were used: a Polhemus Patriot provides pronation/supination, and 5DT data gloves provide finger bend information. ROS nodes were developed for each of the additional sensors, which publish their data as ROS messages at 30 Hz. The NAO node processes this additional data to calculate the needed joint angles for the robot (the calculations are omitted here for brevity as they are relatively trivial). It then combines the calculated angles for all arm joints into a single message to send to the robot each command cycle. Phrase and Gesture Selection In order to evaluate whether the tele-operation system could produce comprehensible gestures, and whether the produced gestures were integrated with the speech they accompany, we first had to determine a suitable set of phrases and accompanying gestures. We selected 10 verb phrases depicting common actions (e.g., I played, I opened), chosen from those used by Cocks et al. (2011); see Table 1 for the full list. An important feature of the selected phrases is that they have more than one manner in which they can be conducted, and these manners can be conveyed with manual gestures. For each phrase, two different iconic (manner) gestures were determined that conveyed the manner in which the action was performed. This is an extension of the original design as presented by Cocks et al. (2011), who used only a single gesture for each phrase. We made this modification for two main reasons: firstly, to give us a larger range of gestures to evaluate for comprehensibility on the NAO robot; secondly, and more importantly, to better evaluate speech and gesture integration. Indeed, we would argue that showing two different shifts in meaning from a speech-only interpretation provides stronger evidence for integration. To select appropriate gestures there are a number of factors that must be considered. The primary aim for the gestures is that they are sufficiently vague that they might convey multiple possible meanings when viewed without words; at the same time, they must still be interpretable without the need for speech. This requirement also served to increase the ecological validity of the gestures being used, as they were close to those that might be performed in everyday speech. Note that this clearly contrasts with a precise pantomime gesture of a particular action, which is likely to have only one interpretation, and which is rarely used in normal conversation (Cocks et al., 2011). Another important requirement was that the gestures had to be performable by the NAO robot, such that a fair comparison could be made between human and robot performances. To accommodate this restriction we selected gestures which mainly comprised arm movements, for which precise hand shape and finger movements were deemed less critical. Note further that the NAO robot has only one degree of freedom in the wrist (pronation/supination), compared to the three degrees of freedom of the human wrist, a reduced range of flexion in the elbow, and a safety algorithm to prevent the two hands from colliding.
While we have tried to select gestures that are relatively unaffected by these restrictions, in order to maintain ecological validity, the human performer/tele-operator was not instructed to accommodate any of these factors. The final selection of gestures are described in Table 1. To simplify descriptions, and aid analysis of gesture features, the description of gesture space proposed by McNeil was used (McNeill, 1992). To further aid description we use the terms power grip: gripping with the whole hand, and precision grip: gripping with the finger tips. Materials and Procedure The experiment stimuli consisted of recordings of the 10 verb phrases detailed in Table 1. Each verb phrase was performed twice, once for each of the iconic (manner) gestures that portrayed how the action was performed. Two stimulus sets were recorded, the human performer stimuli was recorded using a digital video camera, the robot stimuli was recorded using the tele-operation system. In order to avoid interindividual variability in action performance, the same human actor performed both human and robot stimuli. To avoid possibly distorting participant perceptions due to the presence of the data-gloves necessary for tele-operation, the two stimulus sets were recorded separately. In order to ensure that the stimulus sets were as similar as possible, prior to performing without the data-gloves the actor reviewed the video of each teleoperation performance. The two recordings of each stimulus item were compared, and, where necessary, repeat performances were recorded. The robot communication stimuli were created by recording the messages transmitted by the sensor nodes using the built in recording capabilities of ROS. Audio was captured using the GStreamer based software module. To allow immediate verification, the robot was controlled and streamed to during recording. The human video stimuli and the recorded tele-operation stimuli were then edited to produce a set of presentations lasting approximately 5 s each, in three conditions: verbal only condition (V; audio only no performer movement); gesture only condition (G; gesture visible but audio not played); verbal-gesture condition (VG; gesture seen and verbal phrase heard). In both G and VG conditions, there were two different manner gestures so two presentations were created for each verb phrase. Hence, each action phrase came in five different versions per performer (V, G1, G2, VG1, VG2). To create the human stimuli the audio recorded during the robot performances was added to the videos of the human performance (i.e., replacing the original audio). Hence, identical audio was used for both robot and human performances in the 3 condition with a verbal component. Audio-information was overridden for the human stimuli to make sure that the audio information provided was identical between both human and robot stimuli. To prevent any lip-syncing issues, and eliminate the possibility of facial gesture effects, the human performer's face was obscured in the video. The relative timing of speech and gesture for the robot performances was based on video recorded of the robot captured during stimulus recording with the tele-operation system. There were 10 experimental conditions in total: five communication modes (V, G1, G2, VG1, VG2) for each of the two performers. Ten action phrases were used in each experimental condition; hence, each participant responded to 100 different trials. 
The trials were split into 10 blocks, each containing all 10 phrases, and all 10 experimental conditions. To prevent ordering effects, trial presentation order was counterbalanced across and within blocks by means of pseudo-randomization using partial Latin squares. Following each stimulus presentation, participants were presented with a set of six color photos of people performing actions on the (12.1 inch) screen of a response laptop, and were asked to select one. To do so they clicked with the laptop's mouse cursor on the photo they thought most closely matched what had been communicated; doing so moves on to the next stimulus presentation. The layout of the images, and hence the location of the target(s) on the response screen, were randomized between conditions and between phrases. Presentation of the response images, and recording of responses was done using the PsychoPy software (Peirce, 2007). Average experiment time was 20 min. The response image set for each phrase consisted of: a gesture only target for each gesture, that matched the corresponding gesture but not the speech; an integration target for each of the two manner gestures, which matched the corresponding speech and gesture combination; a pair of unrelated foils, not matching either the gesture or the speech, each one linked semantically to one of the gesture-only images (Figure 3 shows an example set, for "I paid"). For a particular gesture, one gesture only image and one integration target were both semantically congruent with it, so should have been selected with equal likelihood in the G condition. Both of the integration targets were semantically congruent with the speech, so in the V condition each should have been selected with equal likelihood. In each of the VG conditions only a single integration target was congruent for that particular speech and gesture combination, hence it should be the most probable image selection. Figure 4 shows the experimental set-up. The video screen and the NAO robot were both positioned 57 cm from the participant. A 32 inch wide-screen TV was used to display the video stimuli, thus, the human performer and robot appeared to be of a similar size. The start of each trial was signaled to the participant by playing a tone and displaying either human or robot on the response laptop for 1 s to indicate which presenter was next. This allowed the participant to concentrate on the correct presenter from the outset of each trial. Each trial consisted of playback of the performance of the phrase, followed by automatic display of the response image set. Each trial was initiated by the experimenter after the participant had completed the previous trial; the experimenter was sat out of view of the participant. Prior to the experimental trials, participants performed two practice trials to ensure they understood the experimental procedure. Gesture Comprehension Gesture comprehension was tested by calculating the proportion of correct responses in the conditions with only gestures. To evaluate each gesture, in both performance conditions, a chisquared test was used to compare the proportion of correct responses for that gesture with chance (of the six images in the response set two were the correct answer, so chance was at 0.33). These results are shown in Figure 5. 
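The per-gesture comparison against chance can be re-stated in code as follows. This is our own illustrative sketch (not the authors' analysis script), using scipy; the counts in the example are invented, not the study's data.

```python
# Illustrative sketch: comparing the proportion of correct gesture
# identifications against chance (1/3, since 2 of the 6 response images are
# correct) with a chi-squared goodness-of-fit test.
from scipy.stats import chisquare

def above_chance(n_correct, n_total, p_chance=1/3):
    observed = [n_correct, n_total - n_correct]
    expected = [n_total * p_chance, n_total * (1 - p_chance)]
    stat, p_value = chisquare(observed, f_exp=expected)
    return stat, p_value

# Example with made-up counts: 18 of 22 participants correct for one gesture.
stat, p = above_chance(18, 22)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```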
Almost every gesture (with the exception of both the "I lit" gestures in the robot condition) was identified significantly better than chance in both human and robot conditions, with high average proportions of correct responses (M human = 0.943 ± 0.065 SD; M robot = 0.802 ± 0.17 SD). A Wilcoxon signed rank test (used as the data did not meet the assumptions needed for a parametric test) revealed a significant difference between performers (p < 0.001) for the same gestures, even excluding the "I lit" gestures. It is apparent from Figure 5 that sizeable differences in gesture comprehension between performers existed only for some of the gestures examined. Hence, the data were further analyzed, on a per-gesture basis, to find for which individual gestures there were significant differences in recognition rate between performers. As the data are binomial and paired (each participant viewed human and robot performances of each gesture), we used an exact McNemar test to evaluate differences. An exact McNemar test for each gesture revealed that gestures were identified correctly significantly more frequently in the human performances than in the robot performances for lit1 (p = 0.00098), lit2 (p = 0.00049), and fixed1 (p = 0.00781). Cut2 approached being significantly more frequently correctly identified in human performances than in robot performances (p = 0.0625). There were no other significant differences in gesture identification between human and robot performance conditions. Note, however, that these results need to be treated with caution as performance was almost at ceiling, resulting in small values for the dichotomous variables used in the test calculations (for access to the results data pertaining to this work, please contact the lead author). In order to investigate possible sources for the difference in gesture comprehension found between human performer and robot, controller performance was further analyzed for two of the gestures, namely those for which significant differences had been reported: lit1 and fixed1. First we compared the physical movement profiles: for this, the recorded robot joint values over the duration of each gesture were plotted along with the joint values for the human performer as recorded by the Kinect (Figure 6, lit1; Figure 7, fixed1). It is clear from the graphs that joint coordination and velocity profiles, and hence hand trajectories, are very comparable between human and robot for the two gestures analyzed. However, two common differences can be observed in both plots. Firstly, the elbow flexion has a limited range of motion on the robot relative to the human, decreasing the amplitude of the peak of the gesture (approximately 15% reduction in vertical travel); further, the robot joints have a very brief pause at the top of the stroke. Secondly, the predictive filter caused the robot joints to accelerate at a slightly different rate to the human joints when the human joint velocity was at certain values; this resulted in those joints finishing their motion approximately 0.1 s early. It is hard to quantify the significance of these differences. Although they appear relatively small, critical visual examination of the robot motion on these two gestures may provide further insight. In both cases the hand trajectory is largely as expected and joint coordination appears, on visual inspection, human-like. However, the slightly shorter vertical travel is noticeably different from what is expected for these two actions, although vertical travel is still clearly perceptible.
Further, in the human version of these gestures ulnar/radial deviation in the wrist is used, a degree of freedom lacking in the NAO robot. A pause in the gesture is barely perceptible, and only in the oscillatory motion in fixed1, which appears less smooth than expected. To provide further insight into differences in gesture performances, the gestures lit2, cut2, played1, and cleaned1 were also analyzed by visual inspection. Though not significantly different in identification between performers, cut2, played1 and cleaned1 all led to differences in identification performance between human and robot performer (Figure 5). Similarly to lit1 and fixed1, cut2 and played1 showed reduced vertical travel for the robot performance due to a reliance on elbow flexion. It is also apparent from lit1, lit2, cut2, and cleaned1 that the wrist rotation sensor did not always give accurate readings. As a result, wrist orientation differed visibly from the human version of these gestures. Although we would have thought that hand shape itself should play only a minor role in these gestures, in lit2 and cut2 a fairly particular hand shape was adopted by the human, which the NAO was unable to approximate well enough. Speech and Gesture Integration To test for speech and gesture integration, all stimulus item scores were summed for every participant (the scores for a particular phrase were the combined results for the two gestures that accompanied it); hence we determined the proportion of integration target choices (ITC). Figure 8 shows the proportion of participant responses where the integration target was selected, in dependence of the presented stimulus mode. Uni-modal presentations had a uni-modal image as a correct answer as well as the integration image. In line with expectations that each was equally likely to be chosen, ITCs in the verbal condition were made close to 50% of the time; the gesture conditions favored the non-integration target image, with ITCs close to 40%. In the multi-modal presentation condition we observed a distinct increase in the frequency with which the integrated image was selected. Underlying the averaged values for uni-modal image selection, a number of individual stimuli had a particular image of the two viable image choices that was chosen significantly more often than the other. In some cases this was the integration target and in some cases it was not; integration target choice in the multi-modal version of those stimuli did not vary significantly from the value found in less extreme uni-modal cases. Hence, this provides stronger evidence for multi-modal integration in cases where a large change occurred. Moreover, this shows the robustness of our approach to these variations, as the averaged values are close to those expected. Accordingly, a 2 (presenter) × 3 (communication mode) repeated measures ANOVA revealed a significant main effect of communication mode [F(2,42) = 282.57, p < 0.0001]. Post-hoc analysis (Tukey) confirmed that participants chose the integrated images far less often in the gesture-only condition (M = 0.39 ± 0.11 SD) than in the verbal-only condition (M = 0.49 ± 0.02 SD, p < 0.0005). More importantly, participants selected the image constituting the integrated information from speech and gesture in the VG condition (M = 0.82 ± 0.08 SD; p < 0.0005). Hence, there is a clear indication that ambiguity is decreased by means of correct integration of speech and gesture information.
We found no significant main effect of presenter [F(1,21) = 2.61, p = 0.12], nor a significant interaction between presenter and communication mode. So that we can gain a clearer picture of the pairwise comparisons of integration target image choices, we propose calculation of an estimate of the effect size of changes in ITC proportions in dependence of condition. The method we have utilized to do so is based on the method proposed by Cocks et al. (2011), termed multi-modal gain (MMG). MMG is a means by which we can estimate the change in probability of ITC between uni-modal (speech or gesture alone) and multi-modal conditions (speech and gesture together). To estimate the value of MMG, the proportion of ITC in uni-modal communication (P(Uni)) is estimated, and then subtracted from the proportion of ITC in the VG conditions (P(Multi)), see Equation (1). To estimate the proportion of ITC in the uni-modal conditions (P(Uni)), the weighted ITC proportions in the verbal (ITC_V) and gesture (ITC_G) conditions are summed, see Equation (2). The basis for this calculation is that the different modalities vary in how likely they are to be utilized by observers, i.e., it is assumed that participants are more likely to be influenced by the modality that they perceive as providing the most useful information. Thus, the two weights, WV and WG, for the verbal and gesture conditions respectively, are calculated as normalized proportions of trials in which integration targets were selected (PCV for V trials and PCG for G trials), see Equations (3) and (4). Hence, MMG calculates a single figure for percentage gain, taking into account how often the integration targets were chosen in both uni-modal conditions (the results for both gestures for each phrase were included together). The values for each performer were calculated separately and are shown in Figure 9. By using two gestures per phrase we found that for some phrases in the verbal condition one of the two matching images was selected far more frequently than the other. Hence, MMG for the preferred integration target image was close to zero, i.e., gesture had no effect; conversely, for the other integration target image MMG was very high, i.e., gestures had a large effect. This gives us a clear advantage over the original study of Cocks et al. (2011), as we were less vulnerable to the variability of individual meaning preferences, and hence could gain a clearer picture of whether integration affected understanding by incorporating the scores in a single calculation. We conducted a two-tailed t-test for each performer against the null hypothesis of MMG = 0; the means of both samples (M_H = 0.393 ± 0.079 SD; M_R = 0.355 ± 0.095 SD) differed significantly from 0 [t_H(21) = 23.12, p < 0.001, r = 0.98; t_R(21) = 17.405, p < 0.001, r = 0.97]. It is important to be aware that a maximum estimate for MMG is given by 1 − P(Uni); hence, MMG_Rmax = 0.56 and MMG_Hmax = 0.55 (i.e., 56 and 55% for the robot and human respectively). The MMG values for both performance modes are approaching ceiling. The means of the two performers were compared using a paired two-tailed t-test, and this showed no significant differences [t(21) = −2.005, Dif = 0.019, p > 0.05, r = 0.21]. However, for testing the hypothesis that there is no difference between performance modes, this analysis was underpowered. In order to allow us to more reliably test this hypothesis, i.e., that the performance mode results are interchangeable, a repeatability measure was used: the intraclass correlation coefficient (ICC).
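Before turning to the ICC analysis, the MMG estimate defined in Equations (1)-(4) can be re-stated as a short code sketch. This is our own reading of the calculation as it is described above; the function and variable names are ours, and the example proportions are illustrative rather than the study's data.

```python
# Illustrative sketch of the multi-modal gain (MMG) estimate described above.

def multimodal_gain(itc_v, itc_g, itc_vg, pc_v, pc_g):
    """MMG = P(Multi) - P(Uni).

    itc_v, itc_g, itc_vg : proportions of integration-target choices in the
        verbal-only, gesture-only and verbal+gesture conditions.
    pc_v, pc_g : proportions used to weight the two uni-modal conditions
        (normalised to give the weights WV and WG).
    """
    w_v = pc_v / (pc_v + pc_g)           # Equation (3), as described
    w_g = pc_g / (pc_v + pc_g)           # Equation (4), as described
    p_uni = w_v * itc_v + w_g * itc_g    # Equation (2): weighted uni-modal ITC
    return itc_vg - p_uni                # Equation (1): multi-modal gain

# Example with made-up proportions for one participant:
print(multimodal_gain(itc_v=0.49, itc_g=0.39, itc_vg=0.82, pc_v=0.49, pc_g=0.39))
```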
The MMG scores for each participant were calculated from responses in multiple trials (so can be considered akin to a mean score), hence we used ICC(2, k), as suggested in Shrout and Fleiss (1979). We found significant correlation between the results, indicating fair to substantial reliability [ICC(2, k) = 0.61, F (21,21) = 2.8, p = 0.011]. Taking these two analyses together, we thus feel confident that participants' ability to integrate gestures and speech was independent of the performers. DISCUSSION The findings in this paper address the four research questions proposed in Section 1. We found that (1) human observers were able to identify upper limb manner gestures the majority of the time when produced by a tele-operated NAO robot. (2) Although identification of robot-performed gestures was worse than that for human-performed gestures, it was still good enough for them to be useful. More importantly, as gesture in human communication is most commonly employed along with speech, we found that (3) when such gestures were performed with speech they were integrated with it; (4) this process was as efficient for the robot as the human performances. Moreover, this integration compensated for any difficulties in identification of robot performed gestures. In the following sections we will discuss these findings in more detail. Gesture Comprehension With the exception of those accompanying "I lit," all gestures used in this experiment were identified clearly above chance for both the human and the robot when they were presented without speech. Though robot gestures were more difficult to identify than human gestures, the general ability to do so is in clear contrast to earlier findings by Cabibihan et al. (2012a) and Zheng and Meng (2012). In both these previous studies they found robot performed gestures were difficult to identify on their own. There are a number of possible causal factors for the differences between our study and previous work. Possible factors are the subtleties in gestures captured by the tele-operation scheme, the different methods of response-gathering (restricted choices as used here, in contrast to free response in related work), the types of gestures used (they used more emblematic gestures, often close to pantomime, in contrast to the iconic manner gestures used here), or some combination of all of these. Whichever the explanatory case, the work presented here provides evidence for the idea that there is communicative value in robot performed gestures. We suggest that there might be a wider range of gestures than those tested here that will have communicative value for a robot. Therefore, we will look at common features of the gestures used here that were correctly identified. It is also instructive to examine these same features for gestures that were more difficult to identify when performed on the robot than when performed by a human. Differences in the performances likely account for the lower mean recognition rate for robot performed gestures (80.2%, compared to 94.3% for human performances). The primary common feature is the importance of hand trajectory, including the appropriate hand velocity profile. This is used to convey easily identifiable relative motions that are either part of the action being carried out, or of objects manipulated by the action. This idea is supported by the work of Beattie and Shovelton (2005), who found that gestures portraying relative positions and movements are the most successful at conveying information. 
Relatedly, when the trajectories could not be correctly perceived gestures were harder to identify. The main reason for this here was due to the reduced range of motion on the NAO elbow flexion, and the lack of the ulnar/radial deviation degree of freedom, resulting in smaller vertical travel for some gestures, and in some cases increased jerk. Moreover, these deviations might also cause difficulties in identifying motion primitives used in gesture recognition (Bengoetxea et al., 2014), or limit the perception of the movement to being artificial where different mental processes are applied (Yang et al., 2015). One way in which this issue of gesture recognition has been circumvented, is by having participants evaluate gestures not on their meaning alone, but rather on what action they would do in response, as this activates another area of the brain used in gesture recognition (Yang et al., 2015). This was demonstrated in the findings of Riek et al. where in speeded response trials participants were reported to correctly identify responses to robot performed co-operative gestures; they remained able to do so even when the robot used non-human-like velocity profiles (Riek et al., 2010). This suggests that the context in which the gestures are used may be of importance in the ease with which they are recognized. A second common feature is hand orientation, as different hand orientations for the same hand trajectory can convey very different actions. Indeed, we found that for gestures where the wrist rotation sensor provided erroneous information, those gestures were less frequently correctly identified. As with deviations in arm trajectory this might mean that movement expected according to muscle synergies observed in human gesture (Bengoetxea et al., 2014) is not observed. A final feature, important for robots that do not possess fully articulated hands such as NAO, is a minimal reliance on hand shapes; i.e., gestures where arm trajectories and the degree to which the hand was open or closed contained sufficient information. We found that for some gestures hand shape was required for the gesture not to be too ambiguous to be correctly identified. A good illustration of the importance of these features can be found in the gesture lit1, which, while being correctly identified in the human presentation condition, was not identified correctly in the robot presentation condition. The lit1 gesture comprises a vertical hand motion demonstrating pulling a cord to switch on a light (a common action in the UK). In the robot condition the unrelated foil images were selected with close to identical frequency as the target images. Examining the response image set for "I lit," we observed that the main differences between target and foil images was hand orientation, and motion range. Hence, we suggest, if gesture is to be used in uni-modal communication for a robot, as an avatar or autonomously, which gestures are used needs to be carefully examined, and the capabilities of the robot platform taken into account. While the evidence for the relevance of the aforementioned deviations is limited, it does highlight an important factor both for gestures in HRI and in human communication that merit further investigation. We suggest this key factor is that the differences between human and robot gestures are relatively small, as shown in the performance analysis of the teleoperation control scheme in producing closely matched joint motion. 
Hence, our data provide further evidence for the notion that people are well conditioned to making subtle gestural discriminations and to identify biological motion and meaning (Kilner et al., 2003;Yang et al., 2015). This is further reinforced by our observations during the development of the range of gestures to be tested. To test how susceptible observers are to subtle variations in robot performed gesture and how much this depends on the context (e.g., whether observers are needed to physically or socially interact with the robot) requires more compelling evidence (see also Riek et al., 2010). Further, whether such effects vary between deliberate gesture identification, and the use of such gestures in conversation, also needs to be investigated. Indeed, by testing subtle gesture effects for robot communication we may be able to also learn more about the mechanisms underlying human communication and gesture perception. Speech and Gesture Integration Our findings demonstrate that when performed together speech and gesture are integrated, even when performance is mediated by a tele-operated NAO robot. We observed a larger proportion of integration target choices (ITC) in the multi-modal condition, as compared to either uni-modal condition. Multi-modal communication disambiguates the possible meaning of either gesture or speech on their own. ITC differed between unimodal conditions, making it difficult to directly evaluate and compare the extent of speech and gesture integration for the two performers. To overcome this difficulty we followed the methodology of Cocks et al. to calculate multi-modal gain (MMG) (Cocks et al., 2011). MMG incorporates the results from both uni-modal presentation conditions in a calculation to estimate the change in probability of ITC for multi-modal communication as a single value. Highly significant values for MMG were found for both performance conditions. More importantly, the extent to which speech and gesture could be integrated was comparable between the two performers, indicating that robot-performed gestures are as efficiently integrated with speech as human-performed multimodal communication. As the lit gestures were not identified correctly when presented alone by the robot, it is instructive to examine the image choices when presented alongside speech. For lit1 and lit2 gestures, the correct target image was selected by 82% of participants and 95% of participants, respectively. This shows that participants were able to compensate for the lack of clarity in the gesture performance by using speech information to resolve ambiguity. These results are somewhat surprising given previous work on speech and gesture integration with mismatched appearance and voice (here there is a clear mismatch of human voice and robot appearance). Kelly et al. showed that when there was a gender mismatch between voice and gesture performer, integration was reduced, and required considered rather than automatic mental processing (Kelly et al., 2010). Hayes et al. replicated these findings with human voice and robot performed gestures (Hayes et al., 2013). Similarly, we found that in speeded trials integration of speech and beat gestures does not occur when using a robot avatar to communicate (Bremner and Leonards, 2015b). The work presented here differs from the aforementioned, in that trials were not speeded. We suggest that though integration of robot gesture and human speech may not be an automatic process, it occurs nevertheless. 
Whether there is a difference in mental processing for the gestures examined here, and if there is, whether it effects interaction with robot tele-operators requires further investigation. One way in which this could be tested is to look not only at information comprehension, but also response times in speeded trials. As well as being important for tele-communication using humanoid robot avatars, our findings also have implications for design of communicative behavior in autonomous humanoid robots. Perhaps the most important implication is that when a humanoid robot needs to communicate this can be done more accurately and efficiently by splitting semantic information across verbal and gestural communication modalities. In addition, our results demonstrate that multi-modal communications are interpreted similarly whether the gestural component is mediated by video only or by a tele-operated robot. Hence, autonomous robots should, where possible, use gestures to produce more natural seeming human-robot interaction. Thus, our work reinforces findings in the literature that higher subjective ratings are given to robots when they perform gestures (Han et al., 2012;Aly and Tapus, 2013;Salem et al., 2013). Importantly, the difference in gesture recognition between human video and robot-embodied communication for gesture only communication is compensated for in multi-modal communication. That is to say, a humanoid robot avatar offers comparable performance to video communication when using speech along with gestures. Hence, a robot avatar operator might take advantage of previously observed advantages of robots over 2D communication media, such as enhanced engagement, improved social presence and action awareness (Powers et al., 2007;Adalgeirsson and Breazeal, 2010;Hossen Mamode et al., 2013), while maintaining communicative efficacy. Conclusion We show in this paper, using a fully within subject design, that using our Kinect based tele-operation system iconic manner gestures conveyed on the NAO robot are recognizable. This is despite physical restrictions in the degrees of freedom and movement kinematics of NAO relative to a human. Further, there seem to exist a large range of gestures which might be conveyed successfully. More importantly, we show that such robot-executed gestures can be integrated with simultaneously presented speech as efficiently as human-executed gestures. Whether this is because of, or despite the speech clearly originating from a human operator, remains to be further investigated. Hence, with regard to multi-modal semantic information conveyance, a NAO tele-operated avatar can be close to video mediated human communication in terms of efficacy. These two findings provide strong evidence as to the utility of a tele-operated NAO for conveying multi-modal communication. Although gestures are not recognized quite as well for the robot as they are for the human on video, they are still recognized well enough to make it a viable communication medium. We suggest the slight compromise in uni-modal gesture recognition for a robot performer is compensated for by the potential improvements in social presence and salience to interlocutors. Our findings also have implications for autonomous communication robots, for which gesturing is an active area of research, and has been shown to offer a number of communicative benefits beyond information conveyance. 
Huang and Mutlu found that robot performed deictic gestures improved participants' recall of items in a factual talk; however, gestures other types had minimal effects (Huang and Mutlu, 2014). Bremner et al. showed that although higher certainty in the information recalled was observed for parts of a monolog that were accompanied by (beat and metaphoric) gestures, the amount of information recalled was no better than for parts without gesture (Bremner et al., 2011). However, Van Dijk et al. found there was a positive influence on memory when redundant iconic gestures were performed when describing action performance (Dijk et al., 2013). Other gesture effects beyond memory have been observed by Chidambaram et al. (2012), who demonstrated a robot was significantly more persuasive when it used gestures and other non-verbal cues. Additionally, hand gestures have been found to improve user ratings of robots on scales such as competence, likeability, and intention for future contact in a number of studies (e.g., Han et al., 2012;Aly and Tapus, 2013;Salem et al., 2013). These findings suggest that performing gestures on a robot avatar may have additional benefits to the robot operator that can be capitalized on, and we are in a position to do so now that we have shown they can be interpreted correctly. We suggest that, when it is possible, robot communication should be multi-modal to ensure clarity of meaning, and to improve its efficiency and efficacy. This demonstration of the utility of multi-modal communication is not only of importance for our continuing work with tele-operated humanoid robot avatars, but also for socially communicative autonomous humanoid robots. We suggest our results might be generalizable in this way as previous studies showed that participants treat avatars similarly to how they do autonomous systems (von der Pütten et al., 2010). Indeed, one of the applications of humanoid tele-operation is as a tool to test what is important in terms of robot behavior for successful HRI in so-called super Wizard of Oz studies (Gibert et al., 2013). Limitations and Future Work While the work presented here provides initial insight into speech and iconic gesture integration for robotic communicators, it has a number of limitations which we hope to address in future work. Firstly, the range of tested gestures was limited to manner gestures where hand shape was not expected to be critical. In the future we intend to expand on our findings that integration can occur even for gestures that, as a consequence of differences in physical capabilities, can not be realized in a precisely humanlike way by a robot. Limited evidence was found for this with the "I lit" gestures which were poorly recognized when performed by the robot. The degree of similarity between robot performed and the original human gestures was not objectively controlled, other than visual inspection. Given our preliminary findings on the effects of subtle gesture differences, and existing literature on human sensitivity to biological motion, we suggest the examination of the degree of similarity required for comprehension and integration. Doing so would inform robot design and control requirements (extending the ideas in Riek et al., 2010). Additionally, we suggest that by both carefully controlling gesture motion requirements, and similarity to human motion, one could more easily generalize our results across different robot platforms. 
Another limitation of our work was that all gestures used were tested in a laboratory setting, with a limited set of short communications. In future work we aim to improve the ecological validity of our findings by investigating gestures in more interactive settings (extending the ideas in Hossen Mamode et al., 2013). In doing so we aim to look at a larger range of types of gesture, situated within longer sentences, and accompanied by other non-verbal behaviors such as gaze. An important component of this further work will be timing of gestures relative to speech (McNeill, 1992;Kendon, 2004). Though initial testing has shown coordination between speech and gesture to be close to that of the robot operator, whether it is close enough needs to be experimentally verified to fully validate our robot avatar system as a communication medium. It is also important to note that our results might not be generalizable across cultures. Different nationalities have different gesturing conventions, and semantics (i.e., words that are ambiguous in English are often not in other languages). Further work is required to see if integration varies across different cultures, particularly where gestures are more (e.g., Italy), or less (e.g., Japan) prevalent in everyday communication. AUTHOR CONTRIBUTIONS PB, conception and design of the work; acquisition, analysis, and interpretation of data for the work; drafting of the manuscript. UL, conception and design of the work; analysis, and interpretation of data for the work; revising work critically for important intellectual content. PB and UL, final approval and accountability. FUNDING This research grant is funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood, grant ref: EP/L00416X/1.
2016-05-12T22:15:10.714Z
2016-02-17T00:00:00.000
{ "year": 2016, "sha1": "47cb42bd21089a2b402d72d4365b43eea5b4cf01", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2016.00183/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7d3f68e596948d8e804ab5368cfbaea5df03bb4e", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
145174484
pes2o/s2orc
v3-fos-license
Legal Definition of the Decision of General Meeting in the Russian Law In the clause the concept "the meeting decision" in civil law of the Russian Federation and foreign countries is analyzed. The author considers the legal nature of the decision of meeting on an example of decisions of general meeting of proprietors of premises, copartners of proprietors of habitation, analyzes bases of its invalidity, a consequence of acknowledgement of the decision of meeting of proprietors of premises, copartners of proprietors of habitation void, studies the moment from which the meeting decision is considered void, and also the persons having the right to the appeal of decisions of specified meetings. It is necessary to notice that for today in Russia the substantive provisions, meetings concerning decisions are legislatively established only, and foreign scientists-jurists state considerable number of assumptions of their legal nature. The specified circumstances induce to research, first of all, assumptions and conclusions of foreign authors and to the further rather-legal analysis. Introduction In a science of civil law of Germany [1; 2] there is a concept of "decision" to which carry decisions of general meetings, the supervisory boards, other controls of economic societies, and also other consolidations and organisations: connections physical and juridical persons, capable and not capable consolidations or people [3].The specified decisions concern, including decisions of participants of condominiums [4].However general provisions about decisions of meetings in the legislation of Austria, Germany, Switzerland are absent. Foreign researchers determine the decision as "technics collective will" [5; 6], but it is thus specified that the will of separate persons is generalised in the decision. From the point of view of foreign researchers, specificity of the decision of meeting is that it is not declaration of will, but is based on separate declarations of will: on voting of shareholders or other partners.R. Bork, leaning against court practice and the doctrine, notices that the decision of meetings consists of set in parallel flowing, under the maintenance unanimous, namely expressed in the form of the voting, those declarations of will who participates in decision-making [7].Thus the meeting decision is considered accepted if for its accepting everything have voted not, and the majority of the persons participating in voting.Besides to the meeting decision should submit including the persons who have not taken part in voting and even not consent.V. Boeken asserts that the meeting decision represents the legal transaction with several unidirectional declarations of will.Such orientation, in its opinion, distinguishes the decision from the agreement which also represents the multilateral legal transaction, however is characterised by mutual declarations of will [8].In turn, V. Flum, leaning against court practice, specifies that the decision represents not the sum of separate declarations of will, and their interaction at participation in decision-making [9]. 
Theory In the acting Russian legislation decisions of various meetings are provided.Bases of their invalidity, a consequence of their acknowledgement are legislatively fixed by those, and also the moment from which the meeting decision is considered void is specified.So, decisions of participants of the legal entity admit Russia (including the decision of general meeting of shareholders, participants of restricted liability society, housing memory co-operative society, copartners of proprietors of habitation and others); and also decisions of general meeting of proprietors of premises in an apartment house. The Civil code of the Russian Federation determines the meeting decision as follows: "the meeting Decision with which the law connects civil-law consequences, generates legal consequences on which the decision it is directed, for all persons, having the right to participate in the given meeting (participants of the legal entity, joint owners, creditors at bankruptcy and others -participants of civil-law community), and also for other persons when it is established by the law or follows from a being of relations" [10].The given formulation of the decision of meeting as bases of origin of civil laws and obligations, in our opinion, allows to draw a conclusion that it cannot be carried to transactions. In the Russian right the analysis of the legal nature of decisions of meetings is performed in a context of interpretation of the legislation on juridical persons.So, N.V. Kozlova considers that all acts of an internal of juridical persons including their decisions, represent itself as multilateral transactions [11].G.S. Shapkin underlines that "decisions, as a rule, mention the rights and legitimate interests of shareholders, but not as the persons acting as the party in the transaction, and as participants of a society within the limits of the corporate relations regulated by special precepts of law" [12]. Other scientists, analyzing the legal nature of acts of controls of the legal entity, determine such acts as local normative acts [13].The third group of scientists believes that decisions of controls of the legal entity are not standard legal acts [14; 15]. V.I.Dobrovolsky's position which considers that is impossible, obviously to regard the decision of meeting as not standard legal act as relations in a society are not public or administrative and are based exclusively on the civil legislation as the transaction directed on an establishment, change or the termination of the rights and obligations [16], proves to be true court practice materials.It is necessary to notice that it, regarding the decision of general meeting of shareholders exclusively as the act of the supreme body of management of a society, notices that the meeting decision is a basis for the conclusion of the large transaction or the transaction in which fulfilment there is an interest, and the decision of meeting on election of board of directors and (or) the general director generates powers of controls joint stock company and, as consequence, powers on fulfilment of transactions.Thus any conclusions concerning the legal nature of the decision of meeting are not made. Findings and Discussion Features by which decisions of meetings are allocated, allow to carry them to special type of dispositive facts.The given circumstance proves to be true V.S. Em's position [17] and A.E. 
Sherstobitov [18] which believe that the meeting decision as the corporate act, undoubtedly, can be carried to special dispositive facts of civil law. It is necessary to agree with O.M. Rodionova's opinion which believes that any of these approaches to understanding of the nature of decisions of controls the juridical person cannot be accepted on unique, but to very significant basis: bodies of the legal entity are not persons of law, hence, cannot make any legally significant actions, including transactions, not standard legal acts, etc. [19] Invalidity of decisions of meetings and its kinds In a German science it is supposed that decisions of meetings can be recognised by void by general rules about invalidity of the transactions, provided by civil codes.For acknowledgement void decisions of general meetings of joint stock companies, restricted liability societies [20], decisions of condominiums of [21] and other communities laws establish special rules. Division of nullity decisions on insignificant and debatable is represented reasonable as bases for invalidity of decisions of meetings a little, and the illegality of several of them is obvious and, as a rule, does not demand judicial consideration.Besides, such division is claimed by court practice. In item 181.3 the Civil code of the Russian Federation decisions of meetings are divided on insignificant and debatable.Just as in the German joint-stock right it is fixed that the meeting decision can be nullified on the bases established by the law, owing to acknowledgement by its that court (debatable the decision) or irrespective of such acknowledgement (insignificant the decision).By analogy to the German legislation position about an order of confirmation of decisions of meetings is entered.So, in item 2 of item 181.4 the Civil code of the Russian Federation it is told: "the meeting Decision, debatable in connection with infringement of an order of its accepting, cannot be challenged, if it is confirmed appropriate repeated by the decision before acknowledgement by its court void". Division of decisions of meetings into the insignificant and debatable has found continuation in a designation of corresponding methods of protection: possibility of acknowledgement of the decision is fixed by the void.Voidability of the decision disappears, if general meeting has confirmed disputable the decision new and it has not been appealed during term of contest or contest has been refused.ISSN 2039-9340 (print) Mediterranean Journal of Social Sciences MCSER Publishing, Rome-Italy Vol 6 No 1 S3 February 2015 13 Bases of invalidity of decisions of meetings In the acting Russian legislation the positions similar to the German joint-stock legislation are provided: so, .5 items 46 of the Housing code of the Russian Federation the right to appeal in court gives to proprietors of premises in an apartment house the decision, accepted by general meeting of proprietors of premises in the given house with infringement of requirements of the law, in a case if it did not accept participation in this meeting or voted against accepting such decisions and if such the decision breaks its rights and legitimate interests) [22]. 
The Russian legislator also uses criterion of importance.So, in item 6 of item 46 of the Housing code of the Russian Federation it is established that the court taking into account all circumstances of business has the right to uphold the decision if voting of the specified proprietor could not affect results of the voting, the admitted infringements are not essential and the agreed conclusion has not caused causing of losses to the specified proprietor. In item 4 of item 181.4 the Civil code of the Russian Federation it is told: "the meeting decision cannot be nullified, if voting of the person which rights are mentioned challenged by the decision, could not affect its accepting and the decision does not attract essential adverse consequences for this person".It means that acknowledgements of the decision of meeting void both circumstances should be present simultaneously.However, it is necessary to notice that determination of "essential adverse consequences" in the legislation is absent. It is thought, importance of infringement should not be a basis of invalidity of the decision of meeting if it is not legislatively specified how it is made in other cases of the use in the term current legislation "essential".For example, in item 1 of item 178 the Civil code of the Russian Federation is established importance of error and its explanatory is given.Also in item 432 the Civil code of the Russian Federation that concerns essential treaty provisions is specified. Let's agree with O.M. Rodionova's determination which by analogy to determination in item 2 of item 450 the Civil code of the Russian Federation of fundamental breach of the agreement in which interpretation there is a considerable experience, would solve a problem of division of infringements on attracting and not attracting causing suggests to understand as essential adverse consequences to the participant of community of such harm invalidity of decisions of meetings "that it substantially loses that, on what have the right was to count, having the right to vote" [16]. Consequences of acknowledgement of decisions of meetings the void Any consequences of acknowledgement of decisions of meetings void in the Civil code of the Russian Federation it is not fixed.This question till today has not found the decision in the acting Russian legislation.In a science and practice the problem is shined with reference to decisions of separate kinds of meetings. So, researchers mark an ambiguity of consequences of acknowledgement void decisions of general meetings.As to acknowledgement consequences void decisions of meetings of general meetings of proprietors of premises or copartners of proprietors, about it the legislator at all speaks nothing. Consequences of invalidity of the decision, considering it character, transactions, and the state structure or local government act (item 13 the Civil code of the Russian Federation) are similar to invalidity consequences not.In this connection it would be logical by analogy to paragraph 2 of item 13 the Civil code of the Russian Federation to fix in the Civil code of the Russian Federation a rule that in case of acknowledgement by court of the decision of meeting void the broken right is subject to restoration or protection by the different ways provided by the law. 
Taking into account told it is represented what followed give to considered position more general character and to fix in the Civil code of the Russian Federation the prescription that acknowledgement of decisions of meeting about fulfilment of transactions void in case of the appeal of such decisions separately from contest of corresponding transactions does not involve acknowledgement of corresponding transactions by the void. The persons having the right to the appeal of decisions of meetings In the current legislation, and in the Civil code of the Russian Federation a circle of persons, having the right to appeal against the meeting decision, it is limited by those who is anyhow connected with its accepting.In item 3 of item 181.4 the Civil code of the Russian Federation it is specified that the participant of the corresponding civil-law community not accepting participations in meeting or voting against accepting challenged decision has the right to challenge the meeting decision in court.Thus under participants of community in item 181.1.The persons are understood, first of all, having the right to participate in the given meeting: participants of the legal entity, joint owners, creditors at bankruptcy, etc. However the specified possibility is not excluded and for other persons when it is established by the law or follows from a being of relations. In the current legislation a circle of persons, having the right to appeal against decisions of meeting of participants of juridical persons and joint owners, it is limited only by the authorised persons.So, in item 6 of item 46 of the Housing code of the Russian Federation it is fixed that the proprietor has the right to appeal against the decision in court, accepted by general meeting of proprietors of premises in the given house with infringement of requirements the Housing code of the Russian Federation in case it did not take part in this meeting or voted against accepting such decisions and if such the decision breaks its rights and legitimate interests. The right to the appeal cannot be transferred the authorised person to other person as has personal character.The given circumstance has great value when the question on possibility of the appeal of the decision of general meeting of proprietors of premises or general meeting of copartners of proprietors of habitation by the persons who have become by proprietors of corresponding object of real estate or copartners of proprietors of habitation after its accepting is solved. 
With reference to the general meeting of copartners of proprietors of habitation, the meeting decision is an internal act of that meeting, created by its participants in the course of their activity at a specific moment in time. Two factors matter for the adoption of a decision by the meeting: the voting of the copartners and the recording (statement) of the decision. The first cannot be performed at an arbitrary time; it is limited by the framework specified by the law: the copartner of proprietors of habitation has the right to vote at the moment set aside for this purpose at the meeting, or in another established order, but not whenever he himself decides. Therefore, as the courts have correctly noted, a person who was present at the meeting but did not vote has not exercised this right, did not participate in the decision-making, and cannot demand that the decision be declared void. The meeting decision does not concern any other persons, since it is not published or expressed for them. Even if such a subject subsequently becomes a participant of the meeting, he can challenge only a new decision, not one already adopted. Accordingly, a new member of the meeting whose rights and interests are infringed by the decision, like any other interested person, can challenge only the actions of which that decision forms an element (for example, transactions concluded after the meeting has adopted the decision). Thus, satisfaction of a claim for invalidity of a meeting decision is considered possible if the voting of the person whose rights are affected by the challenged decision could have affected its adoption and the decision entails essential adverse consequences for that person (item 4 of item 181.4 of the Civil Code of the Russian Federation). The results of comparing the general meeting of proprietors of premises and the general meeting of copartners of proprietors of habitation by various criteria are displayed in Table 1, which can assist in the organisation and conduct of such meetings.

Table 1. Comparison of the general meeting of proprietors of premises and the general meeting of copartners in the partnership of proprietors of habitation within the system of controls of an apartment house

Criterion | General meeting of proprietors of premises | General meeting of copartners of proprietors of habitation
Place in the system of controls of an apartment house | A control (governing body) of the apartment house | The supreme body of management of the partnership of proprietors of habitation
Regulation of the order of convocation | The Housing Code of the Russian Federation | The Housing Code of the Russian Federation and the partnership charter
Features of carrying out | Can be conducted only in the absence of a quorum at the meeting held in person | Can be conducted only in the absence of a quorum at the meeting held in person
Data to be specified in the message/notification | The list is contained in item 5 of item 45 of the Housing Code of the Russian Federation | The list is contained in item 2 of item 146 of the Housing Code of the Russian Federation
Quorum | 50% of the total number of votes of the proprietors | 50% of the total number of votes of the copartners of proprietors of habitation
Initiator of the meeting | A proprietor (proprietors) | Any member (members) of the partnership
Who leads the meeting | The chairman elected by the meeting | The chairman of the board or the trustee
Competence | Specified in item 2 of item 44 of the Housing Code of the Russian Federation | Specified in item 2 of item 145 of the Housing Code of the Russian Federation
2017-09-09T16:19:29.164Z
2015-02-28T00:00:00.000
{ "year": 2015, "sha1": "529ec7820fc5c3238e26e6450559f37e706178d8", "oa_license": "CCBY", "oa_url": "https://www.mcser.org/journal/index.php/mjss/article/download/5661/5457", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "529ec7820fc5c3238e26e6450559f37e706178d8", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Sociology" ] }
207847204
pes2o/s2orc
v3-fos-license
Fitness Optimization and Evolution of Permanent Replicator Systems

In this paper, we discuss the fitness landscape evolution of permanent replicator systems using the hypothesis that the characteristic time of evolutionary adaptation of the system parameters is much slower than the time of the internal evolutionary dynamics. In other words, we suppose that the extreme principle of Darwinian evolution based on Fisher's fundamental theorem of natural selection is valid for the steady states. Various cases of evolutionary adaptation for permanent replicator systems are considered.

Introduction

Starting from Fisher's fundamental theorem of natural selection, evolutionists began to apply extreme principles to Darwinian evolution [1,2,3]. The theorem postulates: "The rate of increase in (mean) fitness of any organism at any time is equal to its genetic variance in fitness at that time" [4]. However, the notion of "genetic variance of fitness" was not strictly defined in the early studies. Later, Wright [5] introduced another important concept, the adaptive fitness landscape, which is extensively applied in theoretical biology. Many of the extreme principles in evolutionary theory rely on the assumptions of a constant fitness landscape and a steadily growing mean fitness. In biological studies, the underlying understanding of the fitness landscape is often based on its common visualization as a static hypersurface. From this point of view, the evolutionary process is depicted as a path going through a space of hills, canyons, and valleys, ending up at one of the peaks [6,7]. For the avoidance of doubt and misinterpretation, we define the notion of "fitness landscape" explicitly, providing a mathematical formalization of its geometry in the case of general replicator systems. Let us start with the classical evolutionary model, the replicator equation [8,9,10]:

\dot{u}_i = u_i \left( (Au)_i - f(u) \right), \quad i = 1, \ldots, n.   (1)

Here, A = [a_{ij}] is a given n × n matrix of fitness coefficients, and (Au)_i is the i-th element of the vector Au, where the vector u stands for the distribution of the species in the population over time. The term f guarantees that for any time moment t the vector u(t) belongs to the simplex S_n:

f(u) = \sum_{i,j=1}^{n} a_{ij} u_i u_j = (Au, u).   (2)

In terms of evolutionary game theory, f(u) is the mean fitness of a population of composition u with payoff matrix A. The equilibria ū are given by the solutions to the system of algebraic equations

\bar{u}_i \left( (A\bar{u})_i - f(\bar{u}) \right) = 0, \quad i = 1, \ldots, n,

where f(ū) is the mean fitness at the equilibrium state ū (which is not necessarily stable). The geometric object Σ in R^n corresponding to the quadratic form f(u) = (Au, u), considered for

u(t) ∈ S_n = { u ∈ R^n : u_i ≥ 0 for all i, \sum_{i=1}^{n} u_i = 1 },

defines the fitness landscape of the system (1). Note that for every trajectory γ_t of the system (1), γ_t ∈ S_n, there is a curve Γ_t ∈ Σ. Any matrix A can be decomposed into the sum

A = B + C, \quad B = \tfrac{1}{2}(A + A^T), \quad C = \tfrac{1}{2}(A - A^T),

where B is a symmetric matrix and C is a skew-symmetric matrix; since (Cu, u) = 0, it follows that

f(u) = (Au, u) = (Bu, u).

This means that there is an orthogonal transformation U such that U^T B U = Λ = diag(λ_1, …, λ_n), where the values λ_i are the real eigenvalues of the matrix B. Hence, the transformation u = Uw reduces the quadratic form to the canonical form

f = \sum_{i} \lambda_i^{+} w_i^2 - \sum_{j} |\lambda_j^{-}| w_j^2,   (5)

where λ_i^{+} and λ_j^{-} denote the positive and negative real eigenvalues of B, respectively (assuming that |B| ≠ 0). The same transformation maps the simplex S_n onto the convex set

U^T S_n = { w ∈ R^n : Uw ∈ S_n }.   (6)

Thus, the fitness landscape of the general replicator system (1) is defined by the shape of the surface (5), which has a canonical form over the convex set (6).
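As a minimal numerical sketch of the construction above (an illustration added here, assuming NumPy; not part of the original derivation), the snippet checks that the skew-symmetric part of A does not contribute to the mean fitness and reads off the eigenvalues that appear in the canonical form (5).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.uniform(0.0, 1.0, size=(n, n))     # an arbitrary fitness matrix a_ij > 0

    # Decomposition A = B + C into symmetric and skew-symmetric parts.
    B = 0.5 * (A + A.T)
    C = 0.5 * (A - A.T)

    u = rng.dirichlet(np.ones(n))              # a random point of the simplex S_n
    f_A = u @ A @ u                            # mean fitness f(u) = (Au, u)
    f_B = u @ B @ u                            # (Bu, u); the skew part contributes nothing
    print(np.isclose(f_A, f_B))                # True

    # Real eigenvalues of B: the coefficients of the canonical form (5).
    print(np.round(np.linalg.eigvalsh(B), 3))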
We suggest extending the classification of quadratic forms to fitness landscapes, emphasizing their geometrical features. Thus, we define three types of fitness landscapes, depending on the eigenvalues of the matrix B: elliptic if all the eigenvalues are of the same sign, hyperbolic if some eigenvalues have opposite signs, and parabolic if there are zero eigenvalues. Autocatalytic replicator systems have elliptic fitness landscapes with n peaks at the corners of the simplex S_n, each of which is an attractor. The trajectories γ_t, depending on the initial state, belong to different basins of attraction, and the curves Γ_t converge to one of the corresponding attractors. In hypercycle systems with the coefficients k_i = 1, i = 1, …, n, the eigenvalues of the matrix B are λ_k = cos(2πk/n), k = 0, …, n−1 (where λ_0 = 1 corresponds to the eigenvector (1, …, 1), which does not belong to the simplex S_n). In this case, the fitness landscape is of hyperbolic type. The hypersurface Σ reaches its maximum heights on the simplex border for n ≥ 5, when the system has a limit cycle, and at a steady state for n = 2, 3, 4, when there is a stable equilibrium. These observations illustrate the correspondence between the geometry of Σ and the behavior of the phase trajectories. From a mathematical perspective, Fisher's fundamental theorem is correct only for systems with symmetric interaction matrices, which corresponds to a diploid population. Moreover, under these assumptions the maximum fitness value should be reached at the steady state of the evolutionary system. This places significant restrictions on its applicability, making these cases exceptional rather than typical [11]. Various studies [12,13,14,15,16] were dedicated to new interpretations and re-examination of Fisher's postulates. For example, [14], which provides an extensive literature review on the mathematical formalism for the fundamental laws of evolution, discusses Fisher's approach to natural selection in terms of the F-theorem. In the current study, we develop a fitness optimization technique introduced in [17].

Dynamical fitness landscapes: adaptation process

One of the ways to examine fitness landscapes is to consider their fluctuations. The question arises of how adaptive changes can be achieved in evolution. The central hypothesis of this study is that the characteristic time of evolutionary adaptation of the system parameters is much slower than the time of the internal evolutionary process, which leads the system to its steady state. Throughout the paper, we will call the first the evolutionary time. For hypercycles, we introduced a similar concept in the study [17]. This assumption means that evolutionary changes of the system parameters happen at a steady state of the corresponding dynamical system. In other words, we can write an equation for a steady state with respect to the evolutionary time over a set of possible fitness landscapes. Consider a population distribution u = (u_1, …, u_n) representing the frequencies of different types in a replicator system. If the system is permanent over the simplex S_n (here, the notation is the same as in the previous subsection) and there is a unique internal equilibrium ū ∈ int S_n (stable or unstable), then the mean integral (time-averaged) values of the frequencies and of the mean fitness coincide with those attained at the steady state.
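The following sketch (an added illustration, assuming NumPy/SciPy; not taken from the paper) checks the two statements above for the unit hypercycle with n = 5: the eigenvalues of B are cos(2πk/n), so the landscape is hyperbolic, and, because the system is permanent, the time-averaged frequencies and mean fitness approach their values at the interior steady state ū = (1/n, …, 1/n).

    import numpy as np
    from scipy.integrate import solve_ivp

    n = 5
    A = np.roll(np.eye(n), 1, axis=0)          # hypercycle with k_i = 1: (Au)_i = u_{i-1}

    # Eigenvalues of the symmetric part: cos(2*pi*k/n), mixed signs -> hyperbolic type.
    lam = np.sort(np.linalg.eigvalsh(0.5 * (A + A.T)))
    print(np.round(lam, 3), np.round(np.sort(np.cos(2 * np.pi * np.arange(n) / n)), 3))

    def replicator(t, u):
        f = u @ A @ u
        return u * (A @ u - f)

    u0 = np.random.default_rng(1).dirichlet(np.ones(n))
    sol = solve_ivp(replicator, (0.0, 3000.0), u0, max_step=0.5, rtol=1e-8, atol=1e-10)

    def time_average(y, t):
        """Trapezoidal time average of y(t) along the last axis."""
        dt = np.diff(t)
        return (0.5 * (y[..., 1:] + y[..., :-1]) * dt).sum(axis=-1) / (t[-1] - t[0])

    t, U = sol.t, sol.y
    f_t = np.einsum('it,ij,jt->t', U, A, U)
    print(np.round(time_average(U, t), 3))        # close to (0.2, 0.2, 0.2, 0.2, 0.2)
    print(round(float(time_average(f_t, t)), 3))  # close to f(u_bar) = 1/n = 0.2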
This allows examining an evolutionary process of fitness landscape adaptation using only the equation for a steady state, in which all the elements depend on the evolutionary time τ. Therefore, fitness landscape adaptation happens in a time that describes the system dynamics converging to a steady state over the set of possible fitness landscapes. It is worth pointing out that this approach is valid only for permanent systems. In this case, it holds that

\lim_{T \to \infty} \frac{1}{T} \int_0^T u(t)\, dt = \bar{u}, \qquad \lim_{T \to \infty} \frac{1}{T} \int_0^T f(u(t))\, dt = f(\bar{u}).   (7)

The adaptation of the fitness landscape over the evolutionary time τ can be described by the steady-state condition

(A(τ) \bar{u}(τ))_i = \bar{f}(τ) = (A(τ) \bar{u}(τ), \bar{u}(τ)), \quad i = 1, \ldots, n, \quad \bar{u}(τ) ∈ int S_n,   (8)

where the elements a_ij(τ) of the matrix A are smooth functions with respect to the parameter τ ∈ [0, +∞). Each solution of the equation (8) corresponds to the dynamics of a permanent replicator system, which is characterized by (7). The possible fitness landscapes satisfy the condition

\sum_{i,j=1}^{n} a_{ij}^2(τ) \le const.   (9)

We show that the problem of evolutionary adaptation of the replicator system over the set of possible fitness landscapes (9) transforms into the problem of maximizing the function f̄(τ) over the set of solutions to (8). In a previous study [17], we obtained an expression for the mean fitness variation. Based on this result, we suggested a process of fitness optimization in the form of a linear programming problem. The numerical simulations show that during the iteration process the fitness value increases and the system's behaviour changes. For a sufficiently large number of iterations, the steady state of the system stays almost the same; however, the fitness value grows drastically at the same time. In this case, when the initial state of the system is described by the hypercycle equations, we see a qualitative transformation of the system: new connections appear, and autocatalysis can start. Increasing the number of iterations further, the coordinates of the steady states split and, over the simplex, the system converges to a fixed fitness value. In [17] it was shown that the observed effect is similar to the "error threshold" effect in the quasispecies system by Eigen [18,19], when the eigenvalue and the eigenvector of the system stay unchanged as the mutation rate grows. In the Appendix, we provide a mathematical proof of this effect: if the mean fitness reaches its extremum at some value of the evolutionary parameter, then the mean fitness does not grow further even for larger parameter values. The main question is the relation of this approach to the ESS (evolutionarily stable strategy) concept, which is widely used in studies of evolutionary game theory. It is known that if the state u(τ) ∈ int S_n is evolutionarily stable, then it is asymptotically stable [20]. One can also show that in this case the steady state is a local extremum point of the mean fitness function [11]. Consider a special case of the replicator system, a hypercycle system:

\dot{u}_i = u_i \left( k_i u_{i-1} - f(u) \right), \quad f(u) = \sum_{j=1}^{n} k_j u_j u_{j-1}, \quad i = 1, \ldots, n, \quad u_0 \equiv u_n.   (10)

For n ≥ 5, there is no evolutionarily stable state, since the only equilibrium û ∈ int S_n is unstable and a stable limit cycle exists. Despite this fact, the suggested process of evolutionary adaptation of the fitness landscape can drastically change the mean fitness of the system (10) without affecting the coordinates of the steady state. In the case of hypercycles of small size (n = 2, 3), the steady state is an ESS. The numerical simulations show that at every step of the iteration process of evolutionary adaptation this ESS property holds and the fitness function steadily grows (see Fig. 1). In this paper, the approach of the evolutionary adaptation process is applied to different classes of replicator systems.
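Since the exact variation formula (21) and the precise form of the constraints (9), (16) are not reproduced in this extract, the sketch below (an added illustration, assuming NumPy/SciPy) uses simplified stand-ins: the first-order fitness change at a fixed steady state as the objective, and a linearised "do not increase the sum of squares" condition as the constraint. It shows the shape of one linear-programming step, not the authors' exact scheme.

    import numpy as np
    from scipy.optimize import linprog

    def adaptation_step(A, u_bar, eps=1e-3):
        """One illustrative LP step: pick |dA_ij| <= eps maximising the first-order
        fitness change sum_ij u_i u_j dA_ij, while keeping the resource budget
        sum_ij A_ij^2 from growing (linearised: sum_ij A_ij dA_ij <= 0) and
        keeping all fitness coefficients non-negative."""
        c = -np.outer(u_bar, u_bar).ravel()                # linprog minimises, so negate
        res = linprog(c,
                      A_ub=A.ravel()[None, :], b_ub=[0.0],
                      bounds=[(max(-eps, -a), eps) for a in A.ravel()],
                      method="highs")
        return A + res.x.reshape(A.shape)

    n = 5
    A = np.roll(np.eye(n), 1, axis=0)                      # start from the unit hypercycle
    u_bar = np.full(n, 1.0 / n)                            # kept fixed here for brevity;
                                                           # the full scheme re-solves for u_bar(tau)
    for _ in range(300):
        A = adaptation_step(A, u_bar)
    print(round(float(u_bar @ A @ u_bar), 3))              # mean fitness at u_bar has grown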
In the first part, we consider the system of cyclic replication, where each species depends on the two previous ones in the scheme. We call this class "bi-hypercycles" or binary hypercycles. In the second part, the evolutionary process targets the case, when at any random time moment a new species can be added to a hypercycle. The main goal here is to examine how the combination of determined and stochastic factors affects the replicator system's evolutionary dynamics. The importance of this direction was highly recognized in the literature [10]. In the last part of the paper, we consider two specific cases. First, we analyze a hypercycle with a dominating type, which influences all the evolutionary process. We call this class of the replicator systems "ant hill" in association with the internal dynamics of ant populations. In another example, we study a specific matrix of transition for the replicator system inspired by the experimental results [21]. Evolution of Bi-hypercycles Let us consider the following system, which we call a "bi-hypercycle" system throughout the paper: The reproduction of each element in the hypercycle of this type is catalyzed by two previous elements. As in the previous models, the frequencies of the types belong to a simplex: where a i > 0, a 0 = a n , u 0 (t) = u n (t), u −1 (t) = u n−1 (t). We rewrite the equations (11) for u(t) = (u 1 (t), u 2 (t), . . . , u n (t)) in a matrix form. For doing this, we introduce the transition matrix: To finalise the matrix equation, we denote: The equalities (11,12) transform into: The system (11,12) is permanent, hence [22] the values of the matrix U(t) and function f are defined in a steady-state as follows: Consider the set of non-negative matrices A(τ ) = (a kj (τ )) n k,j=1 , where elements a kj are smooth functions with respect to τ . The condition (9) applies here: Moreover, we assume A(0) = A, where A -is a matrix (13). We apply the hypothesis concerning evolutionary changes of the matrix elements A(τ ) during the fitness optimization process. The steady-state is described by the equality: From here it follows that the equation for evolutionary adaptation of the system has the form: Let us vary the evolutionary time parameter τ to τ + ∆τ and examine the perturbation of the matrix A(τ ),Ū(τ ), vectorū(τ ), and functionf (τ ). We denote as δA, δŪ, δū, and δf the corresponding linear parts of the increments. From (18), we get the equation with the accuracy o(∆τ ) : Multiplying the latter equality by the vector and taking into account we get the formula for the mean fitness variation with respect to the small parameter perturbation τ : According to (16), elements of the matrix δA(τ ) have to satisfy the condition The correspondence of the value δū(τ ) to the matrix elements δA(τ ) can be obtained from (20): Note that the condition δū > 0 has to be taken into account, if one or several componentsū are close to the simplex' border S n . The set of matrices, falling under the condition (16) is convex. Moreover, the setū ∈ intS n is convex. However, this does not guarantee that the functionf (τ ) in (18) is convex. Consider the following equation: Since u ∈ S n , then (ū, 1) = 1, hence (δū, 1) = 0. Multiplying (24) byv, we obtain: From here, we have the expressions: If the inequality (22) reaches an equality, then the necessary condition for extremum takes the form: For the cases where (25) takes place, we observe the effect similar to the "error threshold" phenomena in the quasispecies models [18]. 
In , we derive the maximum condition for the mean fitness value , which can be described as an inequality δ 2 f (τ ) < 0 for δf (τ ) = 0. Expression (21) shows the iteration process of the mean fitness maximization, where each step is a linear programming problem. As the matrix elements A(τ ) are smooth, then Based on the hypothesis that the elements of the fitness landscape change slowly, suppose that for ε > 0. We have: (26), then the error for each step of the iteration process, which is using only linear approximations, will be of order o(ε 2 ). As an initial state (τ = 0), we consider the steady-state (17). Using (16), we solve the linear programming problem: As a result, we have such perturbations of the matrix A, that guarantee the mean fitness growth with the evolutionary time change ∆τ . The matrix δA(0) is used for the mean fitness calculation on the next step. Thus, each step includes the choice of the optimal matrix over the matrix set (16), which leads to the increasing fitness value. Numerical calculations for bi-hypercycle: the result of the iteration algorithm of the evolutionary process with n = 5 As an example of the bi-hypercycle system (11,12), we take n = 5 with the transition matrix: Taking the matrix (28) as the initial condition τ = 0, we apply the adaptation process described above and calculate the dynamics of the system (11) numerically. The main properties of the system (11, 28) are investigated numerically and illustrated by the figures below. Figure 1 depicts the evolutionary change of the mean fitness in the timescale τ . The graph forf represents the convergence of the fitness to the maximum value after some time moment. As figure 2 shows, the position of the steady-state remains the same over around 300 steps of the evolutionary process. After some critical time, the equilibrium point splits into different trajectories: one of the coordinates converges to 1, while the other four elements -to δ, which is set up in the algorithm. Figure 3 represents the bi-hypercycle system evolving after 200 steps of the adaptation process in the regular timescale t, where the stable equilibrium is shownū i = 0.2, i = 1, 2, . . . , 5. At the same timescale, Figure 4 shows the dynamics of the bi-hypercycle system, but for 450-th step: unlike the case shown in Figure 4, here we do not see any stable equilibria, and the trajectories have cycles. The transitions within the system are represented in Fig. 5. Numerical calculations for bi-hypercycle: the result of the iteration algorithm of the evolutionary process with two parasites Consider another example of the bi-hypercycle system (11,12), we take the transition matrix of the form: As we have shown in a previous study [17], evolved hypercycles obtain resistance to parasite invasion. The same property takes place with the bihypercycle system described above. Consider an example to illustrate this property. Fig. 6 shows how the cycle of length n = 5 and its reaction to a parasite invasion. Evolution of the replicator system with new species adoptation at random time moments Consider the evolutionary process of the mean fitness value optimization of the replicator system (1,2). In [17], we have shown an iteration process for hypercycles, leading to a steady growth of the functionf (τ ) over the matrix set under the condition (9). Let a new element of the replicator system appear at the random step of the iterative process k, which is characterized by τ k = k∆τ . 
We denote a new matrix (n + 1) × (n + 1) as A k , which corresponds to the system (1, 2) on k-th step with n × n dimension. The steady-state distribution vector changes on this k step to include the new species:ū k = (u k 1 , u k 2 , . . . , u k n+1 ). We assume the following: 1. The mean fitness value in a steady-state of the systemf (τ k+1 ) for (k + 1)-th step is the same as it should be without a new species: 2. Matrix elements A k+1 for (k +1)-th iteration do not change compare to A k for the first n rows and n columns, i.e. a k+1 ij = a k ij , i, j = 1, . . . , n. 3. The first n elements of the distribution vector for k-th and (k + 1)-th steps satisfy the condition: 4. Impact of all species is the same, i.e., From (32), we obtain that a parameter 0 < α k < 1 exists, such as the components of the vectorū k+1 defined as follows: It follows from (34) that: From the latter, we have: The formula (35) defines all the elements in (n + 1)-th column, besides the last element in the matrix A k+1 . The expression (33) gives the connection between A k+1 : Thus, the procedure for the matrix A k+1 and the vectorū k+1 at the time moment k of the iteration process depends only on the parameter 0 < α k < 1. Numerical experiments have shown that if the frequency u k+1 n+1 is small enough for a new species, then α k > 1 u k max +1 , u k+1 max = max u k 1 , . . . , u k n , then the suggested evolutionary process of the mean fitness value maximization in a steady-state happens without losing the permanence property. The figures show the results of the numerical calculations for the evolutionary dynamics of the replicator system with new species adoption. The random value, which defines the number of step of the iteration process for the new species to enter the system, is described by the Poisson distribution. Using the uniform distribution, we define the parameter α k : As the initial state (τ = 0), we consider (28). We illustrate the of the evolutionary process with random species adoption in details below (based on the system (1,2) for n = 5): - Figure 7 shows the mean fitness in a steady-state. Here, the Poisson distribution adds new species at 682-th and 1240-th steps with the parameters β 682 = 0.829 and β 1240 = 0.557 correspondingly. - Figure 8 demonstrates the steady-state evolution. Green line defines the steady-state for the first five elements of the system. Blue line corresponds to the steady-state of the system, which describes the sixth element: the first "new" one in the system. Red color used for the seventh element, which is the second new element after adding it to the system. -In Figure 9, we have the graphics for the frequency distribution in the replicator system over internal time t before a new type entered the system. -In Figure 10, the graphics describe the frequency distribution in the replicator system, depending on t before the second new type was added to the system. Green line corresponds to the set, where the frequencies of the original 5 types fluctuates, blue -for the sixth element. - Figure 11 shows the frequency distribution in the replicator system, depending on t at 1735-th iteration. As it was shown before, green color is used for the original 5 types, blue -for the sixth and redfor the seventh added at 1239-th step. To show, how the hypercycle systems evolve under this assumption of new species inclusion, consider a third-order system. In Figure 12 (a-b), we see how the initial state of the system transforms after the fourth type appearance. 
Figure 13(a-b) depicts the state before the fifth type and the transitions after the further extension. These results show the possibility of a significant evolutionary change of the system, which is extended by a new type at random time moments. Special settings for replicator systems "Ant hill" system The graph represents the "ant hill" system "ant hill". In this case, we have a dominating type which is supported by n − 1 other types. Here, catalyzic coefficients are β i , i = 1, . . . , n − 1, for the dominating type and all the backwards coefficients are α. Moreover, there is cycle for general types with coefficients k i . 21 The system state is described by the following: where It is natural to consider n 3 The system (37) has a unique steady-stateū ∈ intS n+1 , which is a necessary condition for the system to be permanent. Theorem 0.1. Let the condition hold: If such values of the parameters exist, then (39) is permanent. Consider the function: The estimation works Taking into account (39), we derive: Hence, Consider a function S(t) = n i=0 u i (t): Here r 2 = k M +α β m+α < 1. From the comparison theorem, we get: which completes the proof. To illustrate this analysis, we take n = 5, where: and α = 0.1, β i = 0.8, k i = 1. Evolutionary changes during the system's adaptation and mean fitness maximization lead to reduction of the parameters β i , which define the catalysis of the dominating macromolecule (Fig. 15). The corresponding steadystate describing dominating macromolecule converges to zero (approximately at 310-th step). Figure 16 shows dominating molecule (dotted line) and general molecules (solid line) at 50, 175, 250, and 400 steps of the evolution of the system (37) with A (42). It is worth mentioning, that the influence of the dominating molecule goes down and the amplitude of all others increase. This system describes a population, which consists of two types -"egoists" and "altruists". We call egoists the molecules 1, 2, and 3, which are participating in autocatalysis with coefficient α, and one of the 4, 5, or 6 with coefficient σ. Altruists, in this case, are 4, 5, and 6, which enforce the catalysis of others: egoists -with βand one od the altruists γ. c) d) Figure 18: The frequencies of the species in the hypercycle system (43) with matrix A (44) changing over system time t: a) at the beginning, b) at the 125-th step of the fitness landscape evolutionary process, c) at the 175-th step of the fitness landscape evolutionary process, d) at the 200-th step of the fitness landscape evolutionary process. Conclusion In this paper, we applied an algorithm for the fitness landscape evolution of the replicator system. We defined the limitation on the sum of squares of the fitness matrix coefficients while looking at the mean integral fitness maximum. We follow our previous study, suggesting that the evolutionary time of the hypercycle adaptation is much slower than the internal system dynamics time. The numerical simulation showed that the process of the mean fitness maximization is qualitatively similar for classical hypercycles and bi-hypercycles. At the beginning of the fitness landscape adaptation process, for a significantly long period in the evolutionary timescale, the steady-state of the system (1, 2) remains the same. However, the structure of the transition matrix changes, which leads to the new transitions in the hypercycle system: besides the original connections, we get the backward cycle, autocatalysis, and new connections between species. 
This can be interpreted as a more diverse and sustainable evolutionary state. After some critical number of changes, the coordinates of the steady-state split into two parts: one species dominates, and its frequency converges to one, while the frequencies of the others converge to a minimum value. The latter process goes along with a significant increase in the autocatalytic coefficient for the dominant species, promoting its selfish behavior in the system. According to the numerical investigation, this dominant species is chosen by random and varies among the experiments. We suppose that this choice depends on computational errors. As a final stage of the evolutionary process, there is a stabilization of the fitness landscape. Here, calculations drastically depend on the restrictions on the steady-state coordinates. This process is similar to Eigen's error catastrophe proposed for the quasispecies systems. The longevity of the evolutionary period before stabilization grows with the number of resources allowed in the system (9).
2019-11-07T13:28:21.000Z
2019-11-07T00:00:00.000
{ "year": 2019, "sha1": "0e495377e1c15e36778c831d8c9719330d8a9870", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1911.02893", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0e495377e1c15e36778c831d8c9719330d8a9870", "s2fieldsofstudy": [ "Mathematics", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology", "Physics", "Computer Science" ] }
225341523
pes2o/s2orc
v3-fos-license
Lithium Metal Negative Electrode for Batteries with High Energy Density: Lithium Utilization and Additives

Metallic lithium is considered to be the ultimate negative electrode for a battery with high energy density due to its high theoretical capacity. In the present study, to construct a battery with high energy density using metallic lithium as a negative electrode, charge/discharge tests were performed using cells composed of LiFePO4 and metallic lithium at various lithium utilization values. A relationship was observed between utilization and cycle performance, and the degradation behavior of metallic lithium was evaluated by cross-sectional FE-SEM observations. We also investigated the effects of additives in the electrolyte and found that FEC and VC effectively improved cycle performance.

Introduction

Since the early 1960s, lithium metal negative electrodes have been extensively examined due to their high theoretical capacity (3860 mAh g⁻¹) and low redox potential (−3.04 V vs. SHE).[1][2][3] Metallic lithium is considered to be the ultimate negative electrode; however, some limitations need to be overcome for its practical use, such as reactions with electrolytes, inhomogeneous deposition, and low Coulombic efficiency. In the late 2000s, research was conducted on the development of a 500 Wh kg⁻¹-class battery for electric vehicles.4 Metallic lithium is considered to be the first candidate for the negative electrode. When a 500 Wh kg⁻¹-class battery cell is manufactured, it is necessary to use a relatively thin lithium film and to perform charge/discharge at high lithium utilization. In recent years, negative electrode-free batteries have also been suggested,5,6 but these also require high lithium utilization. The morphology of deposited/dissolved lithium and the cycle performance of lithium metal negative electrodes are influenced by a number of factors, such as the electrolyte composition, applied current, current collector, separator, and confined pressure.[7][8][9][10] These factors also differ depending on whether a lithium/lithium symmetric cell or a full cell combined with a positive electrode material is used for evaluations. Although these factors have been investigated in detail, limited information is currently available on the effects of lithium utilization on the cycle performance of the cell.11,12 Therefore, further studies are needed to evaluate the cycle performance of lithium metal negative electrodes at high lithium utilization in order to construct a battery with high energy density. The effects of lithium utilization on charge/discharge behavior in an electrolyte containing cyclic and linear carbonates have already been reported.12 Our research group is currently examining high energy density batteries using a lithium metal negative electrode with transition metal sulfides or FeF3 as the positive electrode.13,14 We previously demonstrated that favorable charge/discharge behavior was observed when the electrolyte contained only cyclic carbonates. This result indicated that such an electrolyte has better compatibility with the positive electrode material. To construct a battery with high energy density, as mentioned above, thin metallic lithium must be used at high utilization. Thus, it is important to understand the electrochemical behavior of the lithium metal negative electrode under such severe conditions in an electrolyte containing only cyclic carbonates.
In the present study, we investigated the effects of lithium utilization on the stability of a lithium metal negative electrode in an electrolyte composed of ethylene carbonate and propylene carbonate. The cells evaluated in the present study were constructed using LiFePO4 as a positive electrode, which shows stable cycle performance with a plateau at 3.4 V vs. Li/Li+. 15,16 Cells with different lithium utilization values were fabricated by controlling the areal capacities of both LFP and metallic lithium. The effects of additives in the electrolyte were also examined. The results obtained are fundamentally important for the practical use of lithium metal negative electrodes. Charge/discharge tests were performed at 25°C using a charge/discharge system (ACD-M01C, Aska Electronic). Cut-off voltages for the discharge and charge processes were set at 2.5 and 4.0 V, respectively. After the charge/discharge tests, the lithium metal negative electrode was washed with dimethyl carbonate (DMC). A cross-section of the deposit was observed under a field emission scanning electron microscope (FE-SEM, JSM-6700FV, JEOL) after polishing with ion beams using a cross-section polisher (IB-19520CCP, JEOL). When lithium metal was transferred from the Ar-filled glove box to the FE-SEM and the polisher, the transfer vessel used was not exposed to air. In the present study, lithium utilization was defined as follows:

lithium utilization (%) = areal capacity of LFP (mAh cm⁻²) / [areal capacity of Li (mAh cm⁻²) + areal capacity of LFP (mAh cm⁻²)] × 100   (1)

Table 1 shows the 10 types of cell configurations fabricated to investigate the effects of lithium utilization on the stability of the lithium metal negative electrode toward the charge/discharge cycle. In the present study, 1.0 mol dm⁻³ LiTFSA in EC : PC (1 : 1 vol%), a conventional electrolyte, was selected as the electrolyte to assemble the cell. Figure 1(a) shows changes in the discharge capacity of cells using LFP of 4.3 or 4.4 mAh cm⁻² and metallic lithium with different areal capacities (Cell Nos. 1-5 in Table 1). Galvanostatic charge and discharge cycling was conducted at a current density of 1.0 mA cm⁻², which was calculated from the area of the lithium metal negative electrode. This current density corresponded to an approximately 0.17C rate calculated from the capacity of LFP. The capacity of the cell with high lithium utilization, i.e. using thin metallic lithium as the negative electrode, decayed more rapidly. The lithium that originally existed was gradually consumed during the discharging process. Cells with lower lithium utilization exhibited capacity decay from approximately 100 cycles. Figure 1(c) shows the cumulative discharge capacity of the cells. Decay was considered to be caused by the clogging of the separator and depletion of the electrolyte in the cell as the cumulative discharge capacity increased. The cell with lithium utilization of 10% exhibited better cycle performance than the other cells. We are currently investigating why cycle performance did not correspond to the order of utilization, and one of the factors responsible may be the confined pressure in the cell, which changes depending on the thickness of the electrode. Figure 1(b) shows changes in the discharge capacity of cells composed of LFP with different areal capacities and a lithium metal negative electrode of 10.3 mAh cm⁻², which was the thinnest metallic lithium used in the present study (Cell Nos. 6-10 in Table 1). 
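As a quick check on Eq. (1), the short Python sketch below computes lithium utilization from the areal capacities of LFP and lithium. The 10.3 mAh cm⁻² lithium electrode value is taken from the text; the example LFP loadings are approximate, hypothetical values chosen so that the resulting utilizations fall near the reported 1, 5, 11, 16, and 30% levels.

```python
# Illustrative helper for Eq. (1): lithium utilization from areal capacities.
# The 10.3 mAh cm^-2 lithium electrode is stated in the text; the LFP loadings
# below are approximate example values, not the exact cell configurations of Table 1.

def lithium_utilization(lfp_areal_capacity: float, li_areal_capacity: float) -> float:
    """Return lithium utilization (%) as defined in Eq. (1).

    Both arguments are areal capacities in mAh cm^-2.
    """
    return lfp_areal_capacity / (li_areal_capacity + lfp_areal_capacity) * 100.0


if __name__ == "__main__":
    li_capacity = 10.3  # thinnest lithium electrode used in the study (mAh cm^-2)
    for lfp_capacity in (0.1, 0.5, 1.3, 2.0, 4.4):  # hypothetical LFP loadings (mAh cm^-2)
        u = lithium_utilization(lfp_capacity, li_capacity)
        print(f"LFP {lfp_capacity:4.1f} mAh cm^-2 -> utilization {u:4.1f} %")
```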
Galvanostatic charge/ discharge cycling was conducted at a current density of 0.2C, which was calculated from the capacity of LFP in each cell. The charge/ discharge curves of the cell using LFP with a different areal capacity was shown in Fig. S1. The initial discharge capacity was almost 140 mAh g ¹1 , regardless of lithium utilization. The slightly lower capacity of the cell with lithium utilization of 1% was attributed to experimental weighing errors. The capacity of the cell with lithium utilization of 1% exhibited negligible degradation even after 300 cycles, whereas those with lithium utilization of 5, 11, 16, and 30% decayed rapidly after approximately 250, 150, 100, and 30 cycles, respectively. Even when thin metallic lithium was used, more than 300 cycles of charge/discharge was possible when lithium utilization was low. The cumulative discharge capacity of cells was shown in Fig. 1(d). Charge/discharge continued in the cells with lithium utilization of 1 and 5%, even after 200 cycles, due to the smaller cumulative discharge capacity than those of the other cells. In these experiments, the interpretation of the results obtained was slightly complicated because the discharge rate (current density) and amount of metallic lithium changed as the capacity of LFP changed. However, the capacity of the cell with lithium utilization of 30% was restored when the lithium metal negative electrode was replaced by a new one after capacity decay (Fig. S2), clearly indicating that the cause of decay is the metallic lithium negative electrode. Results and Discussion Since cycle performance markedly changed depending on the utilization of lithium, the morphology of lithium after the charge/ discharge test was observed by FE-SEM, including cross-sectional observations, as shown in Fig. 2. Cells with lithium utilization of 5 and 30% (Cell Nos. 7 and 10, respectively) were disassembled after 5 and 30 cycles, respectively, and the lithium metal negative electrode was then examined. As shown in Figs. 2(a)-2(d), the surface of lithium was inhomogeneous in both cells after 5 cycles; however, a marked difference was confirmed in cross-sectional images. A thin mossy deposit was detected on the original metallic lithium when lithium utilization was 5%. Contrary thicker deposit (³40 µm thickness) was found when lithium utilization was 30%. Formation of this deposit was attributed to the deposition of the decomposition product of the electrolyte at a higher current density and utilization. 11 After 30 cycles, the mossy deposit was thicker in the cell with lithium utilization of 5%, whereas the original thickness of the lithium metal negative electrode was almost unchanged (Figs. 2(e) and 2(f )). In contrast, in the cell with lithium utilization of 30%, the thickness of the mossy deposit increased and partially extended into the original metallic lithium (Figs. 2(g) and 2(h)). The cell exhibited capacity decay after 30 cycles, as shown in Fig. 1(b), which was considered to be due to the consumption of the original metallic lithium during the discharge reaction. In order to develop a lithium metal negative electrode with high utilization, it is important to suppress mossy deposits during charge and the consumption of metallic lithium during discharge. 17,18 Various additives have been proposed to improve the cycle performance of the lithium metal negative electrode, and their chemical structures and amounts have been shown to significantly influence performance. 
19,20 In the present study, typical additives, which are considered to form a film on the negative electrode under a reducing atmosphere, were applied to the electrolyte in the cell with high lithium utilization. Figure 3 shows changes in the discharge capacity and charge/discharge curves of the cell with lithium utilization of 30% when FEC and VC were added to 1.0 mol dm⁻³ LiTFSA in EC : PC (1 : 1 vol%). With both additives, cycle performance depended on the amounts added. Comparisons of the effects of FEC and VC revealed that cycle performance was more favorable when FEC was used as the additive. When VC was added, an increase in overvoltage was confirmed in the charge/discharge curve at 50 cycles, suggesting the formation of a film with higher resistance than that formed with the addition of FEC. 19 VEC, ES, and PS were also examined as additives, and the results obtained indicated that VEC was as effective as VC, while ES and PS did not improve cycle performance (Fig. S3). Figure 4 shows FE-SEM images of the lithium metal negative electrode after charge/discharge cycling using the electrolyte with 5 wt% of FEC. When the electrolyte containing FEC was used, a black deposit was detected on metallic lithium after disassembling the cell. This deposit was hard, brittle, and easily exfoliated, possibly due to the high content of inorganic components derived from the decomposition of FEC. 19,21,22 Figures 4(a) and 4(b) show FE-SEM images of the deposit and the part from which it was exfoliated after 5 charge/discharge cycles, respectively. In cross-sectional images (Fig. 4(c)), the deposit was thinner than that obtained in the electrolyte without the additive, suggesting that the decomposition of the electrolyte was suppressed by the film that formed on lithium after the addition of FEC. After 30 cycles, a significant difference was observed between the presence and absence of additives, with the consumption of metallic lithium being negligible in their presence (Figs. 4(d)-4(f)). These results indicated that the dissolution and deposition of lithium proceeded uniformly due to the film formed by the addition of FEC. As shown in Figs. 4(b) and 4(e), the lithium surface was not heavily roughened at the positions from which brittle deposits were exfoliated. Easily exfoliated and thick deposits did not appear to function as a protective film, the so-called solid electrolyte interphase (SEI). If the deposits contained the electrolyte, the electrode/electrolyte interface should be on the lithium. The buried interface between the electrode and the film is important for understanding charge/discharge behavior. 23,24 The chemical composition of the deposits and the underlying lithium surface will be evaluated in the future by X-ray photoelectron spectroscopy and other methods with changes in the electrolyte and lithium salt. Conclusion The charge/discharge behavior of a lithium metal negative electrode was investigated with a focus on lithium utilization using an LFP positive electrode. When the charge/discharge test was conducted at a high utilization rate, capacity decay was observed after approximately 30 cycles, regardless of whether the thickness of lithium or the capacity of LFP was changed. Cycle performance was improved when additives such as FEC and VC were used in the electrolyte. 
FE-SEM observations of metallic lithium after cycling showed a difference in the thickness of the deposits on top of lithium that depended on whether FEC was used in the electrolyte. Furthermore, the consumption of metallic lithium was suppressed, which was attributed to the effects of the film formed by the addition of FEC. We are currently investigating the chemical compositions of effective film components, but deposits should be considered in the analysis. The electrochemical behavior of a lithium metal negative electrode needs to be considered not only in terms of utilization, but also other parameters, such as the electrolyte, lithium salt, type of separator, cell shape, electrolyte volume, confined pressure, current density, and temperature. Although the present results are from our experimental system, they will provide useful information for future research on the practical use of metallic lithium negative electrodes.
2020-08-20T10:07:46.669Z
2020-09-05T00:00:00.000
{ "year": 2020, "sha1": "91cf009ddea13465b1d1d59879fd343c921351f9", "oa_license": "CCBYNCSA", "oa_url": "https://www.jstage.jst.go.jp/article/electrochemistry/88/5/88_20-00085/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e5d3eb19729ad173181ebab0bb5f8a73d8dc0502", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
86703889
pes2o/s2orc
v3-fos-license
Predicting the Turkish Stock Market BIST 30 Index using Deep Learning The non-linearity and high change rates of stock market index prices make prediction a challenging problem for traders and data scientists. Data modeling and machine learning have been extensively utilized for proposing solutions to this difficult problem. In recent years, deep learning has proved itself in solving such complex problems. In this paper, we tackle the problem of forecasting the Turkish Stock Market BIST 30 index movements and prices. We propose a deep learning model fed with technical indicators and oscillators calculated from historical index price data. Experiments conducted by applying our model on a dataset gathered for a period of 27 months on www.investing.com demonstrate that our solution outperforms other similar proposals and attains good accuracy, achieving 0.0332, 0.109, 0.09, 0.1069 and 0.2581 as mean squared error in predicting BIST 30 index prices for the next five trading days. Based on these results, we argue that using deep neural networks is advisable for stock market index prediction. INTRODUCTION Stock market prediction is a difficult task due to the huge amount of data to be processed, frequent and nonlinear stock price changes, and the diversity of influencing factors, such as national/global economic conditions and news, investors moods etc. In addition, the efficient market hypothesis states that stock movements are in accordance with the random walk model, thus making it highly improbable to predict their movement directions and prices. Some investors use technical indicators and oscillators to build charts and patterns to help them discover price trends, and devise strategies for high-return investments. This method is called technical analysis and its proponents believe that the market discounts everything, prices move in trends and historical trends repeat themselves. Technical analysis or charting indicators focus on historical values and movements of index prices as the entry point of the financial market analysis, and build charts illustrating hidden points that investors can use in their investment strategies. Another camp of investors rely on fundamental indicators rather than technical ones and they are called fundamental analysts. They analyze the volume of shares, financial and political news, investors' moods and other factors that influence the market. The main difference between these two schools of financial market analysis is the period of time that investment strategies consider. Technical analysis focuses on the next short time period whereas fundamental analysis considers longer time periods, i.e. at least one quarter or more. Many traditional machine learning proposals like support vector machines, variant decision trees, k-nearest neighbors and artificial neural networks have been suggested for stock market prediction. These algorithms are powerful in many problems but in such highly volatile and non-linear problems they suffer from stability issues. Deep learning has proved itself a promising solution for such environments and showed good performance (Akita et al., 2016). In this work, we propose a deep learning model trained on the most important technical oscillators of the BIST 30 Index of the Turkish stock market. These indicators are noted by famous technical analysts and www.investing.com, a popular website among traders and investors. 
To the best of our knowledge, this paper is the first in the literature using a deep learning model to predict Turkish stock market index prices. We have conducted our experiments on historical index data obtained from www.investing.com for a 27-month period from 01.01.2016 to 11.04.2018. Our model was able to produce good predictions with low error rates, as discussed in the evaluation section. The rest of the paper is organized as follows. A brief review of related works is given in Section 2. Technical indicators and oscillators used in our proposal have been described and fundamental indicators are briefly mentioned in Section 3. Our proposal and the walkthrough of our method have been explained in detail in Section 4. Section 5 describes our experiments and presents the evaluation results. Finally, conclusions and future work areas are highlighted in Section 6. RELATED WORK Technical analysis and fundamental analysis are the two main schools of thought when it comes to analyzing the financial markets. While technical analysis focuses on historical data of stock prices and volumes, fundamental analysis gives significant weight to investors' sentiments, economic and political conditions and news. Nassirtoussi et al. (2015) proposed an approach to predict intraday forex currency-pair directions by analyzing breaking financial news headlines. Another work done by Shynkevich et al. (2015) analyzed the effects of market-related articles on stock trends and prices. The authors have developed a model to predict these values from industry-specific articles. Oliveir et al. (2017) analyzed the impact of microblogging data related to stock market news on the investors' sentiments. The authors forecast the returns, volatility and trading volume of diverse indices and portfolios from tweet messages. Ni et al. (2015) investigated the effects of investors' sentiments on stock market index prices and directions. According to their findings, the impact of investor sentiment is considerable for up to 2 years. Its effect is asymmetric, that is, it is positive and large for stocks with high returns in the short term, while negative and small in the long term. Leigh et al. (2002) studied the effectiveness of technical analysis approaches using multiple technical indicators and how they are used to achieve high return rates using decision making systems. They emphasized the importance of the "bull flag" price and volume pattern heuristic in getting abnormal results. Later, the indicators noted by these schools are used as inputs or features to prediction systems. Machine learning algorithms are the primary techniques used for predicting stock prices and directions. Gui et al. (2014) proposed an interesting approach through which the prediction is not a specific number but a limit instead. The authors transformed financial time series into fuzzy particle sequences and then used support vector machine to build a regression model on the lower and upper bounds to decrease the estimation error. Dechow et al. (2001) showed how short-sellers benefit from those factors in refining their investment strategies and maximizing their returns. Another work (Lewellen et al., 2010) emphasized the importance of key factors of fundamental analysis and suggested some improvements in empirical tests. Dechow et al. (2010) reviewed the various measures of "earning quality" and how it is related to the company fundamental performance. Qian et al. 
(2007) used the Hurst exponent to select highly predictable period, and later training patterns or indicators are generated by auto-mutual information and false nearest neighbor methods. Trained by an ensemble of inductive machine learning approaches such as artificial neural networks, decision trees and k-nearest neighbors, the model achieved 60-65% of accuracy. Sands et al. (2015) compared different classification proposals: Support vector machine using least squares implementation, artificial neural networks, naïve Bayes classifier and SVM optimized by particle swarm optimization in building an investment portfolio with maximum gain and minimum risk. According to their experiments, SVM optimized by particle swarm optimization is capable of predicting the stock values with high accuracy. Another work (Ince et al., 2017) proposed a hybrid model for forecasting stock market movements. This model is composed of ICA for selecting important features between some technical indicators and then using kernel methods such as SVM, TWSVM, MPM, KFDA and random walk to build a model for predicting stock movements. According to their experiments on Dow-Jones, Nasdaq and S&P500 indices, the models like ICA-SVM, ICA-TWSVM, ICA-MPM and ICA-KFDA have achieved high accuracy. Bastı et al. (2015) addressed the underpricing of Turkish companies in initial public offers traded in Istanbul Stock Exchange. They employed decision tree and support vector machine to investigate the key factors affecting the short-term performance of initial IPOs. Another approach (Chen et al., 2017) proposed a hybrid model composed of feature weighted of both support vector machine and k-nearest neighbors. The authors applied the model on two well-known Chinese stock market indices, Shanghai and Shenzhen stock exchange indices. Teixeira et al. (2010) combined technical analysis and k-nearest neighbors. Qian et al. (2007), Zhang et al. (2009), Moghaddam et al. (2016), and Boyacioglu et al. (2010) have investigated the use of artificial neural networks in stock market prediction. A recent work (Akita et al., 2016) using deep learning for stock market prediction was applied on Tokyo stock exchange market. The authors used paragraph vector to convert newspaper articles into distributed representations and used them with historical prices to predict values close to the actual stock prices. We have noticed that most of the proposals based on technical analysis use their indicators heuristically without any features engineering or what recently technical analysts prefer to use between such indicators. For this reason, we did feature engineering and chose the most important ones noted by highly reputed global investment website. The closest study to our proposal was the one by Akita et al. (2016) due to its use of deep learning techniques but what differentiates our work is applying deep learning for Turkish stock exchange and using technical analysis in feature selection. BACKGROUND: TECHNICAL ANALYSIS AND INDICATORS In this section, we formalize the problem to be addressed. Prediction of stock index prices is a time series problem where each sample or observation contains the price values that an index can take during a trading day such as open, low, high, volume, trading date and closing price. The goal is to predict the price value for the following trading days with low errors. In this paper, we use samples and observations on a daily basis. This could be adapted by using other trading periods like minutes, weeks, months or even years. 
We express this in mathematical terms as follows:

predicted price(t + n) = f(historical samples) = f(open, high, low, close, and volume values of days t, t − 1, …)   (1)

We build a deep learning model using certain technical analysis indicators as features. The following subsection describes technical analysis and the indicators we have used. Technical Analysis Technical analysts believe that the historical prices of stock indices contain very important hidden information and are highly related to current prices. According to them, this information can be uncovered through what they call indicators, oscillators, and charts. So, the prices and directions of stock indices could be predicted by using such indicators. In that case we can rewrite the formula as given in Eq. 2.

predicted price(t + n) = f(technical indicators computed from the historical samples)   (2)

Technical analysis relies on analyzing certain indicators to extract information such as buy/sell signals from historical data and construct high-return investment strategies. There are approximately 150 technical indicators, but we only provide a brief description of the most important ones, which are accepted by the popular investment portal www.investing.com. Relative Strength Index - RSI: RSI is the most important momentum indicator, developed by noted analyst J. Welles Wilder Jr. and explained in (Welles, 1978). It is used to identify overbought and oversold regions of the analyzed index. These regions are highly significant for the technical analyst or the trader when giving buy or sell orders. RSI observes the magnitude of recent gains and losses over a specified time period (14 trading days by default) to measure the speed and change of price movements of an index. RSI is calculated by the following formula:

RSI = 100 − 100 / (1 + RS), where RS is the average gain divided by the average loss over the look-back period.

There are two important RSI levels: (70, 30). When the value of RSI exceeds 70, this is interpreted as a sell signal, as the price has become overvalued. On the other hand, when the RSI value falls under 30, a buy signal is generated. Some investors use an extreme version of the RSI indicator where these two levels are (80, 20). It is important to mention that the time unit considered by the technical indicators in our calculations is one day, but it could be other trading units like minutes, hours, months, or years. Bollinger Bands - BB: BB is a momentum indicator or chart developed in the 1980s by noted trader John Bollinger (2001), through which the price of the index is bracketed by an upper and a lower band along with a 21-day simple moving average (the default time period is 21 trading units). The upper and lower bands lie two standard deviations above and below the middle band. According to Bollinger, when the price exceeds the upper band, it becomes overvalued and there will be a correction, so a sell opportunity is generated. Conversely, when it goes below the lower band, the price is undervalued and should be corrected, so a buy signal is generated. Stochastic Oscillator - STOCH: STOCH is a momentum indicator or oscillator frequently used by market traders; it compares the price of an index to the range of its prices over a certain period of time (the default time period is 14 trading units). The stochastic oscillator is calculated using the following formula:

%K = (C − L14) / (H14 − L14) × 100

where C is the most recent closing price, L14 is the lowest price of the 14 previous trading sessions, H14 is the highest price traded during the same 14-day period, %K is the current value of the oscillator, and %D is the 3-period moving average of %K. Williams %R: Williams %R is a momentum indicator developed by famous technical analyst Larry Williams (1973), and it is the inverse of the Fast Stochastic Oscillator. 
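To make the oscillator definitions above concrete, the short pandas sketch below computes RSI, Bollinger Bands, and the Stochastic Oscillator. This is an illustrative implementation rather than the paper's own code: the column names ("Close", "High", "Low") and the plain rolling-mean smoothing used inside RSI are assumptions.

```python
import pandas as pd

# Illustrative pandas versions of RSI, Bollinger Bands and the Stochastic Oscillator,
# using assumed column names and the default look-back periods mentioned in the text.

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()        # average gain
    loss = (-delta.clip(upper=0)).rolling(period).mean()     # average loss
    rs = gain / loss
    return 100 - 100 / (1 + rs)

def bollinger_bands(close: pd.Series, period: int = 21, k: float = 2.0) -> pd.DataFrame:
    mid = close.rolling(period).mean()                        # middle band (SMA)
    std = close.rolling(period).std()
    return pd.DataFrame({"BB_mid": mid, "BB_upper": mid + k * std, "BB_lower": mid - k * std})

def stochastic(close: pd.Series, high: pd.Series, low: pd.Series,
               period: int = 14, d_period: int = 3) -> pd.DataFrame:
    lowest = low.rolling(period).min()                        # L14
    highest = high.rolling(period).max()                      # H14
    k_line = (close - lowest) / (highest - lowest) * 100      # %K
    return pd.DataFrame({"STOCH_K": k_line, "STOCH_D": k_line.rolling(d_period).mean()})
```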
Williams %R reflects the level of the closing price relative to the highest high for the look-back period. In contrast, the Stochastic Oscillator reflects the level of the closing price relative to the lowest low. Williams %R is calculated by the following formula:

%R = (HH − C) / (HH − LL) × (−100)

where HH is the highest high, C is the closing price, and LL is the lowest low. The time period considered by the formula is 14 trading units. Price Rate of Change - ROC: The price rate of change (ROC) is a momentum indicator that measures the percentage change in price between the current price and the price n periods in the past. It is calculated by using the following formula:

ROC = (C − Cn) / Cn × 100

where C is the most recent closing price and Cn is the closing price n periods ago. Simple Moving Average - SMA: SMA is the simplest momentum indicator used by many traders; it is calculated by adding the closing prices of the index over a number of time periods (the usual time period, as for the other momentum indicators, is 14 trading units) and then dividing this total by the number of considered time periods, as in the following formula:

SMA = (sum of the closing prices over the last n periods) / n

Exponential Moving Average - EMA: An exponential moving average (EMA) is the exponential variation of the standard simple moving average, except that more importance is given to the latest closing prices of the index. This type of moving average reacts faster to recent price changes than a simple moving average and is calculated by using the following formula:

EMA(t) = C(t) × k + EMA(t − 1) × (1 − k), with k = 2 / (n + 1)

where EMA(t) is the current EMA value, EMA(t − 1) is the previous EMA value, and n is the length of the EMA. Commodity Channel Index - CCI: Another momentum indicator, the Commodity Channel Index or CCI, was developed by Donald Lambert; it measures the current price level relative to an average price level over a given period of time (14 trading units). CCI is relatively high when prices are far above their average, and relatively low when prices are far below their average. In this manner, CCI can be used to identify overbought and oversold levels, which are important levels considered by traders when making buy and sell orders. It is calculated by using the following formula:

CCI = (TP − SMA(TP)) / (0.015 × mean deviation of TP), where TP is the typical price (high + low + close) / 3.

On-Balance Volume - OBV: On-balance volume or OBV is a momentum indicator developed by Joseph E. Granville (1976) that considers index volume flow to predict changes in its price. According to him, the price of the index will eventually jump upward when volume increases sharply without a significant change in price, and vice versa. It is computed by using the following formula:

OBV(t) = OBV(t − 1) ± V(t)

where OBV(t) is the current on-balance volume and V(t) is the current volume, added when the index closes higher than the previous close and subtracted when it closes lower. Moving Average Convergence Divergence - MACD: Moving average convergence divergence or MACD is a trend-following momentum indicator that shows the relationship between two moving averages of prices. The MACD is calculated by subtracting the 26-day exponential moving average (EMA) from the 12-day EMA. A nine-day EMA of the MACD, called the "signal line", is then plotted on top of the MACD, functioning as a trigger for buy and sell signals. MACD is calculated by the following formula:

MACD = EMA(12) − EMA(26)

STOCHRSI: Some momentum indicators perform well when they are accompanied by other technical or momentum indicators. STOCHRSI is one such indicator used in technical analysis; it ranges between zero and one. It is created by applying the Stochastic Oscillator formula to a set of Relative Strength Index (RSI) values rather than to standard price data. 
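The remaining oscillators from this section can be sketched in the same style. Again, this is only an illustration under assumed "Close"/"High"/"Low"/"Volume" column names; the rsi() helper from the previous sketch is reused for STOCHRSI, and the default look-back periods follow the text.

```python
import numpy as np
import pandas as pd

# Illustrative implementations of Williams %R, ROC, SMA, EMA, CCI, OBV, MACD and STOCHRSI.
# Column names and the reuse of rsi() from the previous sketch are assumptions.

def williams_r(close, high, low, period: int = 14) -> pd.Series:
    hh, ll = high.rolling(period).max(), low.rolling(period).min()
    return (hh - close) / (hh - ll) * -100

def roc(close, period: int = 14) -> pd.Series:
    return (close - close.shift(period)) / close.shift(period) * 100

def sma(close, period: int = 14) -> pd.Series:
    return close.rolling(period).mean()

def ema(close, period: int = 14) -> pd.Series:
    return close.ewm(span=period, adjust=False).mean()

def cci(close, high, low, period: int = 14) -> pd.Series:
    tp = (high + low + close) / 3                              # typical price
    mean_dev = tp.rolling(period).apply(lambda x: np.mean(np.abs(x - x.mean())), raw=False)
    return (tp - tp.rolling(period).mean()) / (0.015 * mean_dev)

def obv(close, volume) -> pd.Series:
    direction = np.sign(close.diff()).fillna(0)                # +1 up day, -1 down day
    return (direction * volume).cumsum()

def macd(close, fast: int = 12, slow: int = 26, signal: int = 9) -> pd.DataFrame:
    macd_line = ema(close, fast) - ema(close, slow)
    return pd.DataFrame({"MACD": macd_line, "MACD_signal": ema(macd_line, signal)})

def stoch_rsi(close, period: int = 14) -> pd.Series:
    r = rsi(close, period)                                     # rsi() from the previous sketch
    lowest, highest = r.rolling(period).min(), r.rolling(period).max()
    return (r - lowest) / (highest - lowest)
```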
Using RSI values within the stochastic formula gives traders an idea of whether the current RSI value is overbought or oversold -a measure that becomes specifically useful when the RSI value is confined between its signal levels of 20 and 80. Fundamental Analysis: Fundamental analysts believe in fundamental factors rather than technical ones. They care about the intrinsic values of stocks and take into account everything related to the stocks such as earnings, market shares, financial conditions, news and investors' sentiments. Contrary to technical analysts, fundamental analysts perform their analysis and calculations for a sufficiently long time period and try to minimize their transactions. Eq. 13 gives the estimation from fundamental analysts' point of view. Proposed Model Historical data of the stock index are quite simple and contain only few values that it can take during a trading unit (hour, day, month or year) such as open, high, low, volume and closing price. Our goal is to predict the closing price from these values, which is a challenging task due to the volatility of these prices. We need to calculate technical indicators from such values as these indicators hold valuable hidden information about prices. There are approximately 150 indicators, below we list the most important ones as noted by technical analysts and by a popular investment portal. Selecting and calculating these oscillators is the first step of our model which consists of two parts: Calculation of Technical Indicators: This step calculates technical indicators or oscillators from historical data of BIST 30 index price and volume. While some of these indicators depend only on the closing price of the index, others depend on the low and high as well as the closing price. For instance, one of the oscillators called On-Balance Volume (OBV) depends on the volume value of the index. The calculation of these oscillators is based on the default time period of each one. The output of this calculation will be the input of our deep NN. In other words, they will be accepted like its features. Deep Neural Network (Deep Learning): Artificial Neural Network or ANN is one of the most important research areas in artificial intelligence and machine learning. The main idea behind ANN is inspired by the natural neural network of the human nervous system. Neurons are imitated with computing units connected with each other in the form of a network through axons and dendrites. Each neuron or node receives inputs from other nodes through its dendrites, performs an operation on them and sends the result of that operation to other neurons. The inputs to the ANN (also known as features) are technical indicators in our case. A perceptron is a binary classifier that uses a linear prediction function. Most ANNs are networks of perceptrons, also known as feed forward neural networks, organized into fully connected layers. While a perceptron is suitable when trying to build a linear decision boundary, simple ANN becomes unfeasible in the case of building a regression model with many features, hence deep neural networks are needed. Deep NNs are simply ANNs with more hidden layers and neurons in each of them as illustrated in Fig. 1. In the following subsection, we explain the steps taken in our work to build a regression model to predict the future prices of the BIST 30 index. Walkthrough Here, we provide a step-by-step explanation of the phases of our method. 
Gathering data: As our problem is analyzing the Turkish stock exchange market in order to predict future price movements of BIST 30, we needed to gather a significant amount of financial data. One of the most reliable websites followed by many traders is www.investing.com. We obtained our dataset from that website for a period of more than two years. The prices of stock indices are generally given in csv format containing the closing, opening, low, and high prices as well as the volume of the index. Table 1 shows a sample portion of the dataset.

Table 1. A sample portion of the dataset.
Date     | Close     | Open      | High      | Low       | Vol.
         | 86,147.25 | 85,981.14 | 86,940.28 | 84,502.58 | 488.09
06.01.16 | 86,862.50 | 86,147.25 | 86,970.83 | 84,904.24 | 596.21
07.01.16 | 87,417.44 | 86,862.50 | 87,577.47 | 84,994.29 | 705.17
08.01.16 | 86,234.62 | 87,417.44 | 88,226.75 | 85,932.68 | 565.21
11.01.16 | 86,825.17 | 85,933.88 | 87,568.90 | 85,517.25 | 500.37
12.01.16 | 87,724.37 | 86,783.72 | 88,216.83 | 86,094.12 | 634.06

Data preparation: Cleaning and processing data is necessary in most cases before applying machine learning algorithms. Datasets related to financial markets suffer from several specific problems:
• Some companies may no longer exist.
• The market is closed during national holidays and on weekends.
• Due to technical problems, prices sometimes contain erroneous negative values.
These issues should be addressed when constructing a machine learning model. Two important preprocessing issues are normalization and finding correlated features. It is strongly advised to scale the feature values to the range [0, 1]. Fig. 2 and Fig. 3 show the histograms of the features before and after normalization. We see that all of the features are normalized and their values are in the range [0, 1], except the price, since it is not a feature but the target value we are going to predict. Another issue is finding out if, and which, features are correlated with each other. Such features should be eliminated, as correlated features cause an ANN to overfit and have a negative impact on its performance. According to the technical indicator formulas, we expect high correlation between SMA and EMA, as they both represent moving averages. If so, one of them should be eliminated, as it adds no information. Fig. 4 confirms our intuition that these two features are highly correlated. It is important to note that some indicators have more than one output; when these outputs are correlated with each other, we keep them as they are. Fig. 5 shows the pairwise feature correlations. We see that the OBV indicator is highly correlated with the Bollinger Bands (BB) indicator, so we drop it from our calculations. As a result, we have dropped two indicators (SMA and OBV) and used the remaining nine indicators from Section 3.1 in our model. Choosing a model: Selecting a suitable model is critical for the performance of machine learning. In this work, we try to predict stock index values, so we focus on regression. For the reasons outlined in the introduction, we pick a deep neural network trained on the technical oscillators obtained from the technical indicator calculation step. Training: Training a machine learning model means adjusting the model parameters to reduce the loss and achieve the desired prediction. The parameters in our case are neuron weights and biases. We train our deep NN using the Keras API over the TensorFlow framework developed by Google. We have used the Keras sequential model API with the ReLU (Rectified Linear Unit) activation function for the hidden layers. As our target model is a regression model, there is no need for an activation function in the output layer. 
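A minimal preprocessing sketch of the steps just described (min-max scaling of the features to [0, 1], inspecting pairwise correlations, and dropping the redundant SMA and OBV columns) might look as follows; the file name and column layout are hypothetical placeholders, not the paper's actual data files.

```python
import pandas as pd

# Hypothetical preprocessing sketch: scale feature columns to [0, 1], inspect
# pairwise correlations, and drop the redundant indicators identified in the text.

df = pd.read_csv("bist30_with_indicators.csv")              # placeholder path
feature_cols = [c for c in df.columns if c not in ("Date", "Close")]

# Min-max normalization of the features; the target price column is left unscaled.
df[feature_cols] = (df[feature_cols] - df[feature_cols].min()) / (
    df[feature_cols].max() - df[feature_cols].min()
)

# Pairwise correlation of the features; highly correlated indicator pairs
# (SMA vs. EMA, OBV vs. Bollinger Bands in the paper) are candidates for removal.
print(df[feature_cols].corr().round(2))

df = df.drop(columns=["SMA", "OBV"], errors="ignore")       # drop the redundant indicators
```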
The last point that we should mention is the optimizer algorithm which is responsible for adjusting weights and bias. We have used Adam adaptive moment estimation optimizer. Evaluation: A common split ratio between the training set and the test set is 80-20, and we use this ratio. We could not use cross validation when splitting the test set from the training set because our problem is time-series prediction and in such a situation the algorithm learns on the first portion of the dataset (training set) and is then evaluated on the test set (the last portion of the data). In other words, the algorithm could not be trained on recent data and tested on older data. So, our model is trained on the first 80% of our dataset (BIST 30's historical data for 27 months of trading) and it is tested over the last 20% after shifting the y values according to the target trading day. Hyperparameter tuning: Typically, it is hard to generate a robust and highly accurate model on the first run of an algorithm. Thus, some parameters of the model should be readjusted to decrease the loss of the regression model. Possible changes include increasing or decreasing the number of hidden layers, number of the neurons in such layers, activation function and optimizer algorithm used for training the network. We have achieved the desired performance with 7 hidden layers and 2 dropout layers, Relu as an activation function and Adam as an optimizer. Fig. 6. shows our network where the input layer has 15 neurons (technical indicators after dropping SMA and OBV) and the output layer has only one neuron as we try to predict the index value (one value). Additionally, there are 7 hidden layers with 512, 256, 128, 64, 32, 16, and 8 neurons. There are two dropout layers after the first and the second hidden layer with 30% and 25% as dropout rates respectively. Prediction: After adjusting the parameters which helped to obtain an acceptable model, this model applied over the test set data to make predictions and evaluate the performance using various metrics. EXPERIMENTAL RESULTS We have conducted our experiments on a dataset gathered from 01.01.2016 to 11.04.2018 on www.investing.com. Each observation or row contains the trading date, closing, opening, low and high price values as well as the volume and change percentage with respect to the previous trading day. After preprocessing our data and clearing out negative and null values, we calculate the technical indicators or oscillators to be used as features in our model. We split the dataset into training set X and test set y and train the deep neural network. We use 80/20 as the training/test split ratio where the first 80% of data (BIST's historical data for 27 months of trading) is used as training set and the last 20% of data is used as test set. The y values in each training and test portion are shifted according to the trading day. For example, if we want to predict the index value for the next trading day we shift the y values with one and for second trading day with two and so on. One important point we should mention here is that as our problem is a time-series problem, in order to predict the price value after one or two trading days we should shift the target column as much as needed. For example, to predict the index closing price for the next day, we should shift y by one row, and by two rows in the case of predicting the price for the next two days. This mechanism is known as window mechanism. 
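Based on the architecture and training choices described above (15 input features, seven ReLU hidden layers of 512-256-128-64-32-16-8 neurons, dropout rates of 30% and 25% after the first two hidden layers, a single linear output neuron, the Adam optimizer, and a chronological 80/20 split with the target shifted by the prediction horizon), a Keras sketch could look like the following. The data-handling helper is an assumption for illustration, not the paper's code.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of the described network: 15 inputs, 7 hidden ReLU layers, two dropout
# layers, one linear output neuron, Adam optimizer and MSE loss.

def build_model(n_features: int = 15) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.30),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.25),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(16, activation="relu"),
        layers.Dense(8, activation="relu"),
        layers.Dense(1),                      # regression output, no activation
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Chronological 80/20 split with the target shifted by the prediction horizon
# (the "window mechanism" described in the text).
def make_split(features: np.ndarray, price: np.ndarray, horizon: int = 1, ratio: float = 0.8):
    X, y = features[:-horizon], price[horizon:]   # predict the price `horizon` days ahead
    cut = int(len(X) * ratio)
    return X[:cut], y[:cut], X[cut:], y[cut:]
```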
As our problem is a regression problem, we use mean squared error (MSE), R2 score, mean absolute error (MAE) and mean absolute percentage error (MAPE) metrics to evaluate the performance of our model and compare it with other methods in the literature (Patel et al., 2015), (Sakarya et al., 2015). We have used multiple performance metrics as each metric yields some valuable information not supplied by the others. For example, sometimes the MSE is very low but the R2 score is negative, which means that the model is arbitrary and did not train well. Generally, the metrics except R2 are considered better when close to zero, whereas the best value for R2 is 1. Fig. 7. shows the loss achieved by our model. As the loss trends towards and stays close to 0, this means that our model is trained well. Table 4 gives the performance metrics achieved by our deep learning model for the first five trading days and compare them with two other techniques: SVR (support vector regression) and regular ANN (artificial neural network). Our deep learning model clearly outperforms ANN for the first five trading days and SVR for the first four trading days, whereas SVR gives better results for the fifth trading day. Also, our deep model outperforms the proposals presented by Patel et al. (2015) and by Sakarya et al. (2015) as illustrated in Table 2 and Table 3 using the metrics reported in those works. Fig. 8. plots predicted closing prices vs. real closing prices for the five next trading days. We observe that the predicted prices closely follow the actual trends. CONCLUSIONS AND FUTURE WORK The non-linearity and high volatility of stock market index prices make it challenging to forecast these prices. Successful prediction of stock market index values would immensely help investors devise a high-return investment strategy. Generally, stock market prediction can be categorized into two camps in terms of the features used to build prediction models: Technical analysis-based proposals, fundamental analysis-based proposals. We addressed the BIST 30 index prediction problem using deep learning where features are selected from common important technical indicators. Using data from 01.01.2016 to 11.04.2018, we trained and tested our model to show that our model outperforms other techniques like ANN and SVR as well as comparable proposals in the literature (Patel et al., 2015, Sakarya et al., 2015. Therefore, we conclude that deep learning in this context has proved itself as a promising solution for such a complex task. Stock market index prediction can be divided into two main broad categories in terms the output of predictions: Stock index price prediction (regression model, which is what we have focused on) and stock index direction prediction (classification model) which can be either up or down. The latter is important for building investment strategies containing more than one index. In future work, we plan to predict the index direction using deep learning with the same indicators. Another future work area is combining fundamental and technical indicators and using them together as features of the deep neural network. Also, another potential work could be adding breaking news to features sets to make features more complete and improve learning performance. Finally, all proposals are currently applied on offline datasets, and it would be useful to extend the model to handle live data as well.
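As a closing illustration of the evaluation protocol used in this section, the four reported regression metrics can be computed as below; the arrays shown are dummy placeholders, and real values would come from the trained model's predictions on the held-out test set.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Sketch of the evaluation step: MSE, MAE, MAPE and R2 from actual vs. predicted prices.

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return {
        "MSE": mean_squared_error(y_true, y_pred),
        "MAE": mean_absolute_error(y_true, y_pred),
        "MAPE (%)": mape,
        "R2": r2_score(y_true, y_pred),
    }

# Dummy example only; real predictions come from model.predict(X_test).
print(evaluate(np.array([1.00, 1.02, 0.98]), np.array([0.99, 1.03, 0.97])))
```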
2019-03-28T13:14:25.870Z
2019-01-31T00:00:00.000
{ "year": 2019, "sha1": "231991d2996ada93f8e752e0308661ced781820e", "oa_license": null, "oa_url": "https://dergipark.org.tr/en/download/article-file/650427", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "231991d2996ada93f8e752e0308661ced781820e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
3851845
pes2o/s2orc
v3-fos-license
Bacterial Species and Antibiotic Sensitivity in Korean Patients Diagnosed with Acute Otitis Media and Otitis Media with Effusion Changes over time in pathogens and their antibiotic sensitivity resulting from the recent overuse and misuse of antibiotics in otitis media (OM) have complicated treatment. This study evaluated changes over 5 years in principal pathogens and their antibiotic sensitivity in patients in Korea diagnosed with acute OM (AOM) and OM with effusion (OME). The study population consisted of 683 patients who visited the outpatient department of otorhinolaryngology in 7 tertiary hospitals in Korea between January 2010 and May 2015 and were diagnosed with acute AOM or OME. Aural discharge or middle ear fluid were collected from patients in the operating room or outpatient department and subjected to tests of bacterial identification and antibiotic sensitivity. The overall bacteria detection rate of AOM was 62.3% and OME was 40.9%. The most frequently isolated Gram-positive bacterial species was coagulase negative Staphylococcus aureus (CNS) followed by methicillin-susceptible S. aureus (MSSA), methicillin-resistant S. aureus (MRSA), and Streptococcus pneumonia (SP), whereas the most frequently isolated Gram-negative bacterium was Pseudomonas aeruginosa (PA). Regardless of OM subtype, ≥ 80% of CNS and MRSA strains were resistant to penicillin (PC) and tetracycline (TC); isolated MRSA strains showed low sensitivity to other antibiotics, with 100% resistant to PC, TC, cefoxitin (CFT), and erythromycin (EM); and isolated PA showed low sensitivity to quinolone antibiotics, including ciprofloxacin (CIP) and levofloxacin (LFX), and to aminoglycosides. Bacterial species and antibiotic sensitivity did not change significantly over 5 years. The rate of detection of MRSA was higher in OME than in previous studies. As bacterial predominance and antibiotic sensitivity could change over time, continuous and periodic surveillance is necessary in guiding appropriate antibacterial therapy. INTRODUCTION Otitis media (OM) is defined as inflammation of the middle ear and mastoid space, regardless of cause or pathogenesis (1). OM has the second highest incidence rate, after upper respiratory tract infection, in patients who visit otorhinolaryngology and pediatrics departments (2). Without appropriate treatment, OM may become chronic, resulting in various complications. Acute otitis media (AOM) and otitis media with effusion (OME) are closely related clinical conditions. AOM represents an acute infective process, whereas OME is characterized by the presence of middle ear effusion in the absence of symptoms and signs of acute inflammation (3). In general, Eustachian tube dysfunction and bacterial infection have been found to be the most frequent causes of OM, making the selection of appropriate antibiotics important in its treatment. The recent overuse and misuse of antibiotics, however, has led to changes in the major pathogens causing OM and in their antibiotic sensitivity. Empirical antibiotic treatment of patients with antibiotic resistant bacteria may result in treatment failure or complications. Spontaneous otorrhea is a frequent complication of AOM and, when it occurs, the use of antibiotics is recom-mended. If OME persists after a 3-month period of watchful waiting, treatment with ventilation tubes may be considered (4). 
The causes of inflammatory response in patients with OME have been difficult to identify, especially because OME is not characterized by symptoms and signs of acute inflammation expected during typical acute bacterial infections (5). Data on the microbiologic characteristics of patients with AOM presenting as spontaneous otorrhea are also limited. Selecting appropriate antibiotics and preventing the development of antibiotic resistant bacteria are major goals in the primary treatment of patients with OME and AOM. Primary pathogens and their antibiotic sensitivities in patients diagnosed with AOM and OME may change over time. This multi-center study therefore evaluated changes over 5 years in principal pathogens and their antibiotic sensitivity in patients in Korea diagnosed with AOM or OME. Subjects The study population consisted of 683 outpatients who visited 7 tertiary hospitals in Korea from January 2010 to May 2015 and were diagnosed with OM, based on the results of medical history taking and physical examination, including otoscopy, tympanometry, and pure tone audiometry. Patients were classified as having AOM or OME based on diagnostic results and clinical findings. AOM was defined as inflammation of the middle ear, in which fluid in the middle ear was accompanied by acute onset of signs or symptoms of ear infection, including a bulging eardrum, usually accompanied by otalgia or a perforated eardrum and often with drainage of purulent otorrhea. OME was defined as fluid in the middle ear without acute signs or symptoms of ear infection. Otoscopy may reveal a translucent eardrum, but more frequently the eardrum is opaque. A translucent eardrum may be accompanied by a fluid-air level. Alternatively, the eardrum may be immobile, either retracted or bulging. Sample collection, bacterial culture tests and antibiotic sensitivity tests An otorrhea sample was collected from each patient with AOM on the first day of their hospital visit. Middle ear fluid was obtained after the bulk of the otorrhea fluid had been removed and the ear canal had been cleansed with a dry cotton swab. Under direct otoscopic visualization, the remaining discharge was collected, using an extra-thin flexible wire swab, from an area near the tympanic membrane or the perforation site of the tympanic membrane. Discharge or middle ear fluid samples were collected from patients with OME during middle ear surgery procedures, including ventilation tube insertion. Each collected sample was added to Stuart transport medium and inoculated into blood agar and thioglycollate liquid medium. All cultures were incubated for at least 24 hours at 35ºC, and resultant bacteria were identified by Gram staining and biochemical tests. Antibiotic sensitivity tests were performed after bacterial identification, following the guidelines of the National Committee for Clinical Laboratory Standards (NCCLS) (6). Ethics statement This study was approved by the Institutional Review Board of Kyung Hee University Hospital (IRB No. KMC IRB 1431-03). Informed consent was exempted by the board. Bacterial detection rate and major isolated strains Culture of ear fluid samples collected from the 683 patients showed the presence of bacteria in samples from 306 patients (44.8%) and fungi in samples from 33 patients (4.8%). In contrast, neither bacteria nor fungi were isolated from the samples of the remaining 344 patients (50.4%) ( Table 1). 
Antibiotic sensitivity tests Antibiotic sensitivity of Gram-positive bacteria Most of the isolated staphylococci were classified as S. aureus (SA), including CNS, MSSA, and MRSA. All MRSA strains isolated from 41 patients were sensitive to VAN and TCP, whereas 90.2% were sensitive to co-trimoxazole. These strains, however, showed low sensitivity to other antibiotics, with 100% being resistant to PC, TC, CFT, and EM. Bacteria isolated from patients with AOM showed particularly low sensitivity to EM, CIP, and RFP. More than 80% of the MSSA strains isolated from 48 patients were sensitive to TCP, VAN, and co-trimoxazole, CIP, LZ, and CFT. About 85%-100% of the CNS strains isolated from 71 patients were sensitive to TCP, VAN, LZ, and co-trimoxazole, similar to findings for MSSA, with about 75% being sensitive to CL. In contrast 25%-60% of the isolated CNS strains were sensitive to other antibiotics, whereas ≥ 80% were resistant to PC and TC. Analysis of CNS strains according to OM subtype showed that sensitivity to most antibiotics was higher in strains isolated from patients with OME than with AOM. All SP strains isolated from 26 patients were sensitive to VAN and TCP, but showed low sensitivity to other antibiotics, in particular being resistant to SPT, CL, PC, and EM (Table 3). Antibiotic sensitivity of Gram-negative bacteria PA strains isolated from 24 patients showed high sensitivity to PITA, IMP, and CAZ as well as relatively high sensitivity to CFP and PIP. These strains, however, showed low sensitivity to quinolone antibiotics, including CIP and LFX, and to aminoglycoside antibiotics. high sensitivity to CIP (80%) and TOB (80%), but low sensitivity to SPT (53.3%) and CL (33.3%) ( Table 4). DISCUSSION AOM is defined as all acute inflammatory states occurring in the middle ear cavity within 3 weeks of symptom onset. AOM that becomes chronic without drum perforation is described as progressing to OME, with ear fullness and hearing loss caused by drum retraction without erythema or otalgia. In most patients, however, inflammatory fluid remains inside the middle ear. The mechanism by which acute infection progresses to chronic inflammation remains unclear (6). As the primary causes of OM are Eustachian tube dysfunction and bacterial infection, many studies have investigated the primary pathogens in and the use of antibiotics to treat patients with OM. Due to the overuse and misuse of antibiotics in the treatment of various infectious diseases and the increasing frequency of antibiotic resistant bacteria, empirical antibiotic therapy may delay appropriate treatment regimens, causing secondary complications. Thus, If OM patients also have concurrent symptoms, particularly otorrhea, the otorrhea samples should be cultured to identify causative bacteria, with appropriate antibiotic therapy based on the results of antibiotic sensitivity testing. Standard bacterial culture and sensitive molecular detection techniques have shown that the healthy middle ear is typically a sterile site (7). Bacterial isolation rates from patients with AOM have been found to range from 50% to 90%, but to be lower (21% to 70%) in patients with OME (8). Overall, we were able to isolate bacteria from 44.8% of the patients with OM. In addition to pathogenic bacteria, the normal flora always present in the external auditory canal (EAC) include Staphylococcus epidermidis, S. auricularis, S. capitis, and Corynebacterium (9,10). 
We found that the isolation rate of normal flora was low, whereas the isolation rates of pathogenic bacteria, including MSSA, MRSA, and PA, were high. In this study, 18 (23.7%) patients with AOM and 54 (23.5%) with OME were positive for CNS. Contamination by bacteria present in the EAC may explain the high rate of detection of CNS in these middle ear effusion samples, but likely had no effect on the results for other strains. Differences in bacterial strains from previous studies may reflect a shift in the bacterial population toward more resistant isolates under antibiotic pressure, induced by the use of antibiotics prescribed at primary and secondary medical institutions (11,12). These changes may also be caused by nosocomial infection by healthcare workers or medical instruments used during surgery and treatment. CNS, Veillonella spp., and SA were found to be the 3 pathogens most frequently isolated from effusion fluid of patients with OME (5). Although long regarded as non-pathogenic commensals, CNS strains were shown to form biofilms, making them the leading cause of biomaterial-related infections (13). CNS have also been implicated in OM, with a recent study finding that they account for 60% of bacteria isolated from OME (14). Children with spontaneous otorrhea differ from those with uncomplicated AOM, in that the former is associated with S. pyogenes, which has shown greater local aggressiveness than other pathogens (15). Culture of middle ear fluid of children with otorrhea showed that 18% were positive for SA, with the presence of this bacterium regarded as the most significant microbiological characteristic of children with otorrhea (16). Despite studies reporting that Moraxella catarrhalis, alone or in combination with other bacteria, was etiologic of AOM in a substantial number of children, this bacterium was cultured from only a small number of otorrhea samples, in agreement with previous findings (15). M. catarrhalis was reported to cause milder episodes of AOM than other etiologic agents, to be associated with significantly lower rates of spontaneous otorrhea at the time of diagnosis of AOM and to not cause severe complications such as mastoiditis. The low incidence in our patients of Moraxella strains may be related to the high sensitivity and decreased resistance of these organisms to most commonly used antibiotics. Fungi are present in nature and are found as normal flora in the oral and nasal cavities. In a previous study the presence of fungal DNA in middle ear effusion was found to be associated with AOM and SOM in 34% of middle ear effusion samples. In our study, fungi were found in 12.3% of AOM patients and 32.1% of SOM patients, and may be thought to have an etiologic role. However, additional research is needed to clarify this issue (17). Previous studies have shown that the major pathogens in patients with AOM and OME were SP, Haemophilus influenza, and M. catarrhalis, in that order. It is unclear why the percentages of samples in this study positive for these bacteria were lower. The prevalence of OME has been found to vary over time, and may be due to patterns of antibiotic use and/or vaccination, particularly following the introduction of vaccines against H. influenza type b and SP. The organisms most frequently causing OM and bacterial resistance have been found to vary considerably over time and geographical region ( Table 5). 
These differences among studies may be due, in part, to differences in inclusion criteria, sample sizes, microbiological methodology, climate, and geographical areas (5,8,16,19,21,22). Increased rates of inoculation with the heptavalent pneumococcal conjugate vaccine (PCV7) have reduced SP-associated morbidity in patients with AOM and OME, resulting in variations in the bacteria causing OM (16,18). We found that the isolation rate of SP was 6%-15%, significantly lower than that of MSSA, which was present in 17.1% of patients with AOM and 16.1% of patients with OME. In addition, the isolated MSSA strains showed ≥ 60% sensitivity to the antibiotics CL, LZ, CIP, TMP/SMX, VAN, and TCP. The isolation of SA and MRSA has recently increased in patients with spontaneously draining AOM (14), increases that may be due, at least in part, to the doubling since the 1990s of the amoxicillin dose administered to children (23). Previous studies have recommended that patients with OM and concurrent otorrhea should be treated with empirical antibiotics, such as EM and amoxicillin, regardless of OM subtype (15), as these antibiotics were effective against SP, H. influenzae, M. catarrhalis, and PA, the main causes of OM. In addition, CIP and augmentin (amoxicillin-clavulanate) have been reported to be effective against various Gram-positive and Gram-negative bacteria that cause AOM (24). We found, however, that MSSA strains recently isolated from Korean patients with OM over the 5-year study period had different antibiotic sensitivity profiles. These findings indicate that the use of EM, amoxicillin, and CIP as primary empirical antibiotics in patients with OM should be reviewed. The isolation rate of MRSA from patients with AOM over the 5-year study period was maintained at approximately 5%-7%. However, the detection rate of MRSA was higher in our patients with OME than in previous studies, suggesting that the chronic use of medications in patients with OME may increase the frequency of MRSA detection. These findings emphasize the importance of refraining from excessive use of antibiotics in treating OME. Treatment of ear infections caused by MRSA is challenging. As MRSA is resistant not only to methicillin but also to other antibiotics, it cannot be effectively treated with conventional antibiotics alone (25). We found that, regardless of OM subtype, 100% of isolated MRSA strains were sensitive to VAN and TCP and 90% were sensitive to TMP/SMX, but < 10% were sensitive to other antibiotics. Antibiotics effective in treating MRSA give rise to more complications than antibiotics effective against MSSA, suggesting that the former may carry a higher risk of morbidities related to these complications (26). Thus, in treating patients with OM, it is important to select antibiotics with sufficient antimicrobial effect. The use of topical anti-infective agents in the treatment of purulent OM is of potential benefit, delivering a high concentration of drug to the site of infection and having a higher safety profile than systemic treatment (27,28). PA is hard to treat, as this species does not require a particular environment or nutrition to grow and is highly resistant to conventional antibiotics (24). In addition, PA strains from different individuals have different antibiotic sensitivity profiles, emphasizing the importance of simultaneous bacterial identification and antibiotic sensitivity testing to identify appropriate antibiotics.
We found that ≥ 70% of isolated PA strains were sensitive to CFT, AK, AZM, and CAZ, but that these strains were resistant to GM, TOB, and quinolone antibiotics such as CIP and LFX. Thus, empirical antibiotics conventionally used to treat otorrhea are unlikely to achieve appropriate treatment outcomes in patients thought to have PA-caused otorrhea. One limitation of this study was its reliance only on culture generated data. More specific techniques, such as PCR, may have resulted in a much higher identification rate of the bacteria associated with middle ear effusion, and provided a more accurate representation of the development of OM. In conclusion, assessments of patients with AOM and OME showed that the most frequently isolated Gram-positive bacteria were CNS, MSSA and MRSA, whereas the most frequently isolated Gram-negative bacterium was PA. Analysis of changes in bacterial isolation rate by OM subtype and antibiotic sensitivity over the 5-year study period showed little change in the bacteria responsible for each OM subtype compared with earlier years. However, we found that the detection rate of MRSA in OME had increased. Alternative treatments, including topical procedures, should be applied before antibiotic use. The use of systemic antibiotics should be guided by culture and sensitivity tests.
Oncolytic adenovirus-expressed RNA interference of O6-methylguanine DNA methyltransferase activity may enhance the antitumor effects of temozolomide Temozolomide (TMZ) is an example of an alkylating agent, which are known to be effective anticancer drugs for the treatment of various solid tumors, including glioma and melanoma. TMZ acts predominantly through the mutagenic product O6-methylguanine, a cytotoxic DNA lesion. The DNA repair enzyme, O6-methylguanine DNA methyltransferase (MGMT), which functions in the resistance of cancers to TMZ, can repair this damage. RNA interference (RNAi) has been previously shown to be a potent tool for the knockdown of genes, and has potential for use in cancer treatment. Oncolytic adenoviruses not only have the ability to destroy cancer cells, but may also be possible vectors for the expression of therapeutic genes. We therefore hypothesized that the oncolytic virus-mediated RNAi of MGMT activity may enhance the antitumor effect of TMZ and provide a promising method for cancer therapy. Perspective As a relatively recently identified alkylating (methylating) agent, temozolomide (TMZ) has become a focus of attention, most notably in malignant glioma and melanoma treatment (1,2). Resistance to TMZ occurs following prolonged treatment and therefore poses a major therapeutic challenge. A key mechanism of the resistance to TMZ is the overexpression of O 6 -methylguanine-DNA methyl transferase (MGMT) (3). MGMT repairs the TMZ-induced DNA lesion, O 6 MeG, by removing the methyl group from guanine to a cysteine residue (4). Suppressing MGMT activity, therefore, could enhance the cytotoxicity of TMZ against melanoma and glioblastoma multiforme (4). In previous years, we have focused our research on oncolytic virotherapy. Oncolytic viruses exhibit selective replication and lysis in tumor cells, while also amplifying the expression and functions of therapeutic gene in the tumor microenvironment (5). Two main strategies are used for oncolytic adenovirus generation. One strategy is the deletion of the viral element that is required for replication of the virus in normal cells, but is dispensable in tumor cells, such as ONYX-015 or ZD55 with E1B-55K gene deletion (6,7). The other strategy is the use of a tumor-specific promoter to drive the gene that is required for viral replication (8). In clinical trials, the E1B 55-kDa-deleted oncolytic virus, ONYX-015, or the ONYX-015 derivative, H101, have exhibited encouraging anticancer activity when combined with chemotherapy (9). RNA interference (RNAi) technology is able to downregulate targeted genes and has been evaluated as a potential therapeutic strategy in human cancer therapy (10). The knockdown of DNA repair genes by small interfering RNA (siRNA) and virally delivered short hairpin RNA (shRNA), can sensitize various cancer cells to chemotherapeutic agents in vitro (11). A previous study has shown that the use of siRNA to transiently transfect nasopharyngeal carcinoma cells and glioma cells results in the inhibition of MGMT gene expression and increased sensitivity to bis-chloroethylnitrosourea (12). Similarly, a study by Kato et al (13) revealed that the transduction of TMZ-resistant glioma cells with a LipoTrust™ liposome, which contains siRNA to inhibit MGMT gene expression, enhanced the sensitivity of the glioma cells to TMZ. Zheng et al (14,15) focused on the production of several shRNA constructs using an oncolytic virus for delivery. 
Examples of these constructs included siRNAs against Ki67 and hTERT, which were observed to act as antiproliferative and apoptotic inducers in cancer cells. shRNA delivery via armed oncolytic viruses has potential for enhancing antitumor efficacy as a consequence of synergism between viral replication and oncolysis and shRNA antitumor responses (11). When conveying shRNA, oncolytic viruses are expected to effect a marked reduction in the tumor MGMT level, which should result in an increase in the cytotoxicity of TMZ (Fig. 1). We hypothesize that the effects of the oncolytic virus-mediated RNAi of MGMT activity may enhance the cytotoxicity of TMZ in tumors for the following reasons: Firstly, the use of armed oncolytic viruses to deliver shRNA combines the advantages of gene therapy and virotherapy. The inserted shRNA can target the DNA repair protein, MGMT, in tumor cells and multiply by several 100-to several 1,000-fold in parallel with viral replication. The oncolytic adenovirus-armed shRNA targeting MGMT also offers the advantage of enhancing shRNA-mediated antitumor responses through its intrinsic oncolytic activity (10). Secondly, as a delivery agent that couples shRNA expression with viral replication, oncolytic adenoviruses can minimize the effects of off-target activity in normal cells, and facilitate, sustain and regenerate shRNA expression within the tumor microenvironment (15). Thirdly, as oncolytic adenovirus vectors and chemotherapeutic agents act by different mechanisms, there is a synergistic or additive effect rather than cross-resistance on the death of tumor cells (5). The combination of these advantages and possibilities suggest that using oncolytic adenoviruses to deliver therapeutic shRNA targeting MGMT protein may be a powerful technique for overcoming resistance to TMZ in human cancers. This may result in a significantly enhanced antitumor outcome through MGMT-knockdown and viral oncolysis. Figure 1. Schematic representation of MGMT downregulation by oncolytic adenovirus-armed shRNA to overcome temozolomide resistance in cancer cells. Following oncolytic adenovirus infection and replication, the inserted shRNA can target the DNA repair protein, MGMT, in tumor cells and multiply from several 100-fold to several 1,000-fold, in parallel with viral replication. The oncolytic adenovirus-armed shRNA targeting MGMT offers the advantage of an enhanced shRNA-mediated antitumor response through its intrinsic oncolytic activity. MGMT, O 6 -methylguanine DNA methyltransferase; shRNA, short hairpin RNA.
Gene expression dynamics in input-responsive engineered living materials programmed for bioproduction Engineered living materials (ELMs) fabricated by encapsulating microbes in hydrogels have great potential as bioreactors for sustained bioproduction. While long-term metabolic activity has been demonstrated in these systems, the capacity and dynamics of gene expression over time is not well understood. Thus, we investigate the long-term gene expression dynamics in microbial ELMs constructed using different microbes and hydrogel matrices. Through direct gene expression measurements of engineered E. coli in F127-bisurethane methacrylate (F127-BUM) hydrogels, we show that inducible, input-responsive genetic programs in ELMs can be activated multiple times and maintained for multiple weeks. Interestingly, the encapsulated bacteria sustain inducible gene expression almost 10 times longer than free-floating, planktonic cells. These ELMs exhibit dynamic responsiveness to repeated induction cycles, with up to 97% of the initial gene expression capacity retained following a subsequent induction event. We demonstrate multi-week bioproduction cycling by implementing inducible CRISPR transcriptional activation (CRISPRa) programs that regulate the expression of enzymes in a pteridine biosynthesis pathway. ELMs fabricated from engineered S. cerevisiae in bovine serum albumin (BSA) - polyethylene glycol diacrylate (PEGDA) hydrogels were programmed to express two different proteins, each under the control of a different chemical inducer. We observed scheduled bioproduction switching between betaxanthin pigment molecules and proteinase A in S. cerevisiae ELMs over the course of 27 days under continuous cultivation. Overall, these results suggest that the capacity for long-term genetic expression may be a general property of microbial ELMs. This work establishes approaches for implementing dynamic, input-responsive genetic programs to tailor ELM functions for a wide range of advanced applications. Introduction Bioproduction with engineered microbes plays an increasingly important role in the industrial synthesis of drugs, chemicals, food and peptides [1]. Microbes can be encapsulated in 3D biocompatible polymeric scaffolds to create engineered living materials (ELMs) with changes in metabolism [2] that offer significant advantages compared to cells cultured in traditional liquid suspension [2][3][4][5][6]. ELMs fabricated by encapsulating microbes in hydrogels have long-term, sustained metabolic functions that can be harnessed for bioproduction cycles lasting up to one year [3]. Microbe encapsulation in biomaterials has long been known to give rise not only to changes in metabolic capacity, but also in growth rate and colony morphology [7][8][9][10][11]. At present, much less is known about how encapsulation affects the capacity and dynamics of long-term microbial gene expression. Synthetic biologists have developed a number of strategies, including implementing inducible gene expression and dynamic gene regulatory networks, to program bioproduction functions by controlling the timing and expression levels of biosynthetic genes [12][13][14][15][16][17]. Microbial ELMs responsive to external stimuli have been successfully implemented through the use of inducible gene expression systems [6,[18][19][20][21][22][23]. Tunable differences in biosynthetic output have been achieved in stimuli-responsive microbial ELMs by varying the number of light-inducer pulses [6]. 
Input-responsive ELMs have exhibited programmable genetic responses to small-molecule inducers following multiple cycles of preservation and cold-storage [4,18]. Collectively, these results are consistent with the idea that microbes encapsulated in hydrogels can respond to inducers and express heterologous genes under long-term continuous culture conditions. If encapsulated microbes retain the ability to express high levels of heterologous genes, then it may be possible to develop input-responsive ELMs [24] with bioproduction functions that can be cycled ON and OFF, or switched between multiple products. These dynamically-programmable functions could dramatically increase the versatility and reliability of on-demand bioproduction, without the process complexities or cold storage typically needed to achieve such capabilities with free-floating, planktonic cells [3,25]. We sought to investigate how hydrogel encapsulation affects the capacity and dynamics of microbial gene expression. We fabricated two different microbial ELM systems. We encapsulated and cast engineered Escherichia coli in Pluronic F127-bisurethane methacrylate (F127-BUM) hydrogels. E. coli is a Gram-negative, generally-recognized-as-safe (GRAS) bacterium that can utilize glucose efficiently to make a wide array of bioproducts [26]. Additionally, we encapsulated engineered Saccharomyces cerevisiae in bovine serum albumin (BSA)-polyethylene glycol diacrylate (PEGDA) hydrogels via stereolithographic apparatus (SLA) 3D printing [27,28]. The eukaryotic yeast S. cerevisiae has a long history as a bioproduction host [29]. We created an approach to measure gene expression directly in hydrogels and quantified the impact of encapsulation on the timing and levels of heterologous E. coli gene expression. Interestingly, the ability to generate high levels of heterologous gene expression was retained up to ten times longer in encapsulated E. coli than in free-floating, planktonic cells. The successful construction of input-responsive E. coli ELMs incorporating CRISPR gene activation (CRISPRa) programs [14,15] that cycle pteridine biosynthesis [15,30] ON and OFF over the course of multiple weeks illustrates that complex bioproduction functions can be implemented in these systems. We show that input-responsive bioproduct switching can be achieved in dynamically-programmed S. cerevisiae ELMs over the course of 27 days of continuous culture. Taken together, this work demonstrates that encapsulated microbes can express high levels of heterologous genes and suggests that the capacity for long-term gene expression may be a general feature of microbial ELMs.

E. coli plasmid construction and preparation

All plasmids used in this study are listed in Supplementary Table S1. Plasmids were designed using the Benchling sequence designer. PCR fragments for assembly were amplified using Phusion DNA Polymerase (Thermo Fisher Scientific) for cloning using 5X In-Fusion HD kits (Takara Bio). The assembled plasmids were transformed into chemically competent E. coli NEB Turbo cells (New England Biolabs) and grown in Luria-Bertani (LB) medium or on agar plates supplemented with antibiotics (100 μg/mL carbenicillin and/or 25 μg/mL chloramphenicol). Plasmids were purified using QIAprep Spin Miniprep Kits (Qiagen 27104) according to the manufacturer's protocol. Plasmid concentrations were quantified via spectrophotometry (Nanodrop 2000c, Cat. ND-2000C). Constructs were confirmed by Sanger sequencing (Genewiz-Azenta), and sequencing results were analyzed using Benchling.
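As a purely illustrative aside (not part of the published protocol), the UV-absorbance quantification step can be expressed as a few lines of Python; the 50 ng/µL-per-A260-unit conversion and the A260/A280 purity window used below are textbook values rather than parameters reported by the authors, and the readings are hypothetical.

# Illustrative sketch of spectrophotometric dsDNA quantification.
# The conversion factor and purity window are standard assumptions,
# not values taken from this protocol.

def dsdna_ng_per_ul(a260: float, dilution_factor: float = 1.0) -> float:
    """1.0 A260 unit corresponds to roughly 50 ng/uL of double-stranded DNA."""
    return a260 * 50.0 * dilution_factor

def purity_acceptable(a260: float, a280: float) -> bool:
    """A260/A280 ratios of about 1.8-2.0 are commonly taken as clean DNA."""
    return 1.8 <= a260 / a280 <= 2.0

if __name__ == "__main__":
    a260, a280 = 0.42, 0.23          # hypothetical readings
    print(f"~{dsdna_ng_per_ul(a260):.0f} ng/uL, purity OK: {purity_acceptable(a260, a280)}")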
For E. coli ELM experiments, E. coli MG1655 cells were grown overnight at 37 °C with orbital shaking at 220 RPM in MOPS-based EZ-Rich Defined Medium (EZ-RDM, Teknova M2105) with 0.2% glucose, unless otherwise specified, and appropriate antibiotics (100 μg/mL carbenicillin and/or 25 μg/mL chloramphenicol).

F127-BUM synthesis

F127-BUM was synthesized according to the protocol described in Millik et al. [31]. Briefly, Pluronic F127 (60 g, 4.8 mmol) was dried under vacuum at room temperature (RT) in a round bottom flask, and anhydrous CH2Cl2 (550 mL) was charged to the flask under a N2 atmosphere. The mixture was stirred until F127 was completely dissolved before adding dibutyltin dilaurate. 2-Isocyanatoethyl methacrylate (3.5 mL, 24.8 mmol) was diluted in anhydrous CH2Cl2 (50 mL) and added to the reaction mixture dropwise. The reaction was allowed to run for 2 days. F127-BUM was condensed by evaporating CH2Cl2, precipitated in Et2O (2000 mL) overnight and decanted. The precipitate was washed twice in Et2O before drying overnight under vacuum (RT). The final product was stored in the dark at 4 °C before use.

E. coli ELMs fabrication

E. coli-laden F127-BUM hydrogels were prepared using a method previously developed by Johnston & Yuan et al. [3] with the following modifications: a starter culture of E. coli MG1655 cells was grown in 3 mL EZ-RDM with appropriate antibiotics and incubated overnight at 37 °C with 220 RPM orbital shaking. Hydrogels were seeded with 1.5 × 10^8 cells/g of hydrogel. The microbial hydrogel mixture was transferred to a syringe while kept cold (sol state, 5 °C) and then warmed to room temperature (~21 °C) to allow the sol-to-gel transition. The shear-responsive hydrogel was then extruded into a cylindrical silicone mold (diameter = 4 mm, height = 2 mm) placed between two glass slides. The hydrogel was allowed to sit for 15 min in the mold to assure a proper shape conformation. The entire mold assembly was photocured for 3 min on each side under a UV 365 nm lamp (UVP 95-0005-05) at 18.6 mW/cm² (Supplementary Fig. S1). The fully-cured hydrogels were rinsed with 100 μL EZ-RDM and incubated overnight at 37 °C in 2 mL of EZ-RDM with appropriate antibiotics. For hand-extruded (uncast) hydrogels, 100 μL of microbial gel mixture was extruded directly from the syringe onto a glass slide and UV photocured according to the steps described above. The photocured hydrogels were cut into four equal parts of ~25 μL before incubating in EZ-RDM.

Direct-gel fluorescence measurement for cast hydrogels

End-point fluorescence measurements for both hydrogels and the surrounding liquid media were collected using a microplate reader (Biotek Synergy HTX). The following settings were used for the different constitutively-expressed fluorescent proteins: sfGFP (excitation: 485 nm, emission: 528 nm, gain: 35) and mRFP1 (excitation: 540 nm, emission: 600 nm, gain: 35). For directly measuring fluorescence from hydrogels, the hydrogels were rinsed once with 100 μL EZ-RDM and transferred into a 96-well clear flat-bottom plate (Corning 3916) with wells pre-filled with 100 μL of EZ-RDM. To ensure proper fluorescence readings, the cylindrical cast hydrogels were positioned at the center of the well with the flat circular surface facing the bottom of the plate. For all experiments, three to five technical replicates per treatment were used. 100 μL of the surrounding liquid media containing planktonic cells was also measured for fluorescence comparison purposes. Data were plotted using Prism (GraphPad).
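The casting step above fixes both the seeding density (1.5 × 10^8 cells per gram of hydrogel) and the curing conditions (3 min per side at 18.6 mW/cm²), so the per-batch numbers follow from simple arithmetic. The sketch below is only a back-of-the-envelope helper; the 5 g batch mass in the example is hypothetical and not a value from the protocol.

# Back-of-the-envelope helpers for the casting step above. Seeding
# density and curing conditions come from the protocol; the batch size
# in the example is hypothetical.

SEEDING_DENSITY = 1.5e8      # cells per gram of F127-BUM hydrogel
IRRADIANCE = 18.6            # mW/cm^2 at 365 nm
CURE_TIME_S = 3 * 60         # seconds of UV exposure per side

def cells_required(hydrogel_mass_g: float) -> float:
    """Total cells needed to seed a batch of hydrogel at the target density."""
    return SEEDING_DENSITY * hydrogel_mass_g

def uv_dose_mj_per_cm2(irradiance_mw_cm2: float, time_s: float) -> float:
    """Radiant exposure per side: dose = irradiance x time."""
    return irradiance_mw_cm2 * time_s

if __name__ == "__main__":
    print(f"cells for a 5 g batch: {cells_required(5.0):.2e}")
    print(f"UV dose per side: {uv_dose_mj_per_cm2(IRRADIANCE, CURE_TIME_S):.0f} mJ/cm^2")
    # -> 7.50e+08 cells and ~3348 mJ/cm^2 per side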
Fluorescence microscopy of cast hydrogels

Cast F127-BUM hydrogels containing E. coli MG1655 with CRISPRa-directed sfGFP expression (pCK389.306 and pPC003) were continuously cultured for two days before imaging. The hydrogel was removed from the spent media and washed with 100 μL of EZ-RDM media before it was placed in a chamber slide (Ibidi μ-slide 8 well, Cat. 80826) pre-filled with 100 μL of EZ-RDM media. Hydrogel images were taken using an EVOS FL auto microscope without any image manipulation (Thermo Fisher).

Inducible expression in E. coli liquid continuous cultures

A starter culture of E. coli MG1655 containing an aTc-inducible CRISPRa plasmid (pCK389.306) and an sfGFP reporter plasmid (pPC003) was grown overnight at 37 °C with 220 RPM orbital shaking in 3 mL EZ-RDM with 0.2% glucose and appropriate antibiotics. An OD600 measurement was taken the next day using a spectrophotometer to calculate the cell concentrations for controlling the seeding density of 2 mL liquid cultures in 14-mL tubes (Fisher Scientific, Cat. 149569C) at 1.5 × 10^8 cells/mL of media. Anhydrotetracycline (aTc) inducer (final concentration of 200 nM) was added into the media to induce gene expression in planktonic microbes. Fluorescence from 100 μL of culture was measured every 24 h using a plate reader. For continuous culture, the cells were pelleted by centrifugation at 7000 RPM for 5 min. The spent media was removed and the cells were resuspended in fresh EZ-RDM.

Dynamic inducible expression in E. coli ELMs

A starter culture of E. coli MG1655 containing an aTc-inducible CRISPRa plasmid (pCK389.306) and an sfGFP reporter plasmid (pPC003) was grown overnight at 37 °C with 220 RPM orbital shaking in 3 mL of EZ-RDM with 0.2% glucose and antibiotics for selection. Microbial hydrogels were prepared and incubated according to the ELM fabrication protocol described above. aTc inducer (final concentration of 200 nM) was added into the surrounding EZ-RDM to induce gene expression of the hydrogel-encapsulated microbes. Fluorescence of both the hydrogels and the surrounding liquid media was measured every 24 h using the direct-gel measurement method described above. For dynamic induction experiments, the hydrogels were rinsed once with 100 μL EZ-RDM before measurement and serially diluted every 24 h into fresh EZ-RDM either with or without inducers depending on the induction scheme/cycle.

Dynamic inducible CRISPRa program for bioproduction in E. coli ELMs

A starter culture of E. coli MG1655 containing an aTc-inducible CRISPRa plasmid (pCK389.306) and a pteridine biosynthetic pathway plasmid (pCK014) was grown overnight at 37 °C with 220 RPM orbital shaking in 3 mL EZ-RDM with 0.2% glucose and appropriate selection antibiotics (100 μg/mL carbenicillin and/or 25 μg/mL chloramphenicol). Microbial hydrogels were prepared and incubated according to the ELM fabrication protocol above. aTc inducer (final concentration of 200 nM) was added into the surrounding EZ-RDM to induce gene expression in the hydrogel-encapsulated microbes. Pteridine fluorescence for both the hydrogels and the surrounding liquid media was measured every 24 h using the pteridine measurement method described below. For the dynamic induction experiments, the hydrogels in the cultures were rinsed once with 100 μL EZ-RDM and serially diluted every 24 h into fresh EZ-RDM with or without inducers, depending on the induction scheme.
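The seeding steps above turn an overnight OD600 reading into an inoculation volume for a target density of 1.5 × 10^8 cells/mL; a minimal sketch of that arithmetic follows. The conversion factor of ~8 × 10^8 cells/mL per OD600 unit is a common E. coli rule of thumb assumed here for illustration, not a value reported by the authors, and the example OD600 is hypothetical.

# Sketch of the seeding-density calculation described above.
# CELLS_PER_OD is an assumed rule-of-thumb conversion, not a value
# from the paper.

CELLS_PER_OD = 8e8                         # assumed cells/mL per OD600 unit

def inoculum_volume_ml(od600: float,
                       target_density: float = 1.5e8,   # cells/mL (from the protocol)
                       culture_volume_ml: float = 2.0) -> float:
    """Volume of starter culture to add to reach the target seeding density."""
    stock_density = od600 * CELLS_PER_OD   # cells/mL in the overnight culture
    return target_density * culture_volume_ml / stock_density

if __name__ == "__main__":
    vol_ml = inoculum_volume_ml(od600=2.5)               # hypothetical OD600
    print(f"add {vol_ml * 1000:.0f} uL of starter culture per 2 mL tube")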
Pteridine fluorescence measurement

End-point fluorescence measurements for both the hydrogels and the surrounding liquid media were made using a microplate reader equipped with a monochromator (TECAN infinite M1000). The following settings were used to measure the fluorescence of pteridines in hydrogels or from surrounding liquid media: excitation 340 ± 5 nm, emission 440 ± 5 nm, gain of 150. For directly measuring the fluorescence from hydrogels, the hydrogels were transferred into a 96-well clear flat-bottom plate (Corning 3916) with wells pre-filled with 100 μL EZ-RDM. To ensure uniform fluorescence readings, the cylindrical hydrogels were positioned at the center of the well with the flat circular surface facing the bottom of the plate. Two to four technical replicates per treatment were measured. Data were plotted using Prism (GraphPad). Logistic fits for expression rate determination were performed in Python (ver 3.8.5).

LC-MS for pteridine detection

The LC-MS analysis of culture supernatants was adapted from a prior method described in Kiattisewee et al. [15]. Culture supernatants were collected by centrifugation of the spent cultures (without the hydrogels) at 7000 RPM for 5 min. LC-MS analysis was completed using an Agilent UPLC 1290 system equipped with a QTOF-MS 6530 and a ZORBAX RRHT Extend-C18, 80 Å, 2.1 × 50 mm, 1.8 μm column and an electrospray ion source. LC conditions: solvent A, water with 0.1% formic acid; solvent B, methanol with 0.1% formic acid. Gradient: 2 min at 95%:5%:0.2 (A:B:flow rate in mL/min), 4 min ramp from 95%:5%:0.2 to 70%:30%:0.2, 1 min ramp back to 95%:5%:0.2, and 2 min post time. The MS acquisition (positive ion mode) scan covered m/z 80-3000. Analysis of pyruvoyl tetrahydropterin was performed with extracted m/z (M + H) 238.0935. Because an analytical standard for pyruvoyl tetrahydropterin was not commercially available, we report relative production levels as ion counts. We showed that the retention time of pyruvoyl tetrahydropterin (PT, 2.6 min) is significantly different than that of dihydrobiopterin (BH2, 1.2 min), previously reported in Kiattisewee et al., which shared the same chemical formula (C9H13N5O3) and molecular ion (M + H: m/z = 238.0935) (Supplementary Fig. S7a). The retention time of BH2 was determined using a commercially available standard (Cayman Chemical). Data were plotted using Prism (GraphPad).

S. cerevisiae plasmid construction and preparation

All plasmids used in this study are listed in Supplementary Table S1. PCR fragments for assembly were amplified using Q5 High-Fidelity DNA Polymerase (New England BioLabs) for cloning using Q5 High-Fidelity 2X Master Mix (New England BioLabs). The assembled plasmids were transformed into electrocompetent E. coli DH10B cells (Thermo Fisher Scientific) and selected on LB agar plates supplemented with antibiotics (100 μg/mL ampicillin or 50 μg/mL kanamycin). Plasmids were purified using a GeneJET Plasmid Miniprep Kit (Thermo Fisher K0502) according to the manufacturer's protocol. Plasmid concentrations were quantified via spectrophotometry (Nanodrop 2000c, Cat. ND-2000C). Constructs were confirmed by Sanger sequencing, and the sequencing results were analyzed using Benchling. For proteinase A production, the plasmid PlTy3-GAL1-ScPEP4 was digested with XhoI for integration into the delta region and transformed into S. cerevisiae BY4741 cells via the Frozen EZ Yeast Transformation II Kit (Zymo Research).
Successful transformants were selected on YPD agar plates supplemented with Geneticin G418 (200 μg/mL) and confirmed via colony PCR. For the production of betaxanthins, the plasmid p415-UraInt-pCup1-MjDOD-CYP76AD5 was linearized using NotI for integration into the Ura locus and transformed into protease-producing S. cerevisiae BY4741 cells via the Frozen EZ Yeast Transformation II Kit (Zymo Research) to yield strain spk05. Successful transformants were selected on SC agar plates lacking leucine and confirmed via colony PCR as well as by the presence of fluorescence.

Conjugation of PEGDA to BSA

BSA-PEGDA conjugates were formulated according to a protocol described in Sanchez-Rexach et al. and optimized for SLA 3D printing [27,28]. To make 20 g of resin, 10 wt% PEGDA (Mn = 700) (Sigma Aldrich) was dissolved in 11.2 mL of deionized water. Then, 30 wt% BSA in powder form (Nova Biologics) was added slowly to the PEGDA solution to form BSA-PEGDA conjugates via an aza-Michael addition reaction. The resin mixture was stored overnight at 4 °C before use.

S. cerevisiae ELMs fabrication

A starter culture of S. cerevisiae spk05 cells was grown in 4 mL YPD media (10 g/L yeast extract, 20 g/L peptone, 20 g/L glucose) with 50 μg/mL of geneticin (G418) for counterselection against bacteria and incubated overnight at 30 °C with 220 RPM orbital shaking. OD600 measurements were taken the next day using a spectrophotometer to calculate the cell concentrations for controlling seeding density. For 20 g of BSA-PEGDA resin, 1 × 10^9 cells/mL of culture was introduced to the formulation. Then, 0.075 wt% Ru(bpy)3Cl2 (Sigma Aldrich) and 0.24 wt% sodium persulfate (Sigma Aldrich) were sequentially added into the resin-microbial mixture as photoinitiators. ELM constructs were printed using an SLA 3D printer (Formlabs Form 2) in Open Mode using a layer height of 100 μm and photocured using a 405 nm violet laser (250 mW) with 140 μm laser spot size.

Dynamically-inducible bioproduction in S. cerevisiae ELMs

S. cerevisiae-laden BSA-PEGDA hydrogels were prepared according to the ELM fabrication protocol given above. The ELM constructs were placed in 50 mL culture tubes and incubated in 20 mL of YPD media at 30 °C with 220 RPM orbital shaking. The production of betaxanthins was induced by adding 0.5 mM CuSO4·5H2O inducer into the YPD media, while proteinase A production was induced by the addition of 1 wt% galactose to the YPD media. For the dynamic induction experiment, the culture media was replaced every 48 h with fresh YPD with one inducer depending on the induction scheme. Culture supernatants were collected for fluorescence measurements by centrifugation of the spent cultures (without the hydrogels) at 4400 RPM for 10 min.

Measurements of betaxanthins and proteinase A

Fluorescence measurements of culture supernatants were made using a microplate reader (Fluoroskan Ascent FL, Thermo Labsystems). The following settings were used for both the betaxanthins and the proteinase A fluorescence measurements: excitation at 485 nm and emission at 520 nm. For the betaxanthins measurements, 300 μL of supernatant were transferred to a 96-well clear flat-bottom microplate (Greiner Bio-One, UV-Star®). For the proteinase A measurements, an Amplite® Universal Fluorimetric Protease Activity Assay Kit (AAT Bioquest) was used to quantify the amount of functional proteinase A in 100 μL of supernatant. Three replicates were used. Data were plotted using Prism (GraphPad).
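Because the BSA-PEGDA resin formulation above is specified entirely in weight percentages, it scales linearly with batch mass. The short sketch below encodes that recipe for convenience; treating each wt% as grams of component per 100 g of total resin is an assumption about how the percentages were defined, and batch masses other than 20 g are hypothetical.

# Sketch of the BSA-PEGDA resin recipe above, scaled to an arbitrary
# batch mass. The wt% values come from the protocol; interpreting them
# as grams per 100 g of resin is an assumption.

RECIPE_WT_PERCENT = {
    "PEGDA (Mn 700)":     10.0,
    "BSA":                30.0,
    "Ru(bpy)3Cl2":         0.075,
    "sodium persulfate":   0.24,
}

def scale_recipe(batch_mass_g: float) -> dict:
    """Grams of each component for a batch of the given total mass."""
    return {name: batch_mass_g * pct / 100.0 for name, pct in RECIPE_WT_PERCENT.items()}

if __name__ == "__main__":
    for name, grams in scale_recipe(20.0).items():       # 20 g batch, as in the text
        print(f"{name:>20s}: {grams:6.3f} g")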
Direct measurement of gene expression in ELMs Fluorescent reporter protein expression has been an invaluable tool for quantifying the timing and strength of microbial gene expression [32,33]. For microbes encapsulated in hydrogels, fluorescent reporter gene expression can be monitored with microscopy [34] using relatively laborious multi-step sample preparation and image analysis workflows [35]. We developed a method for 'direct gel measurement' that uses a plate reader (Fig. 1a) to quantify gene expression, allowing a much larger number of hydrogel samples to be easily analyzed than with microscopy. Once we standardized the process of hydrogel casting and photocuring, we found that reporter protein expression could be measured simply by removing the hydrogels from the culture conditions, rinsing them once to remove cells that had leaked from the hydrogels [3], and then placing them into microplate wells. Initially, we applied our direct gel measurement workflow to E. coli ELMs fabricated from F127-BUM hydrogels using a hand-extrusion method [3]. F127-BUM is a polymer material with a temperature-dependent sol-gel transition at~17 C, compatible with growth of microbial cells [2][3][4]36]. This hydrogel is shear-thinning, making it easily extrudable [3]. F127-BUM hydrogels are also optically transparent, which permits the use of fluorescent proteins to monitor changes in gene expression of encapsulated microbes ( Fig. 1a and b top inset). We observed poor consistency in measured reporter protein fluorescence levels, which we attributed to the highly irregular shapes of the hand-extruded hydrogels ( Supplementary Fig. S2). By casting the hydrogels in silicone molds, we were able to enforce uniform shapes and dimensions (Fig. 1a). We measured constitutive (always ON) green fluorescent protein (sfGFP) expression across sets of E. coli ELMs cast in a mold with 2 mm thickness and two different diameters, 3 mm and 4 mm (Supplementary Fig. S3 and Fig. 1b). We reasoned that maximizing coverage of the microplate well with the cast hydrogel would give more reliable fluorescence measurements. Consistent with this view, the larger 4 mm diameter hydrogels generated higher sfGFP signals (130 AE 15 versus 76 AE 11 relative fluorescence units, RFUs) and had a coefficient of variation that was 57% smaller than the 3 mm diameter hydrogels (Supplementary Table S2). Increasing the hydrogel diameter to 10 mm appeared to qualitatively reduce sfGFP fluorescence compared to the 4 mm diameter hydrogels, except when there was a corresponding increase in the volume of culture media ( Supplementary Fig. S4). The 4 mm diameter hydrogels have a larger surface area-to-volume ratio than the 10 mm diameter hydrogels, which could improve mass transfer into the smaller diameter ELMs and improve gene expression [37][38][39]. Based on the results in this section, the rest of the E. coli ELMs experiments were conducted with the 4 mm diameter hydrogels. As a final optimization of our direct gel measurement method, we evaluated the relative performance of sfGFP, as above, with a commonly employed red fluorescent protein variant, mRFP1. For this experiment, E. coli constitutively expressing one of the two fluorescent proteins were cast into 4 mm diameter hydrogels and placed into liquid media (Fig. 1b, Supplementary Fig. S3, and Supplementary Table S2). We observed a steady increase in fluorescence for hydrogels seeded with sfGFPexpressing E. coli, saturating over the course of 36 h (Fig. 1b). 
Across all time points, the mean coefficient of variation for sfGFP fluorescence was only 8.9 AE 0.4%, while the mRFP1 counterparts yielded a much larger mean coefficient of variation (35.0 AE 28.7%, Supplementary Table S2). The background fluorescence contributions from blank hydrogels (1.0% of the measured value for hydrogel-encapsulated E. coli expressing sfGFP), hydrogel-encapsulated cells with no reporter (1.6%), and free-floating planktonic cells (1.4%), are all very low (Fig. 1c). Thus, we conclude that the direct gel measurement method provides a reliable approach for quantifying gene expression in hydrogels, and that most of the measured fluorescence is generated by hydrogel-encapsulated cells expressing the reporter protein. Persistence of genetic activity in ELM continuous culture To evaluate the potential for input-responsive ELMs to function in long-term bioproduction applications, we first investigated whether encapsulated microbes can respond to external stimuli under continuous culture conditions. For these experiments, we implemented an inducible system for CRISPR-based transcriptional activation (CRISPRa) [14][15][16]40] (Fig. 2a). In this system, the addition of anhydrous tetracycline (aTc) to the culture media induces heterologous expression of the biochemical machinery for CRISPRa: a nuclease defective Cas9 protein (dCas9), guide RNAs modified to contain a protein recruitment domain (scRNAs), and a SoxS transcriptional activator protein. The CRISPRa machinery generates scRNA-directed gene expression by recruiting the SoxS transcriptional activator to targeted promoters specified by the scRNA spacer sequence. To track the capacity and dynamics of input-responsive gene expression over time, the sfGFP reporter was placed under the control of the inducible CRISPRa machinery (Fig. 2a). We could then compare the relative levels of sfGFP fluorescence when aTc induction was initiated at the beginning of the experiment (i.e., 0-day delay) with sfGFP outputs produced when induction was delayed for multiple days under continuous cultivation. We also fabricated input-responsive E. coli ELMs with inducible CRISPRa-programmed sfGFP expression and monitored the gene expression dynamics over multiple weeks of continuous cultivation using the method described in Section 3.1. Microscopy images confirmed that the inducible CRISPRa program was active in ELMs, as indicated by a large abundance of well-distributed, sfGFP-expressing microcolonies (Fig. 2b). Free-floating, planktonic E. coli induced at the beginning of the experiment without delay reached about 50% of their maximum sfGFP expression after one day, and their highest levels of expression after two days (Fig. 2c). When induction was delayed for one day (1-day delay), no significant difference in expression capacity compared to the 0-day delay samples was observed. However, when the delay period was increased to two days (2-day delay), the expression capacity of the free-floating planktonic cells dropped significantly, and was only 57% of the maximum expression achieved by the 0-day delay samples (Fig. 2c). Across the set of E. coli ELM samples, about 60% of the final endpoint expression level was reached two days after induction, with the highest levels of expression occurring three days after induction (Fig. 2d). E. coli ELMs had no discernable differences in the capacity or dynamics of sfGFP expression following 1 or 2 days of delayed induction compared to the 0day delay sample (Fig. 2d, Supplementary Figs. S5a and S6). 
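The expression-rate comparisons in this and the following paragraphs rely on logistic fits to the daily fluorescence time courses (performed in Python, as noted in the Methods). A minimal sketch of one plausible way to perform such a fit with SciPy is given below; the three-parameter logistic form, the initial guesses, and the example readings are assumptions for illustration, since the authors' exact parameterization is not stated.

# Minimal sketch of a logistic fit to an induction time course.
# The functional form and the data below are assumed, not taken
# from the paper.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, top, k, t_half):
    """Saturating induction curve with the lower asymptote fixed at zero."""
    return top / (1.0 + np.exp(-k * (t - t_half)))

t = np.array([0, 1, 2, 3, 4, 5], dtype=float)            # days after induction
rfu = np.array([2, 15, 60, 110, 128, 131], dtype=float)  # hypothetical readings

popt, _ = curve_fit(logistic, t, rfu, p0=[rfu.max(), 1.0, t.mean()])
top, k, t_half = popt

# The instantaneous expression rate is the derivative of the fit; its
# maximum, reached at t = t_half, equals top * k / 4.
print(f"plateau ~{top:.0f} RFU, max rate ~{top * k / 4:.0f} RFU/day at day {t_half:.1f}")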
Even when the induction delay was extended to 19 days of continuous culture, the ELMs retained the capacity to express 56% of the maximum level achieved with the 0-day delay samples. By Day 2 of aTc induction, the instantaneous rate of sfGFP expression in the 19-day delayed induction samples was reduced 59% compared to the 0-day delay samples (Supplementary Fig. S6). Thus, we do see that long-term continuous cultivation affects the capacity and dynamics of ELM gene expression. Nonetheless, these results show that input-responsive genetic programs in E. coli ELMs can be activated for multiple weeks and that encapsulation greatly increases the capacity for long-term, inducible gene expression compared to cells grown in liquid suspension culture. The mixture is transferred via syringe into a cylindrical silicone mold (diameter ¼ 4 mm and height ¼ 2 mm). The mold is placed in between glass slides and photocured with UV light (365 nm). Cast hydrogels are continuously cultured in EZ-RDM media. Fluorescent reporter gene expression from cells encapsulated in hydrogels is quantified using the direct-gel measurement method: cultured hydrogels are placed at the center of a microplate well pre-filled with fresh media for quantification using a microplate reader (see methods for additional details). b) sfGFP reporter expression from the pBT001-J2:RR2.CM.J23118 plasmid was measured at the time points indicated using the direct-gel measurement method. A gradual increase in ELM turbidity was observed with progressing culturing time (top inset). Bars represent the mean AE standard deviation from n ¼ 5 technical replicates. c) High expression of sfGFP from encapsulated E. coli harboring pWS028.J3.J23106.sfGFP plasmid was measured using the direct-gel measurement method shown in a. Cast hydrogels and surrounding media were independently seeded with or without cells: acellular (light blue), no reporter (beige) or sfGFP-expressing (green) cells. Bars represent the mean AE standard deviation from n ¼ 3 technical replicates. Statistical significance was assessed using a two-tailed unpaired Student's t-test (p > 0.05). (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.) Gene expression capacity retained in multiple-cycles of induction To function as inducible bioproduction platforms, ELMs must be able to generate heterologous gene expression levels sufficient for biosynthesis [4]. In the previous section, we showed that heterologous gene expression can be induced after multiple weeks of continuous culture. In this section, we investigated the expression dynamics of repeated heterologous gene expression induction events. The goal was to quantify the fraction of inducible gene expression capacity retained after multiple-cycles of induction [3,5,6]. These experiments consisted of two Delay periods (expression OFF), where ELMs were cultured in media alone, and two Induction periods (expression ON), where aTc was added to induce sfGFP gene expression from the CRISPRa program ( Fig. 2a and 3a). ELMs were cultured 0, 1 and 2 days without aTc during Delay period 1 (Delay 1). Induction period 1 (Induction 1) was continued until sfGFP gene expression reached a maximum plateau. The ELMs were then cultured in media without aTc until sfGFP fluorescence decreased and plateaued again at a relatively low level (Delay 2). 
Heterologous gene expression was re-initiated during Induction period 2 (Induction 2) by re-adding aTc to the media, and the cells were then cultured until saturating sfGFP fluorescence was obtained a second time. For encapsulated cells induced without an initial delay in the first round (Delay 1 ¼ 0 days), sfGFP expression increased from the initial baseline and reached maximum expression after four days of continuous culture (Fig. 3b, top). sfGFP fluorescence began decreasing as soon as aTc was withdrawn at the start of Delay 2. sfGFP fluorescence continued to decrease exponentially and plateaued after 6 days, reaching a baseline that was about 70% lower than the Induction 1 maximum. Increases in sfGFP expression were observed immediately upon aTc re-addition at the start of Induction 2. Within three days, sfGFP fluorescence reached a second maximum, equivalent to 84% of the Induction 1 maximum. No further increases in sfGFP expression were observed and the experiment was stopped at Day 16. Samples cultured 1 and 2 days without aTc during Delay 1 exhibit expression dynamics very similar to the 0-day Delay 1 samples (Fig. 3b, middle and bottom). The Delay 1 ¼ 1 day and Delay 1 ¼ 2 day samples reach their first maxima within four days after initiating Induction 1. Exponential decreases in sfGFP fluorescence began when aTc was withdrawn at the start of Delay 2, and continued for 6-7 days, until second baselines about 70% lower than the respective Induction 1 maxima were reached. In both cases, inducing gene expression a second time (i.e., Induction 2) yields sfGFP maximum expression levels in 3-4 days that are within 10% of the Induction 1 maxima. Collectively, the data in this section show that encapsulated bacteria can generate high levels of heterologous gene expression following a second induction event and provide an initial characterization of inducible gene expression dynamics in this ELM system. Dynamic inducible CRISPRa programs for bioproduction in ELMs Motivated by our findings that ELMs have long-term, dynamicallyresponsive genetic activity, we examined whether the metabolic activity of encapsulated microbes can be dynamically controlled with inducible CRISPRa programs. We constructed an aTc-inducible CRISPRa program to activate expression of a two-gene metabolic pathway producing a pteridine derivative, pyruvoyl tetrahydropterin (PT) [30]. Pteridines constitute a large class of compounds with therapeutic potential as drugs for metabolic deficiencies, cancer, inflammation, and more [41]. In our system, aTc-induction activates expression of GTPCH from E. coli MG1655 and PTPS from M. alpina, which together catalyze the conversion of guanosine triphosphate (GTP) into PT (Fig. 4a). PT is fluorescent, which allowed us to monitor ELM production of PT by adapting an established pteridine detection assay [15,30], for use with our direct gel measurement method (Fig. 4 d,e and Supplementary Fig. S7b). Although the lack of readily-available commercial standards prevented quantification of PT production, the presence of PT produced by the ELMs and secreted into the media could be confirmed using LC-MS ( Supplementary Fig. S7c). We measured PT bioproduction across two programmedbioproduction cycles that alternated between induced (production ON) and non-induced (production OFF) conditions. In the first instance, induction was initiated at the start of the experiment (Delay 1 ¼ 0 days) and carried out for four days (Fig. 4b). 
PT production increased immediately upon aTc addition, reaching the first maximum production level on Day 3 (Fig. 4d). Turning production OFF by withdrawing aTc during Delay 2 caused an immediate decrease in PT production, as indicated by the immediate reduction in measured fluorescence. Six days after aTc withdrawal, PT fluorescence plateaued at 19% of the maximum fluorescence measured in Induction 1. aTc was added back to the culture media during Induction 2, and PT production increased again, reaching 93% of the first maximum within three days (Fig. 4d). The first and second maxima were indistinguishable from the fluorescence generated by always-ON ELMs constitutively-expressing the PT pathway (þaTc control), showing that dynamically-programmed cycles can access relatively high PT bioproduction levels. We evaluated whether the inducible bioproduction phenotype was impacted by the length of time spent under continuous culture conditions by extending Delay 1 from 0 to 10 or 14 days (Fig. 4c). No significant PT fluorescence was observed until aTc was added to the culture media (Fig. 4e). At saturation, PT production levels for ELMs with Delay 1 ¼ 10 or 14 days were similar to the Delay 1 ¼ 0 days and always-ON (þ aTc control) samples. Overall, the inducible dynamics of the bioproduction cycles resemble the sfGFP gene expression dynamics presented in Section 3.3, indicated by immediate increase in the induced response, reaching a maximum after 3-4 days and returning to the baseline within 5 days upon inducer withdrawal. The results in this section show that E. coli ELMs are genetically and metabolically active under long-term culture conditions, and that the metabolic activities of encapsulated cells can be dynamically controlled with inducible CRISPRa programs. Multi-input genetic programs in S. cerevisiae ELMs We investigated whether S. cerevisiae ELMs exhibit a similar capacity for long-term, dynamically-programmable gene expression as we Fig. 2. Persistence of genetic activity in E. coli ELMs. a) Schematic of an inducible CRISPR activation (CRISPRa) program for sfGFP expression. CRISPRa-directed expression of sfGFP uses nuclease-defective Cas 9 (dCas9) and a scaffold RNA (scRNA) that specifies a target site upstream of the engineered J3 promoter. scRNA (J306) is a modified guide RNA that includes a 3 0 MS2 hairpin to recruit a transcriptional activator (SoxS) fused to the MS2 coat protein (MCP). The CRISPRaprogrammed expression of sfGFP is induced upon the addition of anhydrotetracycline (aTc), which expresses MCP-SoxS from the pTet promoter. b) CRISPRaprogrammed sfGFP expression in hydrogel-encapsulated E. coli. Microscopy images were taken from a two-day old continuously-cultured ELM (scale bar ¼ 300 μm). c) Gene expression dynamics were evaluated in bacterial liquid continuous culture following variable delays of 0, 1, and 2 days (gray) before CRISPRaprogrammed sfGFP expression was initiated and carried out for 5 days (green) by the addition of aTc inducer. Media was continuously removed and replenished every 24 h throughout the entire experiment, keeping the culture volume and composition constant via cell pelleting. sfGFP expression was tracked over time following continuous culture induction delays. Values represent the mean AE standard deviation from n ¼ 3 replicates. d) Gene expression dynamics in ELMs were evaluated following variable delays of 0, 1, 2 and 19 days (gray) of continuous culture before initiation of CRISPRa-programmed sfGFP expression for 5 days (green). 
Media was replenished every 24 h throughout the entire experiment. The impact of induction delay duration on ELM gene expression levels was quantified by tracking sfGFP expression. Values represent the mean AE standard deviation from n ¼ 5 technical replicates, except for 19-day delay samples, where n ¼ 3 technical replicates. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.) observed for E. coli ELMs. The F127-BUM hydrogels employed in the sections above have been used to encapsulate yeast for sustained bioproduction [2][3][4]36]. Here, we used a second type of acrylate-based hydrogel conjugated with bovine serum albumin (BSA) protein [27,28]. Recent applications of BSA-PEGDA have employed vat photopolymerization to fabricate microbial ELMs [28]. This fabrication method yields mechanically robust protein-based hydrogel networks that have good mechanical properties (moduli and toughness) and are also enzymatically degradable [27,28,42]. We fabricated yeast hydrogel-based ELMs by encapsulating S.cerevisiae spk05 cells in BSApolyethylene glycol diacrylate (PEGDA) bioconjugates (Fig. 5a). The ELMs were printed using a stereolithographic apparatus (SLA) 3D printer and photocured as described in the methods [27,28]. The S. cerevisiae cells were programmed to express two different proteins, each under the control of a different inducible promoter. Expression of secreted proteinase A enzyme from the scPEP4 gene was placed under the control of a galactose-inducible promoter. Copper (II)inducible expression of a dioxygenase enzyme (mjDOD gene) combined Fig. 3. Retention of induction response in E. coli ELMs. a) Schematic of dynamic induction cycles. Alternating delay (gray) and induction (green) and cycles were implemented by placing the ELMs into fresh media every 24 h. Varying delay lengths were applied to the first delay cycle (Delay 1). b) Retention of induction response following a subsequent induction cycle. Subplots (top to bottom): Delay 1 ¼ no-delay, 1 day, 2-days, with uninduced, no aTc controls (Supplementary Fig. S5b) replotted on each subplot. Full expression capacity is defined as the maximum expression level achieved during Induction 1. Retained expression capacity and nonbaseline returns for each Delay 1 length were calculated relative to the full expression capacity. Plotted values represent the mean AE standard deviation from n ¼ 5 technical replicates, except for uninduced** samples (**n ¼ 2 technical replicates). Green and gray shades denote induction and delay periods, respectively. Statistical significance in sfGFP fluorescence at two consecutive time points was evaluated using a two-tailed unpaired Student's t-test (p > 0.05). (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.) with constitutively-expressed P450 enzyme (CYP76AD5 gene) catalyzes the production of secreted betaxanthin pigment molecules [3] (Fig. 5b). With this design, inducers can be selectively added or withdrawn from the media to schedule increases or decreases, respectively, in production of the target molecules ( Fig. 5c and d -top). In the experiments that follow, the relative production levels of proteinase A and betaxanthins were monitored over the course of 27 days under continuous cultivation ( Fig. 5c and d -bottom). Specifically, we employed a fluorometric enzyme activity assay to measure relative proteinase A expression levels. 
Betaxanthins are formed from the spontaneous reactions of betalamic acid and heterogeneous primary amines within the cell [43]. We quantified the relative production of the resulting mixtures of betaxanthins by measuring the fluorescence of yellow pigment betaxanthin molecules [44]. We performed two different dynamic induction cycles, each consisting of three phases: Phase 1 ¼ 7 days, Phase 2 ¼ 8 days, and Phase 3 ¼ 12 days. In the first dynamic induction cycle, copper (II) was added in the first phase, withdrawn and replaced with galactose in the second phase, and added again in the third phase while galactose was withdrawn (Fig. 5c). The second dynamic induction cycle was carried out in a similar manner, where galactose was added in the first phase, replaced with copper (II) in the second phase, and then re-added to the third phase where copper (II) was withdrawn (Fig. 5d). In the first induction cycle, galactose-induced proteinase A production in Phase 2 was 1.5-2.1 fold higher than the levels measured in the absence of galactose (Phases 1 and 3) (Fig. 5c, bottom). The production of betaxanthins in Phase 3, induced with the addition of copper (II) on the 15th day of the experiment, reached 84% of the maximum level of induced betaxanthins production observed in Phase 1. In the second induction cycle, relatively large differences in proteinase A production were observed, and 65% of the maximum Phase 1 expression was also achieved by Day 27 in Phase 3 (Fig. 5d, bottom). The transitions between the phases for betaxanthins production were less distinct in the second induction cycle compared to the first induction cycle. Nonetheless, clear variations in the levels of measured betaxanthins could be seen at the endpoints of the phases. Thus, as with the E. coli ELMs, the S. cerevisiae ELMs retain the capacity to express multiple genes in multi-week continuous cultures. Moreover, these results constitute a proof-ofconcept demonstration that multi-input, multi-output gene expression programs can be implemented in ELMs to permit switching between multiple bioproducts. Discussion and conclusions Bioproduction using microbes engineered with synthetic biology provides a promising approach for manufacturing unnatural and natural chemicals [45]. ELMs fabricated by encapsulating microbes in biocompatible hydrogels can produce a wide range of value-added biochemicals. Recent demonstrations have included E. coli ELMs secreting >150 mg/L of L-DOPA after only 22 h and S. cerevisiae ELMs secreting 1.5-1.7 g/L of 2,3-butanediol after only 48 h of culturing [3]. ELMs can also exhibit bioproduction phenotypes that are difficult to achieve with free-floating, planktonic cells. Planktonic cells in continuous culture tend to suffer from genetic instability, which generally prevents their use for long-term bioproduction [46]. Previous work has shown that ELMs have long-term metabolic activity that can be sustained over multiple rounds of culturing and preservation, permitting long-term, high-yield bioproduction [2][3][4]10,11,47]. We found that encapsulated microbes have persistent genetic activity that can be induced multiple weeks longer than in planktonic cells (Figs. 2 and 3). These results have immediate implications for the further development of ELMs as platforms for on-demand bioproduction and for understanding how microbes interact with materials to generate novel phenotypes. We successfully applied our dynamic CRISPR-based expression programs to a two-gene heterologous pathway in E. 
coli ELMs for PT bioproduction, a direct precursor of tetrahydrobiopterin (BH4) for phenylketonuria treatment [15,30,48]. PT production from E. coli input-responsive ELMs can be repeatedly cycled ON/OFF and delayed by multiple weeks of culturing before turning the production ON via induction. We obtained a bioproduction profile similar to that of the fluorescent reporter gene expression, showing both persistence and retention of bioproduction activity upon production cycling and delay. In S. cerevisiae ELMs, induction of multiple genetic programs could be scheduled one at a time, permitting the switch between bioproduction of betaxanthins and proteinase A over multiple weeks of continuous cultures. The successful extension of long-term programmable gene expression from bacterial to yeast ELMs suggests that long-term, multi-week genetic activity may be a general property of microbe-laden ELMs. ELMs deployed as portable bioreactors for chemical bioproduction could provide localized, on-demand access to commodity and specialty chemicals needed for health, biodefense and consumer goods manufacturing [3,4,49]. This capability would be especially impactful when unpredicted market fluctuations, such as during a pandemic, create acute supply shortages, or when supply chain access is limited, such as during military operations or space exploration. With current technologies, product cycling, or the process of stopping and then re-starting production, requires either discarding spent culture and re-inoculating with fresh cells, or large-volume cold storage that can preserve cellular bioproduction capacity [50]. While both approaches are easy to achieve at a laboratory scale, the cost becomes substantial at larger production scales of dozens or even thousands of liters [51]. By controlling induction timing in input-responsive ELMs, cost-effective product cycling could be implemented. The ability to switch between multiple products simply by inducing a separate genetic program within the encapsulated cells adds entirely new flexibility to ELM bioproduction. At present, the exact physiological state of hydrogel-encapsulated microbes remains uncertain [52]. It is well understood that planktonic microbial cells grown in liquid culture typically enter stationary phase after two days and suppress translation for survival, resulting in the decline of protein synthesis levels [53]. By comparison, we found that continuously-cultured ELMs can express heterologous gene products at high levels for at least 2.5 weeks. We observed increasing turbidity of continuously-cultured ELMs over time, indicative of growth and colonization within the hydrogel microstructures (Fig. 1b, top inset; Supplementary Fig. S8) [54].

Fig. 4. Pyruvoyl tetrahydropterin synthesis in E. coli ELMs. a) Schematic of aTc-inducible CRISPRa programs controlling the expression of two enzymes (GTPCH and PTPS) catalyzing the conversion of guanosine triphosphate (GTP) into the pteridine derivative pyruvoyl tetrahydropterin (PT). b) Dynamic induction cycles on E. coli ELM for PT bioproduction. Bioproduction cycles alternating between periods of induction (production ON, green) and delay (production OFF, gray) were implemented under continuous culture conditions for 15 days. Media was replenished every 24 h. c) PT bioproduction capacity retained following a subsequent induction cycle. PT fluorescence from ELM constructs was measured every 24 h, with uninduced (no aTc) and induced (+aTc) controls included in the plot. Full production capacity is defined as the maximum pteridine level achieved in Induction 1. Retained production capacity and non-baseline returns were calculated relative to the full production capacity. Values represent the mean ± standard deviation from n = 3-4 technical replicates. Green and gray shades denote induction and delay periods, respectively. Statistical significance in pteridine fluorescence at two consecutive time points was evaluated using a two-tailed unpaired Student's t-test (p > 0.05). d) PT bioproduction in ELMs was measured following variable delays of 10 or 14 days (gray) of continuous culture before CRISPRa-programmed enzyme expressions were initiated by the addition of aTc inducer and carried out for 5 days (green). e) Persistence of bioproduction in ELMs was characterized as measured fluorescence from pyruvoyl tetrahydropterin every 24 h, with uninduced (no aTc) and induced (+aTc) controls included in the plot. Values represent the mean ± standard deviation from n = 3-4 technical replicates. Statistical significance in pteridine fluorescence at two consecutive time points was evaluated using a two-tailed unpaired Student's t-test (p > 0.05).

At this point, we cannot quantify how much of the observed increases in gene expression arise from increases in the number of living cells within the ELM. Nonetheless, taken together, these results are consistent with the idea that at least some portion of the microbes within these materials are not in a dormant state akin to the stationary or death phase [55]. Hydrogel formulation is known to influence microbial cellular phenotypes in ELM cultures [7,52]. The hydrogel matrix exerts mechanical forces [56] on the encapsulated microbes, resulting in growth rate changes relative to planktonic cells in liquid cultures [57,58]. Variations in oxygen transport into the hydrogel matrix can affect the size of the encapsulated colony [8], the expression and maturation of chromophores [23,59], and microbial metabolism [2]. Compared to planktonic cells, immobilized bacteria trapped in biofilms [60] and in rigid 3D extracellular matrix [61] experience lower fluxes through the tricarboxylic acid (TCA) cycle. Because the TCA cycle is used for energy generation in oxygen-rich conditions, lower TCA cycle fluxes could lead to decreased immobilized cell growth rates [58,62]. Similarly, immobilized yeast are known to have less flux through the glycolytic pathway than planktonic cells and experience higher rates of internal carbohydrate reserve utilization in starvation conditions [63]. Future work to link hydrogel environments and the resulting functional properties could enable rational design of engineered microenvironments to obtain the desired cellular responses for enhanced bioproduction phenotypes. On-going developments continue to expand the synthetic biology toolset for implementing dynamic gene regulation in microbes [64]. CRISPR-Cas transcriptional control has emerged as a modular and programmable route for implementing genetic circuitry in diverse systems, including prokaryotes, cell-free systems, and here in ELMs [14-16]. Functional CRISPRa tools require expression of multiple components (dCas9, activation domain, and guide RNAs) to regulate downstream genetic circuitry (Fig. 2a and 4a).
Our demonstration that these tools can be applied in ELMs suggests that more complex, multi-gene programs combining CRISPR activation and inhibition (CRISPRa/i) can be active in ELM systems. These next-generation tools would allow us to build complex regulatory programs by simultaneously activating or repressing multiple genes within engineered pathways and the host genome [16]. We expect that developing programmable ELMs capable of coordinating dynamic control over complex multi-gene programs will result in entirely new capabilities for high-yield bioproduction [65], including rapid product switching and cycling, product diversification, and in situ process monitoring [66][67][68]. Creating dynamically-programmable, input-responsive ELMs will also be valuable for studying how encapsulation affects cellular physiology and for synthesizing ELMs as smart biosensors [69], as devices for the in-situ delivery of biomedicines [6,70], or as environmental sentinels [18]. Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: James M. Carothers is an advisor to Wayfinder Biosciences. Data availability Data will be made available on request. 5. Dynamic inducible bioproduction in S. cerevisiae ELMs. a) Schematic of bioproducing S.cerevisiae ELMs. These ELMs are fabricated by seeding BSA-PEGDA hydrogels with engineered S. cerevisiae. The ELMs are printed using a SLA 3-D printer and continuously cultured with media exchange every 2 days. For scheduled bioproduction, ELMs were dynamically induced to control expression of native and heterologous enzymes. b) Multi-input genetic programs in S. cerevisiae ELMs for multi-product biosynthesis: 1) copper (II)-inducible expression of dioxygenase enzyme (mjDOD gene product) with constitutive expression of P450 enzyme (CYP76AD5 gene product) for betaxanthins production, and 2) galactose-inducible expression of secreted proteinase A enzyme (scPEP4 gene). c) Bioproduction Induction cycle 1 consists of 3 phases (Phase 1 ¼ 7 days, Phase 2 ¼ 8 days, Phase 3 ¼ 12 days). Top: Alternating induction by selective addition of copper (II) inducer (orange) in Phases 1 and 3, and galactose inducer (green) in Phase 2. Bottom: Time-dependent betaxanthins and proteinase A production levels. Values represent the mean AE standard deviation from n ¼ 3 technical replicates. d) Bioproduction Induction cycle 2 with 3 phases (phase lengths are identical to c). Top: Alternating induction by selective addition of galactose inducer (green) in Phases 1 and 3, and copper (II) inducer in Phase 2. Bottom: Time-dependent betaxanthins and proteinase A production levels. Values represent the mean AE standard deviation from n ¼ 3 technical replicates. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
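To make the multi-input scheduling idea concrete, the sketch below encodes the Fig. 5 induction cycle as a simple phase table and looks up which product is scheduled ON at a given culture day; the representation is illustrative only and not taken from the paper.

```python
# Hypothetical phase schedule mirroring induction cycle 1 for the S. cerevisiae ELMs
SCHEDULE_CYCLE_1 = [
    {"phase": 1, "days": 7,  "inducer": "Cu2+"},       # betaxanthins ON
    {"phase": 2, "days": 8,  "inducer": "galactose"},  # proteinase A ON
    {"phase": 3, "days": 12, "inducer": "Cu2+"},       # back to betaxanthins
]

PRODUCT_OF_INDUCER = {"Cu2+": "betaxanthins", "galactose": "proteinase A"}

def active_product(day, schedule):
    """Return which product is scheduled ON at a given culture day (1-indexed)."""
    elapsed = 0
    for phase in schedule:
        elapsed += phase["days"]
        if day <= elapsed:
            return PRODUCT_OF_INDUCER[phase["inducer"]]
    return None  # past the end of the experiment

print(active_product(10, SCHEDULE_CYCLE_1))  # proteinase A (day 10 falls in Phase 2)
```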
2023-05-24T15:08:31.276Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "d497fedd4a1befe17e9935073995675406bae569", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.mtbio.2023.100677", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5d67b05072f74cf916457fdbb5dae5bb549d71d1", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
51936029
pes2o/s2orc
v3-fos-license
Insights into medical humanities education in China and the West Medical humanity is the soul of health education. Beginning medical students are taught various aspects of basic medicine, such as biochemistry, anatomy, and immunology. However, cultivation of the humanistic aspects of medicine has received increasing attention in recent decades. We performed a comparison study based on a literature search and our experience with medical humanistic courses in Western and Chinese medical colleges. We found both similarities and disparities in humanities courses offered in Western medical colleges and Chinese medical colleges. The delivery of humanities courses, such as medical sociology, medical ethics, medical psychology, and medical history, is widespread and helps to prepare students for their transformation from medical students to skilful medical professionals. Both Western and Chinese medical colleges offer a variety of medical humanistic courses for undergraduate students. Although Chinese medical humanistic education has undergone major changes, it still requires improvement and educators can learn from Western practice. We hope that our analysis will contribute to education reforms in the medical field. Introduction Throughout medical history, the healing power of the humanistic approach has ensured that many patients have been at least partially healed, despite receiving ineffective or harmful remedies based on incorrect theories of disease. Although medical knowledge and facilities have improved in modern times, the scientific approach to treatment will never overshadow the art of healing. Considerable attention has been paid to the training of future physicians, including their academic achievements and aspects of their personality, such as moral reasoning, compassionate listening, and empathy for patients. In his book The Place of the Humanities in Medicine, Eric J. Cassell argued that humanity-oriented professional courses can provide substantive education, particularly when they focus on clients rather than disease. 1 This suggests that the cultivation of a humanistic approach should be established on the very first day of medical education courses. Basic medical education In China, medical education is composed of three stages: basic medical education, clinical education, and internship. Basic medical education introduces students to the medical world. This first stage is crucial, and should ideally provide freshmen with a foundation in a thorough understanding of humanistic ideas. Basic medical education is a necessary requirement for prospective medical professionals. In this phase, students are taught various aspects of basic medicine such as anatomy, biochemistry, physiology, genetics, and immunology. Incorporation of humanistic education into traditional biomedical courses The delivery of traditional biomedical courses used to be prescribed and even dull, simply because teachers were accustomed to a conventional teaching style and were afraid of making changes to course delivery. Under the guidance of humanities professionals, the teaching of such courses has become more integrated and experiential. Since 2001, a series of innovations has been implemented at the School of Medicine at Shanghai Jiao Tong University, China. In the human anatomy course, teachers show students the anatomy museum. 
Before the start of any anatomic procedures, students are asked to observe a moment of silence to express their gratitude to the individuals who have voluntarily donated their bodies and contributed to the advancement of health care. For the regional anatomy laboratory reports, students are required to note down their personal reflections after each anatomic procedure. They include a paragraph about their internal conflicts and feelings, which may refer to their gratitude and respect to donors and their determination to pursue a medical career. Equal respect is shown at the end of each term for those animals that have been used in experiments and sacrificed. Teachers and students stand in silence, lay a wreath and eulogise about the animals' contributions to medical development in front of a gravestone erected for experimental animals. Biochemistry, genetics, and immunology are rapidly developing courses. Lecturers illustrate the creativity of medical research by referring to major discoveries and innovations and recounting anecdotes about Nobel Laureates in medicine. The discovery of Helicobacter pylori deeply impresses students, when they are told that Dr. Marshall risked his own life drinking contaminated water to prove that Helicobacter pylori causes gastritis. Marshall's persistence helped him to win the Nobel Prize in Physiology or Medicine and made a global contribution to medicine. Early exposure to clinical practice: a new approach in basic humanistic education In addition to the required foundational courses in medicine, the development of humanistic care has become increasingly important over the past few decades. The desirability of early exposure to clinical practice is accepted worldwide, and its importance has been particularly recognised in developing countries. For instance, the South Africa Health Professions Council has contributed to educational reform for medical workers. Medical freshmen undertake health care visits in Year 1 to enhance their knowledge of future professional environments and to promote their enthusiasm for medicine. Both direct assessment (a survey) and indirect assessment (student comments about their experiences) revealed positive findings. Many students expressed their gratitude for the provision of an insight into daily medical practice and 69% of students identified hands-on experience during ambulance duty as their most rewarding experience. 2 Most students felt they could learn from the health care visits and were better prepared for medical practice. Medical colleges in China have similar programs that run for approximately 1-2 weeks. One study conducted by Peking University assessed student experiences using questionnaires and reports. Students reported the development of greater understanding, purpose and effectiveness, made comments about how the management of the course could be improved, and provided suggestions about early exposure to clinical practice. The results showed that all the students felt that they had benefited from the activities and achieved perceptual knowledge of clinical work; 61.5% of students reported that the early exposure to clinical practice had greatly helped them. 3 The general aim of such courses is to offer students a positive vocational perspective, to reinforce their original desire to study medicine and to serve as an introduction to actual medical practice. 
Potential of art and leisure activities in cultivating medical humanity Some researchers have suggested that recreation and leisure activities, such as going to the cinema, reading, singing, and physical exercise, can have positive effects on the development of medical humanity in students. A survey study at Harvard University in the USA showed that students who were willing to attend medical humanity courses were frequently involved in physical training and activities such as football; some philosophers have considered that the 'beautiful game' can provide a basis for discussions about cultural practices. 4,5 Another study revealed that yoga activity could improve students' attendance of anatomy classes. 6 These findings suggest that such activities help students to cope with different situations in medical cases. Arts and sports activities could help to prepare students for challenging cases. 7 Similar research in China on the important role of leisure activities in cultivating medical humanity is lacking. This topic deserves more attention and future investigation. Medical humanities courses in top American and Chinese medical colleges In medical humanities courses, students receive a moral-oriented education. This involves the development of self-discipline and awareness, the acquisition of knowledge about basic medicine and an understanding of the practicalities of a medical career. To successfully prepare for and deal with real-life health care issues, qualified medical professionals need several important qualities, including calmness, care, insight, patience, and courage. The aim of medical humanities courses is to help students to prepare for their gradual transformation from students to skilful medical professionals. 8 The top 20 medical colleges in the USA all offer elective medical humanities courses to students at all grades; 20% of colleges provide specifically named medical humanities courses and 40% have established a humanities division, or facilities for the teaching of humanities for human development. The top three curricula are social medicine, medical ethics, and medical psychology. Harvard University provides the most extensive humanities teaching, which covers 32 different issues under 8 themes (Table 1 shows the top 20 medical colleges in the USA and their curricula). Most colleges use traditional lectures. However, students at the University of California, San Francisco, have a greater choice of teaching methods. They can either join a medical humanities interest group or attend the humanities book club and enjoy a spiritual feast. In addition, they have access to multiple peer-group seminars for independent or supervised study. Other colleges use problem-based learning (PBL) and achieve good results. Some researchers have proposed the use of web-based learning for medical humanities courses and suggested that this method can enhance reflective study, develop an understanding of other viewpoints and help to develop creativity via engagement with the arts. 9 In comparison, Chinese medical colleges universally lag behind Western colleges in their medical humanities teaching content, style, and methodology. Recent statistics show that insufficient importance is placed on medical humanities in the Chinese medical education system. 10 The top 10 medical colleges in China offer a few social medical courses and a small range of course types. Only 1 in 10 universities has a specific medical humanities institute. 
courses are offered in the first and second year of basic medical education so that students are simultaneously exposed to humanistic cultivation and professional knowledge. 3. Mandatory humanistic education: some humanistic courses are mandatory to ensure that students will acquire fundamental knowledge. (Table 2 shows the top 10 medical colleges in China and their curricula). Primary humanistic courses Our research indicates that medical sociology, medical ethics, medical psychology, and medical history are the most wellreceived medical humanities courses in China and other countries. Therefore, we now describe these courses and describe how they are presented in China and other countries. Medical sociology: Patient-Doctor I. Medical sociology is an emerging medical humanities area in China. In contrast, the concept of medical sociology was developed decades ago in the West and courses have an excellent reputation. Patient-Doctor I (PDI), is a medical humanities course offered to students in their first year at Harvard School of Dental Medicine in the USA. It helps and encourages students to value relationships with their patients, treat them with enthusiasm and respect and communicate well with them. 11 The university administration collected PDI course assessment, admissions data, National Board test scores, and data on interactive studentpatient abilities. They noticed significant linear relationships between PDI assessment scores and clinical performance, including manual skills and humanistic and interactive student-patient ability scores (p ¼ 0.03). 11 In China, the concept of medical sociology has a unique connotation. Medical health reform and doctor-patient relationships are closely related to medical sociology. The frequent murders of medical practitioners and the public misunderstanding of doctors has led some medical students to give up their pursuit of a medical career. At this critical time, there is a great need for medical sociology education, as it could promote the sustainable development of the health care industry. Medical ethics. Medical ethics is an important part of medical humanities education. It focuses on knowledge, abilities, and attitudes with the aim of empowering practitioners in the ethics of decision making. In view of the importance of medical ethics, we systematically analysed the contents and methods used in medical ethics courses. 12,13 Problem-based learning. PBL is more useful than lectures for medical ethics teaching and learning. 14 Peer-supervised PBL can be used when there are shortages in teaching resources and can improve teaching efficacy. 15 PBL is used in medical ethics courses at the University of Texas, USA. Classes comprise seven to nine students and two advisors in realistic clinical scenarios. The appeal of using PBL to teach ethics is that it places ethical problems in the context of clinical problems encountered by physicians. Students actively analyse each case, systematically consider the respective approaches to each problem and think about and identify the ethical, behavioural, and diagnostic problems with their peers. The PBL approach also appeals to clinicians, who often lament that students on the wards frequently fail to recognise ethical problems, even if those same students can skilfully reason about problems once they are identified. 16 Medical ethical reasoning is of great importance in further occupational preparation. Medical ethical reasoning. 
Medical ethical reasoning is the foundation of ethics reasoning and includes (1) problem identification and information collection; (2) decision making; (3) treatment planning and (4) clinical behavioural observation. The final aspect is affected by both individual and wider social factors, such as conflicts, family support, and accessible resources. Experts need to consider all aspects of a case and make conclusive ethical decisions. The model is a very important foundation for ethical reasoning and learning. 17 Despite the great importance attached to medical ethics education in China, its problems are obvious and the situation is far from optimistic. Outdated materials and dull methods are two prominent shortcomings. Medical ethics textbooks can remain unchanged for a couple of years, resulting in outdated information. In addition, in contrast to teaching in other countries, students in China are introduced to only the major ethical issues, such as brain death, euthanasia, and informed consent. Medical psychology. In addition to medical sociology and medical ethics, medical psychology is a core humanistic subject. Medical psychology is now incorporated into basic medical education and combines clinical psychology, health psychology, and behavioural medicine. Medical students in the USA are required to familiarise themselves with patients' lives, personal histories, values, and attitudes during the learning process and to develop selfawareness, personal growth, and well-being. 18 Chinese medical psychology was founded by Professor Zan Ding. Owing to his efforts, the concept of disease-related mental health was gradually accepted by his colleagues at Peking Union Medical College and was disseminated nationwide. Despite Zan Ding's early death, medical psychology is widely accepted as an important part of medical humanities courses in China. 19 In Chinese medical colleges, psychological concepts are currently closely aligned with concepts from medical ethics and overlap with the topics discussed in medical sociology, particularly doctorpatient relationships. This means that the importance of medical psychology can be overlooked. However, the importance of medical psychology education lies more in self-supervision. Medical students have more responsibility and a greater burden than other undergraduate students, as they must memorise everything they have learned and attempt to use the knowledge in future clinical scenarios. In addition, their tight study schedules leave very little time to develop personal hobbies or to take part in extracurricular activities. This heavy load affects the mental health of medical students. 20 Medical psychology courses can help students to identify their mental health problems, address them appropriately and ask psychological professionals or mentors for help. Medical history. Medical history is receiving increasing attention both in China and other countries. A consideration of medical history is the basis of medical education, as it allows students to identify and reflect on historical medical advances and mistakes. Many changes to medical practice have threatened the integrity of this revered profession, including the decline of professional autonomy and the erosion of moral integrity. An awareness of medical history allows students to review the development of medical practice and to regain confidence and professional self-identity. 21 In China, medical history was generally taught using a lecture-based teaching style. 
However, students often showed insufficient interest in the subject and paid little attention to the important developments in medicine. In addition, Chinese medical history classes only focused on traditional Chinese medicine history and did not include comparisons with Western medical innovations. Recently, there have been many changes to medical history teaching in China. Following other medical humanities courses, the teaching of medical history now mainly uses PBL. Courses now focus on the real-life significance of medical history and involve students in extracurricular activities such as medical history museum visits and medical revolution debates. In addition, the PBL model encourages students to collect historical information after class and share their findings, which can increase their active learning potential. 22 In the USA, the delivery of medical history courses reflects a sense of respect for the profession and honours the rich heritage of medicine. Bryan and Longo have argued that history of medicine education contributes to nostalgic professionalism and strengthens students' sense of belonging and solidarity as members of an honourable profession. Those authors attempted to introduce students to history of medicine courses, which included early exposure to preclinical and clinical departments, formal lectures, and informal mentoring. Such measures can attract students and inspire active learning and thinking. Bryan and Longo argue that an understanding of developments in medicine can help to determine students' future career paths. In choosing specialties and subspecialties, students tend to narrow their choices; however, an awareness of the history of medicine could help students to choose their specialty with greater understanding and an increased sense of identity. 23 In Australia, medical students receive medical history courses to broaden their horizons and develop critical thinking. At Monash University, an elective medical history course was introduced to students in the first year of their medical education. No timetables or detailed contents for this course were developed, because staff believed that the method was more important than the contents. The staff adapted medical history teaching methods used in medical schools overseas and added their own elements. Students were often given several topics to work on for extracurricular assignments and were allowed to use the Internet and academic books for data collection. In class, students were asked to share their thoughts about the assignment in detail. The assessment was an essay on a topic of the students' choice. The best report was rewarded with 100 dollars and the essays reflected what students had learned during the semester. 24 Conclusions Medical humanities education is very important to the development of a successful medical practitioner; it can influence clinical performance and increase empathy for patients. 25 Unsatisfactory doctorpatient relationships can lead to mistrust and even clashes between physicians and patients. Tucker et al. analysed the patient-physician relationship in China and suggested that medical humanities should be a core component of clinical training. In addition, they suggested that it is very important to promote experiential learning and partnerships between the community and medical schools. It is also vital to improve the evaluation of medical humanities education to rebuild physicianpatient trust and restore harmonious mutual relationships in medical care. 
26,27 Therefore, we believe that medical humanities education should be implemented at an early stage of medical education, as students need to know the requirements for becoming a qualified and humane physician before they learn anything else. A comparison of medical humanities teaching in China and Western countries can identify differences, help to improve our own methods and teaching styles and expand the horizons of our medical students. If we really want to strengthen medical humanities education, we need to create a suitable atmosphere for positive learning and select students based on their humanistic qualities as well as their professional behaviours. 28 The following are some recommendations for educational reforms in medical humanities education in China. First, we should integrate humanities and social science resources and improve medical humanities curricula. Second, we should optimise teaching methods and improve teaching effectiveness. Third, we should establish close links with clinical practice and train students in medical humanistic practice. Finally, we should reform methods of evaluating medical humanities and establish a formative assessment system. Medical humanities education is based on the belief that the best doctors and nurses are humans before they are health care workers. First and foremost, medicine is a learned and humane profession.
2018-08-14T19:40:32.979Z
2018-08-08T00:00:00.000
{ "year": 2018, "sha1": "7e72f1a901f2006902de8af10e97d9a8948d80d5", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0300060518790415", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7e72f1a901f2006902de8af10e97d9a8948d80d5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236321008
pes2o/s2orc
v3-fos-license
Hydrotalcite-Embedded Magnetite Nanoparticles for Hyperthermia-Triggered Chemotherapy A magnetic nanocomposite, consisting of Fe3O4 nanoparticles embedded into a Mg/Al layered double hydroxide (LDH) matrix, was developed for cancer multimodal therapy, based on the combination of local magnetic hyperthermia and thermally induced drug delivery. The synthesis procedure involves the sequential hydrolysis of iron salts (Fe2+, Fe3+) and Mg2+/Al3+ nitrates in a carbonate-rich mild alkaline environment followed by the loading of 5-fluorouracil, an anionic anticancer drug, in the interlayer LDH space. Magnetite nanoparticles with a diameter around 30 nm, dispersed in water, constitute the hyperthermia-active phase able to generate a specific loss of power of around 500 W/g-Fe in an alternating current (AC) magnetic field of 24 kA/m and 300 kHz as determined by AC magnetometry and calorimetric measurements. Heat transfer was found to trigger a very rapid release of drug which reached 80% of the loaded mass within 10 min exposure to the applied field. The potential of the Fe3O4/LDH nanocomposites as cancer treatment agents with minimum side-effects, owing to the exclusive presence of inorganic phases, was validated by cell internalization and toxicity assays. Introduction Magnetic fluid hyperthermia (MH) has been developed as an alternative approach for the heat-mediated treatment of cancer cells at a controllably localized level [1]. MH stands on the energy losses of magnetic nanoparticle (MNP) dispersion subjected to AC magnetic fields (H AC ), resulting in a temperature elevation of the dispersion medium. In recent years, significant research effort has been devoted to the synthesis of MNPs with high heating efficiency, their successful incorporation into biological matrices (cells or tissues), the treatment optimization based on theoretical models and the technical improvement of field generation devices and error-free measuring protocols [2,3]. Thanks to their facile and low-cost availability in various geometries, their chemical stability, affordable biocompatibility and magnetic response, iron oxide nanoparticles are widely considered as the most efficient agents for magnetic hyperthermia applications [4,5]. The best heating In all these cases, magnetic featuring of the drug-loaded nanocomposites was only used for the magnetically assisted delivery through the application of a static magnetic field. Here, we report an attempt to illustrate the potential incorporation of inorganic magnetic nanohybrids, consisting of Fe 3 O 4 nanoparticles and Mg-Al LDH loaded with anticancer 5-fluorouracil (C 4 H 3 FN 2 O 2 , FU), as a way to improve therapeutic efficiency by combining magnetic hyperthermia and chemotherapy. Importantly, a major milestone was to go beyond the parallel occurrence of the two therapeutic modalities as described elsewhere for doxorubicin-loaded nanocomposites [32], and provide their coupling, i.e., the drug release switch-on upon application of the AC magnetic field (Figure 1). The main advantage of the proposed nanocomposites is their fully inorganic nature which is able to combine the heating capability of magnetic nanoparticles with the drug hosting capacity of the layered double hydroxide, considering also that the Mg/Al layered double hydroxide is already recognized for its compatibility for human use as an antacid. 
Nanocomposite Synthesis The synthesis of the nanocomposite consisting of Fe 3 O 4 nanoparticles (IONPs) distributed into a matrix of Mg/Al layered double hydroxide was carried out in a continuousflow sequence of two stirring reactors (operating volume 1 L) by the combined precipitation of iron, magnesium and aluminium salts ( Figure S1). In the first reactor, Fe 3 O 4 seeds were prepared after the coprecipitation of FeSO 4 ·7H 2 O and Fe 2 (SO 4 ) 3 ·9H 2 O, which were pumped in the form of aqueous solutions (5 mM), under alkaline conditions (pH 11) regulated by the continuous addition of NaOH solution (2 g/L) in drops. The blackcoloured suspension was directed into the second reactor in which the coprecipitation of Mg(NO 3 ) 2 ·6H 2 O and Al(NO 3 ) 3 ·9H 2 O took place. The two reagents were pumped as aqueous solutions with concentration 10 mM and hydrolyzed at a pH 9 maintained by the addition of a 1:1 mixture of NaOH/Na 2 CO 3 (3.5 g/L). Sodium carbonate was introduced in this step to serve also as the source of CO 3 2− which participated in the building up of the layered hydrotalcite structure. Each reactor operated with a residence time of 1 h. The final product was received in the outflow of the second reactor and then, centrifuged and washed several times to remove soluble residuals. The described reactions can be realized with similar success in batch reactors, however, advantages such as the good reproducibility, the achievement of constant concentrations in all ionic and solid forms, the minimization of operation cost and the scale-up potential to mass production would not be covered. A schematic summary of the process is presented in Figure 2 while a picture of the laboratory continuous-flow system appears in Supplementary Materials. Using such proportions, the Mg-to-Al molecular ratio was adjusted to 3. Under such conditions, the production rate of the nanocomposite in terms of dry solid varied between 0.35−0.55 g/h. Characterization An overview of produced nanocomposites' morphology and separately of their constituting phases was obtained by electron microscopy. For more clarity on the distribution of IONPs, high magnification images were taken by transmission electron microscopy (TEM) using JEM-1210 (JEOL, Tokyo, Japan), operating at 120 kV. TEM samples were prepared by dropping a diluted aqueous dispersion of the material onto a carbon coated copper grid. Quasi-static magnetic properties of the samples were measured using a superconducting quantum interference device (SQUID) MPMS XL-7T magnetometer (Quantum Design, San Diego, CA, USA). Structural-phase identification was performed by powder X-ray diffractometry (XRD) using a water-cooled Ultima+ diffractometer (Rigaku, Tokyo, Japan) with CuKa radiation, a step size of 0.05 • and a step time of 3 s, operating at 40 kV and 30 mA. Average elemental content of the nanocomposites was determined by graphite furnace atomic absorption spectrophotometry, using a AAnalyst 800 instrument (Perkin Elmer, Waltham, MA, USA). The actual ratio of Fe 2+ /Fe 3+ in the Fe 3 O 4 fraction was defined after digestion of a weighted quantity of each sample in 7 M H 2 SO 4 under heating and titration with 0.05 M KMnO 4 solution till the appearance of a pink colour. The percentage of carbonates (CO 3 2− ) located in the interlayer space of hydrotalcite was quantified using a FOGL bench-top soil calcimeter (BD Inventions, Thessaloniki, Greece) with a determination error of less than 5%. 
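For the permanganate titration described above, the Fe2+ content follows from the standard 1:5 MnO4−:Fe2+ redox stoichiometry; the sketch below assumes hypothetical titration volumes and a total iron value taken from the atomic absorption measurement.

```python
def fe2_to_fe3_ratio(v_kmno4_mL, c_kmno4_M, total_fe_mmol):
    """Fe2+/Fe3+ molar ratio from a KMnO4 titration (one MnO4- oxidizes five Fe2+)."""
    fe2_mmol = 5.0 * c_kmno4_M * v_kmno4_mL   # mol/L * mL = mmol of Fe2+ oxidized
    fe3_mmol = total_fe_mmol - fe2_mmol       # remaining iron assumed to be Fe3+
    return fe2_mmol / fe3_mmol

# Hypothetical titration: 3.2 mL of 0.05 M KMnO4 for a digest containing 2.5 mmol total Fe
print(f"Fe2+/Fe3+ = {fe2_to_fe3_ratio(3.2, 0.05, 2.5):.2f}")  # ~0.47; stoichiometric Fe3O4 gives 0.5
```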
The potentiometric mass titration method was applied to define the positive charge density of the solid. In the first step, the point of zero charge (PZC) was determined after equilibrating water suspensions of the nanocomposites (10 g/L) in 0.001, 0.01 and 0.1 M NaNO3 solutions and adjusting pH to 11 by adding 0.1 M NaOH. Then, suspensions were titrated by adding stepwise small quantities of a 0.1 N HNO3 solution and recording equilibrium pH until pH 3 was reached. For the three ionic strengths, the plotting of surface charge density, which is proportional to the difference between the acid volume used to set the same pH in the dispersion and that of a blank titration, indicates the PZC as the point of intersection.

Magnetic Heat Losses

Calorimetry measurements of magnetic suspensions under H_AC were performed using a commercial AC magnetic field generator (SPG-06-III 6 kW High Frequency Induction Heating Machine, Shenzhen Shuangping Ltd., Shenzhen, China) working at 765 kHz frequency and 24 kA/m magnetic field intensity. The specific loss power (SLP), referred to as specific absorption rate (SAR) hereafter, was derived from the slope of the temperature versus time curve after subtracting the water background signal and heat losses to the environment [33]. Temperature variations during the application of the AC field were recorded using a commercial optical fibre thermal probe located in the centre of the sample and connected to a PicoM device (Opsens, Quebec, QC, Canada) with an experimental error of ±0.1 °C. SAR values under non-adiabatic conditions were determined through the temperature increment as a function of time (dT/dt) using the following expression:

SAR = (C_d · m_d / m_Fe) · (dT/dt)

where C_d is the mass specific heat of the dispersion medium, m_d is the dispersion's mass, m_Fe is the iron mass related to the IONPs diluted in the dispersion and dT/dt is the effective slope upon switching H_AC on, after subtracting the contributions of coil surface heating and environment cooling. The value of C_d considered in this study was 4.18 J/(g K) for water dispersions. AC magnetometry measurements of the magnetic colloids were carried out by commercial inductive magnetometers (AC Hyster Series; Nanotech Solutions, Madrid, Spain). The AC Hyster Series magnetometer offers a wide field frequency range from 10 kHz up to 300 kHz and field intensities up to 24 kA/m which are automatically selected. Hyster Series instruments measure magnetization cycles from IONPs dispersed in liquid media at room temperature, consisting of three repetitions to obtain an average of the magnetization cycles and the related magnetic parameters (H_C, M_R, Area). In order to accurately quantify the magnetic losses of IONP suspensions, the specific absorption rate (SAR) values were calculated according to SAR = A·f, where A is the magnetic area and f is the AC magnetic field frequency [34].

Drug Loading and Release

Loading of 5-fluorouracil in the interlayer space of the Mg-Al LDH part of the nanocomposite was carried out by equilibrating a quantity of the samples with a solution of the drug in phosphate-buffered saline (PBS). The obtained FU-loaded nanocomposite sample is referred to as Fe3O4/LDH-FU. In these experiments, a freshly prepared 20 mM stock solution of 5-fluorouracil in PBS was used after proper dilution. Exchange between 5-fluorouracil molecules and carbonates was maximized when the experiment took place at pH 9.
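Returning to the heat-loss measurements above, the sketch below illustrates the two SAR estimates, the calorimetric slope method and the loop-area times frequency method, with hypothetical inputs; the loop area is assumed to be expressed in J per g of Fe.

```python
import numpy as np

C_WATER = 4.18  # mass specific heat of the aqueous dispersion, J/(g K), as given above

def sar_calorimetric(time_s, temp_c, m_dispersion_g, m_fe_g, fit_window_s=30.0):
    """SAR (W/g-Fe) from the initial slope of a temperature-vs-time heating curve."""
    # dT/dt from a linear fit over the first seconds after switching the AC field on
    mask = time_s <= time_s[0] + fit_window_s
    slope_k_per_s = np.polyfit(time_s[mask], temp_c[mask], 1)[0]
    return C_WATER * (m_dispersion_g / m_fe_g) * slope_k_per_s

def sar_magnetometric(loop_area_j_per_g_fe, frequency_hz):
    """SAR (W/g-Fe) from the AC hysteresis-loop area A times the field frequency f (SAR = A*f)."""
    return loop_area_j_per_g_fe * frequency_hz

# Hypothetical heating trace: 1 g of dispersion containing 1 mg Fe, warming at ~0.12 K/s
t = np.arange(0, 60, 1.0)
T = 25.0 + 0.12 * t
print(sar_calorimetric(t, T, m_dispersion_g=1.0, m_fe_g=0.001))          # ~500 W/g-Fe
print(sar_magnetometric(loop_area_j_per_g_fe=1.7e-3, frequency_hz=300e3))  # ~510 W/g-Fe
```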
In these conditions, the kinetic behavior of loading was studied using a 2 g/L dispersion of MGT-35 in a 0.5 mM 5-fluorouracil PBS solution and measuring the residual concentration in different time intervals till equilibrium was reached. The adsorption capacity variation (isotherm) as a function of residual 5-fluorouracil was determined after equilibrating 2-20 g/L of nanocomposite with a 15 mM drug solution. After drug loading, the leaching behavior under different pH values was evaluated. Sample loaded with around 1 mmol/g of 5-florouracil was dispersed in PBS adjusted to pH 4.0, 7.4 and 8.5, and the released drug was monitored for a period of up to 60 min. Similarly, the release of drugs was examined for various temperatures, 10, 20 and 35 • C, while in another experiment the temperature of the sample was increased at 40 • C by the application of AC magnetic field and kept at this value for 10 min. Cell Internalization Human colon cancer cells (HT29) were cultured in a microscope coverslip placed in the wells of a 12-well plate containing DMEM/F-12 (Dulbecco's Modified Eagle Medium/Nutrient Mixture) basal medium supplemented with 10% fatal bovine serum (FBS), 1% l-glutamine and 1% penicillin/streptomycin. After 24 h, they were incubated with the nanoparticles (IONPs, Fe 3 O 4 /LDH and Fe 3 O 4 /LDH-FU) at a concentration of 0.1 mg/mL in growth media for two days. Then, the cells were washed and incubated with lysotracker red (0.25 µM dispersed in basal media for 25 min) to stain the lysosomes. The cells were then washed with PBS, dispersed in PBS and observed under a LSP2 Leica Confocal Laser Scanning Microscope. Samples were excited with a 543 nm Green Helium-Neon laser and collected emitted light from 555 nm to 620 nm. Toxicity A resazurin-based cytotoxicity assay was used for checking the biocompatibility of the nanoparticles (IONPs and Mg-Al LDH-FU) in HT29. A number of 20,000 cells/well in 100 µL growth medium were seeded in a 96-well plate. After growth for two days, cells were incubated with different nanoparticle concentrations ranging from 0 to 256 mg/mL for one day. As negative control for cell viability, cells were incubated with dimethyl sulfoxide (DMSO) inducing cell death. Three wells were used per sample tested. Afterwards, resazurin (10%) was added for three hours and measured for its fluorescent metabolite (resorufin) (λ ex 560 nm; λ em 572-650 nm). First, the maximum intensity emission wavelength value was taken and the maximum intensity value obtained for the negative control to remove background signal was subtracted. Then, data was normalized with the highest value. Finally, the normalized values (relative cell viability, %) were plotted towards the logarithm of the concentrations. GraphPad Prism 6 was used to obtain the logIC 50 . This experiment was replicated three times. Material Properties The structural characterization of the developed nanocomposite indicated the preservation of the constituting phases, LDH and iron oxide, in the final product. Figure 3 shows the XRD diagrams of the pure components when separately prepared in comparison to their sequential preparation as a hybrid in the two-stage reaction setup. The observed diffraction peaks for IONPs fitted well to those expected for the inverse spinel structure of Fe 3 O 4 whereas the layered formation of the Mg-Al LDH was signified by the strong low-angle peaks which were identified as hydrotalcite. 
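The low-angle basal reflections mentioned above translate into the interlayer spacing through Bragg's law; a minimal sketch, assuming Cu Ka radiation as used for the XRD measurements and a hypothetical (003) peak position, is given below.

```python
import numpy as np

CU_K_ALPHA_NM = 0.15406  # Cu Kalpha wavelength in nm

def d_spacing_nm(two_theta_deg, wavelength_nm=CU_K_ALPHA_NM):
    """First-order Bragg spacing: lambda = 2 * d * sin(theta)."""
    return wavelength_nm / (2.0 * np.sin(np.radians(two_theta_deg / 2.0)))

# Hypothetical (003) basal reflection of a carbonate hydrotalcite near 2-theta = 11.6 deg
d003 = d_spacing_nm(11.6)
print(f"d(003) = {d003:.3f} nm, implying c = 3*d(003) = {3 * d003:.3f} nm for the 3R polytype")
```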
Chemical analysis indicated that the Mg/Al molecular ratio was 2.9/1, which is very close to the nominal value for the R3m space group of hydrotalcite. For initially formed hydrotalcite, the unit cell parameters of its 3R stacking sequence were calculated to be α = 0.3055 nm and c = 2.2725 nm respectively, whereas the interlamellar distance estimated by the reflection (003).

The nanocomposites showed relatively high specific surface areas, 78 m²/g for MGT-20 and 65 m²/g for MGT-35, although they appeared significantly decreased in comparison to the pure Mg-Al LDH (175 m²/g). The nanocomposites indicated also a significant surface charge density which was maintained at around 0.7 mmol OH⁻/g.

The TEM images shown in Figure 4 provide a representative view on the nanoscale morphology and distribution of the nanocomposites in comparison to the separately prepared pure LDH and IONPs. Following the described precipitation method, Mg-Al LDH form very thin nanosheets, while IONPs have a nearly spherical shape with an average diameter of 31 ± 6 nm. The sequential synthesis of the two phases resulted in a good distribution of the nanoparticles onto the surface of the LDH sheets. Magnetic interactions between nanoparticles and the absence of any stabilizing agent during synthesis contributed to the observed aggregation effects. The lower magnification images of the sample MGT-35 (Figure S7), recorded with scanning electron microscopy, indicate that in the powder form, the layered matrix appears as a continuous substrate without obvious limits of the separation that occurs when dispersed. This effect may be attributed to the secondary self-organization of the layered structure by weak forces during dewatering and it is fully reversible considering that a hydrodynamic diameter of samples was measured around 500 nm.

The magnetic response of the nanocomposites was attributed to the participation of Fe3O4 and its intensity appeared to be proportional to the magnetic phase percentage. The hysteresis loops under quasi-static conditions shown in Figure 5 indicated that samples MGT-20 and MGT-35 had saturation magnetisation values around 17 and 30 Am²/kg respectively, in good accordance to the measured magnetisation for pure IONPs (90 Am²/kg) and the mass percentage of Fe3O4 in each case.

Hyperthermia Performance

The temperature increase during the AC field application indicated a significantly high potential of the samples to deliver heat flow to the environment (Figure S3). For instance, a temperature increase from room temperature to around 35 °C within 2 min of field application was obtained for a 2 g/L aqueous dispersion of sample MGT-35, yielding a SAR value of 1970 W/g Fe (±10%). Keeping the same field strength, SAR values seemed to follow an exponentially increasing trend in the frequency range from 30 to 765 kHz, succeeding in relatively high performances even for frequencies below 100 kHz (Figure 6). The uncertainty of the determined SAR values in the lower frequencies range was quite small (typically below 3%) following the accuracy of the AC magnetometry measurements. Importantly, it was found that the addition of the Mg-Al LDH phase did not appear to modify the heating performance of IONPs and therefore, temperature rise was proportional to the content of the magnetic phase. Compared to other studies on single iron oxide nanoparticles or LDH composites, the obtained SAR values were among the highest reported, covering a very wide frequency range. Typically, Fe3O4 nanoparticles prepared by the oxidative precipitation method, showed efficiency of around 2 kW/g at 765 kHz, translated into 2.3 W/g Fe when produced by a continuous flow process [35], but less than half when produced in batches [36]. Aiming to achieve higher SAR values (up to 10 kW/g at 500 kHz), combined ferrite phases were employed but the requirement for using hazardous reagents for their synthesis and the toxicity of elements such as Mn and Co inhibit their potential for clinical application [37-39]. Research reports on the heating performance of magnetic nanoparticle-decorated LDH systems are only scarce. For example, Fe3O4/Mg-Al LDH nanohybrids were found to reach an SAR of 73.5 W/g at 425 kHz and 30 kA/m [32]. The same study provided promising results concerning the combined hyperthermia and drug delivery with doxorubicin as well as the therapeutic efficiency in HeLa cells.

Drug Release Behavior

The capacity of the Mg-Al LDH structure to host 5-fluorouracil molecules after exchange with structural carbonates was first demonstrated through the kinetic experiments (Figure S2). Within less than 1 h, MGT-35 was able to capture significant quantities from the equilibrated drug solution, overcoming a loading of 6.5 mmol/g and approaching the 75% of its maximum ability (~8.7 mmol/g) into PBS at pH 9.
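Loading values such as the 6.5 mmol/g figure above follow from a mass balance on the residual drug concentration; the sketch below assumes the standard relation q = (C0 − Ce)·V/m and uses hypothetical volumes and masses, not values quoted from the paper.

```python
def uptake_mmol_per_g(c0_mM, ce_mM, volume_L, sorbent_g):
    """Drug uptake q (mmol per g of nanocomposite) from initial and residual concentrations."""
    return (c0_mM - ce_mM) * volume_L / sorbent_g   # mmol/L * L / g

# Hypothetical isotherm point: 15 mM 5-fluorouracil, 0.1 g of nanocomposite in 50 mL
print(uptake_mmol_per_g(c0_mM=15.0, ce_mM=2.0, volume_L=0.05, sorbent_g=0.1))  # 6.5 mmol/g
```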
The procedure is described as: Mg 6 Al 2 (OH) 16 It should be noted that the loss of carbonate content at the end of this experiment, which validates the presence of the ion exchange mechanism, was around 1 wt.% Considering the uptake capacity values for 5-fluorouracil, carbonate losses appeared much higher than the stoichiometrically expected ones, suggesting that the incorporation of drug's voluminous molecule caused the release of multiple carbonate ions in order to fit in the interlayer space. The partial replacement of the structural carbonates by 5-fluorouracil was also reflected in the expansion of Mg-Al LDH unit cell which was revealed by the shift of the XRD diffractogram to smaller angles in the drug-loaded sample ( Figure S6). The uptake capacity can be adjusted by varying the dispersion's concentration and therefore, the residual 5-fluorouracil concentration as shown in the adsorption isotherm of Figure S4. The stability of loaded 5-fluorouracil and the release rate can be evaluated by modifying the pH of the dispersion medium ( Figure S5). At pH 8.5, slightly below the loading acidity, the release rate was very low with less than 20 wt.% of the drug to be found in soluble state. However, the loss percentage within 1 h of contact, reached 50 wt.% when the pH was adjusted to 7.4. The weakening of the layered structure by the increase of metal component solubility was the reason for the increasingly observed drug release. Under more acidic conditions, the whole quantity of drug was completely released immediately, however, a significant part was captured back to the nanocomposite within less than 1 h of contact. The last observation could be of high importance when stimuli-responsive drug delivery systems are required. The temperature of the studied medium was another important parameter which defined the release rate even in the short term (Figure 7). More specifically, the loading loss at temperatures of 10, 20 and 35 • C (close to the human body temperature) after 5 h of contact at pH 7.4 was found to be 45, 55 and 70 wt.%, respectively. The temperature-dependence study provides a general view of the release kinetics in a wide temperature range indicating the behavior of the drug-loaded nanocomposite during storage in a refrigerator or its application in cancer treatment. Importantly, even at the higher temperature, around 20% of the initial load was stabilized into the nanocomposite for several hours. A very interesting finding that validates the motivation of this study was that the drug release was very rapid and reached 80 wt.% only by applying the AC magnetic field for 10 min and reaching a temperature of 40 • C by means of magnetic hyperthermia. This result should be attributed to the high localized temperature increase within the nanocomposite mass which was able to initiate a fast release response to the LDH phase. Cellular Uptake and Biocompatibility The biological characterization of the pure and drug-loaded MGT-35, in comparison to the corresponding IONPs, involved cell internalization and the induced cellular toxicity. The physicochemical properties of nanoparticles (e.g., size, shape, charge) are known to affect cellular responses such as internalization (rates and mechanisms) or cytotoxicity [40][41][42][43] and, therefore, knowledge of these parameters is crucial to predict the nanoparticle potential for the magnetic hyperthermia treatment of cancer in real biological conditions that may interfere with performance. 
For example, previous results showed differences in the uptake of magnetic nanoparticles depending on the surface charge (positive vs. negative) [44]. Quantitative analysis indicated that positively charged magnetic nanoparticles avoid the early endosomes and they are preferentially located in the lysosomes. On the other hand, negatively charged nanoparticles were first accumulated in early endosomes and then, transferred to the lysosomes with time. In general, positively charged nanoparticles are considered as more toxic (at least, acutely) than their negatively charged counterparts due to their stronger interaction with cellular components [43,45]. HT29 cells were incubated with the three tested systems, i.e., Fe 3 O 4 nanoparticles, pure MGT-35 and drug-loaded MGT-35 (MGT-35-FU). Figure 8 shows that the nanoparticles were steadily taken up by the cells located in the lysosomes after two days of incubation. The nanomaterials provided a high contrast in the transmitted channel of the confocal microscope, mostly due to a strong intracellular accumulation. This high contrast enabled their tracking inside the cells and colocalization inside the lysosomes (labelled in red). Figure 9 illustrates the toxicity profile at the level of mitochondrial activity of the three samples when exposed at different concentrations ranging from 256 to 0.5 mg/mL for 24 h. Reference IONPs practically exhibited no toxicity at the given concentrations and exposure time. Iron oxide nanoparticles are known to be metabolized with the iron metabolism in the spleen and liver [46]. MGT-35 and MGT-35-FU showed a sigmoidal curve for toxicity. The IC 50 values of Fe 3 O 4 , MGT-35 and MGT-35-FU, which indicate the concentrations needed for each system to kill half of the cell population, were 476 mg/mL, 12 mg/mL, and 18 mg/mL, respectively. It appears that the presence of the Mg-Al LDH modifies the toxicity profile of the sample, although the provided concentration range for safe use without chemically induced side-effects still remains wide. Such observations should be attributed to the relatively good structural stability which favours membrane damage and the higher release of metal ions and carbonates at the acidic conditions occurring in endosomes or lysosomes [47]. Noteworthy, MGT-35-FU and MGT-35 exhibited similar IC 50 values. Their toxic profile appeared to be dominated by the Mg/Al layered double hydroxide matrix while the effect of the 5-fluorouracil presence was minor. Such a finding was attributed to the drug release of around 80% (Figure 7) which took place during the initial washing of the nanocomposite before contact with the cells. The remaining 20% could be stabilized into the structure for the time window of the experiment, as explained by the temperature-dependent kinetic data. The application of localized heating by magnetic hyperthermia enabled this barrier to be overcome and rapidly promoted the complete release of this fraction which appeared to be the most strongly captured. Another possible explanation is that the presence of serum protein corona around the nanoparticles may influence the short-term release kinetics of 5-fluorouracil [41,44]. Conclusions A 100% inorganic-based drug carrier was developed to be used for the controllable delivery and release of anticancer molecules. 
In particular, a nanocomposite built from iron oxide nanoparticles and a Mg/Al layered double hydroxide matrix was produced by an environmentally friendly and scalable procedure that provides very stable synthesis conditions, high production rates and an affordable cost for an engineered drug-delivery system. The nanocomposite is capable of loading up to several mmol/g of anionic molecules, such as 5-fluorouracil, which are then released very rapidly when an AC magnetic field is applied externally, owing to the high heat generation at the microstructure level. Importantly, such advantages are combined with sufficient cell internalization of the nanocomposite and very limited toxicity, even for relatively high applied concentrations. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/nano11071796/s1. Figure S1: Laboratory setup for the continuous-flow production of the hydrotalcite/magnetite nanocomposites. Figure S2: Kinetics of drug loading. Figure S3: Temperature increase during AC field application. Figure S4: Uptake capacity versus residual 5-fluorouracil concentration. Figure S5: Time-dependent leaching behavior of 5-fluorouracil. Figure S6: XRD diagrams of pure hydrotalcite before and after loading with 5-fluorouracil. Figure S7: Scanning electron microscopy images and elemental analysis.
CosyPose: Consistent multi-view multi-object 6D pose estimation We introduce an approach for recovering the 6D pose of multiple known objects in a scene captured by a set of input images with unknown camera viewpoints. First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses. Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing or incorrect object hypotheses, and automatically recovers the number of objects in the scene. Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views. This is achieved by solving an object-level bundle adjustment problem that refines the poses of cameras and objects to minimize the reprojection error in all views. We demonstrate that the proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets. Code and pre-trained models are available on the project webpage https://www.di.ens.fr/willow/research/cosypose/. Introduction The goal of this work is to estimate accurate 6D poses of multiple known objects in a 3D scene captured by multiple cameras with unknown positions, as illustrated in Fig. 1. This is a challenging problem because of the texture-less nature of many objects, the presence of multiple similar objects, the unknown number and type of objects in the scene, and the unknown positions of cameras. Solving this problem would have, however, important applications in robotics where the knowledge of accurate position and orientation of objects within the scene would allow the robot to plan, navigate and interact with the environment. Object pose estimation is one of the oldest computer vision problems [1-3], yet it remains an active area of research [4][5][6][7][8][9][10][11]. The best performing methods that operate on RGB (no depth) images [7,8,[10][11][12] are based on trainable convolutional neural networks and are able to deal with symmetric or textureless objects, which were challenging for earlier methods relying on local [3,[13][14][15][16] or global [17] gradient-based image features. However, most of these works consider objects independently and estimate their poses using a single input (RGB) image. Yet, in practice, scenes are composed of many objects and multiple images of the scene are often available, e.g. obtained by a single moving camera, or in a multi-camera set-up. In this work, we address these limitations and develop an approach that combines information from multiple views and estimates jointly the pose of multiple objects to obtain a single consistent scene interpretation. While the idea of jointly estimating poses of multiple objects from multiple views may seem simple, the following challenges need to be addressed. First, object pose hypotheses made in individual images cannot easily be expressed in a common reference frame when the relative transformations between the cameras are unknown.
This is often the case in practical scenarios where camera calibration cannot easily be recovered using local feature registration because the scene lacks texture or the baselines are large. Second, the single-view 6D object pose hypotheses have gross errors in the form of false positive and missed detections. Third, the candidate 6D object poses estimated from input images are noisy as they suffer from depth ambiguities inherent to single view methods. In this work, we describe an approach that addresses these challenges. We start from 6D object pose hypotheses that we estimate from each view using a new render-and-compare approach inspired by DeepIM [10]. First, we match individual object pose hypotheses across different views and use the resulting object-level correspondences to recover the relative positions between the cameras. Second, gross errors in object detection are addressed using a robust object-level matching procedure based on RANSAC, optimizing the overall scene consistency. Third, noisy single-view object poses are significantly improved using a global refinement procedure based on object-level bundle adjustment. The outcome of our approach that optimizes multi-view COnSistencY, hence dubbed CosyPose, is a single consistent reconstruction of the input scene. Our singleview single-object pose estimation method obtains state-of-the-art results on the YCB-Video [18] and T-LESS [19] datasets, achieving a significant 34.2% absolute improvement over the state-of-the-art [7] on T-LESS. Our multi-view framework clearly outperforms [20] on YCB-Video while not requiring known camera poses and not being limited to a single object of each class per scene. On both datasets, we show that our multi-view solution significantly improves pose estimation and 6D detection accuracy over our single-view baseline. Related work Our work builds on results in single-view and multi-view object 6D pose estimation from RGB images and object-level SLAM. Single-view single-object 6D pose estimation. The object pose estimation problem [15,16] has been approached either by estimating the pose from 2D-3D correspondences using local invariant features [3,13], or directly by estimating the object pose using template-matching [14]. However, local features do not work well for texture-less objects and global templates often fail to detect partially occluded objects. Both of these approaches (feature-based and template matching) have been revisited using deep neural networks. A convolutional neural network (CNN) can be used to detect object features in 2D [4,6,18,21,22] or to directly find 2D-to-3D correspondences [5,7,8,23]. Deep approaches have also been used to match implicit pose features, which can be learned without requiring ground truth pose annotations [12]. The estimated 6D pose of the objects can be further refined [4,10] using an iterative procedure that effectively moves the camera around the object so that the rendered image of the object best matches the input image. Such a refinement step provides important performance improvements and is becoming common practice [8,11] as a final stage of the estimation process. Our single-view single-object pose estimation described in Section 3.2 builds on DeepIM [10]. The performance of 6D pose estimation can be further improved using depth sensors [10,11,18], but in this work we focus on the most challenging scenario where only RGB images are available. Multi-view single-object 6D pose estimation. 
Multiple views of an object can be used to resolve depth ambiguities and gain robustness with respect to occlusions. Prior work using local invariant features includes [15,16,24,25] and involves some form of feature matching to establish correspondences across views to aggregate information from multiple viewpoints. More recently, the multi-view singleobject pose estimation problem has been revisited with a deep neural network that predicts an object pose candidate in each view [20] and aggregates information from multiple views assuming known camera poses. In contrast, our work does not assume the camera poses to be known. We experimentally demonstrate that our approach outperforms [20] despite requiring less information. Multi-view multi-object 6D pose estimation. Other works consider all objects in a scene together in order to jointly estimate the state of the scene in the form of a compact representation of the object and camera poses in a common coordinate system. This problem is known as object-level SLAM [26] where a depth-based object pose estimation method [27] is used to recognize objects from a database in individual images and estimate their poses. The individual objects are tracked across frames using depth measurements, assuming the motion of the sensor is continuous. Consecutive depth measurements also enable to produce hypotheses for camera poses using ICP [28] and the poses of objects and cameras are finally refined in a joint optimization procedure.Another approach [29] uses local RGBD patches to generate object hypotheses and find the best view of a scene. All of these methods, however, strongly rely on depth sensors to estimate the 3D structure of the scene while our method only exploits RGB images. In addition, they assume temporal continuity between the views, which is also not required by our approach. Other works have considered monocular RGB only object-level SLAM [30][31][32]. Related is also [33] where semantic 2D keypoint correspondences across multiple views and local features are used to jointly estimate the pose of a single human and the positions of the observing cameras. All of these works rely on local images features to estimate camera poses. In contrast, our work exploits 6D pose hypotheses generated by a neural network which allows to recover camera poses in situations where feature-based registration fails, as is the case for example for the complex texture-less images of the T-LESS dataset. In addition, [31,32] do not consider full 6D pose of objects, and [20,33] only consider scenes with a single instance of each object. In contrast, our method is able to handle scenes with multiple instances of the same object. 3 Multi-view multi-object 6D object pose estimation In this section, we present our framework for multi-view multi-object pose estimation. We begin with an overview of the approach (Sec. 3.1 and Fig. 2), and then detail the three main steps of the approach in the remaining sections. Approach overview Our goal is to reconstruct a scene composed of multiple objects given a set of RGB images. We assume that we know the 3D models of objects of interest. However, there can be multiple objects of the same type in the scene and no information on the number or type of objects in the scene is available. Furthermore, objects may not be visible in some views, and the relative poses between the cameras are unknown. Our output is a scene model, which includes the number of objects of each type, their 6D poses and the relative poses of the cameras. 
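The inputs and outputs described above map naturally onto a few small data structures; the sketch below uses illustrative names that do not come from the released code, and simply makes the distinction between per-view object candidates and the final scene model explicit.

```python
# Minimal data structures mirroring the inputs/outputs described above.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class ObjectCandidate:
    """A single-view hypothesis: an object label and a pose in the camera frame."""
    view_id: int
    label: str
    T_cam_obj: np.ndarray           # 4x4 homogeneous transform (SE(3))
    score: float = 1.0

@dataclass
class PhysicalObject:
    """A scene-level object obtained by grouping consistent candidates across views."""
    label: str
    T_world_obj: np.ndarray         # 4x4 pose in the common world frame
    candidates: List[ObjectCandidate] = field(default_factory=list)

@dataclass
class Scene:
    """Final output: camera poses and physical objects in a common frame."""
    T_world_cams: List[np.ndarray]  # one 4x4 pose per input view
    objects: List[PhysicalObject]
```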
Our approach is composed of three main stages, summarized in Fig. 2 Fig. 2: Multi-view multi-object 6D pose estimation. In the first stage, we obtain initial object candidates in each view separately. In the second stage, we match these object candidates across views to recover a single consistent scene. In the third stage, we globally refine all object and camera poses to minimize multi-view reprojection error. In the first stage, we build on the success of recent methods for single-view RGB object detection and 6D pose estimation. Given a set of objects with known 3D models and a single image of a scene, we output a set of candidate detections for each object and for each detection the 6D pose of the object with respect to the camera associated to the image. Note that some of these detections and poses are wrong, and some are missing. We thus consider the poses obtained in this stage as a set of initial object candidates, i.e. objects that may be seen in the given view together with an estimate of their pose with respect to this view. This object candidate generation process is described in Sec. 3.2. In the second stage, called object candidate matching and described in detail in Sec. 3.3, we match objects visible in multiple views to obtain a single consistent scene. This is a difficult problem since object candidates from the first stage typically include many errors due to (i) heavily occluded objects that might be mis-identified or for which the pose estimate might be completely wrong; (ii) confusion between similar objects; and (iii) unusual poses that do not appear in the training set and are not detected correctly. To tackle these challenges, we take inspiration from robust patch matching strategies that have been used in the structure from motion (SfM) literature [34,35]. In particular, we design a matching strategy similar in spirit to [36] but where we match entire 3D objects across views to obtain a single consistent 3D scene, rather than matching local 2D patches on a single 3D object [36]. The final stage of our approach, described in Section 3.4, is a global scene refinement. We draw inspiration from bundle adjustment [37], but the optimization is performed at the level of objects: the 6D poses of all objects and cameras are refined to minimize a global reprojection error. 3.2 Stage 1: object candidate generation Our system takes as input multiple photographs of a scene {I a } and a set of 3D models, each associated to an object label l. We assume the intrinsic parameters of camera C a associated to image I a are known as is usually the case in single-view pose estimation methods. In each view I a , we obtain a set of object detections using an object detector (e.g. FasterRCNN [38], RetinaNet [39]), and a set of candidate pose estimates using a single-view single-object pose estimator (e.g. PoseCNN [18], DPOD [8], DeepIM [10]). While our approach is agnostic to the particular method used, we develop our own single-view single-object pose estimator, inspired by DeepIM [10], which improves significantly over state of the art and which we describe in the next paragraph. Each 2D candidate detection in view I a is identified by an index α and corresponds to an object candidate O a,α , associated with a predicted object label l a,α and a 6D pose estimate T CaOa,α with respect to camera C a . We model a 6D pose T ∈ SE(3) as a 4 × 4 homogeneous matrix composed of a 3D rotation matrix and a 3D translation vector. Single-view 6D pose estimation. 
We introduce a method for single-view 6D object pose estimation building on the idea of DeepIM [10] with some simplifications and technical improvements. First, we use a more recent neural-network architecture based on EfficientNet-B3 [40] and do not include auxiliary signals while training. Second, we exploit the rotation parametrization recently introduced in [41], which has been shown to lead to more stable CNN training than quaternions. Third, we disentangle depth and translation prediction in the loss following [42] and handle symmetries explicitly as in [9] instead of using the point-matching loss. Fourth, instead of fixing focal lengths to 1 during training as in [10], we use focal lengths of the camera equivalent to the cropped images. Fifth, in addition to the real training images supplied with both dataset, we also render a million images for each dataset using the provided CAD models for T-LESS and the reconstructed models for YCB-Video. The CNNs are first pretrained using synthetic data only, then fine-tuned on both real and synthetic images. Finally, we use data augmentation on the RGB images while training our models, which has been demonstrated to be crucial to obtain good performance on T-LESS [12]. We also note that this approach can be used for coarse estimation simply by providing a canonical pose as the input pose estimate during both training and testing. We rendered objects at a distance of 1 meter from the camera and used this approach to perform coarse estimate on T-LESS. Additional details are provided in the appendix. Object symmetries. Handling object symmetries is a major challenge for object pose estimation since the object pose can only be estimated up to a symmetry. This is in particular true for our object candidates pose estimates. We thus need to consider symmetries explicitly together with the pose estimates. Each 3D model l is associated to a set of symmetries S(l). Following the framework introduced in [43], we define the set of symmetries S(l) as the set of transformations S that leave the appearance of object l unchanged: where R(l, X) is the rendered image of object l captured in pose X and S is the rigid motion associated to the symmetry. Note that S(l) is infinite for objects that have axes of symmetry (e.g. bowls). Given a set of symmetries S(l) for the 3D object l, we define the symmetric distance D l which measures the distance between two 6D poses represented by transformations T 1 and T 2 . Given an object l associated to a set X l of |X l | 3D points x ∈ X l , we define: (2) D l (T 1 , T 2 ) measures the average error between the points transformed with T 1 and T 2 for the symmetry S that best aligns the (transformed) points. In practice, to compute this distance for objects with axes of symmetries, we discretize S(l) using 64 rotation angles around each symmetry axis, similar to [9]. Stage 2: object candidate matching As illustrated in Fig. 2, given the object candidates for all views {O a,α }, our matching module aims at (i) removing the object candidates that are not consistent across views and (ii) matching object candidates that correspond to the same physical object. We solve this problem in two steps detailed below: (A) selection of candidate pairs of objects in all pairs of views, and (B) scene-level matching. A. 2-view candidate pair selection. 
We first focus on a single pair of views (I a , I b ) of the scene and find all pairs of object candidates (O a,α , O b,β ), one in each view, which correspond to the same physical object in these two views. To do so, we use a RANSAC procedure where we hypothesize a relative pose between the two cameras and count the number of inliers, i.e. the number of consistent pairs of object candidates in the two views. We then select the solution with the most inliers which gives associations between the object candidates in the two views. In the rest of the section, we describe in more detail how we sample relative camera poses and how we define inlier candidate pairs. Sampling of relative camera poses. Sampling meaningful camera poses is one of the main challenges for our approach. Indeed, directly sampling at random the space of possible camera poses would be inefficient. Instead, as usual in RANSAC, we sample pairs of object candidates (associated to the same object label) in the two views, hypothesize that they correspond to the same physical object and use them to infer a relative camera pose hypothesis. However, since objects can have symmetries, a single pair of candidates is not enough to obtain a relative pose hypothesis without ambiguities and we thus sample two pairs of object candidates, which in most cases is sufficient to disambiguate symmetries. In detail, we sample two tentative object candidate pairs with pair-wise consistent labels (O a,α , O b,β ) and (O a,γ , O b,δ ) and use them to build a relative camera pose hypothesis, T CaC b . We obtain the relative camera pose hypothesis by (i) assuming that (O a,α , O b,β ) correspond to the same physical object and (ii) disambiguating symmetries by assuming that (O a,γ , O b,δ ) also correspond to the same physical object, and thus selecting the symmetry that minimize their symmetric distance where l = l a,α = l b,β is the object label associated to the first pair, and S is the object symmetry which best aligns the point clouds associated to the second pair of objects (O a,γ and O b,δ ). If the union of the two physical objects is symmetric, e.g. two spheres, the pose computed may be incorrect but it would not be verified by a third pair of objects, and the hypothesis would be discarded. Counting pairs of inlier candidates. Let's assume we are given a relative pose hypothesis between the cameras T CaC b . For each object candidate O a,α in the first view, we find the object candidate in the second view O b,β with the same label l = l a,α = l b,β that minimizes the symmetric distance D l (T CaOa,α , . In other words, O b,β is the object candidate in the second view closest to O a,α under the hypothesized relative pose between the cameras. This pair (O a,α , O b,β ) is considered an inlier if the associated symmetric distance is smaller than a given threshold C. The total number of inliers is used to score the relative camera pose T CaC b . Note that we discard the hypothesis which have fewer than three inliers. B. Scene-level matching. We use the result of the 2-view candidate pair selection applied to each image pair to define a graph between all candidate objects. Each vertex corresponds to an object candidate in one view and edges correspond to pairs selected from 2-view candidate pair selection, i.e. pairs that had sufficient inlier support. We first remove isolated vertices, which correspond to object candidates that have not been validated by other views. 
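A compact sketch of this grouping step is given below: the validated 2-view pairs are edges of a graph over candidates, isolated vertices are dropped, and each remaining connected component becomes one physical object. The union-find helper and the names are illustrative, not taken from the released code.

```python
from collections import defaultdict

def group_candidates(num_candidates, validated_pairs):
    """Group single-view candidates into physical objects: candidates are vertices,
    validated 2-view pairs are edges, and each connected component with at least
    two members is kept as one physical object."""
    parent = list(range(num_candidates))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i, j in validated_pairs:            # union the endpoints of each validated pair
        parent[find(i)] = find(j)

    components = defaultdict(list)
    for i in range(num_candidates):
        components[find(i)].append(i)
    # Isolated vertices correspond to candidates not validated by any other view.
    return [c for c in components.values() if len(c) >= 2]
```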
Then, we associate to each connected component in the graph a unique physical object, which corresponds to a set of initial object candidates originating from different views. We call these physical objects P 1 , ...P N with N the total number of physical objects, i.e. the number of connected components in the graph. We write (a, α) ∈ P n to denote the fact that an object candidate O a,α is in the connected component of object P n . Since all the objects in a connected component share the same object label (they could not have been connected otherwise), we can associate without ambiguity an object label l n to each physical object P n . Stage 3: scene refinement After the previous stage, the correspondences between object candidates in the individual images are known, and the non-coherent object candidates have been removed. The final stage aims at recovering a unique and consistent scene model by performing global joint refinement of objects and camera poses. In detail, the goal of this stage is to estimate poses of physical objects P n , represented by transformations T P1 , . . . , T P N , and cameras C v , represented by transformations T C1 , . . . , T C V , in a common world coordinate frame. This is similar to the standard bundle adjustment problem where the goal is to recover the 3D points of a scene together with the camera poses. This is typically addressed by minimizing a reconstruction loss that measures the 2D discrepancies between the projection of the 3D points and their measurements in the cameras. In our case, instead of working at the level of points as done in the bundle adjustment setting, we introduce a reconstruction loss that operates at the level of objects. More formally, for each object present in the scene, we introduce an objectcandidate reprojection loss accounting for symmetries. We define the loss for a candidate object O a,α associated to a physical object P n (i.e. (a, α) ∈ P n ) and the estimated candidate object pose T CaOa,α with respect to C a as: where ||·|| is a truncated L2 loss, l = l n is the label of the physical object P n , T Pn the 6D pose of object P n in the world coordinate frame, T Ca the pose of camera C a in the world coordinate frame, X l the set of 3D points associated to the 3D model of object l, S(l) the symmetries of the object model l, and the operator π a corresponds to the 2D projection of 3D points expressed in the camera frame C a by the intrinsic calibration matrix of camera C a . The inner sum in Eq. (5) is the error between (i) the 3D points x of the object model l projected to the image with the single view estimate of the transformation T CaOα that is associated with the physical object (i.e. (a, α) ∈ P n ) (first term, the image measurement) and (ii) the 3D points T Pn x on the object P n projected to the image by the global estimate of camera C a (second term, global estimates). Recovering the state of the unique scene which best explains the measurements consists in solving the following consensus optimization problem: where the first sum is over all the physical objects P n and the second one over all object candidates O a,α corresponding to the physical object P n . In other words, we wish to find global estimates of object poses T Pn and camera poses T Ca to match the (inlier) object candidate poses T CaOa,α obtained in the individual views. The optimization problem is solved using the Levenberg-Marquart algorithm. We provide more details in the appendix. 
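To make the object-level reprojection objective concrete, a condensed sketch of the per-candidate residual is given below. It follows the description above, with the symmetry set represented as a plain list of 4x4 transforms and an assumed truncation threshold; it is a sketch, not the exact implementation used in the paper.

```python
import numpy as np

def project(K, X_cam):
    """Pinhole projection of Nx3 camera-frame points with intrinsics K (3x3)."""
    uvw = X_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def candidate_reprojection_loss(T_cam_obj, T_world_cam, T_world_obj,
                                points, symmetries, K, trunc_px=25.0):
    """2D reprojection error of one object candidate against the current global
    scene estimate, minimized over the object symmetries; trunc_px is an assumed
    truncation threshold in pixels."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    # Measurement term: model points projected with the single-view pose estimate.
    measured = project(K, (homog @ T_cam_obj.T)[:, :3])
    T_cam_world = np.linalg.inv(T_world_cam)
    best = np.inf
    for S in symmetries:
        # Global term: the physical object (up to a symmetry S) seen through the
        # current global camera and object pose estimates.
        X_cam = (homog @ (T_cam_world @ T_world_obj @ S).T)[:, :3]
        errors = np.linalg.norm(measured - project(K, X_cam), axis=1)
        best = min(best, float(np.mean(np.minimum(errors, trunc_px))))
    return best
```

A full solver stacks these residuals over all (view, candidate) pairs and minimizes them jointly over the camera poses and object poses, for example with a Levenberg-Marquardt routine as described above.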
Results In this section, we experimentally evaluate our method on the YCB-Video [18] and T-LESS [19] datasets, which both provide multiple views and ground truth 6D object poses for cluttered scenes with multiple objects. In Sec. 4.1, we first validate and analyze our single-view single-object 6D pose estimator. We notably show that our single-view single-object 6D pose estimation method already improves state-of-the-art results on both datasets. In Sec. 4.2, we validate our multi-view multi-object framework by demonstrating consistent improvements over the single-view baseline. Single-view single-object experiments Evaluation on YCB-Video. Following [5,10,18], we evaluate on a subset of 2949 keyframes from videos of the 12 testing scenes. We use the standard ADD-S and ADD(-S) metrics and their area-under-the-curves [18] (please see appendix for details on the metrics). We evaluate our refinement method using the same detections and coarse estimates as DeepIM [10], provided by PoseCNN [18]. We ran two iterations of pose refinement network. Results are shown in Table 1a. Our method improves over the current-state-of-the-art DeepIM [10], by approximately 2 points on the AUC of ADD-S and ADD(-S) metrics. Evaluation on T-LESS. As explained in Section 3.2, we use our single-view approach both for coarse pose estimation and refinement. We compare our method against the two recent RGB-only methods Pix2Pose [7] and Implicit [12]. For a fair comparison, we use the detections from the same RetinaNet model as in [7]. We report results on the SiSo task [44] and use the standard visual surface discrepancy (vsd) recall metric with the same parameters as in [7,12]. Results are presented in Table 1b. On the e vsd < 0.3 metric, our {coarse + refinement} solution achieves a significant 34.2% absolute improvement compared to existing state-of-the-art methods. Note that [10] did not report results on T-LESS. We also evaluate on this dataset the benefits of the key components of our single view approach compared to the components used in DeepIM [10]. More precisely, we evaluate the importance of the base network (our EfficientNet vs FlowNet pre-trained), loss (our symmetric and disentangled vs. point-matching loss with L 1 norm), rotation parametrization (our using [41] vs. quaternions) and data augmentation (our color augmentation, similar to [12] vs. none). Loss, network and rotation parametrization bring a small but clear improvement. Using data augmentation is crucial on the T-LESS dataset where training is performed only on synthetic data and real images of the objects on dark background. Multi-view experiments As shown above, our single-view method achieves state-of-the-art results on both datasets. We now evaluate the performance of our multi-view approach to estimate 6D poses in scenes with multiple objects and multiples views. Implementation details. On both datasets, we use the same hyper-parameters. In stage 1, we only consider object detections with a score superior to 0.3 to limit the number of detections. In stage 2, we use a RANSAC 3D inlier threshold of C = 2 cm. This low threshold ensures that no outliers are considered while associating object candidates. We use a maximum number of 2000 RANSAC iterations for each pair of views, but this limit is only reached for the most complex scenes of the T-LESS dataset containing tens of detections. 
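For reference, the hyper-parameters quoted in this section can be gathered into a single configuration object; the sketch below only records the values stated in the text, and the field names are illustrative rather than those of the released code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MultiViewPipelineConfig:
    """Hyper-parameters shared across both datasets, as quoted in the text."""
    detection_score_threshold: float = 0.3   # stage 1: keep detections with score > 0.3
    ransac_inlier_threshold_m: float = 0.02  # stage 2: symmetric distance threshold C = 2 cm
    ransac_max_iterations: int = 2000        # stage 2: per pair of views
    min_inliers_per_hypothesis: int = 3      # stage 2: hypotheses with fewer inliers are discarded
    lm_iterations: int = 100                 # stage 3: Levenberg-Marquardt budget
```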
For instance, in the context of two views with six different 6D object candidates in each view, only 15 RANSAC iterations are enough to explore all relative camera pose hypotheses. For the scene refinement (stage 3), we use 100 iterations of Levenberg-Marquart (the optimization typically converges in less than 10 iterations). Evaluation details. In the single-view evaluation, the poses of the objects are expressed with respect to the camera frame. To fairly compare with the singleview baseline, we also evaluate the object poses in the camera frames, that we compute using the absolute object poses and camera placements estimated by our global scene refinement method. Standard metrics for 6D pose estimation strongly penalize methods with low detection recall. To avoid being penalized for removing objects that cannot be verified across several views, we thus add the initial object candidates to the set of predictions but with confidence scores strictly lower than the predictions from our full scene reconstruction. Multi-view multi-object quantitative results. The problem that we consider, recovering the 6D object poses of multiple known objects in a scene captured by several RGB images taken from unknown viewpoints has not, to the best of our knowledge, been addressed by prior work reporting results on the YCB-Video and T-LESS datasets. The closest work is [20], which considers multiview scenarios on YCB-Video and uses ground truth camera poses to align the viewpoints. In [20], results are provided for prediction using 5 views. We use our approach with the same number of input images but without using ground truth calibration and report results in Table 2a. Our method significantly outperforms [20] in both single-view and multi-view scenarios. We also perform multi-view experiments on T-LESS with a variable number of views. We follow the multi-instance BOP [44] protocol for ADD-S<0.1d and e vsd < 0.3. We also analyze precision-recall tradeoff similar to the standard practice in object detection. We consider positive predictions that satisfy ADD-S<0.1d and report mAP@ADD-S<0.1d. Results are shown in Table 2b for the ViVo task on 1000 images. To the best of our knowledge, no other method has reported results on this task. As expected, our multi-view approach brings significant improvements compared to only single-view baseline. Benefits of scene refinement. To demonstrate the benefits of global scene refinement (stage 3), we report in Table 3 the average ADD-S errors of the inlier candidates before and after solving the optimization problem of Eq.(6). We note a clear relative improvement, around 20% on both datasets.. Relative camera pose estimation. A key feature of our method is that it does not require camera position to be known and instead robustly estimates it from the 6D object candidates. We investigated alternatives to our joint camera pose estimation. First, we used COLMAP [45,46], a popular feature-based SfM software, to recover camera poses. On randomly sampled groups of 5 views from the YCB-Video dataset COLMAP outputs camera poses in only 67% of cases compared to 95% for our method. On groups of 8 views from the more difficult T-LESS dataset, COLMAP outputs camera poses only in 4% of cases, compared to 74% for our method. Our method therefore demonstrates a significant interest compared to COLMAP that uses features to recover camera poses, especially for complex textureless scenes like in the T-LESS dataset. 
Second, instead of estimating camera poses using our approach, we investigated using ground truth camera poses available for the two datasets. We found that the improvements using ground truth camera poses over the camera poses recovered automatically by our method were only minor: within 1% for T-LESS (4 views) and YCB-Video (5 views), and within 3% for T-LESS (8 views). This demonstrates that our approach recovers accurate camera poses even for scenes containing only symmetric objects as in the T-LESS dataset. Qualitative results. We provide examples of recovered 6D object poses in Fig. 3 where we show both object candidates and the final estimated scenes. Please see the appendix for additional results, including detailed discussion of failure modes. Results on the YCB-Video are available on the project webpage 6 . Computational cost. For a common case with 4 views and 6 2D detections per view, our approach takes approximately 320 ms to predict the state of the scene. This timing includes: 190 ms for estimating the 6D poses of all candidates (stage 1, 1 iteration of the coarse and refinement networks), 40 ms for the object candidate association (stage 2) and 90 ms for the scene refinement (stage 3). Further speed-ups towards real-time performance could be achieved, for example, by exploiting temporal continuity in a video sequence. Conclusion We have developed an approach, dubbed CosyPose, for recovering the 6D pose of multiple known objects viewed by several non-calibrated cameras. Our main contribution is to combine learnable 6D pose estimation with robust multi-view matching and global refinement to reconstruct a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing and incorrect object hypothesis, and automatically recovers the camera poses and the number of objects in the scene. These results make a step towards the robustness and accuracy required for visually driven robotic manipulation in unconstrained scenarios with moving cameras, and open-up the possibility of including object pose estimation in an active visual perception loop. Appendix The appendix is organized as follows. In Sec. A, we give more details of our single-view single-object 6D object pose estimator. In Sec. B we illustrate the object candidate matching strategy on a simple 2D example. In Sec. C, we give additional details about our parametrization and initialization of the object-level bundle adjustment problem, introduced in Sec. 3.4 of the main paper. Sec. D presents the datasets used in the main paper and recalls the metrics that are used for each dataset. Finally, in Sec. E we present additional qualitative results of our multi-view multi-object 6D pose estimation approach. We discuss in detail some examples to illustrate key benefits of our method as well as point out the main limitations. Examples randomly selected from the results on the T-LESS and YCB-Video datasets are available on the project webpage 7 . A Our single-view single-object method We now detail our single-view single-object pose estimation network introduced in Sec. 3.2 of the main paper. Our method builds on DeepIM [10] but includes several extensions and improvements. Given a single image I a and a 2D detection D a,α associated with an object label l a,α , our method outputs an hypothesis for the pose of the object with respect to the camera. This pose is noted T Ca,Oaα . 
In this section, we focus on one view and one object and thus omit the a and α subscripts. Similar to DeepIM [10], we use a deep neural network that takes as input two images and iteratively refines the pose. The first image is the (real) input image I cropped on a region of the image showing the object, denoted I c . At iteration k, the second image is a (synthetic) rendering of the object with label l rendered in a pose T k−1 C,O that corresponds to the object pose estimated at the previous iteration. The network outputs an updated refined pose T k C,O . The initial pose T 0 C,O can be provided by any coarse 6D pose estimation method (such as PoseCNN [18]) but we also show that we can simply use a canonical pose of the object for T 0 C,O as explained in the "Coarse estimation" pagraph below. We now detail our method and present the main differences with [10]. Network architecture. The network takes as input the concatenation of the synthetic and real cropped images. Both images are resized to the input resolution: 320 × 240. The backbone is EfficientNet-B3 [41] followed by spatial average pooling. The prediction layer is a simple fully connected layer which outputs 9 values corresponding to one vector [v x , v y , v z ] for the translation and two vectors e 1 , e 2 to predict the rotation component of T CO . A rotation matrix R is recovered from e 1 , e 2 using [41] by simply orthogonalizing the basis defined by the two vectors e 1 , e 2 . Please see "Rotation parametrization" for the equations to recover the rotation matrix R from e 1 , e 2 . Compared to DeepIM [10], the main difference is that we use a more recent network architecture (DeepIM is based on FlowNet [47]) and we do not include auxiliary predictions of flow and mask. This makes the method simpler and easier to train. Our input resolution of 320 × 240 is also smaller than 640 × 480 used by DeepIM, reducing memory consumption and allowing to use larger batches while training. Transformation parametrization. Similar to DeepIM, we use the objectindependent rotation and translation parametrization which consists in predicting a rotation of the camera around the object, a xy translation [v x , v y ] in image space (in pixels) for the center of the rendered object and a relative displacement v z along the depth axis of the camera. Given the input pose T k CO and the outputs of the network ([v x , v y , v z ] and R = f (e 1 , e 2 )), the pose update is obtained from the following equations: where [x k , y k , z k ] is the 3D translation vector of T k CO , R k the rotation matrix of T k CO , f C x and f C y are the focal lengths that correspond to the (fictive) camera associated with the cropped input image I C . Finally, [x k+1 , y k+1 , z k+1 ] and R k+1 are the parameters of the output pose estimate T k+1 CO . The differences with DeepIM are twofold. First, we use a linear parametrization of the relative depth (eq. (9)), instead of z k+1 = z k e −vz , which we found more stable to train. Second, we use the intrinsics f C x , f C y of the cropped camera associated with the input (cropped) image. DeepIM uses the intrinsics parameters of the noncropped camera f x , f y and fix them to 1 during training because the intrinsic parameters of the input camera are fixed on their datasets. We use the cropped focal lengths instead because (a) cropping and resizing the crop of the input image changes the apparent focal length and (b) the focal lengths of the input images are not unique on T-LESS. 
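Two details of the parametrization above are easy to make concrete: the recovery of the rotation matrix from the two predicted vectors e1, e2 (following the construction of [41], as it is commonly implemented) and the intrinsics of the fictive camera associated with a crop resized to the 320 x 240 network input. A minimal sketch, with illustrative names:

```python
import numpy as np

def rotation_from_6d(e1, e2):
    """Rotation matrix from the 6D representation of [41]: Gram-Schmidt
    orthogonalization of e1, e2 and a cross product for the third axis."""
    b1 = e1 / np.linalg.norm(e1)
    b2 = e2 - np.dot(b1, e2) * b1        # remove the component along b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=1)  # columns form an orthonormal basis

def cropped_intrinsics(K, crop_xywh, out_w=320, out_h=240):
    """Intrinsics of the fictive camera of a crop resized to the network input:
    cropping shifts the principal point, resizing rescales the focal lengths."""
    x0, y0, w, h = crop_xywh
    sx, sy = out_w / w, out_h / h
    K_crop = K.copy().astype(float)
    K_crop[0, 0] *= sx                   # f_x
    K_crop[1, 1] *= sy                   # f_y
    K_crop[0, 2] = (K[0, 2] - x0) * sx   # c_x
    K_crop[1, 2] = (K[1, 2] - y0) * sy   # c_y
    return K_crop
```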
Using the cropped focal lengths forces the network to only predict xy translations in pixels and the network can therefore become invariant to the intrinsic parameters of the input (cropped) camera. Rotation parametrization. Given two vectors e 1 and e 2 (6 values) predicted by the neural network, we recover a rotation parametrization R by following [41]: where ∧ is the cross product between two 3D vectors. This representation has been shown to be better than quaternions (used by DeepIM) to regress with a neural network [41]. Cropping strategy. DeepIM uses (a) the input 2D detections and (b) the bounding box defined by T k CO and the vertices of the object l to define the size and location of the crop in the real input image during training. Indeed, the ground truth bounding box is known during training. At test time, only (b) is used by DeepIM because ground truth bounding boxes are not available. In our case, we only use (b) while training and testing. The intrinsic parameters of the cropped camera are also used to directly render the cropped synthetic image at a resolution of 320 × 240 instead of rendering at a larger resolution followed by cropping. Symmetric disentangled loss. A standard loss for 6D pose estimation is ADD-S [18] which allows to predict pose of symmetric objects. Our loss is inspired by ADD-S loss with two main differences. First, we enumerate all the possible symmetries to find the best matching between the vertices of the predicted model and the ground truth model instead of finding the nearest neighbors. This is similar in spirit to the approach of [9] to handle object symmetries. Second, we disentangle depth v z and translation predictions v x , v y , following the recommendations from [42]. More formally, we define the update function F which takes as input the initial estimate of the pose T k CO , the outputs of the neural network [v x , v y , v z ] and R, and outputs the updated pose, i.e. the function such that where the closed form of F is expressed in equations (7)(8)(9)(10) of the appendix. We also write [v x ,v y ,v z ] andR the target predictions, i.e. the predic- , whereT CO is the ground truth pose of the object. Our loss function is then: where D l is the symmetric distance defined in the Sec. 3.2 of the main paper, with the L 2 norm replaced by the L 1 norm. The different terms of this loss separate the influence of: xy translation (15), relative depth (16) and rotation (17). We refer to [42] for additional explanations of the loss disentanglement. Coarse estimation. To perform coarse estimation on T-LESS, we use the same network architecture, parametrization and losses defined above. As input T 0 CO we provide a canonical input pose that corresponds to the object being rendered at a distance of 1 meter of the camera in the center of the input 2D bounding box. The coarse and refinement networks use the same architecture, but the weights are distinct. Each network is trained independently. Training data. Due to the complexity of annotating real data with 6D pose at large scale, most recent methods [8,10,12] generate additionnal synthetic training data. In our experiments, we use the real training images provided by YCB-Video and the images of the real objects displayed individually on black backgrounds provided by T-LESS. In addition, we generate one million synthetic training images on each dataset using a simple procedure described next. 
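A condensed sketch of the sampling procedure described next is given below; it covers only the pose sampling (the physics-simulation variant, texture randomization and rendering are omitted), and the helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_synthetic_scene(object_models, box_size=0.5):
    """Sample 3 to 9 objects with random poses inside a 50 cm box, plus a camera
    roll angle in (-10, 10) degrees, as described in the text."""
    n_objects = rng.integers(3, 10)
    labels = rng.choice(len(object_models), size=n_objects)
    poses = []
    for _ in range(n_objects):
        t = rng.uniform(-box_size / 2, box_size / 2, size=3)   # position inside the box
        angles = rng.uniform(0, 2 * np.pi, size=3)              # random orientation
        cx, cy, cz = np.cos(angles)
        sx, sy, sz = np.sin(angles)
        # Rotation from Euler angles (any convention is fine for random sampling).
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = t
        poses.append(T)
    camera_roll_deg = rng.uniform(-10, 10)
    return labels, poses, camera_roll_deg
```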
We randomly sample 3 to 9 objects from the set of 3D models considered, place them randomly in a 3D box of size 50 cm and sample randomly the orientation of each object. Half of the images are generated with objects flying in the air, the other half is generated by taking the images after running physics simulation for a few seconds, generating physically feasible object configurations. This is similar to the approach described in [6,48], though none of our rendered images are photorealistic. The camera is pointed at the center of the 3D box, its position is sampled uniformly above the box center at the same range of distance as the one of the real training data, and its roll angle is sampled between (-10, 10) degrees. On T-LESS, the distance to the object is fixed in the real training images and we use instead the range of distances of the testing set provided (which is explicitly allowed by the guidelines of the BOP challenge [44] 8 ). We do not use any information from the testing set beside this distance interval. On the T-LESS dataset, we generate data using the CAD models only. We add random textures on the CAD models following work on domain randomization [49][50][51]. We also paste images from the Pascal VOC dataset in the background with a probability 0.3, following [10]. On both datasets, we add data augmentation to the input RGB images while training, following [12]. Data augmentation includes gaussian blur, contrast, brightness, color and sharpness filters from the Pillow library [52]. Examples of training images are shown in Fig. 4. Finally, when training the refinement network, we use the same distribution as DeepIM for the input poses. Training procedure. All of the networks (refinement network on YCB-Video, coarse network on T-LESS, refinement network on T-LESS) are trained using the same procedure. We use the Adam optimizer [53] with a learning rate of 3.10 −4 and default momentum parameters. Networks are trained using Pytorch and synchronous distributed training on 32 gpus, with 32 images per GPU for a total batch size of 1024. The networks are randomly initialized and we use the following training procedure. First, the network is trained for 80k iterations on synthetic data only. Then, the network is trained for another 80k iterations on both real and synthetic training images. In this second phase, the real training images account for around 25% of each batch. Following [54], we also use a warmup phase where we progressively increase the learning rate from 0 to 3.10 −4 during the first 5k iterations. Experimental findings. On YCB-Video, we found that pre-training the model on synthetic data yields an improvement of approximately 2 points on the AUC of ADD(-S) metric. Without this pre-training phase, our model performed comparably to the results reported by DeepIM. Note that this is hard to directly compare because the synthetic training images are different from the ones used by DeepIM. On T-LESS, we found that the data augmentation is crucial as also pointed out by [12]. Without data augmentation, the performance of the coarse and refinement networks is poor, with a e vsd < 0.3 score of around 37% compared to 64% when training with data augmentation. B Object candidate matching: additional illustration In Fig. 5, we illustrate our method for "Sampling of relative camera poses sampling" described in Sec. 3.3 of the main paper with a simple 2D example. , we estimate the relative camera pose TC a C b that best aligns candidates Oa,γ, O b,δ . 
In this example, the red camera pose C b is also valid due to the symmetries of the triangular object lα. It is discarded because, under this hypothesis, the error between O b,δ and Oa,γ is larger than under the retained camera pose. C Scene refinement Initialization. There are multiple ways to initialize the optimization problem defined in equation (6) of the main paper. We use the following procedure. We start by picking a random camera and setting its coordinate frame as the world coordinate frame. Then, we iterate over all cameras, trying to initialize each one. In order to initialize a camera a, we randomly sample another camera b which is already initialized (placed in the world coordinate frame) and use the relative pose between these two cameras T CaC b estimated while running RANSAC (relative camera pose sampling in Sec. 3.2) to place camera a in the world coordinate frame. Once all the cameras have been initialized, we initialize objects by randomly picking an object p and initializing it using a candidate associated with this physical object from a random view. Rotation parametrization. We use the same rotation parametrization as the one used for our single-view single-object network, for which the equations are provided in Sec. A of this appendix. D.1 Datasets In this section, we give details of the datasets used in our experiments. YCB-Video. The YCB-Video [18] dataset is made of 92 scenes with around 1000 images per scene. The dataset is split into 80 scenes for training and 12 scenes for testing. It is mostly challenging due to the variations in lighting conditions, significant image noise and occlusions. The objects are picked from a subset of 21 objects from the YCB object set [55] for which reconstructed 3D models are available. The models are presented in Fig. 7. These models are used to generate additional synthetic training images. There is at most one instance of each object per scene and most of the objects are visually distinct, with the exception of the large and extra-large clamps. When testing, we follow previous works [5,10,18] and evaluate on a subset of 2949 keyframes. The variety of the viewpoints for each scene is limited, as the camera is usually moved in front of the scene but not completely around it. T-LESS. The T-LESS [19] dataset is made of 20 scenes featuring multiple industry-relevant objects. There are 30 object instances, all of them textureless and most of them symmetric. The reconstructed 3D models of these objects are presented in Fig. 7. Many objects have a similar visual appearance, making the class prediction task challenging for the object detector. The images in the dataset are taken all around the scene. Scene complexity varies from 3 objects of different types to up to 18 objects, with 7 belonging to the same type. In single-view experiments we consider all images of the testing scenes to provide a meaningful comparison with [7,12]. For multi-view experiments we consider the subset of the BOP19 challenge [44]. We use the CAD models for generating synthetic images and for evaluation. D.2 Metrics In this section, we give some details about the metrics reported in the main paper. We refer to [44,56] for more information about these metrics. The ADD (average distance) metric is introduced in [56] and is typically used to measure the accuracy of pose estimation for non-symmetric objects. Given a label l of an object and following the notation introduced in Sec.
3.2 of the main paper, this metric is computed as : where T is the predicted object pose,T is the ground truth pose, X h l are the vertices of the 3D models and H l is the number of vertices of the model of the object l. For symmetric objects, the average distance is computed using the closest point distance and noted ADD-S: The notation ADD(-S) corresponds to computing ADD for non symmetric objects and ADD-S for symmetric objects. It is also common to report the percentage of objects for which the pose is estimated within a given threshold such as 10% of it's diameter. We use the notations ADD-S < 0.1d and ADD(-S) < 0.1d for this metric and report the mean computed over object types. The authors of PoseCNN [18] also proposed to report the area under the accurracy-threshold curve for a threshold (on ADD-S, or ADD(-S)) varying between 0 to 10cm. We note this metric as AUC of ADD(-S) or AUC of ADD-S and we use the implementation provided with the evaluation code 9 of YCB-Video. When evaluating on the T-LESS dataset, we also report the Visual Surface Discrepancy metric (vsd). This metric is invariant to object symmetries and takes into account the visibility of the object. As in [7,12], the pose is considered correct when the error is less than 0.3 with τ = 20mm and δ = 15mm. We note this metric e vsd < 0.3 and use the official implementation code of the BOP challenge [44] 10 . There are multiple instances of objects in multiple scenes of the T-LESS dataset. When comparing with prior work [7,12] on all images of the primesense camera, we only evaluate the prediction which has the highest detection score for each class, and only objects visible more than 10% are considered as ground truth targets. This corresponds to the SiSo task. When evaluating our multi-view method, we follow the more recent 6D localization protocol of the ViVo BOP challenge which considers the top-k predictions with highest score for each class in each image, where k is the number of ground truth objects of the class in the scene. Note that the metrics of the BOP challenge do not penalize making many incorrect predictions for classes that are not in the scene, which happens in most methods and is problematic for practical application. We thus propose to analyze precision-recall tradeoff similar to the standard practice in object detection, using ADD-S<0.1d to count true positives. When computing the mean of ADD-S errors in our scene refinement ablation, we only consider as true positives predictions the ones which have an ADD-S error lower than half of the diameter of the object, to ensure that the prediction is matched to the correct ground truth object. Without limiting the error to this threshold and using only class labels and scores, some predictions may be matched to ground truth objects which are at a very different location in the scene. This tends to increase the errors while not being representative only of the 6D pose accuracy of the predictions. E Additional multi-view multi-object results Each scene reconstruction is presented with a dedicated figure and we provide close-ups on various parts of the visualization to illustrate the different aspects in detail. The explanation is provided in the caption of each figure. Layout of the figures. In each figure presented below, four (on T-LESS) or five (on YCB-Video) RGB images were used to reconstruct each scene. In each figure, each row corresponds to results associated with one image and different columns present the results of different stages of our method. 
E Additional multi-view multi-object results
Each scene reconstruction is presented with a dedicated figure and we provide close-ups on various parts of the visualization to illustrate the different aspects in detail. The explanation is provided in the caption of each figure.
Layout of the figures. In each figure presented below, four (on T-LESS) or five (on YCB-Video) RGB images were used to reconstruct each scene. In each figure, each row corresponds to results associated with one image and different columns present the results of different stages of our method. The last column shows the ground truth scene. The different columns are described next.
- "Input image" is the (RGB) image used as input to the method.
- "2D detections" shows the detections obtained by the object detector (RetinaNet on T-LESS, PoseCNN on YCB-Video), after removing detections that have scores below 0.3. The color of each 2D bounding box illustrates the object label predicted for this detection; each color is associated with a unique type of 3D object in the object database. Note that the colors for each type of 3D object are shared for all visualizations corresponding to one scene (one figure) but not shared across the figures because of the high number of objects in the database.
- "Object candidates" illustrates the 6D object poses predicted for each 2D detection. The candidates considered as outliers (those that have not been matched with a candidate from another view and are discarded) are marked in red and are transparent. The candidates considered inliers are shown in green. Inliers are used in the final scene reconstruction. Note that the red and green colors in this (3rd) column are only used to indicate inliers and outliers and there is no correspondence with the red and green colors in the 4th column that denote the different object types.
- "Scene reconstruction" illustrates the scene reconstructed by our method using all the views presented in the figure. Once the scene is reconstructed, we use the recovered 6D poses of physical objects and cameras to render the scene imaged from each of the predicted viewpoints. The renderings are overlaid over the input image.
- "Ground truth" corresponds to the ground truth scene viewed from the ground truth viewpoints. These images are shown to enable visual comparison with the results of our method. The ground truth information (number of objects, types of objects, poses of cameras, poses of objects) is not used by our method.
In the following, we illustrate the main capabilities of our system.
E.1 Highlights of the capabilities of our system
Large number of objects, robustness to occlusions, symmetric objects. Our method is able to recover the state of complex scenes that contain multiple objects, even if parts of the scene are partially or completely occluded in some of the views. The poses of cameras and objects can be correctly recovered even if all objects in the scene are symmetric. An example is presented in Fig. 8. Note how some objects are missing in each individual view but our method is able to recover all objects correctly.
Multiple object instances. Our method is able to successfully identify the correct number of objects and their labels even if there are multiple objects of the same type in the image, objects are partially occluded in some views and multiple types of objects have very similar visual appearance. An example is presented in Fig. 9.
Fig. 9: Highlight II: Scene with multiple object instances of the same object type. Note how our method is able to correctly identify all objects in this challenging scene. Object poses and labels/colors predicted by our method, shown in close-up (b), are very similar to the ground truth, shown in close-up (c).
This is particularly challenging because the green and orange objects have similar visual appearance, are close to each other in the scene, and objects are partially occluded in some of the views, as shown in close-ups (a) and (d).
Cluttered scenes with distractors. Our method is also robust to distractor objects that are not in the database of objects. We present in Fig. 10 a complex example with many distractors where our method is able to successfully recover all objects in the scene which are in the object database while filtering out the other ones. This is especially important for robotic applications in unstructured environments where the objects of interest are known and should not be confused with other background objects.
High accuracy. One of the key components of our approach is scene refinement (Section 3.4 in the main paper), which significantly improves the accuracy of pose predictions using information from multiple views. In Fig. 11, we show an example of a reconstruction that highlights the accuracy that can be reached by our method using only 4 input images.
Fig. 10: Highlight III: Scene with multiple distractors. Our method is also robust to distractor objects that are not in the database of objects. Our method correctly localizes and estimates the pose of all database objects in the scene (cf. our reconstruction (4th column) and the ground truth (5th column)) despite the presence of several distractor objects (objects not colored in the ground truth). A single-view approach (Object candidates, 3rd column) incorrectly detects three of the distractor objects and places them in the scene because they look similar to some objects of the database, as shown in the close-up (a). Our robust multi-view approach is able to filter these outliers: the objects estimated at the positions of the distractors are marked in red in (a). Distractor objects have been filtered in the final reconstruction as shown in the close-up (b) (cf. ground truth close-up (c)).
Fig. 11: Highlight IV: Accuracy of our approach. Left: input images. (a) and (b) show the output scene imaged from two viewpoints different from the views used for the reconstruction. Please note in (a) how the yellow object is accurately estimated to only touch the green objects, and in (b) how the brown object is correctly plugged inside the yellow object.
E.2 Detailed examples
We now explain in detail a few simpler examples that demonstrate how our system works and how it achieves the kind of results presented in the previous section.
Robustness to missing detections. In some situations, objects are partially or completely occluded in some of the views. As a result, 2D detections for one physical object are missing in some views. If this physical object is visible in other views, our reconstruction method is able to estimate its pose with respect to the other objects. If all cameras can be positioned with respect to the rest of the scene using other non-occluded objects, our approach can also position the partially occluded object with respect to all cameras, even if there were initially no candidates corresponding to the object in these views. An example is shown in Fig. 12.
Fig. 12: Example I: Robustness to missing detections. One of the objects (marked by the purple circle) in the scene is detected in two views (b) (d), but not in the other two views due to partial (c) or complete (a) occlusion. Our method is able to (i) position the views 1 and 3 with respect to the scene using the other visible candidate objects and (ii) position the purple object with respect to these other objects using views 2 and 4, where the purple object is visible. Once the scene is reconstructed, it is also possible to directly recover the pose of the purple object with respect to views where it was not originally detected, like in (e).
Robustness to incorrect detections. In T-LESS, many objects have similar visual appearance. As a result, the 2D detector often makes mistakes, predicting incorrect labels for some of the detections in some views. Our method is able to handle multiple 2D detections that have different labels at the same location in the image. In this case, a pose hypothesis is generated for each of the label hypotheses. If the object candidate cannot be matched with another view - either because the incorrect label is predicted in only one view or because the poses are not consistent - our method is able to discard this object candidate. An example is shown in Fig. 13. Please see the discussion "Duplicate objects" and Fig. 14 for examples where an object is consistently mis-identified across multiple views.
Fig. 13: Example II: Robustness to incorrect detection labels. One of the objects that is correctly identified in two views (a) (c) has two label hypotheses in view (b) and is not detected in view (d). Our method keeps the two hypotheses in (b) and predicts two 6D object candidates (e) but is able to discard one of them because its label is not consistent with the other views: one of the two object candidates is marked as an outlier (red) in (e). In our final scene reconstruction, the gray object is correctly recognized (it has the same color (gray) in our output "Reconstruction" and in the "Ground truth").
Duplicate objects. When multiple objects share the same visual appearance, as is the case in the T-LESS dataset, there are often multiple label hypotheses that are consistent across views for the same physical object. Because these objects look similar to each other and match the observed image, the pose estimation network (which tries to match a rendering with the observed image, regardless of the object type) predicts reasonable poses for each label that are consistent across different views. These candidates are matched across views and multiple objects with different labels are predicted in the final scene at the same spatial position. In our visualization, we remove these duplicate objects by using a simple 3D non-maximum suppression (NMS) strategy on the estimated physical objects of the final scene. If multiple objects are too close to each other in the 3D scene, we keep the object with the highest score - the sum of the 2D detection scores of all inlier object candidates that are associated with one physical 3D object. Duplicate objects and 3D non-maximum suppression are illustrated in Fig. 14, including one correct and one incorrect example. The column "Reconstruction" in all figures corresponds to the output of our method after the 3D NMS.
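A minimal sketch of this 3D non-maximum suppression step is given below; the distance threshold, data layout and function name are illustrative assumptions rather than the released implementation.

```python
import numpy as np

def nms_3d(objects, min_dist=0.05):
    """Greedy 3D NMS over estimated physical objects.

    objects:  list of dicts with a 3D 'position' (in meters) and a 'score'
              (sum of the 2D detection scores of the associated inlier candidates).
    min_dist: two objects closer than this are treated as duplicates (illustrative value).
    """
    kept = []
    # Visit objects from highest to lowest score, keeping only one per 3D location.
    for obj in sorted(objects, key=lambda o: o["score"], reverse=True):
        pos = np.asarray(obj["position"])
        if all(np.linalg.norm(pos - np.asarray(k["position"])) >= min_dist for k in kept):
            kept.append(obj)
    return kept
```

Under these assumptions, overlapping grey/pink hypotheses such as those in Fig. 14 would collapse to the single highest-scoring object.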
Robustness to distractors and false positives. The complex scenes in the T-LESS dataset also have background distractor objects that are not in the object database. Some of these distractors look similar to objects in the database and can be incorrectly detected, sometimes in multiple images. In these cases, the pose estimator most often produces 6D pose estimates that are not consistent across views because the input real images are outside of the training distribution (they display objects that are not used to generate the training data). Because these estimates are not consistent across views, our method is able to filter them and mark them as outliers (red), thus gaining robustness with respect to these distractors. An example is shown in Fig. 15.
E.3 Limitations
We now describe the most challenging scenarios that our method is currently not able to recover from. For each of these, we briefly discuss possible improvements.
Limitation I: Consistent mistakes. If two incorrect 6D object candidates are consistent across at least two views, an (incorrect) object will be present in the reconstructed scene. Such a failure case typically happens when two viewpoints are similar to each other. An example is shown in Fig. 16. If two views are very similar, the incorrect candidates will be matched together. Note that this failure mode could be resolved by using a higher number of views, and by only considering physical objects that have a sufficiently high number of associated object candidates.
Limitation II: Objects missing in the final reconstruction. Our current approach requires that a candidate in one view is matched with at least one candidate from another view. If a candidate detection and pose estimate is correct in one view but not in any other view, it will be missing from the final reconstruction. An example is presented in Fig. 17. Note that in this case, all camera poses are still estimated correctly. An interesting direction to overcome this problem would be to grow the number of object candidates in each view by reprojecting the detections from other views, as done in guided matching.
Fig. 14: Object candidates with two different labels (grey and pink) are predicted for the same object consistently across two views, (a) and (c). Because the 3D models of the pink and grey objects are similar, the poses predicted in both views are consistent and thus both pairs of object candidates are associated with separate objects. In the final scene reconstruction, two objects (grey and pink) overlap at the same 3D location (e). We use a 3D non-maximum suppression strategy to retain only a single hypothesis. In the final output (after NMS), the correct object is retained (pink), cf. the ground truth column. In some cases, incorrectly identified objects are kept, as shown in (b).
Limitation III: Incorrect estimates of camera pose. To position the camera with respect to the scene, our method requires that there are at least three object candidate inliers in the view: two for positioning the camera with respect to the scene, and another one to validate the camera pose hypothesis. Sometimes, however, there is an insufficient number of inliers. This typically happens if only two objects are visible, or if there is a small number of objects visible and some of the detections are incorrect. An example is shown in Fig. 18.
Fig. 18: With only two visible objects, as shown in close-up (a), the corresponding camera view cannot be positioned with respect to the rest of the scene, as this requires at least three correctly estimated objects. As a result the objects are not reprojected in the image (c). This also happens if three candidates are detected in one view, as shown in close-up (b), but one of the object candidates is not consistent with the other views (here the red object instead of the green object).
Localized Therapeutic Approaches Based on Micro/Nanofibers for Cancer Treatment Cancer remains one of the most challenging health problems worldwide, and localized therapeutic approaches based on micro/nanofibers have shown potential for its treatment. Micro/nanofibers offer several advantages as a drug delivery system, such as high surface area, tunable pore size, and sustained release properties, which can improve drug efficacy and reduce side effects. In addition, functionalization of these fibers with nanoparticles can enhance their targeting and therapeutic capabilities. Localized delivery of drugs and/or other therapeutic agents via micro/nanofibers can also help to overcome the limitations of systemic administration, such as poor bioavailability and off-target effects. Several studies have shown promising results in preclinical models of cancer, including inhibition of tumor growth and improved survival rates. However, more research is needed to overcome technical and regulatory challenges to bring these approaches to clinical use. Localized therapeutic approaches based on micro/nanofibers hold great promise for the future of cancer treatment, providing a targeted, effective, and minimally invasive alternative to traditional treatments. The main focus of this review is to explore the current treatments utilizing micro/nanofibers, as well as localized drug delivery systems that rely on fibrous structures to deliver and release drugs for the treatment of cancer in a specific area. Introduction According to the World Health Organization (WHO), cancer is the second leading cause of death globally, with an estimated 9.6 million deaths in 2018. In 2020, the American Cancer Society (ACS) estimated that there would be approximately 1.8 million new cancer cases and 606,520 cancer deaths in the United States [1,2]. Cancer is a collection of diseases that can start in virtually any organ or tissue in the body when abnormal cells grow out of control, invade neighboring tissues, and/or spread to other organs. In the normal functioning of the human body, healthy cells grow and spread to generate new cells when the body requires them. As cells become damaged or age, they undergo apoptosis and are replaced by younger cells [3][4][5]. However, sometimes this orderly process is disrupted and abnormal or damaged cells (cancer cells) grow and multiply uncontrollably. This makes it difficult for the body to function properly, affecting the part of the body where cancer cells grow, leading to the appearance of tumors. Cancer cells can spread throughout the body using the circulatory and lymphatic systems, giving rise to metastases, which are the leading cause of death from cancer [3][4][5][6]. This disease develops due to multiple changes in your genes, which can have many possible causes, such as lifestyle habits, genes or exposure to cancer-causing agents [6]. There are several treatments available for cancer, such as surgery, chemotherapy, radiotherapy, immunotherapy, endocrine therapy, photodynamic therapy and hyperthermia therapy [7,8]. However, despite being a widely studied disease, the majority of treatments have many disadvantages, with many side effects, often highly serious for patients [9]. To circumvent these disadvantages and reach only the necessary target, drug delivery systems are increasingly being developed. These are technologies designed to deliver medicinal substances in a targeted and/or regulated manner. 
Several structures can be used as polymeric drug delivery systems, such as pharmacological films, hydrogels, wafers, sticks, microspheres, and fibers, among others. Fiber-based materials also offer several advantages for use in drug delivery applications. They are easy to fabricate, typically have high mechanical properties, provide a desirable drug release profile, and have a high surface area to volume ratio. To produce these intelligent systems, it is necessary to choose biocompatible materials linked to stimulus-responsive systems capable of controlling drug release [10,11]. Therefore, fibers composed of biocompatible and biodegradable polymers offer a low risk of inducing immune responses from the patient's immune system, making them a favorable option for drug delivery applications. These fibers can be customized by adjusting their polymer composition, length, and cross-sectional radius. Additionally, the composition and morphology of the fibers can be tailored to meet the specific requirements of the application, resulting in low cytotoxicity, improved viability, and effective drug release [11]. The therapeutic properties of nanoparticles can also be used in cancer diagnostic techniques. Researchers managed to develop platforms for detecting and generating highly sensitive and specific images, showing great potential in detecting and diagnosing the disease, as well as monitoring the response to treatment. This monitoring is also called theranostic. Again, the fibers hold promise for theranostic drug delivery systems and may also incorporate imaging agents, allowing for both diagnosing and treating the disease [12]. Therefore, the development of localized drug delivery systems based on fibrous structures emerges as a promising strategy for the treatment of several types of cancer. Diagnostic The signs and symptoms of cancer are varied and numerous. They can also be similar to those of other diseases, such as infectious or autoimmune disorders, and can range in severity. In many cases, cancer does not present any signs or symptoms, especially in its early stages. Detecting cancer early requires both the individual and the healthcare team to be vigilant. Recognizing abnormal signs and symptoms and pursuing a proper diagnosis in a measured way offers the best chance of identifying cancer early and effectively managing it [13]. Cancer can be diagnosed through several methods, including physical exams, laboratory tests, imaging procedures, and biopsy [14]. During a physical exam, a healthcare provider may look for signs of cancer such as lumps, changes in the skin, or abnormalities in organs. The physical examination report for most cancers should provide detailed information on the tumor's location, including the site and subsite, its extension to adjacent organs or structures, and the accessibility, palpability, and mobility of the lymph nodes. The report should also mention the probability of distant site involvement, such as the presence of organ enlargement, pleural effusion, ascites, or neurological symptoms. In the case of breast cancer, the physical examination should describe the precise location and size of the tumor mass, as well as the condition of the skin around the tumor, including any changes in color or texture and the mass's attachment or fixation. The examination should encompass the entire axial and regional nodal area, including the supraclavicular nodes [15]. 
Laboratory tests may be performed on blood or other bodily fluids to look for abnormal levels of certain substances that may indicate cancer. Tests that measure the levels of certain substances in your body can indicate the presence of cancer [14]. However, abnormal results do not necessarily mean that cancer is present, and other tests such as biopsies and imaging are also used to make a diagnosis. It is important to note that laboratory results can vary among healthy individuals due to factors such as age, sex, race, and medical history. Normal results are often reported as a range based on the results of past tests from large groups of people. It is possible to have normal lab results and still have cancer, and likewise, abnormal results do not always indicate disease. Therefore, lab tests alone cannot provide a definitive diagnosis of cancer or any other illness. Imaging procedures, such as X-rays, CT scans, MRI, and PET scans, may be used to detect abnormalities in the body that may be indicative of cancer [13,16]. To confirm a cancer diagnosis, doctors often need to perform a biopsy, a procedure that involves removing a sample of abnormal tissue [14]. The tissue sample is then analyzed under a microscope and subjected to other tests by a pathologist to determine if cancer cells are present. Depending on the type and stage of cancer, one or more of these methods may be used to diagnose cancer. The findings are described in a pathology report that provides information about the diagnosis. Pathology reports are crucial for determining treatment options as they provide valuable information about the disease [13,16]. Prevention is often considered the best way to deal with cancer because it can help reduce the number of new cases and deaths from the disease. Cancer can be caused by a combination of genetic and environmental factors, and some of these factors can be modified or avoided. Overall, prevention measures can help reduce the burden of cancer on individuals, families, and society as a whole. By reducing the number of new cases and deaths from cancer, prevention measures can help save lives, reduce healthcare costs, and improve quality of life. Cancer Therapies The aim of cancer therapies is to considerably prolong and improve the patient's quality of life and, if possible, cure [8]. Currently, there are several treatments available for cancer, such as surgery, chemotherapy, radiotherapy, immunotherapy, endocrine therapy, photodynamic therapy and hyperthermia therapy. These treatments change depending on the type of cancer being treated and whether it is at an advanced stage or not [7,8]. These therapies can be carried out individually, that is, only one treatment, or a combination of therapies can be used, trying to achieve the best possible results [7]. Surgery is the oldest method and is a procedure for removing the tumors from the patient's body [17,18]. This method is not easy and cannot always be performed. In situations when it can be used, there is a chance that the cancer will not be entirely eradicated and, for that same reason, there is a risk that it will spread from its original position to other parts of the body, leading to metastasis. There are also secondary dangers such as bleeding, tissue and organ damage, pain and poor recovery of other body functions that are not well perceived by the patients in question [18]. Chemotherapy is a drug-based treatment to destroy cancer cells [17,19].
This treatment is used to reduce the tumor before surgery or to delay its growth until surgery [17,20]. Classical chemotherapeutic agents have their primary effect on macromolecular synthesis or function, contributing to cell death. This means that they interfere with the synthesis of DNA, RNA or proteins or with the proper functioning of the pre-formed molecule. When interference with macromolecular synthesis or function in the neoplastic cell population is large enough, a fraction of the cells dies. In other cases, chemotherapy may trigger differentiation, senescence or apoptosis. One of the problems with this treatment is that the drug is delivered through the bloodstream, reaching and affecting cells throughout the body, because it is a non-localized systemic therapy. Therefore, in selecting an effective drug, it is necessary to find an agent that has a marked inhibitory or growth-controlling effect on cancer cells and a minimal toxic effect on the host. Even so, there are several side effects in the body such as fatigue, nausea, loss of appetite and hair, and even blood clots [17,20]. Combination chemotherapy is a grouping of medications that is usually more effective in producing responses and prolonging life than medications used separately (monotherapy) and sequentially [17]. Radiation therapy is a treatment that uses ionizing radiation and deposits energy in the tissue cells it passes through to kill cancer cells or slow their growth. Radiation does not kill cancer cells immediately; it can take days or weeks of treatment before the DNA is damaged enough for the cancer cells to die, and there is a possibility that it can damage normal cells as well [8,17,21,22]. It is indicated to relieve symptoms (palliative treatment), shrink the tumor before surgery and kill remaining cancer cells when complete removal of the tumor is not possible, or after surgery to prevent recurrence [8,17,21]. Beyond that, the equipment is expensive, which leads to a high cost of the treatment, which is not always supported by the patients [22]. The immune system is responsible for helping the body fight infection and disease. Immunotherapy induces the immune system to fight cancer, through white blood cells, as well as organs and lymphatic tissues. This treatment has increased overall survival in several cancers at various stages of development, including metastatic disease. In contrast, some malignancies create immunosuppressive microenvironments characterized by high expression of immune checkpoint molecules, limited tumor antigen expression, and limited infiltration of circulating immune effector cells. These "cold" tumors, which are non-immunogenic and non-inflamed, do not respond well to immunotherapies and can successfully evade anticancer immune responses, leading to potential problems. The disadvantages of this treatment are that some drugs harm the organs and systems and it can take longer [23,24]. Endocrine therapy slows down or stops the proliferation of cancer cells that need hormones for their proliferation [25]. Although there are reports of successful cancer treatment with hormone therapy, this therapy is usually never used without the combination of another, as hormone therapy only attempts to extend "control" over the stage of cancer. In addition, side effects can significantly impair the patient's daily life. 
The administration of hormones may result in organ damage and various side effects for the patient, including hot flashes, weight gain, muscle loss, breast swelling and tenderness, fatigue, and irritability. There is also an increased risk of anemia, cardiovascular disease (such as infarction), and metabolic syndrome, which can cause concern for those involved [26]. Photodynamic therapy involves the use of a photoactive molecule, light, and molecular oxygen present in tissues. When combined, these three compounds are capable of producing reactive oxygen species (ROS), which will induce target cell death [7,27,28]. Despite being a promising treatment, it has been used in a small number of patients, due to skin photosensitivity (the most common adverse effect) caused by systemically administered photosensitizers. Patients must avoid sunlight and strong artificial light for weeks, which is usually highly undesirable. Other limitations of photodynamic therapy are pain and decreased effectiveness for large or deep tumors due to difficulty in tissue penetration [29]. In hyperthermia therapy, body tissue is heated to temperatures between 40 and 43 °C to destroy cancer cells [30,31]. The hyperthermia procedure is based on the notion of subjecting body tissue to high temperatures in order to harm and kill cancer cells (by apoptosis) or to make cancer cells more sensitive to the effects of radiation and specific anticancer drugs [32]. There are several approaches used to apply this treatment: radiofrequency, microwave, water-filtered infrared-A, ultrasound, and capacitive heating techniques. While hyperthermia therapy can elevate the intracellular temperature to the point of causing cell death, it has a significant limitation: cancerous and non-cancerous cells are often equally sensitive to heat. Hence, the most difficult aspect of hyperthermia is to maintain a high enough temperature in the tumor while keeping the surrounding normal tissues at a lower temperature to prevent damage to the healthy cells [33]. In fact, all existing therapies have advantages and disadvantages, which is why there is still no ideal solution for the treatment of cancer [9]. This has encouraged research and development of new strategies in order to find more effective and less invasive, painful and toxic treatments for patients. To overcome these drawbacks, drug delivery systems have been developed, which are technologies designed to deliver drugs in a targeted and/or regulated manner [34]. Various structures such as drug-eluting films, hydrogels, wafers, rods, and microspheres can be used as polymeric drug delivery systems. However, fibrous materials have several advantages for drug delivery applications [10,11,35,36]. In the next chapter, several examples of drug delivery systems using fibrous structures will be discussed.
Local and Systemic Drug Delivery Systems Drug delivery technologies are categorized according to the delivery method and effect site as systemic or local (Figure 1) [37]. This choice is critical, because selecting and regulating the drugs administered to a patient is crucial to an effective and comfortable treatment. Systemic delivery methods are often adopted because they are easily administered, and since they are inserted through oral or intravenous routes, they are more tolerated by patients. Local drug delivery is by its nature more invasive. The medicine is frequently administered directly in the desired site through an injection. This method can have direct negative consequences if the target in the body is highly sensitive (for example the brain) and as such is frowned upon by patients [38]. Drugs that have a systemic effect are generally administered through systemic delivery methods, distributing the substance throughout the body, regardless of how small the target part might be [37,39]. Since the medicine acts indiscriminately in every part of the body, it can lead to harsh side-effects, particularly with anticancer agents. Furthermore, repeated dosage of the chosen substance is necessary to maintain an adequate concentration. Nonetheless, this leads to a concentration oscillation between doses, originating peaks and falls in each administration cycle. In some situations, this will result in a seesaw effect in which the minimum value is so low, it does not give any therapeutic effect, and the maximum value is so high, that several undesirable side-effects arise [40]. Considering the high costs, high risk, and the long time associated with the development and research of novel medication, a significant effort is being put into changing its effect [41]. Medications that have a local effect contrast with the previously described drugs because they are intended to act predominantly on a specific part of the body. Even if this localization is not perfect, it yields far more controlled results than the systemic effect, leading to an improved efficacy and reduced toxicity in off-target sites. This not only enables a safer treatment, but in some cases might even allow a higher dosage administration of the desired drug [37]. This direction breathes new life into drugs that have already been developed, and becomes a safer research path to be taken by pharmaceutical companies overall.
The synthesis method is known, market approval has been cleared, and clinical trials have already been successfully performed [42]. Drugs with a local effect can be administered systematically and locally. If the drug is administered systematically it needs to have a trigger for the release or a have a method to accumulate the particles in the desired site. In the case of cancer, this is typically done by placing a biomarker at the tumor cells, causing the drug carriers to gather at the intended local site [37]. However, since these particles will spread through the body, they need to have a smaller size (ideally less than 400 nm) to avoid any emboli in the blood stream [43] and prevent further complications in blood vessels [37]. The benefit of having a local delivery system is that a drug's therapeutic concentration can be upheld for a longer duration without recurrent dosage, reducing drug under/overdosage concerns and delivering it to the required location [40]. Contrary to the previous method, this makes it easier to design the system and favors the usage of larger particles [37]. These are designed as drug-eluting films, hydrogels, fibrous structures, wafers, rods, and microspheres [44][45][46][47]. There is a growing recognition that success in drug delivery systems focuses to-wards developing increasingly compact devices and agents [48]. Over the past few decades, the utilization of nanotechnology has become widespread. Nanoparticle drug delivery is now regarded as a promising approach to cancer treatment, owing to the drug-loaded nanoparticles' high loading capacity, reduced toxicity, stability, efficacy, specificity, and tolerability when compared to conventional chemotherapy drugs. Nanoparticles loaded with anticancer drugs have the potential to deliver drugs to tumors during cancer treatments, either actively or passively. One of the benefits of using nanoparticles is their ability to be produced in various small sizes, as well as being composed of different materials, including lipids (e.g., liposomes, solid lipid nanoparticles), polymers (e.g., polymeric nanoparticles), and inorganic substances (e.g., gold nanoparticles). Liposomes, micelles, polymeric nanoparticles, solid lipid nanoparticles, and gold nanoparticles are commonly used in cancer treatment among the different types of nanoparticles available [49]. Fibrous Structures as Localized Drug Delivery Systems for Cancer Treatment Fibers (micro and nanofibers) can create very promising material structures for localized drug delivery systems, especially regarding cancer treatment. In order to avoid strong immune responses from the patient's immune system, these structures can biomimic other biodegradable polymers with low immunogenicity. Furthermore, adjustments can be made to their structural integrity by changing the polymers used to make them, their length and their cross-sectional radius, finding the best composition and morphology for the application required [47]. Common strategies to improve the fiber's performance include mixing synthetic and natural polymers, harnessing the properties of each polymer in order to create a superior arrangement. Many polymers and polymer mixtures have been studied for fiber structure production. As such, several studies were performed with different polymers, analyzing how several of these structures can be used in drug delivery systems for cancer therapies [50]. Zhang et al. 
developed a multilayer nanofiber mat by layering Poly-l-lactic acid with the drugs oxaliplatin and dichloroacetate. When subjected to tests, these nanofiber mats showed that they can be used as a time-programmed drug carrier in local chemotherapy against malignancies alone or in combination with already used treatment regimens, particularly for patients undergoing total tumor resection or cyto-reductive surgery. For 30 days, this multilayer device showed a synergistic impact between the two drugs and a decrease in toxicity to neighboring healthy tissues [51]. Li et al. developed a nanofibrous delivery system for dual photothermal therapy and chemotherapy. This system consisted of zwitterionic poly(2-methacryloyloxyethyl phosphorylcholine)-b-poly(ε-caprolactone) encapsulated with indocyanine green (ICG) and doxorubicin (DOX), triggered by near-infrared (NIR) light. It manages to convert light into thermal energy (raising the temperature by 45 °C) and, simultaneously, accelerates the release of encapsulated DOX, due to the softening of the nanofibers. This means that drug release can be controlled and turned on/off by flashing light. In addition, it is able to increase cell lethality [52]. Arumugam et al. manufactured silk fibroin/cellulose acetate/gold-silver (CA/SF/Au-Ag) nanoparticle composite nanofibers. Silk fibroin and cellulose acetate were used as reducing agents to stabilize the Ag+ and Au+ ions. Figure 2 displays the TEM images of the CA/SF/Au-Ag composite NF at different magnifications (500 nm, 50 nm, 30 nm and 10 nm). The CA/SF polymeric matrix was formed into needle and rod-shaped morphology with a range of 86.02 ± 57.35 nm in diameter. The Au and Ag nanoparticles were incorporated into the fiber matrix with an average size of 17.32 nm and 53.21 nm, respectively. Biological tests of these nanofibers were performed on breast cancer cell lines, showing excellent anticancer activity [53]. In Table 1, several examples of drug delivery systems used in cancer therapy are represented. These systems will be addressed in the following sections and have been categorized according to the nanofibers used. In Figure 3 are represented the chemical structures of the released drugs. Chitosan Fibrous Structures as Localized Drug Delivery Systems for Cancer Treatment Chitosan (CH) is a cationic polysaccharide derived from natural chitin.
CH is a biopolymer widely used in biomedical applications due to its biocompatibility, biodegradability, non-toxicity and water-solubility [34,81,82]. It is easily manufactured into different forms, namely as nanofibers [81,82]. Sanpui et al. developed a silver nanoparticle-CH nanocarrier and tested its cytotoxic effect on colon cancer cell lines. After treatment with this nanocarrier, cells were examined by fluorescence and scanning electron microscopy (morphological) and cell viability assay and flow cytometry (biochemical) to verify whether cell apoptosis had occurred. It was concluded that the use of low concentrations (24-48 µg mL−1) of silver nanoparticle-CH nanocarriers induced cell apoptosis, indicating its potential for use in cancer therapy [54]. Yan et al. successfully manufactured poly(vinyl alcohol) (PVA) and CH nanofibers. The surface morphology and microstructures of the nanofibers were altered by changing the feed ratio between PVA and CH. In Figure 4 are represented TEM images of PVA/CH core-shell composite nanofibers with different feed ratios of 1:1, 1:1.3 and 1:1.6. The interface of the core and shell layers is clearly visible and no overlap is seen. There was a high contrast difference between the core and shell because the core and shell components possessed different densities [55]. These fibers were used as a carrier for DOX delivery, a drug used to fight cancer cells, onto human ovary cancer cells. Through observation by confocal laser scanning microscopy, it was confirmed that the prepared fibers exhibited a controlled release of DOX in the cancer cell nucleus, which was effective in prohibiting the adhesion and proliferation of ovarian cancer cells, a very important step in tumor therapy [55]. Wade et al. tested that CH-based drug-loaded fibers could be used as a device capable of locally delivering sustained high concentrations of gemcitabine. This drug is used in the treatment of cancer, with minimal toxicity for localized therapy of pancreatic cancer [56]. Jafari et al. recognized the effect of CH/Poly(Ethylene Oxide) (PEO)/Berberine (BBR) nanofibers on cancer cell lines. An inverted microscope was used to examine the development and proliferation of human breast cancer cell lines, human HeLa cervical cancer cells, and fibroblast cells in cultured media. By comparison with control group cell lines, nanofibers containing BBR concentrations of 0.5-20% by weight reduced cell proliferation. Cancer cell lines' viability was also drastically reduced after being exposed to CH/PEO/BBR nanofibers [57]. Qavamnia et al. introduced DOX-hydroxyapatite in CH/PVA/Polyurethane nanofibers. The potential of the produced nanofibers was evaluated for controlled release of DOX-hydroxyapatite and bone cancer treatment in vitro. The DOX-hydroxyapatite encapsulation efficiency on the fibers was higher than 90% and sustained release was obtained within 10 days under acidic and physiological pH. The cell attachment and cell death results also indicated the great potential of these loaded fibers for bone cancer treatment [58]. In another study, Bazzazzadeh et al. manufactured an MIL-53 nanometal organic structure combined with CH/polyurethane nanofibers grafted with poly(acrylic acid). In this structure, the drugs temozolomide and paclitaxel (PTX) were applied for release testing against glioblastoma cancer cells. The synthesized core-shell nanofibers had a yield superior to 80% regarding the encapsulation effectiveness of temozolomide and PTX, implying that they are potential drug carrier biomaterials [59]. Polyvinyl Alcohol Fibrous Structures as Localized Drug Delivery Systems for Cancer Treatment PVA is a Food and Drug Administration (FDA)-approved polymer with many applications for drug delivery and biomedical applications due to its physical and chemical properties [60,61]. This polymer is easy to process and spin, it is soluble in water, non-toxic, biodegradable and biocompatible, which makes it very interesting to be inserted into nanoparticles [61,83]. PVA fibrous structures as localized drug delivery systems are presented below. Cao et al. fabricated PVA/silk fibroin (SF) nanoparticles with distinct core-shell structures using a coaxial electrospray technology. DOX, an anticancer drug, was encapsulated in this system, with a drug encapsulation efficiency of over 90%. By changing the PVA concentration (0.1, 0.3, and 0.5 wt%), the drug's controlled release profiles were studied. DOX was released slowly and steadily due to the barriers of the carrier polymers, but its release can be accelerated using ultrasound treatment. The researchers also studied drug release in response to pH. The cell apoptosis assay showed that the sustained release of DOX increases with time and showed high cytotoxicity for breast cancer tumor cells [60]. Steffens et al. used PVA to develop a nanofibrous system based on encapsulated dacarbazine (an anticancer drug) for the treatment of recurrent glioblastoma. The produced nanofibres demonstrated 83.9 ± 6.5% drug loading, good stability and mechanical characteristics, and prolonged drug release. This regulated release improved anticancer effects such as DNA damage and cell death via apoptosis, showing a system with great potential as a drug delivery system for cancer therapy [61]. Yan et al. used polycaprolactone (PCL)/PVA nanofibres with pH-responsive properties to test as carriers of an anticancer drug, PTX. Figure 5 shows the SEM (scanning electron microscope) and TEM (transmission electron microscopy) images of the PCL/PVA fibers. The flow ratio between the core and shell solutions (PCL/PVA) was 0.5:0.5, 0.5:0.6 and 0.5:0.7, respectively. Good adhesion between the fibers is visible. It has been shown that these fibers release the drug in response to pH. These fibers have been tested in colon cancer cells; they have shown that they can completely inhibit the proliferation and growth of cancer cells and can even cause their death. This indicates that they have promising abilities to be used as biomaterials in the therapy of some tumors [62].
this system, with a drug encapsulation efficiency of over 90%. By changing the PVA concentration (0.1, 0.3, and 0.5wt%), the drug's controlled release profiles were studied. DOX was released slowly and steadily due to the barriers of carrier polymers, but its release can be accelerated using ultrasound treatment. The researchers also studied drug release in response to pH. The cell apoptosis assay showed that the sustained release of DOX increases with time and showed high cytotoxicity for breast cancer tumor cells [60]. Steffens et al. used PVA to develop a nanofibrous system based on encapsulated dacarbazine (an anticancer drug) for the treatment of recurrent glioblastoma. The produced nanofibres demonstrated 83.9 ± 6.5% drug loading, good stability and mechanical characteristics, and prolonged drug release. This regulated release improved anticancer effects such as DNA damage and cell death via apoptosis showing a system with great potential as a drug delivery system for cancer therapy [61]. Yan et al. used polycaprolactone (PCL)/PVA nanofibres with pH-responsive properties to test as carriers of an anticancer drug, PTX. Figure 5 shows the SEM (scanning electron microscope) and TEM (transmission electron microscopy) images for PCL/PVA fibers. The flow ratio between the core and shell solutions (PCL/PVA) was 0.5:0.5, 0.5:0.6 and 0.5:0.7, respectively. Good adhesion between the fibers is visible. It has been shown that these fibers, in response to pH, release the drug. These fibers have been tested in colon cancer cells; they have shown that they can completely inhibit the proliferation and growth of cancer cells and can even cause their death. This indicates that they have promising abilities to be used as biomaterials in the therapy of some tumors [62]. Poly(lactic-co-glycolic acid) (PLGA) is a polymeric nanoparticle approved by the FDA for use in drug delivery systems, due to its properties of controlled and sustained release, low toxicity, effective biodegradability and biocompatibility with tissues and cells. This polymer undergoes hydrolysis in the body producing monomers of biodegradable metabolites, lactic acid and glycolic acid. This results in very low toxicity because the human body is able to metabolize these monomers through the Krebs cycle [84]. Below are some examples of localized drug delivery systems using PLGA fibers. Xie et al. manufactured, by electrospun, micro, and nanofibers based on PLGA as implants for the treatment of brain cancer. These PLGA-based micro and nanofibers were encapsulated with the drug PTX, with an encapsulation efficiency greater than 90%. Sustained drug release was achieved for more than 60 days and toxicity test results showed an IC50 value of PLGA nanofibers with PTX comparable to PTX alone [63]. Choi et al. employed PLGA to allow hollow fibers' production. NIR light sensitivity was incorporated with a segmental switch ability in its chain for cancer therapy. These fibers were responsible for providing a core that encapsulated DOX and a shell that entrapped gold nanorods as a photothermal agent. On exposure to NIR light, the photothermal agent generated heat to increase the local temperature of the fibers, which depended on the power density of the NIR light. As the temperature was above the glass transition of the polymer, the PLGA chains became mobile. This increased the free volume within the shell which led to rapid drug release. 
When the NIR light was turned off, heat Poly(lactic-co-glycolic acid) (PLGA) is a polymeric nanoparticle approved by the FDA for use in drug delivery systems, due to its properties of controlled and sustained release, low toxicity, effective biodegradability and biocompatibility with tissues and cells. This polymer undergoes hydrolysis in the body producing monomers of biodegradable metabolites, lactic acid and glycolic acid. This results in very low toxicity because the human body is able to metabolize these monomers through the Krebs cycle [84]. Below are some examples of localized drug delivery systems using PLGA fibers. Xie et al. manufactured, by electrospun, micro, and nanofibers based on PLGA as implants for the treatment of brain cancer. These PLGA-based micro and nanofibers were encapsulated with the drug PTX, with an encapsulation efficiency greater than 90%. Sustained drug release was achieved for more than 60 days and toxicity test results showed an IC 50 value of PLGA nanofibers with PTX comparable to PTX alone [63]. Choi et al. employed PLGA to allow hollow fibers' production. NIR light sensitivity was incorporated with a segmental switch ability in its chain for cancer therapy. These fibers were responsible for providing a core that encapsulated DOX and a shell that entrapped gold nanorods as a photothermal agent. On exposure to NIR light, the photothermal agent generated heat to increase the local temperature of the fibers, which depended on the power density of the NIR light. As the temperature was above the glass transition of the polymer, the PLGA chains became mobile. This increased the free volume within the shell which led to rapid drug release. When the NIR light was turned off, heat generation was interrupted by the inactivation of the photothermal agent and froze the segmental movement of the chains, interrupting drug release. Figure 6 shows an operating principle for a fibrous system made of a polymer and an NIR light-absorbing photothermal agent [64]. Molecules 2023, 28, x FOR PEER REVIEW 12 of 25 generation was interrupted by the inactivation of the photothermal agent and froze the segmental movement of the chains, interrupting drug release. Figure 6 shows an operating principle for a fibrous system made of a polymer and an NIR light-absorbing photothermal agent [64]. Regarding cell viability, when not exposed to NIR light, cell viability above 90% was observed, proving excellent biocompatibility. With six cycles of NIR light exposition, the cell viability decreased to 8%, proving that most of the cells were dead and that the system could properly work as an anti-cancer one [64]. In another study, Mohebian et al. incorporated curcumin (a natural antitumor agent) into mesoporous silica nanoparticles (MSNs), and these were, consequently, incorporated into PLGA, originating an electrospun nanofiber-mediated drug release system (CUR@MSNs/PLGA nanofibers). The morphology of electrospun NFs (neat PLGA NFs, MSNs/PLGA NFs, CUR/PLGA NFs, and CUR@MSNs/PLGA NFs) was studied by SEM and TEM and is depicted in Figure 7. The average diameter of the MSNs/PLGA NFs and CUR@MSNs/PLGA NFs were found to be (600 ± 125 and 620 ± 144 nm, respectively) which were greater than neat PLGA NFs and CUR/PLGA NFs (480 ± 78 nm and 510 ± 150, respectively), owing to the increment in the viscosity of the mixed solution with the content of mounting MSNs. The TEM image shows us the curcumin incorporated in the fibers. 
This system was tested in breast cancer and the results showed that the CUR@MSNs were successfully incorporated into the PLGA nanofibers, exhibiting a sustained and prolonged drug release profile. This composite nanofiber also had greater in vitro cytotoxicity, low migration, and was capable of enhancing apoptosis induction, thus being a promising application for the treatment of cancer [65].
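Several of the systems reviewed above are characterized by their drug encapsulation efficiency (e.g., over 90% for the PTX-loaded PLGA fibers) and by their drug loading (e.g., 83.9 ± 6.5% for the dacarbazine nanofibres), although the exact definitions vary somewhat between studies. As a point of reference only, the sketch below shows how these two figures of merit are conventionally computed; the numerical values and function names are hypothetical and are not taken from the cited studies.

```python
def encapsulation_efficiency(mass_drug_encapsulated_mg: float,
                             mass_drug_fed_mg: float) -> float:
    """EE% = drug actually encapsulated / drug initially added to the spinning solution * 100."""
    return 100.0 * mass_drug_encapsulated_mg / mass_drug_fed_mg


def drug_loading(mass_drug_encapsulated_mg: float,
                 mass_loaded_fibers_mg: float) -> float:
    """DL% = encapsulated drug / total mass of the drug-loaded fibers * 100."""
    return 100.0 * mass_drug_encapsulated_mg / mass_loaded_fibers_mg


if __name__ == "__main__":
    # Hypothetical batch: 9.2 mg of drug recovered from fibers spun with 10 mg of drug,
    # total fiber mass of 120 mg.
    ee = encapsulation_efficiency(9.2, 10.0)   # ~92%
    dl = drug_loading(9.2, 120.0)              # ~7.7%
    print(f"Encapsulation efficiency: {ee:.1f}% | Drug loading: {dl:.1f}%")
```

Encapsulation efficiency describes how much of the drug offered during fabrication ends up in the fibers, while drug loading describes how much of the final fiber mass is drug; a high value of one does not imply a high value of the other.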
Khanom et al. created a single multifunctional PCL-PLGA-DOX nanofiber mat, using pyrrole with in situ polymerization on the surface. Pyrrole is a powerful photothermal agent that, through absorption of NIR energy, can increase the temperature of the surrounding medium at various concentrations. In response to an 808 nm NIR laser, the pyrrole-coated fiber mats demonstrated outstanding photothermal conversion capability and hyperthermia. The pyrrole-coated PCL-PLGA-DOX membrane also had a better inhibitory impact than non-irradiated mats or mats that only caused hyperthermia. A possible justification is the greater release of DOX at the site due to fiber mobility and the hyperthermia effect. These results confirm the promising application of these nanofibers as a localized drug delivery cancer treatment [66]. The inherent magnetism of magnetic nanoparticles (MNPs) facilitates targeting, and they are therefore a viable option in smart drug delivery systems. These nanoparticles act as drug carriers that can be directed through an external magnetic field to the desired location, where the drug is released. To speed up delivery of the medicine in time and to the specific target, it is common to induce an alternating magnetic field (AMF), leading to the movement of the nanoparticles [91]. As described so far, fibers have a potential application for local drug delivery, and their combination with MNPs may further increase their interest for cancer treatment [92]. Below, several drug delivery systems using MNPs combined with fibers are described. Kim et al. developed a smart hyperthermia nanofiber with simultaneous heat generation and drug release in response to "on-off" switching of the AMF. The nanofiber was composed of a chemically cross-linkable, temperature-responsive polymer (poly(NIPAAm-co-HMAAm)), DOX, and MNPs. To study the response of the nanofibers to an AMF, a neodymium magnet was kept near a dish containing the MNPs-nanofibers, as shown in Figure 8a. Within 5 s, the nanofibers were attracted by the magnet, showing their responsive behavior and the potential for manipulation under a controlled magnetic field. Figure 8b shows infrared thermal images of nanofibers (25 mg/300 µL, after crosslinking) containing 31 wt% MNPs during AMF application. The temperature rises in the middle of the images, where the nanofibers are placed, indicating that MNPs-nanofibers might be employed for hyperthermia treatment. With this device, 70% of human melanoma cells died after just 5 min of AMF application, due to the double effect of heat and drug, again showing the immense potential of this area [67]. Lin et al. used Fe3O4 NPs incorporated onto crosslinked electrospun CH nanofibres using chemical coprecipitation. Iminodiacetic acid (IDA) was also grafted onto CH to increase the amount of MNPs formed in the magnetic nanofiber composite. This incorporation led to the formation of more MNPs in the nanofiber matrix. In addition, the magnetic IDA-grafted CH nanofiber composite showed that it could reduce the proliferation/growth rate of malignant tumor cells under the application of a magnetic field.
This system can be delivered to the treatment site precisely by surgical or endoscopic methods [68]. Sasikala et al. reported a smart nanoplatform responsive to a magnetic field to administer both magnetic hyperthermia (MH) and pH-dependent anticancer drug release for cancer treatment. For this, magnetic iron oxide nanoparticles (MIONs) were incorporated into the nanofiber matrix, forming a magnetic nanofiber matrix (MMNF). To develop this nanofiber, PLGA was used (Figure 9a). Regarding the anticancer drug delivery, this step was realized by surface functionalization using dopamine to conjugate bortezomib through catechol-metal binding in a pH-sensitive manner. The in vitro studies verified that the device demonstrated a synergistic anticancer effect by applying hyperthermia and drug delivery simultaneously. This approach provides a secure route for targeted anticancer drug delivery to the tumor, as evidenced by Figure 9b, and ensures that the MNPs are sufficiently concentrated in the tumor region to enable hyperthermia treatment [69]. For the treatment of leukemia, Hosseini et al. used electrospun polylactic acid (PLA) nanofibers incorporated with MNPs and multiwalled carbon nanotubes (MWCNTs). As a model drug, they chose daunorubicin, which was successfully encapsulated in the synthesized nanofibrous scaffolds.
They also investigated the release rate and cell proliferation of K562 cancer cells, and it was clear that the presence of a magnetic field increased both of these metrics. The applied magnetic field also reduced cell viability, increasing the inhibition effect on K562 cancer cells, indicating its synergistic effect and promising efficacious behavior [70]. Tiwari et al. created a magnetically actuated smart-textured fibrous system based on PCL with MIONs, DOX and fluorescent carbogenic nanodots. By applying an AMF, the system demonstrated enhanced heating, showing that more than 90% of HeLa cells were killed by an apoptotic-necrotic mechanism. The use of the AMF also accelerated the release of the drug and increased the effectiveness of the therapy, evidencing its ability to navigate in the fluid. This system also proved to be non-toxic to the cells and did not release toxic materials during incubation, proving to be a good option for the treatment of cancer [71]. Also using PCL, Niiyama et al. developed a nanofibrous mesh incubated with an anticancer agent, PTX, and MNPs. In vitro tests showed that the drug was released slowly over six weeks. Furthermore, when the mesh was excited with an AMF, the MNPs within the nanofibers created localized heat, which promoted heat-induced cell death as well as an enhanced chemotherapeutic impact of PTX. This was also confirmed with a cytotoxicity test, in which heating and the release of PTX were combined and 58% of tumor-bearing mouse cells (NCI-H23 cells) died (Figure 10) [72].
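AMF-induced heating of this kind is often summarized by the specific absorption rate (SAR), estimated from the initial slope of the temperature-time curve; not all of the cited studies report SAR explicitly, so the sketch below is only an illustration of the standard initial-slope estimate, with hypothetical numbers and function names of our own choosing.

```python
import numpy as np

def estimate_sar(time_s: np.ndarray, temp_c: np.ndarray,
                 c_p_j_per_g_k: float, sample_mass_g: float,
                 mnp_mass_g: float, fit_window_s: float = 30.0) -> float:
    """SAR (W per gram of MNPs) by the initial-slope method:
    SAR = c_p * (m_sample / m_MNP) * dT/dt evaluated near t = 0."""
    mask = time_s <= fit_window_s
    slope_k_per_s = np.polyfit(time_s[mask], temp_c[mask], 1)[0]  # linear fit, slope in K/s
    return c_p_j_per_g_k * (sample_mass_g / mnp_mass_g) * slope_k_per_s

if __name__ == "__main__":
    # Hypothetical heating curve: water-like sample (c_p ~ 4.18 J/g/K), 1 g of sample,
    # 10 mg of MNPs, temperature rising quickly at first and then saturating.
    t = np.linspace(0, 300, 301)
    temp = 15 + 25 * (1 - np.exp(-t / 120))
    sar = estimate_sar(t, temp, c_p_j_per_g_k=4.18, sample_mass_g=1.0, mnp_mass_g=0.010)
    print(f"Estimated SAR ~= {sar:.0f} W/g of MNPs")
```

A higher SAR means that a smaller amount of MNPs (or a shorter AMF exposure) is needed to reach the therapeutic temperature window, which is the trade-off discussed for several of the systems below.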
Matos et al. mixed MNPs (Fe3O4) with cellulose acetate to form composite membranes for MH. To create stable suspensions at physiological pH, the superparamagnetic NPs were stabilized by oleic acid (OA) or dimercaptosuccinic acid (DMSA). Their incorporation into the fiber matrix was confirmed through SEM and TEM. It was feasible to obtain therapeutic temperatures by adjusting the quantity of Fe3O4 NPs present on the cellulose acetate mats and modelling the parameters of the hyperthermia experiment. The tensile studies confirmed that the addition of these NPs had a considerable influence on the mechanical response of cellulose acetate, raising Young's modulus, elastic limit stress, and ultimate tensile strength. In vitro research has shown that the concentration of Fe3O4 NPs in these membranes must be regulated so as not to exceed the clinically necessary heat, to avoid cytotoxic effects that could harm healthy tissues. Composite fiber membranes with included DMSA-Fe3O4 NPs proved to be the most promising for application in MH, considering their heating ability and lack of cytotoxicity, after adsorption in a solution with a concentration of 0.5 mg mL−1 [73]. Suneet et al. manufactured a magnetic nanofibrous mat-based bandage using external AMF-induced hyperthermia to treat skin cancer non-invasively. The authors used the electrospinning technique to manufacture the Fe3O4 nanoparticle-incorporated, PCL fiber-based bandages. The efficacy of the bandage was investigated in vitro using parental/DOX hydrochloride-resistant HeLa cells and in vivo using a BALB/c mouse model in the presence of an external AMF. The results showed that this system dissipates thermal energy locally upon application of the external AMF and rises from room temperature to 45 °C in a controlled manner within a few minutes. It was also found that the elevated temperature can significantly kill parental and DOX-resistant HeLa cells. When the fibrous mat containing DOX was incubated with HeLa cells and exposed to an AMF for 10 min, more than 85% of the parental HeLa cells were killed, likely due to the enhanced activity of DOX at higher temperatures. In vivo tests confirmed the full recovery of chemically induced skin tumors on BALB/c mice within a month after five hyperthermic doses of 15 min each. There were also no signs of post-therapy inflammation or cancer recurrence [74]. Hu et al. also created a drug delivery system with NPs, using PCL and Fe3O4. Fiber sizes ranged from 4 to 17 µm, due to the different percentages of NPs tested in the fibers. The magnetic composite fiber membranes showed excellent heating efficiency and thermal cycling characteristics. To prove this, the temperature increase of the fibers with different percentages of MNPs was studied and is reported in Table 2. The heating temperature increased rapidly over time and eventually stabilized after reaching a certain temperature. Starting from an initial temperature of 15 °C, it was observed that the temperature increased with the concentration of MNPs in the fibers following the application of an AMF. These results reveal that the PCL/Fe3O4 fiber membrane has the potential to be used in hyperthermia treatment [75]. Also using cellulose nanofibers and MNPs, Sumith et al. developed a pH-responsive and bioactive DOX delivery system. This device had a saturation magnetization of 50.1 emu g−1 and a strong drug cellular internalization index. The system exhibits strong hyperthermia potential in an AMF and pH-triggered DOX delivery. Measuring the in vitro cytotoxicity as a function of DOX release from the samples confirmed its efficacy. With these advantageous multifunctional activities, it has been demonstrated that this system can be successfully recommended for cancer treatment applications, namely hyperthermia [76].
Chen et al. manufactured a magnetic composite nanofiber mesh using PCL with DOX, MNPs and 17-allylamino-17-demethoxygeldanamycin (17AAG). This system can achieve mutual synergy of hyperthermia, chemotherapy and thermomolecular-targeted therapy for highly potent therapeutic effects. The developed nanofiber mesh exhibits hyperthermia, good biocompatibility and a sustained, pH-sensitive release behavior. These features are favorable for long-term maintenance of an effective drug concentration in tumor tissue. As can be seen in Figure 11a,b, infrared thermal images of PCL, MNP-PCL, and MNP/DOX/17AAG-PCL nanofiber meshes loaded with 12.0 mg of MNPs under AMF irradiation lead to the conclusion that the temperature of the AMF-exposed MNP-PCL and MNP/DOX/17AAG-PCL meshes increased from 25.8 °C and 25.9 °C to 44.1 °C and 43.8 °C, respectively, while in the PCL nanofiber mesh there were no significant changes in temperature. In MCF-7 breast cancer cell lines, this nanofiber mesh efficiently induced apoptosis, demonstrating its potential as a new tumor therapy and as an effective locally implantable system to enhance the effectiveness of cancer combination therapy [77]. To develop a smart hyperthermia nanofibrous scaffold, Samadzadeh et al. used temperature-responsive polymers (N-isopropylacrylamide and N-hydroxymethylacrylamide) blended with MNPs (10 nm) and mesoporous silica NPs loaded with metformin. This system is capable of generating heat and releasing metformin in two stages in response to the "on-off" switching of an AMF for better hyperthermic chemotherapy. Tests were performed to study the rate of swelling with reversible changes and the associated drug discharge in response to AMF application with on-off switching. It was found that when applying an AMF for 300 s, during the second and third days, the metabolic activity of B16-F10 skin melanoma cells incubated with the system was decreased. The resultant system also demonstrated a persistent release, showing a great combination of early quick and late extended drug discharge, as intended [78]. More recently, Serio et al. presented the design, fabrication and characterization of biocompatible PCL fibers co-loaded with DOX as well as MNPs of cubic shape. The co-loading of DOX within the magnetic fibers was made possible by simply adding the drug to the solutions containing the PCL polymer and the MNPs. These fibers were obtained with 0.5-1 mm in diameter. Their characterization was done by TEM analysis (Figure 12) and proved the incorporation of the nanocubes inside the fibers; the preferential alignment of the nanocubes into small chains along the length of the fiber is also visible [79].
When compared to individual treatments (only non-specific drug release and only MH without drug load), the heat induced by MH combined with DOX release led to increased mortality (cell viability < 20%) in the group of cells treated with PCL-MNPs-DOX and subjected to MH therapy [79]. In conclusion, as can be seen from the review of all these articles, fibrous structures are capable of acting as drug delivery systems for the treatment of various cancers.
Nanofibers Theranostic Drug Delivery Systems for Cancer Treatment
The term "theranostics" is used when a treatment incorporates both therapy and medical imaging, personalizing treatment for individual patients and allowing better-informed monitoring of the administered substance and its effects. Fibers have gained attention as a promising platform for theranostic drug delivery systems due to their high surface area for drug loading, tunable release rates, and localized delivery. Additionally, fibers can be engineered to incorporate imaging agents, enabling simultaneous disease diagnosis and treatment. With their versatility and biocompatibility, fibers have a high potential for the development of novel theranostic delivery systems for various diseases [12]. The therapeutic properties of nanoparticles were explored extensively in the previous sections; this section of the review therefore focuses more on the diagnostic properties of nanoparticles, specifically their use in imaging and sensing applications. By utilizing the unique properties of nanoparticles, such as their size, shape, and surface chemistry, researchers have been able to develop highly sensitive and specific imaging and sensing platforms. These platforms have shown great potential in the detection and diagnosis of diseases, as well as the monitoring of treatment response. Several types of fibers have been explored for use in theranostic drug delivery systems, including polymeric fibers, inorganic fibers, and natural fibers [12,93]. As established before, polymeric fibers are biocompatible and biodegradable, making them an attractive option for drug delivery. Furthermore, as we will see, these fibers can also be functionalized with imaging agents for theranostic applications, making them ideal candidates for these functions. Soares et al. created a drug delivery system utilizing PLGA nanofibers that integrated contrast agents for MRI. This system included superparamagnetic nanoparticles, which enhance the nuclear relaxation of water protons, resulting in a decrease in signal (darker contrast) in the transverse (T2) relaxation. Therefore, the inclusion of MNPs facilitates treatment monitoring through MRI [94]. Liao et al. developed an NP system in which a PLGA core was coated with a paramagnetic substance. Similar to the previous study, this approach also allowed drug delivery through PLGA and monitoring through MRI due to the surface properties. Moreover, owing to the coating, treatment with this system resulted in improved cellular internalization compared to uncoated nanoparticles, potentially reducing chemotherapy side effects and facilitating targeted delivery of anti-cancer drugs [95]. In the study by Varani et al., specific PLGA-NPs were chosen and modified with a NIR fluorochrome, enabling fluorescence to penetrate deeper into tissues. Figure 13 shows how the particles acted as both a drug carrier and an imaging agent [96].
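The darker T2-weighted contrast produced by the superparamagnetic nanoparticles in the systems of Soares et al. and Liao et al. follows from shortened transverse relaxation: at a fixed echo time, a shorter T2 yields a weaker signal. The following is only a minimal sketch of that relationship, assuming a simple mono-exponential signal model; the parameter values are illustrative and not taken from the cited studies.

```python
import numpy as np

def t2_signal(te_ms: float, s0: float, t2_ms: float) -> float:
    """Mono-exponential T2-weighted signal: S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te_ms / t2_ms)

if __name__ == "__main__":
    te = 80.0  # hypothetical echo time (ms)
    without_mnp = t2_signal(te, s0=1.0, t2_ms=100.0)  # illustrative tissue T2
    with_mnp = t2_signal(te, s0=1.0, t2_ms=30.0)      # T2 shortened by nearby MNPs
    # The lower signal with MNPs appears as the darker (negative) contrast described above.
    print(f"Relative signal without MNPs: {without_mnp:.2f}, with MNPs: {with_mnp:.2f}")
```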
A similar approach was taken by Park et al., where PCL fibers were loaded with surface openings containing a biocompatible photothermal agent. Even though this agent could be used for imaging purposes, in this case it was adapted to be used for triggered release of the drug through NIR light [97]. Inorganic fibers also have potential as drug delivery platforms due to their high surface area and tunability, as previously observed. Various shapes and sizes of particles can be synthesized, and for specific applications, imaging moieties can be attached to them, as we will see next. As Jafari et al. highlighted, the fabrication of implants has gained increasing attention due to the nontoxic and nanotopographical characteristics of TiO2 nanomaterials. Unlike polymeric fibers, these materials have imaging properties by themselves, since it is possible to use them as biosensors through two main methods: excitation (light) and detection (current). With these methods, they are able to give highly accurate readings with good performance and low noise interference, significantly improving the detection of targets. The article discusses the limitations of using TiO2 nanomaterials in biomedical applications, including their inability to absorb a significant portion of the solar spectrum and the potential damage to biomolecules caused by photo-generated holes. It also emphasizes the importance of investigating the toxicity of TiO2 nanostructures in various biomedical applications, such as drug delivery and implants [98]. Natural fibers are biocompatible, biodegradable, and abundant in nature, making them an attractive option for drug delivery. They can be functionalized with various targeting or imaging agents and can be tailored to exhibit specific properties for theranostic applications [94]. He et al. combined silk fibroin nanofibers into a hydrogel hybrid system. It exhibited remarkable imaging characteristics using upconversion luminescence (UCL) imaging diagnosis, a photon-emitting optical process resulting from the absorption of two or more low-energy photons [99]. The in vivo experiments showed significant inhibition of tumor growth in mice, indicating that the hydrogel system has the potential to be used for both tumor imaging and anti-tumor therapy in clinical applications [100]. Ma et al. created natural silk fibroin nanofibers and encapsulated clinical indocyanine green molecules in them to perform in vivo NIR-I/II fluorescence imaging. The coupling of these two nanomaterials was performed to inherit their safety and biocompatibility. This approach is unique in that the longer wavelengths of the light provide fast feedback and high resolution, at the cost that these particles have a relatively low lifetime [101]. Overall, the choice of fiber material for theranostic drug delivery depends on several factors, such as the type of disease being targeted, the drug being delivered, and the desired release profile. Each type of fiber has its own unique properties and advantages that make it suitable for specific applications.
Incorporation of imaging agents is a crucial aspect of the development of fiber-based theranostic drug delivery systems, as it can help to visualize the drug delivery process and monitor the fate of the drug within the body [102,103]. Two common types of imaging agents used in theranostic drug delivery systems, as shown before, are fluorescent dyes and MNPs. Fluorescent dyes are commonly used for optical imaging of the drug delivery process. These dyes emit light when excited by a specific wavelength of light, and the intensity of the emitted light can be used to determine the concentration and location of the dye. The incorporation of fluorescent dyes into fibers allows for real-time visualization of the drug delivery process and can provide valuable information about the distribution and pharmacokinetics of the drug [104,105]. However, although fluorescent probes are useful in pre-clinical applications, they have limited tissue penetration and are not appropriate for human studies. On the other hand, the limited light penetration can be overcome through the use of radioactive isotopes, including Copper-64 for positron emission tomography (PET) or Technetium-99m for gamma-camera imaging. Functionalizing with targeting molecules, such as VEGF, may mitigate the serious problem of liver and kidney radiotoxicity posed by the use of radioisotopes, particularly alpha or beta emitters [96]. MNPs are commonly used for MRI of the drug delivery process. These nanoparticles have magnetic properties that allow them to interact with an external magnetic field and produce a signal that can be detected by an MRI scanner. The incorporation of MNPs into fibers allows for real-time monitoring of the drug delivery process and can provide information about the biodistribution and pharmacokinetics of the drug [106,107]. Another advantage of this imaging technique is that, depending on the material used, it might not be necessary to add any additional agent, as seen when using inorganic fibers such as the TiO2 described above [98]. The field of diagnostic nanofibers is rapidly advancing and holds great promise for the future. Theranostic nanocarriers offer an additional feature when compared to regular delivery systems, which is the ability to track their localization in the body and provide information on the progression of the disease and the efficacy of the therapy [108].
Conclusions
Cancer is one of the main causes of death worldwide and, despite intensive research in this area and numerous strategies developed to treat it, there is still no ideal solution to cure this disease. There are several treatments available for cancer, such as surgery, chemotherapy, radiotherapy, immunotherapy, endocrine therapy, photodynamic therapy and hyperthermia therapy, but they all have significant drawbacks, making the treatment dangerous and very painful for the patient. For this reason, research continues to be carried out, looking for alternatives that are better than the previous ones. Drug delivery systems have been intensively studied, their objective being to reach only the necessary target, preserving healthy cells and reducing side effects for patients. These are technologies designed to deliver medicinal substances in a targeted and/or regulated manner. Several structures can be used as polymeric drug delivery systems, such as pharmacological films, hydrogels, wafers, sticks, microspheres, and fibers, among others.
Fibers (micro- and nanofibers) can create very promising material structures for localized drug delivery systems, especially regarding cancer treatment. The properties of fibers can be affected by the materials used to produce them. Biodegradable and biocompatible polymers, such as CH, PVA, PLGA, and PCL, often combined with MNPs and other nanoparticles, have been utilized to achieve the desired application while minimizing the therapy's risk. As demonstrated, numerous studies have indicated that the incorporation of nanoparticles into fibrous substrates yields promising outcomes, imparting a diverse range of properties and reducing the proliferation and growth rate of cancerous cells in the tumor. Fibrous structures can also be used in theranostics, that is, in the detection and diagnosis of the disease, in addition to monitoring the response to treatment. It is also possible to incorporate imaging agents, allowing the diagnosis and treatment of diseases. Despite the progress made, none of these systems has successfully passed clinical trials, and they are not yet available for treating patients. Therefore, they still face several challenges that must be overcome before reaching their full potential.
Conflicts of Interest: The authors declare no conflict of interest.
2023-04-01T15:09:07.796Z
2023-03-29T00:00:00.000
{ "year": 2023, "sha1": "d645e4762a4b93e82a3fcc0d0f7d0b76792eac37", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/28/7/3053/pdf?version=1680090493", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "873a9105041c29d3d9e96f737e93bcbbb3388a9b", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
218628700
pes2o/s2orc
v3-fos-license
Altering sensorimotor simulation impacts early stages of facial expression processing depending on individual differences in alexithymic traits Simulation models of facial expressions suggest that posterior visual areas and brain areas underpinning sensorimotor simulations might interact to improve facial expression processing. According to these models, facial mimicry may contribute to the visual processing of facial expressions by influencing early stages. The aim of the present study was to assess whether/how early sensorimotor simulation influences early stages of face processing. A secondary aim was to investigate the relationship between alexithymic traits and sensorimotor simulation. We monitored P1 and N170 components of the event-related potentials (ERP) in participants performing a fine discrimination task of facial expressions while implementing an animal discrimination task as control condition. In half of the experiment, participants could freely use their facial mimicry whereas in the other half, they had their facial mimicry blocked by a gel. Our results revealed that, on average, both P1 and N170 ERP components were not sensitive to mimicry manipulation. However, when taking into account alexithymic traits, a scenario corroborating sensorimotor simulation models emerged, with two dissociable temporal windows affected by mimicry manipulation as a function of alexithymia levels. Specifically, as a function of mimicry manipulation, individuals with lower alexithymic traits showed modulations of the P1 amplitude, while individuals with higher alexithymic traits showed modulations of the later N170). Furthermore, connectivity analysis at the scalp level suggested increased connectivity between sensorimotor and extrastriate visual regions in individuals with lower alexithymic traits compared to individuals with higher alexithymic traits. Overall, we interpreted these ERPs modulations as compensative visual processing under conditions of interference on the sensorimotor processing. Introduction Neurobiological models of face processing propose that posterior areas, responsible for the visual analysis of faces, and central and frontal regions, committed to the extraction of emotion and the recovery of semantic and biographical information, interact in order to assign meaning to faces (Calder & Young, 2005;Hoffman, Gobbini, & Haxby, 2000). Among the most endorsed neurobiological models of face processing, the model by Haxby and colleagues (Hoffman, Gobbini, & Haxby, 2002;Hoffman et al., 2000) comprises a core system (including fusiform face area, occipital face area and superior temporal sulcus) for the visual analysis of faces, and an extended system for the advanced processing mentioned above, encompassing a large number of brain regions (including medial prefrontal cortex, temporo-parietal junction, anterior temporal cortex, precuneus, inferior parietal/frontal operculum, intraparietal sulcus, frontal eye fields and the limbic system). An interesting feature that was included in a later revision of Haxby and colleagues' model is "motor simulation" as a mechanism for assigning a meaning to facial expressions and, therefore, for the attribution of emotions (Haxby & Gobbini, 2011). 
Models of face and facial expressions processing -including that by Haxby and Gobbini (2011) -do not expand on the exact mechanism by which this simulation process takes place and contributes to emotion recognition and understanding, although several lines of research support the role of simulation in this regard (see, e.g., Gallese & Sinigaglia, 2011;Goldman & de Vignemont, 2009;Pitcher, Garrido, Walsh, & Duchaine, 2008). Neural underpinnings of the motor simulation would include the mirror neuron system (MNS), the premotor cortex (PMC), the inferior parietal lobe (IPL) and the frontal operculum (FO) (see, e.g., Banissy et al., 2011;Montgomery & Haxby, 2008;Montgomery, Seeherman, & Haxby, 2009). To note, additional brain areas may be implicated in this simulation mechanism, especially the somatosensory cortex (SC), and in this regard the term "sensorimotor simulation" seems more appropriate (e.g., Wood, Rychlowska, Korb, & Niedenthal, 2016). A meta-analysis conducted on patients with focal brain lesions, revealed that damage to the right SC is associated with deficits in the recognition of observed expressions (Adolphs, Damasio, Tranel, Cooper, & Damasio, 2000). Studies that used transcranial magnetic stimulation of the right SC demonstrated the critical involvement of this region in the processing of others' emotions (Adolphs et al., 1999;Hussey & Safford, 2009;Pitcher et al., 2008;Pourtois et al., 2004) and, crucially for the purposes of the present investigation, revealed the sequential involvement of extrastriate areas (60-100 ms) and right SC (100-170 ms) in facial expression recognition (Pitcher et al., 2008). These results support the hypothesis that sensorimotor simulation can influence early stages of face processing. Although previous evidence indicates that regions underpinning sensorimotor simulation and regions responsible for visual analysis of faces interact, it is still unclear how early this interaction occurs during face and facial expression processing. A recent study by Sessa and colleagues (Sessa, Schiano Lomoriello, & Luria, 2018) showed that visual working memory (VWM) representations of faces are affected by the blockage/alteration of the observers' facial mimicry. In a change detection task (Luria, Sessa, Gotler, Jolicoeur, & Dell'Acqua, 2010;Meconi, Luria, & Sessa, 2014;Sessa & Dalmaso, 2016;Sessa, Luria, Gotler, Jolicoeur, & Dell'Acqua, 2011;Sessa et al., 2012;Vogel & Machizawa, 2004;Vogel, McCollough, & Machizawa, 2005), participants had to memorize a precued and lateralized face expressing a certain intensity of anger (memory array) and decide, about 1 second later, whether a second face (test array) presented in the same hemifield and of the same individual had the same or a different intensity of facial expression. Critically, participants could freely use their facial mimicry whilst performing the task in half of the experiment whereas in the other half their mimicry was altered/blocked by a hardening facial gel (with the order of the two conditions counterbalanced across participants). In a 300-1300 ms time-window between the memory and the test array, a component of the event-related potentials (ERPs) indexing the quantity/quality of VWM representation was monitored. 
The amplitude of this ERP component, known as sustained posterior contralateral negativity (SPCN; Jolicoeur et al., 2007; Luria et al., 2010; Meconi et al., 2014; Sessa and Dalmaso, 2016; Sessa et al., 2011, 2012) or as contralateral delay activity (CDA; Vogel and Machizawa, 2004), was found to be reduced in the condition in which participants' mimicry was blocked/altered when compared to the condition in which they could freely use their mimicry. Furthermore, participants who were most affected by the mimicry manipulation were the most empathic on the basis of their scores in the Empathy Quotient questionnaire (EQ; Baron-Cohen & Wheelwright, 2004), in line with the evidence suggesting that the most empathic individuals are those using their facial mimicry more and are also characterized by a greater susceptibility to emotional contagion (Balconi & Canavesio, 2016; Bos, Jap-Tjong, Spencer, & Hofman, 2016; Dimberg, Andréasson, & Thunberg, 2011; Prochazkova & Kret, 2017; Seibt, Mühlberger, Likowski, & Weyers, 2015; Sonnby-Borgström, 2002; but see also Franzen, Mader, & Winter, 2018). Thus, this latter evidence suggests that a high-level visual processing stage, i.e. VWM, may be influenced by sensorimotor simulation activity during face/facial expression processing. The model by Wood and colleagues (2016) proposes that these effects might be observed even earlier, during initial stages of face processing, possibly already at the stage of the structural encoding of faces (George, Evans, Fiori, Davidoff, & Renault, 1996; Jeffreys, 1983; Perez, McCarthy, Bentin, Allison, & Puce, 1996). Furthermore, in light of Pitcher and colleagues' findings (2008), which revealed somatosensory activity 100-170 ms after the presentation of facial expressions, this hypothesis seems even more plausible. The aim of the present study was exactly to provide a direct test of the hypothesis that sensorimotor activity can influence early stages of face processing. The recent simulation model by Wood and colleagues (2016) indeed hypothesizes an iterative process between the posterior regions of visual processing and the sensorimotor regions. Therefore, it is reasonable to hypothesize that the effects of sensorimotor simulation on face visual processing should be considered within a cascade model, in which the impact of sensorimotor activity possibly becomes increasingly evident during processing in the extrastriate areas. In the present investigation we employed the ERP technique in a within-subjects design. We administered to our participants a task similar to that used by Wood and colleagues. This previous behavioral study, conducted on a large sample of participants (N = 122), involved, in a between-subjects design, a mimicry manipulation by means of a facial gel able to block/alter the participants' facial mimicry during a task that required distinguishing target expressions from highly similar distractors. Stimuli could be both faces, selected from a morphing continuum of a face identity from an expression of 100% anger to an expression of 100% sadness, and animals, selected from a morphing continuum from the image of a horse (100%) to the image of a cow (100%), as a control condition. The results showed that blocking/altering facial mimicry had a selective negative impact on the accurate discrimination of facial expressions.
The authors then proposed that this decrease in accuracy in the fine discrimination of emotions was due to a selective interference with the simulation process, which in turn would not have contributed (or would have contributed only to a small extent) to the construction of face visual percepts. Although exciting, this evidence is indirect and does not allow one to reach these intriguing conclusions with certainty. We employed the same stimuli and manipulated participants' facial mimicry in a within-subjects design, such that participants performed the discrimination task with a hardening facial gel in half of the experiment (with counterbalanced order). By means of ERPs we were able to trace the time course of the effects of mimicry on fine facial expression discrimination, focusing on early ERP components associated with face and facial expression processing, i.e. the P1 and N170 ERP components. We hypothesized that blocking/altering facial mimicry would affect sensorimotor simulation, causing a cascading effect on early face processing and translating into modulations of the P1 and/or N170 ERP components. A secondary aim of the present study was to start an exploration of the relationship between alexithymic traits and sensorimotor simulation as a mechanism for fine facial expression discrimination. To this purpose, participants completed the Toronto Alexithymia Scale (TAS-20; Bagby, Parker, & Taylor, 1994; Caretti, La Barbera, & Craparo, 2005, for the Italian version) at the end of the experimental electroencephalographic session. Recent studies have suggested that alexithymia, defined as the difficulty of identifying one's own and others' emotions, could be characterized by a deficit in sensorimotor simulation (or embodied simulation; e.g., Gallese & Sinigaglia, 2011; see, e.g., Scarpazza & di Pellegrino, 2018; Scarpazza, Làdavas, & Cattaneo, 2018) during the processing of facial expressions of others' emotions, especially with regard to those with negative valence (Scarpazza, di Pellegrino, & Làdavas, 2014; Scarpazza, Làdavas, & Di Pellegrino, 2015; Sonnby-Borgström, 2009). One of these studies, in particular, demonstrated in alexithymic participants a reduced activity of the corrugator supercilii and of the zygomaticus major, respectively for negative and positive emotions, during the passive viewing of facial expressions (Sonnby-Borgström, 2009). These previous studies indicate that individuals with greater alexithymic traits might be less, or differently, affected by mimicry manipulations in fine emotion discrimination tasks, precisely because they tend to rely to a lesser degree on the sensorimotor simulation mechanism to recognize and discriminate emotions in others. With regard to the present investigation, we hypothesized that we would observe a relationship between alexithymic traits and modulations of the P1 and/or N170 ERP components, as well as of accuracy in the fine discrimination task, as a function of the mimicry manipulation. Finally, we also investigated the effect of altering facial mimicry on the connectivity between visual and sensorimotor regions. According to Wood and colleagues' model (2016), facial expression processing occurs within a continuous information exchange between the visual and sensorimotor areas. Thus, another aim of the present investigation was to test this hypothesis by studying whether, and possibly how, this information flow could be affected by altering/blocking facial mimicry, also in relation to alexithymic traits.
Method Participants Data were collected from 35 volunteer healthy students (6 males) from the University of Padova. Data from two participants were discarded from analyses due to excessive electrophysiological artifacts. All participants included in the final sample reported normal or corrected-to-normal vision and no history of neurological disorders. The final sample included 33 participants (mean age: 22.8 years, SD = 3.28, 4 left-handed) in line with a reference study for this investigation (Sessa et al., 2018; see also Achaibou et al., 2008). All participants signed a consent form according to the ethical principles approved by the University of Padova (Protocol number: 1986). Stimuli The stimuli were 11 grayscale digital photographs (i.e., faces and animals stimuli) for each morph continuum. We adopted the stimuli developed by Niedenthal and colleagues (Niedenthal, Halberstadt, Margolin, & Innes-Ker, 2000) and then used in Wood and colleagues' experiment (2016). In particular, the face stimuli consisted of images of a female model expressing morphed combinations of sadness and anger emotions, while the non-face control images were selected from a morph of a horse and a cow that had maximally similar postures. Specifically, the face continuum began at 100% sad and 0% angry and transitioned in 10% increments to 0% sad and 100% angry (see Figure 1). All images were resized to subtend a visual angle between 10 and 12 deg. Participants were seated about 60 cm away from the screen. Procedure The XAB discrimination task required participants to discriminate a target from a perceptually similar distractor. Before starting the experiment, participants performed twelve practice trials to get familiar with the task. Each trial ( Figure 2 depicts the trial structure of the XAB discrimination task) began with a 500 ms fixation cross, followed by the target image (X) for 750 ms. The target was then followed by a 350 ms noise mask, aiming to limit the processing of the stimuli, thus controlling for the potential effects of iconic memory representations. Every trial was interleaved by a variable blank interval (Inter-stimulus Interval, ISI: 800-900 ms). The target image reappeared alongside a distractor, with left-right locations counterbalanced across trials. The target and distractor images could be at 20% apart on the morph continuum, yielding nine image pairs, or 40% apart, yielding six pairs. The motivation for this experimental manipulation is that previous work suggested that sensorimotor simulation may be especially recruited in the case of subtle discrimination of facial expressions (Rychlowska et al., 2014;Wood, Lupyan, et al., 2016;Wood, Rychlowska, et al., 2016). The target and distractor remained on the screen until participants' response. Participants' task was to press a key (F = left; J = right) to indicate which image matched the target image seen in the first screen presentation. Participants performed 4 experimental blocks, two for each level of discrimination (20% or 40% distance between the target and distractor, alternately), with counterbalanced order across participants. Each block consisted of 144 trials (i.e., 576 trials in total). Each participant performed the task in two different conditions (counterbalanced order across participants); in one, (blocked/altered mimicry condition) a mask gel was applied on the participant's whole face, so as to create a thick and uniform layer, excluding the areas near the eyes and upper lip. 
The product used as a gel was a removable cosmetic mask (BlackMask Gabrini©) that dries in 10 minutes from application and becomes a sort of plasticized and rigid mask. Participants perceived that the gel prevented the wider movements of face muscles. In the other half of the experiment (free mimicry condition) nothing was applied to the participants' faces. As in the study by Wood and colleagues , at the beginning of the experimental session participants were told that the experiment concerned "the role of skin conductance in perception" and that they would be asked to spread a gel on their face in order to "block skin conductance" before completing a computer task. EEG Data Preprocessing The EEG was recorded during the task by means of 62 active electrodes distributed on the scalp according to the extended 10/20 system, positioning an elastic Acti-Cap with reference to the left ear lobe. Sampling rate was 1000 Hz and the high viscosity of the gel used allowed the impedance to be kept below 10 KΩ. Continuous data were downsampled to 500 Hz, high-pass filtered at 0.1 Hz, re-referenced to the average of all channels and segmented in epochs starting from -500 ms to 1000 ms with respect to stimulus onset. Independent component analysis (ICA) was applied to the segmented data in order to identify and manually remove artifactual activity related to eye-blinks and saccades (Jung et al., 2000). Statistical Analysis Behavior A repeated measures analysis was performed via logistic mixed effect models using restricted maximum likelihood (REML) estimation to estimate the effect of type of stimulus (faces vs. animals), mimicry (free vs. blocked/altered mimicry) and level of discrimination (20% vs. 40% apart the morph continuum), as fixed effect, on participant's accuracy level. We included participants' variability as random effect, as suggested by Baayen and colleagues (Baayen, Davidson, & Bates, 2008). Then, to discover the relationship between alexithymic traits and sensorimotor simulation, we divided the sample into two groups, based on the median TAS-20 value: low alexithymic scores (TAS-20 ≤ 43; N = 16) and high-scorers (TAS-20 > 50; N = 17). Thus, in a second model we also included the group (high vs. low alexithymic traits) as fixed effect, and its potential interaction with the other factors, to estimate its potential impact on participant's accuracy level. The behavioral analyses were performed using the software R (2.13) with the lmer function from the lme4 package (Bates, Mächler, Bolker, & Walker, 2015). ERP For the quantification of the ERPs, data cleaned with ICA were further segmented into epochs starting from -200 ms to 600 ms with respect to stimulus onset, baseline corrected and low pass filtered at 30 Hz. Epochs with a peak-to-peak amplitude exceeding ± 50 μV in any channel were identified using a moving window procedure (window size = 200 ms, step size = 50 ms) and discarded from further analysis. For each experimental condition, P1 and N170 activities were defined as the mean amplitude of each participants' grand average in the time windows 100-120 ms and 160-180 ms, respectively, and averaged in two ROIs, one per each hemisphere, clustering together the activity of channels PO7-PO9-O1 (left hemisphere) and PO8-PO10-O2 (right hemisphere). In order to test the effect of wearing the gel mask, a delta score was derived subtracting the activity in the free-mimicry condition from the one in the blocked/altered mimicry condition, i.e. 
ΔP1 and ΔN170, which represented the units of observation for statistical analysis. Statistical analysis of ERP activity was performed using linear mixed-effects model adopting a model selection strategy based on the Akaike Information Criterion (AIC) (Maffei, Spironelli, & Angrilli, 2019;Wagenmakers & Farrell, 2004). AIC (Akaike, 1973) is a powerful metric derived from Information Theory which, starting from a set of candidate models, allows to derive the relative quality of each model (the lowest the AIC the highest the quality of the model, controlling for its complexity). For each ERP component, data were fitted with a full model including four fixed-effect predictors (type of stimulus, level of discrimination, hemisphere and group), their interactions and a random intercept to model repeated measurements across subjects. This model was compared with its simpler instances by removing the predictors until reaching an intercept-only model. The best model with lowest AIC was identified and the significance of its predictors was assessed with an F-test using the Satterthwaite approximation for degrees of freedom (Luke, 2016). In addition, significant effects have been explored using post-hoc pairwise contrasts, corrected for multiple comparisons using false discovery rate (FDR; Benjamini and Hockberg, 1995). Connectivity In order to characterize the dynamic information flow between visual and sensorimotor areas, the instantaneous phase locking value (iPLV) was used. The PLV is a metric that describes the absolute value of the mean phase difference between two signals (Lachaux, Rodriguez, Martinerie, & Varela, 1999), and has been widely used to investigate brain functional connectivity (Sakkalis, 2011). Instantaneous phase synchrony in the beta band (13-30 Hz) between each pair of channels was computed for each experimental condition. Connectivity in the beta band was studied according to evidences suggesting that processing of emotional facial expression modulates oscillatory activity in this spectral range, and that the extent of this modulation is related to individual differences in empathic abilities (Cooper, Simpson, Till, Simmons, & Puzzo, 2013). Furthermore, beta band connectivity during emotional face processing is reduced in individuals with autism compared to typical developing participants (Leung, Ye, Wong, Taylor, & Doesburg, 2014). Finally, there are converging evidences that the core activity of the sensorimotor system is encoded in beta oscillations (Jensen et al., 2005). The average PLV between two ROIs, one over the occipital region (channels: PO7, PO8, PO9, PO10, O1 and O2) and one over the central region of the scalp (channels: C1, C2, C3, C4, C5 and C6), was computed in the same time windows used for ERP analysis (100-120 ms and 160-180 ms), in order to model the dynamic changes in the functional connectivity between visual and sensorimotor cortices. The statistical approach to test our predictions regarding connectivity was the same employed for ERPs analysis. A linear mixed-effects model including as fixed-effects the predictors condition (free vs. blocked mimicry) and group (high vs. low alexithymic traits) and their interaction and a random intercept for each subject was fitted, separately, to the data for each time window (P1 and N170) and type of stimulus (face and animal). A model selection approach was used to identify the best model explaining the data according to the AIC. 
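As an illustration of the two quantities described above (window-averaged ERP amplitudes with their Δ scores, and the phase-locking value of Lachaux and colleagues, 1999, on which the iPLV measure is based), the sketch below shows a minimal Python implementation operating on hypothetical single-trial arrays. It is not the authors' analysis code; array names, sampling rate, window choices and channel selections are assumptions made only for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500  # Hz, sampling rate after downsampling

def window_mean_amplitude(erp_uv, times_ms, t_start, t_end):
    """Mean amplitude (µV) of an averaged ERP within a latency window, e.g. 100-120 ms for P1."""
    mask = (times_ms >= t_start) & (times_ms <= t_end)
    return erp_uv[..., mask].mean(axis=-1)

def plv_beta(trials_a, trials_b, times_ms, t_start, t_end, band=(13.0, 30.0)):
    """Beta-band phase-locking value between two channels/ROIs.

    trials_a, trials_b: arrays of shape (n_trials, n_samples) with single-trial EEG.
    PLV(t) = |mean over trials of exp(i * (phase_a(t) - phase_b(t)))|,
    then averaged within the requested latency window."""
    b, a = butter(4, np.array(band) / (FS / 2.0), btype="band")
    phase_a = np.angle(hilbert(filtfilt(b, a, trials_a, axis=-1), axis=-1))
    phase_b = np.angle(hilbert(filtfilt(b, a, trials_b, axis=-1), axis=-1))
    plv_t = np.abs(np.exp(1j * (phase_a - phase_b)).mean(axis=0))
    mask = (times_ms >= t_start) & (times_ms <= t_end)
    return plv_t[mask].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    times = np.arange(-200, 600, 1000 / FS)  # ms; epochs from -200 to 600 ms

    # Hypothetical single trials (n_trials x n_samples) for one posterior ROI,
    # recorded in the free-mimicry and blocked-mimicry conditions.
    free_roi = rng.standard_normal((100, times.size))
    blocked_roi = rng.standard_normal((100, times.size))

    # Delta score: blocked-mimicry minus free-mimicry mean amplitude in the P1 window.
    delta_p1 = (window_mean_amplitude(blocked_roi.mean(axis=0), times, 100, 120)
                - window_mean_amplitude(free_roi.mean(axis=0), times, 100, 120))

    # Beta-band PLV between a hypothetical occipital and central channel pair.
    occ = rng.standard_normal((100, times.size))
    cen = 0.5 * occ + 0.5 * rng.standard_normal((100, times.size))
    plv = plv_beta(occ, cen, times, 100, 120)
    print(f"delta P1 = {delta_p1:.2f} microvolts, beta PLV (100-120 ms) = {plv:.2f}")
```

In this form the PLV is computed across trials at each time point and then averaged within the P1 or N170 window, mirroring the way the window-averaged connectivity values described above were obtained before being entered into the mixed-effects models.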
The significance of the predictors included in the best model was then assessed with an F-test using the Satterthwaite approximation for degrees of freedom (Luke, 2016), and significant effects were explored using post-hoc pairwise contrasts, corrected for multiple comparisons using false discovery rate (FDR). In the second model, we tested the effect of alexithymic traits on participants' accuracy level; there were no differences between individuals with high (μ = .851) and low alexithymic traits (μ = .867). No interaction reached significance (min p = .082). ERP: ΔP1 Model comparison showed that the best model explaining the data observed for the ΔP1 was the one including as fixed effects level of discrimination, type of stimulus, group and the interaction between type of stimulus and group (AIC = 807.9, logL = -396.74, ΔAIC1 = 23.2). The F-test revealed a significant interaction between type of stimulus and group (F(1,233) = 13.96, p < 0.001). Pairwise comparisons performed on the interaction revealed that, when the facial mimicry was blocked (Figure 3A), ΔP1 for facial expressions was higher in the participants with low alexithymia traits compared to ΔP1 for animals (t(233) = 3.39, p = 0.004), and higher compared to ΔP1 for facial expressions in the group with high alexithymia traits (t(44) = 2.86, p = 0.01). ERP: ΔN170 The best model identified for the analysis of ΔN170 was that including as fixed effects type of stimulus, group and their interaction (AIC = 926, logL = -456.82, ΔAIC1 = 17.2). The F-test revealed a significant main effect of stimulus type (F(1,232) = 4.2, p = 0.041) and a significant interaction between stimulus type and group (F(1,232) = 4.21, p = 0.041). Pairwise comparisons performed on the interaction revealed that, with facial mimicry blocked by the gel mask (Figure 3B), ΔN170 for facial expressions was significantly more negative than the one for animals only in the participants with high alexithymia traits (t(232) = 3.01, p = 0.01). Discussion By means of the ERP technique, the present investigation had the main objective of monitoring the effects of blocking observers' facial mimicry, while engaged in a fine facial expression discrimination task, on early stages of face and facial expression processing, reflected in potential modulations of the P1 and N170 ERP components. To assess the selectivity of this effect, a control condition was implemented in which participants had to perform a similar task of fine discrimination of animal shapes. It is important to underline that for this purpose we used a within-subjects manipulation of facial mimicry, so that participants performed one half of the experiment being able to freely use their facial mimicry and a second half of the experiment with their mimicry blocked by a hardening gel (the order of the two conditions was counterbalanced across participants). The model proposed by Wood and colleagues (2016) suggests an iterative connection between the areas responsible for the visual processing of faces and the sensorimotor areas in charge of simulation processes. Previous evidence also suggests that sensorimotor activity can be observed as early as 100-170 ms from the presentation of facial expressions (Pitcher et al., 2008). Starting from these theoretical and experimental foundations, we first of all hypothesized that early ERP components, i.e., P1 and/or N170, would be modulated as a function of mimicry manipulation in fine discrimination of facial expressions.
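For readers who want to reproduce the flavour of the AIC-based model selection reported above for ΔP1 and ΔN170, the sketch below uses Python and statsmodels. The original analyses were run in R with lme4/lmer, so the data-frame layout, column names and candidate formulas here are hypothetical, and the AIC is computed by hand from the log-likelihood with an approximate parameter count.

```python
import statsmodels.formula.api as smf

def fit_and_aic(formula, data):
    """Fit a linear mixed model with a random intercept per subject (ML fit) and return (fit, AIC)."""
    result = smf.mixedlm(formula, data, groups=data["subject"]).fit(reml=False)
    k = len(result.params) + 1          # rough count: fixed effects + variance components + residual
    return result, 2 * k - 2 * result.llf

# Candidate fixed-effect structures, from a full-ish model down to intercept-only
candidates = [
    "delta_p1 ~ stimulus * group + discrimination",
    "delta_p1 ~ stimulus + group",
    "delta_p1 ~ stimulus",
    "delta_p1 ~ 1",
]

def select_best(data):
    fits = {f: fit_and_aic(f, data) for f in candidates}
    best = min(fits, key=lambda f: fits[f][1])      # the lowest AIC identifies the preferred model
    return best, fits[best][0]
```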
Additionally, it is important to consider that facial mimicry is mainly conceived as a manifestation of sensorimotor simulation and its feedback to the motor and somatosensory regions (via the motor regions) seems to be relevant for the dynamic modeling of the simulation itself. In fact, several previous studies have shown how an alteration of facial mimicry is detrimental for the recognition/discrimination of facial expressions (Baumeister et al., 2016;Baumeister et al., 2015;Keillor et al., 2002;Korb et al., 2016;Niedenthal et al., 2001;Oberman et al., 2007;Rychlowska et al., 2014;Stel & van Knippenberg, 2008;. Based on these evidences, and in line with the theoretical framework proposed by Wood and colleagues (2016), we expected to observe that blocking mimicry would impact connectivity at the scalp level between temporo-occipital and central electrodes selectively for faces (but not for animal stimuli) in the temporal window including the P1 and N170 components. The present results are in line with the predictions, but reveal a more complex picture than originally expected. Indeed, our results revealed that both the P1 and the N170 ERP components were not significantly modulated as a function of the mimicry manipulation by itself, but only when individual differences in alexithymia are taken in account. Additionally, connectivity analysis reveals that during face processing our manipulation affected the information flow between visual and sensorimotor regions differently in the two groups of participants. Whilst, on the one hand, we did not expect a modulation of the N170 component in participants with higher alexithymic traits as a function of the mimicry condition, on the other hand, the results at the connectivity analysis shows a reduction of the phase synchrony in the beta band between occipito-temporal and central regions only in participants with higher alexithymic traits. These results are very nicely in agreement with our hypothesis and previous experimental evidence, in which a deficient sensorimotor simulation in alexithymic subjects has been documented, likely arising from abnormal interoceptive abilities (Scarpazza & di Pellegrino, 2018;Scarpazza et al., 2014Scarpazza et al., , 2015Sonnby-Borgström, 2009). Additionally, we found exactly the opposite pattern for those individuals with low alexithymic traits where the alteration/blockage of the facial mimicry prompts an increase in the visual-sensorimotor connectivity. Therefore, combining together these findings provides supports to the hypothesis that a manipulation of facial mimicry has an impact on early stages of face processing, and reveals that it is temporally dissociable according to the level of alexithymia of the subjects. This temporal dissociation likely results from the different degree of connectivity between sensorimotor and visual regions in the two groups, as indicated by the instantaneous phase synchrony in the beta band. Therefore, both modulations, of P1 (in participants with lower alexithymic traits) and N170 (in participants with higher alexithymic traits) would be the consequence of interference on the simulation system induced by the manipulation of facial mimicry. This interpretation is corroborated by the observation that these modulation effects have been selectively observed for face stimuli but not for animal stimuli. It remains unclear whether these modulations reflect disruption of face processing or rather a compensatory mechanism. 
In this perspective, the null behavioral results in terms of accuracy as a function of the mimicry conditions allow us to reach out for the second interpretation. This whole pattern of findings finds support in more recent evidence. A study by de la Rosa and colleagues (de la Rosa, Fademrecht, Bülthoff, Giese, & Curio, 2018) has recently supported the existence of the two pathways of facial expressions processing, i.e. visual and sensorimotor, sensitive in opposed directions to visual and motor adaptation, thus suggesting that the two systems are dissociable; furthermore, the recent study by Sessa and colleagues (2018) demonstrated that facial mimicry can impact high-level visual processing stages of facial expressions (in terms of modulations of the SPNC ERP component; also named CDA). Overall, this evidence sustains the conclusion that the two systems, visual and sensorimotor, are dissociable but interact, exactly as suggested by the Wood and colleagues' model (2016). In this vein, the impairment of the functioning of one system (e.g., sensorimotor, as in the case of the mimicry manipulation) can be compensated by the other system (e.g., visual). In this perspective, in the absence of appropriate feedback from the sensorimotor regions, greater compensative activity at the level of the regions of the core system (Haxby, 2011) could give rise to P1/N170 of greater amplitude in the condition of blocked mimicry. Therefore, the modulation of the P1/N170 as a function of facial mimicry manipulation would be an expression of a mechanism of visual compensation acting at early stages of visual processing; furthermore, how early it appears to be a function of the level of alexithymia and the level of connectivity between visual and sensorimotor systems. This compensative mechanism could very clearly explain the absence of a behavioral effect as a function of the facial mimicry manipulation. Finally, these results would fit in part with the EEG-EMG co-registration study by Achaibou, Pourtois, Schwartz & Vuilleumier (2008) who observed that increased levels of facial muscle activity for happy and angry faces were associated with smaller N170 amplitudes, and the authors themselves, in the discussion of their results, suggest that this pattern might result from the existence of two dissociable systems able to compensate each other. In conclusion, the present study demonstrates for the first time that facial mimicry, as a manifestation of sensorimotor simulation, is able to influence the visual processing of facial expressions at early stages, in particular at the level of the P1 and N170 ERP components. Furthermore, the present results highlight how these modulatory effects are related to individual alexithymic traits, and, in general, to the connectivity between the visual and sensorimotor systems. It is necessary to take into consideration that the participants of our study with higher levels of alexithymia cannot however be classified as alexithymic (except for 2 participants) on the basis of the TAS-20 cut-off scores. It is therefore perhaps legitimate to hypothesize that as the alexithymic traits increase, the sensorimotor activity and/or the connectivity between the sensorimotor and the visual systems is reduced, thus not allowing the implementation by the visual system of the alexithymic subjects of a compensation mechanism. However, this is currently a speculative hypothesis to which future studies will be able to answer more precisely.
2019-06-14T22:28:31.000Z
2019-06-14T00:00:00.000
{ "year": 2019, "sha1": "0c06113322056618b44c180e0e951738b04c5779", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1906.06424", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0c06113322056618b44c180e0e951738b04c5779", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Psychology", "Biology", "Medicine" ] }
2167809
pes2o/s2orc
v3-fos-license
A New Model Using Routinely Available Clinical Parameters to Predict Significant Liver Fibrosis in Chronic Hepatitis B Objective We developed a predictive model for significant fibrosis in chronic hepatitis B (CHB) based on routinely available clinical parameters. Methods 237 treatment-naïve CHB patients [58.4% hepatitis B e antigen (HBeAg)-positive] who had undergone liver biopsy were randomly divided into two cohorts: training group (n = 108) and validation group (n = 129). Liver histology was assessed for fibrosis. All common demographics, viral serology, viral load and liver biochemistry were analyzed. Results Based on 12 available clinical parameters (age, sex, HBeAg status, HBV DNA, platelet, albumin, bilirubin, ALT, AST, ALP, GGT and AFP), a model to predict significant liver fibrosis (Ishak fibrosis score ≥3) was derived using the five best parameters (age, ALP, AST, AFP and platelet). Using the formula log(index+1) = 0.025+0.0031(age)+0.1483 log(ALP)+0.004 log(AST)+0.0908 log(AFP+1)−0.028 log(platelet), the PAPAS (Platelet/Age/Phosphatase/AFP/AST) index predicts significant fibrosis with an area under the receiving operating characteristics (AUROC) curve of 0.776 [0.797 for patients with ALT <2×upper limit of normal (ULN)] The negative predictive value to exclude significant fibrosis was 88.4%. This predictive power is superior to other non-invasive models using common parameters, including the AST/platelet/GGT/AFP (APGA) index, AST/platelet ratio index (APRI), and the FIB-4 index (AUROC of 0.757, 0.708 and 0.723 respectively). Using the PAPAS index, 67.5% of liver biopsies for patients being considered for treatment with ALT <2×ULN could be avoided. Conclusion The PAPAS index can predict and exclude significant fibrosis, and may reduce the need for liver biopsy in CHB patients. Introduction Up to 40% of patients with chronic hepatitis B (CHB) would develop cirrhotic complications or hepatocellular carcinoma (HCC) during their lifetime [1]. While several clinical parameters, including male gender, older age, higher levels of alanine aminotransferase (ALT) and serum HBV DNA have been identified as risk factors for severe liver disease [2,3,4], the golden standard in assessing disease severity remains to be liver biopsy. Liver biopsy is still recommended for certain CHB patients, especially those with an ALT level of ,26upper limit of normal (ULN) [5,6]. However, up to 2% of patients develop complications from liver biopsy [7,8]. Others problems like intra-observer variation and sampling error are also unavoidable [9,10,11]. There is thus an increasing demand for developing predictive models of fibrosis based on non-invasive markers. Many predictive models of fibrosis, including the AST/platelet radio index (APRI) and FIB-4 index, were based on patients with chronic hepatitis C [12,13,14,15,16,17]. Using such models to predict liver fibrosis in CHB patients had produced conflicting results [18,19]. Only a minority of models were based on CHB patients [20,21,22,23], and these models were limited by a disproportionate percentage of either hepatitis B e antigen (HBeAg)-positive or -negative patients. Some of these studies also lack patients with normal serum ALT [20,21]. A recently-derived model is the aspartate aminotransferase (AST)/platelet/gammaglutamyl transpeptidase (GGT)/a-fetoprotein (AFP) (APGA) index, but this is limited by its correlation with transient elastography and not actual liver histology [24]. 
Another factor limiting the use of other non-invasive models is that markers used in prediction may not be routinely available in non-research laboratories [18,20,25,26]. The aim of this study is to create a predictive model based on routinely-available clinical parameters to accurately predict significant fibrosis in both HBeAg-positive and -negative CHB. Patients The current study included treatment-naïve patients who were enrolled into therapeutic drug trials between 1994 and 2008 in the Department of Medicine, the University of Hong Kong, Queen Mary Hospital. All patients were positive for hepatitis B surface antigen (HBsAg) for at least 6 months, with a HBV DNA level of more than 2,000 IU/mL, and a serum ALT of less than 10 times the ULN prior to recruitment. Patients with decompensated cirrhosis or concomitant liver disease, including chronic hepatitis C or D virus infection, primary biliary cirrhosis, autoimmune hepatitis, Wilson's disease, and significant intake of alcohol (20 grams per day for female, 30 grams per day for male) were excluded. Written consent was obtained prior to liver biopsy, and all trials had been approved by the Institutional Review Board of the University of Hong Kong. Patient demographics and laboratory parameters (altogether 12 variables) were recorded at the time of liver biopsy. These include age, gender, HBeAg status, HBV DNA levels, albumin, bilirubin, ALT, AST, alkaline phosphatase (ALP), GGT, AFP and platelet count. The ULN of ALT was based on the respective drug trial, ranging from 45 to 53 U/L in men and 31 to 43 U/L in women. Serum HBV DNA levels were measured by three different assays, as follow: a branched DNA assay (Versant HBV DNA 3.0 assay, Bayer Health-Care Diagnostic Division, Tarrytown, NY), with a lower limit of quantification of 400 IU/mL in 33 patients, Cobas Amplicor HBV Monitor Test (Roche Diagnostic, Branchburg, NJ) with a lower limit of quantification of 60 IU/mL in 88 patients, and Cobas Taqman assay (Roche Diagnostic, Branchburg, NJ) with a lower limit of quantification of 12 IU/mL in 116 patients. Liver Biopsy An 18G sheathed cutting needle (Temno Evolution, Cardinal Health, McGaw Park, IL) was used for liver biopsy for 33 patients, with a minimum length of 1.5 cm obtained. For the remainder of the cohort, a 17G core aspiration needle (Hepafix, B. Braun Melsungen AG, Germany) was used, with a minimum length of 2 cm obtained. Histologic grading of necroinflammation and staging of liver fibrosis were performed using the Knodell histologic activity index [27] and Ishak fibrosis score [28] respectively, by a single histopathologist blinded to the patients' laboratory data. Significant fibrosis was defined as an Ishak score of 3 or more, meaning the presence of at least bridging fibrosis. Statistical analysis The primary endpoint of the present study was to determine whether there were associations between significant fibrosis which were present in 77 patients (32.4%) in the entire cohort, and the 12 routinely-available clinical parameters mentioned above. Data was randomly divided into a training cohort and a validation cohort. Concerning the optimal sample size of this study, with 32.4% of our patient cohort having significant fibrosis and allowing a 10% error for a 95% confidence interval, 84 patients were needed in each cohort for the study to be adequately powered. A training cohort consisting of 108 patients (45.6%) was used to develop the model. The remaining 129 patients (54.4%) formed the validation cohort. 
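The stated requirement of 84 patients per cohort is consistent with the standard sample-size formula for estimating a proportion, n = z²·p·(1−p)/d²: with p = 0.324 (the observed prevalence of significant fibrosis), a margin of error d = 0.10 and z = 1.96 for a 95% confidence interval, n ≈ (1.96² × 0.324 × 0.676)/0.10² ≈ 84, which both the training (n = 108) and validation (n = 129) cohorts exceed.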
All statistical analyses were performed using SPSS version 16.0 (SPSS Inc., Chicago, IL), SAS system version 9.1, R version 2.81 and STATA/SE 9.2. To create a new predictive model, all variables were subjected to a logarithmic transformation for a better model fit. The sequence of variables in order of their associations with significant liver fibrosis (coefficient path) was determined by L1 regularized regression. The area under the receiver operating characteristic (AUROC) curve was determined for each number of variables used for the prediction of significant fibrosis. The number of variables used was decided when the addition of extra variables failed to give a relatively better accuracy. A new predictive model was then created with the optimal cut-off value determined as the value with the highest sensitivity and specificity. Using the new regression model, the AUROC, sensitivity, specificity, positive and negative predictive values and likelihood ratios were calculated. This new predictive model was compared to three pre-existing non-invasive indices using routinely-available clinical parameters: the APRI, the FIB-4 index and the APGA index. The APRI was calculated using [AST (U/L)/(ULN of AST)/platelet count (×10⁹/L)] × 100 [12]. The FIB-4 index was calculated using [age (years) × AST (U/L)]/[platelet count (×10⁹/L) × √ALT (U/L)]. The Mann-Whitney U test was used for continuous variables with a skewed distribution; the chi-squared test was used for categorical variables. Correlation of the different predictive models with significant fibrosis was assessed using the Spearman correlation coefficient. A two-sided p value of <0.05 was considered statistically significant. Results A total of 237 patients with all 12 clinical parameters available were recruited. The characteristics of all 237 patients at the time of liver biopsy, including a comparison between the training and validation cohorts, are shown in Table 1. The median age was 38.2 years and 98 patients (41.3%) were HBeAg-positive. Twenty-five patients (10.5%) had a normal ALT level. Significant fibrosis and cirrhosis were present in 77 patients (32.4%) and 5 patients (2.1%) respectively. The percentage of patients with significant fibrosis among patients with ALT ≥2×ULN and <2×ULN was 39.6% (44 out of 111 patients) and 26.2% (33 out of 126 patients) respectively. The sequence of variables added at each step under the AUROC curve is shown in Figure 1. The addition of the first 5 variables (AFP, ALP, age, AST, platelet count) achieved a best fit in the regression model. The further addition of variables only increases the complexity of the formula without achieving a marked improvement in prediction accuracy. Using L1 regularized regression, a new predictive model for significant fibrosis, named the PAPAS index (Platelet/Age/Phosphatase/AFP/AST), was derived as follows: log(index+1) = 0.025 + 0.0031(age) + 0.1483 log(ALP) + 0.004 log(AST) + 0.0908 log(AFP+1) − 0.028 log(platelet). The AUROC for predicting significant fibrosis was 0.701 for the training cohort and 0.776 for the validation cohort (Figure 2). There was no significant difference in the AUCs of the training and validation groups (p = 0.270). The PAPAS index was then compared with three previously published non-invasive indices, i.e. the APRI, the FIB-4 index and the APGA index. The boxplots of the four indices in predicting significant fibrosis are shown in Figure 3. APRI, the FIB-4 index, the APGA index and the PAPAS index all correlated well with significant fibrosis [r = 0.337, 0.338, 0.418 and 0.426 respectively (all p < 0.001)].
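To make the competing indices concrete, the sketch below implements the PAPAS formula given above together with the standard APRI and FIB-4 definitions. It is illustrative only: base-10 logarithms are assumed for the PAPAS terms, the units follow the paper (U/L, ×10⁹/L, ng/mL), and nothing here has been validated against the authors' own code or cut-off handling.

```python
import math

def papas_index(age, alp, ast, afp, platelet):
    """PAPAS index from the published regression; base-10 logarithms are assumed."""
    log_index_plus_1 = (0.025 + 0.0031 * age
                        + 0.1483 * math.log10(alp)
                        + 0.004 * math.log10(ast)
                        + 0.0908 * math.log10(afp + 1)
                        - 0.028 * math.log10(platelet))
    return 10 ** log_index_plus_1 - 1

def apri(ast, ast_uln, platelet):
    """AST-to-platelet ratio index: [AST/ULN of AST / platelet (x10^9/L)] x 100."""
    return (ast / ast_uln) / platelet * 100

def fib4(age, ast, alt, platelet):
    """FIB-4 index: age (years) x AST (U/L) / [platelet (x10^9/L) x sqrt(ALT (U/L))]."""
    return age * ast / (platelet * math.sqrt(alt))

# Hypothetical patient, for shape only: 45 y, ALP 80, AST 50 (ULN 40), ALT 60, AFP 4, platelets 180
# papas_index(45, 80, 50, 4, 180); apri(50, 40, 180); fib4(45, 50, 60, 180)
```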
The AUROC for predicting significant fibrosis in the validation cohort for all four models is shown in Figure 4a. The AUCs of the PAPAS index, APGA index, FIB-4 index and APRI were 0.776, 0.758, 0.723 and 0.708 respectively (Table 2). The AUC of the PAPAS index was significantly better than that of APRI (p = 0.009). There were no significant differences between the AUCs of the PAPAS index, APGA index and FIB-4 index. For patients with ALT <2×ULN, the AUROC for all four indices is shown in Figure 4b. The AUC of the PAPAS index improved to 0.797 (Table 3). The accuracy and correlation coefficients of the PAPAS index are the best among the 4 models. The sensitivity, specificity, predictive values and likelihood ratios of all four indices are shown in Table 4, using the various cut-offs suggested for each model. Using an optimal cut-off of 1.662, the PAPAS index had a sensitivity of 73.3% and a specificity of 78.2% in predicting significant fibrosis. The negative predictive value was 88.4%. One hundred and twenty-six patients (53.2%) among our total patient cohort had an ALT level of <2×ULN, a patient group in whom liver biopsies are recommended before considering treatment. Among this group, 85 patients (67.5%) had a score less than the optimal cut-off of 1.662, suggesting that these patients do not have significant fibrosis and liver biopsies could be avoided. Seventy-five out of these 85 patients (88.2%) had insignificant fibrosis (Ishak stage 0 to 2) on actual histology. For the remaining 10 patients (11.8%), 5 had stage 3 fibrosis and 5 had stage 4 fibrosis. If the revised ULN of ALT as suggested by Prati et al (30 U/L for men, 19 U/L for women) [29] was used, 39 patients would have an ALT level of <2×ULN, of which 30 patients (76.9%) could avoid liver biopsy by having a score of less than 1.662. Twenty-eight out of these 30 patients (93.3%) had insignificant fibrosis. For the remaining 2 patients (6.7%), one had stage 3 fibrosis and another had stage 4 fibrosis. Discussion Given the invasiveness of liver biopsy, the development of non-invasive markers for liver fibrosis has always been an attractive option, especially since non-invasive markers for fibrosis in CHB are not well-established. Liver biopsy itself also has its limitations, thus using the AUROC in evaluating non-invasive markers of fibrosis could never reach the perfect value of 1.0. In fact, it had been shown previously that a perfect marker for significant fibrosis would not even reach an AUROC of 0.90 [30,31], which is the reason why many previous studies could only obtain an AUROC range of 0.76-0.88 [30]. The PAPAS index obtained an AUROC of 0.776 for the prediction of significant fibrosis. The AUROC improves to 0.797 for patients with ALT <2×ULN, the group of patients in whom liver biopsy is recommended before considering treatment. The sensitivity and specificity of our model were both high at 73.3% and 78.2% respectively, and a high negative predictive value of 88.4% was achieved at the optimal cut-off value. The AUROC obtained was superior to other models of fibrosis based on commonly available clinical parameters used in our cohort. Two such models, the FIB-4 index and APRI, were initially created based on patients with chronic hepatitis C, and therefore might not be suitable for CHB patients. According to one study, the AUROC of APRI in 218 CHB patients in predicting fibrosis was only 0.63 [19].
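The diagnostic-performance figures quoted above (sensitivity, specificity and negative predictive value at the 1.662 cut-off) are straightforward to recompute from a score vector and the biopsy labels. The sketch below is a generic illustration with hypothetical inputs, not the authors' SPSS/SAS code.

```python
import numpy as np

def binary_metrics(scores, has_fibrosis, cutoff=1.662):
    """Sensitivity, specificity, PPV and NPV of an index at a given cutoff.

    scores       : array of index values (e.g. PAPAS)
    has_fibrosis : boolean array, True when Ishak stage >= 3
    """
    pred = np.asarray(scores) >= cutoff
    truth = np.asarray(has_fibrosis, dtype=bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```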
Two other such models based on chronic hepatitis C patients, Fibrotest and Actitest, achieved satisfactory results in CHB patients, but were limited by the requirement of using special and non-routinely available biomarkers. In addition, the majority of the study population was HBeAg-negative [18]. The disproportionate representation of either HBeAg-positive or HBeAg-negative patients was also seen in other non-invasive models for CHB [20,21,22]. Our study had a good mixture of both HBeAg-positive (41.3%) and -negative patients, making it more representative of the whole spectrum of CHB population. Our study also had patients with different ALT ranges, including a proportion of patients with normal ALT. A high negative predictive value meant the predictive model would excel in excluding CHB patients with significant fibrosis. For patients with an ALT level of ,26ULN, 67.5% of our cohort would be able to avoid the invasiveness of a liver biopsy. Among this subgroup of patients, 88.2% actually had insignificant fibrosis from histology. While 11.8% (10 out of 85) of patients had a discordance between the predictive model score and actual histology, this figure is lower than other studies validating noninvasive models of liver fibrosis [32,33]. If the revised ULN of ALT as suggested by Prati et al [29] was used, the percentage of patients able to avoid liver biopsy would further increase to 76.9%. The PAPAS index was based on five common clinical parameters: age, ALP, AST, AFP and platelet count. All 5 parameters had been shown in previous studies to be associated with significant fibrosis in CHB [20,21,24]. Age is a valuable predictor since progression of fibrosis in CHB is time-dependent [34,35]. Increased fibrosis results in a reduced clearance of AST and hence an elevated serum level [36]. A low platelet count has also been associated with advanced liver fibrosis through the altered production of thrombopoietin [37]. The addition of extra variables other than these five parameters did not further improve the accuracy of the current predictive model. Both ALT and HBV DNA levels, known to fluctuate during the natural history of CHB [38], were not included in the PAPAS index. While previous studies had shown several markers, including hyaluronic acid, a-2 macroglobulin and apolipoprotein A 1 , to have a predictive value in CHB, these markers may not be available in the routine evaluation of chronic liver diseases. Using them in predictive models might hinder their widespread use [18,20,25,26]. Many predictive models in previous studies [15,21,22,25] were created using stepwise regression, a prediction method based on identified independent variables to achieve a best-fit model [39]. While commonly used, stepwise regression had been shown to be prone to errors of sampling, measurement and specification [40]. Moreover, a rigid setup in computer programming and a misreading in the order of importance of various predictor variables could result in serious misinterpretation of results [41]. L1 regularized regression adopted in the present study identifies the order in which variables enter or leave the created model, allowing more flexibility in finding a regularized fit with any given number of parameters [42], and has been increasingly used in the design of predictive models in different clinical studies [24,43,44,45]. The current study has certain limitations. Our study only had Chinese CHB patients. 
Given that 67.6% of patients in our study cohort had limited fibrosis, the study would be biased towards having a high negative predictive value. The PAPAS index was not statistically superior to both the APGA index and FIB-4 index, probably due to the limited number of patients in our present study. Hence, external validation of the PAPAS index with an independent validation cohort would be important before considering widespread use. Body mass index and cholesterol levels were not available in our study, thus we were unable to compare our model with other predictive indices, including the Forns index [15,21]. Given that the current patient cohort consists of patients with potential to be recruited into drug trials, there would be fewer patients with an inactive disease and low viral load. Our predictive model might not be applicable to this group of patients. However, our cohort included patients with HBV DNA $2000 IU/mL, which is the threshold level suggested by CHB guidelines in commencing treatment. Due to the small number of patients with histologic cirrhosis, we were unable to create a predictive model for cirrhosis, which would have less measurement and observer error in detection if possible [11]. Similar to previous models based on CHB patients [20,21], the PAPAS index did not achieve a high positive predictive value. Therefore, the PAPAS index will be best applicable in excluding patients with insignificant fibrosis in whom treatment may not be necessary at the time of measurement. For patients with the score above the optimal cut-off level of 1.662, the decision of treatment should be considered in conjunction with other disease parameters or viral markers. A possible method to improve the diagnostic accuracy of predictive models is to combine the available clinical parameters with imaging or transient elastography. The former had been attempted by including the spleen size on imaging, with a high positive predictive value for cirrhosis obtained [23]. The accuracy of transient elastography in CHB is hindered whenever the ALT levels are elevated [46], but this could be improved by combining transient elastography with a non-invasive predictive model like the Forns index [47]. The sequential use of non-invasive markers is also another option [48], although such studies are lacking in CHB patients. In conclusion, the PAPAS index, a newly-designed predictive model using routinely-available clinical parameters, can accurately predict significant liver fibrosis in CHB patients, and potentially reduce the need for liver biopsies. Further studies would be needed to validate this model and compare it with other non-invasive models of fibrosis in CHB.
2014-10-01T00:00:00.000Z
2011-08-11T00:00:00.000
{ "year": 2011, "sha1": "39bb25de19b8f19f38e9ed535ea434c85e7afbcb", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0023077&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "29eea2e08d17ea9fada1b25dec159fa8fe931553", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234139596
pes2o/s2orc
v3-fos-license
2 Detection of Grayscale Image Implementation Using Multilayer Perceptron The human eye can instantly recognize two patterns of color images that are almost the same quickly, but the computer cannot directly recognize any pattern in the image. The problem faced is how the computer can recognize the image pattern entered. Pattern recognition is also a technique that aims to classify previously processed images based on similarity or similarity in characteristics. In the Artificial Neural Network there are several methods that can be used to identify image patterns, one of them with the Multilayer Perceptron architecture. Multilayer Perceptron Neural Network is a type of neural network that has the ability to detect or perform analysis for problems that are sufficient or even very complex, such as in language processing problems, recognition of a pattern and processing of an image or image. The results of the study are a system that is able to recognize grayscale image patterns and is able to provide a percentage of pattern recognition in two similar and different images. Introduction The human eye can immediately recognize two colored images that are almost the same quickly, but the computer cannot immediately recognize the image. The problem faced is how the computer can recognize the inserted image. Image Similarity Detection is almost similar to Pattern Recognition, which is a technique aimed at classifying previously processed images based on the similarity or similarity of the features they have. In Artificial Neural Networks, there are several methods that can be used to detect image similarities, one of which is the Multilayer Perceptron architecture. Multilayer Perceptron is a type of neural network that has the ability to detect or perform analysis for problems that are quite or even very complex, such as problems with language processing, similarity detection, and image processing. In this research, the writer made a similarity detection program between grayscale and Multilayer Perceptron images. In this case, the file type to be used is a bitmap image (.bmp). Through the use of this Artificial Neural Network, especially the Backpropagation Network method, it is hoped that it will be implemented in detecting the similarity of the image scrayscale. IOP Publishing doi: 10.1088/1742-6596/1737/1/012013 2 Based on the background description above, the writer formulates the problem to be investigated, namely how can Artificial Neural Networks be used to detect similarities in grayscale images using Multilayer Perceptron? Research problem limitation: 1. The method used for image similarity detection is using Artificial Neural Networks on the Multilayer Perceptron architecture. 2. Input / input given is an image with the file type .bmp (Bitmap) 8-bit, 16-bit, and 24-bit. 3. Image is limited to a maximum size (pixel) of 100x100. In this research, a grayscale image similarity detection program will be made using the Multilayer Perceptron. Research Methodology Dhaneswara states that an Artificial Neural Network with a Multilayer Percepton architecture and using back propagation training using momentum can be used as a data classification technique [3]. One of the main problems in ANN is the length of the training process (network model formation), therefore choosing the right network configuration (number of hidden layers, neurons, momentum value, learning-rate, activation function) is needed to speed up the training process. 
However, this configuration can differ from one set of training data to another, so experimentation is needed to find it Pattern recognition is a discipline that classifies objects based on image, weight or predetermined parameters into a number of categories or classes. Pattern recognition is also a technique that aims to classify previously processed images based on the similarity or similarity of features they have. According to Renaldi Munir, Bitmap is a bit mapping, meaning that the pixel intensity value in the image is mapped to a certain number of bits [12]. A common bit map is 8, meaning that each pixel is 8 bits long. These eight bits represent the pixel intensity value. Thus there are as many as 256 degrees of gray, ranging from 0 to 255. There are 3 types of bitmap formats, namely monochrome (binary image), grayscale and color. Grayscale image: there are only 2 colors. Each pixel only contains 1 bit (0 or 1) of information to represent the pixel value. The color seen is a combination of 3 basic colors, namely Red, Green, Blue (RGB). In this study, the image to be used is a grayscale image. Grayscale image: in the form of 256 shades of gray. Each pixel contains 8-bit (0-255) color information. In a grayscale image, the value indicates that a gray image has only one color channel. The gray image is generally an 8 bit image. Artificial Neural Network is a representation of the artificial human brain network in describing the learning process in the human brain. Artificial neural networks are implemented in computer programs in order to be able to complete several calculation processes during the learning process. The Artificial Neural Network consists of several neurons, and there are connections between neurons as in the human brain. Neurons change the information received through the output to other neurons. the relationship between the descendants is known as weights. The information is stored at a certain value in that weight. Based on the connection patterns between neurons, there are 3 main characteristics of Artificial Neural Network systems that are often used, namely: Single Layer Neural Network, Recurrent Neural Network, and Multilayer Perceptron Neural Network. In this study, the architecture to be used is the Multilayer Perceptron architecture. The Multilayer Perceptron model has additional neuron screens in addition to the input and output screens, which is a hidden layer located between the two screens. The number of hidden layers varies depending on the level of difficulty of the problems handled by the system, so that in applying Multilayer Perceptron it is more powerful than other Artificial Neural Network models. Figure 1 Artificial Neural Networks have 3 learning methods, namely Unsupervised Learning, Semi Supervised Learning, and Supervised Learning. In this study using the Supervised Learning method, any knowledge that is given a reference value for mapping an input, will be the desired output. The learning process will be carried out continuously as long as the desired error conditions have not occurred. For each error value obtained at each learning stage, it will be calculated until the desired data or target value is achieved. The Backpropagation training method is a type of supervised learning where the output of the network is compared with the expected target so that an output error is obtained with the delta rule, then this error is propagated back to update the network weight in order to minimize errors. 
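As a concrete illustration of the grayscale bitmap input described above, the following sketch reads a .bmp file, converts it to 8-bit grayscale and flattens it into a 0-1 vector, one value per pixel, ready to be fed to the network. The use of the Pillow library, the resizing step and the scaling to the unit interval are assumptions; the paper does not specify its preprocessing code.

```python
import numpy as np
from PIL import Image

def bmp_to_input_vector(path, size=(100, 100)):
    """Read a bitmap, convert to 8-bit grayscale, and flatten to a 0-1 input vector."""
    img = Image.open(path).convert("L")        # "L" = 8-bit grayscale, values 0-255
    img = img.resize(size)                     # the paper limits images to 100x100 pixels
    pixels = np.asarray(img, dtype=np.float64) / 255.0
    return pixels.ravel()                      # one input unit per pixel
```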
Supervised learning algorithms are typically used by multi-layered perceptrons to change the weights associated with neurons in a hidden layer called Backpropgation. The Backpropagation algorithm uses an output error to change the backward weight value. The forward propagation stage must be carried out first in order to get an error. The sigmoid activation function is used to activate neurons during the forward propagation process.The requirements that must be met so that the activation function on Backpropagation is used are: 1. Continuous 2. Differential easily 3. Is an ascending function. The binary sigmoid function with a range (0,1) is a function that fulfills the three requirements of the activation function in Backpropagation. Definition of the binary sigmoid function: The bipolar sigmoid function with range (-1,1) is defined as follows The maximum value of the sigmoid function is 1. For patterns that have more than 1 target, the output and input patterns must be changed first so that all patterns have the same range as the sigmoid function. The activation function in the output leyer uses the identity function f (x) = x. Training using the Backpropagation method consists of three steps, namely: [5] a. feedforward b. Calculation and backpropagation of the error in question c. Updated weights and biases. During the feedforward process the input unit (xi) will receive the input signal and will send the signal to each hidden unit (zj). The hidden unit will count the activation and send a signal (zj) to the output unit. Then, each unit of output will calculate the activation to produce a response to the input given by the network. During the training process, each unit of output compares the activation to the desired output value to determine the amount of error. From the existing errors, the factor (δk) is calculated. Factors are used to distribute the error from the output back to the previous layer. The factor (δi) is also calculated in hidden units (zj), as in the calculation of the factor (dk). Updating the weight between the hidden layer and the input layer using a factor (dk). The layer weights are adjusted together after all the factors are determined. The update of the weight (wjk) is carried out based on the result of the factor (δk) and the activation of the hidden unit. The update of weight (vij) is carried out based on the results of the factor (δj) and activation (xi) of the input. The steps of back propagation training are as follows [5]: Step 0. Initialize the weights Step 1. When the stop condition is wrong, do steps 2 -9 Step 2. For each training pair, do steps 3 -8 Feedforward Step 3. Each input unit (xi, i = 1 …… n) receives the input signal (xi) and sends a signal to all units in the upper layer (hidden unit). Step 4. Each hidden unit (zj, j = 1 ...... p) adds up the signal weights input, (5) apply the activation function to calculate the signal the output: (6) and send signals to all units in the layer above it (unit output). Step 5. Each unit of output (Yk, k = 1,..., M) adds a weight input signal. (7) and applies its activation function to compute output signal (8) Backpropagation: Step 6. Each unit of output (Yk, k = 1, ..., m) receives a target pattern according to the input training pattern, calculate the error information. Step 7. Each hidden unit (Zj, j = 1,..., p) adds up the input delta (from units on the top layer). and calculates the bias correction (used for update v0j), (17) Step 9: Stop condition test. 
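Since the equations referenced above ((5) through (17)) are not fully legible in this copy, the sketch below restates the three training phases, feedforward, error backpropagation and weight update, for a single hidden layer with the binary sigmoid. The learning rate, random initialization and mean-squared-error stopping rule are assumptions rather than the paper's exact settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))            # binary sigmoid, range (0, 1)

def train_backprop(X, T, n_hidden=8, lr=0.1, max_epochs=1000, tol=1e-3, seed=0):
    """Train a one-hidden-layer perceptron; returns input->hidden (V) and hidden->output (W) weights.

    X : (n_patterns, n_inputs) array of pixel vectors in [0, 1]
    T : (n_patterns, n_outputs) array of target patterns
    """
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    V = rng.uniform(-0.5, 0.5, (n_in + 1, n_hidden))     # +1 row holds the bias weights
    W = rng.uniform(-0.5, 0.5, (n_hidden + 1, n_out))
    for _ in range(max_epochs):
        # Phase 1: feedforward
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])
        Z = sigmoid(Xb @ V)
        Zb = np.hstack([np.ones((Z.shape[0], 1)), Z])
        Y = sigmoid(Zb @ W)
        err = T - Y
        if np.mean(err ** 2) < tol:             # stop condition: error below tolerance
            break
        # Phase 2: backpropagate the error factors (delta rule with the sigmoid derivative)
        delta_k = err * Y * (1.0 - Y)
        delta_j = (delta_k @ W[1:].T) * Z * (1.0 - Z)
        # Phase 3: update weights and biases
        W += lr * Zb.T @ delta_k
        V += lr * Xb.T @ delta_j
    return V, W
```

Here V plays the role of the input-to-hidden weights (v) and W the hidden-to-output weights (w) that the system stores for the master image.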
Application algorithm Step 0: Initialize the weights V and W Step 1: For each input vector perform steps 2-4 Step 2: For i = 1,. . . , n activation set for the input unit xi. Feedforward Step 3: for each hidden unit (zj, j = 1, ......, p) add the weighting input signal, (18) activation function, calculates the output signal: Step 5: Each output unit (yk, k = 1 …… ..m) adds up the input signals, (20) activation function, calculates the output signal: (21) The author designed the system in 3 components, namely the master file input, new image input and output. The following will be described by component. The input of the master file (Input) is received in the form of an image with the .bmp format, the image will be normalized into a grayscale image, then from the image the weights of v and w will be obtained by using Backpropagation training which can be seen in Figure 2. In Figure 3, the new image input will be processed using the application procedure in the Backpropagation training using the weights v and w in the master file image, and the savings are the result of the pattern calculation process from the input image master file and the new image process. Figure 4 shows the process of the image similarity percentage in the master file and the new image at the output system. Figure 5. The first stage is Feedforward, the second stage is Backpropagation, and the third phase is weight modification. Figur 5. Backpropagation Flowchat Training Phase 1: The input signal is propagated to the hidden screen using the predefined activation function. The output of each hidden screen unit (zi) is then propagated forward to the hidden screen above it using the predefined activation function. As long as it has not produced output (yk), the process returns to the beginning. Results Output (yk) is compared with the target to be achieved (tk). The result of the tk-yk difference is an error that occurs. If the error is less than the specified tolerance limit, the iteration is terminated. If the error is still greater than the tolerance limit, the weight of each line in the network will be modified to reduce the error that occurs. Figure 6 is a forward propagation flowchart. Figure 6. Forward propagation flowchart Phase 2: Backpropagation The validity of tk-yk is calculated from the factor δk (k = 1,2,…, m) and is used to transmit the error to all hidden units that are directly connected to the unit (yk). The factor δk is used to change the weight of the line directly connected to the output unit. Changes in all line weights derived from hidden units in the lower screen are obtained by calculating the factor δj for each hidden layer. This stage is carried out continuously until all the factors δ in the hidden unit that are directly related to the input unit are accounted for. Phase 3: Change in weight The change in weight of all lines is used to factor δ of neurons in the layer above it. Factors are calculated to change the weight of all lines. The factor is calculated to modify the weights of all lines. The three phases of Backpropagation training are repeated until the stopping condition reaches the target. In general, the termination conditions that are often used are seen from the number of iterations and errors. Result and Discussion 3.1 Interface Design On the main system form, there are 2 main menus and a page control. Page control is used to perform the image similarity detection process. The design of the image similarity detection form can be seen in Figure 7. 
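The application algorithm described earlier in this section is only the forward pass with the stored weights. The sketch below mirrors it, reusing the V/W layout from the training sketch; the similarity_percentage helper is just one plausible reading of how the reported similarity figure could be derived, since the paper does not give the exact formula.

```python
import numpy as np

def apply_network(x, V, W):
    """Application (inference) pass: propagate one input vector through stored weights V and W."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(np.concatenate(([1.0], x)) @ V)          # hidden-unit activations
    y = sig(np.concatenate(([1.0], z)) @ W)          # output-unit activations
    return y

def similarity_percentage(y_master, y_new):
    """Illustrative similarity score: 100% minus the mean absolute difference between the
    output patterns for the master image and the new image (an assumption, not the paper's rule)."""
    return 100.0 * (1.0 - float(np.mean(np.abs(y_master - y_new))))
```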
The following buttons and functions are found in the image similarity detection form: • Load:used to select the master image with the bitmap format (.bmp) that will be used, after the user selects the image, the image will be displayed on the input image form. • Weight v and w:used to calculate the weight v and w after the image is selected and also used as a storage of the weight results v and w that have been obtained. • New Image:used to select a new image with the bitmap format (.bmp) that will be compared, after the user selects the image, the image will be displayed on the output image form. • Image Similarity: used to run the image similarity detection process. • Exit: used to exit the form. Analysis In this study, there are several things that will be analyzed, namely: Analysis of Pattern Recognition of grayscale images with 8-bit, 16-bit, and 24-bit .bmp formats. From the results of system testing in Table 1, itcanbeseenclearlythat the images tested are of various sizes and lots of z (hiddenscreen). The results show that not all sizes canberecognized 100%. At a size of 3x2 with lots of z = 3 to 512 the image patterns canberecognized 100%, at sizes 5x5, and 20x20 the image patterns canberecognizedwith lots of z to 256, and at pattern sizes of 10x10, 50x50, and 100x100 canonlyberecognized by many z up to 128. It canbeconcludedfrom the table abovethat the larger the image size and the more z in the image, the pattern is not 100% recognized. The system test results in Table 2 canbeseenclearlythat the tested images are of various sizes and many z (hiddenscreen). The results show that not all sizes canberecognized 100%. At 3x2 and 5x5 sizes with lots of z = 3 to 512, the image pattern canberecognized 100%. The size of 10x10 image patterns canberecognizedwith lots of z = 3 to 256, and at a pattern size of 20x20 canonlyberecognizedwith lots of z to 128. In testingthereis a difference, namely in testing images measuring 50x50 and 100x100. The image size is 50x50 the pattern canberecognized 100% with lots of z = 8 to 128, while in images that are 100x100 only at lots of z = 8. Table 3 shows the results of system testing using various sizes (pixcel) and hidden screens (z). The results show that not all sizes can be recognized 100%. At sizes 3x2 and 5x5 with lots of z = 3 to 512, the image pattern can be recognized 100%, at 10x10 the image pattern can be recognized with lots of z = 3 to 8, and at a pattern size of 20x20 can only be recognized with lots of z up to 128 Patterns measuring 50x50 can only be recognized by lots of z = 8 to 128, whereas patterns at 100x100 can only be recognized by using lots of z = 3. Table 3. Experiment Results for 24-bit Image Pattern Recognition Conclusions Artificial neural network with Multilayer Perceptron architecture can be used as a grayscale image pattern recognition seen from the analysis results. From the experiments that the author has done, it can be stated that not all two image patterns are the same 100% similar to the human eye which sees the same two image patterns. The inequality of the two image patterns is the same because of the many hidden screens (z) on the Multilayer Perceptron. The more hidden screens (z) and the larger the image size being tested, the results are far from 100% or the pattern recognition is not perfect, the tolerance limit for many screens in pattern recognition is at the 256 limit. 
The inaccuracy of image pattern recognition is also due to the large number of colors in the two images, and the difference in 10 the number of colors between the master image and the new image. The difference in the size of the master image and the new image also affects the inaccuracy of pattern recognition. In this research, the image pattern recognition that is tested is only up to the image that has a size (pixel) of 100x100. To investigate further, image pattern recognition is not only limited to 100x100 but can be developed with a system that can quickly test large images such as 1024x768 and not only grayscale images but with color images.
2021-05-11T00:07:03.592Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "d5451d81aa1a315b452777ae316abb209c4fdcb7", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1737/1/012013/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5a5f30ecaa3f72965335a0cc09629390b5bd0856", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
270303986
pes2o/s2orc
v3-fos-license
Myocardin-Related Transcription Factor Mediates Epithelial Fibrogenesis in Polycystic Kidney Disease Polycystic kidney disease (PKD) is characterized by extensive cyst formation and progressive fibrosis. However, the molecular mechanisms whereby the loss/loss-of-function of Polycystin 1 or 2 (PC1/2) provokes fibrosis are largely unknown. The small GTPase RhoA has been recently implicated in cystogenesis, and we identified the RhoA/cytoskeleton/myocardin-related transcription factor (MRTF) pathway as an emerging mediator of epithelium-induced fibrogenesis. Therefore, we hypothesized that MRTF is activated by PC1/2 loss and plays a critical role in the fibrogenic reprogramming of the epithelium. The loss of PC1 or PC2, induced by siRNA in vitro, activated RhoA and caused cytoskeletal remodeling and robust nuclear MRTF translocation and overexpression. These phenomena were also manifested in PKD1 (RC/RC) and PKD2 (WS25/−) mice, with MRTF translocation and overexpression occurring predominantly in dilated tubules and the cyst-lining epithelium, respectively. In epithelial cells, a large cohort of PC1/PC2 downregulation-induced genes was MRTF-dependent, including cytoskeletal, integrin-related, and matricellular/fibrogenic proteins. Epithelial MRTF was necessary for the paracrine priming of the fibroblast–myofibroblast transition. Thus, MRTF acts as a prime inducer of epithelial fibrogenesis in PKD. We propose that RhoA is a common upstream inducer of both histological hallmarks of PKD: cystogenesis and fibrosis. Preparation of GST-Fusion Protein and Rho Activation Assay The preparation of GST-RBD (RhoA-binding domain (RBD): amino acids 7-89 of Rhotekin) and the Rho affinity assay were described in [52].The beads were stored frozen in the presence of glycerol.Confluent LLC-PK1 cells were transfected with PC1 or 2 siRNA, and 48 h later, lysed with ice-cold assay buffer containing 100 mM NaCl, 50 mM Tris base (pH 7.6), 20 mM NaF, 10 mM MgCl 2 , 1% Triton X-100, 0.5% deoxycholic acid, 0.1% SDS, 1 mM Na 3 VO 4 , and protease inhibitors.The lysates were centrifuged, and aliquots for determining the total RhoA were removed.The remaining supernatants were incubated with 20-25 µg of GST-RBD at 4 • C for 45 min, followed by extensive washing.Aliquots of total cell lysates and precipitated proteins were analyzed by Western blotting and quantified by densitometry.Precipitated (active) RhoA was normalized using the corresponding total cell lysates. 
Immunofluorescence Microscopy and Quantification Cells were grown on glass coverslips for visual expression analysis or on 96-well plates (Corning) for quantitative image analysis with the ImageXpress Micro 4 System (Molecular Devices, San Jose, CA, USA).The cells were transfected as detailed above.Immunofluorescence staining was performed as described [53].The following primary antibodies were used: MRTF-A (1:300), active RhoA (1:100, New East Biosciences, King of Prussia, PA, USA), and active ITGB1 12G10 (1:100 ab150002, Abcam, Cambridge, UK).Factin was visualized by staining with fluorescently labelled phallodin (Phalloidin iFLUOR 555, Abcam) at 1:10,000 dilution for 1 h.The cells were imaged by either a WaveFX spinningdisk microscopy system (Quorum Technologies, Eugene, OR, USA) equipped with an ORCA-Flash4.0digital camera or by an Olympus IX81 microscope with the Evolution QEi Monochrome camera, both driven by the MetaMorph 7.8 software (Molecular Devices, San Jose, CA, USA).Nuclear translocation was assessed by ImageXpress, driven by MetaExpress software's inbuilt Multi Wavelength Translocation analysis module, as in our previous work [53].Briefly, nuclear staining was measured as the mean fluorescence intensity of MRTF-A-specific staining within the DAPI-positive nuclei, and cytoplasmic staining was measured as the MRTF-A-positive fluorescence intensity in a preset ring around the nuclei.The nuclear/cystoplasmic ratio of MRTF-A was arranged in bins incremented by 0.02 (x axis).Active integrin clusters were counted by the MetaMorph software using the Manually Count Objects option.Counts were normalized to the cell number. Next-Generation Sequencing Transcriptome Analysis mRNA libraries were prepared using the NEBNext ® Poly(A) mRNA Magnetic Isolation Module, NEBNext ® Ultra™ II Directional RNA Library Prep with Sample Purification Beads and NEBNext Multiplex Oligos for Illumina (96) (New England Biolabs, Ipswich, MA, USA).Sequencing was carried out on the NovaSeq SP flowcell SR200 or PE100 (Illumina, San Diego, CA, USA).The data analysis is detailed in Appendix A. LLC-PK1 and Fibroblast Communication, Collagen Substrate Wrinkling Quantification Wild-type subcutaneous fibroblasts (WT SCF) were isolated from C57BL/6 mice and cultured on soft substrates with a Young's elastic modulus of 0.2 kPa.The substrates were generated as described.Subsequently, the substrates were oxygenized and coated with gelatin (2 µg/cm 2 diluted in PBS) [54].In total, 2.000 SCF/cm 2 were seeded and maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin for 6 h prior to synchronization with serumfree DMEM overnight.Conditioned media (CM) was obtained from LLC-PK1 PK1 renal epithelial cells.The epithelial cells were transfected with the indicated siRNAs.Twentyfour hours post-transfection, the media was removed, the cells were washed three times, and fresh serum-free media was added.After 24 h (48 h post-transfection), the CM was collected.Fibroblasts were stimulated with the CM for 48 h.Subsequently, phase-contrast images were taken of the fibroblasts by using a Zeiss Axio Observer Microscope.Cell contractile function, related to % area covered by wrinkles, was analyzed by ImageJ-win64 software (Madison, WI, USA) and the values were normalized to the cell number. Nephrectomy specimens were collected at St Michael's Hospital and included samples from PKD and RCC patients, following their informed consent. 
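The nuclear-to-cytoplasmic quantification described earlier in this section can be approximated by the short sketch below. The mask arrays and the 0.02 bin width follow the description above, while the image-loading step, the upper bin limit and the N/C ≥ 1.4 "nuclear" cutoff used later in the results are assumptions; this is not the MetaXpress module's actual code.

```python
import numpy as np

def nc_ratio(mrtf_image, nuclear_mask, ring_mask):
    """Nuclear/cytoplasmic MRTF-A ratio for one cell: mean intensity inside the DAPI-defined
    nucleus divided by the mean intensity in a perinuclear ring."""
    return mrtf_image[nuclear_mask].mean() / mrtf_image[ring_mask].mean()

def nc_distribution(ratios, bin_width=0.02, upper=3.0):
    """Frequency of cells per N/C-ratio bin (bins incremented by 0.02, as described above)."""
    edges = np.arange(0.0, upper + bin_width, bin_width)
    counts, _ = np.histogram(ratios, bins=edges)
    return edges[:-1], counts / counts.sum()

def fraction_nuclear(ratios, cutoff=1.4):
    """Cumulative frequency of cells with N/C at or above the visual 'nuclear MRTF' cutoff."""
    return float(np.mean(np.asarray(ratios) >= cutoff))
```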
Immunohistochemistry and Quantification Paraffin-embedded sections were stained as described.Antigen retrieval was performed by boiling the section for 5 min in TE buffer.The staining was viewed by an Olympus BX50 microscope, using the Cellsense 1.14 software (Olympus LS, Tokyo, Japan).Stained sections were scanned with Axio Scan Z1 (Zeiss, Jena, Germany) driven by the integrated ZEN Slidescan software, and analyzed by HALO v2.3.2089.23,using the inbuilt Area Quantification and Object Quantification modules.For quantifying CTGF and PDGF-B, 3 animals in each group were used, and DAB staining was quantified by HALO in 5-10 large randomly selected fields.DAB OD was normalized to the number of nuclei in each area.For quantitation of MRTF-A staining, thresholds were set to recognize hematoxylin stain (nuclei) and DAB (MRTF-A), and the average DAB staining intensity was determined for each nucleus.The total scale of DAB OD was divided into 256 evenly distributed intensity bins (X axis), and the frequency for each intensity bin was calculated in Excel.Six PKD2 WS25/− animals and six controls (PKD2 WS25/+ littermates) were analyzed, however one PKD2 WS25/− animal was removed from the quantification because it did not develop renal cysts.Tubules with a diameter over 1.5× the diameter of normal tubules were classified as dilated.Microcysts had flattened epithelial lining and, in general, lacked brush border. RNAScope Three-month-old PKD2 WS25/− (n = 3) and control PKD2 WS25/+ littermates (n = 3) were analyzed for MRTF-A and CTGF mRNA expression using 3 µm thin sections of paraffin-embedded kidneys.Staining was carried out by closely following the manufacturer's protocol; 10% Gill's hematoxylin was used to stain the nuclei.The stained sections were scanned by Axio Scan Z1 and analyzed using the HALO software v2.3.2089.23.Renal tubules with normal and dilated morphology (n = 15) and cysts (n = 5) were selected for analysis.Thresholds were set for hematoxylin, MRTF-A, and CTGF staining.OD was normalized to the number of nuclei. Statistical Analysis Immunofluorescence and Western blot images show representative results of a minimum of three similar experiments or the number of experiments indicated (n).Graphs show the means ± standard error of the mean.Statistical significance was determined by Student's t-test or one-way analysis of variance (Tukey post hoc testing), using Excel2021 or Prism v7.0 softwares (Microsoft, Redmond, WA, USA).Violin plots were generated with the statskingdom.com/violin-plot-maker.html website, including the median and excluding outliers (Tukey) options.Statistical significance was calculated using a one-sample or two-tailed t-test, as appropriate.p < 0.05 was accepted as significant.Unless indicated otherwise, *, **, and *** correspond to p < 0.05, <0.01, and <0.001, respectively. 
PC1 or PC2 Downregulation Activates RhoA and Induces Nuclear Translocation of MRTF

To test whether PC1 or PC2 downregulation affected MRTF signaling, we silenced the corresponding genes in LLC-PK1 tubular cells, using specific siRNAs. This approach efficiently and selectively reduced the expression of the corresponding proteins, allowing for their independent manipulation (Figures 1A and S1).
To assess whether these proteins impact RhoA activity in our cellular system, we measured the level of active (GTP-bound) RhoA using a GST-Rhotekin pull-down assay, as previously performed [52]. Active RhoA (normalized to total RhoA) was significantly elevated upon PC1 loss [44,45], and similar results were obtained for PC2 (Figure 1B). Quantitative immunofluorescence staining with an active RhoA-specific antibody corroborated these results (Figure 1C). Loss of PC1 or PC2 resulted in dramatic cytoskeletal remodeling, characterized by the formation of strong actin stress fibers (Figure 1D). This was accompanied by a substantial nuclear translocation of MRTF. Both changes were prevented/mitigated by the concomitant downregulation of RhoA (Figure 1D). These qualitative observations were quantified by two means. First, by visual inspection, using a tripartite compartmental distribution (cytosolic, nuclear, or both/even) of MRTF. While MRTF was predominantly cytosolic under control conditions, it exhibited a significant shift to the nucleus upon the loss of PC1 or PC2. This was reversed by RhoA downregulation (Figure 1E). Second, to overcome regional heterogeneity, the distribution of single-cell nuclear/cytosolic MRTF ratios was automatically determined in large cell populations (detailed in Section 2). PC1 or PC2 silencing resulted in a shift toward higher N/C ratios (black vs. red curves), which was strongly mitigated by RhoA silencing (Figure 1F). The cumulative frequency of N/C ratios ≥1.4 (univocally assessed as "nuclear MRTF" by visual inspection) showed a ≈6-fold rise in PC1- or PC2-silenced cells relative to controls; this change was abolished by RhoA downregulation (Figure 1G). Together, these results show that the loss of PC1 or PC2 induces robust RhoA-dependent MRTF translocation into the nucleus.

PC1/2 Loss Elevates MRTF Expression

During these experiments, we noted that PC loss affected not only the distribution, but also the total expression of MRTF. Indeed, MRTF-A immunostaining significantly increased both in PC1- and PC2-silenced cells (Figure 2A,B), a finding confirmed by the Western blots (Figure 2C). This rise was, at least in part, due to increased MRTF gene transcription, since PC1 and PC2 downregulation significantly elevated the message for both MRTF-A and MRTF-B (Figure 2D). These findings imply that PC1/2 loss facilitates MRTF signaling both at the level of activation/nuclear translocation and that of total expression.
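The ≈6-fold figure quoted for the N/C ≥ 1.4 cutoff above is simply the ratio of the cumulative frequencies in the two conditions. The snippet below is a minimal sketch of that comparison; the ratio arrays are simulated stand-ins, not measured data, and the function name is purely illustrative.

```python
import numpy as np

def nuclear_fraction(nc_ratios, cutoff=1.4):
    """Fraction of cells whose nuclear/cytosolic MRTF ratio meets the cutoff."""
    return float(np.mean(np.asarray(nc_ratios, dtype=float) >= cutoff))

# Simulated stand-ins for per-cell N/C ratios (illustrative values only).
rng = np.random.default_rng(0)
control_nc = rng.normal(loc=1.0, scale=0.2, size=5000)
sipc_nc = rng.normal(loc=1.5, scale=0.3, size=5000)

f_control = nuclear_fraction(control_nc)
f_sipc = nuclear_fraction(sipc_nc)
print(f"control: {f_control:.3f}  siPC: {f_sipc:.3f}  fold change: {f_sipc / f_control:.1f}")
```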
MRTF in Action upon PC Loss in the Epithelium: A Targeted Approach

Next, we addressed the functional significance of the observed changes. As an initial strategy, we concentrated on genes that satisfied each of the following criteria: they were shown to be (a) PC-sensitive [50,57], (b) MRTF targets [37], and (c) involved in fibrogenesis/PEP. The downregulation of PC1 (Figure 3A, left panels) or PC2 (Figure 3A, right panels) significantly enhanced the mRNA expression for matricellular proteins/fibrogenic mediators, such as transgelin (TAGLN) (see also Figure S1 for alternative PC1 and PC2 siRNAs), CTGF, and CYR61. The downregulation of MRTF-A strongly reduced TAGLN and CTGF expression upon PC1 or PC2 silencing, and significantly decreased CYR61 expression in PC1- but not in PC2-downregulated cells (Figure 3A). Efficient MRTF-A downregulation was not altered by the concomitant silencing of PC1 or PC2 (Figure S1B). The impact of MRTF-A was preserved when PC1/2-downregulated cells were exposed to TGFβ1, the most potent fibrogenic cytokine. Of note, TGFβ1 is elevated in PKD and plays a pathogenic role in the disease [58,59]. TGFβ1 potentiated/augmented the effect of PC1/PC2 loss for each of these three genes, and MRTF significantly reduced the combined effect of these stimuli, except for PC2-downregulation-induced CYR61 expression (Figure 3A). PC1 or PC2 loss also stimulated the expression of ANXA1 and RASSF2 in an MRTF-A-dependent manner (Figure 3B), although these were not stimulated by TGFβ1. We used the most responsive gene, TAGLN, to test RhoA and SRF dependence as well. The downregulation of either efficiently reduced PC1/2 loss-provoked TAGLN expression (Figure 3C,D). MRTF-B also contributed to these responses, because its silencing also mitigated the PC1- or PC2-loss-triggered increase in TAGLN, CTGF, and CYR61. Interestingly, in PC2-downregulated cells, CYR61 mRNA expression was selectively sensitive to MRTF-B depletion (Figure 3E,F).

Next, we quantified mRNA expression for some pro-inflammatory genes, aware that MRTF can physically associate with and inhibit NFκB [60,61]. PC2 downregulation stimulated TNFα, CCL2, and IL1β expression, while it did not alter IL1α mRNA levels. MRTF-A silencing significantly potentiated the effect of PC2 loss on TNFα and IL1β and increased IL1α mRNA expression (Figure 3G). Thus, MRTF is an important contributor to the PC-loss-induced expression of key fibrogenic/PEP genes (Figure 3F) and a suppressor of some proinflammatory genes, shifting the balance from acute epithelial inflammatory responses to fibrosis.
MRTF in Action upon PC Loss: A General Approach

To assess the role of MRTF in the molecular pathogenesis of PKD from a wider angle, we performed RNA-Seq. We focused on genes that (a) were upregulated upon PC1 or PC2 silencing and (b) were MRTF-dependent. Three complementary analysis methods were used to identify such PKD-related and MRTF-dependent events (Figure 4A and Appendix A). First, we identified differentially expressed genes/transcripts (DEG) that were significantly elevated upon PC1 or PC2 downregulation and significantly suppressed by concomitant MRTF silencing. The corresponding 130 (PC1/MRTF) and 128 (PC2/MRTF) transcripts are shown in Figure 4B. Common enriched biological processes included epithelial cell migration, wound healing, and cell contractility, in line with the key role of MRTF in early epithelial injury responses (Figures 4C and S2-S4).

Second, Gene Set Enrichment Analysis (GSEA) (Figure 4D) indicated that the actomyosin cytoskeleton was among the most significant MRTF-dependent PKD-associated GO terms. The presence of closely related categories (microtubules and supramolecular polymers) underlines the relationship between PKD, cytoskeletal reorganization, and MRTF. GSEA also identified the 'mitochondrial matrix' and 'organelle assembly' as MRTF-supported categories. The MRTF dependence of these is of special interest, as altered cellular metabolism is a hallmark of PKD [62,63]. Moreover, MRTF emerged as a significant negative regulator of the early immune response.

Third, we utilized Weighted Gene Co-expression Network Analysis (WGCNA) to identify unbiased positive and negative correlation patterns. Among the resulting 90 modules, 6 matched our expression criteria, supplemented with the extra requirement that PC1 and PC2 loss should act concordantly. Of these, ME18 was ranked first, and it was the second most significant among all the 90 modules (Figure 4E). ME18-related biological processes reflected the previously identified themes (cell junction, microtubules, cytoskeletal organization, cell projection assembly, and cell motility) (Figure 4F).

Considering the common cis-elements in the target genes, SRF-binding sites were highly enriched in the MRTF-dependent, PKD-related gene promoters, as expected. Interestingly, the recognition elements of innate-immune-response-related transcription factors (particularly NFκB) were also significantly enriched, concordant with the DEG analysis, suggesting that MRTF inhibits NFκB (Figure 4G,H).
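A minimal sketch of the first selection step (the DEG criterion) is given below. It assumes hypothetical DESeq2-style result tables with log2 fold change and adjusted p-value columns; the file names, column names, and significance threshold are illustrative placeholders, since the actual pipeline is only summarised here and detailed in Appendix A.

```python
import pandas as pd

ALPHA = 0.05  # illustrative adjusted p-value cutoff

# Hypothetical differential-expression tables (one row per gene), e.g. exported
# from a DESeq2-style analysis; column names assumed: "gene", "log2FoldChange", "padj".
pc_vs_ctrl = pd.read_csv("siPC1_vs_siNR.csv")            # siPC1 (or siPC2) vs control siRNA
mrtf_vs_pc = pd.read_csv("siPC1_siMRTFA_vs_siPC1.csv")   # siPC1+siMRTF-A vs siPC1

# Criterion 1: significantly elevated upon PC silencing.
up_on_pc_loss = set(
    pc_vs_ctrl.loc[(pc_vs_ctrl["padj"] < ALPHA) & (pc_vs_ctrl["log2FoldChange"] > 0), "gene"]
)

# Criterion 2: significantly suppressed by concomitant MRTF-A silencing.
down_on_mrtf_kd = set(
    mrtf_vs_pc.loc[(mrtf_vs_pc["padj"] < ALPHA) & (mrtf_vs_pc["log2FoldChange"] < 0), "gene"]
)

# PKD-related, MRTF-dependent transcripts: the intersection of the two sets.
mrtf_dependent_pkd_genes = sorted(up_on_pc_loss & down_on_mrtf_kd)
print(len(mrtf_dependent_pkd_genes))
```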
Finally, we confirmed the PC1/2 status-independent downregulation of MRTF-A by siRNA and the concomitant suppression of the PC1/2 loss-promoted TAGLN and the PC1-specific response of CYR61 [64-66]. In addition, we also showed the MRTF-dependent behavior of Col4A2, a basement membrane component, the expression of which was shown to increase in renal fibrosis [15] (Figure 4H). Thus, RNA-Seq indicated that MRTF significantly contributes to PKD-associated transcriptome alterations in our epithelial cell model system by enhancing cytoskeletal reorganization and matricellular proteins, which are key features of PEP. In addition, it regulates mitochondrial organization and metabolism and mitigates the acute innate immune response.

Integrin β1 (ITGB1) and MRTF-A Form a Feed-Forward Loop and Regulate Profibrotic Gene Expression

Our transcriptome sequencing data also indicated that PC loss stimulated the expression of ITGB1 (Figure 5A). Relevantly, the deletion of ITGB1 dramatically reduced cystogenesis and fibrosis in Pkd1 fl/fl, Aqp2-Cre animals [66]. Further, ITGB1 is a known direct MRTF target, whose promoter harbors a CArG box [67,68]. Using qPCR, we verified that PC1 and PC2 knockdown increased ITGB1 mRNA expression. This transcriptional change was partially MRTF-A-dependent (Figure 5B). Moreover, ITGB1 activity was also affected by PC loss; using an activation-specific ITGB1 antibody (12G10), we observed a major increase in the number of active ITGB1 clusters, visualized as parallel thin lines in PC1- or PC2-silenced cells. A similar effect was observed upon the addition of manganese (Mn2+), a pan-integrin activator, which was used as a positive control (Figure 5C) [69]. Importantly, Mn2+ treatment alone was sufficient to prompt MRTF-A nuclear accumulation and robust TAGLN expression (Figure 5C-F). To assess the potential contribution of ITGB1 to MRTF translocation, we treated the cells with ITGB1 siRNA, which resulted in a ≈70% drop in ITGB1 mRNA (Figure 5E, left panel). This treatment efficiently suppressed the Mn2+-induced nuclear accumulation of MRTF (Figure 5E,F), and significantly but modestly mitigated PC1- or PC2-loss-provoked MRTF translocation (Figure 5G,H). Together, these findings imply that PC1/2 loss elevates the level and activity of ITGB1, and integrin activation is sufficient to induce MRTF translocation, which could contribute to (but is likely not indispensable for) this effect. Thus, while ITGB1 and MRTF can mutually activate each other, MRTF can also be triggered by other pathway(s) downstream of PC loss (see Discussion).
The Role of MRTF in Paracrine Epithelial-Mesenchymal Communication upon PC Loss

To assess whether PC loss can elicit a functional PEP state inducing fibroblast-MyoF transition [15,16,19,70], and to test whether this might occur in an MRTF-dependent manner, we employed a bioassay. Conditioned media derived from control or siPC1-transfected epithelial cells were transferred onto fibroblasts, and the ensuing fibroblast-MyoF transition was determined based on the capacity of MyoF to contract and wrinkle the underlying soft substrate [54,71] (Figure 6A). The conditioned media of the siPC1-transfected cells induced a strong wrinkling capacity in the fibroblasts. Importantly, the conditioned media derived from cells exposed to the simultaneous knockdown of MRTF-A and PC1 lacked this effect (Figure 6B). Thus, PC loss induces the profibrotic paracrine PEP state in an MRTF-dependent manner.
The MRTF Pathway Is Activated In Vivo in Various Forms of PKD

To assess whether PKD affects MRTF distribution in vivo, we analyzed two established adult-onset mouse models, Pkd2 WS25/− and Pkd1 RC/RC [44,55,56] (Figure 7A). Pkd2 WS25/− is a compound heterozygous model, wherein the somatically unstable WS25 allele undergoes high rates of recombination, resulting in the loss of Pkd2 and leading to early cystogenesis (within 3 months) [56]. Pkd1 p.R3277C (RC) is a functional hypomorphic mutation, causing cystogenesis and fibrosis at 12 months of age. Because the younger (3 months) Pkd2 WS25/− cohort showed more concordant results, this model was analyzed in more detail. The active RhoA levels were significantly higher in the Pkd2 WS25/− kidneys compared to their corresponding controls (Figure 7B). As expected, the Pkd2 WS25/− kidneys were histologically characterized by a large number of dilated tubular structures and cysts (Figure 7C, upper panels). Remarkably, robust nuclear MRTF accumulation was observed in a subset of epithelial cells, predominantly in dilated, precystic tubules (Figure 7C). Nuclear MRTF accumulation appeared to be much less and more homogenous in the control animals (Figure 7C). It is worth noting, however, that pathological tubular structures in the PKD animals were interspersed among normal ones (with low nuclear MRTF staining), reflecting the inherent variance in PKD1 dosage and PKD2 genomic rearrangements, as suggested before [55,56]. In addition to tubular MRTF accumulation, there was a striking general (cytosolic and nuclear) upregulation of total cellular MRTF-A expression in the cyst-lining epithelium (Figures 7C and S5A). Concordant with these qualitative observations, both the total cortical (Figure 7D) and cystic (Figure 7E) MRTF expressions were significantly higher in the PKD2 kidneys than in the corresponding controls, and they were much higher in the cysts than in the normal regions of the PKD2 kidneys (Figure 7E). Similar observations (i.e., marked nuclear MRTF accumulation in the tubules and overexpression in the cysts) were made in the PKD1 animals as well (Figure 7C, lower panels, and Figure 7D). In accordance with these results, the PKD patients' samples also exhibited strong MRTF-A staining in the cystic epithelium (Figure S6).
Figure 7 (legend, continued): ... were evaluated. At the selected time points, micro- and macrocysts were present in both models, except for one of the Pkd2 WS25/− mice. This animal was omitted from the analysis. (B) RhoA activation was assessed by detecting the GTP-bound form by immunofluorescence. Ten fields were quantified from each animal (n = 3, right panel). (C) Nuclear MRTF expression was quantified using the HALO Area Quantification module. Arrowheads point at cells with nuclear MRTF-A expression (middle panels) or strong MRTF-A overexpression in the cystic wall (right panels). Scale bar corresponds to all images. (D) IHC assessed MRTF subcellular localization in both PKD mouse models and control animals. Arrowheads indicate nuclear MRTF-A in the tubules and increased MRTF-A expression in the cystic epithelial wall. (E) Whole kidney sections were analyzed, and MRTF-A/DAB average OD was individually reported for the >300,000 nuclei per animal, using the inbuilt Object Quantification module of HALO. OD values were assigned to 256 incremental bins, and single-nuclei-associated OD data are presented as distribution curves. The highest-frequency OD bin in the PKD animals was assigned as the threshold for 'high' nuclear MRTF-A accumulation (dotted arrows). The inserts depict the AUC ratio corresponding to nuclei with 'high' nuclear MRTF-A. (F) MRTF-A expressions on histologically 'normal' tubules and cysts were compared within the PKD2 WS25/− and PKD1 RC/RC kidneys. Automatic Threshold Method (ATM score 0 or 1, inbuilt Area Quantification module of HALO) was used to dichotomize DAB-positive and -negative pixels. MRTF-A positive area was normalized to the number of nuclei in each examined tubule and cyst. * p < 0.05.
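The 'high' nuclear MRTF-A threshold described in panel (E) can be expressed compactly: the per-nucleus OD values are binned, the modal bin of the PKD distribution is taken as the threshold, and the share of nuclei above it is compared between genotypes (a simplification of the AUC ratio shown in the inserts). The sketch below uses simulated OD values as stand-ins for the HALO single-nucleus export; the function name and distributions are illustrative assumptions.

```python
import numpy as np

def high_nuclear_fraction(pkd_od, ctrl_od, n_bins=256):
    """Fraction of nuclei above the 'high' MRTF-A threshold, defined as the
    modal OD bin of the PKD kidney distribution."""
    pkd_od = np.asarray(pkd_od, dtype=float)
    ctrl_od = np.asarray(ctrl_od, dtype=float)
    lo = min(pkd_od.min(), ctrl_od.min())
    hi = max(pkd_od.max(), ctrl_od.max())
    counts, edges = np.histogram(pkd_od, bins=n_bins, range=(lo, hi))
    threshold = edges[np.argmax(counts)]  # OD at the maximum of the PKD distribution curve
    return np.mean(pkd_od >= threshold), np.mean(ctrl_od >= threshold), threshold

# Simulated per-nucleus OD values (illustrative only; real data: >300,000 nuclei/kidney).
rng = np.random.default_rng(1)
pkd = rng.gamma(shape=4.0, scale=0.05, size=300_000)
ctrl = rng.gamma(shape=2.5, scale=0.05, size=300_000)
print(high_nuclear_fraction(pkd, ctrl))
```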
Considering that kidneys show focal heterogeneity in nuclear MRTF accumulation, we sought to compare this parameter quantitatively in the whole kidney sections of normal, PKD2, and PKD1 animals. Therefore, the distribution of nuclear MRTF-A (averaged for all animals tested with whole kidney slices, >300,000 cells/kidney) was measured by the HALO image analysis platform. A substantial shift towards higher nuclear intensities (OD) was observed in both PKD2 and PKD1 kidneys, compared to the corresponding healthy controls (Figures 7F, S5B and S7). The OD corresponding to the maximum of the distribution curve in the PKD mutant animals was selected as a threshold defining a 'high' MRTF-A nuclear presence. As shown in the insets, a significantly larger percentage of cells exhibited a 'high' nuclear MRTF-A expression in the PKD mutant kidneys than in the controls (Figure 7F).

CTGF Expression Shows Spatial Correlation with Increased MRTF Expression

Under physiological conditions, only stromal cells express CTGF in the kidneys. CTGF participates in ECM-associated profibrotic signaling by binding cell surface receptors (integrins), growth factors (TGFβ and BMPs), and ECM proteins (fibronectin) [72-74]. Further, both CTGF and PDGF expression are regulated by MRTF/SRF [29,32,75], constituting a positive profibrotic feedback loop in the epithelium. IHC quantification confirmed that CTGF and PDGF expression were elevated in the tubules of the PKD2 mutant animals compared to those of the control littermates (Figure 8A,B).

We reasoned that, if MRTF directly drives CTGF transcription, the corresponding messages should show a spatial correlation; therefore, we compared MRTF-A and CTGF spatial mRNA expression on consecutive sections using RNAScope. MRTF-A mRNA expression was greatly upregulated in the dilated tubules, the adjacent stroma, and the cyst-lining epithelium of the PKD2 kidneys, contrasting the minimal expression in the normal tubules of the same animals and the control kidneys (Figure 8C-E). Importantly, MRTF-positive epithelial cells ubiquitously expressed CTGF (Figure 8F-I), with a strong positive correlation between tubular MRTF and CTGF expression (Figure 8J). These tubules were typically localized in the vicinity of cysts and stromal areas with a strong CTGF expression. Altogether, the mRNA expression of MRTF-A and CTGF spatially overlapped, strengthening the potential role of MRTF in the epithelial initiation of profibrotic signaling.

These findings are consistent with the previously demonstrated roles of MRTF (reviewed in [33]) and the prominence of SRF as a key transcriptional hub in PKD [50]. Moreover, our approaches assigned two novel functions to MRTF in this context. The first regards the expression of genes governing mitochondrial functions/metabolism. This new aspect requires further studies, given that PKD is characterized by robust metabolic changes (reduced OXPHOS and increased glycolysis) and alterations in mitochondrial shape and function [14,62,63,101,102]. The other possible function of MRTF is the suppression of inflammatory genes via the negative regulation of relevant transcription factors (e.g., NFκB, as raised before [60,61]). This may signify a role of MRTF as a switch between the inflammatory and the fibrotic aspects of PKD. In addition, the MRTF-dependent gene sets and GO categories, while overlapping, are not identical for PC1 vs.
PC2 loss (Figure S4). This highlights the PKD-type-specific roles of MRTF. While we started to unravel the spatial correlation between MRTF expression and fibrogenic cytokine production, spatial transcriptomics studies should extend these findings in the future.

Conclusions

What is the significance of fibrosis in the overall pathology of PKD? A recent elegant report argues that fibrosis, per se, may inhibit cyst formation by mechanically restricting cyst growth, but it worsens survival [59]. These findings argue that combatting fibrosis is an important therapeutic option in PKD. Our proof-of-principle studies demonstrate that MRTF is activated upon PC loss and in PKD, and suggest that it acts as a significant mediator in the pathobiology of the disease. However, future functional studies should test how genetic or pharmacological interference with MRTF affects various aspects (inflammation, cystogenesis, and fibrosis) of the disease, and discern if MRTF is a viable drug target for the treatment of PKD.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cells13110984/s1. Figure S1. Experimental setup. (A) siPC1-1433 and siPC2-1644 are a second set of PC1- and PC2-targeting siRNAs (respectively). Transfection of either siPC1-1433 (150 nM) or PC2-1644 (100 nM) enhanced TAGLN expression. qPCR data are shown. (B) MRTF-A mRNA was quantified by RT-qPCR. siPC1 or siPC2 knockdown did not alter the efficiency of MRTF-A silencing (n = 3, n.s. non-significant, ** p < 0.01, *** p < 0.001). Figure S2. PKD-linked cytoskeletal changes are regulated by MRTF. Differentially expressed genes were determined as detailed in Supplementary Methods, and were subjected to enrichment analysis of Biological Processes (A) and Cellular Compartment (B), using ClusterProfiler in R.
Categories that were depleted in siPC1 and siPC2 conditions (compared to siNR) are shown in green. Further, selected top categories that were enriched in siPC1 or siPC2 conditions (compared to siNR) are listed in the siPC1 and siPC2 columns. Significant decreases compared to these in siMRTF-treated samples are shown in the next two columns. Note that biological processes that overlap between PC knockdown conditions and the siPC+siMRTF-A conditions were related to cytoskeleton organization and cell contractility (red rectangles). Figure S3. PC1 and PC2 loss-specific underrepresented biological processes and their MRTF-dependence. Top panel: Epithelial cells knocked down for PC1 and PC2 transcriptionally underrepresented several metabolic pathways. Lower panels: Out of these, MRTF-A silencing partially mitigates amino acid transport across plasma membrane (siPC1). Figure S4. PC1 and PC2 loss-specific biological processes. DEG analysis searched for transcripts that were upregulated upon PC1 loss but were unaffected by PC2 loss, or vice versa. For PC1, loss-related biological processes centered around DNA replication and transcription (top panel). Within this transcript set, MRTF-dependent genes were grouped in DNA recombination-related repair. Interestingly, PC2-specific upregulated categories remained to be linked to cell projection/cytoskeletal reorganization/cell motility. The MRTF-dependent subset is predicted to regulate the actin cytoskeleton, cell contractility, and muscle development. Figure S5. Expression analysis methods. (A) ATM within the Area Quantification Module measured MRTF-A/DAB signal and colored each pixel according to its DAB optical density. Yellow to red color gradient corresponds to increasing MRTF-A expression, as indicated. Green and yellow masks show selected normal tubules without or with lumen, respectively. Red masks depict visibly enlarged tubules; lumen measurements are given in µm. Scale bar corresponds to all panels. (B) Automated Area Quantification Module was trained to recognize the hematoxylin-stained

Figure 1. Loss of Polycystin 1 or 2 activates RhoA and leads to RhoA-dependent cytoskeletal remodeling and nuclear accumulation of MRTF-A. (A) PC1 or PC2 were silenced in LLC-PK1 cells with the corresponding siRNAs (150 nM and 100 nM, respectively, 48 h), without altering each other's expression, as assessed by Western blot analysis. PC/GAPDH ratios are presented in the right panel. (B) Active RhoA was detected by GST-Rhotekin binding domain precipitation assay. Active RhoA precipitates and total cell lysate blots were probed with a RhoA-specific antibody. Fold change was normalized against the active RhoA/total RhoA ratio of control siRNA-transfected cells (non-related siRNA or siNR) (n = 8). (C) Cells were transfected as above. After 48 h, active RhoA was detected by immunofluorescence, using an antibody specific for the active, GTP-bound form. Representative images are shown for each indicated condition. Fluorescence intensity was quantified in individual cells using ImageJ. (D,E) PC1 (150 nM siRNA) or PC2 (100 nM siRNA) were silenced alone or concomitantly with RhoA (50 nM siRNA), as indicated. MRTF-A subcellular localization was assessed by immunofluorescent staining in each condition. In parallel, F-actin was visualized by staining with iFLUOR555-labelled phalloidin (representative images of the brightest plane are shown in D). Note that RhoA knockdown was highly efficient (E). Using visual assessment of MRTF-A subcellular localization, cells were grouped into the following three categories: 1. predominantly nuclear MRTF-A (red bars, N = nuclear), 2. even nuclear and cytoplasmic presence of MRTF-A (grey bars, N+C), and 3. nuclear exclusion of MRTF-A (black bars, N exclusion). * indicates p < 0.05, each colored * indicates the significance of the corresponding bar. (F) MRTF-A subcellular localization was analyzed by ImageXpress, a high-throughput automated digital imaging system. The distribution curves indicate the frequency of cells with increasing nuclear-to-cytoplasmic MRTF-A ratio (N/C). (G) Cumulative quantitation of the distribution curves shown in F, using N/C > 1.4 as a cutoff, which corresponds to definite nuclear MRTF-A localization by visual classification. * p < 0.05, ** p < 0.01, and *** p < 0.001.
Figure 3. Polycystin loss induces the expression of a profibrotic phenotype-related transcription program in a partially MRTF-dependent manner. (A-E,G) LLC-PK1 cells were transfected with control siRNA (NR) or siRNAs targeting PC1 (150 nM), PC2 (100 nM), MRTF-A (50 nM), MRTF-B (50 nM), SRF (100 nM), or RhoA (100 nM), as indicated. In (A,B), 24 h post-transfection, cells were treated with DMSO or 5 ng/mL TGFβ in serum-free media for an additional 24 h. Synergy/additional effects between the loss of PCs and the presence of TGFβ were assessed by measuring the expression of known PEP-related genes that are direct transcriptional targets of MRTF-A. The synergistic effects varied among genes, as shown. ANXA and RASSF expression was not stimulated by TGFβ treatment, but was stimulated by PC loss and was partially MRTF-driven. Fold changes (normalized to PPIA expression) were compared to the DMSO-treated, siNR-transfected samples (n > 4 in triplicates). (C,D) Knockdown of RhoA and SRF diminished the profibrotic effect of PC loss. (F) The table summarizes the MRTF-dependence of the investigated PEP-related and validated MRTF/SRF targets. (G) Silencing MRTF-A facilitates the PC loss-induced increase of some Nuclear factor κB (NFκB)-dependent inflammatory genes. mRNA abundance for tumor necrosis factor-α (TNFα), chemokine (C-C motif) ligand 2 (CCL2), and interleukin-1β and α (IL-1β and IL-1α) are shown. * p < 0.05.

Figure 4. Next-generation transcriptome analysis of Polycystin- and MRTF-A-dependent gene expression. (A) Overview of experimental framework. LLC-PK1 cells were transfected with control siRNA (siNR, 200 nM, n = 3), siPC1 (150 nM, n = 4), siPC2 (100 nM, n = 4), alone, or with siMRTF-A (100 nM, siPC1+siMRTF-A n = 5, siPC2 n = 4). Gene expression was compared across the indicated conditions using RNA-Seq. (F) The graph is the visual representation of significant GO biological processes related to the ME18 module. (G) Predicted key transcription factors for genes that were upregulated upon PC knockdown and showed expression change upon MRTF-A silencing (Transfac TF Binding Site Enrichment Analysis). (H) Expression profiles from the RNA-Seq data shown for some individual genes under various conditions, as indicated. Key PKD-related genes (MRTF-A, TAGLN, CYR61, and COL4A2) and innate immunity-related genes (lower panels) are shown. Adjusted p-values are indicated: * p < 0.05, ** p < 0.01, and *** p < 0.001.

Figure 5. ITGB1 is overexpressed and activated upon Polycystin loss and regulates the subcellular localization of MRTF. (A,B) ITGB1 mRNA expression was quantified using RNA-Seq and RT-qPCR in LLC-PK1 cells transfected with siPC (siPC1 150 nM or siPC2 100 nM) or siNR, and siMRTF-A (50 nM) or siNR, as indicated. RT-qPCR results were normalized to PPIA expression, against the siNR-transfected controls. (C) Left panel: Antibody, specific for the active conformation of ITGB1 (12G10), was used to visualize the clustering of active ITGB1. Right panel presents the number of ITGB1 clusters, normalized to cell number. (D) Mn2+ treatment (500 µM, 1 h) activated integrins and was sufficient to drive high TAGLN expression. Relative expression was measured by qRT-PCR and was normalized against PPIA expression. (E) ITGB1 was partially silenced by specific siRNA (100 nM, 48 h). (F) Integrins were activated by Mn2+ treatment and MRTF-A subcellular localization was estimated by immunofluorescence staining. Using visual assessment (25 fields), MRTF-A localization was categorized as nuclear (red bars), even (grey bars), or cytoplasmic (black bars) (right panel). (G,H) Cells were transfected with siPC1 (150 nM) or siPC2 (100 nM) alone or together with siNR or siITGB1 (100 nM). Representative images of anti-MRTF-A immunofluorescent staining are shown (left panel) and reveal a modest but significant cytoplasmic shift upon ITGB1 silencing. MRTF-A localization was quantified as in (D, right panel). The images and quantification represent the results of a minimum of three independent experiments. * p < 0.05, ** p < 0.01, and *** p < 0.001.
Figure 6. Epithelial Polycystin 1 loss induces MRTF-dependent paracrine signaling that potentiates fibroblast-to-myofibroblast transition. (A) Experimental design. LLC-PK1 cells were transfected with the indicated siRNAs (150 nM siPC1 and 50 nM siNR or siMRTF-A). Twenty-four hours post-transfection, cells were thoroughly washed to avoid the carry-over of siRNAs to fibroblast culture, and fresh serum-free media was added. Conditioned media was collected after an additional 24 h and was used to stimulate fibroblasts. Forty-eight hours later, >10 randomly selected areas of the fibroblast cultures were photographed under each condition. Cell contractile function, related to % area covered by wrinkles, was analyzed by FIJI ImageJ software and the values were normalized to cell number. Scale bars correspond to all images. (B,C) Representative images of fibroblast cultures at the experimental end points and their quantification. Scale bars correspond to all images. **** p < 0.0001.

Figure 8. Fibrogenic cytokine expression is increased in PKD, and CTGF expression spatially correlates with MRTF mRNA abundance in vivo. (A,B) CTGF and PDGFB expression was detected in Pkd2 WS25/− (PKD2) and Pkd2 WS25/+ animals (Control, Ctrl) by IHC (n = 3). Expression was quantified in 4-10 fields per animal using the Area Quantification package of HALO, normalizing for the number of nuclei. (C-I) RNAScope was used to detect MRTF-A and CTGF mRNA expression in Pkd2 WS25/− and Pkd2 WS25/+ animals. Images in (D) are enlarged areas of normal and dilated
2024-06-07T15:14:01.468Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "10cb0c6c6b300072faa071fdc631e86d4c6292be", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/13/11/984/pdf?version=1717581686", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7e93bdb8a0557cb7e72478d13c84e21ac4e9b25a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
149554559
pes2o/s2orc
v3-fos-license
The European Union and Practices of Governing Space and Population in Contested States: Insights from EUPOL COPPS in Palestine

ABSTRACT This paper examines the EU Police Mission in the Palestinian Territories (EUPOL COPPS) with a focus on its effects on everyday police work on the ground. The main argument is that the mission illustrates the ways in which its training and advisory activities work to foster logics and practices that feed into and reproduce the borders that have over the years been imposed, primarily through Israeli security practices. Operating under conditions of contested statehood, EUPOL COPPS promotes Palestinian policing activities based on particular spatial logics and actions as to the governance of the Palestinian population. The article presents new empirical material collected through interviews and document analysis. As such, it aims to build bridges between the literature on critical border studies, EU external relations, the EU's role in the Israeli-Palestinian conflict as well as the literature on the EU police missions in conflict and post-conflict settings by emphasising their spatial dimension.

Introduction

Borders, space and territory remain crucial issues in the Israeli-Palestinian conflict. Firstly, the Zionist ideology is essentially a territorial one, which aspires for spatial and territorial configuration to achieve a homogenous national population. This has direct relevance for how citizenship and nationality are understood by the Jewish population as well as in the policies and actions of the State of Israel (Newman 2001). Space also matters for Palestinian imaginations of self-determination and independent statehood, of which the quest for a 'good' border is a key component – equally for the Israeli side (Falah and Newman 1995). Over the decades, the conflict has been shaped by attempts and practices at spatial control and configuration (Newman 1989). Borders have been imposed – primarily through the use of force that has taken the form of military intervention and occupation not only of Palestinian territories by Israel, but also the latter's annexation of regions in the neighbouring countries such as Syria and Jordan and its enforcement of borders by reference to national security considerations (Newman 1989, 1996). In addition to military means, space has been acted upon and borders have been imposed in the conflict region by means of various techniques ranging from the "Separation Wall" in the West Bank (Pallister-Wilkins 2015) to checkpoints (Parsons and Salter 2008) along with architecture and infrastructure projects (Weizman 2007). There are many actors involved in spatial imaginations or non-spatial workings of bordering in the Israeli-Palestinian conflict, including the Israeli state and military. Israeli settlements in the occupied territories are part and parcel of spatial control and organisation, which brings to the fore the centrality of territory for the ongoing conflict and the peace process (Newman 1996). Meanwhile, borders permeate the visions and discursive articulations of ordinary Israelis and Palestinians illustrating the significance of 'social boundaries' (Falah and Newman 1995, 691). In this article, we examine the European Union (EU) as one actor, whose activities in the context of the Israeli-Palestinian peace process have important implications for space and borders as protracted issues of the conflict (Bicchi and Voltolini 2018), including the management of mobility and population.
The focus is on the EU Police Mission in the Palestinian Territories (EUPOL COPPS) established in 2006 with the declared objective to assist Palestinians in reforming their security sector by means of improving the civil police component of security reforms. This article draws mainly on critical border studies. Del Sarto's research (2015) entails a 'borderlands approach' to examine the spatial dimension of EU relations with the conflict parties. Our approach differs from Del Sarto's study in two ways. First, while Del Sarto examines the 'overlapping border regimes within a space' entailing the EU, Israel and the Palestinian territories, our focus is on EU civilian mission activities within a delineated space and its implications for the latter. Second, different from Del Sarto's (2014) interest in the space-making effects of visa policies, our study explores how policing works to govern space and population with broader implications for contested borders. We are especially interested in conceptual debates over 'borderwork' (Rumford 2006, 2012) and 'bordering' (Bialasiewicz et al. 2009; Pallister-Wilkins 2017). The examination of EUPOL COPPS would be incomplete without looking at its spatial dimension. This is because the mission does not target an autonomous and independent state with clear and internationally agreed borders and/or control over its population. The borders of the two conflicting parties are highly contested, and Palestinian sovereignty over territory and its autonomy in governance matters are strictly restricted due to the Israeli occupation and settlement activities (Del Sarto 2015). As such, EUPOL COPPS is deployed in a contested state and in the context of the Israeli-Palestinian conflict, which is still ongoing. The examination of EUPOL COPPS is important because the mission is not a border assistance mission, such as for example the European Union Border Assistance Mission to Rafah (EUBAM Rafah), which was deployed in 2006/2007 at the Rafah Crossing point between Gaza and Egypt but has remained inactive since 2007. Examining EUPOL COPPS' spatial dimension is an endeavour that has not been conducted previously, and by doing so we highlight how a non-bordering mission has bordering effects. EUPOL COPPS is interesting also because it is a civilian mission without an executive mandate, which means that the mission's staff do not have executive powers and cannot implement policies themselves but the mission has a rather technical and advisory character. Yet, a closer examination of the mission's training and advisory activities on the ground indicates how the mission works to foster logics and practices of the governance of population that feed into and reproduce the borders which have over the years been imposed, primarily through Israeli security practices. Operating under conditions of contested statehood, EUPOL COPPS promotes Palestinian policing activities based on particular logics and actions as to the governance of the Palestinian population and space. The article also speaks to the existing literature on critical border studies, EU external relations, the EU's role in the Israeli-Palestinian conflict as well as the literature on the European Union police missions in conflict and post-conflict settings by emphasising their spatial dimension. This strand of scholarship has provided original and critical insights into the underlying rationales, instruments, processes and outcomes of EU activities in third countries.
Studies have looked, among others, at the EU police mission in Bosnia and Herzegovina (Celador 2009; Merlingen and Ostrauskaite 2005; Osland 2004), in the Democratic Republic of Congo (Martinelli 2006), in Macedonia (Merlingen and Ostrauskaite 2008) and in the occupied Palestinian territories (oPts) (Bouris 2012, 2014, 2015; İşleyen 2018a; Müller and Zahda 2018; Tartir and Ejdus 2018). This literature has pointed out that EU police missions have rested on a peacebuilding discourse emphasising liberal forms of state-building and good governance institutions and illustrated the outcomes of EU activities for democracy, peace and regional politics. While previous research has studied the transformation of police skills and capacities in third countries through Common Security and Defence Policy (CSDP) missions, the operational effectiveness of such missions, and analysed them as part of the EU's conflict resolution strategies, little attention has been paid to their spatial dimension. We study how EUPOL COPPS produces and transfers specific logics and practices of governing space and population in the Palestinian territories. It does so through its police training activities that produce and reproduce borders. This works through the promotion of particular forms of Palestinian mobility at the expense of others. The article begins by outlining the conceptual framework that we draw upon to examine EUPOL COPPS activities. This is followed by a section on the regional and international context in which EUPOL COPPS has emerged and been deployed in the oPts. We then apply our conceptual framework to the analysis of the mission's activities on the ground. The article draws on primary material that has not been published before and on a number of interviews conducted in English with EUPOL COPPS officials in Ramallah as well as with EU officials from the Civilian Planning and Conduct Capability (CPCC) and the Political and Security Committee (PSC) in Brussels. These interviews were conducted in the last eight years, with the most recent of them conducted in Ramallah in May 2018. The aim of these interviews has been to triangulate the material collected through secondary sources. Secondary sources such as academic literature on borders, borderwork, the EU's role in the Israeli-Palestinian conflict and EUPOL COPPS have also been consulted, as well as media and newspaper articles. The last part summarises the findings and the implications of the mission's activities for the ongoing Israeli-Palestinian conflict.

Bordering and the Governance of Space and Mobility

Drawing upon critical border studies (Johnson et al. 2011; Rumford 2008, 2012), we study EUPOL COPPS as a particular form of 'bordering' activity that promotes and disseminates specific logics of and relationships between space, governance and population. There are three central arguments of this body of scholarship that this article addresses. First, borders are not limited to state borders. As Coleman (2007) puts it, 'the border – and border enforcement – is increasingly everywhere'. This is not to negate the existence of state borders, be they lines, fences or rivers, but to recognise that 'contemporary borders become deterritorialised and disaggregated' (Côté-Boucher, Infantino, and Salter 2014, 196).
Rather than taking borders as rigid and centralised forms of division and exclusion, a critical perspective on borders looks at the ways in which new forms and mechanisms of bordering take place in 'dispersed and heterogeneous sites located beyond the geopolitical border lines' (Côté-Boucher, Infantino, and Salter 2014, 196). Bordering is not restricted to activities 'at the territorial margins of the state' (Coleman 2007, 56). Instead, bordering is manifested in a wide range of locations away from the state border, bringing in a wide range of actors and institutions and following diverse spatio-temporal logics and mechanisms that are hard to explain solely from a state-centred perspective (İşleyen 2018b). Borders are also constructed within and beyond states, which requires us to move away from the territorial demarcation lines between states (Bialasiewicz et al. 2009). Second, borders and bordering are not 'exclusively the business of the state' (Rumford 2012, 897). Instead, non-state actors are key in 'envisioning, constructing, maintaining and erasing borders' (Rumford 2012, 2). What is common to the borderwork literature is its call for a shift in focus in the examination of borders from state to non-state actors engaged in bordering (Johnson et al. 2011). The EU is one such non-state actor whose discursive and non-discursive practices have productive effects in the emergence, production and reproduction of territorial and spatial configurations both within the European space and beyond (Mamadouh 2015; Bialasiewicz, Elden, and Painter 2005; Rumford 2006). One example is the type of bordering coming into play through the European Neighbourhood Policy (ENP) as a form of region-building. The ENP illustrates the coming into play of 'new political geographies of the European "neighbourhood"' that are produced and reproduced through the activities undertaken by diverse EU agents and institutions targeting the Mediterranean, the Black Sea and the Western Balkans as well as selected spaces within the territorial borders of EU member states. The ENP constitutes the 'hierarchies of places, rights, and access' (Bialasiewicz et al. 2009) that are established and enhanced by means of a wide range of pre-emptive, region-building and post-conflict state-building activities undertaken by supra-national, trans-national and sub-national actors in the EU. Third, bordering does not have to build on 'consensus' to be powerful and effective. As contemporary borders increasingly become varied, dispersed, and dislocated, they are not necessarily agreed upon, developed and utilised through consensus. Nor are all borders 'identifiable, recognised by all parties', nor do they generate consequences that affect all equally. The activities of FRONTEX, the EU's border agency, in the Mediterranean constitute bordering work conducted by a non-state agent, whereby consensus is not the norm and the effects of bordering are not all-encompassing. 'The "FRONTEX" border is a new sort of flexible border, deployed whenever and wherever it is needed', yet it constitutes 'a border which is not mutually agreed by those on either side of it' (Rumford 2012, 891). Moreover, FRONTEX operations do not affect the EU and the non-EU in the same way. Whereas the 'inside' does not always become aware of the 'FRONTEX border' in its daily deployment, the 'outside' experiences and is confronted by the exclusionary effects of this border on an everyday basis (Rumford 2012, 892; Bigo 2014).
Nevertheless, bordering is also constitutive of spaces of governance within and beyond traditional conceptions of statehood and borders. Borders function in multifarious ways and produce diverse and multiple geographical logics and practices of territorialisation, re-territorialisation and de-territorialisation (Bialasiewicz et al. 2009). They disseminate particular logics of governing space and population. This occurs through the governing of the mobility of the population. This might work to either 'expand the spatial zone of intervention' (Pallister-Wilkins 2017, 93) or concentrate it inside the territorial borders of the nation state (İşleyen 2018c), the latter being manifest, for example, in transit spaces such as hotspots and hubs on which EU bordering practices have recently concentrated (Pallister-Wilkins 2017). The ENP shows the ways in which economic and security partnership and political cooperation are meant to constitute the EU's neighbourhood as a space that is tied to a process of bordering, re-bordering and de-bordering. The ENP produces and reproduces particular spatial logics and practices of the 'interior' and the 'exterior' (Bialasiewicz et al. 2009) and enables the EU to export its rules and practices to bordering countries or, as it is often called, its 'near abroad' (Del Sarto and Schumacher 2005; Lavenex 2008; Lavenex and Schimmelfennig 2009). The ENP creates the so-called 'neighbourhood' as a space, where the 'EU-ropean project' is promoted and 'EU-ropean solutions' are being experimented with, and neighbourhood countries are drawn into the EU's normative, legal and institutional framework without necessarily being attached to its identity. Meanwhile, the 'externalisation' of selected security, economic and political aspects of the 'EU-ropean project' inadvertently creates and reinforces (new) asymmetries in terms of actors, conditions, relations, institutions and actions within the 'South' (Bialasiewicz et al. 2009, 83). Political scientists have so far tried to explore the EU's impact on and role towards candidates for accession or third states through the process of Europeanisation or through the lens of external governance (Featherstone and Radaelli 2003; Grabbe 2001; Lavenex and Uçarer 2004; Schimmelfennig 2009; Sedelmeier 2011), but they have largely treated territory and space as 'essentially isotropic and planar – an abstract, uniform, featureless medium, upon which human political action is played out' (Clark and Jones 2013, 306). As Clark and Jones (2013, 306) have argued, 'territory is often depicted as a passive backdrop over which Europeanisation politics and political actions are played out, a setting rather than a dynamic quantity in its own right'. The case of the Palestinian territories though exposes these weaknesses due to the fact that the ideational and territorial aspects remain dynamic and also have significant reverberations with regard to the state-building and conflict resolution initiatives taken on this space. To this end, this article and our approach contribute to (and enhance) a number of different literatures, which are not always brought together. First of all, our main focus is not on a sovereign state but a contested one. This paper draws upon the definition of contested statehood offered by Papadimitriou and Petrov (2012, 749).
According to them, contested statehood is a state of affairs, where one or more of the following characteristics hold true: (a) An internationally recognised state authority (as expressed by full membership of the UN) cannot maintain effective control over its respective territory (or parts of it), either as a result of an ongoing conflict or its profound disconnection with the local population; (b) The de facto governing authority of a contested territory has declared independence, but it does not command full diplomatic recognition by the international community as expressed by full membership of the UN; (c) The capacity of an internationally recognised or a de facto government to exercise authority is severely compromised due to the weakness of its state apparatus, either because of poor resources or complications in the constitutional arrangement underpinning its operation. This definition is closely linked to the work of political geographers on space and territoriality. Jessop (2016), for example, argues that statehood 'has different forms, rests on specific political and calculative technologies that support territorialisation, and can be combined with other forms of political authority and broader patterns of spatial organisation, resulting in different kinds of state and polity'. To this end, the very notion of the state is broken down into three key components: (1) a politically organised, coercive and administrative apparatus with general and specific powers; (2) a clearly defined territory under the continuous (and uncontested) control of a state apparatus; and (3) a permanent population, upon which the state's political authority and decisions are binding. We specifically focus on the bordering work conducted by a non-state actor, namely an EU CSDP mission (EUPOL COPPS), which has so far been missing from the debate. What makes this even more important is that we are looking at the spatial dimension of the mission. Although issues of the importance of space and territory, especially in regard to the Israeli-Palestinian conflict, have extensively been discussed throughout the years, and authors have engaged with the debate of how borders come into existence through social practices, so that they could be described as bordering processes (Newman and Paasi 1998), or with issues of bordering, space, power and conflict in the context of Israel/Palestine (i.e. Falah and Newman 1995; Newman 1989, 1996, 2001; Pallister-Wilkins 2015; Parsons and Salter 2008; Weizman 2007), this literature has not engaged with the EU's initiatives on the ground. As such, this article moves beyond these literatures and their aforementioned limitations and examines the kind of borderwork conducted through the EU's police mission taking place in the PA. While CSDP missions have been deployed in many countries in the EU's so-called neighbourhood, the case of Palestine is unique because of the ongoing Israeli-Palestinian conflict and the related Israeli occupation of the Palestinian Territories. Therefore, as a non-state actor, EUPOL COPPS operates under complex political, geographical and social conditions stemming from the aforementioned conflict and occupation. The next section turns to the kind of bordering work conducted by the EUPOL COPPS mission and discusses its effects for the realities on the ground. In the following we focus on the logics and micro-level techniques of EUPOL COPPS and aim to illustrate how this civilian mission translates into a sort of bordering.
EU Involvement in the Palestinian Contested State through Security Sector Reform (SSR) in the Aftermath of the Oslo Accords The reasons of contested statehood in Palestine can be traced back to the collapse of the Ottoman Empire, the subsequent British mandate and the eventual British withdrawal from these territories. In Resolution 181, the UN decided upon the division of Palestine into two states, an Arab one and a Jewish one, and the internationalisation of Jerusalem. Following the 1967 War, Israel occupied the West Bank (including East Jerusalem), Gaza and the Golan Heights. In 1993, Israel and the Palestinians signed the Oslo Accords, which among others, also created the Palestinian Authority (PA), tasked to control a number of non-contiguous population centres. With the signing of Oslo II in 1995 (also known as Interim Agreement on the West Bank and the Gaza Strip), the West Bank was divided into three areas: A, B and C. Area A constituted 17.7%, Area B 21.3% while Area C represented 61% of the West Bank. What is noteworthy to mention is that it was only in Area A that the PA was given full responsibility for civilian and security affairs. In Area B, the PA was given only civilian control (the security control would be maintained by Israel) while in Area C Israel would retain full control. These areas are not contiguous and this is the reason that the West Bank is often called 'Swiss cheese' : 'Israel kept the cheese and left the holes for the Palestinians' (Lia 2006, 283). The result of this compartmentalisation of the territory of the West Bank was the construction of visible and invisible borders which would have significant reverberations with regards to the relationship between space and governance in the oPts both for internal Palestinian politics but also for the involvement of the EU and other external actors in the Palestinian state-building. The compartmentalisation of the territory has led to a situation where every externally-devised initiative is closely linked to the creation of borders decided and approved by the occupying power, that is Israel. At the same time, this fragmentation of the territory has also contributed to the PA's promotion of specific security logics and the production and reproduction of a 'legitimate' space for the Palestinians as well as a particular relationship between space, governance and population, which is another form of establishing borders. The division of the West Bank into these areas has not been just territorialthis division has led to the promotion of mobility in certain ways while restraining others as well as to the dissemination of particular logics of governing space and population. Israeli occupation makes extensive use of infrastructural arrangements, documentation as well as temporary and permanent checkpoints to regulate the mobility of the Palestinian population. This exemplifies the need for going beyond the state-centred perspective of borders to the examination of their materialisation in deterritorialised and dispersed spaces and sites (Parsons and Salter 2008). Neither do borders have to build on 'consensus' to be powerful and effective but are rather imposed by the occupier, which is Israel, whose population has a different daily experience of those borders than Palestinians. 
The most important outcome of the Oslo Accords was that Israel would remain the final arbiter of Palestinian life by having the ultimate control of all 'internal' and 'external' borders, or in other words all entry/exit points into/from Palestinian areas (Agha and Khalidi 2006;Le More 2008, Luft 2004. While the Oslo provisions were supposed to be temporary and the Accords themselves were considered as an interim period which would end in 1999, their provisions still guide the way that the international community in general and the EU in particular engage with the oPts and the way that the PA governs over its territories (Bouris and Kyris 2017). In the aftermath of the Oslo Accords, the EU engaged actively in every aspect of the state-building project carried out in the oPts (Bouris 2014) and it also provided half the funding needed for the setting up of the PA's institutions because it was hoped that building Palestinian institutions would be a first step towards the establishment of a Palestinian state and the end of the conflict (Bouris 2014, 73). Security has been central to all agreements signed throughout the Oslo period and it was inherently linked with the debate on borders. Details of all security and policing arrangements were agreed and specified in the agreements signed during the so-called Interim period. Despite all these detailed agreements, legally and politically 'the Palestinian Police was a far cry from a national police force in an independent state' (Lia 2006, 269). Much of the security infrastructure built during the Oslo years was almost completely destroyed by Israel following the outbreak of the second intifada in 2000 (Friedrich and Luethold 2007, 19). Palestinian SSR assumed a central role in the 2003 EU-inspired and Quartet 1 sponsored Performance-based Roadmap to a Permanent Two-State Solution to the Israeli-Palestinian Conflict'. Soon after George W. Bush made clear that 'The United States will not support the establishment of a Palestinian state until its leaders engage in sustained fight against terrorists and dismantle their infrastructure' (2002), a number of initiatives in the domain of SSR were taken to (in theory) help the Palestinians reform their security sector which would benefit first and foremost themselves. The reality though has been very different. The whole SSR activity would prove to be a way of exerting power towards the Palestinians, of 'convincing' and 'training' them on what is right and wrong and how the security apparatus in general and the civil police in particular should operate (according to Western standards). As Mustafa (2015, 220) puts it: The key point is that this order is imposed in such a way that it does not appear to be imposed at all; the coercive agency necessary to achieve this disposition is disguised, possibly even from the agents themselves, because of the ideal of 'consensual' politics they espouse and the way the coercive power guaranteeing it hides behind it. Power becomes more persuasive and pervasive when its action and function is disguised as something other than what it is. It is on this basis that EUPOL COPPS was established in 2006 building on a previous bilateral British initiative, which had been initiated in mid-January 2005 by the Department of International Development (DfID) and was called EU Coordinating Office for Palestinian Police Support (EU COPPS). 
EU COPPS was established within the office of the EU Special Representative for the Middle East Peace Process, Marc Otte at that time, was comprised of just four senior police advisors and led by Jonathan McIvor. EU COPPS carried out a fact-finding mission and produced a Palestine Police Project Memorandum with specific proposals for a programme which would support the Palestinian Civil Police (PCP) in both short and long-term plans, something which was considered as an important element of strengthening overall governance in the oPts. According to an official involved in the initial fact-finding mission: Little attention had been paid to the needs of the Palestinians and their safety and security which in theory should have been the priority of the Civil Police. Until that moment capacity building of the PCP meant satisfying, first and foremost, Israeli security needs and demands. While the mission initially had a 3-year mandate, this has been extended since then. EUPOL COPPS has two main operational pillars: a Police Advisory and a Rule of Law section (from 2008). Its main tasks according to its mandate are: (a) to mentor and advice the PCP; (b) to co-ordinate and facilitate EU member financial assistance to the PCP and (c) to give advice on politically related criminal justice elements. What is noteworthy to mention is that, EUPOL COPPS does not have an executive mandate, which means that its role is limited to mentoring and advising. The CPCC is responsible for the planning and conduct of the mission under the political control and strategic direction of the PSCboth based in Brussels. EUPOL COPPS consists of four sections namely: the rule of law section, the police advisory section, the planning and evaluation section and the mission support section. The mission staff have expanded considerably since its inception; while the mission started with 48 staff in 2006, in 2017 it had a strength of 114 staff (Bouris and Dobrescu 2018, 261). The mission also has special, field and training advisers who work in different parts of the West Bank providing assistance and helping in the identification of training and equipment needs. Training and Equipping Initially, the mission focused on tackling the most urgent equipment and infrastructure needs of the PCP. An EUPOL COPPS official who has been there since the deployment of the mission argues: 'when we first arrived here we witnessed a Palestinian Civil Police just exiting from the intifada, you couldn't enter Nablus… everything was chaos'. 3 The operational beginning of the mission also coincided with Hamas' success in the Palestinian elections and the subsequent boycott of its government by the EU and the international community (Gunning 2008;Voltolini and Bicchi 2015;Pace and Pallister Wilkins 2018). 'From our first days here we were hostages of the political situation without being able to do our job' argues another official from the mission. 4 The subsequent Hamas takeover of the Gaza Strip in 2007 and the division of the Palestinian Territories into the Fatah-led West Bank and Hamas-led Gaza Strip further complicated the situation on the ground (Persson 2017) and proved the argument that bordering is constitutive of new spaces of governance both within but also beyond traditional conceptions of statehood. 
As such, the realities of the contested statehood in Palestine as well as the division and separation of the border of 'Palestine' (which includes the West Bank, the Gaza Strip and East Jerusalem) into two 'internal and separate borders' (i.e. the Hamas-led Gaza government and the Fatah-led West Bank government) resulted to EUPOL COPPS being operational only in the areas which the Fatah-led government of the West Bank could control (Pace and Cooley 2012). Moreover, this separation did not build on 'consensus' but it rather built on internal Palestinian divisions. What is noteworthy to mention is that by engaging with the Fatah-led government in the West Bank the mission in essence has legitimised an institution (in this case the Palestinian civil police), which is contested and in essence lacking democratic oversight as there have been a paralysis of the Palestinian Legislative Council and all laws since 2007 are enacted through Presidential decrees. The ground for the mission's stronger engagement in the SSR objectives became more fertile after Salam Fayyad's appointment as a Prime Minister in the oPts in June 2007 and the subsequent adoption of his plan entitled 'Palestine -Ending the Occupation, Establishing the State' two years later (Palestinian National Authority 2009). In parallel to these developments, in June 2008, EUPOL COPPS expanded its mission so as to include a rule of law component based on the belief that a well-governed security sector would require more than just well-trained and well-equipped security forces; it would require a system of transparency where security governance would be accountable to people. As a DfID official argues: We realised that we needed a holistic approach that would help us bridge and merge security and justice because there was a fear that the justice system would be left behind and would not be able to catch up with the security system. 5 Another official from the Crisis Management and Planning Directorate argued that 'You can train as many policemen as you want but if you do not include criminal justice training your efforts will not mean anything'. 6 As a result, the mission started focusing more actively on the strategic level of reforms and more specifically on the criminal justice sector by targeting the most important actors in the 'criminal chain' namely prosecution services, courts, the High Judicial Council, penitentiary, the Ministry of Justice, the Palestinian bar association, civil society and the scientific legal community. By providing training to the local police staff, EUPOL COPPS has managed to promote specific techniques of governing the space and thus diffusing power through the training they offer. As İşleyen (2018a) puts it: The mission's activities introduce an asymmetric relationship between EUPOL COPPS officials and the local police, whereby the former is portrayed as the normal as opposed to the latter's abnormality in terms skills, experiences and competence. 'Over 60% of the prosecutors and administrative staff are inexperienced, have been employed during the last year and need additional basic training and supervision' argues a 2009 Assessment report (EUPOL COPPS 2009b). The conclusions of another report on a criminal investigation within the Palestinian civil police are telling: We have identified chokepoints within the organisational structure of the criminal investigation units of PCP. We do not think that today's system is an optimal solution. 
A change towards a more 'One Stop Shop' model would be more efficient (EUPOL COPPS 2009a). EUPOL COPPS has also been involved in spatial configurations of governing this space through specific techniques. After the split in 2007 between the Fatah-led West Bank and the Hamas-led Gaza Strip, which also resulted in a specific governance rift, the PA decided to relocate the Ministry of Justice to Ramallah. The Ministry of Justice and the seats of the highest bodies of the judiciary had originally been placed in Gaza in an attempt of the PA to spread its institutions over both territories (EUPOL COPPS Rule of Law Section 2009b). Hamas' takeover of Gaza Strip in 2007 and the subsequent unwillingness of the EU to engage with it also had a direct effect on EUPOL COPPS, which could only engage with the security sector reform conducted in the West Bank (and mainly in Area A). As such, the mission feeds into and reproduces borders and spatial configurations emerging from the domestic and regional dynamics relating to the conflict. More specifically, EUPOL COPPS has contributed to the imagination of a specific space for the Palestinians, which includes specific areas of the West Bank and thus excluding the Gaza Strip, where the mission was also supposed to be operational. Another example is the training provided by EUPOL COPPS to the Palestinian Prosecution Office involving several modules on economic crime management. EUPOL COPPS training included the transfer of expert knowledge on countering economic crimes both in the public and private realm (İşleyen 2014). This training has bordering effects because it imagines the Palestinian economic space from a restricted spatial gaze. It identifies the problems of the Palestinian economy as crimes pertaining to the 'internal' meaning that EUPOL COPPS training only addresses issues understood as crime within the spatial confines that Palestinian economy 'can' operate. Such spatial imagination is a form of bordering as it conceals realities surrounding the Palestinian economy that go well beyond the domestic space as Palestinian economy is far from being independent. The Paris Protocol (1994) integrated the Palestinian economy into the Israeli one through a customs union. This meant in practice that all imports and exports would be subject to Israeli supervision and that Israel would collect and pass to the PA the taxes and custom duties imposed on Palestinian imports from or via Israel. Therefore, by turning problems into matters of capacity building, EUPOL COPPS training reduces the Palestinian economy and economic crime to the domestic while detaching these issues from structural conditions and connections. Disciplining and Managing the Space and Population EUPOL COPPS' engagement in the strategic level working closely with the Palestinian Ministry of Justice and Ministry of Interior has allowed the mission to diffuse specific ideas of 'how things should work'. In other words, the mission has been extremely instrumental in inserting particular rationalities and techniques of governing the space in the oPts through its direct involvement in the lawmaking processes. For example, advisers from the mission are engaged closely with senior officials from the Ministry of Interior supporting the reform and development of the PCP. 
As underlined on the website of the mission EUPOL COPPS advisers support the Ministry of Interior and PCP at strategic level to embed the concept of civilian police primacy through cooperation and coordination with the wider security sector agencies and their international advisers (EUPOL COPPS 2017). 7 More recently the mission has stepped up its efforts to get more involved in the strategic level. As two officials from the CPCC and the PSC who are responsible for the planning and strategic guidance of EUPOL COPPS' activities at Brussels level argue 8 We started re-focusing on the strategic level and getting involved in the drafting of key legislation. We have already assisted the drafting of the Code of Conduct on the Use of Force and Firearms which has entered into force and currently we assist the Ministry of Interior and the Ministry of Justice to draft the Police Law and the Criminal Procedure Law. By engaging in such a strategic level and having a say on the laws governing police and criminal procedures, EUPOL COPPS promotes the disciplinarisation and normalisation of police officers (İşleyen 2018a) and the governance in the oPts similarly to what the EU has been doing through EU Police Mission (EUPM) in Bosnia (Merlingen and Ostrauskaite 2005). This is also confirmed by recent interviews with EUPOL COPPS officials involved at providing advice at the Palestinian Ministerial level. EUPOL COPPS experts worked closely with the Palestinian Ministry of Interior especially with regard to Palestinian Strategic Planning and the Security Sector Reform Strategic Plan 2017-2022 and they argue that most of their recommendations were accepted. 9 This means that in practice they managed to promote specific logics of how laws regarding the Palestinian Civil Police should 'look like' in the policing of the Palestinian population. This includes for example, what the Palestinian Security Services are 'allowed' and 'not allowed' to do, as having a direct say on the law drafting and enactment has particular power effects. This has also spatial reverberations as these laws apply primarily to the Palestinian population in the West Bank but not in Gaza. As such a particular 'border' is created while at the same time the fighting against organised crime also serves the Israeli security concerns. This is also the reason why Israel has 'endorsed these technical achievements of EUPOL COPPS, realising the mission can make the PA more effective in policing the West Bank and a more reliable partner in quashing dissent and countering insurgency' (Ejdus and Tartir 2017). Among others, EUPOL COPPS also provides training with regard to arrest techniques and crowd control, which is the responsibility of Palestinian Special Police Forcepart of the PCP. The Special Police Force, which is the main antiriot and crowd control of the PCP, receives training from EUPOL COPPS, which focuses on proportionate and non-lethal use of force when dispersing crowds and demonstrations. Despite this, in a number of demonstrations in the last years, lethal force has been used against the demonstrators. In July 2013 for example, hundreds of Palestinians demonstrated in Ramallah against the US-led resumption of final status talks between the Palestinian Liberation Organisation and Israel. Their route was blocked by regular and riot police and when the demonstrators starting throwing stones the police responded with force. 
'The riot police attacked us with batons' argues one of the demonstrators while another one said that 'the demonstrators were trying to get past the barrier when the attack started' (Human Rights Watch 2013). In a similar incident in 2012, the Palestinian security services spokesman argued that police acted to stop the protesters from approaching and reaching the Presidential headquarters in Ramallah where demonstrations were 'prohibited' (Human Rights Watch 2012). More recently, in June 2018, Palestinian demonstrators gathered in the centre of Ramallah to demand an end to the PA's sanctions against Gaza. The police had pre-emptively declared the demonstration illegal and prevented demonstrators from gathering in the 'Manara square' (the central square of Ramallah). Among the Palestinian security forces were also civil police personnel in police uniforms (some others were special forces and others in military uniforms). As a witness argues: 'determined to clear the streets, large groups of Palestinian security forces moved towards and targeted with tear gas and stun grenades any gathering of even a few people. Then came the undercover offices' (Younis 2018). What should be acknowledged though is that the presence of different security services during these demonstrationssuch as the Palestinian Authority Intelligence and the Preventive Security Force (which are not part of the PCP)make it hard to tell which security service was behind the excessive power used. Despite this, the training of crowd control is closely linked with the logic to discipline and manage space and population. The aim of this crowd control is to ensure that the demonstrators will be limited to a specific geographical space and territory and that they will not cross the visible (or invisible) border and/or boundary determined by the PCP. As such, crowd control techniques and policies aim at disciplining and managing the Palestinian population but also to govern the mobility of the population thus adding to the first layer of this control, which is the Israeli occupation of the Palestinian Territories thus reproducing it. These techniques also constitute a space-making function as they are also closely linked to the creation, expansion but also concentration of a 'permitted' space and also shape the mobility of Palestinians (İşleyen 2018a). The implications of the operationalisation of EUPOL COPPS in the construction, enforcement and reinforcement of borders are not limited to crowd control. As mentioned above, the mission does not have an executive mandate, which means that it can only be present where the Palestinian police are allowed (by Israel) to operate. The Oslo Accords had allowed for the establishment of 25 Palestinian police stations in Area B, each with 25-40 civil police, so as to enable the PA to exercise its responsibility for public order (although Israel would retain ultimate responsibility for security). The Accords contained specific provisions about the exact number of police at each station and required that the movement of Palestinian police in Area B should be coordinated and confirmed with Israel (Cordesman 2005). Following the eruption of the second intifada and the subsequent violence, Israel closed down all these police stations and the Palestinian police was not allowed to operate there. The first time, since 2001, the Palestinian police was permitted to return to these areas was in 2008 when Israel approved the opening of 20 police stations (Jerusalem Post 2008). 
'We were and still are totally handicapped' argues a EUPOL COPPS official. 10 'Everything we do is done with the approval of the State of Israel. Any equipment we bring in has to be approved by the Coordinator of Government Activities in the Territories' admitted Henrik Malmquist, who was the Head of the mission between 2010 and 2012, in an Israeli newspaper interview (Hass 2011). 'No one can understand the seriousness of the situation if he/she does not witness it. A police station can be 100 metres away from another police station in Area B. We need permission from the Israelis to go from one to another. 11 As such, the mission, consolidates the borders dictated by Israel and consequently it ends up unintentionally conducting a border activity that turns into a form of bordering that produces and reproduce the Israeli occupation of the Palestinian Territories. Conclusions The aim of this article has been to analyse EUPOL COPPS' activities on the ground and to highlight how the mundane exercises of the mission do bordering through the transfer of particular conceptions and practices of governing mobility. Moving away from engaging with issues of effectiveness and also moving beyond a traditional understanding between sovereignty and borders, the adoption of a critical border studies approach has allowed us to unpack and uncover the ways in which the EU diffuses and 'produces' power through the deployment of CSDP missions in its neighbourhood and beyond. Our main observation is that despite the fact that EUPOL COPPS is a civilian mission without an executive mandate, which at first sight makes it appear as 'innocent' because its aim is to provide 'technical' assistance and advice, in reality the operationalisation of the mission on the ground has crucial effects not only with regard to the EU's role in the Israeli-Palestinian conflict but also as far as the dynamics of the conflict itself are concerned. EUPOL COPPS' promotion of specific 'ways of doing things', logics and relationships between space and governance results in the creation of a second layer of control of the Palestinian population thus reproducing the Israeli occupation of the Palestinian territories. This is even more problematic especially because the engagement and bordering activities are taking place in a contested state and non-sovereign space, where borders have even more significance as they are deeply linked to final status issues for the resolution of the conflict. Taking into account that the majority of EU CSDP missions deployed so far are civilian, without executive mandates and the fact that most of those civilian missions are deployed in cases where borders are contested (oPts, Georgia, Kosovo, Ukraine, Libya, Moldova, Afghanistan, Mali, Nigeria, Somaliland), the need for further research into their 'bordering' activities becomes even more urgent. With a specific attention to space and space-making, critical border studies offer conceptual tools for EU scholars interested in CSDP missions. The article aspires to encourage more researchers to focus on the bordering effects of such EU-led civilian missions by exploring the reverberations that their operationalisation might have, including for example, with regard to asymmetrical power relations between the conflicting parties concerned.
Ensemble Learning Based Malicious Node Detection in SDN-Based VANETs Background: The architecture of Software Defined Networking (SDN) integrated with Vehicular Ad-hoc Networks (VANETs) is considered a practical method for handling large-scale, dynamic, heterogeneous vehicular networks, since it offers flexibility, programmability, scalability, and a global understanding. However, the integration with VANETs introduces additional security vulnerabilities due to the deployment of a logically centralized control mechanism. These security attacks are classified as internal and external based on the nature of the attacker. The method adopted in this work facilitated the detection of internal position falsification attacks. Objective: This study aimed to investigate the performance of k-NN, SVM, Naïve Bayes, Logistic Regression, and Random Forest machine learning (ML) algorithms in detecting position falsification attacks using the Vehicular Reference Misbehavior (VeReMi) dataset. It also aimed to conduct a comparative analysis of two ensemble classification models, namely voting and stacking for final decision-making. These ensemble classification methods used the ML algorithms cooperatively to achieve improved classification. Methods: The simulations and evaluations were conducted using the Python programming language. VeReMi dataset was selected since it was an application-specific dataset for VANETs environment. Performance evaluation metrics, such as accuracy, precision, recall, F-measure, and prediction time were also used in the comparative studies. Results: This experimental study showed that Random Forest ML algorithm provided the best performance in detecting attacks among the ML algorithms. Voting and stacking were both used to enhance classification accuracy and reduce time required to identify an attack through predictions generated by k-NN, SVM, Naïve Bayes, Logistic Regression, and Random Forest classifiers. Conclusion: In terms of attack detection accuracy, both methods (voting and stacking) achieved the same level of accuracy as Random Forest. However, the detection of attack using stacking could be achieved in roughly less than half the time required by voting ensemble. I. INTRODUCTION Vehicular Ad-hoc networks (VANETs) are reliable communication network that alert drivers about potential collision risks and accidents on the road.VANETs are similar to Mobile Ad-hoc Networks (MANETs), where the nodes are vehicles instead of mobile phones.However, they differ in terms of high node mobility due to the continuous rapid movement of vehicles, frequent changes in network topology, and high bandwidth requirements [1].VANETs offer a range of advantages, including accident reduction, traffic management, congestion notification, fuel station location, and parking spot suggestions.Alongside these benefits, the performance constraints are due to frequent disconnections among vehicles caused by high mobility, dynamic connectivity, and security attacks [2,3].Software Defined Network (SDN) integrated with VANETs provide enhanced flexibility and adaptability as the data plane is regulated and controlled by a separate centralized programmable controller.Its architecture significantly reduces the dependence on hardware setup, as previously, even when a small modification in the vehicular network requires hardware upgrades. 137 As shown in Fig. 
1, SDN-based VANETs architecture comprises three layers, namely the data (forwarding), control (logical), and application (services) planes.The data and control planes are separated using Southbound APIs, while control and application are separated using Northbound.Data is a forwarding plane consisting of moving vehicles and fixed roadside units (RSUs) that communicate wirelessly among vehicles and infrastructure.Control, on the other hand, comprises a software-defined controller (SDNC), serving as the logical brain of the vehicular system.It uses flow tables for packet transmission and decision-making.The application layer is a collection of network-based application services, such as external memory using cloud computing, quality services, routing, and more.It also manages the high mobility and heterogeneous behavior of the network in a programmed and systematic way.However, SDN-based VANETs have a dual nature, and may introduce security risks or new vulnerabilities to the system due to their centralized characteristics [4].Security is a major concern in SDN-based vehicular networking architecture, as the networking system is vulnerable to various attacks, either external or internal.External attacks are carried out by unauthorized individuals and aim to impact the security and communication of the network.They may include phishing, DDoS attacks, and spamming, which can be mitigated using cryptography, a first line of defense.On the other hand, internal attacks are executed by authorized malicious nodes (member nodes of the network) within the data plane.These nodes possess legitimate sources of information, making them challenging to detect using public key-based cryptography method ques.Internal attacks may include position falsification, wrong alert messages, location spoofing, packets dropping, and more.The current integration of machine learning (ML) with VANETs networking is a potential method to analyze the data and protect against these types of attacks. Khatri et al. [5] discussed the challenges of traffic, communication, and safety in VANET systems, as well as the potential solutions offered by ML methods.Several previous investigations presented the use of ML integrated security solutions to enhance the accuracy of attack detection in VANETs [6][7][8].Supervised learning classifiers, such as k-NN, SVM, logistic regression, and decision trees are indispensable for identifying anomalous candidates within typical data sets.These methods also play a significant role in intrusion detection for vehicular systems [7][8][9].Another popular classification strategy entails using multiple ML classifiers in an "ensemble" to boost performance.This combines the strength of various classifiers and bases the conclusions about the future on their predictions.Voting, stacking, bagging, and boosting are some of the recognized ensemble methods exhibiting higher accuracy rates in classification.Internal attacks pose a greater threat to the functioning of the VANETs system, as authorized users may act maliciously, triggering scenarios of fake alert messages, jamming, collisions, and altered routing suggestions.Ensemble learning algorithms are recommended to effectively address such attacks. 
With this motivation, an experimental comparative study was conducted to detect internal attacks in SDN-based VANETs using k-NN, SVM, Naïve Bayes, Logistic Regression, and Random Forest ML classifiers. Subsequently, ensemble learning methods, namely voting and stacking, were used for the detection of internal attacks, leveraging the predictions generated from the ML classifiers. With the specific aim of detecting position falsification attacks in SDN-based VANETs, this study used the open-source Vehicular Reference Misbehavior (VeReMi) dataset, as it is an application-specific dataset for the VANETs environment. All simulations were carried out using the Python programming language. The results showed that the detection of attacks using the stacking ensemble could be achieved in roughly half the time required by the voting ensemble. The following are the key contributions of this study: (1) Evaluated the performance of k-NN, SVM, Naïve Bayes, Logistic Regression, and Random Forest ML algorithms in the detection of position falsification attacks using VeReMi dataset and (2) Conducted a comparative analysis of two ensemble classification models, namely voting and stacking, for final decision-making. These methods cooperatively leveraged the ML algorithms to improve decision-making. The subsequent sections are organized as follows: Section II explains the prevailing methods used for attack detection in SDN-based VANETs, Section III presents the proposed method for attack detection, Section IV provides details of the results, the evaluation is discussed in Section V, and Section VI concludes the study. II. LITERATURE REVIEW The classification of various types of attacks using ML methods in different communication systems has been extensively explored in recent years. With the growth of wireless communication and services, the research community has shown significant interest in designing security solutions for vehicular networks. Sangwan et al. [10] reviewed and categorized various misbehavior attacks in VANETs systems based on architecture, method, node-centricity, and data-centricity, and also discussed the significance of ML methods in the context of misbehavior detection. Ghosh et al. [11] presented a comprehensive survey on misbehaving node detection attacks in VANETs, with an emphasis on the adoption of hybrid methods to achieve improved performance in terms of precision and accuracy. Singh et al. [12] investigated the effect of normalization using various ML classifiers for detecting position falsification attacks through VeReMi dataset. The experimental study showed that normalized SVM outperformed logistic regression, both with and without normalization. Sonker et al. [13] also introduced a method for selecting the optimal algorithm to detect malicious node attacks, with Random Forest achieving a higher accuracy rate of 97.62%. The extraction of features is a crucial step in dataset processing, as it significantly influences the accuracy of the prediction model. So et al. [14] presented a plausibility check score combined with an ML-based scheme for misbehavior detection in VANET, and showed that k-NN and SVM classifiers performed better. Similarly, Gyawali et al. [15] devised a new MDS (misbehavior detection system)-based ML algorithm for detecting position falsification attacks. Grover et al.
[16] proposed ML classifier-based malicious node detection method, using various feature attributes, such as geographical position, speed deviation, acceptance range of RSU, and received signal strength (RSS).The results showed that Random Forest and J48 outperformed other classifiers, including Nave Base, IBK, and Adaboost1.Montenegro et al. [17] designed a trust-based model for position falsification attacks based on the k-NN classifier model.In this model, position and received power coherency were used in detecting misbehaving VANETs nodes.The results showed a maximum accuracy rate of 95.8% for the validation model and 84.4% for the random offset position attack.Sultana et al. [18] proposed ML-based scheme with local and global detection levels for Constant attack Type 1. Kang et al. [19] investigated the role of a deep neural network (DNN)-based intrusion detection scheme, with special emphasis on the in-vehicle system over inter-vehicle communication.Bangui et al. [20] proposed hybrid methods comprising Random Forest classifier and clustering-based algorithm to detect both known and unknown attacks.In comparison to conventional ML classifiers, the hybrid method substantially improved detection efficiency. Multiple ensemble classifiers have also been evaluated for the detection of different types of attacks in VANETs environment.Ercan et al. [21] emphasized the significance of the stacking method for validating newly designed features under various vehicle densities.Ghaleb et al. [22] introduced a collaborative intrusion detection system (MA-CIDS) that used ensemble learning to enhance attack detection efficacy in VANETs models.Using the network security laboratory-knowledge discovery data mining (NSL-KDD) dataset, locally trained and weighted classifiers were thoroughly evaluated.The results showed a false positive rate of 4% and F1 score of 97%.Khan et al. [23] 139 developed an ensemble-based voting classifier for intrusion detection that incorporated multiple base classifiers, showing a 96% accuracy rate for GPS detection compared to standard ML algorithm.Similarly, Azam et al. [24] used majority voting to detect sybil attacks in the VANET system.Standard ML classifiers, including k-NN, Nave Bayes, Decision tree, SVM, and Logical Regression, were used within the majority voting framework, with the proposed scheme achieving a 95% degree of accuracy.Sonker et al. [25] designed an algorithm that combined stacking and the bagging algorithms Random Forest and Xgboost.The proposed scheme attained a 98.44% detection rate of misbehaving nodes.The study showed that ensemble classifier could significantly enhance the scheme performance for detecting misbehaving nodes.A comprehensive framework for detecting internal intrusions is presented in the next section. III. METHOD This section presents the proposed framework for detecting malicious nodes and identifying position falsification attacks using VeReMi dataset.Fig. 2 presents the proposed malicious node detection framework, comprising Levels 1, 2, and 3, using voting and stacking ensemble learning methods. A. 
Dataset Preparation and Analysis The experimental simulations were conducted using the open-source VeReMi dataset, which comprised 5 position falsification attack scenarios, each repeated 5 times. These repetitions provided a comprehensive representation of various real-time scenarios of misbehaving vehicular nodes [26]. The dataset is presented in Table 1. The raw VeReMi dataset was processed for the implementation and evaluation of ML algorithms using the algorithm proposed in [14]. The vehicles communicated by broadcasting their details every 100 ms, including speed in the x-y direction, current location, acceleration, velocities, and more, through Basic Safety Messages (BSM). The type of attack was labeled from 0 to 5, where '0' represented a legitimate vehicle, and '1-5' corresponded to different attack types. In the final dataset, the first two features pertained to the accuracy of the sender data and the movement of the sending vehicles, while the remaining four described their behaviors. The location plausibility check was based on the previous velocity in the BSM message, GPS location, and average acceleration: the expected current position is predicted from the previous BSM using (1) and compared against the reported position.

predicted(x,y) = x(x,y) + Δt · v(x,y) + ½ · a(x,y) · Δt²   (1)

The movement plausibility check determined whether the speed fell below a predefined speed threshold. This was calculated using the total displacement between the previous and current locations, velocity, and time obtained from BSM messages. When the total displacement is 0 but the average velocity is not, the value of the score would be 1; otherwise it would be set at 0. The plausibility score fell within the range of [0,4] and represented the total sum of plausibility scores for the x and y coordinates, indicating misbehaving vehicles. These scores were calculated for the x and y coordinates within the pre-defined confidence intervals of 95% and 99%. To estimate the behavior, the displacement, magnitude, and velocity of vehicles in the x and y directions were computed. Features 3 and 4 represented the differences between the calculated average velocities based on total displacement and time, and the predicted average velocities based on the reported velocity and time in the x and y directions, respectively. Feature 5 represented the magnitude of Features 3 and 4, while Feature 6 was the total displacement between the calculated distance and the predicted total displacement based on average velocity. For advanced attack detection, these extracted features were used to classify legitimate from misbehaving vehicles using various learning-based ML classifiers. C. Machine Learning (ML) based Attack Classification In the attack detection level of the proposed method, the data were cleaned and parsed for training and testing. Multiple classifiers, including k-NN, SVM, Naïve Bayes, Logistic Regression, and Random Forest, were comparatively applied for internal attack detection. 1) k-Nearest Neighbor (k-NN) Classifier k-NN is a well-known supervised learning algorithm for detecting attacker vehicles using Euclidean distance measurement. The Euclidean distance between two vehicles at positions (xi, yi) and (xj, yj) is calculated using (2).

d = √((xi − xj)² + (yi − yj)²)   (2)

2) Logistic Regression Classifier Logistic Regression is a classifier that uses a sigmoid function to separate data points. The decision boundary, where the sigmoid function f(x) equals 0.5, defines the border between the classes. The sigmoid function is given in (3).

f(x) = 1 / (1 + e^(−x))   (3)
3) SVM Classifier Support Vector Machine (SVM) is a supervised learning-based classifier that uses multidimensional hyperplanes to segregate data points. The decision boundary relies on various support vectors that define the extreme maximums and minimums. 4) Naïve Bayes Classifier The Naïve Bayes Classifier is based on Bayes' theorem and predicts the class of an object from its probability. It calculates probabilities using (4), where P(X) is the independent probability of X, P(Y) is the independent probability of Y, P(X|Y) is the probability of X when Y is true, and P(Y|X) is the probability of Y when X is true.

P(Y|X) = P(X|Y) · P(Y) / P(X)   (4)

5) Random Forest Classifier The Random Forest Classifier is a method based on an ensemble of decision trees. To classify the attacks, the dataset was divided into smaller sets, and each decision tree provided a class for the input data. The classifier subsequently processed the information and selected the most voted prediction as the output. It is capable of handling large datasets with high dimensionalities and is less prone to overfitting. Therefore, to evaluate its impact on the attack detection rate, the RF classifier was applied to the processed features extracted from VeReMi dataset. The Random Forest classifier can be seen in Fig. 3. The results were evaluated in terms of performance metrics, such as precision, recall, F1-score, accuracy, and prediction time. Precision is the share of predicted positive attacks (TP and FP) that are true positives. The precision value ranges between 0 and 1, representing the specificity of the model. The equation used to calculate precision is given in (5).

Precision = TP / (TP + FP)   (5)

Recall represents the sensitivity of the model and indicates the true positives that are correctly identified. The formula used to calculate recall is shown in (6). Similar to precision, it also ranges from 0 to 1, with higher values indicating better performance.

Recall = TP / (TP + FN)   (6)

The F-Measure, shown in (7) and also known as the F1-score, defines the relationship between precision and recall, with the aim of striking a balance between them.

F1 = 2 · Precision · Recall / (Precision + Recall)   (7)

Accuracy, also called the classification rate, measures the number of correct predictions over the total predictions in the dataset, and is calculated as the ratio of the total sum of TP and TN over the total predictions, as shown in (8).

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (8)

IV. RESULTS This section presents the results and analysis of the proposed scheme with respect to the detection of different attack types in VeReMi dataset. The performance metrics precision, recall, and F1-score of different classifiers across all specified attacks are shown in Tables 2-4. Table 2 shows that Random Forest performs best in the detection of all attack types with precision values of 0.99, 0.65, 1.0000, 0.95, and 0.98, respectively. While both the majority voting and stacking classification yielded similar results to Random Forest, there was a slight improvement in precision values for types 2 and 16 when using stacking.
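To make the preceding classifier and metric descriptions concrete, the sketch below trains the five base classifiers on a pre-extracted VeReMi feature matrix and reports the metrics defined in (5)-(8). It is a minimal illustration rather than the authors' released code: the file name veremi_features.csv, the column names, the 70/30 split, and the hyperparameters (k = 5 for k-NN, an RBF kernel for SVM, 100 trees for Random Forest) are assumptions, and the labels are collapsed to a binary legitimate-versus-misbehaving split for brevity, whereas the paper evaluates each attack type separately.

```python
# Minimal sketch: train the five base classifiers on VeReMi-style features and
# report precision, recall, F1, and accuracy as in Eqs. (5)-(8).
# Assumptions: "veremi_features.csv" holds the six extracted features plus a
# "label" column (0 = legitimate, non-zero = attack type); hyperparameters are
# illustrative, and labels are binarised for brevity.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

df = pd.read_csv("veremi_features.csv")        # hypothetical pre-processed dataset
X = df.drop(columns=["label"]).values          # six plausibility/behaviour features
y = (df["label"] != 0).astype(int).values      # 1 = misbehaving, 0 = legitimate

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
scaler = StandardScaler().fit(X_tr)            # distance/margin-based models benefit from scaling
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    y_hat = clf.predict(X_te)
    print(f"{name:20s} precision={precision_score(y_te, y_hat):.3f} "
          f"recall={recall_score(y_te, y_hat):.3f} "
          f"F1={f1_score(y_te, y_hat):.3f} "
          f"accuracy={accuracy_score(y_te, y_hat):.3f}")
```

A per-attack-type evaluation, as reported in Tables 2-4, can be obtained by filtering the rows of the hypothetical label column to one attack type at a time before splitting.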
In Table 3, recall value, representing the efficiency of detecting the positive cases, indicated that the stacking-based method outperformed other traditional classifiers.However, all the classifiers exhibited significantly low recall values in detecting Constant Offset Attacks (Type 2).The lower recall rate of these classifiers signified a higher False Negative Rate (FNR), making it challenging to trace malicious nodes with offset errors.Interestingly, the F1-Score for types 1, 4, 8, and 16 were 0.99, 1.00, 0.95, and 0.98, respectively.Similar to recall value, stacking had a lower F1-Score of 0.65, as shown in Table 4. Fig. 5 shows the general performance of the classifiers in terms of accuracy.The stacking classification yielded similar results as Random Forest, outperforming other classifiers with accuracies of 99.16%, 75.73%, 99.78%, 96.48%, and 95.54% for Types 1, 2, 4, 8, and 16, respectively.The performance of majority voting ensemble and stacking classifier was compared at the third level.The best accuracy and other performance metrics were achieved through stacking of multiple classifiers.For majority voting, various classifiers were assigned weight scores, such as [1,1,1,1,5] for Naïve Bayes, SVM, logistic regression, k-NN, and Random Forest, respectively.While both the methods improved the performance matrices, the prediction time of Stacking Classifier was significantly shorter than the prediction time of the majority voting classification method.Table 5 shows the results for different attack type detection.The ROC curve of the final stage of Stacking Classification is presented in Fig. 6, with an area under curve score (AUC) of 0.985.Therefore, the detection rate of the proposed method was significantly high.A multi-level method was proposed to detect the position falsification attack in VeReMi dataset.In the first part of the proposed method, VeReMi dataset was preprocessed, and feature extraction was performed to enhance prediction using plausibility check features [14].The dataset was cleaned and parsed to train and test various ML classifiers.The results obtained were compared to [14,15,17] in terms of precision and recall values, as shown in Table 6.The proposed method showed better performance compared to [14] for all attack types.In comparison to [15,17], the results were significantly comparable, and the work performed better for type 2, incorporating the method used in [15]. 
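As a companion to the comparison above, the following sketch shows one plausible way to realise the two final-level ensembles with scikit-learn: a weighted majority-voting classifier using the weight vector [1, 1, 1, 1, 5] quoted above, and a stacking classifier, together with a rough wall-clock measurement of prediction time. It reuses the feature split from the previous sketch; the use of VotingClassifier/StackingClassifier, the logistic-regression meta-learner, and the 5-fold stacking scheme are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the two final-level ensembles (weighted majority voting vs.
# stacking) with a rough prediction-time comparison. Reuses X_tr, X_te, y_tr,
# y_te from the previous sketch; the meta-learner and cv setting are assumptions.
import time
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier, StackingClassifier
from sklearn.metrics import accuracy_score

base = [
    ("nb", GaussianNB()),
    ("svm", SVC(kernel="rbf")),
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
]

# Weighted majority voting: Random Forest gets weight 5, the others weight 1.
voting = VotingClassifier(estimators=base, voting="hard", weights=[1, 1, 1, 1, 5])

# Stacking: base predictions feed a logistic-regression meta-classifier.
stacking = StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=5)

for name, ens in [("voting", voting), ("stacking", stacking)]:
    ens.fit(X_tr, y_tr)
    t0 = time.perf_counter()
    y_hat = ens.predict(X_te)
    elapsed = time.perf_counter() - t0
    print(f"{name:8s} accuracy={accuracy_score(y_te, y_hat):.3f} "
          f"prediction_time={elapsed * 1000:.1f} ms")
```

With hard voting and a weight of 5 for Random Forest against a combined weight of 4 for the other four classifiers, the voting decision effectively follows Random Forest, which is consistent with the observation above that the voting ensemble matched its accuracy.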
The predictions of the best classifiers were fed into two different ensemble classifiers, namely, majority voting and stacking classifier, to enhance detection and assurance levels in the second part of the algorithm.The accuracy rate of the two classifiers was similar to Random Forest, with a slight improvement in stacking classification.Furthermore, both were evaluated in terms of prediction, with stacking predicting attacks significantly faster than the majority voting classifier for all attack types.This difference was mainly because the majority voting decision simply depended on the feedback from the input classifiers majority voting, while stacking was a weighted voting scheme where the vehicle score was used as a weight for decision-making.In conclusion, SDN-based VANETs were vulnerable to internal attacks that could significantly impact VANETs services and threaten human lives.This study proposed an ensemble learning-based framework for detecting malicious nodes in the network.In the first part, the performance of various ML classifiers, including Naïve Bayes, SVM, logistic regression, k-NN, and Random Forest, was evaluated for detecting various attack scenarios (Attack Types 1, 2, 4, 8, and 16) in VeReMi dataset.The results showed that Random Forest outperformed all other classifiers in terms of performance metrics, such as precision, recall, F-score, and accuracy.For the final level of detection, ensemble learning-based classifiers, namely "majority voting" and "stacking," were comparatively studied.The results showed that stacking classification improved the performance metrics with lower prediction times compared to majority voting classification.Although these results were significantly promising, the proposed algorithm exhibited relatively low performance metrics for type 2. Therefore, future studies were recommended to design a more adaptive framework for its detection. Fig. 3 Fig. 3 Random Forest classifier D. Final Level Detection Two different methods were compared at this level, based on prediction time.The predictions of the classifiers previously mentioned were used to design both majority voting and stacking classifiers.The majority voting is an ensemble classifier that uses the predictions of different classifiers based on assigned weightage.Meanwhile, the stacking ensemble generates a new meta-classifier using the input classifiers and incorporates their characteristics.The entire concept is summarized in the Algorithm 1 and Fig. 4. Fig. 5 Fig. 5 Accuracy of k-NN, Random Forest, majority voting and stacking classifiers
Efficacy of Exercise Rehabilitation Program in Relieving Oxaliplatin Induced Peripheral Neurotoxicity Background: Peripheral neurotoxicity is common in patients with digestive malignancies receiving chemotherapy containing oxaliplatin, and there is still no effective drug to prevent or treat this complication. Methods: Seventy-nine patients receiving chemotherapy containing oxaliplatin were included, and the relationship between chemotherapy regimens, cycles, and cumulative dose of oxaliplatin and peripheral neurotoxicity was analyzed. Patients were divided into two groups of control or intervention. Twenty-eight patients in the control group received routine chemotherapy care, and 51 patients in the intervention group underwent two-week exercise rehabilitation program. Patients’ Functional Assessment of Cancer Therapy/Gynecologic Oncology Group – Neurotoxicity (FACT/GOG-Ntx), functional tests, and Brief Pain Inventory(BPI) scores as well as interference life scores were assessed before intervention and two weeks after the intervention. Results: In the intervention group, 52.9% patients previously exercised regularly. The FOLFOX regimen was more common in peripheral neurotoxicity (73.4%), and the median oxaliplatin cycles for neurotoxicity was 9 (ranging from 1 to 16). The mean cumulative dose of oxaliplatin was 1080.02 ± 185.22 mg, both the cycles and cumulative dose were positively correlated with the occurrence of peripheral neurotoxicity. Compared with control, the scores of FACT/GOG-Ntx, functional tests, and BPI were significantly decreased in the intervention group (p < 0.05). Conclusion: Chemotherapy cycles and cumulative doses were in relation with OIN , and exercise rehabilitation program could effectively alleviate OIN. Efficacy of Exercise Rehabilitation Program in Relieving Oxaliplatin Induced Peripheral Neurotoxicity (Andriamamonjy et al., 2017), and may lead to insufficient dosages of chemotherapy or even cessation of treatment, severely decreasing patients' quality of life and efficacy of treatment (Raphael et al., 2017). There are few clinical studies on the relationship between dose and cycles of chemotherapy and peripheral neurotoxicity, and no effective drug to prevent or treat OIN has been introduced yet (Stefansson and Nygren, 2016). This prospective study analyzed the clinical and behavioral features of OIN and evaluated the efficacy of exercise rehabilitation care in alleviating peripheral neurotoxicity induced by oxaliplatin . Patients Patients received oxaliplatin-based chemotherapy at the Department of Oncology in the First Affiliated Hospital of Soochow University were enrolled in this study. Other main inclusion criteria were presenting different degrees of peripheral neurotoxicity during or after chemotherapy, aging from 18 to 60 years old, having pathologically confirmed malignant tumor, and willingness to participate in the study. Exclusion criteria were having history of neurological diseases or diabetes, receiving other treatments that might cause peripheral neurotoxicity, combined with vertebral metastases and/ or intracranial metastases, and having merging cognitive impairment or mental disorder. Patients could choose to receive routine care or 2 weeks of exercise rehabilitation program on the basis of routine care. Patients' clinical and behavior characteristics were retrospectively reviewed from their medical record. From July 2018 to December 2018, 214 cancer patients receiving chemotherapies of oxalipatin were enrolled. 
Among them, 127 patients presented peripheral neurotoxicity and met our inclusion criteria. With respect to our exclusion criteria, 41 patients were excluded. The remain 86 patients underwent further investigation. Seven patients were also excluded due to incomplete data. Finally, 79 patients were divided into two groups of control (n=28) and intervention (n=51) based on patients' choices ( Figure 1). Patient groups The patients in the control group received regular care: patients with oxaliplatin infusion should keep warm, do not drink cold water, do not touch cold objects, and use central venous catheters to avoid local extravasation of chemotherapy drugs (Sorich et al., 2004). Patients in intervention group received 2 weeks of exercise rehabilitation program on the basis of regular care. In brief, the exercise rehabilitation program is a comprehensive gymnastics and quickly walking training. The program in details was as follows: a. doing comprehensive gymnastics training 3 times every morning and evening (lying down, moving hands and fingers and feet toes in turn 10 times, then stand, slowly raising arms and then stretching out and drawing back fingers 10 times, and then slowly falling arms; at last, placing hands on hips, heeling slowly after falling off the ground 10 times) and b. quick-step walking training(according to the patient's own physical condition, patients could choose to walk quickly from 1 to 3 km). During exercise, if patients felt dizziness, chest tightness, or other physical discomfort, exercising was immediately stop. Peripheral neurotoxicity assessment The 4 th edition of the FACT/GOG-Ntx scale was used to assess the subjective symptoms of neuropathy (McCrary et al., 2017). The FACT/GOG-Ntx scale has a total of 11 items, including sensory, auditory, motor, and dysfunction. The patient scores from 0 to 4 according to his/her degree of symptoms: 0 (not at all), 1 (a little), 2 (occasionally), 3 (often), and 4 (very frequently). The total score was calculated according to the principles in the FACT (Functional Assessment of Cancer Therapy) manual. Functional assessment In order to test the effect of neurotoxicity on the patients' physical function, we calculatedthe time of the 6-hole shirt (the patient puts on a 6-hole shirt, and twist 6 buttons as soon as possible), the fastest time to do fifty steps walking, and the time of the coin test (the patient sat at the table, picked up 4 coins in turn, and put them in the cup one by one) (Griffith et al., 2017). Brief Pain Inventory (BPI) BPI is a questionnaire that effectively assesses patients' subjective pain symptoms. The first 2 questions are to assess the severity of pain 0 (no pain) and 10 (the most imaginable pain). For the third question (using the interference scale), patients are required to score their previous 24-hour life interference because of pain using score 0 (no interference at all) to 10 (all interfered) (El-Fatatry et al., 2018). Statistical analysis In this study, chi-square test was used to compare the general characteristics of the two groups. Covariance analysis was also used to compare the FACT/GOG-Ntx scores, functional test scores, and BPI scores between the two groups. Moreover, chi-square test was used to compare the chemotherapy regimens between control group and intervention group. The t-test was also used to compare the cycles and the cumulative dose of oxaliplatin. According to SPSS (version 19.0) , p < 0.05 was considered as a statistically significant difference. 
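The statistical comparisons above were run in SPSS 19.0; for readers who prefer an open-source route, the sketch below shows broadly equivalent tests in Python: a chi-square test for a categorical baseline characteristic, an independent-samples t-test for the cumulative oxaliplatin dose, and an analysis of covariance on post-intervention FACT/GOG-Ntx scores adjusted for baseline. The file oin_patients.csv and all column names are hypothetical placeholders, and this is an illustrative substitute rather than the authors' analysis scripts.

```python
# Illustrative Python equivalents of the SPSS comparisons described above.
# The CSV file and all column names are hypothetical placeholders.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("oin_patients.csv")   # one row per patient (hypothetical)

# Chi-square test for a categorical baseline characteristic (e.g. gender vs. group)
table = pd.crosstab(df["group"], df["gender"])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"gender vs. group: chi2={chi2:.2f}, p={p_chi:.3f}")

# Independent-samples t-test for cumulative oxaliplatin dose between the groups
ctrl = df.loc[df["group"] == "control", "cumulative_dose_mg"]
intv = df.loc[df["group"] == "intervention", "cumulative_dose_mg"]
t_stat, p_t = stats.ttest_ind(intv, ctrl, equal_var=False)
print(f"cumulative dose: t={t_stat:.2f}, p={p_t:.3f}")

# Analysis of covariance: post-intervention FACT/GOG-Ntx score by group,
# adjusted for the baseline score.
ancova = smf.ols("ntx_post ~ C(group) + ntx_baseline", data=df).fit()
print(ancova.summary().tables[1])
```

The Welch correction (equal_var=False) in the t-test is a choice made here for robustness to unequal group sizes, not something stated in the original analysis.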
The clinical characteristics of the two groups Intriguingly, 32 patients previously exercised regularly, and most of them (n=27, 84.3%) were in the intervention group. In the control group, there were 18 males and 10 females, and the patients' mean age was 52 years (ranging from 41 to 60). In the intervention group, there were 34 males and 17 females, and the patients' mean age was 50 years (ranging from 39 to 60). There was no statistically significant difference between the two groups regarding age, gender, history of surgery, history of radiotherapy, and tumor type (p > 0.05) (Table 1). The relationship between cycles, cumulative dose, and OIN Among the chemotherapy regimens presenting peripheral neurotoxicity, FOLFOX was the most common (73.4%), and there were no significant differences between the two groups in chemotherapy regimens, cycles, and doses (p > 0.05). The median cycle of OIN emergence was 9 (ranging from 1 to 16), and the cumulative dose of oxaliplatin was 1080.02 ± 185.22 mg. The cycles and cumulative dose were positively correlated with the occurrence of peripheral neurotoxicity (Table 2). Exercise rehabilitation and neurological symptoms, functional assessments, and pain scores of OIN After the two-week exercise rehabilitation program in the intervention group, the patients' FACT/GOG-Ntx, functional test, and BPI scores were significantly decreased (p < 0.05). In addition, the exercise rehabilitation program significantly improved patients' walking abilities, normal work abilities, and sleep quality (p < 0.05). However, patients' general activities, mood, relationships with others, and life enjoyment did not change significantly (p > 0.05). In the control group, the FACT/GOG-Ntx, functional test, and BPI scores, as well as pain disturbances, did not improve compared to baseline (p > 0.05) (Table 3: Neurological Symptoms, Functional Assessment, and Pain Scores at Baseline and after 2 Weeks in Both Groups). Discussion Oxaliplatin-induced peripheral neurotoxicity is clinically manifested in both acute and chronic forms, with a higher incidence of its acute form, ranging from 65% to 100% (El-Fatatry et al., 2018). The typical manifestations of acute neurotoxicity are distal limb or perioral dysfunction and a dull sensation in the throat, which are induced by cold stimulation (Banach et al., 2018). These syndromes usually occur before, during, or soon after oxaliplatin infusion, and most of these patients can recover within hours or days (Velasco et al., 2015). In contrast, chronic neurotoxicity caused by oxaliplatin is a dose-cumulative neuropathy which affects up to 80% of patients. It severely affects patients' lives, reduces patients' compliance, and may lead to a reduced dose and/or early withdrawal of chemotherapy, reducing the efficacy of anti-tumor therapy (Ma et al., 2018). A meta-analysis was done on 3,869 patients who received oxaliplatin regimens in 14 studies. The results showed that only six studies evaluated the relationship between neurotoxicity and oxaliplatin cumulative dose, and among them, five revealed that neurotoxicity was associated with cumulative doses (Beijers et al., 2014). This study showed that peripheral neurotoxicity presented after 8 to 9 chemotherapy cycles in patients receiving oxaliplatin-based chemotherapy regimens, with a mean cumulative dose of 1,080.02 ± 185.22 mg. The most frequent regimen for chronic peripheral neurotoxicity was FOLFOX. These findings were consistent with a previous study (Palugulla et al., 2017).
To reduce peripheral neurotoxicity, the IDEA collaborative study pooled six clinical studies and evaluated the non-inferiority of a shorter course (3 months vs. 6 months) of adjuvant FOLFOX/CAPOX. In this study, clinically relevant OIN was significantly reduced in the 3-month group (16.6% vs. 47.7% for FOLFOX and 14.2% vs. 44.9% for CAPOX), but the non-inferiority of the shortened course was not confirmed in the overall population, and 3-year disease-free survival was reduced by 0.9% (Grothey et al., 2018). The mechanism through which oxaliplatin causes peripheral neurotoxicity remains unclear. The possible mechanisms are axonal over-excitation, voltage-gated sodium and/or potassium channel changes leading to repetitive discharge, and oxidative stress (Argyriou et al., 2019; Poupon et al., 2018). Accumulation of oxaliplatin in the dorsal root ganglia may also contribute to neuronal damage (Beijers et al., 2014). Currently, there are no standard drugs or methods to effectively treat OIN. Previous studies have shown that exercise can promote the regeneration of peripheral nerves (Streckmann et al., 2019). Furthermore, exercise could prevent the occurrence of peripheral neuropathy in a neuropathic mouse model (Park et al., 2015), and could delay disease progression in a mouse model of diabetic peripheral neuropathy (Groover et al., 2013). However, no study has evaluated the role of exercise in OIN. Therefore, we speculate that exercise may accelerate peripheral blood circulation, which can accelerate the metabolism of chemotherapy drugs, reduce toxic drug damage, and promote peripheral nerve regeneration. Our exercise rehabilitation program is simple and easy to perform, with the designed movements dominated by the fingers and toes. Basic research has found that sensory axonal damage reduces the amplitude of sensory nerve action potentials, while most motor nerve functions are unaffected. Therefore, the motor function of patients with OIN is largely preserved, and most rehabilitation exercises can be sustained and completed. However, this study also had limitations, since more patients who previously exercised regularly were in the intervention group. Future studies with larger sample sizes are needed to further confirm the efficacy of exercise in relieving OIN, and to discover which types of exercise may be helpful. Nevertheless, we provide some evidence that an exercise rehabilitation program can effectively alleviate OIN.
Answering Range Queries Under Local Differential Privacy Counting the fraction of a population having an input within a specified interval i.e. a \emph{range query}, is a fundamental data analysis primitive. Range queries can also be used to compute other interesting statistics such as \emph{quantiles}, and to build prediction models. However, frequently the data is subject to privacy concerns when it is drawn from individuals, and relates for example to their financial, health, religious or political status. In this paper, we introduce and analyze methods to support range queries under the local variant of differential privacy, an emerging standard for privacy-preserving data analysis. The local model requires that each user releases a noisy view of her private data under a privacy guarantee. While many works address the problem of range queries in the trusted aggregator setting, this problem has not been addressed specifically under untrusted aggregation (local DP) model even though many primitives have been developed recently for estimating a discrete distribution. We describe and analyze two classes of approaches for range queries, based on hierarchical histograms and the Haar wavelet transform. We show that both have strong theoretical accuracy guarantees on variance. In practice, both methods are fast and require minimal computation and communication resources. Our experiments show that the wavelet approach is most accurate in high privacy settings, while the hierarchical approach dominates for weaker privacy requirements. INTRODUCTION All data analysis fundamentally depends on a basic understanding of how the data is distributed. Many sophisticated data analysis and machine learning techniques are built on top of primitives that describe where data points are located, or what is the data density in a given region. That is, we need to provide accurate answers to estimates of the data density at a given point or within a range. Consequently, we need to ensure that such queries can be answered accurately under a variety of data access models. This remains the case when the data is sensitive, comprised of the personal details of many individuals. Here, we still need to answer range queries accurately, but also meet high standards of privacy, typically by ensuring that answers are subject to sufficient bounded perturbations that each individual's data is protected. In this work, we adopt the recently popular model of Local Differential Privacy (LDP). Under LDP, individuals retain control of their own private data, by revealing only randomized transformations of their input. Aggregating the reports of sufficiently many users gives accurate answers, and allows complex analysis and models to be built, while preserving each individual's privacy. * † LDP has risen to prominence in recent years due to its adoption and widespread deployment by major technology companies, including Google [12], Apple [7] and Microsoft [8]. These applications rely at their heart on allowing frequency estimation within large data domains (e.g. the space of all words, or of all URLs). Consequently, strong locally private solutions are known for this point estimation problem. It is therefore surprising to us that no prior work has explicitly addressed the question of range queries under LDP. 
Range queries are perhaps of wider application than point queries, from their inherent utility to describe data, through their immediate uses to address cumulative distribution and quantile queries, up to their ability to instantiate classification and regression models for description and prediction. In this paper, we tackle the question of how to define protocols to answer range queries under strict LDP guarantees. Our main focus throughout is on one-dimensional discrete domains, which provides substantial technical challenges under the strict model of LDP. These ideas naturally adapt to multiple dimensions, as we discuss briefly as an extension. A first approach to answer range queries is to simply pose each point query that constitutes the range. This works tolerably well for short ranges over small domains, but rapidly degenerates for larger inputs. Instead, we adapt ideas from computational geometry, and show how hierarchical and wavelet decompositions can be used to reduce the error. This approach is suggested by prior work in the centralized privacy model, but we find some important differences, and reach different conclusions about the optimal way to include data and set parameters in the local model. In particular, we see that approaches based on hierarchical decomposition and wavelet transformations are both effective and offer similar accuracy for this problem; whereas, naive approaches that directly evaluate range queries via point estimates are inaccurate and frequently unwieldy. Our contributions. In more detail, our contributions are as follows : We provide background on the model of Local Differential Privacy (LDP) and related efforts for range queries in Section 2. Then in Section 3, we summarize the approaches to answering point queries under LDP, which are a building block for our approaches. Our core conceptual contribution (Section 4) comes from proposing and analyzing several different approaches to answering one-dimensional range queries. • We first formalize the problem and show that the simple approach of summing a sequence of point queries entails error (measured as variance) that grows linearly with the length of the range (Section 4.2). • In Section 4.3, we consider hierarchical approaches, generalizing the idea of a binary tree. We show that the variance grows only logarithmically with the length of the range. Post-processing of the noisy observations can remove inconsistencies, and reduces the constants in the variance, allowing an optimal braching factor for the tree to be determined. • The last approach is based on the Discrete Haar wavelet transform (DHT, described in Section 4.6). Here the variance is bounded in terms of the logarithm of the domain size, and no post-processing is needed. The variance bound is similar but not directly comparable to that in the hierarchical approach. Once we have a general method to answer range queries, we can apply it to the special case of prefix queries, and to find order statistics (medians and quantiles). We perform an empirical comparison of our methods in Section 5. Our conclusion is that both the hierarchical and DHT approach are effective for domains of moderate size and upwards. The accuracy is very good when there is a large population of users contributing their (noisy) data. Further, the related costs (computational resources required by each user and the data aggregator, and the amount of information sent by each user) are very low for these methods, making them practical to deploy at scale. 
We show that the wavelet approach is most accurate in high privacy settings, while the hierarchical approach dominates for weaker privacy requirements. We conclude by considering extensions of our scenario, such as multidimensional data (Section 6). RELATED WORK Range queries. Primitives to support range queries are necessary in a variety of data processing scenarios. Exact range queries can be answered by simply scanning the data and counting the number of tuples that fall within the range; faster answers are possible by pre-processing, such as sorting the data (for one-dimensional ranges). Multi-dimensional range queries are addressed by geometric data structures such as k-d trees or quadtrees [24]. As the dimension increases, these methods suffer from the "curse of dimensionality", and it is usually faster to simply scan the data. Various approaches exist to approximately answer range queries. A random sample of the data allows the answer on the sample to be extrapolated; to give an answer with an additive ϵ guarantee requires a sample of size O( 1 ϵ 2 ) [4]. Other data structures, based on histograms or streaming data sketches can answer one-dimensional range queries with the same accuracy guarantee and with a space cost of O(1/ϵ) [4]. However, these methods do not naturally translate to the private setting, since they retain information about a subset of the input tuples exactly, which tends to conflict with formal statistical privacy guarantees. Local Differential Privacy (LDP). The model of local differential privacy has risen in popularity in recent years in theory and in practice as a special case of differential privacy. It has long been observed that local data perturbation methods, epitomized by Randomized Response [28] also meet the definition of Differential Privacy [11]. However, in the last few years, the model of local data perturbation has risen in prominence: initially from a theoretical interest [9], but subsequently from a practical perspective [12]. A substantial amount of effort has been put into the question of collecting simple popularity statistics, by scaling randomized response to handle a larger domain of possibilities [1,7,8,27]. The current state of the art solutions involve a combination of ideas from data transformation, sketching and hash projections to reduce the communication cost for each user, and computational effort for the data aggregator to put the information together [1,27]. Building on this, there has been substantial effort to solve a variety of problems in the local model, including: language modeling and text prediction [3]; higher order and marginal statistics [5,13,30]; social network and graph modeling [14,23]; and various machine learning, recommendation and model building tasks [9,19,25,26,31] However, among this collection of work, we are not aware of any work that directly or indirectly addresses the question of allowing range queries to be answered in the strict local model, where no interaction is allowed between users and aggregator. Private Range queries. In the centralized model of privacy, there has been more extensive consideration of range queries. Part of our contribution is to show how several of these ideas can be translated to the local model, then to provide customized analysis for the resulting algorithms. Much early work on differentially private histograms considered range queries as a natural target [10,15]. However, simply summing up histogram entries leads to large errors for long range queries. Xiao et al. 
[29] considered adding noise in the Haar wavelet domain, while Hay et al. [16] formalized the approach of keeping a hierarchical representation of data. Both approaches promise error that scales only logarithmically with the length of the range. These results were refined by Qardaji et al. [21], who compared the two approaches and optimized parameter settings. The conclusion there was that a hierarchical approach with moderate fan-out (of 16) was preferable, more than halving the error from the Haar approach. A parallel line of work considered two-dimensional range queries, introducing the notion of private spatial decompositions based on k-d trees and quadtrees [6]. Subsequent work argued that shallow hierarchical structures were often preferable, with only a few levels of refinement [22]. MODEL AND PRELIMINARIES 3.1 Local Differential Privacy Initial work on differential privacy assumed the presence of a trusted aggregator, who curates all the private information of individuals, and releases information through a perturbation algorithm. In practice, individuals may be reluctant to share private information with a data aggregator. The local variant of differential privacy instead captures the case when each user i only has their local view of the dataset S (typically, they only know their own data point z_i) and she independently releases information about her input through an instance of a DP algorithm. This model has received widespread industrial adoption, including by Google [12,13], Apple [7], Microsoft [8] and Snap [20] for tasks like heavy hitter identification (e.g., most used emojis), training word prediction models, anomaly detection, and measuring app usage. In the simplest setting, we assume each participant i ∈ [N] has an input z_i drawn from some global discrete or continuous distribution θ over a domain Z. We do not assume that users share any trust relationship with each other, and so do not communicate amongst themselves. Implicitly, there is also an (untrusted) aggregator interested in estimating some statistics over the private dataset {z_1, ..., z_N}. Formal definition of Local Differential Privacy (LDP) [18]. A randomized function F is ϵ-locally differentially private if for all possible pairs of inputs z_i, z'_i ∈ Z and for every possible output tuple O in the range of F: Pr[F(z_i) = O] ≤ e^ϵ · Pr[F(z'_i) = O]. This is a local instantiation of differential privacy [11], where the perturbation mechanism F is applied to each data point independently. In contrast to the centralized model, perturbation under LDP happens at the user's end. Point Queries and Frequency Oracles A basic question in the LDP model is to answer point queries on the distribution: to estimate the frequency of any given element z from the domain Z. Answering such queries forms the underpinning for a variety of applications such as population surveys, machine learning, spatial analysis and, as we shall see, our objective of quantiles and range queries. In the point query problem, each user i holds a private item z_i drawn from a public set Z, |Z| = D, according to an unknown common discrete distribution θ. That is, θ_z is the probability that a randomly sampled input element is equal to z ∈ Z. The goal is to provide a protocol in the LDP model (i.e. steps that each user and the aggregator should follow) so that the aggregator can compute an estimate of θ that is as accurate as possible. Solutions for this problem are referred to as providing a frequency oracle. Several variant constructions of frequency oracles have been described in recent years; a sketch of the common report-and-debias pipeline is given below, before we describe specific mechanisms.
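The following minimal sketch illustrates this pipeline with the simplest unary-encoding oracle: each user one-hot encodes her item and applies randomized response to every bit, and the aggregator debiases the observed sums. It is only an illustration of the interface that the mechanisms below share (OUE, OLH and HRR all improve on it); the per-bit flip probability uses ϵ/2 so that the whole D-bit report, in which neighbouring inputs differ in two positions, satisfies ϵ-LDP. Function names are ours.

```python
import numpy as np

def user_report(z, D, eps, rng):
    """User side: one-hot encode z in [D] and flip each bit independently.
    Keeping a bit with probability e^(eps/2) / (1 + e^(eps/2)) makes the full
    D-bit report eps-LDP, since two neighbouring inputs differ in two bits."""
    p_keep = np.exp(eps / 2) / (1 + np.exp(eps / 2))
    bits = np.zeros(D, dtype=np.int8)
    bits[z] = 1
    flip = rng.random(D) >= p_keep
    return np.where(flip, 1 - bits, bits)

def estimate_distribution(reports, eps):
    """Aggregator side: sum the noisy bits and apply the usual bias correction
    to obtain unbiased frequency estimates for every item in the domain."""
    reports = np.asarray(reports)
    N = reports.shape[0]
    p_keep = np.exp(eps / 2) / (1 + np.exp(eps / 2))
    observed = reports.sum(axis=0).astype(float)
    counts = (observed - N * (1 - p_keep)) / (2 * p_keep - 1)
    return counts / N
```

For example, with rng = np.random.default_rng(0), calling user_report(z, D, eps, rng) for every user and passing the stacked reports to estimate_distribution recovers an unbiased estimate of θ.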
In each case, the users perturb their input locally via tools such as linear transformation and random sampling, and send the result to the aggregator. These noisy reports are aggregated and an appropriate bias correction is applied to them to reconstruct the frequency for each item in Z. The error in estimation is generally quantified by the mean squared error. We know that the mean squared error can be decomposed into (squared) bias and variance. Often estimators for these mechanisms are unbiased and have the same variance V_F for all items in the input domain. Hence, the variance can be used interchangeably with squared error, after scaling. The mechanisms vary based on their computation and communication costs, and the accuracy (variance) obtained. In most cases, the variance is proportional to 1/(N (e^ϵ − 1)^2). Optimized Unary Encoding (OUE). A classical approach to releasing a single bit of data with a privacy guarantee is Randomized Response (RR), due to Warner [28]. Here, we either report the true value of the input or its complement with appropriately chosen probabilities. To generalize to inputs from larger domains, we represent v_i as the sparse binary vector e_{v_i} (where e_j[j] = 1 and all other entries are 0), and randomly flip each bit of e_{v_i} to obtain the (non-sparse) binary vector o_i. Naively, this corresponds to applying one-bit randomized response [28] to each bit independently. Wang et al. [27] proposed a variant of this scheme that reduces the variance for larger D. Perturbation: Each user i flips each bit at each location j ∈ [D] of e_i using the following distribution: a 1 bit is reported as 1 with probability 1/2, while a 0 bit is reported as 1 with probability 1/(e^ϵ + 1). Finally, user i sends the perturbed input o_i to the aggregator. Variance: The resulting estimator is unbiased, with per-item variance 4e^ϵ/(N (e^ϵ − 1)^2). Optimized Local Hashing (OLH) [27] instead has each user hash her input into a small domain of size g and perturb the hashed value; setting g = e^ϵ + 1 minimizes the variance. OLH has the same variance as OUE and is more economical on communication. However, a major downside is that it is compute intensive in terms of the decoding time at the aggregator's side, which is prohibitive for very large dimensions (say, for D above tens of thousands), since the time cost is proportional to O(N D). Hadamard Randomized Response (HRR) [5,19]. The Discrete Fourier (Hadamard) transform is described by an orthogonal, symmetric matrix ϕ of dimension D × D (where D is a power of 2). Each entry is ϕ[i][j] = (1/√D) · (−1)^⟨i,j⟩, where ⟨i,j⟩ is the number of 1's that i and j agree on in their binary representation. Perturbation: each user samples an index j ∈ [D] uniformly at random, computes the corresponding (unnormalized, ±1) Hadamard coefficient of her input, and reports it truthfully with probability p = e^ϵ/(e^ϵ + 1), negated otherwise, together with the index j. Aggregation: Consider each report from each user. With probability p, the report is the true value of the coefficient; with probability 1 − p, we receive its negation. Hence, we should divide the reported value by 2p − 1 to obtain an unbiased random variable whose expectation is the correct value. The aggregator can then compute the observed sum of each perturbed coefficient j as O_j. An unbiased estimation of the jth Hadamard coefficient c_j (with the 1/√D factor restored) is obtained by rescaling O_j to account for the fact that only about N/D users report on coefficient j. Therefore, the aggregator can compute an unbiased estimator for each coefficient, and then apply the inverse transform to produce the estimated distribution. Variance: The variance of each user report is given by the squared error of our unbiased estimator: after dividing by 2p − 1, the per-report variance is 1/(2p − 1)^2 − 1 = 4e^ϵ/(e^ϵ − 1)^2. There are N total reports, each of which samples one of D coefficients at random.
Observing that the estimate of any frequency in the original domain is a linear combination of Hadamard coefficients with unit Euclidean norm, we can find an expression for the variance of each reconstructed frequency estimate. This method achieves a good compromise between accuracy and communication since each user transmits only ⌈log_2 D⌉ + 1 bits, to describe the index j and the perturbed coefficient, respectively. Also, the aggregator can reconstruct the frequencies in the original domain by computing the estimated coefficients and then inverting the Hadamard transform with O(N + D log D) operations, versus O(N D) for OLH. Thus, we have three representative mechanisms to implement a frequency oracle. Each one provides ϵ-LDP, by considering the probability of seeing the same output from the user if her input were to change. There are other frequency oracle mechanisms offering similar or weaker variance bounds (e.g. [8,13]) and resource trade-offs, but we do not include them for brevity. RANGE QUERIES 4.1 Problem Definition We next formally define the range queries that we would like to support. As in Section 3.2, we assume N non-colluding individuals, each with a private item z_i ∈ [D]. A range query asks for the fraction of the population falling in an interval: R_{[a,b]} = (1/N) Σ_{i=1}^{N} I_{a ≤ z_i ≤ b}, where I_p is a binary variable that takes the value 1 if the predicate p is true and 0 otherwise. Let R̂ be an estimation of an interval query R of length r computed using a mechanism F. Then the quality of F is measured by the squared error (R̂ − R)^2. Flat Solutions. A first approach is to simply sum up estimated frequencies for every item in the range, where estimates are provided by an ϵ-LDP frequency oracle: R̂_{[a,b]} = Σ_{i=a}^{b} θ̂_i. We denote this approach (instantiated by a choice of frequency oracle F) as a flat algorithm. FACT 1. For any range query R of length r answered using a flat method with frequency oracle F, Var[R̂ − R] = r V_F. Note that the variance grows linearly with the interval size, which can be as large as D V_F. LEMMA 4.2. The average worst case squared error over evaluating all possible range queries with a flat method is (D + 2) V_F / 3 = Θ(D V_F). PROOF. There are D − r + 1 queries of length r, each with variance r V_F. Hence the average over all D(D + 1)/2 ranges is Σ_{r=1}^{D} r (D − r + 1) V_F / (D(D + 1)/2) = (D + 2) V_F / 3. □ Hierarchical Solutions We can view the problem of answering range queries in terms of representing the frequency distribution via some collection of histograms, and producing the estimate by combining information from bins in the histograms. The "flat" approach instantiates this, and keeps one bin for each individual element. This is necessary in order to answer range queries of length 1 (i.e. point queries). However, as observed above, if we have access only to point queries, then the error grows in proportion to the length of the range. It is therefore natural to keep additional bins over subranges of the data. A classical approach is to impose a hierarchy on the domain items in such a way that the frequency of each item contributes to multiple bins of varying granularity. With such structure in place, we can answer a given query by adding counts from a relatively small number of bins. There are many hierarchical methods possible to compute histograms. Several of these have been tried in the context of centralized DP. But to the best of our knowledge, the methods that work best in centralized DP tend to rely on a complete view of the distribution, or would require multiple interactions between users and aggregator when translated to the local model. This motivates us to choose simpler yet effective strategies for histogram construction in the LDP setting; a sketch of the baseline flat estimator is given below for contrast.
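For contrast with what follows, the flat estimator is a one-line computation over the output of any frequency oracle (such as the sketch in Section 3); per Fact 1, its variance grows linearly in the range length. The helper name is ours.

```python
def flat_range_estimate(theta_hat, a, b):
    """Answer R_[a,b] by summing per-item frequency estimates from an
    eps-LDP frequency oracle; each of the r = b - a + 1 terms contributes
    variance V_F, so the total variance is r * V_F (Fact 1)."""
    return float(sum(theta_hat[a:b + 1]))
```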
We start with the standard notion of B-adic intervals and a useful property of B-adic decompositions. An interval is B-adic if its length is a power of B and it starts at an integer multiple of its length. The B-adic decomposition can be understood as organizing the domain under a complete B-ary tree where each node corresponds to a bin of a unique B-adic range. The root holds the entire range and the leaves hold the counts for unit sized intervals. A range query can be answered by a walk over the tree similar to the standard pre-order traversal, and therefore a range query can be answered by combining at most 2(B − 1) nodes from each level of the tree. Hierarchical Histograms (HH) Now we describe our framework for computing hierarchical histograms. All algorithms follow a similar structure but differ on the perturbation primitive F they use. Input transformation: user i locally arranges the input z_i ∈ [D] in the form of a full B-ary tree of height h. Then z_i defines a unique path from a leaf to the root with a weight of 1 attached to each node on the path, and zero elsewhere. Figure 2 shows an example: in each local histogram, the nodes in the path from leaf to root are shaded in red and have a weight of 1 on each node. Perturbation: user i samples a level l ∈ [h] with probability p_l. There are B^l nodes at this level, with exactly one node of weight one and the rest zero. Hence, we can apply one of the mechanisms from Section 3. User i perturbs this vector using some frequency oracle F and sends the perturbed information to the aggregator along with the level id l. Aggregation: The aggregator builds an empty tree with the same dimensions and adds the (unbiased) contribution from each user to the corresponding nodes, to estimate the fraction of the input at each node. Range queries are answered by aggregating the nodes from the B-adic decomposition of the range. Key difference from the centralized case: Hierarchical histograms have been proposed and evaluated in the centralized case. However, the key difference here comes from how we generate information about each level. In the centralized case, the norm is to split the "error budget" ϵ into h pieces, and report the count of users in each node; in contrast, we have each user sample a single level, and the aggregator estimates the fraction of users in each node. The reason for sampling instead of splitting emerges from the analysis: splitting would lead to an error proportional to h^2, whereas sampling gives an error which is at most proportional to h. Because sampling introduces some variation into the number of users reporting at each level, we work in terms of fractions rather than counts; this is important for the subsequent post-processing step. In summary, the approach of hierarchical decomposition extends to LDP by observing the fact that it is a linear transformation of the original input domain. This means that adding information from the hierarchical decomposition of each individual's input yields the decomposition of the entire population. Next we evaluate the error in estimation using the hierarchical methods. Error behavior for Hierarchical Histograms. We begin by showing that the overall variance can be expressed in terms of the variance of the frequency oracle used, V_F. In what follows, we denote hierarchical histograms aggregated with fanout B as HH_B. THEOREM 4.3. When answering a range query of length r using a primitive F, the worst case variance of HH_B is at most 2(B − 1) α h V_F, where α = ⌈log_B r⌉ + 1 and h is the tree height, i.e. O((B − 1) log_B(r) log_B(D) V_F).
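Before the proof, the following sketch shows how the aggregator decomposes a query into B-adic blocks; the (level, index) convention is ours, and each level contributes at most 2(B − 1) blocks.

```python
def b_adic_decomposition(a, b, B):
    """Greedily cover the integer range [a, b] (inclusive) with disjoint B-adic
    blocks: each block has length B**k and starts at a multiple of B**k.
    Returns (k, index) pairs; the aggregator sums the matching node estimates."""
    blocks = []
    while a <= b:
        k = 0
        # grow the block while it stays aligned and still fits inside [a, b]
        while a % (B ** (k + 1)) == 0 and a + B ** (k + 1) - 1 <= b:
            k += 1
        blocks.append((k, a // (B ** k)))
        a += B ** k
    return blocks

# e.g. b_adic_decomposition(3, 12, 2) == [(0, 3), (2, 1), (2, 2), (0, 12)]
```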
PROOF. Recall that all the methods we consider have the same variance bound, proportional to 1/(N (e^ϵ − 1)^2), with N denoting the number of users contributing to the mechanism. Importantly, this does not depend on the domain size D, and so we can write the variance as c_F/N, where c_F is a constant for method F that depends on ϵ. This means that once we fix the method F, the variance V_l for any node at level l will be the same, and is determined by N_l, the number of users reporting on level l: V_l = c_F/N_l. The range query R_{[a,b]} of length r is decomposed into at most 2(B − 1) nodes at each level, for α = ⌈log_B r⌉ + 1 levels (from the leaves upwards). So we can bound the total variance V_r in our estimate by V_r ≤ 2(B − 1) Σ_l V_l, where the sum runs over the α levels touched. In the worst case, α = h, and we can minimize this bound by a uniform level sampling procedure: choosing p_l = 1/h for every level is optimal. To see this, we use the Lagrange multiplier technique, defining a new function L with a new variable λ; setting the gradient of L to zero shows that all the p_l must be equal, so that N_l = N/h and V_l = h V_F. Substituting back gives the worst-case bound V_r ≤ 2(B − 1) log_B(r) log_B(D) V_F, up to rounding of the logarithms. (1) □ Hierarchical versus flat methods. The benefit of the HH approach over the baseline flat method depends on the factor (2B − 1)hα versus the quantity r. Note that h = log_B D + O(1) and α = log_B r + O(1), so we obtain an improvement over flat methods when r > 2B log_B^2 D, for example. When D is very small, this may not be achieved: for D = 64 and B = 2, this condition yields r > 128 > D. However, for larger D, say D = 2^16 and B = 2, we obtain r > 1024, which corresponds to approximately 1.5% of the range. A corresponding bound on the average squared error over all range queries follows by summing over all range lengths r (for a given length r, there are D − r + 1 possible ranges) and bounding each of the two resulting components separately, with Stirling's approximation handling the factorial-style terms that arise. Key difference from the centralized case: Similar looking bounds are known in the centralized case, for example due to Qardaji et al. [21], but with some key differences. There, the bound (simplified) is proportional to (B − 1)h^3 V_F rather than the (B − 1)h^2 V_F we see here. Note however that in the centralized case V_F scales proportionally to 1/N^2 rather than 1/N in the local case: a necessary cost to provide local privacy guarantees. Optimal branching factor for HH_B. In general, increasing the fanout has two consequences under our algorithmic framework. A large B reduces the tree height, which increases the accuracy of estimation per node, since a larger population is allocated to each level. But this also means that we may require more nodes at each level to evaluate a query, which tends to increase the total error incurred during evaluation. We would like to find a branching factor that balances these two effects. We use the expression for the variance in (1) to find the optimal branching factor for a given D. We compute the gradient of the function 2(B − 1) log_B(r) log_B(D) with respect to B and seek a B such that the derivative ∇ = 0. The numerical solution is (approximately) B = 4.922. Hence we minimize the variance by choosing B to be 4 or 5. This is again in contrast to the centralized case, where the optimal branching factor is determined to be approximately 16 [21]. Post-processing for consistency There is some redundancy in the information materialized by the HH approach: we obtain estimates for the weight of each internal node, as well as its child nodes, which should sum to the parent weight. Hence, the accuracy of the HH framework can be further improved by finding the least squares solution for the weight of each node taking into account all the information we have about it, i.e. for each node v, we approximate the (fractional) frequency f(v) with f̃(v) such that ||f̃(v) − f(v)||_2 is minimized subject to the consistency constraints.
We can invoke the Gauss-Markov theorem since the variances of all our estimates are equal, and hence the least squares solution is the best linear unbiased estimator. First, consider the simple case when H is a single level tree with B leaves. Then we have H^T H = 1_{B×B} + I_B, where 1_{B×B} denotes the B × B matrix of all ones. We can verify that (H^T H)^{−1} = ((B + 1) I_B − 1_{B×B})/(B + 1). From this we can quickly read off the variance of any range query. For a point query, the associated variance is simply B/(B + 1) V_F, while for a query of length r, the variance is (rB − r(r − 1)) V_F/(B + 1). Observe that the variance for the whole range r = B is just B/(B + 1) V_F, and that the maximum variance is for a range of just under half the length, r = (B + 1)/2, which gives a bound of (B + 1)^2 V_F/(4(B + 1)) = (B + 1) V_F/4. The same approach can be used for hierarchies with more than one level. However, while there is considerable structure to be studied here, there is no simple closed form, and forming (H^T H)^{−1} can be inconvenient for large D. Instead, for each level, we can apply the argument above between the noisy counts for any node and its B children. This shows that if we applied this estimation procedure to just these counts, we would obtain a bound of B/(B + 1) V_F for any node (parent or child), and at most (B + 1) V_F/4 for any sum of node counts. Therefore, if we find the optimal least squares estimates, their (minimal) variance can be at most this much. □ Consequently, after this constrained inference, the error variance at each node is at most B V_F/(B + 1). It is possible to give a tighter bound for nodes higher up in the hierarchy: the variance bound improves to (B^i / Σ_{j=0}^{i} B^j) V_F at level i (counting up from level 1, the leaves). This approaches (B − 1)/B from above; however, we adopt the simpler B/(B + 1) bound for clarity. This modified variance affects the worst case error, and hence our calculation of an optimal branching factor. From the above proof, we can obtain a new bound on the worst case error of (B + 1) V_F/2 for every level touched by the query (that is, (B + 1) V_F/4 for each of the left and right fringes of the query). This equates to (B + 1) V_F log_B(r) log_B(D)/2 total variance. (2) Differentiating with respect to B and setting the derivative to zero, the value that minimizes this expression is B ≈ 9.18, larger than without consistency. This implies a constant factor reduction in the variance of range queries from post-processing. Specifically, if we pick B = 8 (a power of 2), then this bound on variance is 9 V_F log_2(r) log_2(D)/(2 log_2^2 8) = V_F log_2(r) log_2(D)/2, compared to (7/4) V_F log_2(r) log_2(D) for HH_4 without consistency. We confirm this reduction in error experimentally in Section 5. We can make use of the structure of the hierarchy to provide a simple linear-time procedure to compute optimal estimates. This approach was introduced in the centralized case by Hay et al. [16]. Their efficient two-stage process can be translated to the local model. Stage 1: Weighted Averaging: Traversing the tree bottom up, we use a weighted average of a node's original reconstructed frequency f(.) and the sum of its children's (adjusted) weights to update the node's reconstructed weight. For a non-leaf node v, its adjusted weight f̃(v) is a weighted combination of f(v) and Σ_{u ∈ child(v)} f̃(u), with the weights chosen to minimize the variance of the combination. Stage 2: Mean Consistency: This step makes sure that for each node, its weight is equal to the sum of its children's values. This is done by dividing the difference between the parent's weight and the children's total weight equally among the children: processing the tree top down, for a non-root node v we set f̃(v) ← f̃(v) + (1/B) (f̃(p(v)) − Σ_{u ∈ child(p(v))} f̃(u)), where f̃(p(v)) is the weight of v's parent after weighted averaging.
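A compact sketch of this two-stage post-processing is given below. It assumes a complete B-ary tree of estimated fractions stored level by level (level 0 is the root; node v at level l has children Bv, ..., Bv+B−1 at level l+1); the averaging coefficients follow the equal-variance rule described above and should be read as illustrative.

```python
def enforce_consistency(est, B):
    """est[l] holds the B**l estimated fractions at level l (level 0 = root).
    Stage 1 averages each node with the sum of its children (bottom up);
    Stage 2 redistributes the parent/children discrepancy equally (top down)."""
    h = len(est) - 1                       # leaf level index
    for l in range(h - 1, -1, -1):         # Stage 1: weighted averaging
        for v in range(len(est[l])):
            child_sum = sum(est[l + 1][B * v: B * v + B])
            k = h - l + 1                  # height of v, with leaves at height 1
            w = (B ** k - B ** (k - 1)) / (B ** k - 1)
            est[l][v] = w * est[l][v] + (1 - w) * child_sum
    for l in range(h):                     # Stage 2: mean consistency
        for v in range(len(est[l])):
            child_sum = sum(est[l + 1][B * v: B * v + B])
            diff = (est[l][v] - child_sum) / B
            for u in range(B * v, B * v + B):
                est[l + 1][u] += diff
    return est
```

After this pass, every node equals the sum of its children, so a range query gives the same answer whether it is evaluated from leaves or from coarser nodes.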
The values of f̃ achieve the minimum L_2 solution. Finally, we note that the cost of this post-processing is relatively low for the aggregator: each of the two steps can be computed in a linear pass over the tree structure. A useful property of finding the least squares solution is that it enforces the consistency property: the final estimate for each node is equal to the sum of its children. Thus, it does not matter how we try to answer a range query (just adding up leaves, or subtracting some counts from others): we will obtain the same result. Key difference from the centralized case. Our post-processing is influenced by a sequence of papers in the centralized case. However, we do observe some important points of departure. First, because users sample levels, we work with the distribution of frequencies across each level, rather than counts, as the counts are not guaranteed to sum up exactly. Secondly, our analysis method allows us to give an upper bound on the variance at every level in the tree; prior work gave a mixture of upper and lower bounds on variances. This, in conjunction with our bound on covariances, allows us to give a tighter bound on the variance for a range query, and to find a bound on the optimal branching factor after taking into account the post-processing, which has not been done previously. Discrete Haar Transform (DHT) The Discrete Haar Transform (DHT) provides an alternative approach to summarizing data for the purpose of answering range queries. DHT is a popular data synopsis tool that relies on a hierarchical (binary tree-based) decomposition of the data. DHT can be understood as performing recursive pairwise averaging and differencing of our data at different granularities, as opposed to the HH approach, which gathers sums of values. The method imposes a full binary tree structure over the domain, where h(v) is the height of node v, counting up from the leaves (level 0). The Haar coefficient c_v for a node v is computed as c_v = (C_l − C_r)/2^{h(v)/2}, where C_l, C_r are the sums of counts of all leaves in the left and right subtrees of v. In the local case, when z_i represents a leaf of the tree, there is exactly one non-zero Haar coefficient at each level l, with value ±1/2^{l/2}. The DHT can also be represented as a matrix H_D of dimension D × D (where D is a power of 2), with each row j encoding the Haar coefficients for item j ∈ [D]; a point query for item j is then the inner product of row j with the vector of coefficients. Answering a range query. A similar fact holds for range queries. We can answer any range query by first summing all rows of H_D that correspond to leaf nodes within the range, then taking the inner product of this with the coefficient vector. We can observe that for an internal node in the binary tree, if it is fully contained in (or fully excluded from) the range, then it contributes zero to the sum. Hence, we only need coefficients corresponding to nodes that are cut by the range query: there are at most 2h of these. The main benefit of DHT comes from the fact that all coefficients are independent, and there is no redundant information. Therefore we obtain a certain amount of consistency by design: any set of Haar coefficients uniquely determines an input vector, and there is no need to apply the post-processing step described in Section 4.5. Our algorithmic framework. For convenience, we rescale each coefficient reported by a user at a non-root node to be from {−1, 0, 1}, and apply the scaling factor later in the procedure.
Similar to the HH approach, each user samples a level l with probability p_l and perturbs the coefficients from that level using a suitable perturbation primitive. Each user then reports her noisy coefficients along with the level. The aggregator, after accepting all reports, prepares a similar tree and applies the correction to make an unbiased estimation of each Haar coefficient. The aggregator can evaluate range queries using the (unbiased but still noisy) coefficients. Perturbing Haar coefficients. As with hierarchical histogram methods, where each level is a sparse (one hot) vector, there are several choices for how to release information about the sampled level in the Haar tree. The only difference is that previously the non-zero entry in the level was always a 1 value; for Haar, it can be a −1 or a 1. There are various straightforward ways to adapt the methods that we have already (see, for example, [2,8,19]). We choose to adapt the Hadamard Randomized Response (HRR) method, described in Section 3.2. First, this is convenient: it immediately works for negative valued weights without any modification. But it also minimizes the communication effort for the users: they summarize their whole level with a single bit (plus the description of the level and Hadamard coefficient chosen). We have confirmed this choice empirically in calibration experiments (omitted for brevity): HRR is consistent with other choices in terms of accuracy, and so is preferred for its convenience and compactness. Recall that the (scaled) Hadamard transform of a sparse binary vector e_i is equivalent to selecting the ith row/column from the Hadamard matrix. When we transform −e_i, the Hadamard coefficients remain binary, with their signs negated. Hence we use HRR for perturbing levelwise Haar coefficients. At the root level, where there is a single coefficient, this is equivalent to 1-bit RR. The 0th coefficient can be hardcoded to N/√D since it does not require perturbation. We refer to this algorithm as HaarHRR. Error behavior for HaarHRR. As mentioned before, we answer an arbitrary query of length r by taking a weighted combination of at most 2h coefficients. A coefficient at node u of level l contributes to the answer if and only if the subtree of u partially overlaps the range. The 0th coefficient is assigned the weight r. Let O_l^L (O_l^R) be the size of the overlap of the left (right) subtree of u with the range. Using the reconstructed coefficients, we evaluate a query to produce the answer R̂ as a weighted sum of the ĉ_l over the contributing nodes, where a cut node at level l receives weight proportional to O_l^L − O_l^R and ĉ_l is an unbiased estimation of the coefficient c_l at level l. In the worst case, the absolute weight |O_l^L − O_l^R| is 2^l. We can analyze the corresponding variance, V_r, by summing the per-coefficient variances weighted by the squares of these weights; here, V_F is the variance associated with the HRR frequency oracle. As in the hierarchical case, the optimal choice is to set p_l = 1/h (i.e. we sample a level uniformly), where h = log_2(D). Then we obtain a bound of V_r ≤ log_2^2(D) V_F / 2. (3) It is instructive to compare this expression with the bounds obtained for the hierarchical methods. Recall that, after post-processing for consistency, we found that the variance for answering range queries with HH_8, based on optimizing the branching factor, is log_2(r) log_2(D) V_F/2 (from (2)). That is, for long range queries where r is close to D, (3) will be close to (2). Consequently, we expect both methods to be competitive, and will use empirical comparison to investigate their behavior in practice; a sketch of the aggregator-side evaluation of a range query from noisy Haar coefficients follows.
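The following sketch uses one concrete indexing convention (ours): coeffs[(k, j)] is the estimated coefficient of the j-th node at height k, whose subtree covers leaves [j·2^k, (j+1)·2^k − 1], and c0 is the estimated 0th (average) coefficient. Fully contained or excluded nodes contribute zero, so only the two boundary nodes per level need to be examined.

```python
import math

def haar_range_estimate(c0, coeffs, a, b, D):
    """Evaluate the range [a, b] from estimated Haar coefficients: the 0th
    coefficient gets weight r/sqrt(D); a cut node at height k gets weight
    (left overlap - right overlap) / 2**(k/2)."""
    total = (b - a + 1) * c0 / math.sqrt(D)
    h = int(math.log2(D))
    for k in range(1, h + 1):
        size = 2 ** k
        for j in {a // size, b // size}:       # only boundary nodes can be cut
            lo, mid, hi = j * size, j * size + size // 2, (j + 1) * size - 1
            left = max(0, min(b, mid - 1) - max(a, lo) + 1)
            right = max(0, min(b, hi) - max(a, mid) + 1)
            if 0 < left + right < size:        # node is cut by the range
                total += (left - right) / 2 ** (k / 2) * coeffs[(k, j)]
    return total
```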
Finally, observe that since this bound does not depend on the range size itself, the average error across all possible range queries is also bounded by (3). Key difference from the centralized case. The technique of perturbing Haar coefficients to answer differentially private range queries was proposed and studied in the centralized case under the name "privelets" [29]. Subsequent work argued that more involved centralized algorithms could obtain better accuracy. We will see in the experimental section that HaarHRR is among our best performing methods. Hence, our contribution in this work is to reintroduce the DHT as a useful tool in local privacy. 8 Prefix and Quantile Queries Prefix queries form an important class of range queries, where the start of the range is fixed to be the first point in the domain. The methods we have developed allow prefix queries to be answered as a special case. Note that for hierarchical and DHT-based methods, we expect the error to be lower than for arbitrary range queries. Considering the error in hierarchical methods (Theorem 4.3), we require at most B − 1 nodes at each level to construct a prefix query, instead of (2B − 1), which reduces the variance by almost half. For DHT similarly, we only split nodes on the right end of a prefix query, so we also reduce the variance bound by a factor of 2. Note that a reduction in variance by 0.5 will translate into a factor of √ 2 = 0.707 in the absolute error. Although the variance bound changes by a constant factor, we obtain the same optimal choice for the branching factor in B. Prefix queries are sufficient to answer quantile queries. The ϕquantile is that index j in the domain such that at most a ϕ-fraction of the input data lies below j, and at most a (1 − ϕ) fraction lies above it. If we can pose arbitrary prefix queries, then we can binary search for a prefix j such that the prefix query on j meets the ϕ-quantile condition. Errors arise when the noise in answering prefix queries causes us to select a j that is either too large or too small. The quantiles then describe the input data distribution in a general purpose, non-parametric fashion. Our expectation is that our proposed methods should allow more accurate reconstructions of quantiles than flat methods, since we expect they will observe lower error. We formalize the problem: Definition 4.7. (Quantile Query Release Problem) Given a set of N users, the goal is to collect information guaranteeing ϵ-LDP to approximate any quantile q ∈ [0, 1]. Let Q be the item returned as the answer to the quantile query q using a mechanism F , which is in truth the q ′ quantile, and let Q be the true q quantile. We evaluate the quality of F by both the value error, measured by the squared error ( Q − Q) 2 ; and the quantile error |q − q|. EXPERIMENTAL EVALUATION Our goal in this section is to validate our solutions and theoretical claims with experiments. Dataset Used. We are interested in comparing the flat, hierarchical and wavelet methods for range queries of varying lengths on large domains, capturing meaningful real-world settings. We have evaluated the methods over a variety of real and synthetic data. Our observation is that measures such as speed and accuracy do not depend too heavily on the data distribution. Hence, we present here results on synthetic data sampled from Cauchy distributions. This allows us to easily vary parameters such as the population size N and the domain size D, as well as varying the distribution to be more or less skewed. 
The shape of the (symmetrical) Cauchy distribution is controlled by two parameters, center and height. We set the location of the center at P × D, for 0 < P < 1, so that larger values of P shift the mass further to the right. Since the Cauchy distribution has infinite support, we drop any values that fall outside [D]. Larger height parameters tend to reduce the sparsity in the distribution by flattening it. Our default choice is a relatively spread out distribution with height = D/10 and P = 0.4. We vary the domain size D from small (D = 2^8) to large (D = 2^22) as powers of two. Algorithm default parameters and settings. We set a default value of e^ϵ = 3 (ϵ = 1.1), in line with prior work on LDP. This means, for example, that binary randomized response will report a true answer 3/4 of the time, and lie 1/4 of the time: enough to offer plausible deniability to users, while allowing algorithms to achieve good accuracy. Since the domain size D is chosen to be a power of 2, we can choose a range of branching factors B for hierarchical histograms so that log_B(D) remains an integer. The default population size N is set to be N = 2^26, which captures the scenario of an industrial deployment, similar to [7,12,20]. Each bar plot is the mean of 5 repetitions of an experiment, and error bars capture the observed standard deviation. The simulations are implemented in C++ and tested on a standard Linux machine. To the best of our knowledge, ours is among the first non-industrial works to provide simulations with domain sizes as large as 2^22. Our final implementation will shortly be made available as open source. Sampling range queries for evaluation. When the domain size is small or moderate (D = 2^8 and 2^16), it is feasible to evaluate all D^2 range queries and so compute the exact average. However, this is not scalable for larger domains, and so we average over a subset of the range queries. To ensure good coverage of different ranges, we pick a set of evenly-spaced starting points, and then evaluate all ranges that begin at each of these points. For D = 2^20 and 2^22 we pick start points every 2^15 and 2^16 steps, respectively, yielding a total of 17M and 69M unique queries. Histogram estimation primitives. The HH framework in general is agnostic to the choice of the histogram estimation primitive F. We show results with OUE, HRR and OLH as the primitives for histogram reconstruction, since they are considered to be the state of the art [27], and all provide the same theoretical bound V_F on variance. Though any of these three methods can serve as a flat method, we choose OUE as the flat method since it can be simulated efficiently and reliably provides the lowest error in practice by a small margin. We refer to the hierarchical methods using the HH framework as TreeOUE, TreeOLH and TreeHRR. Their counterparts where the aggregator applies post-processing to enforce consistency are identified with the CI suffix, e.g. TreeHRRCI. We quickly observed in our preliminary experiments that direct implementation of OUE can be very slow for large D: the method perturbs and reports D bits for each user. For accuracy evaluation purposes, we can replace the slow method with a statistically equivalent simulation. That is, we can simulate the aggregated noisy count data that the aggregator would receive from the population. We know that the noisy count of any item is aggregated from two distributions: (1) "true" ones that are reported as ones (with probability 1/2), and (2) zeros that are flipped to ones (with probability 1/(1 + e^ϵ)); a sketch of this simulation is given below.
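Under these two distributions, the aggregator's observed count for each item can be drawn directly, which is what the following sketch does (names are ours; the debiasing step matches the usual OUE correction).

```python
import numpy as np

def simulate_oue_counts(true_counts, eps, rng):
    """Statistically equivalent OUE simulation: instead of perturbing D bits per
    user, sample each item's observed count from its two binomial components,
    then apply the standard bias correction to recover unbiased estimates."""
    true_counts = np.asarray(true_counts)
    N = int(true_counts.sum())
    q = 1.0 / (1.0 + np.exp(eps))                      # a 0-bit becomes 1 with prob. q
    ones_kept = rng.binomial(true_counts, 0.5)         # true 1-bits kept with prob. 1/2
    zeros_flipped = rng.binomial(N - true_counts, q)   # 0-bits flipped to 1
    observed = ones_kept + zeros_flipped
    return (observed - N * q) / (0.5 - q)              # unbiased count estimates
```

Here rng is a numpy Generator, e.g. np.random.default_rng(0), and true_counts is the (private) histogram used only to drive the simulation.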
Therefore, using the (private) knowledge of the true count θ[j] of item j ∈ [D], the noisy count θ*[j] can be expressed as a sum of two binomial random variables: θ*[j] ∼ Binomial(θ[j], 1/2) + Binomial(N − θ[j], 1/(1 + e^ϵ)). Our simulation can perform this sampling for all items, then provide the sampled counts to the aggregator, which then performs the usual bias correction procedure. The OLH method suffers from a more substantial drawback: the method is very slow for the aggregator to decode, due to the need to iterate through all possible inputs for each user report (time O(N D)). We know of no shortcuts here, and so we only consider OLH in our initial experiments with small domain size D. Impact of varying B and r Experiment description. In this experiment, we aim to study how much a privately reconstructed answer for a range query deviates from the ground truth. Each query answer is normalized to fall in the range 0 to 1, so we expect good results to be much smaller than 1. To compare with our theoretical analysis of variance, we measure the accuracy in the form of mean squared error between true and reconstructed range query answers. Plot description. Figure 4 illustrates the effect of branching factor B on accuracy for domains of size 2^8 (small), 2^16 (medium), and lastly 2^20 and 2^22 (large). Within each plot with a fixed D and query length r, we vary the branching factor on the X axis. We plot the flat OUE method as if it were a hierarchical method with B = D, since it effectively has this fan-out from the root. We treat HaarHRR as if it has B = 2, since it is based on a binary tree decomposition. The Y axis in each plot shows the mean squared error incurred while answering all queries of length r. As the plots go left to right, the range length increases from 1 to just less than the whole domain size D. The top row of plots has D = 2^8, and the last row of plots has D = 2^22. Observations. Our first observation is that the CI step reliably provides a significant improvement in accuracy in almost all cases for HH, and never increases the error. Our theory suggests that the CI step improves the worst case accuracy by a constant factor, and this is borne out in practice. This improvement is more pronounced at larger intervals and higher branching factors. In many cases, especially in the right three columns, TreeOUECI and TreeHRRCI are two to four times more accurate than their inconsistent counterparts. Consequently, we put our main focus on methods with consistency applied in what follows. Next, we quickly see evidence that the flat approach (represented by OUE) is not effective for answering range queries. Unsurprisingly, for point queries (r = 1), flat methods are competitive. This is because all methods need to track information on individual item frequencies in order to answer short range queries. The flat approach keeps only this information, and so maximizes the accuracy here. Meanwhile, HH methods only use leaf level information to answer point queries, and so we see better accuracy the shallower the tree is, i.e. the bigger B is. However, as soon as the range goes beyond a small fraction of the domain size, other approaches are preferable.
The second column of plots shows results for relatively short ranges where the flat method is not the most accurate. For larger domain sizes and queries, our methods outperform the flat method by a high margin. For example, the best hierarchical methods for very long queries and large domains are at least 16 times more accurate than the flat method. Recall our discussion of OLH above that emphasised that its computation cost scales poorly with domain size D. We show results for TreeOLH and TreeOLHCI for the small domain size 2 8 , but drop them for larger domain sizes, due to this poor scaling. We can observe that although the method acheives competitive accuracy, it is equalled or beaten by other more performant methods, so we are secure in omitting it. As we consider the two tree methods, TreeOUE and TreeHRR, we observe that they have similar patterns of behavior. In terms of the branching factor B, it is difficult to pick a single particular B to minimize the variance, due to the small relative differences. The error seems to decrease from B = 2, and increase for larger B values above 2 4 (i.e. 16). Across these experiments, we observe that choosing B = 4, 8 or 16 consistently provides the best results for medium to large sized ranges. This agrees with our theory, which led us to favor B = 8 or B = 4, with or without consistency applied respectively. This range of choices means that we are not penalized severely for failing to choose an optimal value of B. The main takeaway from Figure 4 is the strong performance for the HaarHRR method. It is not competitive for point queries (r = 1), but for all ranges except the shortest it achieves the single best or equal best accuracy. For some of the long range queries covering the almost the entire domain, it is slightly outperformed by consistent HH B methods. However, this is sufficiently small that it is hard to observe visually on the plots. Across a broad range of query lengths (roughly, 0.1% to 10% of the domain size), HaarHRR is preferred. It is most clearly the preferred method for smaller domain sizes, such as in the case of D = 2 8 . We observed a similar behavior for domains as small as 2 5 . Impact of privacy parameter ϵ Experiment description. We now vary ϵ between 0.1 (higher privacy) to 1.4 (lower privacy) and find the mean squared error over range queries. Similar ranges of ϵ parameters are used in prior works such as [30]. After the initial exploration documented in the previous section, our goal now is to focus in on the most accurate and scalable hierarchical methods. Therefore, we omit all flat methods and consider only those values of B that provided satisfactory accuracy. We choose TreeOUECI as our mechanism to instantiate HH (henceforth denoted by HH c B , where the c denotes that consistency is applied) method due to its accuracy. We do note that a deployment may prefer TreeHRRCI over TreeOUECI since it requires vastly reduced communication for each user at the cost of only a slight increase in error. Plot description. Table 5 compares the mean squared error for HH c 2 , HH c 4 HH c 16 and HaarHRR for various ϵ values. We multiply all results by a factor of 1000 for convenience, so the typical values are around 10 −3 corresponding to very low absolute error. In each row, we mark in bold the lowest observed variance, noting that in many cases, the "runner-up" is very close behind. Observations. The first observation, consistent with Figure 4, is that for lower ϵ's, HaarHRR is more accurate than the best of HH c B methods. 
This improvement is most pronounced for D = 2^8, i.e. at most 10% (at ϵ = 0.2), and marginal (0.01% to 1%) for larger domains. For larger ϵ regimes, HH^c_B outperforms HaarHRR, but only by a small margin of at most 11%. For large domains, HH^c_B remains the best method. In general, except for D = 2^22, there is no one value of B that achieves the best results at all parameters, but overall B = 4 yields slightly more accurate results for HH^c_B in most cases. Note that this B value is closer to the optimal value of 9 (derived in Section 4.5) than other values. When D = 2^22, HH^c_2 dominates HH^c_4, but only by a margin of at most 10%.

Comparison with DHT and HH based approaches in the centralized case. We briefly contrast this with the conclusions in the centralized case. We reproduce some of the results of Qardaji et al. [21] in Table 7, comparing variance for the (centralized) wavelet based approach to (centralized) hierarchical histogram approaches with B = 2, 16 with consistency applied. These numbers are scaled and not normalized, so they cannot be directly compared to our results (although we know that the error should be much lower in the centralized case). However, we can meaningfully compare the ratio of variances, which we show in the last two rows of the table. For ϵ = 1, D = 2^8, the error for the Haar method is approximately 2.8 times that of the hierarchical approach. Meanwhile, the corresponding readings for HaarHRR and HH^c_4 (the most accurate method in the ϵ = 1 row) in Table 5 are 0.787 and 0.763, a deviation of only ≈3%. Another important distinction from the centralized case is that we are not penalized much for choosing a sub-optimal branching factor, whereas in the centralized results we see in the fourth row of Table 7 that choosing B = 2 increases the error of the consistent HH method by a factor of at least 1.8576 relative to the preferred method HH^c_16. A further observation is that (apart from D = 2^22), across 24 observations, HaarHRR is never outperformed by all values of HH^c_B, i.e. in no instance is it the least accurate method. It trails the best HH^c_B method by at most 10%. On the other hand, in the centralized case (Table 7), the variance for the wavelet based approach is at least 1.86 times higher than HH^c_B.

Prefix Queries

Experiment description. As described in Section 4.7, prefix queries deserve special attention. Our setup is the same as for range queries. We evaluate every prefix query, as there are fewer of them.

Plot description. Table 6 is the analogue of Table 5 for prefix queries, computed with the same settings. We underline the scores that are smaller than the corresponding scores in Table 5.

Observations. The first observation is that the error in Table 6 is often smaller (by up to 30%) than in Table 5, particularly for small and medium sized domains. The reduction is not as sharp as the analysis might suggest, since the analysis only gives upper bounds on the variance. Reductions in error are not as noticeable for larger values of D, although this could be affected by our range query sampling strategy. In terms of which method is preferred, HH^c_2 (for D = 2^22) and HH^c_4 (otherwise) tend to dominate for larger ϵ, while HaarHRR is preferred for smaller ϵ.

Impact of input distribution

Experiment description. We now check whether the shape of the input distribution affects the mean squared error when other parameters are held at their default values.

Plot description. Figure 8 plots the mean squared error for domains of different sizes for e^ϵ = 3 (ϵ ≈ 1.1).
Along the X axis, we shift the center of the Cauchy distribution by changing 0 < P < 1. For each domain size D, we make our comparison between HaarHRR and the most accurate consistent HH method according to Table 5.

Observations. The chief observation is that for small and mid-sized domains, the change in distribution does not make any noticeable difference in the accuracy. HaarHRR continues to be slightly inferior to HH^c_4 for all input shapes. For D = 2^22, we do notice an increase in the error when the bulk of the mass of the distribution is towards the left end of the domain (P = 0.1 to P = 0.3). This is partly due to the range sampling method we use: the majority of range queries we test cover only a small amount of the true probability mass of the distribution for these P values, and this leads to increased error from privacy noise. However, the main take-away here should be the consistently small absolute numbers: a maximum mean squared error of 0.0035, i.e. very accurate answers.

Quantile Queries

Experiment description. Finally, we compare the performance of the best hierarchical approaches in evaluating the deciles (i.e. the ϕ-quantiles for ϕ in 0.1 to 0.9) for a left-skewed (P = 0.1) and centered (P = 0.5) Cauchy distribution.

Plot description. The top row in Figure 9 plots the actual difference between true and reconstructed quantile values (value error). The corresponding bottom plots measure the absolute difference between the quantile value of the returned value and the target quantile (quantile error).

Observations. The first observation is that both algorithms have low absolute value error (the top row). For the domain of 2^22 ≈ 4M, even the largest error of ≈35K made by HH^c_2 is still very small, and less than 1%. The value error tends to be highest where the data is least dense: towards the right end when the data skews left (P = 0.1), and at either extreme when the data is centered. Importantly, the corresponding quantile error is mostly flat. This means that instead of finding the median (say), our methods return a value that corresponds to the 0.5004 quantile, which is very close in the distributional sense. This reassures us that any spikes in the value error are mostly a function of sparse data, rather than problems with the methods.

Experimental Summary

We can draw a number of conclusions and recommendations from this study:
• The flat methods are never competitive, except for very short ranges and small domains.
• The wavelet approach is preferred for small values of ϵ (roughly ϵ < 0.8), while the (consistent) HH approach is preferred for larger ϵ's and for larger queries.
• This threshold is slightly reduced for larger domains. However, the "regret" for choosing a "wrong" method is low: the difference between the best method and its competitor from HH and wavelet is typically no more than 10%.
• Overall, the wavelet approach (HaarHRR) is always a good compromise method. It provides accuracy comparable to consistent HH in all settings, and requires a constant factor less space (D wavelet coefficients against 2D − 1 for HH_2).

CONCLUDING REMARKS

We have seen that we can accurately answer range queries under the model of local differential privacy. Two methods whose counterparts have quite differing behavior in the centralized setting are very similar under the local setting, in line with our theoretical analysis. Now that we have reliable primitives for range queries and quantiles, it is natural to consider how to extend and apply them further.
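As a concrete illustration of how the quantile primitive evaluated above is built from prefix queries, the following is a minimal sketch (ours, not the authors' code). It assumes a hypothetical prefix_estimate(x) oracle that returns a monotone-corrected estimate of the cumulative fraction of the population with value at most x, built from any of the hierarchical or wavelet reconstructions discussed above.

```python
def quantile_from_prefixes(prefix_estimate, domain_size, phi):
    """Return the smallest x in [0, domain_size) whose estimated prefix
    (normalized cumulative frequency) reaches the target quantile phi.

    prefix_estimate(x) is assumed to be a noisy but monotone-corrected
    estimate of the fraction of the population with value <= x.
    """
    lo, hi = 0, domain_size - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if prefix_estimate(mid) >= phi:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Example with a toy, noise-free CDF oracle over D = 2**8 (uniform data):
D = 2**8
cdf = lambda x: (x + 1) / D
deciles = [quantile_from_prefixes(cdf, D, phi / 10) for phi in range(1, 10)]
```

Each probe costs one prefix query, so a quantile is recovered with O(log D) prefix evaluations of the reconstructed histogram.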
Multidimensional range queries. Both the hierarchical and wavelet approaches can be extended to multiple dimensions. Consider applying the hierarchical decomposition to two-dimensional data, drawn from the domain [D]^2. Now any (rectangular) range can be decomposed into log_B^2 D B-adic rectangles (where each side is drawn from a B-adic decomposition), and so we can bound the variance in terms of log_B^4 D. More generally, we achieve variance depending on log^(2d) D for d-dimensional data. Similar bounds apply for generalizations of wavelets. These give reasonable bounds for small values of d (say, 2 or 3). For higher dimensions, we anticipate that coarser gridding approaches would be preferred, in line with [22].

Advanced data analysis. In the abstract, many tasks in data modeling and prediction can be understood as building a description of observed data density. For example, many (binary) classification problems can be described as trying to predict what class is most prevalent in the neighborhood of a given query point. Viewed through this lens, range queries form a primitive that can be used to build such models. As a simple example, consider building a Naive Bayes classifier for a public class based on private numerical attributes. If we use our methods to allow range queries to be evaluated on each attribute for each class, we can then build models for the prediction problem. Generalizations of this approach to more complex models, different mixes of public and private attributes, and different questions give a set of open problems for this area.
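To illustrate the Naive Bayes idea above, the following toy sketch scores each (public) class for a query point using only per-class, per-attribute range estimates of the kind the mechanisms in this paper provide. All names here (range_estimate, bins) are hypothetical placeholders, not an interface defined in the paper.

```python
import math

def naive_bayes_predict(range_estimate, class_priors, bins, x):
    """Predict the class of query point x using only range queries.

    range_estimate(cls, attr, lo, hi) is assumed to return an LDP-based
    estimate of P(attr in [lo, hi] | class = cls); bins[attr] gives the
    half-width of the window placed around x[attr].
    """
    scores = {}
    for cls, prior in class_priors.items():
        log_p = math.log(prior)
        for attr, value in x.items():
            w = bins[attr]
            # Clamp tiny/negative noisy estimates before taking the log.
            p = max(range_estimate(cls, attr, value - w, value + w), 1e-6)
            log_p += math.log(p)
        scores[cls] = log_p
    return max(scores, key=scores.get)
```

Under the naive independence assumption, the class score is just the prior times the product of per-attribute window probabilities, each of which is a single range query.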
2018-12-31T12:26:42.000Z
2018-12-28T00:00:00.000
{ "year": 2018, "sha1": "0a93146d5857c3852c56bd23606a2fa4060203a4", "oa_license": "CCBYNCND", "oa_url": "http://wrap.warwick.ac.uk/123728/1/WRAP-answering-range-queries-local-differential-privacy-Cormode-2019.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1f56b26c059ed7637529970788e3288970981464", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
227080564
pes2o/s2orc
v3-fos-license
Emergence of carbapenem-resistant ST131 Escherichia coli carrying bla OXA-244 in Germany, 2019 to 2020

The dissemination of carbapenem-producing Gram-negative bacteria is a major public health concern. We report the first detection of OXA-244-producing ST131 O16:H5 Escherichia coli in three patients from two tertiary hospitals in the south-west of Germany. OXA-244 is emerging in Europe. Because of detection challenges, OXA-244-producing E. coli may be under-reported. The emergence of carbapenem resistance in a globally circulating high-risk clone, such as ST131 E. coli, is of clinical relevance and should be monitored closely.

Escherichia coli of the ST131 lineage is considered a successful and emerging high-risk pandemic multidrug-resistant E. coli strain [1,2]. Typically, most ST131 E. coli are resistant to third-generation cephalosporins but remain susceptible to carbapenems [1]. We detected three OXA-244-producing ST131 E. coli from patient samples in two tertiary hospitals in the south-west of Germany between January 2019 and June 2020. The aim of our study was to investigate the genetic diversity of the emerging OXA-244-producing E. coli in the Rhine-Neckar region using whole-genome sequencing.

[Figure caption: Black squares: presence; grey squares: absence of antimicrobial resistance genes; red font and red squares: carbapenemase genes. C. Minimum-spanning tree based on the core genome of all sequenced E. coli in this study. Potential transmission clusters are indicated by the grey circles, with SNP differences over the core genome in blue. There was no indication of patient-to-patient transmission (3,413 genes, 108,017 polymorphic sites). Numbers in square brackets indicate the number of isolates belonging to the MLST.]

Molecular and microbiological characteristics of OXA-244-producing Escherichia coli

Between January 2019 and June 2020, we identified 50 E. coli with phenotypic carbapenem resistance, of which 41 carried a carbapenemase. Nine of the 41 carried bla OXA-244, which belonged to three clonal lineages: ST38 (n = 5), ST131 (n = 3) and ST167 (n = 1). The isolate belonging to ST167 harboured two carbapenemase genes, bla NDM-5 and bla OXA-244. Relevant clinical and microbiological characteristics of the nine patients are summarised in Table 1. The presence of genotypic antibiotic resistance determinants is summarised in Figure 1. Antibiotic susceptibility of all bla OXA-244-carrying isolates is displayed in Table 2. Consistent with published data, the bla OXA-244 genes are most likely to have been integrated into the chromosome, because sequencing coverage of the bla OXA-244-containing contigs was lower than the overall average sequencing coverage (Figure 2A and 2B) [5,15]. Seven of nine isolates were susceptible to meropenem, as indicated by the low MIC in two different AST methods (Tables 1 and 2). One isolate (ST167, P9) carried both bla OXA-244 and bla NDM-5, so that high MIC values for carbapenems were expected.
However, the isolate from P3 exhibited an unusually high MIC for meropenem for an OXA-244 producer in both AST methods (≥ 16 mg/L in VITEK and 6 mg/L in E-test) (Tables 1 and 2), for reasons we could not explain. Nevertheless, all nine isolates exhibited positive results in the phenotypic carbapenem inactivation assay (CIM) using meropenem disk (10 µg) with a 2 h inactivation step [16]. Our findings suggest that CIM may be a reliable method to detect OXA-244 producers and should be validated in further studies. Potential origin and nosocomial transmission of OXA-244-producing ST131 Escherichia coli SNP analysis to evaluate the clonal relationship of the isolates suggested two potential transmission clusters of patients P1-P2 with five SNP and P4-P5-P8 with 15-24 SNP ( Figure 2C). Patient P1 was colonised with bla OXA-244 E. coli on admission. There was no recent travel exposure so that community acquisition in Germany was possible. P2 stayed in the same ward as P1 with some temporal overlap. P2 was born in the hospital and acquired the colonisation with ST131 OXA-244-producing E. coli during the hospital stay. Nosocomial transmission is a very likely source of acquisition as suggested by the identical genotypic and phenotypic resistance of both isolates of P1 and P2 ( Figure 1 and Table 2). P3 was in a different hospital than P1 and P2. The lack of epidemiological link is consistent with the genomic analysis, which did not indicate transmission. P3 had had contact with the healthcare system in Libya and was initially screened negative on admission in Germany. The bla OXA-244 E. coli was detected in subsequent screenings. However, we cannot fully rule out importation because the sensitivity of the detection method is limited [15]. In the ST38 cluster, there was no epidemiological overlap so that a nosocomial patient-to-patient transmission event is unlikely. Nevertheless, community transmissions caused by clonal dissemination of bla OXA-244 -positive ST38 E. coli in Germany cannot be entirely ruled out [17]. Discussion The increased incidence in Europe of communityacquired infections with E. coli carrying OXA-244 is of public health relevance as reflected by the rapid risk assessment by the European Centre for Disease Prevention and Control (ECDC) at the beginning of 2020 [18]. Recently, several federal states in Germany reported a rise in detection of community-acquired infections with ST38 OXA-244-producing E. coli [17]. Similar observations have been reported in other European countries [4,5,[19][20][21]. In Germany and other neighbouring countries in Europe, bla OXA-244 is predominantly found in ST38 E. coli [4,17,19,21,22]. Surveillance data from Denmark and France reported the presence of bla OXA-244 in other clonal groups (ST10, ST38, ST69, ST167, ST10, ST361 and ST 3268) [21,23], but to the best of our knowledge the presence of bla OXA-244 in ST131 E. coli in Europe has not been reported before. Besides being responsible for serious extra-intestinal infections, the development of resistance to carbapenems in the ST131 E. coli clonal lineage, is particularly worrisome as carbapenems are often the last line of therapy for life-threatening infections [2,24]. There are no systematic data on the prevalence of carbapenemase-producing Gram-negative bacteria in the Rhine-Neckar region. However, our data suggest a low prevalence of 0.5% (131/27,387 screened patients in the Heidelberg University Hospital in 2019), which is consistent with published data [25]. Peirano et al. 
reported that the global incidence of carbapenemase-producing E. coli ST131 O25b:H4 of the fimH30/virotype C lineage is increasing, with bla KPC as the most common carbapenem-resistance determinant [2]. In contrast, our E. coli ST131 has the serotype O16:H5 with bla OXA-244 that belongs to the fimH41/virotype C lineage [26]. Although the major lineage of the highly virulent ST131 belongs to the serotype O25b:H4 and fimH30, a murine infection model suggested that ST131 O16:H5 fimH41 is comparable to the H30 lineage in virulence and lethality [27], which implies that the emergence of carbapenems resistance in the H41 ST131 lineage is equally relevant. Our study has limitations, the detection of OXA-244 producing E. coli is a major diagnostic challenge owing to its low level of phenotypic resistance to carbapenems; therefore OXA-244 producers may be underreported. Nevertheless, our finding suggests that a simple phenotypic assay for carbapenem inactivation combined with routine WGS may be useful to detect low carbapenemase producers, such as OXA-244. In addition, the epidemiological data of our patients were limited so that the exact origin of the OXA-244-producing ST131 E. coli in this study cannot be fully elucidated. Conclusion The emergence and dissemination of virulent and dominant E. coli clones with resistance to last-line antibiotics is a public health concern. Our findings emphasise the necessity of adequate surveillance measures and warrant further studies on the epidemiology and transmission dynamics of carbapenem-resistant E. coli both in the hospital and community setting. Ethical statement Data and isolates were collected and characterised in accordance to the German Infection Protection Act. The local ethical committee was consulted for the usage of clinical data for scientific purposes and granted waiver of informed consent (S-474/2018).
2020-11-21T14:06:34.131Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "c186bb2a0404bac1a067e3ec739b7361375ca6c3", "oa_license": "CCBY", "oa_url": "https://www.eurosurveillance.org/deliver/fulltext/eurosurveillance/25/46/eurosurv-25-46-5.pdf?containerItemId=content/eurosurveillance&itemId=/content/10.2807/1560-7917.ES.2020.25.46.2001815&mimeType=pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c0b8cebafd4a89dd73dd1c12842dcdfffad68173", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
55680813
pes2o/s2orc
v3-fos-license
A COMPARISON STUDY OF URBAN AND SMALL RURAL HOSPITALS FINANCIAL AND ECONOMIC PERFORMANCE This study examines the performance of hospitals based on location (geographical region, rural, urban). In this study, recent data has been used to better understand the hospitals performance after the introduction of Prospective Payment System (PPS). The data set used by the study is much comprehensive in its coverage and information on a number of relevant variables. We have included a number of new economic and financial variables in the analysis and examined the effects of conversion of hospitals from not-for-profit to for-profit on hospital performance. Our empirical findings suggest that the size of hospitals, occupancy rate of hospital beds, ownership status, degree of competition faced in the market, teaching status, and measure of financial indebtedness of hospitals are significant determinants of hospital performance holding location constant. The empirical model also suggests that the relationship between hospital efficiency measure and its various determinants is actually non-linear in nature and therefore, it is important to adopt appropriate non-linear econometric models for empirical estimation of the performance function. Finally, our findings show that rural and small hospitals face significant factors that hinder its performance in comparison to urban and larger hospitals such as the lack of (DSH) payments and economy of scale due to their smaller size and lower proportion of Medicaid patients. INTRODUCTION The profitability and financial performance of rural hospitals -and their determinants -have been important subjects of research and of great interest to federal and state agencies as well as banks, creditors, rating agencies, and regulators.Most studies to date have focused on the issues of differential access to urban and rural hospitals, and ignored the issues related to their financial performance.(Institute of Medicine, 2000) Rural hospitals differ from urban hospitals by being smaller with average size of less than 50 beds.Another characteristic of rural hospitals is its dependence on Medicaid and Medicare as a source of payment.Medicare pays for almost 50% for all hospital discharge compared with 37% for urban hospitals.Medicaid patients count for 17% of rural hospitals inpatient days in comparison to 26% of urban hospitals.Urban hospitals showed an average length of stay (LOS) of 5.9 days versus 7.4 days in rural hospitals.(Ricketts, 1999). 
There are many factors determine the level of access to hospitals healthcare such as health insurance, education, and race (Lee and Estes, 2000), however, an equally important factor contributing to health care access is the hospital's financial performance and profitability.In the long run, hospitals with financial insolvency problems would either be expected to reduce their level of care to the poor, uninsured, and other indigent populations, or face closure, bankruptcy or merger.Sear (1990) examined the issue of profitability in a sample of 50 investor-owned or for-profit (FP) hospitals and 60 not-for-profit (NFP) hospitals in Florida during the period 1982-1988.His results indicated that FP hospitals are more profitable than NFP hospitals and the average length of stay (LOS) and wages per adjusted patient day were important in explaining hospital profitability.Walker (1993), using a logit regression model, found that financial variables, by themselves, failed to discriminate between profitable and non-profitable hospitals and thus did not provide a complete explanation of financial condition.Watt et al. (1986) reported that FP hospitals have higher average revenues than their NFP counterparts.Herzlinger and Krasker (1987) found that NFP hospitals neither perform as well financially as do FP hospitals, nor do they compensate for this by returning higher levels of social benefits.However, other authors (Haddock et al. 1989;Arrington and Haddock, 1990) reexamined Herzlinger and Krasker's (1987) methods and found that the NFP hospitals were less profitable than FP, but provided more access to care to the indgent population through the admission to their emergency room..In short, the performance of hospitals varied by ownership, thus refuting the findings of Herzlinger and Krasker (1987).On the other hand, based on a sample of hospitals in Florida in 1980, Sloan and Vraciu (1983) found that FP and NFP hospitals were virtually identical in terms of profitability.Younis et.al. (2001) found that the most profitable hospitals are located in the southern region of the country and the hospitals located in the northeastern region where the least profitable. In this study, we revisit the issue of rural and urban hospitals financial performance by taking several new directions compared to previous studies.First, we examine the variation in financial performance between urban and rural hospitals.*Unlike earlier studies, which used data from the pre-Prospective Payment System (PPS) period, the data set employed in this study is obtained from the post-PPS period and is therefore more relevant to the payment system currently faced by the hospitals.Unlike the previous cost-based mechanism, reimbursement under PPS is set at a predetermined rate.PPS operating cost payment was initiated in 1983 and phased-in over the five-year period 1983-1988 to provide the hospitals the appropriate time to adjust to the new payment system.Second, we incorporate in our empirical analysis additional economic and financial variables (e.g., degree of competition and financial indebtedness) that are likely to affect hospital profitability.Moreover, since early 1990s, hospitals in the U.S. 
went through significant changes in ownership pattern, and the more recent data allow us to investigate the effect of conversion of not-for-profit (NFP) hospitals to for-profit (FP) status on profitability. Third, we used a proxy variable, consistent with the literature, to identify rural hospitals, and we acknowledge the potential limitations of this approach at the end of the paper. Fourth, since long-term and specialty hospitals are reimbursed under the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA), we included only short-term care hospitals (Ettner, 2001).

BACKGROUND

This study examines the financial performance and conversion of U.S. hospitals in relation to geographic region and urban-rural differences. Area variation and change of ownership have been ignored in the past. This research contributes significantly to the issue of variation in financial performance between small rural hospitals and their urban counterparts.

In this study, recent data from the Medicare Cost Report (MCR) have been used to understand the economic performance of hospitals following the introduction of the Prospective Payment System (PPS). The data set used includes information on a number of relevant variables such as length of stay, occupancy rate, and full-time employees adjusted for case mix. We also included in our regression analysis the effect of serving the Medicaid population on rural and urban hospitals' performance.

OBJECTIVES

To examine and compare the financial performance of rural and urban hospitals.

ECONOMETRIC METHODOLOGY FOLLOWED

Using return on assets (ROA) as the dependent variable to understand the factors affecting hospital profitability and efficiency, the following regression model was estimated:

ROA = f(BEDSIZE, OCCURATE, OWNERSTATUS, LENGTHSTAY, DEBT, FTECMX, TEACHSTATUS, SOLE, YEAR, RATIO OF MEDICAID DAYS TO TOTAL HOSPITAL DAYS)

In this model, a number of variables were considered to have non-linear effects, which can be approximated by piece-wise linear models. For example, it has been hypothesized that profitability is dependent on hospital size (BEDSIZE), which is the proxy variable for location in this model. But the effect of BEDSIZE on ROA should change if size exceeds a certain minimum level. At another, higher level, the effect on ROA can change again. To allow one constant effect of BEDSIZE on ROA for a specific size range and another constant effect for another size range, the variable BEDSIZE was decomposed into three variables (BEDSIZE0-50, BEDSIZE50-400 and BEDSIZEover400). The first redefined variable shows the actual number of beds for all hospitals with bed sizes less than 50; if the size is 50 or more, the variable takes the value of 50. Similarly, the second redefined variable (BEDSIZE50-400) is 0 if size is less than 50; it is the actual bed size minus 50 if the number of beds in the hospital is between 50 and 399, and 350 (i.e. 400 minus 50) when the size is 400 or more. Similarly, BEDSIZEover400 takes the value of 0 if the size of the hospital is less than 400, and bed size minus 400 if the size is over 400. Such redefinition allows the slope of hospital size to change in the regression model.
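A compact way to construct these piece-wise linear variables and estimate the model by OLS with White's heteroscedasticity-consistent standard errors (the estimation approach used in this study) is sketched below. This is an illustrative reconstruction rather than the authors' code: the knots mirror the definitions above, and names such as MEDICAID_RATIO (the ratio of Medicaid days to total hospital days) are ours.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def piecewise(x, knots):
    """Decompose x into piece-wise linear segments at the given knots,
    e.g. knots=(50, 400) yields BEDSIZE0-50, BEDSIZE50-400, BEDSIZEover400."""
    x = np.asarray(x, dtype=float)
    cols, lo = [], 0.0
    for hi in knots:
        cols.append(np.clip(x, lo, hi) - lo)
        lo = hi
    cols.append(np.maximum(x - lo, 0.0))
    return np.column_stack(cols)

def fit_roa_model(df):
    """df is assumed to hold one row per hospital-year with the model variables."""
    X = pd.DataFrame(
        np.hstack([piecewise(df["BEDSIZE"], (50, 400)),
                   piecewise(df["OCCURATE"], (10, 50))]),
        columns=["BED0_50", "BED50_400", "BEDover400",
                 "OCC0_10", "OCC10_50", "OCCover50"],
        index=df.index,
    )
    for v in ["OWNERSTATUS", "LENGTHSTAY", "DEBT", "FTECMX",
              "TEACHSTATUS", "SOLE", "YEAR", "MEDICAID_RATIO"]:
        X[v] = df[v]
    X = sm.add_constant(X)
    # OLS with White's heteroscedasticity-consistent (HC0) standard errors.
    return sm.OLS(df["ROA"], X).fit(cov_type="HC0")
```

The same helper produces the three occupancy-rate segments (under 10%, 10-50%, over 50%) simply by changing the knots, so each segment gets its own slope in the fitted model.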
In this regression model, we have allowed non-linearity (piece-wise linear) for another variable --occupancy rate (OCCURATE).OCCURATE was also redefined into three variables following the procedure mentioned for BEDSIZE.The categories defined for the three variables are: occupancy rate less than 10%, 10 to 50%, and more than 50%.Other variables entered in the model are: • FTECMX = number of full-time employees per 100 admissions adjusted for case mix.• OWNERSTATUS = dummy variable indicating type of ownership (equals 1 for NFP status, 0 for FP status).• EACHSTATUS = dummy variable, taking the value of 1 if the hospital provides teaching and interns training, 0 otherwise.• OLE = dummy variable capturing the degree of competition facing a hospital (equals 1 if a hospital is the sole Medicare provider, 0 otherwise).• YEAR = dummy variable, taking the value of 0 if year is 1991 and 1 if year is 1995. • DEBT = debt per bed in service.Debt is defined as bonds issued plus loans. • CONVERT = dummy variable, taking the value of 1 if a hospital is converted from NFP status to FP status between 1991 and 1995 and 0 otherwise.• DAYS MEDICAID/TOTA DAYS = the ratio of inpatient days Medicaid to total hospital days The estimation methodology used the ordinary least squares (OLS) with heteroscedasticity adjustment to standard errors, following White (1980). DATA AND DESCRIPTIVE ANALYSIS Hospital data for the years 1991 and 1995 were obtained from the Medicare Cost Report (MCR) with support from HCIA, Inc.We consider the data is recent given the untimely release of the Medicare Cost Report to the public and skills needed to make the file in readable format.The hospitals in this study were divided into three categories: notfor profit hospitals in both 1991 and 1995, for-profit hospitals in these two years, and hospitals that were converted from not-for-profit status to for-profit status between 1991 and 1995.Table 1 shows some basic characteristics of the hospitals during these two years. Note that for the year 1991, the data set contains 521 for-profit hospitals, 3,478 not-for-profit hospitals, 614 for-profit hospitals, and 3,406 not-for-profit hospitals in the year 1995.Table 1 also presents descriptive statistics on the hospitals in the sample for the years 1991 and 1995.Full-time employees per 100 adjusted discharges declined by about nine percent (9%) for rural hospitals over 1991 and 1995, but the decline was much steeper for urban hospitals (15.5%).In general, rural hospitals are smaller in size than urban and suburban hospitals, and over the years hospitals in general experienced an approximately 1.6 % decrease in size.The length of stay per adjusted acute case also declined from about 4.3 days to about 3.7 days between 1991 and 1995 for rural hospitals, while urban hospitals experienced more extensive declines in LOS.However, hospitals converting from NP to FP showed steeper declines in occupancy rates than nonconverting hospitals.It appears that the hospitals experiencing a change in profit status found themselves relatively weak in terms of market power (Needleman et al. 1997) number of sole community providers in urban locations was only 4 in 1995, indicating that most urban hospitals face competition from other hospitals in the community. 
The measure of profitability used in this study is return-on-assets (ROA), a continuous financial status variable defined as net income divided by total assets.ROA reflects the efficiency score of hospitals as it relates hospital output to non-labor inputs.The profitability of the hospitals in the sample increased over the years 1991 and 1995.This is true both for rural and non-rural hospitals.The enhanced financial performance of hospitals is often considered to be related to improvements in collections and electronic payments.It should be noted that rural hospitals in general were losing money in 1991, whereas urban hospitals were doing much better than even the for-profit hospitals in terms of profitability. RESULTS We found that hospitals profitability has improved over time.However, the magnitude of the improvement was far lower for rural hospitals than urban hospitals.Lower profitability will hinder the ability of the rural hospitals to provide charity care and other uncompensated care. Table 2 presents the results of the regression model.A major controversy in the health care field centers around the effect of ownership on economic performance of hospitals.The variable of ownership status was found to be negative and statistically significant which indicate that FP hospitals are more profitable than NP.On the average, for-profit hospitals are likely to have higher ROA ratios than not-for-profit hospitals.This result is obtained after controlling for the time trend in profitability.The estimated coefficient of time trend is quite high (2.18) with high t-values, implying that hospitals in general had a higher ROA in 1995 compared to 1991.This is consistent with earlier studies, most of which found for-profit hospitals to be more efficient and profitable than not-for-profit entities (Younis et al. (2001). The Teaching Status variable in the model turned out to be negative and significant.Teaching hospitals are less profitable than non-teaching hospitals possibly due to the costs associated with training as well as the charitable services these hospitals provide.Teaching hospitals provide training for interns and residents, which increase the cost of operation of the hospitals.In many cases, teaching hospitals have affiliations with medical schools and try to maintain a charitable image in the community in order to attract donations and contributions.The significant difference between teaching and nonteaching hospitals may also be due to the scope of services provided by the teaching hospitals.Teaching hospitals tend to be larger and located in urban and economically depressed inner-city areas (HCIA, 1997).Consequently, teaching hospitals provide access to the indigent population from the surrounding areas with little or no compensation. 
Hospitals with less than 50 beds in service appear to be less profitable than larger hospitals. In fact, hospitals with 50-400 beds are more profitable than hospitals with less than 50 beds; thereafter, profitability declines for hospitals over 400 beds because economies of scale cease beyond the 400-bed range. Beyond a certain point, the larger the hospital, the lower its profitability. The variable SOLE (a measure of lack of competition) was also significant in the model, although the value of the coefficient is positive and small. This may be because the number of hospitals in this category is small, and sole providers are regulated and cannot simply overcharge patients for hospital care. Occupancy rate also shows a significant impact on profitability, and the only statistically significant coefficients were for OCCURATE0to10 and OCCURATE10to50. The sample sizes in other groups were too small to obtain significant results. The higher the number of full-time employees adjusted for case mix, the lower the profitability, holding all other variables constant. The case mix index (CMI) is analogous to product mix in a manufacturing context. It is a measure of the mix of patient illness types treated in the hospital, relative to the national average, and proxies for relative resource consumption. Thus, a hospital with an above-average CMI is expected to consume more resources than a hospital with a lower CMI. Employee full-time equivalents (FTEs) are divided by the CMI to provide an adjusted (standardized) FTE measure. A full-time employee count is a good proxy for the variable cost of the hospital. However, EMPLOYEES has a low coefficient value with a low significance level. This suggests that hospitals may be operating with an optimal number of employees, and any reduction in the number of employees would not lead to a significant improvement in profitability; however, the level of significance does not warrant strong conclusions.

Finally, the ratio of total Medicaid days to total days made a significant contribution to hospitals' financial performance, because hospitals with a higher proportion of Medicaid patients receive additional payment through the Medicaid disproportionate share hospital (DSH) payment system. Rural hospitals are at a disadvantage in receiving DSH payments because most Medicaid patients are located in large metropolitan areas and inner cities.

CONCLUSIONS AND POLICY IMPLICATIONS

Our empirical findings suggest that rural hospitals generate less revenue per bed than urban hospitals due to several factors, such as lower Medicaid volume, which leads to a lower Disproportionate Share Hospital (DSH) reimbursement rate, and the lack of economies of scale due to their small size and large overhead costs. Other variables, such as occupancy rate, ownership status, degree of competition faced in the market, teaching status, and financial indebtedness, are significant predictors of hospitals' financial performance. The model also suggests that the relationship between hospital profitability and its various determinants is non-linear in nature and, therefore, it is important to adopt appropriate non-linear econometric models for empirical estimation of the performance function. The findings also indicate that rural and small hospitals are significantly disadvantaged in terms of performance compared to urban and larger hospitals. Furthermore, we conclude that rural NFP hospitals are at a disadvantage because they receive little or no donations in comparison to larger urban NFP hospitals (Cutler, 2000).
Traditionally, the measure of performance of hospital industry has relied on the calculation of the financial ratios from hospitals' financial statements (income statement and balance sheet).The financial ratios measure the hospital's historic performance.Banks, creditors and rating agencies use these ratios to predict the hospital's future performance and credit extension.However, this research demonstrates that there are equally important measures that should be used in evaluating hospital performance.These factors are occupancy rate, staffing ratio, and total expense per adjusted discharge. These measures tend to clarify the underlying factors that produce a favorable or unfavorable financial performance. For example, since the implementation of PPS, and in the current era of declining use of inpatient services vis a vis outpatient treatments, occupancy rate has been considered a key predictor of financial performance.A declining trend in occupancy rate would have an adverse effect on efficiency, profitability, and liquidity.At a lower rate of occupancy, operating expense per adjusted discharge will be greater, which will hinder ability to operate efficiently. In conclusion, the financial performance of the hospital industry cannot be expressed by any one measure alone.Major differences exist among hospitals in terms of their location, scope of services provided, size, ownership, organizational structure, and amount of graduate medical education provided.Moreover, associated with these structural and locational differences are factors such as in-patient and payor mix, government regulations, and several non-financial factors, over which a hospital may have little or no control.Such diversity in hospital market structure makes any analysis of hospital efficiency and profitability extremely difficult to interpret. Other empirical analyses have shown that affiliation with a School of Medicine has an important influence on hospital profitability, services, and access for indigent populations (HCIA, Inc., 1997), which is comparable with the regression results of this study. LIMITATIONS AND FUTURE RESEARCH Due to rapid changes in the health care system, current models may not work two years from now.New research related to the prediction of hospitals' mergers and takeovers can be suggested.As discussed in Morck et al. (1988), insider ownership may reduce the probability of mergers and takeovers in non-health care industries.Research related to the prediction to prediction of hospitals', mergers and takeover is therefore suggested.Prediction of hospital bankruptcy is another area for future research.There might be a strong correlation between bankruptcy, payment system, location, access to health care, mergers, and acquisitions. Finally, the trend in rural hospital closures and mergers (mostly through conversion from NFP to FP status) is attracting the attention of the regulators and public citizens' groups. A full analysis of rural hospital performance and access to health care possible endogeneity of location, size and characteristics of the rural populations could not be carried out here due to data limitation.The study was constrained by the variables obtained from the Medicare Cost Report. 
Another limitation of the study is that although each of the hospitals occurred twice in the sample, this research did not correct for the repeated measure issues.Also, non-reporting hospitals may have created some selectivity bias and led to having a fewer number of rural hospitals in the data set.The non-reporting problem could be related to the going administrative and organizational changes in such hospitals. This study should be updated with more recent data to examine the effect of the Balanced Budge Act of 1997 (BBA) which includes a significant cut in Medicaid disproportionate hospital share (DSH) payments.The Balance Budget Act no doubts would affect the financial performance and profitability for hospitals with high volume of Medicaid patients. Table 1 . The Descriptive Statistics for Dependent and Independent Variables Notes: FP (NFP) denotes for-profit (not-for-profit) hospitals.The source of data is the Medicare Cost Report Data and the data were provided by HCIA, Inc. Baltimore, Maryland. Table 2 Descriptive Statistics for Dependent and Independent Variables Notes: F-P, (NFP) denotes for-profit (not-for-profit) hospitals.Rural hospitals are a proxy for hospitals with less than or equal 50 bed.Average size hospitals, has beds between 51 and 400 beds, Large Size hospitals has over 400 beds.The source of data is the Medicare Cost Report Minimum Data Set.
2018-12-05T14:03:00.519Z
2003-06-01T00:00:00.000
{ "year": 2003, "sha1": "597e753f357455c923e6268a05ef98ecbd15b7bf", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.14574/ojrnhc.v3i1.247", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "597e753f357455c923e6268a05ef98ecbd15b7bf", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
210457807
pes2o/s2orc
v3-fos-license
Super-recognisers show an advantage for other race face identification The accurate identification of an unfamiliar individual from a face photo is a critical factor in several applied situations (e.g. border control). Despite this, matching faces to photographic ID is highly prone to error. In lieu of effective training measures which could reduce face matching errors, the selection of ‘super-recognisers’ (SRs) provides the most promising route to combat misidentification or fraud. However, to date, super-recognition has been defined and tested using almost exclusively ‘own-race’ face memory and matching tests. Here, across three studies we test Caucasian participants on tests of own-race (GFMT, MFMT, CFMT) and other-race (EFMT, CFMT-C) face identification. Our findings show that compared to controls, high performing typical recognisers (Studies 1 & 2) and super-recognisers (Study 3) show superior performance on both the own-and other-race tests. These findings suggest that recruiting SRs in ethnically diverse applied settings could be advantageous. Introduction The use of face photos for accurate identity verification is critical in maintaining border security and ensuring that correct convictions occur within the criminal justice system.At border control, passport officers are required to decide whether the face of a traveller matches their passport photo, while police officers are routinely required to match the face of a suspect to poor-quality CCTV stills.In each of these cases, the target individuals are likely to be unfamiliar to the police officer or border control official.Despite this, it is now well established that matching pairs of unfamiliar faces is highly prone to error (Burton, 2009;Burton & Jenkins, 2011;Davis & Valentine, 2009;Hancock, Bruce, & Burton, 2000;Jenkins & Burton, 2011;Johnston & Edmonds, 2009;Robertson, 2018;Robertson & Burton, 2016). Notably, errors within this context may lead to travellers with fraudulent passports entering the country illegally, or innocent suspects being convicted of a crime. In addition, a number of recent experiments have found it difficult to train people to be better at facial identification, with individual differences in performance often outweighing the magnitude of improvement so that any positive effects demonstrated tend to be largest for, or restricted to, typically poor recognisers (e.g., see Robertson, Mungall, Watson, Wade, Nightengale, & Butler, 2018;and White, Kemp, Jenkins, & Burton, 2014 for work on feedback training).This difficulty in trying to improve an individual's facial recognition ability was further supported by a recent paper by Towler et al. (2019) which showed that professional facial identification training courses, which are used by agencies across the world, appear to have little or no impact on an individual's person identification performance. 
Therefore, focus has now somewhat shifted from improving the performance of typical recognisers to the selection of individuals (see Baldson, Summersby, Kemp, & White, 2018), known as super-recognisers (SRs), who naturally excel at face identification tasks as a result of a likely inherited (Wilmer, Germine, Chabris, Chatterjee, Williams, Loken, et al., 2010), face-specific (McCaffery, Robertson, Young, & Burton, 2018; Wilhelm, Herzmann, Kunina, Danthiir, Schacht, & Sommer, 2010; Yovel, Wilmer, & Duchaine, 2014) ability present in around 2% of the general population. Recent work has started to assess the processes which may underpin super-recognition, and the findings suggest that SRs may focus more on the inner features of unfamiliar faces (particularly the nose region; Bobak, Parris, Gregory, Bennetts & Bate, 2017), and may show enhanced early-stage encoding of incoming facial information (Belanova, Davis, & Thompson, 2018), compared to typical recogniser controls. Despite these advances in the assessment of the neurocognitive markers of super-recognition, the CFMT+ remains the gold standard test for SR categorisation. The CFMT+ is a Caucasian learned face memory test. Participants are asked to memorise the faces of six people, followed by a memory test (3AFC) which includes novel instances of the learned identities. However, as noted above, the critical task at border control and in criminal identification is unfamiliar face matching, which does not place any demands on memory, and indeed in the early phase of super-recognition research it was not clear whether CFMT+ SRs would also excel in matching tasks.

However, recent findings have shown that the superior face memory ability found in CFMT+ SRs does generalise to the unfamiliar face matching domain. A series of recent studies have shown significantly greater accuracy rates for CFMT+ SRs on the GFMT and the more challenging Models Face Matching Test (MFMT) compared to typical recognisers (Bobak, Dowsett, & Bate, 2016; Davis, Lander, Evans, & Jansari, 2016; Robertson, Noyes, Dowsett, Jenkins, & Burton, 2016; see also Bobak, Hancock, & Bate, 2016; Davis, Treml, Forrest, & Jansari, 2018; Noyes, Hill & O'Toole, 2018; Phillips et al., 2018 for similar findings with newly developed matching tests). In addition, recent individual difference studies have reported positive correlations of moderate strength between scores on the CFMT+ and the GFMT (e.g. McCaffery, Robertson, Young, & Burton, 2018; Verhallen et al., 2017; see Fysh, 2018; Fysh & Bindemann, 2018 for equivalent findings with the CFMT/Kent Face Matching Test). Such correlations across face matching and face memory tasks support the idea of Verhallen's f (Verhallen et al., 2017) as a common underlying mechanism for face processing, akin to Spearman's g (1927) for intelligence. In the applied context, these findings confirm that CFMT+ SRs can also excel on matching tasks and could therefore be deployed as passport checkers at border control or as officers in criminal identification units in policing.

The finding that CFMT+ SRs also excel at matching pairs of faces is important in terms of the general utility of SRs across different occupations. However, it must still be viewed with caution, because the face tasks employed in these studies (CFMT+, GFMT, MFMT) used only Caucasian faces (see Noyes & O'Toole, 2017), when in the real world, passport checkers and police officers regularly encounter faces from a wide range of ethnic groups.
Data from the 2011 UK Census (ONS, 2011) showed that six distinct ethnic groups are represented by more than one million UK citizens (i.e.White British, All Other White, Mixed, Asian, Black and with 'Other' category representing many additional ethnic groups) and an official may encounter many other non-UK ethnicities at an airport.Verifying an individual's identity from a face photo is challenging enough when the viewer and the target are from within the same ethnic group, however, due to a well-established psychological phenomenon known as the other-race effect (ORE), accurately identifying a person from a different ethnic group results in even poorer performance (see Meissner & Brigham, 2001 for a review). The ORE emerges early in development, with infants as young as nine months of age showing preferential recognition for own-race faces, while initial exposure to predominantly own-race faces, shapes adult perception and performance (Kelly et al., 2007;Meissner & Brigham, 2001;O'Toole, Deffenbacher, Valentin, & Abdi, 1994;Walker & Tanaka, 2003). A study by Meissner, Susa, and Ross (2013) demonstrated the ORE using a matching task which mirrored the passport control context, with the image pairs showing a high-quality face photo of the 'traveller' and a scanned photo-ID page from a passport.They reported the typical 20% error rate in the own-race condition (Mexican American observers/faces), which rose to 30% in the other-race condition (Mexican American observers/African American faces).In addition, findings from Megreya, White, and Burton (2011) displayed the ORE in a 1-10 matching task (UK/Egyptian Faces/Observers).Intriguingly, this study also reported moderate-to-strong correlations between accuracy rates on the own-and other-race tests for both groups (r = .60UK Observers, r = .78Egyptian Observers), although the sample size here was small (N = 26 for both groups).This suggests that participants who excelled on the own-race task were also likely to excel on the other-race task (relative to a lower mean score).Recent work by Kokje, Bindemann, and Megreya (2018) replicated both the ORE effect and the own-/other-race accuracy correlation with a larger sample (N = 74) using 1-1 matching tasks.However, they did not use the CFMT+, or the GFMT, when assessing individual differences in performance, limiting the generalisability of their findings to typical recognisers. To date, only one paper by Bate et al. (2018) has attempted to directly assess the performance of SRs on other-race face identification tests.Using a sample of 8 Caucasian SRs, Bate et al. (2018) presented participants with own and other-race face memory tests SUPER RECOGNISERS AND OTHER-RACE FACES 9 (Experiment 1), and own and other-race face matching tests (Experiment's 2 and 3).They reported that their sample of SRs did not show a performance advantage over native typical recognisers (i.e.Asian observers/Asian face tests).However, the SRs did show an advantage over the Caucasian controls on the other-race face tests, although the accuracy cost for otherrace faces remained, with no difference in magnitude compared with the control group.That is, the ORE was present in SRs, albeit from a higher baseline level of performance than controls.These are intriguing findings suggesting that SRs may be performing at the top end of a face recognition continuum rather than displaying qualitatively different cognitive processes.However, the findings from Bate et al. 
(2018) were based on a small sample of only eight SRs.

Finally, in Study 3, in order to directly assess SRs' performances on own- and other-race face tasks, we test a large sample of Caucasian super-recognisers (SRs) (N = 35) using the CFMT+, Adult Face Recognition Test (AFRT), MFMT, and the other-race EFMT-short, relative to Caucasian typical recogniser controls (N = 420). Following the process reported by Bate et al. (2018), we seek to assess whether Caucasian SRs outperform Caucasian controls on an other-race unfamiliar face matching test, and whether or not the accuracy cost associated with the identification of other-race faces is present in the SR group, and if so, to what extent.

Study 1

In Study 1, we use four established tests of face identification (CFMT-short, CFMT-Chinese, GFMT-short, MFMT-short) and a 200-item Egyptian Face Matching Test (EFMT-long; 100 match/100 mismatch trials). Here we seek to replicate previous work, outlined in the introduction, which has shown a robust correlation between the CFMT (learned face memory) and the GFMT (face matching). We also include the more challenging MFMT (face matching; highly variable male model images), as a direct correlation between this task and the CFMT and the GFMT has not been previously reported. Importantly, we also include an other-race face matching test (EFMT-long; Egyptian faces), and we assess whether this task produces an other-race accuracy cost, and whether accuracy on the own-race GFMT generalises to the other-race EFMT-long. Although the focus of this paper is on other-race face matching, we also include the CFMT-Chinese version (McKone et al., 2012) to assess cross-domain (i.e. matching/memory) and cross-race correlations (Caucasian, Egyptian, Chinese). The short version of the CFMT is used in this study, rather than the CFMT+, and therefore we cannot determine if there are any SRs in the sample. Therefore, in Study 1 we test typical recognisers (undergraduate students) only.

Method

Ethical Approval

Each study reported in this paper received ethical approval from the Ethics Committee.

Glasgow Face Matching Test (GFMT)

The GFMT (short version) consists of 40 pairs of unfamiliar Caucasian faces. The test contains an equal number of trials in which the face pairs show the same person (match condition) or two different, but similar looking, people (mismatch condition). See Figure 1 for an example image pair and Burton et al. (2010) for further details.

Models Face Matching Test (MFMT)

The Models Face Matching Test (short version) consists of 30 pairs of unconstrained, highly variable face photos of male models (15 match/15 mismatch). The MFMT is designed to be more difficult than the GFMT and, in line with the CFMT/CFMT+ distinction, is more likely to detect high-performing face matchers. See Figure 1 for an example image pair and Dowsett and Burton (2015) for further details.

200-Item Egyptian Face Matching Test, Long Version (EFMT-long)

The Egyptian Face Matching Test (EFMT-long) that we use here consists of 200 pairs of unfamiliar male Egyptian faces (100 match/100 mismatch), as seen in Figure 1 (see Megreya, White, & Burton, 2011 for further details).

Cambridge Face Memory Test (CFMT)

The Cambridge Face Memory Test (short version) is a well-established 72-item learned face recognition-memory task which increases in difficulty with the addition of within-person variability and visual noise to the image set. Figure 1 shows an example of the stimuli used in the CFMT; see Duchaine & Nakayama (2006) for further details.
CFMT-Chinese Version (CFMT-C) The Cambridge Face Memory Test -Chinese Version, follows an identical format to that described above for the CFMT with the exception that Chinese faces replace the Caucasian faces used in the original test.See McKone et al. (2012) for further details. Procedure The order of presentation of the tasks was randomised by block (unfamiliar face matching tests, face memory tests) and then by test (GFMT/MFMT/EFMT-long, CFMT/CFMT-C).On each trial on each of the face matching tests, participants were required to decide whether the face pair showed the same person or two different people.For the matching tests, each trial remained on screen until a participant made a response.For the face memory tests, participants were required to learn six target identities by viewing photos of them in three different orientations (left, forward facing, right), and to then detect photos of these identities in the presence of two foils in 3-AFC recognition trials.Recognition trials remained onscreen until participants made their response.All responses were made via keyboard key with testing sessions lasting approximately 1 hour. While research shows that accuracy on other-race tasks is poorer than own-race face tasks, here we find that EFMT-long accuracy was significantly higher than both the GFMT and MFMT performances (M = 85%, SD = 8%, Range = 60%-98%; F(1,110) = 17.29, p < .001,ηp 2 = .14for the GFMT, F(1,110) = 122.26,p < .001,ηp 2 = .53for the MFMT).This pattern is likely to be because, as mentioned above, the GFMT and MFMT consist of the most difficult items from longer test sets.This is not the case for the EFMT-long, in which the full 200 trial test was used, and so accuracy is likely to be inflated by the inclusion of a greater proportion of easy trials.A shortened version of this test, using the most difficult items from the current dataset, is therefore tested in Study 2. Individual Differences As our principal aim was to explore potential correlations between different measures, we were more concerned with avoiding Type 2 than Type 1 errors, and therefore report uncorrected statistics.As a check on the reliability of these, however, we also used the Benjamini-Hochberg procedure with a false discovery rate of 0.2 to correct for multiple comparisons, and we also report confidence intervals (see McCaffery, Robertson, Young, & Burton, 2018). Unfamiliar Face Matching (GFMT, MFMT, EFMT-long) As seen in Figure 1, there was a significant positive correlation between the GFMT and the MFMT (r(111) = .541,uncorrected p < .001,95% CI [.39, .66])with individuals who perform highly on the GFMT also performing highly on the MFMT.This correlation replicates the effect reported by Bobak, Dowsett, and Bate (2016), and shows a level of stability in matching aptitude across the GFMT and the MFMT.It further supports the use of the MFMT as a more sensitive measure of face matching ability among high performers. 
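The statistical reporting used in these individual-differences analyses (uncorrected Pearson correlations with 95% confidence intervals, checked with the Benjamini-Hochberg procedure at a false discovery rate of 0.2) can be reproduced with a short script. The sketch below is illustrative only, not the authors' analysis code, and the column names in the usage comments are hypothetical.

```python
import numpy as np
from scipy import stats

def corr_with_ci(x, y, alpha=0.05):
    """Pearson r with p-value and a Fisher-z (1 - alpha) confidence interval."""
    r, p = stats.pearsonr(x, y)
    n = len(x)
    z = np.arctanh(r)
    half = stats.norm.ppf(1 - alpha / 2) / np.sqrt(n - 3)
    return r, p, (np.tanh(z - half), np.tanh(z + half))

def benjamini_hochberg(pvals, q=0.2):
    """Boolean mask of p-values that survive the step-up procedure at FDR q."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    m = len(pvals)
    thresh = q * np.arange(1, m + 1) / m
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True
    return keep

# Usage (hypothetical): scores is a DataFrame with one row per participant and
# columns ["GFMT", "MFMT", "EFMT", "CFMT", "CFMT_C"]; compute corr_with_ci for
# every pair of columns, then pass the p-values to benjamini_hochberg(q=0.2).
```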
Importantly, participants' scores on the own-race GFMT and MFMT both correlated with the other-race EFMT-long (r(111) = .580, uncorrected p < .001, 95% CI [.44, .69] for the GFMT; r(111) = .535, uncorrected p < .001, 95% CI [.39, .65] for the MFMT). This finding extends previous research by Megreya, White, and Burton (2011), who reported a similar relationship using 1-10 face matching arrays, here using the well-established GFMT. These findings suggest that individuals who perform highly in matching pairs of unfamiliar faces from their own race are also likely to perform highly when exposed to other-race faces.

Face Recognition Memory (CFMT, CFMT-C)

Here we replicate the strong positive correlation reported by McKone et al. (2012) between performances on the own-race Caucasian CFMT and the other-race CFMT-C, r(111) = .653, uncorrected p < .001, 95% CI [.53, .75]. This finding shows that individuals with a high aptitude for the recognition of new instances of a recently learned own-race face are also likely to perform well when the target identity is from a different ethnic group.

Cross-Domain and Cross-Race Correlations

As shown in Figure 1, all of the cross-domain (matching, memory) tests correlated with each other, suggesting shared underlying mechanisms for identity verification in both matching and memory contexts. While it has previously been established that scores on the CFMT and the GFMT correlate (McCaffery, Robertson, Young, & Burton, 2018; Verhallen et al., 2017), this is the first study to show such relationships between these tests and the other-race face tasks included in the battery. Importantly, we show a significant positive correlation between the CFMT and both the own-race GFMT (r(111) = .433, uncorrected p < .001, 95% CI [.27, .57]) and the other-race EFMT-long (r(111) = .449, uncorrected p < .001, 95% CI [.29, .58] for CFMT vs. EFMT-long). That is, aptitude on a face memory test generalises to both own- and other-race unfamiliar face matching accuracy. Taken together, the findings from Study 1 do provide support for the view that a general face processing factor f (Verhallen et al., 2017) exists and supports face processing across matching and memory domains for both own- and other-race faces.

Study 2

As reported in Study 1, mean accuracy on the 200-item EFMT-long was higher than on the 40-item GFMT-short (which consists of the 40 most challenging items from the GFMT-long). Here, in Study 2, we follow the same procedure as Burton, White, and McNeil (2010) by selecting the 40 most difficult items (i.e. those with the least accurate responses) from the EFMT set used in Study 1, to create a shorter version of the task.

Participants

Forty-three Caucasian participants recruited from the University of Strathclyde School of Psychological Sciences and Health, with a mean age of 23 years (SD = 5, Range = 18-44, 11 male), took part in this study. All participants had normal or corrected-to-normal vision, each provided written informed consent, and upon completion of the study they received a course credit to reimburse them for their time.

Stimuli, Apparatus and Procedure

In this study, only the GFMT and our shortened version of the EFMT were used. In line with the GFMT, the EFMT-short used in the present study consisted of 40 trials (20 match/20 mismatch). These 40 EFMT pairs represented the 40 most difficult pairs as measured by EFMT item error rates in Study 1, from a sample of fifty-two participants. The tasks were presented on a Dell PC, task order was counterbalanced, and trial order was randomised across participants.
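The short form is built by ranking the EFMT-long pairs by their Study 1 error rates and retaining the 40 hardest. A minimal pandas sketch of that selection step follows; the column names, the explicit per-condition balancing (20 match, 20 mismatch), and the toy values are assumptions made for illustration.

```python
# Illustrative sketch of the EFMT-short item selection described above: rank
# the EFMT-long pairs by their Study 1 error rates and keep the hardest items.
import pandas as pd

def build_short_form(item_stats: pd.DataFrame, n_per_condition: int = 20) -> pd.DataFrame:
    """Keep the most error-prone items, n_per_condition from each condition."""
    return (
        item_stats
        .sort_values("error_rate", ascending=False)   # most difficult first
        .groupby("condition")
        .head(n_per_condition)                        # hardest items per condition
        .sort_values(["condition", "error_rate"], ascending=[True, False])
    )

# Toy example with four items (one hardest pair kept per condition).
items = pd.DataFrame({
    "item_id": [1, 2, 3, 4],
    "condition": ["match", "match", "mismatch", "mismatch"],
    "error_rate": [0.40, 0.10, 0.55, 0.05],
})
efmt_short = build_short_form(items, n_per_condition=1)
```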
Task Accuracy

Mean accuracy on the shortened version of the other-race EFMT was 74%, significantly lower than on the own-race GFMT (81%), F(1, 42) = 19.63, p < .001, ηp² = .32. As seen in Figure 2, accuracy on the EFMT-short was lower in both the Match and Mismatch conditions (F(1, 42) = 5.02, p = .03, ηp² = .11 for Match; F(1, 42) = 8.83, p = .005, ηp² = .17 for Mismatch), and, in line with the GFMT, accuracy rates did not differ between EFMT-short Match and Mismatch conditions, F(1, 42) = 2.15, p = .15, ηp² = .05. We note here that although the EFMT-short produced lower accuracy rates than the GFMT, without the inclusion of an Egyptian sample of participants we cannot say conclusively that our EFMT-short produces an other-race effect on accuracy. It could be the case that the EFMT-short items are simply more difficult than the GFMT items (we thank Reviewer 3 for bringing this to our attention). However, 75% of the items used in our EFMT-short were also included in a longer test by Kokje, Bindemann, and Megreya (2018), and analysis of the dataset which isolated our EFMT-short items revealed that the mean accuracy rate for the Egyptian observers was 78%, that is, 4% more accurate than our Caucasian observers. This suggests that, should Study 2 be replicated with the inclusion of an Egyptian sample, it would be likely that the EFMT-short would generate an other-race effect on accuracy rates. Even were it the case that this data was not available from Kokje, Bindemann, and Megreya (2018), the EFMT-short would still provide a valid measure with which to assess between-group differences in identification accuracy using own- and other-race faces, as we do in Study 3.

Individual Differences

Here we replicate the findings from Study 1 with a significant positive correlation between overall scores on the GFMT and the EFMT-short (r(43) = .454, uncorrected p = .002, 95% CI [.18, .66]), again showing consistency in performance across own-race and other-race unfamiliar face matching tests. In addition, significant correlations were found across the tests when the match and mismatch trials were analysed separately (r(43) = .532, uncorrected p < .001, 95% CI [.27, .71] for Match trials; r(43) = .390, uncorrected p = .010, 95% CI [.10, .61] for Mismatch trials). These correlations remained significant after applying both the Bonferroni and Benjamini-Hochberg corrections.

Study 3

All tasks were self-paced, task order and trial order were randomised across participants, and feedback scores were provided at the end of the study.

Group Comparisons

For the typical recogniser control group, mean accuracy rates on the tasks were: 80% for the CFMT+ (SD = 11%, Range = 46%-92%), 75% for the AFRT (SD = 9%, Range = 40%-95%), 91% for the GFMT (SD = 7%, Range = 58%-100%), 83% for the MFMT (SD = 9%, Range = 53%-100%), and 86% for the EFMT-short, the other-race face matching task (SD = 8%, Range = 55%-100%). Mean performance on each of these tests is around 8%-10% higher than previously published norms, which is likely to be due to a recruitment bias in which those likely to take part in this study have an interest in superior face recognition ability. Importantly, these results replicate our findings from Study 2, with poorer performance on our newly established short version of the other-race EFMT in comparison to the own-race GFMT, F(1, 359) = 118.07, p < .001, ηp² = .25. This confirms that the EFMT-short is challenging enough to produce an unfamiliar face matching other-race effect.
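As a quick arithmetic check on the effect sizes quoted above, partial eta squared can be recovered from an F statistic and its degrees of freedom as ηp² = (F × df1) / (F × df1 + df2). The short sketch below reproduces the reported values from the reported F statistics; the formula is the standard relation for this design and is not taken from the paper itself.

```python
# Consistency check on the reported effect sizes: partial eta squared can be
# recovered from an F statistic and its degrees of freedom as
# eta_p^2 = (F * df1) / (F * df1 + df2). The F values below are those quoted
# in the text.
def partial_eta_squared(f: float, df1: int, df2: int) -> float:
    return (f * df1) / (f * df1 + df2)

print(round(partial_eta_squared(19.63, 1, 42), 2))    # 0.32  EFMT-short vs GFMT (Study 2)
print(round(partial_eta_squared(5.02, 1, 42), 2))     # 0.11  Match condition (Study 2)
print(round(partial_eta_squared(8.83, 1, 42), 2))     # 0.17  Mismatch condition (Study 2)
print(round(partial_eta_squared(118.07, 1, 359), 2))  # 0.25  EFMT-short vs GFMT (controls, Study 3)
```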
It is important to note that while SRs display enhanced accuracy on the EFMT-short in comparison to controls, in line with the controls the SRs still performed less accurately on the other-race EFMT-short (94%) compared to the own-race GFMT (97%; t(34) = 2.67, p = .012 for the difference). For the SRs, the mean difference in accuracy between the EFMT-short and GFMT was 3%, which was not significantly smaller than the 5% effect reported between the tests for the typical recogniser controls, t(393) = -1.48, p = .141. However, this could again be due to the recruitment bias in the control group outlined above, and when the size of the SR difference in accuracy between the own- and other-race tests (3%) was compared to the typical recognisers recruited for Study 2 (7%; students), the magnitude of the SR cost was found to be significantly smaller, t(76) = -2.33, p = .022. We note again that our claim that the EFMT-short produces an other-race task cost should be replicated in a fully crossed design which includes native Egyptian observers.

Super-Recogniser Group

In contrast to the typical recogniser group, and as expected, there were no correlations between the CFMT+ and any of the other tests (all ps > .076), a consequence of selecting SRs on the basis of their CFMT+ scores, thus removing most of the variance from that set of scores that would allow for an individual differences analysis.

Superior Performance Across All Tests

Although the majority of SRs did produce scores above mean control performance across tasks, it is important to note that 3 SRs scored below the control mean on the GFMT, 4 SRs scored below the control mean on the MFMT, and 2 SRs scored below the control mean on the EFMT-short. That is, it is not the case that all SRs, as categorised by the CFMT+ and the AFRT, will always show superior performance on other facial identification tasks. Moreover, if we apply the conservative CFMT+ criterion for super-recognition (i.e. ≥ 2 SDs above the control mean) to the other tests then, as seen in Figure 3, 16/35 SRs achieved this for the GFMT, 3/35 for the MFMT, and 9/35 for the EFMT-short. Out of the sample of 35 SRs, only 1 participant achieved scores of 100% across each of the three face matching tests. This has implications in terms of the types of tests that should be used to categorise SRs for specific occupations, as outlined in the general discussion.
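The individual-level criterion used above treats a score as super-recogniser level on a given test when it falls at least 2 standard deviations above the control mean for that test. A small sketch of that rule follows; the accuracy values are invented for illustration and only the 2 SD cut-off itself comes from the text.

```python
# Illustrative sketch of the individual-level criterion discussed above: a
# score counts as super-recogniser level on a test if it falls at least two
# standard deviations above the control mean for that test.
import numpy as np

def sr_cutoff(control_scores: np.ndarray, n_sd: float = 2.0) -> float:
    """Control mean plus n_sd control standard deviations."""
    return control_scores.mean() + n_sd * control_scores.std(ddof=1)

def count_meeting_criterion(sr_scores: np.ndarray, control_scores: np.ndarray) -> int:
    """How many SR scores sit at or above the cutoff for this test."""
    return int((sr_scores >= sr_cutoff(control_scores)).sum())

controls = np.array([0.78, 0.82, 0.85, 0.80, 0.76])   # toy control accuracies
srs = np.array([0.95, 0.88, 0.99])                    # toy SR accuracies
n_super = count_meeting_criterion(srs, controls)
```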
General Discussion

Across three studies we demonstrate a consistent performance cost for other-race face identification, both in the context of recognition memory (Study 1; CFMT/CFMT-C) and, importantly, in unfamiliar face matching (Studies 1-3; GFMT, MFMT, EFMT). We show that Caucasian SRs do outperform Caucasian controls on an other-race face matching test, but that an other-race accuracy cost remains in that group.

Study 1 is, to our knowledge, the first to assess cross-domain matching/memory performance on own- and other-race tasks using this battery of well-established (CFMT, CFMT-Chinese, GFMT, MFMT) and novel (EFMT-long) tests in a single, well-powered sample. The findings from Study 1 replicate previous work showing consistency in performance on the CFMT and GFMT (McCaffery, Robertson, Young, & Burton, 2018; Verhallen et al., 2017), and we extend this to the more challenging MFMT. The latter effect supports the idea that individuals who excel on the CFMT and GFMT are also likely to fare well in more ecologically valid tasks which contain highly variable face photos (i.e. the MFMT). Most importantly, we show that performance on the CFMT and GFMT correlates with scores on the EFMT-long. This suggests that performing well on own-race face memory/matching tasks is likely to result in superior performance when an individual encounters faces from outwith their own ethnic group (Kokje, Bindemann, & Megreya, 2018; McKone et al., 2012; Megreya, White, & Burton, 2011; Meissner, Susa, & Ross, 2013). This finding, along with the other cross-domain correlations (e.g. CFMT-C vs. GFMT), adds further support to the idea that both face matching/memory and own/other-race face processing may tap the same underlying cognitive and perceptual processes, which Verhallen et al. (2017) have termed f, a general face perception factor (analogous to Spearman's g in the study of intelligence; Spearman, 1927), which may be distinct from non-face cognitive abilities (McCaffery, Robertson, Young, & Burton, 2018; Wilhelm et al., 2010). However, while Verhallen et al. (2017) used a variety of face tests to assess the potential for a general face factor f, further work, including a variety of object-based and other non-face tasks, is required to assess whether this factor is indeed specifically indicative of individual differences in face processing.

Having assessed cross-domain performance in typical recognisers in Study 1 and verified our 40-item EFMT-short in Study 2, in Study 3 we used a battery of tests to assess own- and other-race face identification in a set of Caucasian SRs in comparison to Caucasian controls. The findings showed that while there was an SR advantage for accurately matching pairs of other-race faces, with an 8% increase in mean performance over controls, SR accuracy on the other-race EFMT-short was still lower than scores on the own-race GFMT.

These findings support the recent work by Bate et al. (2018), which also showed that SRs outperformed typical recognisers on other-race face tests, but that an accuracy cost or ORE remained evident in the SR group. Both the study by Bate et al. (2018) and the present findings provide support for the view that SRs are displaying performance at the top end of a face recognition continuum, rather than engaging qualitatively different cognitive and perceptual processes. One limitation of the present study was that it did not include native Chinese (Study 1) or Egyptian (Studies 1 & 3) control groups; therefore, we were not able to test whether SRs would outperform native observers. However, Bate et al. (2018) did include native control groups, and they found that while Caucasian SRs outperformed Caucasian controls on other-race tests, the native observers (e.g. Asian observers on an Asian face test) outperformed both of these groups. This suggests that while employing a Caucasian SR at border control may lead to greater detection of fraud attacks by other-race travellers, a native observer who shares the fraudster's ethnic group would outperform that SR. Again, the sample size used in the study by Bate et al. (2018) was small, so further work should seek to test this native observer vs. SR advantage.
The persistence of an ORE in SRs in the study by Bate et al. (2018), and the other-race accuracy cost reported in this paper, are consistent with the idea that SRs represent the top end of a face recognition continuum, rather than a qualitatively distinct ability. Bobak, Parris, Gregory, Bennetts, and Bate (2016) used eye-tracking to assess face processing in SRs, typical recognisers and individuals with congenital prosopagnosia, and found that SRs spent a greater proportion of their time on the inner features of a face, particularly the nose region, when viewing social scenes. It could be this change in the time spent on the internal features of a face that is driving the SR advantage for other-race face identification. A series of studies has shown that the ORE may result from failing to direct attention to those features, such as the nose region, of an other-race face that are likely to provide the most diagnostic information for accurate identity perception (Hills, Cooper, & Pake, 2013; Hills & Lewis, 2006; Hills & Pake, 2013). Therefore, it could be the case that SRs are naturally attuned to deploy their attention more efficiently, and for longer, to central regions of the face, which leads to greater identification accuracy for both own- and other-race faces. This could explain the greater accuracy on the EFMT-short in the SR group relative to controls, and the smaller magnitude of the SR EFMT-short cost (3%) compared to typical recognisers (7% in Study 2; a non-significant 5% in Study 3).

An important consideration in terms of the applied potential of our findings relates to the fact that, within SR research, group-level analyses (i.e. SRs vs. typical recognisers) can mask the fact that not all SRs, as categorised by scores on the CFMT+, always outperform typical individuals on other tests of face processing (Davis, Lander, Evans, & Jansari, 2016; Noyes, Hill, & O'Toole, 2018). In both Study 1 and Study 2, we replicate the correlation between the CFMT+ and the GFMT reported by previous studies. This correlation suggests that CFMT+ SRs are also likely to perform above average in occupations where unfamiliar face matching is the critical task (i.e. passport control officer). However, these correlations are in the moderate range; it is therefore the case that not all CFMT+ SRs are likely to be 'super-face-matchers', and so additional tests would need to be performed in conjunction with the CFMT+ before an individual could be considered a suitable SR candidate for roles in which face matching is the critical task. Similarly, as outlined in Study 3, and as seen in Figure 3, not all CFMT+ SRs, or indeed high performers on the GFMT, showed outstanding performance on the EFMT. Therefore, in the applied context, professions which are seeking to recruit SRs should employ a battery of tests to assess their suitability for the specific role (see Bate et al., 2018; Ramon, Bobak, & White, 2019). It is not the case that selecting SRs on the basis of CFMT+ scores will ensure that each of these individuals will excel at unfamiliar face matching, or indeed other-race unfamiliar face matching.
In conclusion, our findings of consistent associations in accuracy across face processing domains (matching/memory) and race add weight to the notion that these processes may be served by the same underlying mechanism, or f, a general face perception factor. SRs as a group, and to a large extent at the individual level, outperform typical recognisers from the SR's own race on a test of other-race face matching, with an other-race accuracy cost remaining evident in this group. This SR advantage for other-race faces may be driven by more efficient attentional allocation to central regions of the face, particularly the nose, which are likely to provide greater diagnostic information for identity perception (Bobak, Parris, Gregory, Bennetts, & Bate, 2016; Hills, Cooper, & Pake, 2013; Hills & Lewis, 2006; Hills & Pake, 2013). Finally, police forces, border control agencies and private organisations who seek to select and employ SRs must include other-race face tasks in their assessment battery to ensure, at the individual level, that the people they select also excel in verifying the identities of individuals from outside their own ethnic group. In doing so, this would provide an effective addition to counter-measures which are designed to reduce fraud attacks at passport control, and wrongful criminal convictions.
Figure 1. Correlation matrix for the five face identification tests used in Study 1.

Figure 2. Mean accuracy for the GFMT and the shortened version of the EFMT (40 trials), and separately, their match and mismatch conditions; * p < .05, ** p ≤ .005.