Semantic Norm Recognition and its application to Portuguese Law

Being able to clearly interpret legal texts and to fully understand our rights, obligations and other legal norms has become progressively more important in the digital society. However, simply giving citizens access to the laws is not enough, as there is a need to provide meaningful information that caters to their specific queries and needs. For this, it is necessary to extract the relevant semantic information present in legal texts. Thus, we introduce the SNR (Semantic Norm Recognition) system, an automatic semantic information extraction system trained on a domain-specific (legal) text corpus taken from Portuguese Consumer Law. The SNR system uses the Portuguese BERT (BERTimbau) and was trained on a legislative Portuguese corpus. We demonstrate how our system achieved good results (81.44% F1-score) on this domain-specific corpus, despite existing noise, and how it can be used to improve downstream tasks such as information retrieval.

Introduction

Nowadays many countries have their laws available online. For example, all the norms and laws of the Portuguese Republic are published by the Official Portuguese Gazette (Diário da República), available online at DRE.pt, under the curation of a state-owned company, INCM (Imprensa Nacional Casa da Moeda). It works as a public service with universal and free access to its content and functionalities. The current search process enables regular citizens to search for legislation, but it is based on traditional keyword matching algorithms. The appearance of transformer-based language models has shifted the Natural Language Processing (NLP) area towards semantic models such as Sentence-BERT, capable of determining whether two sentences have a similar meaning even if they do not share the same words.
However, the problem with these models is that they require a large dataset of examples to be fine-tuned to a particular domain, which might not be possible in a service such as DRE, where every month more than a thousand new digital juridical texts are published. One example of an NLP approach to automatically extract semantic information from legal texts is the work of Humphreys et al. (2020). The authors extracted definitions and deontic concepts (such as rights and obligations) from legislation, and showed that capturing these legal concepts, and the relationships between the terms that represent such concepts, helps information retrieval. Similarly to Humphreys et al., we hypothesize that the extraction of definitions, norms and semantic relations from legal articles would be beneficial to downstream tasks. By extracting all this information, laws would be represented in a formalism that could then be used to infer knowledge and help build and train NLP-based systems, such as information retrieval and question answering systems. These systems would help citizens to better understand and navigate the legal domain. Thus, the main research questions this article aims to answer are the following:

• RQ1: How to create a system capable of automatically extracting and representing norms, entities and semantic relations from Portuguese legal texts?

• RQ2: Can the automatic extraction of norms, entities and semantic relations from text improve downstream tasks such as information retrieval?

This work focuses on the extraction of several concepts from Portuguese legislation, in particular from a subset of it, the Portuguese Consumer Legislation.
These concepts include legal norm types (like definitions, rights, obligations, and so on), several types of named entities (like legal references or administrative bodies), and some semantic roles (such as who is obligated, and what that person is obligated to, among others) that are defined within those categories. Some of this information is similar to that given in a Semantic Role Labelling (SRL) task, which provides information regarding "who" did "what" to "whom", "when" and "where". Additionally, just like Named-Entity Recognition (NER) extracts named entities, we want to extract norm types, named entities and semantic roles. Thus, we treat this as a NER classification task. Also, keeping in mind that some of the concepts we want to extract can be nested within each other, i.e. norm types include both entities and semantic roles, we treat it as a nested NER task. Thus, since the type of information we are extracting involves not only semantic roles (as in SRL), but also named entities (as in NER), we name this domain-specific task Semantic Norm Recognition (SNR).

Related Work

The NER task can be defined as the extraction and classification of two structural types of entities: flat entities and nested entities. Most existing NER models have focused only on flat NER. Some of these works involve applying linear-chain conditional random fields (Lafferty et al., 2001) and semi-Markov conditional random fields (Sarawagi and Cohen, 2004). Still, there are a few models that deal with extracting nested entities along with flat entities. Lu and Roth (2015) proposed a hypergraph model to detect nested entities, which had a linear time complexity. Muis and Lu (2018) improved Lu and Roth's model by proposing a multigraph representation based on mention delimiters. They assigned tags between each pair of consecutive words, preventing the model from learning spurious structures.
These correspond to entities that overlap each other and that are grammatically impossible. In recent years, we have seen an increasing number of neural models targeting nested NER as well. Katiyar and Cardie (2018) further extended Muis and Lu's approach by creating a model that learns the hypergraph representation directly for nested entities. This is done by utilizing features extracted from an adaptation of a Long Short Term Memory Neural Network (LSTM). Fisher and Vlachos (2019) adopted an approach that decomposes nested NER into two stages. Firstly, the boundaries of the named entities at all levels of nesting are identified. Secondly, based on the resulting structure, embeddings for each entity are generated by merging the embeddings of smaller entities/tokens from previous levels. Yu, Bohnet and Poesio (2020) proposed a model based on the dependency parsing model of Dozat and Manning (2016) to provide a global view on the input via a biaffine model. Not only did they use a Bidirectional Long Short Term Memory Neural Network (BiLSTM) to learn word representations, they also used two Feed Forward Neural Networks (FFNNs) to generate representations for the possible start and end of spans, which they then classify with the biaffine classifier. The authors argued that using different representations for the start and end of spans, allowing the model to learn these representations separately, resulted in an increase in accuracy when compared to previous systems that did not adopt this idea. They claim that the increase in accuracy comes from the fact that the start and end of spans have different contexts, so their learning should be done separately. The authors also showed how their model outperformed the previous ones, mentioned above, on the ACE 2005 dataset. They also evaluated their model on seven other datasets, including ACE2004, GENIA, and ONTONOTES, among others. For all these datasets, their model achieved the best results. Humphreys et al.
(2020) tackled the problem of automating knowledge extraction from legal text, and how to use such knowledge to populate legal ontologies. They built a system based on NLP techniques and post-processing rules derived from domain-specific knowledge. This system can be divided into two main components. The first is a Mate Tools semantic role labeler (Björkelund et al., 2009). This component is responsible for extracting an abstract semantic representation, along with dependency parse trees. The second is a set of rules responsible for recognizing norms and definitions, classifying their norm type, and mapping arguments in the semantic role tree to domain-specific slots in the legal ontology. Additionally, Neda and Mark (2018) proposed an information extraction process to build a legislation network. Due to the structured nature of their legal corpus, they were able to apply NER by defining clear rules to extract the relevant entities. Finally, Ruggeri et al. (2021) proposed a data-driven approach for detecting and explaining fairness in legal contracts for consumers. They applied a Memory-Augmented Neural Network (MANN, Sukhbaatar et al. (2015)) in order to incorporate external knowledge (a set of legal explanations) when classifying legal clauses.

Dataset

As mentioned above, the goal of this work is to create an automatic information extraction system, named SNR, capable of extracting relevant concepts (such as norms, semantic roles and named entities) from a specific corpus that contains the Portuguese Consumer Law. Since we are dealing with a classification problem, we need a dataset with the laws and corresponding gold labels. For this reason, the corpus was annotated so we could generate the needed dataset for our task. This section covers the creation of the dataset, with a brief description of the corpus, the annotation process and the tools used.
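To make the classification setting concrete, each annotated segment pairs the legal text with gold labels from three groups (norms, named entities and semantic roles), where spans from different groups may nest. A minimal sketch of what such a labelled example could look like (the text, offsets and segment ID below are invented for illustration, not taken from the actual dataset):

```python
# Invented example of a labelled segment. Each span is
# (start_token, end_token, label, group), with inclusive token indices.
# Spans from different groups may nest, e.g. a semantic role inside a norm.
segment = {
    "id": 9999,  # hypothetical segment ID
    "tokens": "O consumidor tem direito à reparação do bem .".split(),
    "spans": [
        (0, 8, "right", "norm"),        # the whole segment is a 'right' norm
        (1, 1, "experiencer", "role"),  # who holds the right
        (4, 7, "action", "role"),       # the content of the right
    ],
}

def is_nested(inner, outer):
    """True if `inner` lies completely within `outer`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

# Every role span here is nested inside the norm span.
norm = segment["spans"][0]
assert all(is_nested(s, norm) for s in segment["spans"][1:])
```

This nesting of groups is what later motivates treating the problem as nested NER.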
Corpus

The SNR system uses a subset of legal articles from the Portuguese Consumer Legislation. These legal texts were made available by INCM and were segmented at the level of the article number, containing in total 5,600 segments and 341,392 tokens. The segmentation was done at the level of the article number because this was considered the level that gives the necessary context to extract and annotate the relevant semantic information, as an article number often corresponds to a full sentence. Thus, each segment, roughly speaking, corresponds to one number of an article of a legal act. For example, "1 - É revogada a Lei n.º 29/81, de 22 de agosto." (1 - The law n.º 29/81, from August 22, is revoked.) is the segment with ID 5839, and corresponds to number 1 of article 24 of the Decree-Law no. 47/2014, dated from the 28th of July. Each segment is associated with a unique identifier (segment ID) and other metadata regarding the legal text from which the segment was extracted. This information includes the number of the legal act, its publishing date, chapter, paragraph number, and so on.

Annotation Procedure

In the context of this work, the goal of the SNR system is to extract the norm types, the semantic roles and the named entities present in the legal text. Thus, the corpus must be annotated with these concepts. The relevant norm types were chosen based on the concepts mentioned in Humphreys et al. (2020) and on a preliminary analysis of the corpus. They consist of norms conveying the deontic constructs of obligations (oblig) and rights (right); articles presenting legal definitions (def); statements about the entry into force or the revocation of specific laws (leffect); and residual introductory articles (intro), framing the context, or stating the purpose or the domain of application of the current legal act.
Named entities (NE) include domain-specific NE, such as legal document references (lref), like the law mentioned in the example above, and other textual references (tref), often referring to other articles/sections/items within the current legal act. The latter require anaphora resolution (not considered at this stage). Other, more typical, NE were also tagged, such as time expressions like those denoting dates (time date), duration (time duration) and frequency (time freq). A distinction between absolute (time date abs) and relative dates was considered, the latter requiring some temporal anaphora resolution (not considered at this stage). The relative dates may refer either to the moment of utterance (= date of publication), which is rare, or to an event already mentioned in the text (time date rel text). Considering their frequency, only this latter type and the duration type were included in the model. The names of administrative bodies (ne adm) and other organizations (ne org) are also considered, including office titles (ne office). As for the semantic roles (SR) included in the task, some are specific to a given norm type, such as those involved in definitions, e.g. definiendum, for the concept to be defined; definiens, for the definition proper; and scope, delimiting the domain of application of a definition. Other SR denote commonly occurring circumstantial events, such as condition, purpose, concession and exception. For the right and oblig norm types, which may arguably be considered the most important concepts to be captured from the legal texts, specific roles were defined for the experiencer, that is, the person/entity holding the right or having the obligation, and for the action, i.e. the very content of that right and/or obligation. Finally, negation expressions (neg), which reverse the polarity of the deontic value attributed to the action by either the obligation (oblig) or the right norm type, were explicitly annotated.
The list of all chosen concepts can be seen in Table 1; they were annotated using the Prodigy tool. A set of guidelines was produced to precisely define and abundantly exemplify all the relevant concepts involved, and to provide specific orientations for more complex or problematic cases. The annotation was then carried out by a team of linguists following these guidelines. This process was developed in two main steps. In the first step, each segment as a whole was tagged with its corresponding norm type. Then, in the more granular second step, the named entities within each segment were delimited and classified and, finally, the semantic roles were identified and tagged. At each step, a pilot annotation was carried out to train the annotators and to assess the difficulties posed by the texts, or by inconsistencies and lacunae found in the guidelines, which were accordingly revised. In the end, inter-annotator agreement was calculated. An average Fleiss Kappa coefficient of 0.79 was found for the first step (norm type classification), which can be interpreted as "substantial" or "strong". For the second step, the task was conceptually much more complex and, in spite of the guidelines, several inconsistencies were found among the annotators, especially for some semantic relations and for some complex time-related named entities. Also, some semantic roles had been missed. These situations introduced some noise in the dataset. A systematic revision was then undertaken to correct the noisier labels and to improve the overall quality of the annotation. For example, several def-inclusion spans, which correspond basically to an enumeration of items within a definition, had been incorrectly assigned as alternative definitions (definiens) of the same concept (definiendum); the opposite case was also found in the dataset.
In other cases, several lexical cues and some patterns of co-occurring semantic roles were investigated and, then, missing or inconsistent annotations were corrected. For example, several instances of the compound conjunction "desde que" ('as long as') introducing a condition SR span had been missed, hence precluding the identification of those spans. Patterns like these were systematically revised and corrected. As a result of this correction phase, the dataset went from an initial number of 34,178 marked spans to a final total of 36,711.

Semantic Norm Recognition System

In this section we describe our SNR system and all the implemented approaches.

Baseline SNR System

The purpose of this work is to create a system capable of identifying norm types, semantic roles and named entities present in legal texts. These concepts can appear nested, which means, for example, that there can be sentences where an experiencer is inside an action (both semantic roles). Once the annotation procedure was done, we saw that we could have more than one label for the same span, with each label belonging to a different group of spans (either a norm, a semantic role or a named entity). For example, a certain span could be not only an experiencer (semantic role), but also a ne adm (named entity). These, and only these, cases correspond to a multi-label problem. A span that is labeled with a concept of a certain group (either norms, semantic roles or named entities) will never be labeled with another concept of the same group. For example, a certain span that is labeled as experiencer will not have any other semantic role associated with it. Thus, for each group we have a multi-class problem.
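This per-group property can be checked mechanically: group the labels by (span, group) pair and verify that no pair carries more than one label. A minimal sketch with invented annotations:

```python
from collections import defaultdict

# Invented annotations: (start, end, label, group). The same span may carry
# labels from different groups, but never two labels from the same group.
annotations = [
    (2, 5, "experiencer", "role"),
    (2, 5, "ne_adm", "entity"),  # same span, different group: allowed
    (0, 9, "oblig", "norm"),
]

def groups_are_multiclass(annotations):
    """True if no (span, group) pair carries more than one label."""
    labels = defaultdict(set)
    for start, end, label, group in annotations:
        labels[(start, end, group)].add(label)
    return all(len(ls) == 1 for ls in labels.values())

assert groups_are_multiclass(annotations)
```

Adding a second semantic role to the span (2, 5) would make the check fail, which is exactly the situation the three-model decomposition below rules out by construction.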
In order to make sure that we were dealing only with a multi-class problem for all concepts, and not a multi-label problem, we created an information extraction system composed of three models: (i) the Norms Model, responsible for predicting the norms; (ii) the Named Entities (NE) Model, responsible for predicting the named entities; and (iii) the Semantic Roles (SR) Model, responsible for predicting the semantic roles. All models have the same architecture, but are trained to learn different types of labels (norms, named entities and semantic roles, respectively). The overview of the SNR system is shown in Figure 1. The architecture of each model is based on the dependency parsing model of Yu, Bohnet and Poesio (2020), with a small difference regarding the embeddings used, as we can see in Figure 2: we use only BERTimbau (Souza et al. (2020)) as word embeddings. BERTimbau is the Portuguese version of BERT. As BERT is the state-of-the-art for word embeddings, we decided there was no need to use fastText. Regarding char-CNN, these character embeddings are mostly used to deal with emoticons and misspelled words. Since we are dealing with legislative text, which is subject to intense scrutiny and careful editing prior to its publication, misspelled words will be rare. Therefore, this type of embeddings was not included in our system. Similarly to Yu et al. (2020), we fed our embeddings into a BiLSTM to learn the word representations, and applied two FFNNs to the word representations generated by the BiLSTM, in order to learn different representations for the start and the end of the spans. Each FFNN computed representations h_s and h_e, for the start and the end of the entities, respectively.
Then, we applied a biaffine classifier (Dozat and Manning (2016)) in order to generate a scoring tensor r, of size l × l × c, over the input sentence, where l corresponds to the sentence length and c to the number of entities (norms, for the Norms Model; semantic roles, for the Semantic Roles (SR) Model; and named entities, for the Named Entities (NE) Model), plus one to represent non-entities. The score for each span i corresponds to:

r(i) = h_s(i)^T U h_e(i) + W (h_s(i) ⊕ h_e(i)) + b

with U a d × c × d tensor, W a 2d × c matrix and b the bias. All the valid spans (spans whose end is after their start) are scored by the tensor. Then, the entity label with the highest score is assigned to each span:

y(i) = arg max_c r(i, c)

After having all spans and their possible labels, just like Yu et al. (2020), we also performed a final post-processing step to make sure that there were no nested entities whose boundaries clash. For example, the spans "the article number 5 defines an" and "article number 5 defines an obligation to consumers" clash with each other (their boundaries overlap without one being nested in the other), and so in this situation the selected span would be the one with the higher score. The model's learning goal is to assign the right category (including non-entity) to each valid span. Being a multi-class classification problem, we optimised the model with softmax cross-entropy.

Full Norm Dependency SNR System

In the previous section we described the Baseline SNR system. It was built in order to learn to predict all the information that was annotated. However, the annotation itself was divided into two levels: first, identifying the norm types for the whole corpus; and then, for those norms, identifying the remaining concepts. We thus decided to create another version of the system, which we denote by Full Norm Dependency SNR System, as shown in Figure 3, in order to try to replicate the annotation process and the human rationale behind it.
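The boundary-clash post-processing used when decoding the scored spans can be sketched as a greedy selection: rank candidate spans by score and keep a span only if it does not partially overlap a higher-scoring kept span (full nesting is allowed). A minimal sketch with invented spans and scores:

```python
def clashes(a, b):
    """True if two (start, end) spans partially overlap, i.e. their
    boundaries cross without one being nested inside the other."""
    (s1, e1), (s2, e2) = a, b
    return (s1 < s2 <= e1 < e2) or (s2 < s1 <= e2 < e1)

def decode(scored_spans):
    """scored_spans: list of ((start, end), label, score). Greedily keep
    the highest-scoring spans whose boundaries do not clash with any
    already-kept span; nested spans survive."""
    kept = []
    for span, label, score in sorted(scored_spans, key=lambda x: -x[2]):
        if all(not clashes(span, k[0]) for k in kept):
            kept.append((span, label, score))
    return kept

# Invented candidates: the span clashing with a higher-scoring one is
# dropped, while the nested span is kept.
cands = [((0, 6), "oblig", 0.9), ((2, 8), "action", 0.4), ((1, 3), "lref", 0.7)]
out = decode(cands)
assert ((2, 8), "action", 0.4) not in out and len(out) == 2
```

This greedy decode mirrors the selection rule stated above (keep the higher-scoring span on a clash), but the exact tie-breaking in the original system is not specified in the text.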
The Full Norm Dependency system is very similar to the Baseline system, the key difference being that the system tries to learn to identify the named entities and semantic roles knowing the norms that are present in those segments. The rationale behind this strategy is that predicting the norm type first (first phase) would allow the system to focus on predicting the semantic roles and named entities for that particular norm, making the second phase easier. This was also the strategy adopted in the (human) annotation process. This system starts by generating the word embeddings using BERTimbau, just like the Baseline system. Afterwards, it uses those embeddings as input for the Norms Model. The model then makes its predictions regarding the norms, returning all the predicted norms. After this, these predicted norms are added into the segments and fed into our feature extractor (BERTimbau) to generate another set of embeddings. For example, for the segment "1 - É revogada a lei ... no dia anterior." (The law is revoked ... on the previous day), if the Norms Model predicted the norm type leffect spanning from token 2 ("É") to token 22 ("anterior"), then instead of feeding BERTimbau the original segment, we feed it "1 - ileffect É revogada a lei ... no dia anterior fleffect". In short, we added ILabel and FLabel to represent the start and the end of the norm, respectively, with Label being the label of the corresponding norm. After generating these embeddings, we feed them to two models responsible for predicting the named entities and the semantic roles, respectively. These two new models differ from the Named Entities and Semantic Roles Models of the baseline approach, since they are trained with more information (the norm types) than those two models from the baseline, which had only been trained with the original segments.
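The marker insertion described above can be sketched as a small helper that wraps each predicted norm span with its ILabel/FLabel tokens (whitespace tokenisation and the shortened segment below are simplifications for illustration):

```python
def add_norm_markers(tokens, norm_spans):
    """tokens: list of strings; norm_spans: list of (start, end, label)
    with inclusive token indices. Returns the tokens with iLABEL / fLABEL
    markers wrapped around each norm span, as fed to BERTimbau in the
    second phase."""
    out = list(tokens)
    # Insert from the rightmost span so earlier indices stay valid.
    for start, end, label in sorted(norm_spans, key=lambda x: -x[0]):
        out.insert(end + 1, f"f{label}")
        out.insert(start, f"i{label}")
    return out

# Shortened, invented version of the leffect example from the text.
tokens = "1 - É revogada a lei X .".split()
marked = add_norm_markers(tokens, [(2, 7, "leffect")])
assert marked == "1 - ileffect É revogada a lei X . fleffect".split()
```

In the actual system the marked text is re-tokenised by BERTimbau's own tokenizer before the second-phase embeddings are generated; the helper above only illustrates where the markers land.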
Let us denote these two new models by Named Entities Norm Dependent (NEND) Model and Semantic Roles Norm Dependent (SRND) Model, respectively. Finally, like in the Baseline system, the predictions of each model are concatenated together in order to obtain the final classification. As we mentioned before, the system uses the predicted norms as input for the other two models. Regarding the training of these two models, however, we did not feed the models with the predicted norms in the training and validation segments. Instead, we used the gold labels, in order to have each model correctly learn the relations between the norms and the other labels (named entities for the NEND Model, and semantic roles for the SRND Model), as we do not want the models to learn wrong relations. For example, let us consider a segment that has a def (norm type) and also has a definiendum (semantic role). If we were training the SRND Model, and if the Norms Model (assuming it had been trained already) had incorrectly predicted the segment to be an oblig instead of a def norm type, then, by using the incorrect label, the model could learn to associate oblig with definiendum. Therefore, to make sure that the models only learned the correct relations, we used only the gold labels. This also allows us to train all three models simultaneously, after generating the two sets of embeddings (one for the original segments, and another for the segments with the norms' gold labels). After some experimental training and evaluation with both of our previous systems, we saw that some named entities seemed to have worse results when using the Full Norm Dependency system. After looking at each named entity and semantic role, we saw there was a stronger relationship between norms and semantic roles than between norms and entities.
For example, in the corpus, lref or tref are text-related named entities that can occur in any type of norm, but we can only have a definiendum inside a def, or an effect inside a leffect. With this in mind, we decided to create a third version of our system, which we denote as the Partial Norm Dependency SNR System, shown in Figure 4.

Partial Norm Dependency SNR System

As we can see, the Partial Norm Dependency approach is very similar to the Full Norm Dependency one. In the previous approach, we had the NEND Model and the SRND Model responsible for predicting the corresponding labels, knowing the norms that were present in the segments. In this approach, instead, we only use the norms' information for the model responsible for predicting the semantic roles (the SRND Model). Thus, we use the Named Entities Model, instead of the NEND Model, to predict the named entities. To get a better understanding of the input and output of our system, see Figure 5, which includes the predictions the Partial Norm Dependency SNR System makes for the input segment "O presente decreto-lei entra em vigor 30 após a sua publicação." (The current decree-law comes into effect 30 days after its publication).

Models Training and Optimization

As we mentioned above, we created three different versions of our system. In total there are five different models: the Norms Model, the NE Model, the SR Model, the NEND Model, and, finally, the SRND Model. We applied a division of 80% training / 10% validation / 10% test to the dataset, used batches of size 32, trained all models for a maximum of 15 epochs (saving the one with the highest F1-score), and optimized the hyperparameters using Bayesian optimization. The results can be seen in Table 2.

Evaluation Metrics

For evaluating the system and each model, we used Micro-F1, Micro-Precision and Micro-Recall. To evaluate each label, we used the standard F1-score, Precision and Recall. We also created a novel metric, which we named Average Token Agreement (ATA).
Since the previous metrics only count exact matches as correct, we decided it would be useful to know how many tokens we correctly predicted. This way, we have information regarding the agreement between the gold labels and the predicted labels at the token level, and not just whether a span is an exact match or not. The ATA score of a set S of segments can be calculated by

ATA(S) = (1/|S|) Σ_{s ∈ S} segmentAgreement(s)    (1)

where segmentAgreement(s) = (1/|s|) Σ_{t ∈ s} agreement(t), with agreement(t) measuring the proportion of labels on which goldLabels(t) and predLabels(t) agree, |S| corresponding to the number of segments in the set, |s| to the number of tokens in the segment, t to a token of the segment s, goldLabels(t) to the gold labels, and predLabels(t) to the predicted labels of the token t.

Fig. 6: Example for the agreement metric.

Figure 6 shows a fictitious segment and its corresponding gold and predicted labels, for the purpose of illustrating the ATA(S) metric. Each arrow represents a span and its legend corresponds to its gold/predicted label (semantic roles above and norm types below). The ATA(S) given by the system for that single segment is (1/2 + 1/2 + 2/3 + 1/2 + 0/1)/5 = 0.43. For example, for Token 1, the gold and predicted labels agree on 1 label (oblig) out of 2 labels (oblig and experiencer). In this case, 0.43 means that the system and the gold labels agree with each other 43%, regarding that sentence. For x sentences, the agreement would be the sum of the agreement of each sentence divided by the number of sentences, as we can see in Equation (1). If we consider a specific model, and not the full system, for example the SR Model, the agreement for this same sentence would be (0/1 + 0/1 + 1/2 + 0/1 + 1)/5 = 0.3, which means the gold and predicted labels have an agreement of 30% regarding the semantic roles. This makes sense, since of the 5 tokens in total, only for Token 3 and Token 5 do the predicted and gold labels match (in the case of Token 3, only in 1 of its 2 semantic roles). Also, as we can see, tokens that have no labels (in this case Token 5 has no semantic role) will not be considered.
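The ATA computation can be sketched directly from the definition, under two assumptions consistent with the Token 1 example above: agreement(t) is the ratio of shared labels to the union of gold and predicted labels, and tokens with no labels at all are skipped. The toy segment below is invented, not taken from Figure 6:

```python
def token_agreement(gold, pred):
    """gold, pred: sets of labels for one token. Assumed to be the ratio of
    shared labels to the union of both label sets; None if both are empty."""
    union = gold | pred
    return len(gold & pred) / len(union) if union else None

def ata(segments):
    """segments: list of segments, each a list of (gold_labels, pred_labels)
    pairs, one pair per token. Unlabeled tokens are skipped, as in the text."""
    seg_scores = []
    for seg in segments:
        scores = [a for g, p in seg if (a := token_agreement(g, p)) is not None]
        seg_scores.append(sum(scores) / len(scores) if scores else 0.0)
    return sum(seg_scores) / len(seg_scores)

# Invented toy segment with three tokens.
seg = [
    ({"oblig", "experiencer"}, {"oblig"}),  # agree on 1 of 2 labels -> 1/2
    ({"oblig"}, {"oblig"}),                 # full agreement         -> 1
    (set(), set()),                         # unlabeled token, skipped
]
assert abs(ata([seg]) - 0.75) < 1e-9
```

Note that the figure's full-system example divides by all five tokens, so the exact handling of unlabeled tokens in the original implementation may differ slightly from this sketch.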
Results Comparison

After performing the evaluation for all three approaches, we achieved the results shown in Table 3. Looking at the results, we can immediately see that the NE Model outperforms the NEND Model in all metrics (F1-score, Precision and Recall) except the Agreement, which is very close. Thus, the Full Norm Dependency approach is not the best approach. The SRND Model has a higher F1-score than the SR Model. This comes from the fact that, even though it has a lower precision than the SR Model, the SRND Model has a higher recall, and the gain in recall is larger than the loss in precision. This is reflected in the results when we compare the Baseline approach with the Partial Norm Dependency approach: the Partial Norm Dependency approach has a higher F1-score and recall than the Baseline approach. Based on these results, and since our priority is the model making the highest number of correct span predictions (F1-score), we concluded that the best approach is the Partial Norm Dependency approach. Even though the Partial Norm Dependency approach is the best approach, both it and the Full Norm Dependency approach have some limitations when compared to the baseline one. The SRND Model and the NEND Model are dependent on the predictions made by the Norms Model. The consequence is that, when the Norms Model makes wrong predictions, it can lead the other two models to associate the wrong information with the input they were provided. For example, the Norms Model incorrectly predicted 11 defs to be obligs. This means that, for those segments, the SRND Model received the segments with the start and end tokens of the predicted obligs, which could have caused the model to associate labels such as action and theme, instead of definiendum or definiens, leading to wrong predictions. Still, since the Norms Model achieves such good results, this dependency does not seem to have much impact on the performance of these approaches.
Partial Norm Dependency SNR System Evaluation

Now that we know the Partial Norm Dependency to be the best approach, we need to evaluate our system based on its results for each label it is trying to predict. Tables 4, 5 and 6 show the results of each model of the Partial Norm Dependency SNR system. Looking at Table 4, the Norms Model seems to have good results for almost every norm (74% to 100% F1-score), except for def, which has satisfactory results (68.97% F1-score). leffect was the norm with the best results, having a 100% F1-score, precision and recall, as well as an ATA of 1. This norm type is probably the one most consistently annotated, since it was probably the simplest concept to identify. def, on the other hand, was a norm that caused some ambiguity, especially in cases where segments were too long and included many definitions, which might have caused indecision about where to start and end each definition; this, in turn, could explain why the ATA for that norm is so low. The NE Model has great results for every named entity (80% to 92% F1-score) except time duration and time date rel text, which had satisfactory results. Based on the NE Model predictions, we saw that the model sometimes confused these two labels; since both are temporal expressions, the model may have had some difficulty telling them apart. For the SRND Model, only definiens, action and purpose had merely satisfactory results; the remaining semantic roles all had pretty good results. action was the most common semantic role in our dataset, which would lead us to think it should have better results, since it had more samples to train on. Yet, we can see that it had a good ATA score (0.76), which may indicate that some of the incorrect predictions were only off by a few tokens.
Of all the spans that corresponded to an action, 182 were predicted to be nolabel by the system; these could include cases where the predicted span was incorrect only because it was missing a few tokens. The mistakes the models made (such as confusing certain norms with each other, or confusing the two temporal expressions) might come from noise still present in the dataset, as we had already found situations like this; we discuss them in more detail in the following section. Overall, our Partial Norm Dependency SNR system reached high results, achieving an 81.44% F1-score. The NE Model achieved the best results, with an 88.67% F1-score, followed by the Norms Model, with an 84.74% F1-score, and finally the SRND Model, with a 76.86% F1-score. This results in an 81.44% F1-score for the system, as we can see in Table 7. Comparison with previous results Regarding previous work done on legal text by Humphreys et al. (2020), even though our system's results and theirs are not directly comparable (different corpus, different number of samples and some different labels), there are some common labels whose results can be compared. The following tables contain the F1-scores for the concepts extracted by our model and by Humphreys et al. (2020). Their system achieved an 81.60% F1-score in detecting the norm type, while our Norms Model achieved 84.74%. As we can see from Tables 8 and 9, their worst norm was "Legal Effect" with a 34.78% F1-score, while our worst was def with 68.97%. Regarding the elements of some of their norm types, for those that are similar to ours (e.g. "Scope", definiendum, definiens, "Includes", "Action", "Condition" and exception), only "Includes" and definiens had a higher F1-score (90.91% and 71.43%, respectively) than our corresponding concepts, def-inclusion and definiens. Their worst result was a 22.22% F1-score for the concept exception, whereas for our exception concept we achieved an 80% F1-score. 
In conclusion, taking into account the above observations, our model does seem to achieve better results in general than the one presented in Humphreys et al. (2020). Existing Noise in the Dataset As mentioned earlier, we found noisy labels during the system implementation, which we then corrected. Still, we did not review the whole dataset, since that would take a lot of time. After having finished developing our system, we decided to check whether there was still noise in the dataset. For each label, we randomly collected 30 segments and asked the main annotator to verify how many errors were present. This was done in order to estimate the amount of noise for each specific label, so that we could keep that information in mind when evaluating our system. We started by doing this sampling for the Norms labels, whose results can be seen in Figure 10. For this group of labels, since a segment always has some norm and it is relatively simple to see which norm should have been used instead, the main annotator, when doing this verification, not only marked the noisy labels but also pointed out the label that should have been used. If we compare the errors present in the norms with the predictions the Norms Model made, we can find many relationships. For example, our Norms Model predicted one intro as an oblig. In our sample of 30 segments for the label intro, we found an error rate of 7% related to the label oblig. This could mean that the incorrect predictions the Norms Model made came from noise in the dataset. For the label oblig we found an error rate of 13%, half of which comes from the label def. This also reflects the incorrect predictions the model made: of the 48 spans that corresponded to defs, 10 were predicted to be obligs (about 21%). 
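The sampling procedure described above can be sketched as follows. This is a minimal illustration: the label name, segment identifiers and review outcomes are hypothetical, and the seed is fixed only to make the sketch reproducible.

```python
import random

def sample_for_review(segments_by_label, n=30, seed=42):
    """Draw up to n random segments per label for manual noise review
    (sampling without replacement)."""
    rng = random.Random(seed)
    return {label: rng.sample(segs, min(n, len(segs)))
            for label, segs in segments_by_label.items()}

def error_rate(reviewed):
    """reviewed: (segment, is_noisy) pairs marked by the annotator."""
    return sum(1 for _, noisy in reviewed if noisy) / len(reviewed)

# Hypothetical review: 2 noisy segments out of 30 -> ~7% error for the label.
sample = sample_for_review({"intro": [f"seg{i}" for i in range(100)]})
reviewed = [(seg, i < 2) for i, seg in enumerate(sample["intro"])]
rate = error_rate(reviewed)
```

With only 30 segments per label, each rate is of course a rough estimate, which is why the text treats these percentages as indicative rather than exact.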
Regarding the named entities, time duration and time date rel text, which had the worst results, were also the two named entities for which we found the highest percentage of error. Finally, for the semantic roles, we did not find errors associated with the labels definiens and purpose; still, that does not mean with certainty that none exist. We did find errors in other labels for which our model performed correctly, which could mean that, without the noise, the SRND Model could perform even better on those labels, or that it simply learned to predict the wrong information as well. Thus, the correctness of the annotation process is highly important, and actions to reduce noise must be actively pursued, especially in such a difficult annotation task as this one. Application To Information Retrieval At the beginning of this document, we hypothesized that the extraction of relevant concepts from legislative text could help downstream tasks. For this reason, we decided to see whether our system improved the performance of the information retrieval system that was being implemented for the legal texts of the Portuguese Consumer Legislation. That system works by returning the 100 legal acts deemed relevant for a given query, and is evaluated by calculating the accuracy of the top x results. This accuracy is measured against a "golden" result provided by Law experts for that query. We used our Partial Norm Dependency SNR system's predictions on a new set of legal texts to generate a set of questions and answers (QAs). Generating these QAs consisted of creating rules based on the entities and their relationships. For example, if a segment has an oblig with an action and an experiencer, it will generate a corresponding set of QAs. From these QAs, we used the segment-answer pairs (we created question and answer pairs so that we could also use them for a QA system) to fine-tune the information retrieval system. 
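One such rule can be sketched as follows. The English question template, the dictionary layout and the role names used here are illustrative only; the actual system operates on Portuguese text with its own rule set.

```python
def generate_qas(segment):
    """Rule-based QA generation from one annotated segment.
    `segment` carries a predicted norm type and its semantic roles
    (role name -> text span); returns (question, answer) pairs."""
    qas = []
    roles = segment.get("roles", {})
    # Rule: an oblig with both an action and an experiencer yields
    # a "what must X do?" question answered by the action span.
    if segment.get("norm") == "oblig" and {"action", "experiencer"} <= roles.keys():
        qas.append((f"What must {roles['experiencer']} do?", roles["action"]))
    return qas

qas = generate_qas({"norm": "oblig",
                    "roles": {"experiencer": "the supplier",
                              "action": "provide an invoice"}})
```

Segments whose predicted norm or roles do not match any rule simply produce no pairs, so noisy predictions mostly cost coverage rather than inject wrong training pairs.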
This way, we could compare the results when no fine-tuning was done, when fine-tuning was done by performing the Inverse Cloze Task (ICT) proposed by Taylor (1953), and when fine-tuning was done with our generated segment-answer pairs. ICT consists of dividing a segment into parts and creating pairs between each part and the segment itself, to represent question and answer pairs. The results are shown in the following table. Comparing the results, we can clearly see that the retrieval system fine-tuned using our predictions not only outperformed the system with no fine-tuning, but also outperformed fine-tuning with the ICT method. Thus, these results show that fine-tuning with generated segment-answer pairs, which contain the relevant semantic information our SNR system extracted, does indeed improve the overall performance of an information retrieval system. Conclusion In this paper, we presented an automatic semantic information extraction system responsible for capturing a defined group of relevant semantic concepts (norm types, named entities and semantic roles) present in Portuguese Consumer Legislation. Our system is composed of three models, whose architecture is inspired by the model of Yu, Bohnet and Poesio (2020). We implemented and evaluated three different approaches, and showed that having the system predict the semantic roles while knowing the norm type information (the Partial Norm Dependency approach) achieves the best results. We also showed that all models of the Partial Norm Dependency SNR system achieved a good performance, attaining an 84.74% F1-score for the Norms Model, 88.67% for the Named Entity Model, and 76.86% for the Semantic Role with Norms Model. Based on all the results, and the dataset noise we found, we concluded that the Partial Norm Dependency SNR system had a good performance, resulting in an 81.44% F1-score. 
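ICT pair creation, as described above, can be sketched as follows. The naive period-based sentence split and the sample segment are for illustration only.

```python
def ict_pairs(segment):
    """Pair each sentence of a segment with the segment itself, giving
    pseudo question-answer pairs for retrieval fine-tuning, following
    the part-vs-whole pairing described in the text."""
    sentences = [s.strip() + "." for s in segment.split(".") if s.strip()]
    return [(sentence, segment) for sentence in sentences]

segment = ("The supplier must issue an invoice. "
           "The invoice must state the total price.")
pairs = ict_pairs(segment)   # two (sentence, segment) pairs
```

Unlike the rule-generated QAs, these pairs carry no extracted semantics, only surface co-occurrence, which is consistent with the semantic pairs performing better in the comparison above.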
We also showed how, using the predictions of our SNR system, we were able to improve an existing information retrieval system by training the model with our predicted knowledge. This is the first system of this kind for Portuguese legal text. We showed how the Partial Norm Dependency SNR system was implemented and how it is able to capture the relevant semantic concepts, allowing legislation to have a more informative representation. With the presented system, any article in the legislation can be represented not only by its text, but also by the main concepts it includes. In short, we were able to make several contributions, the major ones being: (i) the creation of a dataset with the Portuguese Consumer Law annotated with the corresponding norms and concepts; (ii) the creation of our automatic semantic information extraction system, the SNR system; (iii) and finally, validating the improvement of existing information retrieval systems by using the information our own system predicted to train those systems. In spite of our achievements, there is still room for improvement, which future research could address: (i) reviewing the dataset and current annotations, in order to reduce the existing noise; (ii) training and validating the Full Norm Dependency and Partial Norm Dependency approaches using the predicted labels instead of the gold labels, in order to see whether this makes the second-level models (e.g. the SRND Model and the NEND Model) more robust to errors in the norm type labels; (iii) improving the ATA (Average Token Agreement) metric to make sure that it is entirely adequate not only at the token level, but also at the span level, that is, providing the agreement for a corresponding pair of predicted and gold spans. 
With that improvement, the metric could be used to validate the training of the models and to compare the different approaches, instead of relying on the F1-score alone; (iv) finally, we used our predictions to generate simple QAs to improve an existing information retrieval system. Future work, not directly regarding our system but rather its application, could include generating more complex rules to create more informative QAs. This could further improve the performance of the information retrieval system.
Effectiveness of smart phone application use as continuing medical education method in pediatric oral health care: a randomized trial Background Continuing education aims at assisting physicians to maintain competency and expose them to emerging issues in their field. Over the last decade, approaches to the delivery of educational content have changed dramatically, as medical education at all levels now benefits from the use of web-based content and applications for mobile devices. The aim of the present study is to investigate, through a randomized trial, the effectiveness of a smart phone application in increasing public health service physicians' (PHS physicians) knowledge regarding pediatric oral health care. Method Five of the seven DHCs (District Health Centers) in Tehran, which were under the supervision of Tehran University of Medical Sciences and Iran University of Medical Sciences, were selected for our study. Physicians of one DHC had participated in a pilot study. All PHS physicians in the other four centers were invited to the current study on a voluntary basis (n = 107). They completed a self-administered questionnaire regarding their knowledge, attitudes, practice in pediatric dentistry, and background. PHS physicians were assigned randomly to intervention and control groups; those in the intervention group received a newly designed evidence-based smartphone application, and those in the control group received a booklet, a CME seminar, and a pamphlet. A post-intervention survey was administered 4 months later, and t-test and repeated measures ANCOVA (Analysis of Covariance) were performed to measure the difference in the PHS physicians' knowledge, attitude and practice. Results In both groups, the mean knowledge scores were significantly higher (p-Value < 0.001) in post-intervention data compared to those at baseline. Similar results existed in attitude and practice scores. 
Although the knowledge scores in the intervention group indicated potentially greater improvement compared to those of the control group, the differences between the two groups were not statistically significant (dif: 0.84, 95% CI − 0.35 to 2.02). Conclusion In light of the limitations of the present study, smart phone applications could improve knowledge, attitude and practice in physicians, although this method was not superior to the conventional method of CME. Trial registration Our clinical trial was registered in the Iranian Registry of Clinical Trials (registration code: IRCT2016091029765N1). Background Continuing education aims at assisting physicians to maintain competency and to learn about emerging topics in their field. It is an important part of medical practice. The traditional in-person lecture has been considered the best method for continuing education [1]; however, it suffers from the limitations of being instructor-centered and requiring the presence of instructor and learner at the same time and place [2,3]. On the other hand, educational booklets with a combination of images and text have been used for continuing medical education (CME) as a learner-centered method [4]. In order to benefit from the strengths of these two methods, some programs have combined traditional lecture sessions with booklets or pamphlets [5][6][7]. For instance, a study in Iran reported the effectiveness of delivering an educational booklet followed by a lecture session in improving nurses' knowledge and attitudes regarding oral health [6]. Several methods such as films, television programs, and audio programs have been used for CME along with the development of distance education facilities. Distance learning may reduce inequalities in health education [8] and has found its place among other training methods, having been used in a number of previous studies with promising results [9][10][11][12][13]. 
Online CME websites can provide easy access, and their interactive potential promises more effectiveness compared to traditional methods [14]. However, insufficient access to evidence-based information, lack of sufficient searching skills, time shortage and financial cost are major barriers to accessing information via this approach [15]. In the last decade, approaches to the delivery of educational content have changed dramatically, as medical education at all levels is now benefitting from the use of web-based content and mobile device applications, including smart phone applications [16][17][18][19][20][21][22][23]. Mobile phones and tablets offer communication and real-time access to the scientific literature, are portable, and provide easy access to information at the point of care [22]. Also, smartphone applications can provide interactive learning and constant connection through question and answer sections. This seems particularly useful, since studies have concluded that widely used CME delivery methods such as conferences and lecture sessions without practice-reinforcing strategies have little direct impact on improving professional practice [24]. Moreover, compared to traditional lecture-based CMEs, interactive CMEs are more effective in promoting knowledge and changing physicians' practice [9,25]. Thus, interactive methods have been proposed as a tool to be used in CME [18,19,22]. Also, online CME methods may offer greater flexibility in training times, improve access for geographically dispersed learners, reduce travel expenses and time, and adapt to individual learner styles [26]. Despite the emergence of smartphone applications as a potential approach to deliver CME, almost no studies exist that have investigated their effectiveness. The American Academy of Pediatric Dentistry (AAPD) recommends a child's first dental visit occur within 6 months of the eruption of the first tooth and no later than 12 months of age [27]. 
However, in several countries most children do not visit a dentist before the age of 3 [28,29]. Very often, a child's first visit to a family physician or pediatrician occurs earlier than the child's first visit to a dentist. According to guidelines [30,31], primary health care providers have to counsel families on teething and dental care [32,33]. However, studies indicate that family physicians and other primary care providers lack sufficient knowledge and have received little training in medical school regarding preventive dental care [34,35]. Also, these studies reported physicians' lack of knowledge and training as barriers to providing preventive oral health care to their patients, specifically children [36]. The aim of this study was to investigate the effectiveness of smartphone applications as a continuing education (CE) method to improve self-reported knowledge, attitudes and practice of public health service (PHS) physicians regarding pediatric oral health care. Study design and subjects The study population was a sample of general practitioners (n = 107) working in the District Health Centers (DHC) of Tehran. Each DHC supervises 15 to 20 public health centers, with one to three PHS physicians in each center. There are seven DHCs in Tehran and its satellite towns. We selected five of them, which were under the supervision of Tehran University of Medical Sciences and Iran University of Medical Sciences. Physicians of one DHC (South West) participated in our pilot study. All PHS physicians in the other four centers were invited to participate in this larger study on a voluntary basis (n = 107). The inclusion criteria were being a general practitioner and working in a DHC. The randomization was done at the DHC level. Two DHCs were selected through a simple randomization process (by flipping a coin) for the intervention, so that all PHS physicians in these two DHCs received the intervention. The other two DHCs served as controls (Fig. 1). 
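The cluster-level randomization can be sketched as follows: a shuffle-and-split stand-in for the study's coin flip, ensuring every physician in a cluster receives the same condition. The cluster names are hypothetical; the paper does not list the four DHCs used.

```python
import random

def cluster_randomize(clusters, seed=None):
    """Assign whole clusters (here, DHCs) to two equal-sized arms,
    so allocation happens at the cluster level rather than per
    physician (a shuffle-based stand-in for a coin flip)."""
    rng = random.Random(seed)
    shuffled = list(clusters)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

arms = cluster_randomize(["DHC A", "DHC B", "DHC C", "DHC D"], seed=1)
```

Randomizing clusters rather than individuals avoids contamination between arms within a health center, at the cost of fewer effective units of randomization.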
Assuming an equal standard deviation in the two intervention groups at 80% power, the minimum detectable difference between the two groups was calculated to be 1.704 in the knowledge, 1.818 in the attitude, and 1.242 in the practice scores. Data collection Questionnaire and variables A questionnaire developed in a previous study and evaluated for content validity and reliability [37,38] was selected as the data collection tool (Additional file 1). No personally identifiable information was collected. The questionnaire requested information on participants' demographic characteristics (age, gender, work experience, whether or not working in the private sector, and whether or not having a dentist in the first-degree family), as well as items in the following domains: Knowledge of pediatric oral health The knowledge domain included four multiple-choice questions and ten questions with five-point Likert scale responses ranging from strongly agree to strongly disagree and including an option for "don't know". The responses were assigned a score of one for correct answers, and zero for incorrect and "don't know" answers. For true statements, "strongly agree" and "agree" answers were given a score of one, and the other answers a score of zero. For false statements, "strongly disagree" and "disagree" answers were given a score of one and the other answers a score of zero. Questions tested the participant's knowledge regarding the timing of primary and permanent tooth eruption, the time/age when tooth cleaning and brushing for children should begin, the usage of fluoride (toothpaste and varnish), the transmission of the bacteria that cause dental decay, the effects of pacifier sucking and mouth breathing, the advantages of sealant therapy, and dental trauma. By summing the scores, final scores with a range of zero to 14 were calculated and subgrouped into quartiles. 
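The knowledge-scoring rule described above can be sketched as follows. The item identifiers and the toy responses are hypothetical; only the scoring rule itself comes from the text.

```python
def score_knowledge(answers, correct_responses):
    """One point when the response is among those counted as correct
    for the item (e.g. {'agree', 'strongly agree'} for a true
    statement); wrong answers and "don't know" both score zero."""
    return sum(1 for item, resp in answers.items()
               if resp in correct_responses.get(item, set()))

# Hypothetical items: q1 is a true statement, q2 a false one.
key = {"q1": {"agree", "strongly agree"},
       "q2": {"disagree", "strongly disagree"}}
score = score_knowledge({"q1": "agree", "q2": "don't know"}, key)  # -> 1
```

Summing such item scores over the 14 knowledge items yields the zero-to-14 final score that the study then subgroups into quartiles.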
Attitudes toward pediatric oral health The attitudes section comprised eight questions with five-point Likert scale response alternatives ranging from strongly disagree to strongly agree, scored from one to five. The range of final scores was from eight to 40. The questions asked PHS physicians' opinions about oral health care and the preventability of dental caries and periodontitis. They were also asked about the responsibility of PHS physicians to examine children's oral cavity, the effectiveness of routine dental visits in preventing dental disease, the importance of PHS physicians' role in preventing oral diseases, the association of oral health problems with general health problems, and their tendency to implement preventive oral health activities. Practice in pediatric oral health The practice section contained two multiple-choice questions, eight five-item Likert-type questions with options very likely, likely, medium, unlikely, very unlikely (scored from 1 to 5, respectively), 11 five-item Likert-type questions with options strongly agree to strongly disagree (scored from 0 to 4, respectively), and 12 four-item Likert-type questions with options never, rarely, occasionally, very frequently (with the first two options scored 0, and the second two scored 1). By summing the scores, final scores from 31 to 107 were calculated and sub-grouped into quartiles as described above. Intervention and control groups The intervention group received training through an evidence-based smart phone application (hereafter referred to as the 'application') designed for the purpose of the study. Although participants were instructed on how to use the application, there was also a help section in the application's menu that explained how to use it. PHS physicians could also submit their questions online and receive answers within 2 days. A reminder message was sent to the intervention group through the application itself 1 month after the first session. 
PHS physicians in the control group received the same educational content as a booklet, offered in the traditional method of CME. In addition, there was a Q&A session for this group 2 weeks after the first session. Also, the "education and health promotion unit" staff of the health network sent a reminder in the form of a pamphlet to the booklet group. The seminar and booklet covered the same topics as the application: information on pediatric oral and dental disease; caries and its etiology, signs and care; dietary habits; fluoride therapies and fissure sealants; and dental trauma. Baseline data collection One of the researchers (MB) visited all the PHCs and administered the baseline questionnaire in person to the participants. One week after each visit, the same researcher collected the completed questionnaires. Baseline data collection was performed from November to December 2016. Post-intervention evaluation Four months after baseline data collection, in one of the monthly meetings of the DHCs, the study questionnaire was distributed among the participants and collected after 1 h. To measure changes at the participant level, we requested each participant to enter a person-specific code when completing the pre- and post-intervention questionnaires. Figure 1 shows the flow diagram of the present study. Statistical analysis All numerical data were entered and analyzed using the IBM Statistical Package for Social Sciences (SPSS version 21.0). Descriptive statistics were obtained for gender, age, working experience and working sector. T-tests and repeated measures analysis of covariance (ANCOVA) served to assess the statistical significance of differences between the knowledge, attitude and practice scores of the intervention and control groups. Ethical considerations Participation in the study was voluntary, and the responses were anonymous. All respondents provided their written informed consent. 
Results Of the 107 physicians invited for the baseline data collection (50 in the intervention and 57 in the control group), 86 physicians (43 in each group) completed the questionnaire (total response rate = 80.3%). In both the intervention and control groups, all physicians who completed the baseline questionnaire also participated in post-intervention data collection. A quarter of the PHS physicians completing both the baseline and post-intervention questionnaires were men, and the majority (n = 68, 79%) of them worked solely in the public health sector. The mean age was 39.2 years in the smartphone intervention group and 44.3 years in the control group (Table 1). No significant differences existed between the intervention and control groups regarding demographic information. The rate of using the application in the last week leading up to the post-intervention evaluation was 68.4% (varying from once a week to every day). The mean knowledge score among participants at baseline was 8.17 ± 2.03 (Table 2). At baseline, only 9.5% of the PHS physicians in the control group knew the correct answer to the question "Pacifier sucking in under-4-year-old children is a risk factor for dentoalveolar malformation", while the percentage of correct answers in the intervention group was 11.6%. In the control group, the biggest change (13.5%) in PHS physicians' responses before and after the intervention was related to the question "Physicians should examine the oral cavity and teeth throughout their routine patient visits". In the intervention group, the biggest change (20%) in the PHS physicians' responses before and after the intervention was related to the question "Oral health care delivered by physicians is not efficient for patients". In both groups, the mean scores for knowledge, attitudes and practice were significantly higher at post-intervention data collection compared to baseline (Table 2). 
Table 3 displays the differences in knowledge, attitudes and practice of study participants in the intervention and control groups. Although the intervention group's scores in knowledge, attitudes and practice showed larger pre-post differences than those of the control group, the differences between the two groups remained insignificant (Table 3). Subgroup analysis by ANCOVA showed that the improvement of knowledge, attitude, and practice scores in each study group remained independent of background factors (P > 0.05). Discussion The present study investigated the effectiveness of CME on oral health delivered through a smartphone application and a booklet among PHS physicians. Both methods improved physicians' knowledge, attitudes and practice. However, the difference between the two groups was insignificant, showing no superiority of the smartphone app over the conventional method. Many surveys and studies exist on the benefits, barriers and risks of online CME. Although studies on health-oriented, patient-centered applications are available, research on using smartphones in particular as a medium for CME among physicians is scarce. Certain studies [39][40][41][42] have found results similar to ours. Short et al., in 2005, conducted an online interactive CME in the field of intimate partner violence for physicians. The control group in their study received no training. Similar to our study, they concluded that online interactive CME made persistent changes in knowledge, attitudes and self-reported practice [39]. Ryan et al., in 2009, compared the effectiveness of face-to-face and online CME among 62 general physicians. The course was about accreditation as pharmacotherapy prescribers for opioid dependence. Similar to the findings of our study, they reported significant improvement of knowledge among participants in both groups. Comparison of post-test knowledge scores between the two groups also showed no significant difference. 
The same pattern also occurred in the attitude scores. They concluded that online CME was as effective as the face-to-face method for increasing knowledge of the treatment and management of opioid dependence [40]. Similar to our study's results in the knowledge section, Kim et al., in a study on educating nursing students to provide care for infant airway obstruction, reported no significant difference in the knowledge score between the smart phone-based group and the lecture group [41]. A Canadian study in 2011 evaluated the outcomes of an online CME course in the field of asthma without a control group and reported a significantly increased level of clinical knowledge among health professionals [42]. Their finding is in agreement with the results of the present study. Other studies have reported significant differences in their findings. A study conducted by Pelayo in 2011 in Spain, which compared online training on palliative care to a traditional self-training method, found a 14 to 20% increase in knowledge through the online method among primary care physicians. Moreover, this method led to significant improvement in attitudes and in perceived confidence in symptom management and communication [43]. Also, Kim et al. compared the effects of a one-time lecture and a smartphone application on nursing students' skills regarding infant airway obstruction. They reported that the skill score of students in the smart phone application group was significantly higher than that in the lecture group [41]. One of the advantages of CME through smartphones over conventional methods is the accessibility that smartphones provide. It is worth mentioning that smartphones, when used as a tool for CME, can provide access to educational content at the point of care at any time, without requiring any additional device [22]. Moreover, the high rate of adoption of smartphones by physicians (84.5 to 94%) in 2012 indicates their potential for use in CME [22]. 
The high response rate in baseline data collection (80.4%), and the fact that all participants who completed the baseline questionnaire also participated in post-intervention data collection, can be considered strengths of our study. Limitations of the study The main reason some physicians declined to participate was that they were too busy, which is not unusual in studies of professional groups. To alleviate this limitation, physicians were given continuing education credits for free. Also, gifts including toothbrushes and toothpastes were given to the respondents. Given our sample size, and the wide range of the CIs for the differences in the main variables in Table 3, a possibility exists that a significant difference does exist between intervention and control, but the sample size was underpowered to detect it. Downloading the application in the app group and the network coverage in the health centers were further limitations, which were addressed by using a mobile modem in the training sessions of the application group. On the other hand, a self-administered questionnaire may cause social desirability bias and lead to overestimation rather than underestimation of the reported attitudes and practice. Moreover, a risk of underestimation exists in questionnaire surveys answered by lay people [44]. Our study investigated the short-term outcome of CME through smartphones, and the long-term effectiveness of this method needs to be further studied. Conclusion In light of the limitations of the present study, smart phone applications could improve knowledge, attitude and practice in physicians, although this method was not superior to the conventional method of CME. Other aspects of the use of this method, such as cost and time savings, its widespread use and ease of accessibility, need to be further investigated. Supplementary information Supplementary information accompanies this paper at https://doi.org/10.1186/s12909-019-1852-z. 
Authors' contributions MB contributed to the design of the work and development of the proposal, data collection, data analysis, and drafting and revising the manuscript. SM contributed to the conception and design of the work, development of the proposal, interpretation of data, and drafting the manuscript. EM contributed to the design of the work, interpretation of data, and drafting the manuscript. TT contributed to the design of the work, interpretation of data, and drafting the manuscript. MK contributed to the conception and design of the work, development of the proposal, interpretation of data, and drafting and revising the manuscript. All authors read and approved the final manuscript. Funding This study was supported by the Research Center for Caries Prevention, Dentistry Research Institute (95-01-194-31526), Tehran University of Medical Sciences, Tehran, Iran. The funding was used for personnel costs and the costs of materials and trips for collecting data. Availability of data and materials The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request. Ethics approval and consent to participate All respondents provided their written informed consent. The Ethics Committee of Tehran University of Medical Sciences approved the study (IR.TUMS.REC.1395.2252). In addition, the study was registered in the Iranian Registry of Clinical Trials (IRCT2016091029765N1). Consent for publication Not applicable.
2019-11-22T14:25:33.612Z
2019-11-21T00:00:00.000
{ "year": 2019, "sha1": "987e615c43b9bde24c47c19d3831ac870f57588d", "oa_license": "CCBY", "oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/s12909-019-1852-z.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "987e615c43b9bde24c47c19d3831ac870f57588d", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
247942368
pes2o/s2orc
v3-fos-license
Gender Differences Between the Phenotype of Short Stature and the Risk of Diabetes Mellitus in Chinese Adults: A Population-Based Cohort Study Objective Previous studies have shown significant regional and gender differences in the association between the phenotype of short stature and diabetes mellitus (DM). The purpose of this study was to investigate the gender difference between the phenotype of short stature and the risk of DM in the Chinese population. Methods The sample included 116,661 adults from 32 locations in 11 cities in China; the average height of men and women was 171.65 and 160.06 cm, respectively. Investigators retrospectively reviewed annual physical examination results for follow-up observations and set confirmed DM events as the outcome of interest. Multivariate Cox regression, restricted cubic spline, and piecewise regression models were used to examine the association between height and DM risk. Results During an average observation period of 3.1 years, 2,681 of 116,661 participants developed new-onset DM, with a male-to-female ratio of 2.4 to 1. After full adjustment for confounders, we confirmed a significant negative correlation between height and DM risk in Chinese women (HR per 10 cm increase: 0.85, 95% CI: 0.74–0.98) but not in men (HR per 10 cm increase: 1.16, 95% CI: 0.98–1.14). Additionally, through restricted cubic spline and piecewise regression analysis, we determined that a height of 157–158 cm may be the critical point of short stature for assessing DM risk in Chinese women. Conclusions In the Chinese population, the short-stature phenotype in women is associated with increased DM risk, and 157–158 cm may be the saturation-effect point of female short stature for predicting DM risk. 
INTRODUCTION Diabetes mellitus (DM) is a chronic non-infectious disease characterized by elevated glucose levels due to disturbances in glucose metabolism, 90% of which are type 2 DM, which is an important cause of physical disability and death (1,2). With the global obesity epidemic, the aging population, and the great changes in lifestyle and dietary patterns, the prevalence of DM in China has doubled in the past 30 years (1995-1999: 4.5%; 2010-2014: 8.35%; 2019: 116 million) (3)(4)(5)(6)(7). At present, China has become the center of the global DM pandemic and has the largest number of DM patients in the world (7). In the past few decades, a large number of studies have found that patients with DM are often accompanied by some special body phenotypes, such as obesity phenotype, high waist circumference phenotype, hypertriglyceridemic waist phenotype, and short stature phenotype (3,(8)(9)(10). In other words, these special body phenotypes can help us assess DM risk. The phenotype of short stature has been shown to be closely related to the increased risk of DM in previous studies, but there is still some debate about this association between different regions and between genders (11)(12)(13)(14)(15)(16). In the existing longitudinal correlation studies involving both men and women, the findings of England and Germany have supported that only the height of men was negatively correlated with the risk of DM (11,12); results from Norway and Iran have supported a negative association between height and DM risk only in women (13,14); South Korea's research has shown that there was a negative correlation between height and DM risk in both sexes (15), while the United States study has found no association between height and DM (16). Although the results of these studies were not identical, it further indicated that there are significant regional and gender differences in the association between height and DM. 
China, as a hardest-hit area of DM disease burden, currently has very limited data on the correlation between height and DM, so it is necessary to determine the gender difference between height and DM risk and the appropriate risk threshold or saturation point in the Chinese population as soon as possible. To address this issue, the present study conducted an in-depth analysis of national physical examination data from Rich Healthcare Group in China to identify gender differences in height and DM risk among Chinese adults and to determine an appropriate height threshold or saturation point for predicting future DM risk. Data Sources and Study Design In this study, we conducted a secondary analysis of the retrospective cohort study based on the national physical examination data of China Rich Healthcare Group. The original data have been shared to the public database (www. Datadryad.org) by Chen et al. (17). The study design of the retrospective cohort has been described in detail in previous studies (18). In short, the current study cohort was from adults who underwent health screening in China Rich Healthcare Group from 2010 to 2016 (n = 685,277); considering that these participants were screened at least twice during this period, therefore, a retrospective analysis can be conducted based on the research data of this population. In previous studies by Chen et al., they retrospectively analyzed the association between body mass index (BMI) and DM risk (18). Given the chronic course of DM, they excluded participants from the previous study who were followed for less than 2 years (n = 324,233). Moreover, for study purposes, they also excluded participants with incomplete or extreme baseline BMI (BMI > 55 or <15 kg/m 2 ; n = 152); participants with no gender, height, weight, or fasting plasma glucose (FPG) information at baseline (n=135,317); participants with diagnosed DM at baseline (n=7,112); and participants with unknown DM status during followup (n = 6,630). 
Ultimately, Chen et al. included 211,833 participants who met the criteria for their analysis. Based on the data used by Chen et al., the current study further excluded participants with missing baseline lipid parameters (n = 95,172) and finally included 116,661 participants ( Figure 1). The participants came from 32 locations in 11 cities in China, covering 7 of China's 34 provincial administrative regions and 8.26 per 100,000 of China's total population. The Ethics Committee of Jiangxi Provincial People's Hospital approved the research protocol (ethical review no. 2021-067). In addition, because the participants' identifying information had been removed, the Institutional Ethics Committee of Jiangxi Provincial People's Hospital waived the requirement for informed consent. Health Examination and Laboratory Measurement As mentioned earlier (18), trained medical staff recorded the baseline clinical data of the participants during the physical examination through a standard questionnaire, including age, height, blood pressure, gender, family history of DM, weight, and smoking and drinking status. The medical staff used an automatic scale to measure the height and weight of the participants, who removed their shoes and wore only light clothing. Blood pressure was measured using a standard mercury sphygmomanometer. BMI was calculated from height and weight. Statistical Analysis R language software (version 3.4.3) and Empower(R) (version 2.20) were used to analyze the data. All baseline data were expressed as means, medians, or percentages, as appropriate. One-way ANOVA, the t-test, or the Kruskal-Wallis H test was used to compare the means (medians) of continuous variables, and the chi-square test was used to compare categorical variables between groups. All P-values were two-sided, and P < 0.05 was the threshold for significance. 
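BMI in this cohort is derived from the measured height and weight; a minimal sketch of that calculation and of the baseline BMI exclusion range applied by Chen et al. (function names are ours, not the study's):

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

def within_study_range(b: float) -> bool:
    """Participants with baseline BMI > 55 or < 15 kg/m^2 were excluded."""
    return 15.0 <= b <= 55.0
```

For example, a 64 kg participant of 160 cm has a BMI of 25.0 kg/m^2 and is retained under the exclusion rule.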
In multivariate Cox regression analysis, we ran three models with DM events as endpoints, identified relevant confounding factors based on epidemiology, and recorded the hazard ratio (HR) and 95% confidence interval (CI) for the association between height and DM events (20). Before running the multivariate Cox regression model, we checked for collinearity between all covariates (21); weight and TC were excluded from the model because their variance inflation factors were greater than 5. Model 1 was adjusted for age, BMI, FPG, and DM family history. Model 2 further considered the effects of blood pressure, smoking, and drinking on DM on the basis of model 1. Model 3 was further adjusted for BUN, Cr, TG, LDL-C, and HDL-C. For the selection of the best model, we designated model 3, which adjusted for all non-collinear variables, as the final model after epidemiological and statistical screening. Restricted cubic splines (RCS; nested in the Cox regression analysis) with four knots were used to fit the shape of the dose-response relationship between height and the risk of DM (22,23). By visually examining the shape of the curve, we selected the point at which the HR crosses 1 to serve as the height threshold point or saturation-effect point (if any) used to assess DM risk. If a potential threshold or saturation-effect point between height and DM risk was found by RCS, we further used a piecewise regression model with a recursive algorithm to calculate it (24). We also examined the HR and 95% CI of height and the risk of DM in different age and BMI subgroups, where the BMI cutoff points were based on the classification standard recommended by the Chinese Obesity Working Group (25) and the age cutoff points were based on the age classification standard of the World Health Organization in 2000. 
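The breakpoint search can be illustrated with a simplified stand-in for the piecewise regression step: a grid search over candidate breakpoints that fits an ordinary least-squares line on each side and keeps the split with the lowest total squared error. The study itself used a recursive algorithm within a regression model (ref. 24); the code below is our own illustration on synthetic data, not the authors' procedure:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def best_breakpoint(xs, ys, candidates):
    """Grid-search the candidate breakpoints; pick the one minimising
    the total SSE of two independently fitted segments."""
    best = None
    for c in candidates:
        left = [(x, y) for x, y in zip(xs, ys) if x <= c]
        right = [(x, y) for x, y in zip(xs, ys) if x > c]
        if len(left) < 2 or len(right) < 2:
            continue  # both segments need at least two points
        sse = fit_line(*zip(*left))[2] + fit_line(*zip(*right))[2]
        if best is None or sse < best[1]:
            best = (c, sse)
    return best[0]
```

On synthetic data with a slope change at 158 cm, the search recovers 158 as the breakpoint; real data would of course require the full Cox framework and noise-robust estimation.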
Likelihood ratio tests were used to compare whether height-related DM risk differed among age and BMI groups. Baseline Characteristics of the Study Population A total of 116,661 participants without DM at baseline were included in the current study, with a male-to-female ratio of 1.16:1 and mean ages of 44 and 43 years, respectively. In view of the significant gender differences in previous similar studies (11)(12)(13)(14)(15)(16), the baseline characteristics of men and women grouped by the independent and dependent variables were summarized in this study. Table 1 presents the quartiles of height, showing the baseline characteristics of men and women in different height categories. In both sexes, with increasing height quartile, weight and Cr levels increased gradually; in contrast, BMI, age, blood glucose, blood pressure, blood lipids, and AST and BUN levels decreased gradually. The gender differences were mainly reflected in ALT levels and in smoking and drinking. In men, the level of ALT gradually increased with height, while in women the trend appeared to be the opposite; additionally, the baseline ALT level of men was higher than that of women. Table 2 summarizes the baseline characteristics of both sexes according to the presence or absence of new-onset DM during follow-up. During an average follow-up period of 3.1 years, a total of 2,681 participants developed new-onset DM (518 with a self-reported diagnosis of DM), with a male-to-female ratio of 2.4 to 1. Regardless of gender, participants ultimately diagnosed with DM had higher baseline levels of TC, ALT, FPG, BUN, LDL-C, TG, AST, weight, BMI, age, SBP, and DBP. Compared with men, women had higher age, SBP, and blood lipid levels and lower height, weight, BMI, DBP, FPG, BUN, Cr, and liver enzyme levels. 
Table 3 shows the results of a multivariate analysis of the association between height and DM in both sexes. In the unadjusted model, the height of both sexes was negatively correlated with the risk of DM, but after further adjustment for potential confounding factors (models 1-3), the negative correlation still existed in women but disappeared in men. In the model adjusted for age, FPG, family history of DM, TG, DBP, Cr, BMI, SBP, smoking status, drinking status, BUN, LDL-C, and HDL-C (model 3), each increase in 10 cm of female height reduced the risk of DM by 15% (HR per 10 cm increase: 0.85, 95% CI: 0.74-0.98), and the linear trend between height and DM risk disappeared (P-trend = 0.6556). Height Saturation Effect Points of Women Assessing DM Risk RCS was established to fit the shape of female height and DM risk. As shown in Figure 2, there was a negative correlation between female height and DM risk. When the height was about 157 cm, the HR of DM risk was about 1. Additionally, we also calculated the critical value of height for DM risk using a recursive algorithm by a piecewise regression model, and the results showed that the optimal critical value for female height was 157.9 cm. Among people with a height less than 157.9 cm, the risk of DM decreased by 3% for each 1 cm increase in height (HR per 1 cm increase: 0.97, 95% CI: 0.95-0.99), while women who were taller than 157.9 cm had an HR of 1 ( Table 4). The critical values determined by visual examination and calculated by the recursive algorithm were very close in the current analysis. Therefore, we believe that the female height of 157-158 cm may be a saturation effect point for evaluating the future risk of DM. Subgroup Analysis We also explored the association between height and DM in women of different ages and BMI levels. As shown in Table 5, we only observed a negative correlation between height and DM in people older than 60 years old and obese people. 
However, further interaction tests suggested that there were no significant differences in these findings (P-interaction = 0.1508/0.9522). DISCUSSION The national retrospective cohort study examined the relationship between China's adult height and new-onset DM. Only female height was found to be significantly negatively associated with DM risk among Chinese adults at a mean follow-up of 3.1 years, an association that remained stable after full adjustment for confounders (HR per 10 cm increase: 0.85, 95% CI: 0.74-0.98). RCS and piecewise regression analysis help us further determine that the height of 157-158 cm may be the critical point for the short stature used by Chinese women to assess the risk of DM. The relationship between height and DM has always been a controversial topic, and there are significant differences in the results of existing studies based on different places. In a nutshell, the differences are mainly in terms of region and gender. We have made some summary and analysis based on the existing research reports of different regions: 1) Europe: in 1998, a longitudinal cohort study of 11,654 people in Norway revealed for the first time that there was a negative correlation between female height and DM (RR per 5 cm increase: 0.71, 95% CI: 0.58-0.87), but not in men (13). Subsequently, two other longitudinal studies in Europe reported the opposite result: the negative correlation between height and DM was only found in the male population (11,12). It is worth noting that in the England and German studies, although the association between women's height and DM was not statistically significant, the lower CI limits of women's DM risk in these two studies were 0.50 and 0.78, respectively (11,12). Based on these results, the positive effect of a 22%-56% reduction in DM risk among women cannot be ruled out. 
2) North America: Three cross-sectional studies and one longitudinal study have shown no significant correlation between height and DM (16, 26-28), while femur length and the leg length-to-height ratio may be key factors for the assessment of DM in the North American population (26, 27). 3) Oceania: A cross-sectional study involving 11,247 Australians showed no relationship between height and blood glucose metabolism (29). 4) Africa: Cross-sectional evidence from Nigeria has shown an association between height and blood glucose levels and glucose tolerance in the African urban population (30). 5) Asia: According to several studies from Asia, there were also some differences between height and DM risk in Asian people, and further distinction may be necessary. i) West Asia: In a survey and analysis in Iran in 2011, a negative correlation between height and DM was found only in women after fully adjusting the covariates (14). Although another Iranian study in 2012 found a negative association between height and DM in the whole population, the 2012 study only adjusted for age, gender, and waist circumference and did not adequately account for risk factors (31). ii) South Asia: Evidence from Bangladesh showed a negative association between height and DM risk for both sexes (32), whereas this negative association was observed only among women in the Indian analysis (33). iii) East Asia: data from the Chinese population remain very limited (34, 35). In our current study, we analyzed the national physical examination data of Rich Healthcare Group involving 32 locations in 11 cities in China. The results indicated that height was significantly negatively associated with DM risk in Chinese adults only in women, and no such association was observed in men. 
Overall, women in Asia, Europe, and Africa were more likely to be negatively associated with DM risk, and short-height women in these regions should pay more attention to the primary prevention of DM, actively understand and learn about DMrelated knowledge, and establish a correct concept of eating and exercise. The general recommendations are as follows: set appropriate goals and plans with the help of doctors; reduce the intake of a certain proportion of saturated fatty acids and increase the intake of vegetables, and change lifestyle by increasing the appropriate amount of exercise, losing weight, and reducing exposure to DM-related risk factors. We have known from some previous studies that there is an inverse relationship between height and the risk of cardiovascular and cerebrovascular disease and relative mortality risk, and the shape of this association is non-linear: the researchers found that when height was within a certain range, the risk of cardiovascular and cerebrovascular diseases and mortality risk decreased significantly (36)(37)(38). These findings have greatly helped people to change their awareness of the risk of disease and death. However, at present, the understanding of the height critical point for assessing the risk of DM and the shape of the correlation between them is still very limited. A recent study by Professor Al Ssabbagh from India showed that there seems to be a U-shaped association between women's height and the risk of DM, in which the height is between 155 and 160 cm and the risk of DM is the lowest (33). In addition, in a recent study on gestational DM by Li et al., they determined that 158 cm may be the critical point of short stature for Tianjin women to assess the risk of gestational DM (35). Among men, an Israeli study showed that people with a height of 170-175 cm are at a critical risk of DM (39). 
Our current study was based on RCS and piecewise regression analysis to determine that female height between 157 and 158 cm may be the saturation point of DM risk. This finding was similar to the height critical point studied by Li et al. and Al Ssabbagh et al. (33,35). In view of this result, we call on women with height less than 157-158 cm to pay more attention to early intervention of risk factors for DM. The relationship between DM and gender was extensively studied in the past. Although there are some differences in the results of local studies, generally speaking, the prevalence of DM in men is higher in the world, but the number of women suffering from DM is higher than that of men (40). This difference is closely related to age. Men are more likely to suffer from DM before puberty, while women are more likely to have DM in old age (40,41). In the current study, the age of women with DM is higher than that of men (60 vs. 56). From the results of age stratification analysis, we only observe the height-related DM risk of Chinese women in the elderly population. For this particular population, based on some existing research evidence, we speculate that it may be related to the following reasons: it is well known that height decline occurs in both men and women during aging, which may be related to osteoporosis, disc herniation, arthritis, spinal disease, and kyphosis (42,43). According to the observation of Wang et al., with the increase of age, the bone mineral density reduction rate of Chinese women will be higher than that of men and Caucasians (44). In addition, in women, with the increase of age, body fat deposition increases and fat redistribution becomes more obvious (45,46), and all these factors significantly increase the risk of DM. In summary, height atrophy and an increase in fat due to some physiological and pathological causes during aging may partly explain the risk of height-related DM. 
The pathophysiological mechanism of the association between height and DM is speculative. It has been suggested that height is closely related to heredity and early environmental influence (47), while intrauterine environment, children's nutrition, growth-related hormone factors, and vitamin D deficiency are considered as potential ways to link peripheral growth impairment with the risk of adult type 2 DM (26,48,49). Gender differences in the association between height and DM are not yet clear in the Chinese population. Some studies suggest that early puberty in women causing eventual shorter height may be an important factor (15,50); however, this statement does not seem to be convincing enough. From the current study, we found that compared with short-height men (Q1), short-height women have some differences with men in the family history of DM (0.85% vs. 2.9%), which further suggests the importance of heredity in this association. Further research is needed to explain this particular gender difference. Adult height is determined by a combination of roles, mainly divided into proximal and distal roles. At the proximal role, nutrition and early onset of disease play a key role in adult height (51). In general, nutrition is the most important external factor affecting linear growth in height; before the fetus is born, nutritional deficiencies can lead to intrauterine growth retardation, preterm birth, and low birth weight; these consequences are related to height in adulthood (52)(53)(54). After the fetus is born, nutrition has a greater impact on growth, among which high-quality protein, mineral trace elements, and vitamin intake are particularly important (52,55). Studies have shown that supplementation with micronutrients, iodine, iron, folic acid, and calcium during pregnancy can reduce the risk of delivery of a small-for-gestational-age infant. In addition, milk consumption in children after birth is positively correlated with adult height (56,57). 
Disease is another key factor in children's height development, which can affect growth by hindering food intake and the absorption and transport of nutrients to tissues, leading to direct nutrient loss and affecting bone growth or density (51,52). At the distal role, socioeconomic status plays a key role in adult height (58). Generally speaking, parents' social class, socioeconomic status, and educational attainment are all important factors in adult height (51); these characteristics directly affect the resources available to the child, the probability of exposure to risk factors, and the health status of the child's mother. The most immediate challenges include overcrowded growing environments, reduced access to medical assistance, inappropriate feeding practices, poor dietary conditions, and food/liquid contamination, while in socially underdeveloped areas, there are more complex adverse environmental exposures (such as Aspergillus flavus), which significantly affect height growth (51,59). Like height, DM is also caused by a combination of factors. Besides population aging, environmental factors, socioeconomic factors, and lifestyle changes are thought to be responsible for the rapid increase in the incidence of DM globally in recent decades (3)(4)(5)(6)(7)60). Considering that China is still in the stage of economic development, there are still many families in the unfavorable social environment described above. 
Based on the above analysis, in addition to improving lifestyle, we have several suggestions at the Chinese societal level: 1) increasing capital investment to improve unfavorable living environments for residents, 2) guaranteeing the basic living conditions of women and children in poverty-stricken areas of the country, 3) increasing nutritional subsidies for women and children in poverty-stricken areas, 4) improving medical insurance policies and assistance programs for residents of poor areas, 5) strengthening the construction of professional medical teams and improving medical security in poor areas, 6) reinforcing the construction of a grassroots DM control mechanism, 7) establishing a monitoring network system for DM prevention and control, and 8) incorporating the prevention and treatment of chronic diseases into basic national policy. This study has several advantages worth mentioning: 1) The participants of the current study come from 32 locations in 11 cities in China; compared with the previous two similar studies (34,35), this study is more representative of the Chinese population. 2) This study adopts a longitudinal design and, for the first time, makes clear that there are gender differences between height and DM risk in the Chinese population. 3) In this study, two different statistical methods were used to determine the saturation-effect point for Chinese women to assess the risk of DM, which provides a very useful reference for the primary prevention of DM. Some limitations also need to be highlighted: 1) In the current study, DM was diagnosed by FPG and self-report, and participants who might meet the diagnostic criteria for postprandial DM could not be identified, possibly underestimating the true incidence of DM. 
2) As described above, although stratified analysis in the current study found some meaningful results in subgroups, further interaction tests did not show significant differences, which was mainly related to the short follow-up time in the current study, and these subgroup analysis results need to be confirmed in samples with more DM events. 3) The current study did not distinguish the types of DM, which may affect the application of current research results in some special types of DM. 4) Covariates contained in the current research dataset were still limited, and some known risk factors for DM, such as femoral length, waist circumference, and hip circumference, are not included in the dataset, which inevitably leads to some residual confounding (61). 5) Although the participants in the current study come from many different cities in China (Nantong, Wuhan, Hefei, Guangzhou, Chengdu, Changzhou, Shenzhen, Suzhou, Nanjing, Beijing, Shanghai), most of them (10/11) are from southern China, so the results of the current study may be more applicable to people in southern China. The applicability in northern China needs to be explored in further research. 6) Due to the lack of identification information of different locations and physical examination institutions in the current study, it is impossible to evaluate the errors between different physical examination centers and within them, which may affect the results of this study. Further prospective cohort studies are needed to verify the results. 7) The data of the current study were collected from multiple physical examination centers across the country. It is undeniable that there are certain differences in genetic, environmental, nutritional, and physical activities among subjects in different regions, which may affect the interpretation of parameters collected and height saturation point. 
8) Although we have excluded subjects with DM at baseline, this study did not evaluate whether non-DM subjects used DM drugs at baseline, which may lead to some errors in the true diagnosis rate of DM. CONCLUSION In conclusion, the present study confirms that the short stature phenotype of Chinese women significantly increases the risk of DM, and 157-158 cm may be the saturation point of female's short height for predicting the risk of DM. These findings further clarify the association between height and DM in the Chinese population. These new insights may help develop a more accurate risk prediction model and may allow individuals to change their other behaviors to help reduce the risk of DM. AUTHOR CONTRIBUTIONS XW and WS designed the study. WS, YH, JY, YW, ZC, JX, and JL analyzed the data. YH, JY, YW, ZC, JX, JL, and XW interpreted the results. WS wrote the first draft of the manuscript. XW contributed to the refinement of the manuscript. All authors contributed to the article and approved the submitted version.
2022-04-05T13:17:19.342Z
2022-04-05T00:00:00.000
{ "year": 2022, "sha1": "6ba30a619e98b3c6e02faf5ae8d965fe22e5087a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "6ba30a619e98b3c6e02faf5ae8d965fe22e5087a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52044834
pes2o/s2orc
v3-fos-license
Document-Level Neural Machine Translation with Hierarchical Attention Networks Neural Machine Translation (NMT) can be improved by including document-level contextual information. For this purpose, we propose a hierarchical attention model to capture the context in a structured and dynamic manner. The model is integrated in the original NMT architecture as another level of abstraction, conditioning on the NMT model's own previous hidden states. Experiments show that hierarchical attention significantly improves the BLEU score over a strong NMT baseline with the state-of-the-art in context-aware methods, and that both the encoder and decoder benefit from context in complementary ways. Introduction Neural machine translation (NMT) (Bahdanau et al., 2015; Wu et al., 2016; Vaswani et al., 2017) trains an encoder-decoder network on sentence pairs to maximize the likelihood of predicting a target-language sentence given the corresponding source-language sentence, without considering the document context. By ignoring discourse connections between sentences and other valuable contextual information, this simplification potentially degrades the coherence and cohesion of a translated document (Hardmeier, 2012; Meyer and Webber, 2013; Sim Smith, 2017). Recent studies (Tiedemann and Scherrer, 2017; Wang et al., 2017) have demonstrated that adding contextual information to the NMT model improves the general translation performance, and more importantly, improves the coherence and cohesion of the translated text (Bawden et al., 2018; Lapshinova-Koltunski and Hardmeier, 2017). Most of these methods use an additional encoder (Wang et al., 2017) to extract contextual information from previous source-side sentences. However, this requires additional parameters and it does not exploit the representations already learned by the NMT encoder. More recently, a cache-based memory network has been shown to perform better than the above encoder-based methods. 
The cache-based memory keeps past context as a set of words, where each cell corresponds to one unique word, keeping the hidden representations learned by the NMT while translating it. However, in this method, the word representations are stored irrespective of the sentences where they occur, and those vector representations are disconnected from the original NMT network. We propose to use a hierarchical attention network (HAN) (Yang et al., 2016) to model the contextual information in a structured manner using word-level and sentence-level abstractions. In contrast to the hierarchical recurrent neural network (HRNN) used by Wang et al. (2017), here the attention allows dynamic access to the context by selectively focusing on different sentences and words for each predicted word. In addition, we integrate two HANs in the NMT model to account for target and source context. The HAN encoder helps in the disambiguation of source-word representations, while the HAN decoder improves target-side lexical cohesion and coherence. The integration is done by (i) re-using the hidden representations from both the encoder and decoder of previous sentence translations and (ii) providing input to both the encoder and decoder for the current translation. This integration method makes it possible to jointly optimize over multiple sentences. Furthermore, we extend the original HAN with multi-head attention (Vaswani et al., 2017) to capture different types of discourse phenomena. Our main contributions are the following: (i) We propose a HAN framework for translation to capture context and inter-sentence connections in a structured and dynamic manner. (ii) We integrate the HAN in a very competitive NMT architecture (Vaswani et al., 2017) and show significant improvement over two strong baselines on multiple data sets. 
(iii) We perform an ablation study of the contribution of each HAN configuration, showing that the contextual information obtained from the source and target sides is complementary.

The Proposed Approach

The goal of NMT is to maximize the likelihood of a set of sentences in a target language represented as sequences of words y = (y_1, ..., y_T), given a set of input sentences in a source language x = (x_1, ..., x_M), as:

P(y | x) = prod_{t=1..T} P(y_t | y_{<t}, x)

so the translation of a document D is made by translating each of its sentences independently. In this study, we introduce dependencies on the previous sentences from the source and target sides:

P(y_n | x_n, D_{x_n}, D_{y_n})

where D_{x_n} = (x_{n-k}, ..., x_{n-1}) and D_{y_n} = (y_{n-k}, ..., y_{n-1}) denote the previous k sentences from the source and target sides respectively. The contexts D_{x_n} and D_{y_n} are modeled with HANs.

Hierarchical Attention Network

The proposed HAN has two levels of abstraction. The word-level abstraction summarizes information from each previous sentence j into a vector s_j as:

q_w = f_w(h_t),    s_j = MultiHead(q_w, (h^j_1, ..., h^j_I))

where h denotes a hidden state of the NMT network. In particular, h_t is the last hidden state of the word to be encoded or decoded at time step t, and h^j_i is the last hidden state of the i-th word of the j-th sentence of the context. The function f_w is a linear transformation to obtain the query q_w. We used the MultiHead attention function proposed by Vaswani et al. (2017) to capture different types of relations among words. It matches the query against each of the hidden representations h^j_i (used as value and key for the attention). The sentence-level abstraction summarizes the contextual information required at time t in d_t as:

q_s = f_s(h_t),    d_t = FFN(MultiHead(q_s, (s_1, ..., s_k)))

where f_s is a linear transformation, q_s is the query for the attention function, and FFN is a position-wise feed-forward layer (Vaswani et al., 2017).

Figure 1: Integration of HAN during encoding at time step t; h~_t is the context-aware hidden state of the word x_t. A similar architecture is used during decoding.
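The two attention levels just described can be illustrated with a minimal numerical sketch. This is a single-head, NumPy-only toy (the paper uses multi-head attention with learned projections inside a Transformer; the `attend` helper and the weight matrices here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    # single-head scaled dot-product attention over one sentence
    scores = keys @ query / np.sqrt(len(query))
    return softmax(scores) @ values

def han_context(h_t, context_sents, W_w, W_s):
    """Two-level HAN summary d_t of the k previous sentences.
    context_sents: list of arrays, each (n_words_j, d) of NMT hidden states.
    W_w, W_s: stand-ins for the linear maps f_w, f_s producing the
    word- and sentence-level queries."""
    q_w = W_w @ h_t                                           # word-level query
    s = np.stack([attend(q_w, H, H) for H in context_sents])  # one s_j per sentence
    q_s = W_s @ h_t                                           # sentence-level query
    return attend(q_s, s, s)                                  # document context d_t
```

Because both levels compute convex combinations of their inputs, if every context hidden state equals the same vector, d_t equals that vector regardless of the queries.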
Each layer is followed by a normalization layer (Lei Ba et al., 2016).

Context Gating

We use a gate to regulate the sentence-level information h_t and the document-level contextual information d_t. The intuition is that different words require different amounts of context for translation:

lambda_t = sigma(W_h h_t + W_p d_t),    h~_t = lambda_t h_t + (1 - lambda_t) d_t

where W_h, W_p are parameter matrices, and h~_t is the final hidden representation for a word x_t or y_t.

Integrated Model

The context can be used during encoding or decoding a word, and it can be taken from previously encoded source sentences, previously decoded target sentences, or from previous alignment vectors (i.e., context vectors (Bahdanau et al., 2015)). The different configurations define the input query and values of the attention function. In this work we experiment with five of them: one at encoding time, three at decoding time, and one combining both. At encoding time the query is a function of the hidden state h_{x_t} of the current word to be encoded x_t, and the values are the encoded states of previous sentences h^j_{x_i} (HAN encoder). At decoding time, the query is a function of the hidden state h_{y_t} of the current word to be decoded y_t, and the values can be (a) the encoded states of previous sentences h^j_{x_i} (HAN decoder source), (b) the decoded states of previous sentences h^j_{y_i} (HAN decoder), or (c) the alignment (context) vectors of previous sentences (HAN decoder alignment).

Experimental Setup

Datasets and Evaluation Metrics

We carry out experiments with Chinese-to-English (Zh-En) and Spanish-to-English (Es-En) sets on three different domains: talks, subtitles, and news. TED Talks is part of the IWSLT 2014 and 2015 (Cettolo et al., 2012, 2015) evaluation campaigns. We use dev2010 for development; and tst2010-2012 (Es-En), tst2010-2013 (Zh-En) for testing. The Zh-En subtitles corpus is a compilation of TV subtitles designed for research on context (Wang et al., 2018). In contrast to the other sets, it has three references to compare against. The Es-En corpus is a subset of OpenSubtitles2018 (Lison and Tiedemann, 2016).
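The context gate described above can be sketched directly. This assumes the elementwise form lambda_t = sigma(W_h h_t + W_p d_t) with a per-dimension interpolation; the function and matrix names are illustrative, not taken from the authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_context(h_t, d_t, W_h, W_p):
    """Gated combination of the sentence-level state h_t and the
    document-level context d_t. lambda_t is computed per dimension,
    so each feature can draw a different amount of document context."""
    lam = sigmoid(W_h @ h_t + W_p @ d_t)
    return lam * h_t + (1.0 - lam) * d_t
```

With zero weight matrices the gate is 0.5 everywhere and the output is the plain average of h_t and d_t, which makes the interpolation behavior easy to check.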
We randomly select two episodes each for development and testing. Finally, we use the Es-En News-Commentaries11 corpus, which has document-level delimitation. We evaluate on WMT sets (Bojar et al., 2013): newstest2008 for development, and newstest2009-2013 for testing. A similar corpus for Zh-En is too small to be comparable. Table 2 shows the corpus statistics. For evaluation, we use BLEU score (Papineni et al., 2002) (multi-bleu) on tokenized text, and we measure significance with the paired bootstrap resampling method proposed by Koehn (2004) (implementations by Koehn et al. (2007)).

Model Configuration and Training

As baselines, we use an NMT transformer, and a context-aware NMT transformer with cache memory, which we implemented for comparison following the best model described by its authors, with a memory size of 25 words. We used the OpenNMT (Klein et al., 2017) implementation of the transformer network. The configuration is the same as the model called "base model" in the original paper (Vaswani et al., 2017). The encoder and decoder are composed of 6 hidden layers each. All hidden states have a dimension of 512, dropout of 0.1, and 8 heads for the multi-head attention. The target and source vocabulary size is 30K. The optimization and regularization methods were the same as proposed by Vaswani et al. (2017). We trained the models in two stages: first we optimize the parameters of the NMT without the HAN, then we proceed to optimize the parameters of the whole network. We use k = 3 previous sentences, which gave the best performance on the development set.

Table 1 shows the BLEU scores for the different models. The baseline NMT transformer already has better performance than previously published results on these datasets, and we replicate the previously reported improvements of the cache method over this stronger baseline. All of our proposed HAN models perform at least as well as the cache method.
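The significance test used above, Koehn's (2004) paired bootstrap resampling, is simple to sketch. Here `metric` stands in for a corpus-level score such as BLEU; the function names are illustrative:

```python
import random

def paired_bootstrap(metric, sys_a, sys_b, refs, n_samples=1000, seed=0):
    """Paired bootstrap resampling (Koehn, 2004): estimates how often
    system A outscores system B on test sets resampled with replacement.
    metric(hyps, refs) must return a corpus-level score."""
    rng = random.Random(seed)
    indices = list(range(len(refs)))
    wins = 0
    for _ in range(n_samples):
        sample = [rng.choice(indices) for _ in indices]  # resample with replacement
        a = metric([sys_a[i] for i in sample], [refs[i] for i in sample])
        b = metric([sys_b[i] for i in sample], [refs[i] for i in sample])
        wins += a > b
    # fraction of resamples on which A beats B; one minus this value is the
    # p-value for the null hypothesis that A is not better than B
    return wins / n_samples
```

For instance, with an exact-match toy metric, a system identical to the references wins every resample against one that never matches.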
The best scores are obtained by the combined encoder and decoder HAN model, which is significantly better than the cache method on all datasets without compromising training speed (2.3K vs 2.6K tok/sec). An important portion of the improvement comes from the HAN encoder, which can be attributed to the fact that the source-side always contains correct information, while the target-side may contain erroneous predictions at testing time. But combining the HAN decoder with the HAN encoder further improves translation performance, showing that they contribute complementary information. The three ways of incorporating information into the decoder all perform similarly. Table 3 shows the performance of our best HAN model with a varying number k of previous sentences in the test set. We can see that the best performance for TED talks and news is achieved with 3, while for subtitles it is similar between 3 and 7.

Accuracy of Pronoun/Noun Translations

We evaluate coreference and anaphora using a reference-based metric: accuracy of pronoun translation (Miculicich Werlen and Popescu-Belis, 2017b), which can be extended to nouns. The list of evaluated pronouns is predefined in the metric, while the list of nouns was extracted using NLTK POS tagging (Bird, 2006).

Table 1: BLEU score for the different configurations of the HAN model, and two baselines. The highest score per dataset is marked in bold. ∆ denotes the difference in BLEU score with respect to the NMT transformer. The significance values with respect to the NMT and the cache method are denoted by * and † respectively. The repetitions correspond to the p-values: *† < .05, **†† < .01, ***††† < .001.

The upper part of Table 4 shows the results. For nouns, the joint HAN achieves the best accuracy with a significant improvement compared to other models, showing that target and source contextual information are complementary. Similarly, for pronouns, the joint model has the best result for TED talks and news.
However, the HAN encoder alone is better in the case of subtitles. Here the HAN decoder produces mistakes by repeating past translated personal pronouns. Subtitles is a challenging corpus for personal pronoun disambiguation because it usually involves dialogue between multiple speakers.

Cohesion and Coherence Evaluation

We use the metric proposed by Wong and Kit (2012) to evaluate lexical cohesion. It is defined as the ratio between the number of repeated and lexically similar content words over the total number of content words in a target document. The lexical similarity is obtained using WordNet. Table 4 (bottom-left) displays the average ratio per tested document. In some cases, the HAN decoder achieves the best score because it produces a larger quantity of repetitions than other models. However, as previously demonstrated in Section 4.2, repetitions do not always make the translation better. Although HAN boosts lexical cohesion, the scores are still far from the human reference, so there is room for improvement in this aspect. For coherence, we use a metric based on Latent Semantic Analysis (LSA) (Foltz et al., 1998). LSA is used to obtain sentence representations, then cosine similarity is calculated from one sentence to the next, and the results are averaged to get a document score. We employed the pre-trained LSA model Wiki-6 from Stefanescu et al. (2014). Table 4 (bottom-right) shows the average coherence score of documents. The joint HAN model consistently obtains the best coherence score, but close to other HAN models. Most of the improvement comes from the HAN decoder. Table 5 shows an example where HAN helped to generate the correct translation. The first box shows the current sentence with the analyzed word in bold; and the second, the past context at source and target. For the context visualization we use the toolkit provided by Pappas and Popescu-Belis (2017). Red corresponds to sentences, and blue to words. The intensity of color is proportional to the weight.
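The LSA-based coherence score described above reduces to averaging cosine similarities between consecutive sentence vectors. A minimal sketch, assuming the sentence vectors have already been produced by an LSA model:

```python
import numpy as np

def coherence_score(sent_vecs):
    """LSA-style coherence (Foltz et al., 1998): mean cosine similarity
    between each sentence vector and the next one in the document."""
    sims = [u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            for u, v in zip(sent_vecs[:-1], sent_vecs[1:])]
    return float(np.mean(sims))
```

A document whose consecutive sentences point in the same direction scores 1.0; orthogonal consecutive sentences score 0.0.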
We see that HAN correctly translates the ambiguous Spanish pronoun "su" into the English "his". The HAN decoder highlighted a previous mention of "his", and the HAN encoder highlighted the antecedent "Nathaniel". This shows that HAN can capture interpretable inter-sentence connections. More samples with different attention heads are shown in the Appendix.

Conclusion

We proposed a hierarchical multi-head HAN NMT model to capture inter-sentence connections. We integrated context from the source and target sides by directly connecting representations from previous sentence translations into the current sentence translation. The model significantly outperforms two competitive baselines, and the ablation study shows that target and source context are complementary. It also improves lexical cohesion and coherence, and the translation of nouns and pronouns. The qualitative analysis shows that the model is able to identify important previous sentences and words for the correct prediction. In future work, we plan to explicitly model discourse connections with the help of annotated data, which may further improve translation quality.
The Role of Road Database in Supporting Road Network Development Analysis for Regional Development

Analysis of regional accessibility is very important in regional development, where good accessibility is needed to support regional development, especially for potential areas. In the accessibility analysis, it is necessary to have regional spatial data that show the potential of the area, and accurate road network data, including the geometry and condition of the road network, required for accessibility calculations, for which the road database is the main and most accurate data source. Penukal Abab Lematang Ilir (PALI) is a new district in South Sumatra and has great plantation and mining potential. PALI District continues to develop its road network to improve regional accessibility. The road database for PALI District has been developed since 2017 and has complete road data covering 162 roads with a total length of 555.85 km. This paper discusses the development of regional accessibility, through road improvement and road network development in PALI District, using the existing spatial plan and road database. The calculation of accessibility was carried out using the principle of regional connectivity and considering the effect of distance and road network conditions. A network scenario was developed, and the accessibility analysis provided the best road network development plan for the regional development of PALI District.

Introduction

For the development of a region, an effective connecting network is a prerequisite for ensuring connectivity support for potential regions and support for the region's internal and external accessibility, so that regional development plans can be implemented as planned. Accessibility studies related to regions have been widely discussed [1][2][3][4][5][6]. A previous study [7] stated that in ranking an area's accessibility, the road network's length and condition are very influential.
Thus, in ranking regional accessibility, it is necessary to include the parameters of road length and road conditions. To calculate the accessibility, accurate and complete data on the existing road network are needed; consequently, the road database's role is essential in calculating regional accessibility. PALI District has a comprehensive road database that is regularly updated. PALI District has various natural resource potentials. Based on the existing regional spatial plan, there are 12 activity centers to be developed, which are grouped into Centers of Local Activity (CLA), Centers of Promotion of Local Activity (CPLA), Centers of Regional Service (CRS), and Centers of Environmental Service (CES). This study discusses the development of the road network in PALI District by taking into account the regional development plan and the location of areas with natural resource potential, supported by the existing road database. The accessibility calculation was based on the calculation of the connectedness matrix with respect to road length and road conditions [7]. The network being developed is aimed at improving the accessibility within the region and from and to outside the region. The final result of this research is a road network development plan ready to support the Regional Development Plan in PALI District.

Methodology

The research steps are shown in Figure 1 with the following explanation:
- The data required for the accessibility calculation were in the form of Potential Area Data obtained from the regional spatial plan. The network, geometric and road condition data were obtained from the Road Database of PALI District.
- The Potential Area Data and Existing Road Network were used to create alternatives for road network development.
- The role of databases in providing the data required for the discussion was examined to show the importance of databases in Area Accessibility Analysis.
- The final stage of the analysis was to find the best road network alternative for regional development.
- In this study the total accessibility matrix was calculated based on the connectedness matrix. Several studies on improving regional connectivity in Indonesia using the total accessibility matrix have been reported [8][9][10].
- Apart from the regional connectivity, the road lengths and conditions were used in the Accessibility Calculation Analysis.

The steps for calculating accessibility are described as follows:
1. Identifying links and nodes in the existing network.
2. Defining the connectivity matrix, in which each cell value is between 0 and 1, depending on the connectivity, length and condition of the road. A basic matrix is a connectivity matrix that describes direct connectivity between regions. To include the influence of road length on the connectivity matrix, the matrix cell between two vertices that have the shortest distance is given a value of 1, while the other matrix cells are given a value obtained by dividing the shortest path length by the road length between the two road vertices. The influence of road conditions is included in the calculation by multiplying by the percentage of the road length that is in good condition [7].
3. Calculating the accessibility matrix. The phases of the calculation process of the accessibility matrix are as follows [11]:
   1. Arrange the initial network matrix based on the road network map and name it C_1. Set T = C_1 and the initial value n = 1. Then check whether any element of the matrix T has a zero value; if none does, then m = 1 and proceed directly to step 5.
   2. Calculate m = n + 1 and C_m = C_1 × C_n.
   3. Calculate T = T + C_m.
   4. Stop the iteration when the matrix T has no elements with a zero value; if any element is zero, set n = m, C_n = C_m, and return to step 2.
   5. When the iteration is terminated, the last T is the total accessibility matrix and m is the network diameter.
   6.
Calculating the accessibility value of each node.

3. Existing Network Condition and Network Development Plan

3.1. Regional Potential and Development Plan

In the Regional Spatial Plan (RTRW) of PALI District [12], based on the region's potential, a Regional Development Plan for PALI District has been formulated, as shown in Table 1 and Figure 2. There are 12 regional development plans in PALI District, with Pendopo as the development center. The areas to be developed take the form of trade centers, plantations, mining, social and economic services, tourism areas, and access points into and out of PALI District. The development areas need improved accessibility, both within the region and to outside the region, through appropriate development of the road network.

Existing Road Network Condition and Role of Database

Analysis of the existing road network was used to see how the existing road network could support the Regional Development Plan of PALI District. It was conducted by looking at the inter-regional connectivity of the road network, especially the connections of the development areas. The existing road network and the development area plan are shown in Figure 3. Figure 3 shows that several planned development areas are not well connected, such as Betung Barat, Prambatan and Modong, so that a Road Development Plan is needed to improve the accessibility of potential areas. Currently there are two main accesses out of PALI District, namely through Talang Bulang (node-15) to Muara Enim District and via Kota Baru (node-18) to Musi Banyuasin District. There are two exit accesses that have not been properly opened, namely through the Semangos and Modong Rivers, which open access to Musi Rawas District and Muara Enim District. The development of the road network has to address the shortcomings of the existing road network in supporting regional accessibility.
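The matrix iteration described in the numbered steps above can be sketched directly. This is a minimal NumPy version; the per-node accessibility value is taken here as the row sum of the total matrix T, which is one common convention (an assumption, since the paper does not spell out the aggregation):

```python
import numpy as np

def total_accessibility(C1, max_iter=50):
    """Total accessibility matrix T from a connectivity matrix C1.
    Accumulates successive powers of C1 (T = C1 + C1^2 + ...) until T has
    no zero entries. Returns T, the network diameter m, and per-node
    accessibility values (row sums of T)."""
    T = C1.astype(float).copy()
    Cn = T.copy()
    m = 1
    while (T == 0).any() and m < max_iter:
        m += 1
        Cn = C1 @ Cn        # C_m = C_1 x C_(m-1)
        T = T + Cn          # accumulate into the total matrix
    return T, m, T.sum(axis=1)
```

For a three-node line network (node 1 connected to nodes 0 and 2), the iteration stops at m = 2 and ranks the middle node as the most accessible.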
In this study, the data on the network, geometry and road conditions were obtained from the road and bridge database developed by PALI District. With these data available for the accessibility calculation, the road network development planning process could be done more quickly and accurately. The data used include:
- Road geometric data (length of road sections);
- Road coordinate data; and
- Road condition data, which from 2019 began to be collected as IRI values using the Roadroid application.

The main menu display of the PALI database is shown in Figure 4, in which the database menu consists of data input, data view, reporting, road handling history, and road and bridge libraries. An example of the display of the geometric data and road conditions of a road is shown in Figures 5 and 6. The road and bridge database of PALI District has complete data covering 162 roads with a total length of 555.86 km. The road length and condition data on the existing road network used in the analysis are shown in Table 2. These data were needed to calculate accessibility in the context of regional development. With a complete road database, the local accessibility calculations can be done more quickly and accurately. The road and bridge database is very useful in developing the road network, besides being used for road maintenance planning in an area.

Road Network Development Plan

The road network development plan was based on the need to increase the accessibility of areas, especially the potential regions. In addition, the road networks were developed to improve the accessibility into and out of the region. Based on the condition of the existing road network, the network development was carried out as follows:
- Increasing accessibility to the potential areas of West Betung (node-28), Prambat (node-29) and Modong (node-25) by developing roads or improving road conditions to those areas.
Figure 7 shows the developed road network plan. To see the effect of the road network development plan on regional accessibility, the accessibility ranking of the road network development plan was compared to the existing accessibility ranking.

Accessibility Analysis

Area accessibility is calculated based on area connectivity, road length, and road conditions. Previous studies have shown that road lengths and conditions greatly influence the results of accessibility ranking [7]. The results of the comparison of the regional accessibility rankings for the existing conditions and the road network development scenario are shown in Table 3, where the first step in the calculation was to determine the connectivity matrix. The connectivity matrices of the existing network conditions and the road network development plan are shown in Tables 4 and 5. Table 3 shows that the road development plan is able to improve the accessibility of the area, as described below:
- With the addition of the road network to West Betung (node-28) and Prambatan (node-29), which are development centers of the regional service area, there has been a significant improvement in accessibility, especially to West Betung.
- After developing the network, the accessibility pattern of the area changes: whereas previously the best regional accessibility was only in Penukal Abab Subdistrict, Abab Subdistrict also becomes an area with good regional accessibility.
- For access outside the region, the road development scenario adds accesses into and out of the District via Semanggos (node-26) to Musi Rawas District and via Modong (node-25) heading for Muara Enim District, so that the accessibility to and from the region gets better.

Conclusion

This study discusses the analysis of increasing regional accessibility in PALI District for local development.
Based on the condition of the existing road network, the location of the potential area to be developed and the access to and from Pali District, a road network development scenario was developed in Pali District. In addition to maintaining and increasing the existing accessibility of potential areas, the developed scenario opens access to potential areas and into and out of Pali District. With a complete road database, the regional accessibility calculations can be conducted more quickly and accurately.
Simultaneous blockade of VEGF and Dll4 by HD105, a bispecific antibody, inhibits tumor progression and angiogenesis

ABSTRACT

Several angiogenesis inhibitors targeting the vascular endothelial growth factor (VEGF) signaling pathway have been approved for cancer treatment. However, VEGF inhibitors alone were shown to promote tumor invasion and metastasis by increasing intratumoral hypoxia in some preclinical and clinical studies. Emerging reports suggest that Delta-like ligand 4 (Dll4) is a promising target of angiogenesis inhibition to augment the effects of VEGF inhibitors. To evaluate the effects of simultaneous blockade of VEGF and Dll4, we developed a bispecific antibody, HD105, targeting VEGF and Dll4. The HD105 bispecific antibody, which is composed of an anti-VEGF antibody (bevacizumab-similar) backbone C-terminally linked with a Dll4-targeting single-chain variable fragment, showed potent binding affinities against VEGF (KD: 1.3 nM) and Dll4 (KD: 30 nM). In addition, the HD105 bispecific antibody competitively inhibited the binding of the ligands to their receptors, i.e., VEGF to VEGFR2 (EC50: 2.84 ± 0.41 nM) and Dll4 to Notch1 (EC50: 1.14 ± 0.06 nM). Using in vitro cell-based assays, we found that HD105 effectively blocked both the VEGF/VEGFR2 and Dll4/Notch1 signaling pathways in endothelial cells, resulting in a conspicuous inhibition of endothelial cell proliferation and sprouting. HD105 also suppressed Dll4-induced Notch1-dependent activation of a luciferase gene. In vivo xenograft studies demonstrated that HD105 inhibited the tumor progression of human A549 lung and SCH gastric cancers more efficiently than an anti-VEGF antibody or an anti-Dll4 antibody alone. In conclusion, HD105 may be a novel therapeutic bispecific antibody for cancer treatment.

Introduction

Tumor angiogenesis, the formation of new blood vessels in solid tumors, contributes to tumor cell survival, growth and metastasis.
An important driving force of tumor angiogenesis is the signaling pathway involving vascular endothelial growth factor (VEGF) and its receptors (VEGFRs). 1 Several angiogenesis inhibitors targeting the VEGF/VEGFR signaling pathway have been approved by the Food and Drug Administration, and are now used for the treatment of several cancers. 2 The first inhibitor of the VEGF/VEGFR signaling pathway to be approved was bevacizumab (Avastin®, Genentech/Roche), a monoclonal antibody against the human VEGF ligand. 2 The other protein-based inhibitors are ramucirumab (Cyramza®, Eli Lilly), a human monoclonal antibody against human VEGFR, and aflibercept (VEGF-Trap; Eylea®, Regeneron). 3,4 Another class of inhibitors includes sunitinib (Sutent®, Pfizer) and sorafenib (Nexavar®, Bayer), which are small-molecule compounds that directly inhibit the phosphorylation of VEGFRs. 2 VEGF/VEGFR signaling inhibitors can block VEGF-driven angiogenesis and regress tumor vessels that are dependent on VEGF. However, VEGF inhibitors alone do not destroy all blood vessels in tumors. In addition, several preclinical studies have shown that VEGF inhibitors alone resulted in a more invasive pattern of tumors. Recently, a similar pattern of tumor infiltration has been observed in cancer patients as a result of resistance to anti-VEGF therapy. [5][6][7][8] Because those cancer patients are refractory to anti-angiogenic therapies, there is a strong clinical need for next-generation angiogenesis inhibitors to overcome resistance to anti-VEGF therapy. 9,10 Delta-like ligand 4 (Dll4), a Notch ligand, also plays an important role in vascular development. 11 Although many genes are involved in vascular development, aside from the VEGF gene, Dll4 is the only gene whose haploinsufficiency leads to major vascular defects and embryonic lethality. 11 The Dll4/Notch signaling pathway regulates not only embryonic vasculature, but also tumor angiogenesis.
[11][12][13] In particular, Dll4 is highly expressed in many human cancers, including kidney cancers, gastric cancers, lung cancers, bladder cancers, pancreatic cancers, colorectal cancers, and breast cancers. 14,15 Several preclinical xenograft studies have shown that Dll4/ Notch blockade inhibited tumor progression by promoting the hyperproliferation of endothelial cells, which resulted in an increase in vascular density but a decrease in functional patent tumor vasculature. [16][17][18][19][20] In addition to the effects of Dll4 blockade on tumor vasculature, Dll4/Notch inhibition is known to reduce cancer stem cells (CSCs), which are an important cancer cell population for malignant tumor progression. 21 Therefore, Dll4 is now recognized as a promising target for improved efficacy in cancer treatment. Moreover, the Dll4/Notch signaling pathway acts as a key negative regulator downstream of the VEGF/VEGFR signaling pathway. 14,15 VEGF signaling activates the Notch pathway locally through the upregulation of Dll4 expression. Then, Dll4-dependent Notch activation leads to the suppression of the VEGF signaling pathway by the downregulation of VEGFR2 expression, resulting in the inhibition of excessive vessel branching by preventing endothelial tip cell formation. 15 This crosstalk between VEGF/VEGFR2 and Dll4/ Notch signaling pathways suggests that the simultaneous blockade of both signaling pathways would provide improved efficacy for the inhibition of tumor progression and angiogenesis. [14][15][16]19,20 We developed the HD105 bispecific antibody, targeting both VEGF and Dll4, as a potent anti-cancer therapeutic antibody. We generated HD105 by linking each C-terminal of an anti-VEGF antibody (bevacizumab-similar) with a Dll4-binding single-chain variable fragment (scFv). 
22,23 The bevacizumab-similar, a biosimilar molecule from Hanwha Chemical, has the same complementarity-determining region sequence, and similar biological activity and cross-reactivity compared with the originator's molecule. In this report, we evaluate the in vitro activities of the HD105 bispecific antibody compared to a VEGF-targeting antibody (bevacizumab-similar) and a Dll4-targeting antibody alone. HD105 bound to both targets, VEGF and Dll4, with nanomolar KD values, and dose-dependently inhibited the VEGF/VEGFR and Dll4/Notch interactions. These biochemical activities of the bispecific antibody led to the potent inhibition of each signaling pathway in endothelial cells and the dose-dependent suppression of VEGF-induced or Dll4-induced cellular responses. In addition, we found that simultaneous blockade by the HD105 bispecific antibody inhibited the tumor progression of human A549 lung and SCH gastric cancers in xenograft models more effectively than a VEGF-targeting antibody (bevacizumab-similar) or a Dll4-targeting antibody alone. These results suggest that HD105 has promise as an anti-cancer therapeutic antibody to overcome resistance to anti-VEGF therapies.

Simultaneous binding of HD105 bispecific antibody to VEGF and Dll4

The bispecific antibody HD105 is composed of a VEGF-targeting bevacizumab-similar IgG backbone and a Dll4-targeting single-chain Fv (Fig. 1A). To determine the binding affinities of HD105 against each target antigen, we performed Biacore assays and enzyme-linked immunosorbent assays (ELISAs) using the immobilized antigens VEGF and Dll4. The KD value of HD105 (0.13 nM) against human VEGF was found to be 2-fold higher than the KD value of the anti-VEGF bevacizumab-similar antibody (0.06 nM) in the Biacore assay (Fig. 1B). In addition, the KD value of HD105 against human Dll4 (30 nM) was 10-fold higher than the KD value of the anti-Dll4 monoclonal antibody (3.6 nM) (Fig. 1B).
The higher K D value of HD105 against human VEGF and Dll4 might be due to a difference in the structure of the antibody molecule between a conventional IgG and the bispecific format of the HD105 antibody. 24,25 Using ELISAs, we determined the dose-dependent binding profiles of the HD105 bispecific antibody against immobilized VEGF and Dll4 (Fig. 1C, 1D, respectively). The results of dualantigen capture ELISA confirmed that each binding part of HD105 is actively maintained in the format of an IgG backbone linked with a scFvs (Fig. 1E). These results demonstrated that the binding affinity and kinetics of the bispecific antibody were comparable to the values for each single-antigen-targeting antibody. Next, we determined whether the HD105 bispecific antibody inhibited the receptor-ligand bindings of VEGF/ VEGFR2 and Dll4/Notch1. As shown in Fig. 1F, HD105 inhibited the interaction between human VEGF and human VEGFR2 (KDR) in a dose-dependent manner. The EC 50 (half maximal effective concentration) value of HD105 in inhibiting VEGF/VEGFR-2 interaction was 2.84 nM, which is comparable with the EC 50 value of the anti-VEGF (bevacizumab-similar) antibody (2.98 nM) (Fig. 1F). HD105 also inhibited the interaction between human Dll4 and Notch1. The EC 50 value (1.14 nM) of HD105 was 2-fold higher than the EC 50 value (0.65 nM) of the anti-Dll4 antibody (Fig. 1G), which might be due to the 10-fold lower binding affinity of Dll4 scFv in the bispecific antibody. Nonetheless, the results of competition inhibition ELISAs confirmed that the HD105 bispecific antibody effectively bound to each target and competitively inhibited the interaction of VEGF/VEGFR2 and Dll4/Notch1. 
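EC50 values like those reported above are read off dose-response curves. A minimal way to estimate one from measured points is log-linear interpolation at the half-maximal response; this is a simplified stand-in (real analyses typically fit a four-parameter logistic model, and the function name and data here are illustrative, not the authors' procedure):

```python
import math

def ec50(concs, responses):
    """EC50 via log-linear interpolation: the concentration at which the
    response crosses halfway between its minimum and maximum. Assumes
    `concs` is sorted ascending and `responses` is roughly monotone."""
    half = (min(responses) + max(responses)) / 2.0
    for i in range(len(concs) - 1):
        r0, r1 = responses[i], responses[i + 1]
        if (r0 - half) * (r1 - half) <= 0:   # this interval brackets the midpoint
            t = (half - r0) / (r1 - r0)
            lc = math.log10(concs[i]) + t * (math.log10(concs[i + 1]) - math.log10(concs[i]))
            return 10 ** lc
    raise ValueError("half-maximal response is not bracketed by the data")
```

On synthetic saturation-binding data the interpolated estimate lands near the true dissociation constant, though not exactly on it, which is why curve fitting is preferred in practice.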
Inhibition of VEGF- and Dll4-mediated signaling pathways and cell responses

To address the in vitro biochemical and biological activities of HD105, we examined the activation of downstream molecules of the VEGF/VEGFR2 or Dll4/Notch1 signaling pathways and signaling-mediated cellular responses after HD105 treatment. First, we determined the effects of the HD105 bispecific antibody on both signaling pathways, VEGF/VEGFR2 and Dll4/Notch1, in HUVECs (Fig. 2A). VEGF-induced VEGFR2 activation was monitored by the phosphorylation status of VEGFR2 and ERK (Fig. 2A, lanes 1-3), whereas the Dll4-mediated Notch signaling pathway was monitored by the induction of the Notch intracellular domain (NICD; Fig. 2A, lanes 4-6). The VEGF-induced VEGFR2 signaling pathway was completely suppressed by treatment with the anti-VEGF (bevacizumab-similar) antibody (Fig. 2A, lanes 3 and 6). The VEGF/VEGFR2 signaling pathway in HUVECs was also inhibited by treatment with HD105, but not by treatment with the anti-Dll4 antibody or DBZ (dibenzazepine), a chemical inhibitor of the Notch receptor (Fig. 2A, lanes 7-9). In the case of Dll4-mediated NICD induction, the Dll4-induced Notch1 signaling pathway was effectively inhibited by treatment with the HD105 bispecific antibody, the anti-Dll4 antibody or DBZ (Fig. 2A, lanes 7-9), but not by the anti-VEGF (bevacizumab-similar) antibody (Fig. 2A, lane 6). These results demonstrated that the HD105 bispecific antibody simultaneously inhibited the downstream signaling pathways of both VEGF-VEGFR2 and Dll4-Notch1 in endothelial cells. Because VEGF-induced VEGFR2 activation eventually stimulates endothelial cell responses, we tested whether the HD105 bispecific antibody inhibits VEGF-induced HUVEC sprouting and proliferation compared to the anti-VEGF bevacizumab-similar antibody and the anti-Dll4 antibody.
To examine the effects on endothelial cell sprouting, HUVECs were mixed with dextran-coated beads in fibrin gels, and then allowed to sprout under normal endothelial cell culture conditions (Fig. 2B). Sprouting of endothelial tip cells was completely suppressed after anti-VEGF (bevacizumab-similar antibody) treatment, but markedly increased after anti-Dll4 monoclonal antibody treatment compared to the control (Fig. 2C, D). In addition, the number of sprouting endothelial cells was decreased after HD105 bispecific antibody treatment (Fig. 2E). The measurement of sprouting HUVECs at a fixed distance (225 µm) from the beads showed a 49% increase after anti-Dll4 monoclonal antibody treatment, but an 89% decrease after treatment with the anti-VEGF bevacizumab-similar antibody or the HD105 bispecific antibody compared to the control (Fig. 2F). In the case of endothelial cell proliferation, the HD105 bispecific antibody inhibited VEGF-induced HUVEC proliferation in a dose-dependent manner (Fig. 2G). The IC50 value of HD105 was determined to be 1.58 ± 0.08 nM, which is comparable with the value for the anti-VEGF antibody (1.49 ± 0.04 nM). To test the effects of HD105 on Dll4-mediated cell responses, we used engineered SKOV3 cancer cells expressing a luciferase gene regulated by Notch1 activation. HD105 dose-dependently inhibited Dll4-induced, Notch1-dependent activation. The IC50 value of HD105 was determined to be 0.62 ± 0.23 nM, and the IC50 value of the anti-Dll4 monoclonal antibody was 0.58 ± 0.03 nM (Fig. 2H). Based on the results of cell-based potency assays, we confirmed that the HD105 bispecific antibody suppressed cellular responses stimulated by either VEGF or Dll4.
Suppression of tumor progression in xenograft models

To determine the effects of the HD105 bispecific antibody on tumor progression in vivo, we used human cancer xenograft models in nude mice treated with a mouse surrogate Dll4 antibody, because VEGF was secreted by the human cancer cells, whereas Dll4 was expressed by tumor endothelial cells originating from the mice. We generated a mouse surrogate Dll4 antibody binding to the N-terminal regions of mouse Dll4, using binding epitopes similar to those of the HD105 antibody on N-terminal human Dll4 (Fig. S1A-C). Thus, the mouse surrogate HD105 bispecific antibody can inhibit tumor progression via the neutralization of human cancer-secreted VEGF and host-expressed mouse endothelial Dll4 in xenograft models. Both the mouse Dll4-targeting bispecific antibody and the monoclonal antibody competitively inhibited the interaction of mouse Dll4/Notch1 with a similar range of EC50 values (Fig. S1C). In addition, in mouse serum the concentration of the mouse HD105 bispecific antibody was maintained at 83.3%, and that of the mouse Dll4 antibody at 78.2%, after 100 hours of incubation at 37°C. In order to further evaluate the in vivo systemic exposure of HD105 and the anti-Dll4 monoclonal antibody, we determined pharmacokinetic (PK) profiles and parameters of HD105 and the anti-Dll4 antibody using BALB/c mice. We found no significant differences in the PK profiles and parameters of HD105 compared to those of the anti-Dll4 antibody (Fig. S1D, E). These results from the mouse PK studies demonstrated that our bispecific antibody format has in vivo exposure and clearance patterns similar to the Dll4-targeting monoclonal antibody. In A549 human lung cancer xenograft models (Fig. 3A), the mouse surrogate HD105 bispecific antibody suppressed tumor progression more effectively (74%) than the single-targeting antibodies, the anti-VEGF bevacizumab-similar antibody alone (50%) or the anti-mouse Dll4 antibody alone (50%).
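The text reports tumor progression inhibition percentages (e.g., 74% vs. 50%) without spelling out the calculation. A common convention, assumed here, is tumor growth inhibition TGI = (1 − T/C) × 100, where T and C are the mean treated and control tumor volumes; the volumes below are hypothetical, chosen only to reproduce figures of that magnitude.

```python
def tumor_growth_inhibition(treated_volume, control_volume):
    """Percent tumor growth inhibition, TGI = (1 - T/C) * 100 (assumed convention)."""
    return (1.0 - treated_volume / control_volume) * 100.0

# Hypothetical mean end-of-study tumor volumes (mm^3)
control = 1800.0
assert round(tumor_growth_inhibition(468.0, control)) == 74  # bispecific-like effect
assert round(tumor_growth_inhibition(900.0, control)) == 50  # single-agent-like effect
```

The same helper applies to any of the percentage comparisons quoted for the A549 and SCH models.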
Similarly, the mouse surrogate HD105 bispecific antibody exhibited more potent inhibition of tumor progression (89%) than the anti-VEGF bevacizumab-similar antibody alone (50%) or the anti-mouse Dll4 antibody alone (50%) in SCH human gastric cancer xenograft models (Fig. 3B). In addition, the mouse surrogate HD105 bispecific antibody demonstrated a dose-dependent inhibition of tumor progression in SCH gastric cancer xenograft models (Fig. 3C). These in vivo results indicated that the simultaneous blockade of VEGF and Dll4 by the mouse surrogate HD105 bispecific antibody more potently suppressed tumor progression than each single-target antibody alone. We then further investigated the effects of the mouse surrogate HD105 bispecific antibody on the tumor progression of other human gastric cancers such as MKN-74, SNU-5, and SNU-16. Although no significant effect on MKN-74 or SNU-5 tumor progression was observed (Fig. 3D, E), the mouse surrogate HD105 bispecific antibody inhibited SNU-16 tumor progression by 50%, a level similar to the findings in the SCH xenograft model (Fig. 3F).

Effects on tumor vessels and tumor cells

To investigate the effects of the simultaneous blockade of VEGF and Dll4 on tumor vessels, we performed immunohistochemical analysis of A549 and SCH tumor tissues after treatment with each antibody. The endothelial cells of tumor vessels, the vascular basement membrane, and pericytes were stained for CD31, type IV collagen, and NG2, respectively. After treatment with the anti-VEGF (bevacizumab-similar) antibody or the mouse surrogate HD105 bispecific antibody, CD31-positive tumor vessels were reduced in A549 tumors compared to tumor vessels treated with phosphate-buffered saline (PBS) or the anti-mouse Dll4 antibody alone (Fig. 4A-D).

[Figure 4 legend] Scale bar, 50 µm. The tumor vasculature was stained for CD31 immunoreactivity (green), and the vascular basement membrane was stained for type IV collagen (red). Tumor vessels were decreased after treatment with the anti-VEGF (bevacizumab-similar) antibody or the mouse HD105 bispecific antibody, whereas tumor vessels were markedly increased after treatment with the anti-mouse Dll4 antibody compared to PBS. Higher-resolution images compare the phenotype changes of tumor vessels in detail after PBS (E), anti-VEGF (bevacizumab-similar) antibody (F), anti-mouse Dll4 antibody (G), or mouse HD105 bispecific antibody treatment (H). Scale bar (E-H), 20 µm. The tumor vasculature was stained for CD31 immunoreactivity (red), and the perivascular pericytes were stained for NG2 (green). The nuclei of the tumor tissues were stained with DAPI (4′,6-diamidino-2-phenylindole). Tumor vessels after treatment with the anti-mouse Dll4 antibody were conspicuously thinner and more branched than the tumor vessels of the other groups. The bar graph (I) measuring the tumor vessel density of A549 tumor tissues in xenograft mice confirms the conspicuous increase of tumor vessels after anti-mouse Dll4 antibody treatment, but decreases after anti-VEGF (bevacizumab-similar) antibody, mouse HD105 bispecific antibody, or combination treatment with the anti-mouse Dll4 antibody and the anti-VEGF (bevacizumab-similar) antibody. †, P < 0.05 versus PBS. *, P < 0.05 vs. anti-Dll4 antibody. However, the functional tumor vessels in SCH gastric cancer tissues, assessed by intravenous FITC-labeled Lycopersicon esculentum (tomato) lectin staining, were significantly decreased after treatment with the anti-VEGF (bevacizumab-similar) antibody as well as the anti-mouse Dll4 antibody (J). †, P < 0.05 versus PBS. ‡, P < 0.05 vs. anti-VEGF (bevacizumab-similar) antibody. *, P < 0.05 versus anti-Dll4 antibody. Functional tumor vessels were decreased to a greater extent after treatment with the mouse HD105 bispecific antibody than in the other groups.
After treatment with the anti-mouse surrogate Dll4 antibody, the tumor vessels had many more branches, but a thinner phenotype, compared to the tumor vessels of the other groups in the fluorescence images under high magnification (Fig. 4E-H). Tumor vessel density was increased by 58% after treatment with the anti-mouse surrogate Dll4 antibody, whereas tumor vessel densities were reduced after treatment with the anti-VEGF bevacizumab-similar antibody (35%), the combination of the anti-VEGF (bevacizumab-similar) antibody plus the anti-mouse surrogate Dll4 antibody (21%), or the mouse surrogate HD105 bispecific antibody (28%) (Fig. 4I). We also found a similar level of changes in the tumor vessel densities of SCH tumor tissues (data not shown). We then evaluated the effects of each antibody on functional tumor blood vessels by the intravenous injection of FITC-labeled Lycopersicon esculentum (tomato) lectin prior to the perfusion of the mice used in the xenograft studies. 9,10 In the case of treatment with the anti-VEGF (bevacizumab-similar) antibody, functional tumor vessels in SCH tumors were reduced by 35%, a reduction similar to that in tumor vessel density (Fig. 4I, J). In contrast to the conspicuous increase in tumor vessel density after treatment with the anti-mouse surrogate Dll4 antibody, functional patent tumor vessels were reduced by 36% in SCH tumors (Fig. 4I, J). More importantly, after treatment with the mouse surrogate HD105 bispecific antibody, functional tumor vessels were reduced by 60% in SCH tumors compared to the other groups (Fig. 4I, J). These results suggested that the simultaneous blockade of VEGF and Dll4 led to a significant reduction in both the density and the functionality of tumor vessels. Because these results might be associated with tumor cell status, we assessed the status of apoptotic tumor cells using activated caspase-3 staining with DAPI nuclear staining. As shown in Fig.
5, apoptotic tumor cells stained by the anti-activated caspase-3 antibody were significantly increased (by 2.4-fold) in SCH tumors after treatment with the mouse surrogate HD105 bispecific antibody compared to the PBS control group, the anti-VEGF (bevacizumab-similar) antibody group or the anti-mouse Dll4 antibody group. Overall, the immunohistochemical studies demonstrated that treatment with the mouse surrogate HD105 bispecific antibody regressed tumor vessels, followed by the induction of apoptosis in tumor cells.

[Figure 5 legend] The nuclei of the tumor tissues were stained with DAPI (4′,6-diamidino-2-phenylindole, blue). The higher-resolution image confirms that the activated caspase-3 antibody stained the cytoplasm of the apoptotic cells after mouse HD105 bispecific antibody treatment (E). The bar graph (F) measuring the cell density of apoptotic cells in SCH cancer tissues confirms the significant increase in apoptotic cells after mouse HD105 bispecific antibody treatment. *, P < 0.05 vs. PBS. ‡, P < 0.05 versus anti-VEGF (bevacizumab-similar) antibody. *, P < 0.05 vs. anti-Dll4 antibody.

Stability of HD105 bispecific antibody

The HD105 bispecific antibody was produced by Chinese hamster ovary (CHO) cells and then purified by several chromatographic steps. The purified HD105 bispecific antibody contained less than 5% aggregates by size-exclusion chromatography-high-performance liquid chromatography (SEC-HPLC) analysis, which might be the dimer fraction of the HD105 bispecific antibody (Fig. 6A). The stability of the HD105 bispecific antibody (20 mg/ml) was monitored by SEC-HPLC, SDS-PAGE, and dual-antigen capture ELISA (DACE) analysis after 4 weeks' incubation of the antibody at 4°C, 25°C, or 40°C (Fig. 6B-D). The monomer fraction and the binding affinity of the HD105 bispecific antibody were maintained within the acceptance criteria specified in each analysis after 4 weeks' incubation at or below 25°C.
Discussion

The goal of this study was to evaluate the in vitro and in vivo activity of the HD105 bispecific antibody targeting VEGF and Dll4, which are critical signaling mediators in tumor-induced angiogenesis. Several angiogenesis inhibitors that target the VEGF/VEGFR signaling pathway, including bevacizumab, ramucirumab, sunitinib and sorafenib, have been used for the treatment of cancer patients. [1][2][3] However, recent preclinical studies demonstrated that VEGF blockade alone led to a more invasive and aggressive pattern of tumors invading neighboring normal tissues, possibly due to increased intratumoral hypoxia. [5][6][7][8] Similar invasive and aggressive patterns of tumors were found in some cancer patients as a result of resistance to anti-VEGF therapy. 5-8 Therefore, additional targets of tumor-induced angiogenesis have been sought to overcome such resistance to anti-VEGF therapies. Dll4, a Notch ligand, is another important signaling mediator in tumor angiogenesis. 11 Dll4 inhibitors have shown potent anti-tumor effects in a broad spectrum of cancer xenograft models, including models with intrinsic or acquired resistance to VEGF therapy. 19,20,26 Dll4 inhibitors were also shown to reduce cancer stem cell (CSC) frequency in several preclinical patient-derived cancer xenograft (PDX) models. 21 These emerging reports suggest that the Dll4/Notch signaling pathway might be a promising target for cancer treatment, with the possibility not only of inhibiting tumor angiogenesis, but also of reducing the CSC population. Moreover, combination treatment with VEGF and Dll4 inhibitors has demonstrated a more effective regression of tumor vessels and inhibition of tumor progression in several cancer xenograft models compared to VEGF or Dll4 blockade alone. [14][15][16][17][18][19][20] Based on this scientific evidence, we developed HD105, a bispecific antibody that targets VEGF and Dll4 simultaneously.
This bispecific antibody consists of an anti-VEGF bevacizumab-similar IgG backbone linked with a Dll4-binding single-chain Fv. 22,23 We note that a dual-targeting bispecific antibody against VEGF and Dll4, OMP-305B83, was developed and recently entered an ongoing Phase 1 study sponsored by OncoMed Pharmaceuticals. 27,28 Each arm of OMP-305B83, composed of two distinct heavy chains and a common light chain, binds to VEGF and Dll4, respectively. 27,28 Heterodimer formation of the two distinct heavy chains is promoted by mutations in the CH3 domain of the Fc region. Compared to the one binding site for each antigen in OMP-305B83, the HD105 bispecific antibody has two binding sites for each antigen (Fig. 1A). We successfully expressed and purified the HD105 bispecific antibody from a CHO DG44 cell line. Then, we compared the in vitro biochemical properties and activities of the HD105 bispecific antibody with the properties of each single-antigen-targeting antibody, the anti-VEGF bevacizumab-similar antibody or the anti-Dll4 monoclonal antibody. The binding affinity of HD105 against VEGF was 2-fold weaker than for the anti-VEGF (bevacizumab-similar) antibody, whereas the affinity of HD105 against Dll4 was 10-fold weaker than for the anti-Dll4 antibody. This weaker binding affinity of the bispecific antibody might be due to the different conformation of the HD105 bispecific antibody compared to the general IgG antibody format. Generally, the binding affinity against a target antigen in a single-chain Fv format is much lower than the affinity in a conventional monoclonal IgG antibody format. 24,25 However, the HD105 bispecific antibody inhibited both receptor-ligand interactions, VEGF/VEGFR2 and Dll4/Notch1, with EC50 values comparable to the anti-VEGF (bevacizumab-similar) antibody against VEGF/VEGFR2 and the anti-Dll4 monoclonal antibody against Dll4/Notch1.
These in vitro biochemical activities of the HD105 bispecific antibody led to potent inhibition of both the VEGF/VEGFR2 and Dll4/Notch1 signaling pathways in endothelial cells, of endothelial cell sprouting and proliferation, and of the Dll4-induced Notch1 response in engineered SKOV-3 cells. To address the in vivo efficacy of the HD105 bispecific antibody targeting VEGF and Dll4 in human-origin cancer xenograft models in mice, we used a mouse surrogate HD105 bispecific antibody, because Dll4 is expressed by mouse endothelial cells in implanted human cancers. We found that the mouse surrogate HD105 bispecific antibody more effectively suppressed tumor progression in A549 lung cancer and SCH gastric cancer xenograft models than each single-antigen-targeting antibody, the anti-VEGF (bevacizumab-similar) antibody or the mouse surrogate Dll4-targeting antibody. Based on the results of the immunohistochemical analysis, we also found that many more functional tumor vessels regressed and more tumor cells became apoptotic after the simultaneous blockade of VEGF and Dll4. The greater regression of tumor vessels in response to the HD105 bispecific antibody was consistent with previous findings that Dll4 blockade enhances the anti-angiogenic effects of VEGF blockade after combination treatment. 16,19 These results suggested that the more potent suppression of tumor progression might be correlated with the regression of tumor vessels and the induction of apoptotic tumor cells by the simultaneous blockade of VEGF and Dll4. However, the simultaneous blockade of VEGF and Dll4 by the HD105 bispecific antibody showed different anti-cancer effects in other human gastric cancer xenograft mouse models, including MKN-74, SNU-5, and SNU-16. These different anti-cancer effects of HD105 might be due to different contributions of the VEGF/VEGFR2 or the Dll4/Notch1 signaling pathway when each human gastric cancer cell line is implanted into the mice.
To address this issue, we intend to investigate the expression levels of the proteins involved in the VEGF/VEGFR2 and Dll4/Notch1 signaling pathways using these gastric cancer xenograft tissues and cancer cell lines. We expect that the expression profiles of VEGF/VEGFR2- and Dll4/Notch1-signaling pathway-related proteins will provide important clues for identifying a biomarker to determine which type of gastric cancer patients can be more effectively treated by the simultaneous blockade of VEGF and Dll4 in future clinical trials. In conclusion, we found that the HD105 bispecific antibody targeting VEGF and Dll4 showed activities comparable to an anti-VEGF antibody (bevacizumab-similar) or an anti-Dll4 antibody in biochemical and biological in vitro assays. Furthermore, the HD105 bispecific antibody exhibited more potent in vivo efficacy in inhibiting the tumor progression of A549 and SCH human cancer xenografts than the VEGF or the Dll4 single-targeting antibody. These results suggest that the HD105 bispecific antibody might be a powerful anti-cancer therapeutic antibody for patients resistant to anti-VEGF therapies.

Antibodies and cell culture

An anti-VEGF (bevacizumab-similar) antibody, an anti-Dll4 monoclonal antibody, the HD105 bispecific antibody, an anti-mouse Dll4 monoclonal antibody, and a mouse version of the HD105 bispecific antibody were produced by Hanwha Chemical, Biologics R&D Center (Daejeon, South Korea). The antibody targeting human Dll4 was screened by in vitro library/phage display methods using the OPAL library. 29,30 The anti-Dll4 monoclonal antibody is a fully human antibody that selectively binds to the N-terminal DSL domain of human Dll4. The HD105 bispecific antibody has a bevacizumab-similar IgG backbone, and the bevacizumab-similar C-terminus is linked with a single-chain Fv that binds to human Dll4.
22,23 The anti-mouse Dll4 monoclonal antibody was also screened and generated by in vitro phage display methods using the OPAL library, 29,30 and binds to the DSL domain of mouse Dll4. All antibodies were produced by CHO cells and then purified by several chromatographic steps for each antibody. All antibodies used in this study had over 95% purity.

Determination of binding affinities to VEGF and Dll4

To compare the binding affinities of the HD105 bispecific antibody with the bevacizumab-similar and the anti-Dll4 monoclonal antibody, a Biacore assay and an ELISA were performed as described below. Surface plasmon resonance experiments were performed using a Biacore T200 instrument (GE Healthcare) with HBS-EP buffer (GE Healthcare) at 25°C. Recombinant human VEGF (R&D Systems) and His-tagged recombinant human Dll4 (rhDll4-His, R&D Systems) were immobilized on activated CM5 chip surfaces to ≈100 resonance units at a flow rate of 30 µl/min using acetate buffer (GE Healthcare, pH 5.5). A flow cell without any antigens served as a reference surface. Responses were obtained by injecting various concentrations (6.25-100 nM, series of 2-fold dilutions) of the anti-VEGF (bevacizumab-similar) antibody, the anti-Dll4 monoclonal antibody or the HD105 bispecific antibody over the flow cells at a rate of 30 µl/min for 250 seconds, followed by dissociation in buffer for 600 seconds. The sensor chip surfaces were regenerated by injecting 15 µl of glycine, pH 1.5 (GE Healthcare), at a rate of 30 µl/min for 30 seconds. Kinetic data were analyzed with the Biacore T200 evaluation software version 1.0 and were fitted to a bivalent analyte model to determine the equilibrium dissociation constant KD by taking the ratio of the rate constants (KD = kd/ka).
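The equilibrium dissociation constant is simply the ratio of the fitted dissociation and association rate constants, KD = kd/ka. A minimal sketch of the unit conversion, with hypothetical rate constants chosen only to land near the reported 0.13 nM value for HD105 vs. VEGF:

```python
def dissociation_constant_nm(ka, kd):
    """Equilibrium dissociation constant K_D = kd / ka, returned in nM.
    ka: association rate constant in 1/(M*s); kd: dissociation rate in 1/s."""
    return kd / ka * 1e9  # M -> nM

# Hypothetical (illustrative) rate constants, not fitted values from the paper
ka = 1.0e6   # 1/(M*s)
kd = 1.3e-4  # 1/s
assert abs(dissociation_constant_nm(ka, kd) - 0.13) < 1e-9  # K_D = 0.13 nM
```

A 10-fold weaker affinity (as reported for the Dll4 arm) corresponds to a 10-fold higher kd at the same ka, or any equivalent combination.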
ELISAs were performed in 96-well Nunc-Immuno MaxiSorp plates (Nalgene Nunc International) coated with recombinant human VEGF (50 ng/well) or anti-His tag antibody (200 ng/well, R&D Systems) for 16 hours at 4°C and blocked with PBS containing 1% bovine serum albumin (BSA) for 2 hours at 37°C. For the Dll4-binding assay, His-tagged recombinant human Dll4 (rhDll4-His, 2 mg/ml) was captured on the anti-histidine-coated plates by additional incubation for 1 hour at 37°C. After being washed with PBS-T (PBS containing 0.05% Tween 20) five times, various concentrations of the HD105 bispecific antibody were added to each plate and then incubated for 2 hours at 37°C. After being washed with PBS-T four times, the bound antibodies were detected by incubation with a peroxidase-conjugated anti-human IgG Fab antibody (Pierce, 1:50,000) for 1 hour at 37°C. After additional washing, 100 µl of 3,3′,5,5′-tetramethylbenzidine (TMB) substrate reagent (Sigma) was added and incubated for 6 min. The reaction was stopped by adding 50 µl of 1 N sulfuric acid, and the absorbance (450-650 nm) was measured using a microplate reader (Molecular Devices SpectraMax 190, Tecan). The anti-VEGF antibody (bevacizumab-similar) and the anti-Dll4 monoclonal antibody were used as negative controls for the Dll4-binding ELISA and for the VEGF-binding ELISA, respectively. DACE was performed to confirm whether HD105 binds to both targets simultaneously. The plates were coated with recombinant human VEGF (25 ng/well) for 16 hours at 4°C. After blocking and washing, various concentrations of the HD105 bispecific antibody were mixed with an equal volume of rhDll4-His (2 mg/ml). The mixtures of the bispecific antibody and rhDll4-His were transferred to the VEGF-coated wells and incubated for 2 hours at 37°C. After being washed with PBS-T, the bound rhDll4-His was detected by a peroxidase-conjugated anti-His6 antibody (Roche, 1:1,000) for 1 hour at 37°C.
The detection procedures were the same as for the above-described ELISA method. The anti-VEGF antibody (bevacizumab-similar) or the anti-Dll4 monoclonal antibody was used as a negative control for the DACE.

Inhibition of receptor-ligand bindings

To determine whether the HD105 bispecific antibody inhibits the interactions of VEGF/VEGFR2 and Dll4/Notch1, competitive inhibition ELISAs were performed using 96-well Nunc-Immuno MaxiSorp plates with each ligand and each receptor. For the VEGF/VEGFR2 competition assays, the plates were coated with recombinant human VEGF (15 ng/well) for 16 hours at 4°C. Then, the wells were blocked with PBS containing 1% BSA for 2 hours at 37°C. Increasing concentrations of the anti-VEGF (bevacizumab-similar) antibody or the HD105 bispecific antibody were mixed with equal volumes of His-tagged recombinant human VEGFR2/Fc (1.65 mg/ml, R&D Systems). The mixtures of the antibody and VEGFR2/Fc were then transferred to the VEGF-coated wells and incubated for 2 hours at 37°C. The reactions were developed by adding a peroxidase-conjugated anti-His6 antibody (Roche, 1:1,000) and visualized by adding TMB substrate reagent. The enzyme reactions were stopped after 6 min with 1 N sulfuric acid, and the reaction products were measured by reading the absorbance at 450-650 nm. The EC50 value was obtained from the dose-response curve of the experiments. In the case of the Dll4/Notch1 competition assays, the plates were coated with recombinant human Notch1 (50 ng/well) for 16 hours at 4°C. After blocking with PBS containing 1% BSA for 2 hours at 37°C, increasing concentrations (0.01 nM to 200 nM) of the HD105 bispecific antibody or the anti-Dll4 monoclonal antibody were mixed with equal volumes of rhDll4 (2.4 mg/ml). The mixtures of the antibody and rhDll4 were then transferred to the Notch1-coated wells and incubated for 2 hours at 37°C. The reactions were developed by adding a peroxidase-conjugated anti-His6 antibody (Roche, 1:500) and visualized by adding TMB substrate.
The enzyme reactions were stopped after 6 min with 1 N sulfuric acid, and the reaction products were measured by reading the absorbance at 450-650 nm. The EC50 value was obtained from the dose-response curve of the experiments. In the case of the mouse Dll4/Notch1 competition assays, the mouse HD105 bispecific antibody and the anti-mouse Dll4 antibody were used in the above assay format with recombinant mouse Dll4 and Notch1 (R&D Systems).

Inhibition of VEGF and Dll4 signaling pathways

To determine whether the HD105 bispecific antibody inhibits both the VEGF/VEGFR2 and Dll4/Notch signaling pathways, Western blot analysis was performed using HUVECs. Six-well tissue culture plates were coated with recombinant human Dll4 (1 mg/ml) diluted in bicarbonate buffer for 24 hours at 4°C and then washed with PBS twice. The anti-VEGF (bevacizumab-similar) antibody, the anti-Dll4 monoclonal antibody, the HD105 bispecific antibody, and DBZ (dibenzazepine, a chemical inhibitor of γ-secretase, the downstream enzyme of Notch signaling) were added 20 min prior to the seeding of HUVECs. HUVECs (5 × 10^5 cells) were plated onto the wells in growth medium for 1 day. Then, the cells were serum starved in EBM-2 medium (Lonza) containing 0.25% FBS (Gibco) for 24 hours. Serum-starved HUVECs were stimulated with recombinant human VEGF (100 ng/ml) for 15 min. The cells were lysed in NP-40 lysis buffer with PIC (protease and phosphatase inhibitor cocktails, Pierce), and the proteins were separated on 4% to 12% Bis-Tris gels. Finally, the proteins were blotted with antibodies against cleaved Notch1, phospho-VEGFR2, total VEGFR2, phospho-ERK, total ERK (Cell Signaling) and β-actin (Santa Cruz).

Effects on VEGF- and Dll4-mediated cell responses

To evaluate the in vitro cell-based potency of the HD105 bispecific antibody, VEGF-induced HUVEC sprouting and proliferation and Dll4-induced Notch1-dependent activation of luciferase in SKOV-3 RBP-Jk luciferase cells were assayed.
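The EC50 and IC50 values quoted throughout are read off fitted dose-response curves. The paper does not state its fitting method (typically a four-parameter logistic); as a minimal sketch, the half-maximal concentration can be estimated from titration data by log-linear interpolation between the two doses bracketing 50% of the response range. All numbers below are synthetic, for illustration only.

```python
import math

def ec50_interpolate(doses_nm, responses):
    """Estimate the half-maximal concentration from a monotone titration.
    doses_nm: ascending doses (nM); responses: measured signal at each dose.
    Interpolates in log-dose at the midpoint of the response range."""
    half = 0.5 * (min(responses) + max(responses))
    pairs = list(zip(doses_nm, responses))
    for (d0, r0), (d1, r1) in zip(pairs, pairs[1:]):
        if (r0 - half) * (r1 - half) <= 0:  # this interval brackets half-maximum
            frac = (half - r0) / (r1 - r0)
            return 10 ** (math.log10(d0) + frac * (math.log10(d1) - math.log10(d0)))
    raise ValueError("response range does not bracket the half-maximal level")

# Synthetic inhibition data (fraction of signal blocked) for an antibody titration
doses = [0.1, 0.3, 1.0, 3.0, 10.0]      # nM
resp = [0.05, 0.15, 0.40, 0.75, 0.95]   # inhibition increases with dose
ec50 = ec50_interpolate(doses, resp)
assert 1.0 < ec50 < 3.0  # the crossing lies between the 1 nM and 3 nM points
```

The bracket test `(r0 - half) * (r1 - half) <= 0` works for either increasing or decreasing response curves, so the same helper applies to inhibition (IC50-style) and activation (EC50-style) data.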
The HUVEC sprouting assay was performed as described below. HUVECs (400 cells per bead) were mixed with dextran-coated Cytodex 3 microcarrier beads (Sigma) in 1 ml of EGM-2 medium. Beads with HUVECs were shaken gently every 20 min for 4 hours at 37°C, then transferred to a T25 flask in 5 ml of EGM-2. After incubation for 16 hours, the HUVEC-coated beads were washed three times with 1 ml of EGM-2 and resuspended in 2 mg/ml of fibrinogen (Sigma) to obtain 300 HUVEC-coated beads/ml. The fibrinogen/bead solution (0.5 ml) was added to 0.625 units of thrombin (Sigma) in a 24-well tissue culture plate. The fibrinogen/bead solution was allowed to clot for 5 min at 25°C, followed by incubation at 37°C for 20 min. EGM-2 (1 ml) was added to each well and equilibrated with the fibrin clot for 30 min at 37°C. After removal of the medium and replacement with 1 ml of fresh medium, Detroit 551 cells (2 × 10^4 cells per well) were plated on top of the clot. The medium was changed every 3 days with the appropriate antibodies for 15 days, and sprout formation was imaged using an inverted microscope (Eclipse TS100, Nikon). Sprouting was quantitated by counting the number of sprouts per bead. In the proliferation assays, HUVECs were plated on 100 mm plates and cultured to 80% sub-confluence, then serum starved in starvation medium (EBM-2 + 0.25% FBS) for 24 hours. After serum starvation, the HUVECs were trypsinized and diluted to 6 × 10^4 cells/ml in the starvation medium. A total of 3,000 HUVECs were seeded into each well. The anti-VEGF (bevacizumab-similar) antibody or the HD105 bispecific antibody was serially diluted from 16.5 nM to 0.32 nM in the starvation medium containing human VEGF (100 ng/ml). Then, the HUVECs were immediately treated with 50 µl of the prepared antibodies at each concentration in triplicate. After an additional 72 hours of incubation at 37°C, HUVEC proliferation was detected by adding 10 µl of CCK-8 reagent (Dojindo) followed by incubation for 5 hours at 37°C.
Absorbance at 450 nm was measured using a microplate reader (Molecular Devices). To test whether the HD105 bispecific antibody inhibits Dll4-induced Notch1-dependent activation, SKOV-3 cells (ATCC) were infected with a lentiviral particle expressing an RBP-Jk reporter and a Renilla luciferase reporter (Qiagen) that is responsive to Dll4/Notch signaling. Recombinant human Dll4 (100 ng/well) was coated onto white 96-well plates (Costar) for 24 hours at 4°C. The HD105 bispecific antibody or the anti-Dll4 monoclonal antibody was serially diluted 3-fold from 30 nM to 0.002 nM and then added to each well. Engineered SKOV-3 RBP-Jk-luciferase cells were added to the wells and incubated for 24 hours. Luciferase activity was measured using a One-Glo luciferase assay kit (Promega) and an HTRF Luminescence detector (BMG Labtech).

Animal studies and immunohistochemical analysis

To evaluate the in vivo efficacy of the HD105 bispecific antibody, tumor growth was measured after treatment with the mouse version of the HD105 bispecific antibody, the anti-VEGF (bevacizumab-similar) antibody or the anti-mouse Dll4 monoclonal antibody in A549 human lung and SCH human gastric cancer xenograft models. All procedures for animal studies were approved by the Institutional Animal Care and Use Committee. BALB/c athymic nude mice (8-week-old females, Charles River Japan) were injected subcutaneously in the flank area with A549 or SCH cancer cells (1 × 10^7 cells/head). When the tumors had grown to an average volume of 150-200 mm^3, the mice were divided into homogeneous groups (6-7 mice/group) and treated with the mouse version of the HD105 bispecific antibody (3.25 mg/kg; equimolar to the single-antigen-targeting antibodies), the anti-VEGF (bevacizumab-similar) antibody or the anti-mouse Dll4 monoclonal antibody (2.5 mg/kg) twice per week (A549) or once a week (SCH) by intraperitoneal injection.
Tumor size was measured twice per week using a caliper and then calculated by the formula (length [mm]) × (width [mm])² × 0.5. When the average tumor size of the control group reached 1,500–2,000 mm³, the treatment was stopped and the mice were sacrificed to measure the tumor weight. Some mice were perfused with 4% paraformaldehyde in PBS for the further analysis of tumors. 9,10 A dose-dependent response study of the HD105 bispecific antibody (0.361, 1.083, 3.25, and 6.5 mg/kg; once per week, intraperitoneal injection) was performed in the SCH gastric cancer xenograft model. The in vivo efficacy of the HD105 bispecific antibody (6.5 mg/kg; once a week, intraperitoneal injection) on the tumor progression of other human gastric cancers was confirmed using MKN-74, SNU-5 and SNU-16 cancer xenograft models.
Mouse PK studies
To compare the PK profiles and parameters of the HD105 bispecific antibody with the anti-Dll4 monoclonal antibody, the HD105 bispecific antibody (3.25 mg/kg) and anti-Dll4 monoclonal antibody (2.5 mg/kg) were intraperitoneally injected into BALB/c mice (n = 5). Mouse serum was harvested at different time points (1, 2, 6, 24, 48, 96, 168, 216, 264, and 336 hours), and then the concentration of each antibody was measured by the Dll4-binding ELISA method. The PK parameters, including Tmax, Cmax, AUCall, and T1/2 of each antibody, were determined based on a non-compartmental model.
Stability of antibodies
To determine the stability of the HD105 bispecific antibody, HD105 (20 mg/ml) in histidine buffer including NaCl, arginine and trehalose was stored at 4 °C, 25 °C, or 40 °C for 4 weeks, and then was analyzed by SEC-HPLC, SDS-PAGE (silver staining kit, Elpis-Biotech), and DACE after 1 week, 2 weeks, and 4 weeks.
Statistics
Values are expressed as the means ± SE. The significance of differences between group means was assessed by ANOVA followed by the Bonferroni test for multiple comparisons (P < 0.05 values were considered significant).
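The caliper-based volume formula and the non-compartmental AUC step above can be sketched in code. This is an illustrative sketch, not the study's analysis script: only the volume formula (length × width² × 0.5) and the use of serum concentration-time points come from the text, and a linear trapezoidal rule is assumed as one common way to compute AUC. All example numbers are made up.

```python
# Hypothetical helpers illustrating the two calculations described in the text.

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Volume = length x width^2 x 0.5, as stated in the methods."""
    return length_mm * width_mm ** 2 * 0.5

def auc_trapezoid(times_h, concs):
    """Linear trapezoidal AUC over sampled concentration-time points
    (one standard step of a non-compartmental analysis)."""
    return sum(
        (t2 - t1) * (c1 + c2) / 2.0
        for (t1, c1), (t2, c2) in zip(zip(times_h, concs),
                                      zip(times_h[1:], concs[1:]))
    )

print(tumor_volume(10.0, 8.0))                     # 10 x 8^2 x 0.5 = 320.0 mm^3
print(auc_trapezoid([0, 2, 6], [0.0, 4.0, 2.0]))   # 4.0 + 12.0 = 16.0
```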
A case of air pollution-induced Valsalva retinopathy
This is a case of Valsalva retinopathy during the season of annual transboundary haze pollution in Sarawak. A 22-year-old man with no known medical illness developed sudden onset of painless visual acuity loss preceded by persistent cough. Left eye fundus showed dense preretinal haemorrhage covering the optic disc and extending inferiorly, with breakthrough vitreous haemorrhage. The patient underwent pars plana vitrectomy, endolaser, and fluid-gas exchange in view of persistent dense vitreous haemorrhage after a month of conservative management. In conclusion, pars plana vitrectomy can be considered a safe and effective treatment option for patients with Valsalva retinopathy developing extensive premacular haemorrhage.
Introduction
With the increase in global demand for agricultural and urban spaces, slash-and-burn deforestation has become a popular method to clear forests for cultivable land. In September 2019, Sarawak suffered a rather critical transboundary haze that reached a hazardous level of 402 on the Air Pollutant Index as a result of rampant forest fires. Haze has been known to have a significant impact on respiratory and ocular health. Valsalva retinopathy is a rare condition that is scarcely reported during the haze season. Valsalva retinopathy is a premacular retinal haemorrhage induced by increased pressure on the retinal venous system caused by sudden increases in intrathoracic pressure. The Valsalva manoeuvre, a forcible exhalation effort against a closed glottis causing a sudden rise in intrathoracic pressure, was first described by the 17th-century physician Antonio Maria Valsalva. There are no valves in the venous system rostral to the heart, hence a sudden surge of reflux venous pressure occurs in the head and neck region. Common causes of Valsalva retinopathy include vomiting, weightlifting, vigorous sexual activity, and coughing. 
Here, we report a case of Valsalva retinopathy with dense preretinal haemorrhage induced by air pollution.
Case presentation
A 22-year-old working-class man with no known medical illness presented with painless sudden onset of acute visual loss in the left eye preceded by a bout of cough. Visual acuity in the left eye was hand movement, the anterior segment was unremarkable, and intraocular pressure was 14 mmHg. Fundus examination showed dense preretinal haemorrhage covering the optic disc extending inferiorly with breakthrough vitreous haemorrhage (Fig. 1). B-scan showed subhyaloid blood tracking inferiorly and anteriorly to the ora serrata. After 1 month, visual acuity remained hand movement and the fundus revealed extensive vitreous haemorrhage (Fig. 2). Full blood count/peripheral blood film, coagulation profile, blood urea, serum electrolytes, erythrocyte sedimentation rate, and autoimmune workup were done and yielded normal results. Pars plana vitrectomy, endolaser, and fluid-gas exchange were performed in view of persistent vitreous haemorrhage. Postoperative follow-up at 2 months revealed best-corrected visual acuity of 6/7.5. Fundus examination revealed minimal old vitreous haemorrhage with a flat posterior pole (Fig. 3). However, the patient developed early cataract as a complication of the surgery.
Discussion
Vitreous haemorrhage is a common sign of various ocular diseases. It is known to cause permanent visual damage such as haemosiderosis bulbi, proliferative vitreoretinopathy, and ghost cell glaucoma. Hence, our management aim was to restore vision and expedite the patient's recovery with minimal complications. Many procedures have been reported, such as puncturing the posterior hyaloid face with Nd:YAG laser,1 pneumatic displacement of the haemorrhage with an intravitreal injection of gas with or without recombinant tissue plasminogen activator,2 and pars plana vitrectomy.3 García et al. 
3 reported six cases of Valsalva retinopathy in which five required pars plana vitrectomy after 3–4 weeks of observation, whereas one patient recovered without intervention as the haemorrhage was minimal, with a diameter of one disc. One of the cases that underwent pars plana vitrectomy developed cataract postoperatively and required cataract surgery. Successful Nd:YAG laser hyaloidotomy for Valsalva premacular haemorrhage with a size of more than three disc diameters and enough haemorrhage pocket depth has also been reported by Mehdi et al.4 However, the challenge of performing Nd:YAG laser hyaloidotomy is the proximity to the retinal surface, which may cause macular hole,5 retinal detachment, and epiretinal membrane formation. Our patient was treated with pars plana vitrectomy as his vitreous and preretinal haemorrhage was noted to be more than one disc diameter and did not resolve after 1 month. However, he developed early cataract postoperatively, which may require cataract surgery later.
Conclusion
Pars plana vitrectomy is a more effective and safer treatment for extensive premacular haemorrhage compared to other treatment modalities. However, known surgery-associated complications such as cataract may develop, as reported in our case.
Declarations
Ethics approval and consent to participate: Not required.
Consent for publication: The patient provided informed consent for the publication of this case report.
Competing interests: None.
Funding: None.
Optimizing the Dosage Regimen of Micafungin against Candida spp. in HIV-Positive Patients with EC Based on Monte Carlo Simulation
The objective of our study was to explore the antifungal efficacy of various micafungin dosage regimens against Candida spp. in HIV-positive patients with EC. Based on the pharmacokinetic/pharmacodynamic parameters of micafungin in HIV-positive patients and the MIC distributions of micafungin against Candida spp. in published studies, the dosage regimens of micafungin were 50, 100 and 150 mg QD iv. Monte Carlo simulation was used to analyse the probability of target attainment and the cumulative fraction of response. The results showed that micafungin has a good antifungal effect in treating HIV-positive patients with EC when the pathogenic fungi are Candida albicans, Candida glabrata or Candida tropicalis, at dosages of 100 mg QD and 150 mg QD.
Introduction
In recent years, invasive fungal infections (IFIs) have been a significant factor in the morbidity and mortality of inpatients with invasive infections, especially in patients with immunodeficiency [1] [2]. The primary pathogenic fungi for IFIs are Candida albicans, Candida glabrata, Candida krusei, Candida parapsilosis and Candida tropicalis [1]. Esophageal candidiasis (EC) is a common and severe complication; the incidence is 15%-20%, and micafungin has shown great efficacy and tolerability in treating EC in HIV-positive patients [3]. Micafungin is an echinocandin antifungal agent, which exerts its antifungal effect by selectively inhibiting the synthase of β-(1,3)-D-glucan in the fungal cell wall [4] [5]. Micafungin mainly binds to albumin in vivo; the protein binding rate in plasma is 99% [6]. In vivo, micafungin is metabolized by the liver and excreted through the biliary tract, mainly in faeces [7]. Studies have shown that micafungin has antifungal activity against Candida spp. both in vitro and in vivo, even against fluconazole-resistant fungi [8] [9] [10] [11]. 
Monte Carlo simulation (MCS) is a useful tool for dose selection in clinical treatment, which is sufficient to evaluate the effect of antifungal drugs and minimise the possibility of antifungal drug resistance. MCS has been used to assess the dosing regimens of micafungin in morbidly obese patients [12], critically ill patients with invasive fungal infection [13], critically burned patients with abdominal disease [14] and children [15]. In this study, MCS was used to optimise the micafungin dosage regimen for EC in HIV-positive patients, to provide a basis for clinical application.
Pharmacokinetic Parameters
Pharmacokinetic parameters for micafungin in HIV-positive patients with EC were taken from the literature [3]; the PK data for intravenous micafungin in HIV-positive patients are shown in Table 1. Micafungin is a concentration-dependent antifungal drug with long-term after-effects, and its antifungal effect is measured by fAUC24h/MIC; the PD target for Candida spp. is fAUC24h/MIC = 10 [16]. The free drug fraction f is 1%.
The Minimum Inhibitory Concentration (MIC) Data
The MIC distributions of Candida spp. are from the European Committee on Antimicrobial Susceptibility Testing (EUCAST) (http://www.eucast.org). The data are shown in Table 2.
Monte Carlo Simulation
The probability of target attainment (PTA) is the target value of the pharmacokinetic/pharmacodynamic (PK/PD) index and was calculated for each dosage regimen and MIC.
PTA Values
PTA values of micafungin against Candida spp. in HIV-positive patients under different MIC distributions are shown in Figure 1. The results showed that at a dosage of 50 mg, the five Candida spp. can reach the target when the MIC is less than 0.032 μg/mL. At dosages of 100 mg and 150 mg, the five Candida spp. can attain the target when the MIC is less than 0.064 μg/mL.
CFR
The CFR results are shown in Table 3.
Discussion
Micafungin is one of the three currently available echinocandins for the treatment of candidiasis, and the FDA recommends a dose of 100 mg QD for adult candidiasis [17]. 
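As an illustration of the PTA/CFR machinery described above, here is a minimal Monte Carlo sketch. Only the PD target (fAUC24h/MIC = 10) and the 1% free fraction come from the text; the AUC24 mean/SD, the normal distribution assumption, and the MIC frequencies below are invented placeholders standing in for Table 1 and Table 2, not the study's data.

```python
# Toy Monte Carlo PTA/CFR calculation for a concentration-dependent antifungal.
import random

F_FREE = 0.01      # free drug fraction (1%, per the 99% protein binding)
PD_TARGET = 10.0   # fAUC24h/MIC target for Candida spp.

def simulate_pta(auc_mean, auc_sd, mic, n=10_000, seed=0):
    """PTA = fraction of simulated patients with f*AUC24/MIC >= target."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if F_FREE * rng.gauss(auc_mean, auc_sd) / mic >= PD_TARGET
    )
    return hits / n

def cfr(auc_mean, auc_sd, mic_freq):
    """CFR = sum over the MIC distribution of PTA(MIC) * frequency(MIC)."""
    return sum(simulate_pta(auc_mean, auc_sd, mic) * f
               for mic, f in mic_freq.items())

# hypothetical AUC24 ~ N(100, 20) mg*h/L and a toy MIC distribution (ug/mL)
mic_dist = {0.016: 0.6, 0.032: 0.3, 0.25: 0.1}
print(cfr(100.0, 20.0, mic_dist))   # ~0.9 with these toy inputs
```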
In HIV-positive patients with confirmed EC, there was no effect of race or gender on the pharmacokinetics of micafungin [3]. The pharmacokinetics of micafungin in treating patients with EC were linear and predictable, similar to published studies in healthy adults [3]. In our study, the antifungal effect of micafungin against different Candida spp. in HIV-positive patients with EC varied considerably. For Candida krusei and Candida parapsilosis, micafungin had no antifungal effect, which is similar to published studies in intensive care unit patients [18]. Our study also showed that micafungin has a good antifungal effect against Candida albicans, Candida glabrata and Candida tropicalis at dosages greater than or equal to 100 mg, which is in line with the FDA recommendation [17]. In this study, MCS was used to carry out hypothesis analysis based on certain PK and strain data, which was beneficial for optimising the type and dosage of micafungin against Candida spp. in HIV-positive patients with EC. However, the results of this paper also have some limitations: for example, the MIC distribution of micafungin comes from some regions but is not worldwide, so it cannot reflect future trends and changes in these fungi.
Conclusion
In summary, MCS is a simple, safe method to optimise dosage regimens according to the characteristics of fungi and PK/PD parameters. When the pathogenic fungi are Candida albicans, Candida glabrata or Candida tropicalis, micafungin has a good antifungal effect in HIV-positive patients with EC; when the pathogenic fungi are Candida krusei or Candida parapsilosis, other antifungal treatments are needed.
Funding
This study was funded by the Study and Development Fund for Sciences and Technology in Chengde City (No. 201701A086).
High strain rate characterization of shock absorbing materials for landmine protection concepts
Numerical modelling of footwear to protect against anti-personnel landmines requires dynamic material properties in the appropriate strain rate regime to accurately simulate material response. Several materials (foamed metals, honeycombs and polymers) are used in existing protective boots; however, published data at high strain rates is limited. Dynamic testing of several materials was performed using Split Hopkinson Pressure Bars (SHPB) of various sizes and materials. The data obtained from these tests has been incorporated into material models to predict the initial stress wave propagation through the materials. Recommendations for the numerical modeling of these materials have also been included.
Introduction
An estimated 110 million landmines are hidden around the world [1], interfering with agriculture as well as industrial development. In 1997, the Ottawa treaty [2] was signed to ban landmines around the world, and many nations have responded to this problem with extensive de-mining efforts to decontaminate mine-affected areas. 
The design of footwear to protect against landmines is extremely complex and must take into account both the shock waves from the blast and the rapid expansion of the explosive gases immediately below the foot. Experimental testing is limited since it is extremely expensive and requires significant expertise. A method that allows many concepts to be inexpensively compared and ranked prior to the experimental trials is thus desirable. With such a method, many potential concepts may be evaluated and only those that show the most promise would then be tested experimentally. Numerical modelling allows for the evaluation of many concepts at a lower cost. To ensure that numerical models will be realistic, several simpler models have been analysed and validated with experimental results [3][4][5]. These models found that the explosive gases expand at speeds as high as 1800 m/s. Since the footwear must be thin enough for normal motion, the peak strain rates in the material will be very high, and thus material properties obtained at these strain rates are desired.
This paper discusses the testing and modeling of various materials that are used in existing protective boots using the Split Hopkinson Bar method. These materials include closed-cell polyethylene foams, open-cell aluminum foams and honeycombs, and polyurethane rubber. 
Material models - shock behaviour
The LS-DYNA hydrocode contains many material models representing a variety of materials [6]. Ideally, both shock and deviatoric behaviour should be incorporated into the material models. Shock may be accounted for by the use of an equation of state, and there are a few models that allow this equation of state to be added. The simplest of these models include the null material model, which assumes that the material is a viscous fluid and neglects deviatoric properties, and the elastic-plastic-hydrodynamic model, which assumes the material follows a straight elastic curve to yielding and then may have some strain hardening or softening in the plastic region. Neither accounts for extensive deformation, however, and a deviatoric model should be used to determine the long-term deformation of any material. The linear polynomial equation of state [6] may be used with the data tabulated in the LASL shock handbook [7] to describe the shock behaviour.
Material models - deviatoric behaviour
This paper contains stress-strain curves that may be incorporated into deviatoric material models. The following models are recommended for the materials discussed in this report.
Honeycombs and brittle foams
The honeycomb model [6] uses the solid material properties, densification strain and orthotropic load curves in compression to determine the deviatoric behaviour of the material. Isotropic materials such as brittle foams may also use this material model, and only two load curves (uniaxial compression and shear) need to be defined in this case.
Nearly incompressible rubber materials (polyurethane)
The Mooney-Rivlin rubber model is a two-parameter model for nearly incompressible rubber materials [6]. The input data includes density, Poisson's ratio and a load curve (either force-displacement or stress-strain), from which it calculates the two parameters A and B. 
This model, with an appropriate strain-rate curve at the expected strain rates, has been used for modeling polyurethane rubber response to blast loading with some success [3][4][5].
Polymer foams
The simplest foam-specific model is the crushable foam material model [6]. The inputs for this model are the initial density of the foam and a stress-strain curve at a given strain rate.
Split Hopkinson bars for testing low-impedance materials
The Hopkinson Bar is the most commonly used method for testing materials in the strain rate regime of 10²-10⁴ s⁻¹. Only compressive split Hopkinson Bar (CSHB) tests were used.
The CSHB consists of a striker bar, an incident bar and a transmitter bar, shown in Fig. 1. The striker bar is propelled from a gas gun at a desired velocity and impacts the incident bar. A compressive stress wave is imparted into the incident bar and propagates uninterrupted until it reaches the sample. The wave is partially transmitted through the sample into the transmitter bar and the rest is reflected into the incident bar. Strain gauges record the strain-time history of the incident, reflected and transmitted waves. Classical analysis of a split Hopkinson Bar requires the following assumptions [8]: (i) the bars must remain elastic throughout the test; (ii) no attenuation or dispersion of the stress waves occurs; (iii) the pulse is uniform over the cross-section of the bar; and (iv) the specimen remains in equilibrium throughout the test. These assumptions must be valid in order to apply the classical Hopkinson bar equations (Eqs (1)-(4)) to determine the time histories of stress, strain and strain rate within the specimen, where C_b is the speed of sound inside the bars, E_b is the modulus of elasticity of the bars, ρ_b is the density of the bars, A_b and A_s are the cross-sectional areas of the bars and specimen, L_s is the length of the specimen, and ε_r and ε_t are the reflected and transmitted strains.
Experimental methods
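The classical relations referred to above can be sketched using their standard one-wave Kolsky forms (a textbook reconstruction in the symbols defined in the text, not necessarily the paper's exact Eqs (1)-(4)):

```python
# Standard one-wave Kolsky (split Hopkinson bar) relations:
#   strain_rate(t) = -2 * C_b * eps_r(t) / L_s
#   strain(t)      = time integral of strain_rate
#   stress(t)      = E_b * (A_b / A_s) * eps_t(t)
# Signs follow the convention that a negative (compressive) reflected pulse
# gives a positive compressive strain rate in the specimen.

def specimen_histories(eps_r, eps_t, dt, c_b, e_b, a_b, a_s, l_s):
    """Return (strain_rate, strain, stress) sampled histories for the specimen."""
    strain_rate = [-2.0 * c_b * er / l_s for er in eps_r]
    strain, acc = [], 0.0
    for rate in strain_rate:          # simple rectangle-rule time integration
        acc += rate * dt
        strain.append(acc)
    stress = [e_b * (a_b / a_s) * et for et in eps_t]
    return strain_rate, strain, stress

# e.g. an aluminum bar: C_b ~ 5000 m/s, E_b ~ 70 GPa, equal bar/specimen areas
rates, strains, stresses = specimen_histories(
    [-0.001] * 10, [0.0005] * 10, 1e-6, 5000.0, 70e9, 1.0, 1.0, 0.025)
print(rates[0], strains[-1], stresses[0])   # 400 /s, ~0.004, 35 MPa
```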
It should be noted that the real behaviour of the bars is more complex than assumed. Friction must be overcome before a specimen can expand or contract radially. Davies [8] suggests certain criteria for specimen dimensions to minimize this effect. Aspect ratios are very important for ensuring that the friction effects at the ends are minimal [9], and were kept at L/D = 1 for the majority of the materials tested. In order to measure higher strain rates, the aspect ratio of the aluminum foam and honeycomb specimens was as low as L/D = 0.25. This was acceptable because these materials exhibit very low Poisson's ratios (approximately 0.05 [10]) and thus do not deform significantly in the radial direction, so the frictional effects were much less significant than for the other materials.
Polymeric bars require a different approach. Bacon [11] developed an experimental method to correct for the attenuation and dispersion of the wave as it propagates along the length of the bar. The method is based on performing a free-end test (where the end of the bar is not restricted) while measuring the incident and reflected wave. Since the bar is allowed to move freely at the interface end, the entire wave must be reflected. By comparing the difference between the incident and reflected wave, a measure of the attenuation and dispersion is possible. This analysis leads to the application of a propagation coefficient (which is a function of frequency), comprised of an attenuation coefficient and a wave speed. The impractical nature of performing such calculations in the time domain leads to the application of spectral methods for determining the propagation coefficients. 
Once the free-end test has been performed on both the incident and transmitter bars, the application of the propagation coefficient allows the velocity and force time histories at the end of the bars to be determined. The following fundamental relations can then determine the strain rate, strain and stress within the sample [12], where L_s and A_s are as defined before, V1 and V2 are the velocities of the ends of the incident and transmitter bars at the sample interface, and F2 is the force in the transmitter bar.
Low strain rate testing of the materials was done using an Instron 4206 machine with steel platens and no lubricant at the ends of the samples. All samples were crushed beyond 90% engineering strain. The sample sizes were determined according to the material stiffness, since the Instron could not accurately test at loads less than 4500 N. Table 1 shows the sample sizes used for each material. It would have been preferable to test all materials with samples of a 1:1 aspect ratio; however, the limitations of minimum load and available material thickness prevented this. High strain rate testing was done using compressive split Hopkinson bars. Three sets of bars were used to avoid impedance mismatch with the materials. The bars used in these tests included 25.4 mm diameter aluminum, 25.4 mm acrylic and 25.4 mm low-density polyethylene (LDPE) bars. Both the incident and transmitter bars were 96" (2.4384 m) long for all materials. The striker bars had various lengths: the aluminum striker was 24" (605 mm), the acrylic striker was 28" (711 mm) and the LDPE striker was 6" (150 mm). Table 2 shows the specimen size and bar material(s) for each sample material tested. 
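The interface relations for the viscoelastic bars described above (the usual forms behind Eqs (5)-(7): strain rate from the velocity difference across the specimen, stress from the transmitter-bar force) can be sketched as follows. This is an illustrative reconstruction under assumed sign conventions, not the paper's own expressions or code:

```python
# Specimen response from bar-end quantities after the free-end correction:
#   strain_rate(t) = (V1(t) - V2(t)) / L_s   (velocity difference across specimen)
#   strain(t)      = time integral of strain_rate
#   stress(t)      = F2(t) / A_s             (transmitter-bar force over specimen area)

def specimen_from_interface(v1, v2, f2, dt, l_s, a_s):
    """Return (strain_rate, strain, stress) histories from interface data."""
    strain_rate = [(va - vb) / l_s for va, vb in zip(v1, v2)]
    strain, acc = [], 0.0
    for rate in strain_rate:          # rectangle-rule time integration
        acc += rate * dt
        strain.append(acc)
    stress = [f / a_s for f in f2]
    return strain_rate, strain, stress
```

With constant interface velocities of 2.0 and 1.0 m/s across a 10 mm specimen, this yields a strain rate of 100 /s, as expected from the velocity difference.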
The choice of bar is critical to ensure that the impedance is matched as closely as possible between the specimen and the bars. When tested on the acrylic bars, the impedance mismatch between the foam and the bars resulted in a transmitted signal that was too weak to record. With the LDPE bars, however, there was a negligible mismatch, since both materials were low-density polyethylene, and thus the transmitted signal was much stronger.
Polyethylene foams
Two polyethylene foams were tested at a range of strain rates: HL34 with a density of 34 kg/m³ and LD24 with a density of 24 kg/m³. Strain rates from 0.03 to 1500/s were achieved using the Instron 4206 and 25.4 mm diameter low-density polyethylene split Hopkinson bars, respectively. Data obtained from these tests are shown in Figs 2 and 3. At low strain rates, both polymers had similar behaviour, though the LD24 had a lower yield strength and modulus (as expected for a foam of lower density). After the initial elastic loading, there was a significant amount of crushing at nearly constant stress until the stress increased dramatically at densification. The HL34 foam had a higher yield stress and exhibited more energy dissipation than the LD24 foam due to its higher density.
These foams show significant strain-rate dependence, in line with the observations of other researchers [13,14]. Gibson and Ashby [13] determined that the compression of the fluid within the cells (air, in this case) is very rate-dependent and that this is the governing behaviour for foams of this density.
Open-celled foamed aluminum
Foamed aluminum is available in two forms: open-celled, with lamellae rather than solid cell walls, and closed-cell foams. Open-celled foams are widely used in crumple zones for cars [15] and other energy-absorption applications due to their crush strength and negligible strain-rate sensitivity at moderate strain rates (< 1200 s⁻¹). 
Several varieties of Duocel open-celled aluminum foams were tested: 7% relative density with 10, 20 and 40 pores per inch, and 11% relative density with 10 pores per inch. Low-rate tests were conducted at a strain rate of approximately 0.01/s. Figure 4 compares these results. Note that, at this rate, the relative density has a much larger effect on the foam behaviour than the size of the pores. Since there is much more metal in a higher-density foam, it logically follows that the area of metal in a given cross-section will be larger (regardless of pore size) and thus the material will be stronger as density increases. Other researchers have found that this is generally true for aluminum foams [13], and specifically for Duocel open-celled foams [15].
In order to properly represent the material, sample sizes were chosen such that each dimension included at least 10 cells. An exception was made for the 10PPI specimens, which had only 5 cells in each direction due to the maximum allowable specimen size for the bars. Due to these limitations, strain rates were limited to 1500/s for the 10PPI foams and 2200/s for the 40PPI foams. The results suggest that this was an acceptable specimen size, since the stress-strain curves obtained were consistent with the trends observed by Danneman [16] for 40PPI Duocel aluminum foam and Deshpande [17] for 20PPI Duocel aluminum foam. Dynamic tests of these materials show very similar results at strain rates lower than 1500 s⁻¹ for the 11% relative density, 10PPI foam (Fig. 5) and lower than 1300 s⁻¹ for the 7% relative density, 40PPI foam (Fig. 6). 
Danneman [16] suggested that strain rate effects are negligible up to a 1200/s strain rate for the 40PPI foam, and this is confirmed by the data obtained. At strain rates higher than 1300 s⁻¹, a strain rate dependence is seen at strains above 50%, though this may be related to the specimen size. Further high-strain-rate tests should be done with these materials using bars capable of higher firing pressures so that the same specimen sizes may be used for all tests. A definite strain rate effect is seen at strain rates exceeding 1500 s⁻¹ for the 40PPI foam. Deshpande [17] suggested that there was little strain rate sensitivity until 5000/s, but that conclusion was based on a 20PPI foam. It is possible that the reduced pore size in the 40PPI foam increases the strain rate effect because the cell walls are much closer together; thus the foam behaves more like solid aluminum in this regime.
Honeycombed aluminum
Honeycombed materials are used in a variety of applications, most often as energy-absorbing materials. Their strength is much higher out-of-plane, and they crush in three modes of deformation [13]: linear elastic loading (1), plastic buckling (2), and crushing (3), as shown in Figs 7 and 8. Two types of honeycomb were tested: ACG (ACG-1/4-4.8) and CR-III (CRIII-1/8-5052-.006N-2.21-STD), both manufactured by Hexcel. These graphs clearly show the three modes of deformation listed above. Gibson and Ashby [13] calculate the plastic buckling stress (σ_pl) of honeycombs out-of-plane from the yield stress of the solid material (S_ys), wall thickness (t) and cell wall length (l). Equation (8) applies to honeycomb with single-thickness cell walls (such as ACG) and Eq. 
(9) to honeycombs where two of the six walls are double-thickness (CR-III). The plastic buckling stresses predicted using these equations are 31 MPa for the CR-III (Eq. (9)) and 2.4 MPa for the ACG (Eq. (8)). The experimentally observed values, 35 MPa and 2.3 MPa for CR-III and ACG respectively, compare reasonably well with these predictions.
Dynamic testing was done using a 25.4 mm diameter aluminum split Hopkinson Bar. The specimens used for this testing consisted of a 7-cell repeated group of the honeycomb, shown in Fig. 9. This specimen shape does not account for the extra energy required to break the adjoining cell walls; thus the data collected from these specimens underestimate the actual strength of the material. The stress-strain data obtained are shown in Figs 10 and 11. The CR-III honeycomb shows a reduced strain rate effect, most likely because it tends to fail by shearing at the glued interfaces between the layers, which is not strain rate dependent. It is recommended that larger specimens of the CR-III be tested dynamically to determine whether the mode of failure changes with the number of cells tested.
Polyurethane (Elastomer)
Polyurethane was tested using the 25.4 mm diameter acrylic and 25.4 mm aluminum Hopkinson bars. The agreement between both bars was excellent; however, only data from the acrylic bar is included in the graphs, since the recorded signal was much less noisy. The likely reason for this is that the signal in the aluminum transmitter bar was much weaker than that in the acrylic transmitter bar, and thus the noise was much larger with respect to the signal. Figure 12 shows the strain-rate sensitivity at all strain rates tested, with the behaviour at low strains enlarged. 
A clear dependence on strain rate is seen at low strains, similar to the results obtained by Yang et al. [18]. At higher strains, however, this dependence is not seen as clearly, and the curves for strain rates of 1000-3000 s⁻¹ do not show a direct dependence on strain rate at high strains.
Conclusions
The material testing presented here included high and low strain rate testing of polymer foams, foamed aluminum, aluminum honeycomb, and polyurethane rubber. The polymer foams were found to be very strain-rate dependent, comparable with results from Gibson and Ashby and Zhao [13,14]. The foamed aluminum tests showed two things: first, that the density has a much more significant effect than cell density at all strain rates, and second, that there is little strain rate sensitivity in the 10⁻² to 10³ strain rate regime. This observation has been noted by other researchers [16,17]. The aluminum honeycomb was found to be strain-rate dependent, though there was little published data to compare these results with. The polyurethane rubber showed a clear strain-rate dependence at low strains; however, at higher strains, the strain rate effect in the 1000-3000 s⁻¹ regime was not clear.
The deviatoric models discussed in the literature review should be used with the high strain rate data provided in this paper to model the long-term deviatoric behaviour of the materials.
Table 1. Sample sizes and maximum loading for low-rate material tests
Fig. 2. Stress-strain behaviour of HL34 foam.
Table 2. Sample sizes, Hopkinson bars used, and achieved strain rates for high-rate material tests
Multivalent Interactions of Human Primary Amine Oxidase with the V and C22 Domains of Sialic Acid-Binding Immunoglobulin-Like Lectin-9 Regulate Its Binding and Amine Oxidase Activity

Sialic acid-binding immunoglobulin-like lectin-9 (Siglec-9) on the leukocyte surface is a counter-receptor for the endothelial cell surface adhesin, human primary amine oxidase (hAOC3), a target protein for anti-inflammatory agents. This interaction can be used to detect inflammation and cancer in vivo, since labeled peptides derived from the second C2 domain (C22) of Siglec-9 specifically bind to the inflammation-inducible hAOC3. As limited knowledge of the interaction between Siglec-9 and hAOC3 has hampered both hAOC3-targeted drug design and in vivo imaging applications, we have now produced and purified the extracellular region of Siglec-9 (Siglec-9-EC), consisting of the V, C21 and C22 domains, modeled its 3D structure and characterized the hAOC3–Siglec-9 interactions using biophysical methods and activity/inhibition assays. Our results assign individual, previously unknown roles to the V and C22 domains. The V domain is responsible for the unusually tight Siglec-9–hAOC3 interaction, whereas the intact C22 domain of Siglec-9 is required for modulating the enzymatic activity of hAOC3, which is crucial for hAOC3-mediated leukocyte trafficking. By characterizing the Siglec-9-EC mutants, we could conclude that R120 in the V domain likely interacts with the terminal sialic acids of hAOC3-attached glycans, whereas residues R284 and R290 in C22 are involved in the interactions with the active site channel of hAOC3. Furthermore, the C22 domain binding enhances the enzymatic activity of hAOC3 even when the sialic acid-binding capacity of the V domain of Siglec-9 is abolished by the R120S mutation.
To conclude, our results prove that the V and C22 domains of Siglec-9-EC interact with hAOC3 in a multifaceted and unique way, forming glycan-mediated and direct protein-protein interactions, respectively. The reported results on the mechanism of the Siglec-9–hAOC3 interaction are valuable for the development of hAOC3-targeted therapeutics and diagnostic tools.

Introduction

Sialic acid-binding immunoglobulin-like lectins (Siglecs) are a family of proteins expressed on different haemopoietic and immune system cells [1,2]. Based on their homology to CD33/Siglec-3, the CD33-related Siglecs form a subgroup of the Siglec family. In addition to Siglec-3, the subgroup includes Siglec-5 to -11, -14 and -16 [3], which are able to bind a variety of sialyl sugars and can regulate the immune response [3,4]. Siglec-9 is an immunosuppressive molecule expressed mainly on neutrophils, monocytes and macrophages, as well as dendritic and NK cells [2,3]. It consists of three extracellular immunoglobulin-like domains, a V-set domain followed by two C2-set domains, and a short cytosolic tail including the Immunoreceptor Tyrosine-based Inhibition Motif (ITIM) and ITIM-like motifs [5]. Siglec-9 is also a leukocyte trafficking molecule, and its expression is rapidly up-regulated on the leukocyte surface after inflammatory stimuli [6]. Recently, we identified Siglec-9 and Siglec-10 as counter-receptors for human primary amine oxidase (hAOC3; also called vascular adhesion protein-1, VAP-1) on the endothelial cell surface [6,7]. Similar to Siglec-9, hAOC3 is an inflammation-inducible protein [8,9]. Upon inflammation, leukocytes migrate from the blood into the non-lymphoid tissues, and the heavily glycosylated hAOC3 contributes to several steps of the extravasation cascade and controls the trafficking of lymphocytes, granulocytes and monocytes to the sites of inflammation [10].
Besides being an adhesion molecule, hAOC3 is also an enzyme, which catalyzes the oxidative deamination of primary amines and produces hydrogen peroxide, aldehyde and ammonium [11]. The catalytic site of hAOC3 is deeply buried and contains an essential topaquinone (TPQ) cofactor, modified from Tyr471 in a copper-dependent manner. The two functions of hAOC3 are interlinked, since inhibition of the enzymatic activity of hAOC3 increases rolling velocity but reduces the adhesion and transmigration steps of leukocyte extravasation in vivo [12]. Additionally, the sialic acids of the hAOC3-attached glycans are crucial for adhesion [13], and hAOC3 glycosylation is important in the initial recognition but also regulates the enzymatic activity [14]. Since small-molecule inhibitors of the hAOC3 oxidase activity have been shown to prevent the inflammatory function of hAOC3 in vivo, hAOC3 inhibitors could be used in treating acute and chronic inflammatory conditions as well as tumor progression and the metastatic spread of cancer (reviewed in [15]). Siglec-9 and Siglec-10 were initially identified as potential ligands for hAOC3 using the CX8C phage peptide library [6,7]. The current knowledge on the Siglec-9-hAOC3 interaction is mainly obtained from studies with peptides corresponding to the CE loop of the second C2 (C22) domain of Siglec-9 [6]. Based on the previous results, two arginines in the peptide were crucial for the interaction. These correspond to R284 and R290 in Siglec-9: when only one of the arginines was present in the peptide, the binding to hAOC3 was reduced, while mutating both of them totally abolished the binding. As hAOC3 is translocated to the endothelial cell surface mainly upon inflammation and in certain cancers, Siglec-9 peptides are valuable as diagnostic tools to detect inflammation and cancer in vivo. In fact, the labeled Siglec-9 peptide functions as a tracer in the positron emission tomography (PET) imaging of hAOC3 [6].
In the present study, we have expressed and purified the extracellular region of Siglec-9 (Siglec-9-EC) to investigate its interaction with hAOC3 at the protein level. Firstly, we used mutagenesis to find out the individual effects of R284 and R290 in the C22 domain and, secondly, we inspected whether the V domain of Siglec-9 also plays a role in the interaction. Since R120 in the V domain of Siglec-9 has been reported to be crucial for the recognition of α2,3- and α2,6-linked sialic acids on some other Siglec-9 ligands [16], we created the ΔV and R120S mutants of Siglec-9 to study the interaction of the V domain with the hAOC3-attached sialoglycans. Biochemical studies of the Siglec-9-EC interaction with hAOC3, supported by mutagenesis and structural modeling, allowed us to show that the V domain of Siglec-9 binds the sialic acids on the hAOC3 surface and the C22 domain of Siglec-9 interacts with the active site channel of hAOC3. Our results provide novel insights into the mechanism of the physiologically relevant interactions that occur between leukocytes and endothelial cells under inflammatory conditions and, thus, aid the further development of hAOC3-targeted therapeutics and diagnostic tools.

Comparative 3D modeling

The 3D structural model for the Siglec-9-EC sequence (UniProt Knowledgebase (UniProtKB) Q9Y336) was constructed using the X-ray structure of Siglec-5 (Protein Data Bank Identification Code (PDB ID) 2ZG2 [17]) as a template. Firstly, a 3D model for the V-C21 domains (residues S20-H229) was made based on the multiple sequence alignment of Siglec-3, -5, -6, -7, -9 and -14, using the V-C21 domains of the Siglec-5 crystal structure as a template (Fig 1A). Secondly, the C22 domain (residues S251-L335) was separately modeled using the sequence alignment of the C2 domains of the CD33-related Siglec sequences, including all C21, C22 and C23 domains, and the C21 domain of the Siglec-5 crystal structure as a template (Fig 1B).
Thereafter, the linker region (L230-V250) between the C21 and C22 domains was modeled in two different ways. The first complete Siglec-9-EC model of residues S20-L335 (model 1) was made by orienting the domains manually into an extended 3D arrangement, and the second model of the same residues (model 2) was based on the 3D arrangement of the immunoglobulin domains in the X-ray structure of a neural cell adhesion protein (PDB ID 1QZ1 [18]), which also has an extended 3D arrangement of the immunoglobulin domains. All sequence alignments were done using Malign [19] within Bodil [20]. For each case, ten models were generated with Modeller [21] and the one with the lowest objective function was chosen as the representative model. The quality of the 3D folds of the V-C21 and C22 homology models was evaluated with PROCHECK [22], ProSA-web [23,24] and QMEAN [25]. To visualize the location of the sialic acid (SA) binding site in Siglec-9-EC, the sialic acid binding mode was modeled based on the Siglec-5 complex (PDB ID 2ZG1 [17]). Model 1 and model 2 of Siglec-9-EC were refined by molecular dynamics (MD) simulations as described in the following section.

Molecular dynamics simulations

Protein Preparation Wizard, as implemented in the Maestro v. 9.6 molecular modeling software (Schrödinger, Inc.), was used to prepare model 1 and model 2 of Siglec-9-EC for the MD simulations. All hydrogen atoms were added, bond orders were assigned, and disulphide bridges were created between the following cysteine residues: 36 and 170, 41 and 102, 164 and 213, and 271 and 320. Hydrogen bonds were assigned at pH 7.0 and the protonation states of histidines were selected interactively to optimize the hydrogen bond network. Finally, a restrained energy minimization of the hydrogen atoms was run in the OPLS 2005 force field.
Energy minimization, thermal equilibration and standard production simulations were performed with the AMBER package (version 12) [26] using the AMBER ff03 force field [27]. All simulations were run in an octahedral box (extending 10.0 Å from the protein), filled with explicit TIP3P water molecules [28] and one neutralizing Cl- ion. Periodic boundary conditions, particle-mesh Ewald electrostatics [29] and a cut-off of 9 Å for non-bonded interactions were used. A time step of 1 fs (for Langevin dynamics during equilibration) or 2 fs was applied together with the SHAKE algorithm [30] to constrain the bonds to hydrogen atoms. The 20-ns production simulations were performed at a constant temperature of 300 K and a pressure of 1 bar. The coupling constants for temperature and pressure [31] were 5.0 and 2.0 ps, respectively. Energy minimization was performed with the steepest descent and conjugate gradient methods in six steps, gradually reducing the restraints on the protein atoms at their initial positions. At each step, the restraint force constant was, in order: 10, 5, 1, 0.1, 0.01 and 0 kcal/mol·Å^2. Each minimization step was carried out for a maximum of 200 iterations (the first 10 iterations with the steepest descent method and the rest with the conjugate gradient algorithm).
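The six-step restrained minimization protocol described above is essentially a schedule of AMBER minimization inputs. The Python sketch below generates such a schedule; the restraint mask and file layout are placeholders (not taken from the paper), and only the weights and cycle counts come from the text:

```python
# Sketch: generate AMBER-style &cntrl minimization inputs for the
# six-step restrained schedule described above. Restraint weights are
# in kcal/mol.A^2; each step runs 200 cycles, the first 10 with
# steepest descent (AMBER's maxcyc/ncyc convention).
WEIGHTS = [10.0, 5.0, 1.0, 0.1, 0.01, 0.0]

def mdin(step, weight, maxcyc=200, ncyc=10):
    """Build one minimization input block for the given restraint weight."""
    ntr = 1 if weight > 0 else 0  # final step runs unrestrained
    return (f"&cntrl  ! minimization step {step}\n"
            f"  imin=1, maxcyc={maxcyc}, ncyc={ncyc},\n"
            f"  ntr={ntr}, restraint_wt={weight},\n"
            f"  restraintmask='!@H=',  ! placeholder mask\n"
            f"&end\n")

inputs = [mdin(i + 1, w) for i, w in enumerate(WEIGHTS)]
print(inputs[0])
```

In a real run, each generated block would be written to its own input file and the restart structure of one step fed into the next.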
Equilibration simulations were performed in five steps: (i) 10 ps heating of the system from 10 K to 300 K using a Langevin thermostat with a collision frequency (γ) of 1.0 ps^-1, constant volume, and restraints on the protein atom positions (restraint force constant of 5 kcal/mol·Å^2); (ii) the same as the previous step but for 20 ps and without restraints on the protein atom positions; (iii) 20 ps of MD at 300 K using a Langevin thermostat with γ = 0.5 ps^-1 and constant volume, with no restraints on the protein; (iv) 50 ps of MD at 300 K using a Langevin thermostat with γ = 0.5 ps^-1 and a constant pressure of 1.0 bar, with a coupling constant for pressure of 1.0 ps and no restraints on the protein; (v) 400 ps of MD at 300 K and a constant pressure of 1 bar, with coupling constants for temperature and pressure of 5.0 and 2.0 ps, respectively, and no restraints on the protein. The MD simulation trajectories were analyzed with VMD [32] and the ptraj module of AMBER. The resulting final frame structures were first minimized with AMBER (similarly to the last step of the initial minimization) and then visually examined with PyMOL (Schrödinger, Inc.).

Reagents

All reagents, if not otherwise mentioned, were purchased from Sigma-Aldrich.

Vectors and cell lines

For the insect cell production of Siglec-9, we inserted a HindIII restriction endonuclease site after the carboxyl-terminal His-tag by inverse PCR (the primer sequences are shown in S1 Text). The PCR product was then cut with XbaI and HindIII and inserted into the pFastBac1 derivative vector p503.9 [34]. To include a cleavable His-tag, we changed the tag from the C-terminus to the N-terminus by PCR. The resulting constructs thus had the secretion signal for insect cell production, the N-terminal Flag and His tags, and the coding regions for Siglec-9-EC (V-C21-C22; residues 26-348) and Siglec-9-ΔV (C21-C22; residues 145-348). The R284S, R290S, R120S, R120S/R284S and R120S/R290S mutations were produced using the QuickChange Lightning mutagenesis kit (Agilent Technologies). For the PCR we used a Siglec-9-EC production vector as a template and primers HE-56-61 (primer sequences in S1 Text), following the manufacturer's instructions exactly. The mutations were confirmed by sequencing the whole Siglec-9 gene.

Fig 1. The multiple sequence alignments used in the 3D modeling of Siglec-9-EC. The sequence numbering (according to the Siglec-9 sequence) and the secondary structural elements (yellow and pink boxes denoting the beta sheets and the 3/10 alpha helices, respectively) in the Siglec-5 structure (PDB ID 2ZG2), which was used as a template in the modeling procedure, are shown above both alignments. The conserved residues have a cyan background. (A) Multiple sequence alignment used in the 3D modeling of the V-C21 domains of Siglec-9. In the sequence alignment of the V-C21 domains of Siglec-3, -5, -6, -7, -9 and -14, the six cysteine residues forming disulphide bonds are marked as '1', '2' and '3' (in green) below the alignment. The key sialic acid binding residue, R120, is highlighted with a blue background. The two light blue arrows below the alignment mark the V domain and the light brown arrows define the C21 domain. (B) Multiple sequence alignment of the C2 domains of the CD33-related Siglec sequences used in the 3D modeling of the C22 domain of Siglec-9. The sequence alignment includes the C21, C22 and C23 domains. The key arginine residues, R284 and R290, are highlighted with a blue background. The phage peptide sequence is shown in green under the alignment. The Siglec-9 peptide sequence used in the PET study [6] is boxed in the alignment. The conserved cysteine residues forming the disulphide bond within the C2 domain are in bold letters.

Production and purification of recombinant Siglec-9

Using Sf9 cells, high-titer baculovirus stocks were generated for each Siglec-9-EC construct. For protein expression, High Five Tn5 cells were infected with the baculovirus stock.
Two days post-infection, the protein was secreted into the medium, and the supernatant was harvested by centrifugation to remove cellular material. The 6×His-tagged protein was purified by adding Ni2+ resin (Ni2+-charged Chelating Sepharose, GE Healthcare) to the supernatant in batch. After a 45-minute incubation at +4˚C, the resin was washed with phosphate-buffered saline (PBS) in the presence of 7 mM imidazole, and the protein was eluted with 500 mM imidazole in PBS, pH 8.0. Finally, the protein was purified by gel filtration on a Superdex 200 10/300 GL column (GE Healthcare) in 20 mM HEPES pH 7.4, 150 mM NaCl. hAOC3 production and purification were carried out exactly as previously described [36].

Thermal stability assays

The purified proteins were characterized using a fluorescence-based thermal stability assay [37]. In this methodology, a dye intercalates with the exposed hydrophobic regions generated by the unfolding of proteins. We used SYPRO Orange dye, with maximal absorption of the dye-protein complex at 470 nm and maximal emission at 569 nm. Siglec-9-EC was concentrated to 2 mg/ml. A 96-well plate was filled with protein samples, buffers and dye, with an assay volume of 25 μl per well. The analysis was done on an iCycler machine (Bio-Rad Laboratories), and melting curves were generated by increasing the temperature from 20˚C to 95˚C with a stepwise increment of 1˚C. The fluorescent signal is plotted as a function of temperature, and the significant increase in the signal (slope) corresponds to the melting of the protein. The analysis of the results and the estimation of the melting temperature (Tm) in the thermal shift assay were done with the Meltdown program [38].
The program estimates the melting temperature in two ways: by using a quadratic fit to the data around the global minimum of the first derivative curve (this value is used as the Tm in the subsequent analyses) and by finding the temperature associated with the midpoint of the fluorescence response between the high point and the low point of the melt curve. The melt curves are considered normal by the Meltdown program if the two estimated Tm values are within 5˚C of each other.

Binding assay using surface plasmon resonance (SPR)

The binding of purified Siglec-9-EC to immobilized hAOC3 was measured with a BiacoreX instrument (GE Healthcare). hAOC3 was expressed in CHO cells and purified as described in Smith et al. (1998) [11]. The hAOC3 chip was prepared via amine coupling according to the manufacturer's instructions, using 10 mM Na-acetate buffer pH 4.0 as the coupling buffer, resulting in 10 000 RU of hAOC3 immobilized on a CM5 chip. To monitor the binding of the Siglec fragment to hAOC3, we injected 20 μl of 0.3-31 μM Siglec-9-EC over the surface at 25˚C and recorded the response units (RU) as a function of time. The running buffer was HEPES-buffered saline (HBS; 10 mM HEPES, 150 mM NaCl, pH 7.4) with 0.005% Surfactant P20 (GE Healthcare). 25 mM HBS with or without 5% glycerol was used as the protein buffer, and the dilutions were made in the purification buffer. During the measurement, the binding to an empty reference channel was subtracted. We also monitored the response of the protein buffer alone. Two experiments were done using different protein preparations. To determine a binding constant for Siglec-9-EC, we plotted all the responses as a function of concentration and used the non-linear regression of the GraphPad4 software (GraphPad Software Inc., La Jolla, CA).
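The two Tm estimates computed by Meltdown, as summarized in the thermal stability section above, can be approximated with a short NumPy sketch. This is an illustrative reimplementation on a synthetic melt curve, not Meltdown's actual code; since SYPRO fluorescence rises on melting, the sketch locates the steepest rise of the curve (the text's derivative-minimum criterion locates the same transition point under the program's sign convention):

```python
import numpy as np

def estimate_tm(temps, fluor):
    """Two Tm estimates analogous to those described in the text:
    (1) quadratic fit around the extremum of dF/dT,
    (2) temperature at the fluorescence midpoint of the melt curve."""
    temps = np.asarray(temps, float)
    fluor = np.asarray(fluor, float)

    # Method 1: derivative extremum refined by a local quadratic fit.
    d = np.gradient(fluor, temps)
    i = int(np.argmax(d))                 # steepest rise of the melt curve
    lo, hi = max(i - 2, 0), min(i + 3, len(temps))
    a, b, _c = np.polyfit(temps[lo:hi], d[lo:hi], 2)
    tm_deriv = -b / (2.0 * a)             # vertex of the fitted parabola

    # Method 2: midpoint between the low and high fluorescence plateaus
    # (fluorescence rises monotonically in this synthetic curve).
    half = 0.5 * (fluor.min() + fluor.max())
    tm_mid = float(np.interp(half, fluor, temps))
    return tm_deriv, tm_mid

# Synthetic sigmoidal melt curve over the 20-95 C scan range, with a
# transition at 57 C (the Tm reported for Siglec-9-EC).
T = np.arange(20.0, 96.0, 1.0)
F = 1.0 / (1.0 + np.exp(-(T - 57.0) / 2.0))
tm1, tm2 = estimate_tm(T, F)
```

On a well-behaved curve the two estimates agree closely, which mirrors Meltdown's quality criterion that the two values lie within 5˚C of each other.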
To test whether the mutations of R284 and R290 in Siglec-9 have any effect on the binding to hAOC3, we monitored the binding of Siglec-9-EC and the mutant proteins Siglec-9-EC/R284S and Siglec-9-EC/R290S to immobilized hAOC3 using five to seven different concentrations. We then determined koff and kon separately using the Biaeval program and determined the binding constants for every curve separately. For the final Kd, we did not include the highest and lowest values when calculating the averages. Because Siglec-9-EC-R120S, -R120S/R284S, -R120S/R290S and -ΔV bound very weakly to hAOC3, it was not possible to determine their binding constants and, thus, only one concentration (0.5 μM) of these proteins was tested. The effect on the Siglec-9-EC-hAOC3 interaction was also tested by SPR for two different glycans, sialic acid and disialyl lactotetraosylceramide (DSLc4), and for an imidazole molecule known to block access into the hAOC3 active site. In order to monitor the effect of a particular molecule, we incubated Siglec-9-EC in the purification buffer with different concentrations of the above-mentioned reagents on ice for a minimum of 30 min before the measurement. After the incubation, Siglec-9-EC with the reagent was injected as above. We also tested whether semicarbazide, a known inhibitor binding irreversibly to the topaquinone cofactor of hAOC3, has an effect on the interaction between Siglec-9 and hAOC3. Before the Siglec-9-EC injections, we injected 1 mM semicarbazide over the hAOC3 surface to bind it to the hAOC3 active sites. After this, we monitored the binding of Siglec-9-EC to hAOC3 and included 1 mM semicarbazide also in the injected Siglec-9-EC solutions. For every measurement, the relative binding was calculated by normalizing the responses to the first control condition, which was set to 1.0.

Siglec-9 as a hAOC3 substrate

The enzymatic activity of hAOC3 on purified Siglec-9-EC was assayed as described earlier in, e.g., [36].
Now we used 25 mM HEPES pH 7.4, 150 mM NaCl as the reaction buffer and 30 μg of CHO-hAOC3 lysate as the protein source to determine the specific activity of hAOC3. As a positive control we used 0.25 mM benzylamine as a substrate, and as a negative control the lysate of CHO-hAOC3-Y471F cells expressing an inactive hAOC3 mutant [12]. The formation of fluorescence was followed for 1-3 hours. For every experiment we used duplicate wells.

Siglec-9 as a modulator of hAOC3 activity

To find out whether Siglec-9 has an influence on the amine oxidase activity of hAOC3, we performed the activity assay with live CHO-hAOC3 cells and used labeled benzylamine as a substrate, as previously described [10,39,40]. In detail, we plated 5×10^4 cells per well on a 96-well plate the day before the experiment and cultured the cells in 200 μl of F-12 medium (Gibco). Before the experiment, we removed the medium and added first the reaction buffer (20 mM HEPES, 5 mM KH2PO4, 1 mM MgSO4, 1 mM CaCl2, 136 mM NaCl, and 4.7 mM KCl, pH 7.4) and 1 mM clorgyline and, thereafter, 1 mM semicarbazide inhibitor, 5 μM Siglec-9-EC or bovine serum albumin (BSA). The assay of the R120S, R120S/R284S and R120S/R290S Siglec-9-EC mutant proteins was done with a 1 μM protein concentration. The cells were then incubated at 37˚C/5% CO2 for 20 min, after which [7-14C]benzylamine (Amersham Pharmacia, 54 mCi/mmol) was added (2.5 μM). The cells were incubated for a further 2 hours at 37˚C/5% CO2, after which the reaction was stopped with 2 M citric acid and the labeled reaction product, benzaldehyde, was extracted into toluene for liquid scintillation counting (Wallac 1409 liquid scintillation counter, Wallac, Turku, Finland). Three independent experiments with at least duplicates were performed, and the relative activities were calculated by normalizing the responses to the first positive control condition.

Statistics

We used the non-parametric Mann-Whitney U-test for the comparison of means.
Analysis of variance was done using the Kruskal-Wallis test. P-values below 0.05 were considered significant. All the analyses were done using IBM SPSS Statistics version 22.0 (SPSS Inc., USA).

Characterization of recombinant Siglec-9-EC

Production and purification of recombinant Siglec-9-EC. We have previously shown that Siglec-9 peptides bind to hAOC3 and that CHO-Siglec-9 cells interact with CHO-hAOC3 cells. Further, we have demonstrated ex vivo that Siglec-9 mediates the binding of human granulocytes to hAOC3 expressed on the lymph node vasculature of transgenic mice [6]. To analyze the biologically relevant hAOC3-Siglec-9 interactions in more detail at the protein level, we now expressed the extracellular part of Siglec-9 (Siglec-9-EC) as a recombinant protein in Tn5 insect cells. The protein was purified from the culture medium by Ni-affinity chromatography, followed by size exclusion chromatography (Fig 2A). The purity of Siglec-9-EC on the SDS-PAGE was estimated to be more than 95% (Fig 2B). Based on visual inspection of the SDS-PAGE, the size of the denatured Siglec-9-EC is around 50 kDa, which is higher than the 40 kDa molecular weight calculated for Siglec-9-EC with tags. Using the retention time calculated from the chromatogram (Fig 2A), the native Siglec-9-EC is a dimer of 85 kDa. This measurement gives a molecular weight of 42.5 kDa for the monomer, which is also slightly larger than the estimated size. The observed larger molecular weight of Siglec-9-EC might result from the fact that Siglec-9-EC has eight putative N-glycosylation sites (UniProtKB Q9Y336), and glycosylation might increase its size. Dimerization of Siglecs has also been observed for Siglec-5 and Siglec-8 [41,42]. As R284 and R290 in the C22 domain of Siglec-9 were proposed to interact with hAOC3 [6], we created the R284S and R290S point mutations of Siglec-9-EC to find out whether these residues indeed have a role in the Siglec-9-EC-hAOC3 interaction.
To study whether the V domain of Siglec-9 is also involved in the interactions, we first deleted the sialic acid binding V domain (Siglec-9-ΔV) and thereafter created the R120S mutant to abolish the sialic acid binding capability of the V domain. Unlike Siglec-9-ΔV, which totally lacks the V domain, the Siglec-9-EC/R120S variant retains all the domains of Siglec-9-EC intact. Additionally, we created the R120S/R284S and R120S/R290S double mutants. The purity of the mutant proteins was similar to that of the WT protein (data not shown). All Siglec-9 proteins yielded about 1 mg of purified protein per liter of culture volume.

Stability analysis of the produced Siglec-9-EC proteins with the thermal shift assay. To assess the folding of Siglec-9-EC produced in insect cells, we performed a fluorescence-based thermal shift assay using the iCycler. This methodology takes advantage of the fact that the SYPRO Orange dye becomes fluorescent when it binds to hydrophobic amino acids in the protein; thus, when the protein starts to unfold during heat denaturation, the hydrophobicity of the dye environment increases and can be detected as increased fluorescence. The results were analyzed and the melting temperature (Tm) was calculated using the Meltdown program [38]. First, we used a buffer screen containing a set of seven different buffer systems, each at a concentration of 100 mM, covering a pH range from 4.0 to 9.5 in the presence of 125 mM NaCl (Fig 3B). On the basis of the thermal shift measurements, Siglec-9-EC seems to be stable in most of the buffer conditions at different pH values (up to pH 9.5), except for pH 6.0 and below (Fig 3A). Furthermore, the thermal stability of Siglec-9-EC was studied in the presence of different additives. The results suggest that addition of the reducing agent 1 mM TCEP or the detergent 1% Triton X-100 affects the protein stability and slightly reduces the Tm of the protein (Fig 3B).
Based on these measurements, we concluded that the purified Siglec-9-EC is most stable in 20 mM HEPES, 150 mM NaCl, pH 7.4, which was therefore selected as the storage buffer for the proteins. Finally, we ran the thermal shift assay for the purified Siglec-9-EC/R284S, Siglec-9-EC/R290S, Siglec-9-EC/R120S, Siglec-9-EC/R120S/R284S and Siglec-9-EC/R120S/R290S mutant samples, which all showed similar melting curves and no change in the Tm of 57˚C, except for the slightly lower Tm of 54.2˚C observed for Siglec-9-EC/R120S (Fig 3C). This suggests that the arginine-to-serine mutations in Siglec-9-EC did not have an effect on its fold and stability and, thus, proved that our strategy of avoiding hydrophobic patches on the surface of Siglec-9-EC, by replacing the positively charged arginines with polar serines instead of hydrophobic alanines, was successful.

The 3D model for Siglec-9-EC reveals the position of the key arginines involved in its interaction with hAOC3

The 3D structural model for Siglec-9-EC, consisting of the domains V-C21-C22, was created to illustrate the location of the two arginine residues, R284 and R290 (in the C22 domain), which we have earlier found to be critical for the binding of Siglec-9-derived peptides to hAOC3 [6]. We also wanted to correlate their position with that of R120 (in the V domain), the key residue for sialic acid binding. Firstly, the 3D model for the V-C21 domains was generated by homology modeling based on the multiple sequence alignment (Fig 1A), with the corresponding domains of the Siglec-5 X-ray structure used as a structural template (PDB ID 2ZG2). Secondly, the 3D model for the C22 domain was created based on the alignment of the C2 domains of several Siglecs (Fig 1B), using the crystal structure of the C21 domain of Siglec-5, which has the C2 fold, as a structural template.
The V-C21 and C22 models were visually inspected and compared with the 3D structure of Siglec-5, and their quality was assessed with several programs/servers, which all gave acceptable results. According to PROCHECK [22], 91.8% of the V-C21 residues and 94.7% of the C22 residues are in the favored regions of the Ramachandran plot. Thus, the stereochemical quality of the models is actually better than that of the Siglec-5 structure (for which the corresponding value is 86.2%). Analysis of the homology models with ProSA-web [23,24] gave a Z-score of -5.9 for V-C21 (Figure A in S1 Fig). The linker region (L230-V250) between the C21 and C22 domains is predicted to be highly flexible, and its position has a significant effect on the relative orientation of the V and C22 domains. Thus, the linker region was modeled by manually creating an extended 3D arrangement of the domains (Fig 4A, model 1) and by using the spatial 3D arrangement of the three immunoglobulin domains in the X-ray structure of the neural cell adhesion molecule (PDB ID 1QZ1 [18]; Fig 4B, model 2), which has a similarly extended conformation. To refine the structures of the Siglec-9-EC models, both models were subjected to a 20-ns MD simulation, which resulted in similar, bent conformations (Fig 4A-4C). Since model 1 was energetically better (prior to and after MD), it was used as the representative model for further analysis (Fig 4D). The model for the V-C21 domains is the most reliable part of the final model due to the relatively high sequence identity (~50%) between these domains in Siglec-9 and Siglec-5. Furthermore, a conserved disulphide bridge between Cys35 and Cys170 stabilizes the spatial orientation of the V and C21 domains.
As a result of the interdomain disulphide bridge, the relative orientation of the V-C21 domains is the same in the extended and bent conformations (Fig 4A-4C), whereas the movement of the C22 domain relative to V-C21 is not restricted and C22 moves closer to the V domain in the bent conformation (Fig 4C). The positions of R284 and R290 in the C22 domain are based on the 3D fold of the C21 domain of Siglec-5, since there are no X-ray structures for the C22 domain of Siglecs. Despite the low sequence identity between C22 of Siglec-9 and C21 of Siglec-5 (14%), the multiple sequence alignment (Fig 1B) shows six totally conserved residues and a conserved hydrophobicity profile. Moreover, the minor variations in the sequence lengths occur in the loop regions. Two of the totally conserved residues are cysteines that form a disulphide bond stabilizing the 3D fold of the C2 domain. In the C22 of Siglec-9, the conserved internal disulphide bridge between residues C272 and C320 increases the reliability of the 3D positions of R284 and R290. Both R284 and R290 are exposed to solvent and located near each other in the CE loop region of the C22 domain of the Siglec-9-EC model (Fig 4D). In the extended conformation (prior to MD; Fig 4A), they are far away from the V domain, but in the bent conformation they come closer to R120 in the sialic acid binding site of the V domain (Fig 4D).

Role of R284 and R290 in the Siglec-9-EC binding to hAOC3 under flow conditions

We next tested the binding of Siglec-9-EC to immobilized hAOC3 to determine the affinity between these proteins. Using SPR, we demonstrated that purified Siglec-9-EC bound specifically to hAOC3 with an affinity of Kd = 4.6 ± 0.93 μM and Bmax = 3504 ± 204 RU, when the apparent Kd was determined using the semi-quantitative method, assuming a steady-state equilibrium at the end of the injection and a one-to-one binding model (Fig 5A and 5B).
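The semi-quantitative steady-state analysis described above (equilibrium response plotted against concentration, one-to-one binding model) amounts to a standard one-site fit, R_eq = Bmax·C/(Kd + C). The sketch below illustrates the fit on synthetic data generated from the reported constants (Kd ≈ 4.6 μM, Bmax ≈ 3504 RU) over the quoted 0.3-31 μM injection range; it is not the actual Biacore data or software:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc, bmax, kd):
    """Steady-state SPR response for 1:1 binding: R_eq = Bmax*C/(Kd+C)."""
    return bmax * conc / (kd + conc)

# Synthetic equilibrium responses at concentrations spanning the
# 0.3-31 uM injection range quoted in the text, generated from the
# reported constants with a little added noise.
rng = np.random.default_rng(0)
conc = np.array([0.3, 1.0, 3.0, 10.0, 31.0])            # uM
resp = one_site(conc, 3504.0, 4.6) + rng.normal(0, 20, conc.size)

# Non-linear least-squares fit recovers Bmax and the apparent Kd.
(bmax_fit, kd_fit), _ = curve_fit(one_site, conc, resp, p0=(3000.0, 5.0))
print(f"Bmax = {bmax_fit:.0f} RU, Kd = {kd_fit:.2f} uM")
```

When kon and koff are instead fitted separately from the sensorgrams, Kd follows as koff/kon; for example, a three-fold faster association combined with a roughly halved dissociation rate, as reported below for the R284S mutant, tightens Kd about six-fold.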
Since we have earlier shown that mutation of R284 and R290 in the Siglec-9-like peptides reduces the binding to hAOC3 [6], we now tested the binding of the R284S and R290S mutants to immobilized hAOC3. The binding constants (Table 1) show that neither of the Arg/Ser mutations abolished the binding, and when the Biaeval program was used to determine kon and koff and Kd was calculated as their ratio, the Kd was lower than with the semi-quantitative method (1.04 vs. 4.6 μM). When R284 was mutated to a serine, the Kd improved about tenfold (0.14 ± 0.58 μM), whereas the R290S mutation had a smaller effect, resulting in a Kd of 0.53 ± 0.50 μM. Although the error range of the constants is large, both of the Arg/Ser mutations caused tighter binding. The maximal binding at the highest concentrations of both mutants was lower (1130 RU for R284S, 1760 RU for R290S) than for the wild-type (WT) Siglec-9-EC (3120 RU) due to the lower concentrations used (Figure A in S2 Fig). When the separate kon and koff values were compared between the WT and the Arg/Ser mutants, we noticed that the R284S mutant had a three times higher kon than the WT and a koff about half that of the WT (Table 1). Therefore, both increased association and decreased dissociation contribute to the significantly increased binding of R284S. In contrast, the R290S mutation had an almost identical association rate to the WT, but the decreased dissociation rate (Table 1) leads to better binding.

The intact V domain is crucial for the Siglec-9-EC-hAOC3 interaction

As both α2,3- and α2,6-linked sialic acids of hAOC3 are known to be involved in cell adhesion [13], we tested whether the V domain of Siglec-9 interacts with them. In this experiment, we first preincubated Siglec-9-EC with sialic acid and then assayed the binding to hAOC3 with SPR. We observed a clear dose-dependent inhibition of the binding of Siglec-9-EC to hAOC3 by sialic acid (Fig 6A, Figure B in S2 Fig).
We also tested the effect of a glycan, disialyl lactotetraosylceramide (DSLc4), on the binding of Siglec-9-EC to hAOC3 using SPR. DSLc4 binds to Siglec-7 and aids dimer formation by binding to two Siglec-7 molecules [43], but it does not interact with Siglec-9 [44] and thus functions as a negative control for the binding assay. As expected, DSLc4 did not have an effect on the interaction at 20 and 50 μM concentrations (Fig 6B, Figure B in S2 Fig).

Table 1. Binding constants of Siglec-9-EC proteins on hAOC3. K d of Siglec-9-EC/R284S differed significantly from K d of Siglec-9-EC (Z = -2.61, p = .027), but K d (Siglec-9-EC/R290S) did not (Z = -1.57, p = .302).

Because sialic acid decreased the interaction of Siglec-9 with hAOC3, we next deleted the sialic acid binding V domain. When we assayed the relative binding of Siglec-9-ΔV on immobilized hAOC3, the binding of Siglec-9-ΔV remained at the background level (Fig 6C, Figure C in S2 Fig). Due to the low binding to hAOC3 (<10% of WT binding), we were not able to determine an accurate binding constant for Siglec-9-ΔV. To confirm that the sialic acid binding ability of Siglec-9 is mainly responsible for the binding of Siglec-9 to hAOC3, we mutated R120, the crucial sialic acid binding residue [16], to a serine and assayed the binding to hAOC3 (Fig 6C). Similar to Siglec-9-ΔV, the binding of Siglec-9-EC/R120S as well as the double mutants (R120S/R284S and R120S/R290S) to hAOC3 was at the background level (Fig 6C, Figure C in S2 Fig).

Siglec-9-EC enhances the enzymatic activity of CHO-hAOC3 cells

Next, we tested if the purified Siglec-9-EC could act as a substrate for hAOC3. When we tested the activity of CHO-hAOC3 lysate on 10 μg of Siglec-9-EC using the Amplex Red assay, we were not able to demonstrate any activity above the negative control (CHO-hAOC3-Y471F lysate, Fig 7A).
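The comparison of K_d values in Table 1 is reported as a Z statistic without naming the test; one plausible choice that yields such a statistic is the Wilcoxon rank-sum test, sketched below. The replicate K_d values are hypothetical, centered near the reported means, and serve only to show the shape of such an analysis.

```python
# Hedged sketch of the statistical comparison of K_d values. The paper
# reports Z statistics without naming the test; a Wilcoxon rank-sum test
# (scipy.stats.ranksums), which returns such a Z statistic, is one
# plausible choice. The replicate K_d values are HYPOTHETICAL, centered
# near the reported means (WT ~1.04 uM, R284S ~0.14 uM).
from scipy.stats import ranksums

kd_wt = [0.90, 1.10, 1.30, 0.80, 1.20]     # uM, hypothetical WT replicates
kd_r284s = [0.12, 0.18, 0.10, 0.16, 0.14]  # uM, hypothetical R284S replicates

z_stat, p_value = ranksums(kd_r284s, kd_wt)
print(f"Z = {z_stat:.2f}, p = {p_value:.3f}")
```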
However, when we analyzed the effect of Siglec-9-EC on the amine oxidase activity of intact CHO-hAOC3 cells, we observed a two-fold increase in the benzylamine activity of CHO-hAOC3 (Fig 7B). Thereafter, we studied if the Arg/Ser mutations have an effect on the activity-modulating capacity of Siglec-9-EC (Fig 7C). This time, we saw a similar increase in the hAOC3 activity with both the WT and Siglec-9-EC/R120S mutant proteins. Thus, Siglec-9-EC was able to modulate the amine oxidase activity of CHO-hAOC3 cells. Furthermore, Siglec-9-EC/R120S was still able to enhance the amine oxidase activity of hAOC3, although the R120S mutation in the V domain had disrupted the sialic acid binding capacity of Siglec-9-EC (Fig 7C), whereas the R284S mutant (mutated in the C2 2 domain) and the double mutants (R120S/R284S and R120S/R290S) had lost the capacity to modulate the hAOC3 activity.

Semicarbazide does not have an effect on the Siglec-9-EC-hAOC3 interaction whereas imidazole inhibits the binding of Siglec-9-EC to hAOC3

Our earlier data on the binding of a Siglec-9 peptide suggested that Siglec-9 binds directly to the TPQ cofactor of hAOC3 [6], but the activity and inhibition assays carried out in this study challenged this assumption. Since Siglec-9 clearly modulates the enzymatic activity of hAOC3, we further tested whether the binding of semicarbazide to hAOC3 has an effect on the Siglec-9-EC-hAOC3 interaction. Semicarbazide is a classical amine oxidase inhibitor that binds covalently to the TPQ cofactor and inhibits the enzymatic activity irreversibly. Binding of 1 mM semicarbazide to hAOC3 had no effect on the subsequent Siglec-9-EC adhesion (Fig 8A, Figure D in S2 Fig).
From our previous work, we know that imidazole at high concentrations inhibits hAOC3 in a reversible manner, and the X-ray structure of the hAOC3-imidazole complex shows two distinct sites in the active site cavity [36]: one of the imidazole molecules (Imid1) interacts with TPQ in the active site whereas the other one (Imid2) binds into the active site channel (Fig 8B). To elucidate if imidazole binding has an effect on the Siglec-9-EC-hAOC3 interaction, we injected several concentrations (5-100 mM) of imidazole together with the Siglec-9-EC samples over the hAOC3 surface. The addition of imidazole to the binding assay non-linearly decreased the binding of Siglec-9-EC to hAOC3 (Spearman r s = -0.8, p = 0.05, data not shown) and, at the concentration of 50 mM, inhibited the binding of Siglec-9-EC to hAOC3 by about 30% (Figure D in S2 Fig) when compared to the binding without imidazole (Fig 8A).

Fig 8. Effect of hAOC3 inhibitors on the Siglec-9-EC-hAOC3 interaction. (A) Imidazole reduces Siglec-9-EC binding to hAOC3 whereas semicarbazide has no effect. Black bars: no added inhibitor, white bars: 1 mM semicarbazide (SC) or 50 mM imidazole (Imid) added. The average of two independent experiments is shown ± SEM. (B) The overall structure of heavily glycosylated hAOC3. Chain A in the hAOC3 dimer is shown in blue, chain B in cyan and the copper ions in each deeply buried active site are shown as orange spheres. The X-ray structures of hAOC3 have revealed that the 12 N-glycosylation sites in the hAOC3 dimer are glycosylated, but only a few sugar units of the highly flexible N-glycans are visible in the structures. The attached glycans are named N1 (attached to N137), N2 (N232), N3 (N294), N4 (N592), N5 (N618) and N6 (N666) in chain A, and those in chain B are marked with an additional hyphen. The close-up view corresponds to the boxed area of the hAOC3 dimer and shows the active site channel of hAOC3 in complex with imidazoles (PDB ID 2Y74 [32]).
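The rank-correlation analysis behind the reported dose-dependent inhibition (Spearman r_s) can be sketched as follows. Only the 5-100 mM concentration range and the decreasing, non-linear trend come from the text; the relative-binding values are hypothetical.

```python
# Hedged sketch of the rank-correlation analysis: Spearman's r_s between
# imidazole concentration and residual Siglec-9-EC binding. Only the
# 5-100 mM concentration range and the decreasing, non-linear trend come
# from the text; the relative-binding values are HYPOTHETICAL.
from scipy.stats import spearmanr

imidazole_mM = [0, 5, 10, 25, 50, 100]
relative_binding = [100, 92, 88, 80, 70, 72]  # % of uninhibited binding

r_s, p_value = spearmanr(imidazole_mM, relative_binding)
print(f"Spearman r_s = {r_s:.2f}, p = {p_value:.3f}")
```

Spearman's coefficient only assumes a monotonic relationship, which is why it suits the non-linear dose-response described here better than a Pearson correlation would.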
Like semicarbazide, Imid1 (salmon) directly binds to the TPQ cofactor in the active site whereas Imid2 (green) forms a hydrogen bond with Y394 in the channel and blocks access to the active site. Similarly to Imid2, the Arg guanidinium of Siglec-9 could interact with the polar residues in the Imid2-binding site when semicarbazide (like Imid1) is covalently bound to the TPQ cofactor. The sugars of the N232-attached glycan (N2) on the surface of hAOC3 are shown as sticks. doi:10.1371/journal.pone.0166935.g008

Discussion

Previously, we have proven by in vitro, ex vivo and in vivo studies that Siglec-9 interacts with hAOC3 [6], but the interaction had not previously been characterized in detail. Our previous results with Siglec-9-derived peptides indicated that R284 and R290 in the C2 2 domain of Siglec-9 might have a role in the protein-protein interaction [6], but the role of the V domain was not studied at all. In this study, our specific aim was to clarify which domains in the extracellular part of Siglec-9 are important for the interaction and what their exact roles in the interaction are. Towards this goal, we have produced a soluble, biologically active, extracellular domain of Siglec-9, modeled its 3D structure, which consists of the V, C2 1 and C2 2 domains, and studied its interaction with hAOC3. Using both wild-type and mutant forms of recombinant Siglec-9-EC proteins, we have, for the first time to our knowledge, shown that both the C2 2 and V domains of Siglec-9 have a specific role in its interactions with hAOC3. Our results with the recombinant Siglec-9-EC proteins prove that Siglec-9 and hAOC3 are able to interact directly without any additional factors (Fig 5A and 5B), which was not clear from the previous cell-based assays [6]. In fact, the affinity of Siglec-9-EC for hAOC3 (K d of 1.04 ± 0.87 μM) is orders of magnitude higher than the values measured for the previously known Siglec-glycan interactions, which are in the range of 0.1-3 mM [1,45,46].
We therefore wondered if the higher affinity results from the unique protein-protein interactions that mediate Siglec-9 binding to hAOC3. To our surprise, and in contrast to previous results with the R284A and R290A mutants of the Siglec-9 peptides [6], the R284S and R290S mutations of Siglec-9-EC did not weaken but rather enhanced binding (Table 1). Our earlier model for the Siglec-9-hAOC3 interaction [6] proposed that either R284 or R290 directly binds to the TPQ cofactor, but the observed enhanced binding of the R284S and R290S mutants rules out this possibility since a serine cannot bind directly to TPQ. Therefore, the interaction between the recombinant Siglec-9-EC and hAOC3 cannot result from the specific binding of R284 or R290 to TPQ. We next tested if the strong binding of Siglec-9 to hAOC3 was due to the well-known sialic acid binding ability of the Siglec-9 V domain [16]. Because the addition of free sialic acids interfered with the Siglec-9-EC-hAOC3 interaction (Fig 6A) and the removal of the sugar-binding V domain or the sugar-binding residue R120 abolished the binding almost completely (Fig 6C), we can conclude that, especially under flow conditions, the binding of Siglec-9-EC to hAOC3 is mainly mediated via the V domain. R284 and R290 in C2 2 are clearly involved in binding Siglec-9 to hAOC3, but where do they bind? Siglec-9-EC cannot bind to TPQ, since Siglec-9-EC was neither a substrate nor an inhibitor of hAOC3 (Fig 7A and 7B), the TPQ-binding semicarbazide inhibitor did not block the Siglec-9-EC-hAOC3 interaction (Fig 8A), and the R284S mutation increased the binding affinity of Siglec-9-EC (Table 1). However, imidazole significantly impaired the interaction (Fig 8A) and, therefore, the binding site for Siglec-9-EC plausibly overlaps with the secondary imidazole-binding site (Imid2, Fig 8B) in the active site channel of hAOC3 [36]. This site has many polar residues (e.g. Y394 in Fig 8B), which may form hydrogen bonds with R284 and R290 in Siglec-9-EC.
Similarly, the hydroxyl groups of the serines in Siglec-9-EC/R284S and Siglec-9-EC/R290S are capable of forming hydrogen bonds with these polar residues. The fact that the side chain of a serine is smaller and fits better into the active site cavity of hAOC3 than the large, bulky arginine might explain the improved binding properties of the R284S and R290S mutants compared to the WT. It is intriguing that the reversible pyridazinone inhibitors of hAOC3 activity also bind to this unique binding site [47], which now seems to be a physiological binding site as well. Could Siglec-9 affect the activity of hAOC3 by binding to the active site channel? Siglec-9-EC is neither a substrate (Fig 7A) nor an inhibitor (Fig 7B). To our surprise, however, Siglec-9-EC increased the benzylamine activity of hAOC3 about two-fold in the cell-based assay (Fig 7B). Furthermore, the R120S mutant, almost incapable of binding to hAOC3 under flow conditions (Fig 6C), modulated the amine oxidase activity like the WT protein (Fig 7C), but the R120S/R284S and R120S/R290S double mutants had lost this capacity (Fig 7C). This result explains our previous data, according to which the removal of six N-glycosylation sites from the hAOC3 dimer could simultaneously reduce lymphocyte binding and increase enzymatic activity [14]. Consequently, in this study we have discovered a biological role for the binding of R284 and R290 to the active site of hAOC3: upon binding, they modulate the amine oxidase activity of hAOC3. The molecular dynamics simulations of the 3D model for Siglec-9-EC revealed its flexibility and support the idea of conformational changes in the 3D arrangement of V-C2 1 and C2 2 upon hAOC3 binding.
Furthermore, the recent analysis of small-angle X-ray scattering structures of Lens esculenta and Euphorbia characias amine oxidases [48] showed that the D3 domain of the copper amine oxidase fold makes a rigid-body movement and opens up the buried active site to make it more easily accessible for ligands. In the case of hAOC3, this is a fascinating scenario, since its interaction with Siglec-9 would be enhanced if a similar movement of D3 opens the hAOC3 structure. The structural rearrangements would also give an explanation for the mechanism of the Siglec-9-induced increase in the enzymatic activity of hAOC3. Due to the complex nature of the interactions and the involvement of the sialic acid end groups of the highly flexible hAOC3 glycans (12 N-glycosylation sites in Fig 8B), computational predictions of the 3D complex are challenging, and further experimental studies, e.g. to find out which N-glycan(s) in hAOC3 are important for the interaction, are in progress. Furthermore, we cannot rule out the possibility that the glycosylation of Siglec-9 might contribute to its physiological interaction with hAOC3. Since Siglec-9-EC produced in insect cells exhibits a nonphysiological glycosylation pattern, this could not be studied. The activity-modulating effect of Siglec-9 at the cell level is an important discovery. Salmi et al. [10] have earlier shown that incubation of hAOC3 with specific antibodies enhanced the enzymatic activity towards a hAOC3 substrate on lymphocytes but decreased firm adhesion and rolling. Additionally, incubation of the endothelial cells with inhibitors also decreased the rolling and firm adhesion [10]. More interestingly, inhibitors diminished the rolling, adhesion and transmigration of granulocytes [10,12], albeit the enzymatically inactive hAOC3-Y471F was still able to mediate the rolling of granulocytes [12].
It can be envisioned that Siglec-9, by enhancing the activity of hAOC3, modulates the hAOC3-mediated leukocyte trafficking and thus the outcome of the immune response at sites of inflammation. It has earlier been shown that Siglec-9 is an immunosuppressive molecule [3]. Previously, Siglec-9 was reported to induce both apoptotic and non-apoptotic cell death of neutrophils [49]. The Siglec-9-mediated non-apoptotic cell death was caspase-independent but dependent on reactive oxygen species. Moreover, it occurred under in vivo inflammatory conditions and was characterized by cytoplasmic vacuolization [49]. Coupled with our results, this suggests that hAOC3 could be the previously unknown ligand for Siglec-9 in non-apoptotic cell death, since the Siglec-9-mediated enhanced enzymatic activity of hAOC3 produces elevated levels of hydrogen peroxide and thus increases the reactive oxygen species at the sites of inflammation. This function of Siglecs is highly cell-type specific, since Siglec-7, unlike Siglec-9, induced the non-apoptotic cell death of the U937 cells [50]. Strikingly, the CE-loop of C2 2 in Siglec-7 was crucial for the Siglec-7-mediated non-apoptotic cell death [50]. The cell death activity induced by the extracellular part of Siglec-7 was significantly decreased when any of the key residues (W288, T289 and S292) in the CE-loop of Siglec-7 was replaced by the corresponding residue in Siglec-9 (L287, S288 and G291) [50]. Interestingly, L287 is exclusively found in Siglec-9 and replaced by a tryptophan in the C2 domains of the other CD-33-related Siglec sequences (Fig 1B). R284 and R290 of Siglec-9, which are located in the vicinity of these residues, are conserved only in the C2 3 domain of Siglec-10 and the C2 2 domain of Siglec-7 (Fig 1B), but their importance for Siglec-7 function is unknown.
Although further studies are needed to elucidate the biological implications of Siglec-9-hAOC3 interactions, it is tempting to speculate how they might mediate the different steps in the extravasation cascade: 1) The rolling of leukocytes could be mediated via the interactions between the V domain of Siglec-9 and the sialic acids on hAOC3; 2) The firm adhesion might also require contacts from the C2 2 domain of Siglec-9; and 3) The transmigration might be mediated by the enzymatic activity of hAOC3, which is enhanced by the interaction of Siglec-9 with the active site channel of hAOC3.

Conclusions

Our results prove that the Siglec-9-hAOC3 interaction is multivalent and much more complex than expected. We interpret our findings to mean that the interaction of Siglec-9-EC with hAOC3 is mediated both by protein-sugar interactions via the V domain and by protein-protein interactions via the C2 2 domain. This is the first time, to our knowledge, that both the C2 2 and V domains of Siglec-9 have been shown to be involved in its interactions with any ligand. We can also postulate that R284 and R290 in C2 2 interact with hAOC3 in a manner that increases the amine oxidase activity of hAOC3, which is known to be important for hAOC3-mediated leukocyte trafficking [12].

S2 Fig. (A) Determination of the binding constants for 1 μM Siglec-9-EC, Siglec-9-EC/R284S and Siglec-9-EC/R290S. Each curve was used separately to determine k on and k off , which were used for the K d determination. (B) The binding of 0.5 μM Siglec-9-EC, with and without different concentrations of sialic acid or disialyl lactotetraosylceramide (DSLc4), to immobilized hAOC3. For both, one out of two experiments is shown. In the first experiment, the curve for the binding of Siglec-9-EC at 0.5 μM displayed substantial noise at the end of the injection, most probably due to an air bubble. For this curve, the relative binding was determined before the noise.
(C) The binding of 0.5 μM Siglec-9-EC and the Siglec-9-EC mutants to immobilized hAOC3. One out of 2-3 experiments is shown. (D) The binding of 0.5 μM Siglec-9-EC, with and without the irreversible inhibitor (1 mM semicarbazide, SC) or the reversible inhibitor (50 mM imidazole), to immobilized hAOC3. The binding of Siglec-9-EC is similar before and after SC.
Conventions for unconventional language: Revisiting a framework for spoken language features in autism

Background and aims
Autism has long been characterized by a range of spoken language features, including, for instance: the tendency to repeat words and phrases, the use of invented words, and "pedantic" language. These observations have been the source of considerable disagreement in both the theoretical and applied realms. Despite persistent professional interest in these language features, there has been little consensus around terminology, definitions and developmental/clinical interpretation.

Main contribution
This review paper updates and expands an existing framework for unconventional language in autism to include a broader range of non-generative (echolalia and self-repetition) and generative (idiosyncratic phrases, neologisms and pedantic language) features often observed in the language of individuals on the autism spectrum. For each aspect of the framework, we review the various definitions and measurement approaches, and we provide a summary of individual and contextual correlates. We also propose some transitional language features that may bridge non-generative and generative domains (e.g., mitigated echolalia and gestalt language).

Conclusions
This updated framework offers a unified taxonomy and nomenclature that can facilitate further investigation and interpretation of unconventional language in autism.

Implications
There are important implications of this work for our understanding of the complex interplay between autism and language development. Equally important are the clinical ramifications that will guide evidence-based practice in assessment and intervention for individuals on the autism spectrum.

The earliest accounts of autism 1 include detailed descriptions of unconventional language use, including the tendency to repeat words and phrases, the use of invented words, unusual phrasing and "pedantic" language (Asperger, 1991; Kanner, 1943).
These patterns of language use have been confirmed across countless studies; they are often of central interest in educational and clinical settings (Arora, 2012;Gladfelter & Vanzuiden, 2020), and some of these features are now even included in the diagnostic criteria for ASD (APA, 2013). They have been the source of heated debates over clinical decision-making, with some professionals suggesting that these tendencies are "maladaptive" (Lovaas et al., 1973;Risley & Wolf, 1967;Schreibman & Carr, 1978) and should be diminished (e.g., Carr et al., 1975;Fisher et al., 2013;Handen et al., 1984;Lanovaz & Sladeczek, 2012;Neely et al., 2016) and others arguing that they are important developmental markers that should be harnessed for language learning (e.g., Blanc, 2012a;Peters, 1983;Prizant, 1983;Schuler, 1979;Stiegler, 2015). Despite decades of scholarly and applied attention to these language patterns, the autism community lacks consensus around terminology, definitions and developmental interpretation. One important step towards a common framework was proposed by Prizant and Rydell (1993), who suggested a taxonomy for these behaviors using the term "unconventional verbal behavior," defined as the following: "vocal production that is composed of recognizable speech, but violates to some degree, socially acceptable conventions of linguistic communication" (Prizant & Rydell, 1993, p. 263). Within this framework, they proposed four categories: immediate echolalia, delayed echolalia, perseverative speech, and incessant (repetitive) questioning. In the years since Prizant & Rydell's important work, their framework has not been widely adopted, despite sustained interest in these patterns of language use. Therefore, this paper aims to revive and expand the initial "unconventional verbal behavior" framework, reporting on what is currently known about these areas of language in autism and proposing some modifications. 
The paper is organized around three primary clusters of unconventional spoken language (see Figure 1). The first is "non-generative forms", including (as outlined by Prizant & Rydell, 1993) immediate echolalia, delayed echolalia and repetitive speech (including what Prizant & Rydell called "perseverative" speech and "incessant/repetitive questioning"). We term the second cluster "generative forms," and this includes idiosyncratic language (the use of idiosyncratic words/neologisms and idiosyncratic phrasing) and pedantic language. We add this cluster to Prizant and Rydell's (1993) original framework, as previous work on language patterns in ASD has made an important distinction between echolalia/repetitive language (i.e., non-generative forms) and language that involves novel morphological and/or syntactic combinations (i.e., generative forms) (e.g., Rydell & Mirenda, 1991; Tager-Flusberg et al., 2009), and which is sometimes referred to as "spontaneous". Finally, we recognize a third cluster of "transitional" forms (including mitigated echolalia and gestalt language), which combine features of the other two domains. It is important to state outright that the primary aim of our proposed framework is to classify the form of language behaviors. Nevertheless, for some aspects of the framework, namely the non-generative and transitional forms, we will secondarily consider their function. Aspects of communicative intention and function can vary when individuals use any form of unconventional language (see Schuler & Fletcher, 2002); thus, we do not equate either non-generative or generative types of unconventional language with terms like non-intentional, non-communicative, or non-functional, nor do the definitions of any of our language categories allude to the role of communicative intent.
While our categorizations of different behaviors do not assume any role of communicative intent, in some sections, we will review what previous work has outlined regarding the communicativeness of certain unconventional language signals. This discussion is especially salient for non-generative forms (i.e., immediate and delayed echolalia and self-repetitions), which have historically been regarded as non-communicative. There has been less discussion around the communicative function of generative forms of unconventional language, perhaps because the communicative functions of these language forms are more readily apparent. As part of this review, we include several firsthand accounts of unconventional language use from autistic self-advocates. These authors provide invaluable insights into the interactive, communicative and expressive functions of unconventional language, and they also explain how unconventional language might help scaffold and contribute to language development. It is also critical to acknowledge that none of these categories of language use are unique to autism. Accordingly, we do not claim here that unconventional language is specific to autism; however, a review of the developmental scope and sequence of these features in non-spectrum populations and/or other neurodevelopmental or acquired disorders is beyond the scope of this paper (for a developmental perspective see, for instance, Schuler & Fletcher, 2002). Instead, we aim to review the various definitions and measurement approaches for unconventional language that have been used in the field of autism research and synthesize findings about individual and contextual correlates of unconventional language. The overarching goal of this work is to provide a taxonomic framework, unify the previous reports, and provide suggestions for future directions. 
Non-generative

Over decades of autism research and clinical work, perhaps the most salient of unconventional language forms are those that are non-generative; that is, they do not involve the generation of novel morphological or syntactic forms but rather involve the rote repetition of a word or words. Included in this category are echolalia (immediate and delayed), both characterized by the repetition of language previously spoken by others, and self-repetition, which is characterized by the repetition of language previously spoken by oneself.

Echolalia

In his seminal work, Kanner (1943) observed that his patients often had the tendency to "echo" language spoken by others and that this mimicry retained the original words and the original prosody. He further noted that such repetitions occurred both "immediately" and in a "delayed" fashion. The dual characterization of echolalia as immediate and delayed has endured ever since, although some scholars have suggested further differentiation based on whether the delay is brief or "distant" (Sidtis & Wolf, 2015). In keeping with Kanner's foundational differentiation between immediate and delayed echolalia, the two will be considered separately below. It is important to note that, in many accounts, there has been a conflation of echolalia (both immediate and delayed) and another salient feature of language use in autism: pronoun reversal, wherein the individual reverses second- or third-person pronouns in place of first-person (e.g., "Want me to draw a spider" meaning "I want you to draw a spider" (Kanner, 1943, p. 241) or "Do you want a bath?" instead of "I want a bath" (Kanner, 1943, p. 219)). More recent accounts have suggested that pronoun reversal may not be entirely due to echolalia (see below for further discussion) (Hobson et al., 2010; Hobson & Meyer, 2005; Lee et al., 1994; Ricard et al., 1999).

Immediate echolalia. Definition and Examples.
Following on Kanner's case descriptions, several researchers provided formal definitions of immediate echolalia in the context of autism research. Fay (1969) described it as "the meaningless repetition of a word or word group just spoken by another person" (p. 39); another early example comes from Schuler (1979), who described echolalia as "the literal repetition of utterances of others immediately after their occurrence" (p. 412). More detailed definitions have also been proposed: "a response (that) must have occurred subsequent to the interlocutor's utterance, and it must have consisted of segmental or suprasegmental similarities to the previous speaker, involving…rigid echoing of the model utterance…occurring within two utterances of the original utterance" (Prizant & Duchan, 1981, p. 243; Rydell & Mirenda, 1994). Alternative, similar definitions have been proposed elsewhere, and some definitions have tried to differentiate autistic, "echolalic" repetitions of language from repetitions used as a "memory device" or those used "meaningfully" by children with low levels of expressive language (Lord et al., 2012). The following example (Arnold, 2021) shows how immediate echolalia appears in context, where a child repeats full or partial utterances produced by both his parent and an experimenter; each repetition is within two utterances of the original, model utterance.

Parent: Ooh what's in there?
Parent: Come sit and we'll see.
Child: What's in there?
Experimenter: Let's put the candles in.
Child: Let's put the candles.

Across this array of operationalizations, there is consensus around the characterization of immediate echolalia as being the repetition of a word or words spoken by others, immediately after hearing them; much less consensus exists around the function and communicative nature of echolalia (see below). Importantly, other terms for echolalia may be used in neighboring bodies of literature.
For instance, as discussed by Stiegler (2015), there is some overlap between immediate echolalia and "vocal stereotypy", although the latter is a broader term more common in the literature on the (controversial) abatement of echolalia (e.g., Neely et al., 2016) and which often also includes nonword vocalizations (e.g., Lanovaz & Sladeczek, 2012). In typical development, the tendency to mimic speech may be called "echoic" language (Charlop, 1983).

Measurement Approaches. A variety of measurement approaches have been developed to quantify immediate echolalia. Fay (1969) presented standardized verbal stimuli and coded the presence or absence of an echolalic response to each probe. In other work, videos and/or transcriptions from child/adult interactions have been reviewed by trained coders, who extracted the number of echolalic utterances during the session (e.g., Prizant & Duchan, 1981; Rydell & Mirenda, 1994) and, in still other studies, transcripts have been fed into automated software programs (that is, automatic algorithms) to extract measures of echolalia (Van Santen et al., 2013). Ordinal rating scales based on frequency of echolalia have been employed in "gold standard" diagnostic tools, relying on clinician observations (Lord et al., 2012) and parent report (Rutter et al., 2003), and qualitative approaches, including conversation analysis, have also been introduced (Dobbinson et al., 1998).

Contextual and Individual Correlates. Although immediate echolalia has been highly visible in the field of autism research and practice, and it has even been documented in children on the autism spectrum who use signed language (Shield, 2014; Shield et al., 2017), its prevalence in autism is difficult to quantify.
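The automated, transcript-based measurement approaches mentioned above can be illustrated with a minimal flagging rule built on the operational criterion reviewed earlier (a full or partial repetition of another speaker's utterance occurring within two utterances of the model). The transcript format and the word-overlap heuristic below are simplifying assumptions for illustration, not the coding scheme of any cited study.

```python
# Hedged sketch of an automatic flagging rule for immediate echolalia,
# based on the criterion reviewed above: a child utterance counts as a
# full or partial echo if its words appear, in order, in an utterance by
# ANOTHER speaker within the previous two utterances. The transcript
# format and word-overlap heuristic are simplifying assumptions, not the
# coding scheme of any cited study.

def _contains_in_order(model_words, echo_words):
    """True if echo_words occur (possibly with gaps) in order in model_words."""
    remaining = iter(model_words)
    return all(word in remaining for word in echo_words)

def is_immediate_echo(utterances, index, window=2):
    """utterances: list of (speaker, text) pairs; index: utterance to test."""
    speaker, text = utterances[index]
    echo_words = text.lower().rstrip(".?!").split()
    for prior_speaker, prior_text in utterances[max(0, index - window):index]:
        if prior_speaker == speaker:
            continue  # the model must come from another speaker
        model_words = prior_text.lower().rstrip(".?!").split()
        if echo_words and _contains_in_order(model_words, echo_words):
            return True
    return False

# The transcript example from the review (Arnold, 2021)
transcript = [
    ("Parent", "Ooh what's in there?"),
    ("Parent", "Come sit and we'll see."),
    ("Child", "What's in there?"),
    ("Experimenter", "Let's put the candles in."),
    ("Child", "Let's put the candles."),
]
for i, (speaker, text) in enumerate(transcript):
    if speaker == "Child" and is_immediate_echo(transcript, i):
        print(f"Immediate echo flagged: {text!r}")
```

Run on the transcript above, both child utterances are flagged: "What's in there?" as a partial echo of the parent's question, and "Let's put the candles." as a partial echo of the experimenter's utterance.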
Some research has suggested echolalia is more common in autism than in non-spectrum individuals: children with other language impairments (but who are not on the autism spectrum) also showed elevated levels of immediate echolalia (Cantwell et al., 1978; Leyfer et al., 2008), but they may show less use of echolalia relative to children on the spectrum (Van Santen et al., 2013). Interestingly, associations between levels of echolalia and individual characteristics are tenuous. Some studies found no associations between immediate echolalia and autism symptoms (Gladfelter & Vanzuiden, 2020; Van Santen et al., 2013), age (McEvoy et al., 1988) or nonverbal cognitive abilities (Gladfelter & Vanzuiden, 2020; McEvoy et al., 1988). Some researchers have also reported no associations with language abilities (Gladfelter & Vanzuiden, 2020; Van Santen et al., 2013), while others found that immediate echolalia was negatively associated with language skills, such that higher language skills were associated with lower levels of echolalia (Fay & Butler, 1968; Kang et al., 2020; McEvoy et al., 1988). Finally, evidence on the role of other skills, such as short-term verbal memory (Dobbinson et al., 1998), inhibition (Grossi et al., 2013) or "auditory monitoring" (Schuler, 1979), is also inconclusive. Nearly as long-standing as the accounts of immediate echolalia in autism is the debate over whether it serves valuable cognitive or communicative functions (e.g., Schuler, 1979; Stiegler, 2015). Early characterizations of immediate echolalia suggested that it was nonfunctional (e.g., Kanner, 1943); indeed, many clinically oriented studies in the years since have treated it as a problematic repetitive behavior requiring extinction (e.g., Carr et al., 1975; Fisher et al., 2013; Neely et al., 2016). Other approaches sought to clarify the contexts and purposes of immediate echolalia.
A seminal paper by Prizant and Duchan (1981) carefully coded the use of immediate echolalia during natural interactions and posited that it was a valuable communicative and cognitive tool serving seven distinct interactive functions, including turn-taking, yes-answer, requesting, declarative and self-regulatory. Many studies in the ensuing years have further explored the complex potential communicative and developmental functions of immediate echolalia (e.g., Local & Wootton, 1995; Pruccoli et al., 2021; Sterponi & Shankey, 2014; Stiegler, 2015). These studies reported that immediate echolalia was more frequently used in response to "high constraint" utterances (e.g., yes/no questions, directives) (Rydell & Mirenda, 1991; Violette & Swisher, 1992) and/or novel or challenging language input, that is, when understanding was low (Gladfelter & Vanzuiden, 2020; Schuler, 1979; Violette & Swisher, 1992). Even further, some found that immediate echolalia was an effective responsive strategy for mastering new language (Charlop, 1983; Leung & Wu, 1997) and developing social communication and play skills (Schuler, 2003). First-person perspectives from autistic adults describe several possible functions of immediate echolalia. Sinclair (2019) suggests that use of immediate echolalia may be interpreted in several different ways depending on the context, and it may or may not be intended as communicative. In Amythest Schaber's Ask an Autistic, episode #18 (2014), they explain the use of immediate echolalia for both communicative and non-communicative purposes. Schaber describes the communicative purposes of immediate echolalia by autistics as including buying time while processing what was just said, serving as a form of verbal expression, engagement, and interaction, or making needs and desires known. For instance, the immediate repetition of the question "would you like some more salad?" (while holding out a plate) can mean "yes, I would like some more salad."
Schaber (2014) also explained that immediate echolalia can be used for personal, noncommunicative purposes, including as self-soothing behavior, self-stimulation, and self-rehearsal of what they are preparing to say.

Delayed echolalia. Definition and Examples. Schuler and Fletcher (2002) described a child repeating a phrase verbatim (including the intonation and pausing of the original utterance) that he heard on television, "Barney was brought to you by the makers of Juicy Juice©, 100 percent real fruit juice, and by the J. Arthur Vining Foundation©, the Corporation for Public Broadcasting and by contributions to your PBS stations from viewers like you" (Schuler & Fletcher, 2002, p. 135). This utterance is an example of delayed echolalia, which, like immediate echolalia, involves the repetition of speech spoken by another person. However, unlike immediate echolalia, delayed echolalia entails that the echoed speech occurs after some interval of time has passed between the model utterance and the echo. Authors have been more or less specific about what length of separation between the model and the echo constitutes a delay. Kanner (1943) identified the presence of delayed echolalia in his original case series, and he defined the phenomenon as "word combinations… 'stored' by the child and uttered at a later date" (p. 243). Thus, he suggested that a delayed echo involves a period of time between the model utterance and the echo, but he did not offer a specific duration that qualifies an echoed utterance as delayed (vs. immediate). Similar definitions followed, including "echoing of a phrase after some delay or lapse of time" (Simon, 1975, p. 1440). In contrast, Rydell and Mirenda (1994) were more specific, in that they stipulated that delayed echolalia involved repeated speech occurring more than two speaking turns after the model utterance (Rydell & Mirenda, 1994).
Sidtis and Wolf (2015) proposed differentiating delayed echoes that were proximal to the original utterance from those that were "distant" (more than 5 turns), although this distinction has not been widely adopted. Prizant and Rydell (1984), in their systematic exploration of delayed echolalia, emphasized the valuable role of a familiar adult in distinguishing delayed echolalia from generative utterances and/or self-repetitions (see next section). In their account, delayed echolalia was identifiable based on two criteria (at least one of which had to be met): the utterance was (1) "beyond the child's level of grammatical complexity based on creative utterances" (level of grammatical complexity was characterized according to the five stages of language development outlined by Brown (1973)) and/or (2) "identified as memorized routines by the child's language clinician or teacher" (p. 185). The following excerpt, adapted from their text, describes how a child, Mary, uses an echoed phrase about having a splinter to convey her fear of a stranger. This example underscores the importance of the familiarity of the interlocutor, not only in identifying delayed echoes in the first place, but also in interpreting their function in a given context.

…[O]n one occasion, while working with her teacher, Mary observed an unfamiliar visitor to her classroom. After noticing the stranger, Mary turned toward the teacher and exclaimed in a distressed voice, "You got a splinter, got a splinter!" Mary's teacher responded, "Don't be afraid, that's Barry. He's come to spend some time with us today."… Mary's teacher later explained that ever since Mary had a painful splinter the year before, she repeats this phrase, which was said to her at the time, whenever she is upset or experiencing pain. … The phrase would … be challenging to a naive listener who was either unfamiliar with Mary's original experience or with her history of using the phrase.
However, as the example illustrates, Mary's speech production was not challenging to her teacher, who was familiar with the relationship between Mary's utterance and the communicative intent that it expressed. (p. 266)

In their first-person accounts, both Cynthia Kim and Emma Zurcher-Long described using delayed echolalia similarly to the way Mary does, in order to capture a specific emotional experience. Zurcher-Long (2016) explained that she uses sentences "from another time in [her] life" because they accurately convey her current emotional state. Kim (2013) described using the introductory line of a children's story book ("It's a bright sunny day") when she is feeling optimistic about the day ahead.

Measurement Approaches. Few studies have closely examined delayed echolalia, perhaps due to the difficulty in identifying it as distinct from immediate echolalia and self-repetition. Indeed, some approaches do not distinguish between these categories (Lord et al., 2012; Rutter et al., 2003). Other studies have relied on transcription and coding of dyadic, semi-structured and/or play-based interactions either by trained coders (Gladfelter & Vanzuiden, 2020; Prizant & Rydell, 1984; Rydell & Mirenda, 1994) or automated algorithms (Van Santen et al., 2013). Qualitative studies have explored the usage of delayed echolalia using conversation analysis, in order to capture both the communicative/interactive function of these echoes and their prosodic contours (Sterponi & Shankey, 2014; Tarplee & Barrow, 1999; Wootton, 1999).

Contextual and Individual Correlates. Delayed echolalia appears to be less common than immediate echolalia (Gladfelter & Vanzuiden, 2020; Rydell & Mirenda, 1994; Van Santen et al., 2013), but even so, it may be more frequent than other commonly studied features of language use by individuals on the spectrum. For instance, Szatmari et al.
(1995) found that, in children on the autism spectrum who had "functional" language, 50% used delayed echolalia, whereas only 26% showed pronoun reversal and 10.5% used neologisms. Nevertheless, there is a relative dearth of research on the correlates (either contextual or individual) of delayed echolalia. An early study by Cantwell et al. (1978) reported that children on the spectrum used delayed echolalia more often than a group of children with other language impairments (but not autism). A similar finding was reported by Leyfer et al. (2008), who found that only 2% of children with specific language impairment (SLI) were reported to use delayed echolalia or other repetitive speech. In contrast, Gladfelter and Vanzuiden (2020) failed to find associations between an omnibus measure of repetitive language (including immediate and delayed echolalia and other forms of repetitive spoken language) and participant characteristics in their sample of children on the spectrum; this is similar to previous findings (Van Santen et al., 2013). First-person accounts from authors on the spectrum exemplify how delayed echolalia can be used to recall a previous experience associated with a specific emotion (e.g., Zurcher-Long, 2016). Some adults on the spectrum report that the use of the echoed speech can be interpreted as conveying emotion (e.g., optimism, in the case of Kim, 2013). Others describe the use of delayed echolalia to connect to a previous emotional state as a method for conveying and regulating emotions (e.g., as a child, Sinclair (2019) repeated the question "What is one plus one?" to soothe himself in moments when he felt nervous or anxious because it reminded him of "the good times [he] had solving equations in school").

Self-Repetition. Definition and Examples.
In addition to repeating words and phrases initially produced by another person (i.e., echolalia), individuals on the autism spectrum have also been observed to repeat words, phrases, and questions initially produced by themselves. For instance, Kanner (1943) noted that his patients had a tendency to use an utterance and then "keep repeating [it] over and over again" (p. 221). This behavior has received less attention in the autism literature than echolalia has, although some research finds that self-repetitions are actually more common in the speech of individuals on the spectrum than echolalia is (Van Santen et al., 2013). Like echolalia, subsequent repetitions of the initial utterance are nongenerative, in that the speaker is repeating language by rote, rather than generating a novel utterance. Van Santen et al. (2013) provide the following excerpt, which shows how self-repetition can occur across conversational turns:

Child: This time he's not at the end of the big string he's floating
Experimenter: Okay that would be a better idea so we're going to change the trip
Child: At the end of the big string

We refer to this behavior as self-repetition (Van Santen et al., 2013), but it is important to note that this behavior has been categorized under a litany of labels through the years, including "palilalia", "verbal perseveration" (e.g., Abbeduto & Hagerman, 1997; Murphy & Abbeduto, 2007), "deviant repetitive language" (Sudhalter et al., 1990), "repetitive speech" (Handen et al., 1984), and "verbal stereotypy" (Gladfelter & Vanzuiden, 2020), among others. The term "stereotyped" language has also been used in standardized tools (e.g., Lord et al., 2012; Rutter et al., 2003) to refer to words or phrases used repeatedly (note, however, the definition of "stereotyped" utterances allows for delayed echolalia along with self-repetition).
It is important to note that many authors include "incessant" (Sudhalter et al., 1990) revisiting of the same topic (sometimes called "topic perseveration") within the category of self-repetition (Kang et al., 2020; Murphy & Abbeduto, 2007). Instead, we limit our discussion to the repetition of linguistic units (words, phrases, and sentences), as we interpret excessive focus on a particular topic (but using different linguistic forms) as a pragmatic (rather than purely linguistic) phenomenon.

Measurement Approaches. Standardized diagnostic tools have used general ordinal rating scales to capture "stereotyped" utterances (which overlap with delayed echolalia), relying on both parent report (Rutter et al., 2003) and direct clinician observation (Lord et al., 2012). Other approaches used a more fine-grained observational approach. For instance, Sudhalter et al. (1990) provided subcategories of self-repetition which classified subsequent repetitions of a previous utterance by the linguistic unit (e.g., word/phrase vs. sentence) that was repeated. Murphy and Abbeduto (2007) offered a similar conceptualization, but added an additional category, "conversational device repetition," encapsulating a speaker's repetition of comments or questions that are used to maintain a conversational exchange. See, for example, a participant's repeated use of the question "How about you?" at the end of several conversational turns ("Yeah. How about you?") in the excerpts adapted from their paper (2007). Sidtis and Wolf (2015) went beyond defining repetition by the larger linguistic unit that was repeated (i.e., word, phrase, sentence), by measuring repetitiveness according to the number of morphemes that were repeated from the initial utterance. Other authors have focused their analysis on whether the repetition occurs within or between turns (e.g., Van Santen et al., 2013).
Combined approaches using transcription and automated algorithms have been used to tally each instance of self-repetition (Van Santen et al., 2013), and qualitative approaches have applied conversational analysis to repetitive exchanges in order to document the interactional and/or communicative functions of such repetitions (e.g., Dobbinson et al., 2003).

Contextual and Individual Correlates. Various studies have examined correlations between the frequency of repetitive language use and individual characteristics, like age, IQ, and language ability. Findings from this research have been mixed, with some research finding that IQ and age were negatively correlated with repetitive language frequency (Bishop et al., 2006), while others showed the opposite relationship (Cervantes et al., 2014), and still others reported no relationship at all, including any relationship with language ability (Gladfelter & Vanzuiden, 2020) or even echolalia (Van Santen et al., 2013). Such inconsistencies are likely due to the considerable variation of behavior subtypes (topic perseveration, word repetition, conversation device repetition) that are considered under the umbrella of self-repetition. It is probable that certain repetitive language behaviors (e.g., repetitions used across turns to maintain conversational topic) are associated with stronger overall language skills than others (e.g., repetitions of single words within a conversational turn). Thus, differences in correlations between age, IQ, and repetitive language use may depend on the types of self-repetition examined. There are also studies examining self-repetition in individuals with Fragile X syndrome and/or children with both Fragile X and autism diagnoses. Some of this work reported higher levels of repetitive language among males with Fragile X (vs. females), independent of cognitive or linguistic skill (Murphy & Abbeduto, 2007).
Other work found that males with both Fragile X and autism diagnoses were more likely to repeat an utterance once compared to males on the spectrum without Fragile X; children in both groups were equally likely to repeat an utterance more than once and to perseverate on a topic (the authors considered perseveration a type of self-repetition; Friedman et al., 2018). As with the other forms of non-generative unconventional language, some scholars have explored the function of self-repetition, although it has garnered much less attention than echolalia. One study used conversation analysis to explore the function of self-repetitions produced by an adolescent on the spectrum; the authors reported that self-repetitions often served a social function (e.g., to gain or maintain the attention of her social partner), sometimes in conjunction with other non-verbal behaviors (like handing over an object). Other work has suggested that self-repetitions may serve to maintain preferred topics in conversation (Dobbinson et al., 1998). Further, even repetitions of the same utterance may be used to serve varying functions (e.g., turn-taking, confirming, part of a larger response); these functions can be differentiated by varied prosodic contours (i.e., rising or falling intonation) across repetitions of the same word/phrase/sentence (Dobbinson et al., 2003).

Generative. While most work on "unconventional" language behaviors in autism has focused on non-generative spoken language, since Kanner's original work (1943), there have been observations of unconventional language use that is generative, i.e., idiosyncratic productions that originate from the individual's own linguistic repertoire. For example, Kanner described a child using the word "Peten" as neologistic jargon for the nursery rhyme "Peter, Peter, pumpkin eater." He described another child who used the preposition "near" to describe paintings affixed to a wall, when speakers would conventionally use the preposition "on" instead.
In fact, the child corrected his father's use of "on," suggesting that the child was knowingly refusing to adopt the conventions of his native language and was convinced that his prepositional choice was more accurate (even though it was not conventional). These examples are striking because they lie in opposition to non-generative spoken language: rather than repeating speech by rote, the individual is uniquely combining phonemes, morphemes, and words together to create forms they have never heard before. In fact, Asperger (1991) described children in his case studies as having "a special creative attitude towards" language, emphasizing the fact that instances of idiosyncratic language implicate linguistic productivity. This is an important point, in that the generative unconventional forms further discussed below (i.e., idiosyncratic and pedantic language), by virtue of being generative, are expected to be associated with higher concurrent structural language skills than the non-generative forms described above. We separate the overarching category of unconventional generative language into two subtypes: idiosyncratic language and pedantic language. Idiosyncratic language involves the creation of novel words and phrases, and it can therefore be subcategorized based on what linguistic elements are being used to create a new linguistic form. While idiosyncratic words (a.k.a. neologisms) are the combination of phonemes and bound morphemes (or free morphemes in the case of compound words) to create a novel word (e.g., "Peten"), idiosyncratic phrasing is the production of a unique combination of words to produce a semantically unconventional phrase (e.g., "the paintings are hanging near the wall" to mean "the paintings are hanging on the wall"). Volden and Lord (1991) argued that both neologisms and idiosyncratic phrases are evidence of a similar linguistic phenomenon and are likely due to similar underlying factors.
Pedantic language, sometimes referred to as "overly formal speech," involves the combination of rare lexical items with formal phrasing, making the individual sound "bookish" (Ghaziuddin & Gerstein, 1996). Before the publication of the DSM-5 (APA, 2013), pedantic speech was often used as a diagnostic indicator of Asperger's syndrome (vs. autism) (Asperger, 1991; Eisenmajer et al., 1996; Ghaziuddin & Gerstein, 1996; Wing, 1981).

Idiosyncratic language. Idiosyncratic words/neologisms. Definitions and Examples. As mentioned above, idiosyncratic words, or "neologisms," have been noted in the speech of individuals on the autism spectrum since Kanner's original account (1943). Volden and Lord (1991) argued that neologisms involve a phonological or morphological variation of a conventional word, rather than a word fabricated de novo. The following excerpt of the speech of a woman on the spectrum, provided by Werth et al. (2001, p. 116) in their case description, helps elucidate how neologisms morphologically and phonologically relate to known words:

Later on we got to King's Cross, we vamperated the train, then we consailed the King's Cross underground [sings] "Going underground, going thunder-ground." Later on we were beckoned off, then we piled into the flying Victoria train and Mum and I gaggled as well as I showed Mum this advert of the Victoria Express.

The underlined words indicate unique forms that involve morphological ("consailed", "thunder-ground") or phonological ("gaggled") manipulations of known English words. These forms are striking not only because they evidence a productive command of phonology and morphology, but also because the words' meanings are (for the most part) interpretable from context. Werth et al. (2001) explained that this was not always the case for this speaker, who sometimes used neologisms that are uninterpretable (e.g., "Then I went to bed… then shavered the zed-zed-zeds" (p. 5)).
Similar cases, where a word's meaning is not interpretable from linguistic or extralinguistic context, are also noted by Eigsti et al. (2007) in an analysis of language produced by five-year-old children on the spectrum during play sessions (e.g., "the serpice is flying"). Werth et al. (2001) used the fact that some neologistic forms are uninterpretable as evidence that the speaker may not be sensitive to the listener's needs. More broadly, all uses of neologisms are interesting in that they violate the lexical principle of conventionality, which states that "words have conventional meanings" (Clark, 1983). For this reason, neologisms produced by very young non-spectrum children, called "invented words" (Locke, 1983) or "protowords" (Kent & Bauer, 1985) in the developmental literature, are interpreted as entailing that the child does not yet know or cannot yet articulate the conventional label (Laakso et al., 2010; Menn, 1978). In non-spectrum development, proto-words are replaced by conventional forms in the second year (e.g., Yousofi & Ashtarian, 2015). The protracted use of unconventional labels noted in individuals on the autism spectrum, well beyond the time when they can pronounce the conventional form, may indicate a prolonged adoption of the principle of conventionality. Alternatively, it may reflect struggles with lexical access, such that the individual uses a neologistic form in spontaneous speech merely because s/he cannot access a target word in the moment. This latter explanation has been used to account for the use of neologisms by other groups with language impairments, such as individuals with aphasia (e.g., Dell et al., 1997). Interestingly, neologisms have also been noted in children with SLI (Leyfer et al., 2008), who may have unique vulnerabilities in semantic networks (Haebig et al., 2015).

Measurement Approaches. The Autism Diagnostic Interview-Revised (Rutter et al., 2003) includes an item addressing the production of neologisms.
This item defines neologisms as "words that are obviously peculiar," and it has been used as a simple measurement strategy for some research quantifying neologisms (e.g., Leyfer et al., 2008; Szatmari et al., 1995). Few researchers have proposed a more detailed, systematic approach to measure the use of neologisms by individuals on the spectrum, but Volden and Lord (1991) compared the frequency of neologisms and idiosyncratic phrasing between the language samples of three groups of adolescents: a group on the autism spectrum, a group with cognitive impairments (but not on the spectrum), and a group not on the spectrum and with average IQ. Language use was recorded during the administration of a standardized observational diagnostic assessment (Autism Diagnostic Observation Schedule-Generic, or ADOS-G; Lord et al., 2000). In this study, neologisms were defined, simply, as non-words. The authors further explained that this category included a variety of subtypes, ranging from neologisms that were unrecognizable/unrecoverable, as they were neither phonologically nor morphologically related to any known word (e.g., "Kellogg's nahavaties"; Volden & Lord, 1991, p. 125), to neologisms involving morphological modifications of known words (e.g., "redundiate", p. 125). Using a similar approach, Eigsti et al. (2007) coded transcripts from play sessions for the use of neologisms (what they called "nonsense words/jargon"), defined as "intelligible but uninterpretable words or phrases. Any words or phrases that the transcriber was able to hear, but was not able to supply a gloss or meaning for, was included" (p. 1015).

Contextual and Individual Correlates. Little is known about the correlates of neologisms, perhaps due to their relative rarity (e.g., Szatmari et al., 1995). Research on Deaf children on the autism spectrum found that they produced neologistic signs, which were not evident in the language of non-spectrum Deaf peers (Shield, 2014).
Further, the use of neologisms separated the discourse of older children on the autism spectrum from language-matched neurotypical peers and peers with other types of developmental disabilities (Eigsti et al., 2007; Suh et al., 2014; Volden & Lord, 1991). Leyfer et al. (2008) reported that nearly 9% of their sample of children with SLI were reported to use neologisms either currently or in the past, indicating that neologisms are not exclusive to autism. One study suggested a negative association between the frequency of neologisms and nonverbal cognition and language abilities in autism (Eigsti et al., 2007), while other qualitative work suggested that the use of neologisms may signal a relative strength in humor or creativity (Werth et al., 2001).

Idiosyncratic phrases. Definitions and Examples. Not only is the language of individuals on the autism spectrum noted for containing unique word forms, but it also contains unique uses of known words. For example, a twelve-year-old child on the spectrum was observed using the adjective "sparkly" to describe the way that an alcohol swab felt when it was used on the skin of his arm (author's personal anecdote). In this case, the child uniquely extended an adjective that is typically used to describe a visual experience to something he was experiencing tactilely. Idiosyncratic language also goes beyond the unconventional use of single words (and/or the use of rare, less prototypical words); it also can involve unique combinations of words. Consider the following nonconsecutive utterances from a ten-year-old on the spectrum (Arnold, 2021):

Child: I just fake that up.
Child: [I see] pictures of frogs gliding warbly.

And in another example, Wing (1981) described a child using the phrase "temporary loss of knitting" to refer to a hole in a sock (p. 127).
This child's phrase was more descriptive (and, perhaps, more accurate) than the word "hole" would have been in a similar context, as it proposes an explanation for the hole's origin. In this way, like neologisms, idiosyncratic language may evidence linguistic creativity that allows a speaker to express concepts that are not readily capturable using conventional forms. From Asperger (1991, underline added for emphasis): "All young children [I have clinically observed] have a spontaneous way with words and can produce novel but particularly apt expressions" (p. 71). Asperger's impression that his patients all had striking linguistic gifts (which included the production of unique word combinations) led researchers that followed him to use idiosyncratic phrasing as a behavior that could distinguish autism from Asperger's syndrome (Eisenmajer et al., 1996). In fact, children on the autism spectrum may show an aptitude for acquiring rare word forms. For example, children on the spectrum have been found to provide more non-prototypical exemplars of a target category (e.g., "catamaran" as a member of the vehicle category) but fewer prototypical ones (e.g., "car") than either non-spectrum peers or peers with SLI (Dunn et al., 1996). The authors offered many interpretations for their findings, including the possibility that children on the spectrum have a firmer grasp of non-prototypical category members than their non-spectrum counterparts. However, like neologisms, unique uses of real words are (by definition) not conventional, and can therefore lead to misinterpretation; thus, idiosyncratic phrasing may sound poetic, but it may also hinder communication. Further, the underlying cause of such uses may actually be semantic weakness rather than language strengths.
That is, rather than these uses signaling a relatively strong grasp of language that allows the speaker to use words creatively (even poetically), such uses may instead evidence an atypical, underspecified, or even erroneous understanding of a word's meaning. In other words, individuals on the spectrum are using a word in a unique way, not because they "have a special creative attitude towards language", as Asperger purports (1991, p. 70), but because they do not know how to use the word appropriately. There is some evidence to support this latter explanation. Perkins et al. (2006) analyzed the way that adults on the autism spectrum use words in conversation and found anomalous uses of many word classes, but especially spatial and temporal terms, which is similar to Kanner's (1943) observations that many of his patients used prepositions atypically. The examples provided by Perkins et al. (2006) do not read as creative uses of spatial and temporal terms but instead as lexical confusion. For example, one participant described breakfast as "the first meal of the day prior to waking." The use of "prior to," rather than "after" suggests a basic mix-up of temporal terms. Relatedly, Hobson and Lee (2010) found that children with ASD inappropriately used the deictics "here" and "this" (rather than "there" and "that," respectively) to refer to objects that were far from them. While these authors interpreted these findings as reflecting difficulties with perspective-taking in ASD, another explanation is that deictics (like prepositions and pronouns) have complex meanings that shift depending on context. This proposal has been proffered to explain atypical use of pronouns in children on the spectrum (Zane et al., 2021). Thus, idiosyncratic word use may reflect differences with initial word learning and/or extension (Tovar et al., 2020), which may particularly affect polysemous words, including prepositions, deictics, and pronouns (Arunachalam & Luyster, 2018). 
In fact, overarching struggles with semantics (relative to other language components, like morphology and syntax) have long been noted in autism (see Boucher, 2012 for a review). Naigles and Tek (2017) provided a framework for capturing patterns in the language acquisition profile of children on the autism spectrum that emphasized language form (morphology/syntax) as a relative strength and language meaning (lexical semantics) as a relative weakness.

Measurement Approaches. Some diagnostic tools include a measure of idiosyncratic phrasing. The Autism Diagnostic Interview-Revised, for instance, includes an item asking caregivers to report on the use of idiosyncratic phrasing (Rutter et al., 2003). In this item, idiosyncratic phrasing is defined as "real words and/or phrases used or combined by the subject in a way that s/he could not have heard." The end of this item ("could not have heard") not only emphasizes the fact that idiosyncratic phrasing is unconventional, but also that it is generative; that is, these phrases must manifest from the speaker's own linguistic repertoire. The Autism Diagnostic Observation Schedule (2nd edition, or ADOS-2; Lord et al., 2012) similarly includes a rated item (based on clinical observation) capturing "stereotyped/idiosyncratic use of words or phrases." The overall rating on this item captures both "stereotyped" use of words or phrases (which could be echolalia or self-repetitions) and idiosyncratic use of words or phrases, which the item operationalizes as "idiosyncratic quality of the phrasing, unusual use of words or formation of utterances, and/or their arbitrary association with a particular meaning" (Lord et al., 2012). Others have used language samples from the ADOS-G/ADOS-2 but have taken a more general, binary approach to quantifying the presence or absence of idiosyncratic language (without differentiating between pedantic language, idiosyncratic phrasing, neologisms and delayed echolalia; Suh et al., 2014).
In contrast, Volden and Lord (1991) analyzed the use of idiosyncratic phrases by adolescents with ASD during the administration of the ADOS-G, where coders were trained to identify and tally each specific "use of conventional words or phrases in unusual ways to convey specific meanings" (p. 116) as instances of idiosyncratic phrasing. Contextual and Individual Correlates. In addition to finding that the use of neologisms distinguished the narratives told by children on the spectrum from non-spectrum peers -- as described above -- Suh et al. (2014) also reported a higher frequency of idiosyncratic phrasing in the narratives produced not only by children on the spectrum, but also by children with "optimal outcome" (children who were at one time diagnosed with autism, but who subsequently lost the diagnosis due to their no longer meeting diagnostic criteria for autism spectrum disorder). Earlier work used the production of idiosyncratic phrasing as a diagnostic indicator of Asperger's syndrome and even as a distinguishing feature of this syndrome (i.e., vs. autism) (Eisenmajer et al., 1996).

Pedantic language

Definitions and Examples. In the previous section, we described a word fluency study where children on the autism spectrum were reported as providing less prototypical (i.e., rarer) exemplars of word categories than either their typically developing peers or their peers with SLI (Dunn et al., 1996). A command of less frequent word forms has frequently been observed as a feature of the expressive language of individuals on the spectrum (including in written expression, see Hilvert et al., 2019), since Asperger (1991). 
In a first-person account, an autistic author writing under the name "Aoife" (2019) mused on her own predilection for including rare/formal words in both her writing and speech, and she discussed her enjoyment in using such words: "Why use a smaller word when there are so many glorious synonyms floating around in the back of my brain [?]" Such word choices can sometimes give the listener the impression that the speaker is being (overly) precise and specific (De Villiers et al., 2007). And when these lexical items are combined, especially in syntactic frames that are more commonly associated with formal language contexts, including writing, the speaker begins to sound "bookish" (Ghaziuddin & Gerstein, 1996), "curiously pedantic" (Burgoine & Wing, 1983) or "overly formal". Consider the following nonconsecutive examples from a 19-year-old on the spectrum, in which the pedantic speech was underlined in the original (Arnold, 2021): "Now I shall give you some entertainment." "I'm sure the topographic information isn't very accurate." Volden and Lord (1991) included pedantic language under the larger category of idiosyncratic language (including both neologisms and idiosyncratic phrasing), where pedantic language was defined as the "unusual combination of conventional and overly complex words and phrases" (p. 111, underline added for emphasis). This definition overlapped with their definition of idiosyncratic phrasing, in that both pedantic speech and idiosyncratic phrasing involved an "unusual combination" of known words and phrases. What distinguished pedantic speech from other types of idiosyncratic phrasing in their framework was the impression that word choices and/or phrase structure were "overly complex". Pedantic language is arguably more than just a combination of rare words in complex/formal sentence structures. De Villiers et al. 
(2007) described pedantic language as involving the inclusion of factual, accurate, specific, and/or technical information that was too detailed for a particular context. Similarly, Ghaziuddin and Gerstein (1996) based their definition of pedantic speech on the dictionary definition of the word "pedant", arguing that -- when these qualities are translated to speech -- pedantic language involved more information than was necessary for a given discourse context, along with vocabulary and sentence structure that was typical of written language. Note that both De Villiers et al. (2007) and Ghaziuddin and Gerstein (1996) argued that pedantic language involved expressing details that were unnecessary for a particular context. Correspondingly, Asperger described several of his patients providing an extraordinary amount of detail when asked to explain the similarities and differences between two entities (for instance, between a fly and a butterfly) as part of an intelligence test. Asperger (1991) described one child's descriptions as "threaten[ing] to go on forever" (p. 53), and he argued that this child's descriptions included details that were unnecessary in the context of the exam. The importance of context in these descriptions underscores the fact that pedantic language may be best categorized as part of the pragmatic differences observed in autism, rather than a consequence of underlying language difference. However, the fact that it does include the use of infrequent words (words that are not conventionally used by other speakers) and perhaps the use of complex sentence structures that would not be produced by other speakers (i.e., only used in writing) motivates our inclusion of pedantic language as part of the unconventional spoken language framework. In addition to pedantic language falling somewhere on the interface between language and pragmatics, it may also fall somewhere between generative and non-generative language. 
While Asperger interpreted pedantic linguistic expression as absolutely generative and creative, other authors have argued that pedantic language reflects phrasing memorized from other sources that the individual has previously read (rather than heard; Wing, 1981). Measurement approaches. As described above, Volden and Lord (1991) conceptualized pedantic language as part of the larger idiosyncratic language category. As such, it was not coded separately, but was instead subsumed under this larger category. A similar global approach is taken in standardized diagnostic measures, which fold pedantic language into more general items addressing unusual language use (Lord et al., 2012; Rutter et al., 2003). Ghaziuddin and Gerstein (1996) offered a more nuanced coding scheme of pedantic speech, by operationalizing a rating scale that quantified how much the semantic, syntactic, and pragmatic nature of adolescents' speech evidenced qualities that accorded with the dictionary definition of a "pedant". When translated to speaking qualities, this involved speech that provided more information than was required in a given conversation (pragmatics), used sentence structures that were typically reserved for formal contexts (syntax/pragmatics), and included vocabulary that was less frequent and/or more typical of written language (semantics/pragmatics). A similar definition and ordinal rating scale were employed in later work (De Villiers et al., 2007). Another simple measure is to quantify the frequency of vocabulary, where higher rates of infrequent vocabulary words, along with lower rates of frequent vocabulary words, correspond with a more pedantic quality of language; this approach was used by Hilvert et al. (2019) in an examination of the essays written by children on and off the spectrum. 
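This frequency-based measure lends itself to a simple computational sketch. The function below scores a transcript as the proportion of word tokens that fall below a rarity threshold in a reference frequency list; the toy frequency table, the threshold value, and the function name are illustrative assumptions, not the procedure Hilvert et al. (2019) actually implemented.

```python
# Sketch: score the "pedantic" quality of a transcript as the proportion of
# low-frequency (rare) word tokens. The frequency table and rarity threshold
# below are hypothetical stand-ins for a real corpus-derived frequency list.

def rare_word_proportion(transcript, freq_per_million, rare_threshold=10.0):
    """Return the share of word tokens whose corpus frequency (per million
    words) falls below rare_threshold; unknown words are counted as rare."""
    tokens = [w.strip(".,!?;:\"'").lower() for w in transcript.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    rare = sum(1 for t in tokens if freq_per_million.get(t, 0.0) < rare_threshold)
    return rare / len(tokens)

# Toy frequency table (occurrences per million words) -- illustrative only.
FREQ = {"i": 20000.0, "am": 5000.0, "going": 3000.0, "to": 25000.0,
        "walk": 300.0, "mosey": 0.2, "locomote": 0.05}

print(rare_word_proportion("I am going to mosey", FREQ))  # 0.2 (1 of 5 tokens rare)
print(rare_word_proportion("I am going to walk", FREQ))   # 0.0
```

On this toy scale, "I'm going to mosey" scores as more pedantic than "I'm going to walk" simply because "mosey" is rarer; a real analysis would use a large corpus-derived frequency list rather than a hand-built table.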
Finally, languages which take diglossic forms -- that is, which have a colloquial, casual form and a more formal version -- have afforded new insights by documenting the use of the "high," formal dialect (when not contextually required) in children on the spectrum (Francis et al., 2019). Contextual and individual correlates. Recent work has suggested that use of formal dialects in informal contexts may also be a diagnostic indicator of autism, generally (Francis et al., 2019). And as mentioned in the introduction to our section on generative unconventional language in autism, a pedantic quality of speech was often used to distinguish Asperger's syndrome from autism, where individuals diagnosed with Asperger's were described as using pedantic language, while individuals diagnosed with autism were not (Eisenmajer et al., 1996; Ghaziuddin & Gerstein, 1996). Despite the implication from this earlier work that pedantic speech was associated with more skillful language and cognitive skills (i.e., associated with diagnosis of Asperger's rather than autism), other work has failed to find an association between pedantic language and nonverbal cognition or language abilities (De Villiers et al., 2007). Some autistic writers have described their tendency to use pedantic speech (specifically, less common vocabulary items) as stemming from a simple enjoyment of words. Aoife, mentioned above, wrote that she "always [has] been fond of big words," and, as we describe earlier, wondered why anyone would choose a more common, shorter word when there is a rarer, longer alternative (2019). Similarly, the Aspiring Aspergian (2015) commented, "When walking away from a group I will often say that I'm going to 'mosey', 'meander', or 'locomote', instead of simply excusing myself. I love choosing odd, silly-sounding, archaic, or complex words and phrases to describe things." 
Another possibility is that autistic speakers use rarer, more pedantic-seeming words not only because they enjoy them, but also because they are more accurate/precise. At the Consortium on Autism and Sign Language in Cambridge, Massachusetts (2015, https://www.amacad.org/news/consortium-autism-and-sign-language), several presenters discussed this possibility, by introducing the "Precision Hypothesis" -- an account of language use in ASD where speakers prioritize accuracy and specificity above other aspects of communication (e.g., efficiency). This hypothesis suggests that autistic speakers use rarer words and are simply more verbose because they aim to convey exact information. Such an account accords well with many of the definitions/observations of pedantic language we have listed above, where speakers are described as being precise, specific, and including a surprising amount of detail (Asperger, 1991; De Villiers et al., 2007; Ghaziuddin & Gerstein, 1996). In fact, this hypothesis can also account for other types of generative unconventional language, like neologisms and idiosyncratic phrasing, which -- as we discuss in those sections -- may represent a specific sense that is not captured via conventional words/phrases.

What lies in between?

While we have thus far treated unconventional language behaviors as falling into one of two binary categories -- generative versus non-generative -- this is not to suggest that all such behaviors can straightforwardly fit into only one category, as we have already discussed. In fact, there is an important subset of unconventional language behaviors that we have not yet addressed, which are best categorized as bridging the divide between non-generative and generative. These include mitigated echolalia and formulaic/gestalt language, which both involve the manipulation of repeated and/or stored linguistic units, respectively (Fay, 1967; Prizant & Duchan, 1981; Schuler & Fletcher, 2002; Wray & Perkins, 2000). 
In such cases, the speaker generates a novel utterance, when looking at the utterance as a whole and comparing it to previous utterances; however, when the utterance is analyzed, it contains formulaic pieces/chunks of language and does not clearly evidence that the speaker has fully decomposed these pieces into constituent parts. For example, Dobbinson et al. (2003) described an adult on the autism spectrum discussing a favorite topic (the Pershing missile and the origin of its name); each time he introduced the topic, he began the sentence with "That Pershing missile," modified that noun phrase with a relative clause, and ended the sentence with the adverb "now" (p. 304). This individual was therefore reusing a specific syntactic frame (along with some lexical formulaicity) each time he created such sentences, but the sentences themselves were unique as a whole. In their framework, Dobbinson et al. (2003) described how this type of formulaicity can apply to both prosody and lexical items, in addition to syntactic frames. Similarly, the following excerpt, adapted from Sterponi and Shankey (2014, p. 285), shows a child, Aaron, echoing his mother during bath time: [transcript not reproduced]. Thus, even though the utterance in line 2 exactly repeats the mother's words at the end of line 1, we still include this echo as an example of mitigated echolalia (vs. "pure" echolalia), because the prosodic contours of the echoed speech deviate from the model (Schuler & Fletcher, 2002). Sterponi and Shankey (2014) further explained how Aaron used this modification communicatively: in prolonging the word "minute," he conveyed his desire to prolong the duration of the remaining time in the bath. Not only does this excerpt generally emphasize the communicative potency of mitigated echoes, but some utterances specifically show how mitigated echoes exhibit an underlying understanding of grammar. For example, in line 6, when Aaron produced the question "Is it minute time?", Aaron demonstrated quite a bit of linguistic knowledge. 
First, he decomposed the model contraction "it's" into "it + is," and then he correctly transposed them to form a yes/no question. Then, he used the word "minute" as a modifier for "time," and by correctly positioning the modifier before "time," he further modified the original utterance, in that he inserted a new word in the middle of the sentence. The previous analysis of Aaron's language shows how mitigated echolalia is not straightforwardly non-generative. However, this utterance cannot be considered truly generative, either, in that all the words he used were present in at least one of the preceding utterances. Therefore, in our framework, we position mitigated echolalia, along with other types of linguistic formulaicity, like gestalt language, as "transitional," in between non-generative and generative forms of unconventional language behaviors. We use this positioning to argue that they may simultaneously capture aspects of both generative and non-generative forms. Not only are such forms conceptually transitional between non-generative and generative, but they may in fact be developmentally transitional, in that certain theories of grammatical development suggest that the use of formulas helps children transition from using completely non-generative, stored utterances to composing novel ones. For example, in early accounts of child language acquisition from both Peters (1983) and Locke (1993, 1995), as well as in later work from Wray and Perkins (2000), typically developing children start out storing and using gestalt forms until about 20-30 months old. At that time, they have usually compiled a sufficiently large number of stored units so that an in-born grammatical system is triggered, which can decompose stored utterances into constituent parts. This process allows children to begin generating novel utterances that depend on an underlying grammatical system, rather than repeating formulaic chunks of language, as they had done previously. 
Importantly, these forms of formulaic, gestalt language are both meaningful and intentional, in contrast to other forms of "automaticity"; for instance, Peters (1983) notes that individuals with brain damage may use chunks of language but that these utterances are neither appropriate nor creative. Thus, the "gestalt" perspectives position formulaicity "at the heart of grammar" (Dobbinson et al., 2003, p. 305) for typical development, in that grammar is initially constructed via analyzing stored, gestalt forms. Such an account has several implications for linguistic formulaicity in autism. Most fundamentally, it suggests that when individuals on the spectrum rely on linguistic formulae to produce utterances, they are taking advantage of a normative operation, rather than doing something deviant or disordered (Dobbinson et al., 2003). Further, since the use of gestalt language helps typically developing children to transition from non-generative to generative expressive language, the use of gestalt forms in autism may be developmentally transitional as well. In fact, children on the autism spectrum may be more likely than neurotypical peers to build generative language from formulaic units, using a "gestalt learning style" rather than a hierarchical learning style, to scaffold language learning (Prizant, 1982; Zenko & Hite, 2014). Use of gestalt language and mitigated echolalia facilitates the transition into "emerging grammar" and later productive language use (Schuler & Fletcher, 2002, p. 133). In fact, scholars in psycholinguistics (e.g., Peters, 1983) and speech language pathology, including Prizant (1983) and more recently Blanc (2012a), proposed that language acquisition in autism involves several stages, where mitigated echolalia and gestalt language serve as transitional stages between echolalia and generative language. 
In these proposals, echolalia is seen as foundational, in that it provides the units from which mitigated echolalia (and eventual productive speech) will be extracted. There is some support for this in first-hand accounts from autistic individuals. Kim (2013) described her method for acquiring French by moving from unmitigated to mitigated echolalia, and then by using an analysis of these echoes to formulate a grammar from which she could produce completely novel utterances. Similarly, in a 2005 interview, Temple Grandin explained her ability to speak productively as an adult (as compared to her being predominantly echolalic as a child) in this way: "… As I get more and more phrases on the hard drive, I can recombine them in different ways, and then it's less tape-recorder like…." The idea is that echolalia, as embedded in a gestalt learning style, can promote language learning and propel a language learner toward spontaneous generative language use. Many authors have made a strong argument for the transitional properties of mitigated echolalia, but very little work has addressed this empirically. One study by Fay and Butler (1968) reported that children who used mitigated echoes at age three years had better language outcomes at age four years than children who were predominantly using "pure echoes" -- echoes that were equivalent in form to the model -- at three years old. Thus, their findings did provide empirical support for the idea that mitigated echoes (and, perhaps, other types of gestalt language) fit developmentally between non-generative and generative forms. However, because these authors did not study children who were explicitly diagnosed with ASD but rather generally "echolalic children" (and because of significant epidemiological shifts in ASD over these past several decades; e.g., Rice et al., 2012), we cannot be sure that these findings would extend to the spectrum as we now understand it. 
Fay (1967) specifically encouraged future research to measure the presence of mitigated echolalia longitudinally in children to help uncover its role in the development of spontaneous language use. However, we do not know of other research -- cross-sectional or longitudinal -- that has attempted to replicate these findings. The fact that there is such limited research analyzing how the use of mitigated echolalia and/or the use of linguistic formulas contributes to the development of spontaneous language later may be due, in part, to measurement challenges and the vast array of terminology used (e.g., Wray & Perkins, 2000). There is a lack of language assessments (including caregiver questionnaires and checklists) that measure mitigated echolalia separately from other types of echolalia and/or ones that measure gestalt language use. Thus, researchers who are interested in exploring this topic must sample children's language directly, categorize it, and then measure longitudinal effects. In our following section, we discuss how improving the clarity and consistency of definitions may help improve the breadth of assessment tools, among other practical implications.

Conclusions

Our aim here has been to revitalize and expand on the seminal work of Prizant and Rydell (1993); our hope is that a clear taxonomy and common operationalizations will facilitate effective study and discourse about the diverse forms of language that are deemed "unconventional." We believe that there are important implications of this work for both theoretical and clinical endeavors. In advancing our understanding of the intersection of language development and autism, it is essential to acknowledge that unconventional language is not unique to autism, and as such, we recommend that the study of unconventional language should be transdiagnostic. 
The question of whether and how various forms of unconventional language differentiate autism from other non-spectrum populations (including neurodevelopmental disorders) is worth careful examination. When exploring that topic, it will be important to consider other individual characteristics, including developmental and language level. Relatedly, future work may explore whether unconventional language seems to correlate with autism features and/or structural language skills, or -- alternatively -- whether it is a "third axis" that is orthogonal to these other individual characteristics. A rich characterization of the heterogeneity in conventional and unconventional language, as well as other corollary areas like nonverbal cognition, is essential to capture the wide variability of profiles seen across individuals on the autism spectrum; it will require good measurement tools, large samples and advanced statistical modeling techniques. Approaches that identify latent, multidimensional communication profiles may be particularly useful (e.g., Zheng et al., 2021). One important application of this work is to develop a richer understanding of the "norms" for unconventional language. In the case of non-generative unconventional language, for instance, it is worth noting that echoed and repeated utterances may be excluded from standardized assessments of language and language-sample analysis (e.g., Tager-Flusberg et al., 2009), because those measures prioritize generative forms of spoken language. As a result, we do not have normative developmental data about the relative frequency of non-generative forms in non-spectrum children. And even for children on the autism spectrum, we do not yet have adequate tools to help capture the frequency and types of unconventional language used over the course of development. Instead, we often rely on tools like the ADI-R and ADOS, which provide a rough, general measure of current or past unconventional language use. 
A more fine-grained approach to measuring unconventional language (by type and by token), applied across development, would be particularly informative for non-speaking individuals on the autism spectrum, for whom there is already an acknowledged dearth of and need for language metrics (Kasari et al., 2013). It is likely that a relatively high proportion of non-speaking individuals' language output is unconventional, and as such, we stress the importance of including unconventional language as part of the assessment of their language. Finally, we hope this work will pave the way for future researchers to ask a range of important applied questions, including whether the early use of generative and/or non-generative types of unconventional language may differentially predict long-term language outcomes. We consider these questions to be of utmost importance to evaluate rigorously, because -- based on arguably little study -- there is wide variation in existing clinical approaches, with some providers espousing the importance of these features for bootstrapping language development (e.g., Stiegler, 2015) and others proposing their extinguishment (e.g., Neely et al., 2016). In a time of emerging emphasis on evidence-based practice, it is essential to provide empirical findings to guide clinical decision making. There are many open questions to examine: (a) What are the developmental/communicative contexts in which the varied forms of unconventional language occur? (b) How frequently is unconventional language used (across development) and/or how variable is use of unconventional language between individuals with varying types of neurodevelopmental disorders and varying skill levels? (c) For a given individual, what proportion of their output is unconventional (whether spoken, manual, or via a speech-generating device) and does the proportion of use have implications for later language production? 
(d) Do certain types of early unconventional language predict better long-term language outcomes than others? In considering these questions, it will be important to bridge the literature on the heterogeneity of timing and trajectory of "conventional" language development with the emerging literature on patterns of change in unconventional language in autism. Longitudinal studies suggest high levels of change in conventional spoken language before age 6, followed by relatively high stability of language development after age 6 (e.g., Pickles et al., 2014); on the other hand, retrospective studies have indicated relatively low stability of unconventional language in middle childhood and adolescence (Kang et al., 2020). It may be that these two areas of language -- conventional and unconventional -- have unique patterns of change and/or intersect in important ways over development, and that a complementary consideration of both would enrich our understanding of how spoken language emerges and shifts over time in autism. Finally, we hope to support the creation of useful clinical tools and perspectives. For instance, clinicians may find it informative to review a child's use of unconventional language in order to enrich their understanding of that child's semantic and/or syntactic development (e.g., repetition of an utterance with some modification -- omissions, additions, expansions, or changes in intonation -- may indicate a developing grammatical system) (Schuler, 1979). Pruccoli et al. (2021, p. 6) also highlight the importance of studying the accompanying "behavioral and paralinguistic features" of unconventional language in the interest of identifying its communicative value; they note that "the suppression of a purposeful behavior would deprive ASD-affected individuals of a potentially useful interactive tool." The adverse first-person experience of this is noted by autistic blogger J. 
Sinclair (2019), who points to "reports of autistic people with echolalia becoming mute after a life of being ignored and misunderstood. This just gives you more reason to listen to the autists with this symptom. Remember, echolalia isn't nonsense, it's us trying to run before we can walk." Clinicians may also continue to expand their understanding of unconventional language as part of language development through self-study and continuing education (e.g., Blanc, 2021b, 2021c). There are some limitations to this review. We have focused our attention on spoken language; it will be important for future explorations to consider whether and how these features manifest in other forms of expressive language, including written language or language that is produced using an augmentative and alternative communication (AAC) device. Moreover, we have prioritized language form, whereas there is much more to be considered in terms of language function and communicative intent (i.e., non-intentional, pre-intentional, intentional). It will be important for researchers and practitioners alike to consider whether an assessment of unconventional language needs to be accompanied by careful attention to communicative intent (e.g., Schuler & Fletcher, 2002), a question which is beyond the scope of the present work. In closing, we hope that by presenting a common framework and set of operational definitions, we can support future efforts to cohere and advance this important body of research exploring unconventional language in populations on and off the autism spectrum. With regards to the nature of language in autism, specifically, there are still important fundamental questions to be asked and answered, including how best to support the spoken language skills of individuals on the autism spectrum across the lifespan. This proposed taxonomy offers a method by which these questions can be framed.
Evaluation of pancreatic cancer cell migration with multiple parameters in vitro by using an optical real-time cell mobility assay device

Background

Migration of cancer cells correlates with distant metastasis and local invasion, which are good targets for cancer treatment. An optically accessible device, "TAXIScan," was developed, which provides considerably more information regarding cellular dynamics and requires a smaller quantity of samples than do the existing methods. Here, we report the establishment of a system to analyze the nature of pancreatic cancer cells using TAXIScan, and we evaluated lysophosphatidic acid (LPA)-elicited pancreatic cell migration.

Methods

Pancreatic cancer cell lines BxPC3, PANC-1, AsPC1, and MIAPaCa-2 were analyzed for adhesion as well as migration towards LPA by TAXIScan, using parameters such as velocity and directionality, or for the number of migrated cells by the Boyden chamber method. To confirm that the migration was initiated by LPA, the expression of LPA receptors and activation of intracellular signal transduction were examined by quantitative reverse transcriptase polymerase chain reaction and western blotting.

Results

Scaffold coating was necessary for the adhesion of pancreatic cancer cells, and collagen I and Matrigel were found to be good scaffolds. BxPC3 and PANC-1 cells clearly migrated towards the concentration gradient formed by injecting 1 μL LPA, which was abrogated by pre-treatment with the LPA inhibitor Ki16425 (IC50 for the directionality ≈ 1.86 μM). The LPA-dependent migration was further confirmed by mRNA and protein expression of LPA receptors as well as phosphorylation of signaling molecules. LPA1 mRNA was highest among the 6 receptors, and LPA1, LPA2 and LPA3 proteins were detected in BxPC3 and PANC-1 cells. Phosphorylation of Akt (Thr308 and Ser473) and p42/44 MAPK in BxPC3 and PANC-1 cells was observed after LPA stimulation, which was clearly inhibited by pre-treatment with the compound Ki16425. 
Conclusions

We established a novel pancreatic cancer cell migration assay system using TAXIScan. This assay device simultaneously provides multiple types of information on migrating cells, such as their morphology, directionality, and velocity, with a small volume of sample, and can be a powerful tool for analyzing the nature of cancer cells and for identifying new factors that affect cell functions.

Electronic supplementary material: The online version of this article (doi:10.1186/s12885-017-3218-4) contains supplementary material, which is available to authorized users.

Background

Migration of cancer cells correlates with distant metastasis and local invasion. This phenomenon involves various molecules including chemoattractants, trophic growth factors and their receptors, adhesion molecules, intracellular signaling molecules, motor proteins, and the cytoskeleton [1]. These molecules are orchestrated to help cells migrate to specific parts of the body or even spontaneously without an apparent destination. As cancer metastasis is directly associated with prognosis, controlling cancer cell migration is an effective strategy for treating the disease. Pancreatic cancer is among those with the poorest prognosis [2]. The treatment for this type of cancer is currently restricted, as there are few effective drugs and knowledge regarding the nature of this cancer type is insufficient. New insights regarding this cancer and novel approaches for its treatment have long been awaited. Lysophosphatidic acid (LPA) is a highly bioactive lipid mediator and is known to be involved in cancer cell migration, proliferation, and production of angiogenic factors [3]. In the process of cell migration, LPA works as a potent chemoattractant for various kinds of cells. Six receptors of LPA (LPA1, LPA2, LPA3, LPA4, LPA5, and LPA6) are known and all of them are G-protein coupled [4-9]. 
Some cells express one of these receptors, while others express multiple receptors for LPA [10]. Several articles have reported that pancreatic cancer cell lines express LPA receptors and the cells migrate towards LPA, using Boyden chamber and/or Transwell culture methods, which involve counting the number of migrated cells [11][12][13]. TAXIScan is an assay device for studying cell dynamics in vitro and has been used in the analysis of both suspension (mostly hematopoietic) and adherent cells [14][15][16][17][18][19][20][21][22]. The device functions as an optically accessible system and provides two-dimensional images of cell migration. TAXIScan provides markedly more information, including morphology as well as quantitative analysis, compared to existing methods such as the Boyden chamber method. This device consists of an etched silicon substrate and a flat glass plate, which together form horizontal channels, each with a micrometer-order depth, and 2 compartments on either side of each channel. Cells are placed and aligned on one side, while a stimulating factor is injected into the other side (typically 1 μL each of the cells and the stimulant). The cells react to the stable concentration gradient of the stimulant inside the horizontal channel [14]. The cell images are observed thereafter and filmed with a charge-coupled device camera located beneath the glass. By analyzing the cell images, many parameters can be determined including velocity, directionality, etc. [23][24][25][26]. The objective of this study is to establish TAXIScan as a system for pancreatic cancer research by using pancreatic cancer cell lines and to evaluate cancer cell migration in vitro for understanding the characteristics of this cancer cell type and for identifying new drugs to regulate cancer cell migration. Here, we show the adherence of cells to the scaffolds as well as LPA-elicited migration by TAXIScan, and by an existing method, the modified Boyden chamber method (Transwell). 
The LPA-elicited migration was confirmed by checking the expression of LPA receptors and the effect of the LPA inhibitor Ki16425. Maintenance of cells Human pancreatic cancer cell lines BxPC3 (ATCC CRL-1687), PANC-1 (ATCC CRL-1469), and AsPC1 (ATCC CRL-1682) were obtained from the American Type Culture Collection (ATCC), and MIAPaCa-2 (RCB2094) and KATOIII (RCB2088) from Riken Cell Bank. PC3 and 211H were kindly provided by Dr. Masakiyo Sakaguchi. Cells were cultured and maintained in RPMI1640 with 10% FBS or in D-MEM with 10% FBS on 10-cm diameter dishes as the standard procedure. Passaging of the cells was performed using PBS and Trypsin/EDTA solution when they were 80-90% confluent. All samples were handled according to the Declaration of Helsinki. Migration assay The real-time cell mobility assay was performed with the optical real-time cell mobility assay device "EZ-TAXIScan" (ECI, Inc., Kawasaki, Japan) as described previously [20], except for assembling the TAXIScan holder together with a coverslip pre-coated with the extracellular matrix. Briefly, coverslips were coated with collagen I (100 μg/mL), Matrigel (1/30 diluted solution with culture medium), fibronectin (100 μg/mL), laminin (100 μg/mL), or the culture medium, by incubating 100 μL of each solution on a coverslip at room temperature for 1 h before assembling the TAXIScan holder. After collagen I was selected as the scaffold, collagen I pre-coated coverslips were used for the TAXIScan method. The pre-coated coverslip was washed once with 0.5 mL of PBS and was placed on the glass plate for TAXIScan. The TAXIScan holder was assembled according to the manufacturer's instructions. Cells were harvested by detaching from culture flasks using the same conditions as passaging. One μL of suspension prepared in the culture medium containing 2 × 10 6 cells/mL was applied to the cell-injection side of the TAXIScan holder and the cells (100 or fewer in most cases) were aligned at the edge of the micro-channel. 
After obtaining the first round of images, 1 μL of the chemoattractant solution prepared in the chemotaxis buffer was added to the ligand-injection side of the device to initiate migration. The assay conditions were as follows: duration, 4 h; interval, 5 min; micro-channel depth, 10 μm; and temperature, 37°C. Time-lapse images of cell migration were stored as electronic files on a computer hard disk and analyzed when needed. The morphologies of migrating cells were depicted by tracing the edge of cells and then superimposing the resulting outlines onto the initial image. Movies of the images were made and quantification of velocity and directionality was carried out through the "TAXIScan analyzer 2" software. The trajectory of each cell on the image was traced by clicking the center portion of each cell on the computer display. The velocity (V) and the directionality (D) of each cell were calculated using the traced data as described previously [20,23]. The statistical analysis for the velocity and the directionality was done by the Kruskal-Wallis Test (Non-parametric ANOVA) followed by the Dunn's Multiple Comparisons Test, as the data did not show normal distribution in most cases [20]. The modified Boyden chamber method was performed using collagen I-coated polycarbonate membrane inserts (8 μm pore size) in a 24-well plate (CytoSelect 24-Well Cell Haptotaxis Assay kit, Cell Biolabs, Inc. San Diego, CA, USA) or Transwell Plate with non-coated polycarbonate membrane (Corning Incorporated, Corning, NY, USA), per the manufacturer's protocols. Briefly, the cells grown on a culture dish were detached with Trypsin/ EDTA solution, washed with PBS, and re-suspended in RPMI1640/HEPES buffer with 0.1% fatty-acid-free BSA (the chemotaxis buffer) to attain a density of 0.5 × 10 6 cells/mL. 
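The velocity and directionality values quoted above were computed from the traced trajectories by the "TAXIScan analyzer 2" software. A minimal sketch of how such metrics can be derived from one traced track is shown below; the definitions used here (total path length over elapsed time for velocity, net displacement along the gradient axis over path length for directionality) are common chemotaxis metrics and are an assumption — the software's exact algorithm is given in refs. [20, 23].

```python
import math

def track_metrics(points, interval_min):
    """Velocity and directionality of one traced cell track.

    points       : list of (x, y) positions in micrometres, one per
                   frame, with the chemoattractant gradient along +x
    interval_min : time between frames in minutes

    velocity       = total path length / elapsed time (um/min)
    directionality = net displacement along the gradient axis divided
                     by total path length (1 = straight towards the
                     attractant, ~0 = random wandering)
    """
    path = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        path += math.hypot(x1 - x0, y1 - y0)
    elapsed = interval_min * (len(points) - 1)
    net_x = points[-1][0] - points[0][0]
    velocity = path / elapsed if elapsed else 0.0
    directionality = net_x / path if path else 0.0
    return velocity, directionality

# A hypothetical cell imaged every 5 min that zig-zags towards the source:
# path = 5 + 5 = 10 um over 10 min -> velocity 1.0 um/min, directionality 0.6
v, d = track_metrics([(0, 0), (3, 4), (6, 0)], interval_min=5)
```

A cell that wanders perpendicular to the gradient keeps net_x near zero, so its directionality approaches 0 even when its velocity is high — which is why the two parameters are reported separately.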
A total of 1.5 × 10 5 cells per well were placed in the upper chamber; the chemotaxis buffer with or without LPA was injected into the lower chamber, and then the plate was incubated at 37°C for 2 h. The migrated cells were stained with the staining solution (supplied with the kit), observed under the microscope, and then lysed with the lysis solution (supplied with the kit) to quantify the number of migrated cells by measuring the absorbance at 560 nm. The absorbance was calibrated against cell numbers using a standard curve with a series of different cell numbers (0, 10, 32, 100, 320, 1000, 3200, and 10,000 cells). Quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) Total RNA was extracted from the cells using the RNeasy kit (QIAGEN, Hilden, Germany). Cells were seeded on 10 cm-diameter dishes until 80-90% confluency was attained. On the day of the experiment, the medium was removed, and the cells were washed with 5 mL PBS, followed by addition of lysis solution, per the manufacturer's recommended procedure. Template DNA was prepared with extracted total RNA of each sample using Ready-To-Go You-Prime First-Strand Beads kit (GE Healthcare, Little Chalfont, UK) and 0.5 μL each of 1st strand DNA per sample was used for quantitative polymerase chain reaction (qPCR) with Fast SYBR Green Master Mix reagent (Life Technologies, Carlsbad, CA, USA). Analysis was done after preparing samples in a 96-well plate; signal during PCR was detected by the Step One Plus Realtime PCR system (Life Technologies). The primers used are given in Additional file 1: Table S1. β-actin was used as an internal control for normalization of data. Data were analyzed by the software accompanying the PCR system. Protein expression and phosphorylation detection Cells were seeded on 10-cm-diameter dishes until 80-90% confluency was attained. 
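The absorbance-to-cell-number calibration described above can be sketched as a simple least-squares standard curve. This is an illustration only: the kit protocol defines the actual procedure, a strictly linear A560-versus-cell-number relation is an assumption, and the absorbance values below are synthetic.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def make_cell_counter(cell_numbers, absorbances):
    """Return a function that converts an A560 reading to a cell count
    by inverting the fitted standard curve."""
    slope, intercept = fit_line(cell_numbers, absorbances)
    return lambda a560: (a560 - intercept) / slope

# Standard cell numbers from the text; the A560 values are synthetic
# (hypothetical): blank of 0.05 plus 1e-4 absorbance units per cell.
standards = [0, 10, 32, 100, 320, 1000, 3200, 10000]
a560_std = [0.05 + 1.0e-4 * n for n in standards]
to_cells = make_cell_counter(standards, a560_std)
```

In practice the curve may flatten at high cell densities, so a reading should only be inverted within the range spanned by the standards.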
On the day of the experiment, cells were rinsed once with 5 mL of serum-free Opti-MEM and then stimulated with 1 μM LPA prepared in the chemotaxis assay buffer (0.1% BSA in RPMI1640) prewarmed at 37°C for 30 s, 2 min, or 5 min. Immediately after stimulation, the medium was replaced with ice-cold chemotaxis assay buffer and cells were kept on ice until lysis was done. Cells were lysed with ice-cold lysis buffer from the PathScan RTK Signaling Antibody Array kit (Cell Signaling Technology, Danvers, MA, USA) per the manufacturer's procedure. Cell lysate was kept at −70°C until the PathScan phosphorylation array or SDS-PAGE/western blotting was performed. For western blotting, each cell lysate was subjected to SDS-PAGE, blotting, and antibody reaction. The pre-stained protein marker (Bio-Rad, Hercules, CA, USA) or the CruzMarker protein marker (Santa Cruz Biotechnology, Santa Cruz, CA, USA) was used to estimate the molecular weight of probed bands. Protein bands were visualized with ECL prime (GE Healthcare) and detected by the LAS-4000 mini device (GE Healthcare). The list of the phosphorylated proteins for the array is shown in Additional file 2: Table S2. Results Establishing the optical real-time migration assay system for pancreatic cancer cells We established the assay system for pancreatic cells using the optically accessible horizontal cell mobility assay device EZ-TAXIScan. This device has been used for monitoring chemotaxis assays mostly for hematopoietic cells such as neutrophils, monocytes/macrophages, dendritic cells, eosinophils, and lymphocytes [14][15][16][17][18][19][20][21][22][23][24][25]. In the case of adherent cells, like cancer cells, additional procedures may be required for retrieving the optimal response from cells, such as scaffold coating [26]. Therefore, we compared different coatings on glass for facilitating pancreatic cell migration. 
Human collagen I, fibronectin, laminin, and Matrigel (growth factor reduced) were examined as scaffold substances coated on the glass plate inside the TAXIScan chamber. Among these materials, collagen I and Matrigel showed good performances (Fig. 1) (An additional movie file shows this in more detail [see Additional file 3]). Without coating, the cells did not attach well onto the glass plate (Fig. 1a) and did not show good migration (Fig. 1b). On the glass coated with collagen I or Matrigel, most cells attached and spread well even without a stimulant such as the chemoattractant (Fig. 1a). On the glass coated with collagen I or Matrigel, BxPC3 cells migrated towards LPA (Fig. 1b). LPA is known as a chemoattractant for cancer cells. To observe chemotactic migration of the pancreatic cancer cells towards LPA using the TAXIScan system, we used different concentrations of LPA to seek an optimal concentration for migration and observed that 1 μM of LPA was optimal for BxPC3 and PANC-1 cells (Fig. 2a) (An additional movie file shows this in more detail [see Additional file 4]). In the case of AsPC1 and MIAPaCa-2 cells, very few cells migrated towards LPA at concentrations ranging from 0.1 nM to 10 μM (only the 1 μM data is shown in Fig. 2a, an additional movie file shows this in more detail [see Additional file 5]). BxPC3 cells were the most responsive to LPA of all the cell lines studied. Therefore, we quantitated the directionality and velocity of migration of BxPC3 cells in response to different concentrations of LPA. The directionality in response to LPA increased in a dose-dependent manner (Fig. 2b, left panel). The velocity also increased in a dose-dependent manner in the dose range of 1 to 10 μM LPA (Fig. 2b, right panel). These results were in agreement with the TAXIScan images (Fig. 2a). We confirmed the same phenomenon by an existing assay method, the Boyden chamber method. 
In the Boyden chamber method, BxPC3 cells showed good response to LPA in a dose-dependent manner (Fig. 2c, left). The concentrations of LPA that elicited the migration of BxPC3 cells were observed to be similar in both methods. Expression of receptors for LPA on pancreatic cancer cells To confirm that the migration of cells was an LPA-dependent phenomenon, we evaluated the expression of LPA receptors. Because most published reports showed either only mRNA expression or only protein expression [12,13,27], we attempted to show both mRNA and protein expression systematically by using qRT-PCR and western blotting. As LPA 1 , LPA 2 , LPA 3 , LPA 4 , LPA 5 , and LPA 6 are the known receptors for LPA, we used primers for these receptor isoforms (Additional file 1: Table S1) [27] to compare their mRNA expressions. In BxPC3 cells, based on the results of qRT-PCR, LPA 1 was the most highly expressed receptor among all the 6 receptors (Fig. 3a), whereas LPA 2 , LPA 3 , and LPA 6 were moderately expressed and LPA 5 showed the lowest expression. In PANC-1 cells, LPA 1 and LPA 3 were the major receptors expressed. In AsPC1 cells, the mRNA expression of LPA 1 , LPA 2 , and LPA 6 was detected, and in MIAPaCa-2 cells, the mRNA expression of most LPA receptors was extremely low. LPA 3 expression was highest among the receptors for the MIAPaCa-2 cells (Fig. 3a). We also evaluated the expression of these receptors at the protein level in the 4 pancreatic cell lines by western blotting using anti-LPA antibodies. All cell lines expressed a certain amount of LPA 1 , LPA 2 , and LPA 3 receptors; however, very low expression of LPA 4 , LPA 5 , and LPA 6 receptors was observed in lysates of all cell lines compared to 211H, KATOIII, or PC3, which were used as positive controls (Fig. 3b). The data from the migration assay and western blotting indicated that BxPC3 and PANC-1 cells express the LPA receptors and the migration images of the cells reflect the LPA-elicited migration. 
Signal transduction during migration of pancreatic cancer cells towards LPA To further confirm that the migration was LPA-dependent, we determined phosphorylation of various molecules in BxPC3 and PANC-1 cells using the PathScan array, which enabled us to simultaneously evaluate the phosphorylation of 39 different molecules (Additional file 2: Table S2). We carried out phosphorylation assays at 0.5, 2, and 5 min following LPA stimulation; because cells on culture dishes are stimulated uniformly by LPA, an LPA concentration gradient similar to that of the TAXIScan device could not be applied. Using this array system, we observed that Akt (Thr308 and Ser473), p44/42MAPK, IRS-1, InsR, c-kit, EphA2, and Tie2 were phosphorylated after LPA stimulation in both BxPC3 (Fig. 4a, b) and PANC-1 cells (Fig. 4c, d). Of these phosphorylated proteins, Akt and MAPK are known to be key molecules involved in migration and proliferation. The phosphorylation of these signaling molecules after uniform stimulation was further observed by western blotting. The results obtained showed that Akt (Thr308 and Ser473) and p44/42MAPK were phosphorylated after LPA stimulation, as expected, in both BxPC3 and PANC-1 cell lines within 5 min (Figs. 4e and 5c). For the record, we also checked longer time points, such as 15, 30, 60, 120, and 240 min, which were similar to the time points used in the TAXIScan experiments, but no additional increase in phosphorylation of these molecules was observed (Fig. 4e). These data further support the establishment of the assay system of cancer cell migration towards LPA. Effect of inhibitor on migration towards LPA We also tested the effect of an LPA inhibitor, Ki16425 [28], on LPA-elicited migration of BxPC3 cells. When the cells were treated with Ki16425, the migration of the cells towards LPA was abrogated in a dose-dependent manner (Fig. 5a, b, an additional movie file shows this in more detail [see Additional file 6]). 
The half maximal inhibitory concentration (IC 50 ) value for directionality was ≈ 1.86 μM (Fig. 5b, left graph). Owing to weak inhibition of velocity by Ki16425, the IC 50 value for velocity was >100 μM (Fig. 5b, right graph). When the cells were treated with 50 μM Ki16425, the phosphorylation of Akt and MAPK was reduced, as observed during western blot analysis (Fig. 5c). The pancreatic cancer cells showed LPA-elicited chemotactic migration with clarity in the TAXIScan chamber, and this phenomenon was strongly supported by the inhibition of the intracellular signaling with Ki16425. Discussion In this study, we established a pancreatic cancer cell migration assay system by using the TAXIScan device. We found that coating of scaffolds such as collagen and Matrigel on glass, similar to that in some published studies using other methods, was necessary for successful adhesion and migration. BxPC3 and PANC-1 cells migrated towards LPA in a dose-dependent manner, which was clearly inhibited by an LPA inhibitor, Ki16425. This is the first report of pancreatic cancer cell migration monitored by the TAXIScan system that enables analysis of multiple parameters, including directionality, velocity, and cell morphology. Additionally, this is the first report simultaneously comparing the TAXIScan and Boyden chamber methods. The Boyden chamber method has been used for over 50 years [29]; however, its limitations have been pointed out by several researchers. In this method, a membrane of 10 μm thickness, having holes of 8 μm diameter (in this study) with random density, separates the upper and lower wells (see Additional file 7). It is thought that cells are able to sense differences in the chemoattractant concentration between these two wells. Although this method appears simple, it has certain limitations. (I) The density of holes may not be uniform. 
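IC 50 values such as the ≈ 1.86 μM figure above are normally obtained by fitting a dose-response curve; a simpler back-of-the-envelope estimate, log-linear interpolation between the two doses that bracket the half-maximal response, can be sketched as follows. The dose and response numbers in the example are hypothetical, not the study's data.

```python
import math

def ic50_interpolate(doses, responses):
    """Estimate IC50 by log-linear interpolation.

    doses     : inhibitor concentrations, ascending, all > 0
    responses : measured response at each dose (e.g. directionality),
                assumed to decrease with increasing dose
    """
    half = (responses[0] + responses[-1]) / 2.0
    for i in range(len(doses) - 1):
        r0, r1 = responses[i], responses[i + 1]
        if r0 >= half >= r1 and r0 != r1:
            # interpolate on log10(dose) between the bracketing points
            frac = (r0 - half) / (r0 - r1)
            logd = (math.log10(doses[i])
                    + frac * (math.log10(doses[i + 1]) - math.log10(doses[i])))
            return 10 ** logd
    raise ValueError("half-maximal response is not bracketed by the doses")

# Hypothetical dose-response: directionality falls from 0.8 to 0.0
est = ic50_interpolate([0.5, 5.0, 50.0], [0.8, 0.4, 0.0])
```

A proper four-parameter logistic (Hill) fit would also yield slope and plateau estimates, but the interpolation above is often adequate when only the IC 50 itself is needed.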
(II) The microstructure inside the hole, e.g., a micro-channel of 10 μm length × 8 μm diameter, is unknown, and the chemoattractant gradient is not measurable. (III) A large number of cells is necessary for this assay (1.5 × 10 5 cells per well in this study). (IV) A considerable amount of chemoattractant is necessary (500 μL per well in this study), which is expensive. (Figure caption: statistical analysis was conducted using the Student's t-test; *p < 0.05 vs. data without LPA.) Despite these limitations, the Boyden chamber method has its merits: the cost of materials is inexpensive, and it is well known and widely used. On the other hand, the advantages of TAXIScan are as follows [14] (see also Additional file 8): (I) it has a uniform micro-channel (260 μm length × 1000 μm width × 8 μm height); (II) the chemoattractant, which is placed at one end of the micro-channel, diffuses uniformly through the channel, resulting in a stable concentration gradient [14]; (III) a small number of cells is required for analysis (100 or fewer cells per channel); (IV) only a small, inexpensive amount of chemoattractant is necessary (1 μL per channel); (V) migrating cells are observable; (VI) images obtained during migration are recorded automatically; and (VII) the data obtained from this assay, including morphology, behavior, directionality, and velocity, are more informative. However, some demerits of TAXIScan are as follows: (I) although the running cost is low, the initial cost is high, and (II) it is not well known yet. In fact, it may not be appropriate to position TAXIScan as an alternative to the Boyden chamber method, because both methods utilize completely different equipment and data collection methods, and the quality of data obtained using these methods is entirely different (Additional files 7 and 8). However, because of the lower sample requirement and the collection of more informative data, the approach to cancer cell migration using TAXIScan is more useful than analysis using existing techniques such as the Boyden chamber method. 
With the TAXIScan system, the characteristics of pancreatic cancer cells can be analyzed in detail. Moreover, our system can be adopted for migration studies in other types of cancer cells. In the Boyden chamber method, a certain number of cells without LPA was observed to migrate, indicating a high background (Fig. 2c), similar to that reported previously [30][31][32][33]. This high background with the Boyden chamber method is considered to be due to the thickness of the membrane (10 μm in this study). In the TAXIScan method, cells without LPA were observed to migrate for more than 10 μm (up to 100 μm) (Fig. 2a), explaining this phenomenon. From this point of view, we could argue that TAXIScan has a wider dynamic range to detect cell migration. Herein, 4 pancreatic cancer cell lines were analyzed and only 2 of these cell lines, BxPC3 and PANC-1, showed good migration towards LPA with corroborating evidence from LPA receptor expression. The reason why AsPC1 and MIAPaCa-2 cells do not migrate towards LPA is still unknown. BxPC3 and PANC-1 do express LPA 1 , LPA 2 , and LPA 3 ; however, these cell lines do not express LPA 4 , LPA 5 , and LPA 6 , as observed during western blotting (Fig. 3b). The latter 3 receptors are likely not involved in cell migration but might be involved in other cellular functions. The LPA inhibitor Ki16425 used in this study is believed to block human LPA 1 and LPA 3 receptors [28]; 10 μM of Ki16425 significantly blocked the migration of cancer cells [13]. In our system, Ki16425 clearly inhibited BxPC3 cell migration towards LPA at 5-50 μM concentrations, indicating that TAXIScan and BxPC3 cells are the best tools for screening inhibitors of pancreatic cell migration. Utilizing such a new method, new molecules for regulating pancreatic cancer metastasis can be identified, and the limited treatment options and the poor prognosis of this disease can be overcome. 
Studies on neutrophils have tested various kinds of compounds and found that some compounds inhibit neutrophil function, leading to the successful selection of several effective molecules [34]. Collectively, it can be concluded that the system established in our study can be a powerful tool for cancer research and drug discovery in seeking effectors and inhibitors for analyzing cancer cell function. We are currently looking for and screening such molecules that can regulate pancreatic cancer cell migration; some promising molecules will be reported in the near future. Conclusions We established a novel pancreatic cancer cell migration assay system that provides optical and quantitative information simultaneously. Using this system, we demonstrated that BxPC3 and PANC-1 cells showed good migration towards LPA. The effect of an LPA inhibitor, Ki16425, was detected clearly in this system, which was confirmed by the reduction in the phosphorylation of signal transduction molecules, Akt and MAPK. As this method provides a large amount of information on migrating cells simultaneously, such as their morphology, directionality, and velocity, with a small volume of sample, it can be a powerful tool for analyzing the characteristics of cancer cells and for evaluating factors affecting cellular functions.
Morphology and Proton Transport in Humidified Phosphonated Peptoid Block Copolymers Polymers that conduct protons in the hydrated state are of crucial importance in a wide variety of clean energy applications such as hydrogen fuel cells and artificial photosynthesis. Phosphonated and sulfonated polymers are known to conduct protons at low water content. In this paper, we report on the synthesis of phosphonated peptoid diblock copolymers, poly-N-(2-ethyl)hexylglycine-block-poly-N-phosphonomethylglycine (pNeh-b-pNpm), with pNpm volume fractions (ϕNpm) ranging from 0.13 to 0.44 and dispersity (Đ) ≤ 1.0003. The morphologies of the dry block copolypeptoids were determined by transmission electron microscopy and in both the dry and hydrated states by synchrotron small-angle X-ray scattering. Dry samples with ϕNpm > 0.13 exhibited a lamellar morphology. Upon hydration, the lowest molecular weight sample transitioned to a hexagonally packed cylinder morphology, while the others maintained their dry morphologies. Water uptake of all of the ordered samples was 8.1 ± 1.1 water molecules per phosphonate group. In spite of this modest water uptake, the proton conductivity of the ordered pNeh-b-pNpm copolymers ranged from 0.002 to 0.008 S/cm. We demonstrate that proton conductivity is maximized in high molecular weight, symmetric pNeh-b-pNpm copolymers. ■ INTRODUCTION Proton-conducting polymers have attracted considerable attention because they play a central role as electrolyte membranes in hydrogen fuel cells and artificial photosynthesis. 1−3 The most widely studied membranes are based on sulfonated polymers such as Nafion. 4,5 Nafion is a semicrystalline random copolymer of hydrophobic tetrafluoroethylene and hydrophilic perfluoroether side chains that have terminal sulfonic acid groups. In the dry state, the ionic groups are sequestered in clusters in a hydrophobic tetrafluoroethylene-rich matrix, and Nafion is an insulator. 
In the wet state, a percolating network of hydrated channels emerges within the hydrophobic matrix by self-assembly, resulting in a mechanically robust proton-conducting material. Although numerous papers have been written on this transformation, 6−20 there is still considerable debate surrounding the nanoscale morphology of the hydrated channels. Several groups have embarked on studies of block copolymers comprising a sulfonated block that enables proton conduction and a hydrophobic block that provides the membrane with mechanical integrity. 21−26 The morphology of the conducting channels in these systems can be readily determined by scattering techniques (either X-ray or neutron scattering) or electron microscopy. During typical application conditions, polymer electrolyte membranes are exposed to air, and thus the extent of hydration of the membrane is determined by the partitioning of water between the membrane and the surrounding gas phase. As a result, sulfonic acid-based membranes are ineffective proton transporters at high temperatures (above 80°C), since very little water is retained at high temperatures. 27−30 This limitation has motivated studies of polymers functionalized with other acidic or protogenic groups. 31−33 Phosphonated polymers are attractive systems for several reasons. First, they exhibit efficient proton transport under low water uptake conditions. This is attributed to a higher degree of hydrogen bonding, which promotes proton transport by the Grotthuss mechanism. 34 Second, the phosphonic acid group can release two protons instead of one because phosphonates are dibasic, whereas sulfonates are monobasic. Third, phosphonated polymers often show higher chemical and thermal stability, relative to sulfonic acid moieties, in part due to their higher pK a 's. 33 In spite of these advantages, relatively few studies of phosphonated polymers have been reported. 
35 This is likely because there are no convenient synthetic routes to phosphonated polymers. In fact, all of the studies of proton transport in phosphonated polymer systems have been restricted to random copolymers. While proton conductivities ranging from 10 −6 to 10 −1 S/cm have been reported for such copolymers, 17 the relationship between morphology and conductivity has not yet been explored. Herein we report the synthesis and characterization of a family of well-defined phosphonate diblock copolymers: poly-N-(2-ethyl)hexylglycine-block-poly-N-phosphonomethylglycine (pNeh-b-pNpm). Polypeptoids are a family of comb-like polymers based on an N-substituted glycine backbone. 36,37 Iterative solid-phase synthesis enables the efficient synthesis of polymers with precise control over chain length and copolymer composition. 38−40 The dispersity of the copolymers (Đ) was less than 1.0003. Microphase separation and hydration result in the formation of pNpm-rich domains that conduct protons in the hydrated state. Here we study the relationship between morphology and proton transport for a family of diblocks containing varying volume fractions of each block. ■ EXPERIMENTAL SECTION Synthesis of Monomers. Di-tert-butyl(phthalimidomethyl)phosphonate. In a round-bottom flask, 17.5 g of potassium bis(trimethylsilyl)amide (88 mmol) was suspended in 200 mL of anhydrous tetrahydrofuran (THF) and cooled to −40°C. 17 g of di-tert-butyl phosphite (88 mmol) was added over 20 min. After addition, the flask was warmed to 0°C and stirred for 30 min. The solution was cooled to −40°C, and 21 g of N-(bromomethyl)phthalimide (Aldrich) in 150 mL of anhydrous THF was added dropwise. After addition, the flask was warmed to room temperature and stirred for 1 h. The solvent was removed under vacuum. The residue was partitioned between 1 L of ethyl acetate and 100 mL of water. 
The organic layer was washed with water (100 mL), saturated aqueous sodium bicarbonate (100 mL), and brine (100 mL), dried over sodium sulfate, filtered, and concentrated to give an oily solid (32.2 g). The solid was purified by flash chromatography (60 hexanes/39.9 ethyl acetate/0.1 triethylamine), resulting in 19 g (65%) of a white solid. Di-tert-butyl(aminomethyl)phosphonate. The di-tert-butyl-(phthalimidomethyl)phosphonate (19 g, 54 mmol) was dissolved in 200 mL of absolute ethanol. Methylhydrazine (9.9 g, 215 mmol) was added dropwise, and the solution was stirred overnight. The solution was concentrated in vacuum, and 250 mL of dichloromethane (DCM) was added. The white solid was removed by filtration and rinsed twice with DCM (100 mL). The filtrate was washed with water (5 × 75 mL) and brine (75 mL), dried over sodium sulfate, and concentrated to yield 11.9 g (99%) of pale yellow oil. Synthesis of Peptoid Polymers. Polypeptoids were synthesized on an automated robotic synthesizer or a commercial Aapptec Apex 396 robotic synthesizer on 100 mg of Rink amide polystyrene resin (0.61 mmol/g, Novabiochem, San Diego, CA). The protected phosphonate submonomer, di-tert-butyl(aminomethyl)phosphonate, was synthesized by a modification of previously reported methods. 41,42 All the other monomers, solvents, and reagents described here were purchased from commercial sources and used without further purification. The 2-ethyl-1-hexylamine submonomer was used as the racemic mixture. Peptoids were synthesized by a slightly modified version of the solid-phase submonomer method previously described. 40,43 The Fmoc group on the resin was deprotected with 20% (v/v) 4-methylpiperidine/DMF before starting the monomer cycle. An acylation step was then performed on the amino resin by the addition of 1.0 mL of 1.2 M bromoacetic acid in DMF and 0.18 mL of N,N′-diisopropylcarbodiimide (DIC, 1.15 mmol, neat) and mixing for 20 min. 
Displacement of the bromide with various monomers occurred by adding a 1.0−2.0 M solution of the primary amine in N-methyl-2-pyrrolidone, followed by agitation for 120 min. All the polymers were acetylated on the resin after synthesis using a mixture (2.0 mL per 100 mg of resin) of 0.4 M acetic anhydride and 0.4 M pyridine in DMF for 30 min. The crude peptoid products were cleaved from the resin by the addition of 95% (v/v) trifluoracetic acid (TFA) in H 2 O for 1 h, followed by evaporation. The crude products were then directly precipitated from water. The final polypeptoids were then lyophilized prior to subsequent measurements. All polymers were characterized by 1 H NMR (500 MHz, CD 3 OD), shown in Figure S2. The peaks marked with b (at 4.4 ppm, NCH 2 CO in pNeh), a (at 4.2 ppm, NCH 2 CO in pNpm), c (at 3.3 ppm, NCH 2 CH), and d (at 3.9 ppm, NCH 2 P) are assigned to the protons of the pNeh and pNpm blocks. The peaks e−j (at 0.9−1.8 ppm) can be assigned to the protons of the alkyl group in the pNeh blocks. Density Measurement. The density of polypeptoids was measured using a density gradient column with a sucrose solution at room temperature as previously described. 44 An aqueous sucrose gradient was used in the density gradient column method. The measured density was used to calculate the volume fraction of the polypeptoids. The densities of pNpm and pNte were measured to be 1.13 ± 0.01 and 1.23 ± 0.01 g/cm 3 . Differential Scanning Calorimetry (DSC). Differential scanning calorimetry (DSC) experiments were performed to determine the thermal behavior of the synthesized peptoids using a TA Q200 differential scanning calorimeter. In all tests, a scan rate of 10 K/min was used in the temperature range of −20 to 200°C for three heating and cooling cycles. Thermogravimetric Analysis (TGA). Samples were characterized using a TA Instruments TGA to investigate degradation temperatures by mass loss. 
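The measured densities quoted in the Density Measurement paragraph above were used to calculate block volume fractions. The conversion is the standard mass/density relation for a diblock copolymer; in the sketch below the per-chain block masses are hypothetical, while the densities are the values reported in the text.

```python
def volume_fraction_A(mass_A, density_A, mass_B, density_B):
    """Volume fraction of block A in an A-b-B diblock copolymer,
    from block masses (any consistent unit) and densities (g/cm^3)."""
    vA = mass_A / density_A  # volume occupied by block A
    vB = mass_B / density_B  # volume occupied by block B
    return vA / (vA + vB)

# Hypothetical block masses chosen so each block occupies equal volume,
# using the two measured densities from the text (1.13 and 1.23 g/cm^3):
phi = volume_fraction_A(113.0, 1.13, 123.0, 1.23)  # -> 0.5
```

Because the two blocks have different densities, a 50/50 composition by mass does not generally correspond to phi = 0.5 by volume, which is why the density measurement is needed at all.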
Approximately 5.0 mg of lyophilized peptoid powder was placed on an aluminum sample pan. Samples were equilibrated at 30°C for 20 min and then heated to 500°C at 5°C/min under a nitrogen atmosphere.

Water Uptake. Water uptake of pNeh-b-pNpm equilibrated in humid air was measured in a humidity-controlled environmental chamber (SH-241, Espec Corp.). A small piece of water-equilibrated sample was placed in a quartz pan hooked on the end of a quartz spring (Deerslayer) in the humidity chamber. Samples were equilibrated at the humidity level of interest for 12 h before measurements were recorded. The weight of the wet sample, W wet , was obtained by measuring the spring length through a port on the wall of the humidity chamber with a cathetometer equipped with an optical zoom telescope located outside the chamber. Care was taken to minimize the time when the port was opened (typically 10 s). The spring was calibrated with standard masses at the experimental temperatures and relative humidities in the chamber before use (spring constant was about 0.5 mN/mm). The sample pellet was dried in vacuum at 40°C for 24 h and allowed to cool down in a desiccator before the dry sample weight, W dry , was measured. Water uptake is given by eq 1:

water uptake (%) = [(W wet − W dry )/W dry ] × 100 (1)

The ion exchange capacity (IEC, mmol of phosphonic acid groups per gram of dry polymer) and the number of water molecules per phosphonic acid group (hydration number), λ, were calculated from the water uptake:

IEC = 1000n/M n (2)

λ = water uptake (%) × 10/(MW H 2 O × IEC) (3)

where n is the degree of polymerization of the pNpm block (one phosphonic acid group per Npm unit), M n is the molar mass of the dry polymer, and MW H 2 O = 18.02 g/mol.

Small/Wide-Angle X-ray Scattering (SAXS/WAXS). The block copolypeptoid was dissolved in a 1:1 (v/v) mixture of methanol and tetrahydrofuran and stirred overnight. The solution was then cast on ultraclean Kapton film on a custom-built solvent caster maintained at 35°C, using a doctor blade. The concentration of the solution and the height of the doctor blade were adjusted to obtain a membrane with a thickness of ∼120 μm. The membrane was dried under vacuum overnight, annealed at relative humidity (RH) = 98%, and dried again before measurements.
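The water-uptake and hydration-number arithmetic above can be sketched in a few lines; the sample weights and IEC below are hypothetical values chosen for illustration, not measurements from this work.

```python
MW_H2O = 18.02  # g/mol, molar mass of water

def water_uptake(w_wet, w_dry):
    """Water uptake (%) from the wet and dry sample weights."""
    return (w_wet - w_dry) / w_dry * 100.0

def hydration_number(wu_percent, iec_mmol_per_g):
    """lambda = water uptake (%) x 10 / (MW_H2O x IEC)."""
    return wu_percent * 10.0 / (MW_H2O * iec_mmol_per_g)

# Hypothetical sample: 10.0 mg dry, 11.4 mg wet, IEC = 3.5 mmol/g
wu = water_uptake(11.4, 10.0)    # 14% water uptake
lam = hydration_number(wu, 3.5)  # about 2.2 waters per acid group
```
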
All of these steps were carried out at room temperature. Because of the lack of access to a humidity-controlled chamber appropriate for SAXS experiments, wet samples were prepared by placing the cast samples (with the Kapton substrate) in a closed SAXS sample stage containing water. Samples were equilibrated for 2 h before measurements were taken. Synchrotron SAXS was performed at beamline 7.3.3 at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL). A silver behenate sample was used as a standard. Full two-dimensional scattering patterns were collected on an ADSC CCD detector. The scattering patterns from the ALS were reduced using the Nika program for Igor Pro, available from Jan Ilavsky at Argonne National Laboratory. 45

Transmission Electron Microscopy (TEM). Ultrathin films of the peptoid diblock copolymers were prepared by drop-casting 0.1 wt % MeOH/THF 50:50 solutions on gold grids covered by lacey carbon support films. All grids were annealed in the same humidity chamber described above at 25°C and 98% relative humidity for 24 h. The annealed films were dried either partially, by storing them in air (35% humidity), or fully, under ultrahigh vacuum (lower than 10 −7 Torr) in the transmission electron microscope column. Samples were imaged without any staining using energy-filtered transmission electron microscopy (EFTEM) at a 200 kV acceleration voltage with a slit width of 20 eV on a Tecnai F20 (FEI Company, Netherlands). The thickness of the ultrathin films, estimated by electron energy loss spectroscopy, was between 60 and 80 nm.

Conductivity Measurements. The block copolypeptoid membranes, with thicknesses of about 40 μm, were obtained by the methods described in the SAXS experimental section.
In-plane proton conductivity of membranes equilibrated in humid air was measured in the same humidity chamber as that used in the water uptake measurements by ac impedance spectroscopy, using platinum electrodes in the standard four-probe configuration with a BekkTech sample clamp. Data were collected over a frequency range of 1 Hz−100 kHz. The membrane was allowed to equilibrate at each humidity level for 24 h before a measurement was made. The conductivity, σ, is given by eq 4:

σ = l/(Rwh) (4)

where w and h are the width and thickness of the membrane, respectively, R is the touchdown of the Nyquist semicircle on the real axis, and l is the distance between the inner platinum electrodes.

■ RESULTS AND DISCUSSION

The volume fraction of the phosphonate block (Npm) was varied from 0.13 to 0.44 in order to obtain a variety of nanoscale morphologies and thus probe the impact of morphology on conductivity. The Neh block, made from a racemic monomer, was chosen as the hydrophobic structural block and is known to be amorphous. 40 The structure of the synthesized block copolypeptoids is shown in Figure 1. Block molecular weights and purity characteristics are given in Table 1. We first investigated the thermal properties of the block copolymers by TGA and DSC. TGA results show that degradation of the block copolypeptoids begins at 300°C, indicating the stability of the N-phosphonomethylglycine units (Figure S5). The lack of melting peaks and crystallization exotherms in the DSC data (not shown) indicates that, as expected, all of the pNeh-b-pNpm copolymers are amorphous. The water uptake properties of the phosphonated peptoid block copolymers are shown in Figure 2, where the number of water molecules per phosphonate group, or hydration number, λ, is plotted as a function of the dry block copolymer volume fraction, ϕ Npm , at relative humidities (RH) of 50% and 98%.
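The four-probe conductivity relation above is a one-line computation; the geometry and resistance values below are hypothetical, chosen only to show the unit bookkeeping (lengths in cm, R in ohms, σ in S/cm).

```python
def in_plane_conductivity(l, R, w, h):
    """sigma = l / (R * w * h): four-probe in-plane conductivity.
    l: inner-electrode spacing (cm), R: real-axis intercept of the
    Nyquist semicircle (ohm), w, h: membrane width and thickness (cm)."""
    return l / (R * w * h)

# Hypothetical membrane: l = 0.425 cm, R = 5 kOhm, w = 0.5 cm, h = 40 um
sigma = in_plane_conductivity(0.425, 5.0e3, 0.5, 40e-4)  # S/cm
```
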
The volume fractions of the pNpm-rich microphases in the wet state, ϕ Npm,wet , were determined from the water uptake measurements and the known copolymer compositions, assuming perfect microphase separation and neglecting volume changes on mixing; these values are given in Table 2. As seen in Figure 2, λ is largely independent of block copolymer composition and chain length. The average value of λ at RH = 98% is 7.8, while that at RH = 50% is 1.2. These values are substantially lower than those obtained in sulfonated block copolymers; the typical value of λ at RH = 98% in sulfonated block copolymers is 13. 46 The phase behavior of the pNeh-b-pNpm block copolymers in the dry and hydrated states was studied by SAXS. We report data obtained from dry samples that were exposed to air (RH = 35%) and wet samples placed in a closed SAXS sample stage containing water. Lacking a better alternative, we assume that the SAXS data from the wet samples indicate the sample morphology at RH = 98% (the relative humidity at which proton conductivity and water uptake were measured). SAXS intensity is plotted as a function of the magnitude of the scattering vector, q, in Figure 3. Under dry conditions, most samples exhibit a primary peak at q = q* and a second-order peak at q = 2q*, consistent with the presence of a lamellar phase. Additional higher order peaks at q = 3q*, 4q*, and 5q* are seen in some of the samples; these higher order peaks are also consistent with a lamellar phase. In all cases except pNeh 30 -b-pNpm 6 , the peaks are relatively sharp, indicating the presence of well-ordered lamellar morphologies. In contrast, the SAXS peaks of dry pNeh 30 -b-pNpm 6 are broad, suggesting a disordered morphology. The primary peak of the pNeh 26 -b-pNpm 10 sample has a high-q shoulder that is absent in the higher order peaks. We do not know the reason for this observation.
The observation of lamellar morphologies in this composition window is consistent with a previous study of amorphous peptoid diblock copolymers, 40 where we reported the formation of lamellar phases irrespective of block copolymer composition. The characteristic length of the periodic structure, d, is given by d = 2π/q*. The values of d thus obtained are given in Table 2. At a fixed chain length of 36 (m + n = 36), d decreases from 10.3 to 8.2 nm as ϕ Npm decreases from 0.44 to 0.13, consistent with the classical theory of block copolymer self-assembly by Leibler. 47 Not surprisingly, d is also dependent on the chain length (m + n); at fixed ϕ Npm = 0.44, d decreases from 10.3 to 6.1 nm as m + n decreases from 36 to 18. The morphology of the dry pNeh-b-pNpm copolymers was also studied by dark-field TEM, as shown in Figure 4. It is important to note that our sample preparation approach, described in the Experimental Section, results in the self-assembly of morphologies in free-standing films with thicknesses between 60 and 80 nm. (Attempts to use a cryogenic microtome to obtain sections were not successful.) In a previous study, it was shown that free-standing films of this nature can exhibit morphologies that are similar but not identical to the bulk morphology. 48 In Figures 4a and 4b we show micrographs of pNeh 9 -b-pNpm 9 and pNeh 18 -b-pNpm 18 . Lamellar microphases with poor long-range order are seen in pNeh 9 -b-pNpm 9 ; a higher degree of long-range lamellar order is seen in pNeh 18 -b-pNpm 18 . In contrast, the pNeh 26 -b-pNpm 10 samples exhibited honeycomb morphologies by TEM (Figure 4c). Inside the honeycombs, pNeh 26 -b-pNpm 10 exhibits lamellae arranged like an onion. The micrograph of pNeh 30 -b-pNpm 6 (Figure 4d) shows a lamellar structure similar to that of pNeh 18 -b-pNpm 18 . The lamellae seen in the micrographs of pNeh 9 -b-pNpm 9 , pNeh 18 -b-pNpm 18 , and pNeh 26 -b-pNpm 10 are consistent with the distances observed by SAXS.
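The d = 2π/q* conversion used above is worth making explicit; the q* value below is illustrative, chosen so that it reproduces a spacing close to the 10.3 nm quoted in the text.

```python
import math

def domain_spacing(q_star):
    """d = 2*pi/q*: periodic spacing from the primary SAXS peak.
    q* in nm^-1 gives d in nm."""
    return 2.0 * math.pi / q_star

d = domain_spacing(0.61)  # q* ~ 0.61 nm^-1 -> d ~ 10.3 nm
```
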
In contrast, the lamellar structure inside the honeycombs in pNeh 30 -b-pNpm 6 appears disordered, consistent with the broad SAXS primary peak seen in Figure 3.

[Table 2 footnotes: d is the center-to-center distance between adjacent pNpm lamellae in air; d wet is the center-to-center distance between adjacent pNpm lamellae at RH = 98%. ϕ Npm is the volume fraction of the pNpm block in air; ϕ Npm,wet is the volume fraction of the pNpm block at RH = 98% and 50%, assuming ideal mixing. N/A: not available. λ = water uptake (%) × 10/(MW H 2 O × IEC).]

[Figure 3 caption: SAXS profiles at room temperature for pNeh 18 -b-pNpm 18 , pNeh 26 -b-pNpm 10 , pNeh 30 -b-pNpm 6 , and pNeh 9 -b-pNpm 9 in the dry (red) and hydrated (blue) states.]

Returning to the SAXS data (Figure 3), we see that the lamellar morphology is obtained in all the samples with m + n = 36 in the wet state. The ordered morphologies in the wet state are generally better defined than in the dry state. For example, the higher order peaks at 4q* and 5q* are seen in the wet pNeh 26 -b-pNpm 10 sample with ϕ Npm = 0.23 but are absent in the dry state. The reduced intensity of the 2q* peak in the pNeh 18 -b-pNpm 18 sample suggests that ϕ Npm,wet must be in the vicinity of 0.5. This is consistent with the estimates of ϕ Npm,wet (Table 2). The SAXS patterns of pNeh 30 -b-pNpm 6 in the dry and wet states are similar except for the low-q shoulder that appears in the wet state. All primary peaks shift to lower q* values in the wet state, indicating an increase in d in the wet state. Interestingly, in the wet pNeh 9 -b-pNpm 9 sample (m + n = 18, ϕ Npm,wet = 0.60), a primary peak at q = q* and higher order peaks at q = √3q*, 2q*, and 3q* are visible, indicating the presence of hexagonally packed cylinders.
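The peak indexing used here (higher-order peaks at integer multiples of q* for lamellae; at √3q*, 2q*, √7q*, … for hexagonally packed cylinders) can be automated with a small script. The allowed-ratio sets are the standard crystallographic sequences; the matching tolerance is an arbitrary choice for illustration.

```python
import math

# Standard allowed q/q* peak-position ratios for two common morphologies.
ALLOWED = {
    "lamellar": [1.0, 2.0, 3.0, 4.0, 5.0],
    "hexagonal cylinders": [1.0, math.sqrt(3), 2.0, math.sqrt(7), 3.0],
}

def classify(ratios, tol=0.03):
    """Return the morphology whose allowed ratios account for every
    observed q/q* ratio (within tol), or None if neither fits."""
    for name, allowed in ALLOWED.items():
        if all(any(abs(r - a) <= tol for a in allowed) for r in ratios):
            return name
    return None
```

For example, `classify([1.0, 2.0, 3.0])` returns `"lamellar"`, while `classify([1.0, math.sqrt(3), 2.0, 3.0])` returns `"hexagonal cylinders"`.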
This is in contrast to what is typically observed in uncharged block copolymers: samples with symmetric composition (i.e., with the volume fraction of each block in the vicinity of 0.5) exhibit a lamellar morphology, both in the dry state and when swollen with selective solvents. 47,49,50 The presence of a cylindrical morphology in hydrated pNeh 9 -b-pNpm 9 is thus interesting. Such morphologies have been seen before in nearly symmetric sulfonated block copolymers 24 and are predicted by theories on charged block copolymers. 51,52 In these systems, 24,54,55 the charged block forms the matrix. We thus expect the matrix of wet pNeh 9 -b-pNpm 9 to comprise hydrated pNpm, while the cylinders are expected to comprise dry pNeh chains. Another point worth noting is that the higher molecular weight sample with the same composition, pNeh 18 -b-pNpm 18 , exhibits a lamellar phase in the wet state. It is evident that the set of peptoid block copolymers used in this study presents a wide variety of morphologies in the hydrated state. The proton conductivity (σ) of the block copolymers equilibrated in humid air with RH = 98% was determined as a function of ϕ Npm,wet (Figure 5). Note that the hydration numbers of all of the samples, including pNeh 30 -b-pNpm 6 , were similar: λ = 7.8 ± 1.4. The conductivity of hydrated pNeh 30 -b-pNpm 6 was below the detection limit of our instrument (about 1.5 × 10 −7 S/cm); we can thus only provide an upper bound for the conductivity of this sample. The conductivities of the other samples were above 10 −3 S/cm. The most conductive sample exhibits a proton conductivity of 8 × 10 −3 S/cm, a remarkably high value considering that λ is only 9.2. There are two possible reasons for the sharp increase in conductivity as ϕ Npm,wet increases from 0.22 to 0.39: (1) the morphology of the ionic microphase undergoes a percolation transition, or (2) the mixing of pNeh segments in the pNpm-rich domains interferes with ion transport.
The conductivity of ordered block copolymers with one conducting block, σ, is often described by the equation 54−56

σ = fϕ c σ c (5)

where f is the morphology factor related to the geometry of the conducting phase, ϕ c is the volume fraction of the conducting phase, and σ c is the intrinsic conductivity of the conducting phase. We assume that ϕ c = ϕ Npm,wet . In the case of pNeh 18 -b-pNpm 18 and pNeh 26 -b-pNpm 10 , f is 2/3 (lamellar conducting domains), while in the case of pNeh 9 -b-pNpm 9 , f is 1 (the conducting phase is the matrix of a hexagonally packed cylinder morphology). We take σ c of pNeh 30 -b-pNpm 6 to be zero. The value of σ c corresponds to the estimated conductivity of hydrated pNpm domains with λ = 8.1 ± 1.1; small differences in λ between samples will be discussed shortly. To a good approximation, σ c is a linear function of n (Figure 6). Similar trends have been observed in other charged block copolymers; the intrinsic conductivity of block copolymer domains increases with increasing chain length. One of the factors that contribute to this effect is segregation strength. As segregation strength increases, the microphases become more sharply defined; i.e., the concentration of nonconducting chains in the conducting domains decreases. One expects segregation strength to increase with increasing chain length at constant block copolymer composition. The data in Figure 6a are consistent with this expectation. In Figure 6b, we compare the intrinsic conductivity of the hydrated pNpm-rich domains in the block copolypeptoids, σ c , with that of phosphoric acid; the highest value of σ c (that of pNeh 18 -b-pNpm 18 ) is about an order of magnitude lower than that of phosphoric acid. Since the conductivity of phosphoric acid solutions represents an upper limit for the intrinsic conductivity of hydrated microphases with phosphonic acid groups, the maximum attainable value of σ c is between 0.15 and 0.25 S/cm.
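Inverting eq 5 gives the intrinsic-conductivity estimate discussed around Figure 6; the numbers below are hypothetical, merely showing how a lamellar sample's measured σ and ϕ Npm,wet map to σ c.

```python
def intrinsic_conductivity(sigma, f, phi_c):
    """Invert eq 5 (sigma = f * phi_c * sigma_c) for sigma_c, the
    intrinsic conductivity of the conducting (hydrated pNpm) phase."""
    return sigma / (f * phi_c)

# Hypothetical lamellar sample: f = 2/3, sigma = 8e-3 S/cm, phi_c = 0.5
sigma_c = intrinsic_conductivity(8.0e-3, 2.0 / 3.0, 0.5)  # S/cm
```
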
It may thus be possible to improve the conductivity of phosphonated block copolymers by as much as an order of magnitude by increasing segregation strength or by designing other phosphonated polymers.

■ CONCLUSION

A series of novel phosphonated diblock copolypeptoids, poly-N-(2-ethyl)hexylglycine-block-poly-N-phosphonomethylglycine (pNeh-b-pNpm), with dispersity ≤1.0003 were synthesized by solid-phase synthesis. The morphologies of the block copolypeptoids were determined by SAXS and TEM. In the dry state, the sample with ϕ Npm = 0.13 was disordered, while the others exhibited lamellar morphologies. In most cases, the morphologies of the dry and hydrated states were similar, except for pNeh 9 -b-pNpm 9 , which exhibited a cylindrical morphology in the hydrated state. The hydration numbers (λ) of the pNeh-b-pNpm membranes equilibrated in air with RH = 98% were comparable (8.1 ± 1.1 water molecules per phosphonate group), but the proton conductivities were widely different. The disordered sample was an insulator (conductivity <10 −7 S/cm), while conductivities as high as 0.008 S/cm were obtained in the ordered samples. The estimated intrinsic conductivity of hydrated pNpm microphases increases linearly with the degree of polymerization of the pNpm block. The high molecular weight, symmetric pNeh-b-pNpm sample exhibited the maximum conductivity. The results of this study provide a basis for the design of proton-conducting phosphonated polymer electrolytes with higher conductivity. Peptoid block copolymers provide a novel platform for studying the relationship between molecular structure and transport. The ability of phosphonate-containing copolymers to conduct protons at low degrees of hydration makes them particularly attractive for electrochemical applications.
The Influence of Emotion on Fairness-Related Decision Making: A Critical Review of Theories and Evidence

Fairness-related decision making is an important issue in the field of decision making. Traditional theories emphasize the roles of inequity aversion and reciprocity, whereas recent research increasingly shows that emotion plays a critical role in this type of decision making. In this review, we summarize the influences of three types of emotions (i.e., the integral emotion experienced at the time of decision making, the incidental emotion aroused by a task-unrelated dispositional or situational source, and the interaction of emotion and cognition) on fairness-related decision making. Specifically, we first introduce three dominant theories that describe how emotion may influence fairness-related decision making (i.e., the wounded pride/spite model, the affect infusion model, and the dual-process model). Next, we collect behavioral and neural evidence for and against these theories. Finally, we propose that future research on fairness-related decision making should focus on inducing incidental social emotion, avoiding irrelevant emotion when regulating, exploring individual differences in emotional dispositions, and strengthening the ecological validity of the paradigms.

INTRODUCTION

Researchers of decision making have typically regarded emotion as impulsive and irrational and neglected its role in decision making (Kahneman and Tversky, 1979; Von Neumann and Morgenstern, 2007). In "normative decision theory," economic decision making is based on "cold" mathematical calculation, and decision makers are idealized as perfect "rational machines." However, studies increasingly show that emotion is one of the most important factors in the irrational decision-making process (Hastie, 2001; Sanfey et al., 2006).
For example, emotion may guide people's decision making under conditions of risk and uncertainty and with regard to intertemporal choices, social decisions, and moral decision making (Loewenstein and Lerner, 2003; Rilling and Sanfey, 2011). Fairness-related decision making is an important issue in the field of psychological decision making (Güth and Kocher, 2014). Experiments on fairness-related decision making have usually been conducted using the classic "Ultimatum Game" (UG) paradigm (Güth et al., 1982). An increasing number of UG studies have revealed that responders tend to sacrifice their own payoffs to decline an unfair offer, especially when they receive an offer that is less than 20% of the total (Güth et al., 1982; Thaler, 1988; Camerer and Thaler, 1995). These irrational rejection behaviors cannot be captured by the economic notion of utility maximization, according to which the responder should accept all offers, since receiving at least some money is always preferable to receiving no money. Some theories, such as "inequity aversion" theory (Fehr and Schmidt, 1999; Bolton and Ockenfels, 2000) and "reciprocity equilibrium" theory (Rabin, 1993; Falk and Fischbacher, 2006), have attempted to explain irrational behaviors in fairness-related decision making. "Inequity aversion" means that people prefer equitable outcomes: they are willing to forego a material payoff to work toward more equitable outcomes (Fehr and Schmidt, 1999; Bolton and Ockenfels, 2000). However, it is difficult to explain why unfair offers from computer partners are accepted at higher rates than those from human partners if people pursue only fairness in terms of their own material payoff relative to the payoff of others (Blount, 1995; Knoch et al., 2006). According to "reciprocity equilibrium" theory, rejection in the UG with human partners is a social punishment intended to promote fair offers in subsequent bargaining, establish a good reputation, or enforce fairness norms (Rabin, 1993; Falk and Fischbacher, 2006).
Thus, people will reject unfair offers from human partners but accept unfair offers from computer partners to maximize personal gains. However, one study found that players would reject unfair offers when rejection reduced only their own earnings to 0, and even when they could not communicate their anger to the proposers through rejection. The rejection of unfair offers that increases inequity and fails to punish proposers cannot be explained by the "inequity aversion" and "reciprocity equilibrium" theories. Such studies have increased awareness of the fact that emotion may be an important reason for irrational behaviors in fairness-related decision making (Sanfey et al., 2003; Ferguson et al., 2014). They propose that rejection is used to express negative emotions, such as the anger or disgust aroused by unfair offers (Xiao and Houser, 2005). Although the two classical theories do not deny the existence of emotion, they do not clearly explain the role of emotion or its mechanism. A new perspective on emotion is required to explain behavior in fairness-related decision making. Many studies have explored the influence of emotion on fairness-related decision making using behavioral, electrophysiological and neuroimaging approaches and supported these theories. The influence of emotion on decision making concerns integral emotions (i.e., task-driven) and incidental emotions (i.e., task-unrelated) (Loewenstein and Lerner, 2003). The Wounded Pride/Spite Model suggests that integral emotions, such as the negative emotions provoked by unfair offers, prompt rejections (Straub and Murnighan, 1995; Pillutla and Murnighan, 1996). However, this model only focuses on the influence of the emotional response aroused by fairness-related decision making itself; it does not consider the influence of emotion aroused by dispositional or situational sources objectively unrelated to the task.
To address this gap, the Affect Infusion Model investigated how incidental emotions (emotions aroused by emotional videos or images) influence fairness-related decision making (Forgas et al., 2003; Bless et al., 2006). These two models emphasize the role of emotion in fairness-related decision making but ignore the regulation of emotion by cognition in modulating behavior. The Dual-Process System claims that the rational system and the emotional system are dual subsystems in fairness-related decision making, with the former prompting an adaptive response to different situations by regulating the latter (Loewenstein and O'Donoghue, 2004; Sanfey and Chang, 2008; Feng et al., 2015). This review summarizes these models of the impact of emotions on fairness-related decision making and the corresponding behavioral and neural evidence.

Wounded Pride/Spite Model

The Wounded Pride/Spite Model proposes that the integral emotion aroused by a task itself may change fairness-related decision making. The model claims that if responders perceive that offers are unfair, feelings of wounded pride and anger may be aroused (Straub and Murnighan, 1995; Pillutla and Murnighan, 1996). When direct channels for expressing emotions are either impossible or undesirable, individuals are willing to incur the costs of rejection to retaliate against perceived unfairness (Gross and Levenson, 1993; Gross, 1999). Even when the responder has no way to punish the proposer, the responder still wants to reject the unfair offer, suggesting that rejection may be not only a strategy to enlarge future potential payoffs but also an effective means of emotional release. However, if responders can convey their feelings of unfairness to proposers, the acceptance rates (ARs) of unfair offers can be increased substantially (Xiao and Houser, 2005).
Evidence from Integral Emotion

According to a large number of recent studies, the integral negative emotions aroused by unfair offers can increase the punishment for violating fairness norms. First, previous studies found that fairness-related decision making can evoke strong emotions, demonstrating the existence of integral emotion in fairness-related decision making. From the responders' self-reports, researchers found that when responders received an unfair offer, their negative affective responses, such as anger, contempt, irritation, envy and sadness, increased, whereas positive affective responses, such as pleasure and happiness, decreased (Pillutla and Murnighan, 1996; Bosman et al., 2001; Xiao and Houser, 2005; Osumi and Ohira, 2009; Voegele et al., 2010; Hewig et al., 2011; Bediou and Scherer, 2014; Gilam et al., 2015). Researchers used the UG to examine the affective correlates of decision making and found that the decision to reject is positively related to more negative emotional reactions, increased autonomic nervous system and skin conductance activity (van't Wout et al., 2006; Hewig et al., 2011), and decelerated heart rate (Osumi and Ohira, 2009; Dunn et al., 2012). Furthermore, similar facial motor activities were evoked by unfair treatment, unpleasant tastes, and photographs of contaminants, suggesting that unfairness elicits the same disgust as bad tastes and disease vectors (Chapman et al., 2009). Second, the affective response to unfair offers is one possible reason for rejection in fairness-related decision making. Psychophysiological studies have shown that increased ARs of offers correlate with greater resting heart rate variability (Osumi and Ohira, 2009; Dunn et al., 2012). EEG studies found that feedback-related negativity (FRN) could predict the likelihood of rejection in the UG and that rejection was associated with negative emotion (van't Wout et al., 2006; Hewig et al., 2011).
By using the dipole localization method, EEG studies showed that unfair offers could arouse activation of the insula, which is associated with negative emotion, and the anterior cingulate cortex (ACC), which is associated with conflict monitoring (Guclu et al., 2012). Neuroimaging studies also showed a negative correlation between the activation of the insula, which is specifically involved in aversive emotion, and the ARs of unfair offers (Sanfey et al., 2003; Takagishi et al., 2009). The above findings indicate that the negative emotions aroused by perceptions of unfairness play an important role in rejection behaviors, supporting the Wounded Pride/Spite Model. Although the Wounded Pride/Spite Model proposes that negative emotion in fairness-related decision making is an important factor in the rejection of an unfair offer (van't Wout et al., 2006; Hewig et al., 2011) and can explain many behaviors in fairness-related decision making (Harle and Sanfey, 2007; Grecucci et al., 2013b), this model is only concerned with the responders' emotional reaction aroused by fairness-related decision making itself. It ignores the impact of the responders' emotional state and other contextual factors.

AFFECT INFUSION MODEL AND EVIDENCE

Affect Infusion Model

The Affect Infusion Model proposes that incidental emotion aroused by task-unrelated sources can significantly influence fairness-related decision making by priming mood-congruent concepts and dispositions (Forgas et al., 1990; Forgas, 2002). For instance, in fairness-related decision making, people must integrate negative (unfair social signals) and positive (financial benefits) information. Positive incidental emotion makes responders more concerned about their own benefits, thus increasing ARs. By contrast, negative incidental emotion makes responders more concerned about unfair offers, thus decreasing ARs (Harle et al., 2012).
That is, acceptance or rejection decisions represent the internal rewards and external fairness principles in fairness-related decision making. Positive emotion can enhance cooperation by recruiting a more assimilative, internally focused processing style that promotes selfishness (Forgas et al., 1990). Negative emotion is an alert signal that requires accommodative processing and increases monitoring of the external environment to process potential threats and hazardous stimulation, increasing concern with social norms (Forgas et al., 2003; Bless et al., 2006). For example, sadness provokes pessimistic framing and increases the processing of threatening information, making responders more concerned about the negative consequences of unfairness and the punishment of those who violate the fairness norm (Harle and Sanfey, 2007).

Evidence of Incidental Emotion

To explore the influence of incidental emotion, many studies have manipulated the affective state by evoking different valences and arousal levels with images and videos. The results showed that participants in a negative emotional state reject a greater number of unfair offers (Moretti and Di Pellegrino, 2010; Fabiansson and Denson, 2012; Harle et al., 2012; Liu et al., 2016; Riepl et al., 2016), whereas a positive emotional state may increase ARs or exert no influence on them (Harle and Sanfey, 2007; Andrade and Ariely, 2009; Forgas and Tan, 2013a,b; Liu et al., 2016). Behavioral studies found that, on the one hand, when the participants were responders, sad participants (compared with a neutral group) reported more negative emotions, such as anger and disgust, when faced with unfair offers and subsequently made more rejections, whereas participants who were induced to experience happy emotions accepted more unfair offers (Riepl et al., 2016) or showed no discernible change in their decisions (Harle and Sanfey, 2007; Forgas and Tan, 2013a,b; Liu et al., 2016).
On the other hand, when the participants were proposers, inducing amusement (compared with sadness) made them more selfish; they allocated a greater number of points to themselves and had shorter response times (Forgas and Tan, 2013a,b). Neuroimaging studies indicate that the effects of incidental sadness involve three main emotion-related brain regions, namely, the insula, the ACC and the striatum. First, compared with participant responses under neutral conditions, the ARs of unfair offers were associated with higher bilateral insula activations in participants who were sad. The insula is typically associated with negative emotions (Paulus et al., 2003; Knutson et al., 2007), suggesting that this region may index an aversive response, which may reduce ARs (Harle et al., 2012). Consequently, some researchers have speculated that insula activation can predict the influence of sadness on decision making (Sanfey et al., 2003). Increasing evidence suggests an important role of the anterior insula (AI) in detecting norm violations (Civai et al., 2012; Xiang et al., 2013). Researchers have speculated that a sad participant with increased AI activity may experience heightened sensitivity to norm violation. Thus, incidental sadness could activate the insula involved in negative emotion (or in the detection of norm violations) and bias behavior accordingly. Second, receiving unfair offers in a sad vs. neutral mood resulted in greater activation in the ACC, which is linked to error and decision conflict monitoring, suggesting that sad individuals may experience an enhanced perception of social norm violation (Harle et al., 2012). Furthermore, a moderating effect of mood was found in the left ventral striatum, which is associated with reward processing.
Individuals who experienced a neutral mood showed stronger activation for fair offers relative to unfair offers, while individuals who were sad did not exhibit such a pattern of activation, implying decreased reward responsiveness to reward stimuli (Harle et al., 2012). Overall, both behavioral and neural studies have shown that negative emotions enhance participants' negative responses to behaviors that violate fairness norms and reduce reward activation for fair offers, thus decreasing ARs. These studies demonstrate that emotion plays a role in changing participants' decisions by altering their cognitive processing, supporting the Affect Infusion Model. However, some researchers have noted that the dimension of emotional motivation, rather than emotional valence, is the key factor that influences fairness decision making. Emotional valence refers to the intrinsic attractiveness (positive valence) or averseness (negative valence) of an event, an object, or a situation (Frijda, 1986). Emotional motivation refers to the aversive and appetitive apparatuses, which, respectively, promote withdrawal and approach behavior (Schneirla, 1959;Lang et al., 1997). Two emotions with similar valences may have different motivations, and vice versa. For instance, amusement and serenity are positive emotions, whereas anger and disgust are negative emotions. However, amusement and anger are classified as approach-based emotions, whereas serenity and disgust are withdrawal-based emotions. Therefore, researchers have suggested that, compared with a valence framework, partitioning affective states based on motivational tendency could more accurately explain the changes in ARs in fairness-related decision making. The results of a study that explored the influence of positive emotions (amusement and serenity) and negative emotions (anger and disgust) on fairness-related decision making indicate that emotional valence did not predict ARs.
However, the approach-based emotional states (amusement, anger) increased ARs, whereas withdrawal-based emotional states (disgust, serenity) decreased ARs (Harle and Sanfey, 2010). Thus, emotional motivation may help explain fairness-related decision making. Many researchers have explored the emotional influence on fairness-related decision making in terms of approach-based states (anger) and withdrawal-based emotional states (disgust) (Andrade and Ariely, 2009;Moretti and Di Pellegrino, 2010;Liu et al., 2016;Riepl et al., 2016). Studies have shown that anger influences fairness-related decision making and leads responders to reject more unfair offers. On the one hand, anger functions as a negative emotion after unfair treatment (Pillutla and Murnighan, 1996) and thus decreases the ARs of unfair offers. Prior to a decision, the responders' anger elicited by watching a video clip made them reject more unfair offers compared with responders who watched a pleasant video clip (Andrade and Ariely, 2009;Riepl et al., 2016). When manipulating the facial expressions of the proposers, the same results were found: responders facing angry proposers provided the most rejections, whereas the fewest rejections came from those who faced pleasant proposers (Mussel et al., 2013;Liu et al., 2016). When the responder's anger was provoked by the controlled proposer's negative appraisal of the responder's speech, decreased ARs resulted (Fabiansson and Denson, 2012). To the best of our knowledge, only one study used an EEG and explored the neural mechanism of the influence of incidental emotion on fairness-related decision making. That study induced anger, fear and happiness via short movie clips. The results showed that responders with high trait negative affect in aversive mood states had increased FRN amplitudes when they were in an angry mood but not when they experienced fear or happiness (Riepl et al., 2016).
On the other hand, whether the proposer or the responder is the angry party leads to different perceptions of fairness and judgments of the proposer's offer. If the proposers are angry, more unfair offers are given. For example, if the proposer's anger is aroused by the responder, the proposer is more likely to propose an unfair split (Fabiansson and Denson, 2012). In contrast, if the responder feels angry, more fair offers are given. For example, proposers will make more fair offers when they know that the responders watched an angry video clip than when they know that the responders watched a happy clip (Andrade and Ho, 2007). The above results may relate to the proposers' attribution of anger. First, anger is a kind of high-arousal, approach-based negative emotion (Berkowitz and Harmon-Jones, 2004;Carver and Harmon-Jones, 2009), and it may cause antisocial behaviors related to revenge (Carnevale and Isen, 1986;Pillutla and Murnighan, 1996;Allred et al., 1997). Therefore, when the responder is the one to irritate the proposer, the proposer proposes more unfair offers in return. Second, anger may make people tougher and more dominant (Knutson, 1996;Tiedens, 2001). People know that angry people are impulsive and act irrationally (Bacharach and Lawler, 1981), so, when they play as proposers, they may make more fair offers to reduce the possibility of being rejected rather than irritating the responder in an attempt to maximize profits in bargaining (Andrade and Ho, 2007;Andrade and Ariely, 2009). In addition, disgust aroused prior to a decision can increase the responder's punishment of unfair offers, whereas misattributing the disgust induced by the unfair offer to an incidental source reduces the responder's punishment. When responders viewed emotional pictures or faces that aroused aversion prior to a decision, the disgust led to lower ARs to unfair offers (Moretti and Di Pellegrino, 2010;Liu et al., 2016).
In a comparison of the influence of disgust and sadness on fairness decisions, disgust caused markedly lower ARs (Moretti and Di Pellegrino, 2010). However, another study using disgusting smells showed that participants misattributed the disgust induced by an unfair offer to the disgusting smell, which led to higher ARs (Bonini et al., 2011). These results indicate that the arousal of disgust prompts people's maintenance of social norms, because disgust is a type of withdrawal-based emotion (Harle and Sanfey, 2010) and may extend to moral and social violations (Rozin et al., 2000). As an indicator of the judgment of others' behavior as either right or wrong, feelings of disgust can function better than sadness as moral intuition (Haidt, 2001) to decrease the ARs of unfair offers. To an extent, disgust aroused prior to the task overlapped with the disgust induced by the distribution itself, and attributing the latter to the former effectively subtracted that emotion from the decision. From the above, we may conclude that although the valences of anger and aversion are the same, due to the different induction manipulations and attributions, they may have different impacts on fairness-related decision making. Consequently, the Affect Infusion Model takes the motivational direction of emotion as an important factor to interpret the emotional process of fairness-related decision making within a wider range.

Dual-Process Systems

The above two models focused on the function of emotional arousal and appraisal in fairness-related decision making but ignored the regulation of emotion by cognition to change decision making. The Dual-Process Systems model claims that there are dual subsystems in fairness-related decision making: one is automatic and immediate, an emotional system requiring no cognitive effort, whereas the other is controlled and comparatively slow, a rational system requiring cognitive effort.
The emotional system represents the intuitive response; the rational system, however, generates an adaptive response to different situations after learning and calculation by regulating the emotional system (Loewenstein and O'Donoghue, 2004;Sanfey and Chang, 2008;Feng et al., 2015). Fairness-related decision making is influenced by systematically and effectively regulating responders' fairness perceptions via rational cognitive control (Rilling and Sanfey, 2011). For example, the model suggests that all types of emotion regulation strategies can change fairness-related decision making through the interaction of cognition and emotion.

The Empirical Study of Dual-Process Systems

Researchers have employed different emotion regulation strategies and compared their effectiveness. The results support the influence of emotion regulation on fairness-related decision making. First, responders may spontaneously regulate the negative emotions induced by unfair offers in the UG. After decision making, responders are requested to report their own opinions on the offer and to write down the shift in their decisions as follows: "At the very beginning, I thought of …; then I considered …". Some responders may remain angry, reject the unfair offer and refuse to report, whereas others may spontaneously employ cognitive reappraisal to reduce their own negative emotions and then accept more unfair offers (Voegele et al., 2010;Gilam et al., 2015). In terms of physiological arousal, responders who employed reappraisal showed higher vagal activation and attenuated heart rate deceleration after accepting unfair offers (Voegele et al., 2010). Neuroimaging studies have revealed that increased ARs of unfair offers are associated with increased activity in the ventrolateral prefrontal cortex (vlPFC), a region involved in emotion regulation, and decreased activity in the AI, which is linked to negative affect (Tabibnia et al., 2008).
Individuals with high monetary gains showed increased ventromedial prefrontal cortex (vmPFC) activity but also decreased AI activity (Tabibnia et al., 2008;Gilam et al., 2015). Furthermore, patients with vmPFC damage had lower ARs than control groups (Koenigs and Tranel, 2007). These studies suggested that brain areas associated with emotion regulation, such as the vlPFC and vmPFC, may be engaged to diminish the aversion-related AI response (Tabibnia et al., 2008;Gilam et al., 2015) and increase the ARs of unfair offers. Second, multiple emotion regulation strategies can change decisions by regulating emotions. Researchers have employed two strategies for emotion regulation in fairness-related decision making: reappraisal and expressive suppression. The results showed that although both strategies could reduce responders' negative emotions towards unfair offers, the reappraisal strategy was more effective than expressive suppression in changing responders' emotions and making them accept more unfair offers (Kirk et al., 2006;van't Wout et al., 2010;Fabiansson and Denson, 2012). In addition, reappraisal strategies may continue to reduce participants' negative emotions and make them propose more fair offers during a second interaction with partners who treated them unfairly in a previous interaction, whereas the expressive suppression strategy may reduce participants' previous negative emotions without ridding them of the impact of the negative treatment, resulting in the proposal of unfair offers (van't Wout et al., 2010;Fabiansson and Denson, 2012). The results showed that, to change emotions and behaviors using an emotion regulation strategy and to avoid a previous negative impact, the reappraisal strategy is considerably more effective than expressive suppression and can extend beyond a single encounter to influence future interactions. Grecucci and colleagues furthered the study of reappraisal strategies by discussing up- and down-regulation (Grecucci et al., 2013b).
The former refers to interpreting the intentions and behaviors behind unfair offers as more negative (i.e., the player is a selfish person and wants to keep all the monetary gains), whereas the latter refers to interpreting them as less negative (i.e., the proposer's debt problems lead him or her to try to gain more). The results showed that responders with an up-regulation strategy rejected more unfair offers in contrast with down-regulation, demonstrating that reappraisal strategies may change the way responders understand others' intentions and affect their emotional reaction, resulting in changed decisions. Overall, the reappraisal strategy can modulate the impact of emotional stimuli, contributing to our decisions flexibly (Grecucci et al., 2013b). Neuroimaging studies revealed that the dorsolateral prefrontal cortex (DLPFC) and bilateral ACC play vital roles in the reappraisal process. The DLPFC is associated with cognitive control and inhibition (Miller and Cohen, 2001) as the basis of the generation and maintenance of reappraisal strategies (Ochsner et al., 2002;Ochsner and Gross, 2005). Additionally, Buckholtz et al. (2008, 2015) and Buckholtz and Marois (2012) proposed the integrative model of DLPFC function, which suggested the role of the DLPFC in the representational integration of the distinct information streams used to make punishment decisions. When applying cognitive reappraisal in fairness-related decision making, the evaluation of fairness and the information concerning harm and blame changed. Therefore, the DLPFC is activated to integrate the information from emotional response, regulation strategy, fairness evaluation and other sources to make punishment decisions. Furthermore, the ACC monitors and evaluates conflicting responses or motives (Yeung and Sanfey, 2004;Ochsner and Gross, 2005). In addition to reappraisal and expressive suppression, expected emotion is an effective way of regulating fairness-related decision making.
With regard to changing a decision, some studies have investigated the regulation of individuals' expected emotion induced by the decision outcome. In the decision stage, responders will attempt to predict the probabilities of different outcomes and the emotional consequences associated with alternative actions. To minimize negative emotion and maximize positive emotion, responders will adjust their decisions (Loewenstein and Lerner, 2003;Rick and Loewenstein, 2007). If they predict they will be proud of their fair offers, more fair offers will be given, whereas if they predict that they will feel regretful, fewer fair offers will be chosen. The expected emotion helps them to anticipate future outcomes and modify their behaviors to evoke desirable emotions and avoid undesirable results. When an individual can expect a positive outcome, it is likely that a current offer will be supported. In contrast, an expected negative outcome will lead to modification of the current activities (Baumeister et al., 2007). Some researchers have manipulated the expected emotion using the autobiographical recall task and found that anticipated pride about fair behavior increased levels of fairness, whereas anticipated pride about unfair behavior decreased levels of fairness. Similarly, anticipated regret about fair behavior reduced levels of fairness, whereas anticipated regret about unfairness increased levels of fairness (van der Schalk et al., 2012). If proposers observed the responder's pride or regret after making fair or unfair offers in the UG, they made fewer fair offers if they had seen the responder's regret about a fair offer, whereas they made more fair offers if they had seen the responder's regret about unfair offers (van der Schalk et al., 2014).
The results showed that past emotional experience makes people reflect on and modify the outcome of their behavior because they pursue not only maximized benefits but also positive emotional experiences (Mellers et al., 1999;Loewenstein and Lerner, 2003). Other studies on regulating strategies of delay or distraction revealed that the delay of a decision did not change the emotional experience or behavior (Bosman et al., 2001), whereas distraction only decreased anger but did not change fairness-related or other decisions when anger was induced again by the same stimulus (Gross and Levenson, 1993;Gross, 1999;Xiao and Houser, 2005;Fabiansson and Denson, 2012). Neural mechanism studies on the emotion regulation of fairness-related decision making have supported Dual-Process Systems. The interaction of the automatic emotional processing system and the controlled cognitive system affects people's behavior. The emotional system includes the insula, which is associated with aversion to violating norms (Sanfey et al., 2003;Guo et al., 2013); the amygdala, which is associated with negative emotions (Haruno and Frith, 2010;Haruno et al., 2014); and the vmPFC, which is associated with encoding subjective values of perceived offers and emotion regulation (Tabibnia et al., 2008;Baumgartner et al., 2011;Gilam et al., 2015). In addition, the controlled cognitive system involves the dorsal ACC, which regulates the conflict between norm enforcement and self-interest, and the DLPFC, which is related to executive control (Knoch et al., 2006, 2008;Baumgartner et al., 2011). Dual-Process Systems focus on the function of emotions and involve the interaction of emotion and cognition in fairness-related decision making. This model has been supported by many behavioral and neuroimaging studies (Sanfey et al., 2003;Baumgartner et al., 2011).
This model also proposes strategies for regulating emotion that provide a new way of changing fairness-related decision making (Knoch et al., 2006, 2008). However, current evidence is limited to the regulation of negative emotion induced by an offer (Grecucci et al., 2013a,b). Little is known about the regulation of incidental emotion in fairness-related decision making.

A SCHEMATIC ILLUSTRATION OF THE INFLUENCE OF EMOTION ON FAIRNESS-RELATED DECISION MAKING

In complex social environments, both the emotion and cognition systems are involved in processing the fairness perception of resource distribution (see Figure 1). The Wounded Pride/Spite Model and the Affect Infusion Model describe the influence of integral emotion aroused by the task and incidental emotion aroused by task-unrelated sources, respectively. For instance, compared with fair offers, unfair offers have been associated with greater activation of the insula, which is involved in aversion emotion (Sanfey et al., 2003;Takagishi et al., 2009), whereas fair offers have been linked to the activation of reward regions, such as the ventral striatum (Tabibnia et al., 2008). Additionally, individuals in sad or angry moods showed an enhanced perception of unfairness, with a greater activation of the insula and amygdala (Harle et al., 2012). The Dual-Process Systems perspective proposes that the rational system could regulate emotion to both up- and down-regulate fairness-related decision making. For example, the ACC monitors and evaluates conflicts between norm enforcement and financial benefit (Yeung and Sanfey, 2004;Ochsner and Gross, 2005). The vlPFC and vmPFC, which are associated with emotion regulation, could decrease the activation of the AI to diminish conflicts (Tabibnia et al., 2008;Gilam et al., 2015). The DLPFC is associated with cognitive control and inhibition (Miller and Cohen, 2001) and influences the generation and maintenance of reappraisal strategies (Ochsner et al., 2002;Ochsner and Gross, 2005).
It can integrate the information from emotional response, regulation strategy, fairness evaluation and other sources to make punishment decisions (Buckholtz and Marois, 2012).

SUMMARY AND PROSPECTS

In the history of studies on fairness-related decision making, the hypothesis has changed from viewing responders as completely rational, with no influence from emotion, to regarding both emotion and cognition as important factors in Dual-Process Systems. Many studies have revealed that emotion plays an important role in fairness-related decision making. Based on the review of the theoretical and empirical studies, we conclude that future research on the influence of emotion in fairness-related decision making can be furthered in the following ways. First, recent studies that have induced incidental emotions are limited to several basic emotions, such as happiness, sadness, anger or disgust. However, as social animals, humans have complicated, delicate and vast social structures and interpersonal relations. Among these, social emotions are one of the important motivations for human behavior. Since fairness is one of the basic norms in human society, it is influenced by many social emotions. As a result, future research should explore the impact of social emotions, including both positive social emotions (empathy, gratitude) and negative social emotions (envy, indignation), on fairness-related decision making. Second, reappraisal is a common strategy to regulate emotional response, but this strategy involves reinterpreting the meaning of a stimulus. In studies on fairness-related decision making, responders can adopt an up-regulation strategy or a down-regulation strategy. Responders must evaluate the motivations and behaviors of proposers to decrease the anger or disgust caused by unfair offers (Grecucci et al., 2013b). However, reappraisal may induce other emotions, such as empathy from down-regulation (Gross, 2013).
Future studies should aim to identify the irrelevant emotions aroused by the regulation strategy that may influence fairness-related decision making. Third, some personal traits, such as emotional dispositions (Dunn et al., 2010), social value orientation (Karagonlar and Kuhlman, 2013;Haruno et al., 2014), and personality characteristics (Spitzer et al., 2007;Osumi et al., 2012), may influence personal emotional response and regulation, thus affecting fairness-related decision making. For this reason, we suggest that future studies should explore the possible interaction of personality traits, emotion and unfair offers. Finally, the standard UG paradigm has been widely used in studies on the influence of emotions on fairness-related decision making. Some complex, modified versions of the UG may complicate the context of fairness-related decision making but may nevertheless be more accurate models of real-world situations. For instance, we can place fairness-related decision making in the more complex background of social comparison (Alexopoulos et al., 2012;McDonald et al., 2013), a loss context (Buchan et al., 2005;Zhou and Wu, 2011;Guo et al., 2013), or conditions in which responders perceive the intentions of the proposer (Radke et al., 2012;Ma et al., 2015). As a result, future studies on the influence of emotions on fairness-related decision making should consider ecological validity to make the studies more realistic.

AUTHOR CONTRIBUTIONS

Conceived and designed the study: ZY, YZ, and XL. Literature search and synthesis: CJ and YQ. Wrote the paper: ZY and YZ.
Plasma levels of danger-associated molecular patterns are associated with immune suppression in trauma patients

Purpose
Danger-associated molecular patterns (DAMPs) released as a result of trauma could contribute to an immune-suppressed state that renders patients vulnerable towards nosocomial infections. We investigated DAMP release in trauma patients, starting in the prehospital phase, and assessed its relationship with immune suppression and nosocomial infections.

Methods
Blood was obtained from 166 adult trauma patients at the trauma scene, in the emergency room (ER), and serially afterwards. Circulating levels of DAMPs and cytokines were determined. Immune suppression was investigated by determination of HLA-DRA gene expression and ex vivo lipopolysaccharide-stimulated cytokine production.

Results
Compared with healthy controls, plasma levels of nuclear DNA (nDNA) and heat shock protein-70 (HSP70), but not mitochondrial DNA, were profoundly increased immediately following trauma and remained elevated for 10 days. Plasma cytokines were increased at the ER, and levels of anti-inflammatory IL-10, but not of pro-inflammatory cytokines, peaked at this early time-point. HLA-DRA expression was attenuated directly after trauma and did not recover during the follow-up period. Plasma nDNA (r = −0.24, p = 0.006) and HSP70 (r = −0.38, p < 0.0001) levels correlated negatively with HLA-DRA expression. Ex vivo cytokine production revealed an anti-inflammatory phenotype already at the trauma scene which persisted in the following days, characterized by attenuated TNF-α and IL-6, and increased IL-10 production. Finally, higher concentrations of nDNA and a further decrease of HLA-DRA expression were associated with infections.

Conclusions
Plasma levels of DAMPs are associated with immune suppression, which is apparent within minutes/hours following trauma.
Furthermore, aggravated immune suppression during the initial phase following trauma is associated with increased susceptibility towards infections.

Electronic supplementary material
The online version of this article (doi:10.1007/s00134-015-4205-3) contains supplementary material, which is available to authorized users.

Introduction
The survival of multiple trauma patients has improved significantly during the past decades [1]. However, despite improvements in both traffic safety and pre- and in-hospital management, severe trauma remains a main cause of death among young people worldwide [2]. In 2014, 25,845 people were killed and over 203,500 seriously injured in road accidents in the EU alone [3]. Roughly, trauma-related mortality can be divided into two categories. Early deaths are mainly attributed to neurological damage or severe blood loss directly related to the trauma. The patients that survive the initial trauma often develop nosocomial infections or sepsis [4], representing a significant cause of late mortality in trauma patients. The increased susceptibility of trauma patients to develop infections is mediated by a suppressed state of the immune system that develops after trauma [4][5][6][7][8][9]. Two frequently used hallmarks of the immune-suppressed state after trauma are attenuated production of cytokines by leukocytes stimulated ex vivo with pathogen-associated molecular patterns (PAMPs) such as lipopolysaccharide (LPS), and decreased leukocyte HLA-DR expression [6,8,[10][11][12][13]. Release of danger-associated molecular patterns (DAMPs), which can elicit an immune response very similar to the response to PAMPs from invading pathogens in sepsis [14,15], could contribute to immune suppression in trauma patients. DAMPs can either be actively released by ischemic cells as danger signals or originate from damaged or dead cells as debris [16,17].
An example of a DAMP that can be released in the case of cell damage is mitochondrial DNA (mtDNA), which can trigger an immune response via Toll-like receptor 9 [18,19]. Moreover, heat shock protein (HSP)-70 is released following trauma [20] and has been shown to induce immune cell deactivation [21]. Furthermore, previous studies have indicated that free nuclear DNA (nDNA) in plasma is a marker for cell damage or death, because it is one of the many cell components released if a cell is ruptured [19,22]. Therefore, it might be an indicator of general DAMP release. However, the role of these DAMPs in the immune response after trauma and the possible development of a suppressed state of the immune system is unknown. Taken together, although immune suppression and nosocomial infections are frequently described phenomena in trauma patients, the role of DAMPs that trigger pro- and anti-inflammatory responses remains elusive. The aim of this study was to investigate the release of DAMPs following trauma, starting in the very early (prehospital) phase, and to assess its relationship with immune suppression and nosocomial infections. Parts of this work were presented at the 33rd International Symposium on Intensive Care and Emergency Medicine, held on 19-22 March 2013 in Brussels, Belgium [23] and at the European Society of Intensive Care Medicine (ESICM) Lives Annual Congress, held on 27 September-1 October 2014 in Barcelona, Spain [24].

Study population
Adult trauma patients (n = 166) admitted to the trauma care unit at the emergency room (ER) of the Radboud University Nijmegen Medical Centre were included in the study. Exclusion criteria were an expected risk of blood sampling at the trauma scene (e.g., jeopardizing the clinical handling of the patient), known HIV/AIDS, known malignancies, and use of steroids (all dosages and types of administration) or other immunomodulatory medication prior to the trauma.
Selective digestive tract decontamination (SDD) was administered to all patients who were admitted to the ICU (n = 101), as part of the standard ICU protocol. Therefore, comparisons between ICU patients who did and did not receive antibiotics could not be made. Of the patients who were not admitted to the ICU (n = 65), only seven received (prophylactic) antibiotics; this group size does not allow for statistical analysis. Furthermore, comparing patients that did not receive antibiotics (and thus by definition were not admitted to the ICU) with patients that did receive antibiotics (all ICU patients and the seven non-ICU patients that received antibiotics) does not yield meaningful information, because of major differences in trauma/disease severity, placement of catheters, intubation, etc. The study was carried out in the Netherlands in accordance with the applicable rules concerning the review of research ethics committees and informed consent (CMO2011/380, NL38169.091.11). All patients or legal representatives were informed about the study details at the first opportunity, usually within 1 day after admission. The local ethical committee that approved the study protocol agreed that it was not possible to do this at an earlier stage. Written informed consent was obtained from the patient or his/her legal representative if venipuncture was necessary to obtain blood samples. All determinations and data handling were performed under the guidelines of the National Institutes of Health and in accordance with the Declaration of Helsinki and its later amendments. Lithium heparin (LH) anti-coagulated blood was obtained for ex vivo stimulation experiments as described below, which were performed immediately after sampling. Ethylenediaminetetraacetic acid (EDTA) and LH anti-coagulated blood was centrifuged after withdrawal at 1600×g at 4 °C for 10 min, after which plasma was stored at −80 °C until further analysis.
EDTA plasma for real-time quantitative PCR (qPCR) analysis was centrifuged again at 16,000×g at 4 °C for 10 min to remove potential remaining cells and cell debris. The supernatant was stored at −80 °C until further analysis. Blood for mRNA analysis was sampled in PAXgene blood RNA tubes (Qiagen, Valencia, CA, USA) and stored according to the manufacturer's instructions. Clinical parameters and demographic data were obtained from electronic patient files. Injury severity scores (ISS) were supplied by the Regional Emergency Healthcare Network. Infection within 28 days was defined as the presence of fever and/or other infectious symptoms (pain, swelling, erythema) with leukocytosis and positive cultures and/or another visible or otherwise proven infection focus corresponding to the symptoms of the patient. The attending physicians were blinded to the immune investigation results, as these assays were performed after collection of all samples from each patient.

Plasma DAMP levels
Plasma from doubly centrifuged EDTA anti-coagulated blood was diluted 1:1 with phosphate buffered saline solution (PBS), after which DNA was isolated using the QIAamp DNA Blood Midi Kit (Qiagen, Valencia, CA, USA), using the "Spin Protocol" as described by the manufacturer. Isolated DNA was stored at −20 °C until further analysis. qPCR was performed using iQ SYBR Green PCR Master Mix (Bio-Rad Laboratories, Hercules, CA, USA) on a CFX96 Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, USA). A primer pair specific for the GAPDH gene, which is present in all nucleated cells of the body, was used for quantification of nuclear DNA (nDNA) levels: forward 5′-AGCACCCCTGGCCAAGGTCA-3′, reverse 5′-CGGCAGGGAGGAGCCAGTCT-3′. For quantification of mitochondrial DNA (mtDNA) levels, the following primer pair specific for the mitochondrially encoded NADH dehydrogenase 1 (MT-ND1) gene was used: forward 5′-GCCCCAACGTTGTAGGCCCC-3′ and reverse 5′-AGCTAAGGTCGGGGCGGTGA-3′.
Primer pairs were obtained from Biolegio (Nijmegen, the Netherlands). Samples were analyzed in duplicate and DNA isolated from blood obtained from a healthy volunteer was used on each plate as a calibrator [CV % of 1.48 % (GAPDH) and 0.41 % (mtDNA) between plates]. Plasma nDNA and mtDNA levels are expressed as fold change relative to the calibrator sample using the formula 2^ΔCt. Plasma concentrations of HSP70/HSPA1A were determined batchwise using ELISA according to the manufacturer's instructions (R&D systems, Minneapolis, MN, USA).

Plasma cytokine concentrations

Plasma concentrations of the pro-inflammatory cytokines tumor necrosis factor (TNF)-α, interleukin (IL)-6, and IL-8, and the anti-inflammatory cytokine IL-10 were analyzed batchwise in plasma obtained from EDTA anticoagulated blood using a simultaneous Luminex assay according to the manufacturer's instructions (Milliplex; Millipore, Billerica, MA, USA).

Ex vivo cytokine production

Leukocyte cytokine production capacity was determined by challenging whole blood from the patients with LPS ex vivo using an in-house developed system with prefilled tubes described in detail elsewhere [25]. Briefly, 0.5 mL of blood was added to tubes prefilled with 2 mL culture medium as negative control or 2 mL culture medium supplemented with 12.5 ng/mL Escherichia coli LPS [serotype O55:B5 (Sigma Aldrich, St Louis, MO, USA), end concentration 10 ng/mL]. Cultures were incubated at 37 °C for 24 h, centrifuged, and supernatants were stored at −80 °C until analysis. Concentrations of TNF-α, IL-6, and IL-10 were determined batchwise by ELISA according to the manufacturer's instructions (R&D systems, Minneapolis, MN, USA). Ex vivo cytokine production data were censored at time of infection diagnosis, because infections can induce immune alterations.

HLA-DRA mRNA expression

RNA was isolated from blood collected in Paxgene blood RNA tubes (Qiagen, Valencia, CA, USA). 
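The relative quantification used throughout this study — levels expressed as fold change versus a calibrator (or a reference gene) via the 2^ΔCt formula — can be sketched as follows. The Ct values below are illustrative, not taken from the study.

```python
def fold_change(ct_calibrator: float, ct_sample: float) -> float:
    """Relative quantification by the 2^dCt method.

    dCt = Ct(calibrator) - Ct(sample); each PCR cycle corresponds to
    roughly a twofold difference in starting template. The same formula
    applies when normalizing a target gene (e.g., HLA-DRA) to a
    reference gene (e.g., PPIB) measured in the same sample.
    """
    return 2.0 ** (ct_calibrator - ct_sample)

# A sample that amplifies 2 cycles earlier than the calibrator
# contains roughly 4x as much target DNA.
print(fold_change(ct_calibrator=30.0, ct_sample=28.0))  # → 4.0
```

One design note: a sample identical to the calibrator gives ΔCt = 0 and thus a fold change of exactly 1, which is why calibrator-relative values are unitless.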
cDNA was synthesized from total RNA using the iScript cDNA Synthesis Kit (Bio-Rad, Hercules, CA, USA). Subsequent qPCR analysis was performed using TaqMan gene expression assays (Life Technologies, Paisley, UK) for the reference gene peptidylprolyl isomerase B (PPIB) (#Hs00168719_m1) and HLA-DRA (#Hs00219575_m1) on a CFX96 Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA). We chose PPIB on the basis of its stability in inflammatory conditions in peripheral whole blood [26] and previous use as a reference gene for HLA-DRA [27]. We chose the HLA-DRA gene because it was shown to correlate well with flow cytometric analysis of mHLA-DR [27][28][29], an established marker of immune suppression. HLA-DRA expression levels are expressed as fold change relative to the expression of PPIB in the same sample using the formula 2^ΔCt. HLA-DRA data were censored at time of infection diagnosis, because infections can decrease HLA-DR expression.

Statistical analysis

Data presented in tables and text are expressed as median [interquartile range] and data in figures as geometric mean ± 95 % CI. Mann-Whitney U and Fisher exact tests were used to investigate differences between two groups as appropriate. Differences between patient data at the various time-points and data of healthy controls were tested using Kruskal-Wallis with Dunn's post hoc tests. Differences between time-to-infection curves were tested using log-rank (Mantel-Cox) tests. A Cox proportional hazard model was used to adjust the relationship between HLA-DRA expression and time-to-infection for the usual clinical confounders age and ISS [10]. Correlations were calculated using Spearman correlation. All analyses were performed with available data of the corresponding time-points. As a result of missing values at certain time-points or patients that were lost to follow-up, patient numbers in the analyses vary. 
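The nonparametric comparisons named in the statistical analysis (Mann-Whitney U, Fisher exact, Spearman correlation) could be run with scipy roughly as below. All data here are synthetic and purely illustrative; the variable names are assumptions, not the study's actual script.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic, skewed plasma marker levels in two groups
# (skewness is the usual reason to prefer nonparametric tests)
patients = rng.lognormal(mean=1.0, sigma=0.8, size=40)
controls = rng.lognormal(mean=0.2, sigma=0.8, size=20)

# Mann-Whitney U: compare the two distributions
u_stat, p_mw = stats.mannwhitneyu(patients, controls, alternative="two-sided")

# Fisher exact test on an illustrative 2x2 table
# (rows: group, columns: infection yes/no)
odds_ratio, p_fisher = stats.fisher_exact([[12, 28], [3, 17]])

# Spearman rank correlation between two synthetic markers
x = rng.normal(size=50)
y = x + rng.normal(scale=1.0, size=50)
rho, p_rho = stats.spearmanr(x, y)

print(p_mw < 0.05, round(rho, 2))
```

Reporting medians with interquartile ranges, as the paper does, pairs naturally with these rank-based tests.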
Principal component analysis (PCA) was performed to explore the expected covariation between multiple laboratory variables and their relationship with injury severity, thereby preventing the need to list all individual correlations [30]. No imputation was used, as missing values were judged to be non-random, i.e., blood for [X] was sampled at day [Y] and could therefore not be obtained in patients who died early. Instead, a core data set of variables and patients without missing values was established. Measurements were log-transformed, mean-subtracted, and z-scores were calculated, on which PCA was performed on the basis of the singular value decomposition in a Python script. All other statistical analyses were performed using SPSS statistics version 22 (IBM Corporation, Armonk, NY, USA) and Graphpad Prism version 5 (Graphpad Software, La Jolla, USA). A p value of less than 0.05 was considered statistically significant.

Patient characteristics

A total of 166 patients were included between August 2010 and May 2013, the characteristics of which are listed in Table 1. The majority of patients suffered from head/neck or chest injury.

Plasma cytokines

Plasma TNF-α concentrations in trauma patients were not elevated at any time-point compared with levels found in healthy controls and did not change over time (Supplementary Fig. 1). Plasma IL-6 levels were elevated from time-point ER until day 7 post-trauma (Fig. 1d), while IL-8 levels were slightly but significantly increased compared with healthy controls from time-point ER and remained elevated during the entire follow-up period (Fig. 1e). Both cytokines showed highest levels at day 1. Plasma IL-10 concentrations in trauma patients showed a distinct peak at the ER and remained significantly higher compared with healthy controls until day 1 (Fig. 1f ). 
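The PCA pipeline described in the statistical analysis (log-transform, mean subtraction, z-scoring, then PCA via the singular value decomposition in a Python script) can be sketched as follows. The data here are synthetic stand-ins for the core data set; the eight columns mimic the eight markers entered into the PCA.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in: rows = patients without missing values, cols = markers
X = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 8))

# Log-transform, mean-subtract, and scale to z-scores per variable
Z = np.log(X)
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)

# PCA via singular value decomposition of the standardized matrix
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained_var = s**2 / np.sum(s**2)  # fraction of variance per component
scores = Z @ Vt.T                    # component scores (PC1 = scores[:, 0])
loadings = Vt.T                      # loading of each marker on each component

print(explained_var.round(3))
```

Because the singular values are returned in descending order, the first component (PC1) always captures the largest share of variance, matching how PC1 is interpreted in the results.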
Plasma nDNA levels measured at the ER correlated with plasma IL-8 (r = 0.40, p < 0.0001, n = 121), IL-6 (r = 0.47, p < 0.0001, n = 121), and IL-10 (r = 0.45, p < 0.0001, n = 121) concentrations at the same time-point. Plasma HSP70 levels at the ER correlated with plasma IL-8 (r = 0.40, p < 0.0001, n = 100), IL-6 (r = 0.45, p < 0.0001, n = 100), and IL-10 (r = 0.48, p < 0.0001, n = 100) levels at that time-point.

Immune-suppressed state

HLA-DRA mRNA expression in trauma patients was profoundly suppressed at all time-points compared with healthy controls (Fig. 2a). Characteristics of the patients in this subgroup were comparable to the entire patient cohort (supplementary Table 1). The capacity of leukocytes to produce pro-inflammatory cytokines TNF-α and especially IL-6 upon ex vivo stimulation with LPS was severely suppressed at the trauma scene and during the first days of hospital admission compared with healthy controls (Fig. 2b, c). In sharp contrast, ex vivo production of the anti-inflammatory cytokine IL-10 was augmented in the first days after trauma compared with healthy controls (Fig. 2d). This effect remained evident during the entire 10-day follow-up period (data not shown). Ex vivo TNF-α and IL-6 production at the ER correlated positively with HLA-DRA expression (r = 0.43, p = 0.02, n = 30, and r = 0.58, p = 0.001, n = 30, respectively). This was not the case for ex vivo IL-10 production (r = 0.22, p = 0.25, n = 30).

Relationship between injury severity, DAMPs, cytokines, and HLA-DRA

To comprehensively investigate the relationship between injury severity and the mediators measured, we performed PCA on data of nDNA, mtDNA, HSP70, IL-10, IL-6, IL-8, TNF-α, and HLA-DRA expression at the ER. In concordance with the individual correlations shown, the first principal component (PC1) had high loadings in the same direction for plasma nDNA, HSP70, IL-10, IL-6, IL-8, and TNF-α levels, while HLA-DRA had a smaller negative loading. 
PC1 had a total explained variance of 46 % and correlated with ISS (r = 0.64, p < 0.0001, supplementary Figure 2).

Susceptibility towards infections

Thirty-three patients (20 %) developed an infection during the first 28 days following trauma (characteristics of infected and non-infected patients are provided in supplementary Table 2); injury location was comparable between patients who developed an infection within 28 days and those who did not. However, patients who developed an infection following trauma were more frequently admitted to the ICU, received more transfusions, and required a longer length of stay, both at the ICU and in the hospital. Furthermore, vasopressor therapy and corticosteroid use tended to be higher in patients who developed an infection. The 28-day survival was higher in patients who developed an infection compared with those who did not, likely due to direct trauma-related deaths. Indeed, when analyzing the data of patients who survived the initial phase after trauma, no difference in 28-day survival was observed (supplementary Table 2, lower part). ISS was slightly higher in patients who survived the initial phase after trauma and developed an infection, and these patients were more frequently admitted to the ICU. Furthermore, vasopressor therapy, transfusions, and corticosteroids were more frequently used, and ICU and hospital length of stay were increased in these patients. Plasma mtDNA and nDNA levels at ER were higher in patients who developed an infection within 28 days compared with patients who did not. Previous studies indicate that the change in HLA-DR expression over time better predicts outcome and/or development of infections than absolute values of HLA-DR [10,29,31]. Accordingly, we investigated the relationship between change in HLA-DRA expression (increase or decrease between ER and day 3) and infections in a subgroup of patients from our cohort for which HLA-DRA data were available on these time-points. 
Patients exhibiting a decrease in HLA-DRA expression (ratio <1) more likely developed an infection compared with patients who showed an increase (ratio >1, Fig. 3). The relationship between a decrease in HLA-DRA expression and development of infection remained apparent after correcting for age and ISS (hazard ratio [95 % CI] of 3.02 [1.02-8.93], p = 0.046). Furthermore, ICU and hospital length of stay were increased in patients with decreasing HLA-DRA expression, while other characteristics were not significantly different (supplementary Table 3).

Fig. 4 (caption): Subsequently, DAMPs bind to (intracellular) receptors on immune cells such as macrophages, which induces a predominantly anti-inflammatory response characterized by IL-10 release. In turn, this leads to immune suppression, indicated by decreased monocytic HLA-DR expression as well as reduced production of TNF-α/IL-6 and increased production of IL-10 upon ex vivo stimulation with LPS. Alternatively, DAMPs can exert direct immunosuppressive effects, such as HSP70-induced LPS tolerance in monocytes. All these events take place in the very early (prehospital) phase following trauma. In the hospital, aggravated immune suppression is associated with increased susceptibility towards infections, consequent prolonged ICU and hospital length-of-stay, and increased late mortality.

Discussion

This study demonstrates that multi-trauma patients exhibit a suppressed state of the immune system already at the trauma scene, thus before admission of the patient to the hospital. This is characterized by low HLA-DRA expression and an anti-inflammatory cytokine pattern, both in vivo and ex vivo. Furthermore, we show that DAMPs are present in large quantities in the circulation during the prehospital phase and shortly after admission, and that DAMP levels are associated with the extent of immune suppression. 
Finally, our data demonstrate that further aggravation of immune suppression in the initial phase after trauma is associated with increased susceptibility towards infections. A conceptual representation of how DAMP release may lead to increased susceptibility towards infections following trauma is presented in Fig. 4. The pronounced general release of DAMPs, reflected by plasma nDNA levels, in the prehospital phase of trauma was associated with the immune suppression observed in our cohort of trauma patients. Although an observational study such as the current one does not allow us to draw conclusions concerning cause and effect, our data suggest that DAMPs play a role in the suppressed state of the immune system. There are some data in support of this. HSP70 is known to induce LPS tolerance in monocytes, an in vitro phenomenon showing similarities to in vivo immune suppression [21]. Accordingly, we found an inverse relation between plasma HSP70 levels and HLA-DRA expression. Moreover, previous studies have suggested an immunomodulatory role for mtDNA in trauma patients [18,32]. Of interest, while levels of mtDNA were increased after trauma, this increase was relatively modest. Levels of circulating nDNA were much higher and, unlike mtDNA, correlated with HLA-DRA expression, indicating that mtDNA release is not one of the major factors behind immune suppression in trauma patients. Previous studies that demonstrated much higher mtDNA concentrations in plasma had much smaller patient numbers (n = 15 [18] and n = 38 [32]) and used only a single 1600×g centrifugation step [18,32]. Chiu et al. demonstrated that double centrifugation of plasma (at 1600×g and 16,000×g, as performed in our study) is necessary to remove residual cells, each containing thousands of copies of the mitochondrial genome, making the results from one-spin protocol studies less reliable [33]. One could argue that a difference in injury severity could explain the difference, as Zhang et al. 
included only patients with ISS >25 [18]. However, Lam et al. included a majority of patients with ISS <16 [32], making it unlikely that the lower injury severity in our study population (median ISS of 26) explains the lower levels of mtDNA found. Our study further shows that the early immune response following trauma has a distinct anti-inflammatory phenotype. Plasma levels of the archetypal pro-inflammatory mediator TNF-α were not increased whatsoever and increases in other pro-inflammatory cytokines such as IL-8 and IL-6 were relatively modest and peaked at later time-points. In sharp contrast, the anti-inflammatory cytokine IL-10 was produced rapidly following trauma and already reached peak levels at arrival in the ER. Of interest, in a previous study, trauma patients that were considered "immunoparalytic" based on HLA-DR expression on alveolar macrophages displayed higher IL-10 levels in BAL fluid [34]. IL-10 attenuates the immune response in several ways, e.g., through inhibition of the production of proinflammatory cytokines, such as TNF-α and IL-6 [35]. In our study, the initial IL-10 peak was followed by a peak in IL-6, which reached highest levels at day 1 following trauma. IL-6 is most renowned for its pro-inflammatory properties, although in trauma it is suggested that continuous IL-6 release accounts for the upregulation of anti-inflammatory mediators, such as prostaglandin E2, IL-1 receptor antagonist, IL-10, and transforming growth factor (TGF)-β and thereby also exhibits anti-inflammatory properties [36,37]. These findings indicate that immune suppression sets in directly after the injury. The mechanisms initiating this immediate anti-inflammatory response remain to be elucidated, although these findings are in agreement with the current paradigm of the immune response during sepsis. 
In sepsis, it is now generally accepted that, instead of a previously assumed biphasic inflammatory response, consisting of an initial pro-inflammatory response and a subsequent compensatory anti-inflammatory response, a simultaneously occurring pro- and anti-inflammatory response is present [38]. Others have shown that the production of pro-inflammatory cytokines by leukocytes ex vivo stimulated with LPS is severely attenuated following trauma [11,13]. Herein, we confirm these findings and demonstrate that the production of IL-10 is increased in these patients, with both phenomena already apparent at the trauma scene. This distinct anti-inflammatory phenotype ex vivo in the early phase following trauma corroborates our in vivo findings. In keeping with previous work, our data reveal that HLA-DRA expression is decreased following trauma [6,8,10,12,13]. Importantly, we extend these findings by showing that this event already takes place before hospital admission and that DAMPs are associated with this phenomenon. The increased IL-10 levels early on following trauma might play a role in the decreased HLA-DRA expression, as IL-10 is known to reduce macrophage function. Furthermore, in accordance with an earlier study [13], we demonstrate that low HLA-DRA levels were associated with decreased production of pro-inflammatory cytokines in response to ex vivo stimulation of leukocytes with LPS. Several studies on small cohorts of trauma patients have investigated the relationship between HLA-DR expression and infectious complications [6,8,12,13]. Some have found (trends towards) associations between low HLA-DR expression and infections [6,13], while others have not [12]. One study showed that reduced expression of HLA-DR on alveolar macrophages, but not on circulating leukocytes, was associated with nosocomial pneumonia [8]. 
However, concerning the relation to outcome and/or development of infections, studies in trauma patients, septic patients, and in a cohort of ICU patients with various conditions have revealed that recovery of HLA-DR, rather than absolute values, is important [10,29,31]. In keeping with this, we found that a further decrease of HLA-DRA expression between admission and day 3 predicts development of infections. Taken together, these data suggest that aggravated immune suppression following the initial hit increases the risk of infection after trauma. Nevertheless, the anti-inflammatory phenotype present directly after trauma might also have beneficial effects through limiting excessive inflammation and thereby organ damage. As such, whether this phenotype is solely detrimental or has homeostatic features as well remains to be determined. Our study has several limitations. First, inherent to this type of study, a substantial number of patients were lost to follow-up, e.g., because of discharge from the hospital or transfer to another hospital (in most cases due to recovery), or death (although mortality was low in our cohort). Therefore, if alterations in parameters observed initially in patients improve in those who recover, this could be missed. However, this does not affect the main conclusions of the manuscript as these are based on data obtained at early time-points and/or data of a subgroup of patients with a follow-up of several days. Another weakness of the current work, typical for the multi-trauma patient population studied, is the heterogeneity of the patients. Second, the use of plasma nDNA levels as a marker of general DAMP release could be debated, as it is possible that specific DAMPs display other release or clearance patterns and do not necessarily follow plasma nDNA concentrations. Future studies focusing on the extensive range of DAMPs important in trauma could shed more light on this phenomenon and the importance of individual DAMPs in trauma. 
Third, we used expression of the HLA-DRA gene in whole blood leukocytes as a marker of immune suppression, while most studies have used HLA-DR expression on the surface of monocytes determined using flow cytometry for this purpose. Flow cytometric analysis requires rapid analysis after sampling and the constant availability of a flow cytometer, which was not feasible in our setting, especially with regard to the samples obtained at the trauma scene. Nevertheless, the use of gene expression data is a limitation, as post-transcriptional effects can also affect HLA-DRA expression. Furthermore, next to monocytes, other cells present in whole blood may also express the HLA-DRA gene to various extents and/or may exhibit different kinetics of expression, although little is known on this subject in the context of immune suppression. In several studies in septic patients, a population which is, similar to ours, highly heterogeneous and likely exhibits profound changes in leukocyte counts and differentiation over time, HLA-DRA gene expression was not corrected for leukocyte counts/differentiation [27][28][29][39]. Also, we do not have adequate data on daily leukocyte counts and differentiation in our cohort, as these were not regularly measured in the majority of patients. Nevertheless, the aforementioned studies in septic patients have shown that gene expression of HLA-DRA correlates well with flow cytometric analysis of mHLA-DR, with correlation coefficients ranging from 0.74 to 0.84 [27][28][29]; however, in one of these, a more moderate but still highly significant correlation coefficient of 0.53 was found [39]. Therefore, we feel qPCR analysis of HLA-DRA in our study is a reliable indicator of HLA-DR expression and immune suppression. Yet, we acknowledge that the lack of data on leukocyte counts is a limitation, because, next to possible effects on HLA-DR gene expression data, lymphopenia also represents a hallmark of immune suppression. 
Finally, our control group consisted solely of young male volunteers. Especially with regard to the phenomenon of immunosenescence [40], this could have biased our results. However, when we compared levels of DAMPs and immunological parameters across five age categories [<28 (n = 33), 28-42 (n = 33), 43-56 (n = 34), 57-70 (n = 32), and >70 (n = 32)], or between males and females within our patient cohort, we found no differences in any of the parameters at any of the measured time-points. Also, we did not assess the functionality of the adaptive immune system, e.g., using functional assays such as proliferation or cytokine release by T cells. In conclusion, we demonstrate that trauma results in release of DAMPs and that this is associated with an acute predominantly anti-inflammatory response and a suppressed state of the immune system. In trauma patients, these events take place already before hospital admission and the observed immune suppressed state is not preceded or accompanied by a pronounced pro-inflammatory phase. Aggravated immune suppression, as indicated by further decrease of HLA-DRA expression, is associated with the development of nosocomial infections in this patient population.

Electronic supplementary material

The online version of this article (doi:10.1007/s00134-015-4205-3) contains supplementary material, which is available to authorized users.
A Comparative Study of Big Data Marketing and Traditional Marketing in the Age of Internet

As the Internet age develops and matures, the big data marketing strategies of industrial and commercial enterprises are also developing and flourishing. Internet applications give businesses the possibility of developing marketing strategies in real time. This paper introduces enterprise data collection strategies and marketing strategies, and gradually uncovers the differences between traditional and new marketing methods and the novel characteristics of the latter. In today's era of highly complex marketing methods, industrial and commercial enterprises that do not forge ahead will be eliminated by the times.

Introduction

When people use the Internet and social media, a large amount of application data is generated. The types of application data are varied. Compared with the data generated by earlier Internet technologies, today's data types are very complex, including text, images, audio, and video. Many businesses pay no attention to the production of these data, do not understand the main implications of big data [1][2], and do not realize the importance of marketing with their data. One of America's great scientists believes that advanced technology has made big data marketing strategies possible, and that this kind of marketing strategy will reveal the unlimited value of an enterprise [3].

Enterprise marketing data collection strategies

Industrial and commercial enterprises should establish a suitable marketing data collection system. Such a system can help an enterprise formulate appropriate marketing strategies and thereby earn more profit.

Collection of user characteristics and analysis of behavior data

Every network user leaves corresponding traces while surfing the Internet. 
These traces generate information about the characteristics of the user, including the user's name, gender, and occupation. Business enterprises should collect these characteristic data promptly and analyze users' behavior data in detail, as both are of great help to the marketing strategy of the enterprise.

Finding reliable means of access

Characteristic data can be collected from inside or outside an enterprise. Internal characteristic information includes transaction information and user information [4]. External information includes social media and various network platforms. Some enterprises believe that internal data collection is reliable, while others believe that external data collection is reliable. In fact, both are right: data collection should be based on the actual situation.

The traditional marketing strategy of industrial and commercial enterprises

The traditional marketing strategy is single in form, but it comes in many varieties.

Customer profile analysis

Through feature data, the traditional marketing strategy can analyze the main characteristics of customers, including user habits and preferences. Accurately positioning customers enables enterprises to serve them better; this is the importance of the enterprise's characteristic analysis.

Analysis of the consumer's propensity to consume

Based on the characteristic analysis, we may also determine a customer's concrete consumption tendencies [5]. Both Taobao and Jd.com have used this analysis of consumer trends. By predicting customers' consumption tendencies, business enterprises can provide different personalized services according to the needs of different users.

Loyalty analysis

The main object of loyalty analysis is the customer. Some customers prefer a particular brand's product. 
Such a customer can be considered loyal to the brand. Using this kind of analysis, business enterprises can identify their own brand-loyal users and, based on their spending habits, provide marketing strategies accordingly.

Analysis of potential customers

A potential customer is someone who wants to buy a product; such a person is a potential customer of the company that makes the product. In fact, every business has many potential customers.

Being data-driven in traditional marketing refers to the use of simple statistical methods and statistical experience to carry out marketing activities, whereas big data marketing advocates marketing strategies that use complex algorithms and analytics techniques. The data type of traditional marketing is single; we can think of it as regular structured data. The data types of big data marketing are complex, including structured, semi-structured, and unstructured data. In terms of data volume, the data utilization rate of traditional marketing is very small, while big data marketing has a high data usage rate. Traditional marketing takes a long time [6] and cannot change plans in real time; big data marketing has a short cycle and can change strategies in real time (see Table 1). Traditional marketing typically involves mostly manual and basic electronic equipment operations and is not very automated, whereas big data marketing achieves semi-automatic or fully automatic marketing strategies through data algorithms and different levels of automation. In addition, the degree of personalization of traditional marketing is not high: it does not provide a personalized user experience, nor can it predict the effects of actual activities or analyze the importance of data protection. Compared with traditional marketing, big data marketing is very advanced. 
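The loyalty analysis sketched above — flagging customers who repeatedly buy a given brand and targeting them accordingly — could look like the following in pandas. The column names, the sample transactions, and the 3-purchase threshold are all assumptions for illustration, not part of the paper.

```python
import pandas as pd

# Hypothetical transaction log: one row per purchase
orders = pd.DataFrame({
    "customer": ["ann", "ann", "ann", "bob", "bob", "cara"],
    "brand":    ["X",   "X",   "X",   "X",   "Y",   "X"],
})

# Count purchases per customer per brand
counts = (orders.groupby(["customer", "brand"])
                .size()
                .rename("purchases")
                .reset_index())

# Label a customer "loyal" to brand X after 3+ purchases
# (illustrative threshold; a real system would tune this)
loyal = counts[(counts["brand"] == "X") & (counts["purchases"] >= 3)]
print(loyal["customer"].tolist())  # → ['ann']
```

In practice the same grouped counts could feed the propensity and potential-customer analyses as well, with the threshold replaced by a predictive model.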
The main advantages of big data marketing in the Internet age

Compared with traditional marketing strategies, big data marketing has very broad advantages in both development and use.

More detailed analysis of customer characteristics

Traditional feature analysis is crude, and therefore inaccurate, due to the lack of necessary data. The new data marketing strategy solves this problem well: it upgrades the feature analysis function and analyzes customer characteristics to different degrees along different dimensions.

The establishment of personalized services

Network users leave their own characteristic information in the process of searching the web. Enterprises can use this information to predict users' individual needs and characteristics. Using data marketing analytics technology, they can build personalized services for different customers. A good user experience is the primary goal of big data marketing.

Market forecasting and product improvement

Based on different feature analyses, a business enterprise can forecast the market situation of different products. According to different market requirements and the actual situation, data marketing collates data to support the various functions of the enterprise. Moreover, according to different market requirements, data marketing enables product improvements that help a product gain corresponding market control.

Conclusion

The traditional marketing strategy of industrial and commercial enterprises is indeed a relatively high-quality marketing means. However, in order to keep pace with the rapid progress of today's enterprises, the big data marketing strategy is the main marketing tool we must explore. Only in this way can enterprises constantly update and develop.
Monitoring patient care through health facility exit interviews: an assessment of the Hawthorne effect in a trial of adherence to malaria treatment guidelines in Tanzania

Background

Survey of patients exiting health facilities is a common way to assess consultation practices. It is, however, unclear to what extent health professionals may change their practices when they are aware of such interviews taking place, possibly paying more attention to following recommended practices. This so-called Hawthorne effect could have important consequences for interpreting research and programme monitoring, but has rarely been assessed.

Methods

A three-arm cluster-randomised trial of interventions to improve adherence to guidelines for the use of anti-malarial drugs was conducted in Tanzania. Patient interviews were conducted outside health facilities on two randomly-selected days per week. Health workers also routinely documented consultations in their ledgers. The Hawthorne effect was investigated by comparing routine data according to whether exit interviews had been conducted on three key indicators of malaria care. Adjusted logistic mixed-effects models were used, taking into account the dependencies within health facilities and calendar days.

Results

Routine data were collected on 19,579 consultations in 18 facilities. The odds of having a malaria rapid diagnostic test (RDT) result reported were 11 % higher on days when exit surveys were conducted (adjusted odds ratio 95 % CI: 0.98-1.26, p = 0.097), 17 % lower for prescribing an anti-malarial drug to patients with a negative RDT result (0.56-1.23, p = 0.343), and 27 % lower for prescribing an anti-malarial when no RDT result was reported (0.53-1.00, p = 0.052). The effect varied with time, with a U-shaped association over the study period (p < 0.001). We also observed a higher number of consultations recorded on days when exit interviews were conducted (adjusted mean difference = 2.03, p < 0.001). 
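The adjusted odds ratios above come from logistic mixed-effects models accounting for clustering by facility and day. As a simplified sketch only — ignoring the clustering and covariate adjustment, so an assumption rather than the trial's actual analysis — an unadjusted odds ratio with a Woolf (log-scale) 95 % confidence interval can be computed from a 2×2 table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf 95% CI for the table
    [[a, b], [c, d]]: rows = exposure (exit survey day yes/no),
    columns = outcome (RDT result reported yes/no).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only, not the trial's data
or_, lo, hi = odds_ratio_ci(a=450, b=550, c=400, d=600)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A confidence interval whose lower bound sits just below 1 (as for the RDT-reporting indicator, 0.98-1.26) corresponds to a p value just above 0.05, matching the borderline p = 0.097 reported.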
Conclusions Although modest, there was some suggestion of better practice by health professionals on days when exit interviews were conducted. Researchers should be aware of the potential Hawthorne effect, and take into account assessment methods when generalising findings to the ‘real world’ setting. This effect is, however, likely to be context dependent, and further controlled evaluation across different settings should be conducted. Trial registration ClinicalTrials.gov: NCT01292707. Registered on 29th January 2011. Electronic supplementary material The online version of this article (doi:10.1186/s12879-016-1362-0) contains supplementary material, which is available to authorized users. Background Observation of clinical consultations is an important and frequently used tool to assess the quality of care, but the process of observation may itself change how clinical staff behave. This effect is generally referred to as the "Hawthorne effect", from the industrial experiment of the 1920s where workers' productivity increased with every change made to the working conditions [1,2]. Patient exit interviews involve an assessment of the patient as they leave a health facility. Typically a researcher will be stationed outside a health facility and will ask a series of questions, and possibly repeat a physical examination and/or clinical investigations, as patients leave the health facility. It is assumed that recall of the details of procedures in the health facility will be better when asked so soon after the consultation compared to retrospective alternatives. If conducting such exit interviews affects the consultations, data collected through exit interviews could give a distorted picture of "real-life" consultations, with implications for programme evaluation and implementation. The importance of the Hawthorne effect has been widely discussed in the literature [3][4][5] but has rarely been rigorously assessed [6].
In this paper, we document an assessment of the Hawthorne effect when exit interviews were conducted, as part of a randomised trial evaluating malaria diagnostic and treatment training in Northern Tanzania, by looking at routinely recorded health information. We sought to investigate the following hypotheses: i) documented malaria diagnostic and treatment practices differed when patient exit interviews were conducted, ii) the difference also affected the recording of routine information not related to the trial outcomes, and iii) there were changes in the difference over time, as health workers became used to the exit interviews. Trial setting Data for this study were derived from the Targeting Artemisinin Combination Trial (TACT), a 3-arm cluster-randomised trial of different training interventions to improve the use of malaria rapid diagnostic tests (RDTs) among health workers in primary care facilities [7,8]. The trial took place in two districts in northeast Tanzania in 2011-2012. This analysis was based on data collected in the Kilimanjaro region, a predominantly rural district with relatively low malaria transmission and peak transmission seasons in April to June, and November to December. Participating primary care facilities (clusters) were randomised to one of three intervention arms. All prescribing health workers at the study facilities received the standard two-day national training on RDTs, where they were taught how to perform a RDT, and the recommended prescription practice (Artemisinin-based Combination Treatment (ACT) for a positive test result, and no anti-malarial for a negative test result) [9]. In addition, health workers from the two intervention arms participated in three sessions of interactive training, aimed at reflecting on the change in practice and making it sustainable. The third arm also included the distribution of posters and patient leaflets to enhance demand for RDTs.
The primary outcome of the trial was the proportion of patients with a non-severe, non-malarial illness being prescribed an approved antimalarial drug in a consultation for a new illness episode. Routine records Health workers in primary care facilities in Tanzania are expected to keep a register of all their consultations. This Health Management Information System (HMIS, called MTUHA in Tanzania) includes a ledger where the health worker is supposed to record each patient's details, diagnoses and treatments. Each health worker has his or her own book. Records are aggregated and reported to the district level each month, with a summary of the number of patients seen by age (less than or over five years) and first diagnosis. As part of the trial, MTUHA records were modified to include information on fever and RDT result. Data collection Trial outcomes were measured using patient exit surveys on 2 randomly-varied days per week throughout the trial. An exit survey interviewer was recruited from the nearby population using criteria of literacy and availability and given 2 days of training on site. On each day of the exit survey the interviewer notified the health staff of their presence. Survey dates changed occasionally from the initial schedule, due to practical issues such as weather conditions or interviewer availability. The dispensary MTUHA register was inspected at the end of each survey day to extract the RDT result for each patient (identified on the basis of name, age and order seen), which served as a secondary source to validate exit interview information. Each dispensary was visited every four to six weeks by a research assistant to check on RDT and other essential supplies and to take a photograph of pages in the clinic register since the last visit. Health workers were informed of this and were told the RDT result was to be extracted. A sample of the register data photographs was selected for data entry.
Samples were taken from one of the trial regions (Kilimanjaro), from three pre-defined time periods of two to three months, to represent the beginning, middle and end of the one-year trial. The MTUHA data needed for this study were single-entered into MS Access (Microsoft Corp, Redmond, WA). Data from the Monday before the first exit survey and the Friday after the last survey (for each health facility) were included in this analysis. Measures definition The main factor of interest, exit survey interview, was defined by at least one record in the TACT exit survey interview database for a given day, for the respective health facilities. We thereafter use the term Hawthorne effect to refer to the differences in indicators on survey compared to non-survey days. The indicators compared came from the MTUHA register completed by the health worker. Our three primary indicators were defined a priori as follows: i) having an RDT result reported, ii) whether an antimalarial drug prescription was reported for patients with a negative reported RDT result, and iii) whether an antimalarial drug prescription was reported for patients without a reported RDT result. Although ACT was the recommended treatment for malaria, the prescription of other antimalarials (likely due to stock outs of the ACTs) was documented and we included the prescription of any antimalarial in our analyses. Other information from the MTUHA ledger used to assess completeness included the number of records per day and the patient's age, gender, village of origin, previous attendance (during the same year, or for the same health problem in the last 2 weeks) and whether the patient contributed to the national health insurance scheme (subscriber type). Statistical analysis A statistical analysis plan was written and published before initiating the analyses presented here [10]. Data and patients' characteristics were first reported descriptively, overall and by survey and non-survey days.
General characteristics (distribution between health facilities, time period, day of the week (Monday-Friday), and patients' characteristics) were compared between survey and non-survey days. Differences were tested using Wald tests from appropriate hierarchical mixed-effect models [11] for each characteristic, taking into account clustering (non-independence) of data within each health facility, and within each day of data collection. Differences identified (days of the week and study period) were controlled for in the remaining analyses using fixed effects. We investigated a possible Hawthorne effect on the general recording behaviour: the number of records and completeness of general patient information (age, gender, village, previous attendance, subscriber type) were compared between survey and non-survey days. The number of records per day was compared using a mixed-effect linear regression, with a random effect for clustering by health facilities. Completeness of information was compared using three-level random effect models to take into account the clustering by health facilities and by day of recording. When the mixed-effect models did not converge, simpler models with robust standard errors allowing for clustering by health facility were used. The Hawthorne effect on our three primary outcomes was investigated in a similar way, comparing outcomes on survey to non-survey days using a three-level random effect model. For each of the three models, we tested the absence of a differential effect by study arm by allowing for an interaction term between the Hawthorne effect and the two intervention arms combined, compared to the control arm. Our third hypothesis was investigated by testing for an interaction between the Hawthorne effect and time, first defined as the three study periods, then defined as a continuous variable in days from study initiation, and testing for a linear and quadratic interaction.
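The basic quantity reported by these comparisons is an odds ratio for survey versus non-survey days. As a minimal illustration only, the sketch below computes a crude (unadjusted) odds ratio and Wald 95 % confidence interval from a hypothetical 2x2 table; the counts are invented for illustration, and the paper's actual estimates come from hierarchical mixed-effects models fitted in Stata, not from this calculation:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = outcome present on survey days,  b = outcome absent on survey days,
    c = outcome present on non-survey days, d = outcome absent on non-survey days."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the trial's data)
or_, lo, hi = odds_ratio_ci(220, 780, 200, 800)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```

Because the study's models also adjust for day of the week and study period and include random effects for facility and calendar day, a crude odds ratio like this would generally differ from the adjusted estimates reported in the paper.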
To avoid issues of multiple comparisons it was decided a priori to test this hypothesis only on the RDT results recording outcome. Post hoc analyses were conducted to explore this result further, by plotting the change over smaller time periods, and by looking at the interaction between time and the Hawthorne effect on the other two primary outcomes. All statistical tests were two-sided and considered significant at the 5 % level. All statistical analyses were performed with Stata software version 13 (StataCorp, College Station, TX). Ethics The nature and purpose of the trial was explained to participants and written informed consent was sought from heads of the facilities and all health workers. All attendees at study facilities were informed by leaflets and posters in each facility that basic data from their consultation might be recorded for research purposes. This was verbally repeated in the consultation and all subjects were free to refuse with no effect on the services offered. The study was approved by the Ethical Review Boards of the National Institute for Medical Research in Tanzania and the London School of Hygiene and Tropical Medicine (NIMRlHQ/R.8cNol. 11/24 and #5877 respectively). The trial was registered with clinicaltrials.gov (Identifier # NCT01292707). An independent data safety monitoring board monitored the trial and approved its overall statistical analysis plan. Data description Eighteen health facilities contributed to the analysis. A majority (n = 16) were governmental, and two were funded by a mission. Each health facility typically comprised three prescribing staff (range two to four), 75 % (39/52) of them above 45 years old, and 72 % (38/53) were female. Half (24/48) had worked in the facility for more than 10 years. There was an equal number (six) of health facilities from each of the three trial arms.
Each facility had a median number of 85 days where MTUHA data were available, giving a total sample of 1,520 days for analysis, with 691 (45 %) on days when exit-survey interviews were conducted (Table 1). With a median of 11 consultation records per day, a total of 19,579 consultation records were available. A majority of patients were female (57 %), the median age was 12 years with 32 % below five years (Table 1; presence of fever was not part of routine data collection and was not always recorded). Fever was reported in 46 % of the consultations. Table 1 shows details of malaria diagnostic testing and treatment reported in the MTUHA records. RDT results were reported in 20 % of consultations, with 5.2 % reported as positive. When restricted to patients where fever was documented (not reported in table) the proportion with a RDT result was 57 % (2,585/4,521) and 6.8 % (177/2,585) of those were positive. An antimalarial drug prescription was reported in 4.4 % (867/19,579) of consultations. The main antimalarial treatment prescribed was artemether/lumefantrine (ALu) (635/867, 73 %). Characteristics of days with and without exit-surveys Some characteristics differed between days when surveys were conducted and those when no exit survey was done. The proportion of observed survey days per health facility ranged from 36 % to 71 % (p-value = 0.03). There was also a difference by time period (p < 0.001), with a lower proportion of data from surveyed days in the first study period (39 %, vs. 46 % and 51 % in the second and third periods respectively). Survey days were also associated with the days of the week (p < 0.001), with 11 % of surveys taking place on a Thursday, and 26 % on a Friday.
The distribution of patients' gender and age did not differ significantly between survey and non-survey days (p = 0.50 and p = 0.11, respectively). Table 2 shows the median number of consultations per day was 11 on non-survey days and 12 on survey days. Hawthorne effect on general recording Adjusting for time period and day of the week, the difference was significant, with an average of 2.03 more consultations recorded on survey days (p < 0.001). The information recorded also differed on days when exit-surveys were conducted. Although age and gender were rarely missing, there was a possible association with more complete recording on survey days. Recording of village of origin, previous attendance, and subscriber type appeared to differ with surveyed days, although the direction of the difference was not consistent. On survey days, previous attendance appeared, although not significantly, less likely to be missing (Odds Ratio (OR) = 0.54, p = 0.103), whereas village of origin and subscriber type were more likely to be missing (OR = 1.65, p = 0.01, and OR = 1.92, p < 0.001, respectively). Hawthorne effect on malaria diagnostic and treatment practice The comparison of the three primary outcomes between days with and without exit-surveys is reported in Table 2. After adjustment for time period and day of the week, all estimates suggested better practice on survey days, although none were statistically significant (p ≥ 0.052). There was a small non-significant difference for more RDTs being recorded on survey days (OR = 1.11, 95 % Confidence Interval (CI): 0.98-1.26, p = 0.097). The odds of having an antimalarial drug prescribed with a negative RDT result did not significantly differ, with 17 % lower odds on survey days (OR = 0.83, 95 % CI: 0.56-1.23, p = 0.343).
Prescription of an antimalarial when no RDT result was reported was borderline significantly lower on survey days (OR = 0.73, 95 % CI: 0.53-1.00, p = 0.052). There was no indication of effect modification between trial arms (significance of interaction term p = 0.805, p = 0.800, p = 0.604, for the three outcomes, respectively). Change in Hawthorne effect over time We investigated whether the difference between survey and non-survey days appeared to change over time. There was significant heterogeneity of the Hawthorne effect on RDT recording by period, with lower rates of RDT recording on survey days during the second period (OR = 0.76) than in the first and third periods (OR = 1.20 and 1.62, respectively) (Fig. 1). The test for interaction was significant (p < 0.001). When the Hawthorne effect was modelled by a linear and quadratic term for time effect, the quadratic term was significant. The odds ratio for the association between survey days and RDT recording increased by 0.3 % for every 100 squared days (OR = 1.003, p < 0.001). To explore this result further, two post hoc analyses were conducted. The first was to plot the Hawthorne effect by smaller time periods to see the change over time in more detail. The quadratic shape of the change in Hawthorne effect remained, with higher effect at either end of the study period (see Additional file 1). The second post hoc analysis explored the interaction for the two other primary outcomes. Neither of them showed evidence for heterogeneity of the Hawthorne effect by study period (see Additional file 1), with the interaction terms not being significant (p = 0.43 for antimalarial drug prescription with a negative RDT, and p = 0.55 without a RDT result). The suggestion of a possible quadratic effect remained but confidence intervals were wide. In all cases, none of the three outcomes suggested a consistent reduction over time in the trend towards a Hawthorne effect.
Discussion We assessed indicators of case management, which were the subject of the research study and might therefore have been influenced when health staff were under observation. This study did not find strong evidence that the presence of the exit survey altered the prescribing behaviour of health staff. There is an increasing need to capture and monitor the performance of health staff in resource-poor countries as investments in health services increase and the tasks expected of health staff become more complex and diverse. However there are relatively few established methodologies to capture the content of the consultation in primary care settings. One can review routine documentation of the consultation, although the reliability of self-reported practices is uncertain [12]. A commonly used alternative is to observe the consultation directly [13,14]. This may be complemented by a repeat consultation by an "expert" immediately after the consultation of interest [15,16]. These methods have a variety of potential limitations including the cost and practicality of having qualified health professionals to observe or repeat a consultation, and the strong influence that a peer observation may have on health workers. The patient exit survey is an interesting alternative [17][18][19], as it might reduce errors associated with inaccurate completion of routine records and minimise patient recall error by asking about the content of the consultation immediately after its completion. Fig. 1 Hawthorne effect on reporting a RDT result, by study period. Odds ratio of reporting a RDT result for survey days compared to non-survey days. Estimates from a three-level hierarchical model (with health facility and calendar day as random effects) adjusted for day of the week, and stratified by study period. RDT Rapid diagnostic test, CI Confidence Interval
The Hawthorne effect All of these methods have the potential to alter the behaviour of health workers by creating anxiety, raising awareness from the novelty of the situation, or a desire to satisfy the expectations of the researchers. This 'observer effect' is generally referred to as the Hawthorne effect after the studies conducted in the Hawthorne electronics factory in the late 1920s in Illinois, USA [1,2]. Although definitions vary widely, it usually relates to the difference in someone's behaviour when aware they are participating in research, or under scrutiny, as opposed to their behaviour in a more 'natural' setting. Rigorous evaluation of this effect is however limited [6], possibly explained by the complexity of this context-specific and multi-component concept, and also the challenges of measuring it without inducing it. Some studies have looked at the effect of direct observation on medical consultations [20][21][22]. Although the design was usually before-and-after and other factors could have influenced the result, they generally observed differences toward better practices when health workers were being observed. This study is, to our knowledge, the first evaluation of the Hawthorne effect when conducting patient exit interviews. Discussion of findings Our primary results found no strong statistical evidence of important differences in clinical practice on days when exit surveys were conducted, but the differences we found were all in the direction of improved clinical practice on days when exit interviews were performed. The point estimates of effect size are modest, lying between 0.73 and 1.11, but with a lower confidence interval extending down to 0.53 for one of the outcomes. These results have implications for the interpretation of data captured through exit interviews and should be kept in mind when extrapolating data from exit surveys to "real world" practices.
In the case of the TACT trial for example, the proportion of patients "appropriately treated" captured using exit interviews could be an over-estimate. The efficacy estimates could also be affected if the extent of the Hawthorne effect differed across trial arms, but this was not suggested by our analysis. All methods to assess case management have limitations [23] and the most complete overall picture is likely to result from triangulation of the results from a variety of methods. It is pertinent to consider why the Hawthorne effect comes about. It is possible that participants become more attentive to their whole work routine, even for aspects of care which are not under scrutiny. On the other hand, by trying to excel in the practice being assessed, health workers may neglect other aspects of care. In our study we found suggestions of better record-keeping in the MTUHA book on the days where exit-surveys were conducted. This could suggest that consultations were more systematically recorded on the days where an external observer was present. There were also some indications of differences regarding completeness of other MTUHA information; however these results are to be interpreted with caution as the pattern of completion of some of the information remained unclear, and an appropriate statistical model could not always be fitted. The last hypothesis explored in this paper was the change in the Hawthorne effect over time. The initial assumption was that the novelty effect may tend to reduce over time, as participants become used to being observed, and their 'natural' behaviour would return and dominate the observation-conditioned behaviour. The change in the Hawthorne effect over time on our primary outcome (RDT uptake) was not as expected, as a significant decrease, and then increase, in the observer effect was observed (Fig. 1).
In the second period of the study, health workers were significantly less likely to report an RDT result on survey days, for reasons which remain unclear. One hypothesis was that it could be related to the regular visits by the research team to check supplies, after which health workers could have been more motivated to demonstrate good performance (even if this was not the aim of these visits). However visits were regular and do not seem to explain the curvilinear pattern. Seasonal variations in malaria transmission rates did not seem to explain the pattern either. More importantly, however, we did not find any suggestion of a reduction in the Hawthorne effect over time, on any of the three outcomes. Although it is often assumed that any Hawthorne effect would reduce over time we did not find any evidence of this here, and no such effect was actually evident in the original Hawthorne studies [24]. Some other interesting secondary findings include that no evidence was found for differences in the Hawthorne effect between trial arms, which does not support the idea that health workers in the intervention arms paid more attention to their practice on days when trial outcomes were measured, in order to satisfy the wishes of the investigators [25]. Another issue arising is the difficulty of working with routine data, particularly when coming from a handwritten book, then transferred into a database via photographs. Not all book pages could be recorded, and some patterns of information availability were surprising; for example the recording of the "village of origin" was completely missing on some days, and completely recorded on others, without a clear explanation (such as different health workers, or variations in book format or workload). Electronic routine data recording could facilitate access and improve consistency, and recently introduced integrated systems of RDT reading and recording may also offer useful benefits [26].
Generalisability It seems likely that the Hawthorne effect is sensitive to the context of the study and our findings may not apply to other settings or methodologies. Our study was conducted in health facilities participating in a randomised trial, in one region of Tanzania, representing a very specific context. However use of exit surveys is common and the findings have some wider implications. There are clearly some specific conditions that are likely to modify the Hawthorne effect and these include any situation where some level of reward or sanction could result from the result of the study, or at least where it is expected as such by the health worker. The perception of an exit survey conducted as part of a trial may well be different from one conducted as part of a national monitoring programme. In addition it seems that more intense observation, such as might occur with a researcher actually observing the consultation or where the consultation is replicated by an expert, could also be expected to result in modified behaviour and our results are unlikely to apply to these situations. Having the interviews performed by a trained non-health professional from the community may have reduced the fear of judgment for the health workers. Limitations The study has a number of limitations. Firstly, there was some knowledge among health staff that their routine records would be reviewed, although they were informed that this would be primarily to document the RDT result. All staff were reassured that the results of the study would only be accessible to research staff and that data on individual health facilities or staff would not be revealed to anyone outside of the research team and in particular to senior or supervisory staff of the health clinics.
Nonetheless, the trial could have affected the feeling of "scrutiny", and health workers may have paid more attention to their practice even on days when exit surveys were not conducted, which would have reduced the apparent Hawthorne effect. The second major limitation is the reliance on completion of basic records and the assumption that what was written was a true reflection of what was done. This is an inherent limitation of any study that aims to capture health worker performance without access to information obtained from direct observation. However, the main interest of the study was to investigate whether exit-surveys resulted in a systematic difference in recording; we should therefore speak more of differences in 'recording' than differences in actual 'practice'. Data used for this analysis were based on single data entry of photographs of the MTUHA records, and may not reflect the exact content of the book. For example, instances were reported where data could not be entered because the photos could not be read. Across the study periods, the median number of health facilities with data available on any specific day was 15 (out of 18). Again, this should not influence the assessment of the Hawthorne effect results if this is independent of survey days, but could bias the results otherwise (e.g. if health workers paid more attention to readability on days where exit surveys were conducted). Because surveys were conducted on two randomly selected days per week, this design controlled for potential differences, and allows us to attribute the observed difference to the exit interview itself. However the schedule was not always strictly followed (see methods) and other biases could have occurred. We indeed observed differences in survey rates between health facilities, study periods and days of the week. We controlled for these factors in our analysis, but other unmeasured factors could have differed between surveyed and non-survey days and biased our findings.
Another consideration is that what is reported here may not be considered as the whole Hawthorne effect, which would capture any difference in behaviour within and outside the research context. Here we have been able to capture the effect of conducting exit-surveys, but if health workers behave differently in general (even on days not monitored) because of participating in a trial, this would not have been captured here. Conclusion Exit surveys of primary care consultations using staff recruited from the nearby community may have a modest effect on the clinical practice observed. It is important to consider the possibility of a Hawthorne effect when evaluating health interventions or monitoring routine health service provision, and to consider the extent to which this may alter the point estimates generated. Additional file Additional file 1: Change in Hawthorne effect over time, exploratory analyses. (PDF 9 kb)
Analysis of Possibility of the Combination of Affine Cipher Algorithm with One Time Pad Cipher Using the Three-Pass Protocol Method in Text Security The use of the Three-Pass Protocol method in exchanging secret text messages is an effective approach because the two interested parties do not need to share a single key to open the message sent. But there is a possibility that the message cannot be decrypted because the ciphers used are not suitable for each other. The purpose of this study is to show whether the Affine Cipher algorithm with the One Time Pad Cipher, using the Three-Pass Protocol method, can be used to encrypt and decrypt text messages. Introduction Information exchange has existed for a long time. It even starts when a human is in the womb, like a sign that a baby will be born. The baby's body will send signals that are responded to by the mother's body, which will then be translated by the mother's brain to interpret the signals. Communication that occurs from one party to another is part of the exchange of information. At the time of colonialism, information exchange was important, where interested parties tried to convey important information to related parties using various methods. Eventually cryptography was developed, which can be used to manipulate information so that it can only be known by the intended recipient. Information exchange today is a necessity, especially for emergency matters that require speed and security, ensuring that the information sent can be received without intervention from other parties. For this reason, methods are needed that can support and ensure the security of the information in transit. In this study, we analyse the Three-Pass Protocol method with a combination of the Affine Cipher algorithm and the One Time Pad Cipher: can the combination of the two ciphers ensure that the message sent can be received and recovered as the original text by the recipient?
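Before analysing the combination under study, it helps to see why the Three-Pass Protocol can work at all when the ciphers do commute. The sketch below is our own toy illustration, not the scheme analysed in the paper: it uses two additive shift ciphers over the 26-letter alphabet, which commute under addition mod 26, so each party can remove its own layer in any order. (This toy version is not secure in practice; it only illustrates the commutativity requirement.)

```python
import string

ALPHA = string.ascii_uppercase  # A=0 ... Z=25

def shift(text, k):
    # Additive cipher: each letter moves k positions (k may be negative)
    return "".join(ALPHA[(ALPHA.index(c) + k) % 26] for c in text)

plain = "HELLO"
a, b = 5, 11                  # sender's and recipient's private keys
pass1 = shift(plain, a)       # pass 1: sender -> recipient
pass2 = shift(pass1, b)       # pass 2: recipient adds own layer, sends back
pass3 = shift(pass2, -a)      # pass 3: sender removes own layer, sends again
recovered = shift(pass3, -b)  # recipient removes own layer

print(recovered)  # HELLO: addition mod 26 commutes, so the protocol succeeds
```

No key is ever exchanged: each party only ever applies and removes its own key, which is exactly the property the paper tests for the Affine/One Time Pad combination.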
Theoritical Basis Everybody loves secret, and cryptography has it. The word comes from word kryptos means hidden and graphein means writing, both Greek words. [1] And also in information security, cryptography is the study of mathematical techniques itself. [3] misuse by unauthorized parties is something that must be prevented by using cryptographic schemes, even the scheme is made to maintain the desired functionality. [2] Some security aspects that must be fulfilled in cryptography are: Confidentiality, where messages cannot be read or translated by unauthorized parties Integrity, that the message must be intact Authentication, is related to identification, both identifying the truth of the parties who communicate. Non-denial, which prevents communicating entities from denying. [5] Cryptographic algorithms have 3 basic functions, namely, Encryption, is very important where the original message is changed to codes that are not understood. Decryption, is the opposite of encryption, where the message that has been encrypted is converted back to its original form. The key is the key that is used for encryption and decryption. The key is divided into two, namely the secret key and the public key. [4] "No key protocol" is a paper made by Shamir that contains cryptographic implementation without having to do a key exchange. And it was mentioned that research on this method was still lacking to be explored more deeply. This method is called Three Pass Protocol. [7] Affine cipher is a substitute cipher which is a secure cipher where we can choose the values a and b, and then arrange into the formula ϵ (m) = am + b mod 26. while "one time pad cipher" is a cipher that is done by using character shifts. 
We can determine how many characters are shifted using mod 26. [3][6] Research Flowchart Broadly speaking, the flow of this study can be described as follows: Analysis of Current Problems The text messages being exchanged are confidential, so a method is needed that lets the two interested parties avoid exchanging keys when performing encryption and decryption. Generally, when a secret message is sent, a single shared key is needed to open the message. The Three-Pass Protocol method combined with the One Time Pad cipher and the Affine cipher could therefore be a solution. However, the Three-Pass Protocol cannot work if the ciphers used are not mutually compatible for the encryption and decryption process. Analysis Process Encryption of the original message by the sender. The first step is performed by the sender, who will send the original message (plaintext) to the recipient. The sender, using the One Time Pad cipher, must set a key consisting of digits from zero (0) to nine (9). Plaintext: RIDHO Key: 123 After setting the plaintext and key, the encryption process converts each letter to a number using the 26-character alphabet table: R I D H O → 17 8 3 7 14 These numbers are then processed with the formula C = P + K mod 26, with the key digits repeating across the message: C1 = 17 + 1 = 18 mod 26 = 18 = S; C2 = 8 + 2 = 10 mod 26 = 10 = K; C3 = 3 + 3 = 6 mod 26 = 6 = G; C4 = 7 + 1 = 8 mod 26 = 8 = I; C5 = 14 + 2 = 16 mod 26 = 16 = Q. The ciphertext obtained from this processing is Ciphertext = SKGIQ. This ciphertext is then sent to the recipient.
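The worked example above can be checked with a short Python sketch (our own code, not the paper's implementation, assuming the key digits 1, 2, 3 repeat across the message):

```python
def otp_encrypt_text(plaintext, key_digits):
    # One Time Pad (shift) encryption: C = P + K mod 26,
    # with the key digits repeating across the message.
    out = []
    for i, ch in enumerate(plaintext):
        p = ord(ch) - ord('A')
        k = key_digits[i % len(key_digits)]
        out.append(chr((p + k) % 26 + ord('A')))
    return ''.join(out)

print(otp_encrypt_text("RIDHO", [1, 2, 3]))  # SKGIQ, as in the paper
```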
The recipient uses the Affine cipher to encrypt the ciphertext sent by the sender, using the formula C = (7P + 10) mod 26, which yields: GCAOS. In the third stage, this second ciphertext, obtained from the Affine cipher encryption by the recipient, is returned to the sender, who performs a decryption while still using the One Time Pad cipher with the same key; the One Time Pad decryption formula is P = C − K mod 26. Conclusion The conclusions that can be drawn from this research are: a. The Affine cipher and One Time Pad cipher algorithms cannot be combined in the Three-Pass Protocol method. b. Not all ciphers can be combined in the Three-Pass Protocol method. c. The Affine cipher algorithm can encrypt well on its own. d. The One Time Pad cipher algorithm can encrypt well on its own. e. The Affine cipher algorithm cannot decrypt the result of the One Time Pad cipher's decryption.
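The paper's conclusion that the two ciphers cannot be combined follows because the Affine cipher and the shift cipher do not commute: once the Affine layer is applied, the One Time Pad key inside it has been multiplied by 7, so subtracting the original key no longer removes it. A short Python sketch of the full three-pass exchange (our own illustration, using the paper's plaintext, key, and Affine parameters) makes the failure concrete:

```python
A, B = 7, 10            # Affine parameters from the paper: C = (7P + 10) mod 26
A_INV = pow(A, -1, 26)  # 15, the modular inverse of 7 mod 26

def to_nums(s):  return [ord(c) - 65 for c in s]
def to_text(ns): return ''.join(chr(n % 26 + 65) for n in ns)

def keystream(key_digits, n):
    return [key_digits[i % len(key_digits)] for i in range(n)]

plaintext = "RIDHO"
ks = keystream([1, 2, 3], len(plaintext))

# Pass 1: sender applies One Time Pad (shift) encryption.
c1 = [(p + k) % 26 for p, k in zip(to_nums(plaintext), ks)]
# Pass 2: recipient applies the Affine cipher on top.
c2 = [(A * c + B) % 26 for c in c1]
# Pass 3: sender removes the OTP key (OTP decryption).
c3 = [(c - k) % 26 for c, k in zip(c2, ks)]
# Final step: recipient removes the Affine layer.
recovered = [(A_INV * (c - B)) % 26 for c in c3]

print(to_text(c1))         # SKGIQ  (matches the paper)
print(to_text(c2))         # GCAOS  (matches the paper)
print(to_text(recovered))  # DGNTM, not RIDHO: the protocol fails
```

Working through the algebra, each recovered letter equals m + 15 · 6 · k ≡ m + 12k (mod 26), so every nonzero key digit corrupts its position; this residual offset is why the authors conclude the two ciphers cannot be combined under the Three-Pass Protocol.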
2019-09-17T02:57:52.254Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "715da9240cd890c09bed63b382c6ca79b72c5fa8", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1255/1/012028", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "91a6888c8b222a524ccb9ff2b32c51adf11cc0ad", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
238417251
pes2o/s2orc
v3-fos-license
The Spatial Concentration and Dispersion of Homicide during a Period of Homicide Increase in Brazil : This study applies the principles of measuring micro-place crime concentration and the spatial dispersion of crime increase to the geographic unit of cities in Brazil. We identify that a small number of cities account for a large cumulative proportion of homicides, and that during a period of homicide increase 30 cities out of 5570 accounted for the equivalent national increase in homicides. The majority of the 30 cities were not established high homicide cities but instead were new emerging centers of homicide that neighbor high homicide cities. We suggest the findings can be used to better target effective programs for decreasing homicides. Introduction Brazil consistently accounts for one in seven of all recorded homicides in the world, yet represents only 3% of the world's population [1,2]. Over recent decades, homicide levels have increased in Brazil, rising from 48,219 homicides in 2007 to 65,602 in 2017 [3]-a rate of 31.6 homicides per 100,000, compared to a global rate of 6.1 [3,4]. In the wider context of Latin America, Brazil's high levels of homicide are representative of the issues of violence in the region, and because of the significant role the country plays in the region's social development and prosperity, research on homicide in Brazil is valuable for understanding Latin America's homicide issues [5]. National statistics on homicide can, however, mask important geographic variations within a country. For example, in recent years some cities in Brazil such as Fortaleza and Rio Branco have followed the national trend and have experienced increases in homicide. However, several other cities such as São Paulo and Belo Horizonte have not experienced increases and in some cases the level of homicide has decreased. 
Reasons for the variation in homicide levels in Brazil have been associated with differences in social disorganization, opportunities for drug trafficking, exploitation of natural resources, and conflicts between organized crime groups [5][6][7][8][9][10][11][12][13][14][15]. In this paper we build on the research about the variation in homicides in Brazil by analyzing the geographic dynamics of homicide trends across the country, and by doing so illustrate how the methods we use can be applied to examining crime patterns across a country. The methods we use involve an examination of the spatial concentration of homicides in Brazil at the city level and an examination of the spatial dispersion of homicides across these cities during a period of homicide increase. A significant body of research shows that crime spatially varies, and concentrates in a relatively small number of places. This observation is so consistent that Weisburd [16] proposed a law of crime concentration, suggesting that about 1% of places account for a cumulative proportion of 25% of crime, and that about 4% of places account for a cumulative proportion of 50% of crime. Weisburd's law for examining the spatial concentration of crime has a focus towards micro-places, such as street segments within a city. Studies examining micro-place patterns of spatial concentration have then informed the design and targeting of successful interventions that counter these patterns [17,18]. We propose that the process for measuring the spatial concentration of crime can also be applied to the larger geographic unit of cities to help facilitate the empirical examination of crime across a country. Specifically, with regards to the current study, the analysis of the spatial concentration of homicides in Brazil can identify whether only a few cities account for a large cumulative proportion of homicides across Brazil. 
An examination of the spatial concentration of homicides for cities across a country may in turn better inform the targeted implementation of national or state government homicide reduction strategies. A consistent finding from several other studies that have examined the geography of crime is that during periods of crime increase or decrease, a small proportion of places are responsible for these changes in crime, with the places that previously registered the highest levels of crime accounting for the largest changes [19][20][21][22][23][24]. These studies have only focused on examining micro-place or neighborhood patterns of crime. We propose that the techniques used in these studies for examining areas responsible for changes in crime can also be applied to the meso-scale of cities to determine if a national increase in crime is associated with only a small number of cities. Additionally, these techniques can determine whether the cities that previously recorded the highest levels of crime were those where most of the increases in crime were experienced, or whether crime dispersed to other areas. By doing so, we present the first ever findings that apply this combination of techniques of spatial concentration and the dispersion of crime to meso geographic units (Saraiva et al. [25] have examined the spatial patterning of homicides across Brazil over a 10-year period using cities as the geographic unit but did not use the statistical measures of spatial dispersion for calculating and identifying those cities that contributed most to the increases in homicide in Brazil). By applying these methods to homicide, the study reveals findings on patterns of homicide across a country during a period of increase, and more specifically contributes to our understanding of homicide patterns in Brazil.
In the section that follows we discuss homicide trends in Brazil from which we highlight the potential value in examining the geographic dynamics of these trends within the country. In section three we describe the methods used for examining micro-place spatial concentration and the dispersion of crime increase, before describing the data used and how these methods were applied in the current study. Results are presented in section five, followed by a discussion of the findings, limitations of the study, and conclusions. Homicide in Brazil Brazil's high levels of violence are a continually debated topic that shape everyday social interactions in the country [26][27][28][29]. Although violence is manifested in different ways, homicides are the maximum expression of the problem of violence in Brazil. In 2017, a new record of 65,602 homicides for a single year was registered in Brazil, representing more than 179 homicides per day [3]. This placed Brazil as the deadliest country in the world in absolute numbers, with a homicide rate that was 30 times higher than Europe's [30]. Homicides in Brazil are mostly committed by and against young black men living in impoverished conditions [31,32]. High homicide levels also restrict the country's prosperity, from youths killed being lost to the labor market, to its effect on the prices of goods and services [12]. Homicide in Brazil has an annual social cost that is equivalent to 5.9% of the country's Gross Domestic Product, which in 2017 represented USD 110.9 billion [1,12]. Most studies that have examined geographic variations in homicide in Brazil (and Latin American countries more widely) have focused on examining the influence of structural conditions such as social inequality and poverty on these homicide patterns [33][34][35][36]. 
More recently, other studies have additionally highlighted weaknesses in institutional legitimacy, government ineffectiveness, the rule of law, impunity, and development as factors that influence homicide levels [37][38][39][40][41][42][43][44][45]. Violence in Brazil is also considered to be associated with the drugs trade, with Brazil playing an important role in the trafficking of drugs from neighboring countries (such as Colombia) to European, African, Asian, and Australasian markets [46,47]. Coinciding with the role of Brazil as a hub for drug trafficking has been the geo-economic expansion of the largest criminal organizations in the country-the Primeiro Comando da Capital (PCC) and the Comando Vermelho (CV). These groups, previously most present in the southeast and south regions of Brazil, have expanded their activities to the north and northeast of the country with allies based in these regions [12,48]. Collectively, these studies have provided valuable insights into the variation and dynamics of homicide in Brazil. In the current study we specifically examine the geographic dynamics of homicide in Brazil with the aim of generating findings that add to this previous research. Homicides in Brazil are heavily concentrated in urban areas [49]. In Brazil, there are 5570 cities [50]. Previous research using a data sample of 4491 cities found that 16% of Brazilian cities accounted for 73% of all homicides between 1991 and 2010, variations in changes in homicide ranged between increases of 207% and decreases of 26%, with differences in the social disorganization of cities (such as lack of effective formal social control) being the reason for the variations in homicide levels [13]. More specifically, homicides among men aged 20-39 years have been observed to be greatest in highly urbanized municipalities [49], and tend to be lower where Brazil's cash transfer (poverty alleviation) program was present [29,51]. 
The level of homicides also tends to be lower where stronger restrictions on access to and use of firearms is imposed [29,52]. Other researchers have highlighted how certain changing dynamics within Brazil affect homicide levels in the country. This includes changes in economic conditions and urban development that has resulted in cities near international borders and coastal cities experiencing increases in homicide because of changes in their functional status [6]. To help provide a basis for supporting research that examines homicide patterns in Brazil, Waiselfisz [14,15] proposed five categories to define Brazilian cities where increases in homicide have been observed: new poles of growth, border cities, new frontiers, seaside cities, and cities in the Marijuana Polygon. Waiselfisz [14,15] identified that new poles of growth tended to be smaller or medium-sized cities where a process of economic decentralization and development from the late 1990s resulted in investment in this type of city, attracting many people, and with it more opportunities for crime. Border cities are those located near to international borders, and served as gateways for smuggling, and the trafficking of weapons, drugs, and people. New frontiers are cities located in predominantly rural regions, characterized by issues associated with illegal logging and mining, conflicts over land tenure, and the exploitation of local communities often because of large agricultural and national development ventures that demand 'unoccupied' land. Seaside cities are those that have port facilities, making them a hub for outbound illicit drugs and inbound firearms, in addition to attracting tourists who could be vulnerable to crime. The Marijuana Polygon includes cities across a region located at the junction of Bahia, Pernambuco, Alagoas, Sergipe, and Ceará states, responsible for cannabis production in Brazil. 
Ceccato and Ceccato [8] have highlighted that small cities in Brazil have experienced the largest increases in homicide, with Steeves et al. [53] suggesting these increases in violence are associated with economic prosperity in smaller cities. In contrast, several large metropolises in Brazil have experienced decreases in homicide, such as São Paulo where new public safety programs and improvements in policing methods have coincided with decreases in homicide [54,55]. Ingram and Da Costa [10] have also observed that if an area experiences a high level of homicide this can result in an increase in homicides in nearby areas. This synthesis of the research on homicide patterns in Brazil, its spatial distribution and trends, highlights the multi-faceted nature to homicide in Brazil. Homicides in Brazil are not equally distributed across the country, are most present in cities rather than rural areas, and are subject to changing dynamics in the country. Countering the problem of homicide not only requires federal action, but also effective action at state and city levels of government. Prioritizing resources to areas of most need, and tailored program delivery to addressing the conditions that reduce homicides in different settings are considered to be key factors in the delivery of effective homicide reduction programs [56]. To ensure homicide programs in Brazil are effectively implemented requires an appreciation of how homicide patterns spatially vary across the country, especially during periods when homicide levels increase. Methodological developments in examining the geographic concentration of crime and the dispersion of crime during periods of crime increase can offer a means for adding to existing understanding of patterns of homicide in Brazil. Spatial Concentration and Emerging Problem Areas The study of the geography of crime has increasingly focused on the micro-place (such as the street segment) as the geographic unit of analysis. 
A consistent finding from numerous studies on micro-place patterns of crime is that a small number of places are responsible for a large proportion of crime [16,[57][58][59]. Many of these studies have applied the bandwidths suggested by Weisburd [16] for comparing between settings and different crime types. These bandwidths are used to calculate the proportion of places that are responsible for a cumulative proportion of 25% of crime and for a cumulative proportion of 50% of crime. The results from micro-place analysis of crime have then been used to determine where to target interventions for decreasing crime, such as hot spot policing and problem-oriented policing programs [17,18]. Use of the same methodological process for examining micro-place patterns of crime may help to better formalize the examination of the spatial concentration of homicide for larger geographic units, such as cities within a country. When crime increases, attention focuses to identifying the areas where increases have been greatest. Micro-place studies of crime show that when crime increases, the largest increases take place in the areas where crime levels were already high [21]. To help measure and identify the areas that are most responsible for an overall increase in crime, Ratcliffe [23] developed a series of dispersion indices to indicate if an overall crime increase is associated with only a small number of areas or if the increase is a spreading (emergent) problem. Ratcliffe's dispersion measures do not, however, determine if the crime increase is associated with an increase in areas where most crime previously occurred. To address this, Chainey and Monteiro [21] developed the Crime Concentration Dispersion Index (CCDI) to determine whether, during a period of crime increase, areas of high crime concentration were responsible for the increase or if other areas were responsible. 
To date, these measures for crime dispersion have only been applied to micro-place and neighborhood geographic units (because of recent research focus to micro-place analysis of crime), yet their application is suitable for any size of geographic unit, such as cities. The techniques developed by Ratcliffe [23] and Chainey and Monteiro [21], therefore, can be used to determine whether an increase in crime in a country was mainly associated with only a small number of cities being responsible for the increase, and whether cities with the highest levels of crime were mainly responsible for a national increase in crime. In this study, we use techniques that have been applied to examine crime at the micro-place level to the patterns of homicide at the meso-place level (i.e., cities across a country). We recognize that city size will influence the overall homicide level in a city but similar to analysis of crime at micro-place levels, micro-place geographic units (e.g., street segments) substantially vary in size and have provided valuable insights into geographic patterns of crime. In micro-place analysis of crime, we would anticipate that longer street segments would account for a larger number of crimes than shorter street segments. Micro-place analysis of spatial concentration does not normalize crime count data to rates based on geographic unit size, and yet has provided valuable insights about the spatial patterning of crime. We anticipate that the larger cities in Brazil will account for the larger number of homicides, but also anticipate the analysis will provide valuable insights into geographic patterns of homicide across Brazil. Cities in Brazil are based on similar institutional frameworks and share relatively similar cultural heritages [10], which in turn make them suitable units for comparative analysis. 
The research was guided by testing three hypotheses: Homicide is highly concentrated across cities in Brazil; a small number of cities in Brazil were responsible for recent national increases in homicide; the homicide increase in Brazil was associated with cities that previously recorded the highest levels of homicide. We anticipate that the results that are generated from testing these hypotheses provide new insights into the geographic dynamics of homicide in Brazil. To posit these results in the context of previous research that has examined the patterning of homicides in Brazil, we aim to review our results in the discussion section against Waiselfisz's [14] five categories for defining cities in Brazil where increases in homicide have been observed. Data and Methods The unit of analysis was cities in Brazil, of which there were 5570 cities. The number of homicides in each city between 2007 and 2017 was extracted from the Mortality Information System of the Brazilian Ministry of Health [60]. These data refer to the occurrence of violent deaths, including intentional incidents, robberies that resulted in a homicide, and police killings. The data do not indicate the motivation for the homicide, such as if the intentional homicide was associated with the drugs trade and violence between criminal groups. Population data for each city were extracted from the Brazilian Institute of Geography and Statistics for the year 2017 [50]. Cities were organized into five size groupings to determine if patterns of homicide were particular to city size, following the method used by Duarte et al. [49]: Small cities I (≤20,000 inhabitants; n = 3802); small cities II (from 20,001 to 50,000; n = 1103); medium cities (from 50,001 to 100,000; n = 355); big cities (from 100,001 to 900,000; n = 293), and metropolises (≥900,001; n = 17). The number of cities that accounted for 25% and 50% of Brazil's homicides in each year between 2007 and 2017 were calculated. 
This process was performed in Microsoft Excel (following the procedure described by Chainey [61]) and involved arranging the data on homicides in each city for each year into a table and then rank ordering the data for each year from the highest to the lowest number of homicides in each city. Then, the percentage of homicides in each city relative to all homicides was calculated, followed by the calculation of the cumulative percentage for each city, across all the cities. This procedure is the same procedure that is used for examining micro-place crime concentration albeit where the geographic unit of analysis is the street segment. We calculated the level of homicide spatial concentration for each year to determine if this measure changed over time and for a period when homicides in Brazil had increased. We refer hereafter to cities that accounted for 25% of homicides as high homicide cities-HHCs. The study's examination of the dispersion of homicide was focused towards examining the change in crime between the two years for the most recent period that data were available (i.e., 2016 to 2017) for purposes of being up to date, and because 2017 was when there was a peak in the number of homicides observed in Brazil. The spatial dispersion of homicide increase in Brazil was analyzed using Ratcliffe's Dispersion Calculator [23]. The Dispersion Calculator compares the changes in crime between two time periods (t1 and t2, i.e., 2016 and 2017) for each geographic unit in the study area. When a crime increase in a study area has been observed, the Dispersion Calculator determines whether the increase in the study area was related to only a small number of places experiencing a large increase in crime, or whether the increase was associated with smaller increases across a large number of places within the study area. 
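The rank-ordering procedure described above can be sketched in a few lines of Python (our own illustration with made-up counts, not the study's data):

```python
def concentration(counts, threshold):
    """Return the share of cities needed to account for `threshold`
    (e.g. 0.25) of all homicides: rank-order descending, then accumulate
    until the cumulative percentage reaches the threshold."""
    ordered = sorted(counts, reverse=True)
    total = sum(ordered)
    running, needed = 0, 0
    for c in ordered:
        running += c
        needed += 1
        if running / total >= threshold:
            break
    return needed / len(counts)

# Toy data: one dominant city among many low-homicide cities.
homicides = [500, 80, 60, 40, 20] + [5] * 95
print(concentration(homicides, 0.25))  # 0.01: 1 of 100 cities holds 25%
print(concentration(homicides, 0.50))  # 0.03: 3 of 100 cities hold 50%
```

The same routine applied year by year gives the trend in spatial concentration reported in Table 1.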
The Dispersion Calculator works by ordering the geographic units in the study area by the level of crime increase in each unit, and then removes these ordered units one at a time (starting with the unit with the highest crime increase) and removes their respective incidence of crime from the total for the study area. For each iteration, the increase in crime across the study area (based on all remaining geographic units) is recalculated. By doing so, the Dispersion Calculator determines the point at which the removal of the geographic units that experienced the highest increases in crime generates a revised study area change in crime measure that shows no increase in crime across the remaining geographic units between the two time periods. Some other geographic units may have experienced an increase in crime, but these increases would have been smaller and could have been offset by the places where decreases in crime were observed. The Dispersion Calculator generates two indices: The Offense Dispersion Index (ODI) and the Non-Contributory Dispersion Index (NCDI). The ODI is the proportion of geographic units that must be removed from the study-wide calculation before the increase in crime across the study area is transformed to a decrease, or at least a no-change steady state is observed (for more details see Ratcliffe [23]). The ODI is calculated using the following equation: ODI = (n of geographic units that must be removed from the study-wide calculation before the increase across the study area is transformed to a decrease)/(n of all geographic units in the study area) The ODI (ranging from zero to one) determines the smallest proportion of geographic units that alone account for a study area equivalent increase in crime. For example, in a study area that experienced a 20% increase in crime and consisted of 100 geographic units, the ODI is a measure of the smallest proportion of geographic units that alone accounted for the study area's 20% increase in crime. 
For instance, if five geographic units experienced large increases in crime, and the total increase in these five geographic units was equivalent to the study area's 20% increase in crime, the ODI would be 0.05 (i.e., 5/100). If ten geographic units experienced the largest increases that in total were equivalent to the study area's 20% increase in crime, the ODI would be 0.1 (i.e., 10/100). An ODI value close to zero indicates that only a small number of geographic units experienced an increase in crime that was equivalent to the study area's increase. The NCDI (ranging from zero to one) indicates the proportion of other geographic units that are a concern and is a measure of the proportion of the other geographic units in the study area that experienced an increase in crime. The NCDI is calculated using the following equation: NCDI = (n of geographic units in the study area that experienced an increase in crime, but not including those units included in the ODI calculation that experienced the highest increases in crime)/(n of all geographic units in the study area) For example, following on from the example of a study area that consists of 100 geographic units, if 35 other geographic units experienced increases in crime (i.e., in addition to the five geographic units that experienced the greatest increases in crime), the NCDI for this study area would be 0.35 (i.e., 35/100). An NCDI value close to zero indicates the crime increase has not been observed in many other geographic units. To date, although the ODI and NCDI can be applied to any size of geographic unit and any size of study area, they have only been applied to the geographic units of street segments and neighborhoods (for examples see Chainey and Monteiro [21] and Ratcliffe [23]). In the current study we apply these measures to the geographic unit of cities. ODI and NCDI values for homicide were calculated for the whole country of Brazil, and for each city group categorization.
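The iterative removal logic behind the two indices can be summarized in a short Python sketch. This is a simplified illustration of the logic described above, not Ratcliffe's Dispersion Calculator itself, and the function and variable names are ours:

```python
def dispersion_indices(t1, t2):
    """Simplified sketch of the ODI and NCDI logic.
    t1, t2: crime counts per geographic unit in the two periods.
    Units with the largest increases are removed one at a time until
    the remaining study-area total no longer shows an increase."""
    n = len(t1)
    by_increase = sorted(range(n), key=lambda i: t2[i] - t1[i], reverse=True)
    removed = 0
    rem1, rem2 = sum(t1), sum(t2)
    for i in by_increase:
        if rem2 <= rem1:        # no increase left across remaining units
            break
        rem1 -= t1[i]           # remove this unit from the study-wide totals
        rem2 -= t2[i]
        removed += 1
    increasing = sum(1 for i in range(n) if t2[i] > t1[i])
    odi = removed / n
    ncdi = max(0, increasing - removed) / n  # other units that also increased
    return odi, ncdi

# Reproducing the worked example: 100 units, 5 with big increases (+20),
# 35 with small increases (+1) offset by 35 small decreases (-1).
t1 = [10] * 100
deltas = [20] * 5 + [1] * 35 + [-1] * 35 + [0] * 25
t2 = [a + d for a, d in zip(t1, deltas)]
print(dispersion_indices(t1, t2))  # (0.05, 0.35), matching the example
```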
The Dispersion Calculator was also used to identify the specific cities that accounted for the ODI value (i.e., the cities that experienced the highest levels of homicide and that collectively accounted for an increase in homicide that was equivalent to the increase in homicide experienced in Brazil between 2016 and 2017). We refer to these cities hereafter as emerging problematic cities-EPCs. The Crime Concentration Dispersion Index was used to determine if cities where homicide levels were previously high were the cities responsible for the crime increase (following the methodology described by Chainey and Monteiro [21]). The CCDI is the ratio of the homicide increase in EPCs (calculated using the Dispersion Calculator) that were not identified as cities with high homicide levels (i.e., non-HHC EPCs), and the homicide increase in the high homicide cities (i.e., HHCs). For the non-HHC EPCs, the total increase in homicides experienced between 2016 (t1) and 2017 (t2) was calculated, and then averaged per non-HHC EPC. Similarly, for the high homicide cities, the total increase in homicides experienced between t1 and t2 for all HHCs was calculated, and then averaged per HHC. The CCDI is calculated using the following equation: CCDI = (Crime increase between t1 and t2 per non-HHC EPC)/(Crime increase between t1 and t2 per HHC) A CCDI value of less than one indicates that high homicide cities contributed more to the increase than other emerging problematic cities (i.e., the non-HHC EPCs). The closer the CCDI is to zero, the less the need for targeting resources to cities other than HHCs. A CCDI of one indicates that HHCs and other non-HHC EPCs equally contributed to the increase in homicides, meaning that HHCs and these new emerging problematic cities require attention if homicide levels are to be decreased. A CCDI of greater than one indicates new emerging problematic cities (i.e., non-HHC EPCs) contributed more to the increase than high homicide cities. 
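The CCDI ratio can likewise be expressed directly from its definition. The following is our own sketch, not the authors' implementation; the inputs are the per-city homicide increases for the two groups:

```python
def ccdi(non_hhc_epc_increases, hhc_increases):
    """Crime Concentration Dispersion Index (Chainey and Monteiro):
    average increase per non-HHC emerging problematic city divided by
    the average increase per high homicide city."""
    avg_epc = sum(non_hhc_epc_increases) / len(non_hhc_epc_increases)
    avg_hhc = sum(hhc_increases) / len(hhc_increases)
    return avg_epc / avg_hhc

# Toy numbers: a value above 1 means new emerging problematic cities
# contributed more to the increase than the high homicide cities did.
print(ccdi([30, 40, 50], [35, 35, 35, 35]))  # 40/35, about 1.14
```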
To assist in the presentation of the results we also organized the results for cities by the regional geography of Brazil (north, northeast, southeast, south and midwest). We used these regions in order to be consistent with the definition of regions of Brazil that are most commonly used, particularly in studies of homicide in Brazil [12]. Results Homicides were highly spatially concentrated across cities in Brazil. When considering the total number of homicides that were observed between 2007 and 2017, only 15 cities (equivalent to 0.27% of all cities in Brazil) accounted for 25% of all homicides, and 95 cities (equivalent to 1.7% of all cities) accounted for 50% of all homicides. The level of spatial concentration for homicides across cities in Brazil changed very little between 2007 and 2017, increasing from 0.20% to 0.34% of cities accounting for 25% of all homicides (Table 1), even though homicides increased by 36.9% over this period. All the cities that accounted for 25% of homicides between 2007 and 2017, and for each year within this period, were either metropolises or big cities. Table 3 shows the ODI and NCDI values for Brazil. The homicide ODI for this period was 0.005, indicating that a small number of cities (n = 30) accounted for the equivalent national increase in homicides of 5.1% in Brazil between 2016 and 2017. That is, 30 of the 5570 cities in Brazil were identified as experiencing the highest increases in homicide between 2016 and 2017. In these 30 cities, homicides increased from 9975 in 2016 to 13,106 in 2017. Between the same two years, homicides increased in Brazil from 61,531 to 64,660, a 3129 numeric increase and 5.1% increase. The increase of 3131 homicides in the 30 cities that experienced the highest increases was equivalent to the Brazil-wide increase in homicides experienced between 2016 and 2017.
The homicide NCDI for this period was 0.390 and indicated that many other cities (n = 2172) experienced increases in homicide between 2016 and 2017. The ODI and NCDI calculations were repeated for cities within each city size category, with very little difference being observed in the patterns for each category compared to that observed nationally-a small number of cities (in each city size category) accounted for the equivalent increase in homicides between 2016 and 2017, but a large number of other cities (in each city size category) had also experienced increases in homicide. For example, for the city size category of small cities (category II) only 67 of the 1103 cities accounted for the equivalent increase in homicides of 8.5% between 2016 and 2017 in this category (ODI = 0.061), but almost half of the cities in this category (n = 496; NCDI = 0.450) also experienced increases in homicide. The CCDI value for Brazil was 1.129, indicating the national increase in homicides was more associated with cities that were not HHCs, rather than HHCs being responsible for the increase. That is, cities other than the cities where homicide levels had previously been the highest emerged as problematic areas for homicide. The contribution of emerging problematic cities, other than HHCs, to the increase in homicide in Brazil between 2016 and 2017 is further illustrated in Table 4. Between 2016 and 2017, homicides in the high homicide cities in Brazil (n = 19) increased by 6.8%. Other cities that were mainly accountable for the increase in homicides in Brazil between 2016 and 2017 (i.e., non-HHC EPCs) (n = 22) experienced a homicide increase of 56.0%. Table 5 lists the 30 cities in Brazil, by city size category, that accounted for the equivalent national increase in homicides of 5.1% between 2016 and 2017. Each city is listed by its percentage increase in homicide (and the region in which it is located).
Most cities that contributed to the national increase were big cities rather than metropolises. The metropolises listed were all HHCs with the exception of Campinas. Several cities experienced increases in homicide of over 100%, including Barreiras, Gravatá, Gravataí, and Horizonte. The medium-sized cities that were mainly accountable for the equivalent national increase in homicides were all located in the northeast of Brazil. HHCs that were not included in this group of 30 cities are also listed in Table 5 with their percentage change in homicide between 2016 and 2017 (cities in italics in the table were high homicide cities). Figure 2 shows the distribution of the 30 cities that accounted for the equivalent national increase in homicides of 5.1%. Over half of these 30 cities were located in the northeast region of Brazil. Almost all the cities were located along the Atlantic coast; however, it is noted that the majority of urban settlements in Brazil are located along or near the Atlantic coast, reflecting the lack of properly developed transport routes extending into the interior from what is referred to as the Grand Escarpment that dominates much of Brazil's coast [62].

Discussion

Brazil experiences some of the highest homicide levels in the world. In the current study the analysis was guided by testing three hypotheses, the first of these stating that homicide is highly concentrated across cities in Brazil. This proved to be the case, with no more than 20 of Brazil's 5570 cities (0.36%) being responsible for at least a quarter of all homicides in any year. This level of spatial concentration is comparable to the patterns of homicide concentration observed at micro-places (i.e., street segments) within cities across Latin America [58] and suggests a consistency in the spatial concentration of crime across geographic scales. Similar to street segments, cities vary in size.
It was apparent that city size was a factor in determining the cities in Brazil that experienced the most homicides: for each year from 2007 to 2017, at least three-quarters of the high homicide cities in Brazil were metropolises. Studies examining micro-place concentrations of crime do not examine whether street segment length is a determining factor in identifying streets that experience the highest concentrations of crime. We applied this measurement principle for examining the spatial concentration of homicide across Brazil using cities as the unit of study, but recommend further research on the spatial concentration of crime across scales that normalizes for the size of the geographic unit. This could include the examination of crime rates (e.g., crimes per kilometer of street, crimes per 1000 city population) to determine those geographic units that contribute to the highest quartile of crime rates. In 2017, Brazil experienced its highest number of recorded homicides. The second hypothesis we stated was that a small number of cities in Brazil were responsible for the recent national increases in homicide. This proved to be the case. Although 2202 of Brazil's 5570 cities experienced increases in homicide between 2016 and 2017, only 30 of these cities accounted for the equivalent national increase in homicides of 5.1%: in Brazil, there was an increase of 3129 homicides between 2016 and 2017; in the 30 cities that experienced the highest increases in homicide over this period, the total increase was 3131 homicides, equivalent to the national increase; 2172 other cities in Brazil experienced increases in homicide between 2016 and 2017, but these increases were small and offset by the decreases in homicide experienced in many other cities in Brazil. Studies examining micro-places during periods of increase have suggested that areas where crime concentrates are most responsible for the crime increase.
This led us to state, as our third hypothesis, that the homicide increase in Brazil was associated with cities that previously recorded the highest levels of homicide. This was not the case for homicides in cities across Brazil: of the cities identified as HHCs, only eight were part of the group of 30 cities that accounted for the equivalent national increase in homicides between 2016 and 2017. Instead, several medium-sized and big cities that were not established HHCs in Brazil accounted for the largest proportion of this group of 30 cities. The findings from the current study suggest that although recent increases in homicide in Brazil are highly contained to a relatively small number of cities, the spatial concentration of homicide has dispersed from established areas of high homicide to other cities. The dispersion of homicide to new problematic cities does, however, appear to be clustered around several established HHCs. Around Fortaleza, the cities of Caucaia, Horizonte, Maracanaú, Pacajus and Sobral were among the group of 30 cities that accounted for the equivalent national increase in homicides. Around Recife, the cities of Abreu e Lima, Cabo de Santo Agostinho, Gravatá, Ipojuca and Paulista were also among this group of 30. Rio de Janeiro and neighboring São Gonçalo were established high homicide cities, and were joined by Duque de Caxias as a city in this group of 30 when homicides increased in 2017. Other scholars have suggested there has been a 'reorganization of violence' across Brazil, characterized by increases in homicide in the north and northeast regions, and a shift from the largest cities to smaller cities [6,53]. The current study supports these patterns of homicide in Brazil, albeit suggesting that several HHCs have persisted.
As stated in a previous section, Waiselfisz [14,15] proposed five categories to define cities in Brazil where increases in homicide have been observed: new poles of growth, border cities, new frontiers, seaside cities, and cities in the Marijuana Polygon. Based on the results from the current study, we add to this categorization a sixth type of city: neighboring cities. This builds on Ingram and Da Costa's [10] observation that homicides in an area are likely to increase homicides in nearby areas. We define neighboring cities as those that border or are close to established high homicide cities and where conditions are similar for criminal activity to thrive. High levels of homicide in Brazil have dispersed to several cities that border or are close neighbors to high homicide cities. Their proximity to established HHCs is a key influencing factor in why they have emerged as problematic cities for homicide. If these cities were located far from HHCs, it is unlikely they would be centers of homicide. By bordering or being close to high homicide cities, the conditions in the neighboring city are more likely to be similar than if the city was located far away. These conditions include the function of the city in terms of commerce, industry and entertainment, social and economic conditions, the effectiveness of government institutions, and the presence of criminal groups. Additionally, these neighboring cities could be where urban expansion from nearby established cities is taking place. Pressures from population movement and limited investment in welfare and public security in these neighboring cities can create environments for criminal activity to thrive [8,14,29,39].
Ten of the 22 cities identified as accounting for the equivalent national increase in homicides between 2016 and 2017 (and which were not HHCs) could be considered neighboring cities to established HHCs: Caucaia, Horizonte, and Maracanaú because of their proximity to Fortaleza; Abreu e Lima, Cabo de Santo Agostinho, Ipojuca and Paulista because of their proximity to Recife; Ceará-Mirim, which borders the HHC of Natal; and Alvorada and Gravataí, which border the HHC of Porto Alegre. Additionally, the cluster of cities consisting of Vitória, Cariacica and Serra that are included in the group of 30 could also be considered neighboring cities, albeit also being categorized as seaside cities. Pacajus could also be considered a neighboring city, because it borders Horizonte and is close to the established HHC of Fortaleza, albeit also being categorized as a new pole of growth. All these neighboring cities are likely to be similar in the conditions they experience to nearby problematic cities for homicide, which in turn offer similar conditions for criminal activity to thrive, with intentional homicide being the ultimate expression of this criminal activity. Table 6 lists cities using the six categories, suggesting that the majority of emerging problematic cities for homicide in Brazil were neighboring cities, with most others being new poles of growth. We also note that almost half of the neighboring cities were also seaside cities; however, this most likely reflects the large geographic distribution in Brazil of urban settlements along or close to the Atlantic coast. The dispersion of crime can also operate in the opposite manner: if crime decreases in high crime areas, this decrease may disperse to neighboring areas. Established high homicide cities such as Belo Horizonte, Brasilia and Curitiba experienced decreases in homicide of between 12% and 20% in 2017.
No cities that neighbor these cities experienced increases in homicide that significantly contributed to the national increase; instead, they experienced decreases in crime: Contagem, located next to Belo Horizonte, experienced a 28% decrease in homicides; Formosa, located next to Brasilia, experienced a 16% decrease in homicides; and São José dos Pinhais, a city that neighbors Curitiba, experienced a 12% decrease in homicides. Policing and public safety programs that are targeted to the micro-places where crime concentration persists have a significant impact in decreasing crime. These programs involve proactive strategies that aim to address the situational causes of crime (such as deterring criminal activity through the presence of targeted police patrols), alongside changing individual behaviors to reduce recidivism (e.g., restorative justice), and providing alternatives to criminal involvement (e.g., via focused deterrence strategies). When crime increases, identifying the micro-places most responsible for the increase and targeting activities to these places also has a significant overall impact in decreasing crime [21]. Thus, the highly targeted nature of effective intervention implementation is a key factor in their success. Homicide across cities in Brazil shows spatial patterns similar to those observed at micro-places: a small number of places are responsible for a large proportion of homicides, and when homicide increases, a small number of places account for the increase. These patterns provide the opportunity to determine where to target state and national strategies for decreasing homicide, especially during a period of crime increase. Problems of homicide are multi-faceted, but intended program effects can become diluted if not focused where these programs are most necessary. When these programs are effective, there is the potential for their effect to disperse to neighboring areas.
If high homicide levels are not abated, there is the potential for high levels of homicide to disperse to neighboring areas. Additional analysis was conducted to examine the spatial concentration and dispersion of homicide within city groups. This analysis found that only a small number of cities accounted for a large proportion of homicides in each city group, and that a small number of cities accounted for the equivalent increase in homicides within the city-size group. For example, in the small cities I group, 4% of cities accounted for 25% of homicides, and only 65 of the 3802 cities accounted for the equivalent increase in homicides of 7.5% within this city-size group. The CCDI for small cities I for 2016 to 2017 was 1.5, suggesting that cities other than the high homicide cities within this group were most responsible for the homicide increase. The additional analysis within city-size groups of spatial concentration and the dispersion of homicides during periods of recent homicide increases further showed the redistribution of homicides to the north and northeast regions: 26 of the 28 HHCs in the medium city size category were located in the north or northeast regions, and all those cities that accounted for the equivalent increase in homicides of 6.1% in this city-size group were located in these regions. As other scholars have noted, the increase in violence in the north and northeast regions of Brazil is likely to be associated with changes in the drug trafficking dynamics and the disputes this has created between rival criminal groups, the illegal exploration of land, logging and mining, and land tenure-related conflicts in several areas of these regions [8,12,29,48]. Targeted programs and strategies for effective homicide prevention to the small number of cities in each group would likely result in a more significant decrease in homicides than an untargeted strategy. 
Although it was beyond the scope of the current study to examine the variables most associated with the geographic concentration and dispersion of the homicide increase in Brazil (e.g., social inequality and residential stability), programs and strategies that are similarly targeted towards addressing the factors that have created the conditions for violence to thrive (such as improving government effectiveness, reducing impunity, and reducing social inequality) and countering the situational circumstances that create opportunities for crime are likely to be most effective.

Limitations

There are several limitations that may have affected the study. First, Brazilian homicide figures may be underestimated because of problems associated with misclassification, homicides that are never registered because the body is not found, and structural deficiencies in the criminal justice system that lead to the registration of the cause of death as unknown [30,[63][64][65]. According to Cerqueira ([66], p. 42), "it is understood that the homicide rate in the country would be 18.3% higher than the official figures". The number of homicides in Brazil in 2017 was, therefore, more likely to be about 77,000 instead of the 65,602 registered. We do not anticipate this underestimation to have significantly affected the key patterns we observe in the current study. Second, the SIM/MS data on homicide are made available 18 months after the end of the reporting year. Data for 2017 were released in June 2019 and were the most up-to-date data available when the current study was conducted. With regards to methods, using absolute counts drew attention to larger cities. Studies of crime concentration focus on examining where the incidence of crime is greatest, rather than examining crime rates (normalized by population or the size of the geographic unit of study).
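The underestimation adjustment quoted from Cerqueira can be checked with simple arithmetic (a minimal sketch; the 18.3% correction factor is the figure cited in the text):

```python
registered_2017 = 65602             # homicides registered in SIM/MS for 2017
adjusted = registered_2017 * 1.183  # apply the 18.3% underestimation correction [66]
print(round(adjusted))              # 77607, consistent with "about 77,000"
```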
As noted above, we recommend further research examining rates of crime in geographic units, rather than solely counts of crime, to determine whether the patterns observed differ. The research used cities as the geographic unit of study. The data in some cases referred to an area that extended beyond the physical border of the city. We had no control over this, but as the data have been used in several other studies of cities in Brazil, we are confident our results are reflective of the geography of homicides in Brazil at the city level. We also recognize that within cities there is great spatial heterogeneity in social, economic and environmental conditions. This is something that micro-place studies have examined and exposed, and we recognize that the spatial concentration and dispersion of homicide increase is likely to be as acute within cities as the patterns we show across Brazil in the current study. Crime prevention interventions that target micro-places, such as hot spot policing programs, have been found to significantly decrease crime [17]. A motivation for the current study was to examine whether certain cities were more responsible for homicide in Brazil than others and to examine the spatial dispersion of homicide across Brazil during a period of crime increase. The current study adds to our understanding of patterns of homicide in Brazil, and similar to interventions that target micro-places, our results show the potential benefit of targeting strategies and programs to the specific cities that contribute most substantially to the problems of homicide within a country. Since completing the current study, we note that homicide levels in Brazil decreased in 2018 and 2019, but increased in 2020. Our study was motivated to examine spatial patterns of homicide during an overall period of homicide increase (from 2007 to 2017).
We encourage further research that examines whether the findings from our current study, and the methods we use, provide insight into patterns when homicide decreases, and in particular whether the decreases were mostly observed in certain cities. In particular, we encourage use of the typology of cities we add to in the current study (with neighboring cities) and examination of the effect of the Covid-19 pandemic on spatial patterns of homicide in Brazil.

Conclusions

A small number of cities account for a large proportion of homicides in Brazil. This finding matches observations of micro-place patterns of crime concentration. When crime increases in micro-places, only a small number of places usually account for the increase in crime, with these places being where crime was previously concentrated. For cities in Brazil, when homicides increased to record levels in 2017, the results from the current study showed that only a small number of cities accounted for the equivalent national increase in crime. However, most of these cities were not established centers of high levels of homicide. Instead, almost all were smaller than the established high homicide cities, albeit many of these new emerging problematic cities for homicide neighbored high homicide cities, especially those located in the northeast of Brazil. Police and public safety programs targeted to the micro-places where crime is observed to concentrate are known to be effective in decreasing crime, especially to counter recent increases in crime. Although the problem of homicides is multi-faceted, there is potential for improving national and state programs and strategies that aim to decrease homicides by more precisely targeting effective interventions to the cities that account for the highest levels of homicide and that are most responsible for a national increase in homicides.
Depression and Comorbid Illness in Elderly Primary Care Patients: Impact on Multiple Domains of Health Status and Well-being PURPOSE Our objective was to examine the relative association of depression severity and chronicity, other comorbid psychiatric conditions, and coexisting medical illnesses with multiple domains of health status among primary care patients with clinical depression. METHODS We collected cross-sectional data as part of a treatment effectiveness trial that was conducted in 8 diverse health care organizations. Patients aged 60 years and older (N = 1,801) who met diagnostic criteria for major depression or dysthymia participated in a baseline survey. A survey instrument included questions on sociodemographic characteristics, depression severity and chronicity, neuroticism, and the presence of 11 common chronic medical illnesses, as well as questions screening for panic disorder and posttraumatic stress disorder. Measures of 4 general health indicators (physical and mental component scales of the SF-12, Sheehan Disability Index, and global quality of life) were included. We conducted separate mixed-effect regression linear models predicting each of the 4 general health indicators. RESULTS Depression severity was significantly associated with all 4 indicators of general health after controlling for sociodemographic differences, other psychological dysfunction, and the presence of 11 chronic medical conditions. Although study participants had an average of 3.8 chronic medical illnesses, depression severity made larger independent contributions to 3 of the 4 general health indicators (mental functional status, disability, and quality of life) than the medical comorbidities. CONCLUSIONS Recognition and treatment of depression has the potential to improve functioning and quality of life in spite of the presence of other medical comorbidities. 
INTRODUCTION

Epidemiological and clinical studies consistently indicate that depression adversely affects the lives of older adults. The relative contribution to adverse effects is not entirely clear, because depression often occurs in conjunction with other psychiatric illnesses, such as anxiety disorders; somatic symptoms, such as pain; and chronic medical illnesses, such as diabetes. The latter is particularly of concern, because it is often difficult to know whether a particular symptom, such as lethargy, is caused by depression, a coexisting medical illness, or both. Patients with chronic medical illness are known to have a high prevalence of comorbid depression. 1 Furthermore, both major depressive disorder and subsyndromal depression have been associated with increased somatic symptoms, morbidity, mortality, health care utilization, and costs in the presence of comorbidities. [1][2][3][4] Some studies have found that patients with depression have more functional impairment and poorer quality of life than patients with other chronic illnesses. [5][6][7] Furthermore, severity of depressive symptoms is inversely related to patients' health-related quality of life, even after controlling for age, sex, and medical comorbidities. 8,9 Many older persons, however, have more than one chronic illness that may differentially impair health status. Elders with multiple comorbidities may be particularly vulnerable to the debilitating impact of depression. Much of the previous research examining the interconnections between depression, medical comorbidities, and health status has been conducted in restricted settings. It is therefore difficult to compare the impact of depression with that of other chronic medical disorders to inform policy decisions about health care resource allocation.
8 Although researchers have increasingly recognized the importance of including patient-centered measures of health status in outcomes research, a wide variety of concepts and measures have been used, including quality of life, functional status, and disability. Because depression and other illnesses may affect multiple dimensions of health status, simultaneous examination of these may provide a richer understanding. Using baseline data from an intervention study of 1,801 depressed elders, 16 we examined the association of depression severity and chronicity, other comorbid psychiatric conditions, and coexisting medical illnesses with multiple domains of general health status. Our goal was to answer the following question: among older adults with clinical depression, what is the relative association of depression severity and chronicity with functional status, quality of life, and disability compared with comorbid psychiatric illnesses and coexisting medical illnesses?

METHODS

Project IMPACT is a multicenter randomized controlled trial comparing usual care with the effectiveness of collaborative disease management for late-life depression in primary care. 16 Study protocols were approved by the institutional review boards at all sites. All patients signed an informed consent approved by their local institutional review board. Cross-sectional data collected at baseline were used for the analyses described in this report.

Sample

Eighteen participating primary care clinics belonged to a total of 8 different health care organizations in 5 states. Represented were 2 staff model health maintenance organizations (HMO), 2 regions of a large group model HMO, the Department of Veterans Affairs, 2 university-affiliated primary care clinics, and 1 private practice physician group. Recruitment procedures were developed to enroll a sample of depressed older primary care patients that could be identified for a quality improvement intervention under real-world conditions.
16 Each site used 2 methods to identify study participants. The first method consisted of referrals from primary care providers, other staff, or patients themselves in response to clinic promotions of the study. The second method consisted of systematic screening using a 2-item instrument that screened for depression. 17 Inclusion criteria were age 60 years or older, intent to use one of the study clinics as the main source of primary care in the coming year, and a diagnosis of current major depression or dysthymia according to the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID). 18,19 Exclusions included a current drinking problem (a score of 2 or more on the CAGE questionnaire), 20 a history of bipolar disorder or psychosis, ongoing psychiatric treatment, severe cognitive impairment, and acute risk of suicide. A total of 907 of the patients identified by screening and 894 of the referrals enrolled in the study (Figure 1). 21

Measures

Trained lay interviewers conducted in-person structured computerized interviews before randomization. The interviews assessed sociodemographic characteristics, severity of depression symptoms using the Hopkins Symptom Checklist (HSCL-20), 22 chronic depression (depressed or anhedonic more than one half of the days during the past 2 years), presence of panic attacks in the past 4 weeks, 23 neuroticism with a 7-item subscale of the NEO Personality Inventory, 24 posttraumatic stress disorder using a 3-item screening tool, 25 presence of mild cognitive impairment using a 6-item screening tool derived from the Mini-Mental Status Examination, 26 and a history of diagnosis or treatment for common chronic medical problems during the preceding 3 years. Related conditions were collapsed into 11 general categories.
Patients were specifically asked about the following: asthma, emphysema, or chronic bronchitis (chronic lung disease); high blood pressure or hypertension (hypertension); high blood glucose or diabetes (diabetes); arthritis or rheumatism (arthritis); loss of hearing or vision (sensory deficit); cancer excluding skin cancer (cancer); neurological conditions, such as epilepsy, seizures, Parkinson's disease, or stroke (neurological disease); heart disease, such as angina, heart failure, or valve problems (heart disease); chronic back problems, headache, or other chronic pain problems (chronic pain); stomach ulcer, chronic inflamed bowel, enteritis, or colitis (gastrointestinal disease); and chronic problems with urination, chronic bladder infections or prostate problems, or incontinence or inability to hold urine (urinary tract or prostate disease). Outcome measures included the physical component score (PCS-12) and mental component score (MCS-12) of the Rand 12-item Short Form (SF-12); the norm-based scores have a mean of 50 (SD = 10), with lower scores indicating poorer functioning. 27 Quality-of-life score (QOL) was measured by a single-item rating of overall quality of life in the past month on a scale from 0 (about as bad as dying) to 10 (life is perfect). 28 Disability was measured by an index (SDI) derived from the Sheehan Disability Scale, which uses 3 items to assess impairments in work, family, and social functioning. 13,29 The SDI is reported as an average on a 10-point Likert scale (10 indicating inability to carry out any activity).

Analytic Plan

We used an extended hot-deck multiple imputation technique that modifies the predictive mean matching method to impute item-level missing data. The strategy makes use of the well-established framework of multiple imputation, where the goal is to integrate the contribution of missing values into overall estimates of uncertainty.
30 By using hot-deck imputation, imputations were restricted to values that had been observed in other subjects. Rates of item-level missing data were less than 2.5% for all variables discussed in this article. Four baseline interviews were lost at site; an approximate Bayesian bootstrap multiple imputation method was used to impute unit-level missing data for these 4 baseline surveys from screening instruments and subsequent follow-up surveys. SAS Proc MI (SAS Institute, Cary, NC) was used to generate 5 imputed data sets. The MI2 SAS Macro 31 was used to average regression coefficients from the 5 separate mixed-effects linear regression models. Standard errors for the regression coefficients were adjusted to reflect both within-imputation variability and between-imputation variability to achieve proper coverage. 30 Simple descriptive statistics (means and standard errors for continuous variables and percentages for categorical variables) were calculated for each control variable, predictor, and outcome. To determine which variables (sociodemographic, psychological, or medical comorbidities) were associated with the general health status measures, we conducted separate mixed-effects linear regressions using SAS PROC MIXED for each outcome (MCS-12, PCS-12, SDI, and QOL). In this approach, the intercept and slopes of the linear model are treated as either fixed or random effects rather than simply as a set of fixed constants, as in ordinary multiple linear or logistic regression. 32 Sociodemographic factors (age, sex, ethnicity, level of education, marital status) and participating organization were entered first as fixed effects into the models as control variables. Each categorical variable having more than 2 levels was coded as a fixed effect using dummy coding. Joint tests were used to assess the significance of each categorical variable to the model.
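The pooling step described above, averaging coefficients across the 5 imputed data sets and adjusting standard errors for within- and between-imputation variability, follows Rubin's rules for multiple imputation. A minimal sketch, with hypothetical coefficient values (the study itself used the MI2 SAS macro):

```python
from math import sqrt
from statistics import mean, variance

def pool_estimates(coefs, ses):
    """Pool one regression coefficient across m imputed data sets
    using Rubin's rules: the pooled estimate is the mean of the
    per-imputation estimates, and the total variance combines
    within- and between-imputation variance."""
    m = len(coefs)
    qbar = mean(coefs)                # pooled point estimate
    ubar = mean(s * s for s in ses)   # within-imputation variance
    b = variance(coefs)               # between-imputation variance
    t = ubar + (1 + 1 / m) * b        # total variance
    return qbar, sqrt(t)              # pooled estimate and its SE
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations, which is why pooled standard errors achieve proper coverage.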
We did not include recruitment method as a predictor because it was not associated with 3 of the 4 outcomes (PCS-12, MCS-12, SDI) in bivariate tests, and it did not retain significance in multivariate modeling with quality of life. Next, the psychological variables (depression severity, chronic depression, positive screening test for panic disorder, positive screening test for posttraumatic stress disorder, neuroticism, and positive screening test for cognitive impairment) were entered as a set into the models. To determine whether other comorbid chronic medical conditions are associated with further declines in functional status, disability status, or quality of life, we then entered the set of 11 medical comorbidities. Finally, all 2-way interactions between the psychological variables and the control and medical comorbidity variables were examined and retained in the model(s), if significant. The difference in the likelihood ratio chi-square for each model tested the null hypothesis that each additional set of predictors contributed nothing beyond the set(s) of variables entered in the model(s) at earlier steps.

RESULTS

Descriptive statistics are presented in Table 1. The correlations among the 4 outcome measures (MCS-12, PCS-12, QOL, and SDI) were all significant, except for the correlation between PCS-12 and QOL (Table 2). The magnitudes of the remaining correlations were modest, ranging from 0.18 to 0.41, indicating that the 4 outcomes measured related, albeit separate, constructs. The models containing only the sociodemographic variables were significant (P <.001) for all 4 outcome measures (MCS-12, PCS-12, QOL, and SDI). The set of psychological variables was significantly associated with all 4 outcomes (PCS-12, P = .016; MCS-12, P <.001; SDI, P <.001; QOL, P <.001).
The set of medical comorbidities contributed significant effects to the PCS-12 (P <.001) and SDI (P <.001) models, but not the MCS-12 (P = .447) or QOL (P = .071) models. Only 2 interactions significantly contributed to the models. One of the interactions suggests that African Americans with chronic depression have better mental health functioning as measured by the MCS-12 than whites with chronic depression (P = .021). The second interaction suggests that as depression severity increases in patients with heart disease, their quality of life improves. This finding, however, seems somewhat counterintuitive and may be spurious in light of the marginal level of significance (P = .041). Table 3 displays the final models for all 4 outcomes and indicates the significance of the difference for the likelihood ratio chi-square as each additional set of independent variables was entered into the models. Of the control variables, only organization was significantly associated with all 4 outcomes, suggesting differences in case mix at the 8 participating health care organizations. The sex of the patient was significantly associated with PCS-12 and QOL, in that men had better physical functioning than women, but worse quality of life. Level of education and ethnicity were significantly associated with PCS-12; patients with a college education had better physical functional status than those who did not graduate from high school, whereas Hispanics had better physical functional status than whites. Marital status was significantly associated with QOL, suggesting that the quality of life of divorced and widowed participants was poorer than that of married participants. Depression severity was the only psychological variable that was significantly associated with all 4 outcomes. As depression severity increased, quality of life and physical and mental functioning declined while disability increased.
Neuroticism was significantly associated with PCS-12, indicating that as neuroticism increased, physical functioning worsened. Cognitive impairment was significantly associated with physical functioning and disability. Participants with chronic lung disease, diabetes, neurological disease, heart disease, and chronic pain had significantly worse physical functioning and greater disability. Arthritis was significantly associated with both physical and mental functioning, while hypertension and gastrointestinal disease were significantly associated with decreased physical functioning only. Controlling for all other variables, depression severity was the only psychological or medical variable that was significantly associated with all 4 outcomes. Comparison of the standardized regression coefficients for depression severity with those of the medical illnesses indicates, however, that all 8 of the medical illnesses with significant associations (chronic lung disease, hypertension, diabetes, arthritis, neurological disease, heart disease, chronic pain, and gastrointestinal disease) contributed relatively more to physical functioning than depression severity did. Nevertheless, the standardized regression coefficients indicate that depression severity made larger independent contributions to mental health functioning, disability, and quality of life than any of the other psychological or medical variables.

DISCUSSION

We conducted a study of 1,801 elderly primary care patients with clinically severe depression to determine the relative level of association of depression severity and chronicity, compared with psychiatric and medical comorbidities, with quality of life, physical functioning, mental functioning, and disability. We found that depression severity was significantly associated with all 4 indicators of general health status in this diverse sample of depressed elders.
As depression severity increased, quality of life and physical and mental functioning declined, while disability increased. Furthermore, depression severity was significantly associated with all 4 indicators of health status after controlling for sociodemographic differences, other psychological conditions, and 11 medical comorbidities. Although study participants had an average of 3.8 chronic medical illnesses, depression severity made larger independent contributions to 3 of the 4 general health indicators (mental functional status, disability, and quality of life) than the medical comorbidities. The results are somewhat surprising, given the restricted range of depression scores in this sample of elders, all of whom met diagnostic criteria for major depressive disorder or dysthymia. It is important to note that depression severity was significantly associated with both component scores of the SF-12, considering that the scale was constructed from 2 orthogonal factors attempting to distinguish medical and mental health problems. 14 The orthogonal construction artificially tends to limit the effect of a variable on both mental health and physical health components. This finding underscores the devastating impact that depression can have on both emotional and physical functioning in older adults. Unfortunately, depression often goes unrecognized or receives suboptimal treatment in primary care. 33,34 When faced with competing demands for treating multiple chronic illnesses, physicians may give depression less priority for treatment compared with such illnesses as diabetes or arthritis. [35][36][37] The current findings suggest, however, that depression severity is more pervasively associated with quality of life, functional status, and disability in depressed elders than most chronic medical illnesses. This association is important to recognize, because late-life depression can be successfully treated in the primary care setting with proper support.
21,34,38,39 Given that chronic medical illnesses such as diabetes can often be managed only to prevent further decline, depression may well be one of our most treatable chronic illnesses among elders. Indeed, it may be that treatment for depression can lead to more dramatic improvements in functional status, disability, and quality of life than interventions for other chronic illnesses in this age-group. This descriptive study has a number of limitations. The cross-sectional nature of the study makes it impossible to determine causality. Although the sample was recruited from 8 diverse health care organizations, the participating clinics are not representative of all primary care clinics. Although we relied upon self-reports of medical comorbidities, these were validated by medical chart review and automated data for one of the illnesses (arthritis). 40 We did not, however, assess the severity of these comorbidities. The measures of health status were also derived from self-report, but these patient-centered measures have been increasingly recognized as important health outcomes. Despite these limitations, the findings are consistent with previous research, which indicates that depression is associated with declines in a variety of general health indicators. 8 Although often viewed as a sequela of medical illness, late-life depression is also related to a variety of psychosocial factors, including spousal death, role changes associated with retirement, social isolation, and diminished income. Improved recognition and treatment of depression has the potential to improve patients' lives in spite of other medical comorbidities. Future analyses from this study will determine whether multiple comorbid medical illnesses affect patient response to a collaborative treatment program for late-life depression in primary care.
2017-09-26T14:02:55.997Z
2004-11-01T00:00:00.000
{ "year": 2004, "sha1": "9780a58be9189cf6ea82fe3dd063d899c9868b1d", "oa_license": null, "oa_url": "http://www.annfammed.org/content/2/6/555.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9780a58be9189cf6ea82fe3dd063d899c9868b1d", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
4711000
pes2o/s2orc
v3-fos-license
Structural analyses of Arabidopsis thaliana legumain γ reveal differential recognition and processing of proteolysis and ligation substrates Legumain is a dual-function protease–peptide ligase whose activities are of great interest to researchers studying plant physiology and to biotechnological applications. However, the molecular mechanisms determining the specificities for proteolysis and ligation are unclear because structural information on the substrate recognition by a fully activated plant legumain is unavailable. Here, we present the X-ray structure of Arabidopsis thaliana legumain isoform γ (AtLEGγ) in complex with the covalent peptidic Ac-YVAD chloromethyl ketone (CMK) inhibitor targeting the catalytic cysteine. Mapping of the specificity pockets preceding the substrate-cleavage site explained the known substrate preference. The comparison of inhibited and free AtLEGγ structures disclosed a substrate-induced disorder–order transition with synergistic rearrangements in the substrate-recognition sites. Docking and in vitro studies with an AtLEGγ ligase substrate, sunflower trypsin inhibitor (SFTI), revealed a canonical, protease substrate–like binding to the active site–binding pockets preceding and following the cleavage site. We found the interaction of the second residue after the scissile bond, P2′–S2′, to be critical for deciding on proteolysis versus cyclization. cis-trans-Isomerization of the cyclic peptide product triggered its release from the AtLEGγ active site and prevented inadvertent cleavage. The presented integrative mechanisms of proteolysis and ligation (transpeptidation) explain the interdependence of legumain and its preferred substrates and provide a rational framework for engineering optimized proteases, ligases, and substrates. Over the last 20 years, plant legumains attracted increasing attention largely due to their dual protease-peptide ligase function (1)(2)(3)(4). 
Contrasting mammals, plants contain multiple legumain isoforms (5,6). Arabidopsis thaliana encodes four legumain forms, two vegetative-type (AtLEGα and γ), one seed-type (AtLEGβ), and a separately grouped one (AtLEGδ). The vegetative-type legumains, like AtLEGγ, are involved in plant programmed cell death (7,8). This function is especially interesting because plants lack caspases, which are homologous to legumain and serve as key enzymes in mammalian programmed cell death (9). Several studies showed that plant legumains and caspases share the same substrates and inhibitors due to their preference for acidic sequences such as Tyr-Val-Ala-Asp, Val-Glu-Ile-Asp, and Ile-Glu-Thr-Asp (7,10,11). Plant legumains mostly localize to the vacuoles and are, therefore, alternatively referred to as vacuolar-processing enzymes (VPEs) (12). Legumains are synthesized as inactive precursors, or zymogens, with a tripartite domain organization. It comprises an N-terminal asparaginyl endopeptidase domain (AEP), 2 an intermediate activation peptide that blocks access to the active site and thus confers enzymatic latency to the zymogen, and a C-terminal legumain stabilization and activity modulation (LSAM) domain, which renders legumain stable at neutral pH and restricts substrate access to the active site (13). Specific legumain isoforms differ strongly in their peptidase and ligase activities toward certain substrates. For example, of five tested legumains from Helianthus annuus, A. thaliana isoform β, Ricinus communis (castor bean), Canavalia ensiformis (jack bean) legumain, and Clitoria ternatea (butelase-1), only the latter two showed significant ligase activity, whereas the others exhibited only proteolytic activity (14,15). Recently, it has been shown that AtLEGγ is able to efficiently ligate linear peptides (16). Ligation was also reported for legumains from another kingdom of life, in human and mouse legumain (3,17,18).
Ligations are especially interesting when peptides are head-to-tail cyclized, thereby producing a large variety of cyclic peptides. Examples are the potent sunflower trypsin inhibitor (SFTI), one of the shortest cyclic peptides, and kalata B1, a member of the so-called cyclotides (14,15,19). SFTI serves as an ideal model peptide to study cyclization. Due to their special structural properties, such cyclic peptides play important roles in plant defense strategies, including pesticidal, insecticidal, antimicrobial, and nematicidal activities (20-23). They all share common characteristics such as high thermal, pH, and proteolytic resistance, making them attractive drug scaffolds (23)(24)(25)(26). In vivo, precursors of cyclic peptides, like PawS1 of SFTI (26), are ribosomally synthesized and post-translationally modified, e.g. by the formation of disulfide bridges or the removal of signal- or propeptides (27). During its maturation, pro-SFTI is processed twice by legumain. Initially, legumain cleaves and releases a flexible N-terminal propeptide from pro-SFTI (19,27,28). The subsequent cleavage and release of a C-terminal propeptide is accompanied by a head-to-tail ligation, i.e. cyclization, also catalyzed by legumain (19). However, not all peptides are efficiently ligated/cyclized during the second processing step by plant legumain (2,15). Quite apparently, the peptide sequence and structure determine its preference for cleavage or cyclization/ligation, with a strong preference for hydrophobic residues in the so-called P2′ position, which is the second residue after the cleavage site (4,14,15,29,30). For a definition of the nomenclature of the substrate-recognition sites according to Schechter and Berger, please see Ref. 31. However, the specific role of this highly conserved residue remained unclear. Similarly, the detailed reaction mechanism underlying the plant legumain-mediated ligation reaction remains controversial.
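The Schechter and Berger nomenclature referenced here labels residues counting outward from the scissile bond: P4-P3-P2-P1 on the N-terminal (nonprimed) side and P1′-P2′ on the C-terminal (primed) side. A small illustrative helper (hypothetical code, not from the paper) makes the counting concrete, using the SFTI-GL precursor sequence discussed later, which legumain cleaves after Asp14:

```python
def schechter_berger(seq, cut):
    """Label residues around a scissile bond (between seq[cut-1] and
    seq[cut]) with Schechter-Berger nomenclature: ..., P2, P1 | P1', P2', ..."""
    labels = {}
    for i, res in enumerate(seq):
        if i < cut:
            labels[f"P{cut - i}"] = res       # nonprimed side, counting toward the cut
        else:
            labels[f"P{i - cut + 1}'"] = res  # primed side, counting away from the cut
    return labels

# SFTI-GL precursor; cleavage after Asp14 releases the primed Gly15-Leu16
site = schechter_berger("GRCTRSIPPICFPDGL", cut=14)
# site["P1"] -> 'D' (Asp14), site["P2"] -> 'P' (Pro13), site["P2'"] -> 'L' (Leu16)
```

Under this labeling, the "hydrophobic P2′ residue" preference noted above refers to the residue two positions after the cleavage site (Leu16 in SFTI-GL).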
For several plant legumain isoforms, a thioester with the catalytic cysteine was postulated as a critical reaction intermediate (32). This so-called enzyme-acyl complex can either be released by a water molecule (i.e. hydrolysis, classic proteolytic cleavage) or by the nucleophile of an incoming N terminus. In the latter case, a ligated (or cyclized) peptide product is released from the legumain active site (1,14,19). Remarkably, for human legumain, ligation was reported to occur at least partly independent of the catalytic cysteine. Indeed, ligation was enhanced if the catalytic cysteine was blocked, presumably by preventing re-hydrolysis of the ligated peptide bond. Lacking the thioester activation, an alternative activation by the proximal aspartimide (succinimide) was suggested (3,17). The incomplete atomistic understanding of mechanisms and specificities for proteolysis and ligation by legumain also reflects the lack of crystal structure information on the substrate recognition by a fully activated plant legumain, i.e. the catalytic AEP where the C-terminal activation peptide and LSAM domain are released. Here, we report the crystal structure of the peptidase form (AEP) of AtLEGγ in covalent complex with the substrate analogue Ac-YVAD chloromethyl ketone (CMK). The structure maps the important substrate-recognition sites before and after the scissile peptide bond, which are referred to as nonprimed and primed recognition sites (31). Biochemical and computational analyses indicated the importance of cis-trans-isomerization of the ligation product as well as the shielding from the catalytic water molecule.

Delineating the substrate-recognition sites

Especially interesting were the substrate-recognition sites. The nonprimed substrate recognition, i.e.
the substrate binding preceding the substrate's scissile peptide bond, is facilitated by the edge strands βIV and βV and a plant-specific insertion of 7 amino acids (aa) in the so-called c341 loop (13,18) as compared with human legumain (Fig. 1, Figs. S1 and S2; c341 and c381 referring to caspase 1 numbering (13)). The c381 specificity loop, which features a 7-aa insertion compared with mammalian legumain (13) (Fig. S1), also significantly contributed to the nonprimed substrate interaction. Assuming an extended binding mode of the peptide substrate, the primed sites C-terminal to the scissile bond are located on the antiparallel βI-βIII sheet (Fig. 1, Fig. S2).

Disorder-order transition upon zymogen activation

When we analyzed the AtLEGγ structure in complex with the Ac-YVAD-CMK ligand, we found the specificity loops (c341 and c381) and the edge strand (βIV) highly ordered, contrasting the zymogenic structure, as displayed in relative B-factors, which indicate the local flexibility (Fig. 2, a and b) (16). Although the observed flexibility might be influenced by the packing within the crystal lattice, the observed difference was corroborated by two independent molecules in the asymmetric unit for both the active peptidase and the zymogenic structures, minimizing potential influences by crystal lattice contacts (Fig. 2, a and b). For Tyr 307, the change was drastic and particularly functionally relevant, because it defines the S4 substrate-binding site (Figs. 1 and 2). Notably, the main chain interaction of the peptidic substrate (Ac-YVAD-CMK) with the peptidase differed from that previously found for the activation peptide in the zymogenic structure (Fig. 2) (16). In the Ac-YVAD-CMK substrate analogue, there were two major hydrogen bonds, from the carbonyl oxygen of Ser 247 and the amide nitrogen of Gly 249 to the P1 amide nitrogen and the P2 carbonyl oxygen, respectively. The P2 carbonyl oxygen was further anchored by the side chain of Arg 74.
By contrast, the activation peptide in the two-chain structure was out of register and shifted by 2.5 Å in the N-terminal direction (Fig. 2c). This observed shift is critical in rationalizing how the activation peptide can confer enzymatic latency in the zymogen structure: the out-of-register binding, albeit approximately substrate-like, renders the activation peptide encounter complex unproductive and prevents autocleavage of the activation peptide. The out-of-register shift of the activation peptide as compared with a productive peptide binding is mostly caused by Gln 354 rather than the classical Asn (or Asp) in the P1 position, preceding the scissile peptide bond. The additional CH2 group in the Gln side chain displaces its main chain as well as the neighboring P2 residue by ~3.8 Å as compared with the Ac-YVAD-CMK. Conversely, the lack of the canonical substrate interactions resulted in the observed flexibility of the prominent c341 and c381 specificity loops in the zymogenic structure, whereas these loops are highly ordered in the substrate-bound state (Fig. 2, a and b).

Specificity pockets and active-site elements

The covalently bound Ac-YVAD-CMK substrate was clearly visible in the electron density and allowed for an accurate assignment of the nonprimed specificity pockets (Fig. 1, Fig. S2b). The oxyanion hole was formed by the amide nitrogens of Cys 219 and Gly 178 as well as by the imidazole ring of His 177 (Nδ1) (Fig. 1, Fig. S2b). Similarly as reported for mammalian legumain (13,18), P1 Asp substrates are best accepted at pH 4.0 (3,13,33), where the P1 Asp is protonated within the S1 pocket. The protonated Asp P1 carboxylate group was coordinated by Asp 269 and Glu 217 at the bottom, Ser 247 on the upper side ("north"), and Arg 74 and His 75 on the lower side ("south") of the S1 pocket. The P2 Ala interacted hydrophobically with Trp 248.
The P3 Val was constrained by the Cys 252-Cys 266 disulfide bridge and the guanidinium group of Arg 74. The P4 Tyr was surrounded by the two prominent c341 and c381 specificity loops with their central residues Tyr 307 (c381) and the aliphatic part of Glu 255 (c341). We could further identify a potential site for the catalytic water in perfect position to attack a thioester intermediate. The water was coordinated by the catalytic His 177 in proximity to the scissile carbonyl of Asp P1 (Fig. S3, Fig. 6).

Cyclization of SFTI by AtLEGγ

To test whether AtLEGγ can cyclize a modified sunflower trypsin inhibitor precursor peptide (SFTI-GL; 1GRCTRSIPPICFPDGL16), we monitored time-resolved ligation as catalyzed by activated AtLEGγ. SFTI-GL was cyclized to C-SFTI remarkably fast. Already after 1 min, we detected ~1/3 of the precursor (SFTI-GL) being cyclized (C-SFTI) (Fig. 3). After 20 min, conversion of SFTI to its cyclic form was complete, with ~10% each resulting in the linear form (L-SFTI) or not being processed at all (precursor SFTI-GL). This distribution and the absolute amounts remained constant for the tested time interval of 12 h, implying and reflecting the proteolytic resistance of cyclic SFTI (Fig. 3). We observed cyclization only in the presence of AtLEGγ and only if the precursor SFTI carried the primed residues (i.e. the C-terminal Gly 15-Leu 16), which were cleaved off by AtLEGγ (Fig. 3, c and d). Interestingly, we did not find a significant preference for oxidized or reduced SFTI-GL, in agreement with previous reports (15).

Docking of SFTI reveals a canonical substrate-binding mode

To understand how the precursor of SFTI is recognized by AtLEGγ, we performed docking studies guided by the present AtLEGγ-substrate complex structure. The nonprimed substrate-binding sites (S4 to S1) of AtLEGγ served as receptor sites and Asp 14 of SFTI as the P1 ligand residue (cf. Fig. 1).
The docking hits with the lowest free energy of binding were in agreement with a canonical binding and resembled the experimentally determined substrate-binding mode (Figs. 1 and 4). Specifically, we found the carbonyl of P1 Asp 14 to be docked into the oxyanion hole (formed by the amides of Cys 219 and Gly 178 as well as by His 177), and further backbone interactions, such as the amide of Asp 14 (SFTI) with the carbonyl oxygen of Ser 247 and the carbonyl oxygen of Phe 12 (SFTI) with the amide of Gly 249, were all consistent with the experimentally determined substrate-binding mode (Fig. 1). Furthermore, Pro 13 (SFTI) and Phe 12 (SFTI) bound to the S2 and S3 pockets, respectively. Due to the intramolecular disulfide of Cys 11 (SFTI) with Cys 3 (SFTI), Ile 10 (SFTI) occupied the S4 pocket, interacting with Trp 248. Interestingly, the docking program positioned the free N terminus of Gly 1 (SFTI) to form an ionic interaction with Glu 220 close to the catalytic cysteine Cys 219.

Proline 13 (SFTI) switch allows canonical binding of linear substrate and release of the cyclic product

Careful inspection of the docked structures revealed a major difference between the docked linear SFTI and cyclic SFTI at Pro 13 (SFTI), which was switched by ~180° (cis-trans-isomerized) around the Phe 12-Pro 13 peptide bond (Fig. S4). This conformational isomerization might be triggered either: 1) by "pulling" Phe 12 (SFTI) into the canonical S3 backbone interaction or 2) by "pushing" SFTI away from AtLEGγ to avoid steric clashes with AtLEGγ; or a combination of both. Importantly, and contrasting the cyclic SFTI structure (21), the ensemble of NMR solution structures (PDB entry 2AB9) revealed Pro 13 (SFTI) in a wide spectrum of conformations in the SFTI precursor, as did the C-terminal extension, which is cleaved off before cyclization by legumain (2,14,28) (Fig. S5).
Accordingly, cyclization of SFTI is accompanied by the selection of a Pro 13 (SFTI) conformation (21), which is unfavorable for binding to AtLEGγ. To further substantiate this conclusion, we computationally enforced Pro 13 (SFTI) within the cyclic SFTI to canonically interact with the S2 site, thereby also inducing proper interaction of Asp 14 (SFTI) with the S1 pocket and the oxyanion hole. However, upon releasing these restraints, Pro 13 (SFTI) switched back and pulled the Asp 14 carbonyl out of the oxyanion hole. By contrast, the linear SFTI peptide remained canonically bound also in the absence of such restraints.

Binding model of primed product residues and their role in ligation

We next asked how primed residues C-terminal to the scissile peptide bond would bind to AtLEGγ, and to which extent they can prevent the catalytic water from premature hydrolysis of the thioester bond. Thereby, we focused on the P1′-S1′ and P2′-S2′ interactions, because these are reported to be especially important for ligation (2,4,15,34) and, due to the known constraint of the P1-S1 interaction, can be reliably extrapolated. For stereochemical reasons, the P1′ residue must have its side chain exposed near the catalytic cysteine Cys 219 and Glu 220, which delineate the S1′ pocket. We further found a remarkably pronounced S2′ pocket in AtLEGγ, which is bordered by an aromatic residue at position 190 (Tyr 190 in AtLEGγ or butelase-1; Fig. S1). Furthermore, the S2′ pocket is deepened by the basement residue Gly 184 as compared with the more bulky Val 150 in human legumain (Fig. 5, Fig. S1). To explore the binding mode of a dipeptide at the S1′ and S2′ sites, we modeled a C-terminal extension of the docked SFTI to obtain initial positions of the P1′ and P2′ residues.
The binding mode of the activation peptide in the zymogen and a substrate differ markedly

In this study we solved the crystal structure of AtLEGγ in complex with Ac-YVAD-CMK (Fig. 1, Figs. S1 and S2). In this structure, the binding mode of the peptidic substrate to the active site markedly differed from that seen for the activation peptide in the zymogen form (Fig. 2). Although the P1 Gln 354 of the activation peptide mimics a P1 asparagine in the substrate, it induced a partial frameshift of ~2.5 Å in the activation peptide backbone. This shift leads to distorted backbone-backbone interactions and translates into more disordered specificity loops (c341, c381; Fig. 1, Fig. S2). By contrast, the canonical binding triggered an ordering of the S3-S4 pockets, resulting in a tight binding of the P3 and P4 residues.

Structure-derived AtLEGγ specificity profile

The covalently bound Ac-YVAD-CMK allowed us to deduce the specificity of the nonprimed recognition sites (Fig. 1). The S1 pocket is bipolar and sterically matches Asp and Asn, thus explaining its strong preference for Asn and protonated Asp at P1. The open S2 pocket with its hydrophobic basement (Trp 248) explains the preference for hydrophobic residues. The preference for mixed hydrophobic and partially negative P3 residues is consistent with Arg 74 and the redox-sensitive disulfide bridge Cys 252-Cys 266 of the S3 pocket. The S4 site is very adaptive, reflecting the conformational variability of the specificity-conferring c341 and c381 loops (Fig. 2). These structure-derived specificity predictions are in agreement with experimentally determined specificities. For example, the caspase-1 (YVAD) inhibitor was reactive toward AtLEGγ, whereas the caspase-3 inhibitor (DEVD) was not (8,35). This observation is in agreement with the negatively charged S4 pocket, which should exclude a negatively charged P4 residue. Similarly, the reported autocleavage sites of AtLEGγ, i.e.
340 ADAN or 350 RVTN, match the structure-derived specificity profile (16).

SFTI-binding mode mimics the binding mode of the α6 helix in the two-chain form of AtLEGγ

Docking of the SFTI inhibitor to the active site positioned its N terminus Gly 1 (SFTI) next to Glu 220, close to the catalytic cysteine (Fig. 4). This stand-by position enables a coordinated displacement of the primed SFTI (product) residues Gly 15-Leu 16 (SFTI). We have previously shown that AtLEGγ can be activated to a pH-stable intermediate (16). This two-chain form is a noncovalent complex of the catalytic domain and the C-terminal domain comprising the α6 helix and LSAM (legumain stabilization and activity modulation) module. Thereby, the α6 helix was shown to act as a critical gatekeeper for ligation substrates, which was proposed to be specifically unlocked by a suitable ligation substrate while preventing premature proteolysis. The N terminus of SFTI exactly coincides with the ionic anchorage site of the α6 helix, i.e. Arg 355 binding with Glu 220. Thus, SFTI mimics the interaction seen for the α6 helix (Fig. S6). Indeed, we could detect significant cyclization of SFTI-GL by the two-chain form, further supporting the correctness of our docking model (Fig. S6).

Primed side interaction favors cyclization by preventing premature thioester hydrolysis

Although several reports indicated an essential role of the P1′ and P2′ residues in ligation (4,14,15,29,30), their mechanistic relevance remained so far unclear. Our analysis identified a prominent hydrophobic S2′ pocket, specific to plants. Efficient ligases such as jack bean legumain, butelase-1, and AtLEGγ all share an aromatic residue (Tyr or Phe) at position 190 and a glycine at position 184 (Fig. 5, Fig. S1) (14,15,36). Our computational studies showed that the catalytic water could be displaced by the presence of the bound P1′-P2′ dipeptide, in a sequence-dependent manner (Fig. 6).
Hydrophobic P2′ residues had longer retention times, correlating with experimentally observed preferences in ligation substrates (15). We should note, however, that a recent publication by Yang and colleagues (37) proposed that the primed nucleophilic ligation substrate employs a nonprimed binding site, i.e. it binds to the left side rather than to the right side as shown in Fig. 6. This conclusion was presumably motivated by the C247A mutant, which strongly enhanced ligase activity. However, this proposition is sterically conflicting with the binding of the nonprimed ligase substrate (Figs. 1a and 2, Fig. S2b). To test our catalytic water displacement model, we compared the cyclization efficacy between AtLEGγ and AtLEGβ. The latter has Tyr 190 (in AtLEGγ) substituted by histidine, thus rendering the S2′ pocket less hydrophobic. Indeed, we detected a significantly higher portion of cleaved (linear) SFTI than cyclic (Fig. S7), consistent with earlier reports (14). Conversely, AtLEGβ may be a superior ligase over AtLEGγ for substrates with P2′ residues optimized for AtLEGβ's amphiphilic S2′ site. These findings are in perfect agreement with a computational report on human legumain-mediated transpeptidation, which was only possible if water was excluded from the active site (38). Finally, we note that the proposed water displacement model is consistent with the reportedly low proteolytic activity of butelase (15) as well as the here observed ≈5000-fold decreased proteolytic activity of AtLEGγ as compared with human legumain (Fig. S8).

Model of cyclization

Based on our findings, we hypothesize that the cyclization of SFTI proceeds as illustrated in Fig. 7. Craik and colleagues (2,14) proposed that pro-SFTI is cleaved and ligated sequentially, whereby the N-terminal segment of pro-SFTI is initially released because of a kinetically preferred asparagine (Asn 1 (SFTI)) cleavage site (Fig. S5) (14,28).
In a second step, the N-terminally trimmed SFTI binds canonically with Asp 14 (SFTI) into the active site, primarily exploiting the S4 to S2′ sites, as we observed in our docking studies (Figs. 4 and 6). The catalytic cysteine can then form the acyl-enzyme intermediate, which is long-lived due to the above-described water displacement model (Figs. 6 and 7). Subsequently, we propose the nucleophilic Gly 1 (SFTI) to bind to the S1′ site, thereby displacing the primed product residues (2), followed by aminolysis of the thioester resulting in the cyclic peptide. A possible reaction scheme is proposed in Fig. 8, which is in agreement with several experimental findings. First, in ligation experiments in the presence of H2O18, an incorporation of O18 into the ligation product could not be observed, indicating that the acyl-enzyme was not hydrolyzed by H2O18 before it was ligated (14). Second, for the homologous caspases, it has been shown that the caspase inhibitor p35 binds the enzyme canonically and thereby displaces the catalytic water. The authors were consequently able to detect a long-lived thioester intermediate in the electron density (39). Third, for the macrocyclase domain of PatG, primed residues need to stay bound after forming the acyl-enzyme intermediate to exclude water from the active site, albeit achieved by different structural principles (40). Upon cyclization, Pro 13 (SFTI) cis-trans-isomerization is conformationally enforced (Fig. S4) (21,41), resulting in a decreased affinity and release of the cyclic product (Fig. S4). Daly et al. (41) reported that the P1 Asp 14 is hydrogen (and ionically) bonded to Arg 2 in cyclic SFTI, which constrains Pro 13 in the conformation unfavorable for binding. By contrast, in the D14A SFTI mutant a cis-trans-isomerism of Pro 13 (Pro 13 switch) was observed.
We propose a similar situation for our D14N SFTI mutant, which should be able to sample more Pro13 conformations, leading to re-binding to the active site with the possibility of cyclic SFTI-D14N being cleaved. This is indeed what we observed: the cyclic SFTI-D14N was a metastable reaction intermediate toward the stable cleaved product (Fig. S9). By combining high-resolution crystallographic studies with computational and biochemical studies, we here provide a detailed and integrative mechanism of peptide bond cleavage and cyclization. The concepts developed here allow us to explain and reconcile many published data and to rationally design enzymes and substrates with improved properties in proteolysis and ligation.

Experimental procedures
The A. thaliana AEP (legumain) isoform γ (AtLEGγ) full-length clone U10153, locus AT4G32940, was obtained from the TAIR database. Restriction enzymes and T4 ligase were obtained from Fermentas (St. Leon-Rot, Germany) and Pfu Ultra II Fusion HS DNA polymerase was obtained from Stratagene (La Jolla, CA). Custom-made primers were obtained from Eurofins Genomics (München, Germany) and sequence analyses were performed at Eurofins MWG Operon (Martinsried, Germany). Escherichia coli strain XL2 Blue (Stratagene) was used for subcloning expression constructs. To produce fully glycosylated protein, the Leishmania tarentolae expression system (LEXSY; Jena Bioscience, Germany) was used (42). All reagents used were of the highest standard available from Sigma (München, Germany) or AppliChem (Darmstadt, Germany).

Cloning
An N-terminally truncated mutant (Ser56-Ala494) of A. thaliana proLEG isoform γ (referred to in this work as pro-AtLEGγ) was amplified by PCR (Eppendorf Mastercycler ep gradient thermal cycler) to exclude the N-terminal ER-signal peptide and the vacuolar sorting signal (43). The A. thaliana legumain isoform γ full-length clone U10153 was used as a template.
An appropriate forward primer containing an XbaI restriction site, a His6 tag, and a tobacco etch virus protease-cleavage site, AGCTCTCGAGTCTAGAGCACCACCATCACCACCACGAAAACCTGTATTTTCAGTCCGGTACTAGGTGGGCTGTTCTAGTCGCCG, and a reverse primer containing a NotI restriction site, AGCTGCTCAGCGCGGCCGCCTATGCACTGAATCCACGGTTAAGCGAGCTCCAAGGAC, were used. Subsequently, the PCR product was cloned into the pLEXSY-sat2 vector utilizing the XbaI and NotI restriction sites. The expression constructs carried an N-terminal signal sequence for secretory expression into the LEXSY supernatant. Correctness of all constructs was confirmed by DNA sequencing.

Cell culture, protein expression, and purification
Expression constructs were stably transfected into the LEXSY P10 host strain and grown at 26°C in BHI medium (Jena Bioscience, Germany) supplemented with 5 μg/ml of heme, 50 units/ml of penicillin, and 50 mg/ml of streptomycin (Carl Roth GmbH, Germany). Positive clones were selected by addition of nourseothricin (Jena Bioscience). Protein expression was carried out as described elsewhere (13). Recombinant protein was purified from the LEXSY supernatant via Ni²⁺-affinity purification using nickel-nitrilotriacetic acid Superflow resin (Qiagen, Hilden, Germany). The wash buffer contained 20 mM HEPES, pH 7.2, 300 mM NaCl, and 10% glycerol. The elution buffer was composed of 20 mM HEPES, pH 7.2, 300 mM NaCl, 10% glycerol, 250 mM imidazole, and 0.3 mM S-methyl methanethiosulfonate. The elution fractions were concentrated using Amicon Ultra centrifugal filter units (3-kDa molecular mass cut-off, Millipore) and desalted using PD-10 columns (GE Healthcare) into the final buffer: 20 mM HEPES, pH 7.2, 50 mM NaCl.

Preparative autoactivation to yield the two-chain state and the protease only
2-3 mg/ml of pro-AtLEGγ were incubated in autoactivation buffer A (100 mM Tris, 100 mM BisTris, 100 mM citrate, pH 4.0, 100 mM NaCl) for 16 h at 30°C to generate two-chain AtLEGγ.
To prepare the protease only, 2-3 mg/ml of pro-AtLEGγ were incubated at 30°C in autoactivation buffer B (100 mM Tris, 100 mM BisTris, 100 mM citrate, pH 4.0, 100 mM NaCl, and 2 mM DTT) for 2 h. All samples were checked for the presence or absence of the α6-LSAM domain by SDS-PAGE. After autoactivation, two-chain or protease-only samples were subjected to gel filtration chromatography utilizing an Äkta-FPLC system (SEC 200 10/300 GL column; buffer: 20 mM citrate, pH 4.2, 100 mM NaCl) to remove degradation products and DTT. Afterward, the respective fractions were either used directly for enzymatic assays or aliquoted and frozen at −20°C.

Protein crystallization
AtLEGγ was purified as described above. Before concentration, AtLEGγ was inhibited with Ac-YVAD-CMK at pH 4.0. After inhibition, a SEC run was performed (SEC 75; 15 mM citric acid, pH 4.5, 80 mM NaCl) and the corresponding fractions were pooled and concentrated to ≈5 mg/ml. Crystallization screening was carried out using the sitting-drop vapor-diffusion method utilizing a Hydra II Plus One (Matrix) liquid-handling system. Crystals grew within 3-6 days in a condition consisting of 4% PEG 4000, 100 mM sodium acetate, pH 4.6.

Data collection and processing
An X-ray diffraction data set was collected on beamline ID29 at the ESRF at 100 K. The beamline was equipped with a Pilatus 6M detector. Data collection was performed using a crystal-to-detector distance of 280.919 mm and a wavelength of 0.976251 Å. The exposure time was 0.04 s at 2.3% transmission. Data processing was performed using iMOSFLM (53) and Aimless from the CCP4 program suite (44). Packing density was calculated according to Matthews (45). An initial model could be generated by molecular replacement with the two-chain form of AtLEGγ (PDB code 5NIJ), and the structure was refined using Refmac 5 (46) and phenix.refine (47). The structure was deposited with the Protein Data Bank under PDB code 5OBT.
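The packing-density calculation according to Matthews mentioned above reduces to simple arithmetic: the Matthews coefficient VM is the unit-cell volume per dalton of protein, and an approximate solvent fraction follows from it. A minimal sketch with illustrative numbers (not the values for this crystal form):

```python
def matthews_coefficient(cell_volume_a3: float, mw_da: float, z: int) -> float:
    """Matthews coefficient VM (A^3/Da): unit-cell volume divided by the
    total protein mass in the cell (z molecules of molecular weight mw_da)."""
    return cell_volume_a3 / (z * mw_da)

def solvent_fraction(vm: float) -> float:
    """Approximate solvent fraction via Matthews' relation 1 - 1.23/VM,
    where 1.23 A^3/Da reflects a typical protein partial specific volume."""
    return 1.0 - 1.23 / vm

# Illustrative: a 100,000 A^3 cell containing two 20-kDa molecules.
vm = matthews_coefficient(100_000, 20_000, 2)  # 2.5 A^3/Da
solv = solvent_fraction(vm)                    # about 0.51
```

Typical protein crystals fall in the VM range of roughly 1.7-3.5 A^3/Da; values far outside that range usually indicate a wrong estimate of Z.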
Methods: Solid-phase peptide synthesis was carried out on an automatic peptide synthesizer (Syro I, Biotage). The analytical and semipreparative HPLC equipment was from Thermo Fisher Scientific (model Ultimate 3000). The analytical column was from Thermo Fisher Scientific (Syncronis C-18, 4.6 × 250 mm, 5 μm); the semipreparative column was from Macherey Nagel (NUCLEOSIL C-18, 250 × 10 mm, 5 μm). MALDI-TOF mass spectra were recorded on an Autoflex mass spectrometer.

Figure 8 legend: Left pathway, if the initially bound substrate (green dashed line) carries a nonhydrophobic residue in P2′, the primed product can dissociate after the formation of the thioester and water will exchange. Consequently, this results in hydrolysis of the thioester and the release of the hydrolysis product. Right pathway, if P2′ is hydrophobic, the primed-site peptide stays bound and prevents the exchange of catalytic water, resulting in an equilibrium between thioester and peptide bond. In the presence of a suitable transpeptidation substrate (R3), an exchange between the initially bound primed product and the transpeptidation peptide can happen. This results in a new equilibrium between thioester and peptide bond, forming the transpeptidation product. The varying protonation state of the released primed N terminus is indicated. Only a deprotonated N terminus is able to attack the thioester, not a protonated one. This relationship explains the pH dependence of transpeptidation, which is more efficient at neutral pH than at acidic pH.

Testing peptidase activity
The proteolytic activities of selected activation intermediates and isoforms were measured using 20 μM of the fluorogenic substrate Z-VAN-MCA or IETD-MCA in activity buffer A adjusted to the desired pH value (100 mM Tris, 100 mM BisTris, 100 mM citrate, 100 mM NaCl) at 20°C. For each measured pH value, the reaction was started by adding around 0.5-2 μl of the respective sample to the premixed 49.5-48 μl mixture.
The concentration of each enzyme in the assay was <1.5 μM unless otherwise stated. Substrate turnover was measured at excitation and emission wavelengths of 370 and 450 nm, respectively, in an Infinite M200 plate reader (Tecan). Proteolytic activity was determined by calculating the initial slopes of the time-dependent substrate turnover. Each measurement was done in triplicate.

Structure preparation and docking
Starting from the crystal structure of the fully activated AtLEGγ, the inhibitor was first removed from the system. Afterward, the enzyme was titrated at pH 6.0 (the experimental pH) using the Protonate 3D function of MOE2016.08 (48). The structure of the substrate, SFTI, was retrieved from the Protein Data Bank (49): PDB codes 1JBL (cyclic) and 1JBN (noncyclic) (21). Because these PDB entries comprise several structures, only one chain was kept and protonated at pH 6.0 as described above for the enzyme. In the case of the noncyclic inhibitor (PDB code 1JBN), the C terminus was capped with NME (N-methyl) to maintain neutrality. The docking simulations were performed using the following settings of the MOE 2016.08 software package. In the potential energy setup panel, AMBER99 was chosen as the force field. Protein-protein docking was employed as the placement method to find the optimal docking hits. Each run was set to cut-offs of 10,000 conformations for pre-placement, 500 for placement, and 30 for refinement. The top poses were retained for further analysis, investigating the H-bond distances between the substrate and the enzyme. For AtLEGγ, residues Cys219, Gly187, His75, Arg74, Cys252, Cys266, Asp217, Ser247, Asp269, Trp248, Gly249, Glu255, and Tyr307 were defined as the binding pocket. In addition, Asp14 was chosen as the docking site on the substrate. The best docking hits were optimized using the energy minimization function of MOE2016.08 (48, 50) with the AMBER99 force field.
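The activity readout described earlier in this section, the initial slope of time-dependent substrate turnover, can be sketched as a straight-line fit over the early, approximately linear part of the fluorescence trace. The trace values and names below are illustrative, not the study's data:

```python
import numpy as np

def initial_slope(t, f, linear_points=5):
    """Initial turnover rate: slope of a linear fit to the first
    `linear_points` points of the fluorescence trace (RFU per unit time)."""
    t = np.asarray(t, dtype=float)[:linear_points]
    f = np.asarray(f, dtype=float)[:linear_points]
    slope, _intercept = np.polyfit(t, f, 1)
    return float(slope)

# Illustrative trace: a linear rise of 2 RFU/s that later levels off
# as substrate is depleted.
t = np.arange(10)                                          # seconds
f = np.array([0, 2, 4, 6, 8, 9.5, 10.5, 11, 11.2, 11.3])  # RFU
rate = initial_slope(t, f)                                 # 2.0 RFU/s
```

Restricting the fit to the early points matters: including the plateau region would systematically underestimate the initial rate.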
The docking results were judged by proper interactions with the S1 pocket and major backbone interactions. In addition to the well-established computational scoring function, the interaction-based accuracy classification method (51) was used to identify docking hits that included an interaction pattern of Asp14(SFTI) in the S1 pocket resembling the experimentally determined geometry (Fig. 1).

Thioester generation and optimization
To generate the tripeptides for the molecular dynamics simulations, AtLEGγ was first superimposed with the crystal structure of the human legumain-cystatin complex (PDB code 4N6O), because primed residues are also bound in that complex. The P1 to P2′ residues were mutated to the sequence of interest and terminated by ACE (acetyl) and NME (N-methyl), respectively. The complex generated in this way was optimized using the energy minimization function of MOE2016.08 (48) with the AMBER99 force field. Finally, the peptide bond between the P1-P1′ residues was broken, a covalent bond between the carbonyl carbon of the P1 aspartic acid and SG(Cys219) was generated, and the complex was reoptimized. For the molecular dynamics studies, the P1′ and P2′ residues were systematically mutated using the Protein Builder function of MOE2016.08 and reoptimized (MOE2016.08, AMBER99 force field).

Molecular dynamics
The protein-peptide complex was solvated in an 80-Å cubic box of water, and counterions (either Na⁺ or Cl⁻) were added to maintain neutrality of the overall system. Afterward, a series of equilibration steps was carried out by performing molecular dynamics annealing runs for 100 ps at temperatures of 50, 150, 200, and 250 K and for 330 ns at 298.15 K (in 11 steps; after each 30 ns the coordinates were saved for further analysis). The molecular dynamics calculations were accomplished using the AMBER99 force field as implemented in NWChem 6.6 (52).

Author contributions
F. B. Z. designed and performed most experiments. F. B. Z., B. E., E. D., H.
B. discussed and interpreted all experiments. C. C. synthesized the peptides for ligation and assayed and interpreted the ligation by mass spectrometry. F. B. Z. and H. B. wrote the manuscript; all authors proofread and agreed with the paper.
2018-04-26T22:47:32.840Z
2018-04-08T00:00:00.000
{ "year": 2018, "sha1": "8e1bf2dcb11942984fb2dd8810fad7193fc96cd3", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/293/23/8934.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "8e1bf2dcb11942984fb2dd8810fad7193fc96cd3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
239051781
pes2o/s2orc
v3-fos-license
Stress and autonomic nerve dysfunction monitoring in perioperative gastric cancer patients using a smart device

Abstract
Background: Heart rate variability (HRV), a sensitive marker of stress and autonomic nervous disorders, is significantly decreased in cardiovascular disease, inflammation, and surgical injury. However, the effect of radical gastrectomy on HRV parameters needs to be further investigated.
Methods: A prospective, observational study including 45 consecutively enrolled patients undergoing radical gastrectomy in our enhanced recovery after surgery (ERAS) programs was conducted. Frequency- and time-domain parameters of HRV were continuously measured from 1 day prior to operation to 4 days postoperatively. Meanwhile, plasma cortisol and inflammatory markers were recorded and correlated to HRV parameters.
Results: Heart rate variability showed a solid circadian rhythm. Anesthesia severely disturbed HRV parameters, resulting in a reduction of most of them. The frequency-domain parameter VLF and the time-domain parameters SDNN, SDANN, and triangular index demonstrated a significant reduction compared to preoperative values on postoperative day 1 (Pod1), and these HRV parameters returned to baseline on Pod2 or Pod3, indicating that surgical stress and autonomic nerve dysfunction existed in the early postoperative period. Inflammatory biomarkers were significantly elevated on Pod1 and Pod3. Plasma cortisol decreased significantly on Pod1 and Pod3. Neither the inflammatory biomarkers nor plasma cortisol had a significant correlation with HRV parameters.
Conclusions: Compared with plasma cortisol and inflammatory biomarkers, HRV is more sensitive for detecting surgical stress and autonomic nervous dysfunction induced by radical gastrectomy in patients with gastric cancer.

INTRODUCTION
Cancer is one of the major global threats to public health.
Gastric cancer is the fourth leading cause of cancer-related death worldwide (Latest Global Cancer Data). China is one of the countries with a high incidence of gastric cancer (Latest Global Cancer Data; Wang et al., 2019). Surgery is the mainstay of treatment for many patients with gastric cancer. However, surgical intervention can also cause a stress response and autonomic nerve dysregulation (Haase et al., 2012; Manou-Stathopoulou et al., 2019). Even uncomplicated abdominal surgery can trigger dysfunction of the autonomic system, characterized by a relative decrease in parasympathetic tone (Colombo et al., 2017; Haase et al., 2012). Our previous studies demonstrated that patients undergoing gastrectomy in institutions applying enhanced recovery after surgery (ERAS) have an early convalescence (Wang et al., 2016; Zhao, Hu, Jiang, Wang, et al., 2018), but the perioperative stress response and autonomic nerve dysregulation of these gastric cancer patients remain to be further studied. Currently, a variety of methods can be used to evaluate stress, including the self-administered questionnaire (SAQ) (He et al., 2018) and biochemical methods such as the measurement of stress-related substances, for example plasma concentrations of cortisol (Manou-Stathopoulou et al., 2019). The SAQ is most frequently used, but it requires subjects to maintain their reflective capability and is therefore not suitable for assessing perioperative stress. For stress-related biochemical substances, there are significant differences in levels between individuals and across the circadian cycle, making it difficult to reliably evaluate real-time perioperative stress (Kwon et al., 2020; Zatti et al., 2019).
Heart rate variability (HRV) measurements, first put forward by Hon and Lee in 1965 (Lee & Hon, 1965), are a non-invasive and standardized method to assess the stress response and autonomic nervous function (Ardissino et al., 2019; Charlier et al., 2020; Mulkey et al., 2020). Heart rates fluctuate continuously even at rest and are determined by the discharge cycle of the sinoatrial node. Sinoatrial node discharge is regulated by intracellular levels of potassium (K⁺) and calcium (Ca²⁺), both of which are regulated by autonomic nerves. Autonomic nerve activity can be evaluated by frequency- and time-domain HRV analysis of ECG data (Ptaszynski et al., 2013). In recent decades, HRV has aroused extensive clinical interest, mainly in the field of cardiovascular disease (Dias et al., 2019; Kališnik et al., 2019). Studies have shown that lower HRV is strongly associated with worse prognosis and a higher risk of cardiac death (Goldenberg et al., 2019; Manresa-Rocamora et al., 2021; Pinheiro et al., 2015). HRV has also become an effective prognostic tool in intensive care medicine (Joshi et al., 2020). However, there are no studies evaluating gastrectomy-induced stress and autonomic nerve dysfunction in gastric cancer patients by HRV. This study aimed to explore the circadian rhythm and fluctuation of HRV through a non-invasive smart device that can continuously monitor HRV, to further clarify the perioperative fluctuation of HRV, and to evaluate surgery-induced stress and autonomic nerve dysfunction in gastric cancer patients.

Patients
An observational study was conducted by the gastric surgery team of a first-class hospital. Patients were admitted to the hospital consecutively from July 1, 2019, to October 31, 2020. Written informed consent was obtained from all patients prior to enrollment.
As reported in our previous literature (Yun et al., 2018; Zhao, Hu, Jiang, Wang, et al., 2018), patients were managed perioperatively in the ERAS programs applied for gastrectomy in our department, including preoperative and postoperative nutritional support, multi-mode health education, no intestinal preparation for nonconstipated patients, shortened preoperative fasting time and oral carbohydrate administration, patient temperature maintenance, limited fluid resuscitation, postoperative pain assessment and multimodal analgesia, early mobilization, and early oral feeding.

Data collection
A smart device (Transcendent THoughts On Health, THOTH, http://www.thoth-health.com/) was used for HRV monitoring. After the patients were admitted to the hospital, the researchers cleaned the skin of the precordial area with wet gauze and attached the registered smart device to the skin in the shape of a T (Figure 1a). The full-course heart rate and HRV were then monitored from 1 day pre-operation (Pre) to 4 days post-operation (Pod4). After data collection, the Holter System software was used for statistical analysis (Figure 1b-f). In addition to HRV, collected variables included gender, age, body mass index, duration of surgery, bleeding volume, white blood cell count, neutrophil percentage, CRP, PCT, plasma cortisol, surgical complications, etc.

HRV analysis
Heart rate variability measurement methods include Poincaré plot, time-domain, and frequency-domain analyses. The time-domain parameters analyzed include the standard deviation of all NN intervals (SDNN), the standard deviation of the averages of NN intervals (SDANN), the HRV triangular index, and the number of pairs of adjacent NN intervals differing by >50 ms in the entire recording divided by the total number of all NN intervals (pNN50). The frequency-domain parameters analyzed include very-low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF), and the ratio of low-frequency power to high-frequency power (LF/HF). In this study, one datum is obtained every hour in the time-domain analysis, and each datum is the statistical result of all data obtained in that hour.
In the frequency-domain analysis, one datum is obtained per hour, which is the statistical result of five minutes of monitored data within that hour. Even in the resting state, human HRV fluctuates regularly and varies with circadian rhythms. Therefore, we first analyzed the preoperative circadian rhythms of HRV according to the data obtained from continuous monitoring.

Statistical analysis
Statistical analysis was performed using GraphPad Prism software (version 8). All HRV data are shown as the mean ± standard error of the mean (mean ± SEM). Normality of data distribution was tested using the Shapiro-Wilk test. The significance of differences in repeated measures of HRV parameters was evaluated using the Friedman test with Dunn's multiple comparisons test as the post hoc test. Spearman's rank correlation coefficient was used to evaluate the correlation between perioperative HRV parameters and white blood cell count, neutrophil percentage (%), CRP, PCT, or plasma cortisol. Significance was defined as p ≤ .05.

Patient characteristics
In this study, 45 patients who underwent radical gastrectomy for gastric cancer were enrolled consecutively, with a median age of 61 (50-75) years, including 4 females and 41 males. The body mass index was 22.2 ± 2.8 kg/m² (mean ± SD). All patients were treated by the same team. The operations were assisted by 3D laparoscopy (n = 21) or the Da Vinci Xi robot (n = 24), comprising 13 proximal gastrectomies, 20 distal gastrectomies, and 12 total gastrectomies. The operation time was 231.0 ± 50.5 (mean ± SD) minutes and the mean bleeding volume was 60.2 ± 4.8 ml. The clinical stage is shown in

Preoperative circadian rhythm of heart rate variability
The frequency-domain analysis found that VLF, LF, and HF showed a consistent preoperative circadian rhythm.

FIGURE 2 Preoperative circadian rhythm of heart rate variability. According to the sleep-wake cycle and activity-level differences, one day is divided into three periods (0-4, 6-10, 18-22). (a-d) Circadian rhythm of frequency-domain analysis parameters of heart rate variability (HRV), including VLF, LF, HF, and the ratio of low-frequency power to high-frequency power (LF/HF); (e-h) circadian rhythm of time-domain analysis parameters of HRV, including SDNN, SDANN, triangular index, and pNN50. N = 45. **p < .01, ***p < .001, ****p < .0001. The data are represented as scatter plots and mean ± standard error. HF, high-frequency power; LF, low-frequency power; VLF, very-low-frequency power; SDNN, standard deviation of all NN intervals; SDANN, standard deviation of all the averages of NN intervals.

The SDANN and triangular index showed the same circadian rhythm, and these two parameters were highest at 6-10, significantly higher than at 18-22 (Figure 2f-g). The pNN50 was highest at 0-4, significantly higher than at 6-10 and 18-22 (Figure 2h).

Perioperative HRV analysis
For all the perioperative parameters of HRV, the throughout-day data and the time-division data (the data of 0-4, 6-10, or 18-22) showed similar trends and could recover to preoperative levels by postoperative day 3 (Pod3), although a certain degree of difference existed (Figures 3 and 4). All the frequency-domain HRV parameters (VLF, LF, HF, and LF/HF) decreased significantly during the operation, as demonstrated by both the time-division data and the throughout-day data (Figure 3). The data of 0-4 and the throughout-day data showed that VLF decreased significantly at Pod1 compared with Pre and returned to normal at Pod2 (Figure 3a,d). For LF/HF, the data of 18-22 and the throughout-day data showed a significant decrease at Pod1 compared with Pre and a return to normal at Pod2 (Figure 3o,p). In addition, the LF/HF data of 6-10 showed a significant reduction at Pod1 and Pod2 and returned to the preoperative level at Pod3 (Figure 3n).
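The time-domain parameters used throughout (SDNN, SDANN, pNN50) have simple definitions over the series of NN intervals. A minimal illustrative sketch with synthetic intervals, not the study's recordings (the Holter software's exact implementation may differ):

```python
import numpy as np

def sdnn(nn_ms):
    """SDNN: standard deviation of all NN intervals (ms)."""
    return float(np.std(np.asarray(nn_ms, dtype=float), ddof=1))

def pnn50(nn_ms):
    """pNN50: fraction of successive NN-interval differences > 50 ms."""
    diffs = np.abs(np.diff(np.asarray(nn_ms, dtype=float)))
    return float(np.mean(diffs > 50))

def sdann(nn_ms, t_s, window_s=300):
    """SDANN: standard deviation of the mean NN interval computed over
    successive time windows (classically 5-min windows)."""
    nn = np.asarray(nn_ms, dtype=float)
    t = np.asarray(t_s, dtype=float)
    bins = (t // window_s).astype(int)
    means = [nn[bins == b].mean() for b in np.unique(bins)]
    return float(np.std(means, ddof=1))

# Synthetic beats: ~800 ms intervals with a few jumps larger than 50 ms;
# 3 of the 7 successive differences exceed 50 ms.
nn = [800, 810, 790, 870, 800, 805, 795, 860]
p = pnn50(nn)
```

SDNN captures overall variability, SDANN the slower (window-to-window) component, and pNN50 the fast beat-to-beat component, which is why they can respond differently to anesthesia and surgery.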
Perioperative inflammatory markers and plasma cortisol
Compared with pre-operation, all inflammatory markers (including WBC, neutrophil percentage, CRP, and PCT) increased significantly at Pod1 and Pod3. WBC and neutrophil percentage decreased significantly at Pod3 as compared with Pod1, indicating a markedly attenuated inflammatory reaction (Table 2). However, although CRP and PCT were lower at Pod3 than at Pod1, they were not significantly decreased, suggesting a more sensitive and persistent response to the surgical inflammatory injury (Table 2). We collected blood from 5 patients to measure their plasma cortisol levels and found that plasma cortisol decreased significantly at both Pod1 and Pod3. Subsequently, plasma cortisol increased significantly at Pod3 when compared with Pod1 (Table 2).

Circadian rhythm of heart rate variability
Heart rate variability is affected by the sleep-wake cycle and activity levels.

Perioperative HRV of gastric cancer patients
According to the analyzed parameters of HRV, anesthesia seriously disturbed both the time-division data and the throughout-day data of HRV parameters, manifested as significant reductions of VLF, LF, HF, SDNN, triangular index, and pNN50. Interestingly, however, SDANN was the only HRV parameter that was not significantly affected by anesthesia, indicating that SDANN was quite special; this is consistent with previous findings (Ardissino et al., 2019; Jeanne et al., 2009; Ledowski et al., 2005). Meanwhile, it was clear that HRV at Pod1 and Pod2 was significantly reduced, presented as significant reductions of SDNN and SDANN, indicating that patients suffered significant postoperative stress at Pod1 and Pod2 and implying suppression of autonomic nervous system activity (Garg et al., 2020; Hattori & Asamoto, 2020; Tracy et al., 2016).
Moreover, the reduction of LF/HF, which was used to evaluate the balance of the sympathetic and parasympathetic nervous systems (Lopresti, 2020), further suggests that a sympathetic/parasympathetic imbalance occurred during the perioperative period. HF is a well-known HRV parameter that reflects parasympathetic and vagal activity (Lopresti, 2020). By continuous monitoring, we did not find any significant difference between postoperative and preoperative HF, indicating that the postoperative vagal activity of the patients was not significantly inhibited. pNN50, which is highly positively correlated with HF, also reflects parasympathetic and vagal activity. We found no significant difference in pNN50 between the postoperative and preoperative data, which further illustrates the unchanged vagal activity of gastric cancer patients undergoing surgical treatment in our ERAS program. This might benefit from the multimodal analgesia in our ERAS program (Zhao, Hu, Jiang, Wang, et al., 2018).

Perioperative inflammatory markers and plasma cortisol
In addition, perioperative inflammatory markers were also determined (abbreviations: CRP, C-reactive protein; PCT, procalcitonin; WBC, white blood cell): WBC and neutrophil percentage increased significantly on the first day after surgery and decreased significantly on the third day. However, CRP and PCT were significantly increased on the first day after surgery but not significantly decreased on the third postoperative day. Further, we analyzed the correlation between these inflammatory markers and HRV parameters. This study showed no correlation between any postoperative HRV parameter and any inflammatory marker, suggesting that the inflammatory biomarkers could not reflect postoperative stress and that HRV was a better means of monitoring surgical stress. These results are consistent with the findings of Haase et al. (2012) in patients undergoing colorectal resection. Moreover, plasma cortisol was not correlated with any HRV parameters.
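The correlation checks between HRV parameters and inflammatory markers used Spearman's rank correlation coefficient, which is the Pearson correlation computed on the ranks of the data. A minimal sketch with made-up per-patient values (illustrative only, not the study's data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the rank
    transforms (no tie correction; adequate for untied illustrative data)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical Pod1 values for eight patients (illustrative only).
sdnn_pod1 = [95, 110, 80, 130, 105, 90, 120, 70]  # ms
crp_pod1 = [40, 35, 60, 20, 55, 30, 25, 65]       # mg/L
rho = spearman_rho(sdnn_pod1, crp_pod1)           # about -0.83 here
```

Rank-based correlation is a reasonable choice here because HRV parameters and inflammatory markers are not normally distributed and the relationship need not be linear; only monotonicity is assessed.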
Cortisol is a marker of physiological stress, and large surgical stress may trigger a significant increase in plasma cortisol (Dimopoulou et al., 2008; Kapritsou et al., 2020). However, a growing number of studies in recent years have suggested that, with the advancement of minimally invasive surgery and of anesthesia and analgesia techniques, surgery may not lead to a significant increase in postoperative cortisol (Khoo et al., 2017; Prete & Yan, 2018). Similarly, laparoscopic surgery may not lead to a significant increase in postoperative cortisol. Indeed, in this study there was a dramatic decrease in postoperative cortisol. This may be attributed to our minimally invasive laparoscopic surgery (including 3D laparoscopic and Da Vinci robotic surgery), multimodal analgesia, and early removal of drainage, which minimize postoperative pain stress in the ERAS program (Schreiber et al., 2019; Zhao, Hu, Jiang, Wang, et al., 2018; Zhao et al., 2018c). Consistent with this study, Li Ren also confirmed that an ERAS protocol could significantly reduce the postoperative serum cortisol level, so that postoperative serum cortisol did not increase significantly compared with the preoperative level (Ren et al., 2012). But in reality, surgical stress does exist. This study found that plasma cortisol was not sensitive enough to detect the patients' postoperative stress, whereas the time-domain parameters of HRV (including SDNN, SDANN, and triangular index) were extraordinarily sensitive in confirming it. These results suggest that HRV is a more accurate indicator of postoperative stress in surgical patients than cortisol. In addition, our team has started to routinely monitor perioperative HRV and plasma cortisol in patients with gastric cancer and colorectal cancer, within ERAS and non-ERAS programs. Therefore, a larger-sample study of perioperative stress is under way.
Advantages and limitations
The advantage of this study is the 24-h continuous whole-course monitoring of HRV, which makes it convenient to observe the change of HRV throughout the perioperative period and to capture valuable difference data. Moreover, the use of smart wearable medical devices and wireless communication technology for data collection can reduce the inconvenience to patients' movement and greatly improve patients' compliance and the accessibility of the data. The limitations of this study are as follows: the age range of patients we included was narrow, in consideration of the influence of patients' age on HRV, which led to a relatively small number of participants in the study. In the future, our research team will carry out related studies on surgical stress in gastric cancer, colorectal cancer, and gallbladder stones within our ERAS programs, so as to monitor and reduce the postoperative stress of patients and promote their rapid recovery after surgery. Large-sample, long-term observational studies are therefore already under way to explore the effects of pain, fasting, and sleep disorders on perioperative HRV, so as to guide improvement of our ERAS program.

CONCLUSIONS
This study demonstrated that the preoperative HRV of gastric cancer patients had a circadian rhythm, and that the obtained HRV data required both throughout-day and time-division analysis so as to comprehensively and accurately capture the changes in perioperative HRV and surgical stress. Perioperative heart rate variability monitoring revealed that anesthesia disturbed HRV, resulting in a reduction of most HRV parameters. HRV monitoring showed a decrease in HRV parameters in the early postoperative period, indicating the existence of postoperative stress. Doctors should try their best to reduce perioperative stress and enhance patients' recovery through minimally invasive surgery, multimodal analgesia, and other measures in the ERAS program.
However, plasma cortisol was significantly reduced after surgery, so this parameter was not sensitive in reflecting postoperative stress. In comparison, HRV could objectively reflect the changes in autonomic nerve function and stress response during the perioperative period of gastric cancer and might be used as a valuable tool to evaluate the perioperative stress response and guide clinical practice in the context of precision medicine and artificial intelligence medicine.

CONFLICT OF INTEREST
The authors report no conflicts of interest in this work.

AUTHOR CONTRIBUTIONS
Zhiwei Jiang and Gang Wang conceived and designed the study. Wei

CONSENT TO PARTICIPATE
Written informed consent was obtained from all individual participants included in the study.

CONSENT FOR PUBLICATION
Additional informed consent was obtained from all individual participants for whom identifying information is included in this article.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Jeremy Safran: a hero’s journey In this essay I explore Jeremy Safran’s intellectual career as a hero’s journey or monomyth within the specific context of psychotherapy research. I argue that he fits such a model in the sense that his work – though deeply informed by theories – was singularly focused and driven by his own sense of his role and mission within the profession. Rather than attempting to review in detail the entire scope or specific parts of his research contributions, I look at his scholarship as a kind of quest, a pursuit that was trans-theoretical but unified by foundational questions about the unique nature of the relationship between therapist and patient. Gently they go, the beautiful, the tender, the kind; Quietly they go, the intelligent, the witty, the brave. I know. But I do not approve. And I am not resigned. On remembering When asked to contribute to this special section I felt honored, and at the same time discomforted. Jeremy was a friend first before a colleague, and to write about his work, to distance myself from my own sense of loss, to address his accomplishments objectively, was challenging, and likely only partially achieved. In this short essay it is not my intent to review the full arc and span of his professional contributions - an important task but not mine - nor will I try to condense or sum up a life's work, a task which I think would be contrary to his contextualized way of looking at such things. I set myself the more modest task of trying to put in context my friend and colleague's intellectual journey. My reference in the title to a hero's journey was not intended as a qualitative adjective to lionize his contributions but rather to characterize his intellectual legacy.
I use the term a hero's journey in reference to the concept of monomyth in the sense that Rank, Richter and Liebermann (2004), Jung and Hull (1990) and Joseph Campbell (2000) used the term, to offer a perspective on what he has left us in the context of an intellectual life in time: his call to adventure and his pursuit into the unknown. The beginning (as I knew it) Jeremy and I got our doctorates at the same university and overlapped in our studies for three years. Like many universities at the time (we are speaking of the late '70s), there were two separate departments granting doctorates in psychology. On one side was the Department of Clinical Psychology (DCP), staffed by reputable behaviorists like R. Dobson and R.J. Rachman. The DCP was housed in modern facilities, complete with one-way mirrored labs, video cameras etc., in a bespoke new building. On the other side, across the "L" parking lot, was the Counselling Psychology department, housed in Quonset shacks left over from a WW II army training camp. In this department you could find the likes of Les Greenberg, people interested in the ideas of Rogers, Fritz Perls and the humanists. The two sources of graduate degrees in psychology ran on parallel but separate tracks. Even the research design and statistics courses, though taught mostly by the same professors, had different course numbers in each department. A mixture of theoretical and ideological fealty - typical at the time - kept the two groups of faculty and graduate students apart and slightly paranoid of each other. The presence of at least three incompatible comprehensive theories of psychotherapy, competing for primacy in use at that time, was interpreted to mean that there could be but one correct broad conceptual framework, and that the others relied on imperfect misuse of techniques better understood/explained by the right theory. Or worse: the nefarious placebo effect.
Researchers expended considerable energy and resources to conclusively prove the superiority of their favored orientation. One can get a good sense of the partisan atmosphere within the field at this time by reading Eysenck's (1952) famous article disparaging psychodynamic treatment and promoting behavioral approaches, and the strong responses to his claims (e.g., Strupp, 1963). It was in this Gemeinschaft that Jeremy Safran floated, quietly and seemingly without effort, into our circle on the non-behavioral side of the campus. He became a regular feature of our informal gatherings (students and faculty) where - as is often the case in graduate programs - all the interesting emerging ideas were discussed, and the most productive learning experiences took place. Importantly, he never rejected the clinical psychology group or CBT theory, nor came over to our side formally or informally. He simply questioned, participated, and made friends. Evidently the partisan theoretical divisions, the reality for most of the graduate students, did not even occur to him. I cannot recall him critiquing the status quo of competition among theories explicitly. Rather it was his ability, even as a graduate student, not to be owned by or pay fealty to any intellectual club or collective. He had passionate loyalty to questions that he thought were important, and he was interested in listening to all the voices that shared his interests. His thesis was about the way cognitive processes mediate expectations and the interpretation of interpersonal behavior (Safran, 1983). But in our conversations he seldom mentioned this specific topic; he was interested in how therapy worked, how people changed. Differences in theories and the varieties of clinical modus operandi were the reason to ask, not a source of explanation. According to Campbell, the hero's journey begins with a call. In the summer of 1980, I was working for a branch of the provincial ministry serving people with substance abuse issues.
Due to some unusual circumstances - clearly nothing to do with my abilities - I was rapidly promoted to head psychologist. The task was way over my head, and I urgently needed somebody competent to help me deal with the situation. Jeremy needed a summer job. I hired him and was rewarded with long rides all over the province in his company in my rusty Volkswagen beetle. It was during one of these rides that he mentioned briefly tragedies in his personal life. He was a very private person; when he spoke, he was direct - an exchange between friends - and he did not psychologize the impact of these events. But looking back, if I were to look for a spark that ignited the quest, I would start there. The journey Jeremy Safran's long list of professional publications reads like a road map: on one hand, he spoke to a remarkably wide range of audiences. In historical sequence, he starts off addressing the CBT and behavioral community (e.g., Alden, Safran, & Weideman, 1978; Safran, 1983; Safran & Greenberg, 1982a; Safran, Vallis, Segal, & Shaw, 1986). Somewhat overlapping with these pieces, he begins to address a wider, more inclusive audience (e.g., Greenberg & Safran, 1989; Safran, 1992; Safran, Greenberg, & Rice, 1988) and broadens, progressively, the scope of topics he is looking at. He explores issues from the perspectives of interpersonal theory and then starts to comment on the links between these concepts viewed from different theoretical perspectives (i.e., psychotherapy integration), but without discontinuing the dialogue with the CBT audience (Safran & McMain, 1992; Safran, Segal, Vallis, Shaw, & Samstag, 1993). Without neglecting these audiences, he engages the psychodynamic community starting in '99 (Safran, 1999, 2001) and adds the dimension of Buddhism and the philosophy of psychotherapy (Safran, 2003).
But, on the other hand, while he appears to turn to different audiences over these three decades and to move his vision across the theoretical spectrum, there is a remarkable internal consistency and cohesiveness among the issues he was working on and interested in. The uniqueness of his journey and contribution is the freedom, indeed enthusiasm, to explore the phenomenon of psychotherapy through different theoretical lenses in depth, and with full use of the intellectual insights and resources afforded by these different theories, without apparent loyalty or commitment to the theory as such. He is interested in issues such as the epistemological aspects of the therapy process - how do clients change their ideas and what influences their appraisal (Safran, 1984); the interaction of affect and cognition in therapy (Safran & Greenberg, 1982b); and, most importantly, the concept of therapy as an interactive encounter between two humans, a two-person event in the fullest sense that involves dynamic and mutual influence at the conscious and unconscious levels. What is unique in his approach is that, unlike most of his contemporaries, he was able to consistently prioritize and focus on the issues and treated theoretical lenses as tools to address, from a variety of perspectives, what he considered to be the important questions in understanding how therapy works. On the single occasion that we talked about his relationship to theories, I awkwardly teased him about first encountering him as a CBT person and now having to squeeze beside him on his analytic couch (we were sharing accommodations at an APA conference). He smiled, turned quite [...] "have some interesting things to say about stuff I've been thinking about".
While many of us are honestly convinced that much of what works in therapy is not exclusive to the methods of any one theory, and that common factors are at the heart of understanding the nature of psychotherapy, at the core even those who are committed to psychotherapy integration each come from a place. Even those who identify as eclectic have, at the core of their personal theory, an affinity - if not an orthodox fealty - to a theoretical home base. Most psychotherapists rely to some extent on a cohesive, comprehensive theory to help them rise above the whirlpool of the confused and confusing world of the patient, and to think about a new road forward, a pathway to change beyond those already exhausted by the client. Likewise, researchers need a logically coherent framework to formulate questions, to identify issues that matter. It is a very difficult task to invent the wheel anew and build a cohesive structure to guide the inquiry from the ground up; it is much more efficient and convenient to make use of a well-articulated model, perhaps with some modifications, and with an open mind to alternative concepts. Thus, most of us end up habitually aligned with one of the major theoretical models. What, for me, is interesting and heroic about Jeremy's legacy is both his openness to finding truths through different theoretical prisms and his ability to maintain a skeptical outsider position throughout his intellectual and clinical journey. He was particularly impressed by and interested in the contributions of other outsiders like Ferenczi, Rank and some of the Zen Buddhist masters. He shared with them the rare quality of intellectual independence, the strength of believing in his own questions, and the confidence of sometimes standing apart. His intellectual independence was of a very particular kind.
Unlike independent thinkers who are reactive to mainstream thinking and make their point of departure identifying what is missing or wrong-footed in the common wisdom, Jeremy had - at least in the conversations I was a part of - a very distinctive way of engaging with a topic. He would typically join in the conversation by enthusiastically amplifying the previous comment: "Yes! Yes! Exactly… (repeat) …and you know…" - and after the "you know" came, inevitably, some comment from a new place, a new source, a different angle. It was not instead; it was in addition, making the conversation bigger, more interesting, more inclusive. But his take on the issue was seldom just additive. He - both in verbal discussion and in his published work - started from a different take on what the core of the issues was. Two examples of his approach are his pieces The unbearable lightness of being: Authenticity and the search for the real and Psychotherapy integration: A postmodern critique (Safran, 2017; Safran & Messer, 2015). In both cases he identifies issues at the center of the prior discourse that others have missed or overlooked, without ever explicitly formulating his case as a critique; but, by putting the matter into a different, more comprehensive context, he profoundly challenges the more traditional views. In the first case, the notion of authenticity is explored historically as a social phenomenon. He brings attention to the neglected ethical dimension of being real, the interactive nature of one's sense of identity, and the moral implications of the value placed on becoming genuine and authentic. The reconsidering of these neglected dimensions of the values attributed to authenticity challenges and elevates the discourse on the subject, not only within the psychoanalytic community (the paper was published in Psychoanalytic Psychology - Safran, 2017) but equally relevantly to therapists of all orientations.
The chapter - originally published as an article - Psychotherapy integration: A postmodern critique (Safran & Messer, 2015) likewise starts by re-casting the inquiry about the aims and limits of psychotherapy integration into a larger socio-historical context. As in the previous example, the re-framing of the question, the re-formulating of the context (from the technical/theoretical to the historical/social), opens up the discussion in a new dimension. They do not premise the paper on marshalling a critique of what has been overlooked, but point out that the core issue lies in understanding the challenge of integrating different theoretical visions as part of a larger intellectual current, and they bring the understandings and critique of intellectual pluralism and post-modern philosophy to provide a more inclusive, richer context in which the limits of integrating diverse theories can be appreciated. Jeremy made contributions to an amazing variety of subjects, especially considering his tragically short career. Many of us know of him for his contributions to alliance research and especially his work on intersubjectivity. Rather than exploring these specific and very important fields of contribution, I chose to focus on his particular style of intellect, the journey of an independent thinker, a quality that I think we should particularly treasure. Naturally, this short essay is a very incomplete and biased glance backwards at a very complete and accomplished friend and colleague. Perhaps a more just and complete summary of Jeremy Safran's life and contributions is to be found in the sum of memorials and memories of all who remember him fondly, including this special volume. His life was tragically shortened, and we are all the poorer for it.
profiles ISSN 1948-6596 Interview with Robert E. Ricklefs, recipient of the 2011 Alfred Russel Wallace award by Rosemary G. Gillespie, Division of Organisms and Environment, University of California at Berkeley, USA Rosemary Gillespie: You started working on islands as a new graduate student. Why islands? Robert E. Ricklefs: When I arrived at the University of Pennsylvania to begin graduate studies with Robert MacArthur, he and Ed Wilson had just published their seminal paper in Evolution on the “theory of insular zoogeography.” It was only natural that I turned my attention to islands—and birds because I had for years been an avid birdwatcher, as was MacArthur; the West Indies because James Bond at the Academy of Natural Sciences of Philadelphia had published a distributional checklist. I quickly developed a scheme postulating changes in bird distributions in the islands similar to Wilson’s “taxon cycle,” but MacArthur felt that historical hypotheses were untestable and discouraged further work. I slipped in some observations of bird distributions on the island of Jamaica one summer, which resulted in a 1970 publication, but did not return to the topic until I had become an Assistant Professor at Penn., and began a collaboration with George Cox, a superb ecologist and naturalist at San Diego State University. My doctoral thesis at Penn. addressed issues in bird life histories—nothing on community ecology or biogeography. RG: You began your career in the 1960s, following in the footsteps of Hutchinson and MacArthur, and at a time when the ideas of Lotka and Volterra were highlighted, competitive exclusion was a new and important topic, along with related themes of niche specialization, ecological sorting, limiting similarity, and community saturation. How important were these in shaping your ideas and the field of biogeography in general?
RER: One had to be impressed by the power of competition, and other types of interactions, revealed in experimental studies. Most ecologists brought up in the Hutchinsonian tradition believed in niches and in resource limitation. These properties of systems were the basis for character displacement, density compensation, and other ecological and evolutionary phenomena. So, all these elements fit together well. Community saturation was perhaps the weakest aspect of the developing theory, as this was difficult to demonstrate experimentally in natural systems, and similar environments in different regions sometimes supported very different numbers of species. The degree to which species interactions constrained species ranges and probabilities of establishment following dispersal were harder to judge, although some ideas about biogeography, such as Jared Diamond’s checkerboard distribution patterns, had an essentially ecological foundation. RG: The 1960s were also the early days of plate tectonics, and the start of panbiogeography, cladistics and vicariance/dispersal debates. How much interaction did you see between these fields? How did they inform one another? How did discussions with the different players affect your thinking? RER: As an ecologist, these developments had relatively little influence on me. Ecologists were relatively indifferent to phylogenetics (as opposed to evolution) at the time, and to any kind of historical explanation, although we were very much interested in adaptive radiation, convergent evolution, character displacement, and evolutionary ecology. From my perspective, the debates concerning panbiogeography, cladistics, and vicariance/dispersal were peripheral issues, and the kind of biogeography involved in these debates seemed to have little connection to ecology. These were historical/geographical issues, and were not discussed widely among ecologists.
Similarly, ‘island biogeography’ was considered as distinct from ‘community ecology’. frontiers of biogeography 3.1, 2011 — © 2011 the authors; journal compilation © 2011 The International Biogeography Society
The perception was that islands just didn't have enough species for ecological limitations on diversity to play a major role. The ecological space on islands wasn't saturated with species. Rather it was colonization limitation and area-dependent extinction that prevented the buildup of a lot of species. Islands were regarded as quite different from mainland continental areas. Moreover, MacArthur wanted ecology to have a quantitatively rational basis; he was very oriented towards understanding predictable processes. Considering these fields separately allowed them to develop in their own logical frameworks. RG: Was there an active shift that really started the integration between island biogeography and evolutionary biology? RER: One really has to go back to David Lack and the Galapagos finches. His book (Darwin's Finches) is very explicit about character displacement, mechanisms of diversification on islands, and how species partitioned resources. People tend to forget about his major contributions. Lack's themes were picked up by Peter and Rosemary Grant in their seminal work on Geospiza, and on it went from there. Likewise, the Anolis studies grew out of Ernst Williams's group at Harvard.
Williams had a lot of foresight, with much emphasis on morphology and taxonomy, even if not in a phylogenetic framework. So that was around, especially in the general context of adaptive radiation. RG: What about phylogenetics being applied to vicariance, and the ordering of the break-up of the continents? RER: That happened later - mid 1980s or 1990s, in particular, when it became possible to date events. The integration of these different fields can really be thought of as a large, braided river, with channels separating and reconnecting the way that areas of research branch apart and come together again in new combinations of ideas and approaches. RG: You made several trips to the Caribbean early in your career. How important were these field trips in shaping your ideas? What advice would you give to beginning graduate students in terms of the importance and role of field work in developing projects and ideas? RER: For ecologists, there is no substitute for working in the field - not just for gathering data, but also for developing 'insight' into natural systems, and making observations that challenge the views one has of nature. Perhaps biogeographers are more able to work solely from maps, but without the ecological dimension of space one can miss out on the transforming effect of environment on species and their distributions. I share the concern of many of the older generation of biogeographers that the increasing use of large-scale datasets and focus on computer-intensive data analysis is drawing students and young investigators away from the more time-consuming fieldwork. I also wonder whether the focus of macroecological studies on place - often latitude-longitude grid cells - is diverting attention from species distributions and ultimately might detract from the integration of ecology and biogeography. RG: It seems that the early field excursion to Jamaica was very important in the development of your ideas.
Would you recommend such an intensive field experience for beginning graduate students today? RER: It's very difficult for students now, because the kinds of pressures they experience are much different than in earlier times. Having to produce 4-5 high-profile papers out of one's doctoral and postdoctoral years is a pretty high bar. So students feel a lot of pressure to publish results quickly, which is not entirely compatible with detailed field observation and the development of expertise with a given taxon. Having too little time to master a group of organisms or the ecology and geography of a region can deprive a student of valuable insight. RG: So maybe one of the important things for students to do is develop mastery of one taxon.
Of course, there are the practical considerations that a single person cannot do everything and so collaboration brings complementary expertise together. The most successful and satisfying collaborations occur when researchers challenge each other to integrate their approaches - to find the commonalities in their different concepts of biological systems. RG: You have a long-held interest in how communities develop in the context of regional processes. How have your ideas changed over the years? RER: Starting out as an ecologist with a strong interest in island biogeography, particularly in the context of MacArthur and Wilson's equilibrium theory, I have always had a strong sense of the impact of regional processes on localized ecological systems. Many other ecologists at the same time had similar perspectives, but from the 1960s through the 1980s, historical explanations were out of favour as being untestable and perhaps even unscientific for that reason. I was able to return to island biogeography 20 years after my initial work with George Cox, with the development of phylogeographic and phylogenetic analyses, and this allowed me and my collaborator Eldredge Bermingham to place distributional events on a time scale and test such time-dependent ideas as the taxon cycle hypothesis. RG: You wrote the first version of the Ecology text early on. Where did you see the need at the time? RER: The book was a major undertaking. In 1972 and 1973, 6 or 7 new ecology texts were published, largely as a reaction to the absence of population and evolutionary perspectives in the ecology texts available at the time. In ecosystem ecology, there was Eugene Odum's text - the text I used as an undergrad. But even that was primarily descriptive and not particularly quantitative, with little emphasis on formulating testable hypotheses in ecology.
So people were reacting to this -ecology was becoming much more quantitative, much more scientific, much more process oriented, with a strong emphasis on adaptation, that is, evolutionary ecology. RG: You have seen many significant developments in the field of biogeography over the years. What do you see as the major "game-changers" in the context of conceptual breakthroughs? Clearly the Equilibrium Theory was important. What other ideas stand out? RER: Certainly the development of modern phylogenetic analysis has to have been the major game changer during the past decades. Placing a time scale and directionality on evolutionary divergences allowed one to infer areas of origin, directions and timing of major dispersal events, and to distinguish, in many cases, between vicariance and long-distance dispersal. Growing availability of palaeoclimate reconstructions and long fossil series have also greatly strengthened our empirical foundations for interpreting the history and geography of life. Increasing data on distributions and improved analytic capacities are shaping many aspects of biogeography at the present time, although one might also lament the balancing decline of support for basic taxonomic and systematic research required to ensure data quality and to provide a context for determining the biogeographic implications of these data. RG: There has been a lot of recent attention on community phylogenetics -ecological sorting, filtering, and niche modelling. Where do you see this going? RER: This line of research has been productive, but often tells us little that we didn't already know in a less formalized context. Niche modelling has been used to look at changes in distribution under climate change scenarios and whether or not introduced species can spread. The problem is that one doesn't know ahead of time whether an introduced species can spread through conditions that are outside its range back home. 
So what is the appropriate distribution model for that species? How much is the distribution of that species back home controlled by factors that are unrelated to climate, but rather reflect interactions with other species? We really don't have a good handle on that. Moreover, in addition to climate, other factors, such as soil and habitat structure, are important components. So progress will need to wait until we get a better handle on what the niche of a species is. RG: What do you see as the major exciting developments to come in the context of ecological/evolutionary theory in biogeography? RER: I would go back all the way to Ed Wilson and some of his contemporaries. What they considered important was the biogeography of species. At the moment we are hung up on the ecology and biogeography of places. We decompose regions into lat/long grid cells, cataloguing what occurs in a particular place, rather than focusing on where a particular species occurs. So while environmental niche modelling and related approaches address species distributions, ecologists and many macroecologists focus on the local fitting of species into assemblages, rather than how populations of species are distributed and fit together in regional landscapes. So the integration of what species are doing individually, and what whole communities of species are doing in large regions, is ripe for development. I think that a focus on species populations as the fundamental units of ecology, evolutionary biology, and biogeography provides a natural direction for biogeography in the future. RG: So how do we get there? RER: Part of it may be giving up the place-oriented perspective that we have for research on certain types of questions. Islands are one thing - discrete, self-contained.
Quite different from setting down grid squares in South America, where it is better to look at distributions of species within these large regions and start to develop hypotheses as to why different species are distributed in different ways. Work has started, but in the context of trying to integrate all the species in a large area -their distributions, evolutionary history, interactions-we have a long way to go. RG: How important do you think are the recent ideas that have come out of comparative phylogenetic and phylogeographic analyses of diversification? RER: These models deal with evolution within phylogenies rather than evolution within communities or regions. Adding the additional layers of complexity is a challenge. Even defining the problems remains to be done in a useful way. The reality is that the diversity of species, and their geographic and ecological distributions within regions, unfold over evolutionary time. Do we have to tackle it at this level of complexity to really understand it? I suspect we do to achieve a fundamental understanding of what's going on. So how do we package this in such a way as to get all of that complexity and history and geography and evolutionary adaptation into a study system that is simple enough to get a handle on? Islands give us advantages as they are discrete and not terribly diverse; and archipelagoes are very instructive because of their propensity to support species diversification. Whether we can scale up from these simpler systems is not clear, but it is a place to start. RG: The argument has been made that the genomic era and the ability to sequence everything cheaply will allow us to answer many of these questions. RER: I think that high throughput sequencing might make life even more confusing and challenging! Finally we will begin to understand the total diversity of ecological systems. And what is all this doing, and how is it maintained? 
We can generate hundreds or thousands of cytochrome b sequences or cytochrome oxidase I sequences from soil in a small patch of habitat, each representing a unique type of organism, with its own ecological role and history and distribution. It's obviously an important step towards characterizing diversity and distributions, but how will we generate new insights from such data? RG: What would you recommend for students starting out in the field if they want to do something new and different? RER: I would emphasize the importance of making connections across different traditions and areas of biology. Achieving integration and breadth is hard for students. We don't foster it enough in our academic structure, as departments are often divided and academic traditions confined. And it's often hard to master more than one area. But I feel that I have benefited from having a curiosity about many things -not unlike a stamp collector's curiosity! I find interest in almost anything. So I have pursued a lot of different issues in ecology and biogeography, which has enabled me to draw connections. I undoubtedly have sacrificed depth in some of my work, but it has allowed me to be more integrative and to pull disparate things together. RG: Why do you think the public should care about (and fund) research in biogeography? RER: I would certainly emphasize the commonly repeated social benefits of science in general: satisfying a deeply felt need to understand the world around us; providing information to help predict the consequences of global climate change, to help manage populations of economic importance and conservation concern, and to predict the emergence and spread of pathogens. Beyond these considerations, science is a defining aspect of our culture and civilization, and its practice in the context of a wide variety of phenomena, including the evolution and distribution of life on Earth, helps to maintain a certain level of analytical competence in society as a whole.
Our success depends in large part on coming to rational conclusions and making informed decisions based on data. Of course, all scientists do this in the practice of their profession, but biogeographers uniquely work with evolutionary and ecological processes on global scales of time and space, maintaining valuable traditions within our culture. Your participation in frontiers of biogeography is encouraged. Please send us your articles, comments and/or reviews, as well as pictures, drawings and/or cartoons. We are also open to suggestions on content and/or structure.
Process-induced variability is a growing concern in the design of analog circuits, and in particular for monolithic microwave integrated circuits (MMICs) targeting the 5G and 6G communication systems. The RF and microwave (MW) technologies developed for the deployment of these communication systems exploit devices whose dimension is now well below 100 nm, featuring an increasing variability due to the fabrication process tolerances and the inherent statistical behavior of matter at the nanoscale. In this scenario, variability analysis must be incorporated into circuit design and optimization, with ad hoc models retaining a direct link to the fabrication process and addressing typical MMIC nonlinear applications like power amplification and frequency mixing. This paper presents a flexible procedure to extract black-box models from accurate physics-based simulations, namely TCAD analysis of the active devices and EM simulations for the passive structures, incorporating the dependence on the most relevant fabrication process parameters. We discuss several approaches to extract these models and compare them to highlight their features, both in terms of accuracy and of ease of extraction. We detail how these models can be implemented into EDA tools typically used for RF and MMIC design, allowing for fast and accurate statistical and yield analysis. We demonstrate the proposed approaches extracting the black-box models for the building blocks of a power amplifier in a GaAs technology for X-band applications.

Introduction

The foreseen transition to 6G communication systems (and beyond) calls for increased operation frequency and bandwidth along with reduced power dissipation and high efficiency, opening the way to the exploitation of new technologies and devices.
Both Si nanotechnologies (e.g., CMOS and FinFETs [1][2][3][4]) and III-V-based technologies (GaAs and GaN PHEMTs [5][6][7]) have been continuously optimized for RF/microwave applications to cover the requirements of next generation communication systems, targeting either higher power density for the deployment of the wireless backbone [8], or extremely high operating frequencies to exploit their inherent wideband capability, or both. In analog high-frequency applications, though, the technological quality turns out to be the key for a successful deployment of microwave stages such as power amplifiers (PAs) or mixers [9]. Despite the successful development of RF technologies into sub-100-nm gate length technology nodes, process-induced variability (PIV) still represents an important bottleneck in the design of monolithic microwave integrated circuits (MMICs). From the modeling standpoint, it is therefore mandatory to integrate PIV into the standard, commercial electronic design automation (EDA) tools, to retain the link of a given circuit performance with the underlying technological process. A distinctive feature of microwave circuit design is the need for accurate modeling of both active devices and passive structures (matching networks, filters, couplers, etc.), which are in many cases implemented in semi-lumped form, i.e., adopting both distributed (transmission lines) and lumped (MIM capacitors and spiral inductors) elements. Technology variations impact both the active device and the passive structures, e.g., through the uncertainty of the doping concentration, trap density, mask definition, or passive layer thickness. The random nature of the technological variations, either linked to the granularity of matter at the nanometer scale or to the fabrication process tolerances, makes statistical analysis a fundamental tool for the design and optimization of a microwave stage. 
In this perspective, the designer must be aware that a circuit optimization relying only on the nominal device parameters may turn out to be blurred, or even utterly impaired, when the technological spread is taken into account. Physics-based (PB) analysis is the key modeling approach to link technological parameters to circuit-level performance. In fact, EDA environments often include PB simulation tools, such as physical electro-magnetic (EM) solvers to seamlessly simulate passive structures, or, less frequently, Fourier thermal analysis to model device self-heating. However, EM and thermal simulations, despite being mostly linear, are generally regarded as extremely slow and too computationally intensive to allow for a true circuit optimization, let alone the statistical analysis required to include the technological spread into the optimization process itself. Therefore, although in principle PIV may be incorporated into the design process through, e.g., the spread of geometrical dimensions, this is seldom done in actual designs. Thermal simulations are often omitted, unless for applications where temperature is a critical parameter, like in space communication systems [10,11]. EM analyses, however, are almost mandatory at high frequency, as a final tuning/optimization step aimed at taking into account all the coupling effects initially neglected in circuit-level design. However, due to the long simulation time, they are typically based on nominal parameters only, omitting PIV statistical analysis. Even concerning the active device models, a very limited simulation capability is nowadays available at the EDA level to model PIV [12,13], even if recent developments demonstrate the interest in this topic [14].
Just as for the passive structures, physics-based simulation, here through calibrated technology CAD (TCAD), would represent the ideal framework to incorporate PIV into microwave design, but EDA tools do not allow for co-simulation of the active device physical model within the circuit-level design flow, mainly due to the numerical burden of solving the nonlinear physical model (e.g., the drift-diffusion model). Active devices are most often modeled by nonlinear compact models based on equivalent circuits, whose components are calibrated against massive measurement campaigns. Such models lack the insight into the physical device behavior needed to include PIV in a systematic way. In fact, little information concerning the active device variability is usually included in the process design kits (PDKs) provided by foundries, and is furthermore often limited to DC or small-signal data as a worst-case bound [12]. Moreover, in circuit models, statistical variations can only be applied to the macroscopic circuit parameters, hence losing the direct link with the specific contribution of each underlying technological parameter. More recently, behavioral models based on advanced nonlinear characterization methods or neural networks have also gained increasing interest, but still with very limited variability capabilities [15][16][17]. As a result, a gap between accurate physical simulations and circuit-level design is still present, hindering a truly process-aware design of microwave stages. In this paper, we show that black-box models are well suited to bridge this gap, as they directly translate the physical simulations into EDA circuit design environments. In particular, a nonlinear black-box model, namely the X-parameters [18] (Xpar hereafter), is extracted from physics-based simulations of the active device taking PIV into account. A linear black-box model is extracted from EM simulations to model the passive structures, along with their technological spread.
The final stage is then entirely described in terms of coupled black-box models, which can also be regarded as a preliminary step towards the development of the behavioral models needed for system-oriented analysis and stage predistortion, e.g., via DSP manipulation. In this paper, we discuss different possible black-box modeling approaches for the linear (passive) and nonlinear (active) portions of a microwave circuit, and their interfacing. We focus in particular on models allowing for a flexible implementation into the most common EDA tools for RF and MMIC design, and a fast, yet accurate, statistical circuit analysis of PIV.

Block-Wise Stage Simulation through Black-Box Models

Let us consider the block-wise partition of a microwave circuit shown in Figure 1. Each block represents a physical section of the circuit, the left one encompassing the active device (including parasitics), and the right one the passive distributed structures used for matching, biasing, and coupling. We aim at modeling each block with a black-box model extracted from accurate physics-based simulations, including PIV. Such models are chosen among the ones supported for inclusion into commercial EDA tools, e.g., Keysight ADS [19] or Cadence AWR Microwave Office [20], thus allowing us to set up a complete circuit-level simulation by connecting the black-box models by means of a set of interconnection ports. The procedure must identify (1) which black-box model is better suited for fast and accurate circuit-level analysis, and (2) how this model can be extracted from physics-based simulations. Note that the physics-based simulation of a microwave circuit represents a true multiphysics and multiscale problem, requiring specific tools for each block as depicted in Figure 1 (thermal analysis not included).
Passive structures require EM analyses, either full-3D simulations based on the finite element method (FEM, offered, e.g., by the Ansys HFSS [21], Cadence AWR Analyst [22], and Comsol Multiphysics [23] commercial software), or planar-3D simulations, typically based on the method of moments (MoM, offered, e.g., by the Keysight Momentum [24], Cadence AWR Axiem [25], and Sonnet [26] commercial software), on a scale of the order of the wavelength corresponding to the operating frequency, usually spanning from hundreds of micrometers to a few millimeters in the final layout. Active devices, instead, require TCAD simulations, e.g., through the drift-diffusion model or higher-order non-stationary transport models, solved over a domain scale of a few hundreds of nanometers, with a discretization grid fine enough to include all relevant device features like doping distribution, material layers, and contact properties. Physics-based simulation may resort to general-purpose physical simulators, like Comsol Multiphysics, to more specific commercial device TCAD simulators, like Synopsys Sentaurus [27] or Silvaco Victory Device [28], or, finally, to ad hoc developed codes, like our TCAD simulator [29], which has been used for this work. MMIC thermal analysis is not included in this work, as it features manifold aspects (e.g., the coupling between the TCAD thermal model and circuit-level analysis through self-consistent electro-thermal solutions [30], or the integration with FEM-3D thermal analysis tools like Keysight PathWave [31] or CapeSym SYMMIC [32]), which would fall outside the scope of this paper, but is the object of future developments, as it is gaining an increasingly important role in a wide range of applications. We now describe how the blocks in Figure 1 are modeled and connected, exploiting the concept of port waves [33], which is the most natural framework for the analysis of high-frequency circuits.
It is well known that EM equations are a linear (or quasi-linear) function of the external stimuli, and they are usually solved in the frequency domain: at the k-th harmonic ω_k = kω_0 of the fundamental frequency ω_0 (k = 1, ..., N_H, N_H being the maximum truncated harmonic order included in the simulation), a given set of incident waves a_k^(LIN) at all the ports results in a set of reflected waves b_k^(LIN) at all ports and at the same frequency. The relation between these two quantities is given by the equivalent loads Γ_k, which identify a black-box model of the form b_k^(LIN) = Γ_k(β) a_k^(LIN), (1) as shown in Figure 2 (left) for a simple 1-port case. In (1), Γ_k is explicitly made dependent on a set of technological parameters collectively denoted by β, characterized by the nominal values β_0 and the spread δβ. For an N-port passive block, Γ_k(β) essentially corresponds to the N × N scattering matrix at ω_k as a function of β. The active device(s) in Figure 1 are modeled by means of TCAD simulations. The physical model must be solved with external periodic (or quasi-periodic) large-amplitude stimuli, exciting the strong device nonlinearities, to include all the generated harmonics and the frequency mixing. These simulations require non-conventional TCAD solvers in either the time or frequency domain. Currently, only a few examples of such implementations are found in the literature: starting from the pioneering drift-diffusion simulator PISCES-HB [34], harmonic balance is exploited in our TCAD implementation, which extends PISCES-HB by adding small-signal, large-signal, and Green's function-based perturbation analyses [35]. Among time-domain solutions, we mention the shooting solution discussed in [36], and the Boltzmann transport equation-based simulator in [37]. The availability of Green's functions makes our harmonic balance implementation superior for PIV analysis thanks to its numerical efficiency [38,39].
Moreover, it is better suited to extract black-box models to be incorporated into microwave EDA tools, which also employ the harmonic balance approach, and will be used hereafter in this work. Given a set of port incident waves a_k^(NL), the reflected waves at each harmonic are expressed as b_k^(NL) = f_k^(NL)(a_1^(NL), ..., a_(N_H)^(NL); γ). (2) For each harmonic, f_k^(NL) is a function of the magnitude and phase of the incident waves at all harmonics (up to the truncation order N_H), and of a set of physical device parameters γ (e.g., the doping profile or the gate length) with nominal values γ_0 and technological spread δγ. As anticipated, the model in (2) does not include any temperature-dependent effect, as temperature in the TCAD simulations is kept constant at 300 K. Identifying the nonlinear functions f_k^(NL) would be a true challenge: TCAD simulations should be repeated with different amplitudes and phases of all the incident waves to collect all the reflected waves. Interpolating a model over such a huge amount of data seems impractical. To downsize the problem, the device ports are terminated with a fixed embedding circuit, composed of prescribed harmonic loads characterized by the scattering matrix Γ_(k,ext), a given set of bias sources V_DC, and an input generator with swept available power P_av (we assume here a single-tone excitation for the sake of simplicity). With such a constraint, model (2) reduces to a function of the available power and bias only: b_k^(NL) = f_k^(NL)(P_av, V_DC; γ). (3) The most natural choice for the embedding loads is Γ_(k,ext) = Γ_k(β_0), i.e., to embed the device with the equivalent loads presented by the passive structure with nominal parameters at each harmonic k. The model identification now requires the self-consistent solution of the two connected blocks, Figure 2 (right), corresponding to a TCAD mixed-mode analysis, whereby the physical device model is solved self-consistently with the circuit equations for the embedding structure (including bias, power source, and external loads).
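A minimal toy sketch of this self-consistent device/embedding solution, assuming a hypothetical scalar compressive nonlinearity in place of the TCAD device model and a single harmonic (all values below are illustrative, not from the actual simulator):

```python
import numpy as np

# Toy self-consistent device/embedding coupling: a made-up compressive
# nonlinearity b_NL = f_device(a_NL) stands in for the TCAD model, terminated
# on a linear equivalent load gamma. Port wave continuity (a_NL = b_LIN,
# b_NL = a_LIN) is enforced by fixed-point iteration on a single harmonic.
gamma = 0.3 * np.exp(1j * 0.5)      # illustrative equivalent load of the passive block
a_src = 1.0                         # impressed wave from the source

def f_device(a):
    """Hypothetical compressive nonlinearity (placeholder for the device model)."""
    return 1.5 * a / (1.0 + 0.4 * np.abs(a) ** 2)

a = a_src
for _ in range(200):
    b = f_device(a)                 # device reflected wave
    a_new = a_src + gamma * b       # wave returned by the source and the load
    if abs(a_new - a) < 1e-12:
        break
    a = a_new

residual = abs(a - (a_src + gamma * f_device(a)))
```

In the actual flow, the same continuity condition is enforced per harmonic by the harmonic balance solver, with the full physical device model in place of `f_device`.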
Harmonic balance enforces the port wave continuity a_k^(NL) = b_k^(LIN), b_k^(NL) = a_k^(LIN), yielding the solution a_(k,0)^(NL), b_(k,0)^(NL), where the subscript "0" refers to the system at the nominal operating condition and nominal technological parameters, i.e., β = β_0 and γ = γ_0. We now discuss the model dependency on technological variations, i.e., γ = γ_0 + δγ and/or β = β_0 + δβ. Variations δβ of β in the passive block correspond to variations δΓ_k of the equivalent load Γ_k at each harmonic. Since δβ is not a deterministic quantity that can be set a priori, but rather a stochastic term following a prescribed statistical distribution, we must regard δΓ_k as a free variation from the nominal load, with amplitude and phase continuously varying in a domain around Γ_k(β_0), as illustrated in Figure 3. From the active device standpoint, δΓ_k corresponds to equivalent load variations, effectively load-pulling the device around the nominal load. Therefore, to include PIV, the nonlinear block model must be either a global model as a function of Γ_k (e.g., akin to (2)), or at least include local load-pull capability. Equivalently, it must account for the port wave variations δa_k^(NL). For port wave continuity at the interconnecting ports, linear blocks also undergo port wave variations δa_k^(LIN), δb_k^(LIN), as shown in Figure 4 (left). We conclude that, with respect to the model with nominal parameters, a PIV-dependent model must include the dependency on both technological variations (endogenous, i.e., resulting from internal variations of each block) and port wave variations (exogenous, i.e., resulting from variations of the embedding blocks). To model technological variations, two main approaches can be used:
• incremental approach: repeated physical simulations are carried out, varying each technological parameter β_i and γ_i on a set of prescribed values β_il and γ_il, respectively.
A family of models Γ_k(β_il) and f_k^(NL)(·; γ_il) is extracted and fed to the EDA tool, where interpolation among these models allows for statistical analysis or optimization with continuously varying β and γ;
• linearized approach: when variations are small, physical simulations are used to assess the sensitivity of the given nominal model to each i-th parameter variation, e.g., for the passive block, S_k^(β_i) = ∂Γ_k/∂β_i evaluated at β_0. (6)
The same two approaches also apply to wave variations:
• incremental approach: repeated physical simulations with varying incident waves a_k (magnitude and phase) yield a set of reflected waves and a family of models; interpolation allows us to incorporate them into EDA tools;
• linearized approach: the models are linearized as a function of the incident waves a_k.
Combining the above approaches, we obtain four possible cases, as shown in Figure 4 (right). Actually, since the passive network is itself linear as a function of the port waves, the linearized and incremental models coincide, and thus we neglect cases (3L) and (4L). For the active device, instead, all four cases are possible. Note that, since technological variations are static, wave variations occur at the same fundamental frequencies as the nominal operating condition.

Passive Block Black-Box Models

For the passive blocks, we address separately the two cases (1L) and (2L) of Figure 4.

Case (1L): Look-Up Table MDIF File

In case (1L) the model is not linearized in terms of technology variations. To clarify the procedure, we consider first the dependency on a single parameter β: repeated EM simulations are carried out varying β over a prescribed range with a set of samples β_l, l = 1, ..., N_β. The range and the number of samples must be chosen according to the technology used for the circuit development. The resulting values Γ_k(β_l) can be collected in a look-up table as a function of the parameter β, for further inclusion in EDA circuit simulations.
To clarify the procedure, we take as an example the output matching network (OMN) of a power amplifier (PA) designed at 12 GHz and implemented in MMIC GaAs technology for X-band applications [40,41]. The OMN synthesizes a load Z_L = (43 + j10) Ω at the fundamental frequency, shunting up to the third harmonic and minimizing the impedance at higher harmonics. For the design, we adopt a proprietary MMIC foundry PDK exploiting two gold layers (1 µm and 2 µm thick) for micro-strip transmission lines and a 100-nm-thick SiN insulating layer for MIM capacitors (resulting in about 600 pF/mm^2 capacitance per unit area). A preliminary layout of the OMN with nominal technological parameters is shown in Figure 5. Let us address the OMN PIV, focusing in particular on the variability due to the uncertainty of the dielectric layer thickness t_SiN in MIM capacitors. According to the foundry specifications, t_SiN is subject to variations estimated at about ±2% around the nominal value of 100 nm. Hence, in Monte Carlo analysis, the PDK suggests distributing t_SiN randomly according to a Gaussian distribution with standard deviation σ = 2 nm, corresponding to the 2% foundry uncertainty. To sample such a distribution, we run EM simulations with t_SiN equal to the nominal value and to six other values, corresponding to variations of ±σ, ±2σ, and ±3σ. Variations of t_SiN are considered to be fully correlated over a correlation length comparable with the OMN dimensions, i.e., the variations are not local but global. If this hypothesis is relaxed, variations undergo partial compensation over any scale greater than the correlation length, effectively reducing the overall effect of t_SiN variations. Hence, the global variation case can be regarded as a worst-case bound for PIV. EM simulations were carried out within ADS, through the full-wave planar-3D Momentum simulation engine.
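The sampling-plus-interpolation flow just described can be sketched in a few lines; the tabulated S11 values below are illustrative placeholders (not the Momentum results of the actual OMN), and piecewise-linear interpolation stands in for the polynomial/spline fit used by the circuit simulator:

```python
import numpy as np

# Case (1L) look-up-table model for a single parameter (t_SiN): S11 at the
# fundamental is tabulated at the nominal value and at +/-1,2,3 sigma
# (sigma = 2 nm), then interpolated during Monte Carlo analysis.
# The S11 values are made-up placeholders, not EM simulation results.
t_tab = np.array([94.0, 96.0, 98.0, 100.0, 102.0, 104.0, 106.0])   # nm
s11_tab = np.array([0.420 - 0.310j, 0.400 - 0.290j, 0.385 - 0.272j,
                    0.375 - 0.260j, 0.368 - 0.251j, 0.363 - 0.245j,
                    0.360 - 0.241j])

def s11_lut(t):
    """Interpolate real/imag parts of the table (piecewise-linear here)."""
    return np.interp(t, t_tab, s11_tab.real) + 1j * np.interp(t, t_tab, s11_tab.imag)

# Monte Carlo PIV: t_SiN ~ N(100 nm, 2 nm), fully correlated (worst-case bound)
rng = np.random.default_rng(0)
t_mc = np.clip(rng.normal(100.0, 2.0, size=10_000), t_tab[0], t_tab[-1])
s11_mc = s11_lut(t_mc)
mean_mag, std_mag = np.abs(s11_mc).mean(), np.abs(s11_mc).std()
```

Samples are clipped to the tabulated range because, as noted for the MDIF model, extrapolation outside the sampled interval is unreliable.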
The EM simulation results are collected into a look-up table model: we have used the ADS S2PMDIF component shown in Figure 6 [42]. It is essentially a 2-port scattering parameter model that allows for parametrization over an external quantity via the measurement data interchange format (MDIF) standard. The MDIF model consists of an ADS citifile collecting the EM simulated S-parameters up to the fifth-harmonic frequency as a function of t_SiN. As an example, Figure 7 (left) shows the OMN S_11 for each harmonic and the sampled values of t_SiN, highlighting their range of variation. With the oxide thickness continuously varying from 92 nm to 108 nm, ADS interpolates/extrapolates the citifile data using polynomials or splines, as shown by the red line in Figure 7 (right). As expected, the interpolation capability of the algorithm is excellent, while attention must be paid when trying to extrapolate. Notice that the circuit-level analysis is considerably more efficient from the numerical standpoint: the EM simulation took approximately 15 min for each dielectric layer thickness, while the circuit-level simulation with more than 150 t_SiN values that generates the data shown in Figure 7 (right) is almost instantaneous. Since the final passive circuit-level model is look-up-table-based, the proposed method is very general, and the model can also be extracted from more accurate full-3D EM simulators external to ADS. Of course, if multiple parameters are simultaneously considered, the space over which the model must be sampled becomes multidimensional, and the interpolation of the look-up table data requires a careful choice of the sampling points.

Case (2L): Equivalent PIV Generators

For case (2L) in Figure 4, the model must be linearized as a function of β: Γ_k(β_0 + δβ) ≈ Γ_k(β_0) + Σ_i S_k^(β_i) δβ_i, where S_k^(β_i) are the model sensitivities defined in (6). At first order, the reflected wave becomes b_k^(LIN) ≈ Γ_k(β_0) a_k^(LIN) + b_k^(PIV), with b_k^(PIV) = Σ_i S_k^(β_i) δβ_i a_(k,0)^(LIN), i.e., b_k^(PIV) acts as an equivalent impressed wave generator [43] proportional to the technological variations, as shown in Figure 8.
The model is entirely identified by the parametric sensitivity, which can be extracted numerically with only two EM simulations, i.e., one with the nominal parameter value, and a second one with a small perturbation. The simulation time for the model extraction is therefore reduced with respect to the incremental case. EM solvers allowing for numerically efficient ways to calculate the sensitivities, e.g., through a Green's function (GF) approach similar to the one used in the active device physical simulations, would further reduce the model extraction time. GFs would also allow us to efficiently take into account local variations of the dielectric thickness but, as already explained, we take global variations as a worst-case bound. The model of Figure 8 is implemented into ADS with ad hoc equivalent generators and compared with the look-up-table model of case (1L). Considering again the OMN example of the previous section with t_SiN varying in the interval [94, 106] nm, Figure 9 (left) shows that the linearized model cannot predict the significant nonlinearity of the S_11 magnitude for t_SiN < 98 nm, while the phase shows an overall nonlinear behavior. The effect of nonlinearity is especially evident in the statistical analysis required for PIV. The Monte Carlo OMN simulation, with t_SiN taking random values from a Gaussian distribution with 2 nm standard deviation, is shown in Figure 9 (right). The distribution of the S_11 magnitude and phase obtained from the MDIF incremental model shows a pronounced skew, whereas the linearized model predicts a symmetric Gaussian behavior. The result shown is not unusual. Even if in a mature technology parameter variations are expected to be small with respect to their nominal values, the sensitivity of the distributed matching network to the transmission line length or width can be very important, e.g., at the resonance frequencies.
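The contrast between the two approaches can be reproduced with a synthetic stand-in for the EM-simulated S11(t_SiN), deliberately given a quadratic term to mimic the reported nonlinearity; the coefficients are made up for illustration, and the sensitivity is extracted from only two "simulations", as in case (2L):

```python
import numpy as np

# Synthetic, deliberately nonlinear stand-in for the EM-simulated S11(t_SiN)
# (illustrative coefficients only, not foundry or Momentum data).
def s11_em(t):  # t in nm
    return (0.375 - 0.26j) + 0.004 * (t - 100.0) + 0.0015 * (t - 100.0) ** 2

t0, dt = 100.0, 0.5
# Case (2L): sensitivity from only two "EM simulations" (nominal + perturbed)
sens = (s11_em(t0 + dt) - s11_em(t0)) / dt
s11_lin = lambda t: s11_em(t0) + sens * (t - t0)   # linearized model

rng = np.random.default_rng(1)
t_mc = rng.normal(t0, 2.0, size=20_000)            # t_SiN ~ N(100 nm, 2 nm)

mag_full = np.abs(s11_em(t_mc))    # incremental / look-up-table behavior
mag_lin = np.abs(s11_lin(t_mc))    # equivalent-generator (linearized) model

def skew(x):
    """Sample skewness: third standardized central moment."""
    return np.mean(((x - x.mean()) / x.std()) ** 3)
```

With these made-up numbers, the full model yields a visibly skewed magnitude distribution while the linearized one stays nearly symmetric, consistent with the behavior described above.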
We remark that the nonsymmetric feature of the statistical distribution is especially important to correctly identify the corners for yield analysis. Since the ADS simulation time is practically the same for both the incremental and linearized analysis, we conclude that the obtained accuracy and time saving are not sufficient to justify the linearized approach, while the look-up-table models (case (1L)) are a better compromise between simulation speed and accuracy.

Active Device Black-Box Model

Extracting a black-box model for the active device PIV is the most challenging part, since active device black-box models are already challenging per se, even without considering variability. In principle, for the active block all of the four cases (1NL)-(4NL) of Figure 4 must be considered. Cases (3NL)-(4NL), however, require load-pull physical simulations with varying loads. The number of simulations required is determined by sampling the multi-dimensional space made of the real and imaginary parts of Γ_k plus the technological parameters γ. The only way to exploit this amount of data in EDA simulations would be through a generalized MDIF (GMDIF), a data format specifically developed for accessing/saving multidimensional data (multiple independent vs. dependent variables) external to the circuit simulator, e.g., from measurements or independent physical simulations, basically extending the MDIF format used in Section 3, which was limited to linear blocks. The GMDIF format can be used to collect data representing the device in nonlinear conditions, e.g., the harmonic components of the currents, voltages, or port waves as a function of input power, γ, and load. Although a multidimensional interpolation over the parameter domain is in principle possible, such a model turns out to be too demanding a task. Furthermore, the model accuracy may also be poor. Therefore, cases (3NL) and (4NL) will not be discussed any further.
In this work we propose to circumvent the multidimensional interpolation issue by using the Xpar model [18], a black-box model where the device is linearized around a nominal operating condition, corresponding to a prescribed port load. As such, X-parameters seem to be the ideal choice to address cases (1NL) and (2NL). The Xpar model expresses the reflected waves at the device ports as a linearized function of the incident waves, therefore extending the concept of S-parameters to the nonlinear large-signal (LS) regime. A power source, e.g., a single-tone generator, drives the device in a nonlinear LS operating condition with ports terminated on a fixed reference load Γ_ext, typically 50 Ω. To fix the ideas, we consider the single-tone injection in port 1 at the first harmonic. Small-amplitude incident waves are added to each device port and each harmonic (with the exception of the same port and the same frequency of the LS excitation) to perturb the LS working point. According to the multiharmonic linearization around the LS working point [44], the reflected wave b_pk at port p and harmonic k is expressed as b_pk = X^F_pk(|a_11|) P^k + Σ_(q,l)≠(1,1) X^S_pk,ql(|a_11|) P^(k-l) a_ql + Σ_(q,l)≠(1,1) X^T_pk,ql(|a_11|) P^(k+l) (a_ql)*, where P = e^(j∠a_11). (9) The Xpar functions X^F, X^S, and X^T fully identify the model, depending on the DC bias voltages and on the input large-signal incident wave at the fundamental frequency |a_11| or, equivalently, on the available power P_av if the source impedance is chosen equal to the normalization impedance. In order to include PIV, the Xpar model is made dependent on the active device technological parameters γ. Equation (9) can be regarded as a particular way of linearizing (3) around the nominal LS working point with reference loads Γ_ext: as discussed in [44], X^F relates to the AM-AM/AM-PM curves with a perfectly matched output impedance (typically 50 Ω), while X^S and X^T are sensitivity terms accounting for the device response to a (small) load mismatch, as required in the PIV analysis (see Figure 3).
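A sketch of how a reflected wave is assembled from the Xpar functions, for a 2-port, 2-harmonic example; the function name and all coefficient values are hypothetical placeholders, not extracted model data:

```python
import numpy as np

# Minimal sketch of the Xpar multiharmonic linearization: reflected wave
# b at (port k, harmonic m) from the X^F/X^S/X^T terms. All X values are
# made up for illustration; perturbations a_(l,n) at port l, harmonic n.
NH = 2            # truncation order
ports = (1, 2)

def xpar_b(port_k, m, a11, a_pert, XF, XS, XT):
    """Reflected wave at (port_k, harmonic m) around the LS working point."""
    P = np.exp(1j * np.angle(a11))          # phase reference of the LS drive
    b = XF[(port_k, m)] * P**m              # large-signal response, matched load
    for l in ports:
        for n in range(1, NH + 1):
            if (l, n) == (1, 1):
                continue                    # the LS tone itself is not a perturbation
            a = a_pert.get((l, n), 0.0)
            b += XS[(port_k, m, l, n)] * P**(m - n) * a           # S-type term
            b += XT[(port_k, m, l, n)] * P**(m + n) * np.conj(a)  # T-type term
    return b

# Hypothetical model values for output port 2, harmonics 1 and 2
XF = {(2, 1): 1.8 - 0.4j, (2, 2): 0.3 + 0.1j}
XS = {(2, m, l, n): 0.05 * m - 0.02j * n for m in (1, 2) for l in ports for n in (1, 2)}
XT = {(2, m, l, n): 0.01j * (m + n) for m in (1, 2) for l in ports for n in (1, 2)}

a11 = 0.5 * np.exp(1j * 0.7)                # LS incident wave at port 1, harmonic 1
b21 = xpar_b(2, 1, a11, {(2, 1): 0.02 + 0.01j}, XF, XS, XT)
```

With no perturbations, the response reduces to the X^F term alone; a small incident wave at any other port/harmonic adds the S- and T-type contributions.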
Sampling the Xpar model over a prescribed input power interval, e.g., driving the device from back-off to compression, we generate a look-up-table model (black-box model) suited for circuit-level analysis. Here we adopt the proprietary ADS .xnp file format and the corresponding Xpar schematic component [19]. The multiharmonic linearization around the LS working point, on which Xpars are based, is a problem addressed more generally by the sideband conversion analysis, where the small perturbation is imposed at a frequency displaced with respect to the LS harmonic by the sideband frequency [45]; see Figure 10. Xpars can be regarded as a special case, when the perturbation occurs at the same LS harmonics, i.e., with null sideband frequency; in fact, a one-to-one relationship exists between the two representations [44]. According to the sideband analysis, a sideband conversion matrix (SCM) describes frequency conversion among sidebands in terms of a matrix product, i.e., a linear superposition [45]. Sideband conversion analysis was introduced within the framework of TCAD simulations in [35], and was later extended to device variability and sensitivity analyses [38,39,46] and to assist the PIV-aware microwave circuit design [47][48][49]. In our in-house TCAD simulator, the admittance SCM Y is calculated with short-circuited device ports and converted into the scattering SCM S by

S = (I − Z_0 Y)(I + Z_0 Y)^−1,

where Z_0 is a block diagonal matrix with entries equal to the reference port impedance Z 0 for each harmonic and I is the identity matrix [40]. For implementation reasons, the SCM evaluated in TCAD tools is defined with reference to a bilateral spectrum, so that the harmonic index runs from −N H to N H . Furthermore, the frequency offset taken into account is positive, so that only upper sidebands are used.
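The admittance-to-scattering SCM conversion, with all ports/harmonics normalized to the same reference impedance, takes the standard form S = (I − Z0·Y)(I + Z0·Y)^(−1). A minimal pure-Python sketch for a 2×2 SCM follows; the admittance values are invented for illustration only.

```python
# Toy conversion of an admittance sideband conversion matrix (SCM) into
# the scattering SCM, S = (I - Z0*Y) (I + Z0*Y)^(-1), with every
# port/harmonic normalized to the same reference impedance Z0.
# The 2x2 admittance values below are invented for illustration.

Z0 = 50.0

def mat2_mul(A, B):
    """Product of two 2x2 complex matrices."""
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

def mat2_inv(A):
    """Inverse of a 2x2 complex matrix."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

Y = [[0.020 + 0.001j, 0.001j],
     [0.0005j, 0.020 - 0.002j]]

I2 = [[1.0, 0.0], [0.0, 1.0]]
ImZY = [[I2[i][j] - Z0 * Y[i][j] for j in range(2)] for i in range(2)]
IpZY = [[I2[i][j] + Z0 * Y[i][j] for j in range(2)] for i in range(2)]
S = mat2_mul(ImZY, mat2_inv(IpZY))
```

A convenient numerical check is that the inverse map (I − S)(I + S)^(−1)/Z0 recovers Y.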
On the contrary, as can be seen from (9), Xpars differentiate the contributions from the upper and lower sidebands of each harmonic using both the incident waves a_l^(NL) and their complex conjugates (a_l^(NL))*. However, spectral symmetry implies that the upper sideband of a negative frequency harmonic p < 0 corresponds to the lower sideband of harmonic −p for the unilateral spectrum. Therefore, as shown in Figure 10, for each (k, l > 0), S can be converted into the S_kl and T_kl Xpars: the S-type Xpar S_kl corresponds to the (k, l) element of the scattering SCM, while the T-type Xpar T_kl corresponds to the (k, −l) element.

Figure 10. Extraction of X-parameters from the scattering sideband conversion matrix.

Case (1NL): Look-Up Table X-Parameters

To account for the dependency of the Xpars on process variability, TCAD simulations of the active device are carried out with technological parameters γ varying over a prescribed interval with N γ samples, and a look-up-table-based Xpar model interpolates among the available data to make the Xpar model depend continuously on γ. To demonstrate the procedure, we consider the design of a power amplifier stage exploiting a GaAs MESFET as the active device [48]. The PA is designed for tuned load deep class AB operation (10% I DSS ) at 12 GHz, with optimum load Z L,opt = (43 + j10) Ω at the fundamental frequency and shorted harmonics. With this loading condition, we investigate the FET behavior with varying channel doping (γ = N D ) in the interval ±10% around the nominal value N D = 2 × 10^17 cm^−3 [40]. The aim is to extract a dependable Xpar device model to be used in the final design of the PA, describing the doping dependence and the effect of possible deviations from the nominal value of the optimum load. To extract the Xpar model, TCAD simulations have been carried out with three doping values (nominal, and ±10%) and the device ports terminated on matched loads (50 Ω at all ports and all harmonics).
Xpars were extracted for each doping value using the S SCM as explained above, and stored in a single ADS .xnp file which, being itself a particular form of GMDIF, allows for the interpolation of the Xpars over any independent variable. To make the interpolation simpler, we inserted an additional, fictitious port in the Xpar model; the extra port is isolated from the other ones (Xpars are padded with zeros to avoid interfering with the other active ports) and its port voltage, set to a desired doping value, only serves to drive the interpolation over doping in the Xpar file (see Figure 11).

Figure 11. Left: Xpar with extra port DOPING used for doping interpolation in the Xpar file. In this example a 4-port device component (two DC and two RF ports) becomes a 5-port for doping-dependent analysis.

TCAD simulations have then been repeated with ideal tuners implementing the optimum load (Figure 12). These simulations will be used for the Xpar model validation. Doping variations strongly affect the device nonlinear operation. In fact, the drain current increases with higher doping in both the DC bias and the harmonic amplitudes, giving rise to different behaviors at varying input power. At lower input drive (Figure 12, left), the device biased in class AB exhibits the transition from a "class-A-like" behavior to a "class-B-like" one, with a significant clipping of the drain waveform. Such a transition is significantly impacted by doping: the lower the doping, the more the device is pushed towards class-B-like behavior, due to the reduction of the bias current, while the opposite is true when doping is increased. Even in harsh compression (Figure 12, right), doping significantly affects the device operating condition. In particular, the knee voltage is lower with higher doping, resulting in a larger voltage swing and output power.
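The fictitious DOPING port lets the simulator interpolate the tabulated Xpars between the three sampled doping values; conceptually this amounts to piecewise-linear interpolation over the parameter grid, sketched here with invented Xpar samples.

```python
# Piecewise-linear interpolation of one (complex) Xpar entry over the
# doping samples stored in a look-up table. Three samples at -10%,
# nominal, and +10% doping, as in the extraction described above;
# the Xpar values themselves are invented for illustration.

def interp_lut(x, xs, ys):
    """1-D piecewise-linear interpolation (xs ascending); clamps at the ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

ND = [1.8e17, 2.0e17, 2.2e17]                      # doping samples (cm^-3)
XF11 = [0.42 + 0.10j, 0.50 + 0.12j, 0.57 + 0.15j]  # toy Xpar samples

# Query at -5% doping, a value not present in the extracted table:
x = interp_lut(1.9e17, ND, XF11)
```

This is the same mechanism that allows the ±5% doping cases, never simulated in TCAD, to be reproduced accurately by the look-up-table model.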
Turning to the Xpar model validation, simulations have been carried out within ADS exploiting the active device Xpar model loaded with ideal tuners implementing the optimum load. Note that such a load differs from the one used for the Xpar model extraction (50 Ω). Nonetheless, Xpars easily accommodate both the load mismatch and the doping variations, both in back-off and in compression, as shown by the dynamic load lines (DLLs) in Figure 13, where the drain current is plotted against both the drain voltage and the gate voltage. The accuracy is always remarkable, up to a doping variation of ±10% and with input power ranging from back-off to compression: even doping variations of ±5%, not used for the Xpar model extraction, are correctly modeled, showing the accuracy of the ADS interpolation capability within the GMDIF data file. The accuracy of the Xpar model is further verified by the gate current DLL (Figure 14), showing the gate current as a function of the gate voltage at the input port. Such DLLs demonstrate the dependence of the input gate nonlinear capacitance on doping, and are extremely valuable in the design of a power amplifier, since they predict the input mismatch. Even in this case the accuracy is extremely satisfying up to a ±10% doping variation. Notice that the simulation time for the TCAD analysis on a 25-value input power sweep and 10 harmonics is around 20 h for each doping value, while the Xpar simulations within ADS achieve the accuracy shown in Figures 13 and 14 in just a few seconds of simulation time.

Case (2NL): X-Parameters with Equivalent PIV Generators

Linearizing (3) with respect to the physical parameters γ i we have:

b_k ≈ b_k(γ_0) + Σ_i (∂b_k/∂γ_i) Δγ_i,

where we have used the sensitivity defined in (6).
As in the (2L) case, equivalent waves V_k can be added to the nominal device ports to describe parametric variations (see Figure 15). The equivalent PIV generators can be calculated by TCAD concurrently with the device S SCM [29] by means of a Green's function approach: localized technological variations inside the device are transferred by the GFs to equivalent port wave variations. The GF approach greatly reduces the computational cost of TCAD variability analysis, since the relevant propagation quantities must be calculated only once for the nominal parameters γ 0 , thus avoiding repeated simulations. Furthermore, the same analysis allows us to calculate the linearized electrical model through the Xpars, which also requires the same S SCM [29]. The resulting model is therefore a doubly linear model, both in terms of port waves and in terms of parameter variations. The method is applied to the same GaAs FET used for case (1NL). The schematic in Figure 15 (right) shows the implementation in ADS: the Xpar model is now required only for the nominal doping and reduced to a 4-port (the extra DOPING port is eliminated), while short-circuit current generators are used at the input and output DC and RF ports to generate the PIV equivalent waves. The equivalent generators extracted from TCAD are stored in dataset files for each harmonic, and accessed via the data access component in ADS. As an example of the obtained results, Figure 16 shows the model accuracy on the same PA on the class AB optimum load as in Section 4.1. The agreement is very good up to ±5% doping variations, but less satisfying at ±10% doping variations. The accuracy diminishes especially at the lower doping values and lower input drive, where the operating condition of the device has a very strong dependency on doping.
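The doubly linear model then corrects the nominal reflected waves by superposing the equivalent PIV wave generators, each scaled by its parameter deviation. A minimal sketch follows; all sensitivities and deviations are invented numbers for illustration.

```python
# Doubly linear PIV model: nominal reflected waves plus equivalent
# port-wave generators scaled by the parameter deviations,
#   b_k ~= b_k(nominal) + sum_i (d b_k / d gamma_i) * Delta_gamma_i.
# Sensitivities and deviations below are invented for illustration.

b_nom = [0.50 + 0.12j, -0.20 + 0.05j]   # nominal reflected waves
# One row per port/harmonic, one column per technological parameter:
sens = [[2.0e-18 + 1.0e-18j],           # d b_1 / d N_D
        [-1.5e-18 + 0.5e-18j]]          # d b_2 / d N_D
dgamma = [0.1 * 2.0e17]                 # +10% doping deviation (cm^-3)

b = [b_nom[k] + sum(sens[k][i] * dgamma[i] for i in range(len(dgamma)))
     for k in range(len(b_nom))]
```

Because the correction is a plain superposition, extending it to several concurrent parameter variations only adds columns to the sensitivity table; this is precisely why the Green's function approach scales so well.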
Although the validity of the linearized model is restricted to a more limited interval with respect to the incremental Xpar model (case (1NL)), it may still be sufficient to describe most of the process-induced doping variations in a mature technology. This is not the case for extremely downsized CMOS or FinFET devices, featuring gate lengths below 20 nm, where the doubly linear model based on linearization may not be sufficient to describe large statistical fluctuations. However, it may be appealing for three applications:

1. Case (2NL) provides a simple way to couple experimental Xpar characterization on a limited set of sample ("nominal") devices with the equivalent PIV generators extracted in an independent way, i.e., through physical simulations. This would avoid the time-consuming and costly statistical characterization campaigns on a given technology, a procedure well beyond the typical laboratory capabilities usually available to microwave designers.

2. Green's functions allow for a dramatic reduction of the model extraction time when we address concurrent multiple variations of technological parameters; in fact, GFs must be calculated just once with the nominal parameter values, and equivalent port generators are readily calculated for all variations. Hence case (2NL), avoiding the multidimensional interpolation of look-up-table data, may turn out to be more robust from the numerical standpoint. However, linearization also implies that multiple uncorrelated parameters will induce fully uncorrelated equivalent PIV generators, a result that has been proven wrong in some aggressively downscaled technologies [50].

3. Green's functions allow us to easily address the effect of local fluctuations [38]. Although in this paper we restricted the analysis to global variations, local variations (also statistical in nature) are quite common in silicon technologies, e.g., random doping fluctuations [50].
Nonetheless, the efficient calculation of GFs in TCAD is not at all conventional. Up to now, GF-based variability analysis within the harmonic balance approach (i.e., for nonlinear large-signal applications) is unique to our in-house software [29], and it is not available in any commercial simulator (indeed Synopsys Sentaurus [27] does implement a GF-based variability analysis, but limited to the DC case only). In conclusion, a careful choice between cases (1NL) and (2NL) must be made for each specific technology. For a limited number of parameters, approach (1NL) proves preferable for its superior accuracy.

Conclusions

We have addressed the problem of incorporating process-induced variability in the design of MMICs for the next generation of communication systems, where fabrication tolerances and nanometric device features will significantly impact the circuit performance. Statistical analysis of such variations requires accurate modeling strategies due to their ubiquitous presence in all the physical sections of the integrated circuit. In particular, we have shown a flexible procedure to extract black-box models from physical simulations to be readily integrated into commercial EDA tools for MMIC design, retaining a direct link to the fabrication process of both the passive and active circuit components, hence allowing for a PIV-aware design.

Author Contributions: Conceptualization, methodology, and writing-review and editing, all authors; investigation, funding acquisition and writing-original draft preparation, S.D.G. All authors have read and agreed to the published version of the manuscript.

Funding: This work has been supported by the Italian Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR) under the PRIN 2017 Project "Empowering GaN-on-SiC and GaN-on-Si technologies for the next challenging millimeter-wave applications (GANAPP)".

Conflicts of Interest: The authors declare no conflict of interest.
Genetics of Dilated Cardiomyopathy: Current Knowledge and Future Perspectives

Nowadays, the demand for personalized medicine is steadily growing, and genetic studies represent one of its most important steps. Genetic laboratories from the USA and Europe offer different panels of genes related to DCM, ranging from 30 to more than 150 genes, many of them only anecdotally associated with the disease or with a putative link based on a biological relationship with known genes. A detailed analysis of each different gene is far beyond the aim of this chapter, which will focus on the complexity of the interpretation of the "evidence-based" DCM genetic background. Below is a brief list of the most investigated and evidence-based genes, grouped according to functional intracellular similarity. Cardiac sarcomeric and cytoskeletal genes (TTN above all) are the most frequently encountered. Other involved genes are spread all over the cardiomyocyte's biological pathways and cell compartments, encoding components of the desmosome, structural cytoskeleton, nuclear lamina, mitochondria, and ion flux-handling proteins [1] (Fig. 5.1). We must premise that times are rapidly changing, and this list may no longer be representative of the entire genetic landscape of the disease in the coming years.

Structural Cytoskeleton Z-Disk Genes

The cardiomyocyte's structural integrity, sarcomeric orientation and contraction, and mechano-sensing transduction depend on the correct function of the cytoskeleton and Z-disk. DES, DMD, FLNC, NEXN, NEBL, LDB3, and VCL encode components of both sarcolemmal and sarcoplasmic intermediate filaments, co-localizing to the sarcolemmal membrane, sarcoplasmic membrane, and Z-disk structure. Notably, no or only mild ATPase activity is known for these genes; thus, all belong to the non-motor actin-binding protein group within the Z-disk structure.
Mutations in these genes account for 5-10% of familial DCM, but this prevalence could increase after the inclusion of the recently discovered Filamin C (FLNC) gene.

Desmin (DES): Desmin is a cytoskeletal protein which forms muscle-specific intermediate filaments. Mutations in the gene encoding desmin cause a wide spectrum of phenotypes across different cardiomyopathies, skeletal myopathies, and mixed skeletal and cardiac myopathies. Desmin mutations account for 1-2% of all cases of DCM. Cardiac manifestations include restrictive cardiomyopathy (RCM), DCM, conduction system diseases, arrhythmias, and sudden death. An isolated cardiac phenotype has been reported, or it can precede skeletal muscle involvement [2][3][4]. Truncating DES variants are associated with earlier-onset and more severe forms of DCM with diffuse LV fibrosis (unpublished data from the Heart Muscle Disease Registry of Trieste, HMDR).

Dystrophin (DMD): The Dystrophin gene is located on the short arm of the X chromosome and consequently shows an X-linked pattern of inheritance. The dystrophin protein, in conjunction with the dystrophin glycoprotein complex, has an important role in force transmission, being integral to the mechanical link between the intracellular cytoskeleton and the extracellular matrix. Cardiac involvement is present in approximately 90% of cases of Duchenne's muscular dystrophy and 70% of Becker's muscular dystrophy. Abnormal Q waves ("pseudonecrosis") in leads I, aVL, and V6 or in leads II, III, and aVF have been described. Right bundle branch block, atrioventricular block, and supraventricular arrhythmias can be present. About 10% of female carriers of DMD mutations (Duchenne or Becker type) may develop a DCM in the absence of clinical involvement of skeletal muscle and, although anecdotally, missense and truncating DMD variants may present in males with isolated cardiac involvement (DCM) and no signs of muscular dystrophy [5][6][7][8].
Vinculin (VCL): This gene encodes a cytoskeletal protein (vinculin) involved in cell-matrix and cell-cell adhesion. Specifically, vinculin is involved in the linkage of integrin adhesion molecules to the actin cytoskeleton. Mutations in this gene, especially in the cardiac-specific isoform metavinculin, are very rarely found (fewer than ten variants described so far) and have been mainly related to DCM but also to hypertrophic cardiomyopathy (HCM). Nowadays, only a limited number of cases sustain these associations, and segregation studies provided no or only marginal support. Moreover, some of the described families harbored a second mutation that explained the phenotype [9,10].

LIM Domain Binding 3 (LDB3, or Cypher/ZASP): LDB3 interacts with alpha-actinin-2 and protein kinase C, maintaining the structure of the Z-disk during muscle contraction and contributing to signal transduction cascades, including cardiac hypertrophy and ventricular remodeling pathways. Mutations in this gene have been associated with left ventricular non-compaction (LVNC), DCM, HCM, skeletal myopathy, and peripheral neuropathy. The evidence on the pathogenicity of many of the first described variants is actually weak, as some of them have been found with similar frequency in patients and controls [11]. The variants that are more likely pathogenic are mainly located in some of the zinc-binding LIM domains of the protein [12].

Desmosomal Genes

The desmosome is a symmetric myocyte structure in which each part resides in the cytoplasm of one of a pair of adjacent cells, anchoring intermediate filaments in the cytoskeleton to the cell surface. In combination with the adherens and gap junctions, it connects myocardial cells, maintaining both the mechanical and electrical integrity of the heart. Several desmosome genes have been identified in patients with DCM, usually inherited with an autosomal dominant pattern.
Interestingly, desmosome genes (Plakophilin-2 (PKP2), Desmoplakin (DSP), Desmocollin-2 (DSC2), Desmoglein-2 (DSG2), and Plakoglobin (JUP)) were initially described as causing arrhythmogenic right ventricular cardiomyopathy (ARVC), but in 2010 Elliott et al. demonstrated a prevalence of 5% of desmosomal protein-coding gene mutations among 100 unrelated DCM patients [13]. In relation to this aspect, it is now useful to introduce the concept of an "overlapping, gene-driven phenotype" between different forms of cardiomyopathy (which turns out to be a recurrent feature in many genotypes): even if originally described as linked to a peculiar phenotype (as in the case of the JUP and DSP genes with Naxos and Carvajal diseases and with ARVC), a specific genotype can manifest itself in different ways according to other, also non-genetic, modifiers. Furthermore, the genetic overlap between ARVC and DCM has also been shown in most non-desmosomal ARVC-related genes (e.g., LMNA, TMEM43), increasing the possibility of a clinical overlap between different forms of cardiomyopathy. It is worth mentioning the similarity between specific cardiac and cutaneous desmosomal protein isoforms: desmoplakin, plakoglobin, and plakophilin-2 are, in fact, constitutively expressed in the desmosomes of both cardiomyocytes and keratinocytes, and a radical mutation in one of these proteins may often result in cardio-cutaneous syndromes. The desmosomal cadherins (DSC and DSG), conversely, have different isoforms preferentially expressed in the heart (isoform 2) or in the skin (isoforms 1 and 3) [14,15].

Desmoplakin (DSP): DSP codes for the protein desmoplakin, an intracellular obligate component of desmosomes that anchors intermediate filaments, such as desmin and filamins, to the inner desmosomal plaques, while the N terminus of the protein interacts with plakophilin and plakoglobin.
DSP-related DCM is associated with an increased ventricular arrhythmic burden and left ventricular fibrosis, with or without right ventricular involvement (arrhythmogenic cardiomyopathy). In general, frameshift and nonsense mutations in DSP are considered disease-causing, even when they have not been previously described, while missense variants must be evaluated case by case. As previously mentioned, DSP mutations, if present in homozygosity and with an autosomal recessive inheritance pattern, have also been associated with a series of diseases characterized by cardiac and cutaneous involvement, such as Carvajal syndrome (woolly hair, keratoderma, DCM), keratosis palmoplantaris striata II, woolly hair, and lethal acantholytic epidermolysis bullosa. To date, large observational studies investigating the prognosis and clinical manifestations of DSP-related DCM with respect to other genotypes are still lacking, but preliminary data from single-family studies and from the HMDR of Trieste seem to confirm the increased risk of malignant ventricular arrhythmias.

Sarcomeric (Motor) Genes

Mutations in genes encoding the proteins that form the sarcomeric thick and thin filaments have been largely recognized as DCM-causing. These proteins (myosin heavy chain alpha and beta (MYH6 and MYH7, respectively), myosin-binding protein C3 (MYBPC3), troponins (TNNT2, TNNI3, TNNC1), tropomyosin 1 (TPM1), cardiac actin (ACTC1), myopalladin (MYPN)) share catalytic activity and are involved in sarcomeric contraction (MYPN also shares structural properties with Z-disk genes); overall, these genes are involved in about 10% of cases of genetic DCM.
This group of genes is also characterized by a large phenotypic overlap: this is due to increased allelic heterogeneity, where different mutations resulting in different phenotypes are scattered and intercalated throughout the entire nucleotide sequence of a given gene and, more interestingly, a single variant may express itself in different phenotypes within the same family [16,17]. Below is a brief list of the most frequently encountered sarcomeric genes in DCM genotyping:

Myosin heavy chain alpha (MYH6): MYH6 codes for the alpha subunit of the cardiac myosin heavy chain. It is the predominant isoform of myosin heavy chain in the embryonic myocardium. The ATPase activity and the shortening velocity of this isoform are higher than those of the adult beta-myosin isoform. After birth, MYH6 expression decreases and represents on average 7% of ventricular myosin in the adult heart. Despite its low expression, the presence of alpha-myosin is important for ventricular function, and its expression in the adult atrial myocardium remains elevated, being the main isoform in this tissue (MYH6 variants are also strongly associated with atrial septal defects). The characterization of this gene in DCM is representative of the evolving knowledge in cardiac genetics: previous studies have highlighted the importance of MYH6 mutations in DCM patients, also suggesting a possible negative prognostic effect [18]. These MYH6 mutations were distributed in highly conserved residues and were predicted to negatively affect protein function; nevertheless, the growth of genetic databases has cast some doubt on the real contribution of this gene to DCM, since there seems to be no significant mutation excess in DCM patients with respect to controls. Variants in this gene should be evaluated carefully, case by case [11].
Myosin heavy chain beta (MYH7): Beta-myosin heavy chain was the first sarcomeric protein to be linked with cardiomyopathy, and mutations in MYH7 are now common causes of HCM and are also associated with DCM, LVNC, and RCM. With respect to DCM, they are responsible for about 4-6% of familial cases. Truncating variants should generally be considered pathogenic. The converter region of the protein (amino acids 700-790) represents a mutation hotspot which has been shown to correlate with possible overlapping phenotypes and severe prognosis [16,17].

Troponin T type 2 (TNNT2): The protein troponin T type 2 is the tropomyosin-binding subunit of the troponin complex, which is located on the thin filament of striated muscle and regulates muscle contraction in response to alterations in intracellular calcium ion concentration. Mutations in TNNT2 have been associated with HCM, DCM, RCM, and LVNC. Patients with TNNT2 mutations generally exhibit a high frequency of premature sudden cardiac death. The gene accounts for 2-3% of familial DCM forms. The variant Arg173Trp has been associated almost exclusively with the dilated phenotype [19].

Myosin-binding protein C3 (MYBPC3): This gene encodes a member of the myosin-associated proteins, localized in the cross-bridge-bearing zone (C region) of the A bands in cardiac muscle. It is the most commonly mutated gene in HCM and, as other sarcomeric genes, it has also been associated with a dilated or non-compaction phenotype. More recent evidence raises questions about its contribution to the DCM phenotype, given the relatively similar prevalence of rare MYBPC3 variants in healthy and affected individuals of the explored populations [11]. However, it must be underlined that some HCM cases that develop "burnout" physiology may evolve into a dilated phenotype: particular attention should be paid to this aspect when facing a DCM patient with a rare variant in MYBPC3.
Ion Channel-Related Genes

Genes encoding ion-channel proteins are strongly associated with channelopathies, but in the last years a growing number of studies has extended the phenotypic spectrum of the clinical entities related to defects in these genes to include structural (dilated or non-compaction) phenotypes. The mechanistic links behind these associations are still poorly understood, but they are potentially related to altered membrane stability (i.e., the syntrophin-mediated interaction between SCN5A and DMD) or altered calcium handling leading to sarcomeric inefficiency (phospholamban (PLN) and RYR2 variants). HCN4 (hyperpolarization-activated cyclic nucleotide-gated potassium channel 4) mutations have also recently been shown to be associated with LVNC, with or without DCM overlap (NB: the association between HCN4 and DCM still needs to be demonstrated) [20][21][22][23].

SCN5A: This gene encodes the tetrodotoxin-resistant voltage-gated sodium channel Nav1.5, predominantly expressed in the heart. It is responsible for the fast sodium current that causes phase 0 of the action potential. Mutations in this gene, with marked allelic heterogeneity, have been strongly associated with Brugada syndrome in the case of loss-of-function effects and long QT type 3 in the case of gain-of-function effects, both diseases with autosomal dominant transmission. The association with DCM has been, in proportion, very rarely reported; it is generally accepted that these mutations are located in two specific regions of the channel: the voltage-sensing domain (VSD) and the intracellular loops. One of the best characterized mutations is Arg222Gln [20], which affects the VSD. This mutation is also associated with frequent ventricular arrhythmias, cardiac conduction disease, and, in some cases, atrial fibrillation. None of the carriers presented a prolonged QTc. Recently, especially for truncating variants, the association with DCM has been further confirmed [11].
Ryanodine Receptor 2 (RYR2): This gene encodes a ryanodine receptor found in the cardiac muscle sarcoplasmic reticulum. The encoded protein is one of the components of a calcium channel, mediating the release of Ca 2+ from the sarcoplasmic reticulum into the cytoplasm and thereby playing a key role in triggering cardiac muscle contraction. Mutations (>95% missense) in this gene are known to result in catecholaminergic polymorphic ventricular tachycardia (CPVT), typically in the absence of structural heart disease. Some missense mutations were also originally associated with the development of ARVC; however, it is now accepted that these carriers had not fulfilled current diagnostic criteria for the disease. Among missense variants, only one has been clearly associated with the development of structural (hypertrophic) heart disease in patients diagnosed with CPVT. A different variant (exon 3 deletion) has been demonstrated, in two families, to segregate with CPVT and progressive left ventricular dysfunction and/or cavity enlargement in some members [20]. Thus, the occurrence of DCM without a CPVT phenotype related to (radical) RYR2 mutations is yet to be demonstrated.

BCL2-Associated Athanogene 3 (BAG3): Members of the BAG family, including BAG3, are cytoprotective proteins that bind to and regulate Hsp70 family molecular chaperones. Heterozygous mutations in BAG3 have been associated with DCM. The mechanism of disease may, at least in part, depend on a decreased capability to compensate for external stressors. The severity of DCM, in fact, has been shown to vary considerably between carriers. By the age of 70, the disease penetrance is apparently 100%. Both non-truncating and truncating BAG3 mutations are reported, with variable penetrance. A specific variant (Pro209Leu), typically a spontaneous de novo variant, is linked to pediatric myofibrillar myopathy [24,25].
RNA-Binding Motif Protein 20 (RBM20): This gene encodes an RNA-binding protein that acts as a regulator of mRNA splicing of a subset of genes involved in cardiac development, mainly sarcomeric genes (TTN, but also MYH7, TNNT2, and others). The association of this gene with DCM was first established in 2009 by genome-wide linkage analysis and progressively confirmed by subsequent studies. Remarkably, these mutations were located in exon 9, which appears to be a mutational hotspot. Nowadays, mutations outside exon 9 are also reported to be DCM-causative, with similar penetrance and clinical manifestations. With respect to prevalence in DCM families, RBM20 represents a rare genotype, accounting for 2-3% of cases. For this reason, we should underline that evidence-based genotype-phenotype correlations are still lacking: only a small number of studies, in fact, with few index patients or families and short follow-up, reported a phenotype characterized by "severe heart failure, arrhythmia, and the need for cardiac transplantation" [26,27], which still needs to be confirmed in further studies.

Technical Issues in Genetic Sequencing

Over the last three decades, different approaches and technologies have been used to obtain genetic information in families or sporadic patients with hereditary diseases. Linkage analysis was the first method used to identify new disease genes, but this technique requires very large families or a large number of sporadic cases. The advent of "old" sequencing technology (the Sanger method) made genetic analysis much more effective, but with long analysis times and high costs, especially for pathologies with high genetic heterogeneity such as cardiomyopathies. More recently, we are witnessing a revolution in medical genetics and scientific research applied both to the identification of new disease genes and to the massively parallel study of a large number of genes.
This is due to the advent of high-efficiency instruments (NGS) that allowed entry into what is called the era of precision medicine; speed, reliability, and limited costs are the distinctive advantages of these techniques, which allow the parallel analysis of a large number of genes. NGS technologies can be applied in various formats, with the aim of sequencing the entire genome (including non-coding parts), the exome (which includes only the coding regions of the genome), or a group (panel) of selected genes. Currently (though technologies are continuously improving), the latter application seems to offer the best compromise between cost, execution speed, and accuracy for certified diagnostic purposes, as it usually guarantees greater coverage of the analyzed genes [28,29]. Different next-generation platforms have been proposed, differing from each other mainly in their methods of clonal amplification of short DNA fragments (50-400 bases) as a genomic library template and in how these fragment libraries are subsequently sequenced through repetitive cycles to provide a nucleotide readout (see Table 5.1) [30]. However, the discovery of new single nucleotide variants (SNVs) using NGS still requires validation with Sanger sequencing, because of the possible loss of precision in generating a very large number of short DNA fragments with the polymerase chain reaction (PCR) during library building. NGS platforms in fact have error rates approximately ten times higher (1 in 1,000 bases with 20× coverage) than Sanger sequencing (1 in 10,000 bases). Although the reading-depth cutoff for NGS platforms is conventionally set at 20×, many studies indicate that average reading depths greater than 100× are required for the use of these platforms as an independent tool for newly discovered variants, even under optimal conditions [31].
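As an aside, the benefit of deeper read coverage can be illustrated with a deliberately simplified majority-vote model. This toy sketch assumes independent reads that all err toward the same wrong base; it is not the error model of any actual NGS platform, and its absolute numbers will not match the platform error rates quoted above.

```python
from math import comb

def consensus_error(per_read_err: float, depth: int) -> float:
    """Probability that a simple majority vote over `depth` independent reads
    calls the wrong base, in a pessimistic toy model where every erroneous
    read supports the same wrong base."""
    wrong_needed = depth // 2 + 1  # wrong reads required to win the vote
    return sum(
        comb(depth, k) * per_read_err ** k * (1 - per_read_err) ** (depth - k)
        for k in range(wrong_needed, depth + 1)
    )

# The consensus-call error shrinks rapidly as read depth grows.
for depth in (10, 20, 100):
    print(f"{depth}x coverage -> consensus error {consensus_error(0.01, depth):.3g}")
```

Even in this crude model, going from 20× to 100× coverage lowers the consensus error by many orders of magnitude, which is the intuition behind the >100× recommendation for standalone variant discovery.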
The Complexity of the Variant Classification Process Traditionally, a mutation is defined as a permanent change in the nucleotide sequence, whereas a polymorphism is defined as a variant with a frequency above 1%. These terms, however, although widely used, no longer seem suitable to describe the complexity of interindividual genetic variability. The Human Genome Project, culminating in 2001 with the determination of the complete sequence of human DNA [32], provided a first quantitative assessment of interindividual genetic variability and of the possible impact that this variability has on human health. Subsequent international projects (such as ESP and 1000 Genomes, recently merged with other projects into the most comprehensive exome and genome database, gnomAD; http://gnomad.broadinstitute.org) led to the conclusion that about 1 in 1,000 nucleotides in the human genome (three million in total) differs between people, and this variation is largely responsible for the unique physical, behavioral, and medical characteristics of each individual. Along this line, the term "mutation" is no longer strictly associated with the concept of pathogenicity, nor the term "polymorphism" with the concept of benignity. Taking into account the greater complexity of genetic information, the American College of Medical Genetics and Genomics (ACMG) 2015 guidelines defined a new standard [33]; both terms, mutation and polymorphism, should now be replaced by the term "variant," followed by one of these modifiers: (I) pathogenic, (II) likely pathogenic, (III) uncertain significance, (IV) likely benign, or (V) benign. Several stringent criteria are required to reach one of these modifiers, which are defined by crosschecking evidence derived from different categories of evaluation: (a) population and disease-specific genetic databases, (b) in silico predictive algorithms, (c) biochemical characteristics, and (d) literature evidence.
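The tier assignment just described is driven by combining rules over counts of evidence items. The sketch below encodes a small, illustrative subset of the ACMG 2015 pathogenic-side combining rules (very-strong, strong, moderate, and supporting evidence counts); the full guideline defines many more combinations, plus benign-side criteria, so treat this as a teaching aid rather than a clinical classifier.

```python
def classify_pathogenic_side(pvs: int, ps: int, pm: int, pp: int) -> str:
    """Map counts of very-strong (PVS), strong (PS), moderate (PM), and
    supporting (PP) pathogenic evidence to a tier. Partial subset of the
    ACMG 2015 combining rules, for illustration only."""
    if ps >= 2 or (
        pvs >= 1 and (ps >= 1 or pm >= 2 or (pm >= 1 and pp >= 1) or pp >= 2)
    ):
        return "pathogenic"
    if (pvs >= 1 and pm >= 1) or (ps >= 1 and pm >= 1) \
            or (ps >= 1 and pp >= 2) or pm >= 3:
        return "likely pathogenic"
    # Benign and likely-benign rules are intentionally omitted here.
    return "uncertain significance"
```

For example, one very-strong plus one strong item reaches "pathogenic", while one strong plus one moderate item only reaches "likely pathogenic", mirroring how stringent the full standard is.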
A free-access website, http://wintervar.wglab.org/results.php, released by the ACMG, allows a guideline-based, point-by-point analysis of each missense variant of interest. This classification approach is more stringent than previous ones and may result in a larger proportion of variants being categorized as of uncertain significance. It is hoped that this approach will reduce the substantial number of variants reported as "causative" of disease without sufficient supporting evidence for that classification. It is important to keep in mind that when a variant is classified as pathogenic, healthcare providers are highly likely to take that as "actionable," i.e., to alter the treatment or surveillance of a patient, or to remove such management in a genotype-negative family member, based on that determination [11]. In recent years, in fact, genetic laboratories have often shown a lack of uniformity in the definition of variants, especially for variants originally described in the older literature, which are still reported as pathogenic in older databases but were subsequently found to be too common in the general population, and thus unlikely to be disease-causing. This lack of homogeneity has potentially led to different clinical management of similar variants. A similar argument applies to new candidate genes: these genes are included in offered extended panel tests on the basis of a putative biological relationship with known disease-causing genes, but still in the absence of solid population or scientific supporting data. The net effect of extended gene panels is an increase in the number of variants of unknown significance and a relative decrease in actionable variants. It is now worth briefly mentioning the most widely used of these "clinically oriented variant classification" databases: ClinVar and HGMD [34,35].
The ClinVar database (https://www.ncbi.nlm.nih.gov/clinvar/) is a public database that best represents the "historical" process characterizing the classification of each variant: quoting, "ClinVar is a freely accessible, public archive of reports 'coming from research and diagnostic laboratories' of the relationships among human variations and phenotypes, with supporting evidence. ClinVar thus facilitates access to (…) the history of that interpretation." The Human Gene Mutation Database (HGMD®, https://portal.biobase-international.com/hgmd/pro/start.php), available by subscription in its most updated version (last 3 years), is the other most reliable source of information about "known (published) gene lesions responsible for human inherited disease." Since not all laboratories are currently active submitters to ClinVar or HGMD®, clinicians should still be careful in referring to them as a gold standard for variant classification: when a potentially disease-related rare variant is found in a patient, these databases should be regarded as a valuable source of information to crosscheck against, but they represent only one part of the multi-parametric approach that finally leads to a definite variant classification. With respect to variants in DCM-related genes, a recent report [11] shed some light on this topic, helping clinicians to reassess the classification of variants and genes offered by clinical laboratories according to the new guideline standards, in order to elucidate the common characteristics of truly actionable variants. The authors found that in some genes previously strongly associated with a given cardiomyopathy, a rare variant was not clinically informative because there is an unacceptably high likelihood of false-positive interpretation, while, by contrast, in other genes, diagnostic laboratories may have been overly conservative when assessing variant pathogenicity.
Interestingly, some genes proposed on the basis of several (but dated) studies as being among the most common causes of DCM (e.g., MYBPC3, MYH6, and missense variants in SCN5A) showed no excess variation among affected cases, raising an important question about their contribution to DCM phenotype development. Taking the frequency of the most common HCM pathogenic variant in the available population databases (c.1504C>T in MYBPC3: 2.5 × 10−5) as the conservative upper bound, this study clearly elucidated the minor allele frequency (MAF) threshold for a rare variant to be considered pathogenic: 0.0001 in ExAC (ExAC being the first release of what is now gnomAD, composed of exome data). The emerging concept is the odds ratio (OR) of a given variant being disease-causing (e.g., LMNA-truncating variants (tv) reach an OR of ~99 for developing DCM, TTN-truncating variants an OR of ~20 to ~50; FLNC has not been tested): a higher OR corresponds to higher actionability. To summarize, clinicians should be aware that the "pathogenicity" of a variant is a fluid and evolving definition that should be periodically re-evaluated against evidence coming from databases and scientific progress, in order to remain continuously tailored to the patient. The External Modulation of Genotype: Environmental Triggers In DCM, both in sporadic and in familial cases, the pathogenicity of a gene variant is modulated by interfering, non-genetic environmental factors: this interaction could be largely responsible for variability in disease phenotype and prognosis. It is important to keep in mind that current knowledge in this field (the contribution of interfering factors) may still be limited by varying accuracy in the underlying genetic characterization, with the oldest reports having been published before the release of the 2015 ACMG standard. Below is a brief summary of known interfering environmental factors: inflammation, toxic exposure, hormones, and metabolic profile.
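As a worked illustration of the two numbers just discussed (the 1 × 10−4 allele-frequency ceiling and the case-control odds ratio), here is a minimal sketch; the function names and the example counts are hypothetical, not taken from the cited study.

```python
MAF_CEILING = 1e-4  # ExAC allele-frequency upper bound quoted in the text

def passes_rarity_filter(allele_freq: float) -> bool:
    """A variant more common than the ceiling is unlikely to be a
    dominant DCM-causing variant."""
    return allele_freq <= MAF_CEILING

def odds_ratio(case_carriers: int, cases: int,
               ctrl_carriers: int, controls: int) -> float:
    """Odds ratio of carrying the variant in cases vs. controls,
    with a Haldane 0.5 correction when any 2x2 cell is zero."""
    a, b = case_carriers, cases - case_carriers
    c, d = ctrl_carriers, controls - ctrl_carriers
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return (a * d) / (b * c)

print(passes_rarity_filter(2.5e-5))  # MYBPC3 founder-variant frequency quoted above
print(odds_ratio(10, 100, 1, 1000))  # hypothetical carrier counts
```

The rarity filter is only a necessary condition; as the text emphasizes, actionability ultimately tracks the enrichment of the variant in cases, which is what the odds ratio measures.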
Notably, research in this field is currently very active, and all the following statements may be subject to modification in the near future (Table 5.2). To conclude, we may say that the phenotypically normal heart carrying a pathogenic variant (a definition that should be constantly re-evaluated) represents a model of a failing but compensated heart, which is less able, or unable, to sustain a second, environmental, failing hit [48]: all these potential "second hits" must be taken into account in DCM treatment and prognostic stratification. Evidence-Based Genotype-Phenotype Correlations As previously mentioned, the key factor for a correct genotype-phenotype analysis is the accuracy of the underlying variant classification: reliable genotype-dependent phenotypic information is in fact achievable only if driven by a solid pathogenicity assessment. Then, as a patient's phenotype represents the final result of a long-lasting process of interaction between genetic background and environment, clinicians are aware that discovering the net effect of the pathogenic variant requires a careful "pruning" of confounding factors. Furthermore, some correlations can also be outlined "a posteriori," i.e., by the type of response to medical therapy. Finally, in assessing this correlation, it is important to choose the best starting point: a specific mutation versus a specific gene versus specific clusters of genes with similar functions inside the cardiomyocyte. In this line, with respect to truly personalized medicine, the most correct approach would be the correlation between a specific pathogenic variant in a gene and its "private" phenotype; however, in order to achieve a more clinically meaningful classification, gene-clustering attempts have been made and have been shown to allow a rough but functional orientation, especially in therapeutic management [49].
At the current state of knowledge, a good compromise could be represented by the correlation between a specific gene and its phenotype, preceded here by a brief general distinction between the two main categories of variant (with respect to their effect on the protein): missense and truncating (or radical). Generally speaking, the former is expected to affect protein morphology and/or function by changing a single amino acid in the protein sequence, while the latter is expected to cause a premature truncation of the amino acid sequence, leading to a decrease in total protein amount or effectiveness at the cellular level, mainly through nonsense-mediated decay (NMD). Consequently, truncating variants are generally considered less tolerated and linked to haploinsufficiency. Among all human genes, those that are most conserved, expressed in early development, and highly tissue-specific usually do not tolerate being expressed in a single copy and are called haploinsufficient genes [50]. (From Table 5.2: There is compelling evidence that diabetes has a direct negative effect on the heart, being an independent risk factor for heart failure, with multiple mechanisms including mitochondrial dysfunction, oxidative stress, and a shift in energetic substrate utilization. With respect to hypertension, unpublished data from the HMDR of Trieste highlight the role of untreated elevated arterial hypertension in the first manifestation of heart failure in a minor proportion of patients harboring a pathogenic variant in any known DCM-associated gene (TTN above all; HMDR unpublished data); from the other side, the presence of arterial hypertension is a positive predictor of LV reverse remodeling (LVRR) in DCM patients. Furthermore, recent evidence links a specific LMNA variant, p.G602S, with type 2 diabetes [45-47]. In the HMDR registry, a few families are enrolled in which DCM and diabetes co-segregate in the absence of an identified pathogenic variant: the suggestions are that (1) in DCM families, diabetic patients may be at risk of a worse disease prognosis, and (2) in some cases diabetes and dilated cardiomyopathy seem to be genetically correlated. Both these hypotheses need to be further elucidated.) All cardiomyopathy-causing genes are included in this category, but they are not mutated with similar proportions of truncating and missense variants: for example, truncating variants in TTN have been found to be the most frequent mutations in DCM overall, whereas in other DCM disease-causing genes missense variants are the most frequently encountered (with, interestingly, similar actionability). With these principles in mind, among the several papers published on this topic, only a few demonstrate evidence-based genotype-phenotype correlations that are helpful in the clinical management of patients with genetic DCM. To date, the best-characterized correlations regard the LMNA and TTN genes. Filamin C and other genes may, in the near future, reach a similar level of evidence (Fig. 5.2). Lamin A/C LMNA is the most investigated gene in DCM, and the natural history of LMNA-DCM has been outlined in several papers [52-54]. Comprehensively, with a confirmed mortality rate of around 12% at 4 years (up to 30% at 12 years of follow-up), it can be considered the most aggressive genotype in DCM. Its phenotypic expression is characterized by a relatively high incidence of sudden cardiac death or major ventricular arrhythmias, even before the development of systolic left ventricular dysfunction. The median age at disease onset is between 30 and 40 years, and penetrance is almost complete at the age of 70 [52].
It is also associated with a primary disease of the conduction system, with supraventricular arrhythmias and atrioventricular block, called by some authors LMNA "atriopathy." To date, LMNA pathogenic variants represent the only genetic background in DCM included in current guidelines, as it may change clinical choices such as implantable cardioverter-defibrillator (ICD) therapy in primary prevention regardless of left ventricular ejection fraction values (Class IIa, level of evidence B, for ICD implantation in the presence of risk factors [55]: NSVT during ambulatory electrocardiographic monitoring, LVEF < 45% at first evaluation, male sex, and non-missense mutations). The type of variant (missense versus truncating) and its site (before or after the nuclear lamina-interacting domain) have also been examined with respect to prognosis: current evidence shows that mortality rates are similar, but truncating variants are related to earlier penetrance of the disease. No clear effect of the variant site has yet been demonstrated [56]. Titin Titin (TTN) is the largest sarcomeric protein residing within heart muscle. Due to alternative splicing of TTN, the heart expresses two major isoforms (N2B and N2BA) that incorporate four distinct regions termed the Z-line, I-band, A-band, and M-line. The amino terminus of Titin is embedded in the sarcomere Z-disk and participates in myofibril assembly, stabilization, and maintenance. The elastic I-band behaves as a bidirectional spring, restoring sarcomeres to their resting length after systole and limiting their stretch in early diastole. The inextensible A-band binds myosin and myosin-binding protein and is thought to be critical for biomechanical sensing and signaling. The M-band contains a kinase that may participate in strain-sensitive signaling and affect gene expression and cardiac remodeling in DCM.
Due to its higher prevalence in the DCM population compared with Lamin A/C (TTN 12-18% of the whole DCM population, versus LMNA 4-6%), Titin is becoming the most broadly assessed genotype, despite its relatively recent discovery as a DCM-related gene [57]. To date, the evidence of pathogenicity relates almost exclusively to truncating variants. Since Titin-truncating variants (TTNtv) have also been reported in 2-3% of the general population without overt cardiomyopathy, many efforts have been made, first of all, to outline the characteristics that distinguish disease-related truncating variants from benign ones. An important study by Roberts et al. elucidated the importance of the specific site of truncating variants: of the 364 exons of the entire gene, only a part is translated into the cardiac isoforms N2B and N2BA [58]. The proportion (or percentage) of exons spliced in (PSI) is the concept that allows correlating the exon site of the truncating variant with the molecular and clinical consequences of the truncation, with PSI > 15% set as the lowest threshold for penetrance and PSI > 90% describing exon sites with higher cardiac expression and higher association with a fully penetrant DCM phenotype. The entire A-band and the proximal and terminal parts of the I-band contain exons with PSI close to 100%. Truncating variants in M-band and Z-band exons should be evaluated case by case. This is why the OR of a TTNtv varies between 20 and 50 according to the site involved by the mutation. A second paper by the same group further demonstrated this concept, showing that even in the general population without overt cardiomyopathy, the presence of a TTNtv in sites with PSI > 15% mildly, but significantly, affects cardiac dimensions and function when assessed with 3D cardiac magnetic resonance [48]. Lower ventricular mass values, with lower ventricular wall thickness, have recently been outlined as a peculiar phenotypic manifestation of TTNtv [49,59].
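The PSI thresholds above lend themselves to a simple triage rule. The sketch below only restates the two cutoffs quoted in the text; the function name and labels are illustrative, and real variant interpretation weighs far more than PSI alone.

```python
def ttntv_triage(psi_percent: float) -> str:
    """Toy triage of a TTN truncating variant by the PSI of the exon it
    falls in, using the two thresholds quoted in the text."""
    if psi_percent > 90:
        return "high cardiac expression: strongest DCM association"
    if psi_percent > 15:
        return "expressed in cardiac isoforms: potentially penetrant"
    return "low cardiac expression: more likely tolerated"

print(ttntv_triage(99))  # e.g., an A-band exon, where PSI is close to 100%
```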
With respect to other clinical manifestations of TTN-related DCM, the evidence favors a relatively mild and treatable form of the disease compared with the LMNA-related one, with lower mortality rates, in line with the general DCM population. This may be especially true in relatives diagnosed in a preclinical state [49,59]. Clinicians must be aware that TTNtv, even if in a small proportion of cases, can be linked to malignant ventricular arrhythmias, especially in the presence of external modifiers: comprehensively, the sum of the current evidence recommends complete and continuous clinical follow-up of patients with TTNtv-related DCM and their relatives, even in the absence of overt cardiomyopathy [60]. Titin missense variants, on the contrary, are nowadays considered mostly benign. This assumption was tested in a recent multicenter study that sequenced the TTN gene in a cohort of 147 DCM patients, in which the outcome was not affected by the presence of Titin missense variants, confirming that most of these variants may in fact be benign (despite a highly conservative and accurate selection of variants: lowest population frequency, familial segregation, software predictions of pathogenicity) [61]. Recently, however, this "simple" classification has been questioned: a report elucidated the pathogenicity of a specific TTN missense variant in a DCM phenotype with non-compaction aspects, raising the level of complexity in TTN variant evaluation [62]. Filamin C FLNC encodes filamin C, an intermediate filament that cross-links polymerized actin, contributing to anchoring cellular membrane proteins to the cytoskeleton and to maintaining sarcomeric and Z-disk stability. It directly interacts with two protein complexes that link the subsarcolemmal actin cytoskeleton to the extracellular matrix: (1) the dystrophin-associated glycoprotein complex and (2) the integrin complexes, while at intercalated disks filamin C is located in the fascia adherens [63].
The association with DCM was initially reported by two separate studies [63,64]. Ortiz et al. evaluated with NGS panels a cohort of 2,877 patients referred for various cardiac diseases (including channelopathies and HCM, the latter representing almost half of the cases) and identified 28 unrelated probands with FLNC-truncating variants, previously diagnosed mainly with DCM or, in a minority, with arrhythmogenic cardiomyopathy or RCM. Truncating variants in FLNC were found to cause an overlapping phenotype of dilated and left-dominant arrhythmogenic cardiomyopathy complicated by frequent premature sudden death, with the phenotypic hallmark represented by subepicardial-transmural fibrosis of the inferolateral LV wall. Interestingly, a small portion of probands (<5%) had prominent right ventricular involvement or a restrictive phenotype. The cumulative incidence of major ventricular arrhythmias or sudden death was found to be between 15 and 20% over a median follow-up of 5 years, and the mortality rate was about 6% over the same follow-up. We should underline that these data refer to a limited cohort of probands referred for genetic testing because of aggressive familial disease, representing a potential selection bias. Data on large cohorts of FLNCtv-related DCM patients are still lacking to confirm or temper this aggressive phenotype. Furthermore, it is worth mentioning that FLNC missense variants were identified in a previous study also in families with HCM, although with a mild degree of LV hypertrophy. As for other cytoskeletal or sarcomeric genotypes with allelic heterogeneity, this suggests that filaminopathies can generate a spectrum of different cardiac disorders that may, at least in part, be related to the type of variant [65]. FLNC has only recently been included in the genetic screening of patients with inherited cardiomyopathies and sudden death, and its real prevalence in DCM has yet to be elucidated. Figure 5.3 shows the familial pedigrees of three families carrying FLNCtv.
Insights from Clinical Presentation and Left Ventricular Reverse Remodeling (LVRR) In clinical practice, especially in newly diagnosed DCM patients without a familial history of cardiac disease, cardiologists may find it useful to know peculiar findings that are representative of a specific genotype and, hopefully, able to guide disease treatment and prognostic assessment, at least in the short term. A recent report from the HMDR of Trieste tried to shed some light in this regard by differentiating genotypes on the basis of response to therapy: a different response can in fact be interpreted as indirect evidence of different, mutation-driven, underlying pathogenic processes [49]. These mutation-dependent processes may not, or only marginally, be detectable otherwise. Despite several limitations (possible selection bias at a single referral center; a limited number of patients partially grouped in gene clusters, thus introducing a possibly heterogeneous genetic background), this study allowed some interesting observations both on clinical presentation and on LVRR rates in different genetic forms of DCM, especially in relatively less-investigated genotypes. With respect to clinical presentation, most of the clinical and instrumental characteristics did not differ between the genotypes. Except for a lower rate of left bundle-branch block in both the TTN and structural cytoskeleton Z-disk groups, and a trend toward a milder degree of LV dilation and dysfunction in LMNA mutation carriers (some of these findings have subsequently been confirmed in other studies) [59,60], symptoms and electrocardiographic and echocardiographic findings were grossly similar across the different genotypes, consistent with the hypothesis that DCM represents the final common phenotype of multiple genetic-based cardiac diseases and their relationship with environmental modifiers.
The most interesting finding related to LVRR: a significant association was demonstrated between the lack of LVRR and specific genotypes (FLNC, DES, DMD, and other cytoskeletal Z-disk genes overall, followed by LMNA). Conversely, TTN genotypes were most frequently associated with positive LVRR on optimal medical therapy (Fig. 5.4). This kind of approach showed how phenotype correlations can also be inferred in this way, as an "ongoing" process, once more related to the interactions with external modifiers, in this case represented by medications. To conclude, the emerging concept elucidated in this chapter is that disease manifestation and prognosis are the results of the interaction between genotype and environment: the contribution of each factor to the patient's clinical status is modulated by (1) the genetic variant's actionability and (2) the type and severity of the environmental factor(s). Summarizing, highly actionable genotypes (those with higher OR, such as LMNAtv, or double pathogenic variants) may per se be the major determinants of disease manifestation/prognosis, while strong interfering environmental factors (e.g., chemotherapy) play a major role especially in cases with a less actionable genotype. Future perspectives in genetics will further investigate these aspects. Fig. 5.4: Association between LVRR and absence of LVRR according to different genotypes (data from HMDR) [49].
Prevalence of IgG anti-HAV in patients with chronic hepatitis B and in the general healthy population in Korea Background/Aims Few studies have investigated hepatitis A virus (HAV) seroepidemiology in Koreans with chronic liver disease (CLD). This study compared the prevalence of IgG anti-HAV between the general healthy population and patients with hepatitis B virus-related CLD (HBV-CLD), with the aim of identifying predictors of HAV prior exposure. Methods In total, 1,319 patients were recruited between June 2008 and April 2010. All patients were tested for IgG anti-HAV, hepatitis B surface antigen (HBsAg), and antibodies to hepatitis C virus. The patients were divided into the general healthy population group and the HBV-CLD group based on the presence of HBsAg. The seroprevalence of IgG anti-HAV was compared between these two groups. Results The age-standardized seroprevalence rates of IgG anti-HAV in the general healthy population and patients with HBV-CLD were 52.5% and 49.1%, respectively. The age-stratified IgG anti-HAV seroprevalence rates for ages ≤19, 20-29, 30-39, 40-49, 50-59, and ≥60 years were 14.3%, 11.2%, 45.5%, 90.5%, 97.6% and 98.3%, respectively, in the general healthy population, and 0%, 9.8%, 46.3%, 91.1%, 97.7%, and 100% in the HBV-CLD group. In multivariate analysis, age (<30 vs. 30-59 years: OR=19.339, 95% CI=12.504-29.911, P<0.001; <30 vs. ≥60 years: OR=1060.5, 95% CI=142.233-7907.964, P<0.001) and advanced status of HBV-CLD (OR=19.180, 95% CI=4.550-80.856, P<0.001) were independent predictors of HAV prior exposure. Conclusions The seroprevalence of IgG anti-HAV did not differ significantly between the general-healthy-population and HBV-CLD groups. An HAV vaccination strategy might be warranted in people younger than 35 years, especially in patients with HBV-CLD. INTRODUCTION Hepatitis A is usually self-limited and has a benign clinical course. 
It is usually asymptomatic in children, whereas it causes clinically apparent disease in the majority of adults, rarely progressing to fulminant hepatic failure.1 In particular, older age and chronic liver disease (CLD) are known risk factors for fulminant hepatic failure.2 Therefore, hepatitis A virus (HAV) vaccination is recommended for patients with CLD. Improvements in the socioeconomic status and general public health of Korea have led to a shift in the seroprevalence of hepatitis A from a hyperendemic pattern to a lower one.3 Paradoxically, the number of children and young adults who are susceptible to HAV infection has gradually increased, resulting in the recent rapid rise of symptomatic hepatitis A. Currently, hepatitis A has become one of the most common causes of acute viral infection in Korean adults.3,4 Although the prevalence of chronic hepatitis B virus (HBV) infection has declined since the introduction of universal vaccination, it is still the most important cause of CLD in Korea.5 Accordingly, the seroprevalence of hepatitis A in patients with HBV-related CLD has been of interest. However, there are few studies on HAV seroepidemiology in the Korean population with CLD,6,7 and no studies have compared the seroprevalence of IgG anti-HAV between the general healthy population and patients with HBV-related CLD. This study aimed to evaluate the seroprevalence of IgG anti-HAV in Korean patients with HBV-related CLD and to compare it with that in the general healthy population. We thus attempted to provide objective data for an HAV vaccination strategy in patients with HBV-related CLD and to identify predictors of prior HAV exposure. Patients and study design We retrospectively analyzed the medical records of 1,319 patients … (281/622) and 64.6% (450/697), respectively. There were significant differences in age and sex between the two groups (P<0.001).
Of the patients with HBV-related CLD, a total of 14 had superinfection with chronic hepatitis C (Table 1). These results indicate that acute hepatitis A occurred at an incidence of almost 30 cases per 100,000 Koreans, which urges the promotion of HAV vaccination for children and high-risk groups. Superinfection with another hepatitis virus in patients with CLD may aggravate the underlying liver disease. Korea is an endemic area for HBV infection,11 and superinfection with HAV in HBV-related CLD remains problematic. In Thailand, it has been reported that the incidence and mortality of fulminant hepatic failure were 55% and 25% in cases of acute hepatitis A superinfection in HBsAg carriers.12 Another report18 also found that age was an independent factor affecting prior HAV exposure, which was consistent with our findings. With respect to the positive rate for IgG anti-HAV, our study showed results similar to other Korean reports (Table 3).6,7,19-26 The positive rate for IgG anti-HAV was higher than 90% in patients aged 45 years or older. Most of these patients had been exposed to HAV previously and had spontaneously acquired immunity against hepatitis A. In patients aged 29 years or younger, the positive rate for IgG anti-HAV was lower than 15%. These results suggest that most of these younger patients remain susceptible to HAV infection. The limitations of the current study are that it was difficult to identify statistical significance in the young age group, because the number of enrolled patients aged 19 years or younger was smaller than in other age groups. Also, because the current study was conducted at a single center in Seoul and did not include patients with non-HBV-related CLD, it cannot be representative of the whole Korean population and cannot be generalized to all Korean patients with CLD.
However, considering both that there was no significant difference in the seroprevalence of IgG anti-HAV between the current study and other previous Korean reports, and that HBV is the most common cause of CLD in Korea, our results may serve as baseline epidemiologic data for IgG anti-HAV in Korean patients with CLD. In conclusion, our results showed an epidemiologic shift in which the positive rate for IgG anti-HAV has decreased, compared with the past, in adults in their 20s and 30s. It was also confirmed that the prevalence of IgG anti-HAV was not increased in patients with HBV-related CLD compared with the general healthy population. These findings may be used as baseline epidemiologic data for estimating the prevalence of IgG anti-HAV in the general healthy population and in patients with CLD in Korea, and should be followed by a nationwide epidemiological study. In addition, age and the severity of hepatic disease were found to be variables affecting prior HAV exposure, and the HAV vaccination strategy should be established on this basis.
Theoretical Simulation of the Infrared Absorption Spectrum of the Strong Hydrogen and Deuterium Bond in 2-Pyridone Dimer This work presents a theoretical simulation of the infrared spectra of the strong hydrogen bond in alpha-phase 2-pyridone dimers, as well as in their deuterium derivatives, at room temperature. The theory takes into account: an adiabatic anharmonic coupling between the high-frequency N-H(D) stretching and the low-frequency intermolecular N...O stretching modes, by assuming that the effective angular frequency of the fast N-H(D) mode depends strongly on the slow-mode N...O stretching coordinate; the intrinsic anharmonicity of the low-frequency N...O mode, through a Morse potential; Davydov coupling, triggered by resonance exchange between the excited states of the fast modes of the two hydrogen bonds involved in the cyclic dimer; multiple Fermi resonances between the N-H(D) stretching and the overtone of the N-H(D) bending vibrations; and the direct and indirect damping of the fast stretching modes of the hydrogen bonds and of the bending modes. The IR spectral density is computed within the linear response theory by Fourier transform of the autocorrelation function of the transition dipole moment operator of the N-H(D) bond. The theoretical line shapes of the υN-H(D) band of alpha-phase 2-pyridone dimers are compared to the experimental ones. The effect of deuteration is successfully reproduced. 
Introduction Hydrogen bonding is responsible for the appearance of spectacular changes in the IR spectra of associated molecules. This remark particularly concerns the υ X-H proton stretching vibration bands, which are the attribute of the X-H atomic groups in X-H…Y bridges. The main effects consist of a considerable low-frequency shift of the υ X-H band, a growth of the band's integral intensity by up to two orders of magnitude, and a noticeable increase of the bandwidth [1][2][3]. Numerous theoretical studies on hydrogen-bonded systems such as carboxylic acids have sought to explain the changes in the infrared spectrum induced by the formation of the H-bond bridge. In this work we present a theoretical approach for vibrational couplings in moderately strong hydrogen-bonded systems and use it to simulate experimental infrared spectra of the strongly hydrogen-bonded alpha solid-state phase of 2-pyridone crystal dimers. The alpha phase of 2-pyridone forms approximately centrosymmetric dimers. In fact, the results given in the original crystallographic work show that the dimers are not ideally centrosymmetric, since one hydrogen bridge in a dimer is longer than the other. Without making significant errors in the discussion of spectroscopic effects, however, one may assume that the dimers are approximately centrosymmetric. The alpha-phase crystals belong to the monoclinic system with Z = 8 [4]. For the purposes of IR spectroscopy, these dimers may safely be considered centrosymmetric; several studies show that 2-pyridone forms centrosymmetric dimers [5,6]. 
Studies have proved that a single hydrogen bond in the cyclic dimers of this phase of 2-pyridone is considerably stronger than a single hydrogen bond in a chain of molecules in the beta-phase. This fact suggests that the N + -H…O - bonds exist only in the alpha-phase and that these bonds are obviously stronger than the N-H…O hydrogen bonds in the beta-phase, since ionic hydrogen bonds are generally known to be stronger than neutral ones [7,8]. 2-Pyridone is an important precursor of antibiotics that are used as inhibitors of DNA gyrase. Thus 2-pyridone [9] and its derivatives [10,11] are widely investigated and exploited. Wuest and coworkers have shown that dipyridones can be formed by linking two 2-pyridones through functional groups such as acetylene or amine [12,13]. Asymmetric dipyridones readily form dimers, whereas symmetric dipyridones easily self-assemble into linear and planar polymers in aprotic solvents [12]. The formation of polymers via hydrogen bonding is an important approach to preparing liquid crystals and other functional polymers. There have been only a few reports on hydrogen-bonded complexes formed by dipyridones. In addition, the 2-pyridone dimer is an important reference system because it represents the rare case of a closed complex with two linear H-bonds. Recently, Wójcik presented a theoretical simulation of the bandshape and fine structure of the υ N-H stretching band for 2-pyridone-H and 2-pyridone-D [14], taking into account an adiabatic coupling between high-frequency N-H(D) stretching and low-frequency intermolecular N…O stretching modes, Davydov interactions, and linear and quadratic distortions of the potential energies for the low-frequency vibrations in the excited state of the N-H(D) stretching vibrations. The slow vibration modes were assumed to be harmonic. In other work, Wójcik [15] presented a theoretical model for vibrational couplings in weak and moderately strong hydrogen-bonded 
systems and used it to model experimental infrared spectra of hydrogen-bonded crystals and hexagonal ice. The model is based on vibronic-type couplings between high- and low-frequency modes in hydrogen bridges, Davydov interactions [16] and Fermi resonance [17,18]. It allows calculation of energy and intensity distributions in the infrared spectra of hydrogen-bonded systems. The present theory is based on strong anharmonic coupling between the high-frequency hydrogen stretching vibration υ N-H and low-frequency phonons, Davydov interactions, and multiple Fermi resonance interactions between a fundamental υ N-H vibration and an overtone or a combination tone of an intermolecular vibration. Besides, this theory incorporates the intrinsic anharmonicity of the slow-frequency mode through a Morse potential [19,20], whereas the fast mode is considered harmonic. Note that the Morse potential is undoubtedly more realistic than the harmonic one for describing this slow-frequency mode. The adiabatic approximation [21] has been performed for each separate part of the dimer, together with a strong non-adiabatic correction via the resonant exchange between the excited states of the two fast-mode moieties. Both quantum direct (relaxation of the high-frequency modes) and indirect (relaxation of the H-bond bridges) dampings of the system [22,23] were taken into account. This theory allows calculation of intensity distributions in the infrared spectra of hydrogen-bonded systems. The main purpose is to reproduce the experimental υ N-H IR line shapes of the hydrogen and deuterium bonds in the alpha-phase 2-pyridone crystal dimer at room temperature. We shall use infrared spectra of 2-pyridone in the alpha solid-state phase, measured by Flakus at room temperature (Figures 1 and 2 from [24]). 
The numerical results show that this theoretical approach allows fitting the experimental υ N-H infrared line shapes of the cyclic alpha-phase 2-pyridone crystal dimer and its deuterium derivative using a minimum number of parameters. With such tools, experimentalists should be able to compare experimental and theoretical data in an easy-to-use way. Experimental Spectra The 2-pyridone compound used in this investigation was a commercial substance (Sigma-Aldrich) and was used without further purification. The single crystals of the alpha phase formed rectangular plates. For the purpose of the experiment they were mounted on a tin diaphragm with a 1.5 mm hole diameter. Crystals of 2-pyridone suitable for spectral studies were obtained by crystallization from the melt between two closely spaced CaF2 windows. The solid-state spectra were measured by a transmission method at room temperature with the help of an FT-IR Nicolet Magna 560 spectrometer using a non-polarized beam. The spectra were measured at 2 cm -1 resolution. Measurements were completed in a similar way for crystals of the deuterium derivative of 2-pyridone, which was synthesized by evaporation of the solution in D 2 O under reduced pressure. 
Theory The present theory was developed in the framework of the adiabatic approximation [25]. The adiabatic approximation for the N-H stretching vibrations leads to the description of each moiety by effective Hamiltonians of the H-bond bridge: for a single H-bond bridge, this effective Hamiltonian is either that of a harmonic oscillator, if the fast mode is in its ground state, or that of a driven harmonic oscillator, if the fast mode is excited. When one of the two identical fast modes is excited, then, because of the symmetry of the cyclic dimer and because of the coupling V 0 between the two degenerate fast-mode excited states, an interaction occurs (Davydov coupling) leading to an exchange between the two identical excited parts of the dimer, as considered by Maréchal and Witkowski in their pioneering work [21]. Of course, this interaction between degenerate excited states is of non-adiabatic nature, although the adiabatic approximation was performed to separate the high- and low-frequency motions. Figure 1 presents the geometry of the 2-pyridone dimer optimized at the HF/6-311 ++ G(d,p) level [26], which is formed by two hydrogen bonds (the lengths of N6-H19…O20 and N21-H15…O22 are 2.89559 Å and 2.89558 Å, respectively). Let q 1 and q 2 be the coordinates of the high-frequency N-H stretching vibrations in the first and second hydrogen bond, and Q 1 and Q 2 the coordinates of the two low-frequency intermolecular N…O stretching modes. The two moieties of the system are exchanged by the C2 symmetry parity operator. 
For the theory dealing with this system, the basic physical parameters are:
1) the vibration angular frequency ω˚ of the two degenerate fast-mode moieties when the H-bond bridge is at equilibrium;
2) the vibration angular frequency Ω of the two degenerate H-bond bridge moieties;
3) the dimensionless anharmonic coupling parameter α˚ between the high-frequency mode of one moiety and the H-bond bridge coordinate of the same moiety;
4) the Davydov coupling parameter V between the degenerate first excited states of the high-frequency modes of the two moieties;
5) the direct and indirect damping parameters γ˚ and γ;
6) the coupling parameters f i involved in the Fermi resonance coupling between the first harmonics of some bending modes and the first excited state of the g-symmetrized high-frequency mode;
7) the relaxation parameters of the first harmonics of the bending modes; and
8) the absolute temperature T.
Full Hamiltonian of the System The evaluation of the spectral density of the hydrogen bond system requires knowledge of its full Hamiltonian. For this purpose, it is important to describe the basic vibration modes of the dimer. In the present work, for each part of the dimer, we have described the slow-frequency mode by a Morse potential, which can be written as U(Q i ) = D e [1 − exp(−β e Q i )]², where D e is the dissociation energy of the hydrogen bond bridge and β e is the Morse parameter. The fast-frequency mode is considered to be harmonic. It is important to note that in the majority of recent works, the slow-frequency mode was assumed to be harmonic [25]. 
Within the strong anharmonic coupling theory and the anharmonic approximation for the H-bond bridge, the corresponding Hamiltonians of the slow and high-frequency modes of the two moieties of the dimer are written using dimensionless operators. P i are the dimensionless conjugate momenta of the H-bond bridge coordinates Q i of the two moieties, whereas q i and p i are the dimensionless coordinates and conjugate momenta of the two degenerate high-frequency modes of the two moieties. Ω is the angular frequency of the H-bond bridge, whereas ω(Q i ) is that of the high-frequency mode, which is supposed to depend on the coordinate of the H-bond bridge. Using a Taylor expansion of the Morse potential, the Hamiltonian of the slow-frequency modes given in Equation (3) can be rewritten as the sum of the Hamiltonian of a quantum harmonic oscillator, [H Slow ] i , and an anharmonic potential V. Expansion to first order of the angular frequency of the fast mode with respect to the coordinate of the H-bond bridge leads to ω(Q i ) = ω˚ + α˚ Ω Q i , where ω˚ is the angular frequency of the two degenerate fast modes when the corresponding H-bond bridge coordinates are at equilibrium, whereas α˚ is a dimensionless parameter which will appear to be an anharmonic coupling parameter. 
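The equations garbled out of this passage can be restored in their standard form. The following is a hedged reconstruction, assuming the conventional dimensionless notation of strong-anharmonic-coupling models; it is not copied from the original equations, which did not survive extraction:

```latex
% Morse potential of the H-bond bridge and its Taylor expansion
U(Q_i) = D_e\left(1 - e^{-\beta_e Q_i}\right)^{2}
       \simeq D_e\beta_e^{2}\,Q_i^{2} - D_e\beta_e^{3}\,Q_i^{3}
       + \tfrac{7}{12}\,D_e\beta_e^{4}\,Q_i^{4} - \cdots
% Linear modulation of the fast-mode frequency by the bridge coordinate
\omega(Q_i) = \omega^{\circ} + \alpha^{\circ}\,\Omega\,Q_i
```

Matching the quadratic term of the expansion to the harmonic bridge Hamiltonian fixes β e through D e β e ² = ħΩ/2 (an assumption consistent with the harmonic limit, not a relation stated in the surviving text).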
ω α In presence of damping, the thermal bath may be figured, by an infinite set of harmonic oscillators, and its coupling with the H-bond bridge are described by terms which are linear in the position coordinates of the bridge and of the bath oscillators: Here, are the dimensionless position coordinate operators of the oscillators of the bath, are the corresponding conjugate moments, obeying the usual quantum commutation rules, are the corresponding angular frequencies and r r q  r p  r ω g are the coupling between the H-bond bridges and the oscillators of the bath.Within the adiabatic approximation, the Hamiltonian of each moiety of the dimer takes the form of sum of effective Hamiltonians which are depending on the degree of excitation of the fast mode according to: 1, 2; 0,1 An excitation, of the fast mode of one moiety of the dimer is resonant with the excitation of the other moiety.Thus, a strong non-adiabatic correction [21] i.e. a Davydov coupling V 12 , occurs between the two resonant states after excitation of one of the two fast modes, so that the full Hamiltonian of the two moieties is given by the equations: with, Autocorrelation Functions and Spectral Density The spectral density of the υ N-H mode is given, within the linear response theory [27,28], by the Fourier transform of the autocorrelation function G(t) of transition moment operator of the fast mode : Using the symmetry of the system, the ACF may be split into symmetric parts (g) and antisymmetric parts (u). 
In the presence of Davydov coupling, the autocorrelation function (ACF) of the dipole moment operator of the fast mode may be written as in [25]. Here, γ˚ is the natural width of the excited state of the high-frequency mode, the expression of which was calculated by Rösch and Ratner [22]. [G(t)] g is the reduced ACF of the g part of the system, related to the Hamiltonian describing the indirect damping. In the corresponding equations, <n> and β are, respectively, the thermal average of the occupation number of the quantum harmonic oscillator describing the H-bond bridge and the effective dimensionless anharmonic coupling parameter related to α˚. Note that by "reduced" we mean that a partial trace has to be performed over the thermal bath, because of the coupling between the symmetric Hamiltonian and the surroundings. [G + (t)] u and [G - (t)] u are the two (u) ACFs which are affected only by the Davydov coupling [29]. In their expressions, C ± μ,n are the expansion coefficients of the eigenvectors on the basis of the eigenstates of the Hamiltonian of the quantum harmonic oscillator. One may observe that the angular frequency ω˚ of the high-frequency mode must be decreased by a factor √2 on D isotopic substitution of the proton involved in the H-bonds. Besides, according to the Maréchal and Witkowski theory [21], the anharmonic coupling parameter α˚ must also be reduced by this factor upon the same substitution, whereas the frequency Ω of the H-bond bridge has no reason to be modified. 
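The equation defining the thermal average <n> did not survive extraction; in models of this family it is the Bose-Einstein occupation of the bridge oscillator, <n> = [exp(ħΩ/k B T) − 1]⁻¹. A small sketch (the bridge frequency Ω = 100 cm⁻¹ is purely illustrative, not a fitted value) shows its magnitude at room temperature:

```python
import math

# CODATA physical constants
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e10    # speed of light, cm/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def mean_occupation(omega_cm, temp_k):
    """Thermal average <n> = 1/(exp(h c omega / kB T) - 1), omega in cm^-1."""
    x = H * C * omega_cm / (KB * temp_k)
    return 1.0 / math.expm1(x)

n_bar = mean_occupation(100.0, 300.0)   # ~1.6 for a 100 cm^-1 bridge at 300 K
```

At room temperature a bridge mode of about 100 cm⁻¹ is appreciably thermally populated, which is why the absolute temperature T enters the line shape; upon cooling, <n> goes to zero and the hot-band structure collapses.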
Following Equations (15) and (19), the spectral density of the dimer involving the Davydov effect can be written formally as a sum of components given in [29]. Recall that <n> is the thermal average of the occupation number operator of the H-bond bridge vibration, given by Equation (18), which is a function of the angular frequency Ω of the H-bond bridge and of the absolute temperature T. Besides, β is the effective anharmonic coupling parameter given by Equation (17), which is a function of the anharmonic coupling parameter α˚ between the slow and fast modes, and of the angular frequency Ω and the damping parameter γ of the H-bond bridge. Situation with Fermi Resonances The previous treatment was developed with the neglect of Fermi resonances. Now, suppose the situation where this effect is taken into account. It results from the interactions occurring between the first excited state of the high-frequency mode and the first harmonics of some bending modes. As stated by Maréchal and Witkowski [21], if Fermi resonances are taken into account, one has to consider one fast mode, one slow mode and one bending mode for each hydrogen bond of the cyclic dimer. When Fermi resonances are taken into account [29], they affect only the g states of the system. As a consequence, the autocorrelation functions [G ± (t)] u are not modified. In the presence of Davydov coupling and Fermi resonances, the ACF can be written following [29]. Here, {ω μ } g are the eigenvalues appearing in Equation (29), whereas a {μ,0,m}g are the expansion coefficients defined by Equation (30). 
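The band displacement and intensity redistribution that Fermi resonance produces can already be seen in a minimal two-level sketch: the fast-mode fundamental and one bending first overtone coupled by ħf. All numbers here are illustrative, not the fitted parameters of Tables 1 and 2:

```python
import numpy as np

# Two-level Fermi resonance: unperturbed fast-mode fundamental vs.
# the first overtone of a bending mode (illustrative cm^-1 values)
w_fast, w_bend2, f = 2800.0, 2750.0, 30.0

H = np.array([[w_fast, f],
              [f, w_bend2]])
evals, evecs = np.linalg.eigh(H)   # perturbed band positions, ascending

# Level repulsion: the two mixed bands are pushed apart by
# sqrt((w_fast - w_bend2)**2 + 4 * f**2)
splitting = evals[1] - evals[0]

# Intensity redistribution: if only the fast mode carries oscillator
# strength, each mixed state borrows intensity in proportion to its
# fast-mode weight |<fast|eigenstate>|^2 (the weights sum to 1)
weights = evecs[0, :] ** 2
```

Increasing f pushes the bands further apart and equalizes their intensities, which is the classical "displacement of the bands and redistribution of the intensities" discussed below; adding further bending overtones enlarges the matrix by one row and column per Fermi resonance.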
The g states involved in the above expansions are built from the g eigenstates of, respectively, the symmetrized high-frequency quantum harmonic oscillator, the slow-frequency quantum harmonic oscillator and the bending-mode quantum harmonic oscillator. The Fermi resonance mechanism, characterized by the coupling parameters f i , is described by coupling operators ℏf i which express the non-resonant exchanges between the first excited state of the jth fast mode and the damped second excited state of the jth bending mode. The introduction of the Fermi resonance coupling effects into the lineshape is represented by the complex angular frequency gaps Δ i , whose real parts involve the frequencies of the bending modes and whose imaginary parts are related to the lifetimes of the corresponding excited states. Recall that the Hamiltonian of the dimer involves Davydov coupling, Fermi resonances between the g excited state of the fast mode and the g first harmonics of the bending modes, together with the damping of these excited states. In the corresponding equations, a + and a are the boson operators obeying [a, a + ] = 1, and i 2 = -1. Results and Discussions In our theoretical approach, we have assumed that the 2-pyridone dimer crystal in the alpha-phase is of approximately centrosymmetric type. In the case of the interpretation of the spectra of the alpha-phase 2-pyridone crystal, this interpretation cannot be relied on, since the N-H…O bonds are much shorter than the sulfur-containing hydrogen bonds. Most probably, the purely vibrational approach to solving the problem of the exciton interactions in hydrogen bond dimers should be abandoned. Instead, the contribution of electronic interactions to the vibrational exciton interactions between the hydrogen bonds in the dimers, expressed by the electronic coordinates, should be taken into account. 
From these considerations some consequences result for the electronic structure of 2-pyridone dimers in alpha-phase crystals. Most probably, this structure involves a larger contribution of the zwitterionic electronic structure. Such a continuous structure of the π-electron cloud in the pyridine rings, in cyclic N + -H…O - hydrogen bond dimers, allows for an effective ("head-to-tail") coupling of the hydrogen bonds in the cyclic dimers. The experimental and theoretical results allow us to state that, for the alpha-phase 2-pyridone studied, the υ N-H absorption bands of the dimers observed in the IR spectra are similar in structure and are very broad. This means that the mechanism of formation of these bands should, first of all, involve the participation of the cyclic structure responsible for the formation of the hydrogen bond. Only insignificant changes in band shape are observed between the dimers and their deuterated derivatives in alpha-phase 2-pyridone. We believe that the interaction of the two intermolecular N-H…O bonds in the cyclic complex plays a determining role in the formation of the broad absorption band. 
The main objective of this work is to reproduce theoretically the IR spectra of alpha-phase 2-pyridone at 300 K. First, we reproduced the υ N-H(D) IR spectra of the alpha-phase 2-pyridone dimer crystal by comparing theoretical and experimental line shapes. The spectral densities are computed by Equation (19) after constructing and diagonalizing the full Hamiltonian in a truncated basis. The stability of the computed spectra with respect to the size of this basis set was carefully checked. The stability of the spectra was also checked against the order of the Taylor expansion of the Morse potential. Thus, in the first place, we have considered the general physical situation in which multiple Fermi resonances are taken into account and for which the spectral density is given by Equations (19), (20) and (28). The procedure we have used is the fitting of the experimental line shapes by optimizing the values of the basic parameters. Figures 2 and 3 present the theoretical spectra of the alpha-phase 2-pyridone dimer crystal and its deuterated derivative when multiple Fermi resonances are introduced in our theoretical approach. We have performed numerical experimentation by progressively increasing the number of Fermi resonances. The spectral densities presented in these figures are computed in the presence of three Fermi resonances. Tables 1 and 2 show the parameters involved in the calculations. 
Examination of the spectra obtained using our theoretical model shows that there is good agreement between theory and experiment. It is important to emphasize that, when Fermi resonances are acting, one observes the classical behavior of the Fermi resonance, which is well described in the literature: the displacement of the bands and the redistribution of the intensities within the bands. However, only the intensities at high frequencies are redistributed as the number of Fermi resonances is increased. In fact, a recent study on acetic acid dimers in the liquid phase [30] did not show any wavelength dependence of their dynamics. Figures 4 and 5 show the theoretical line shapes of the alpha-phase 2-pyridone dimer crystal and its deuterated derivative calculated by Equations (16), (19) and (20) when Fermi resonances are ignored. In both situations, the theoretical line shapes appear as red lines whereas the experimental ones are black lines. Table 1. Parameters used for fitting the experimental spectra of alpha-phase 2-pyridone at room temperature in the presence of three Fermi resonances. 
Table 1 columns: Species, ω˚ (cm -1 ), Ω (cm -1 ), α˚, V˚, γ˚ (cm -1 ), γ (cm -1 ).
In order to assess the validity of the present model, let us comment on the magnitude of the parameters that we have used. For both the anharmonic coupling parameter and the angular frequencies of the high-frequency mode of alpha-phase 2-pyridone, in all treated situations, the magnitude is decreased upon deuteration by a factor which is different from √2. This result is in agreement with theory when passing from the H to the D species and when the slow mode is assumed to be of Morse type (anharmonic potential). Indeed, according to [30,31], the isotope ratio for O-H…O hydrogen bonds depends on the bond strength: it is close to the harmonic value √2 for weak bonds and decreases with the bond strength, reaching a minimum near 0.9 for low-barrier bonds. In our case, ratios below √2 were required to obtain good agreement between the theoretical and experimental line shapes. The modulation of the equilibrium positions of the fast modes (q e ) and the quadratic dependence of their frequencies on Q i could be represented by two expansions to second order. Recall, however, that in the present work, following Maréchal and Witkowski [21], we have used only the first-order dependence of the angular frequency of the fast modes on the coordinates of the slow modes Q i (see Equation (40)) and have neglected the modulation of the equilibrium positions of the fast modes and the quadratic dependence of their frequencies on Q i [32], as confirmed by several experimental correlations [33,34]. We have shown in recent work [32] that the fine structure of the IR υ X-H stretching band is connected with the additional anharmonic coupling parameters (i.e., β, f˚ and g˚), whereas these parameters do not manifest markedly in the effects of temperature and deuteration on the IR spectra. Besides, accounting for all these parameters does not affect the similarity of the spectra in the gas and condensed phases [35]. 
The isotope ratio χ of the centers of gravity of the bands of light and deuterated alpha-phase 2-pyridone dimers is 1.306. The values obtained here are also in satisfactory agreement with the experimental data reported by Odinokov et al. [36] for complexes of carboxylic acids with various bases, and are comparable to those found in our recent work dealing with the H/D isotopic effects in H-bond spectra [33]. However, these ratios are different from those used by Blaise et al. [37], since in their approaches dealing with the theoretical interpretation of the IR line shapes of liquid and gaseous acetic acid [38] and of gaseous propynoic and acrylic acid dimers [29], the low- and high-frequency hydrogen stretching vibrations in individual hydrogen bonds are assumed to be harmonic, whereas in the present work we use a Morse potential in order to describe the anharmonicity of the H-bond bridge. Recall that the removal of the harmonic approximation for the slow modes, by introducing a Morse potential in place of the harmonic one, was done by Leviel and Maréchal [39] in a model similar to the present one, involving a cyclic dimer but without damping. They showed that the value of the angular frequency of the slow mode which must be used to fit the experimental lineshape is closer to the experimental value when the anharmonicity of the bridge is introduced. Now let us look at the strongest hydrogen bonds observed in alpha-phase 2-pyridone dimers. Generally this type of hydrogen bond is observed in ion-molecular complexes of the (AHA) - and (BHB) + types [40,41]; however, it is especially difficult to study the spectral properties of charged complexes under conditions of weak interactions with the medium. In the present approach, we take into account the natural width of the excited states of the fast mode due to the medium (direct relaxation) and the damping of the H-bond bridge (indirect relaxation). The direct relaxation is included following the quantum treatment of Rösch and Ratner [22] whereas the 
indirect one was taken into account via the approach of Louisell and Walker [42] dealing with the relaxation of driven damped quantum oscillators, studied initially by Feynman and Vernon [43] and later by Louisell [44]. The values of the direct and indirect relaxation parameters reflecting the effect of the medium, used presently for alpha-phase 2-pyridone, are of the same magnitude as those used by Blaise et al. [29,37,38] in their study dealing with acetic acid in the gas phase, whereas the indirect damping at 298 K for acetic acid in the crystalline state is γ = 1 cm -1 . One may ask why the indirect damping used for the crystalline state is weaker than that used for the gaseous phase, since the indirect relaxation ought to be larger in the solid state. The reason is that in the gas phase the indirect damping is an effective one, resulting from the combination of the indirect damping and the rotational structure [45]. In neutral systems, the strongest hydrogen bonds of the N-H…O type are formed in self-associates of alpha-phase 2-pyridone and, seemingly, phosphinic acids [46], and it is only due to the high thermal stability of these associates that one is able to observe their IR spectra in the gas phase, in which cyclic dimers are in equilibrium with monomeric molecules [47,48]. This made it possible to analyze the dependence of the bandshape of the υ N-H band, which serves as an important criterion in the choice of theoretical models. The parameters of this broad band and its shape turned out to be almost independent of the bonding frequency. 
In order to obtain good agreement with the experimental line shapes, we have taken into account some breaking of the IR selection rule for the centrosymmetric cyclic dimer, via a large amount (η = 0.5; 0.9) of the forbidden Ag transition. Recall that, in general, the quality of the fit is only weakly improved by taking small values of η, which lies between 0 and 1. This assumption was initially introduced by Flakus [49]. This is a general trend which has been observed recently in the case of the centrosymmetric cyclic dimers of gaseous acetic acid [37]. Note that, according to the Flakus hypothesis, the breaking of the forbidden-transition rule ought to be stronger in the solid state (value near η = 1) than in the gaseous one, and we must keep in mind that the Flakus assumption has been found by this author to be unavoidable in diverse crystalline H-bonded carboxylic acids such as, for instance, glutaric [50] and cinnamic [51] acids, and particularly in centrosymmetric H-bonded dimers. Figure 6 presents the theoretical spectra of the alpha-phase 2-pyridone dimer where multiple Fermi resonances have been introduced. The number of Fermi resonances n F increases from zero (without Fermi resonances) to 3 (n F = 0, 1, 2 and 3). The numerical results show that Fermi resonances appear to play an important role, especially for the hydrogenated species, whereas the Fermi resonance effect does not markedly affect the N-D derivative species. This may be explained by the fact that the Fermi resonance mechanism may involve the N-H in-plane bending vibrations in their first overtone states. The coupling of the electronic systems with the electrons of the associated carboxyl groups implies a stabilization of the dimers. On the other hand, each deformation of the dimers leads to a destruction of this stabilization mechanism. This is the most probable source of the anharmonic coupling involving the proton stretching and the in-plane proton bending vibrations in the dimers of carboxylic acids. We think that the 
noticeable improvement of the theoretical line shape of the strongly bound dimers of alpha-phase 2-pyridone upon the introduction of Fermi resonances might be due to a combined effect in which the Fermi resonances, assisted by the strong anharmonic coupling, are augmented by the combination of the Davydov coupling and of the quantum direct and indirect damping. It appears from the present theoretical study that the simple model of Davydov coupling, taking into account Fermi resonances and quantum direct and indirect damping, is able to reproduce the experimental line shapes. When one of the two identical fast modes is excited, then, because of the symmetry of the cyclic dimer and of the Davydov coupling V D between the two degenerate fast-mode excited states, an interaction occurs leading to an exchange between the two identical excited parts of the dimer of alpha-phase 2-pyridone. It is interesting to note that the Davydov coupling parameters used in the present work are similar to those used by Maréchal and Witkowski in their pioneering work dealing with adipic acid [21]. However, in future work we shall introduce some further flexibility into the Davydov coupling by assuming it to be dependent on the H-bond bridge coordinate. 
The complexity of the task of fitting the experimental line shape of the cyclic dimer of alpha-phase 2-pyridone and of reproducing the corresponding isotope effect is determined by several factors acting simultaneously. To take them into account, one needs to know a great number of parameters. For this reason, we think that the small residual discrepancies between theory and experiment are related to the neglect of some of these parameters: 1) the assumption of a linear dependence of the angular frequency of the fast mode on the coordinate of the H-bond bridge [32], 2) the assumption of the independence of the equilibrium position of the fast mode on the coordinate of the H-bond bridge [32], 3) the assumption of a constant term for the Davydov coupling parameter [52], 4) the neglect of electrical anharmonicity, 5) the neglect of possible weak tunneling through the potential barrier separating the two H-bond bridge minima, and 6) the neglect of possible relaxation mechanisms of non-adiabatic nature.

Conclusions

The presented theoretical approach works within the strong anharmonic coupling theory, according to which the high-frequency mode and the H-bond bridge are anharmonically coupled through a linear dependence of the frequency of the fast mode on the elongation of the H-bond bridge, and takes into account Davydov coupling, Fermi resonances, anharmonicity of the H-bond bridge, and direct and indirect quantum relaxation. The present approach contains as special cases the majority of the preceding theoretical approaches [53] dealing with the subject.
This approach, which is applied to reproduce the ν N-H(D) IR spectra of centrosymmetric strongly bound dimers of alpha-phase 2-pyridone and their deuterated derivatives in the solid state, is grounded in linear response theory: it calculates the line shape by means of the Fourier transform of the autocorrelation function of the dipole moment operator. The model has been applied to the alpha-phase 2-pyridone dimer crystal. It has been found that it is possible to correctly reproduce the experimental line shape of the hydrogenated compound and to predict satisfactorily the deuterium effect by using a set of spectral parameters. It appears from the present theoretical study that the simple model of Davydov coupling, taking into account quantum indirect damping and the anharmonicity of the H-bond bridge, is able to reproduce the experimental line shapes, especially when the Fermi resonances are not ignored. In conclusion, the Fermi resonances appear to play an important role in reproducing the experimental line shapes of the alpha-phase 2-pyridone dimer crystal.

Figure 2. Comparison between the experimental and theoretical spectra of the hydrogen bond in alpha-phase 2-pyridone at room temperature in the presence of three Fermi resonances.

Figure 3. Comparison between the experimental and theoretical spectra of the deuterium bond in alpha-phase 2-pyridone at room temperature in the presence of three Fermi resonances.

Figure 4. Comparison between the experimental and theoretical spectra of the hydrogen bond in alpha-phase 2-pyridone at room temperature when the Fermi resonances are ignored.

Figure 6. Multiple Fermi resonance effect on the IR spectral densities of hydrogen (a) and deuterium (b) bonded in alpha-phase 2-pyridone. The number of Fermi resonances n_F increases from zero (without Fermi resonances) to 3.
Perfect cycles in the synchronous Heider dynamics in complete network

We discuss a cellular automaton simulating the process of reaching Heider balance in a fully connected network. The dynamics of the automaton is defined by a deterministic, synchronous and global update rule. The dynamics has a very rich spectrum of attractors, including fixed points and limit cycles, the length and number of which change with the size of the system. In this paper we concentrate on a class of limit cycles that preserve the energy spectrum of the consecutive states. We call such limit cycles perfect. Consecutive states in a perfect cycle are separated from each other by the same Hamming distance. Also the Hamming distance between any two states separated by $k$ steps in a perfect cycle is the same for all such pairs of states. The states of a perfect cycle form a very symmetric trajectory in the configuration space. We argue that the symmetry of the trajectories is rooted in the permutation symmetry of vertices of the network and a local symmetry of a certain energy function measuring the level of balance/frustration of triads.

I. INTRODUCTION

We study the dynamics of spin variables ±1 defined on the edges of a complete graph on N nodes. The spins change in discrete time according to the following synchronous update rule [1]:

$$s_{ij}(t+1) = \operatorname{sign}\Big[\sum_{k \neq i,j} s_{ik}(t)\, s_{kj}(t)\Big]. \qquad (1)$$

Single indices i, j, k ∈ {1, ..., N} refer to nodes. Pairs of indices, like ij, refer to edges. Edges are undirected, so ij is equivalent to ji. There are no self-connections, so by default s_ii = 0. For convenience we assume that N is odd. This implies that the sum on the right-hand side of (1) is strictly positive or negative; it is never zero. The dynamics (1) is motivated by the idea of the Heider balance [2] in social networks, where the variables s_ij = ±1 represent relationships between agents represented by nodes i and j of the graph. The relationships can be either friendly (+1) or hostile (−1). They are assumed to be symmetric: s_ij = s_ji.
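A minimal sketch of rule (1), assuming configurations are stored as a dict mapping undirected edges (frozensets of node pairs) to ±1 spins; this representation is an illustrative choice, not the authors' code:

```python
import itertools

def step(s, N):
    """One synchronous sweep of rule (1) on the complete graph K_N.

    Every edge spin is recomputed from the *old* configuration:
    s_ij(t+1) = sign(sum over k != i,j of s_ik(t) * s_kj(t)).
    """
    new = {}
    for i, j in itertools.combinations(range(N), 2):
        total = sum(s[frozenset({i, k})] * s[frozenset({k, j})]
                    for k in range(N) if k not in (i, j))
        # for odd N the sum is never zero, so the sign is well defined
        new[frozenset({i, j})] = 1 if total > 0 else -1
    return new
```

For example, the all-friendly state (every s_ij = +1) is balanced and is a fixed point of this map.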
This kind of dynamics is known to generically lead to a final state in which the system divides into two groups [3][4][5][6][7], internally friendly but mutually hostile. Such states are termed 'balanced' [2]. Here we abstract from the sociological interpretation [2] and focus on mathematical properties of the dynamics itself. We are mainly interested in the final states reached during the evolution. In addition to 'balanced' states, which are fixed points of the dynamics, the dynamics can lead to jammed states, which are also fixed points but are not balanced [3]. More interestingly, the dynamics also has limit cycles of different lengths. The fixed points and limit cycles can be used to classify states by the basins of attraction they belong to. The statistics of basins of attraction for small systems was reported in [1]. The aim of the present paper is to explore properties of limit cycles, in particular of perfect limit cycles, to be defined below.

II. OBSERVABLES

Let us introduce quantities that are useful in probing the behaviour of the system. It is convenient to define an energy function

$$U = \sum_{i<j<k} u_{ijk}, \qquad u_{ijk} = -s_{ij}\, s_{jk}\, s_{ki}, \qquad (2)$$

where u_ijk is the energy of triangle ijk. The triangle energy is −1 when the triad ijk is balanced and +1 when it is frustrated. Because edges are undirected, any permutation of the indices ijk corresponds to the same triangle. A balanced state consists only of balanced triads. The energy of a balanced state is $U_{\min} = -\binom{N}{3}$. This is the global minimum of the energy function. A fully frustrated state has energy $U_{\max} = \binom{N}{3}$. A fully frustrated state can be obtained from a balanced state by flipping all spins s_ij → −s_ij. One can also define the edge energy $u_{ij} = \sum_{k \neq i,j} u_{ijk}$ as the sum of energies of all triangles sharing the edge ij, and similarly the node energy $u_i = \sum_{j<k;\; j,k \neq i} u_{ijk}$ as the sum of energies of all triangles sharing the node i. Clearly $\sum_i u_i = \sum_{i<j} u_{ij} = 3U$.
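The energy (2) and the gauge transformation that flips all spins sharing a node can be checked numerically. Below is a hedged sketch using a dict from edges (frozensets of node pairs) to ±1 spins, an illustrative representation rather than the authors' code:

```python
import itertools

def energy(s, N):
    # U = sum over triangles of u_ijk, with u_ijk = -s_ij * s_jk * s_ki
    e = lambda a, b: s[frozenset({a, b})]
    return sum(-e(i, j) * e(j, k) * e(k, i)
               for i, j, k in itertools.combinations(range(N), 3))

def gauge_flip(s, node):
    # flip every spin on an edge incident to `node`; each triangle either
    # contains `node` (so exactly two of its spins flip) or does not (none
    # flip), hence all triangle energies -- and U itself -- are unchanged
    return {e: (-v if node in e else v) for e, v in s.items()}
```

For N = 7 the balanced all-friendly state has U = −binom(7,3) = −35, and U is invariant under `gauge_flip` applied at any node.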
Each triangle energy configuration {u_ijk}_{i<j<k} has a $2^{N-1}$-fold degeneracy, meaning that there are $2^{N-1}$ distinct spin configurations having the same triangle energies. One can obtain them from each other by flipping all spins sharing a node. This operation does not change triangle energies because it flips an even number of spins in each triangle. This is a local gauge symmetry of the system. The operation can be repeated for N − 1 nodes, leading to $2^{N-1}$ different spin configurations for every triangle energy configuration. Note that the initial configuration would be restored if the gauge transformation were repeated for all N nodes. Therefore 'gauge orbits' consist of $2^{N-1}$ and not $2^N$ different spin configurations. We can define energy spectra: the triangle energy spectrum n_t(u) is the number of triangles having energy u, the edge energy spectrum n_e(u) is the number of edges having energy u, and the node energy spectrum n_n(u) is the number of nodes having energy u. Formally we can write

$$n_t(u) = \sum_{i<j<k} \delta_{u, u_{ijk}}, \qquad n_e(u) = \sum_{i<j} \delta_{u, u_{ij}}, \qquad n_n(u) = \sum_{i} \delta_{u, u_i},$$

where δ_{a,b} is the Kronecker delta. The energy spectra take nonzero values in the range ±1 for triangles, ±(N − 2) for edges and ±(N − 1)(N − 2)/2 for nodes. The proximity of spin configurations A and B can be measured by the Hamming distance

$$d_H(A, B) = \sum_{i<j} \frac{1 - s^A_{ij}\, s^B_{ij}}{2}, \qquad (5)$$

that is, the number of edges on which the two configurations differ. We can use the Hamming distance (5) to measure the proximity of consecutive configurations A_0 → A_1 → A_2 → ... generated by the synchronous dynamics (1), and in particular to find fixed points and limit cycles of the dynamics. A configuration A_t such that d_H(A_t, A_{t+1}) = 0 is a fixed point of the dynamics. The minimal value c such that d_H(A_t, A_{t+c}) = 0 is the length of a limit cycle. The corresponding cycle consists of the configurations A_t, A_{t+1}, ..., A_{t+c−1}. The initial configuration A_0 of any sequence A_0 → A_1 → ... generated by the dynamics (1) can be classified by the fixed point or limit cycle of the sequence.
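The spectrum n_e(u) and the Hamming distance (5) can be sketched as follows (illustrative code over a dict from edges, i.e. frozensets of node pairs, to ±1 spins; not the authors' implementation):

```python
import itertools
from collections import Counter

def edge_spectrum(s, N):
    """n_e(u): how many edges have edge energy u, where the edge energy is
    the sum over the N-2 triangles sharing the edge of u_ijk = -s_ij s_jk s_ki."""
    e = lambda a, b: s[frozenset({a, b})]
    def u_edge(i, j):
        return sum(-e(i, j) * e(j, k) * e(k, i)
                   for k in range(N) if k not in (i, j))
    return Counter(u_edge(i, j) for i, j in itertools.combinations(range(N), 2))

def hamming(a, b):
    # d_H(A, B): number of edges on which the two configurations differ
    return sum(1 for e in a if a[e] != b[e])
```

In a balanced state every triangle contributes −1, so every edge has energy −(N − 2) and the spectrum is concentrated in a single bin.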
With a limit cycle (or a fixed point) one can associate a basin of attraction, that is, the set of initial states A_0 which lead to this limit cycle. The update rule (1) can be written in the following way:

$$s_{ij}(t+1) = -\operatorname{sign}\big(u_{ij}(t)\big)\, s_{ij}(t).$$

If this update rule were applied asynchronously, that is, to one edge at a time, it would never increase the energy, and it would drive the system to a local energy minimum. We are, however, interested in synchronous dynamics. In this case more than one edge of a triangle can be updated simultaneously, and in effect the triangle energy, and thus also the energy of the system, can increase. The number of spins flipped in one step of the synchronous dynamics (1) is equal to the number of positive u_ij's, so

$$d_H(A_t, A_{t+1}) = \sum_{i<j} \Theta\big(u_{ij}(t)\big) = \sum_{u>0} n_e(u, t), \qquad (8)$$

where Θ is the Heaviside step function, and n_e(u, t) is the edge energy spectrum of the configuration A_t. It follows that A_t is a fixed point of the dynamics if all edge energies are negative, that is, n_e(u, t) = 0 for u > 0. The edge spectrum is said to be steady for t > t_0 if n_e(u, t) = n_e(u, t + 1) for all u and t > t_0. This just means that the spectrum does not change for t > t_0. For steady spectra the time dependence can be skipped: n_e(u, t) = n_e(u). Fixed points have steady spectra, but, as we will see, so do some cycles. We will call such cycles perfect. The Hamming distance between any two consecutive configurations of a perfect cycle is constant, d_H(A_t, A_{t+1}) = const, as follows from (8). In the next section we will discuss examples of perfect cycles.

III. PERFECT CYCLES

Let us first consider the system for N = 9. This is a good test site because the update rule (1) can be applied to all 2^36 spin configurations using a computer program, so one can test all configurations. Already for N = 11 the number of configurations is too large for an exhaustive computation over all configurations. We found that there are 967680 cycles of length c = 12 for N = 9.
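A brute-force classifier along these lines follows a trajectory until a state repeats and then tests whether the resulting cycle has a steady spectrum. This is a self-contained sketch with hypothetical function names, feasible only for small N:

```python
import itertools
from collections import Counter

def _step(s, N):
    # synchronous rule (1): s_ij(t+1) = sign(sum_k s_ik s_kj)
    new = {}
    for i, j in itertools.combinations(range(N), 2):
        t = sum(s[frozenset({i, k})] * s[frozenset({k, j})]
                for k in range(N) if k not in (i, j))
        new[frozenset({i, j})] = 1 if t > 0 else -1
    return new

def _spectrum(s, N):
    # edge energy spectrum n_e(u)
    e = lambda a, b: s[frozenset({a, b})]
    return Counter(sum(-e(i, j) * e(j, k) * e(k, i)
                       for k in range(N) if k not in (i, j))
                   for i, j in itertools.combinations(range(N), 2))

def classify(s0, N):
    """Iterate rule (1) until a state repeats.

    Returns (cycle_length, perfect): `perfect` means every state of the
    cycle has the same edge energy spectrum; length 1 is a fixed point.
    """
    seen, traj, s = {}, [], s0
    while True:
        key = frozenset(s.items())
        if key in seen:
            cycle = traj[seen[key]:]
            spectra = [_spectrum(c, N) for c in cycle]
            return len(cycle), all(sp == spectra[0] for sp in spectra)
        seen[key] = len(traj)
        traj.append(s)
        s = _step(s, N)
```

A balanced state is reported as a fixed point (cycle length 1) with a trivially steady spectrum.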
A graphical representation of an example state belonging to a perfect cycle, together with the remaining states of the cycle, is shown in Fig. 1. To the naked eye it is rather difficult to see what makes these states form a perfect cycle. The situation changes when the energy spectra of these states are analysed, because then one can observe that all the states have the same spectra. The edge energy spectrum is given in Table I.

TABLE I.
u:      −7  −5  −3  −1  +1  +3  +5  +7
n_e(u):  3   2   4   9  12   4   2   0

One can easily see that the energy of the states is $U = \frac{1}{3}\sum_u u\, n_e(u) = -6$, and the distance between any two consecutive states in the cycle (8) is $\sum_{u>0} n_e(u) = 18$. Using a computer program we have checked that configurations separated by two steps in the cycle differ by a constant number of spins, d_H(A_t, A_{t+2}) = 22. Similarly, the distance between any two configurations separated by three steps is constant, d_H(A_t, A_{t+3}) = 20. Generally, we found that for any s the distance d_H(A_t, A_{t+s}) in the cycle is constant for all t as long as s is fixed. For completeness, d_H(A_t, A_{t+s}) = 10, 18, 20 for s = 4, 5, 6. Also, d_H(A_t, A_{t+s}) is the same for s → 12 ± s. The plus–minus symmetry follows from the symmetry of the distance, d_H(A, B) = d_H(B, A). Also the number of triangles by which A_t and A_{t+s} differ is constant for all t when s is fixed, and it is D_H(A_t, A_{t+s}) = 36, 32, 16, 32, 36, 0 for s = 1, 2, 3, 4, 5, 6. We have also studied systems for N > 9 in search of perfect cycles. In this case, however, we performed a random search since, as mentioned, the number of configurations in these systems is too large to be exhaustively browsed. We have found perfect cycles of length c = 14 for N = 11. The edge energy spectra of these cycles are shown in Table III. As follows from the table, the energy of the configurations is $U = \frac{1}{3}\sum_u u\, n_e(u) = -1$, and the Hamming distance between any two neighbouring states in the cycle (8) is $\sum_{u>0} n_e(u) = 26$.
The corresponding node energy spectrum is n_n(−9) = 1, n_n(−5) = 4, n_n(3) = 4, n_n(7) = 2, and n_n(u) = 0 for other values of u. As before, we found that d_H(A_t, A_{t+s}) and D_H(A_t, A_{t+s}) for fixed s are independent of t, so all configurations of the cycle are equivalent and symmetrically distributed in the configuration space. We found that there are two distinct permutations fulfilling the condition (10). They can be decomposed into a cycle of length seven and two cycles of length two. Using the same enumeration argument as before (12), this gives $11!/2/14 \times 2^{10}$ such cycles. One would need to check all configurations to exclude the existence of other cycles (with a different energy spectrum) for N = 11. We have also found a perfect cycle of length c = 12 for N = 13. The edge energy spectrum is given in Table IV. The energy of the configurations is $U = \frac{1}{3}\sum_u u\, n_e(u) = -56$, and the Hamming distance between any two neighbouring states in the cycle (8) is $\sum_{u>0} n_e(u) = 20$. The node energy spectrum is n_n(−20) = 4, n_n(−16) = 2, n_n(−12) = 4, n_n(−4) = 2, n_n(0) = 1. Again we found that d_H(t, t + s) and D_H(t, t + s) are independent of t when s is constant.

IV. SEMI-PERFECT CYCLES

Not all limit cycles have steady energy spectra. There are cycles whose spectra change periodically. We will call them semi-perfect. As an example let us discuss a semi-perfect cycle that we have found for N = 13. The cycle is representative of all semi-perfect cycles in that it has typical features; additionally, it is the longest limit cycle we have found so far. It has length c = 48. The energy spectra of the states in the cycle change with period three. The edge spectra of three consecutive states of the cycle are given in Table V.

TABLE V.
u:      −11  −9  −7  −5  −3  −1  +1  +3  +5  +7  +9  +11
n_e(u):   2   0   3  11  17  15  17   8   4   1   0    0
n_e(u):   2   2   6   8  10  16  16  17   1   0   0    0
n_e(u):   2   0   2   9  18  15  20   9   2   0   1    0

The energies of the states are U_t = −32, −32, −28.
The values repeat every three steps. If we denote the map corresponding to a single step of the dynamics (1) by s(t + 1) = Φ(s(t)), then taking every third configuration is equivalent to s(t + 3) = Φ(Φ(Φ(s(t)))) = Ψ(s(t)), where the map Ψ is a triple composition of Φ: Ψ = Φ ∘ Φ ∘ Φ. Viewed from this perspective, a semi-perfect cycle of the dynamics defined by the map Φ (1) is a perfect cycle for Ψ. More generally, the class of semi-perfect cycles is the class of limit cycles which are perfect for a multiple composition Φ ∘ ... ∘ Φ of the original update rule.

V. DISCUSSION

The motivation behind the evolution rule (1) is that it locally maximises the number of balanced triads. Indeed, when performed asynchronously, that is, one edge at a time, the rule never reduces the number of balanced triads and thus leads to a state at a local maximum of the number of balanced triads (equivalent to a local minimum of the energy (2)). The synchronous version of the evolution (1), where all edges are updated simultaneously, has a far more interesting spectrum of attractors: in addition to fixed points it has limit cycles of different lengths and of different symmetry. Some limit cycles are surprisingly long. For example, we found a limit cycle of length c = 48 for N = 13. In this paper we mostly focused on a class of limit cycles which preserve the energy spectrum and are represented by symmetric trajectories in the configuration space, such that any two states separated by the same number of steps in the perfect cycle are separated by the same Hamming distance in the configuration space. We have argued that the symmetry of these trajectories is rooted in the automorphism group of the complete graph on which the system is defined and in the local gauge symmetry of the energy function (2). There are many open questions.
Is it possible to formulate general conditions that would make it possible to judge whether a state belongs to a limit cycle before checking it explicitly by iterating equation (1)? What is the longest limit cycle and the longest perfect cycle for the complete network for a given N? What is the abundance of such cycles? We know [1] that the fraction of initial states which lead to perfect limit cycles of length c = 14 for N = 11 is about 10^−6, which is much less than the fraction for perfect cycles of c = 12 for N = 9, which is 0.004. We expect that the percentage of states of perfect cycles decreases with the system size, but it would be good to find an argument about the asymptotic behaviour. Generally, the dynamics we discussed in this paper is of the type s(t + 1) = Φ(s(t)). The map Φ given by Eq. (1) is just a particular case. One can change the evolution rule. For example, adding a minus sign to the expression on the right-hand side of Eq. (1), we would obtain a system having a tendency to maximize the number of frustrated triads. Of course this evolution would be in one-to-one correspondence with the one discussed here, as can be seen by replacing states s in the original dynamics by mirror states s* in the new one. But the question of how the attractors of the evolution depend on the given map Φ is quite interesting. For example, what is the class of maps Φ which lead to perfect limit cycles? It would be interesting to study symmetry classes for general maps [9]. There is some correspondence between the dynamics of the model discussed in this paper and the quenched Kauffman NK model [10,11] of the time evolution of networks. As we argued in [1], here the number K of incoming links which determine the current state of a node (here: of a link) grows with the number of degrees of freedom (here: $\binom{N}{2}$) as the square root of this number (here: N). An important difference is that in our case there is only one function (given by Eq.
(1)) which determines the state of each link at the subsequent time, and not a random (fixed in the quenched model) set of such functions, different for each node. What is similar is the large number of steady states with minimal energy, which in our case is just the number of balanced states, varying with N as $2^{N-1}$. We add that the process of reaching the Heider balance, modeled by Eq. (1), has been termed 'social mitosis' [12]. Limit cycles in the Kauffman model [13,14] are no less important than fixed points and have a biological interpretation. Our results indicate that limit cycles can also occur when the evolution is deterministic and identical for all components of the system.
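The observation of Sec. IV, that a semi-perfect cycle of Φ with spectral period n is a perfect cycle of the n-fold composition Ψ = Φ ∘ ... ∘ Φ, suggests a generic helper; this is an illustrative sketch in which `phi` stands for any implementation of a single synchronous step:

```python
def compose(phi, n):
    """Return Psi, the n-fold composition of the single-step map phi.

    A semi-perfect cycle whose energy spectra repeat with period n under
    phi has a steady spectrum under Psi, i.e. it is perfect for Psi.
    """
    def psi(state):
        for _ in range(n):
            state = phi(state)
        return state
    return psi
```

The helper is map-agnostic: any deterministic update rule of the type s(t + 1) = Φ(s(t)) can be composed this way before searching for perfect cycles.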
Older Adults Accessing HIV Care and Treatment and Adherence in the IeDEA Central Africa Cohort

Background. Very little is known about older adults accessing HIV care in sub-Saharan Africa. Materials and Methods. Data were obtained from 18,839 HIV-positive adults at 10 treatment programs in Burundi, Cameroon, and the Democratic Republic of Congo. We compared characteristics of those aged 50+ with those aged 18–49 using chi-square tests. Logistic regression was used to determine if age was associated with medication adherence. Results. 15% of adults were 50+ years. Those aged 50+ were more evenly distributed between women and men (56% versus 44%) as compared to those aged 18–49 (71% versus 29%) and were more likely to be hypertensive (8% versus 3%) (P < 0.05). Those aged 50+ were more likely to be adherent to their medications than those aged 18–49 (P < 0.001). Adults who were not heavy drinkers reported better adherence as compared to those who reported drinking three or more alcoholic beverages per day (P < 0.001). Conclusions. Older adults differed from their younger counterparts in terms of medication adherence, sociodemographic, behavioral, and clinical characteristics.

Introduction

2.8 million people living with HIV worldwide are over the age of 50 [1]. In the USA, 24% of all people living with HIV are older than 50 [2]. In sub-Saharan Africa, more than 14% of adults with HIV are 50 years or older, and this population is growing [3]. Perceived risk of contracting HIV among older adults is low [4] despite physiological changes associated with aging which place older adults at increased risk of contracting HIV [5,6]. HIV disease progresses more rapidly among older adults than among their younger counterparts, and mortality among older adults is higher after developing an AIDS-defining illness [7,8]. Older adults are more likely to be diagnosed at a late stage of HIV disease progression than their younger counterparts [8,9].
This may be due, in part, to low perceived susceptibility of HIV among older adults [10] as well as among their healthcare providers [11]. Orel et al. [12] evaluated state departments of public health in the USA and concluded that there is a dearth of HIV/AIDS risk-reduction materials targeting older adults. A study conducted in eight sub-Saharan African countries found that older adults had lower levels of knowledge about HIV, and, among older adults, women had the lowest levels of HIV-related knowledge [10]. Few prevention programs in this setting are aimed at older adults [6]. Interest in HIV and aging is mounting, as evidenced by the increasing body of literature focused on aging, the emergence of meetings such as the 1st International Workshop on HIV and Aging held in Baltimore, MD, in 2010, and a growing number of advocacy activities such as National HIV/AIDS and Aging Awareness Day, held annually in the USA since 2009. In turn, focus on behavioral and psychosocial issues associated with HIV and aging is building. Emlet [13] found that older adults were less likely to disclose their HIV serostatus to relatives, partners, mental health workers, neighbors, and church members than those aged 20-39 years. Negin et al. [10] found similar results in sub-Saharan Africa. One US-based study found that older adults were more likely than their younger counterparts to be adherent to their antiretroviral therapy (ART) regimens [14]. In contrast, others [15] have found that adherence to ART and other medications decreases as the number of chronic conditions increases among HIV-positive older adults. Though strides have been made in treatment scale-up in sub-Saharan Africa, very little is known about older adults accessing HIV care and treatment in resource-limited settings. This paper examines whether sociodemographic, behavioral, and clinical characteristics of those aged 50+ differ from those aged 18-49 years.
Being over the age of 50 was a predictor of self-reported adherence according to a previous analysis of the women in this cohort [16]. The current paper seeks to extend these findings by evaluating whether or not there was an association between age and adherence to ART or other HIV-related medications in the overall International Epidemiologic Databases to Evaluate AIDS (IeDEA) Central Africa region cohort.

Materials and Methods

The HIV-infected adults included in this analysis were receiving care at 10 HIV care and treatment facilities contributing data to the IeDEA Central Africa region database. The National Institute of Allergy and Infectious Diseases funded the IeDEA initiative to establish regional centers for the collection and harmonization of HIV-related data. This international research consortium has enabled researchers in participating regions to better describe regional trends as well as address unique and evolving research questions in HIV/AIDS currently unanswerable by single cohorts. The Central Africa region database includes data from existing healthcare facilities in the Democratic Republic of the Congo (DRC), where data collection began in 2007, and Cameroon and Burundi, where data collection began in 2008. Approval for this research was granted by the Institutional Review Board (IRB) at the Kinshasa School of Public Health in DRC and RTI International, as well as the national ethics committees in Burundi and Cameroon. Sites providing data to the IeDEA Central Africa database were a combination of public and private hospital and ambulatory care units of varying size, ranging from three patient beds at one clinic in DRC to 300 beds at the largest hospital in Cameroon. Participating sites served predominantly urban populations and offered primary care in DRC and tertiary care in Burundi and Cameroon.
All participating sites recommended and provided routine HIV testing for participants' relatives, sex partners, and household members and had some level of linkage to programs providing prevention of mother-to-child transmission (PMTCT) services. Participating sites in Cameroon were the first within the IeDEA Central Africa region to offer free ART for adults in 2000, while participating sites in DRC started in 2005, followed by the Burundi site in 2006. All of the clinic sites contributing data for this analysis provided individual adherence counseling for patients. Many sites also offered group counseling on medication adherence. Frequency of counseling ranged from site to site, with some programs only providing adherence counseling in the event of virologic failure, while others provided counseling at initiation of therapy and at follow-up clinic visits every one to three months. Some sites were also able to provide other types of ART adherence support. Many of the sites in the DRC used follow-up appointments to assess adherence, and some distributed tools such as written or illustrated instructions on when to take each medication and, to a lesser extent, calendars, alarm clocks, watches, or pagers to be used as reminders. Other sites used teaching techniques such as quizzes on how and when to take each medicine as a method of reinforcing the information learned in the counseling. At the Cameroon and Burundi sites, the medical teams also incorporated a pharmacist into multidisciplinary teams of providers, and some sites had videos with instructions on adherence for patients to view. All patient-level adherence data were self-reported and assessed at each individual's last visit prior to this analysis. All adults, regardless of whether they were on ARVs, were asked whether they had missed taking their medication for more than two consecutive days in the last month.
For those not on ARVs, missed medications included, most commonly, cotrimoxazole prophylaxis and, to a lesser extent, tuberculosis (TB) prophylaxis and TB treatment. Length of time on ARVs was calculated by determining the length of time between the ARV start date and the last follow-up visit prior to this analysis and was coded as not on ARVs, <6 months, 6-24 months, and >24 months. Those on ART were followed every one to three months, and those not on ART were followed every six months, unless there was a clinical event for which they needed to return to the clinic for evaluation and/or care. All patient-level data used in this analysis were collected during a face-to-face interview with a clinic doctor or nurse.

Statistical Analysis

All analyses were performed using SAS 9.1 for Windows [17]. We compared baseline sociodemographic, behavioral, and clinical characteristics of those aged 50+ with those aged 18-49 years using chi-square tests to determine if the distributions of the two groups differed. We evaluated differences between countries using chi-square tests to determine if distributions between DRC, Cameroon, and Burundi differed. We also examined whether age was associated with self-reported medication adherence. We defined nonadherence as missed doses (of ART or other HIV-related medications) for two or more consecutive days in the past 30 days. Logistic regression was used to determine if age was associated with medication adherence while controlling for variables such as country, marital status, gender, employment status, heavy drinking, education, clinical stage at enrolment into the IeDEA database, and length of time on ARVs. Included in the model were sociodemographic and clinical characteristics that we hypothesized a priori might affect adherence, based on reported associations in the literature in the context of sub-Saharan Africa [18][19][20], while also considering completeness of data in the IeDEA Central Africa database.
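The chi-square comparisons described here were run in SAS; for illustration only, the Pearson statistic for an r × c contingency table can be computed as below. The counts shown are synthetic, not the paper's data:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c table of observed counts."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = rows[i] * cols[j] / total  # under independence
            stat += (obs - expected) ** 2 / expected
    return stat

# synthetic 2x2 example shaped like a gender-by-age-group comparison;
# with 1 degree of freedom the 0.05 critical value is 3.84
example = [[560, 440],    # hypothetical counts, aged 50+
           [710, 290]]    # hypothetical counts, aged 18-49
```

A statistic above the critical value for the table's degrees of freedom, (r − 1)(c − 1), indicates that the two distributions differ at that significance level.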
Results

As of June 2011, there were 18,839 adults enrolled in HIV care in the IeDEA Central Africa region database, and 2,819 (15%) were 50 years old or older (Table 1). The majority of adults (N = 10,647) were from DRC, 5,835 were from Cameroon, and 2,357 were from Burundi. Of adults aged 50+, the mean age was 55 years in both DRC and Cameroon (median 54 years) and 56 years in Burundi (median 55 years). Those aged 50+ were more evenly distributed between women and men (56% versus 44%, resp.) as compared to those aged 18-49 (71% versus 29%, resp.) (P < 0.05). Approximately 20% of both groups reported heavy drinking, defined as three or more alcoholic drinks per day on average. Adults were asked about their marital status, whether they had any casual sex partners in the last 6 months (defined as an occasional sex partner in addition to the respondent's regular partner), whether they had a sex partner (regular or casual) that recently died, and whether they used condoms with their regular partner. One-quarter of those aged 18-49 reported being single, compared to only 5% of older adults (P < 0.05). Seventeen percent of adults aged 18-49 indicated they had a casual sex partner within the last 6 months, as compared to 8% of those aged 50+ (P < 0.05). Forty-two percent of adults aged 50+ reported having a sex partner that recently died, as compared to 27% of those aged 18-49 years (P < 0.05). A higher proportion of those aged 18-49 reported using condoms with their regular partner (19%) as compared to those aged 50+ (11%) (P < 0.05). We compared HIV serostatus disclosure of those aged 18-49 and 50+ at enrollment into the IeDEA Central Africa database. A higher proportion of those aged 18-49 as compared to those aged 50+ had shared their HIV test results with their partner or spouse (33% versus 27%, resp.) (P < 0.05).
The majority had shared their results with a family member (57% for both groups), while few had shared their results with a friend (6% versus 4%, resp.), a health worker (6% for both groups), or someone living in the home (2% for both groups). Few were referred for disclosure counseling at the baseline visit (3% versus 2%, resp.). To examine whether older adults were living with fewer amenities than their younger counterparts, we reviewed four variables addressing socioeconomic status: education level, paid profession, access to electricity in the home, and running water in the home. Older adults were more likely to report no formal education than their younger counterparts (14% and 7%, resp.) (P < 0.05); however, there were no differences between the two age groups in having a paid profession (42% of both groups), electricity (approximately 78% of both groups), or running water (approximately 61% of both groups) in the home. We compared the health status of those aged 18-49 and 50+ at enrollment into the IeDEA Central Africa database. The majority of both groups entered HIV care through voluntary counseling and testing (56% and 55%, resp.). The majority of both groups (64% and 65%, resp.) had moderate-to-severe HIV disease progression, classified as WHO clinical stage 3 or 4, at enrollment into the IeDEA database. Of the 7,858 adults with CD4 counts available at enrolment into the IeDEA database, a higher proportion of those aged 18-49 years had CD4 cell counts less than 200 cells/mm3 (44%) as compared to 37% of adults aged 50+ (P < 0.05). A higher proportion of those aged 50+ (8%) had a history of hypertension as compared to those aged 18-49 years (3%) (P < 0.05), while few had a history of diabetes (3% versus 1%, resp.). About 20% of both groups had a history of tuberculosis.
Recognizing the diversity of the countries included in the IeDEA Central Africa region, we examined the sociodemographic, behavioral, and clinical characteristics of the 18,839 HIV+ adults in the database by country (Table 1). A higher percentage of adults in the Cameroon sites (35%) were single as compared to adults in the DRC and Burundi sites (17% for both) (P < 0.05). Few adults in the DRC and Cameroon sites (4% for both) reported having no formal education as compared to 37% of adults in Burundi (P < 0.05). A higher percentage of adults in the Cameroon sites reported having a paid profession (51%), electricity (93%), and running water (68%) in the home as compared to those in DRC and Burundi (P < 0.05). Table 2 presents the results of the logistic regression model used to determine if age was associated with medication adherence while controlling for variables such as country, marital status, gender, employment status, heavy drinking, education, clinical stage at enrollment into the IeDEA database, and length of time on ARVs. Those aged 50+ were more likely to be adherent to their medications than those aged 18-49 (P < 0.001). Older adults had 1.59 times the odds of being adherent to their medications as compared to their younger counterparts. In terms of other predictors of adherence, adults who were not heavy drinkers had 1.40 times the odds of being adherent as compared to those who reported drinking three or more alcoholic beverages per day. Those who were not taking ARVs had 2.05 times the odds of being adherent to other medications (i.e., cotrimoxazole prophylaxis) as compared to those on ARVs for less than 6 months. Adults from the Burundi site had 2.23 times the odds of being adherent to their medications as compared to those from the DRC sites (P < 0.001). Adults from the Cameroon sites had 1.98 times the odds of being adherent to their medications as compared to those from the DRC sites (P < 0.001).
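The adjusted odds ratios above come from exponentiating logistic regression coefficients. A minimal stdlib sketch of that conversion (the coefficient is hypothetical, back-computed from the reported OR of 1.59 for the 50+ group, not the study's actual model output):

```python
import math

def odds_ratio(beta: float) -> float:
    """Convert a logistic-regression coefficient to an odds ratio."""
    return math.exp(beta)

def predicted_prob(intercept: float, betas: dict, x: dict) -> float:
    """Logistic response: P(adherent) = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = intercept + sum(betas[k] * x[k] for k in betas)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficient implied by the reported OR of 1.59 (age 50+ vs 18-49)
beta_age = math.log(1.59)
print(round(odds_ratio(beta_age), 2))  # 1.59
```

With a zero intercept, an indicator covariate with this coefficient shifts the predicted adherence probability from 0.50 to 1.59/2.59 ≈ 0.61, which is how an OR translates to the probability scale.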
a Variables do not sum to the total number of adults in the database (18,839) due to missing data. *Significant differences found between the distributions of adults aged 18-49 years and adults aged 50+ years (α = 0.05). ∧Significant differences found between the DRC, Cameroon, and Burundi distributions (α = 0.05). Discussion Fifteen percent of this large cohort of HIV-infected adults were 50 years old or older. Though older adults were more likely to report no formal education than their younger counterparts, they did not seem to be living with fewer amenities as per the socioeconomic variables examined in this study: having a paid profession, access to electricity, and running water in the home. We found that older adults were more likely to be adherent to their medications than their younger counterparts. In terms of other predictors of adherence, we found that adults who were not heavy drinkers reported better adherence as compared to those who reported drinking three or more alcoholic beverages per day. Older adults have been found to be more adherent to HIV medications, including ART, than their younger counterparts, which may improve their survival and response to treatment [14]. Though we found adults aged 50+ years to be more adherent to their medications than those aged 18-49, further inquiry is needed to determine if the older adults in this cohort, in turn, experience an improved response to ART and survival. Alcohol abuse has been found to negatively affect adherence in sub-Saharan Africa (see Mills et al. [19] for a review), and our results provide further support. We found that adults who were not heavy drinkers reported better adherence as compared to those who reported drinking three or more alcoholic beverages per day. A similar proportion of older and younger adults reported heavy drinking (23% and 21%, resp.).
However, it is important to note that those aged 50+ were more evenly distributed between women and men as compared to those aged 18-49, and, in this cohort, men tended to report alcohol use more frequently than women. Our results suggest that those reporting heavy alcohol use may benefit from additional adherence counseling. Those who were not taking ARVs reported better adherence to other medications (i.e., cotrimoxazole prophylaxis) as compared to those on ARVs for less than 6 months, supporting the notion that additional adherence counseling for new ART users may be beneficial. Older adults in sub-Saharan Africa have been found to be less likely to discuss HIV prevention with their partner as compared to their younger counterparts [10]. The results of the current study echo these findings. Disclosure of HIV test results to a partner/spouse, as well as condom use with a regular partner, was higher among those aged 18-49 as compared to those aged 50+. Further, HIV programming in sub-Saharan Africa is generally targeted towards younger adults, not those over 50 [3,21]. Older adults, as compared to their younger counterparts, have been found to be less likely to have been tested for HIV [10] and to experience delays in diagnosis and treatment [8,9], as clinicians may not routinely screen older adults for HIV or recognize their signs and symptoms as those of HIV [11]. In the current study, the majority of both age groups had moderate-to-severe HIV disease progression, classified as WHO clinical stage 3 or 4, at enrollment into the IeDEA database, which corresponded to entry into HIV care for many adults. Older adults are at increased risk for HIV infection due to biological and social factors [5,6]. For women, in particular, age is associated with thinning of vaginal membranes and reduced vaginal lubrication, which can lead to tearing during sexual intercourse [5].
Social factors may place older adults at risk for HIV, such as divorce or death of a spouse, which may lead to new sexual partners and thus risk of exposure [5]. In sub-Saharan Africa, cultural practices, such as wife inheritance, may also place women at risk of contracting HIV in the event of the death of a spouse [6]. In the current study, a greater proportion of older adults as compared to those aged 18-49 were widowed (40% versus 18%, resp.) or divorced (11% versus 9%, resp.). Nonadherence was defined as missed doses of ART or other HIV-related medications (most commonly cotrimoxazole prophylaxis and, to a lesser extent, TB prophylaxis and TB treatment) for two or more consecutive days in the past 30 days. Limitations Our results should be considered in light of several study limitations. The adherence data for this study were self-reported and collected during face-to-face interviews with a clinic doctor or nurse, which can lead to social desirability bias and, in turn, inflated adherence estimations [22]. Our baseline data were derived at enrollment into the IeDEA Central Africa database. For many adults, this also corresponded to enrollment into HIV care. Though we were able to determine when ART was started, we were not able to assess how long the patient had been in HIV care before enrolling into the IeDEA database. The data presented in this paper provide a snapshot of patient characteristics from a broad range of private and public hospitals and ambulatory care units of varying size and capacity. However, the data are not nationally representative, as they were not derived from randomly selected HIV care facilities. Though describing regional trends was an objective of the larger study, exploring differences between age groups was not an original goal.
Collecting additional data on psychosocial variables, such as social support, depression, stigma, and quality of life, would have been insightful for examining potential differences between older adults and their younger counterparts. Conclusions This is the first study to examine whether there are differences between older adults and their younger counterparts accessing HIV care and treatment in the Central Africa region. These results are noteworthy as they provide insight into the sociodemographic, behavioral, and clinical characteristics of HIV-infected older adults in this region. Though we found older adults were more likely to be adherent to their medications than their younger counterparts, further inquiry is needed to better understand factors affecting ART adherence, response to treatment, and survival of older adults receiving HIV care in sub-Saharan Africa. We found that heavy drinking negatively affected medication adherence, which suggests that those reporting heavy alcohol use may benefit from additional adherence counseling.
Noble metal catalyst detection in rocks using machine-learning: The future to low-cost, green energy materials? Carbon capture and catalytic conversion to methane is promising for carbon-neutral energy production. Precious metal catalysts are highly efficient, yet they have several significant drawbacks, including high cost, scarcity, and the environmental impact of mining and intensive processing requirements. Previous experimental studies and the current analytical work show that refractory grade chromitites (chromium-rich rocks with Al2O3 > 20% and Cr2O3 + Al2O3 > 60%) with certain noble metal concentrations (i.e., Ir: 17-45 ppb, Ru: 73-178 ppb) catalyse Sabatier reactions and produce abiotic methane, a process which has not been investigated at the industrial scale. Thus, a natural source (chromitites) hosting noble metals might be used instead of concentrating noble metals for catalysis. Stochastic machine-learning algorithms show that, among the various phases, the noble metal alloys are natural methanation catalysts. Such alloys form when pre-existing platinum group minerals (PGM) are chemically destructed. Chemical destruction of existing PGM results in mass loss, locally forming a nano-porous surface. The chromium-rich spinel phases, hosting the PGM inclusions, are subsequently a second-tier support. The current work is the first multi-disciplinary research showing that noble metal alloys within chromium-rich rocks are double-supported, Sabatier catalysts. Thus, such sources could be a promising material in the search for low-cost, sustainable materials for green energy production. The Paris Agreement highlights the paramount importance of establishing sustainable fuel sources. Catalytic hydrogenation of carbon dioxide is a promising carbon-neutral fuel source 1.
Emerging research on sustainable energy and environmental protection, and the implementation of green policies by governments and international foundations 2-4, emphasize the need to shift towards environmentally friendly energy production. The Sabatier reaction (Eq. 1) is a well-known and widely used process to produce methane from catalytic hydrogenation of CO2. It is a two-step reaction, combining an endothermic reverse water gas shift (RWGS) reaction and an exothermic CO hydrogenation (Eqs. 2 and 3, respectively), at elevated pressures and temperatures ranging between 200 and 500 °C 5: CO2 + 4H2 → CH4 + 2H2O; ΔH = −165 kJ/mol (1). The produced hydrocarbon is not exclusively methane but a mixture of hydrocarbons and other organic molecules, depending on the activity and selectivity of the catalyst. Nickel and ruthenium-based catalysts produce almost exclusively methane. Less reactive metal catalysts (Pd, Pt, Rh, Mo, Re, Au) simultaneously produce CH4, CH3OH and CO via the RWGS 5. In previous studies, the lowest reported temperature for CO2 hydrogenation was room temperature (25 °C): a ruthenium nanoparticle catalyst loaded on TiO2 led to methane formation within the first 5 min of the experiment 6. Interestingly, low temperature (< 100 °C) CO2 hydrogenation occurs in nature, producing abiotic methane (methane hereafter) via the Sabatier reaction. Studies suggest that the source of this methane is chromium-rich rocks (chromitites) 7,8. Minerals with catalytic properties within chromitites are particularly promising for producing commercially efficient and sustainable catalysts. Mineral catalysts could reduce the cost and environmental impact related to the synthesis of catalysts (e.g., less processing) and fuel production (lower energy for lower-temperature reactions). There is currently limited understanding of the constraints on low temperature methane formation.
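Written out, the two-step scheme referenced as Eqs. (2) and (3) combines the reverse water gas shift with CO hydrogenation. The sub-reaction enthalpies below are the commonly cited values near 298 K (not stated in the text itself), and they sum to the overall −165 kJ/mol of Eq. (1):

```latex
\begin{align}
\mathrm{CO_2} + 4\,\mathrm{H_2} &\rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}, & \Delta H &= -165\ \mathrm{kJ/mol} \tag{1}\\
\mathrm{CO_2} + \mathrm{H_2} &\rightarrow \mathrm{CO} + \mathrm{H_2O}, & \Delta H &= +41\ \mathrm{kJ/mol} \tag{2}\\
\mathrm{CO} + 3\,\mathrm{H_2} &\rightarrow \mathrm{CH_4} + \mathrm{H_2O}, & \Delta H &= -206\ \mathrm{kJ/mol} \tag{3}
\end{align}
```

Adding Eqs. (2) and (3) term by term cancels the intermediate CO and reproduces Eq. (1), with +41 − 206 = −165 kJ/mol.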
Direct evidence for the kinetics of methane formation in nature is limited. The existing studies on high-temperature (> 300 °C) experiments are not representative of the methanation in chromitites. Isotopic analyses of methane in ruthenium-bearing chromitites suggest that methane was formed below 150 °C [7][8][9] . Low temperature (< 100 °C) experiments demonstrated that pure ruthenium catalysts, in quantities equivalent to their natural occurrence in chromitites, effectively support methanation 10 . Hence the original hypothesis was that the most abundant ruthenium phase in chromitites should be the catalyst. Ruthenium-rich phases occur mainly in chromitites including laurite (RuS 2 ), laurite-erlichmanite (OsS 2 ) solid solutions, and Ir-Ru-Os-Ni alloys (IPGE-Ni alloys) [11][12][13] . However, the exact locus of methane generation and the actual catalyst(s) is poorly understood. Natural materials including chromitites have a complex chemical history. Over the span of millions of years, rocks undergo numerous chemical transformations altering their original chemical composition along with their constituent minerals. Methane formation in chromitites comprises a minor part of the overall rock evolution. In a mathematical context, chromitites are multivariate systems. The abundance and distribution of the measured variables (i.e., chemical elements) derive from multiple overlapping processes. Variables are inheritably interrelated, having contrastingly different variances and scales. In this context, variables related to methane formation have low variance. Thus, it is critical to apply a suitable data analysis method to extract information on the mineral catalyst. The workflow of this study is sequential: large-scale target -source rock modelling -catalyst prediction -micro-scale target. The rocks with high potential (chromitites) are derived from abandoned chromium ore mines in Greece. Large-scale inferences are based on whole-rock chemical analyses. 
However, chemical data from rock samples are compositional (i.e., closed to 100%), and thus linear regression and other parametric methods are unsuitable to detect causal effects. The suitability of non-parametric and stochastic approaches was tested using ANOVA and Spearman's correlations on the whole-rock chemical analyses, comparing methane concentrations across the various rock types, which indicated the methane source rock. A combination of stochastic machine-learning algorithms, including Random Forest Regression (RFR), t-distributed Stochastic Neighbour Embedding (t-SNE), and model-based clustering with the Bayesian Information Criterion (mBIC), determined the element proxies of the mineral catalyst. The RFR classified the variables related to methane in order of importance. Reverse RFR modelling validated the predictors' ability to identify rocks with high methane levels. The best predictors comprise the potential catalyst proxies. Additional machine-learning algorithms (t-SNE and mBIC) verify the robustness of the identified catalyst proxies. The micro-scale investigation focuses on the richest-in-methane chromitites. A series of quantitative mineralogical analyses and observations (i.e., composition, stoichiometry, reaction indicators, chemical or crystal lattice modifications) connects the identified catalyst proxy to specific minerals. The latter comprise the suggested low-cost catalysts. Stochastic machine-learning techniques were used to trace the naturally occurring catalysts among the 55 analysed chemical elements. Herein, we suggest a novel, natural catalyst, in line with the need for a cost-effective catalyst that is active at low temperatures, resulting in a lower environmental impact for its processing. The concept of a double-supported catalyst is also introduced. The present work is the first multi-disciplinary approach showing that noble metal alloys (i.e., Ir, Ru) within chromium-rich rocks are double-supported, Sabatier catalysts (Supplementary Tables 1 and 2).
ANOVA tests between the lithotypes and methane contents showed that the rock type has a large effect on the methane concentration (F(4, 53) = 53.935, p < 0.001, ω2 = 0.785; Supplementary Tables). Catalyst proxies. Minerals consist of chemical elements in an ordered crystallographic arrangement, incorporated in their crystal lattice. Therefore, the bulk chemical composition of a rock system reflects its mineralogical composition. This interdependency among elements, minerals, and rocks results in collinearity because the variables are not independent. Common parametric statistical tools, such as regression analysis, especially on untreated data, are prone to biased results 14. Furthermore, distance-based machine-learning techniques assuming a Euclidean geometry are not appropriate, as the relevant assumption criteria are not met, and they require complex data transformation techniques 15. However, data treatment results in loss of information when the information is carried by features with small variances 16 (Supplementary Tables 8-9). Average rank position plots of each important feature against its sum ranking reveal distinct top feature subgroups (Fig. 1). Contrary to expectations from previous studies 7,10, iridium is the top predictor and more important than ruthenium in predicting methane concentration. Iridium and ruthenium comprise the most important features in methane prospecting, as denoted by the steep difference in their average ranks (3.7 and 6.5, respectively, vs 12.7 or higher for the remaining elements) and their minimal ranking fluctuations (0 and 2.8). Hence, it appears that iridium and ruthenium are the true catalyst indicators. The rest of the elements are mostly classifiers of lithotypes with high or low methane concentrations (e.g., Cr and SiO2 are the proxies for chromitites and basic/ultrabasic rocks, respectively). This agrees with the well-accepted geological proxies for such lithotypes and with the current ANOVA results.
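The reported effect size can be recovered from the F statistic and the two degrees of freedom alone. A minimal sketch of that arithmetic (a standard omega-squared identity, not code from the study):

```python
def omega_squared(f: float, df_between: int, df_within: int) -> float:
    """Omega-squared effect size for a one-way ANOVA, expressed in terms
    of F and the degrees of freedom:
        w2 = df_b * (F - 1) / (df_b * (F - 1) + N),  N = df_b + df_w + 1
    """
    n_total = df_between + df_within + 1
    num = df_between * (f - 1.0)
    return num / (num + n_total)

# Reported: F(4, 53) = 53.935 -> omega^2 ~ 0.785
print(round(omega_squared(53.935, 4, 53), 3))  # 0.785
```

Note that df_between + df_within + 1 = 4 + 53 + 1 = 58, matching the 58 samples stated in the Methods.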
Results Validation of the catalyst proxies. The hypothesis that iridium and ruthenium are the catalysts was investigated by performing sequential t-SNE plots for each subgroup of the top features (Supplementary Table 9). The methane levels are defined according to the measured concentrations to reveal which subgroup of elements provides distinct grouping in samples with similar methane levels (Table 1). The t-SNE method was preferred because its stochastic approach allows the investigation of variables with different scales and variances. It makes no assumptions about the underlying data distribution and is not affected by outlier values. Only the t-SNE that includes iridium and ruthenium (top 2 features plot) gave an almost ideal sample grouping (Fig. 1A) for most of the tested perplexity values. Methane was excluded from the t-SNE calculations, so the plots give an unbiased grouping of the rocks with the highest catalytic potential. Additionally, the modified Bayesian Information Criterion (mBIC) clustering for all the subgroups of the top features (Supplementary Table 10) shows that iridium and ruthenium provide the best clustering for methane level prediction (Fig. 1B). The clusters were compared with both the predefined methane levels and the lithotypes. Multiple association measurements (likelihood ratio, Pearson, contingency coefficient, Cramer's V) for each feature subgroup (Supplementary Table 11) were used for these comparisons. An increase of features (i.e., significant elements) in the plots results only in groups which are interpretable from the rock-type perspective. The results strongly support the conclusion that only iridium and ruthenium are the true catalyst proxies. Although the clustering results are not as ideal as the lone t-SNE grouping, they clearly show that iridium and ruthenium concentrations in chromitites control, almost exclusively, methane abundance.
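The validation pairs a t-SNE embedding with model-based (BIC-selected) clustering. A rough Python analog of that combination, using scikit-learn's TSNE and a BIC scan over Gaussian mixtures on synthetic stand-in data (the data, perplexity value, and use of GaussianMixture in place of the paper's mBIC implementation are all assumptions for illustration):

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for (Ir, Ru) concentrations in two methane-level groups
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)),
               rng.normal(5.0, 1.0, (30, 2))])

# 2-D stochastic embedding; the paper scans several perplexity values
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)

# Model-based clustering: pick the component count that minimises BIC
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)
print(best_k)  # two well-separated groups -> 2 components
```

As in the paper's workflow, the response (methane) is deliberately excluded from the embedding, so any grouping that matches the methane levels is unbiased evidence for the chosen features.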
Microscopic description of the methane source-chromitite. The chromitites with the highest concentrations of methane (Fig. 1, MSK code) were selected to assess the validity of the catalyst proxy. Detailed microscopic characterisation revealed that Mg-Cr spinels (magnesiochromite) comprise on average 95% of the modal percentage in these rocks. The shape of the spinel crystals ranges from euhedral (hence relatively unmodified) … irarsite-osarsite are highly variable and controlled by the original composition of the primary PGM, the alteration rate, and local factors, such as the secondary micro-porosity. Altered PGM and H2 flow. The progressive removal of sulphur during laurite alteration (likely escaping as H2S) resulted in the stoichiometrically S-poor, variably desulphurised laurite. Mobilisation of ruthenium and sulphur from the primary laurite-erlichmanite solid solutions left a richer-in-osmium laurite relic. Desulphurisation of laurite is associated with reducing and low fS2 conditions 18-20, indicating log fS2 below approximately −20.5. Serpentinisation is a well-known, highly reducing alteration process. The hydration of ultrabasic rocks produces high amounts of H2, which lowers the fO2 and fS2 of the rock system 21-23. Under such conditions, the PGE (Ru, Ir, Os) and base metals (e.g., Ni) are remobilised and form secondary alloys and S-poor, Ni-rich sulphides. The ultra-low fS2, coupled with a low fO2 in the H2-rich serpentinising fluids, is consistent with the formation of the secondary awaruite in the chromitites, which indicates log fO2 < −35 and low water/rock ratios 23-25. The latter implies that the reducing agent (H2) affecting the chromitites was most likely in the gas state. Several secondary Ni-Co-V phosphides in the chromitites indicate ultra-high reducing conditions in these rocks 26-31.
Their secondary origin explains the statistical correlation between cobalt, vanadium, and methane in the present results. Rerouting the mineral-catalyst target. The multi-approach data analysis indicates the occurrence of a Sabatier catalyst, consisting primarily of iridium and ruthenium, within chromitites. Laurite (RuS 2 ) is the most abundant ruthenium-bearing PGM and can incorporate up to 16 wt.% iridium. However, there is no dependence between sulphur and methane. Thus, it is highly unlikely that laurite is the catalyst in the samples. The laurite abundance cannot explain why some chromitites have considerable methane (e.g., CH 4 : 8500 ppmv, Ru: 101 ppb) and others do not (e.g., CH 4 : 1379 ppmv, Ru: 150 ppb). The current study shows that both iridium and ruthenium are critical Sabatier proxies. However, there is no linear relationship between their abundance and methane concentration. Hence, either different minerals catalyse the Sabatier reaction within different samples or a mineral with a highly varying composition is the catalyst. There is insufficient evidence to reject either possibility. However, observations indicate that an iridium-ruthenium-bearing mineral with a highly varying composition may have the strongest impact. It is possible that the secondary Ir-Ru-Os-Ni alloys are the main (but not necessarily the exclusive) Sabatier catalysts. These alloys are extremely inhomogeneous and show unsystematic metal ratios 17 , indicating that their compositions are highly influenced by the composition of the precursor laurite and erlichmanite, the variable intensity of alteration, and mobility of S, Ni and As. This variability explains their non-linear relationship with methane abundance. Moreover, their nano-spongy texture increases the available specific surface area for reactant adsorption, rendering them the ideal loci for a Sabatier reaction. 
There are few studies which have examined synthetic catalysts with the composition of laurite for deactivation resistance 32. However, most research is focused on metallic, hybrid or metal-organic framework composite catalysts for higher efficiency 33,34. The Ir-Ru-Os-Ni alloys are the closest natural counterpart to a metal catalyst. Secondary PGM, like the Ir-Ru-Os-Ni alloys, occur either as inclusions in spinels, or in micro-fractures filled with other secondary minerals. Previous work on the same PGM concentrates 17 showed that the secondary Ir-Ru-Os-Ni alloys were preferentially liberated in the same fraction as magnesiochromite and not in the secondary minerals fraction. Thus, the Ir-Ru-Os-Ni alloys were inclusions in magnesiochromite, deriving from the in-situ destruction of laurite. However, all catalysts require a support, which is of great importance. One of the preferred supports in catalytic experiments, alumina (Al2O3), is almost identical (compositionally and crystallographically) to natural spinels. If the Ir-Ru-Os-Ni alloys are considered the catalysts, then laurite is the catalyst support. The Ir-Ru-Os-Ni-alloy-laurite composite grains are in turn supported by the magnesiochromite crystals. Hence, the noble metal alloy (hosted in laurite) inclusions in spinels are the closest natural counterpart to a metal catalyst on an alumina support. Discussion A multi-disciplinary methodology is developed to discern the effect of the chemical composition on methane formation in a natural, multivariate chemical system of rock samples. Stochastic machine-learning algorithms, chemical analyses and microscopic observations are used to validate the current inferences. Machine learning showed that iridium concentration is the most important predictor of methane concentration in chromium-rich rocks. Ruthenium is a useful proxy for methane formation when considered in tandem with iridium.
This unexpected result allowed for a shift in focus from laurite (RuS2) 10 to the secondary Ir-Ru-Os-Ni alloys as the potential Sabatier catalysts. Laurite is the most widespread noble-metal-bearing mineral in chromium-rich rocks, therefore its consideration as the catalyst fails to explain why methane concentrations are lower than expected (e.g., < 3000 ppmv) in many ruthenium-rich chromitites (Ru > 100 ppb). Sulphur is a common poison to the activity of a catalyst, and thus the secondary Ir-Ru-Os-Ni alloys represent a more promising catalyst target. These alloys are formed from the extreme desulphurisation of laurite, which causes mass loss and subsequently creates a nano-porous crystal surface. The Ir-Ru-Os-Ni alloys are the ideal loci of low-temperature CO2 hydrogenation (Fig. 3) due to their large specific area and pure metal form. Continuous flow of H2 gas generates extreme reducing conditions and triggers desulphurisation and formation of these alloys, in a process which may be an analogue of the routine pre-treatment methods used in catalysis to activate the metal catalysts and remove any adsorbed contaminants. The Ir-Ru-Os-Ni alloys are two-tiered supported catalysts: laurite is the integrated, first-level support, while spinel comprises the second-level support. As PGM precipitate from magmatic fluids, the bonding between the mineral catalyst and its support is superior to any synthetic counterpart. The current data cannot verify whether iridium is the catalyst or works as a promoter to ruthenium in the Ir-Ru-Os-Ni alloys inside chromitites. Nonetheless, the overall composition of the catalyst and the support materials (e.g., Ru, Cr, MgO) may have an undetermined synergistic effect in natural systems.
The identification of a naturally occurring mineral catalyst for low-temperature CO2 hydrogenation is critical for sustainable catalysis. The comparatively reduced processing required for mineral catalysts would greatly reduce the carbon imprint, and the cost of the end-product. The closest synthetic counterpart catalyst is metal ruthenium with an iridium promoter on an alumina support. Pure ruthenium and iridium commonly derive from the extensive processing of chromitites. Alumina is a widely used catalyst support. However, alumina derives from energy-intensive processing of bauxites (aluminium-rich rocks). Herein, a naturally occurring, noble metal catalyst is identified within chromitites, having an integrated support (laurite and spinel), thus making such natural chromitites an excellent, low-cost material for direct catalytic hydrogenation. Recent studies highlight that traces of ruthenium can be extremely active catalysts 35. Therefore, the small amounts of iridium and ruthenium detected in the studied rocks may be considered a positive factor 10,35. Nonetheless, the studied chromitites have evidently catalysed low-temperature hydrogenation in the past 7,8. It is critical to note that noble and precious metals, used for catalyst production, are already under risk for future supply disruption 36. Thus, the use of natural catalytic materials will greatly benefit the efforts for sustainable catalysis. Minimally processed natural materials require less energy and contribute immensely to the decrease of waste. Mineral catalysts such as the noble metal alloys in chromitites might be further investigated for their potential industrial applications. Conclusions This study highlights the role of natural noble metal alloys as hydrogenation catalysts. Stochastic machine learning on whole-rock chemical data revealed that iridium, followed by ruthenium, are proxies of the mineral catalyst.
Iridium and ruthenium concentrations were the top predictors in identifying rocks with high levels of methane and may be critical for future material exploration. In this study, the richer-in-methane, refractory chromitites host 17-45 ppb of iridium and 73-178 ppb of ruthenium. Microscopic characterisation and mineral analyses showed that nano-porous Ir-Ru-Os-Ni alloys comprise the catalyst. These noble-metal alloys replace in-situ laurite, which occurs as inclusions within fractured spinels. Laurite and spinels constitute a two-tiered support for the mineral catalyst. The noble metal alloys, laurite and spinels are naturally fused during magmatic and post-magmatic processes. The natural fusion creates bonding between the catalyst and the support, far superior to their common synthetic counterparts. Noble-metal alloys found in chromitites can potentially serve as low-cost, sustainable catalysts for green energy production. Efficient application of mineral catalysts will drastically improve the economic viability of sustainable synthetic fuel production and have a positive environmental impact (by contributing to carbon sequestration). Additionally, natural catalysts would reduce energy consumption because of (a) the lower energy requirements for reduced processing and (b) the lower reaction temperatures. Additional experimental work is required to determine the commercial suitability of natural noble metal catalysts within chromitites. Methods Material characterisation. A total of 12 chromitites, 11 ultrabasic rocks, 4 basic rocks, 5 rodingitised basic rocks, and 26 basic volcanic rocks comprise the dataset. All samples were collected in ophiolitic and volcanic rock outcrops in the central and northern parts of Greece. Chromitites were collected from abandoned mining sites in the following areas: Moschokarya, Eretreia, Aetorraches and Skoumtsa.
A comprehensive description of the sampling locations and detailed macroscopic, microscopic and chemical characterisation of the samples is available in the unpublished dataset of the first author 37 . The separation and extraction process of the restudied PGM concentrates, as well as the methane extraction methods and values, are given in earlier works 7,17 . Additional information may be requested by contacting the first author.

Statistical analysis. We used centred log-ratio (clr) transformed geochemical data to perform statistical analysis on 58 samples. Values below the detection limit were replaced by half of the limit, while missing values were imputed using the median value of each rock type. Data transformation is necessary because geochemical data are compositional and prone to spurious correlations 38 . The CoDaPack app 39 was used to perform the clr transformations (Supplementary Table 12). ANOVA was performed to detect differences between the rock types and the methane concentrations. Correlations between methane concentration and whole-rock composition were computed to identify which elements have significant positive correlations with methane, as candidates for the catalyst proxy. Despite the data transformation, several elements continued to show outliers; hence, nonparametric correlations (Spearman's r) on the transformed data were preferred to avoid biases. Nevertheless, Pearson's r was calculated for comparison purposes and showed almost identical results. While the clr transformation has several advantages (e.g., the data are plotted in Euclidean space, and the interpretation of the results is more intuitive), it does not open up the data so that variables vary independently; hence, collinearity issues are not solved. Regression is a necessary step in identifying whether the positively correlated elements have an effect on the amount of methane measured.
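The two preprocessing steps described above (half-detection-limit substitution followed by the centred log-ratio transform) are simple to express in code. The study used the CoDaPack app; the following is an illustrative Python sketch of the same idea, with all names and numbers invented for the example.

```python
import math

def clr(composition, detection_limits=None):
    """Centred log-ratio transform of one compositional sample.

    Values below the detection limit (passed as None, with the limit in
    detection_limits) are replaced by half the limit, as in the text.
    Function and variable names are illustrative, not from CoDaPack.
    """
    xs = []
    for i, v in enumerate(composition):
        if v is None:                      # below detection limit
            xs.append(0.5 * detection_limits[i])
        else:
            xs.append(v)
    g = math.exp(sum(math.log(x) for x in xs) / len(xs))  # geometric mean
    return [math.log(x / g) for x in xs]

# Example: a three-part composition with one value below detection limit.
sample = [120.0, None, 30.0]
limits = [None, 2.0, None]
print(clr(sample, limits))
```

A useful sanity check on any clr implementation is that the transformed components of each sample sum to zero.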
However, post-hoc tests on linear regression attempts revealed collinearity issues and model bias. Thus, we employed a combination of machine-learning techniques on the untransformed data to overcome these issues and study our dataset.

Random forest (RF) regression. The RF regression algorithm 40 is a supervised machine-learning algorithm that uses ensembles of regression trees built for prediction, each using a random subset of features. The resampled data are organised hierarchically from the root to the leaves of each tree, in order to reduce variance through averaging and to reduce correlation between quantities. Random forest was chosen because it is robust to missing and imbalanced data and can capture complex relations 41 . Furthermore, it is not sensitive to multivariate collinearity and can handle a large number of features (see also Supplementary Notes 1 and 2). To identify the important features that contributed to the predictive performance for methane, we developed and evaluated the RF regression model as follows. We implemented the experiment using the caret R library 42 . The RF regression experiment was repeated 30 times. For each run, we extracted the ranks of the variables using the varImp function and summed them over the runs. We extracted the top 20 features based on the summed ranks across the 30 runs. The reported train R 2 is obtained from the getTrainPerf function of R's caret package, and the test R 2 from the postResample function. The data were split into an 80% train set and a 20% test set. A ten-fold cross-validation (CV) technique repeated 300 times was applied to the train set. We also tried random CV, leave-one-out CV, and three- and five-fold CV, repeated 100, 200 and 400 times; such settings gave less favourable results. We tuned the parameters using the validation set from CV, exploring ntree values of 500 to 2000 and mtry values of 1 to 20.
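The rank-summing step described above (per-run importance ranks summed over the repeated fits, then the top features extracted) can be sketched independently of the model fitting itself. The study used R's caret and varImp; below is a hedged Python sketch of just the aggregation step, with stand-in importance scores rather than real random-forest outputs.

```python
def rank_features(importances):
    """Return each feature's rank (1 = most important) for one run."""
    order = sorted(range(len(importances)), key=lambda i: -importances[i])
    ranks = [0] * len(importances)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def summed_ranks(runs):
    """Sum per-run ranks across repeated model fits (lower = better)."""
    totals = [0] * len(runs[0])
    for importances in runs:
        for i, r in enumerate(rank_features(importances)):
            totals[i] += r
    return totals

def top_features(runs, k):
    """Indices of the k features with the smallest summed rank."""
    totals = summed_ranks(runs)
    return sorted(range(len(totals)), key=lambda i: totals[i])[:k]

# Toy importance scores for 4 features over 3 repeated runs
# (in the study these came from caret's varImp over 30 RF fits).
runs = [
    [0.9, 0.1, 0.5, 0.2],
    [0.8, 0.2, 0.6, 0.1],
    [0.7, 0.3, 0.4, 0.2],
]
print(top_features(runs, 2))   # feature 0 first, then feature 2
```

Summing ranks rather than raw importance scores makes the aggregation insensitive to run-to-run changes in the scale of the scores, which is why the repeated-run ranking is stable.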
The best-tuned parameters were 600 trees in the forest (ntree) and 1 variable at each node (mtry), judged on both train and test performance metrics. Using the sum of the ranks of important features generated by the varImp function over the 30 models, we obtained the final ranking of the 14 most important features. This last step ensured that the same important features were consistently identified across the 30 runs. We discriminated the top feature subgroups by plotting the summed ranking against the average position, using the change in the average rank position as the error bar. According to the natural breaks observed in the plots, we recognised the top features shown in Fig. 4.

t-SNE visualisation of the important feature subgroups. We employed t-SNE (t-distributed Stochastic Neighbour Embedding) plots as an additional visual evaluation tool. We used these plots to cross-check whether the identified important features for methane prediction contribute to clustering in which each cluster reflects a methane concentration class. t-SNE is a multivariate dimension-reduction algorithm 16 . It represents similarity as a probability distribution, such that similar objects are given a higher probability value. Hence, it can reveal hidden structures in data at many different scales. The ability of t-SNE to reveal minor data structures prompted its use, as distance-based methods such as PCA mostly display the variation among the methane-bearing chromitites. Because t-SNE expresses the similarity between objects as proximity in probability rather than distance, it can preserve both the local and the global structure: features of higher magnitude do not overshadow the similarity between samples with smaller ranges.
Furthermore, t-SNE can handle the non-linear relationships that characterise most data derived from natural samples, in contrast to PCA. Additionally, the clustering of the data allows a more intuitive interpretation of the plot than a PCA. The perplexity parameter allows the user to control the number of neighbours. We used the Rtsne R package 43 on unscaled data. We set max_iter to 6000, at which point the t-SNE plot had converged. Due to the stochastic nature of this algorithm, we explored perplexity values from 2 to 19 to identify the plot with the most robust grouping (Supplementary Fig. 1). We repeated the t-SNE plotting 5 times to ensure the repeatability of the plot with the selected perplexity.

mBIC clustering and association measures. Model-based clustering with the Bayesian Information Criterion (mBIC) creates clusters from Gaussian-mixture models that differ in shape, volume and density, using an Expectation-Maximisation (EM) algorithm initialised by hierarchical clustering. It then uses the BIC to evaluate the goodness of the clusters identified by these models. The mclust R package 44 provides the mclustBIC and Mclust functions to create clusters and find the model with the best BIC score. Model-based clustering was chosen in this work because it is not a distance-based clustering algorithm. We tried distance-based techniques such as k-means and hierarchical clustering, but the clusters produced were not meaningful. With mBIC, cluster memberships are based on probability distributions, and we found that a distribution-based approach is more suitable for this dataset. The number of components (groups) was set to 5, matching the number of CH 4 levels shown in Table 1. The best model found was the EEV (ellipsoidal, equal volume and shape) model, with a BIC score of −726.96.
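mclust's model selection boils down to computing a BIC score for each candidate covariance structure and keeping the best one. A minimal Python sketch of that selection step follows, using the "larger is better" BIC convention that mclust reports (BIC = 2 ln L - k ln n). The candidate names mirror mclust's model codes, but the log-likelihoods and parameter counts below are stand-ins, not values from this study.

```python
import math

def bic(log_likelihood, n_params, n_samples):
    """BIC in the 'larger is better' convention used by mclust:
    BIC = 2 ln L - k ln n."""
    return 2.0 * log_likelihood - n_params * math.log(n_samples)

def best_model(candidates, n_samples):
    """candidates: list of (name, log_likelihood, n_params).
    Returns the name of the candidate with the highest BIC."""
    return max(candidates, key=lambda c: bic(c[1], c[2], n_samples))[0]

# Stand-in numbers: three covariance structures fitted to 58 samples.
candidates = [
    ("EII", -420.0, 12),   # spherical, equal volume
    ("VVV", -300.0, 60),   # fully unconstrained
    ("EEV", -310.0, 40),   # ellipsoidal, equal volume and shape
]
print(best_model(candidates, n_samples=58))
```

The penalty term k ln n is what lets a slightly worse-fitting but more parsimonious structure (fewer free covariance parameters) win over the fully unconstrained model.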
The association of these clusters with the CH 4 levels and lithotype was measured using the assocstats function from the vcd R package 45 and is presented in Supplementary Table 10. When using only the top 3 features, mBIC was not able to find 5 clusters and generated 4 clusters instead.

Microprobe analyses. We performed microprobe analyses on PGM concentrate fractions for mineral characterisation and classification (Supplementary Table 12). Microprobe analyses of the PGM were conducted in the Department of Earth and Planetary Sciences, McGill University, Canada, with a JXA JEOL-8900L electron microprobe analyser operated in WDS mode. Operating conditions were an acceleration voltage of 15 kV and a beam current of 20 nA, with a beam diameter of about 5 μm and a total counting time of 20 s. ZAF correction software was used, and natural and synthetic international standards were used for calibration.

Data availability. The dataset and additional explanatory notes are available in the Supplementary files.
2023-03-07T14:57:14.149Z
2023-03-07T00:00:00.000
{ "year": 2023, "sha1": "886053e110657aff9f2418560830573e0b25a02c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "886053e110657aff9f2418560830573e0b25a02c", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
254685790
pes2o/s2orc
v3-fos-license
Real-time Curative Actions for Power Systems via Online Feedback Optimization

Curative or remedial actions are the set of immediate actions intended to bring the power grid back to a safe operating point after a contingency. The effectiveness of these actions is essential to guarantee curative N-1 security. Nowadays, curative actions are derived ahead of time, based on the anticipated future grid state. Due to the shift from steady to volatile energy resources, the grid state will change frequently, and the curative actions would need to be pre-planned increasingly often. Furthermore, with the shift from large bulk production to many small decentralized energy sources, more devices need to be actuated simultaneously to achieve the same outcome. Instead of pre-planning, we propose to calculate these complex curative actions in real time after the occurrence of a contingency, and we show how the method of Online Feedback Optimization (OFO) is well suited for this task. As a preliminary demonstration of these capabilities, we use an OFO controller that, after a fault, reduces the voltage difference over a breaker to enable the operators to reclose it. This test case is inspired by the 2003 Swiss-Italian blackout, which was caused by a relatively minor incident followed by ineffective curative actions. Finally, we identify and discuss some open questions, including closed-loop stability and robustness to model mismatch.

I. INTRODUCTION

The electrical power grid is a critical infrastructure and the backbone of modern society. Its uninterrupted operation is crucial, and it is essential that security can be guaranteed at all times, even when contingencies occur, i.e., when a transformer, power plant, or power line is disconnected. Therefore, the grid is operated following the N − 1 criterion, meaning that the grid must be in a safe state even if any single element fails. This is also referred to as preventive N − 1 security.
The ENTSO-E grid code used in Europe allows temporary overloads in case of a contingency if curative or remedial actions are defined upfront to bring the system back to a safe operating point [1, Article 32(2)]. The North American Electric Reliability Corporation (NERC) allows for a Remedial Action Scheme that automatically takes corrective actions [2]. Permitting such temporary violations relaxes the N − 1 criterion to curative N − 1 security and enlarges the range of allowed grid configurations, which enables more economical grid operation [3], [4]. The idea of using curative actions dates back at least to the 1980s [4], when security-constrained Optimal Power Flow (OPF) with curative actions was proposed. Curative N − 1 security is an active field of research pursued by, e.g., German transmission grid operators and universities, as it helps to utilize the grid to a larger extent [5, Subsection 5.2]. Available curative actions include changes of active power generation set-points, operating points of high-voltage direct current systems, voltage set-points or reactive power injections, and tap-changer positions of phase-shifting transformers. Lately, the shift to decentralized generation also enables distribution grids to provide curative actions [6]. An overview of curative actions is presented in [7, Table III]. Currently, curative actions are decided manually by operators based on long-term experience, or based on a library of case studies created by solving OPFs for a set of contingencies so that the actions are available in case those contingencies occur.

(The authors are with the Automatic Control Laboratory, ETH Zürich, Physikstrasse 3, 8092 Zürich, Switzerland. This research has been supported by ETH Zürich funds. Email: {ortmannl,bsaverio,doerfler}@ethz.ch)
As the share of production from volatile and unpredictable renewable energy sources increases, the grid is expected to operate at different operating points throughout the day, which requires operators to update their curative action plans more often than today. Moreover, determining the best emergency response is already complex nowadays, but it will become even more complex in the future because large power plants are being replaced by decentralized energy resources, and therefore the number of actuators needed for effective curative actions will increase. Overall, operators will need to determine curative actions more often, and those actions will be more complicated. Last but not least, when a contingency occurs, the operating personnel needs to implement the curative actions quickly while guaranteeing that those actions will not lead to new problems elsewhere. In contrast to the current practice, we propose to employ a closed-loop control scheme to derive curative actions in real time after the occurrence of a contingency. This has the following advantages: 1) the current operating point of the grid is taken into account; 2) the feedback nature of closed-loop control provides robustness to model mismatch; 3) due to the low computational complexity, the curative actions are promptly implemented to quickly drive the grid to a feasible operating point. The control strategy that we propose is based on OFO, a methodology that allows converting iterative optimization algorithms into real-time, robust feedback controllers [8]-[12]. These controllers can be used to drive a system to the optimum of a constrained optimization problem which, in the application that we are considering, defines the safe operating region of the grid. Such controllers do not need a full model of the system and guarantee constraint satisfaction even in the presence of model mismatch.
There exist several different versions, e.g., distributed, centralized, model-based, and model-free controllers [13], [14]. They are well suited for several real-time optimization problems in power systems [15, Section IV], and they have also been experimentally validated [16], [17]. To show how an OFO controller could help control the grid during emergency power system operation, we take inspiration from the 2003 Swiss-Italian blackout, in which a breaker could not be reclosed because of the excessive voltage angle difference across it. Using the IEEE 39-bus model, we set up a grid in which opening a breaker leads to a high angle difference, which we then reduce using an OFO controller. The structure of the paper is as follows. In Section II we describe the Swiss-Italian blackout, and in Section III we present the simulation setup we use to reconstruct the underlying problem of this blackout. Afterward, in Section IV, we design an OFO controller that determines effective curative actions in real time. We present the results of our simulations in Section V and conclude the paper in Section VI.

Reclosing this breaker would have resulted in high transient stress for generators located in that region, and therefore a local protection system prevented the operators from reclosing the line as long as the angle difference was larger than 30°. Meanwhile, because of the open line, the power flow increased on other lines, leading to one of them operating at 110% of its capacity. This overload still satisfied the curative N − 1 criterion, assuming that it could be promptly mitigated. The Swiss operators deployed several control actions to enable reclosing the breaker and to lower the overloading, but did not succeed. The line overheated, which resulted in excessive sag of the conductor. At 3:25, after 24 minutes, a tree flashover occurred and the line was automatically disconnected.
The remaining power lines immediately overloaded and were disconnected, leading to the largest Italian blackout in history [18]. The estimated cost of this 18-hour blackout is 1.2 billion Euros [19].

III. SIMULATION SETUP

We reproduce the core phenomena of the Swiss-Italian blackout using the publicly available IEEE 39-bus test case. It includes 10 generators, 34 lines, and 39 buses, see Figure 1. We trip the power line connecting buses 23 and 24, which cannot be reclosed unless the voltage difference between the two buses is made sufficiently small. The numerical experiment is done via the dynamic power system simulator DynPSSimPy [20], which models secondary frequency control through Automatic Generation Control and the dynamics of the synchronous machines, including the excitation system, power system stabilizer, and governor, which implements the primary frequency control. Figure 2 shows the interconnection of the different elements. Here we give a short overview of the different components of the model. The synchronous generators are modeled by a sixth-order system whose states are the deviation ∆ω of the rotor speed from the nominal frequency, the rotor angle δ, and the internal voltages; the associated time constants (e.g., T_q0) are positive, real-valued parameters whose definitions can be found in [21, Table 2.1]. The electrical power output of a synchronous machine is computed from the stator currents I_d and I_q, where R is the armature winding resistance, v_d = Re(u), and v_q = Im(u), with u the bus voltage; the current injected by the synchronous generator follows from the same quantities (the explicit expressions are given in [21]). The governors are modeled as in Figure 3. They are driven by the frequency deviation ∆ω and the steady-state active power p_m0 fed into the network by the corresponding synchronous machine. The parameters of the governors are explained in Table I. The excitation systems of the synchronous machines are modeled according to the block diagram in Figure 4.
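The machine-plus-governor loop described above can be illustrated with a heavily simplified sketch: a classical second-order swing equation with a proportional (droop) governor against an infinite bus, rather than the sixth-order model used in the simulator. All parameter values below are illustrative and are not taken from DynPSSimPy or [21].

```python
import math

# Hedged, heavily simplified stand-in for the machine model: a classical
# swing equation with droop control, single machine vs. an infinite bus.
OMEGA_S = 2 * math.pi * 50     # rad/s, nominal grid frequency
M, D = 10.0, 2.0               # inertia and damping (p.u., illustrative)
DROOP = 20.0                   # governor gain 1/R (p.u., illustrative)
P_M0 = 0.8                     # steady-state mechanical power (p.u.)
K = 1.0                        # E*V/X, electrical coupling (p.u.)

def simulate(t_end=20.0, dt=1e-3):
    delta, domega = 0.0, 0.0   # start away from the equilibrium
    for _ in range(int(t_end / dt)):
        p_e = K * math.sin(delta)          # electrical power output
        p_m = P_M0 - DROOP * domega        # governor (droop) action
        ddelta = OMEGA_S * domega          # rotor angle dynamics
        ddomega = (p_m - p_e - D * domega) / M
        delta += dt * ddelta
        domega += dt * ddomega
    return delta, domega

delta, domega = simulate()
print(delta, domega)   # angle settles near asin(0.8) ~ 0.927 rad, domega -> 0
```

At equilibrium the speed deviation must vanish, so the governor output returns to P_M0 and the rotor angle settles where K sin(delta) = P_M0, which is the behavior the test below checks.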
The input v_pss comes from the power system stabilizer, ∆v is the deviation of the bus voltage magnitude v from the voltage set-point v_OFO, and E_f0 is the steady-state field voltage. The parameters of the excitation system are explained in Table II. The power system stabilizer can be seen in Figure 5 and is driven by the frequency deviation ∆ω. The parameters of the power system stabilizer are explained in Table III. The Automatic Generation Control can be seen in Figure 6. It balances the active power generation and consumption in the power system and is driven by the average frequency deviation over all g generators, (1/g) Σ_{i∈[1,g]} ∆ω_i. The vector β contains the participation factor of each generator, and the sum of its elements is 1. The parameters of the Automatic Generation Control are explained in Table IV. For more information on the model and the model parameters, the reader is referred to [21].

IV. CURATIVE ACTIONS VIA ONLINE FEEDBACK OPTIMIZATION

The curative actions available in the IEEE 39-bus case are changes in the active power generation set-points and voltage set-points of the generators. Hence, we consider the controllable active power set-points p_OFO and voltage set-points v_OFO of the generators as our input u. We measure the bus voltage magnitudes v, all power flows, and the phase difference ∆θ_23−24 between buses 23 and 24, and group them in our output vector y. The block diagram of our controller can be seen in Figure 7. We encode the goal of reclosing the breaker in an optimization problem (1) that minimizes the voltage difference subject to actuator limits and grid constraints. Note that this optimization problem is specific to this emergency situation, and more work is needed to identify optimization problems for other and more general situations. As prescribed by the OFO approach, we then select an optimization algorithm. We choose a projected gradient descent algorithm; an OFO controller derived from such an algorithm was developed in [22].
The control update of the Online Feedback Optimization controller is

u_{k+1} = u_k + α σ_α(u_k, y_m),    (2)

with σ_α(u, y_m) defined as the minimizer of a convex quadratic program, labeled (3), whose explicit form is given in [22]. In the controller, α is a gradient step-length (that we set to α = 3) and σ_α(u, y_m) is the projected gradient direction, which is computed via this simple convex quadratic program. Note that such convex quadratic programs can be efficiently solved for very large numbers of variables and constraints on standard computation hardware. In this auxiliary optimization program, Φ(y_m) is the cost function of the optimization problem (1), where the output y is replaced by the measurement y_m. The constants A, b, C, d describe a local linearization of the potentially nonlinear constraints of the optimization problem (1), see [22]. The resulting controller determines curative actions based on measurements and the sensitivity ∇_u h(u, y_m). Overall, we solve the highly nonlinear and non-convex optimization problem (1) by repeatedly solving the linear and convex problem (3) and utilizing feedback measurements. For an in-depth discussion of the convergence of this strategy, see [8]. ∇_u h(u, y_m) is the sensitivity matrix from inputs (set-points) to outputs (measurements). Such sensitivities are similar to, e.g., power transfer distribution factors, and we derive them from the steady-state power flow equations, see [23] for details. We recalculate this sensitivity at every time step, i.e., every 5 seconds. Note that this sensitivity is calculated many times while solving Optimal Power Flow problems; when solving security-constrained Optimal Power Flow problems, it is calculated for all considered contingencies. Knowing and calculating ∇_u h(u, y_m), as needed for our controller, is therefore a reasonable assumption. Note that it can also be estimated and adapted in real time from data [14]. Last but not least, the controller is also robust with respect to an inaccurate sensitivity, on which we provide more details in Section V.

V.
RESULTS

The upper panel in Figure 8 shows the absolute voltage difference between buses 23 and 24. A small absolute voltage difference implies that both the voltage angle difference and the voltage magnitude difference between buses 23 and 24 are small, which allows the breaker to be reclosed. The middle panel shows the generators' voltage set-points and the lower panel shows the generators' active power set-points. After 10 seconds, the line connecting the two buses trips and the absolute voltage difference increases. As described in Section II, local protection might prohibit reclosing the line. Therefore, after 30 seconds, the OFO controller is activated to reduce the absolute voltage difference over the breaker. As can be seen in Figure 8, the controller takes effective steps towards minimizing the absolute voltage difference, and within just a few iterations the breaker could be closed again. While the controller is minimizing the absolute voltage difference, the constraints in its update law (3) also enforce that the control inputs u stay within the actuator capabilities, as can be seen in the lower two panels. Likewise, the constraints on y, i.e., on the bus voltage magnitudes and current flows, can also be enforced. Overall, the proposed controller quickly reduces the voltage difference. The resulting curative actions include iterative adjustments of the active power and voltage set-points of all generators, showing how complex coordinated interventions may be needed to effectively tackle a contingency. We also analyze the robustness of our controller against model mismatch. The only model information used in an OFO controller is the sensitivity ∇_u h(u, y_m). The sensitivity might be wrong if it was derived in a different operating state or based on a model with wrong parameters or an incorrect topology. In practice, the sensitivity will always have some model mismatch.
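The closed-loop behavior, including the robustness to an inaccurate sensitivity, can be illustrated with a scalar toy version of the controller: an integrator driven by a gradient step taken through an (estimated) sensitivity, with a simple clipping of the input to its limits standing in for the QP-based projection of update law (2). The plant map and all numbers below are invented for the example; only the structure mirrors the controller described in the text.

```python
def plant(u):
    """Steady-state input-output map y = h(u), unknown to the controller.
    The offset w stands in for unmodelled load/topology effects."""
    H_TRUE, w = 2.0, 1.0
    return H_TRUE * u + w

def closed_loop(sens_estimate, alpha=0.05, iters=400, u_lim=5.0):
    """Scalar OFO loop: measure y, take a gradient step on Phi(y) = y**2
    through the (possibly wrong) sensitivity estimate, and clip the
    input to its limits (a stand-in for the QP-based projection)."""
    u = 0.0
    for _ in range(iters):
        y = plant(u)                      # feedback: measure, don't model
        grad = sens_estimate * 2.0 * y    # chain rule: dPhi/du
        u = min(max(u - alpha * grad, -u_lim), u_lim)
    return u, plant(u)

# Exact sensitivity and two mismatched estimates: the loop drives y to 0
# in all three cases, since feedback compensates for the unknown offset.
for est in (2.0, 0.5, 8.0):
    u, y = closed_loop(est)
    print(est, round(u, 6), round(y, 6))
```

In this scalar setting the closed loop contracts as long as the estimated and true sensitivities agree in sign and the step-length is small enough, which is a toy version of the robustness observed in the simulations.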
For our robustness analysis, we calculate the sensitivity based on grid topologies that differ from the topology of our simulation model. More precisely, we derive the sensitivity for a topology in which we erased a line from the grid, and then use these wrong sensitivities in our controller. In many power grids, the positions of switches and breakers are observed, and therefore a model mismatch due to a wrong topology is unlikely to occur. Nevertheless, we choose this source of model mismatch because we consider it to be the most extreme. The results of our robustness analysis can be seen in Figure 9 and show that even with severe model mismatch, the controller is able to reduce the absolute voltage difference and does not become unstable. However, some levels of model mismatch cause very slow performance, and future work should analyze how well the sensitivity needs to be known to guarantee good performance. Generally, the robustness against model mismatch is due to the feedback nature of the approach and the fact that our control law (2) is an integrator driven by a gradient step, and integral controllers are known to be robust. This robustness was also observed in experiments [16] and analyzed mathematically [24]. Another source of uncertainty that the controller needs to be robust against is that the commanded inputs u are not implemented exactly as requested. For example, the synchronous generators do not follow the commanded input p_OFO but the value p_m, because the set-points of the governor and the Automatic Generation Control are added on top of u_OFO, compare Figure 2. The lower panel in Figure 8 shows the active power generation set-points p_m, and one can see that they change continuously and not just every 5 seconds when our controller updates its set-points. Nevertheless, the controller converges because it measures the output y and therefore indirectly observes the effect of the governor and the Automatic Generation Control.

VI.
CONCLUSION

These preliminary numerical results show that OFO controllers have the potential to derive curative actions in real time after the occurrence of a contingency and to automate some curative actions in emergency power system operation. Such controllers could either be implemented as a decision support tool for the operator or directly as a closed-loop controller. In our opinion, determining curative actions in real time is in agreement with the European and North American grid codes, and it definitely reduces the workload in the control room. Further research is needed to investigate the stability of the interconnection of the controller with the power system dynamics, because timescale separation results like those in [25] (which assume that the grid dynamics are sufficiently fast compared to the rate at which set-points are updated by the controller) turn out to be too conservative for this time-critical application. Furthermore, because we expect the system to operate far from nominal operating points during contingencies, robustness to model mismatch needs to be certified for this application more extensively (possibly building on numerical tests like those in [24]). Last but not least, a broader range of emergency situations needs to be analyzed.

Fig. 9. Behavior of the closed-loop system for several incorrect sensitivities. The line is tripped at 10 seconds, the controller is activated at 40 seconds, and the line is reclosed at 120 seconds.
2022-12-16T06:41:52.781Z
2022-12-15T00:00:00.000
{ "year": 2022, "sha1": "a63fd8b6d9a338dc973f4120fc827e6f9435e57a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "fb2715970106cd5c2203df9c098322849fe6ede0", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
118472600
pes2o/s2orc
v3-fos-license
OGLE-2005-BLG-153: Microlensing Discovery and Characterization of A Very Low Mass Binary

The mass function and statistics of binaries provide important diagnostics of the star formation process. Despite this importance, the mass function at low masses remains poorly known due to observational difficulties caused by the faintness of the objects. Here we report the microlensing discovery and characterization of a binary lens composed of very low-mass stars just above the hydrogen-burning limit. From the combined measurements of the Einstein radius and microlens parallax, we measure the masses of the binary components to be $0.10\pm 0.01\ M_\odot$ and $0.09\pm 0.01\ M_\odot$. This discovery demonstrates that microlensing will provide a method to measure the mass function of all Galactic populations of very low mass binaries that is independent of the biases caused by the luminosity of the population.

Microlensing occurs when a foreground astronomical object (the lens) is closely aligned with a background star (the source) and the light from the source star is deflected by the gravity of the lens (Einstein 1936). The phenomenon causes splitting and distortion of the source star image. For source stars located in the Milky Way Galaxy, the separation between the split images is of the order of a milliarcsecond, and thus the individual images cannot be directly observed. However, the phenomenon can be observed photometrically through the brightness change of the source star caused by the change of the relative lens-source separation (Paczyński 1986). Since the first discoveries in 1993 (Alcock et al. 1993; Udalski 1993), there have been numerous detections of microlensing events toward the Large and Small Magellanic Clouds, M31, and, mostly, the Galactic bulge fields. Currently, microlensing events are being detected at a rate of nearly 1000 events per year (Udalski 2008; Bond et al. 2001).
The properties of multiple systems, such as binary frequency and mass function, provide important constraints for star formation theories, enabling a concrete, qualitative picture of the birth and evolution of stars. At very low masses, down to and below the hydrogen-burning minimum mass, however, our understanding of formation processes is not clear due to the difficulties of observing these objects. Over the last decade, there have been several searches for very low mass binaries [see reviews by Basri (2000); Oppenheimer et al. (2000); Kirkpatrick (2005); Burgasser et al. (2007)]. Despite these efforts, the number of known very low mass binaries 2 is not large enough to strongly constrain their formation processes. Microlensing occurs regardless of the brightness of the lens objects, and thus it is potentially an effective method to investigate the mass function of low-mass binaries. For lensing events caused by single-mass objects, it is difficult to measure the lens mass because the Einstein time scale t_E, which is the only observable that provides information about the lens for general lensing events, results from the combination of the mass of and distance to the lens and the transverse speed between the lens and the source. This degeneracy can be partially lifted by measuring either the Einstein radius or the lens parallax, and can be completely broken by measuring both. Einstein radii are measured from the deviation in lensing light curves caused by the finite-source effect (Gould 1994). Most microlensing events produced by binaries are identified from the anomalies involved with caustic approaches or crossings, during which the finite-source effect is important (Nemiroff & Wickramasinghe 1994; Witt & Mao 1994). Therefore, Einstein radii can be routinely measured for the majority of binary-lens events. The microlens parallax is defined by π_E = π_rel/θ_E, where π_rel = AU(D_L^{−1} − D_S^{−1}) is the lens-source relative parallax and D_L and D_S are the distances to the lens and source, respectively.
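Once both the angular Einstein radius θ_E and the microlens parallax π_E are measured, the total lens mass follows from the standard relation M = θ_E/(κ π_E), with κ = 4G/(c² AU) ≈ 8.14 mas per solar mass. A minimal sketch of that final step follows; the input values are illustrative, not the measurements for OGLE-2005-BLG-153.

```python
KAPPA = 8.14   # mas per solar mass: kappa = 4G / (c^2 * AU)

def lens_mass(theta_E_mas, pi_E):
    """Total lens mass (solar masses) from the angular Einstein radius
    (in mas) and the microlens parallax: M = theta_E / (kappa * pi_E)."""
    return theta_E_mas / (KAPPA * pi_E)

def relative_parallax(theta_E_mas, pi_E):
    """pi_rel in mas, inverting pi_E = pi_rel / theta_E."""
    return theta_E_mas * pi_E

# Illustrative numbers, not the measured values for this event:
print(lens_mass(theta_E_mas=0.40, pi_E=0.26))   # ~0.19 solar masses
```

This is why measuring both quantities completely breaks the mass-distance-velocity degeneracy: θ_E and π_E together pin down both M and, through π_rel, the lens distance for a given source distance.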
In general, parallaxes can be measured for events that last long enough that the Earth's motion can no longer be approximated as rectilinear during the event (Gould 1992). The chance to measure the lens parallax is higher for binary-lens events than for single-lens events because the average mass of binaries is larger and thus time scales tend to be longer. In addition, the well-resolved caustic-crossing part of lensing light curves provides strong constraints on the lensing parameters and thus helps to pin down enough anchor points on the light curve to extract otherwise too-subtle parallax effects (An & Gould 2001). The number of binary-lens events with well-resolved anomalies is increasing with the advance of observational strategies such as the alert system and follow-up observations. The increase of the monitoring cadence of existing and planned survey experiments will make the detection rate even higher. Although binary microlensing is biased toward separations similar to the Einstein radius, this bias is easily quantifiable for next-generation experiments that have continuous "blind" monitoring. Therefore, microlensing will be able to provide an important method to discover very low mass binaries and to investigate their mass function. In this paper, we present the microlensing discovery and characterization of a very low mass binary. We use this discovery to demonstrate that microlensing will provide a method to measure the mass function of very low-mass binaries that is free from the biases and difficulties of traditional methods.

4. MODELING

Figure 1 shows the light curve of the event. It is characterized by the sharp rise and fall occurring around the heliocentric Julian dates (HJD) of 2453556 and 2453560. These features are caused by the crossings of the source star across a caustic, which represents a set of source positions at which the lensing magnification of a point source becomes infinite.
Therefore, the existence of such a feature immediately reveals that the lens is composed of two masses (Mao & Paczyński 1991). Characterization of binary lenses requires modeling of lensing light curves. We test 3 different models. In the first model, we test a static binary model (standard model). In this model, the light curve is characterized by 7 parameters. The first three parameters are those needed to describe the light curves of single-lens events: the time required for the source to transit the Einstein radius, t_E (Einstein time scale), the time of the closest lens-source approach, t_0, and the lens-source separation in units of the Einstein radius at the time t_0, u_0 (impact parameter). Three more parameters are needed to describe the deviation caused by the lens binarity: the projected binary separation in units of the Einstein radius, s, the mass ratio between the binary components, q, and the angle of the source trajectory with respect to the binary axis, α (source trajectory angle). Finally, an additional parameter, the ratio of the source radius to the Einstein radius, ρ_⋆ = θ_⋆/θ_E (normalized source radius), is needed to incorporate the deviation of the light curve caused by the finite-source effect. In the second model, we consider the parallax effect by including two additional parallax parameters, π_E,N and π_E,E, which are the two components of the microlensing parallax vector π_E projected on the sky in the north and east celestial directions. In the last model, we additionally check the possibility of an effect on the lensing light curve caused by the orbital motion of the lens. The orbital motion affects the lensing magnifications in two different ways. First, it causes the binary axis to rotate or, equivalently, makes the source trajectory angle change in time. Second, it causes the separation between the lens components to change in time.
The latter effect causes alteration of the caustic shape in the course of an event. To first order, the orbital effect is parameterized by α(t) = α_0 + ω(t − t_0) and s(t) = s_0 + ṡ(t − t_0), where the orbital parameters ω and ṡ represent the rates of change of the source trajectory angle and the projected binary separation, respectively. Considering the orbital effect is important not simply to constrain the orbital motion of the lens system but also to precisely determine the lens mass. This is because both the motion of the observer (parallax effect) and that of the lens (orbital effect) have the similar effect of causing deviations of the source trajectory from a straight line. Then, if the orbital motion of the lens is not considered despite its non-negligible effect, the deviation of the lensing light curve caused by the orbital effect may be wrongly attributed to the parallax effect. This would cause an incorrect determination of the lens parallax and of the resulting lens mass. When either the parallax or the orbital motion is considered, a pair of source trajectories with impact parameters u_0 > 0 and u_0 < 0 result in slightly different light curves due to the breakdown of the mirror-image symmetry of the source trajectory with respect to the binary axis. We therefore check both models with u_0 > 0 and u_0 < 0 whenever the parallax or orbital effect is considered. As a result, the total number of tested models is 5. To find the best-fit solution of the lensing parameters, we use a combination of grid and downhill approaches. It is difficult to find solutions from pure brute-force searches because of the sheer size of the parameter space. It is also difficult to search for solutions with a simple downhill approach because the χ² surface is very complex; even if a solution that apparently describes an observed light curve is found, it is hard to be sure that all possible χ² minima have been searched.
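A grid-plus-downhill hybrid of the kind used to explore such a χ² surface can be sketched generically. The χ² function below is a toy quadratic stand-in, not a binary-lens model, and all parameter names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Generic sketch of a hybrid grid + downhill chi^2 search: grid over the
# parameters that shape the light curve in a complicated way (s, q, alpha),
# and minimize over the remaining parameters at each grid point.
def chi2(grid_params, free_params):
    # Toy surface with a known minimum, for illustration only.
    s, q, alpha = grid_params
    t0, u0, tE = free_params
    return ((s - 1.0) ** 2 + (q - 0.9) ** 2 + (alpha - 1.5) ** 2
            + (t0 - 5.0) ** 2 + (u0 - 0.1) ** 2 + (tE - 40.0) ** 2)

best = (np.inf, None, None)
for s in np.linspace(0.5, 1.5, 5):
    for q in np.linspace(0.5, 1.0, 5):
        for alpha in np.linspace(0.0, np.pi, 5):
            # Downhill minimization over the "free" parameters at this grid point.
            res = minimize(lambda p: chi2((s, q, alpha), p),
                           x0=[0.0, 0.5, 20.0], method="Nelder-Mead")
            if res.fun < best[0]:
                best = (res.fun, (s, q, alpha), res.x)

print("best chi2:", round(best[0], 4), "at grid point", best[1])
```

In a real analysis the downhill stage would be a Markov Chain Monte Carlo over the non-grid parameters, as the text describes, but the grid-then-refine structure is the same.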
To avoid these difficulties, we use a hybrid approach in which a grid search is conducted over the space of a subset of parameters (grid parameters) and the remaining parameters are searched by a downhill approach to yield the minimum χ² at each grid point. Then, the best-fit solution is found by comparing the χ² values of the individual grid points. We set s, q, and α as grid parameters because they are related to the features of light curves in a complicated pattern, while the other parameters are more directly related to identifiable light curve features. For the downhill χ² minimization, we use a Markov Chain Monte Carlo method.

RESULTS

In Table 1, we present the results of modeling along with the best-fit parameters for the individual models. It is found that the effects of parallax and orbital motion are needed to precisely describe the light curve. We find that the model with the parallax effect improves the fit by ∆χ² = 571. The fit further improves by ∆χ² = 164 with the addition of the orbital effect. We note that the values of the parallax parameters from the "parallax+orbit" model are different from those determined from the "parallax" model. This demonstrates that consideration of the orbital effect is important for the precise measurement of the lens parallax.

FIG. 2.-Geometry of the binary lens system responsible for the lensing event OGLE-2005-BLG-153. In the lower panel, the two filled dots represent the locations of the binary lens components. The dashed circle is the Einstein ring corresponding to the total mass of the binary. The ring is centered at the center of mass of the binary (marked by '+'). The line with an arrow represents the source trajectory. We note that the trajectory is curved due to the combination of the effects of parallax and lens orbital motion. The closed figure composed of concave curves represents the positions of the caustic formed by the binary lens. All lengths are normalized by the Einstein radius.
The temperature scale represents the magnification, where brighter tones imply higher magnifications. The upper panel shows a zoom of the boxed region. We note that the caustic shape slightly changes due to the orbital motion of the lens. We present the caustics at the two different moments of the caustic entrance and exit of the source star.

In Figure 1, we present the model light curve on top of the observed data points. Figure 2 shows the geometry of the lens system corresponding to the best-fit solution, i.e. the "parallax+orbit" model with u_0 > 0. In the figure, the filled dots represent the locations of the binary components, the dashed circle is the Einstein ring corresponding to the total mass of the lens, the closed figure composed of concave curves is the caustic formed by the lens, and the curve with an arrow represents the source trajectory. The upper panel shows an enlargement of the region where the source trajectory crosses the caustic. The shape of the caustic changes in time due to the orbital motion of the lens, and thus we present the caustics at two different moments, the caustic entrance and exit of the source star. Among the two quantities needed for the determination of the lens mass, the microlens parallax is obtained directly from the parallax parameters determined from modeling, π_E = (π_E,N² + π_E,E²)^{1/2}. On the other hand, the Einstein radius is not directly obtained from modeling. Instead, it is inferred from the normalized source radius ρ_⋆, which is determined from modeling, combined with information about the angular source radius θ_⋆. The angular source radius is determined from the de-reddened color of the source star, measured using the centroid of clump giant stars in the color-magnitude diagram as a reference position, under the assumption that the source and the clump centroid experience the same amount of extinction (Yoo et al. 2004).
Figure 3 shows the instrumental color-magnitude diagram constructed from CTIO V- and I-band images and the locations of the source star and the centroid of clump giants. By measuring the offsets in color and magnitude between the source and the centroid of clump giants, combined with the known color and absolute magnitude of the clump centroid, [(V − I), M_I]_c = (1.04, −0.25), we estimate that the de-reddened magnitude and color of the source star are I_0 = 13.16 and (V − I)_0 = 1.09, respectively, implying that the source is a clump giant with an angular radius of θ_⋆ = 11.72 ± 1.01 µas. Here we adopt a Galactocentric distance of 8 kpc; the bar at the location of the field is offset by 0.4 kpc toward the Sun, and thus the distance to the clump centroid is 7.6 kpc based on the Galactic model of Han & Gould (2003). Then, with the measured normalized source radius of ρ_⋆ = 0.018 ± 0.001, the Einstein radius is estimated as θ_E = θ_⋆/ρ_⋆. Together with the Einstein time scale, the relative lens-source proper motion is obtained as µ = θ_E/t_E = 5.38 ± 0.47 mas yr⁻¹. With the measured Einstein radius and lens parallax, the mass of the lens system is uniquely determined by M = θ_E/(κπ_E), where κ = 4G/(c²AU). With the known mass ratio between the binary components, the masses of the individual binary components are determined, respectively, as M_1 = 0.10 ± 0.01 M_⊙ and M_2 = 0.09 ± 0.01 M_⊙. This implies that both lens components are very low mass stars with masses just above the hydrogen-burning limit of 0.08 M_⊙. The distance to the lens is determined as D_L = AU/(π_E θ_E + π_S), where π_S = AU/D_S is the parallax of the source star. From this distance to the lens combined with the Einstein radius, it is found that the two low-mass binary components are separated by a projected separation of r_⊥ = sD_Lθ_E. It is also found that the lens velocity in the frame of the local standard of rest is v = (v_⊥, v_∥) = (−20.9 ± 31.9, 15.1 ± 31.9) km s⁻¹, where v_⊥ and v_∥ are the velocity components normal to and along the Galactic plane, respectively.
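The chain of numbers in this paragraph can be checked directly. The inputs below are the values quoted in the text (θ_⋆, ρ_⋆ and the component masses); the mass ratio is inferred from the quoted masses.

```python
# Numerical check of the measurement chain quoted in the text.
theta_star_uas = 11.72    # angular source radius (micro-arcsec), measured
rho_star = 0.018          # normalized source radius, from modeling

# Einstein radius: theta_E = theta_star / rho_star (converted to mas)
theta_E_mas = theta_star_uas / rho_star / 1000.0
print(f"theta_E ~ {theta_E_mas:.2f} mas")

# Splitting the quoted total mass with the mass ratio q = M2/M1
M_total = 0.10 + 0.09     # quoted component masses, in solar masses
q = 0.09 / 0.10
M1 = M_total / (1.0 + q)
M2 = q * M1
print(f"M1 = {M1:.2f} M_sun, M2 = {M2:.2f} M_sun")
```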
We note that the errors in v are dominated by the unknown proper motion of the source, which is assumed to be 0 ± 100 km s⁻¹ in the Galactic frame. The velocity and the distance to the lens imply that the lens is in the Galactic disk. In addition to the parallax effect, the relative lens-source motion can, in principle, also be affected by the orbital motion of the source star if it is a binary (Smith, Mao, & Paczyński 2003). We check the possibility that this so-called "xallarap" (reverse of "parallax") effect influences the parallax determination. For this, we conduct additional modeling including the xallarap effect. For the description of the xallarap effect, 3 additional parameters, the phase angle, inclination, and orbital period, are needed under the assumption that the source moves in a circular orbit. From this analysis, we find that the xallarap effect does not provide a better model than the parallax model. In addition, the best fit occurs for an orbital period of ∼1 yr, which corresponds to the orbital period of the Earth around the Sun. Furthermore, the best-fit values of the inclination and the phase angle are similar to the ecliptic longitude and latitude of the source star. All these facts imply that the xallarap interpretation of the light-curve deviation is less likely and support the parallax interpretation (Poindexter et al. 2005; Dong et al. 2009). From the orbital parameters ω and ṡ determined from modeling, along with the assumption of a circular orbit, one can obtain the usual orbital parameters, the semi-major axis, a, orbital period, P, and inclination, i, of the orbit of the binary lens, from the relations a = r_⊥/x and P = 2π(a³/Gm)^{1/2}. Here B = r³_⊥ω²/(Gm), x = sin φ, and φ is the angle between the vector connecting the binary components and the line of sight to the lens, such that the projected binary separation is r_⊥ = a sin φ. The value of x is obtained by solving the equation x³ − Bx² − x + (A² + 1)B = 0, where A = (ṡ/s)/ω (Dong et al. 2009).
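Solving this cubic for the physical root x = sin φ ∈ (0, 1] is a one-liner with a polynomial root finder. The values of A and B below are illustrative only, since the fitted values are not quoted in the text; note that more than one root can fall in the allowed range, corresponding to degenerate orbital solutions.

```python
import numpy as np

def sin_phi_roots(A, B):
    """Real roots of x^3 - B x^2 - x + (A^2 + 1) B = 0 lying in (0, 1].

    x = sin(phi) in the orbital deprojection of Dong et al. (2009);
    multiple roots in range correspond to degenerate orbit solutions.
    """
    roots = np.roots([1.0, -B, -1.0, (A * A + 1.0) * B])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return sorted(x for x in real if 0.0 < x <= 1.0)

# Illustrative (not fitted) values of A = (sdot/s)/omega and B:
print(sin_phi_roots(0.1, 0.2))
```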
We find that the semi-major axis is a = 1.46 ± 0.08 AU and the period is P = 4.05 ± 0.19 yr. The inclination of the orbital plane is i = 88.3° ± 1.1°, implying that the orbit is very close to edge-on. We can also constrain the surface brightness profile of the source star by analyzing the caustic-crossing parts of the light curve. We model the source brightness profile as S_λ(θ) = (F_λ/πθ_⋆²)[1 − Γ_λ(1 − (3/2)cos θ)], where Γ_λ is the linear limb-darkening coefficient, θ is the angle between the normal to the stellar surface and the line of sight toward the source star, and F_λ is the source flux. We measure the coefficient in the I band to be Γ_I = 0.501 ± 0.014. The measured coefficient is consistent with the theoretical value for clump giants (Claret 2000).

DISCUSSION AND CONCLUSION

We analyzed the light curve of the binary-lens microlensing event OGLE-2005-BLG-153, which exhibits a strong caustic-crossing structure. By measuring both the Einstein radius and the lens parallax, we could uniquely measure the masses of the individual lens components. The measured masses were 0.10 ± 0.01 M_⊙ and 0.09 ± 0.01 M_⊙, respectively, and thus the binary is composed of very low-mass stars just above the hydrogen-burning limit. Although the event OGLE-2005-BLG-153 is one of few cases with well-measured lens masses among the 5000 microlensing events discovered to date, the event characteristics that enabled this mass measurement are likely to become common as next-generation microlensing experiments come on line. Because next-generation experiments will provide intense coverage from sites on several continents, most caustic-crossing binaries will yield masses. Moreover, because next-generation cadences will be independent of human intervention, rigorous characterization of the selection function will be straightforward.
Finally, for reasonable extrapolations of the mass function of stars close to and below the hydrogen-burning limit, we can anticipate that an important fraction of the roughly one thousand events per year expected from next-generation surveys will be due to low-mass objects, including brown dwarfs (Gould 2009). Hence, the mass function, at least of objects within binaries, will be measurable for all Galactic populations of low-mass stellar and substellar objects in the near future, independent of biases caused by the luminosity of the population.
An Empirical Analysis of the Impacts of Forestry Ecological Projects on Economic Development in Mountainous Areas

The relationship between environmental protection and regional economic development is a central subject of environmental policy debate. This issue is particularly prominent and important in mountainous areas. Forestry ecological projects play an important role in promoting environmental restoration in mountain areas, but their contribution to the mountain economy remains poorly quantified. Taking Xiangxi Tujia and Miao Autonomous Prefecture in the Wuling Mountain Area as an example, this study estimates a fixed-effects model (LSDV) and finds that the overall effect of forestry ecological construction on the regional economy is positive: a 1% increase in afforestation area raises Xiangxi's GNP by 3.7%, primary industry output by 2.4%, secondary industry output by 4.2% and tertiary industry output by 4.9%. The species-level afforestation model further shows that economic forests contribute more to the regional economy than timber forests or public-welfare forests.
INTRODUCTION

Since the 21st century, the relationship between environmental protection and regional economic development has been a central subject of environmental policy debate (Hunter and Toney, 2005; Ehrlich, 2001; Adams et al., 2004). Unfortunately, most environmentally fragile regions are located in remote mountainous rural areas. Because environmental protection policies limit the resource utilization of the poor, the impact on local economic development is most serious there (Duan et al., 2010). The relationship between environmental degradation and poverty is called the "poverty trap": poverty leads to environmental degradation, and environmental degradation in turn exacerbates poverty (Zhang, 2004). This issue is particularly prominent and important in the mountains. Mountains carry the dual mission of ecological protection and economic development; mountain forests, deserts, wetlands and other major terrestrial ecosystems constitute important forest areas and water sources that form the basis of ecological safety (Wu, 2007). In the meantime, mountainous areas also face poverty and an urgent need for development, and are characterized by closedness, fragility, marginality and remoteness (Zhu, 2006). These features are objective obstacles to lifting mountain areas out of poverty.
China is a mountainous country. Mountainous areas account for 69.3% of the land area, and the mountainous population accounts for 56% of the total population of the country (Chen, 2008). China has 592 key counties for national poverty alleviation and development, 496 of which are distributed in mountains rich in forest resources (Liu et al., 2012). How to develop mountainous areas on the basis of economic growth, environmental protection and welfare improvement is a common problem faced by many developing countries. Practice has proved that the win-win goal of environment and economy is commendable but difficult to achieve. In order to achieve the dual strategic goals of poverty eradication and environmental restoration, the Chinese government began the construction of six major forestry projects in the 1970s. As is well known, forestry ecological projects play an important role in promoting environmental recovery in mountainous areas. However, their contribution to the economy of mountainous areas remains vague. Forestry ecological projects are not isolated, but closely linked with social and economic development. The study area is located in the Wuling Mountain Area, in Xiangxi Autonomous Prefecture (Fig.
1). The Xiangxi Autonomous Prefecture government treats forestry projects as an important mission to enrich its people, and it has successively implemented eight major ecological forestry construction projects, including a natural forest protection project and the Sloping Land Conversion Project, completing the planting of 49,200 acres under key forestry projects. At the same time, because Xiangxi is a typical "old, small, frontier and poor" area, the economic development of Xiangxi, as well as poverty alleviation, is under great stress.

Data sources: The data used in this study are mainly from the "Xiangxi Statistical Yearbook", the "Rural Hunan Statistical Yearbook" and the "Hunan Statistical Yearbook". The data cover the 10 years from 2005 to 2014 for Xiangxi Autonomous Prefecture of Hunan province, forming a panel of eight cities and counties: Jishou, Luxi, Fenghuang, Huayuan, Baojing, Guzhang, Yongshun and Longshan. Descriptive statistics of the variables of this study are shown in Table 1.

Model: This study focuses on whether forestry construction has effects on the growth of the regional national economy. Model 1 is the overall-effect model of the contribution of forestry ecological construction to the regional economy, used to investigate whether forestry construction has different effects on the different constituents of GNP. Y represents GNP, primary industry output, secondary industry output or tertiary industry output. Model 1 is set as follows:

Ln Y_it = c_i + lLn Afforestation_it + ε_it

In model 2, three control variables, investment, the number of employees (Stuff) and roads, are added to measure the local effect of forestry ecological projects on the economy. The model is set as follows:

Ln Y_it = c_i + βLn Investment_it + gLn Stuff_it + qLn Road_it + lLn Afforestation_it + ε_it

In order to determine whether different tree species of afforestation have different effects on the national economy, model 3 distinguishes the afforested area by species (economic forest, timber forest and public-welfare forest).

RESULTS AND DISCUSSION

Model regression results are shown in Table 2. It can be seen
that the forestry projects make a certain contribution to the gross national product and its composition in the Xiangxi region (model 1). When the afforestation area increases by 1%, Xiangxi's GNP increases by 3.7%, primary industry output by 2.4%, secondary industry output by 4.2% and tertiary industry output by 4.9%. Therefore, in this study area, the overall effect of forestry ecological construction on the regional national economy is positive.

From the perspective of the local effect of forestry ecological construction in model 2, the contribution of forestry ecological construction to the national economy is not significant, and the coefficient even turns negative once the control variables are added. Possible reasons are as follows. First of all, afforestation in the study area is dominated by public-welfare forest, and even timber forest has difficulty obtaining cutting quotas because of mountain ecological protection. Therefore, afforestation activities are difficult to translate into direct productivity and economic benefits. This conclusion is also reflected in the household questionnaire, which shows that farmers' forestry income and production activities in the study area are very limited. There is no doubt that the economic contribution from local forestry production is very small. Secondly, forest resources have a long growth cycle and take many years to reach harvest, so forestry returns are lagged and dynamic. Because of this, regressing GNP on the planting area of the same year introduces a certain bias. However, as the duration of the data collection is only five years, lagged national-economy variables could not be constructed. We can infer that the model estimates are biased downward, and that the true overall and local effects of forestry ecological construction are greater than estimated. In fact, the contribution rate of forestry ecological construction should be higher than the existing results and the
coefficient should be positive. Thirdly, because the control variables added are highly significant, they have a great influence on gross national product and, to a certain extent, weaken the estimated impact of afforestation on the national economy.

From model 3, on the influence of afforested tree species on the national economy, it can be seen that, apart from economic forest, timber forest and public-welfare forest make no significant contribution to the national economy. When the economic forest area increases by 1%, GNP increases by 0.1%, secondary industry output by 0.2% and tertiary industry output by 0.1%. Evidently, although afforestation in the study area is built around the purpose of ecological protection, with strict rules and constraints on public-welfare forest and timber forest, economic forests are more readily translated into tangible economic benefits and, owing to their short growth period, contribute more to the regional national economy.
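A least-squares dummy-variable (LSDV) regression of the kind estimated here, ln output on ln afforested area with a dummy per county absorbing county fixed effects, can be sketched in a few lines. The data below are synthetic placeholders with a known elasticity, not the Xiangxi panel.

```python
import numpy as np

# LSDV sketch: ln(GNP) on ln(afforestation) with one dummy per county.
# Synthetic placeholder data with a known elasticity -- NOT the Xiangxi panel.
rng = np.random.default_rng(0)
n_counties, n_years = 8, 10
county = np.repeat(np.arange(n_counties), n_years)
ln_afforest = rng.normal(5.0, 0.5, size=n_counties * n_years)
county_effect = rng.normal(0.0, 0.3, size=n_counties)
ln_gnp = (0.037 * ln_afforest + county_effect[county]
          + rng.normal(0.0, 0.01, county.size))

# Design matrix: one dummy per county (no global intercept) + regressor.
D = np.zeros((county.size, n_counties))
D[np.arange(county.size), county] = 1.0
X = np.column_stack([D, ln_afforest])
beta, *_ = np.linalg.lstsq(X, ln_gnp, rcond=None)
print(f"estimated elasticity: {beta[-1]:.3f}")   # close to the true 0.037
```

The dummy columns play the role of the county fixed effects c_i in the models above; the last coefficient is the afforestation elasticity.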
CONCLUSION

Using panel data for the 8 counties of Xiangxi from 2005 to 2014 and a fixed-effects panel model estimated by LSDV, this study finds that the overall effect of forestry ecological construction on the regional economy is positive: a 1% increase in afforested area raises Xiangxi's GNP by 3.7%, primary industry output by 2.4%, secondary industry output by 4.2% and tertiary industry output by 4.9%. However, when other control variables are added, the local effect of forestry ecological construction is no longer significant and even turns negative. The reasons include local restrictions on harvesting forest resources, estimation bias owing to the long growth cycle of forests, and the dominance of the other, highly significant control variables. The species-level afforestation model shows that economic forests contribute more to the regional economy than timber forests or public-welfare forests. This study takes Xiangxi Tujia and Miao Autonomous Prefecture as the research object, explores the contribution of forestry ecological construction in the Wuling mountain area to the mountain economy, and provides policy recommendations for the healthy and harmonious development of the Wuling mountain area and of forestry construction.

Fig. 1: Location of the study area.
Afforestation = logarithm of the afforested area of barren hills and wasteland.
Table 1: Descriptive results of the sample.
Table 2: LSDV regression results of the effect of afforestation on the national economy of western Hunan province.
Fermented Mistletoe Extract as a Multimodal Antitumoral Agent in Gliomas

In Europe, commercially available extracts from the white-berry mistletoe (Viscum album L.) are widely used as a complementary cancer therapy. Mistletoe lectins have been identified as main active components and exhibit cytotoxic effects as well as immunomodulatory activity. Since it is still not elucidated in detail how mistletoe extracts such as ISCADOR communicate their effects, we analyzed the mechanisms that might be responsible for their antitumoral function on a molecular and functional level. ISCADOR-treated glioblastoma (GBM) cells down-regulate central genes involved in glioblastoma progression and malignancy such as the cytokine TGF-β and matrix metalloproteinases. Using in vitro glioblastoma/immune cell co-cultivation assays as well as measurement of cell migration and invasion, we could demonstrate that in glioblastoma cells, lectin-rich ISCADOR M and ISCADOR Q significantly enforce NK-cell-mediated GBM cell lysis. Beside its immune stimulatory effect, ISCADOR reduces the migratory and invasive potential of glioblastoma cells. In a syngeneic as well as in a xenograft glioblastoma mouse model, both pretreatment of tumor cells and intratumoral therapy of subcutaneously growing glioblastoma cells with ISCADOR Q showed delayed tumor growth. In conclusion, ISCADOR Q, showing multiple positive effects in the treatment of glioblastoma, may be a candidate for concomitant treatment of this cancer.

Introduction

Glioblastoma (GBM) is the most common malignant brain tumor, with an incidence of 3.5 cases per 100,000 people per year. GBM are among the most lethal neoplasms, with a median survival of approximately one year after diagnosis even with maximal current treatment strategies. Only few therapeutic regimens, such as the chemotherapeutic drug temozolomide (TMZ), provide a short but significant increase in survival [1].
Additionally, tumor-intrinsic features including the methylation status of the O-(6)-methylguanine-DNA methyltransferase (MGMT) promoter are also predictive for the survival of GBM patients [2]. The failure of effective therapy regimens in malignant GBM is associated with its malignant characteristics: these tumors are highly resistant to cell death [3], possess immunosuppressive function [4] and show highly invasive and destructive growth due to their migratory and invasive potential [5]. Extracts of the European mistletoe (Viscum album L.) have been widely used for decades as alternative, complementary treatment and adjuvant cancer therapy, especially in German-speaking countries. Cytotoxic glycoproteins, the mistletoe lectins (ML), are one active component of mistletoe extracts and can stimulate effector cells of the innate and adaptive immune system such as dendritic cells, macrophages, natural killer cells, and B and T lymphocytes, at least one mechanism that might be responsible for the antitumoral properties of mistletoe extracts (ME) [6][7][8][9][10][11][12][13]. Beside their immune modulatory function, ME show direct growth inhibition and cell death induction in tumor cells, such as induction of apoptosis or direct necrotic effects, depending on the concentration used for treatment [14][15][16][17][18][19][20][21]. In vivo, preclinical activity of aqueous ME has been shown in a variety of transplantable rodent tumor models [22][23][24][25][26]. In clinical cancer therapy studies, adjuvant treatment with ME showed an impact on the patients' quality of life, reducing side effects of conventional therapies such as nausea, fatigue or reduced energy induced by chemotherapy or radiation, and was associated with prolonged survival [27][28][29][30][31], even in glioma [32].
One of the longest-known mistletoe preparations is the fermented plant extract ISCADOR, which is extracted from mistletoe plants growing on different host trees such as apple (ISCADOR M), oak (ISCADOR Q) or pine (ISCADOR P). ISCADOR P contains only very small amounts of lectin, whilst the lectin content is high in ISCADOR Q and medium in ISCADOR M. Natural killer (NK) cells in particular, as part of the innate immune system, play an essential role in cell-mediated immune responses against tumor cells, also in glioma [33], and it has been shown before that ISCADOR treatment had a positive effect on NK cell function [9, 34]. There are initial hints that the ISCADOR variants have positive effects when used as treatment agents for GBM, but the mechanisms responsible for these effects are not well elucidated to date [26, 32]. In the present study, we especially aim to investigate the effects of lectin-rich ISCADOR Q on GBM cell proliferation, cell death, cell motility and immune-cell-mediated anti-GBM immune response to assess its antitumoral potential in gliomas.

PCR-Based Microarray Expression Analysis. Total RNA and cDNA were obtained as described. For expression analysis of 96 genes involved in tumor cell motility and angiogenesis, the Human Angiogenesis 96-well StellARray qPCR array (Lonza, Basel, Switzerland) was prepared according to the manufacturer's instructions using SYBR green master mix (Thermo Fisher Scientific, MA, USA) on an ABI 7500 system. Data were analyzed with the Global Pattern Recognition Data Analysis Tool (Bar Harbor Biotechnology, Trenton, ME, USA) using the internal array control housekeeping gene expression for normalization.

Rembrandt Platform Analyses. The REMBRANDT database contains microarray data for probes from the Affymetrix U133 Plus 2.0 GeneChip (National Cancer Institute 2005; REMBRANDT home page: http://rembrandt.nci.nih.gov/ (accessed February 6, 2012)).
At the time of access, the database contained mRNA expression data of 228 glioblastomas and 28 normal control tissues. P values for median expression intensity changes between each glioma subgroup compared to normal CNS tissue controls were obtained.

Growth and Viability Assays. Net cell culture growth and clonogenic survival were determined by crystal violet staining as described [40]. Cell viability was determined by Trypan blue exclusion assay.

Analysis of Cell Death. For the caspase 3/7 activity assay, cells were seeded in microtiter plates, allowed to attach, and treated with ISCADOR (100 μg/mL). 24 h later, the cells were lysed in 25 mM Tris-HCl (pH 8.0), 60 mM NaCl, 2.5 mM EDTA, and 0.25% NP40 for 10 min, and acetyl-DEVD-amc was added at 12.5 mM. Caspase activity was assessed by fluorescence using a Mithras LB940 fluorimeter (Berthold Technologies, Bad Wildbad, Germany) at 360 nm excitation and 480 nm emission wavelengths. To analyze total cell death, cells were trypsinized and stained with propidium iodide (PI). The content of PI-positive cells was determined by fluorescence-activated cell sorting (FACS) analysis.

2.10. Zymography. Cellular supernatants were collected and concentrated, and 20 μg protein was loaded on zymogram gels (Bio-Rad Laboratories GmbH, München, Germany). MMP activity was analyzed according to the manufacturer's protocol. Quantification was done using the Alpha-Erase software (Cell Bioscience, Santa Clara, CA, USA).

Measurement of Cell Migration and Invasion. The scratch assay and the Boyden chamber migration assay have previously been described [41]. Matrigel-coated Boyden chambers (BD Biosciences, Heidelberg, Germany) were used for invasion assays.

2.13. Purification of PBL, Isolation, and Activation of Immune Effector Cells.
Peripheral blood mononuclear cells (PBMC) were isolated from EDTA-anticoagulated peripheral venous blood of healthy donors by density gradient centrifugation (Biocoll, Biochrom KG, Berlin, Germany). Monocytes were removed from the PBMCs by allowing them to adhere to tissue culture plastic. The nonadherent fraction (PBLs) was then cocultured with irradiated RPMI 8866 feeder cells to obtain polyclonal NK cell populations [22]. CD3+ T cells were isolated from freshly isolated PBMCs using the Pan T cell isolation kit (Miltenyi Biotec, Bergisch Gladbach, Germany). CD3+ T cells were then stimulated with 30 ng/mL of anti-CD3 antibody (clone OKT-3; eBioscience, Frankfurt, Germany), irradiated feeder cells and 50 U/mL human recombinant IL-2 (Immunotools, Friesoythe, Germany) as previously described [43]. On day 12 of the expansion cycle, the activation status based on expression of CD25 (anti-human CD25-PE conjugated antibody, Miltenyi Biotec) and HLA-DR (anti-human HLA-DR-FITC conjugated antibody, eBioscience) was examined by FACS analysis. These cells were used as effectors in the lysis assays on day 14 of their cycle.

2.14. Cellular Cytotoxicity Assay. Immune cell cytotoxicity against human GBM cells was analyzed using a nonradioactive assay measuring the luciferase activity of stably transfected LNT-229-Luc cells. Briefly, target LNT-229-Luc cells were incubated with ISCADOR or control medium for 24 h. The cells were washed, and human PBLs, isolated NK cells or activated CD3+ T cells of healthy donors were added to the target cells and incubated for 4 h at 37 °C. Viable cells were determined by measuring luciferase activity using a Mithras LB940 fluorimeter (Berthold Technologies, Bad Wildbad, Germany). The experimental lysis was corrected by division by the spontaneous lysis of target cells at the corresponding ISCADOR concentration or of control-treated cells. The percentage of lysis was calculated as follows: 100 − ((experimental lysis/spontaneous lysis) × 100).
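The luminescence-based lysis calculation above reduces to simple arithmetic on the luciferase readings. A minimal sketch (variable names and the triplicate readings are illustrative, not taken from the original assay) is:

```python
def percent_lysis(experimental_signal, spontaneous_signal):
    """Specific lysis as defined in the assay:
    100 - ((experimental / spontaneous) * 100).

    `spontaneous_signal` is the luciferase activity of identically treated
    target cells without effector cells; `experimental_signal` is the
    activity remaining after coculture with effector cells."""
    if spontaneous_signal <= 0:
        raise ValueError("spontaneous signal must be positive")
    return 100.0 - (experimental_signal / spontaneous_signal) * 100.0

# Hypothetical triplicate readings (relative light units)
experimental = [5200.0, 4800.0, 5000.0]
spontaneous = [10000.0, 9800.0, 10200.0]

mean_exp = sum(experimental) / len(experimental)
mean_spont = sum(spontaneous) / len(spontaneous)
print(round(percent_lysis(mean_exp, mean_spont), 1))  # 50.0
```

Dividing by the matched, ISCADOR-treated spontaneous control is what prevents any direct cytotoxicity of the extract from being counted as immune-mediated lysis.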
Experiments were done in triplicate. For inactivation of NK cells, the cultures were preincubated with anti-NKG2D antibodies (R&D Systems, Wiesbaden, Germany) for 0.5 h prior to coculture with GBM cells according to the manufacturer's instructions. For analysis of NK cell attachment, purified NK cells were loaded with CFSE (10 μM) for 10 min and then cultivated on glioma cells for 1 h. Cocultivated cells were washed intensively with PBS to remove unattached NK cells and documented microscopically using a Zeiss Axiovert 200 M fluorescence microscope.

Mouse Experiments. Athymic CD1-deficient NMRI nude mice were purchased from Janvier (St. Berthevin, France). NMRI mice develop functional NK cells but lack T cells. VMDk mice [38,39] were bred in-house. Mice of 6-12 weeks of age were used in all experiments. The experiments were performed according to German law, Guide for the Care and Use of Laboratory Animals (approval N3/09). LNT-229 or SMA560 cells were treated with ISCADOR Q (100 μg/mL) or left untreated. The cells were trypsinized and counted, and viability was assessed by trypan blue staining. Groups of 6 mice were injected s.c. with one million viable cells into the right flank. Mice were examined regularly for tumor growth using a metric caliper and sacrificed when tumors reached 200 mm². To avoid artifacts due to cytotoxicity or proliferation inhibition induced by ISCADOR, the proliferation of the cells used for inoculation was analyzed. In brief, 1,000 cells were seeded in microtiter plates and allowed to attach. Cell density was monitored every 24 h by crystal violet staining. For treatment of tumors with ISCADOR, one million cells were implanted subcutaneously. Seven days later, 2 μg (20 μL of a 100 μg/mL stock) ISCADOR Q or vehicle (PBS) was injected intratumorally.
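As a sanity check on the intratumoral dose above, the injected mass follows directly from volume times stock concentration; a tiny sketch with the units handled explicitly (the function name is illustrative) is:

```python
def injected_mass_ug(volume_ul, stock_ug_per_ml):
    """Mass in µg delivered by injecting `volume_ul` µL of a stock
    at `stock_ug_per_ml` µg/mL: convert µL to mL, then multiply."""
    return (volume_ul / 1000.0) * stock_ug_per_ml

# 20 µL of a 100 µg/mL ISCADOR Q stock, as used for intratumoral injection
print(injected_mass_ug(20, 100))  # 2.0
```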
In a further experiment, NMRI mice harboring subcutaneous tumors were periodically and subcutaneously injected on the contralateral body site with either PBS or with increasing concentrations (1 μg up to 100 μg, in 100 μL each) of ISCADOR Q. Mice were examined regularly for tumor growth using a metric caliper and killed when tumors reached 200 mm².

Statistical Analysis. Figures show data obtained in at least three and up to ten independent experiments, as indicated. Quantitative data were assessed for significant differences using the t test (*P < 0.05, **P < 0.01, ***P < 0.0001). Statistical analysis of tumor growth in the animal experiments was done using the ANOVA test (SPSS18, SPSS Inc, Chicago, IL, USA). The results of the PCR-based microarray expression analysis were analyzed together with the results obtained for the corresponding genes in the REMBRANDT database by a contingency table analysis. The nominal-scaled response variable (gene up- (1), non- (0), or down- (−1) regulated in glioblastomas versus normal brain tissue in the REMBRANDT database) and the nominal explanatory variable (same gene up- or downregulated in the PCR-based microarray expression analysis after ISCADOR treatment) were analyzed and subsequently tested in a likelihood ratio test. A significance level of alpha = 0.05 was chosen for all tests. Statistical analysis to assess the association of the REMBRANDT and our PCR-based microarray data was performed using JMP 8.0 software (SAS, Cary, NC, USA).

ISCADOR Significantly Reduces the Expression of Genes Associated with Malignancy and Progression in Glioblastomas. To assess whether ISCADOR treatment might modulate gene expression in glioma cells, we first performed a PCR-based microarray expression analysis. This array contains a variety of genes involved in glioma-associated pathways such as proliferation, survival, migration, invasion, and angiogenesis as well as tumor-immunological processes.
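The contingency-table likelihood-ratio test described in the statistical analysis above is the standard G-statistic, G = 2·Σ O·ln(O/E), compared against a χ² distribution with (rows − 1) × (columns − 1) degrees of freedom. The sketch below uses illustrative counts, not the study's actual data.

```python
import math

def g_statistic(table):
    """Likelihood-ratio (G) statistic for an r x c contingency table.

    G = 2 * sum(O * ln(O / E)), with expected counts E computed from
    row total * column total / grand total. Returns (G, degrees_of_freedom);
    terms with an observed count of zero contribute nothing to G."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    g = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            if obs > 0:
                exp = row_totals[i] * col_totals[j] / total
                g += 2.0 * obs * math.log(obs / exp)
    df = (len(table) - 1) * (len(table[0]) - 1)
    return g, df

# Hypothetical counts: REMBRANDT direction (up/none/down in GBM, rows)
# versus direction after ISCADOR treatment (up/down on the array, columns).
table = [[4, 14], [10, 12], [16, 6]]
g, df = g_statistic(table)
print(df)  # 2
# Compare g against the chi-square critical value for df = 2
# (5.99 at alpha = 0.05), or convert to a p value with scipy if available.
```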
As shown in Figure 1(a), treatment of glioma cells with ISCADOR Q reduced the expression of a variety of genes relevant for gliomagenesis: epidermal growth factor receptor 2 (EGFR2/HER2/ERBB2) stimulates proliferation and blocks apoptosis [44], whereas BIRC5/Survivin is a potent inhibitor of apoptosis [45]. The PKB/AKT1 protein kinase plays a key role in multiple cellular processes such as glucose metabolism, cell proliferation, apoptosis, transcription, and cell migration [46,47]. Constitutive signal transducer and activator of transcription 3 (STAT3) activation is associated with various human cancers, commonly suggests poor prognosis, and provides antiapoptotic as well as proliferative effects [48]. Genes such as matrix metalloproteinases (MMP), cell adhesion molecules such as platelet/endothelial cell adhesion molecule (PECAM)-1, or integrins are important players in cell motility, whereas vascular endothelial growth factor (VEGF), VEGF receptor type II (VEGFR2), angiopoietin (ANGPT)-1 and related molecules as well as transforming growth factor (TGF)-β are genes involved in both cell motility and angiogenesis. The latter not only enhances tumor cell motility and angiogenesis, but is also the central immunosuppressive molecule in GBM [5]. To verify the reliability of the microarray data, we exemplarily analyzed the expression of differentially regulated genes by qPCR or ELISA. As shown in Figure 1, there is a significant downregulation of TGF-β, ANGPT-1, VEGF, and VEGFR2 upon ISCADOR treatment. Reduced gene expression is not a result of enhanced cell death, since the ISCADOR treatment conditions used for this analysis did not induce cell death (Figure 3). We further analyzed the data obtained from the microarray set for a potential association with genes regulated in glioblastomas versus normal CNS tissue in the REMBRANDT database (Figure 1(f)).
ISCADOR treatment of LNT-229 glioma cells led to a relative increase of genes which are significantly higher in nonneoplastic CNS tissue specimens in the REMBRANDT database, while the percentage of genes significantly upregulated in glioblastomas in the REMBRANDT cohort decreased (P = 0.0047).

ISCADOR Treatment Reduces Glioma Cell Growth in a Dose-Dependent Manner. Since it has been shown that mistletoe extracts can induce cell death and have antiproliferative activity in a variety of tumor cell lines derived from breast, lung, prostate, or renal cancer [14-19], and since we have shown that ISCADOR downregulated a variety of proliferation-stimulating genes in GBM cells (Figure 1), we analyzed whether ISCADOR was able to reduce cell growth in human and mouse GBM cell lines, too. As shown in Figure 2, the lectin-rich ISCADOR M and Q reduced cell growth at doses higher than 100 μg/mL both in human LNT-229 and in mouse SMA560 GBM cells, whereas ISCADOR P showed only weak effects on glioma cell growth. The cell-death-inducing effects of ISCADOR depend on its content of viscotoxins and mistletoe lectins [14,15]. The cytotoxic and growth-inhibitory effects seen for ISCADOR M and Q, but not for ISCADOR P, in the GBM cell lines therefore seemed to be only to a minor extent an effect of the viscotoxins present in these ME: all ISCADOR variants used contain viscotoxin, and even though the final viscotoxin concentration is comparable when treating the cells with the different variants, high concentrations of ISCADOR P, containing a higher amount of viscotoxin, only marginally induced cell death. The main difference between the ISCADOR variants is their lectin concentration, suggesting that ML are mainly responsible for the cytotoxic effects.
To analyze effects of ISCADOR on glioma cells beyond those on cell growth, an ISCADOR concentration of 100 μg/mL was chosen for the further experiments to avoid unwanted side effects induced by inhibition of proliferation or induction of cell death. At this ISCADOR concentration of 100 μg/mL, the viscotoxin contents were 80 ng/mL (P), 180 ng/mL (M), and 182 ng/mL (Q), and the total ML concentrations were 0.017 ng/mL (P), 4.49 ng/mL (M), and 7.52 ng/mL (Q). At this concentration, no significant reduction in cell growth (Figure 3(a)), induction of cell death (Figure 3(b)) or caspase activity (Figure 3(c)) was detectable. Even at higher concentrations of ISCADOR Q no caspase 3/7 activity was detectable, but cells became propidium iodide positive, suggesting that the cells died by necrosis (data not shown).

ISCADOR Enhances PBL-Mediated GBM Cell Lysis. Since we knew from the literature that ML can stimulate immune effector cells, and since we had seen that the secretion of TGF-β, the most important immunosuppressive cytokine in glioma, is decreased after treating the cells with ISCADOR Q (Figure 1), we asked whether ISCADOR might also induce immune-cell-mediated GBM cell attack. Indeed, ISCADOR treatment of LNT-229 cells enhanced PBL-mediated tumor cell lysis, dependent on the lectin content of the different ISCADOR variants (Q > M > P; Figure 4(a) and data not shown). ISCADOR-mediated GBM cell lysis is dependent on the activity of NK cells, since preincubation of PBLs with an inactivating NKG2D antibody neutralized the lytic effect (Figure 4(b)), whereas coincubation of ISCADOR-treated LNT-229 cells with purified NK cells also showed enhanced cell lysis (Figure 4(c)).
NK-cell-mediated tumor cell lysis depends on the expression of so-called danger/stranger signaling molecules on target cells, such as the NKG2D ligands major histocompatibility complex class I-related (MIC)-A, MIC-B, or UL16-binding proteins (ULBP) 1, 2, 3, or the DNAX accessory molecule (DNAM)-1 ligands CD112 and CD155. To identify whether an upregulation of these molecules was involved in the enhanced PBL-mediated lysis of ISCADOR-treated GBM cells, we performed FACS analysis to quantify surface protein expression. Neither NKG2D- nor DNAM-1-ligand expression was altered after treating the tumor cells with ISCADOR Q (data not shown). We therefore analyzed whether enhanced NK-cell-mediated glioma cell lysis might be an effect of increased attachment of NK cells to the target cells due to ML presented on the target cell surface. For this, we cocultivated CFSE-labeled NK cells with ISCADOR Q-pretreated glioma cells. As shown in Figures 4(d) and 4(e), more NK cells were attached to glioma cells if the target cells were pretreated with lectin-rich ISCADOR Q, but not if the cells were treated with lectin-poor ISCADOR P, suggesting that ML boost NK cell attachment to the GBM target cells. In contrast to the ISCADOR-mediated enhancement of NK cell activity, cocultivation of LNT-229 glioma cells with purified and activated T cells did not show any difference in the lysis of untreated compared with ISCADOR-treated glioma cells; likewise, no changes in the cell surface expression of MHC molecules were detectable on LNT-229 cells treated with ISCADOR (data not shown).

ISCADOR Reduces GBM Cell Motility. To analyze whether the ISCADOR-mediated reduction in the expression of migration/invasion-relevant genes resulted in a reduction of glioma cell migration and invasion, we performed in vitro long-lasting scratch assays, short-term transwell Boyden chamber migration assays, as well as Matrigel invasion assays.
As shown in Figure 5 (Figure 1(a)), there was an increased secretion of MMP-9 in ISCADOR Q-treated cells. However, MMP-9 activity was also impaired by ISCADOR Q treatment (Figures 6(c) and 6(d)). Since it has been described that the net MMP-2 activity correlates with the level of TIMP-2 expression [50], and knowing that under certain conditions TIMP-2 can activate MMP-2 [51], we analyzed TIMP-2 protein expression, demonstrating that TIMP-2 is also downregulated in ISCADOR-treated GBM cells (Figure 6(a)).

Figure 6: ISCADOR Q-mediated reduction of cell motility is caused by reduced MMP expression and activity. (a) LNT-229 cells were treated with ISCADOR Q or left untreated; 24 h later, cellular supernatants (SN) or lysates were prepared. MMP or TIMP expression was analyzed by immunoblot (Ab: specific antibody; one representative experiment is shown). (b) Quantification of protein expression (n = 3 for MMP-1, n = 9 for MMP-2, n = 3 for MMP-3, n = 8 for MMP-9, n = 2 for MMP-10, n = 4 for MMP-14, n = 5 for TIMP-2). (c) Gelatinase zymogram analyzing MMP activity (n = 10, one representative experiment is shown). (d) Quantification of MMP gelatinase activity (n = 8 for MMP-9, n = 10 for MMP-2, n = 4 for 36 kD active MMP-2; SEM). Note that the activated form of MMP-9 could not be strictly separated from MMP-2; therefore, quantification of MMP-9 activity was only done for proMMP-9.

ISCADOR Q Reduces Tumor Growth in Murine Glioma Models. To analyze the effects of ISCADOR Q on tumor growth in vivo, we used two different mouse models. We chose a xenograft model in which human LNT-229 GBM cells were subcutaneously implanted into the right flank of nude mice. For therapeutic intratumoral ISCADOR treatment, mouse SMA560 glioma cells, representing a spontaneously developed, poorly differentiated astrocytoma in a VMDk mouse, were implanted in immunocompetent VMDk mice [39]. As shown in Figures 7(a) and 7(b), both in the xenograft and in the syngeneic mouse model, pretreatment of the implanted cells with ISCADOR Q (100 μg/mL) mitigated tumor growth, indicating that the antitumor effects of ISCADOR Q are also transferable to in vivo growing tumors. To exclude that the differences in tumor growth between ISCADOR Q- and control-treated tumors were a result of altered cell proliferation or induction of cell death, we analyzed cell growth in parallel. There were no differences in the proliferation of ISCADOR Q-treated LNT-229 cells compared with vehicle-treated cells. For SMA560 cells, ISCADOR Q delayed cell growth (insert in Figure 7(b)), but at 96 h after treatment, ISCADOR-treated cells reached the same growth level as control cells, suggesting that the differences in tumor growth were not due to altered proliferation or cell death.

In a first therapy model mimicking the conditions under which GBM patients use ISCADOR treatment, the mice were treated with repeated subcutaneous injections of ISCADOR Q at increasing concentrations on the contralateral body site. This model gives information on whether ISCADOR Q is able to induce antitumoral activity by systemic application. As shown in Figure 7(c), there was a slight, but not significant, reduction of tumor growth in ISCADOR Q-treated mice. In a second therapy model, we treated subcutaneously preimplanted tumors with a single intratumoral injection (20 μL, 100 μg/mL) of ISCADOR Q. This amount of ISCADOR was chosen because we wanted to determine whether an intratumoral application of ISCADOR might exhibit antitumor activity independent of its capacity to induce cell death or to inhibit proliferation (Figures 2(b) and 3(b)). Tumor growth was significantly reduced when ISCADOR Q was injected directly into the growing tumor mass. This suggests that ISCADOR Q might unfurl its antitumor effect best when in direct contact with the tumor cells.
Figure 7: The cells were pretreated with ISCADOR Q (100 μg/mL) or were left untreated. One million viable LNT-229 cells were implanted into the right flank of nude mice (a), or one million SMA560 cells into VMDk mice (b). Tumor size was measured three times per week. Inset: as a control, and to avoid measuring artifacts induced by ISCADOR Q-mediated inhibition of cell growth, proliferation of the implanted cells was assessed in parallel by crystal violet staining. (c) One million LNT-229 cells were implanted into the right flank of nude mice and tumors were allowed to grow for six days. On day 7, subcutaneous injections twice a week with PBS or a weekly increasing dose of ISCADOR Q (1 up to 100 μg in total; arrows indicate ISCADOR injections) were given on the contralateral body site. After two weeks of treatment with the highest dosage, injections were stopped and tumor size was measured as in (a). (d) One million SMA560 cells were implanted into the right flank of VMDk mice and tumors were allowed to grow for six days. On day 7, a single intratumoral injection of ISCADOR Q (20 μL, 100 μg/mL, indicated by an arrow) was given into the palpable tumors. Tumor size was measured as in (a).

Discussion

The failure of effective therapy regimens in malignant GBM is highly associated with its malignant characteristics: these tumors are highly resistant to cell death [3], possess immunosuppressive function [4] and show highly invasive and destructive growth due to their migratory and invasive potential [5]. A therapeutic substance with multimodal function, able to decrease GBM cell motility, to induce antitumoral immunity and to inhibit GBM cell proliferation, might be of high value in tumor treatment. In this study, we showed for the first time that the mistletoe extract ISCADOR, and especially the lectin-rich variant
ISCADOR Q, not only reduced GBM cell growth and induced NK-cell-mediated GBM cell lysis, but also impaired the migration and invasion of GBM cells in vitro and delayed tumor growth in vivo. These effects seemed, at least in part, to be caused by an ISCADOR-induced downregulation of glioma-associated genes involved in proliferation, survival and tumor cell motility (Figure 1(a)). In addition, ISCADOR treatment led to a significant re-expression of genes which are usually downregulated in glioblastomas and to a downregulation of genes associated with malignant progression of gliomas (Figure 1(f)). These findings point to a therapeutic effect of ISCADOR on gene clusters associated with gliomagenesis and glioma progression, thereby exerting antitumor functions. The expression of several genes regulating proliferation (EGFR, PKB, STAT3) as well as survival (BIRC5/survivin), and especially the secretion of several factors relevant for cell motility, was also mitigated by ISCADOR Q, the most important being TGF-β and MMP-2, TGF-β being known to regulate MMP-2 [37]. Reduced secretion did not result from a general inhibition of protein secretion, as there was no difference in the overall amount of secreted proteins (data not shown), and several other proteins we examined did not show any difference in expression or secretion after ISCADOR Q treatment (Figures 4 and 6). Reduced secretion of MMP-2, one of the most important proteins for ECM destruction and a well-known inducer of GBM cell motility [52,53], was accompanied by a reduction of the activity of both pro- and mature MMP-2 variants as well as of MMP-9 (Figures 6(c) and 6(d)). Even though the expression of MMP-9 mRNA was reduced upon ISCADOR treatment (Figure 1(a)), there was an enhancement of extracellular pro-MMP-9 (Figures 6(a) and 6(b)), suggesting that ISCADOR might also alter biological processes that differentially regulate protein secretion.
Unraveling the mechanism of this observation, however, needs further investigation. In this context, the ISCADOR Q-mediated downregulation of TIMP-2 (Figures 6(a) and 6(b)) should also be kept in mind. TIMP-2 was originally characterized as an inhibitor of MMPs, and in this light, the downregulation of TIMP-2 accompanied by a simultaneous reduction of MMP-2 activity seemed prima facie not to fit. Meanwhile, however, it has been shown that TIMP-2 can also act as an activator of MMP-2, so that increased levels of TIMP-2 can also lead to increased MMP-2 activity [50,51]. For this reason, the ISCADOR Q-mediated reduction of TIMP-2 fits well with the decreased MMP-2 activity and impaired cell motility in ISCADOR Q-treated GBM cells. The secretion of two additional factors related to tumor growth and motility as well as to angiogenesis was also decreased in ISCADOR Q-treated GBM cells: VEGF, whose expression in GBM is known to correlate with MMP-2 [54], and Angiopoietin-1, a factor suggested to regulate, besides angiogenesis, GBM cell adhesion to the ECM via its receptor Tie-2 [55]. Reduced expression of VEGF and Ang-1 may therefore contribute not only to the antimigratory/anti-invasive effect of ISCADOR, but also to reduced tumor neoangiogenesis. In the literature, ISCADOR has been described as an immune-stimulating agent [6-10, 13, 34, 56, 57]. In the present study, ISCADOR treatment of GBM cells led to increased NK-cell-mediated, but not T-cell-mediated, tumor cell lysis (Figure 4). The mechanism by which ISCADOR enhanced NK-cell-dependent GBM cell lysis is still not completely unraveled, but it was clearly not an effect of ISCADOR acting directly on NK cells, nor of altered immune receptor/ligand surface expression on either GBM or immune cells: neither a regulation of NKG2D ligands, DNAM-1 ligands, or MHC class molecules on GBM cells, nor alterations of the corresponding receptors on NK cells, were detectable upon ISCADOR treatment.
The immune-stimulating effect of ISCADOR might be, on the one hand, an effect of the reduced secretion of the prominent immunosuppressive cytokine TGF-β [4,5,58] in ISCADOR-treated GBM cells; on the other hand, it might result from the strengthened attachment of NK cells to the surface of ISCADOR-treated GBM cells, a mechanism that forces the contact between immune effector and tumor target cells. Since the boosted attachment was restricted to the lectin-rich ISCADOR variant Q, this effect is suggested to be a result of the high ML concentration on the target tumor cell surface (Figure 4(d)). The immune-stimulating effect of ISCADOR Q was strongest when tumor cells were directly exposed to the agent, since in vitro treatment of NK cells with ISCADOR provided no immune stimulation. In vivo, using two different mouse models (syngeneic and xenograft) and two different therapeutic approaches (systemic or intratumoral ISCADOR Q application), we demonstrated that ISCADOR Q delayed tumor growth. Subcutaneous injection of ISCADOR Q in mice harboring GBM tumors at the contralateral body flank showed only a marginal and not significant tumor growth reduction. In contrast, local intratumoral treatment of tumors with ISCADOR Q significantly reduced tumor growth, again suggesting that ISCADOR unfurls its antitumoral effect best when in direct contact with the target cells (Figures 7(c) and 7(d)).

Conclusions

In conclusion, we here present that ISCADOR treatment reduced the expression of genes associated with tumor progression, decreased GBM cell growth, mitigated GBM cell migration and invasion, and enhanced NK-cell-mediated GBM cell lysis. These transcriptional changes translated into a functionally relevant reduction of cell proliferation, a decrease of the migratory and invasive capacity, as well as a decline in the immune-evasive potential of glioma cells. In vivo, ISCADOR Q reduced tumor growth in both the xenograft and the syngeneic GBM mouse model.
To display its best antitumoral function, ISCADOR Q, and especially ML, have to be in close contact with the tumor cells. Thus, ISCADOR Q may hold promise as a multimodal antitumoral agent for concomitant treatment of human GBM.
Suramin enhances the urinary excretion of VEGF-A in normoglycemic and streptozotocin-induced diabetic rats

Background
Vascular endothelial growth factor A (VEGF-A) and P2-receptors (P2Rs) are involved in the pathogenesis of diabetic nephropathy. The processing of VEGF-A by matrix metalloproteinases (MMP) regulates its bioavailability. Since the ATP-induced release of MMP-9 is mediated by P2Rs, we investigated the effect of suramin on VEGF-A excretion in urine and the urinary activity of total MMP and MMP-9.

Methods
The effect of suramin (10 mg/kg, ip) on VEGF-A concentration in serum and its excretion in urine was investigated in streptozotocin (STZ)-induced diabetic rats over a 21-day period. The rats received suramin 7 and 14 days after a single STZ injection (65 mg/kg, ip). A 24-h collection of urine was performed on the day preceding the administration of STZ and the first administration of suramin and on the day before the end of the experiment. The VEGF-A in serum and urine, albumin in urine, and total activity of MMP and MMP-9 in urine were measured using immunoassays.

Results
Diabetic rats are characterized by a sixfold higher urinary excretion of VEGF-A. Suramin potentiates VEGF-A urinary excretion by 36% (p = 0.046) in non-diabetic and by 75% (p = 0.0322) in diabetic rats, but it did not affect VEGF-A concentration in the serum of non-diabetic and diabetic rats. Urinary albumin excretion as well as total MMP and MMP-9 activity was increased in diabetic rats, but these parameters were not affected by suramin.

Conclusion
Suramin increases the urinary excretion of VEGF-A in normoglycemia and hyperglycaemia, possibly without the involvement of MMP-9. Suramin may be used as a pharmacological tool enhancing VEGF-A urinary secretion.

Introduction
Diabetic nephropathy is a major complication of diabetes leading to end-stage renal disease and is currently a major cause of morbidity and mortality in diabetic patients.
It is characterized, at the organ level, by progressive kidney damage reflected by an increase in albumin excretion in urine and a decline in glomerular filtration. Moreover, at a cellular level, it is characterized by dysfunctions of vascular endothelial cells and of the glomerular visceral epithelial cells called podocytes [1]. These conditions may be related to changes in membrane receptor expression/activity and the dysregulation of angiogenic factors [2,3]. Since glomerular endothelial cells and podocytes cross-talk, a disturbance occurring in one of them may be transmitted to the other, thus aggravating and accelerating the damage to the glomerulus [4]. Among the receptors whose expression/activity is altered are those activated by extracellular nucleotides, called P2-receptors (P2Rs). One of the P2Rs that may be relevant is the P2X7 receptor (P2X7R). For example, increased glomerular expression of the ATP-sensitive P2X7R has been shown in a diabetic rat model [5], and we have previously shown that glomerular microvasculature reactivity to an agonist of P2X7R is increased in streptozotocin (STZ)-induced diabetes [6]. While it has been shown that P2Rs are involved in intraglomerular extracellular matrix protein accumulation in diabetic glomeruli [7], diabetic nephropathy is associated with both systemic and local renal inflammation with the participation of crucial inflammatory cells expressing P2Rs influencing the release of cytokines, which further act via the nuclear transcription factor-kappa B [8]. On the other hand, P2Rs influence the activity of the matrix metalloproteinases (MMPs), key enzymes in extracellular matrix metabolism, as shown by studies in which the ATP-induced rapid release of matrix metalloproteinase-9 (MMP-9) is mediated by P2X7R [9]. MMP-9 expression in human podocytes and enhanced MMP-9 urinary concentrations in patients with diabetic nephropathy have also been reported [10].
In turn, MMPs regulate the bioavailability of the vascular endothelial growth factor family (VEGF), major angiogenesis and vascular permeability factors [11]. VEGF members, among which VEGF-A plays a key role, facilitate cellular responses by binding to tyrosine kinase receptors on a cell's surface [11]. VEGF-A is produced in glomeruli by podocytes and diffuses towards capillary lumens, where it reaches the glomerular endothelial cells and causes an increase in glomerular permeability to water [12,13]. A genetic-based study has provided evidence that increased local renal VEGF levels affect glomerular endothelium fenestration, which is a surrogate marker for local VEGF-A bioactivity [14]. Importantly, elevated VEGF-A levels are associated with glomerular pathologies in diabetic nephropathy [15]. On the other hand, the results of genetic-based studies suggest that VEGF-A may play a protective role in diabetic kidneys [16]. Thus, we hypothesized that VEGF-A bioavailability in glomeruli may be regulated by P2Rs. To reach the study's aim, non-toxic suramin, a broad-spectrum P2R antagonist, was used in STZ-induced diabetic rats, and the urinary excretion of VEGF-A and the activities of MMP and MMP-9 in urine were measured.

Ethical approval. The experiments were conducted in accordance with the European Convention for the Protection of Vertebrate Animals Used for Experimental and Other Scientific Purposes and approved by the local Bioethics Commission in Bydgoszcz, Poland (Approval no. 35/2017). The experiments were performed on rats with tail blood glucose concentrations greater than 11.1 mmol/l measured on day +6 after STZ injection. The effectiveness of hyperglycaemia induction was 87.5%. Twenty-four-hour urine samples were collected in metabolic cages (Tecniplast, Italy) on days +6 and +20 after the STZ injection. The urine was collected in tubes containing protease inhibitors (5 × 10⁻⁴ M PMSF, 10⁻⁶ M leupeptin) and 3 × 10⁻³ M NaN₃.
At the end of the experiment, on day +21, all animals were overdosed with anaesthesia, their thoraxes were opened, and blood was drawn by cardiac puncture (resulting in the rats' death) to preserve the serum of each rat. A schematic of the procedure is depicted in Fig. 1.

Analytic methods

Blood glucose was determined with an Accu-Chek™ Performa glucometer (Roche, Basel, Switzerland), and urine volume was determined gravimetrically. Immunoenzymatic assays were used to measure the concentrations of rat albumin (AssayPro, USA, Cat. No. ERA3201-1) and rat VEGF-A (Thermo Scientific, USA, Cat. No. ERVEGFA). A fluorometric assay was used to measure the activities of total MMP (AnaSpec, USA, Cat. No. AS-71158) and MMP-9 (AnaSpec, USA, Cat. No. AS-71155), while the creatinine concentration was measured by the enzymatic method (Wiener lab., Argentina).

Statistical analysis

The statistical analyses were performed using Statistica 13.3 (TIBCO Software). A Shapiro-Wilk test was used to test the normality of the distribution of variables; continuous variables were expressed as mean ± SE (standard error). Statistical significance between the groups was determined using two-way ANOVA and post hoc Tukey's multiple comparisons. A paired t test was used to assess changes in repeated measures. Univariate correlations were also assessed.

Results

The serum concentrations of VEGF-A were CON, 24.4 ± 3.8 ng/l; SUR, 23.0 ± 1.7 ng/l; STZ, 23.6 ± 2.5 ng/l; and STZ + SUR, 25.8 ± 3.3 ng/l (Fig. 2a). Post hoc comparisons showed a sixfold increased VEGF-A excretion in diabetic rats compared with non-diabetic rats (180 ± 14 pg/mg creatinine vs. 29 ± 5 pg/mg creatinine, p < 0.0001). It is noteworthy that suramin additionally increased the urinary excretion of VEGF-A by 76% in diabetic rats (180 ± 14 pg/mg creatinine vs. 316 ± 8 pg/mg creatinine, p < 0.0001).
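The statistical workflow described above (balanced two-way ANOVA with follow-up comparisons) can be sketched as follows. This is a minimal illustration in Python, not the Statistica 13.3 analysis actually used; the numbers are artificial values loosely echoing the reported group means, not the study's raw data, and `two_way_anova` is a hypothetical helper.

```python
import numpy as np
from scipy import stats

def two_way_anova(cells):
    """Balanced two-way ANOVA. cells[a][b] is a 1-D array of observations
    for factor-A level a and factor-B level b (equal n per cell).
    Returns {"A": (F, p), "B": (F, p), "AxB": (F, p)}."""
    cells = [[np.asarray(c, float) for c in row] for row in cells]
    A, B, n = len(cells), len(cells[0]), len(cells[0][0])
    cell_means = np.array([[c.mean() for c in row] for row in cells])
    grand = cell_means.mean()
    a_means, b_means = cell_means.mean(axis=1), cell_means.mean(axis=0)
    ss_a = n * B * ((a_means - grand) ** 2).sum()
    ss_b = n * A * ((b_means - grand) ** 2).sum()
    ss_ab = n * ((cell_means - a_means[:, None] - b_means[None, :] + grand) ** 2).sum()
    ss_e = sum(((c - c.mean()) ** 2).sum() for row in cells for c in row)
    df_a, df_b, df_ab, df_e = A - 1, B - 1, (A - 1) * (B - 1), A * B * (n - 1)
    ms_e = ss_e / df_e
    out = {}
    for name, ss, df in (("A", ss_a, df_a), ("B", ss_b, df_b), ("AxB", ss_ab, df_ab)):
        F = (ss / df) / ms_e
        out[name] = (F, stats.f.sf(F, df, df_e))
    return out

# Illustrative numbers only (cell means loosely based on the reported
# VEGF-A excretion values, with artificial +/-1 "noise"); factor A is
# diabetes, factor B is suramin.
off = np.array([-1.0, 0.0, 1.0])
cells = [[30 + off, 38 + off],      # non-diabetic: CON, SUR
         [180 + off, 316 + off]]    # diabetic:     STZ, STZ+SUR
res = two_way_anova(cells)
print(res["A"])  # main effect of diabetes: very large F, tiny p
```

With real data, the significant omnibus F would then be followed by Tukey post hoc comparisons between groups, and the before/after suramin measures by a paired t test (`scipy.stats.ttest_rel`).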
An intergroup data analysis of VEGF-A excretion in urine under the influence of suramin did not reveal a statistically significant effect of suramin in non-diabetic rats. However, as shown in Fig. 3a, b, an intragroup analysis of urinary VEGF-A excretion using a paired samples t test shows that 2 weeks of exposure to suramin leads to a statistically significant increase in urinary VEGF-A excretion of 36% in non-diabetic rats (28 ± 4 pg/mg creatinine vs. 38 ± 5 pg/mg creatinine, p = 0.046) (Fig. 3a) and of 75% in diabetic rats (181 ± 37 pg/mg creatinine vs. 316 ± 8 pg/mg creatinine, p = 0.0322) (Fig. 3b). Figure 4 shows the results of urinary albumin excretion. Two-way ANOVA revealed a significant main effect of diabetes (F1,21 = 137.2, p < 0.0001). No significant main effect of suramin or interaction was present. Albumin excretion was threefold higher in diabetic rats compared with non-diabetic rats (6.8 ± 1.0 µg/mg creatinine vs. 2.3 ± 0.2 µg/mg creatinine, p < 0.0001). Of note, there was a significant correlation between the urinary excretion of albumin and VEGF-A in diabetic rats (r = 0.8817, p < 0.048). The main effects of suramin on total MMP (F1,24 = 0.3476, p = 0.561) and MMP-9 activities (F1,24 = 0.8143, p = 0.3758) were not significant, nor were the diabetes-by-suramin interactions.

Fig. 2 The effects of suramin (10 mg/kg, ip) on VEGF-A concentration in serum (a) and urinary excretion of VEGF-A (b) in non-diabetic and streptozotocin-induced (65 mg/kg, ip) diabetic rats. Non-diabetic and 1-week diabetic rats were injected with PBS (CON and STZ) or suramin (SUR and STZ + SUR) once per week for 2 weeks. The results are presented as individual data points with means. Statistical significance was determined using two-way ANOVA with a Tukey post hoc test, *p < 0.0001 vs. CON, #p < 0.0001 vs. SUR, ^p < 0.0001 vs. STZ.

Discussion

The present study has provided evidence that the urinary excretion of VEGF-A may be pharmacologically modified by suramin.
Suramin, a polysulfonated naphthylurea, is used in laboratories as a broad-spectrum antagonist of P2Rs and in clinics for a wide array of potential applications, from parasitic and viral diseases to cancer, snakebite and autism [17]. In our experiments, the administration of suramin (10 mg/kg, ip) once per week for 2 weeks led to a significant enhancement of the urinary excretion of VEGF-A, both in normoglycemic (by about 36%) and STZ-induced hyperglycaemic rats (by about 60-70%). This finding seems to be of considerable importance for understanding the pathogenesis of diabetic nephropathy and perhaps for extending the pharmacological possibilities of kidney protection in diabetic patients. VEGF-A is a key secreted glycoprotein of the VEGF family of heparin-binding growth factors, which play an important role in the regulation of glomerular structure and function and may also influence the outcome of diabetic kidney disease. The upregulation of VEGF-A in glomeruli is observed in the early stages of diabetes [15], and anti-VEGF-A therapy is based on this observation. In STZ-induced diabetic rats, treatment with monoclonal anti-VEGF antibodies decreased hyperfiltration, albuminuria and glomerular hypertrophy [18]. However, the outcomes of VEGF-A inhibition in experimental diabetes have been conflicting [19]. Regardless, there is strong evidence that VEGF-A plays a pivotal protective role in the pathogenesis of microangiopathic processes [20]. Moreover, the results of genetic-based studies have provided evidence that the upregulation of VEGF-A in diabetic kidneys protects the microvasculature from injury [16]. To achieve significant therapeutic benefits, while taking into account the short half-life and high susceptibility to degradation of VEGF-A in vivo, therapeutic management may require intrarenal administration, which seems unlikely to be achievable in clinical conditions today.
Thus, the therapeutic challenge is to find agents that affect the renal/intraglomerular concentration of VEGF-A and that could be administered to patients on an outpatient basis. Suramin, as reported, accumulates mostly in the kidneys, and its half-life allows it to be administered once per week [21]. It has been previously shown that suramin administered intraperitoneally twice at weekly intervals prevented the rise of 24-h proteinuria and attenuated renal fibrosis and glomerular damage in a remnant kidney model of chronic kidney disease [22]. In our experimental model of early-stage diabetes, we noticed significant changes in the urinary excretion of albumin not only in diabetic rats but also in normoglycemic rats, both treated with suramin. Our observation is supported by results obtained from db/db mice, in which delayed administration of a single dose of suramin did not affect protein excretion in 9- and 17-week-old mice, suggesting that suramin did not affect the number of podocytes or the podocyte-specific proteins involved in the pathogenesis of albuminuria in this mouse strain [23].

Fig. 3 The effects of suramin (10 mg/kg, ip) on VEGF-A urinary excretion in non-diabetic and streptozotocin-induced (65 mg/kg, ip) diabetic rats. The results are presented as individual data points before, SUR (−), and after 14 days of suramin exposure, SUR (+), in non-diabetic (a) and diabetic (b) rats. Statistical significance was determined using a paired samples t test, *p as indicated.

Fig. 4 The effect of suramin (10 mg/kg, ip) on urinary albumin excretion in non-diabetic and streptozotocin-induced (65 mg/kg, ip) diabetic rats. Non-diabetic and 1-week diabetic rats were injected with PBS (CON and STZ) or suramin (SUR and STZ + SUR) once per week for 2 weeks. The results are presented as individual data points with means. Statistical significance was determined using two-way ANOVA with a Tukey post hoc test, *p < 0.0001 vs. CON, #p < 0.0001 vs. SUR.
As VEGF-A is mainly produced in podocytes, the increased urinary excretion of VEGF-A indicates increased production of this cytokine in podocytes, from where it passes through the glomerular filter into the lumen of capillaries and interacts there with its receptors located on glomerular endothelial cells. Using a model of VEGF transport against the glomerular filtration flow, it has been calculated that about one-third of podocyte-derived VEGF-A reaches the glomerular endothelial cells via diffusion [13]. Moreover, a recent study has provided evidence of an unexpected positive correlation between single-nephron glomerular filtration rate and VEGF-A back-diffusion [14]. Taking into account that suramin enhances the urinary excretion of VEGF-A by 60-70% in diabetic rats, one may expect that the local concentration in the glomerulus may be increased by about 20%. We measured the VEGF-A concentration in blood and did not find significant differences in this parameter between the experimental groups. This suggests a possible effect of suramin on local intrarenal rather than systemic VEGF-A concentrations, which is consistent with the pharmacokinetic property of suramin accumulating in the kidneys. One should also take into account the results of experiments carried out on cultured endothelial cells, which may suggest that suramin can act as a factor limiting the process of angiogenesis stimulated by VEGF-A [24]. The mechanism by which suramin induces the elevation of VEGF-A excretion in urine is open to question. The bioavailability of VEGF-A is regulated by its processing by MMPs, especially MMP-9, which is expressed in podocytes and regulated by the P2X7R upregulated in diabetes [5,9,25]. We observed increased activities of MMP and MMP-9 in diabetic rats, which is consistent with clinical studies in which, for example, the level of urinary MMP-9 is increased in patients with diabetic nephropathy and positively correlates with the clinical stage of the disease [10].
The increase in MMP-9 activity may be responsible for the diabetes-associated enhanced renal production of VEGF-A [10]. In our experiments, the urinary activities of MMP and MMP-9, both in normoglycemic and hyperglycaemic rats, were not affected by suramin. The pharmacology data for P2Rs suggest that a direct in vivo action of suramin on MMPs via P2X7R is hardly possible because of its low potency, namely IC50 ~70 µM at human P2X7 receptors and IC50 > 300 µM at rat P2X7 receptors [26,27]. It seems possible that the suramin concentration in the blood of our experimental rats did not reach levels adequate to block P2X7Rs, because it has been shown that the administration of suramin at a dose of 10 mg/kg twice per week yields plasma concentrations below 50 µM [28]. In conclusion, our results show that suramin administered once per week enhanced the already increased VEGF-A excretion in diabetic rats and was also able to increase the physiological level of VEGF-A excretion in normoglycemic rats. We believe that our observations may contribute to extending the therapeutic possibilities in diabetic patients aimed at the protection of the glomerular microcirculation. This, in turn, should result in slowing down the pathological processes underlying the development of diabetic kidney disease.

The effects of suramin (10 mg/kg, ip) on the urinary activities of total matrix metalloproteinases (MMP) (A) and matrix metalloproteinase-9 (MMP-9) (B) in non-diabetic and streptozotocin-induced (65 mg/kg, ip) diabetic rats. Non-diabetic and 1-week diabetic rats were injected with PBS (CON and STZ) or suramin (SUR and STZ + SUR) once per week for 2 weeks. The results are presented as individual data points with means. Statistical significance was determined using two-way ANOVA with a Tukey post hoc test, *p < 0.0001 vs. CON, #p < 0.0001 vs. SUR.
The expansion of chemical space in 1826 and in the 1840s prompted the convergence to the periodic system

Significance

The number and diversity of substances constituting the chemical space triggered, in two important steps, the convergence of the periodic system toward a stable backbone structure eventually unveiled in the 1860s. The first step occurred in 1826, and the second between 1835 and 1845. Interestingly, the salient features of the periodic system of the 1860s can be detected as early as the 1840s, even when considering the effect of disagreement regarding the determination of atomic weights. The methods presented here become instrumental for studying the further evolution of the periodic system and for pondering its current shape.

We retrieved 21,521 single-step reactions with publication year before 1869 from Reaxys, accounting for 11,451 substances. By eliminating substances with unreliable formulae, e.g. those holding intervals as stoichiometric coefficients, such as Ta1.15−1.35S2, and by manually curating 245 formulae with non-integer amounts of crystallisation species, e.g. CdCO3·0.5H2O, curated as CCdHO3.5, we ended up with 11,356 substances. We associated each of these substances with its earliest publication year (in a chemical reaction) and with its molecular formula.

Disregarded elements and their separations

Er and Yt, along with In, were elements whose identity was questioned by Mendeleev and expressed as ?Er, ?Yt and ?In in his table (1). Yt was the symbol used until 1920 for Y (2), and the first Y (or Yt) reaction is from 1872. Thus, neither Mendeleev nor Meyer had clear information about the element. Er was also problematic. By 1868 it was unknown that Er was actually a mixture of an element later (1878) coined Er and of Yb. Er was separated one year later into Ho and the current Er and Tm. The same year, Yb was found to be accompanied by the current Sc. In 1886 Ho was separated into the current Ho and Dy.
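The curation step described above (folding a fractional crystallisation species into a single formula) can be sketched as follows. `curate` is a hypothetical helper, not part of the authors' pipeline; it only illustrates merging element counts and emitting an alphabetically ordered formula, as in the CdCO3·0.5H2O → CCdHO3.5 example.

```python
import re
from collections import defaultdict

def parse_formula(f):
    """Element counts of a simple formula such as 'CdCO3' or 'H2O'."""
    counts = defaultdict(float)
    for sym, num in re.findall(r"([A-Z][a-z]?)(\d*\.?\d*)", f):
        counts[sym] += float(num) if num else 1.0
    return counts

def curate(f):
    """Merge hydrate/crystallisation parts like 'CdCO3*0.5H2O' into one
    alphabetically ordered formula with summed (possibly fractional) counts."""
    total = defaultdict(float)
    for part in f.split("*"):
        m = re.match(r"(\d*\.?\d*)(.*)", part)   # leading multiplier, e.g. 0.5
        mult = float(m.group(1)) if m.group(1) else 1.0
        for sym, n in parse_formula(m.group(2)).items():
            total[sym] += mult * n
    def fmt(n):
        if n == 1:
            return ""
        return str(int(n)) if n == int(n) else str(n)
    return "".join(sym + fmt(n) for sym, n in sorted(total.items()))

print(curate("CdCO3*0.5H2O"))  # CCdHO3.5
```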
The 1879 Yb was found to be a mixture of the current Lu and Yb in 1907 (3). It is now known that Di, reported by Mendeleev as an element, was found to be a mixture of Di and Sm in 1879. One year later, Sm was separated into Sm and the current Gd; this Sm was found by 1901 to be made of the current Eu and Sm. The 1879 Di turned out to be a mixture of the current Pr and Nd in 1885. Therefore, we excluded Er, Yt and Di from our analysis, and all the study is based on our findings for the 60 elements shown in Figure 1a (main text).

Evolution of some molecular fragments

We determined the temporal appearance of the molecular fragments depicted in Figure S4 by exploring the connection tables of the compounds reported in the database between 1800 and 1869. A connection table is a "listing of atoms and bonds, and other data, in tabular form" (5).

Figure S4. In the inset, M stands for a metal, with M = {Li, Be, Al, Si, Fe, Co, Zn, As, Rh, Sb, Pt, Hg, Tl, Pb, Bi}.

Quantifying similarity among chemical elements

We quantified the similarity of element x regarding element y as the fraction of substances of x in whose formulae x can be replaced by y, yielding a formula that is part of the chemical space. Hence, for an element x having s_x substances in the chemical space, these substances are gathered in the multiset F_x of arranged formulae, where the i-th entry is the arranged formula of substance i containing element x, with formula multiplicity m_x(i). Arranged formulae are assigned to a reference element, whose similarity regarding other elements is to be calculated. By the multiplicity of a formula is meant the number of times the formula shows up in the multiset, that is, the number of times the formula is found in the chemical space of element x. With the lists of arranged formulae for elements x and y, we can calculate s(x → y) as the fraction of formulae in F_x (counted with multiplicity) that, upon substituting y for x, belong to the chemical space. As |F_x| amounts to counting the multiplicities of arranged formulae of x, |F_x| = Σ_i m_x(i).
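The similarity measure s(x → y) described above can be sketched as follows, under the simplifying assumption that formulae are represented as sorted (symbol, count) tuples and the chemical space as a list (multiset) of them. The toy chemical space below is invented for illustration and is not Reaxys data.

```python
from collections import Counter

def substitute(formula, x, y):
    """Replace element x by y in a formula given as a sorted tuple of
    (symbol, count) pairs, merging counts if y is already present."""
    merged = Counter()
    for sym, n in formula:
        merged[y if sym == x else sym] += n
    return tuple(sorted(merged.items()))

def similarity(space, x, y):
    """s(x -> y): fraction of x-containing formulae in the chemical space
    (counted with multiplicity) that, after substituting y for x, also
    occur in the space."""
    space_set = set(space)
    fx = [f for f in space if any(sym == x for sym, _ in f)]
    hits = sum(1 for f in fx if substitute(f, x, y) in space_set)
    return hits / len(fx)

# Toy chemical space, invented for illustration.
space = [
    (("Cl", 1), ("Na", 1)),             # NaCl
    (("Cl", 1), ("K", 1)),              # KCl
    (("Na", 2), ("O", 1)),              # Na2O
    (("K", 2), ("O", 1)),               # K2O
    (("N", 1), ("Na", 1), ("O", 3)),    # NaNO3, with no KNO3 analogue
]
print(similarity(space, "Na", "K"))  # 2 of the 3 Na formulae survive: 0.666...
```

Representing formulae as (symbol, count) tuples rather than strings avoids parsing ambiguities such as the "Na" in "NaNO3" overlapping with "N".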
6 Similarity values among chemical elements

Figure S7: Systems of chemical elements by Meyer (a: 1864, gathering together his three separate tables; b: 1868; d: 1869/70) (6,7,4) and Mendeleev (c: 1869, rotated and reflected for the sake of comparison with the other SCEs) (1). Element symbols are updated to current notation. Lines and boxes indicate similarities. The complete list of similarities for Mendeleev is found in Table S2. Line widths are proportional to the number of times the similarity is discussed by each author. Line colours are used only for the sake of clarity (8). Red entries correspond to similarities Mendeleev thought did not exist and the blue one to "not so well studied."

Figure S9: Stability of similarities regarding chemical space size (s%). Each row contains a given similarity observed by considering the chemical space in year y. The stability of each similarity corresponds to the percentage of appearance of such similarity in the sampled space of size s%. Colours associated with this percentage are shown on the right bar. Further details in Materials and Methods (main document).

Contrasting Meyer's and Mendeleev's systems of chemical elements with those of the chemical space (presentist approach)

We took the three systems by Meyer, which were formulated in 1864 (6), 1868 (7) and 1869/70 (4), and the first Mendeleev system, published in 1869 (1). We extracted the similarities among the elements out of these systems and contrasted them with the "most similar" relationships of the systems of elements of the respective years 1863, 1867 and 1868. The time difference of one year between the system of elements of each author and the system of elements of the chemical space accounts for the time required for a chemist to be updated with the literature in the nineteenth century (7,9). Similarities in his 1869 table are further discussed in the paper where they were published.
Therefore, besides the usual vertical similarities (in the 1869 published representation corresponding to rows), we also included those similarities mentioned by Meyer (4), plus the transition metal ones: Mn, Ru, Os, Fe, Rh, Ir and Co=Ni, Pd, Pt (Figure S7). Mendeleev thoroughly discussed the similarities and even some lacks of similarity (8), both of which are listed in Table S2.

Figure S10: Chemical elements used to build up the systems of elements of the nine nineteenth-century chemists. Elements known by the year discussed in each table are shown in black, while elements undiscovered then but known by 1869 are in grey. In red, mixtures that were thought to be elements.

Disregarded elements correspond to those not having substances in the database participating in single-step chemical reactions (Section 1). So, the ratios we are interested in approximating as simple ratios are either 0.5077 or 1.9695. We say that 0.5077 must be expressed by a fraction f of the form x/y, such that 0 < f ≤ 1. Likewise, 1.9695 can be decomposed and approximated by a fraction of the form 1 + x/y. Having selected the order of the Farey sequence to work with, we then proceed to devise a way to quantify the accuracy of the approximation of the ratio r by the fraction f. We, therefore, calculate the relative error of the approximation. As F_200 has 12,233 fractions (the number of fractions being |F_n| = n(n+3)/2 − Σ_{k=2}^{n} |F_{⌊n/k⌋}| (22)), we set up an order in which to explore those fractions, based on the aim of finding simple fractions. That is, we need fractions x/y such that both x and y are small whole numbers. We quantify such "simplicity" of fractions by their associated "area" x × y. The smaller the area of a fraction, the simpler the fraction is. Hence, we order the F_200 fractions by non-decreasing order of their area, that is, F_200 is arranged as (0/1, 1/1, 1/2, 2/1, 1/3, 3/1, ..., 199/200, 200/199). According to this order, we quantify the relative error of the approximation.
To decide which fraction better approximates the ratio under study (in this example either 0.5077 or 0.9695), we further need a stopping criterion indicating the amount of error to be allowed (the tolerance). We selected 20 different values of tolerance τ, from 1% to 20% of relative error. Hence, the best fraction approximating the given ratio is the simplest fraction with error(r, f) ≤ τ. For each τ we have a best approximating fraction.

14 Similarities in the SCE of 1868 and their relationships with those of each chemist's SCE

For every chemist publishing a set of atomic weights in year y, known Reaxys substances (S_{y−1}) up to year y − 1 (inclusive) were retrieved and the corresponding SCE P_{y−1} was obtained (see Figure 2 (main text)). Formulae of substances S_{y−1} were approximated with 20 different tolerance values (τ), each τ yielding an SCE with similarities gathered in P^τ_{y−1} (Section 13).

Figure S12: Fraction of similarities observed by a chemist's space with tolerance τ in year y − 1 that are observed in 1868, calculated as |P^τ_{y−1} ∩ P_{1868}| / |P^τ_{y−1}|. The 20 similarity values (coloured dots) for each chemist are gathered together in a violin plot. For the sake of comparison, the similarity |P_{y−1} ∩ P_{1868}| / |P_{y−1}| is depicted as a black dot.

According to Figure S12, differences in the systems of atomic weights appear as a major issue in the construction of SCEs during the early years of the century. For instance, in Dalton's case (1810), strong perturbations on the formulae accounting for differences in atomic weights (low tolerances) did not produce any 1868 similarity, while tiny perturbations associated with high tolerances made about 30% of the resulting similarities match those observed in 1868 (Figure S12).
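The area-ordered search over Farey-like fractions described above can be sketched as follows. This is a brute-force illustration, not the authors' implementation; `farey_by_area` and `best_fraction` are hypothetical helpers, and the enumeration includes reciprocals (2/1, 3/1, ...) to match the ordering (0/1, 1/1, 1/2, 2/1, ...) quoted in the text.

```python
from math import gcd

def farey_by_area(n=200):
    """All reduced fractions x/y with 1 <= x, y <= n (plus 0/1),
    ordered by non-decreasing 'area' x*y, i.e. simplest first."""
    fracs = [(0, 1)] + [(x, y) for x in range(1, n + 1)
                        for y in range(1, n + 1) if gcd(x, y) == 1]
    # Break area ties by numerator so 1/2 precedes 2/1, as in the text.
    return sorted(fracs, key=lambda f: (f[0] * f[1], f[0]))

def best_fraction(r, tol, n=200):
    """First (simplest) fraction whose relative error w.r.t. r is <= tol."""
    for x, y in farey_by_area(n):
        if abs(x / y - r) / r <= tol:
            return x, y
    return None  # no fraction within tolerance

print(best_fraction(0.5077, 0.02))  # (1, 2): 1/2 is within 2% of 0.5077
```

At a 1% tolerance, 1/2 is rejected (its relative error is about 1.5%) and the search continues to a larger-area, more accurate fraction; this is exactly how smaller tolerances yield less simple approximations.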
Interestingly, this is a larger similarity than that of the SCE obtained with the unperturbed chemical space (our modern formulae, black dot in Dalton's violin in Figure S12), which means that Dalton's data and assumptions regarding atomic weights could actually have been used to improve the SCE. A similar behaviour is observed for Berzelius (1819) (Figure S12). These behaviours especially occurred before 1830, as a consequence of the large number of similarities resulting from those exploratory times. For instance, the black dot in
Role and Status of English and Other Languages in Nepal

This paper analyses the role and status of English and other languages in Nepal, and discusses the attitudes of several agents towards English and other languages when used in domains such as education, media and business. Nepal is a culturally and linguistically diversified country and has undergone various socio-political changes in a very short span of time, beginning primarily in 1950. These changes include the abolition of the Panchayat, a system in which the king ruled directly, which led to a democratic country, and the end of a decade-long civil war as well as the abolition of the monarchy, which made the country a federal republic. These socio-political changes have had a direct and significant impact on language planning and policy. The official language, Nepali, and the international language, English, are the dominant languages in Nepal, and in many cases they overshadow the promotion of other vernacular languages. As a result, a majority of people opt for these dominant languages, overlooking their own indigenous linguistic wealth. In this paper, as a concluding remark, I also argue that plans followed by pragmatic measures are needed to uplift the status of the majority of other languages in Nepal.

Introduction

Nepal, a small Himalayan country, spreads across a total of 147,181 square kilometres. It lies between the two big and economically powerful giants of Asia: China in the north and India in the east, west and south. It is a landlocked country where 122 languages are spoken (Ethnologue, 2016). The reported total number of languages varies, as the Central Bureau of Statistics, Nepal (2011) records 123 languages in total. All these languages have been divided broadly into four language families, except Kusunda, which is a language isolate.
The languages are Indo-European, spoken by 82.10 percent; Sino-Tibetan, by 17.30 percent; Austro-Asiatic, by 0.19 percent; and Dravidian, by 0.13 percent of the total population of Nepal, that is, 26,494,504 (Yadav, n.d.). Nepali is the language spoken by the highest number of speakers, 11,826,953 (Population Census, 2011), and belongs to the Indo-European family; it is followed by Maithili (3,092,530 speakers), Bhojpuri (1,584,958 speakers), Tharu (1,529,875 speakers) and Tamang (1,353,311 speakers), which are the languages spoken by over one million people (Yadav, n.d.). The first three of these languages belong to the Indo-European family, whereas the Tamang language belongs to the Sino-Tibetan family. Gadhawali is the least spoken language, recording only 38 native speakers. As far as the English language is concerned, the Population Census, 2011 records 2,032 speakers using English as a mother tongue in Nepal. As regards Nepal's geographical structure, it is divided into the Terai, the southern belt; the Hilly region, the midland; and the Mountain region, the northern belt bordering China, which is the least populated region of Nepal. The population is dense in the southern belt, as it is plain land with a plethora of industries, and so it is in the hilly region, where the capital and other small valleys are located. The capital city, Kathmandu, has people speaking almost all the languages, since many people from all parts of Nepal migrated to this city once the civil war broke out in 1995; however, Newari speakers are considered the native people of this city. In the Terai belt, Nepali, Maithili, Bhojpuri and Tharu speakers constitute the largest numbers, and in the Mountain belt, there are mostly Nepali, Tamang and Sherpa speakers (Population Census, 2011).
Despite Nepal's relatively small geographical area and its large number of languages, the international language English is considered a dominant language in this country due to its massive spread and use in media, education, diplomacy and tourism. "The spread of English across sectors and regions is rapid and systematic… has reached the lower strata of the population in urban, as well as rural regions" (Giri, 2009, p. 93). English is the most preferred language, and people working in all areas put effort into speaking English, creating their own contexts. Eagle (1999) states, "one encounters street peddlers, bicycle rickshaw pullers, taxi drivers, trekking guides, porters and street children who speak surprisingly fluent English. Most of them are unschooled" (p. 308). This still holds true today. The priority has now extended from common people to the intellectuals. Recognising the strength of English in Nepal, the Second National Convention of Teachers of English recommended that English, being the only language of education and communication, should be given due credit in language policy documents and that funds should be allocated accordingly (Yadav, as cited in Eagle, 1999). In the following sections, I will describe the status and roles of English and other languages.

The status of English in Nepal

Kachru (1992), referring to the sociolinguistic profile of English, draws three concentric circles: the inner circle, the outer circle and the expanding circle. He states that the inner circle refers to the context of core countries in which English is spoken as the first language. The USA, the UK, New Zealand, Canada and Australia fall in this circle.
Similarly, the outer circle comprises the linguistic situation of those countries that have institutionalized English in their regions, having passed through colonization, e.g., India, Ghana, Bangladesh, Kenya, Pakistan and Nigeria, among others. The ultimate circle, i.e., the expanding circle, represents the large linguistic context that includes the countries which treat English as a Foreign Language (EFL) and use it extensively. Nepal falls in the expanding circle of Kachru's concentric division. In 1892, the first contact that the people of Nepal, particularly the elite, made with English was through the first school, established then to give an English education to Rana children (The Times of India, 2011), but it did not spread, as the Rana regime focused on educating only Ranas. Giri (2010) mentions, "English soon became the symbol of status, power and privileges, and a means to divide people into the rulers and the ruled" (p. 93), which the Ranas enjoyed during their time. He further notes that English only came to formal education at the beginning of the twentieth century. It primarily began flourishing after 1990, in the changed political context. The schools followed the British education system, which marched in step with the English education system of India, where a goal of education was to yield people having English in tastes, in opinions, in morals and in intellect (Awasthi, as cited in Giri, 2010). Then, from 1990 onwards, private schools started to mushroom, carrying a kind of brand that read 'English Medium School'. In recent years, the medium of instruction in private schools has been English and, following this trend, to compete with private ones, the state-funded schools have started shifting to English Medium Instruction (EMI). In this regard, referring to the entire Asian context, Phillipson (1992) states, "English has retained its privileged position in the education process in Asia" (p. 28), and Nepal is warmly welcoming this trend.
This same tendency is massively present in media practice too. There are many programmes that are run only in English, and many more are run in both English and Nepali, the dominant languages in the present-day sociolinguistic context. Likewise, since Nepal's economy is largely dependent upon tourism, most Nepalese, even those living in hinterlands, have very basic English that serves the purpose of mutual intelligibility in communicating with foreign tourists. Crystal (2003) claims that English has no official status in South Asian contexts like Nepal; however, it is used as a medium of international communication. He further says, "Increasingly it is being perceived by young South Asians as the language of cultural modernity" (p. 29), and this is true in Nepal. It is spoken by many Nepali youth as a matter of pride. For them, speaking English is a new prestige-related fad. In mass media, education and business too, English is given preference over other languages. Pointing out the importance of English in a country like Nepal, Phillipson (1992, p. 30) states:

The importance of English in such African and Asian Periphery-English countries is two-fold. English has a dominant role internally, occupying space that other languages could possibly fill. English is also the key external link, in politics, commerce, science, technology, military alliances, entertainment and tourism.

It is obvious that, since Nepal had a monolingual language policy for quite a long time before 1990, English occupied the position of other languages and was treated more like a second language in many contexts; it is indeed a key link language in all the domains of the sociolinguistic context. If so, what about the other languages, and how were they treated?
The status of other languages

Like English, the use of Nepali is widespread across most domains, such as education, mass media, business and the arts. It is the only official language as of now. Nepal experienced democratic periods twice, in 1950-1960 and 1990-2002. However, as regards language policy, there was no liberal planning; rather, a Nepali-only language policy was adopted, banning the indigenous languages and varieties other than the standard one (Phyak, 2013). It was a kind of biased treatment of the other vernacular languages. "Moreover, languages other than Nepali were assumed to be 'barbarian', 'uncivilized' and 'worthless'" (Sachdev, as cited in Phyak, 2013, p. 130). The treatment of linguistic issues in the 1990 constitution is quite vague. It mentions:

1. The Nepali language in the Devnagari script is the language of the nation of Nepal. The Nepali language shall be the official language.
2. All the languages spoken as the mother tongue in the various parts of Nepal are the national languages of Nepal. (Part 1, Article 6)

This implies that Nepali alone is the language of the nation. The constitution did not clearly articulate what it means to be a national language. It seems to have played with some linguistic terms rather than giving any real status to the languages. A further dispute that Giri (2010) mentions concerns the 1999 verdict in which the Supreme Court declared the use of Newari (also called Nepal Bhasha) as an official language in Kathmandu Metropolitan City, and of Maithili, another local language, in the Rajbiraj and Janakpur city councils, unconstitutional. This shows the unfair treatment of the local/minority languages during this time. Because of such practices, "the identity discourse" against the "traditionalist discourse" (Phyak, 2013, p. 130), advocating for the recognition of minority languages, was loud enough.
As a result, the Interim Constitution of Nepal 2007 addressed this issue, as is clearly seen in Article 5, clause 3: Language of the nation: (1) All the languages spoken as the mother tongue in Nepal are the national languages of Nepal. (2) The Nepali Language in Devnagari script shall be the official language. (3) Notwithstanding anything contained in clause (2), it shall not be deemed to have hindered to use the mother language in local bodies and offices. State shall translate the languages so used to an official working language and maintain record thereon. (Part 1, Article 5) The articles of the Interim Constitution of Nepal related to language policy became liberal and opened the door for the use of minority languages in offices; they similarly opened the possibility of Multilingual Education (MLE), which was practiced from 2007. However, the teaching-learning materials and resources have not been sufficiently developed to promote MLE. As a result, this simply seems to be a sketchy provision for the local indigenous languages. It again reflects the same hierarchy, with Nepali as the official language and English as the modern language enjoying dominance in Nepal.
The role of English and other languages
English, being a very widely used international language in Nepal, has a bigger role in education, media, business and tourism (Giri, 2010).
Role in education
This is the age of technological advancement and, undoubtedly, English has a significant role to play. Nepal, at its own pace, is slowly walking towards such advancement; as a result, the people and academic institutions of this country have given a high priority to the English language. All the private schools are English-medium now, and even some state-owned schools have already started this trend. Most state-owned schools' medium of instruction is Nepali. There are only a few schools and few resources that target MLE.
It seems the priority given to mother tongues in education is very limited. Consequently, English and Nepali have the bigger and more significant roles in education. Phyak (2013, p. 131) states: Due to its instrumental value, English is perceived as the most important language (even more important than Nepali) in education, mass media, and other job markets (especially due to technological requirements). However big a role they play, Nepali and English should not be treated as "killer languages" (Phillipson, 1997, p. 243). Moreover, there has to be planning to strengthen other languages by using them in their own local contexts. Eagle (1999) claims the language of diplomacy and international affairs in Nepal is English, and communication with other countries is done in the same language. She adds that Nepal receives a great deal of foreign aid in domains such as education, communication, engineering and medicine from foreign countries and from institutions like the World Bank and the United Nations, and the language used is English. That is true, whereas in business at the local level, Nepali is used as a local link language. Similarly, in mass media too, using English seems to be a kind of fashion: several programmes are run in English and plenty of programmes are run in Nepali. State-run media run very few programmes in local languages, and privately managed media do not have them among their priorities. The state-owned paper Gorkhapatra has recently dedicated a page, once a month, for local languages to publish their news and articles. This seems to be a praiseworthy step towards giving some recognition to local languages.
Role in tourism
Tourism is considered to be a major source of income in Nepal. "Since 1950, the tourist industry has grown rapidly in Nepal, accounting for 30 percent of the total foreign exchange entering the country" (Jha as cited in Eagle, 1999, p. 315).
Therefore, people in almost all touristic destinations speak basic English with a degree of mutual intelligibility. It begins at the airport and continues up to the rural hinterlands. English has played a subsistence role at the community level. These days, some travel agencies focus on other international languages like Chinese and Spanish too. Likewise, at the local level, Nepali is used for communication with local tourists. There is no noticeable role of local languages in tourism.
Attitude towards English and other languages
Phyak (2013) talks about the monocentric nationalism that remained dominant until 2006 and further maintains that, as Nepal was dominated by a monolingual policy or Nepalitization/Nepalification ideology during this time, it amounted to an internal colonization by the Nepali language overruling all other local languages, which were considered a threat to national unity and other public domains. In the same line, Eagle (1999) mentions: The choice of Nepali as the sole national language of Nepal and the sole language to be used in the school system was, and continues to be, highly controversial. The central government rationale for this decision was based on the fact that Nepali had been the lingua franca of the country for at least 150 years (1999, pp. 288-289). As the monolingual policy was adopted, there was a massive spread of Nepali as the language of the ruling class, "which reinforces a stifling, oppressive and fatalistic caste system" (Eagle, 1999, p. 292). This has led to serious repercussions. The monolingual policy systematically suppressed marginalized local languages, such that speakers of these languages lost faith in their languages and wanted to join the mainstream to benefit from, and adopt, cultures which are not theirs (Giri, 2010). This seems to be a kind of paradigm shift which can infuse "cultural anarchism" (Giri, 2010, p. 88) at any time.
Phyak (2013) mentions that, even in the changing context, the policy makers' will to remain in the status quo by not recognizing the value of literacy in mother tongues reflects a mentality of hegemony. It belongs to a few dominant elites in political, economic, educational and linguistic power, and it maximizes social exclusion and inequality as the local languages are sidelined. This is their "elite-injected backdoor language policy" (Phyak, 2013, p. 140). He says the Ministry of Education has introduced a new language-in-education policy which states that local languages should be used as a medium of instruction up to grade three, but there is a lack of scholarship exploring how this policy is implemented and supported by agents like teachers, parents and students. Although there is a kind of ideological space for the minority languages in the changing socio-political context after 2007, the private and public schools have given a bigger room for the expansion of Nepali and English. English and Nepali are enjoying their heyday in Nepal as of now, and other vernacular languages are struggling for their status. The communities take their languages as a commodity passed down from their elder generations rather than as linguistic affluence. Everyone seems to be joining the craze for learning and speaking English, and parents do not care about transferring their local tongues to their children; rather, they push their children to be proficient in English and Nepali. Phyak (2013) mentions that there has to be a critical dialogue between agents like parents, students and teachers who can influence the policy-making and implementation process.
Conclusion
It is evident that the spread of English is rapid in Nepal due to its significant role in education, diplomacy, mass media, technology and tourism. Nepalese people's attitude is shifting, preferring to speak more English compared to any other language.
Similarly, as the Nepali language has official status, there is no real threat to Nepali; the threat is to the other vernacular languages, as Ethnologue (2016) records that, of the 122 languages of Nepal, 32 are in a vigorous state, 54 are in trouble and 8 are dying. Therefore, without negating the fact that English and Nepali have bigger roles in this changing context, the government of Nepal has to start planning to standardize some languages, probably by codifying them (creating lexicographies), introducing them in education, at least beginning in monolingual communities, and using them in local media. If, on the one hand, the discourse on the identity issue grows and, on the other, the policy and planning of the country simply overlook it, it may bring a kind of linguistic tension which could provoke anarchism in the community in future. So the state needs to review its plans and practices in time to protect the other vernacular languages without diminishing the role of Nepali as the official language and English as the international language. The policy on paper may not be enough; it has to be slowly put into practice. Linguistic and cultural diversity is the affluence of Nepal and the Nepalese, and it has to be preserved well.
The multivariate physical activity signature associated with metabolic health in children and youth: An International Children's Accelerometry Database (ICAD) analysis. There is solid evidence for an association between physical activity and metabolic health outcomes in children and youth, but for methodological reasons most studies describe the intensity spectrum using only a few summary measures. We aimed to determine the multivariate physical activity intensity signature associated with metabolic health in a large and diverse sample of children and youth, by investigating the association pattern for the entire physical activity intensity spectrum. We used pooled data from 11 studies and 11,853 participants aged 5.8-18.4 years included in the International Children's Accelerometry Database. We derived 14 accelerometry-derived (ActiGraph) physical activity variables covering the intensity spectrum (from 0-99 to ≥8000 counts per minute). To handle the multicollinearity among these variables, we used multivariate pattern analysis to establish the associations with indices of metabolic health (abdominal fatness, insulin sensitivity, lipid metabolism, blood pressure). A composite metabolic health score was used as the main outcome variable. Associations with the composite metabolic health score were weak for sedentary time and light physical activity, but gradually strengthened with increasing time spent in moderate and vigorous intensities (up to 4000-5000 counts per minute). Association patterns were fairly consistent across sex and age groups, but varied across different metabolic health outcomes. This novel analytic approach suggests that vigorous intensity, rather than less intense activities or sedentary behavior, is related to metabolic health in children and youth.
Introduction
There is clear evidence of favorable associations between physical activity (PA) and metabolic health outcomes in children.
While associations are evident for moderate-to-vigorous PA (MVPA) and vigorous PA (VPA), associations appear to be weak for light PA (LPA) and sedentary time (SED) (Ekelund et al., 2012; Andersen et al., 2006; Janssen and LeBlanc, 2010; Poitras et al., 2016; Cliff et al., 2016; Aadland et al., 2018a). However, few studies include the entire PA intensity spectrum in their analyses, and many studies summarize all intensities above walking into one category (MVPA), which limits information about the importance of specific intensities in the moderate to vigorous range. Capturing the entire intensity spectrum is important to avoid loss of information and residual confounding (Poitras et al., 2016; Aadland et al., 2018a; van der Ploeg and Hillsdon, 2017). Accordingly, associations across the entire PA intensity spectrum, including SED, should be examined to obtain a complete picture and to ease interpretation of associations between PA and health outcomes. This aim has traditionally been difficult to address, as researchers have mainly relied on statistical methods that cannot handle multicollinearity among the explanatory variables. Aadland et al. (2018a) recently applied multivariate pattern analysis to address the multicollinearity challenge of accelerometer-derived PA data. This analytical approach provides a solution to limitations imposed by traditional statistical approaches, as it can model any number of completely multicollinear variables (Wold et al., 1984). Thus, multivariate pattern analysis allows for modelling multiple variables across the entire PA intensity spectrum and hence uses the rich information embedded in the acceleration signal, which can provide greatly improved information from accelerometry (Aadland et al., 2018a; Aadland et al., 2019a).
The recent application of multivariate pattern analysis to the field of PA epidemiology provides promising results in terms of how researchers may better exploit and model accelerometry-derived PA data. However, the previous studies (Aadland et al., 2018a; Aadland et al., 2019a) only included one cohort of 10-year-old children. Thus, these findings need verification and extension using a larger and more diverse sample of children. Therefore, the aim of the present study was to determine the PA intensity signatures associated with metabolic health outcomes in the International Children's Accelerometry Database (ICAD), which includes a large sample of children aged 6-18 years from culturally diverse settings.
Study design
The International Children's Accelerometry Database (ICAD) (http://www.mrc-epid.cam.ac.uk/research/studies/icad/) is a database that contains pooled data on accelerometer-determined PA, SED, and related health outcomes in children and adolescents from 21 studies from 10 different countries. The aims, selection and design of studies, as well as data reduction procedures and methods of the ICAD database, have been described elsewhere (Sherar et al., 2011).
Participants
In the present analyses, we used data from children and adolescents aged 6-18 years from 11 studies from Europe (EYHS Denmark, Estonia, Norway, and Portugal (Andersen et al., 2006), ALSPAC (Golding et al., 2001), CoSCIS (Eiberg et al., 2005), KISS (Zahner et al., 2006), PANCS (Kolle et al., 2010)), the United States (NHANES 2003-2004 (National Health and Nutrition Examination Survey, 2005), NHANES 2005 (National Health and Nutrition Examination Survey, 2010)), and Brazil (Pelotas (Victora et al., 2007)). Data were collected in 1997-2007 and studies included cross-sectional, longitudinal, and intervention designs. A detailed overview of the studies is provided by Sherar et al.
(2011). When several waves of data were available (i.e., when participants were measured at multiple time points), we included only the first wave to limit the sample to unique observations. The included studies provided data on PA and at least one of the metabolic risk factors of interest. All participants and/or their parents/legal guardians provided informed consent and all study protocols were approved by local ethical committees.
Physical activity
A detailed description of the assessment and data reduction procedures for PA has been published previously (Sherar et al., 2011). Briefly, accelerometer data for the vertical axis from all studies were reprocessed and reanalyzed for unification across studies using the KineSoft software version 3.3.20 (Loughborough, UK). Data were reintegrated to 60-s epochs and non-wear periods of at least 60 min of consecutive zeros (allowing for two minutes of non-zero interruptions) were excluded. Inclusion criteria were a valid wear time of 10-16 h/day (i.e., excluding individuals with overnight wear) and ≥4 days/week.
Metabolic health measures
Height and weight were measured using standardized methods in all studies. We calculated body mass index (BMI; kg/m²). For descriptive purposes, we further reported the proportions of individuals being overweight and obese based on the age- and sex-specific cut-offs suggested by Cole et al. (2000).
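The accelerometer screening rules described above (non-wear as at least 60 min of consecutive zero counts allowing up to two non-zero interruption minutes; valid days of 10-16 h of wear) can be sketched as follows. This is a simplified illustration, not the KineSoft implementation: the function names are ours, and the exact treatment of interruption minutes varies between published algorithms.

```python
def flag_nonwear(counts, min_len=60, max_interrupt=2):
    """Flag candidate non-wear minutes: runs of >= min_len minutes of zero
    counts, allowing up to max_interrupt non-zero interruption minutes."""
    n = len(counts)
    nonwear = [False] * n
    i = 0
    while i < n:
        if counts[i] != 0:
            i += 1
            continue
        # Grow a candidate non-wear bout from minute i.
        j, interrupts, last_zero = i, 0, i
        while j < n:
            if counts[j] == 0:
                last_zero = j
            else:
                interrupts += 1
                if interrupts > max_interrupt:
                    break
            j += 1
        if last_zero - i + 1 >= min_len:
            for k in range(i, last_zero + 1):
                nonwear[k] = True
        i = max(j, i + 1)
    return nonwear

def valid_day(counts, min_wear=600, max_wear=960):
    """A valid day has 10-16 h (600-960 min) of wear time."""
    wear = sum(1 for nw in flag_nonwear(counts) if not nw)
    return min_wear <= wear <= max_wear
```

For example, a day of 700 min of non-zero counts followed by a long run of zeros is classified as valid, while an all-zero day is excluded entirely.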
We used seven cardio-metabolic variables as outcomes: abdominal adiposity (waist circumference (WC)) and resting systolic blood pressure (SBP) from 11 studies (Andersen et al., 2006; Golding et al., 2001; Eiberg et al., 2005; Zahner et al., 2006; Kolle et al., 2010; National Health and Nutrition Examination Survey, 2005; National Health and Nutrition Examination Survey, 2010; Victora et al., 2007), lipid metabolism (triglycerides (TG), total cholesterol (TC) and high-density lipoprotein (HDL) cholesterol) from 10 studies (Andersen et al., 2006; Golding et al., 2001; Eiberg et al., 2005; Zahner et al., 2006; Kolle et al., 2010; National Health and Nutrition Examination Survey, 2005; National Health and Nutrition Examination Survey, 2010), and glucose metabolism (insulin and glucose) from nine studies (Andersen et al., 2006; Eiberg et al., 2005; Zahner et al., 2006; Kolle et al., 2010; National Health and Nutrition Examination Survey, 2005; National Health and Nutrition Examination Survey, 2010). WC was measured using an anthropometric measurement tape at the height of the umbilicus at the end of a normal expiration, except in the National Health and Nutrition Examination Survey (NHANES), where WC was measured just above the iliac crest at the mid-axillary line (National Health and Nutrition Examination Survey, 2010). The WC:height ratio was used for analysis. Blood pressure was measured during rest using manual (National Health and Nutrition Examination Survey, 2005; National Health and Nutrition Examination Survey, 2010) or automatic (Andersen et al., 2006; Golding et al., 2001; Eiberg et al., 2005; Kolle et al., 2010; Victora et al., 2007) methods. The average of two, three or four recordings was used for analysis. All blood samples were drawn from fasting individuals. We calculated the TC:HDL ratio and the homeostasis model assessment of insulin resistance (HOMA) (Matthews et al., 1985), which were used for the association analyses.
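The derived indices above are simple ratios; the HOMA formula of Matthews et al. (1985) combines fasting glucose (mmol/L) and fasting insulin (µU/mL). A minimal sketch (the function names are ours):

```python
def homa_ir(glucose_mmol_l, insulin_uu_ml):
    """HOMA of insulin resistance (Matthews et al., 1985):
    fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def tc_hdl_ratio(tc_mmol_l, hdl_mmol_l):
    """Total to HDL cholesterol ratio."""
    return tc_mmol_l / hdl_mmol_l

def wc_height_ratio(wc_cm, height_cm):
    """Waist circumference to height ratio."""
    return wc_cm / height_cm
```

For instance, a fasting glucose of 4.5 mmol/L with fasting insulin of 5 µU/mL gives a HOMA-IR of 1.0.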
We calculated a composite metabolic health score as the mean of five variables (WC:height ratio, SBP, TG, TC:HDL ratio, and HOMA). The score was constructed after adjustment of all variables for sex and age by obtaining standardized residuals from linear regression. Similar approaches have been used previously (Andersen et al., 2006; Aadland et al., 2018a). We regard this composite score as the main outcome. Additionally, we performed a sensitivity analysis using a composite score excluding the WC:height ratio, to reduce the influence of fatness on the model. We also analyzed each of the five risk factors individually.
Statistical analyses
Descriptive characteristics were reported as frequencies, means, standard deviations (SD), and medians (time spent in PA intensities). Associations between PA and metabolic health were determined using multivariate pattern analysis. All analyses were adjusted for age and sex by using residuals from linear regression for all outcome variables, including age and sex as independent variables. We also included sensitivity analyses adding cohort as a random effect in a linear mixed model to further adjust for potential differences between studies. We used partial least squares (PLS) regression (Wold et al., 1984) to determine the multivariate association pattern between metabolic health measures (outcome variables) and the PA intensity spectrum (explanatory variables), as shown previously (Aadland et al., 2018a; Aadland et al., 2019b). Briefly, PLS regression decomposes the explanatory variables into a few orthogonal PLS components (latent variables) while maximizing the covariance with the outcome variable. This procedure is able to handle completely multicollinear variables (Wold et al., 1984). Given the strong correlations among the explanatory variables when using a spectrum description of PA (Aadland et al., 2019c), each variable provides limited unique information about the outcome.
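As a concrete illustration of why PLS regression suits this setting, the sketch below fits a minimal single-outcome PLS (NIPALS) model to simulated, strongly collinear "intensity" variables. This is not the Sirius implementation used in the study; the simulated data, dimensions, and effect structure are assumptions for illustration only.

```python
import numpy as np

def pls1(X, y, n_components=3):
    """Minimal single-outcome PLS (NIPALS): decomposes X into a few
    orthogonal latent components that maximize covariance with y."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xr, yr = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr                  # covariance-maximizing weights
        w /= np.linalg.norm(w)
        t = Xr @ w                     # component scores
        tt = t @ t
        p = Xr.T @ t / tt              # X loadings
        q = (yr @ t) / tt              # y loading
        Xr = Xr - np.outer(t, p)       # deflate X
        yr = yr - q * t                # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q  # regression coefficients

# Simulate 14 strongly collinear "intensity" variables (hypothetical data):
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 3))                      # a few latent dimensions
X = base @ rng.normal(size=(3, 14)) + 0.1 * rng.normal(size=(500, 14))
y = X[:, 8] - 0.5 * X[:, 0] + rng.normal(size=500)    # mid-spectrum column drives outcome

B = pls1(X, y)                                        # OLS would be unstable here
yhat = (X - X.mean(axis=0)) @ B + y.mean()
r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Ordinary least squares on these near-collinear columns would produce unstable coefficients, whereas PLS summarizes them in a few latent components; Monte Carlo resampling, as used in the study, would then repeatedly refit the model on random 50% subsets to gauge the stability of the resulting association pattern.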
Thus, their unique contribution to the outcome is neither meaningful nor possible to estimate. Association estimates are therefore not independent of each other (Aadland et al., 2019b), which means the interpretation of associations differs from that of ordinary linear regression. We validated all models using Monte Carlo resampling (Kvalheim et al., 2018) with 100 repetitions, randomly selecting 50% of the observations as an external validation set in each repetition. For each PLS model, we used target projection (Kvalheim and Karstang, 1989; Rajalahti and Kvalheim, 2011) followed by reporting of selectivity ratios with 95% confidence intervals (CIs). These estimates show the direction and explained variance (R²) for each PA intensity variable with the predicted outcome in the multivariate space (Aadland et al., 2019b; Rajalahti et al., 2009a; Rajalahti et al., 2009b). For example, a selectivity ratio of 0.50 and a total model R² of 10% means the variable explains 5% of the actual outcome. Additionally, we reported the associations using unstandardized estimates (Aadland et al., 2019b) to allow for an interpretation of the importance of a higher or lower duration (in minutes/day) among PA intensities. The association patterns related to metabolic health were compared by age group and sex (5.8-11.9-year-old boys and girls and 12.0-18.4-year-old boys and girls) by performing the analyses separately for these four subgroups. The multivariate PA signatures were compared among groups by correlating association patterns using Pearson's r. The multivariate pattern analyses were performed by means of the commercial software Sirius version 11.0 (Pattern Recognition Systems AS, Bergen, Norway).
Fig. 1. The multivariate physical activity signature associated with metabolic health in children and youth.
The composite score includes waist circumference to height ratio, systolic blood pressure, homeostasis model assessment of insulin resistance, total to high-density lipoprotein cholesterol ratio, and triglycerides (a lower score is more favorable). The PLS regression model includes five components and is adjusted for age and sex. The selectivity ratio for each variable is the explained to total variance of the predictive (target projected) component. A negative bar implies that increased physical activity is associated with better metabolic health. R² = explained variance.
Participants' characteristics
We included 11,853 participants in the analyses who provided valid data on age, sex, PA, and at least one outcome variable (Table 1). The overall number of participants varied from 4185 to 11,735 across models for single risk factor outcomes, whereas 4105 children provided data for the composite score (n = 917-1127 for age- and sex-specific groups) (Supplemental Table 1). Total accelerometer wear time was mean (SD) 780 (67) minutes/day (mean 771-795 min/day across sex and age groups) accumulated across a median of six wear days. Time accumulated across the intensities is shown in Supplemental Table 2. Fig. 1 shows the association pattern between the entire PA spectrum and the composite metabolic health score (R² = 4.2%). Associations were very weak for intensities lower than 1000 cpm, but gradually strengthened for intensities from 1000-1499 cpm to 4000-4499 cpm, for which more time spent in PA was associated with better metabolic health. Associations weakened for intensities higher than 4500 cpm. Sensitivity analyses including adjustment for study (R² = 2.7%, Supplemental Fig. 1) or with exclusion of the WC:height ratio from the composite score (R² = 3.4%, Supplemental Fig. 2) did not alter the association patterns (r between these association patterns and the pattern shown in Fig. 1 = 0.98 and 0.94, respectively).
Associations between PA and metabolic health
Association patterns between the entire PA spectrum and the composite metabolic health score were fairly consistent across sex and age groups (R² = 3.3-7.0%; r for association patterns = 0.76-0.95 across subgroups) (Fig. 2). However, a somewhat stronger unfavorable association for 0-99 cpm was found for boys than for girls, and a higher explained variance was found for the 6-12-year-old girls compared to the other groups. Adjustment for study had a minor impact on association patterns (r = 0.81-0.98 for patterns adjusted and unadjusted for study), but reduced the explained variance for 6-12-year-old girls from 7.0 to 3.8% (i.e., the results became more similar to the other groups). We found some variation in associations for the five single risk factors (Fig. 3). For SBP we did not find a significant predictive association pattern, whereas explained variances were 1.7, 1.7, 2.7 and 4.2% for TG, TC:HDL ratio, HOMA, and WC:height ratio, respectively. Associations for WC:height ratio and HOMA gradually strengthened up to 4000-4999 cpm and thereafter decreased. For TG and TC:HDL ratio, associations gradually strengthened up to 1500-2499 cpm, then declined (TG) or plateaued (TC:HDL ratio), before associations strengthened again and peaked at 6000-7999 cpm. Adjustment for study had a minor impact on association patterns (r = 0.85-0.99 for patterns adjusted and unadjusted for study), though we did not find a predictive model for TG. The relative importance of each minute of PA in different intensities for the composite metabolic health score is shown in Supplemental Fig. 3. Whereas more time spent in 0-99 cpm was associated with a deterioration of metabolic health (0.00035 SDs per min/day), more time spent in other intensities was associated with improved metabolic health.
Associations gradually strengthened for intensities from 100-499 cpm (−0.00066 SDs per min/day) up to 4500-4999 cpm (−0.05207 SDs per min/day), and thereafter weakened.
Discussion
To handle many strongly correlated PA intensity variables from accelerometry, we investigated the multivariate PA signature associated with metabolic health in a large and diverse sample of children by means of multivariate pattern analysis. Extending previous findings using this type of analysis applied to PA data (Aadland et al., 2018a; Aadland et al., 2019a), this novel approach shows how the whole intensity spectrum of PA associates with metabolic health in childhood. Our results show the strongest associations with metabolic health for vigorous intensities, whereas associations were weaker for lower intensities, in particular for time spent sedentary. Consistent with previous studies and recommendations (Ekelund et al., 2012; Janssen and LeBlanc, 2010; Poitras et al., 2016; Cliff et al., 2016; Aadland et al., 2018a; Aadland et al., 2019a), our findings support that children and youth should spend time in moderate to vigorous intensities to improve their metabolic health. However, our findings suggest that vigorous intensities are more important than previously believed. The strongest association with metabolic health was found for an intensity of 4000-5000 cpm, which is suggested as an appropriate threshold for classification of vigorous intensity (Trost et al., 2011). This accelerometer output is achieved during brisk walking or slow running at ≈6 km per hour in children and adolescents (Supplemental Table 3). However, in the present study, participants' PA was summed over 60 s.
Since children's PA is characterized by sporadic and intermittent bursts of activity most often lasting less than 10 s (Sanders et al., 2014; Aadland et al., 2018b), summation of PA over longer periods ("epochs") misclassifies and masks vigorous activities like running and jumping (Aadland et al., 2019a). A recent study compared the PA signatures associated with metabolic health in children derived from 1-, 10-, and 60-s epoch data and found that the strongest associations were observed for 7000-8000, 5500-6500, and 4000-5000 cpm, respectively (Aadland et al., 2019a). Thus, when using longer as compared to shorter epoch periods, association patterns were substantially biased towards lower intensities. Interestingly, when using 60-s epochs, the association patterns were similar in the previous (Aadland et al., 2019a) and the present study. Unfortunately, the ICAD data are only available with 60-s epochs. A similar misclassification could be a reality in much of the prevailing literature, as epoch periods of 10-60 s are most commonly used (Cain et al., 2013; Migueles et al., 2017). Consistent with research on children and youth's activity patterns (Sanders et al., 2014; Aadland et al., 2018b), we expect that most individuals do not obtain their PA from brisk walking. Rather, the stronger associations for higher intensities when using a short epoch probably show that the health effect of PA is achieved during intermittent vigorous intensity activities involving running and jumping. Shifting the focus to the lower end of the intensity spectrum, we observed a very weak association between SED (i.e., 0-99 cpm) and metabolic health, which is consistent with current evidence (Ekelund et al., 2012; Cliff et al., 2016; Aadland et al., 2018a; Hansen et al., 2018). This finding seems to be consistent across epoch settings (Aadland et al., 2019a).
Similarly, and also consistent with previous findings (Poitras et al., 2016; Aadland et al., 2018a; Aadland et al., 2018b; Hansen et al., 2018), LPA intensities (i.e., ≈100-1999 cpm) showed weak associations with metabolic health, especially considering the biased association profile resulting from the 60-s epoch setting (Aadland et al., 2019a). As shown previously (Aadland et al., 2019a), as VPA is partly captured as MPA and MPA is partly captured as LPA when using a 60- versus a 1-s epoch setting, the association for LPA shown herein is likely overestimated by misclassification of MPA. Taken together, our findings show no meaningful associations for time spent in SED and LPA with metabolic health in children and youth. Our findings are generally consistent with previous studies from the ICAD database suggesting that substituting time spent in SED and LPA with time in MVPA is favorably associated with metabolic health (Ekelund et al., 2012; Hansen et al., 2018; Wijndaele et al., 2019; Tarp et al., 2018).
Fig. 2. The multivariate physical activity signatures associated with metabolic health by sex and age. The composite score includes waist circumference to height ratio, systolic blood pressure, homeostasis model assessment of insulin resistance, total to high-density lipoprotein cholesterol ratio, and triglycerides (a lower score is more favorable). The PLS regression models are adjusted for age and sex and include two, one, four, and one components, respectively, for 6-12-year-old boys, 12-18-year-old boys, 6-12-year-old girls, and 12-18-year-old girls. The selectivity ratio for each variable is the explained to total variance of the predictive (target projected) component. A negative bar implies that increased physical activity is associated with better metabolic health. R² = explained variance.
The exception is the association for SBP: while we did not find a predictive association pattern, consistent with a previous study using similar methodology (Aadland et al., 2018a), weak significant associations with MVPA have been observed in previous studies (Ekelund et al., 2012; Wijndaele et al., 2019), but only in a subgroup of adolescents (Hansen et al., 2018). This could be a result of our thorough validation of regression models. Since the previous studies used predefined intensity categories of SED, LPA, and MVPA (Ekelund et al., 2012; Hansen et al., 2018; Wijndaele et al., 2019) or accumulated time above 500, 1000, 2000, and 3000 cpm (Tarp et al., 2018), they do not provide detailed knowledge of specific intensities' associations with metabolic health. While a more detailed intensity spectrum and PLS regression can provide more nuanced information on association patterns across the intensity spectrum, as shown herein, a direct interpretation of our findings with respect to the number of minutes/day children should spend in specific intensities for improved metabolic health is challenging (Aadland et al., 2019b). For this purpose, a traditional PA description and isotemporal substitution models may be useful (Aadland et al., 2019b). For example, based on the ICAD data, it is estimated that substituting 30 min/day of SED with MVPA is associated with a 1.5 cm reduced WC (Wijndaele et al., 2019), and that this association strengthens with increased age (Hansen et al., 2018). Importantly, the PA does not need to be accumulated in prolonged bouts (Aadland et al., 2018b; Tarp et al., 2018). Thus, the different methodological approaches may complement each other in informing the evidence base of PA epidemiology and PA guideline development.
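The isotemporal substitution logic referred to above can be sketched with a simple simulated example: the outcome is regressed on all time-use categories except the behavior being displaced, while total wear time is held in the model, so the coefficient on MVPA estimates the effect of replacing one minute of SED with MVPA. The data and effect sizes below are simulated assumptions (chosen to be of the same order as the cited estimate), not the ICAD results.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
sed = rng.normal(480, 60, n)    # sedentary min/day
lpa = rng.normal(200, 40, n)    # light PA min/day
mvpa = rng.normal(60, 20, n)    # moderate-to-vigorous PA min/day
total = sed + lpa + mvpa        # total wear time

# Simulated outcome: waist circumference worsens with SED, improves with MVPA.
wc = 70.0 + 0.002 * sed - 0.05 * mvpa + rng.normal(0, 2, n)

# Isotemporal substitution: drop the displaced behavior (SED) from the model
# but keep total wear time, so each remaining coefficient estimates the
# effect of replacing one minute of SED with that behavior.
X = np.column_stack([np.ones(n), lpa, mvpa, total])
beta, *_ = np.linalg.lstsq(X, wc, rcond=None)
effect_30min = 30 * beta[2]  # effect of replacing 30 min/day of SED with MVPA
```

Here beta[2] recovers the true substitution effect (the MVPA effect minus the SED effect, −0.05 − 0.002 ≈ −0.052 per minute in this simulation), so the 30-minute substitution comes out at roughly −1.5 units.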
Compared to previous studies that have modelled the associations between the whole intensity spectrum and metabolic health in children (Aadland et al., 2018a; Aadland et al., 2019a), the explained variance was considerably lower in the present study (4.2 versus 10.8, 13.4, and 17.0% explained variance for 60-, 10-, and 1-s datasets, respectively). One possible explanation for the weaker association (R² = 4.2 versus 10.8% using a 60-s epoch) may be the absence of aerobic fitness from the composite score in the present study. Among the six single risk factors included by Aadland et al. (Aadland et al., 2018a; Aadland et al., 2019a), aerobic fitness was the most strongly associated with PA. Another possible reason for the attenuation could be measurement error due to the application of different measures and protocols across studies. However, this factor does not seem to be important, as accounting for cohort in our analysis did not improve model fit. Furthermore, most of the cohorts included in the ICAD have applied older ActiGraph models, more specifically the AM7164, which is more prone to drift and breakdown from wear and tear compared to newer generations, for example the GT3X+, used in our previous studies (Aadland et al., 2018a; Aadland et al., 2019a).

Fig. 3. The multivariate physical activity signatures associated with different indices of metabolic health. The PLS regression models are adjusted for age and sex. WC:height ratio = waist circumference to height ratio (seven components); HOMA = homeostasis model assessment of insulin resistance (two components); TC:HDL ratio = total to high-density lipoprotein cholesterol ratio (five components); TG = triglyceride (six components). The SR for each variable is calculated as the ratio of explained to residual variance on the predictive (target projected) component. A negative bar implies that increased physical activity is associated with better metabolic health. R² = explained variance.
Strengths and limitations
The main strength of the present study was the use of multivariate pattern analysis to handle the dependency among the PA variables across the intensity spectrum. This method is a novel and promising alternative to ordinary least squares regression because it can handle multicollinear data sets (Wold et al., 1984; Aadland et al., 2019c; Rajalahti and Kvalheim, 2011). Importantly, this approach does not require pre-defined accelerometer cut points and therefore provides a solution to the cut point conundrum, which confuses the field and hampers comparison across studies. Furthermore, with regard to generalizability, the inclusion of a large and diverse sample of children from the ICAD database is an important strength of this study over previous studies using similar methodology (Aadland et al., 2018a; Aadland et al., 2019a; Aadland et al., 2018b). Accelerometers do not provide "true" PA levels, as behavior changes over time, some activities might be poorly captured by accelerometry, and several analytic choices, for example epoch length (Aadland et al., 2019a), can affect data considerably. Measurement error attenuates associations and increases the chance of type II errors (Hutcheon et al., 2010). As it is well known that frequency filtering (Brage et al., 2003; John et al., 2012) causes a leveling-off of ActiGraph counts for running at higher speeds, the attenuated associations for the highest PA intensities (≥ 5000 cpm) are likely a spurious finding caused by underestimation of these activities. We only included adjustment for age and sex in our primary analyses, and additionally adjusted for cohort and removed WC:height ratio from the metabolic health composite score to remove the influence of adiposity in sensitivity analyses. As expected, this adjustment reduced the explained variance of the models, whereas association patterns were robust.
Further adjustment for maturation and parents' education level did not change any findings (results not shown). We argue that these findings show our association patterns are stable, though residual confounding by, for example, diet could influence the results. Because our results are derived from a cross-sectional analysis, causality could not be inferred from our findings. However, as argued previously (Aadland et al., 2018a), PA guidelines are largely based on population studies of free-living total PA, whereas experimental studies investigate effects of PA added to everyday activities. Moreover, due to the rigorous design, exercise prescription and supervision, and the large number of groups and participants required, it would be very complex to obtain experimental evidence informing the field in the way the present study does. Finally, it is biologically plausible that PA affects the metabolic risk factors, whereas it is less likely that metabolic risk factors affect PA levels, except for overweight and obesity. Therefore, we argue that the results presented herein have implications for children's PA guidelines when it comes to metabolic health.

Conclusion
When incorporating the entire PA intensity spectrum in the analysis of associations with metabolic health, our findings suggest the strongest associations are found for VPA, whereas associations for SED are weak. Though our results are cross-sectional, our findings suggest that PA guidelines, as well as future surveillance and intervention studies, should increase their focus on VPA and reduce their focus on SED to target the strongest PA markers of childhood metabolic health. We recommend that future studies apply shorter epochs during measurement of PA and a multivariate analytic approach to develop future understanding in the field of PA epidemiology.
Ethical approval and consent to participate
All procedures performed in the original studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Availability of data and materials
The specific data sets generated and analyzed during the current study are not publicly available. However, a new data set including the same variables can be applied for through an individual project agreement with ICAD (http://www.mrc-epid.cam.ac.uk/research/studies/icad/).

Consent for publication
All participants and/or their legal guardian provided informed consent and local ethical committees approved the study protocols. Prior to sharing data, data-sharing agreements were established between contributing studies and the MRC Epidemiology Unit, University of Cambridge, UK.
Interventions for Preventing and Resolving Bullying in Nursing: A Scoping Review
Bullying in the workplace is a serious problem in nursing and has an impact on the well-being of teams, patients, and organisations. This study's aim is to map possible interventions designed to prevent or resolve bullying in nursing. A scoping review of primary research published in English and Italian between 2011 and 2021 was undertaken across four databases (Cochrane Collaboration, PubMed, CINAHL Complete, and PsycInfo). The data were analysed using Arksey and O'Malley's framework, and the Preferred Reporting Items for Systematic reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) Checklist was followed to report the study. Fourteen papers met the review eligibility criteria. The analysis revealed four main themes: educational interventions, cognitive rehearsal, team building, and nursing leaders' experiences. Interventions enabled nurses to recognise bullying and address it with assertive communication. Further research is needed to demonstrate these interventions' effectiveness and to establish whether they lead to a significant decrease in the short- and long-term frequency of these issues. This review increases the available knowledge and guides nurse leaders in choosing effective interventions. Eradicating this phenomenon from healthcare settings requires the active engagement of nurses, regardless of their role, in addition to support from nurse leaders, organisations, and professional and health policies.
Introduction
Bullying, incivility, and workplace violence are widespread issues in nursing [1]. Before approaching research projects or implementation studies on these phenomena, it is necessary to understand the meaning of the terms used in the literature to refer to them. Some definitions describe bullying as persistent negative actions aimed at damaging the target's professional and personal relationships through social exclusion and harassment [2], with unwanted, repeated, and harmful actions intended to humiliate, offend, and cause distress in the recipient [1]. Bullying can be carried out by managers or supervisors (vertical bullying), when managers do not recognise the abilities of employees, deprive them of career opportunities, deny them promotion or training, or gossip to damage their reputation, or by colleagues (horizontal bullying), when a nurse is yelled at, belittled, or receives demeaning and impertinent remarks from colleagues, sometimes in front of other nurses, patients, and their families [3,4]. Lateral violence can occur as an isolated incident with no gradient of power between individuals (peers) in a shared culture. Conversely, bullying comprises repeated occurrences over at least six months [5]. Bullying and lateral violence share behaviours such as sabotage, internal fighting, scapegoating, and excessive criticism [2]. It has been hypothesised that workplace violence is rarely a sudden event, but rather the culmination of an escalation of negative interactions between people, beginning with the low-intensity abuse typical of incivility [3]. Incivility is defined as "one or more rude, discourteous, or disrespectful actions that may or may not have negative intent" [1].

Aim of Study
To map possible interventions designed to prevent or resolve bullying in nursing.
Research Design
A scoping review was conducted between April and July 2021 and followed the first five steps of the methodological framework proposed by Arksey and O'Malley [15]: (a) identify the review question on a broad domain of a discipline; (b) identify relevant studies; (c) select studies; (d) chart the data; and (e) report results. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) Checklist was followed to report the study [16].

Identifying the Review Questions
Are there interventions that enable the prevention of bullying in the nursing profession in healthcare settings? Are there interventions that enable the resolution of bullying in the nursing profession in healthcare settings?

Identifying Relevant Studies
We began by consulting a librarian for recommendations on the most relevant databases for this topic: the Cochrane Collaboration, PubMed, CINAHL Complete, and PsycInfo. MeSH and free-text terms were used, adapting them to the specific search methods of each database. The keywords bullying, lateral violence, horizontal violence, mobbing, workplace incivility, harassment, nursing, nurse, prevention, intervention, and solving were combined variously using the Boolean AND and OR operators, resulting in a search strategy that best answered the review questions (Table 1).
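The keyword combination described above can be sketched programmatically. The grouping of synonyms below is an illustrative reconstruction, not the exact strategy the reviewers ran against each database (each database has its own field tags and truncation syntax):

```python
# Hypothetical reconstruction of a Boolean search string from keyword groups.
phenomenon = ["bullying", "lateral violence", "horizontal violence",
              "mobbing", "workplace incivility", "harassment"]
population = ["nursing", "nurse"]
action = ["prevention", "intervention", "solving"]

def or_group(terms):
    """Join synonyms with OR inside one parenthesised group."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Groups of synonyms are ORed internally, then ANDed across concepts.
query = " AND ".join(or_group(g) for g in (phenomenon, population, action))
print(query)
```

ORing synonyms within a concept widens recall, while ANDing across the three concepts (phenomenon, population, action) keeps the result set focused on the review questions.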
Study Selection
Two reviewers conducted the search simultaneously, applying predetermined inclusion and/or exclusion criteria to all papers independently at each stage of the selection process [17]. The inclusion criteria for articles were: (1) concerned bullying, lateral violence, or incivility between employees; (2) pertained to all healthcare settings and the nursing profession, specifically graduate nurses; (3) published between January 2011 and March 2021; (4) published in English or Italian; and (5) of any study design type (quantitative, qualitative, or mixed-methods). Given the large number of empirical studies on the topic, we excluded conference papers, editorials, reports, books, and grey literature.

Each researcher conducted a selection process to determine article eligibility, with an initial screening phase based on the information provided in the title and abstract, followed by mutual comparison and subsequent full-text screening, resulting in a classification of included, excluded, or uncertain studies. The comparison at the end of each stage aimed at maintaining an approach consistent with the review questions [17]. To resolve disagreements or doubts regarding the selected articles, the researchers consulted an experienced external researcher [17].

Data Charting
The authors constructed a tool that considered the elements of the review objective and question (author, year, country, objectives, study design and data collection instruments, participants/contexts, type of intervention, and key findings). This tool was used to graphically represent the data extraction process.

Ethical Considerations
As the scoping review did not involve human beings, the approval of an ethics committee was not necessary according to the Swiss Federal Human Research Legislation [18].
Results
Of the 1066 articles initially identified, after removing duplicates and studies considered irrelevant, 88 articles were selected for full-text screening and 14 met the review eligibility criteria. The selection process is depicted in Figure 1 [19]. The included studies (Table 2) were mainly conducted in the United States of America (twelve) and South Korea (two). The acute setting characterised all included studies, most of which were conducted in a single institution (ten), some in multiple institutions (two), and others were national surveys (two). Quantitative approaches were dominant (nine), followed by mixed-methods (three) and qualitative approaches (two). The main objectives common to the studies aimed at understanding the effectiveness of interventions by increasing the awareness and recognition of the phenomenon among nurses, reducing bullying in the analysed contexts, and acquiring knowledge and skills to deal with and respond to bullying situations. A qualitative study investigated the effectiveness of interventions based on nurse leaders' experiences; another pursued the goal of understanding the prevalence of the phenomenon. The narrative synthesis of results identified four themes: educational interventions, cognitive rehearsal, team building, and nurse leaders' experiences.

Table 2. Characteristics of the included studies (author, year, country, objectives, study design and data collection instruments, participants/contexts, type of intervention, and key findings).
Educational Interventions
Some authors proposed educational interventions to address bullying [20][21][22][23][24]. These interventions considered the characteristics and consequences of bullying and were designed and carried out heterogeneously. Nikstaitis and Simko [23], starting with a literature review of the effects of incivility in the workplace and an overview of recommendations for a healthy work environment, stimulated a discussion among participants that included personal experiences, professionalism, attitudes, behaviours, and ways to prevent incivility. Howard and Embree [21] proposed an e-learning training, "Bullying in the Workplace: Solutions for Nursing Practice", with content on bullying, reacting under stress, identifying conflict management styles, and creating safe environments. It was an online activity that used scenarios to enable participants to practise what they had learned. The use of scenarios for cognitive training of nurses to handle workplace bullying was also proposed by Kang and Jeong [22] in the form of a smartphone app, which included an introduction to nonviolent conversation as standard communication, six bullying scenarios, and a question and answer board. Chipps and McRury [20] followed up an educational moment on bullying with an online registry and checklist of negative behaviours for nurses to record behaviours observed or experienced during each shift over seven months.

Walrafen et al. [24] conducted a survey to determine the prevalence of horizontal violence, which showed that the majority of participants witnessed or experienced eight of the nine behaviours associated with horizontal violence. They proposed a training program, "Sadly Caught Up in the Moment: An Exploration of Horizontal Violence", which contained a review of each behaviour and appropriate responses.
Cognitive Rehearsal
Authors of four studies implemented cognitive rehearsal training, a communication technique taught to participants as a strategy to stop uncivil behaviour [25][26][27][28]. After a training intervention on incivility, Razzi and Bianchi [28] engaged participants in cognitive rehearsal training using cards with written responses to uncivil behaviour and providing examples of how to respond to such behaviour. This was followed by a role-play session in which they practised applying these responses. Kang et al. [26] investigated the effects of a cognitive rehearsal program on bullying among nurses using four phases. In the first phase, "scenario development", nine bullying scenarios were created from the results of previous studies and interviews with nurses. In the "creation of communication standards" phase, participants devised desirable communication for the scenarios by employing four components of the nonviolent communication technique. In the "role-playing" phase, they simulated the nine situations in a safe environment to express and manage the anger experienced, preventing the vicious cycle of bullying. Finally, in the "re-role-playing" phase, they developed cognitive training for coping strategies transferable to similar situations in the future. Additionally, Kile et al. [27] proposed a training intervention on incivility with definitions, examples, ways of manifestation, and effects on nurses, patient safety, and organisations. They taught the cognitive rehearsal technique using visual cues written on cards to instruct participants on the main forms of incivility and appropriate responses. To personalise the training, they provided ten incivility scenarios specific to the care context in the role-play and application phases of cognitive rehearsal. Balevre et al. [25] started with a policy of non-tolerance of bullying and leadership empowerment as support for employee empowerment, and structured training on the psychodynamics of bullying and coaching in cognitive rehearsal. Through cognitive rehearsal exercises and role-playing with scenarios designed to practise learned responses, they taught nurses defensive techniques against bullying. An effective and professional alternative to lateral violence for communicating needs, expectations, and conflicts was proposed by Ceravolo et al. [29], using workshops to improve assertive communication skills, healthy conflict resolution, elimination of a culture of silence, and awareness of the impact of lateral violence.

Team Building
Some authors have proposed activities aimed at team building through member interactions [30][31][32]. Vessey and Williams [32], starting with a bullying situation, implemented a cognitive program in which each session included an overview of the objectives to be addressed, a brief review of the material, a didactic session, supportive experiential activities, and a group discussion. These sessions were held during morning meetings before patient care started, through journal club activities.
Armstrong [30], through the Civility, Respect, Engagement in the Workforce (CREW) intervention, aimed to increase civility in the workplace as a response to what employee evaluations indicated about the interpersonal climate [34]. The four-week intervention included one meeting per week. In the first meeting, she used the "Anything Anytime" tool, which started with a discussion of a generic topic and enabled an understanding of group members' varying perspectives. In the second meeting, she used the "Geometry of Work Styles" tool, which required participants to choose from four geometric shapes that relate to a personality type. On the third day, using cues from nursing research, she stimulated a discussion on the definition and characteristics of incivility and assertive responses to it. Finally, participants practised actively replying to incivility scenarios in an interactive and safe setting. Each session concluded with a discussion of how a civil workplace can be achieved, regardless of individual differences. Keller et al. [31] explored the perceptions, attitudes, and experiences of nurses who completed the Bullying Elimination Nursing in a Care Environment (BE NICE) Champion program. This program taught them how to recognise signs of bullying and provide support to their peers, facilitating the creation of bullying intervention strategies through didactic training and role-plays simulating bullying scenarios and the correct way to deal with them using the 4S strategy. The first S, "Stand by", requires facilitators to be close to the bullying victim to convey the message that they are not alone. "Support" implies that facilitators show empathy, actively listen, and acknowledge the victim's feelings. "Speak up" is applied when those involved report bullying to nurse leaders. "Sequester" implies that facilitators remove the victim from the situation.

Nursing Leaders' Experiences
Skarbek et al. [33] highlighted which interventions nurse leaders consider effective in addressing bullying. While institutional "mandatory programs" are not perceived as effective, individual unit-level interventions initiated by nurse leaders, in collaboration with administrative and institutional support, were seen as effective ways to address bullying. They agree that, to establish a healthy work environment, the behavioural characteristics of collaboration, respect, effective interpersonal communication, collegiality, and mutual support must be evident to those entering the profession, senior nursing staff, and nurse leaders, to build positive social practices.

Discussion
The magnitude of bullying in nursing has led nurse leaders to question the extent of the phenomenon within their own institutions, an aspect confirmed by an exponential increase in publications in recent years. Bullying often takes the form of peer hostility towards novices, but nurses with more years of service and nurse leaders are also exposed to this phenomenon [2,9,35,36]. Therefore, it is necessary to know whether there are interventions to prevent or resolve bullying among nurses in healthcare settings. The findings from the 14 identified studies highlight different interventions designed with the aim of testing their effectiveness in addressing and curbing bullying. Despite the heterogeneity of the proposed interventions, the common goal was to increase the awareness and recognition of bullying among nurses, develop the ability to respond assertively to uncivil behaviour, and reduce bullying in the analysed contexts. Educational interventions have been offered in the form of training sessions [20,23,24], e-learning [21], and a smartphone application [22]. Some of these facilitated knowledge creation about bullying through case discussions, literature reviews, and discussions of uncivil behaviour and its consequences [20,23]. Others have found it necessary to increase knowledge
regarding types of communication, such as conflict management, crucial conversations, and nonviolent communication [21,22], and still others have used prevalence results on lateral violence to create training on the behaviours that emerged from the study [24]. In terms of evaluating intervention effectiveness, post-intervention measurements have been used and have given varying results, including an increase in perceptions and experiences of bullying; this is considered a positive indicator, as it allows for the identification of negative behaviours and increased awareness [20,23]. Evaluations of the educational interventions revealed their influence on communication skills, which resulted in a positive effect on conflict management strategies among nurses and decreases in work-related bullying experiences and turnover intention [21,22]. The impact of the educational program on the behaviours noted in the nurses' own care settings [24] has been linked to the development of dialogue among nurses and their sense of professional responsibility, which are useful in breaking the cycle of horizontal violence in work environments.
Cognitive rehearsal [25][26][27][28], the most widely used intervention, is a therapeutic technique in which an individual imagines situations that tend to produce anxiety or self-destructive behaviours and then repeats positive coping statements or mentally rehearses a more appropriate behaviour [37]. For its implementation, the authors used bullying scenarios, provided positive coping responses to those scenarios, and included role-play in which participants could practise the learned responses [25][26][27][28]. An evaluation of its effectiveness has shown that this approach improves interpersonal relationships, trains people to cope with bullying, decreases turnover intention [26], causes a perceived change in group behaviour in dealing with bullying, creates positive cultural change [25], and results in an increase in the ability to both recognise incivility and deal with it [27], an increase in awareness of incivility [28], and a reduction in the incidence of exposure to incivility [27,28]. Another type of intervention is related to healthy conflict resolution through assertive communication and the elimination of the culture of silence among nurses [29]. To achieve this, workshops focused on nurse leaders were held, emphasising their role in demonstrating the learned behaviours to employees, followed by interventions to foster peer learning. The effectiveness of this intervention was observed in decreased verbal abuse, an increased perception of a respectful workplace, and a higher rate of nurses determined to solve the issue after an episode of lateral violence.
Finally, team-building interventions have been proposed in different formats and settings. Vessey and Williams [32] presented a cognitive program starting from an actual bullying case, integrating discussions on the topic and experiential and journal club activities into daily nursing unit meetings. Armstrong [30] adopted the CREW method with the goal of team building and of creating awareness of how a civil workplace can be achieved, regardless of individual differences. Keller et al. [31] emphasised the recognition of bullying and peer support using the 4S intervention to convey messages of closeness to the bullying victim, show active listening, encourage the reporting of bullying to superiors, and actively intervene when it occurs to remove the victim from the situation, discouraging the vicious cycle of the phenomenon. The effectiveness of team-building interventions has been demonstrated through the detection of positive and proactive engagement among participants [32], an increase in nurses' competence to recognise workplace bullying, and the ability to respond when it occurs [30,31].

In addition, this review found that nurse leaders' organisational engagement and support, through behaviours that serve as models for co-workers, is a vital component of empowerment and is crucial and effective in addressing workplace bullying [25,29,31,33], and that it is important to intervene at all levels (society/policy, organisation/employer, work/task, and individual/work interface) to prevent it [38]. In contrast, implementing zero-tolerance policies and passive dissemination of information about bullying have proven ineffective [39].
The studies predominantly considered interventions implemented in the acute hospital setting to address a problem present in the analysed settings, as evidenced by pre-intervention measurements [20][21][22][23][24][26][27][28][29]. Although the implementation of these interventions was related to solving the problem, they are believed to have the potential to reduce bullying and consolidate a positive and civil culture in the workplace, and can be used with preventive intent and implemented by all nurses (novice and experienced).

Strengths and Limitations
Among the main strengths of this review are the adoption of a reproducible method and the systematic approach. To minimise the possibility of selection bias, the study selection procedures described in the methods were rigorously adhered to. In accordance with the methodology used, a quality appraisal process was not performed on the included studies.

The geographic concentration of studies in only two countries may limit the transferability of the results to other health systems, as bullying tends to be related to the culture of the setting. Limiting the review to articles published in English and Italian and to literature published in databases may have led to an incomplete overview of the available data and knowledge that could have added information to this review. However, the researchers chose to include articles describing research projects on the topic to identify effective interventions to counter bullying, with a view to future research projects.
Conclusions

This review revealed that several interventions have been designed to address the problem of bullying among nurses in healthcare settings by implementing educational, cognitive, and empowering interaction approaches among team members. Although the results showed the effectiveness of the interventions with regard to nurses' recognition of the phenomenon and increased skills in addressing it with assertive communication, only a slight and not always significant reduction in the presence of the phenomenon was observed. Consequently, new research projects are necessary to demonstrate the effectiveness of the interventions, including healthcare settings other than those where they have been implemented so far and robust study designs, such as RCTs, to assess their real effectiveness and adaptability to the context, and to understand whether the effects persist over time, leading to a significant decrease in the frequency of bullying among nurses. It will, moreover, be the responsibility of each nurse leader to identify the intervention that best fits their context.
Nurse leaders play a crucial role in preventing bullying in care settings. They should have a thorough understanding of the manifestations of the phenomenon and its consequences in order to recognise, in a timely way, dysfunctional relational dynamics arising in their own teams and with peer leaders or superiors. Nurse leaders also have a responsibility to recognise and address personal, environmental, organisational, and cultural factors that may facilitate bullying in their context. This review helps to increase the available knowledge on the topic and guides nurse leaders in choosing effective interventions to be adapted and implemented in their specific context. It also raises their awareness of the importance of leading by example, recognising their teams' relational patterns, and discouraging hostile peer interactions as preventive actions to foster cultural change in their context. Eradicating this phenomenon in healthcare settings involves the active engagement of nurses, regardless of their role, in addition to support from nurse leaders, organisations, and professional and health policies.
Hunger enhances food odor attraction through a Neuropeptide Y spotlight

Internal state controls olfaction through poorly understood mechanisms. Odors signifying food, mates, competitors, and predators activate parallel neural circuits that may be flexibly shaped by physiological need to alter behavioral outcome 1 . Here, we identify a neuronal mechanism by which hunger selectively promotes attraction to food odors over other olfactory cues. Optogenetic activation of hypothalamic Agouti-Related Peptide (AGRP) neurons enhances attraction to food odors but not pheromones, with branch-specific activation and inhibition revealing a key role for projections to the paraventricular thalamus. Knockout mice lacking Neuropeptide Y (NPY) or NPY receptor type 5 (NPY5R) fail to prefer food odors over pheromones after fasting, with hunger-dependent food odor attraction restored by cell-specific NPY rescue in AGRP neurons. Furthermore, acute NPY injection immediately rescues behavior without additional training, indicating that NPY is required for reading olfactory circuits during behavioral expression rather than writing olfactory circuits during odor learning. Together, these findings show that food odor-responsive neurons comprise an olfactory subcircuit that listens to hunger state through thalamic NPY release, and more generally, provide mechanistic insights into how internal state regulates behavior.

Various odors activate discrete neuronal ensembles in the olfactory epithelium, olfactory bulb, and olfactory cortex 1 . In mice, food odors and volatile sex pheromones are both attractive 1 but relevant for different physiological drives, suggesting that responsive neurons at some circuitry node display key differences, such as in connectivity, gene expression, and/or hormone responsiveness. Hunger-dependent modulation of olfactory inputs has been proposed to occur early in olfactory processing in vertebrates [5][6][7] , and in other model organisms 8 .
Satiety state alters the hedonic quality of food, but is not thought to impact the ability to recognize or detect food, which would be maladaptive for locating food stores for future use 4 . Thus, it is additionally possible that behaviorally relevant state-dependent modulation occurs through central pathways downstream of primary olfactory areas.

Hunger state guides odor responses

We used a simple and robust two-choice assay to investigate hunger-dependent odor responses. Mice were placed in a test arena containing volatile odor ports on each side without direct stimulus contact (Fig. 1a) 9 . Investigation time was quantified as the time in which the mouse's nose was directly above an odor port, and a preference index was calculated as the normalized difference in odor investigation times. Mice were attracted to both food odors (familiar homecage chow dissolved in water and centrifuged to remove insoluble debris) and pheromones (opposite sex mouse urine), compared with water (Extended Data Fig. 1). When paired against each other, food odors and pheromones were similarly attractive to fed mice; in contrast, fasted mice (males and females) displayed a strong preference for food odors over pheromones, with hunger promoting food odor investigation in two-choice comparisons or in single-odor pairings with water (Fig. 1b, Extended Data Fig. 1). Conversely, prior mate exposure increased attraction to volatile pheromones over food odors in fed but not fasted mice (Extended Data Fig. 2), consistent with need-based prioritization of behavior 10 . The observation that hunger enhances attraction to food odors relative to pheromones suggests that specific olfactory subcircuits can be differentially modulated by hunger-control centers in the brain. Next, we asked whether AGRP neuron terminals in particular brain regions promoted food odor attraction (Fig. 2a).
Collectively, AGRP neurons have widespread projections, and approaches involving terminal-specific optogenetics in AGRP neurons have revealed a striking division of labor among downstream targets [14][15][16][17] . Individual AGRP neurons can display selectivity among target areas, so branch-specific illumination does not cause antidromic activation of axon terminals in all other locations 15 . Agrp-ires-Cre (AGRP-ON) or wild type (control) mice were injected with a Cre-dependent adeno-associated virus (AAV) encoding Channelrhodopsin (AAV-DIO-ChR2) in the arcuate nucleus, and optic fibers were used to illuminate various brain areas that receive AGRP neuron input. Optogenetic stimulation of AGRP neuron terminals in the paraventricular thalamus (PVT), but not in other brain regions analyzed, promoted food odor attraction in fed AGRP-ON mice (Fig. 2b, Extended Data Fig. 3d-e), with similar responses in mice re-fed acutely after a fast or fed ad libitum (Extended Data Fig. 3f-h). Responses were of similar magnitude after somatic stimulation of AGRP neurons and were not observed in control mice lacking Cre (Fig. 2b). Optogenetic activation of AGRP terminals in other brain regions failed to induce food odor preference in fed mice (Fig. 2b, Extended Data Fig. 3d), including the bed nucleus of the stria terminalis (BNST), paraventricular hypothalamus (PVH), central nucleus of the amygdala (CeA), lateral hypothalamus (LH), medial amygdala (MeA), periaqueductal gray (PAG), and parabrachial nucleus (PBN). Interestingly, stimulating AGRP axons in BNST, PVH, and LH promoted robust food consumption in fed mice (Fig. 2c), as reported previously 15 , but did not enhance food odor preference. Moreover, AGRP neurons that target the MeA and PBN suppress competing behaviors related to aggression and pain 14,17 , but were also not involved in odor preference. 
Since food odor perception can arise from experience, we asked whether novel odors could be entrained as food odors and subsequently evoke state-dependent responses (Extended Data Fig. 4). Food-restricted mice were given a diet of strawberry gelatin during a four-day training period. Before training, mice preferred pheromone odor to strawberry gelatin odor whether fed or fasted, but after training, mice displayed hunger-dependent attraction to strawberry gelatin odor. Furthermore, optogenetic stimulation of AGRP neuron terminals in the PVT promoted attraction to strawberry gelatin odor only after training. Stimulating AGRP neuron projections to PVT promoted food odor attraction in fed mice, so we asked whether silencing these projections decreased food odor attraction in fasted mice. Expressing an inhibitory opsin, halorhodopsin, in AGRP neurons caused light-induced reductions in membrane potential and firing rate (Extended Data Fig. 5a). We inserted optic fibers in the PVT (PVT-OFF) or arcuate nucleus (ARC-OFF) of Agrp-ires-Cre; lsl-halorhodopsin mice or in control Agrp-ires-Cre mice. Illumination in fasted PVT-OFF or ARC-OFF mice, but not in control mice lacking halorhodopsin, decreased food odor attraction to levels seen in fed mice (Fig. 2d, Extended Data Fig. 5b, c), indicating that AGRP neuron projections to the PVT are required for hunger-dependent enhancement of food odor attraction. Despite similar odor preference responses, post-assay feeding was normal in fasted PVT-OFF mice, but reduced in fasted ARC-OFF mice (Extended Data Fig. 5d). The PVT is located dorsally within the thalamus and, intriguingly, in humans, attention to food odors engages the dorsal thalamus and amygdala 18 , and patients with lesions in the dorsal thalamus perceive food odors as neutral or aversive without losing their ability to identify them 19 . AGRP neurons display transient activity decreases upon food cue presentation 20,21 .
We used fiber photometry to ask whether similar changes were observed in PVT-projecting AGRP axons. We injected the arcuate nucleus of Agrp-Cre mice with an AAV containing a Cre-dependent GCaMP allele, placed optic fibers in either the arcuate nucleus, PVT, or PVH, and recorded responses in fasted mice presented with food odors or pheromones (Extended Data Fig. 6). Food odor transiently inhibited AGRP neuron activity in all locations measured, while pheromones had no effect. Importantly, decreases in AGRP neuron activity were short-lived, while food odor attraction persisted throughout the behavioral assay. Moreover, optogenetic stimulation of AGRP neurons, not inhibition, promoted food odor attraction, while optogenetic inhibition of AGRP neurons, which was significantly longer than the transient activity decreases observed during food odor presentation, actually blocked fasting-induced food odor attraction. Presumably, the persistent stimulation of AGRP neurons that occurs during a fasted state enhances food odor attraction through sustained signaling in downstream neurons, perhaps through the durable action of a neuromodulator.

Roles for NPY and its receptor NPY5R

AGRP neurons release three principal neurotransmitters: AGRP, Neuropeptide Y (NPY), and γ-aminobutyric acid (GABA) 22 , so we asked whether any was required for hunger-dependent odor attraction (Fig. 3a). NPY is expressed in many neuron types, including AGRP neurons. We performed cell-specific NPY rescue to determine whether behavioral deficits observed in global Npy-KO mice were due to loss of NPY expression in AGRP neurons 23 . A Cre-dependent AAV encoding NPY (AAV-DIO-Npy) was injected into the arcuate nucleus of Agrp-ires-Cre; Npy-KO (Npy AGRP rescue) or Npy-KO (control) mice. Rescue of NPY expression in AGRP neurons restored hunger-dependent food odor attraction (Fig. 3c-d, Extended Data Fig.
7d), indicating that AGRP neuron-derived NPY is sufficient for state-dependent modulation of food odor responses. NPY receptors comprise a small subfamily of five G Protein-Coupled Receptors. To determine whether a particular NPY receptor was responsible, we obtained knockout mice lacking individual NPY receptors and tested odor preference behavior ( Fig. 3e-f, Extended Data Fig. 7e-g). Fasted male and female Npy5r-KO mice failed to prefer food odors to pheromones, despite normal hunger-dependent food consumption, while fasted Npy1r-KO mice displayed normal odor preference behavior. Like Npy-KO mice, Npy5r-KO mice displayed attraction to food odor over water in the fed state, but lost hunger-dependent enhancement of this response (Extended Data Fig. 8). Hunger also promoted search for food buried in bedding, and food search behavior was impaired in fasted Npy-KO and Npy5r-KO mice (Extended Data Fig. 9a). Together, these findings reveal an essential role for both a neuropeptide, NPY, and its receptor, NPY5R, in hunger-evoked odor attraction. RNA in situ hybridization experiments revealed detectable Npy5r expression in cortical regions including the olfactory cortex but not the PVT (Extended Data Fig. 9b, c). One possibility is that NPY5R is localized to incoming cortical axons that arrive in dorsal thalamus. PVT injections of NPY5R antagonists blocked food odor attraction in fasted mice, while PVT injections of NPY5R agonists promoted food odor attraction in Npy knockout mice ( Fig. 3g-i, Extended Data Fig. 10a-b). Similar agonist injections into the dorsal third ventricle just above the PVT had no effect, although injecting higher agonist concentrations into the ventral third ventricle an hour before behavioral assessment enhanced food odor attraction, presumably because sufficient agonist could then access the PVT (Extended Data Fig. 10c-d). 
Together, pharmacological and optogenetic studies indicate that AGRP neuron-derived NPY5R agonism within the PVT underlies hunger enhancement of food odor attraction. In mice, perception of food odors can arise through learning (Extended Data Fig. 4). Thus, loss of hunger-evoked food odor preference in both Npy-KO and Npy5r-KO mice could be explained by deficits in either memory formation or hunger-induced behavioral expression, or both. AGRP neurons provide a negative valence teaching signal 20 , and in Drosophila, an NPY homolog is required for appetitive memory performance 24 . If NPY were required for formation of food odor memory, then re-administration of NPY in Npy-KO mice would not restore hunger-dependent food odor preference until learning could subsequently occur. We observed that injection of NPY, or a specific NPY5R agonist, into fasted Npy-KO mice immediately restored hunger-dependent food odor preference in the absence of additional training (Fig. 3g-i). Mice had learned particular olfactory cues to be food-associated in the absence of NPY, likely explaining the persistent basal attraction to food odor (comparable to pheromone attraction) observed in fed Npy-KO mice. Thus, NPY acts in the expression of hunger-enhanced food odor attraction rather than the formation of food odor memory.

Discussion

Many pathways have been proposed by which nutrients and feeding-relevant hormones may interact with the olfactory system. Here, we reveal an essential role for NPY and its receptor NPY5R in hunger-dependent odor preference. AGRP neurons that project to the PVT provide a key first connection from hunger neurons to olfactory circuits. Olfaction has been considered unique among the senses in that olfactory inputs largely access cortical regions without traversing the thalamus 25 , although a minor output pathway of the piriform cortex involves the medio-dorsal thalamic nucleus 26 , which is adjacent to the PVT.
In addition, top-down olfactory inputs, such as from prefrontal cortex, descend on the PVT 27 , where they presumably can be integrated with information about hunger state from AGRP neurons. Interestingly, the PVT gates information from other states, including thirst 28 , and from other sensory systems, including visual cues 29 , and may generally guide attention to salient inputs relevant for a current behavioral state. The PVT additionally plays a role in withdrawal symptoms associated with drug addiction 30 , consistent with a function in enhancing the valence of environmental cues that alleviate negative stressors, like hunger, thirst, and drug craving. Neuromodulation that controls the relative strength of signals through different sensory channels allows for flexible behaviors that vary with need. Here we uncover molecular features essential for one such neuromodulatory pathway, as NPY from AGRP neurons opens a thalamic hunger gate for specific olfactory inputs carrying an NPY5R encryption. It seems likely that different neurotransmitters function as spotlights for other behavioral drives, with the thalamus serving a general role as a switchboard that gates preferential attention to sensory inputs based on physiological need.

Animals. All animal procedures followed the ethical guidelines outlined in the NIH Guide for the Care and Use of Laboratory Animals, and all protocols were approved by the institutional animal care and use committee (IACUC) at Harvard Medical School. Animals were maintained under constant temperature (23 ± 1 °C) and relative humidity (46 ± 5%) with a 12-h light/dark cycle. Wild type C57BL/6 (000664), Agrp-ires-Cre (012899)

Two-choice odor preference test. Odor preference was measured using a described two-choice paradigm 9 with minor modifications. The test arena consisted of two odor applicators placed on each side of a plastic cage (M-BTM-STD, Innovive) without bedding.
Odor applicators were petri dishes (35 mm, Falcon, 351008) with thirteen holes drilled in the lid to enable odor escape. Test stimuli or water (400 μl) were added just prior to testing, and the petri dish lid was closed to prevent direct stimulus contact. Food odor was prepared by suspending 20 g of normal chow (LabDiet 5058, Lab Supply) in 50 ml water (4 hours) and centrifuging (1000 rpm, 5 min) to remove insoluble material; mouse urine was freshly collected by hand. Mice were individually housed, naive to the paradigm, and tested in the dark phase. Both fed and fasted mice were fasted for 24 hours; immediately prior to testing, fed mice were given free access to food for one hour while fasted mice were not, and mice fed ad libitum were never food restricted. Mate-exposed mice lived with a mate for 24 hours immediately prior to isolation for fasting. Mice were habituated by successive administration (3 × 5 min) to mock test arenas containing blank odor applicators, and then introduced into the test arena. Odors were placed on each side of the arena in different tests, and no side bias was observed in control experiments involving water alone (Extended Data Fig. 1b). Mice were recorded with a digital video camera, and odor investigation was scored manually, in a randomized double-blind manner, as time investigating each petri dish over the entire test period (5 min). The position of the mouse nose was illustrated in Figure 1 using Optimouse software 32 . Investigation time was quantified as the time in which the mouse's nose was directly above an odor port, and the preference index was calculated as the percentage of time investigating food odor minus the percentage of time investigating pheromones. Data from rare mice were excluded if they did not investigate both odor sources during the two-choice odor test. Statistical analysis was performed using a Wilcoxon matched-pairs signed rank test.
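The preference index described above amounts to a one-line computation. A minimal sketch, assuming the percentages are taken relative to total investigation time (consistent with the "normalized difference in odor investigation times" definition given earlier; the function name is illustrative, not from the authors' analysis code):

```python
def preference_index(t_food_s: float, t_pheromone_s: float) -> float:
    """Preference index for one two-choice test: percentage of investigation
    time spent on food odor minus the percentage spent on pheromones.

    Raises ValueError for mice that investigated neither odor source,
    mirroring the stated exclusion criterion."""
    total = t_food_s + t_pheromone_s
    if total == 0:
        raise ValueError("mouse excluded: no odor source investigated")
    pct_food = 100.0 * t_food_s / total
    pct_pheromone = 100.0 * t_pheromone_s / total
    return pct_food - pct_pheromone
```

Under this convention, +100 means exclusive food odor investigation, 0 equal time on both ports, and -100 exclusive pheromone investigation.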
Sample sizes were based on prior publications involving two-choice odor tests 9 .

Food intake measurement. Food intake was measured just before or after odor preference tests, as indicated in figure timelines. Food and water were provided ad libitum in a clean cage, and the amount of food consumed over 1 hour was measured by weighing food before and after the test period.

Optogenetics. For surgical injection of AAV and implantation of optic fibers, mice were anesthetized with avertin (250 mg/kg) and placed into a stereotaxic device (KOPF). After exposing the skull via a small incision, a small hole was drilled. Fibers were secured to the skull with dental cement and both fibers and ferrules covered with caps (Thorlabs) for protection. Mice recovered from optic fiber surgery for 1 week, and from AAV injection with optic fiber surgery for 3 weeks. Optogenetic protocols were similar to previous reports 15 , were initiated before the first habituation, and were maintained for the duration of the odor preference test and subsequent food intake test. Light was delivered (10 ms pulses, 20 pulses for 1 sec, repeated every 4 seconds for activation or 1.5 seconds for inhibition, 6-8 mW to AGRP neuron soma and 6-10 mW to AGRP neuron axons) by LED (473 nm for activation, 625 nm for inhibition, Prizmatix) using a pulser (Prizmatix) through an optic fiber attached to the ferrule-capped optic fiber implanted in the mouse. Fiber placements were verified after each test by immunohistochemistry for AGRP (Neuromics GT15023, 1:100) and AAV injection sites were verified using AAV-derived mCherry fluorescence.
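For concreteness, the activation pulse train described above (10 ms pulses, 20 pulses over 1 s, bursts repeated every 4 s) can be laid out programmatically. A sketch under those stated parameters; the function and argument names are illustrative and not taken from the original stimulation code:

```python
def pulse_onsets_s(total_duration_s: float,
                   burst_period_s: float = 4.0,
                   pulses_per_burst: int = 20,
                   pulse_rate_hz: float = 20.0) -> list:
    """Onset times (s) of the 10-ms light pulses: each burst delivers
    pulses_per_burst pulses at pulse_rate_hz (20 pulses over 1 s),
    and bursts recur every burst_period_s seconds."""
    onsets = []
    burst_start = 0.0
    while burst_start < total_duration_s:
        for k in range(pulses_per_burst):
            t = burst_start + k / pulse_rate_hz
            if t < total_duration_s:
                onsets.append(t)
        burst_start += burst_period_s
    return onsets
```

For the inhibition protocol, the burst period would be set to 1.5 s as stated above.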
Halorhodopsin function in AGRP neurons was validated by whole-cell current clamp recordings during optogenetic experiments (625 nm, 10 second continuous illumination, fiber output: 6-8 mW) of acutely harvested and dissociated arcuate nucleus two hours after attachment, using a Molecular Devices 700B amplifier with filtering at 1 kHz and 4-10 MΩ electrodes filled with an internal solution containing (in mM) 130 K-Gluconate, 15 KCl, 4 NaCl, 0.5 CaCl2, 10 HEPES, 1 EGTA, pH 7.2, 290 mOsm, with cells bathed in an external solution containing (in mM) 150 NaCl, 2.8 KCl, 1 MgSO4, 1 CaCl2, 10 HEPES, pH 7.4, 300 mOsm.

Food odor learning paradigm. Prior to training, mice were food-restricted (2 g chow/day in a food bowl to maintain 85-95% body weight) for four days. For training (the next four days), food-restricted mice were given strawberry sugar-free gelatin 20 (Conagra Brands, Snack Pack) ad libitum in a food bowl for 30 minutes once a day (dark period). After the gelatin was removed, mice were left without food for 1 hour, and then given 2.5 g chow/day (to maintain 80-90% body weight) to eat freely until the next day's training period. After the last day of training, mice were fasted as part of the two-choice odor test protocol instead of being given 2.5 g chow. During odor testing, strawberry gelatin (1 g) was placed in the odor applicator. In the pre-test trial when fasted (see Extended Data Fig. 4c), all mice were confirmed to prefer strawberry gelatin odor after training. For optogenetics experiments, training occurred three weeks after optic fiber placement.

Cell-specific NPY rescue. Plasmid to generate AAV-DIO-Npy was made by insertion of an Npy-mCherry gene

Fiber photometry. For fiber photometry, Agrp-ires-Cre mice were injected in the arcuate nucleus with AAV-DIO-GCaMP6s (Addgene #100845-AAV9, titer 2.1 × 10 13 genomes/ml), and a patch fiber (200 μm core diameter, 0.57 NA, metal ferrule, Doric) was inserted in the arcuate nucleus, PVT, or PVH.
Two weeks after surgery, fiber photometry was performed using an RZ10x real-time processor (Tucker-Davis Technologies). Light from connected 405 nm and 465 nm LEDs was filtered through a fluorescence minicube (Doric Lenses) and collected with an integrated photosensor on the RZ10x connected to the surgically implanted optic fiber with a 0.48 NA, 200 μm core diameter patchcord (Doric Lenses). Fiber photometry was performed on fasted mice during a one-choice odor preference assay, which involved habituation (3 × 5 min), a water test (5 min), rest (2 min in habituation chamber), a pheromone test (5 min), rest (2 min in habituation chamber), and a food odor test (5 min). 10 minutes after the food odor test, mice were given direct access to food in their home cage during fiber photometry. Changes in the calcium-dependent GCaMP6s fluorescence (465 nm) signal were compared with calcium-independent GCaMP6s fluorescence (405 nm), providing an internal control for movement and photobleaching artifacts. Fluorescence measurements were extracted from Synapse software (Tucker-Davis Technologies) and analyzed in MATLAB. The fluorescence signal (F) was defined as the ratio of fluorescence measured at 465 nm divided by fluorescence measured at 405 nm; delta F/F was expressed by comparing the fluorescence signal to a pre-test baseline, and a mean delta F/F was calculated for the time intervals indicated in figures. Statistical analysis was performed using a two-way ANOVA followed by a post-hoc Bonferroni's multiple comparisons test; mice were excluded from analysis if AGRP neurons did not respond during food consumption after the odor test and/or if GCaMP expression/optic fiber placement was not properly targeted based on post-hoc histology.

Food search behavior. Mice were fasted (24 hours) or fed ad libitum, and briefly removed from their home cage to a fresh cage.
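The ratiometric signal definition above (F as the 465 nm / 405 nm ratio, with delta F/F taken against a pre-test baseline) reduces to a few array operations. A minimal NumPy sketch; argument names and the baseline-window convention are illustrative, not taken from the authors' analysis scripts:

```python
import numpy as np

def delta_f_over_f(f465: np.ndarray, f405: np.ndarray,
                   baseline: slice) -> np.ndarray:
    """Compute delta F/F for a fiber photometry trace.

    F is the calcium-dependent 465 nm signal divided by the
    calcium-independent 405 nm signal (movement/bleaching control);
    delta F/F compares F against the mean of a pre-test baseline window.
    """
    f = np.asarray(f465, dtype=float) / np.asarray(f405, dtype=float)
    f0 = f[baseline].mean()
    return (f - f0) / f0
```

A mean delta F/F over any figure-specified interval is then just the mean of the returned array over that interval's samples.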
While mice were absent from the home cage, a food pellet (3 g) was buried beneath the bedding (>1 cm), and the mice were then re-introduced to the home cage. The latency to discover the food was recorded; if the food was not discovered within 10 minutes, the trial was ended and the latency recorded as 600 seconds.

Single-color RNA in situ hybridization. Coronal cryosections (16 μm) of freshly frozen mouse brain were washed with PBS and treated with proteinase K [Thermo Fisher Scientific, 10 mg/ml, 10 mM Tris-Cl (pH 7.

[Figure legends (fragmentary): timelines, investigation times, preference indices, and food intake for the two-choice assays involving optogenetic stimulation or inhibition of AGRP neurons (somatic illumination and terminal illumination of BNST, PVH, CeA, LH, MeA, PAG, PBN, and PVT), the strawberry gelatin odor learning paradigm, NPY immunohistochemistry, and Npy1r-KO/Npy5r-KO tests; statistics by two-tailed Wilcoxon and Mann-Whitney U tests, with n and p values reported per panel.]
ADCS Design for a Sounding Rocket with Thrust Vectoring

This paper addresses the development of an attitude determination and control system (ADCS) for a sounding rocket using thrust vector control (TVC). To design the ADCS, a non-linear 6 degrees-of-freedom (DoF) model for the rocket dynamics and kinematics is deduced and implemented in a simulation environment. An optimal attitude controller is designed using the linear quadratic regulator (LQR) with an additional integral action (LQI), relying on the derived linear, time-varying, state-space representation of the rocket. The controller is tested in the simulation environment, demonstrating satisfactory attitude tracking performance and robustness to model uncertainties. A navigation system is designed, based on measurements available on-board, to provide accurate real-time estimates of the rocket's state and of the aerodynamic forces and moments acting on the vehicle. These aerodynamic estimates are used by an adaptive version of the controller that computes the gains in real time after correcting the state-space model. Finally, the ADCS is the result of the integration of the attitude control and navigation systems, with the complete system being implemented and tested in simulation and demonstrating satisfactory performance.

Introduction

The main motivation behind this work is the development of an Attitude Determination and Control System (ADCS) for a future sounding rocket from a student rocketry team at Instituto Superior Técnico (IST), named Rocket Experiment Division (RED). The ADCS design assumes that the rocket uses Thrust Vector Control (TVC) technology as the actuation method, and aims to control the rocket's pitch and yaw angles. The roll angle is assumed to be controlled by an additional roll control system whose design is out of the scope of this work. During the atmospheric flight phase of a rocket, stabilisation can be achieved through the use of aerodynamic fins.
With a correct design of the fins, the vehicle can be made naturally stable [1]. However, the rocket is subjected to various external disturbances, such as wind gusts, which prevent the vehicle from following a desirable, pre-calculated trajectory or, in more intense cases, completely destabilise it [2]. The necessity is then clear for an active attitude control and stabilisation system that not only ensures the stability of the rocket, but also allows its trajectory to be actively corrected in order to achieve specific mission goals. As for the actuation method, Thrust Vector Control (TVC), or thrust vectoring for short, is used by most launch vehicles and works by redirecting the thrust vector in order to create a control torque [3]. With respect to other actuation techniques, like actively controlled fins, TVC allows for a wider range of operating conditions and provides better efficiency [4]. Control system design tends to be very conservative in the aerospace industry: restricting the dynamic analysis to accommodate more sophisticated control design techniques risks the later realisation that such restrictions would have to be lifted, which would invalidate the control design. Among the classical techniques, Proportional-Integral-Derivative (PID) control is at the core of most commonly used launch vehicle control systems [3,5]. Although widely used, PID control has its downsides when it comes to robustness and the rejection of external disturbances [6]. The problem of controlling ascending launch vehicles is dominated by parameter uncertainty, which, given the lack of robustness of the PID controller, may be a concerning issue. Moreover, the rocket flight parameters change considerably throughout the flight. To overcome this, gain scheduling techniques have been proposed that rely on the linearization of the dynamics at different operating conditions.
Still in the linear domain, the use of optimal controllers, such as the Linear Quadratic Regulator (LQR), provides more robustness and ensures a locally optimal solution for a given cost function [6,7]. As a way to improve the robustness of linear time-varying controllers, real-time parameter estimators can be introduced in the control loop to form an adaptive control system. The online identification of system parameters allows the controller to act on a more accurate representation of the system dynamics [8]. Non-linear control and estimation techniques have also been proposed [9,10], and come with the advantage of ensuring a global solution, dependent neither on the specific mission nor on the vehicle. However, these types of control and estimation laws often have to simplify the complete non-linear dynamics in order to obtain a global solution. If relevant dynamics are discarded, the system might fail in a real implementation scenario. Besides that, these methods all have particular design characteristics which make it harder to develop a standardised verification and validation procedure to meet the imposed system requirements [3,5]. Although several solutions to the rocket attitude control problem can be found in the literature [7,11,12], many fail to capture all the relevant dynamics and/or oversimplify the problem, while most assume full-state knowledge, creating a considerable gap between theoretical design and implementation. Hence, the main contribution of this paper is a robust ADCS, which integrates both the navigation and control systems, relies on computationally efficient algorithms, and can be implemented in sounding rockets through readily available low-cost components.
Furthermore, the design process considers the full 6 DoF, as opposed to restricting the analysis to the pitch plane, as well as the time-varying nature of the system, focussing on the entire trajectory rather than a single operating point, contrary to what is found in the literature when using linear optimal control and estimation techniques. In this way, the implementation of the system in a real case scenario is facilitated. To achieve this goal, several intermediate contributions were necessary, always having in mind the reliability and robustness of the proposed solutions: an original generic state-space representation for linear and optimal control design in the 6 DoF; a gain-scheduled optimal pitch and yaw controller resorting to the LQR technique with added integral action; and a navigation system, based on measurements available on-board, to provide accurate estimates of the rocket's state, including a novel linear time-varying parameter estimator that estimates in real time the aerodynamic forces and moments acting on the vehicle.

Rocket Dynamics and Kinematics Modelling

To design the ADCS, a mathematical model that represents the translational and rotational dynamics and kinematics of the rocket in the 6 DoF is necessary.

Assumptions

Some assumptions are used to derive the model. The rocket is considered to be a rigid body, meaning no elastic behaviours are modelled. This assumption is considered valid for control system design given the smaller size of typical sounding rockets, and the consequent reduced impact of elastic behaviour on the overall dynamics. The rocket is assumed to be axially symmetric, as is the mass allocation, which means that the principal inertia axes coincide with the body axes, the centre of mass is on the longitudinal axis, and the aerodynamic behaviour is identical in both the pitch and yaw planes.
This assumption is standard given that launch vehicles, and more specifically sounding rockets, are designed to respect this symmetry. Finally, neither the curvature nor the rotation of the Earth is taken into account, which is also a reasonable assumption considering the typical altitude and ground distance covered by the class of vehicles under study.

Reference Frames

To describe the dynamics and kinematics of the rocket, it is crucial to define the reference frames to be used. Two reference frames are used: a body-fixed one (Fig. 1a), where the equations of motion are written; and an inertial space-fixed one (Fig. 1b). The body-fixed reference frame has its origin located at the centre of mass of the vehicle. The x-axis ($X_b$) is along the rocket's longitudinal axis, while the z-axis ($Z_b$) and y-axis ($Y_b$) complete the orthogonal reference frame. As for the inertial space-fixed reference frame, given that neither the curvature nor the rotational motion of the Earth is taken into account, a simple orthogonal frame centred at the launch location is used (Fig. 1: body-fixed (a) and inertial (b) reference frames). The x-axis ($X_e$) points upwards, so that for a zero inclination launch the x-axes of both reference frames are aligned; the other two axes ($Y_e$ and $Z_e$) are preferably aligned with a pair of cardinal directions. With the reference frames detailed, it is necessary to define the coordinate transformation between them. This is done using a sequential rotation of the body frame relative to the Earth frame defined by the three Euler angles $(\phi, \theta, \psi)$, where $\phi$ is the Euler angle of rotation of the body around the x-axis of the Earth frame, also known as roll; $\theta$ is the Euler angle of rotation of the body around the y-axis of the Earth frame, also known as pitch; and $\psi$ is the Euler angle of rotation of the body around the z-axis of the Earth frame, also known as yaw.
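As a quick numerical sketch of the Euler-angle transformation just described (Python, assuming the common 3-2-1 yaw-pitch-roll sequence; the function name is illustrative):

```python
import numpy as np

def body_to_earth(phi, theta, psi):
    """Rotation matrix from the body frame to the Earth frame for a
    3-2-1 (yaw-pitch-roll) Euler sequence; angles in radians."""
    c, s = np.cos, np.sin
    Rx = np.array([[1, 0, 0],
                   [0, c(phi), -s(phi)],
                   [0, s(phi), c(phi)]])
    Ry = np.array([[c(theta), 0, s(theta)],
                   [0, 1, 0],
                   [-s(theta), 0, c(theta)]])
    Rz = np.array([[c(psi), -s(psi), 0],
                   [s(psi), c(psi), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx  # Earth <- body; the inverse transform is the transpose

R = body_to_earth(0.1, 0.2, 0.3)
```

Since the matrix is orthonormal, the Earth-to-body transform is simply `R.T`, matching the paper's use of $R^T$.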
The Euler angles describe the attitude of the rocket, representing the variables to be controlled by the attitude control system. The coordinate transformation from the body frame to the Earth frame is then defined by the transformation matrix [13]

$R = \begin{bmatrix} c\theta\, c\psi & s\phi\, s\theta\, c\psi - c\phi\, s\psi & c\phi\, s\theta\, c\psi + s\phi\, s\psi \\ c\theta\, s\psi & s\phi\, s\theta\, s\psi + c\phi\, c\psi & c\phi\, s\theta\, s\psi - s\phi\, c\psi \\ -s\theta & s\phi\, c\theta & c\phi\, c\theta \end{bmatrix},$

where $c$ and $s$ stand as abbreviations for the cosine and sine functions. The inverse transform, from the Earth frame to the body frame, is defined by the transpose ($R^T$).

External Forces and Moments

From the dynamic point of view, sounding rockets experience four main forces during a flight: weight, thrust, and the aerodynamic forces, lift and drag.

Gravity Model

Considering the Earth as a perfect sphere, the gravitational acceleration, $g$, is assumed to vary only with altitude. This variation is given by $g = g_0 \left( \frac{R_E}{R_E + h} \right)^2$, where $g_0$ is the gravitational acceleration at surface level, $R_E$ is the mean Earth radius, and $h$ is the altitude. The gravitational force, expressed in the Earth frame, is given by $^E F_g = [-mg \;\; 0 \;\; 0]^T$, where $m$ is the rocket's mass. To obtain it in the body frame, the rotation matrix $R^T$ is used.

Propulsion Model

The propulsion model was derived using equations mainly obtained from [4], considering ideal propulsion and all its underlying assumptions. The thrust produced by the rocket motor is simply $T = \dot{m} v_e + (p_e - p_a) A_e$, where $\dot{m}$ is the mass flow rate, $v_e$ is the effective exhaust velocity, $p_e$ is the nozzle exit pressure, $p_a$ is the atmospheric pressure, and $A_e$ is the nozzle exit area. Two separate contributions can be identified: the dynamic one, caused by the exhaust of the expanded combustion gases; and the static one, caused by the pressure gradient between the nozzle exit and the atmosphere.
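The gravity and thrust relations above can be sketched directly (a minimal sketch; function names and default constants are illustrative):

```python
def ideal_thrust(mdot, v_e, p_e, p_a, A_e):
    """T = mdot*v_e + (p_e - p_a)*A_e: dynamic plus static contribution."""
    return mdot * v_e + (p_e - p_a) * A_e

def gravity(h, g0=9.80665, R_E=6.371e6):
    """Inverse-square decay of g with altitude over a spherical Earth."""
    return g0 * (R_E / (R_E + h)) ** 2
```

At matched nozzle conditions ($p_e = p_a$) the static term vanishes and only the momentum contribution remains.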
Considering that the most common propulsion technology for sounding rockets is solid propulsion, and that it is the technology used by RED, a model that uses the internal combustion equations, available in [4,14], as well as the solid propellant characteristics, is implemented to calculate the thrust produced by the motor, $T$, the associated mass flow rate, $\dot{m}$, and the nozzle exit pressure, $p_e$. The propellant considered for this work was the mixture of potassium nitrate with sorbitol (KNSB), a propellant commonly used in student rocketry known as "rocket candy", with its properties presented in [15]. The mass and geometry of the propellant grains can be altered to obtain different thrust profiles.

TVC Actuation

By controlling the direction of the thrust force (or vector), TVC actuation produces torques about the rocket's centre of mass, influencing its rotation in pitch and yaw. The decomposition of the propulsive force in the three body axes can be done as illustrated in Fig. 2. According to it, the thrust vector is decomposed using the gimbal angles $\delta_p$ and $\delta_y$, where $\delta_p$ is the gimbal angle that, on its own, produces a pitching moment, and $\delta_y$ is the one that produces a yawing moment. Using these angles, the propulsive force in the body frame is given by $^B F_p = T\,[\cos\delta_p \cos\delta_y \;\; -\cos\delta_p \sin\delta_y \;\; -\sin\delta_p]^T$, and the corresponding control moment is $^B M = [0 \;\; -T l \sin\delta_p \;\; T l \cos\delta_p \sin\delta_y]^T$ [6], where $l$ is the moment arm, which corresponds to the distance between the nozzle gimbal point and the centre of mass of the rocket.

Aerodynamic Forces and Moments

The rocket is subjected to aerodynamic forces and moments resulting from its interaction with the fluid medium composing the atmosphere.
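The TVC force and moment decomposition described above can be sketched as follows (Python; `dp` and `dy` stand for the pitch and yaw gimbal angles, a notational assumption):

```python
import numpy as np

def tvc_force_moment(T, dp, dy, l):
    """Thrust vector in body axes for gimbal angles dp (pitch) and dy (yaw),
    and the resulting control moment about the centre of mass; equivalent to
    the cross product of the moment arm r = (-l, 0, 0) with the force."""
    F = T * np.array([np.cos(dp) * np.cos(dy),
                      -np.cos(dp) * np.sin(dy),
                      -np.sin(dp)])
    M = np.array([0.0,
                  -T * l * np.sin(dp),
                  T * l * np.cos(dp) * np.sin(dy)])
    return F, M
```

With both gimbal angles at zero, the full thrust acts along the longitudinal axis and no control moment is produced, as expected.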
Starting with the forces, they are expressed in the body axes according to $^B F_a = [-\bar{q} C_A S \;\; \bar{q} C_Y S \;\; -\bar{q} C_N S]^T$, where $C_A$ is the axial aerodynamic force coefficient, $C_Y$ is the lateral aerodynamic force coefficient, $C_N$ is the normal aerodynamic force coefficient, $\bar{q}$ is the dynamic pressure, and $S$ is a reference area, usually corresponding to the cross-sectional area of the fuselage. The axial and normal aerodynamic forces correspond to the body-axes components of lift and drag, and are related through the aerodynamic angles: the angle of attack, $\alpha = \arctan(w_{rel}/u_{rel})$, and the sideslip angle, $\beta = \arcsin(v_{rel}/V_{rel})$, where $u_{rel}$, $v_{rel}$, and $w_{rel}$ are the components of the relative velocity vector with respect to the atmosphere, and $V_{rel}$ its magnitude. The force coefficients can be determined using a linear relation with the aerodynamic angles, $C_Y = C_{Y_\beta}\beta$ and $C_N = C_{N_\alpha}\alpha$, whose derivatives ($C_{Y_\beta}$ and $C_{N_\alpha}$) depend mainly on the angle itself and on the Mach number. As for the aerodynamic moments, in the body axes they are given by $^B M_a = [\bar{q} C_l S d \;\; \bar{q} C_m S d \;\; \bar{q} C_n S d]^T$, where $C_l$, $C_m$, and $C_n$ are, respectively, the rolling, pitching, and yawing moment coefficients, and $d$ is a reference length, usually corresponding to the diameter of the fuselage. If the reference moment station is defined as the centre of pressure, and its location, $x_{cp}$, can be determined, the reference moments are zero and the moment coefficients take a form in which the static stability margin, $SM = (x_{cp} - x_{cm})/d$, intuitively appears; $p$, $q$, and $r$ are the body angular velocities, and $C_{l_p}$, $C_{m_q}$, $C_{m_{\dot{\alpha}}}$, $C_{n_r}$, and $C_{n_{\dot{\beta}}}$ are aerodynamic damping coefficients.

Translational Motion

By applying Newton's second law, and taking into account that the body frame is a rotating one, the translational dynamics are obtained, where $v = [u \; v \; w]^T$ is the velocity vector in the body frame, $\omega = [p \; q \; r]^T$ is the angular velocity vector in the body frame, $S(\cdot)$ is a skew-symmetric matrix, and the mass derivative term has been included in the propulsive force.
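The translational dynamics in the rotating body frame, $\dot{v} = \frac{1}{m}\sum F - S(\omega)\,v$, can be sketched as (a minimal sketch; function names are illustrative):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix S(w) such that S(w) @ v == w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def v_dot(v, omega, F_total, m):
    """Body-frame translational dynamics: v_dot = F/m - omega x v."""
    return F_total / m - skew(omega) @ v
```

The `skew(omega) @ v` term is the apparent acceleration that arises purely from expressing the velocity in a rotating frame.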
By substituting the external forces into (1), the dynamics can be particularised in the body acceleration components.

Rotational Motion

Euler's equation for rigid-body rotational motion yields $J \dot{\omega} + S(\omega) J \omega = \sum {}^B M$, where $J$ is the inertia matrix. Following the axial symmetry assumption, the cross-products of inertia can be assumed zero and the y and z terms can be assumed equal, resulting in a diagonal matrix, $J = \mathrm{diag}(J_l, J_t, J_t)$, where $J_l$ denotes the longitudinal inertia and $J_t$ denotes the transverse inertia. The inertia matrix $J$ and the external moments are then substituted into this equation, in which one term represents the rolling moment caused by the additional roll control system and another accounts for external roll disturbances. Given that the aim of this work is to control the pitch and yaw angles only, the additional roll control system is assumed to be able to reject disturbances on this axis and to control the roll angle. Its inclusion in the model is only for the sake of completeness and it is not considered during the design. Finally, the rotational kinematics are given by the time derivative of the Euler angles [13]:

$\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} 1 & \sin\phi \tan\theta & \cos\phi \tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi / \cos\theta & \cos\phi / \cos\theta \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix}. \quad (5)$

It is noted that, using the Euler angles, a singularity arises for $\theta = \pm \pi/2$; however, the way the reference frames are defined prevents the rocket from reaching this attitude inside the admissible range of operation (far from horizontal orientation). By grouping (2), (4), and (5), the 6 DoF non-linear model of the rocket is fully defined.

Reference Rocket

A specific rocket is needed to serve as reference for the ADCS design. Thus, a preliminary design for a future RED rocket with Thrust Vector Control is performed. The rocket is designed to have a burning phase coinciding with the full duration of the climb, so that TVC can be used to control its attitude up to apogee. It is also required that the terminal velocity be inside a safe range to allow the correct activation of the recovery system.
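The rotational dynamics and Euler-angle kinematics presented earlier can be sketched numerically (a minimal sketch under the diagonal-inertia assumption; function names are illustrative):

```python
import numpy as np

def omega_dot(omega, M, J_l, J_t):
    """Euler's rigid-body equation with J = diag(J_l, J_t, J_t):
    omega_dot = J^-1 (M - omega x J omega)."""
    J = np.diag([J_l, J_t, J_t])
    return np.linalg.solve(J, M - np.cross(omega, J @ omega))

def euler_rates(phi, theta, p, q, r):
    """Euler-angle kinematics (3-2-1 sequence); singular at theta = +/- pi/2."""
    t, c, s = np.tan(theta), np.cos, np.sin
    phi_dot = p + s(phi) * t * q + c(phi) * t * r
    theta_dot = c(phi) * q - s(phi) * r
    psi_dot = (s(phi) * q + c(phi) * r) / np.cos(theta)
    return phi_dot, theta_dot, psi_dot
```

Near the vertical attitude ($\phi = \theta = \psi = 0$) the kinematic matrix reduces to the identity, so the Euler rates coincide with the body rates.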
To meet these design goals, the solid motor parameters are iteratively tested using the propulsion model, and the flight for a vertical undisturbed trajectory is simulated. Tables 1 and 2 respectively present the main rocket characteristics and the simulation results.

Model Linearization

To design an optimal, linear controller, it is necessary to obtain a linear version of the model and the respective state-space representation. The non-linear model can be linearized at equilibrium points of the system using a Taylor series expansion, considering small perturbations. For the case of a rocket, conditions change considerably throughout the flight; hence, it is not correct to choose a single equilibrium point at which to linearize the system. Instead, a reference trajectory is selected and the system is linearized at multiple operating points. The outcome is a linear time-varying system. When obtaining the linear version of the system, it is advantageous to consider some assumptions: the roll rate, $p$, is considered to be zero as it will be controlled by an external roll control system, reducing the order of the system by one; the wind is assumed to be zero, allowing the linear velocity in the body frame to be used directly in the aerodynamic terms; the actuator dynamics are not included in the model; and the system parameters are considered to be constant at the linearization points, removing the dependencies on the state variables when computing the Taylor derivatives. By applying a Taylor series expansion to the non-linear system around the operating points, and considering these assumptions, a linear time-varying system in the perturbation domain is obtained, which can be represented in the state-space form $\dot{x}(t) = A(t) x(t) + B(t) u(t)$, where $A(t)$ and $B(t)$ are the state-space matrices given by the first-order Taylor derivatives with respect to the system states and inputs, respectively, calculated at the operating points.
Regarding the attitude reference that defines the reference trajectory, a varying pitch trajectory is selected, in which the controller restricts the motion to the pitch plane (yaw equal to zero) and makes the rocket deviate from the vertical to later recover it. In this way, it is ensured that the apogee is reached further away from the launch site, increasing safety. Figure 3 shows the reference pitch angle and rate over time. Since the system is naturally unstable, it is necessary to find the time evolution of the nominal control inputs that allows the rocket to nominally follow the trajectory, defined by an attitude reference over time. This is done using a PID controller that, in simulation and without perturbations, is able to stabilise the vehicle and track the attitude reference. The input values over time are then stored for use as predetermined feedforward control inputs. It is possible to identify two distinct sections of the reference trajectory: a first section, up to t = 25 s, in which motion is strictly vertical, and a second section, up to burnout, in which pitch is varying. For the varying pitch section, we have $\phi_0 = \psi_0 = 0$, $v_0 = 0$, $r_0 = 0$, and $\delta_{y0} = 0$. This results in a simplified version of the state-space representation, for which the longitudinal and lateral modes are decoupled. In the vertical section, we have $\phi_0 = \theta_0 = \psi_0 = 0$, $v_0 = w_0 = 0$, $q_0 = r_0 = 0$, and $\delta_{p0} = \delta_{y0} = 0$. This results in a simplified version of the decoupled state-space representation, for which $u$ and $\theta$ are no longer states of the system. Finally, it is important to determine the location of the system poles throughout the nominal trajectory to assess the open-loop stability. For a time-varying system, stability is not mathematically guaranteed with this method; however, the study is carried out to understand the behaviour of the system throughout the flight.
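The linearization at an operating point can be sketched with finite-difference Jacobians (a generic numerical sketch, not the paper's analytical Taylor derivatives; function names are illustrative):

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians A = df/dx and B = df/du of xdot = f(x, u),
    evaluated at the operating point (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    f0 = f(x0, u0)
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B
```

Repeating this at each operating point along the reference trajectory yields the scheduled $A(t)$, $B(t)$ pairs; the eigenvalues of each $A$ give the pole locations discussed next.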
Figure 4 details the pole evolution (from blue to green) during the vertical section and the poles at t = 50 s, which serve as an example of the distribution during the varying pitch section. Looking at the different pole distributions during the flight, some conclusions can be drawn. First, the system has poles located in the right half of the complex plane for the entire trajectory, meaning that it is naturally unstable. This was expected due to the negative static stability margin caused by the absence of aerodynamic fins. Second, during the vertical section, there is an equivalence between the lateral and longitudinal modes, verified by the identical pole distributions, which is due to the symmetry of the vehicle and to the equality of the nominal values of the corresponding states. Third, the system has complex conjugate poles with positive real part during the first seconds of the flight, which indicates naturally unstable oscillatory behaviour, after which all poles come to lie on the real axis. Finally, it is concluded that the velocity of the rocket is a driving factor for the response of the system: as velocity increases during the flight, the system is seen to have higher magnitude poles and hence a faster response. This is attributed to the fact that at higher velocities the magnitude of the aerodynamic forces and moments is also higher, causing higher accelerations on the system when the inputs are actuated.

Linear Quadratic Integral (LQI) Control

Using the linear time-varying state-space representation of the system, a linear quadratic regulator (LQR) is designed with the addition of an integral action, also known as linear quadratic integral (LQI) control. The LQR is a technique that finds the optimal gain matrix $k$ for the linear control law $u = -k x$, which minimises the quadratic cost function

$J = \int_0^T \left( x^T Q\, x + u^T R\, u \right) dt,$

where $Q$ is a positive semi-definite matrix and $R$ is a positive definite matrix [16].
In the cost function, the quadratic form $x^T Q\, x$ represents a penalty on the deviation of the state $x$ from the origin, and the term $u^T R\, u$ represents the cost of control, making $Q$ and $R$ the tuning parameters for the resulting controller. Using the infinite-horizon version, which means taking $T$ as infinity, the solution which minimises the cost function and guarantees closed-loop asymptotic stability is the constant gain matrix $k = R^{-1} B^T P$, where $P$ is the solution to the Algebraic Riccati Equation (ARE)

$P A + A^T P - P B R^{-1} B^T P + Q = 0.$

Since the system is time-varying, the ARE has to be solved for the models coming from each linearization point, resulting in a set of gain matrices to be selected, or scheduled, throughout the flight. The LQR feedback control law ideally drives the states of the system in the perturbation domain to zero, ensuring that the nominal values throughout the trajectory are followed. However, it does not guarantee a zero tracking error for non-zero attitude references. In order to have zero reference tracking error, and to increase the robustness of the controller, an integral action that acts on the attitude tracking error is added, according to the scheme in Fig. 5. The difference between the reference signal, $r$, and the output of the system, $y$ (the tracking error), is taken as the time derivative of the state-space variables that result from adding the referred integrator: $\dot{x}_i = r - y$. The state-space representation of the resulting regulator can be obtained by combining the open-loop state-space representation with the feedback law, where $z = [x^T \; x_i^T]^T$ is the augmented state vector and $C$ is the output matrix that selects the output of the system from the original state vector ($y = C x$).
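The gain computation at one operating point can be sketched with SciPy's continuous-time ARE solver, assuming the integrator-augmented form just described (the toy double-integrator matrices below are illustrative, not the rocket's):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqi_gain(A, B, C, Q, R):
    """LQR gain for the integrator-augmented system z = [x; xi],
    with xi_dot = r - C x (integral of the tracking error)."""
    n, m = B.shape
    p = C.shape[0]
    Aa = np.block([[A, np.zeros((n, p))],
                   [-C, np.zeros((p, p))]])
    Ba = np.vstack([B, np.zeros((p, m))])
    P = solve_continuous_are(Aa, Ba, Q, R)
    return np.linalg.solve(R, Ba.T @ P)  # K = R^-1 B^T P

# toy operating point: a double integrator with position output
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = lqi_gain(A, B, C, np.eye(3), np.eye(1))
```

Solving this at each scheduled operating point yields the set of gain matrices the paper selects throughout the flight.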
The optimal gain $K$ is obtained by solving the ARE using the rearranged (augmented) system matrices. Considering the decoupling between the longitudinal and lateral modes, the decoupled augmented state vectors are $z_{lon} = [u \; w \; q \; \theta \; \theta_i]^T$ and $z_{lat} = [v \; r \; \psi \; \psi_i]^T$, where $\theta_i$ and $\psi_i$ are the integral states. This implies that $A$ and $B$ are divided into the longitudinal and lateral modes, and that the $C$ matrix for the lateral mode is the one that selects the yaw angle, while for the longitudinal mode it is the one that selects the pitch angle. The design degree of freedom is the selection of the tuning matrices $Q$ and $R$, which are also divided into longitudinal and lateral counterparts. First of all, setting all non-diagonal entries to zero, and only focussing on the diagonal ones, allows for a more intuitive matrix selection given by the "penalty" method. According to this method, the diagonal entries of the $Q$ matrix determine the relative importance of the state variables in terms of origin tracking performance, while the diagonal entries of the $R$ matrix allow the control effort for each input to be adjusted directly. Therefore, the weighting matrices have a diagonal format, separated for each mode. Given the nature of the TVC actuation, trying to control the linear velocities would conflict with the attitude control, especially for non-zero attitude references. Hence, the linear velocity related terms are set to zero. By doing this, the associated gains have negligible magnitude, allowing the use of partial state feedback with $k_{lon} = [k_q \; k_\theta \; k_{\theta_i}]$ and $k_{lat} = [k_r \; k_\psi \; k_{\psi_i}]$. The tuning parameters are iteratively adjusted looking at the closed-loop poles and at the step response performance in the linear domain, including the actuator dynamics, modelled as a first-order system. Regarding the closed-loop poles, the control law stabilised all operating points, placing all closed-loop poles in the left half of the complex plane. Table 3 details the step response parameters for multiple operating points.
Navigation System Design

So far, it was assumed that the control system has access to an exact full-state measurement. In reality, it is necessary to have a navigation system, composed of sensors and estimators, capable of providing an accurate estimate of the state vector. For the case of rockets, and taking into account the state variables to be measured, it is common to use an Inertial Measurement Unit (IMU), composed of accelerometers, gyroscopes, barometers, and magnetometers, and a Global Navigation Satellite System (GNSS) receiver. The estimator architecture was based on [17]. It is composed of three main filters and a pre-processing unit (PU), according to the scheme in Fig. 6. The pre-processing unit combines the magnetometer and accelerometer readings, $m_r$ and $a_r$, to obtain an indirect measurement of the Euler angles, $\lambda_r$. The first filter is an Attitude Complementary Filter (ACF), which uses the Euler angle readings and the angular rates measured by the gyroscopes, $\omega_r$, to provide a filtered attitude estimate, $\hat{\lambda}$, and an estimate of the bias of the three angular rates, $b_\omega$, used to correct the signal from the sensor. The second is a Position Complementary Filter (PCF), which merges the position readings from the GNSS receiver, translated into the inertial frame, $p_r$, with the acceleration measurements from the accelerometer to provide an estimate of the velocity vector, $\hat{v}$. This filter is also self-calibrating, since it accounts for the bias in the three acceleration readings, $b_a$. Finally, a Linear Parameter Estimator (LPE) uses the control inputs and the pre-filtered velocities, angular rates, and attitude values to give a final estimate of the state vector, $\hat{x}$, and of the parameters, $\hat{\theta}$.

ACF

For the ACF, it is assumed that the Euler angle measurements are corrupted by Gaussian white noise, $w_\lambda$, as are the angular rate readings, $w_\omega$, and that the gyroscope bias is described by a constant term with additional Gaussian white noise.
Considering this, the filter is based on the kinematic equations for the Euler angles (5), using the Euler angle readings directly in the process matrices to allow for the use of time-invariant gains.

PCF

For the PCF, both the position and acceleration measurements are considered to be corrupted by Gaussian white noise, $w_p$ and $w_a$, and the accelerometer bias is also described by a constant term with additional Gaussian white noise. This filter is also kinematic, considering the equations of motion $\dot{p} = {}^E v$ and ${}^E \dot{v} = R\, a$, where $p$ is the position in the inertial frame and $a$ is the acceleration expressed in the body frame. The state-space representation of the filter is then obtained, where $I$ stands for the identity matrix and $0$ for the matrix of zeros, both of dimension 3 by 3. The rotation matrix $R$ is calculated using the Euler angle estimate from the ACF ($\hat{\lambda}$). The individual gain matrices, each with dimension 3 by 3, can once again be computed considering the vertical attitude time-invariant to define the rotation matrix $R$, so as to obtain time-invariant Kalman gains.

LPE

The robustness of the LQR is limited, since the controller is designed considering a nominal evolution of the model parameters that might differ considerably from the real evolution during the mission. Amongst the model parameters, the ones related to the aerodynamic properties of the rocket are subjected to a higher level of uncertainty, due to the difficulty of obtaining accurate aerodynamic coefficients and derivatives of the rocket for a broad range of velocities and aerodynamic angles. Therefore, an online parameter estimator is proposed so that the controller acts on an informed value of the aerodynamic parameters. The aerodynamic parameters are hidden inside the aerodynamic force and moment coefficients. Since a first estimate of these quantities is available from the stored aerodynamic data, a proportional error factor is multiplied into each aerodynamic force and moment, corresponding to the parameters to be estimated.
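The idea of estimating a multiplicative correction factor on a pre-computed aerodynamic value can be illustrated with a minimal scalar recursive least-squares sketch (the paper's LPE is a Kalman filter on the augmented state; all names and numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the measured aerodynamic force equals the stored
# model value scaled by an unknown correction factor (a_true = 1.3).
a_true = 1.3
a_hat, P = 1.0, 1e3  # initial estimate and covariance
for _ in range(200):
    F_model = rng.uniform(50.0, 500.0)               # stored model value
    F_meas = a_true * F_model + rng.normal(0.0, 5.0)  # noisy "measurement"
    # scalar recursive least-squares update
    K = P * F_model / (1.0 + F_model * P * F_model)
    a_hat += K * (F_meas - F_model * a_hat)
    P = (1.0 - K * F_model) * P
```

The estimate converges to the true correction factor as long as the regressor (the model value) is persistently non-zero, mirroring the observability condition derived next.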
The estimator design follows the methodology proposed in [18], where a hovercraft control system is designed based on dynamic parameter identification, and which details a generic parameter estimator for time-varying systems that are linear in the parameters. The previously detailed rocket model (Sect. 2.4) is rearranged by including the proportional error factors, $a_x$, $a_y$, $a_z$, $a_m$, and $a_n$, in the aerodynamic forces and moments, making ${}^B F_a = [-\bar{q} C_A S a_x \;\; \bar{q} C_Y S a_y \;\; -\bar{q} C_N S a_z]^T$ and ${}^B M_a = [0 \;\; \bar{q} C_m S d\, a_m \;\; \bar{q} C_n S d\, a_n]^T$, where the aerodynamic rolling moment is discarded due to the additional roll control system. After substituting the rearranged aerodynamic forces and moments into the rocket model, and considering the linearity in the parameters to be estimated, the non-linear differential equations take the form $\dot{x} = f(x, t) + G(x, t)\,\theta$, where $x = [u \; v \; w \; q \; r]^T$ and $\theta = [a_x \; a_y \; a_z \; a_m \; a_n]^T$. Using state augmentation with the parameter vector, $\theta$, and assuming that full-state measurements, $y$, are available, this system can be written in state-space form, in which the full-state measurement assumption allows the system to be regarded as linear, and the parameters are assumed to be slowly varying. The $G(y, t)$ and $f(y, t)$ matrices are easily obtained using the derived rocket model with the inclusion of the correction factors, and are not presented here to improve readability. In order to design the estimator for this system, it is necessary for it to be observable. In the reference, it is demonstrated that the system is observable if and only if there exists no unit vector $d$, with the dimension of the parameter vector, such that $\int_{t_0}^{t} G(y, \tau)\, d\tau \cdot d = 0$. Taking the time derivative on both sides and substituting the rocket dynamics, the equivalent non-observability condition is obtained, where $d_i$, for $i = 1, 2, 3, 4, 5$, are the components of the unit vector, and the simplification is due to $m$, $J$, $\bar{q}$, $d$, and $S$ always being different from zero.
It is possible to infer that the system is observable only when the aerodynamic force and moment coefficients are all different from zero, since if one of them is not, the unit vector with $d_i = 1$, where $i$ corresponds to the component multiplying the null coefficient, satisfies the non-observability condition. However, given a null coefficient, only the corresponding correction factor is unobservable, meaning that estimates can still be obtained for the remaining ones. Nevertheless, to ensure full observability, if the pre-calculation of a given coefficient results in zero, it can be forced to a small non-zero value. After verifying that the system can be made observable, a Kalman filter represents a simple and easily tunable solution for the estimation of the augmented system state, yielding the state-space representation of the LPE.

Adaptive LQI Control

Resorting to the real-time estimates of the aerodynamic error coefficients, the LQI controller gains can be computed on-board instead of scheduling the pre-calculated gains for each operating point. This is done by rewriting the state-space representation with the inclusion of the estimated parameters, and solving the ARE on-board with the updated state-space models. The rearranged system dynamics matrix, $A$, is easily obtained and is not presented here.

Simulation Environment

To test and validate the proposed ADCS on the complete non-linear model, a realistic simulation model is implemented in the MATLAB/Simulink® environment. The model is composed of several subsystems in order to completely transcribe the derived dynamics and kinematics, generate the environmental properties, and compute the model-varying parameters. The environmental properties are generated by the atmospheric, wind, and gravitational models. The 1976 U.S.
standard atmosphere was implemented, which describes the evolution of temperature and pressure with altitude using average annual values, from which density and the speed of sound are derived. Wind is introduced through the summation of the average horizontal wind components from the U.S. Naval Research Laboratory horizontal wind model with wind gusts from the Dryden model, both available as Simulink blocks. Finally, the gravitational model is implemented according to the equations in Sect. 2.3.1. Several varying model parameters have to be computed during simulation. The ideal thrust force and mass flow rate are predetermined using the propulsion model detailed in Sect. 2.3.2, and the static, atmospheric pressure-dependent thrust component is added during the simulation. The aerodynamic properties, i.e. the aerodynamic derivatives and the centre of pressure location, are stored in look-up tables and are selected according to the instantaneous values of the aerodynamic angles and Mach number. The mass properties are also computed during the simulation, including the mass, inertia, and centre of mass, which vary due to propellant consumption. The equations of motion used in the simulation environment are the ones presented in Sect. 2.4. It is important to note that some assumptions were used when deriving the model and, although considered valid for design, they can have an impact on the real-world performance relative to that obtained in simulation. Elastic modes might be excited by the control action if the associated frequencies are similar, causing undesired oscillatory behaviour; asymmetries may cause the centre of mass to be displaced from the x-axis of the body, imposing additional effort on the control action; and non-linear aerodynamic effects may cause unexpected behaviour, as may unaccounted-for effects caused by the rotation of the Earth, such as the Coriolis acceleration.
ADCS Parameters

In this section, some of the parameters used in the simulation environment are detailed. These include the model used for the actuators, the control system gains, and the covariance matrices obtained for the navigation system.

Actuators' Model

The actuators' dynamics are modelled using a continuous-time first-order transfer function for each input ($\delta_p$ and $\delta_y$), considering a servo-actuated system. The transfer function is given by $\delta_r(s)/\delta_c(s) = 1/(\tau s + 1)$, where $\delta_r$ is the actuator angular response, $\delta_c$ is the commanded angle, and $\tau$ is the time constant. In addition, servo motors normally have a saturation value for the rotation velocity, which can be modelled by a rate limiter block in Simulink. The time constant and angular velocity limit values were retrieved from typical high-grade servo motors, and are equal to 0.02 s and 1 full rotation per second, respectively.

Control Gains

The tuning of the $Q$ and $R$ matrices for each mode yielded the time evolution of the controller gains throughout the nominal trajectory detailed in Fig. 7. The gains remain approximately constant, given that the tuning matrices were left constant for all operating points, except for the ones associated with the longitudinal mode during the varying pitch section, which were tuned in order to reduce the control effort and avoid saturation.

Estimation Covariances

Each of the individual filters composing the navigation system follows the Kalman filter structure [19], meaning that the tuning parameters are their respective covariance matrices, $Q$ and $R$: $Q$ corresponds to the covariance of the process noise, and $R$ to the covariance of the measurement noise. The $R$ matrices can be derived from the noise properties of the on-board sensors, while the $Q$ matrices are iteratively adjusted looking at the simulation results in the realistic environment, yielding $Q_{acf} = 10^{-1} I$, $Q_{pcf} = 10^{-2} I$, $Q_{lpe} = \mathrm{diag}(10^{-2} I, 10\, I)$, $R_{acf} = I$, $R_{pcf} = 4\, I$, and $R_{lpe} = 10\, I$.
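The first-order servo model with a slew-rate limit can be sketched with a simple Euler discretisation (a minimal sketch; the time constant and rate limit follow the values stated above, the step size is illustrative):

```python
import numpy as np

def actuator_step(delta, cmd, dt, tau=0.02, rate_max=2 * np.pi):
    """One Euler step of the first-order servo model 1/(tau*s + 1),
    with the slew rate clipped to +/- rate_max rad/s (1 rotation/s)."""
    rate = (cmd - delta) / tau
    rate = np.clip(rate, -rate_max, rate_max)
    return delta + dt * rate

# step response to a small command (rate limit inactive for this amplitude)
d = 0.0
for _ in range(100):  # 0.1 s = 5 time constants at dt = 1 ms
    d = actuator_step(d, 0.05, 1e-3)
```

After five time constants the response has essentially settled at the commanded 0.05 rad; for large commands the rate limiter dominates and the response becomes a ramp instead.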
Navigation System The navigation system is tested and is able to reject the noise introduced by the sensors, remove the bias, and provide an accurate estimate of the state of the rocket. Figure 8 presents the pitch angle and pitch rate estimation by the ACF, while Fig. 9 presents the crossrange position and longitudinal velocity estimation by the PCF, both as examples of the performance of the system. The linear parameter estimator is also tested in simulation, by inducing errors in the aerodynamic coefficients, and is able to correctly estimate the parameters. Figure 10 presents the results for a simulation in which errors were induced in the aerodynamic coefficients associated with pitch plane motion (C_A, C_N, and C_m). The affected parameters (a_x, a_z, and m) are seen to correctly converge to the expected values, and the associated estimation errors for the aerodynamic forces and moments are minimised. LQI Control The LQI controller is implemented in the simulation model and tested by adding wind, with and without gusts, as external perturbation. Table 4 displays the results in terms of attitude tracking performance and control effort, not only for the LQI controller, but also for a tested PID controller for comparison. It is noted that the LQI controller provides better attitude tracking for the same control effort with respect to the PID. In the yaw plane, the results for the PID are significantly worse since it is very affected by the initial wind perturbation. The step response is also analysed (Tab. 5). Once again, the LQI displays satisfactory performance, close to the design values (Tab. 3), and significantly better than the classical PID. Additionally, a robustness analysis is carried out, in which the model parameters are varied in percentage. For the assumed parameter uncertainties, the LQI controller shows high robustness.
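The predict/update structure shared by the navigation filters can be illustrated with a scalar Kalman filter; this is a generic sketch, not the flight code, and all names and values are illustrative:

```python
import random

# Sketch of the scalar Kalman filter structure underlying each navigation
# filter: Q is the process-noise covariance, R the measurement-noise
# covariance (here a constant state observed through noisy measurements).
def kalman_step(x, p, z, q, r):
    """One predict/update cycle for a scalar random-walk state."""
    # Predict: the state model is constant; uncertainty grows by Q.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

random.seed(0)
truth = 2.0                              # true (unknown) state
x, p = 0.0, 1.0                          # initial estimate and covariance
for _ in range(500):
    z = truth + random.gauss(0.0, 0.5)   # noisy measurement
    x, p = kalman_step(x, p, z, q=1e-4, r=0.25)
```

The ratio Q/R sets how aggressively the filter trusts new measurements over its prediction, which is exactly the trade-off behind the covariance tuning quoted earlier.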
(c) Adaptive LQI Control Due to the high robustness of the non-adaptive LQI controller, the adaptive version is not able to produce significant performance improvements. Complete ADCS The complete ADCS is tested by integrating the attitude control and navigation systems. Table 6 details the attitude tracking performance and the control effort with wind gusts present, in comparison with the results for the control system alone without sensor noise. As expected, there is a performance decrease. However, it is still satisfactory. Sensitivity Analysis Finally, a sensitivity analysis was performed to determine the robustness of the system to model uncertainties. Several system parameters, including dry mass, inertia, thrust, centre of mass position, and aerodynamic coefficients, were altered independently, inside admissible ranges expressed as percentages of the original values. The system showed sufficient robustness, being able to stabilise the plant for all the variations under study. The parameter which demonstrated the highest influence on the performance of the control system was the position of the centre of mass ( x cm ), with the results shown in Table 7. A lower value, meaning a position closer to the tip of the rocket, causes the moment arm for the thrust vector actuation to be higher, which increases the control authority. At the same time, the natural instability of the rocket reduces. In this way, the tracking performance increases when the centre of mass moves closer to the tip, while the control effort decreases. Conclusions With the conclusion of this work, it is possible to state that the primary goal has been achieved: the successful design of an attitude determination and control system applicable to sounding rockets with thrust vectoring. The design process was described in a generic way to ensure that the system can be easily applied to different vehicles under the same category.
Nevertheless, the future implementation of the system in a student-built sounding rocket was always taken into account, as it was the initial motivation behind this work. As future work, it would be of interest to develop non-linear controllers for the attitude control problem in order to compare their performance with that of the controllers developed here. In particular, the designed linear parameter estimator could be used in a non-linear control system that requires accurate information on the aerodynamic forces and moments to guarantee its correct functioning. Moreover, both the developed simulation model and the navigation system can be verified and validated using real flight data from sounding rockets launched by RED. In fact, the simulation environment shall be improved by including phenomena yet to be modelled, such as elastic modes, non-linear aerodynamic effects, the curvature and rotation of the Earth, and body asymmetries, to further verify the system before implementation in a real-case scenario. Finally, RED is currently developing small-scale prototypes to test the TVC technology and the associated navigation and control systems. In this way, it is intended to apply the techniques developed here to such prototypes and to analyse all the results coming from test campaigns. Conflict of interest The authors declare that they have no known conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Performance Testing of Micro-Electromechanical Acceleration Sensors for Pavement Vibration Monitoring

Pavement vibration monitoring under vehicle loads can be used to acquire traffic information and assess the health of pavement structures, which contributes to smart road construction. However, the effectiveness of monitoring is closely related to sensor performance. In order to select a suitable acceleration sensor for pavement vibration monitoring, a printed circuit board (PCB) with three MEMS (micro-electromechanical) accelerometer chips (VS1002, MS9001, and ADXL355) is developed in this paper, and the circuit design and software development of the PCB are completed. The experimental design and comparative testing of the sensing performance of the three MEMS accelerometer chips, in terms of sensitivity, linearity, noise, resolution, frequency response, and temperature drift, were conducted. The results show that the dynamic and static calibration methods of the sensitivity test gave similar results. The influence of gravitational acceleration should be considered when selecting the range of the accelerometer to avoid the phenomenon of over-range. The VS1002 has the highest sensitivity and resolution under a 3.3 V standard voltage supply, as well as the best overall performance. The ADXL355 is virtually temperature-independent in the temperature range from −20 °C to 60 °C, while the voltage reference values output by the VS1002 and MS9001 vary linearly with temperature. This research contributes to the development of acceleration sensors with high precision and long life for pavement vibration monitoring. Introduction In order to ensure the efficient, safe and intelligent operation of road-traffic systems, more and more monitoring technologies are applied, and pavement dynamic response monitoring is an essential part.
By monitoring and analyzing the pavement dynamic response data under the action of vehicle loads, traffic information and the service state of the pavement structure can be obtained, which provides data support for traffic control and road maintenance [1]. When monitoring pavement dynamic response, the quantities monitored are often stress, strain, displacement, or bending. In recent years, accelerometers have been applied to pavement vibration monitoring under vehicle loading with an improvement in accelerometer performance, such as high sensitivity, low power consumption, and small size [2]. Researchers are already using different acceleration sensors in road engineering to monitor traffic and structural health, as summarized in Table 1.

Table 1. Acceleration sensors used in road engineering. Each entry lists sensor; manufacturer; type; installation; application [references] (— marks entries not available):
- Accelerometer vibration sensors; —; —; installed at a relative distance of approximately 45 m on the right side along one of the roads at the training site; vehicle counting and classification [3,4].
- Road marking units; self-development; MEMS; a pair of sensor nodes deployed on both sides of the two-lane, two-way highway, with the two sensors on the same side separated by 6 m and working in pairs for detection and estimation (the accelerometer can optionally be used to improve the monitoring accuracy); vehicle counting and classification [5,6].
- G-link-LXRS wireless accelerometer node; LORD Sensing; MEMS; installed at the edge of the pavement, with a 1 m distance to the wheel path; vehicle counting and classification [7].
- Wireless accelerometers; Sensys Networks; MEMS; detection sensors report vehicle detection time, and multiple arrays of vibration sensors report pavement acceleration for load estimation; —.
- Acceleration-sensing nodes (MS9002 from Colibrys, chosen for its low operating voltage and low current consumption); self-development; MEMS; installed on the wheel path, in two rows 3.1 m apart; vehicle speed, vehicle type, and abnormal vehicle weight [11,12].
- SmartRock; STRDAL; MEMS; installed in an asphalt pavement test section using an accelerated pavement testing (APT) loading facility as well as an in situ pavement to collect the pavement responses; vehicle speed and pavement compactness [13-16].
- ICP (integrated circuit piezoelectric) accelerometer; unknown; IEPE; can be mounted at any convenient location (in this paper, on the frame under the passenger seat by a magnetic base); road surface roughness [17].
- Triaxial accelerometer ADXL335; —; MEMS; mounted on each front and rear axle hub with adhesive to be held securely; —.
- KB12VD; Metra Meß (Germany); IEPE; embedded in an asphalt pavement, with a vehicle of known weight and dimensions driven close to the embedment location; vehicle speed, axle weight distribution, and surface layer modulus [20,21].
- Wireless accelerometer; unknown; MEMS; mounted on the HMA slabs at 10 cm off the wheel path for acceleration measurements; structural condition assessment of asphalt concrete slab [22].
- Triaxial accelerometer; unknown; MEMS; adopted to test the pavement vibration acceleration signals caused by the bus; structural cracking assessment [23].
- Structural wireless test systems (A1005 ± 2 g); Bridge Diagnostics, Inc; MEMS; attached to the runway surface using special glue, on three measurement lines with a spacing of 15 m, with four acceleration sensors per line spaced 2 m, 2 m, and 3 m from the aircraft runway centerline towards the shoulder; —.
- Fibre-optic sensors; self-development; optical fiber; pre-fixed to the reinforcement before pouring concrete, with high-density placement at the corners and edges of the slab; support conditions assessment of concrete pavement slab [25-27].

As shown in Table 1, the acceleration sensors used in road engineering mainly include MEMS, IEPE (integrated electronics piezo-electric), and optical fiber sensors. Among them, IEPE enables high-precision monitoring, which is suitable for roadside deployment. Fiber-optic sensors are low-cost and durable, and are currently used for the vibration monitoring of concrete pavement slabs. However, both of them lack integration and need to be equipped with dedicated data acquisition systems, which leads to high monitoring costs and high power-supply energy consumption. Compared to other sensors, MEMS accelerometers have many advantages, including small size, low power consumption, high accuracy, high reliability, and low cost. The small size of MEMS sensors allows for better integration and packaging, enabling integration with electronic components such as CPUs for edge computing, and they can be embedded in the pavement without damaging the road structure. The low power consumption of MEMS sensors allows for self-powering using new energy sources such as solar, wind, and piezoelectric energy, and can greatly save power supply costs for road monitoring in remote areas without the need for cables. The high accuracy and high reliability of MEMS sensors make them suitable for long-term monitoring in harsh service environments. The low cost of MEMS sensors allows for low-cost deployment at multiple points in road structures, making wide-scale use of MEMS sensors possible. In recent years, with the improvement in the performance of MEMS acceleration sensors, MEMS-based acceleration sensors have gained the capability of real-time acquisition, processing, and analysis. The amount of collected data provides conditions for data-driven pavement vibration monitoring. However, the pavement materials, structures, and the external environment affect the traffic-induced pavement vibration.
The pavement vibration signal is complex and fluctuates randomly, and the vibration amplitude drops rapidly. High-precision acceleration sensors are needed for pavement vibration monitoring. Nevertheless, there are many kinds of MEMS vibration sensors with different performance parameters. It is necessary to compare and test different MEMS accelerometer chips to analyze the influence of different factors on pavement vibration signals, which can guide the development and application of acceleration sensors in a smart road. Hardware Composition In order to compare the performance of three different MEMS accelerometer chips (MS9001, VS1002, and ADXL355), a sensor PCB with the different MEMS accelerometer chips was developed, as shown in Figure 1. The main components of the PCB include an ultra-low-power CPU (STM32F103C6T6A), three MEMS accelerometer chips (VS1002, ADXL355, MS9001), an analog-to-digital converter (AD7172), a buck regulator chip (AMS1117-3.3), and a 485 communication chip (MAX3485ED). Among them, MS9001 and VS1002 output digital signals through the AD converter. The CPU calculates and processes the digital signals and communicates with the host computer through the 485 communication interface. The ADXL355 directly outputs digital signals without the AD converter. The PCB is powered by 5 V USB, stepped down to 3.3 V by the AMS1117-3.3 to provide power for the CPU and the MEMS accelerometer chips.
The main functions and datasheet links of the utilized components are shown in Table 2. Circuit Design The ADXL355 is a triaxial accelerometer with a range of ±2 g. The pin configuration of the ADXL355 is shown in Figure 2. To ensure a stable 3.3 V supply voltage, a single 100 nF decoupling capacitor was designed in the power port to eliminate supply noise. MS9001 and VS1002 are high-precision single-axis accelerometers with measuring ranges of ±1 g and ±2 g, respectively. They can output analog signals and convert them into digital signals via the AD converter. The pin configuration of MS9001 and VS1002 is shown in Figure 3. Three 1 µF capacitors were designed at the power supply to eliminate the power supply noise and ensure power supply stability. In order to effectively compare the sensing performance of the different accelerometer chips, the voltage output terminal was not filtered, and the direct output was adopted. The power supply voltages were both 3.3 V. Data Acquisition In order to acquire the original data and test the sensor performance, Visual Studio (VS) was used to design the visual acquisition interface, as shown in Figure 4. VS supports various programming languages and provides a wealth of development tools to help developers easily develop Windows applications, web applications, mobile applications, and more. The visual acquisition interface of the serial port of the host computer was written in the C# programming language, and has the essential functions of receiving, visualizing, and storing data. The digital signal of the acceleration sensor was output through the serial port. The communication between the host computer and the acceleration sensor was realized by a USB-to-serial port. The baud rate of serial communication was set to 115,200 bit/s. This is a commonly used baud rate for serial communication, and it is effective in transmitting data from multiple MEMS accelerometer chips while maintaining a high level of data transmission quality and reducing the risk of interference. The high baud rate allows the host computer to effectively process the data without causing any data loss or degradation in data transmission quality.
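As an illustration of how the host side might turn raw samples into physical units, the sketch below converts a 24-bit ADC code to a voltage and then to acceleration. It assumes bipolar two's-complement coding and a 3.3 V reference, which are configuration-dependent choices (check the converter's datasheet for a real setup), and all names are illustrative:

```python
# Sketch of converting a raw 24-bit sample from a converter such as the
# AD7172 into volts, assuming bipolar operation with two's-complement
# coding and a 3.3 V reference (configuration-dependent assumptions).
FULL_SCALE = 1 << 23   # 2^23 counts per polarity for a 24-bit converter
VREF = 3.3             # reference voltage [V]

def adc_to_volts(code):
    """Convert a raw 24-bit two's-complement ADC code to a voltage."""
    if code & (1 << 23):          # sign bit set: negative value
        code -= 1 << 24
    return code * VREF / FULL_SCALE

def volts_to_accel(v, v_zero, sensitivity):
    """Convert a sensor voltage to acceleration [g], given the zero-g
    offset voltage and the calibrated sensitivity [V/g]."""
    return (v - v_zero) / sensitivity
```

The zero-g offset and sensitivity in the second function are exactly the quantities the calibration tests below are designed to determine.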
Experimental Design The frequency of traffic-induced pavement vibration is generally 1-50 Hz, and the peak vibration is within 1 g [28,29]. Therefore, the performance parameters of the accelerometers were tested within the range of ±1 g. The sampling frequency was set as 500 Hz. In order to effectively test and evaluate the accelerometer performance parameters, including sensitivity, linearity, noise, resolution, frequency response, and temperature drift, a corresponding test scheme was designed in this study. In actuality, the datasheet for an accelerometer chip provides the relevant performance parameters. However, different manufacturers provide different conditions for these parameters, making it difficult to compare them effectively. For example, the supply voltage for the ADXL355 and the VS1002 is 3.3 V, while the supply voltage for the MS9002 is 5 V. Different supply voltages will affect the values of the performance parameters.
In addition, the performance parameters given in the datasheet are typical values, and the sensor's performance parameters may change depending on the application and circuit design. Therefore, by studying and testing these characteristics in more detail, this study can provide valuable insights and recommendations to engineers and designers who are using or considering using these accelerometers in their products. Sensitivity Test Sensitivity is the ratio between the change in the output quantity (dv) and the change in the input quantity (da). It is the slope of the input-output characteristic curve, as shown in Equation (1): S = dv/da. (1) For linear sensors, the sensitivity is a constant and can be obtained by both dynamic and static calibration tests. (1) Dynamic calibration test The shaker used for this calibration experiment is the IPA180L/H1248A electrodynamic vibration test system with a rated frequency of 2-2500 Hz, as shown in Figure 5. Specific practical steps are as follows: 1. The measured sensor and the standard sensor were bolted to the shaking table. Both were located in the center area of the shaker to ensure that they were subjected to the same vibration amplitude. The accuracy of the calibration value was guaranteed by simultaneous movement. 2. The vibration frequency of the shaker was controlled to stay at 10 Hz. A sine wave with an acceleration amplitude of 0.1 g was output, and the voltage signal output of the acceleration sensor at this time was recorded. 3. With the acceleration amplitude of 0.1 g as the increment, the amplitude was gradually increased until it reached 1 g.
Each acceleration input and the corresponding sensor output voltage signal were recorded in turn. (2) Static calibration test The correspondence between the sensitive axis direction of the MEMS accelerometer and the direction of gravitational acceleration was used for calibration, as shown in Figure 6.
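Either calibration ultimately reduces to fitting a line through (acceleration, voltage) pairs, whose slope is the sensitivity of Equation (1). A plain least-squares sketch, using the three VS1002 dynamic-calibration readings quoted in the results as example data (the full test used finer 0.1 g increments):

```python
# Least-squares fit of output voltage versus input acceleration; the
# slope of the fitted line is the sensitivity (Equation (1)).
def fit_line(x, y):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

accel_g = [0.1, 0.5, 1.0]          # input acceleration [g]
volts_mv = [130.6, 677.2, 1367.0]  # VS1002 output amplitude [mV]
sensitivity, offset = fit_line(accel_g, volts_mv)  # slope in mV/g
```

With only these three points the slope comes out near 1374 mV/g, close to the roughly 1371 mV/g reported for the full ten-point calibration.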
When the positive direction of the sensor's sensitive axis coincided with the direction of gravitational acceleration, the input acceleration was considered to be 1 g. When the positive direction of the sensor's sensitive axis was opposite to the direction of gravitational acceleration, the input acceleration was considered to be −1 g. When the direction of the sensor's sensitive axis was perpendicular to the direction of gravitational acceleration, the input acceleration was considered to be 0 g. Therefore, the sensor output voltage signal was obtained for 1 g, −1 g, and 0 g acceleration. Noise and Resolution Test The output signal of the acceleration sensor contains circuit noise and environmental noise. In order to truly reflect the noise level of the internal system of the sensor, the acceleration sensor was placed on an anti-vibration table, with the direction of its sensitive axis perpendicular to the direction of gravitational acceleration. At this time, the acceleration input in the direction of the sensitive axis was 0 g, and the voltage data output by the acceleration sensor were collected. The standard deviation is a statistical index that measures the deviation of each value in a data set from the mean value, reflecting the degree of dispersion of the data distribution, as shown in Equation (2): σ = √((1/N) Σ (xᵢ − x̄)²). (2) The smaller the standard deviation is, the less noise the sensor has. Therefore, the standard deviation σ can be used to represent the noise level of the acceleration sensor [30]. Resolution is the smallest change that the sensor can detect; the sensor will only sense the input if it is higher than the resolution.
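The noise computation of Equation (2), and the conversion of that noise into a resolution in g, can be sketched as follows. The zero-g samples here are synthetic stand-ins for measured data, and expressing the resolution as σ divided by the sensitivity is an assumption consistent with the definitions above:

```python
import math, random

# Equation (2): the noise level is the standard deviation of the output
# collected at 0 g input. Dividing sigma by the sensitivity converts
# the voltage noise into the smallest resolvable acceleration.
def noise_std(samples):
    """Population standard deviation of a list of samples."""
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

random.seed(1)
# Synthetic 0 g record: a 1650 mV zero-g offset with 0.4 mV of noise.
zero_g_mv = [1650.0 + random.gauss(0.0, 0.4) for _ in range(2000)]

sigma_mv = noise_std(zero_g_mv)     # noise level [mV]
resolution_g = sigma_mv / 1371.1    # sigma / sensitivity [g], sensitivity in mV/g
```

On these synthetic numbers the resolvable acceleration is on the order of a few times 10⁻⁴ g, which is the kind of figure the comparison below is after.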
It is generally considered that the input quantity can be distinguished when it is greater than the system noise. Equation (3) can be used to calculate the resolution of the acceleration sensor: resolution = σ/S, (3) where S is the sensitivity. Frequency Response Test When the input quantity is not fixed but changes periodically, the output of the sensor will also change periodically with the input, and the frequency of both remains the same. However, the output amplitude will vary with the frequency. This variation in the output amplitude with frequency is called the amplitude-frequency characteristic of the signal. The frequency response range is an important performance index of the sensor. When the signal frequency exceeds this range, the sensor output signal will be distorted. A larger frequency response range means that the sensor can be applied to a broader range of fields. Therefore, it was necessary to test the frequency response range of the accelerometer, which could be obtained by testing its amplitude-frequency characteristic curve. Since the pavement vibration under vehicle load is mainly concentrated in the low-frequency range, the vibration amplitude is small. Therefore, the practical steps were set as follows: 1. The measured sensor and the standard sensor were bolted on the shaker. 2. The shaker was controlled to output a sine wave with a constant acceleration amplitude, and real-time monitoring data were collected for frequency response range analysis. Temperature Drift Test The change in sensor output due to temperature change is called the temperature drift phenomenon. It is important to carefully consider the effects of temperature on the instrumentation system and to implement appropriate temperature compensation measures in order to ensure the accuracy and reliability of the measurements.
In order to test the variation in acceleration sensor output with temperature, a temperature control experiment box (DM1000-C15-ESS) was used for testing, and the experimental setup is shown in Figure 7. Specific practical steps were as follows. 1. The acceleration sensor was placed horizontally in the temperature control box, fixed with the straps, and connected with the external computer. The sensor was positioned in the center of the box to facilitate heat exchange and ensure adequately accurate temperature control. The temperature in the box was obtained through the temperature sensor on the acceleration sensor (TMP101NA/3K). 2. After adjusting the temperature inside the temperature control box to −20 °C and keeping the temperature constant, the temperature was set to 0 °C, 20 °C, 40 °C, and 60 °C in sequence. The acceleration sensor's output voltage and temperature data were collected in real time. 3. The temperature control box was closed to allow it to cool naturally to room temperature. The output signal of the acceleration sensor was collected continuously. Sensitivity Figure 8 shows the output of the three accelerometer chips in the time domain and frequency domain for shaker output acceleration values of 0.1 g, 0.5 g, and 1 g.
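The time-to-frequency transformation used for this analysis can be sketched with a direct DFT, a naive stand-in for the fast Fourier transform; the signal below is a synthetic 10 Hz sine sampled at the 500 Hz rate used in the tests:

```python
import math, cmath

# Sketch of the time-to-frequency transformation: a direct single-bin DFT
# of a sampled sine (10 Hz drive, 500 Hz sampling) whose peak amplitude
# is read off at the drive frequency.
FS = 500.0    # sampling frequency [Hz]
N = 500       # one second of data, so bin k corresponds to k Hz

signal = [0.5 * math.sin(2 * math.pi * 10.0 * n / FS) for n in range(N)]

def dft_amplitude(x, k):
    """Amplitude of the DFT bin k (frequency k * FS / N)."""
    s = sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x)) for n in range(len(x)))
    return 2.0 * abs(s) / len(x)

amp_10hz = dft_amplitude(signal, k=10)   # recovers the 0.5 amplitude at 10 Hz
```

Reading the amplitude at the shaker's drive frequency in this way is what separates the vibration peak from broadband noise such as the 150-200 Hz components noted below.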
Therefore, the sensitivity of VS1002 and ADXL355 was analyzed using the dynam calibration test, while the sensitivity of MS9001 was calibrated using a static calibratio test. The maximum amplitude of VS1002 and ADXL355 at 10 Hz was used as the outp value under the corresponding acceleration input. For example, the output values VS1002 at 0.1 g, 0.5 g, and 1.0 g were 130.6 mV, 677.2 mV, and 1367.0 mV. MS9001 w statically calibrated by taking the median output voltage of its sensitivity axis upwar corresponding to −1 g input; sensitivity axis horizontal, corresponding to 0 g input; an sensitivity axis downward, corresponding to 1 g input. The resulting relationship betwee the output voltage and the input acceleration of different accelerometer chips is shown Figure 9. In Figure 8a, the waveform characteristics of the ADXL355 output signal are not prominent under small vibration amplitude, which is affected by noise. In contrast, the output signal of MS9001 has obvious waveform characteristics and small noise, which is suitable for the monitoring of small amplitude. Since the voltage signal output from the sensor contains environmental noise, the vibration signal is transformed from the time domain to the frequency domain using the fast Fourier transform method. From the frequency domain signal, it can be seen that there was high-frequency noise of 150-200 Hz in the sensor output signal, among which the noise signal of ADXL355 was the most obvious and the noise signal of MS9001 was the least. Moreover, the acceleration signal had a period of 0.1 s, which was consistent with the shaker vibration frequency of 10 Hz, and the corresponding amplitude was the maximum at 10 Hz. In Figure 8b,c, with the increase in vibration amplitude, the waveform characteristics of the signals output from the three accelerometer chips are apparent in the time domain. 
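The time-to-frequency conversion described above can be sketched with a stdlib-only naive DFT standing in for the FFT; the 500 Hz sampling rate and the synthetic 10 Hz-plus-170 Hz test signal are illustrative assumptions, not values from the experiment.

```python
import cmath
import math

def dft_magnitudes(signal, fs):
    """Naive DFT (stand-in for the FFT used above): returns
    (frequency_hz, amplitude) pairs up to the Nyquist frequency."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spectrum.append((k * fs / n, 2 * abs(s) / n))
    return spectrum

# Synthetic shaker signal: 10 Hz sine (period 0.1 s) plus a weak 170 Hz
# component mimicking the 150-200 Hz high-frequency noise band.
fs = 500  # sampling rate in Hz (assumed)
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.05 * math.sin(2 * math.pi * 170 * t / fs)
       for t in range(fs)]

spectrum = dft_magnitudes(sig, fs)
peak_freq, peak_amp = max(spectrum, key=lambda p: p[1])
```

With both components on exact frequency bins, the dominant spectral peak lands at 10 Hz, matching the shaker vibration frequency, while the 170 Hz noise component stays an order of magnitude smaller.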
However, the waveform output from the MS9001 shows a clipping phenomenon, because the measuring range of MS9001 is ±1 g, and the sensitive axis of MS9001 is prone to the over-range phenomenon in the direction of gravitational acceleration. Therefore, the sensitivity of VS1002 and ADXL355 was analyzed using the dynamic calibration test, while the sensitivity of MS9001 was calibrated using a static calibration test. The maximum amplitude of VS1002 and ADXL355 at 10 Hz was used as the output value under the corresponding acceleration input. For example, the output values of VS1002 at 0.1 g, 0.5 g, and 1.0 g were 130.6 mV, 677.2 mV, and 1367.0 mV. MS9001 was statically calibrated by taking the median output voltage of its sensitivity axis upward, corresponding to −1 g input; sensitivity axis horizontal, corresponding to 0 g input; and sensitivity axis downward, corresponding to 1 g input. The resulting relationship between the output voltage and the input acceleration of the different accelerometer chips is shown in Figure 9.

As shown in Figure 9, the linear relationship between the output voltage and input acceleration of the three accelerometer chips (R² > 0.9) is satisfied. The sensitivity of VS1002, MS9001, and ADXL355 is the slope of the fitted straight line. The sensitivity of VS1002 and ADXL355-Z (the Z-axis indicates vertical acceleration) at 10 Hz was about 1371.1 mV/g and 393.1 mV/g, respectively. The difference from the static calibration results (about 1358.9 mV/g and 395.9 mV/g) was less than 1%. The sensitivity of the MS9001 was approximately 1278 mV/g. It can be seen that VS1002 had the highest sensitivity at a 3.3 V supply voltage, and the calibration results of dynamic and static sensitivity were similar.

Noise and Resolution

The output voltage data were collected when the acceleration input was 0 g (i.e., the direction of the sensor's sensitive axis was perpendicular to the acceleration of gravity), as shown in Figure 10. In Figure 10, the VS1002 output signal has the smallest fluctuation range, within 1 mV, followed by the ADXL355 and MS9001. Because the sensitive axis cannot avoid the influence of the earth's gravitational acceleration, the reference value is not 0 mV. The noise was calculated using Equation (2), and the resolution was calculated using Equation (3), as shown in Table 3.

Table 3. Sensitivity, noise, and resolution of different accelerometer chips.

Frequency Response Curve

The frequency response of the three types of accelerometer chips is shown in Figure 11.
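The sensitivity-as-slope calculation can be sketched with a least-squares fit over the VS1002 calibration points quoted above; since Equations (2) and (3) are not reproduced in this text, the resolution line uses the conventional definition (noise divided by sensitivity) as an assumption.

```python
def linear_fit(x, y):
    """Least-squares slope/intercept of y = a*x + b; the slope is the sensitivity."""
    n = len(x)
    xm = sum(x) / n
    ym = sum(y) / n
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    sxx = sum((xi - xm) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, ym - slope * xm

# VS1002 dynamic-calibration points quoted in the text (input g -> output mV)
acc_g = [0.1, 0.5, 1.0]
out_mv = [130.6, 677.2, 1367.0]

sensitivity_mv_per_g, offset_mv = linear_fit(acc_g, out_mv)

# Resolution as noise floor divided by sensitivity (assumed convention,
# consistent with the Table 3 figures: 0.231 mV / ~1371 mV/g ~ 0.169 mg)
noise_mv = 0.231  # VS1002 noise from Table 3
resolution_mg = 1000 * noise_mv / sensitivity_mv_per_g
```

The three quoted points alone give a slope close to the reported 1371.1 mV/g, and the derived resolution is consistent with the 0.169 mg figure in Table 3.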
In Figure 11a, the sensitivity of the three accelerometer chips maintained a smooth trend at vibration frequencies of 2-128 Hz. In Figure 11b, the upper and lower quartiles of the VS1002 sensitivity were 1363 mV/g to 1373 mV/g. In addition, the VS1002 sensitivity at a low frequency of 2 Hz was 1402 mV/g, which was an outlier in its data sample. Compared to the VS1002, the upper quartile (1320 mV/g) and lower quartile (1303 mV/g) of the MS9001 sensitivity differed by a smaller amount, but there were four outlier values in the data sample, indicating that the stability of its sensitivity at low frequencies was not as good as that of the VS1002. The difference between the upper and lower quartiles of the ADXL355-Z sensitivity was only 4 mV/g, and its sensitivity output at low frequencies was the most stable, which is related to its built-in filtering circuit and lower sensitivity. Therefore, under the premise of meeting the sensitivity requirements, the ADXL355-Z has the best stability at low frequencies of 2-128 Hz. This indicates that the VS1002 and MS9001 can be further improved by designing a filtering circuit.

Temperature Drift Evaluation

In the temperature drift test experiment, the temperature control box produces stable motor vibration noise during operation, so the signal of the acceleration sensor fluctuates, as shown in Figure 12. In Figure 12, in the temperature-control-box operating environment, the VS1002 and MS9001 had a fluctuation of about ±250 mV, while the fluctuation range of the ADXL355 was smaller, about ±75 mV. This is due to the lower sensitivity and the built-in filtering circuit of the ADXL355, which can filter out environmental noise. According to its datasheet, the ADXL355 can directly output digital signals without needing an external analog-to-digital converter: a three-axis sensor, a temperature sensor, an ADC, an analog filter, and a digital filter are integrated and packaged in the ADXL355. The analog, low-pass, anti-aliasing filter in the ADXL355 provides a fixed bandwidth of approximately 1.5 kHz, at which the output response is attenuated by approximately 50%. The shape of the filter response in the frequency domain is that of a sinc3 filter. The ADXL355 also provides digital-filtering options to maintain excellent noise performance at various output data rates.

The probability density analysis of the output signal showed that the output voltage values of the three sensor chips followed a Gaussian distribution, and the R² of the Gaussian fitting curve was greater than 0.9.
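The quartile-and-outlier reading of the Figure 11b boxplots can be sketched as follows; the sample values are hypothetical, chosen only to mirror the quoted VS1002 quartiles (1363-1373 mV/g) and the 1402 mV/g outlier, and the 1.5 × IQR rule is the usual boxplot convention, not necessarily the exact one used by the authors.

```python
def quartiles(data):
    """Lower quartile, median, upper quartile by the median-split convention."""
    s = sorted(data)

    def median(v):
        n = len(v)
        m = n // 2
        return v[m] if n % 2 else (v[m - 1] + v[m]) / 2

    n = len(s)
    return median(s[: n // 2]), median(s), median(s[(n + 1) // 2 :])

def iqr_outliers(data):
    """Values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (standard boxplot rule)."""
    q1, _, q3 = quartiles(data)
    spread = 1.5 * (q3 - q1)
    return [x for x in data if x < q1 - spread or x > q3 + spread]

# Hypothetical per-frequency VS1002 sensitivity samples (mV/g) over 2-128 Hz
samples = [1363, 1365, 1366, 1368, 1370, 1371, 1372, 1373, 1402]
outs = iqr_outliers(samples)
```

With these illustrative samples the quartiles bracket the quoted 1363-1373 mV/g band, and only the 1402 mV/g low-frequency value is flagged as an outlier.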
The output voltage can therefore be considered as the sum of a constant value (v_c) and a noise value (v_n), as shown in Equation (4):

v = v_c + v_n (4)

The noise value can be approximately fitted using a Gaussian distribution. Based on the minimum root mean square (RMS) criterion, the constant value v_c can be calculated using regression analysis. For the acceleration output signal within 10 s, the v_c value was taken as the actual output of the acceleration sensor within that unit time (10 s). The output value of the acceleration sensor was continuously recorded during the temperature change. Figure 13 shows the temperature drift of the different accelerometer chips. Figure 13a presents the variation of the output v_c of the three accelerometer chips with temperature, i.e., the temperature drift process. Figure 13b presents the effect of temperature on the reference value of the accelerometer chip output during the temperature stabilization phase.

As shown in Figure 13, the output reference value of the ADXL355 was not affected by temperature, while the output reference values of the VS1002 and MS9001 varied linearly with temperature. Moreover, the reference value is positively correlated with temperature.
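The v = v_c + v_n decomposition and the temperature-drift-rate estimate can be sketched as below; the minimum-RMS constant v_c reduces to the window mean, and the sensor baseline, Gaussian noise level, and 0.25 mV/°C drift rate are hypothetical values for illustration only.

```python
import random

def v_c(window):
    """The constant component of v = v_c + v_n minimizing the RMS of the
    residual is the window mean."""
    return sum(window) / len(window)

def drift_rate_mv_per_degc(temps_c, refs_mv):
    """Least-squares slope of the reference voltage against temperature."""
    n = len(temps_c)
    tm = sum(temps_c) / n
    rm = sum(refs_mv) / n
    num = sum((t - tm) * (r - rm) for t, r in zip(temps_c, refs_mv))
    den = sum((t - tm) ** 2 for t in temps_c)
    return num / den

random.seed(0)
temps = [-20, 0, 20, 40, 60]  # stabilization temperatures from the protocol
# Hypothetical sensor: 1650 mV baseline drifting at 0.25 mV/degC under
# zero-mean Gaussian noise; each window stands for one 10-s recording.
windows = [[1650.0 + 0.25 * t + random.gauss(0, 0.5) for _ in range(1000)]
           for t in temps]
refs = [v_c(w) for w in windows]
rate = drift_rate_mv_per_degc(temps, refs)
```

Averaging each window suppresses the Gaussian noise, so the fitted slope recovers the assumed 0.25 mV/°C drift, the same kind of figure as the 0.255 and 0.102 mV/°C rates reported for the MS9001 and VS1002.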
The MS9001 was more affected by temperature than the VS1002. The reference value of the output voltage of the MS9001 varied with temperature at a rate of 0.255 mV/°C, while that of the VS1002 varied at a rate of 0.102 mV/°C. The output reference value of the sensor changed significantly during the warming or cooling phase, and the trend was similar to the temperature change curve.

Conclusions

In this paper, performance testing and comparative evaluation of MEMS high-precision acceleration sensors were carried out, contributing to the development and application of acceleration sensors in intelligent road engineering. The main conclusions are as follows: (1) PCBs with three accelerometer chips, VS1002, MS9001, and ADXL355, were developed, and circuit design and software development were completed to realize the real-time output and visualization of vibration data. Tests of sensor sensitivity, linearity, noise, resolution, frequency response range, and temperature drift were designed. (2) The range of the MS9001 is ±1 g. Due to the acceleration of gravity, the over-range phenomenon occurs after loading 0.3 g of acceleration. Thus, it cannot be applied to pavement vibration monitoring on the wheel path; however, it can be placed on the roadside to monitor the pavement's small-amplitude fluctuations. (3) The VS1002 has the best performance indexes. Under a supply voltage of 3.3 V, the sensitivity of the VS1002 reaches about 1371.1 mV/g, the noise 0.231 mV, and the resolution 0.169 mg, so it can be used as the first choice for high-precision monitoring of pavement vibration. (4) The ADXL355 has low noise and low sensitivity; its resolution is 0.655 mg. In the temperature range from −20 °C to 60 °C, the ADXL355 signal output is virtually temperature-independent. The chip is low-cost and suitable for pavement vibration monitoring at higher-density deployments.
The performance testing of MEMS accelerometers will support the open-source development of modules and add-ons for smart roads and promote the application of MEMS sensor technology in the transportation industry. Furthermore, filter circuit design and package optimization will be conducted based on the preferred MEMS accelerometer chip. The variation of vibration time-frequency-domain characteristic parameters in extreme service environments should also be investigated. Moreover, sensors with high accuracy and long life will be developed for pavement vibration monitoring.
Puerarin Facilitates T-Tubule Development of Murine Embryonic Stem Cell-Derived Cardiomyocytes Aims: Embryonic stem cell-derived cardiomyocytes (ES-CMs) are one of the promising cell sources for repopulation of damaged myocardium. However, ES-CMs present an immature structure, which impairs their integration with host tissue and functional regeneration. This study used murine ES-CMs as an in vitro model of cardiomyogenesis to elucidate the effect of puerarin, the main compound found in the traditional Chinese medicinal herb Radix puerariae, on t-tubule development of murine ES-CMs. Methods: Electron microscopy was employed to examine the ultrastructure. The investigation of transverse tubules (t-tubules) was performed by Di-8-ANEPPS staining. Quantitative real-time PCR was utilized to study the transcript levels of genes related to t-tubule development. Results: We found that long-term application of puerarin throughout cardiac differentiation improved myofibril arrangement and sarcomere formation, and significantly facilitated t-tubule development of ES-CMs. The transcript levels of caveolin-3, amphiphysin-2 and junctophilin-2, which are crucial for the formation and development of t-tubules, were significantly upregulated by puerarin treatment. Furthermore, puerarin repressed the expression of miR-22, which targets caveolin-3. Conclusion: Our data showed that puerarin facilitates t-tubule development of murine ES-CMs. This might be related to the repression of miR-22 by puerarin and the upregulation of Cav3, Bin1 and JP2 transcripts.

Introduction

Cell replacement is a novel therapeutic strategy for the rational treatment of myocardial infarction. This strategy involves transplanting suitable cells into the infarcted area of the heart, where they can functionally integrate into the host tissue and thus rescue cardiac function [1].
Embryonic stem (ES) cells generated from the blastocyst inner cell mass can differentiate into beating cardiomyocytes (CMs) in vitro, and they have been widely accepted as a promising source for cell replacement therapy [1][2][3][4]. The typical structure of adult CMs is characterized by bundles of parallel myofilaments and sarcomeres with Z-discs, A-, I-, and H-bands as well as M-lines, which are the structural basis of cardiac muscle contraction. Moreover, the transverse tubules (t-tubules), which form deep invaginations of the plasma membrane of CMs and are rich in ion channels, are crucial for excitation-contraction coupling (ECC) of CMs [5]. Therefore, mature sarcomere and t-tubule structures in ES cell-derived CMs (ES-CMs) are a prerequisite for their function and integration in the host tissue after transplantation [6]. However, accumulated transplantation studies have shown that only partial or transient restoration of electrophysiological and contractile function is observed after transplantation of ES-CMs or induced pluripotent stem cell-derived CMs [7,8]. One potential reason might be related to the underdeveloped structure of ES-CMs and their immature contractile properties. It has been reported that, compared with adult CMs, both human and murine ES-CMs (hES-CMs and mES-CMs) exhibit immature structural features, such as irregularly organized myofibrils and early-stage sarcomeres with nascent Z-discs and I-bands but lacking M-lines [9][10][11]. In addition, the t-tubule formation-related genes caveolin-3 (Cav3) and amphiphysin-2 (Bin1) are absent in both h- and mES-CMs [12], resulting in the absence of organized t-tubules, unsynchronized Ca2+ transients [10,12] and impaired contractile properties. Obviously, the immature structure of ES-CMs leads to poor contractile function and thus hampers their integration with the host cells.
Recently, transgenic techniques [13], electrical stimulation [14] and cytokine application [7] have been used to promote in vitro maturation of ES-CMs, but most of these studies have focused mainly on contractile and electrophysiological properties. It is still unknown whether the methodological modifications mentioned above also have a parallel effect on the structural development of ES-CMs. Additionally, the safety and convenience of these methods need to be fully considered. Safer and more economical approaches to drive structural maturation of ES-CMs in vitro are still needed. Puerarin (7,4′-dihydroxy-8-β-D-glucosylisoflavone, C21H20O9) is a traditional Chinese medicine extracted from the herb Radix puerariae and is widely used for the treatment of cardiovascular diseases such as myocardial infarction [15], arrhythmia [16] and ischemia [17] in China. Our previous study found that puerarin enhances the cardiac differentiation and ventricular specialization of mES cells [4]. In this study, we tested the hypothesis that the structural organization of mES-CMs would be improved by long-term application of puerarin throughout cardiac differentiation in vitro. Our findings suggest that puerarin facilitates t-tubule development of mES-CMs.

Culture of mES-CMs

The mES cell line D3 (ATCC, USA) and its transgenic cell line αPIG (clone 44) were cultured and differentiated into spontaneously beating CMs as previously described [3]. Briefly, embryoid bodies (EBs) were generated using the hanging drop method in Iscove's Modified Dulbecco's Medium supplemented with 20 % fetal bovine serum, 1 % non-essential amino acids, 2 mM L-glutamine, 100 units/mL penicillin-streptomycin, and 0.1 mM β-mercaptoethanol. For cardiac differentiation of transgenic mES cells, puromycin (10 µg/mL) was added to the medium for purification of CMs at day 9 and day 12 of differentiation.
Puerarin (100 μM, National Institutes for Food and Drug Control, China), dissolved in 0.05 % dimethyl sulfoxide (DMSO, Sigma, USA), was applied from day 0 to day 20 as previously described [4]. EBs cultured with differentiation medium containing 0.05 % DMSO served as controls. Medium was changed every two days. All the following experiments were performed using wild-type mES cells (D3) if not otherwise indicated. All cultivation media and reagents were purchased from Gibco if not otherwise indicated.

Isolation of mES-CMs and embryonic ventricular CMs

Spontaneously beating areas were mechanically microdissected at day 16 and 20 of differentiation and dissociated into single cells by incubation in collagenase II (1 mg/ml, Roche, Germany) at 37 °C for 30 min. Cells were then plated onto gelatin-coated glass coverslips and cultured with culture medium without puerarin and DMSO for at least 24 h before further experiments. For preparation of embryonic ventricular CMs, pregnant mice (Kunming mice provided by the Center of Animal Experimentation of Tongji Medical College, Huazhong University of Science and Technology, China) were sacrificed at 16.5 days post coitum (E16.5). The ventricles of the embryonic hearts were isolated, dissected, and enzymatically dissociated into single cells as previously described [18].

Transmission electron microscopy (TEM)

Clusters of mES-CMs purified by puromycin were collected at day 16 and 20 of transgenic mES differentiation and immediately fixed with 2.5 % glutaraldehyde in 0.1 M PBS. They were then incubated in buffered 1 % osmium tetroxide for 2 h, dehydrated in a graded ethanol series and embedded in Epon. Ultrathin sections were cut and stained with uranyl acetate and lead citrate. The sections were observed under a transmission electron microscope (FEI, Hillsboro, USA).
T-tubule fluorescent staining and scoring

Fluorescent t-tubule staining was employed to further study the morphology of t-tubules according to previous reports [12,13]. Single beating CMs with or without puerarin treatment at day 16 and 20 of differentiation were acquired from 7 and 8 independent differentiations, respectively. Cells were incubated with the lipophilic fluorescent indicator di-8-aminonaphthylethenylpyridinium (Di-8-ANEPPS, 5 μM; Invitrogen, USA) for 10 min at 37 °C and then washed with PBS for 15 min. Images were then randomly taken at relative midplanes of cell height by screening through the Z axis of a fluorescence microscope (Olympus, Japan) using 488 nm excitation light with detection at > 505 nm. To describe the effect of puerarin on t-tubule morphology, a score was defined. Murine embryonic ventricular CMs at E16.5 served as positive cells, which presented an organized array of bright red spots in the cellular periphery and midplane. As shown in Table 1, CMs presenting a similar pattern of bright red spots as positive cells were scored as "CMs with developing t-tubule", otherwise scored as "CMs without t-tubule". T-tubule scoring of a total of 1782 mES-CMs with or without puerarin treatment, acquired from 7-8 independent differentiations, was performed independently by two operators, and the percentage of "CMs with developing t-tubule" was calculated.

Cellular Physiology and Biochemistry

Quantitative real-time PCR analysis

Total mRNA was extracted from EBs using TRIzol Reagent (Invitrogen, USA) according to the manufacturer's instructions, and cDNA was synthesized. Real-time PCR was performed in 96-well plates in triplicate using an MxPRO3000 detector (Stratagene, USA) with SYBR Green Real-time PCR Master Mix Plus (Toyobo, Japan) for relative quantification of genes. The transcript level of GAPDH was used for internal normalization. The primers are listed in Table 2.
Bulge-Loop™ miRNA primers for U6 and miR-22 (RIBOBIO, China) were used according to the manufacturer's protocols. U6 was used as a normalization control. The relative quantification of PCR products was performed according to the 2^-ΔΔCt method and normalized to the control.

Statistics

Results are expressed as mean ± SEM. Statistical significance of differences was analyzed using the unpaired t test if not otherwise indicated. Statistical significance was accepted when p < 0.05.

Puerarin improves the myofibrillar alignment and sarcomere development of mES-CMs

We first observed a total of 72 cells by TEM to investigate whether puerarin influences the ultrastructural features of mES-CMs. At day 16 of differentiation, myofilaments in the majority of puerarin-treated mES-CMs were well organized. The sarcomeric pattern showed relatively mature Z-discs and clear I-bands (Fig. 1B). However, the majority of untreated cells displayed an early-stage ultrastructural phenotype [9][10][11], characterized by misaligned myofibrils at low density and dispersed nascent Z-discs (Fig. 1A). Moreover, the percentage of cells with Z-discs was almost doubled in the puerarin treatment group compared with the control group: 37.5 % of control cells (9 out of 24 cells) presented misaligned myofibrils at low density and dispersed nascent Z-discs, and 62.5 % of control cells (15 out of 24 cells) had only a low density of misaligned myofibrils without Z-discs, whereas 66.7 % of puerarin-treated cells (10 out of 15 cells) presented well-organized myofilaments and relatively mature Z-discs (Fig. 1E). In addition, the percentage of cells with sarcomeres was also higher in the puerarin treatment group compared to the control group. Sarcomeres were present in 8.3 % of control cells (2 out of 24 cells), but in 53.3 % of puerarin-treated cells (8 out of 15 cells) (Fig. 1F). At day 20, a further developed structure of mES-CMs was observed.
More cells presented sarcomeres, and similar sarcomeres with better-organized myofibrils, Z-discs, and I-bands were present in both groups of cells (Fig. 1C and D). H-bands were faint; they were observed frequently in puerarin-treated cells but only occasionally in untreated cells. In the control group, 63.2 % of cells (12 out of 19 cells) presented better-organized myofibrils and mature Z-discs (Fig. 1E), and 31.5 % of cells (6 out of 19 cells) presented better-organized sarcomeres (Fig. 1F). In the puerarin treatment group, 71.4 % of cells (10 out of 14 cells) presented better-organized myofibrils and mature Z-discs (Fig. 1E), and 57.1 % of cells (8 out of 14 cells) presented better-organized sarcomeres (Fig. 1F). We also examined the intercellular junction development of mES-CMs in the two groups. As shown in Fig. 2, similar intercellular junctions were observed in both groups. Differentiated CMs of both groups were mainly connected by desmosomes at day 16 (Fig. 2A and B). At day 20, nascent intercalated discs had formed. Di-8-ANEPPS staining showed bright dots in both the midplane and periphery, indicating the presence of developing t-tubules. In the control group, 25.28 ± 1.27 % (n = 7 independent differentiations) of mES-CMs at day 16 and 29.57 ± 1.54 % (n = 8 independent differentiations) at day 20 were scored as "CMs with developing t-tubule". In the puerarin treatment group, we found a significantly higher percentage of mES-CMs scored as "CMs with developing t-tubule" at both day 16 (39.56 ± 1.02 %, n = 7 independent differentiations, p < 0.01 vs. control) and day 20 (39.93 ± 2.35 %, n = 8 independent differentiations, p < 0.01 vs. control) (Fig. 3C). These data imply that puerarin improves the development of t-tubules of mES-CMs.
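The 2^-ΔΔCt relative quantification described in the real-time PCR methods can be sketched as follows; the Ct values here are hypothetical, not taken from the study.

```python
def fold_change_ddct(ct_target_t, ct_ref_t, ct_target_c, ct_ref_c):
    """Relative expression by the 2^-ddCt method, normalized to control.

    ct_target_t / ct_ref_t: target-gene and reference-gene Ct, treated sample
    ct_target_c / ct_ref_c: the same Ct values for the control sample
    """
    ddct = (ct_target_t - ct_ref_t) - (ct_target_c - ct_ref_c)
    return 2 ** -ddct

# Hypothetical Ct values: a Cav3-like target vs a GAPDH-like reference,
# puerarin-treated vs control; a lower target Ct means more transcript.
fold = fold_change_ddct(24.0, 18.0, 25.5, 18.0)
```

A 1.5-cycle drop in the normalized Ct of the treated sample corresponds to a 2^1.5 ≈ 2.8-fold upregulation, the same order as the 2-3-fold transcript changes reported for Cav3, Bin1 and JP2.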
Puerarin upregulates transcript levels of Cav3, Bin1 and junctophilin-2 (JP2)

To find out how puerarin affects t-tubule development, we next focused on the expression of the t-tubule formation-related genes Cav3 and Bin1 [19] and the t-tubule development-related gene JP2 [20]. The transcript levels of Cav3 (p < 0.05, n = 5) and JP2 (p < 0.01, n = 5, Fig. 4) were significantly increased in the puerarin-treated cells at day 16, and were upregulated about 2-3 fold at day 20. The Bin1 transcript was increased at all three observed time points, with about 2.5-fold upregulation at day 10 (p < 0.01, n = 5, Fig. 4). These data suggest that puerarin might facilitate the development of t-tubules of mES-CMs via upregulation of Cav3, Bin1 and JP2 transcripts.

Puerarin represses expression of miR-22 at day 20

Many studies have shown that posttranscriptional regulation by miRs also quantitatively affects the maturation of ES-CMs in vitro and in vivo. The Cav3 gene has been proven to be one of the target genes of miR-22 [21]. We hypothesized that miR-22 might be involved in the puerarin-induced upregulation of Cav3 in mES-CMs. As shown in Fig. 5, the expression pattern of miR-22 showed a development-dependent downregulation from day 16 to day 20 in both groups (n = 3, p < 0.01 vs. day 16, Fig. 5). Moreover, the expression of miR-22 was significantly repressed in the puerarin treatment group at day 20 (n = 3, p < 0.05 vs. control, Fig. 5).

Discussion

T-tubules of CMs are invaginations of the surface membrane occurring at the Z-discs, where many proteins involved in ECC, such as the L-type Ca2+ channel and the sodium/calcium exchanger, are localized [22]. T-tubules ensure the rapid spread of the electrical signal (action potential) to the central region of the cell, triggering Ca2+ release from the sarcoplasmic reticulum (SR) and ultimately inducing myofilament contraction [23].
Studies from various groups have shown that the t-tubules present in adult ventricular CMs are absent in hES-CMs [12,24]. hES-CMs lacking t-tubules show poor Ca2+ handling properties, as reflected by smaller peak amplitude, slower rise and decay kinetics, and nonuniform calcium dynamics across the cells [6,25]. In addition, the lack of t-tubules accounts for the spatial separation of the L-type calcium channels located at t-tubules and the RyRs located at the SR, and results in a delay of the calcium peak between the cell periphery and center [12,26]. By using TEM, Baharvand H [10] observed the presence of t-tubules at a late stage of mES-CM differentiation. Consistent with this report, we also detected the presence of t-tubules in mES-CMs at day 20 of differentiation. Furthermore, using Di-8-ANEPPS live-staining, we confirmed that developing t-tubules were present in mES-CMs at both day 16 and day 20 of differentiation. Puerarin treatment significantly facilitated the development of t-tubules, which would benefit subsequent contraction. To further understand why puerarin facilitates t-tubule development, we measured the transcript levels of Cav3, Bin1 and JP2, which play crucial roles in t-tubule formation and maturation. Cav3 is a key integral membrane protein with a hairpin structure involved in the biogenesis of t-tubules [27]. It plays an important role in cardiac calcium regulation [28]. Loss of Cav3 induces t-tubule abnormality in mice [29]. Bin1, localized in the t-tubules, is another membrane-associated protein, which is thought to be an initiator of t-tubule formation [19]. Both Cav3 and Bin1 play key roles in the biogenesis of t-tubules, and they are abundantly expressed in adult CMs [19] but absent in mES-CMs and hES-CMs [12]. Emerging evidence indicates the important role of JP2, a protein anchoring the SR to t-tubules, in t-tubule development of CMs [20].
In the present study, puerarin upregulated Cav3, Bin1 and JP2 transcripts during mES-CM differentiation, suggesting their contributions to the improvement of t-tubule development. Recently, accumulating data have shown that posttranscriptional control by miRs also quantitatively affects the development of ES-CMs in vitro and in vivo. MiR-22 is a muscle-enriched miR. Increasing evidence suggests that miR-22 integrates Ca2+ homeostasis and myofibrillar protein. Overexpression of miR-22 in mice has been shown to cause contractile dysfunction, with reduced SR Ca2+ content and Ca2+ transient amplitude, as well as to induce CM hypertrophy [30]. Furthermore, the Cav3 gene is one of the target genes of miR-22 [21]. Overexpression of miR-22 leads to a reduction of the Cav3 gene expression level [21]. In the present study, we found that miR-22 dramatically decreased with the development of mES-CMs. Consistent with the previous reports, we observed that puerarin repressed the expression of miR-22 and thereby upregulated the transcript level of Cav3, demonstrating the role of miR-22 and Cav3 in puerarin-induced development of t-tubules. It has been reported that during in vivo cardiogenesis, myofilaments are initially distributed in sparse, irregular myofibrillar arrays, then gradually mature into parallel arrays of myofibrils and eventually align into densely packed sarcomeres including Z-discs, I-, A-, H-bands and M-lines [31]. Recently, Lundy et al. reported that hES-CMs cultured for 3-4 months show greater myofibril density and alignment and better-organized sarcomeres compared with early-stage CMs [11]. hES-CMs treated by combining three-dimensional cell cultivation with electrical stimulation exhibit more mature sarcomeric organization with H-zones and I-bands [14]. Here, we found that mES-CMs presented immature ultrastructure, which is in agreement with previous reports, and that puerarin-treated mES-CMs presented a relatively more mature structure compared with control cells.
Of note, although signs of t-tubule formation induced by puerarin treatment were observed in this study, it is clear that further efforts are needed to drive full maturation. Further studies are also needed to determine whether puerarin could affect contractile function. In addition, the precise mechanisms underlying puerarin-induced ultrastructure and t-tubule development of mES-CMs remain unclear. In conclusion, our results suggest that long-term puerarin treatment facilitates the structural maturation and t-tubule development of mES-CMs. These findings provide new insight into the biological effects of puerarin on the maturation of ES-CMs in vitro and suggest the potential use of traditional Chinese medicine in driving the structural maturation of functional CMs in vitro as well as in stem cell research.
Application of Broccoli Residues to Soil can Suppress Verticillium Wilt of Cotton by Regulating the Bacterial Community Structure of the Rhizosphere [Aims] Verticillium wilt (VW) of cotton was effectively controlled by the application of broccoli residues (BR) to soil. Information regarding the variation in bacterial communities in the rhizosphere of cotton cultivars with different VW resistance levels under BR treatment is still lacking; this study aims to fill that gap and to provide guidance for screening effective biocontrol bacteria. [Methods] Real-time fluorescence quantitative PCR was used to determine the population of Verticillium dahliae, and the effects of BR on the bacterial community structure in the rhizosphere were determined by high-throughput sequencing technology. [Results] Results showed that the control effects on VW for the susceptible cultivar (cv. EJ-1) and resistant cultivar (cv. J863) after BR treatment were 51.76% and 86.15%, and the population of V. dahliae decreased by 18.88% and 30.27%, respectively. High-throughput sequencing showed that the ACE and Chao1 indices were increased by the application of BR. Actinobacteria, Proteobacteria, Bacteroidetes, Gemmatimonadetes, Acidobacteria, and Firmicutes were the most dominant phyla, and the relative abundances of these bacterial taxa significantly differed between cultivars. Additionally, Bacillus stably increased in the rhizosphere following BR treatment. Redundancy analysis (RDA) showed that the relative abundances of Bacillus, Lysobacter, Streptomyces, Rubrobacter, Gemmatimonas, Bryobacter and Nocardioides were correlated with the occurrence of VW. Field experiments demonstrated that dressing cotton seeds with Bacillus subtilis NCD-2 could successfully reduce the occurrence of VW, and the control effects for EJ-1 and J863 were 35.26% and 31.02%, respectively. [Conclusions] The application of BR changed the bacterial community structure in the cotton rhizosphere, decreased the population of V. dahliae in soil, and increased the abundance of beneficial microorganisms, thus significantly suppressing VW.
Introduction Cotton (Gossypium hirsutum L.) is the most important source of natural textile fibers worldwide and a significant oilseed crop (Zhang et al., 2016). Verticillium wilt (VW), caused by Verticillium dahliae, is a typical soil-borne disease and results in extensive economic losses. In China, losses of approximately 250-310 million US dollars have been reported for cotton annually due to V. dahliae (Li et al., 2015; Rehman et al., 2018). VW is particularly difficult to control due to the long-living dormant microsclerotia produced by the pathogen, which remain viable in the soil for more than two decades (Fradin and Thomma, 2006; Alstrom, 2001), as well as the inability of fungicides to contact the hyphae of V. dahliae after they spread inside the xylem (Klosterman et al., 2009). It is imperative to develop novel strategies to control this devastating disease. Previous studies have shown that soil-borne disease management has relied principally upon fumigation (Atallah et al., 2012; Johnson and Dung, 2010; Taylor et al., 2005). However, the application of chemical fumigants to the soil may be environmentally unfriendly (Uppal et al., 2008). Therefore, there is growing interest in the search for alternatives to fumigants for disease control. Many reports have demonstrated that the use of organic soil amendments may be a potential strategy for the control of insect pests (Akao et al., 2017). The changes in soil microbial community structure caused by organic soil amendments provide useful information on soil health and quality (Poulsen et al., 2013). In particular, the responses of soil bacterial communities to organic soil amendments are particularly important and are believed to be one of the main drivers of disease suppression (Garbeva et al., 2004). The disease-suppressive effects of certain crop residues are well documented, such as those of broccoli, buckwheat, canola, mustard, and sweet corn (Tubeileh and ...).
Information regarding the variation in bacterial communities in rhizosphere soil among cotton cultivars that vary in resistance to VW following the application of broccoli residues is still lacking. It is unclear whether and, if so, how cultivar resistance against V. dahliae is related to rhizosphere bacteria. The overall objectives of this study were therefore (i) to determine the effect of BR on the incidence of VW among different cotton cultivars, (ii) to study the differences in bacterial composition, diversity, and community structure following the application of BR, (iii) to analyze the relationship between disease incidence and the bacterial community, and (iv) to assess the effect of the addition of exogenous Bacillus subtilis on VW in the field. Field experiment site The experimental sites were located in Quzhou County, Hebei Province. Field trials were conducted at two sites, Field A and Field B, from 2017 to 2019. The experimental sites have a long history of cotton cultivation and occurrence of VW. Plots with flat terrain, relatively uniform fertility and continuous cotton planting for more than ten years were selected as experimental fields. Soil nutrient characteristics are outlined in detail in a previous publication (Zhao et al., 2019b). Detailed information regarding the field experiment setup is described in the following statements. Experimental setup and design Broccoli was planted in August 2017, and the density of the broccoli plants was approximately 41 thousand plants per hectare. After harvesting the edible part of the broccoli, the remaining parts of the plants were chopped in the field with a grinder and mechanically incorporated into the soil with a rotovator in early November 2017 at a depth of 25 to 30 cm. The amount of broccoli residues amended into the soil was approximately 57 thousand kilograms per hectare. Parts of the field were not amended with broccoli residues as a blank control.
The susceptible cultivar Ejing 1 (EJ-1) and the resistant cultivar Ji 863 (J863) were planted in late April 2018. The experimental design included four treatments: 1) susceptible cultivar EJ-1 planted without broccoli residues (EJ-1-CK); 2) susceptible cultivar EJ-1 planted with broccoli residues (EJ-1-BR); 3) resistant cultivar J863 planted with broccoli residues (J863-BR); 4) resistant cultivar J863 planted without broccoli residues (J863-CK). The experiment had a randomized complete block design with three replicates. All plots were covered with plastic film and irrigated as necessary. For Field B, in early August 2018, broccoli planting, preparation of BR, and the experimental setup and design were similar to those in Field A. After that, cotton was planted in late April 2019. Soil sample collection and DNA extraction Soil samples were collected at the flowering and boll-forming stages in 2018 and 2019, respectively. Within each sampling plot, three plants were randomly selected and carefully removed from the soil using a spade. The root systems of the three plants from each plot were first vigorously shaken to remove loosely adhering soil particles, and then the soil remaining on the root systems was combined as a rhizosphere sample. Soil samples were immediately preserved at 4°C for less than 48 hr. To remove plant material, each sample was sieved through a 2.0 mm sieve and stored at -80°C for subsequent DNA extraction. DNA was then extracted from the stored soil samples. Statistical Analyses Statistically significant differences (P < 0.05) in disease incidence, the disease index, DNA copies of V. dahliae, and changes in the bacterial community composition between the control and BR treatments were evaluated with Student's t-test or one-way analysis of variance (ANOVA) using SPSS. Soil bacterial diversity indices were calculated based on resampled OTU abundance matrices in MOTHUR. Principal component analysis (PCA) was performed to explore the differences in soil bacterial community composition.
Redundancy analysis (RDA) was performed to examine the relationship between disease occurrence and bacterial community composition. Analysis of similarities (ANOSIM) was performed to identify significant differences in bacterial community structure among treatments. Data on the differences in bacterial community composition among treatments were obtained, and the relative abundances of major taxonomic groups at the phylum and genus levels were compared. Graphs were generated with Origin 8.0 software. Effects of broccoli residues on VW of different cotton cultivars Broccoli residues had a significant impact on the disease incidence and disease index of cotton VW (P < 0.05). Compared with the blank control (no broccoli residues), the disease incidence of cultivar EJ-1 decreased by 38.76% and 53.50% and the disease index decreased by 46.47% and 57.04% in Field A and Field B, respectively. The disease incidence of cultivar J863 decreased by 100% and 63.42% and the disease index decreased by 100% and 72.30% in Field A and Field B, respectively. The average control effects for EJ-1 and J863 were 51.76% and 86.15%, respectively (Fig. 1). Effect of broccoli residues on DNA copies of V. dahliae in soil When compared with those in the blank control soils, the DNA copies of V. dahliae in the soils associated with the different cotton cultivars were significantly reduced by the BR treatment (Fig. 2). Alpha Diversity Of The Bacterial Community The alpha diversity of the bacterial community was expressed by the ACE and Chao1 indices in our study (Fig. 3). In Field A, the ACE index for EJ-1 ranged from 2503 (CK) to 2667 (BR), and the Chao1 index ranged from 2501 (CK) to 2624 (BR), values greater by 6.55% and 4.92%, respectively. The ACE index for J863 ranged from 2603 (CK) to 2652 (BR), and the Chao1 index ranged from 2585 (CK) to 2617 (BR), values greater by 1.89% and 1.24%, respectively.
In Field B, the ACE index for EJ-1 ranged from 3690 (CK) to 3751 (BR), and the Chao1 index ranged from 3466 (CK) to 3541 (BR), values greater by 1.64% and 2.15%, respectively. The ACE index for J863 ranged from 3949 (CK) to 3972 (BR), and the Chao1 index ranged from 3520 (CK) to 3655 (BR), values greater by 0.59% and 3.85%, respectively. These results indicate that the ACE and Chao1 indices were increased by the application of BR at the different field sites. Bacterial Community Structure Analyses Principal component analysis based on the OTU composition was used to study the effect of broccoli residues on the soil bacterial community structure associated with the different cotton cultivars. Figure 4 shows plots of the sites in the plane of the first two principal coordinates based on the soil bacterial communities in Field A and Field B, respectively. The results show that the bacterial community structures associated with the different cultivars were located in the same quadrant after the application of broccoli residues, while those of the blank controls of the different cultivars were located in different quadrants, which indicates that the bacterial community structure changed and tended to converge after the application of broccoli residues. In addition, the first principal component (PC1) and the second principal component (PC2) of the bacterial community structure at the OTU level in rhizosphere soil were found to explain 34.07% and 16.36% of the variance in Field A, and 24.83% and 21.29% of the variance in Field B, respectively. The cumulative contribution rates of variance of the two principal components reached 50.43% and 46.12%, respectively. In addition, ANOSIM indicated that the BR treatment contributed significantly to the separation from the CK treatment (R = 0.9815, P = 0.001, Field A; R = 0.6481, P = 0.002, Field B). Comparison Of Bacterial Community Composition Among all sequences, unknown sequences were classified as an "other" group.
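The Chao1 richness estimates discussed above have a simple closed form (in the paper they were computed in MOTHUR from resampled OTU abundance matrices). As an illustrative sketch only, with made-up OTU counts, the classic formula S_obs + F1²/(2·F2) — where F1 and F2 are the numbers of singleton and doubleton OTUs — can be written as:

```python
from collections import Counter

def chao1(otu_counts):
    """Classic Chao1 richness estimate: S_obs + F1^2 / (2 * F2),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    counts = [c for c in otu_counts if c > 0]
    s_obs = len(counts)
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        # bias-corrected variant avoids division by zero when no doubletons exist
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

print(chao1([10, 5, 2, 2, 1, 1, 1]))  # 7 observed + 3^2/(2*2) -> 9.25
```

The ACE index used alongside Chao1 follows the same idea but splits OTUs into rare and abundant groups; both reward samples with many rarely observed taxa, which is why residues that enrich the community push the indices up.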
In Field A, the dominant bacterial phyla were Proteobacteria, Actinobacteria, Acidobacteria, Gemmatimonadetes, Chloroflexi, Bacteroidetes, Planctomycetes, Rokubacteria, Nitrospirae, Verrucomicrobia, Latescibacteria, Firmicutes and Patescibacteria, and these phyla accounted for more than 95% of the total sequences in each sample (Fig. S1). The changes in the relative abundances of the dominant bacterial taxa associated with the different cotton varieties after the application of broccoli residues were compared at the phylum level (Fig. 5). Notably, all dominant bacterial phyla associated with J863 increased in abundance following the application of broccoli residues, while for EJ-1, the dominant bacterial phyla were influenced to different degrees by the application of the broccoli residues. Among them, Gemmatimonadetes, Rokubacteria, Nitrospirae, Verrucomicrobia and Firmicutes increased. The most abundant group was Firmicutes, which increased by approximately 2.36- and 1.41-fold for EJ-1 and J863, respectively, when compared with the values for CK. However, Proteobacteria, Chloroflexi, Bacteroidetes, Planctomycetes, Latescibacteria and Patescibacteria decreased following the application of broccoli residues (Fig. 5A). In Field B, the dominant bacterial phyla were Actinobacteria, Proteobacteria, Acidobacteria, Chloroflexi, Gemmatimonadetes, Firmicutes, Bacteroidetes, Planctomycetes, Rokubacteria, Patescibacteria, Entotheonellaeota, Nitrospirae and Verrucomicrobia, and these phyla accounted for more than 95% of the total sequences in each sample (Fig. S2). For the cultivars EJ-1 and J863, the dominant bacterial phyla were influenced to different degrees by treatment with broccoli residues. Actinobacteria, Gemmatimonadetes and Firmicutes were increased by the application of broccoli residues. The most abundant group was also Firmicutes in Field B, and the fold changes were 1.42 and 1.27 for EJ-1 and J863, respectively.
Acidobacteria decreased as a result of the application of broccoli residues. In addition, Proteobacteria, Bacteroidetes, Patescibacteria, Entotheonellaeota, Nitrospirae and Verrucomicrobia decreased for EJ-1, while the opposite tendency was observed for J863 (Fig. 5B). Based on the results for the different cultivars and field sites, the relative abundances of Actinobacteria, Gemmatimonadetes and Firmicutes in the soil increased after the application of broccoli residues. Relationships between the occurrence of VW and bacterial community composition The relationships between the occurrence of VW and bacterial community composition in Field A and Field B were studied with RDA (Fig. 7). For Field A, the RDA performed with the genera and disease incidence data showed that the first two RDA components could explain 52.3% of the total variation (Fig. 7A). As shown by their close grouping and by the vectors, the disease incidence of cultivar J863 was positively related to the abundant genera Gemmatimonas, Pontibacter, RB41, Blastococcus and Massilia after the application of broccoli residues, and it was negatively related to Bacillus, Lysobacter, and Nitrospira. However, the disease incidence for EJ-1 treated with BR was positively related to the abundant genera Streptomyces, Rubrobacter, Bryobacter and Nocardioides, and it was negatively related to Gemmatimonas, Pontibacter, RB41, Blastococcus and Massilia. For Field B, the RDA performed with the genera and disease incidence data showed that the first two RDA components could explain 47% of the total variation (Fig. 7B). The disease incidence in the BR treatment for the cultivars (including J863 and EJ-1) was positively related to the abundant genera Bacillus, Nocardioides, RB41, Rubrobacter, and Arthrobacter and negatively related to Streptomyces, Nitrospira, Sphingomonas, and Lysobacter. Control effect of the exogenous application of Bacillus subtilis NCD-2 on VW The control effect of the application of B.
subtilis on cotton VW was investigated in our study. As indicated in Fig. 8, the control effect of BS on the disease at the boll-forming stage for EJ-1 was 38.55%, while that for J863 was 26.73%. Additionally, further study at boll opening showed that the control effect for EJ-1 was 31.96%, while that for J863 was 35.31%. The average control effects for EJ-1 and J863 were 35.26% and 31.02%, respectively. Discussion The use of crop residues is an important method associated with the suppression of VW, such as those of broccoli, buckwheat, canola, mustard, and sweet corn (..., 1999). Therefore, the application of broccoli residues provides new methods and ideas for the sustainable ecological control of cotton VW. In our previous study, the main potential mechanism by which the incorporation of broccoli residues into the culture substrate reduced the DNA copies of V. dahliae and inhibited the spread of V. dahliae was revealed by real-time PCR and confocal microscopy methods (Wang et al., 2020; Zhao et al., 2019a). However, more in-depth research should be performed to further explore this potential mechanism, especially from the perspective of rhizosphere microbiomics. Microbiome-based research has opened a new frontier that will greatly expand our knowledge of the relationship between plant disease incidence and microbiota and offer new opportunities for developing novel approaches for biocontrol. A previous study found that there was no significant correlation between bacterial community indices and banana Fusarium wilt following the application of bioorganic fertilizer. In the present study, although there were no significant differences in the alpha diversity indices following the application of BR, the values of those indices increased.
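The averaged control effects quoted above follow directly from the two stage-wise values per cultivar; a quick arithmetic check (using decimal half-up rounding, which the reported figures suggest):

```python
from decimal import Decimal, ROUND_HALF_UP

# Stage-wise control effects (%) reported in the text:
# boll-forming stage, then boll opening, per cultivar.
effects = {
    "EJ-1": [Decimal("38.55"), Decimal("31.96")],
    "J863": [Decimal("26.73"), Decimal("35.31")],
}

averages = {
    cv: (sum(vals) / len(vals)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    for cv, vals in effects.items()
}
print(averages)  # {'EJ-1': Decimal('35.26'), 'J863': Decimal('31.02')}
```

Exact decimal arithmetic is used here because binary floats round (38.55 + 31.96) / 2 = 35.255 down to 35.25, whereas half-up rounding reproduces the 35.26% reported in the text.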
Moreover, the treatment with broccoli residues had a significant impact on the soil bacterial community structure, which was consistent with the results of previous studies on the changes in bacterial community structure in the rhizospheric soil of eggplant (Inderbitzin et al., 2018). In addition, the bacterial community structures associated with the cotton cultivars of different resistance changed in comparison to those in CK and were located in the same quadrant after the application of broccoli residues, indicating that the bacterial community structure tended to converge after the application of broccoli residues (Fig. 4). In terms of the bacterial community composition, the analysis at the phylum level revealed that Actinobacteria, Proteobacteria, Bacteroidetes, Gemmatimonadetes, Acidobacteria, and Firmicutes were the most common phyla, but with some changes in relative abundance. This finding roughly corresponded with those of previous articles (Inderbitzin et al., 2018; Shen et al., 2014). Inderbitzin et al. (2018) found that the five dominant phyla of soil bacteria were Proteobacteria, Actinobacteria, Bacteroidetes, Firmicutes, and Acidobacteria. Among them, Actinobacteria and Proteobacteria were more abundant after treatment with broccoli residues than in the control, while the opposite tendency was observed for Bacteroidetes, Firmicutes, and Acidobacteria. Our study found that the relative abundances of Actinobacteria and Proteobacteria also increased for the resistant cultivar J863, which was consistent with previous studies (Inderbitzin et al., 2018). However, there was no consistent conclusion for these two phyla for the susceptible cultivar EJ-1. The reason for this finding is still unclear and may be due to differences in the types or contents of root exudates among cotton cultivars, which can cause different microbial communities to be recruited.
Moreover, the relative abundance of Firmicutes in the soil associated with the different resistant cotton cultivars increased after the application of broccoli residues, which was not consistent with the findings of Inderbitzin et al. (2018). Analysis of the dominant genera also revealed significant differences in the bacterial communities among the different treatments. Among all the cotton cultivars, the abundance of Bacillus was increased by treatment with the broccoli residues. RDA suggested that the incidence of VW might be related to the abundance of these genera. Bacillus subtilis NCD-2 was first isolated from cotton rhizosphere soil in Hebei Province and showed excellent biological control of soil-borne diseases (Li et al., 2005). In the present study, Bacillus significantly increased in abundance after the application of broccoli residues. Therefore, to verify the control effect of B. subtilis NCD-2 against VW, field experiments were executed by seed dressing with B. subtilis. However, the control effect of strain NCD-2 on disease was approximately 35% for the different cotton cultivars. Some researchers have reported that direct applications of potentially beneficial species often result in poor disease suppression due to their low survival and colonization in soil (Saravanan et al., 2003; Lugtenberg and Kamilova, 2009). Therefore, the survival or abundance of the biocontrol inoculant B. subtilis in rhizosphere soil will be studied in future research. In addition, many studies have shown that soil physical and chemical properties such as soil nutrients, pH and organic matter are important factors affecting the structure of the soil microbial community. Conclusions In this study, the incidence of cotton VW and the population of V. dahliae in the rhizosphere of cotton cultivars with different Verticillium wilt resistance levels were decreased by treatment with BR. High-throughput sequencing showed that bacterial diversity was increased by the application of BR.
The relative abundances of Bacillus, Lysobacter, Streptomyces, Rubrobacter, Gemmatimonas, Bryobacter and Nocardioides were correlated with the occurrence of Verticillium wilt. These results provide important information necessary for a better understanding of the bacterial community structure in rhizosphere soil after treatment with BR. Declarations Data availability statement The raw sequence data reported in this paper have been deposited at the National Center for Biotechnology Information (NCBI) under accession numbers PRJNA734729 and PRJNA734770. Author contributions WZ, QG, SL, and PM planned and designed the research and experiments. WZ, PW, LD, XZ, ZS, and XL performed the experiments. WZ and PM analyzed the data and wrote the manuscript. All authors read and approved the final manuscript.
Simple Model of First-Order Phase Transition A one-dimensional model of a system in which a first-order phase transition occurs is examined in the present paper. It is shown that basic properties of the phenomenon, such as a well-defined temperature of transition, are caused both by the existence of a border between the phases and by the fact that only in the vicinity of that border is it possible for molecules to change their phase. Not only is the model introduced and a theoretical analysis of its properties made, but the results of Monte Carlo simulations are also presented, together with the results of a numerical calculation of the distribution of energy levels of the system. Introduction The prime purpose of this paper is to introduce a simple model of a system where a first-order phase transition occurs. One of the possible scenarios is presented here, which may lead to such phenomena as melting-freezing or evaporation-condensation. The task is mainly of didactic character. It does not discover any new types of systems in question but, based on a very simple model, shows what the essential properties of the first-order transition are. The model is meant to provide an introductory exercise for those students who plan to get involved in research on such phenomena; that is why numerous simplifications are applied in the model. Due to its simplicity, our model allows for formal (mathematical) calculation of some thermodynamic quantities (e.g. the temperature of the transition, the latent heat per molecule) as well as for the use of numerical simulation methods. Both approaches were taken in the present work. It is well known that one of the simplest attempts to classify phase transitions takes into account the existence of latent heat. Hence, first-order phase transitions are those that involve latent heat (e.g. melting, evaporation) while second-order phase transitions do not (e.g. the ferromagnetic to paramagnetic transition). There are many models which describe phase transitions of both types.
They include standard models such as the Ising model, the lattice gas model, and the XY model [1], but also some innovative ones [2]. The starting point for most of them is the Hamiltonian of the system, for which the partition function (or another statistical quantity) can be calculated. The partition function is then examined, and discontinuities or singularities, which may be responsible for the presence of the phase transition, are sought [3]. In our model the situation is, to some extent, the opposite. We start constructing the model on an intuitive premise, and only then may the formal analysis reveal that the partition function of the system has a singularity in the thermodynamic limit. What features of the crystal-liquid or liquid-gas transitions, apart from the latent heat, are most characteristic? It can be said that it is a well-defined temperature (for fixed pressure) of the transition. Below this temperature we have only phase I (e.g. crystalline solid) while above it there is only phase II (e.g. liquid). The two phases (generally) differ considerably from each other (symmetry, viscosity, compressibility, etc.), which means that a very small change of the temperature of the system causes a dramatic change in the arrangement of molecules. The main purpose of our model is to explain intuitively how this is possible. A discrete-time random walk process was adopted in our model in order to describe the dynamics. Basic properties of melting are shown to be caused by the fact that there exists a border between the phases. Only near that border may molecules change their attachment to a phase. Far from the border they are blocked by their neighbours. Such a situation results in an exponential distribution of the energy spectrum of the system, this fact being a condition for the relation between energy and entropy to be linear so that the phase transition may take place. The Basic Element of the Model The quantum system with a free Hamiltonian H_0 is the basic element of our model.
The spectrum consists of n + m + 1 (discrete, non-degenerate) levels, which are shown in Fig. 1. Such a system is called a molecule. The lower n + 1 levels, numbered from 0 to n, are called the levels of the first phase, while the upper m levels (from n + 1 to n + m) are the levels of the second phase. The energy of the k-th level of the molecule is given by

E_k = k ∆_1 for 0 ≤ k ≤ n,  E_k = ∆*_12 + (k − n − 1) ∆_2 for n + 1 ≤ k ≤ n + m, (1)

where ∆*_12 = ∆_12 + n ∆_1 is the energy of the lowest level of phase II and ∆_12 is the energy gap between the phases, the existence of which is not necessary but is added to make our considerations more general. What is more, our model remains operational even if we assume ∆_1 = ∆_2 = ∆_12. Neither the explicit form of H_0 nor its "origin" is needed. However, it is better for intuitive understanding to imagine that the states of H_0 are the states of the motion of a molecule rather than the states of its internal electron excitations. In the following part of our work we examine the behaviour of such molecules which, due to interactions with an external environment (thermostat), change their state and jump at random to a neighbouring level (upper or lower). The possibility of jumping has been limited to transitions between neighbouring levels only. Such a limitation is merely a technical simplification and does not affect the appearance of the phase transition. Scheme of the System and Rules of Transition In the present paper we examine a system consisting of N molecules (as introduced in the previous subsection), which are numbered from 1 to N and set on a one-dimensional lattice (Fig. 2). 3. Exceptions: the first molecule (i = 1) is always able to jump from the n-th level to the (n+1)-th level, and the last molecule (i = N) is always able to jump from the (n+1)-th level to the n-th level. 4. Only that distribution of states was always chosen as the initial state of the simulation for which all the molecules in phase II lie to the left of all of the molecules in phase I.
The foregoing rules set up a border between the phases (analogous to a free surface) in the system. Transitions of molecules from the first phase to the second are possible only at the border of the phases. A molecule which gathers enough energy to jump into the other phase cannot do so until its appropriate neighbour has done so first; the molecule is blocked by its neighbour and has to wait its turn. It is shown below that this property of the model guarantees that a first-order phase transition can exist. Limitations of Our Model • our model is one-dimensional, while systems in nature are usually three-dimensional, • molecules in phase II are distinguishable in our lattice, while in gases and liquids they are indistinguishable, • molecules in phase I have mutually independent energy levels in our model, while in crystals their energy levels are collective (phonon excitations), • the distribution of levels in both phases of our model is uniform, with regular gaps between the levels, • in our model the space occupied by each phase is compact (i.e. it is in one piece), while in real systems the phases may be mixed with each other, as in fog, for example, where one phase is a suspension in the other. This last assumption is of a special kind because it is not connected with the character of local interactions between molecules but introduces global limitations on the possible configurations of the system. Although unnatural, the assumption is necessary for mathematical simplification. Our model is thus only a simple demonstration of the process of growth (decline) of a single drop of liquid during condensation (evaporation). Statistical Properties of a Single Molecule Let us consider a single (free 1 ) molecule defined in Fig. 1.
The canonical partition function of such a system is given by:

z = A_n + B_n,m,

where A_n is the contribution of the states belonging to phase I:

A_n = Σ_{k=0}^{n} exp(−β k ∆_1), (2)

while B_n,m is the contribution of the states belonging to phase II:

B_n,m = exp(−β ∆*_12) Σ_{k=0}^{m−1} exp(−β k ∆_2). (3)

At low temperatures (i.e. for A_n > B_n,m) a free molecule is found in the first phase more often than in the second. If m > n + 1 (i.e. the number of levels in phase II is greater than the number of levels in phase I) the opposite situation is possible: at high temperatures (when A_n < B_n,m) the probability that a free molecule is in the second phase is greater than the probability of finding it in the first phase. There is a temperature at which the levels of both phases are occupied by molecules with equal probability:

A_n(β*) = B_n,m(β*). (4)

It will be shown that this is the temperature of the phase transition in our model (Fig. 2). This temperature will be denoted by T* and its inverse by β*. Statistical Properties of the Lattice Let us introduce the Hamiltonian of the lattice (Fig. 2) in explicit form:

H = ⊕_{i=1}^{N} H_0 + H_int, (5)

where ⊕ is the direct sum, H_0 is the free Hamiltonian of a single molecule, and H_int is an interaction Hamiltonian responsible for the restriction rules given in Section 2.2. The action of H_int can be imagined as follows: it increases the energies of the "forbidden" eigenstates of ⊕_{i=1}^{N} H_0 by a certain very large value Ω, which results in a negligible probability of the forbidden eigenstates being occupied. The partition function of the lattice is given by:

z_l = Σ_{k=0}^{N} A_n^{N−k} B_n,m^{k}, (6)

where A_n and B_n,m are given by Eq. (2) and (3). Formula (6) can be rewritten, in terms of a = B_n,m/A_n and the Heaviside function θ, as a sum of two components, so that a limit can be calculated for each of them as N → ∞. It can be seen that z_l diverges as N approaches infinity (due to A_n^N and B_n,m^N). However, as β tends to β* a further divergence appears (divergence of the sum of the geometric series).
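Under the assumptions above (uniform level spacings ∆_1, ∆_2 and a gap ∆_12 between the phases), the single-molecule contributions and the transition temperature defined by A_n(β*) = B_n,m(β*) can be evaluated numerically. The following is a minimal Python sketch; the function names, the bisection approach and the example parameters are ours, not the paper's:

```python
import math

def A(beta, n, d1):
    """Phase-I contribution A_n: levels k = 0..n with spacing d1 (Eq. (2))."""
    return sum(math.exp(-beta * k * d1) for k in range(n + 1))

def B(beta, n, m, d1, d2, d12):
    """Phase-II contribution B_n,m: m levels starting at d12 + n*d1 (Eq. (3))."""
    e0 = d12 + n * d1
    return sum(math.exp(-beta * (e0 + k * d2)) for k in range(m))

def beta_star(n, m, d1, d2, d12, lo=1e-6, hi=50.0):
    """Bisection for the inverse transition temperature where A = B."""
    f = lambda b: A(b, n, d1) - B(b, n, m, d1, d2, d12)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:   # sign change in [lo, mid]: keep that half
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For example, with n = 4, m = 10 and ∆_1 = ∆_2 = ∆_12 = 1 the condition m > n + 1 holds, so A − B changes sign between β → 0 (where A − B = (n+1) − m < 0) and large β (where A → 1, B → 0), and bisection finds a finite β*.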
The average energy of the lattice per molecule is given by:

E = −(1/N) ∂ ln z_l / ∂β. (8)

For N → ∞ we obtain:

E = E_I for T < T*, E = E_II for T > T*, (9)

where E_I and E_II are the average energies of a molecule in the first and the second phase, respectively. For phase I (when the levels of phase II are unavailable) the average energy is given by:

E_I = −∂ ln A_n / ∂β, (10)

while in phase II (when phase I is unavailable) it is given by:

E_II = −∂ ln B_n,m / ∂β. (11)

It can be seen that for N → ∞ an energy jump takes place at the temperature T = T* (see Eq. (9)), which lets us define the latent heat per molecule in the following form:

q = E_II(T*) − E_I(T*). (12)

In the state of equilibrium the probability of finding k molecules in phase II is given by:

p_k = A_n^{N−k} B_n,m^{k} / z_l. (13)

It is worth noticing that:

p_{k+1} / p_k = B_n,m / A_n = a, (14)

which (taking into account that the probabilities are normalised to unity) allows Eq. (13) to be written in the following form:

p_k = a^k (1 − a) / (1 − a^{N+1}), (15)

where a = B_n,m/A_n. With the use of formula (15) the average number of molecules occupying phase II can be calculated in the form:

N_II = Σ_{k=0}^{N} k p_k, (16)

and the average number of molecules occupying phase I is defined as N_I = N − N_II. An approximate formula for the energy of the lattice per molecule as a function of temperature is given by:

E(T) ≈ (N_I E_I + N_II E_II) / N, (17)

where the average phase energies are given by Eq. (10) and Eq. (11). Formula (17) is only an estimate, since it neglects the fact that near the border between the phases molecules have not had enough time to reach the equilibrium state within their current phase (i.e. the molecules have not yet "forgotten" that some time ago they occupied the other phase). Connection with Random Walking It is convenient to use Markov chains in order to understand why the phase transition occurs in our model. The states of the process are numbered from 0 to N and give the number of molecules occupying phase II (in other words, the position of the phase border in our lattice). The process is a random walk (Fig. 3), with the probabilities of jumping right and left given by:

α = a / (1 + a), (18)

β = 1 / (1 + a), (19)

where α is the probability of transition of the next molecule to phase II and β is the probability of transition of the next molecule to phase I.
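The geometric border distribution (15) and the average occupation of phase II (16) can be evaluated directly; a minimal sketch in Python (the helper names are ours):

```python
def border_distribution(a, N):
    """Equilibrium probabilities p_k of finding k molecules in phase II.
    p_k is proportional to a**k with a = B_n,m / A_n (Eq. (15))."""
    w = [a ** k for k in range(N + 1)]
    Z = sum(w)
    return [x / Z for x in w]

def mean_phase_II(a, N):
    """Average number of molecules in phase II: N_II = sum_k k*p_k (Eq. (16))."""
    p = border_distribution(a, N)
    return sum(k * pk for k, pk in enumerate(p))
```

For a < 1 almost all molecules sit in phase I (for large N, N_II stays of order a/(1 − a)); for a = 1 the border position is uniformly distributed; for a > 1 the lattice is almost entirely in phase II. The sharp change at a = 1, i.e. at T = T*, is exactly the behaviour sketched in Fig. 4.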
Probabilities (18) and (19) are chosen so that α + β = 1 and the stationary distribution of the Markov chain is in accordance with the equilibrium distribution (13)-(15). Referring to well-known properties of random walks [4] we obtain the following facts: • for T < T* (i.e. for α < β) the stationary distribution of the process is exponential (left plot in Fig. 4), which means that in the state of equilibrium most of the molecules occupy levels of phase I. Only at the left edge of the lattice are there fluctuations which cause some molecules to occupy levels of phase II. As T approaches T* the fluctuations grow stronger and the stationary distribution tends to become uniform, • for T > T* (i.e. for α > β) the situation is the opposite (right plot in Fig. 4): the majority of the molecules are in phase II, except for fluctuations at the right edge of the lattice which cause some molecules to occupy levels of phase I. In our model a transition of a molecule from phase II to phase I is possible only if it joins another molecule already occupying phase I (except for the first molecule, which may always jump to phase I). This corresponds to real solidification, where molecules most often join an already existing crystal or begin to crystallise on microscopic grains of dust. In our model the right end of the lattice plays the role of such a grain, because molecule i = N can always jump to phase I. Had that rule been eliminated from our model, all molecules would convert to phase II at high temperature, and after a subsequent fall of temperature they would not be able to return to phase I, which would produce a supercooled phase II. The plots in Fig. 6 depict the fraction of molecules in phase II (x = N_II/N) against temperature; the left plot represents example 1 and the right one example 2. The plots in Fig. 7 depict the average energy of the system per molecule (Eq.
8) as a function of temperature (right plot for example 1, left plot for example 2). We can see that with increasing N the curves become steeper and steeper near the temperature of the phase transition. Monte-Carlo Simulation In order to verify our considerations in practice we wrote a computer program (a simulation in the Java language). We examined the lattices described in examples 1 and 2 (last section) with the use of a Monte Carlo algorithm [5]. In our algorithm, each molecule attempts at every step a random jump to a higher or lower level, and the probabilities A (of jumping up to the neighbouring higher level, if the transition is permitted) and B (of jumping down to the neighbouring lower level, if the transition is permitted) depend on the temperature through the energy gap ∆ between the levels. Numerical Computation of Distributions of Levels and of Other Thermodynamic Properties For small lattices (i.e. for small N, m and n) it is possible to calculate the energy of each state of the lattice directly (numerically). This was done with a computer program for the system described in the first example (Section 3.4.1) and for N = 20 molecules. We obtained: -81 energy levels of the system (first level: E = 0, last: E = 80), -10 458 256 051 states of the system in total, -the degeneracies d_n of the energy levels (i.e. the numbers of states of the system with the same energy) -see Appendix A. In Fig. 9 the logarithm of d_n is plotted against the energy E_n. A linear fragment of the curve can be noticed in Fig. 9, which corresponds to an exponential increase of the level degeneracy. If the degeneracy d_n is identified with the density of energy levels ρ(E) and S = k ln ρ(E), then S(E) is linear, which is responsible for the occurrence of the phase transition.
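The original simulation was written in Java and the explicit jump probabilities of the algorithm did not survive into the text above, so the Python sketch below is our reconstruction: it assumes a Metropolis-like pair (an upward jump accepted with probability e^(−β∆), a downward jump always accepted), which satisfies detailed balance for uniformly spaced levels, and it enforces the blocking rules of Section 2.2 by keeping the phase-II molecules a contiguous block on the left:

```python
import math
import random

def step(levels, beta, n, m, delta):
    """One Monte Carlo sweep over the lattice.

    levels[i] lies in 0..n+m; a molecule is in phase II when its level > n.
    Assumed rates (our choice, not the paper's Eqs. (20)-(21)):
    up-jump accepted with probability exp(-beta*delta), down-jump always.
    """
    N = len(levels)
    p_up = math.exp(-beta * delta)
    for i in random.sample(range(N), N):      # random update order
        k = levels[i]
        if random.random() < 0.5:             # attempt an upward jump
            if k >= n + m:
                continue                      # already at the top level
            if k == n and i != 0 and levels[i - 1] <= n:
                continue                      # blocked: left neighbour not in phase II
            if random.random() < p_up:
                levels[i] = k + 1
        else:                                 # attempt a downward jump
            if k <= 0:
                continue                      # already at the bottom level
            if k == n + 1 and i != N - 1 and levels[i + 1] > n:
                continue                      # blocked: right neighbour not in phase I
            levels[i] = k - 1
    return levels
```

Whatever the random sequence, the blocking checks preserve the invariant that the molecules in phase II form a prefix of the lattice, which is precisely the compact-border assumption of Section 2.3.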
It can be argued intuitively in the following way that the density of levels increases exponentially: • let D_i be the number of states of the system which are available when the border between the phases is located near the i-th molecule (i.e. N_II = i); then D_{i+1} = D_i m/(n + 1), which means that D_i increases exponentially with the number of molecules occupying phase II (N_II), • referring to Eq. (17) we can see that during the phase transition (T = T*) the energy of the system is a linear function of N_II, • both of the above facts taken together let us conclude that during the phase transition the density of energy levels should increase exponentially with the energy of the system. When the numbers d_n are known, they can be used to calculate the probability distribution of the levels being occupied at a fixed temperature, with the use of the formula:

p_n(β) = (d_n / Z_tot) exp(−β E_n). (22)

In Fig. 10 the distribution (22) is plotted for several temperatures, including the temperature of the transition. It is worth noticing that the distribution for T = T* becomes flat (as expected), since the random walk is then symmetric and the energy fluctuations are large. To compare the numerical results with the theoretical prediction (according to Section 3), the average energy of the system (per molecule) was calculated as follows:

E(β) = (1/N) Σ_n p_n(β) E_n, (23)

and is shown versus temperature (thick curve in Fig. 11), where the p_n are given by Eq. (22) and the energies are equal to E_n = n∆. The barely visible thin curve in Fig. 11 (which almost merges with the thick one) represents the theoretically computed energy (as in Eq. (8)) and was already presented earlier in the left plot of Fig. 7. We can see that the numerical results and the theoretical expectation of the last section are in accordance.
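The direct enumeration above can be reproduced by polynomial bookkeeping over the border position. The sketch below assumes that example 1 corresponds to n = 1, m = 3 with unit level spacing; this parameter choice is ours, but it reproduces both the 81 levels (E = 0…80) and the quoted total of 10 458 256 051 states:

```python
import math

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_pow(p, k):
    """k-th power of a polynomial (k >= 0)."""
    r = [1]
    for _ in range(k):
        r = poly_mul(r, p)
    return r

def degeneracies(N=20, n=1, m=3):
    """Exact degeneracies d_E of the total energy E = 0..N*(n+m).

    Phase I levels 0..n, phase II levels n+1..n+m (unit spacing).
    The compactness rule fixes the border position j: the first j
    molecules are in phase II, the remaining N-j in phase I.
    """
    pI = [1] * (n + 1)                   # x^0 + ... + x^n
    pII = [0] * (n + 1) + [1] * m        # x^(n+1) + ... + x^(n+m)
    d = [0] * (N * (n + m) + 1)
    for j in range(N + 1):
        prod = poly_mul(poly_pow(pII, j), poly_pow(pI, N - j))
        for E, c in enumerate(prod):
            d[E] += c
    return d

def avg_energy(d, beta, N=20):
    """Boltzmann-averaged energy per molecule from the degeneracies d_n."""
    w = [c * math.exp(-beta * E) for E, c in enumerate(d)]
    Z = sum(w)
    return sum(E * wE for E, wE in enumerate(w)) / (Z * N)
```

With the degeneracies in hand, the occupation distribution p_n(β) and the thick curve of Fig. 11 follow immediately from the Boltzmann weights d_n exp(−βE_n).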
2009-07-29T15:37:49.000Z
2009-03-12T00:00:00.000
{ "year": 2009, "sha1": "a4cf132cdbce6e98bc14998d8b6d9535936499ea", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a4cf132cdbce6e98bc14998d8b6d9535936499ea", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
117253927
pes2o/s2orc
v3-fos-license
Analysis and Design of PMBLDC Motor for Three Wheeler Electric Vehicle Application . This paper deals with the analysis and design of a permanent magnet brushless dc machine (PMBLDCM), primarily aimed at three-wheeler applications. The motor sizing accounts for the forces acting on the vehicle, and the design variables such as the number of stator and rotor slots, stator and rotor dimensioning, air-gap approximation, slot sizing, flux per pole and permanent magnet sizing are explained using simplified equations. The designed motor, rated at 1.5 kW, 3000 rpm, 120 V with a radial-flux surface-mounted permanent magnet rotor, is then assessed using analytical design tools such as ANSYS's RMxprt to verify the analytically obtained results. These results are then verified using the computer-aided analysis tool, finite element analysis, in ANSYS Maxwell, to obtain the electromagnetic characteristics of the motor for further modification of the design. Introduction Of late, due to issues like increasing global oil demand and rising automobile emissions [1], there has never been a higher demand for research and development of environmentally safer technologies to prevent global resource depletion. One of the fastest-developing of these technologies has been electric vehicles (EVs), which have shown prominent advancements in the past 10-15 years. EVs can prove to be a very viable alternative to petroleum-powered automobiles due to features like zero tailpipe emissions, higher power efficiency and lower running costs compared to their gasoline counterparts [2]. Past developments in EV technology can generally be classified into three subsystems: battery technology/energy storage, electric motors, and their drives. As a key element of the EV system, electric motors have to offer high efficiency, a wide speed range, high power density and maintenance-free operation [3]. Fig.
1 shows the different types of motors being developed or used for EV applications. Conventional motors like induction machines are still widely accepted due to their low cost, high reliability and maintenance-free operation. However, conventional control methods cannot provide the performance required in EVs. With newer drive configurations rapidly developing due to the advent of power electronics and modern control methods [4][5][6][7][8][9][10][11][12], conventional motors have also been able to provide the desired performance, but they lack efficiency in the lower speed ranges due to higher losses and have a limited constant-power range. The use of permanent magnet (PM) motors, namely the PM synchronous motor and the brushless DC (BLDC) motor, is becoming more prevalent in the EV market [13] due to their high power density, high efficiency, linear torque-speed characteristic, efficient thermal operation and the availability of power electronics for efficient control, which make them appropriate for traction applications. PM motors, as their name suggests, employ permanent magnets to generate the required operating torque. Despite recent increases in the price of permanent magnets, they have still proved profitable in operating costs. Many studies have been carried out [14][15][16][17][18] to improve their efficiencies, optimize PM sizes and reduce unwanted cogging torque. This paper aims to design a permanent magnet brushless dc motor for a three-wheeler application, which has proved to be one of the staple modes of transport in Asian countries like India, Thailand, China, Vietnam and others. The design criteria and principles are discussed, including sizing and analytical design methods. Computer-aided design and analysis is done using 2D FEM (ANSYS Maxwell) for electromagnetic modelling. The parameters of the reference vehicle are presented in Table 1, showing the various metrics used for the force model [2].
These parameters are then input into a force model to calculate the net tractive effort that must be generated to accelerate the vehicle and run it at the required speeds. Design Methodology The method to calculate the net tractive force on the vehicle is as follows: a. The net tractive effort on any vehicle comprises four main forces: the aerodynamic drag force (Fw), the rolling resistance force (Fr), the acceleration force (Fa), and the gradient force (Fg). b. The aerodynamic force (Fw) is produced by the friction of the vehicle moving through air. It is a function of the body shape and frontal area of the vehicle. c. Rolling resistance (Fr) is mainly caused by hysteresis in the tire material: the asymmetric distribution of the ground reaction forces on the tires when the vehicle is moving causes rolling resistance. It depends on the weight of the vehicle. The net tractive force (FTE) is the sum of all these forces, FTE = Fw + Fr + Fa + Fg. All the above-mentioned forces can be seen in the vehicle dynamic force model in Fig. 4. Every point on the graph in Fig. 4 has an initial velocity and a final velocity. For each of these points, the net tractive force is multiplied by the velocity difference to obtain the required power at that point. After calculating the power for each point on the drive cycle, its average value is taken as the required power rating of the motor. The calculated average power rating of the motor for this vehicle is about 1531 W, or about 1.5 kW. Motor Design The structure of the PMBLDC motor under design is shown in Fig. 6. It shows all the primary dimensions required for sizing and designing the motor [19][20][21]. The PMBLDC motor type selected for the design is a surface permanent magnet type, meaning the permanent magnets are attached to the outer surface of the rotor. Fig. 7 shows the procedure for the design of the PMBLDC motor of rated power for the three-wheeler.
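The force-model steps above can be sketched numerically as follows; the default parameter values (drag coefficient, frontal area, rolling coefficient, air density) and the example drive cycle are illustrative placeholders of ours, not the values of Table 1:

```python
def tractive_power(v0, v1, dt, mass, g=9.81, rho=1.2, Cd=0.4, Af=1.5,
                   fr=0.015, grade=0.0):
    """Power needed over one drive-cycle step from speed v0 to v1 (m/s) in dt (s).

    F_TE = Fw + Fr + Fa + Fg, as in the force model described above.
    Cd, Af, fr, rho and grade are illustrative placeholders.
    """
    v = 0.5 * (v0 + v1)                  # mean speed over the step
    Fw = 0.5 * rho * Cd * Af * v ** 2    # aerodynamic drag force
    Fr = fr * mass * g                   # rolling resistance force
    Fa = mass * (v1 - v0) / dt           # acceleration force
    Fg = mass * g * grade                # gradient force (small-angle approximation)
    return (Fw + Fr + Fa + Fg) * v

def motor_rating(cycle, mass):
    """Average of the step powers over the drive cycle = required motor rating."""
    powers = [tractive_power(v0, v1, dt, mass) for v0, v1, dt in cycle]
    return sum(powers) / len(powers)
```

Averaging the step powers over the whole reference drive cycle is what yields the roughly 1.5 kW rating quoted in the text.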
Iterative loops are not shown here, but have been implemented in the procedure wherever required. The methodology followed for the design of the electric motor is as follows: Prediction of required power The required power for the motor has been determined using the net tractive effort. The estimated power rating required for the vehicle is about 1.5 kW. Stator Slot/Pole Selection The number of stator slots is selected to be 24 (Ns) and the number of rotor poles is selected to be 4 (Np). Proper slot/pole selection is very important to reduce effects like crawling and/or cogging. Ratings of the motor The motor to be designed has a power rating of 1.5 kW with an input voltage of 120 V DC. The required speed range is 3000 to 4000 rpm. Motor power, P = 2EI cos α. Back EMF Constant (KE) For a rated speed of 3000 rpm and an input voltage of 120 V, the back EMF constant KE is equal to 0.3437 V-s/rad. Stator Outer and Inner Diameters (Din & Dout) The stator outer diameter is chosen to be 120 mm (Dout) and the stator inner diameter is calculated to be 69 mm (Din). Stack length of the motor (Lstack) The following assumption has been considered to calculate the stack length of the motor: a. the air-gap magnetic flux density is expected to be about 0.9 T. Solving with the output coefficient σp = 105334 VAs/m, the stack length Lstack is equal to 50 mm. Air gap (g) and rotor diameter (Dr) The air gap is kept at an optimum 0.5 mm, and therefore the rotor outer diameter is equal to 68 mm. Magnet thickness (hm) and rotor yoke diameter (Dry) The thickness of the magnet pole is chosen to be 2.5 mm, and therefore the rotor yoke diameter (Dry) is equal to 63 mm. Surface area (Apole) and total volume of PM (Vm) With the above dimensions, the area of each PM is 2474 mm²/pole and the total volume of the PMs is 24740 mm³. Armature turns per phase (N1) Substituting all values into the turns equation, N1 is equal to 118 turns.
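The dimensional chain of the steps above can be checked numerically; using the stated values (Din = 69 mm, g = 0.5 mm, hm = 2.5 mm, Apole = 2474 mm², Np = 4) it reproduces Dr = 68 mm, Dry = 63 mm and Vm = 24740 mm³:

```python
def rotor_geometry(D_in=69.0, g=0.5, h_m=2.5, A_pole=2474.0, N_p=4):
    """Rotor dimensioning chain of the design steps above.

    All lengths in mm, areas in mm^2; defaults are the stated design values.
    """
    D_r = D_in - 2 * g        # rotor outer diameter (air gap on both sides)
    D_ry = D_r - 2 * h_m      # rotor yoke diameter (magnets on both sides)
    V_m = N_p * A_pole * h_m  # total permanent-magnet volume
    return D_r, D_ry, V_m
```

With the defaults this returns (68.0, 63.0, 24740.0), matching the rotor diameter, yoke diameter and PM volume quoted in the design steps.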
Computation of the flux per pole The flux in the air gap due to the magnets is ϕp = 4 Bm Apole (12), and the flux per pole is equal to ϕmagnet = Bm × Apole. The flux per pole ϕmagnet is equal to 8.1642e-4 Wb for Bm = 3.3 kG (0.33 T). Slot sizing After establishing the number of conductor turns per slot and the number of parallel paths (a), we can calculate the dimensions of the slot and the useful slot area Asu, provided we adopt a slot fill factor Kfill. The slot fill factor for round wire ranges from 0.35 to 0.65. The slots designed are rounded-trapezoidal in shape. The useful slot area Asu is given as:

Asu = π d_co² n_cs a / (4 Kfill) (13)

The key dimensions of the stator slot geometry are shown in Fig. 8. Analytical Results After obtaining the required dimensions, analytical values of quantities such as torque, power, phase current, resistance and back EMF are estimated. Finite Element Analysis (2D) The finite element method is employed in this design using ANSYS's Maxwell package, which offers both 2D and 3D FE solution methods for any electromagnetic problem type. The motor can be designed in Maxwell's 2D environment or imported from ANSYS's RMxprt package. After the motor is designed in the 2D environment, the software meshes the model to simplify the problem and obtain solutions. The fineness of the mesh is directly proportional to the number of nodes formed: the more nodes, the more accurate the output result. After solving the 2D FE problem, the following results are obtained for the motor outputs. Torque output Fig. 10 shows the characteristic torque output for a BLDC motor; the rated torque averages 5 Nm. There are also ripples in the characteristic curve due to the high cogging torque of a BLDC motor; these can be reduced using methods to reduce cogging torque. Fig. 11 shows the output speed of the motor, rated at the required speed of 3000 rpm.
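The flux-per-pole figure and the slot-area formula can be verified with a short script; the slot-area expression is our reading of the garbled Eq. (13) for round conductors of diameter d_co, with n_cs conductors per slot and a parallel paths:

```python
import math

def flux_per_pole(B_m=0.33, A_pole=2474e-6):
    """Flux per pole = B_m * A_pole (3.3 kG = 0.33 T, pole area in m^2)."""
    return B_m * A_pole

def slot_area(d_co, n_cs, a, K_fill=0.5):
    """Useful slot area: total round-conductor copper area divided by
    the fill factor (our reading of Eq. (13))."""
    return math.pi * d_co ** 2 * n_cs * a / (4 * K_fill)
```

flux_per_pole() with the stated Bm and Apole returns 8.1642e-4 Wb, matching the value quoted above.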
In addition, the FE method can display accurate distributions of several fields, namely flux density, current density and their vectors. These can be used to understand the distribution of flux, saturation, temperature, etc., and to improve the model by optimizing it to reduce undesired effects. Figs. 13, 14 and 15 show the distribution plots for flux density, current density, flux lines and current. Fig. 13 shows the distribution of flux along the cross-section of the motor; the maximum flux density along the cross-section does not exceed 2 T, which is below the saturation limit of the ferromagnetic material used. The current density plot (A/m²) in Fig. 14 and the flux distribution in Fig. 15 are both within practical permissible limits. Conclusions A PMBLDC motor has been proposed and analyzed for the application of a three-wheeler rickshaw. A complete design of the motor has been carried out along with analytical calculations, and finite element analysis has been used to analyze the performance of the motor. The results obtained show a good design with reduced losses, higher efficiency, and current density within limits. Further work can be done on optimization of the same motor, which can then be realized in hardware to test actual performance.
2019-04-16T13:29:34.941Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "ea2b4546b7fd2c050a508274490a2b80b61b2e1d", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/13/e3sconf_SeFet2019_01022.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "04bfa32da6ddfd7eef3a9c27fbd13ad13e1d6c20", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
246541292
pes2o/s2orc
v3-fos-license
Decay of Fanatical Nationalism in Pakistan: Looking Back to the Election of 1970 Pakistan and India emerged as independent countries in 1947 on the basis of the two-nation theory of Muhammad Ali Jinnah, which was promulgated in light of religious dogmatism. Pakistan was divided into two parts, East and West Pakistan, separated by a distance of about 1200 km. There was no resemblance between these two parts except religious similarity. Later, Bengali nationalism gradually developed in East Pakistan through various events, i.e., the language movement of 1952, the United Front election of 1954, the six-point movement of 1966 and the election of 1970. The period of united Pakistan was a history of exploitation, oppression and deprivation of the East by the West. The present paper attempts to explore how fanatical nationalism decayed during the Pakistan period. Additionally, the study tries to trace the core consequences of the election of 1970 for the liberation movement. The paper also evaluates present politics in the light of these historical events. Finally, the study offers some remedial recommendations for the socio-political problems relevant to the findings. INTRODUCTION: The election of 1970 was the last election of united Pakistan, and it helped to disclose the mindset of the ruling class of West Pakistan. The electorate first got the opportunity to exercise their voting rights in December 1970 and January 1971. Although most elections during the Pakistan period were conducted indirectly under the dominant power of the military, the election of 1970 exceptionally followed a democratic path. The election was held in two phases: votes were cast on 29 December for a total of 291 seats of the National Assembly and on 16 December for a total of 289 seats of the Provincial Assemblies.
Natural disasters (cyclones and tidal surges) also hampered the election process in the coastal areas, where voting was finally held on 18 January for 9 constituencies of the National Assembly and 21 seats of the Provincial Council. The masses became enthusiastic about participating in politics by choosing their own leaders, and the Awami League (AL) obtained a massive mandate in that election. At the same time, the result of the election reflected the mandate for the Six Points Program and its importance (Pakistan Observer, 1971a). The present paper examines that election and evaluates the present politics of Bangladesh in the light of past experience. MATERIALS AND METHODS: Research gap We found very little in-depth research and analysis regarding the decay of fanatical nationalism in Pakistan, apart from various published books and articles. In order to better understand the manifestation of nationalism during the 1970 election, the researchers have composed this article to provide in-depth knowledge about that nationalism. Moreover, the article was also devised to give priority, in a general sense, to the events of the 1970 election leading towards the foundation of Bangladesh. Research methods and data collection Historical and observational methods have mainly been used in this paper. The researchers have collected data from different secondary sources. We have studied various daily newspapers, periodicals, published books, magazines, statements and research articles to collect information. Additionally, we have studied official documents from the ministries of Bangladesh focusing on information relevant to the 1970 election. Conceptual definition of Religion and Nationalism In this article, conceptual clarification helps to relate practical experience to theoretical cognition. From this standpoint, the study explains some conceptions as theory. No human society has been found without the presence of religious beliefs, from antiquity to the current era.
There are very few people in the world who do not believe in a Creator; almost everyone acknowledges that there is a force at work at the source of life and the world. Religion is what holds life together: it is a vision of life and a system of life that guides man to a better way of living by maintaining harmony with nature (Lord and Mackensie, 1919). Nationalism, in turn, can be defined as a product of political, economic, social and intellectual factors at a certain stage in history; it is a condition of mind, feeling or sentiment of a group of people living in a well-defined geographical area (Kohn, 1991). Legal Framework Order for the 1970 Elections After the fall of General Ayub Khan, Yahya Khan ascended to power and issued a Legal Framework Order on 30 March 1970 in preparation for the elections (Pakistan Observer, 1971b). The study found the following features in the Legal Framework Order. A few issues were given importance in the general election provisions: firstly, ensuring universal adult suffrage; secondly, implementing a one-person-one-vote policy; and thirdly, determining seats for all provinces on the basis of population. For the formation of the National Assembly the following provisions were made: 1) The number of seats in the National Assembly is 313 2) There will be 300 general seats and 13 reserved seats for women 3) The number of seats in all the provinces will be determined on the basis of population Further provisions stated that Pakistan would be an Islamic republic, that democratic rule would be established and that the provinces would have autonomy, while law and finance would remain in the hands of the Center. Participation, Environment and Issues in the Election of 1970 About 25 political parties participated in the 1970 elections. Of these, 11 were from East Pakistan and 14 from West Pakistan.
The strongest parties were: a) Awami League (AL); b) Pakistan People's Party (PPP); c) National Awami Party; d) Jamaat-e-Islami; e) Jamiat Ulama-e-Islam; f) Jamiat Ulama-e-Pakistan; g) Nezam-e-Islam; h) Pakistan Democratic Party (Ahmed, 1976). The AL in East Pakistan and the People's Party in West Pakistan were the strongest. The AL was progressive, while the NAP (Bhasani) and the PPP believed in Islamic socialist ideas. Undoubtedly, it was difficult to place the other political parties entirely in any right-wing, left-wing or communist camp. The environment did not remain in favor of the AL during the military regime of Pakistan: the AL, as the spokesman of the masses, did not get the opportunity to campaign freely in the election. The rulers constantly placed impediments in the way of its election campaign and, appealing to religious sentiment, attempted to prove that the AL was not a patriotic, people-oriented political party. The following manifestos were presented by the main political parties of Pakistan: 1) In the election of 1970 the AL declared a manifesto in favor of the masses, based on the six-point program of 1966 and the eleven-point demands, directed against West Pakistani rule and economic exploitation and promising the elimination of regional inequality. The AL also criticized the central government for its failures in several matters, especially the natural disaster of 1970. 2) The Pakistan People's Party (PPP) declared a manifesto based on ensuring a strong central government, establishing Islamic socialism and standing firm against India. Election Results of 1970 The election was held in a comparatively free and fair manner, and the voters cast their ballots peacefully for their candidates. The Daily Ittefaq published an editorial regarding the election of 1970. The editorial emphasized that the general election was a historic event for united Pakistan.
There had been many obstacles in its path: many doubts had arisen, and many threats and provocations had developed. Under various pretexts, multifarious pressure had been put on the President to postpone the elections indefinitely. One thing is certain, however: if the elections had not been held on the scheduled date, the very foundations of the state would have been shaken and the whole future of the nation would have been plunged into grave uncertainty. Voting power is the power of the people, and that power is the highest power of the state. On this day of great test, every patriotic citizen would exercise his power and rights by putting forward the eternal demand for the rights of deprived East Bengal and keeping in mind the overall welfare of the state (Karim and Akter, 2021; The Daily Ittefaq, 1970). Other parties, namely the NAP (Bhasani), the National League, the People's Party, the Pakistan National Congress and the Islamic Democratic Party, fielded 47 candidates but did not win any seats (1972). Politics in Pakistan and its Impact The result proved that united Pakistan might break apart in the future. The people of East Pakistan clearly expressed their view through their votes. On the one hand, the rulers of West Pakistan never agreed to hand over power to the AL as the winning party; on the other hand, they denied the result and engaged in conspiracies against the people's movement. Afterwards, the masses realized that united Pakistan could not be sustained on the basis of religious sentiment, and that separation from Pakistan was the only way out of the crisis. Victory of the Awami League Multifarious reasons contributed to the victory of the AL as a large, popular political organization.
The factors that helped the AL win the election are given below: 1) The atrocities, torture and discrimination committed by the Pakistani regime against the people of East Bengal from the beginning were answered at the ballot box in favour of the AL (Jahan, 1977). 2) Religious Muslims played a role in the AL's victory, as the party respected Islamic ideals in its free campaign (Moniruzzaman, 1988). 3) The AL manifesto was a symbol of the aspirations of the people of East Pakistan. The manifesto reflected the demands of the people of the region, who expressed their overwhelming support for the AL. 4) The personal image of the party chief, Bangabandhu Sheikh Mujibur Rahman, also worked behind the AL's absolute victory in the 1970 elections. His charismatic leadership, personality and fluent speech easily attracted people to him. The people then voted for his party and paid respect to him (Khan, 1985). 5) None of the other parties that participated in the 1970 election was as strong as the AL in terms of organisation. As a result, the AL easily won a majority. 6) The people of East Bengal had already turned against the West Pakistani regime, since Pakistan, established in 1947, had held no general election in 23 years; in this election, the people of East Bengal elected the AL. 7) In the 1965 Pak-India war, the western regime fought against India while leaving East Bengal completely unprotected. 8) It was under the leadership of the AL that the six-point demand for the emancipation of the Bengalis was raised in 1966. As a result, the Bengalis became conscious of their rights, an awareness that gained greater maturity through the 1970 elections. Consequently, the AL won the 1970 elections. Breaking Down of Pakistan Several notable reasons lay behind the breakdown of Pakistan: geographical, social, cultural, economic and political. First, there was no similarity between East and West Pakistan in terms of geography.
The distance between East and West Pakistan was about 1200 km, and there was no communication system except by air. As a result, Pakistan broke up (Sobhan, 2015). Second, the social structure and way of life of the people of East and West Pakistan were different. Language, art, literature, dress and food were all different; there was no similarity in anything except religion. As a result, the downfall of the state of Pakistan became inevitable (Zaheer, 2001). Third, East Pakistan was basically a colony of West Pakistan. The Pakistani regime exploited East Pakistan in all respects, and widespread economic inequality existed everywhere, including in the development, education and industry sectors. Although the 1972 constitution called for the elimination of inequality, it was not implemented in practice. Basically, the history of Pakistan was a history of deprivation, exploitation and oppression. The Pakistani regime ran a steamroller over the people of East Pakistan. As a result, the East Pakistani people plunged into the war of independence (Hossain, 1991; Maniruzzaman, 1975). The 1970 election was the only general election in the history of Pakistan and the last chance to create a united Pakistan. The election of 1970 is an important event in the history of Bangladesh's freedom struggle. However, it was through these elections that the state of Pakistan died and the idea of Pakistani nationalism was shattered. In this context, Bangabandhu Sheikh Mujibur Rahman said, "You have set only one foot on the road of struggle and not even both the feet." He further said that Bengalis will always live as Bengalis (Loshak, 1971). CONCLUSION AND RECOMMENDATIONS: The election unanimously reflected that the two-nation theory was a flawed basis for the foundation of Pakistan. Though we gained an independent territory through the liberation war in 1971, the people of Bangladesh are still trying to secure their rights in various sectors.
Although Bangladesh is now an economic giant in South Asia, economic deprivation is accelerating day by day among its citizens. Indeed, corruption, lack of people's participation, absence of the rule of law, rule of bureaucracy in the name of democracy, lack of good governance, politicisation of the administration and the absence of free and fair elections still shape the political environment of Bangladesh. Such phenomena not only hamper the nation-building process but also break down national integration in Bangladesh. Indeed, if we are to build the Sonar Bangla that was always dreamt of by the father of the nation, Bangabandhu Sheikh Mujibur Rahman, these things must be kept out of our country. The election of 1970 carries lessons for the ruling class. It teaches the ruling class to respect the majority: public opinion has to be given the highest priority over all issues. History also teaches leaders that power is always temporary; indeed, autocracy never lasts in the long run, and national integration may break down through a failure of leadership. That is why a few suggestions, drawing on past experience, are given below for the ongoing politics of Bangladesh: territorial integration is needed to form a united state; religious dogmatism should be abolished from state policy; all types of discrimination, whether economic, social, political or cultural, should be eliminated from the state; and people's participation in the state must be developed. ACKNOWLEDGEMENT: The authors would like to acknowledge and give warmest thanks to the Begum Rokeya University, Rangpur authority for providing insightful and valuable suggestions for carrying out this research. They would also like to thank their colleagues as a whole for their continuous support and understanding while writing this paper.
The association between crowding within households and behavioural problems in children: Longitudinal data from the Southampton Women’s Survey Abstract Background In England, nearly one child in ten lives in overcrowded housing. Crowding is likely to worsen with increasing population size, urbanisation, and the ongoing concerns about housing shortages. Children with behavioural difficulties are at increased risk of mental and physical health problems and poorer employment prospects. Objective To test the association between the level of crowding in the home and behavioural problems in children, and to explore what factors might explain the relationship. Methods Mothers of 2576 children from the Southampton Women's Survey population‐based mother‐offspring cohort were interviewed. Crowding was measured at age 2 years by people per room (PPR) and behavioural problems assessed at age 3 years with the Strengths and Difficulties Questionnaire (SDQ). Both were analysed as continuous measures, and multivariable linear regression models were fitted, adjusting for confounding factors: gender, age, single‐parent family, maternal education, receipt of benefits, and social class. Potential mediators were assessed with formal mediation analysis. Results The characteristics of the sample were broadly representative of the population in England. Median (IQR) SDQ score was 9 (6‐12) and PPR was 0.75 (0.6‐1). In households that were more crowded, children tended to have more behavioural problems (by 0.20 SDQ points (95% CI 0.08, 0.32) per additional 0.2 PPR, adjusting for confounding factors). This relationship was partially mediated by greater maternal stress, less sleep, and strained parent‐child interactions. Conclusions Living in a more crowded home was associated with a greater risk of behavioural problems, independent of confounding factors. 
The findings suggest that improved housing might reduce childhood behavioural problems and that families living in crowded circumstances might benefit from greater support. | INTRODUCTION Behavioural problems lead to a range of negative outcomes including mental and physical health problems, 1 increased violence and risk of a criminal conviction, 2 and poorer educational attainment and employment prospects. 1 Studies have shown that behavioural problems affect one in ten children in the United Kingdom (UK). 1,3 This results in a serious burden for the individual, their families, and the wider community and economy. Housing quality is now widely recognised as one of the social determinants of health. 4 Determining which elements of housing quality can be detrimental to behavioural problems in children could enable policies to be more effectively targeted at addressing this inequity. One such important and timely element is crowding. Crowding is worsening in the current housing crisis, 5 and new homes in the UK are the smallest in Western Europe. 6 There are various ways both to measure the level of crowding in a household and to define the point at which a household is classed as overcrowded (see Figure 1 for definitions). People per room (PPR) is the most useful measure of crowding as it is continuous and is the most commonly used metric in research. 7 The bedroom standard is widely used as a definition for classifying a household as overcrowded. 8 Using the bedroom standard, nearly one million children, or one child in every ten, live in overcrowded conditions in England. [8][9][10] This problem is more common among families of lower socio-economic status, in rented accommodation, and in cities, with nearly one child in every three living in an overcrowded home in London's social housing. 5,10 Most research on the effects of crowding is based on adults. 11 Yet children are particularly influenced by their home environment.
12 Studies have shown crowding in the home has a negative impact on children's education and a range of physical health outcomes, 13 but, as highlighted by other researchers, despite the strong theoretical links to adverse psychological processes, almost no research on children has focused on associations between crowding and behavioural outcomes. 14 The majority of studies on crowding in the home and behavioural problems in children originate from America, are from the 1970s or earlier, were based on very small samples, and used cross-sectional designs. [13][14][15][16] Notably, there has not been a study in the UK for over 25 years. 14,15 In most of the studies, children living in crowded households had more behavioural problems than children in less crowded households. [14][15][16][17][18] Crowding may impact on children's behaviour through a lack of privacy or space to play, 19,20 increased reliance on childcare, 1 interrupted sleep, 17 or impacts on parent-child interactions including conflict, reduced monitoring, and less parental responsiveness. 1,16,21 Despite the numerous theoretical explanations for the relationship between crowding and child behaviour, very little research has included potential confounding or mediating factors. The aim of this study was to assess whether the level of crowding in the home is associated with more behavioural problems in a UK cohort of children, and to explore what factors might explain the relationship. KEYWORDS: behaviour, cohort study, crowding, housing tenure, parent-child interactions, strengths and difficulties score. Study question: Is there an association between the level of crowding in the home and behavioural problems in children, and if so, what factors might explain the relationship? What's already known: Early, small-scale studies indicate that living in a more crowded home is associated with a greater risk of behavioural problems in children. What this study adds: This UK-based cohort study confirms that living in a more crowded home is associated with a greater risk of behavioural problems in children, independent of confounding factors (gender, age, single-parent family, maternal education, receipt of benefits and social class and neighbourhood quality). The relationship was mediated in part by maternal stress, less sleep, and strained parent-child interactions. Crowding occurs more commonly in social housing. | Participants The Southampton Women's Survey (SWS) is a prospective cohort study of 12 583 women aged 20-34 years recruited, when not pregnant, from the general population resident in Southampton. 22 A total of 3,158 women who subsequently became pregnant were followed through their pregnancy, and their children were then followed up at intervals during childhood. Those who had information collected on behavioural problems at age 3 years were included in the study. The final sample consisted of 2576 children (see Figure 2). Behavioural problems at age 3 years were assessed using the Strengths and Difficulties Questionnaire (SDQ). Mothers were questioned regarding their children in four areas: emotional, conduct, hyperactivity/inattention, and peer problems; and the scores from each of these were summed to create a total difficulties score. 23 This score can range from 0 to 40 and was treated as a continuous variable. A higher score indicates greater behavioural problems (a score under 13 is "close to average," 13-15 "slightly raised," 16-18 "high," and 19 and above "very high"). 24 Potential confounding factors were identified a priori from existing literature and included in a directed acyclic graph (DAG) (see Figure 3). This indicated two different minimal sufficient adjustment sets.
The first included level of maternal educational attainment, highest level of parental social class (by occupation), single-parent household, whether the household received benefits (support/job seekers allowance, working tax credit, or housing benefits), and housing tenure. The second included the same factors with the exception of housing tenure which was replaced with neighbourhood quality. Additionally, adjustments for age and gender of the child were included in all analyses to improve the precision of the outcome variable. We separately examined the relationship between housing tenure and crowding to try to identify the types of housing in which most crowding occurs. Housing tenure was classified as owner occupied (homes owned outright and mortgaged); privately rented; socially rented (housing rented from local authorities and housing associations); or other (families who live with a relative, in a hostel, halls of residence, or bed and breakfast). The following variables, shown in the DAG, were considered as possible mediators: sleep duration (time spent asleep per night); maternal stress (stress experienced in daily living in the last 4 weeks ranked on a 5-point scale); and two variables for parent-child interactions (conflict and closeness) which were measured using the Child-Parent Relationship Scale (CPRS). CPRS is a self-report instrument, completed by mothers, that assesses their perceptions of their relationship with their child. It is widely used and has been validated for use at this age. 25 It produces conflict and closeness scores which run from 0 to 60, with higher scores representing negative and positive interactions, respectively. Information on all the confounding and mediating variables and housing tenure was collected in the same interview with the mothers of the participants when the children were aged 2 years, with the exception of parent-child interactions and sleep, which were measured in the interview at age 3 years. 
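As a minimal sketch of the SDQ total difficulties banding quoted above, the following helper encodes the four-band cut-offs given in the text (the function name and the validation step are ours, not part of the study):

```python
def sdq_band(total_difficulties):
    """Band a parent-reported SDQ total difficulties score (0-40) using the
    four-band cut-offs quoted in the text: under 13 "close to average",
    13-15 "slightly raised", 16-18 "high", 19 and above "very high"."""
    if not 0 <= total_difficulties <= 40:
        raise ValueError("SDQ total difficulties score must be between 0 and 40")
    if total_difficulties <= 12:
        return "close to average"
    if total_difficulties <= 15:
        return "slightly raised"
    if total_difficulties <= 18:
        return "high"
    return "very high"
```

With these cut-offs, the cohort's median score of 9 falls in the "close to average" band, while the threshold of 16 used in the results marks the start of the "high" band.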
| Statistical analysis Using Stata 15.0, 26 standard summary statistics including median, interquartile range (IQR), or number (n) and percentage were produced for the variables in the analysis. Spearman's correlation and linear regression methods were used to explore the relationship between crowding and behavioural problems. In all the models, crowding was entered in units of 0.2 PPR, which equates to an additional person in an average-sized five-room household. [Figure 3: DAG model created to show the covariates included in the analyses (the association between crowding within households and behavioural problems in children, Southampton, 2019).] The first model simply adjusted for child's gender and age. Models 2 and 3 were based on the two options for minimal sufficient adjustment indicated by the DAG. In Model 2, single parent, maternal education, receipt of benefits, social class, and housing tenure were included. In Model 3, neighbourhood quality replaced housing tenure while the other variables remained the same. Mediation analysis for the association between crowding and SDQ score was implemented using formal mediation techniques. 27,28 We used Model 3 to consider the mediators. Bias-corrected confidence intervals were estimated from 500 Monte Carlo draws for nonparametric bootstrap. Direct and indirect effects were averaged across all individuals. Data on behavioural problems were slightly skewed to the right so a sensitivity analysis was conducted using the square-root transformation. We tested for nonlinearity of the relationship between child's behaviour and crowding by including a quadratic term for crowding in our models. Further, we conducted an analysis restricted only to those living in owner-occupied houses. In our data set, 78% of individuals had fully observed data. The proportion of missing data for each variable ranged from 0.2% (gender) to 19% (conflict score); we did not identify important missing data patterns in our data set.
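The study implemented formal mediation analysis with bootstrap confidence intervals in Stata; as a hedged illustration of the underlying product-of-coefficients decomposition (direct effect plus effect transmitted via a mediator), the following Python sketch uses synthetic, noise-free data. The tiny OLS solver and all variable names are ours, not the study's:

```python
def ols(design, y):
    """Ordinary least squares via the normal equations (Gauss-Jordan elimination)."""
    k = len(design[0])
    aug = [[sum(r[i] * r[j] for r in design) for j in range(k)]
           + [sum(r[i] * yi for r, yi in zip(design, y))] for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(k):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [aug[i][k] / aug[i][i] for i in range(k)]

# Synthetic data: exposure X (crowding), mediator M (e.g. maternal stress),
# outcome Y (behavioural score), constructed so that Y = 1*X + 3*M exactly.
X = [0.0, 1.0, 2.0, 3.0]
M = [1.0, 1.0, 5.0, 5.0]
Y = [x + 3 * m for x, m in zip(X, M)]

a = ols([[1.0, x] for x in X], M)[1]                        # mediator ~ exposure
_, direct, b = ols([[1.0, x, m] for x, m in zip(X, M)], Y)  # outcome ~ exposure + mediator
indirect = a * b                                            # effect transmitted via the mediator
total = ols([[1.0, x] for x in X], Y)[1]                    # outcome ~ exposure alone
# For linear models, the total effect equals direct + indirect exactly.
```

With these synthetic values the direct effect is 1.0 and the indirect effect a*b is 4.8, summing to the total effect of 5.8; in the study, the analogous decomposition showed the crowding coefficient partly explained by maternal stress, sleep, and parent-child interactions.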
We used multiple imputation of missing data to minimise selection bias and increase the power of our analysis. For each imputation model, we included all the variables identified from the DAG as potential confounders or mediators, as well as our outcome. We generated 100 imputed data sets and combined the coefficient estimates using Rubin's rule. 29,30 We based our imputations on the assumption that missingness in the data is explained by the observed variables included in the imputation model (ie data are missing at random). 31 More details are in Table S1. | RESULTS The characteristics of the 2576 children are given in Table 1. [Table 1: Baseline characteristics of the study population. Percentage totals may not add to 100 due to rounding; only data on behavioural problems were slightly skewed, but medians (IQRs) are presented for consistency. ISCED level equivalents are as follows: no qualifications is ISCED-0, 1, and 2; GCSE only is ISCED-3; A-levels or equivalent is ISCED-3 and 4; and degree or diploma is ISCED-4, 5, and 6. Mothers ranked the stress or pressure they experienced in daily living in a 4-week period on a 5-point scale (none, just a little, a good bit, quite a lot, or a great deal); responses were grouped so that "just a little" and "a good bit" represent mild stress and "quite a lot" and "a great deal" represent moderate-to-severe stress.] The median age was 3 years at the time of assessment of behavioural problems. The study sample characteristics were almost identical to the wider SWS cohort and broadly in line with England figures. 1,5,23 In households, the number of rooms ranged from 2 to 12 with a mean of 6.0. The number of individuals in households ranged from 2 to 11, and level of crowding ranged from 0.3 to 4 PPR. There was relatively little change in the level of crowding from the child's birth to age 2 years, with 1951 (76%) households having no change to the number of individuals in them. Of households that did see a change, the majority were due to the addition of a single child. The total difficulties behavioural score ranged from 0 to 31, with 248 (9.6%) of children having "high" or "very high" scores (SDQ score ≥ 16). Table 2, Model 1 shows the positive association between crowding and behavioural problems adjusted for age and gender. In Model 2, which also includes additional adjustment for the confounding variables (single-parent households, maternal education, income, social class, and housing tenure), the association between behavioural problems and crowding was markedly attenuated. In Model 3, in which housing tenure was replaced by neighbourhood quality, there was less attenuation from Model 1 than was seen in Model 2. In households that were more crowded by 0.2 PPR (equating to an additional person in an average-sized five-room household), the children tended to have more behavioural problems by 0.20 SDQ points (95% CI 0.08, 0.32, P < 0.001), after adjustment for confounding factors. Furthermore, children with SDQ scores ≥ 16 ("high" or "very high" total difficulties score) lived in houses that had, on average, 0.2 more PPR than children with SDQ scores < 13 ("close to average" score).
Examining the subscales of the SDQ score indicated that the association was dominated by the relationship with conduct problems and peer problems rather than with the other subscales of hyperactivity and emotional symptoms (Table S2). The analysis of the multiply imputed data sets to take account of missing data found very similar results to those in Table 2. The results are given in Table S3. [Table 2: Multivariable regression assessing the relationship between crowding in the household and behavioural problems in children in the multiply imputed data set.] With adjustment for the potential mediators, the association between crowding and behavioural problems was reduced to 0.16 (95% CI 0.04, 0.28) (see Table 3). This indicates that all of these factors could, in part, explain the positive association between crowding and behavioural problems, but that after adjustment, the relationship between crowding and behavioural problems remained. A sensitivity analysis using a square-root transformation of the data on behavioural problems produced the same Spearman's correlation coefficient and significance for the correlation between crowding and behavioural problems. All the same factors remained statistically significant in the regression analyses in Models 1 and 2. We found no evidence of nonlinearity in the relationships. The association between crowding and housing tenure was found to be strong, with children living in socially rented housing being more likely to experience crowding (see eFigure 1). | Principal findings This UK-based study confirms the associations shown in studies in other countries: children living in crowded households had more behavioural problems than children in less crowded households, independent of age, gender, single-parent households, maternal education, receipt of benefits, and social class.
It adds to the evidence base by showing that maternal stress, less sleep per night, and strained parent-child interactions might all, in part, be mediating factors. Furthermore, we identified that children living in social housing tended to live in more crowded homes, but that even in owner-occupied homes, crowding and behavioural problems are associated. The findings of this study are consistent with the majority of earlier, small-scale studies on crowding and behavioural problems and offer resolution to a number of common limitations, not least study design. [14][15][16][17][18] It has a large sample size, strong, prospective cohort design, and relatively robust control for potential confounding factors. The findings agree with the only other longitudinal study to date by Solari et al, 12 which also found that children from more crowded households had more behavioural problems than children from less crowded households, irrespective of socio-economic status and demographic factors. | Strengths of the study Possible reasons why the findings of this study differ from the few studies that did not find an association between crowding and behaviour, such as Li et al, 20 include the differing methods of measuring crowding. Li et al used unit square footage per person; however, capturing crowding through PPR is preferred because it has been reported as the most consistent crowding metric with human consequences, 7 and because of inconsistencies in how people define bedrooms. 12,16 There is no known threshold for any detrimental effect from crowding on a child's behaviour, so the continuous measure is justified and more sensitive than arbitrary categorical intervals. 12 A further strength of this study was its prospective cohort design. The longitudinal nature of the data enabled temporality to be taken into account.
The SWS cohort has been well characterised, thus allowing consideration of important confounding factors, albeit that there is likely to be residual confounding. The characteristics of the sample were almost identical to the wider SWS cohort, but the SWS cohort is slightly more affluent than the general population in the UK, a pattern that commonly results from selection bias in studies. 23 Interviewers and participants were blinded to the research hypothesis, which minimised reporting bias. Missing data did not seem to be a major problem as analyses of our multiply imputed data sets gave very similar results to the complete-case analysis. The SDQ is not a clinical assessment, but it is a validated tool to measure behavioural problems in the sample age group. 32 The age of 3 years was an appropriate time to measure the outcome as child behaviour shows increasing stability from around this point onwards. 1 | Limitations of the data Several covariates could have been more refined; for example, receipt of benefits is a crude measure of income, and there is some evidence to suggest that the SDQ might be a more sensitive measure of behavioural problems after age 4 years. 32 The exposure, outcome, and covariates were all reported by the participants' mothers, which introduces the potential for response bias. For example, if some mothers in overcrowded households gave information that led to an underestimation of the PPR, then this might have led to an exaggerated effect size. However, the interviews were conducted in the participants' homes, so interviewers could, to an extent, verify the validity of participants' answers. Data were not available on some factors that may also be involved, such as intrafamilial violence or a lack of privacy.
Also, the child-parent relationship variables and sleep were measured at the same time as the behaviour outcome and it is possible that an element of reverse causation might explain the relationship between them and behaviour. [Table 3: Regression analyses of potential mediators and associated factors in the relationship between crowding in the household and behavioural problems in children.] The study did not have statistical power to analyse either changes in the level of crowding or household demographics over time. Lastly, in the SWS, the recruitment of pregnancies was necessarily over a prolonged period and the study was unable to account for potential temporal changes in housing and socio-economic conditions between 2001 and 2010. Our approach to causal inference using the DAG led to two different minimal sufficient adjustment sets, and we have shown analyses using both sets. Housing tenure and crowding are strongly linked and adjustment for housing tenure attenuated but did not completely remove the relationship between crowding and behavioural problems, whereas in the model adjusting for neighbourhood quality, the relationship was stronger. It is thus possible to argue that the problem lies with housing tenure rather than crowding, but we believe that our various analyses indicate that an association between crowding and behavioural problems is apparent. | Interpretation The National Institute for Health and Care Excellence (NICE) recommends that vulnerable children under 5 years at risk of developing behavioural problems are identified as early as possible so that increased visits and free childcare services can be provided. 33 This study provides support for categorising children in crowded households as "at risk" and taking action, such as referring those families to existing local support services.
As maternal stress, less sleep, and strained parent-child interactions appear to mediate the relationship in part, support for families could also usefully target these factors. A limitation of the bedroom standard as a definition of overcrowding is that it looks at how sleeping arrangements within the premises could be organised, rather than how they are actually organised (see Figure 1 for definition). 9,18 The UK is also one of the few European nations to have no nationally agreed minimum space standards for housing. 7 Although the effect of crowding on child behaviour is relatively modest, it does provide some support for creating space standards. 35 Children in social housing tended to have the highest levels of crowding, so improvements in such housing to reduce crowding should be encouraged. Evaluating housing interventions that are already in place would offer tremendous research opportunities. For example, a large-scale longitudinal study that compared two groups of households (one where overcrowding had been alleviated and one where it remained), taking into account confounding variables, would enable analysis of how crowding improvements can change behavioural trajectories. | CONCLUSIONS Living in a more crowded home was associated with a greater risk of behavioural problems, independent of confounding factors (gender, age, single-parent family, maternal education, receipt of benefits, social class and neighbourhood quality). The relationship was mediated in part by maternal stress, less sleep, and strained parent-child interactions. Therefore, families living in crowded circumstances might benefit from greater support, or intervening on any one of the mediators may reduce the impact of crowding on behavioural problems. Crowding occurs more commonly in social housing, so increasing space in social housing would ideally be a long-term aim. ACKNOWLEDGEMENTS We thank the women and children who participated in the SWS. The analysis of these data formed the basis of an MSc dissertation by the first author. Janis Baird received research funding from Nutricia Early Life Nutrition for a specific research study which aims to improve
Development of the insurance market in Ukraine and forecasting its crises The insurance market is an important part of the financial market, the functioning of which helps to protect individuals and legal entities from the negative and stressful effects of today's unstable economic environment. The purpose of this study is to determine trends in the insurance market in Ukraine and its potential crises. The study found that Ukraine's insurance market grows constantly, but is volatile and in a state of concentration. The dynamics of most indicators are cyclical, with a cycle length from 4.66 quarters to 14 quarters. The randomized R/S-analysis confirmed the stability of the dynamics of Ukraine's insurance market and its fractal similarity. Fractal similarity was proved for six out of ten analyzed indicators of the insurance market. In addition, it was confirmed that at the moment of transition from one fractal to another, a trend break occurs. Thus, the emergence of crises on the insurance market of Ukraine is associated with the self-similarity of the dynamics and the coincidence of the moments of bifurcation of certain indicators in its development. A partial crisis on the Ukrainian insurance market at the beginning of 2019 coincided with the bifurcation of the number of concluded insurance contracts, determined based on the results of fractal analysis. Calculations made it possible to conclude that potential crisis periods for the insurance market of Ukraine fall on Q1-2 2017, Q1 2019, Q1 2020, of which only one was realized (Q1 2019). The nearest potential moments of crises on the insurance market of Ukraine may be the following periods: Q1 2023 and Q1 2026. INTRODUCTION The insurance market is one of the most important segments of the financial market and largely determines its stability and development.
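To make the method concrete, classical R/S (rescaled range) analysis, on which the randomized R/S-analysis used in the study builds, can be sketched as follows. This is an illustrative sketch only: the window sizes and test series are our assumptions, not the study's data or implementation.

```python
import math

def rescaled_range(window):
    """R/S statistic for one window: range of the cumulative mean-adjusted
    sums, divided by the window's standard deviation."""
    n = len(window)
    mean = sum(window) / n
    devs = [x - mean for x in window]
    cum, cums = 0.0, []
    for d in devs:
        cum += d
        cums.append(cum)
    s = math.sqrt(sum(d * d for d in devs) / n)
    return (max(cums) - min(cums)) / s if s > 0 else 0.0

def hurst(series, window_sizes=(8, 16, 32, 64)):
    """Estimate the Hurst exponent as the least-squares slope of
    log(average R/S) against log(window size)."""
    xs, ys = [], []
    for n in window_sizes:
        rs = [rescaled_range(series[i:i + n])
              for i in range(0, len(series) - n + 1, n)]
        xs.append(math.log(n))
        ys.append(math.log(sum(rs) / len(rs)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A persistent (trending) series yields a Hurst exponent close to 1, an antipersistent series a value close to 0, and uncorrelated noise a value around 0.5; a change of fractal regime shows up as a break in the log-log relationship, which is the kind of bifurcation the study associates with market crises.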
At the same time, processes on the insurance market determine processes in the real sector, since compensation for risks and threats realized in the real sector of the economy is provided by the insurance market. The insurance market is a specific object of study due to its heterogeneity in all forms of structuring. Within the national economy, the number of insurers on the market is limited; each insurer has its own list of priority insurance services and its own policy for forming an insurance portfolio, insurance rules, etc. The insurance market is also territorially heterogeneous due to the uneven concentration of companies' activities across the regions of the country. On the global insurance market, heterogeneity is further determined by the socio-cultural characteristics of each region and country, which directly affect the choice of insurance services by policyholders. The importance of insurance in the formation of funds of financial resources and their capitalization should also be taken into account. Therefore, the task of forecasting the development of the insurance market acquires a narrow specificity. That is why an important area of research is identifying trends on the insurance market, both globally and nationally.

Current trends in the insurance market

Over the past 40 years, the emergence of crises on the insurance market has been associated with declining GDP (MAPFRE Economics Review, n.d.), and this dependence on GDP exists on both developed and emerging markets. At the same time, the peculiarities of insurance market development can affect the country's economy as a whole. A crisis on the insurance market may well provoke a financial crisis. Thus, in 2008, the United States spent 182 billion dollars to save the well-known insurance company American International Group (OECD, 2020), whose debt holders were investment funds around the world.
Significant financial losses could have been avoided if the preconditions for the crisis had been identified. The current economic crisis has led not to a collapse of the insurance market, but to its restructuring. Certain insurance products, whose risks have increased significantly, have become popular. Accordingly, prices for certain insurance services increased: health insurance (+35%), property insurance (+20%), and cyber insurance (+35% in the US and +29% in the UK) (Marsh, n.d.); however, in Q1 2021, for the first time, there was a situation characterized by a decline in average market growth. At the same time, the crisis in the global economy has led to a number of "waves" that can promote the insurance business (Amadeo, 2019): a sufficient and even excessive amount of information about risks; direct and wide informing of policyholders about the services that insurance companies can provide; a wide range of analytical methods that can be used by both insurers and consulting companies; and the emergence of new patterns of behavior and risk categories. These and other changes allow insurers to change their basic business model. However, they also cause asymmetry and turbulence in the market, which makes it difficult to predict its state. Global trends in the insurance market show steady growth over the past 10 years. Even the onset of the COVID-19 crisis has not led to market collapse in most countries. In 2020, insurers did not have such high profits and capital positions as in previous periods, but in general the market situation was stable (Ogilvi, 2021). A variety of models (Gestel et al., 2007), etc., are used to predict the financial condition of an insurance company or a group of insurance companies.
However, researchers themselves note the shortcomings of these and other models, which make their application too specific and prevent good forecasting quality in current conditions. The quality of forecasting often depends on the state of the national economy, random fluctuations in the insurance market, the peculiarities of the insurance company, etc. The building of a macro-forecast of the insurance market is often specified by the insurance industry, its subjects and objects (Lenten & Rulli, 2006). It is no coincidence that forecast reviews of the insurance business give forecasts for no more than a quarter ahead. At the same time, the methodological tools of the indeterminist paradigm of scientific thinking, which make it possible to obtain qualitative results in other areas of financial forecasting, are insufficiently used to determine forecast values of insurance market dynamics. For example, Izzeldin (2007) models financial indicators through the variance of stochastic processes. The author forms a multivariate model of exchange rate dynamics and adapts it to daily random changes. Richards (2004) notes that, in financial forecasting, fractal analysis helps to overcome inconvenient features of financial time series, such as inhomogeneities at non-uniform intervals and the scaling of proportional-symmetry relations between fluctuations at different separation distances. This is typical for the indicators of insurance market dynamics, although not to the same extent as for the financial assets market. Schmitt et al. (2001) propose overcoming disturbances of forecasting horizons for financial time series by means of fractal analysis, calculating statistical temporal translational invariance for exchange rate time series based on multifractal fluctuations.
Timashova and Skachko (2016) emphasize the advantages of using R/S analysis to assess fundamental characteristics of time series, such as the presence and depth of long-term memory and trend resistance (persistence), in working with financial time series. In this work, as in the studies by Kapecka (2013) and Sviridov and Nekrasova (2016), the authors assume that fluctuations in the prices of stocks or financial instruments have a "long memory" and are self-similar. Dalton's (2006) dissertation compares the results of using fuzzy models to predict stock prices on time series with and without fractal analysis, emphasizing that the use of fractal analysis significantly improves the quality of forecasting. The fractality of the market is due to a combination of global determinism and local randomness, which is observed in the vast majority of financial markets. An advantage of the "market fractality" hypothesis is also its tolerance to a certain number of errors while maintaining the system's stability (Anderson & Noss, 2013). It would be logical to assume that not only prices in financial markets have a "long memory" and are self-similar, but that the general laws of market processes are generally subject to indeterminate perception. Therefore, the use of tools associated with the study of complexly organized systems of natural genesis in the methodology for forecasting the development of the insurance market can qualitatively improve its results. The existing methods for determining trends in the dynamics are fragmentary and do not give adequate forecasts for long horizons. At the same time, Ukraine's insurance market is dynamic and unstable, and there is an urgent need to determine the most regular trends in its dynamics. The purpose of this study is to identify trends in the insurance market of Ukraine and the moments of its potential crises.
DATA AND METHODS

The study used quarterly indicators of Ukraine's insurance market: the number of registered insurers, the number of life insurers, the number of concluded insurance contracts, gross and net insurance premiums and payments, assets, paid-up authorized capital, and formed insurance reserves of insurance companies. The source of information was the quarterly insurance statistics (Forinsurer, n.d.) for the period 2014-2020. Figure 1 shows the procedure for determining potential moments of bifurcation on the insurance market. Indicators reported cumulatively during the year (the number of concluded insurance contracts, gross insurance premiums, gross insurance payments, net insurance premiums, net insurance payments) were recalculated on the basis of public statistics for each quarter separately. Indicators whose face value depends on inflation (assets of insurance companies, paid-up authorized capital, formed insurance reserves, gross insurance premiums, gross insurance payments, net insurance premiums, and net insurance payments, UAH million) were converted to 2014 prices using the GDP deflator (State Statistics Service of Ukraine, 2021). The assessment of trends in the dynamics of insurance market indicators was based on determining the parameters of approximation by linear, cyclic, exponential, and polynomial dependences and checking their reliability using F-statistics. The cyclic component was determined on the basis of Fourier analysis. Since an average level of reliability (more than 0.6 probability of approximation) was achieved for the linear and cyclic components, a complex series of dynamics was formed on the basis of two time variables. A randomized R/S analysis was used to determine the self-similarity of the dynamics (Gachkov, 2009); it yields the fractal dimension, the persistence of the dynamics, and the average cycle length. The main task of this stage of analysis is to prove the persistence of the dynamics and the existence of stable patterns in it.
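The rescaled-range step at the core of R/S analysis can be sketched as follows. This is a minimal classical (non-randomized) R/S estimator of the Hurst exponent on a synthetic series; the doubling window scheme and the function name are illustrative assumptions, not the randomized procedure of Gachkov (2009):

```python
import numpy as np

def hurst_rs(series, min_window=4):
    """Estimate the Hurst exponent via classical rescaled-range (R/S) analysis."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    sizes, rs_values = [], []
    size = min_window
    while size <= n // 2:
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = chunk - chunk.mean()          # deviations from the chunk mean
            z = np.cumsum(dev)                  # cumulative deviate series
            r = z.max() - z.min()               # range of cumulative deviations
            s = chunk.std(ddof=0)               # standard deviation of the chunk
            if s > 0:
                rs_per_chunk.append(r / s)
        if rs_per_chunk:
            sizes.append(size)
            rs_values.append(np.mean(rs_per_chunk))
        size *= 2
    # H is the slope of log(R/S) against log(window size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

# For a white-noise series, H stays near 0.5; persistent series give H well above 0.5
rng = np.random.default_rng(0)
print(round(hurst_rs(rng.normal(size=512)), 2))
```

Values of H close to 0.7, as reported below for most indicators, signal persistent, fractal-like dynamics, while H below 0.5 indicates anti-persistence.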
Based on the calculation of autocorrelation and the subsequent building of the autocorrelation function, lags for deterministic processes or the fractal length for anti-persistent time series are determined. Further analysis of the dynamics of insurance market indicators was carried out within individual fractals based on determining the parameters of linear dependences.

RESULTS

Below are the quarterly results of the theoretical approximation of the lines of dynamics of Ukraine's insurance market for the period 2014-2020 (Table 1). When determining the trends in the dynamics of indicators, their compliance with the following dependences was checked: linear, logarithmic, power, exponential, and polynomial (with orders from 2 to 4); levels of reliability from 0.37 to 0.95 were achieved. The highest level of reliability for each of the analyzed indicators was shown by a linear approximation with a cyclic component. In general, the insurance market indicators that can be used to determine the level of its concentration (number of registered insurers, number of life insurers, number of insurance contracts) have a downward trend, which confirms the preliminary conclusion about market concentration. On the contrary, indicators of market size grow. At the same time, the shapes of the theoretical approximation lines testify to the potential instability of the market. Thus, with the simultaneous growth of insurance companies' assets, paid-up share capital, and insurance reserves, their rates do not correspond to each other. The elasticity of growth of the paid-up authorized capital and insurance reserves is much lower than the elasticity of growth of insurance companies' assets, which can be caused by excessive multiplication of capital. Cyclical fluctuations of the capital/assets and reserves/assets ratios are in antiphase, and the lengths of their periods of fluctuation differ significantly from each other.
Such dynamics also determine market instability through the violation of insurance companies' stability. There is also a certain dissonance in the patterns of dynamics of gross/net insurance premiums and gross/net insurance payments. This dissonance is due to the fact that the cyclical nature of these indicators differs significantly, and the magnitude of fluctuations in net premiums and payments is much larger than that of fluctuations in gross premiums and payments. Thus, the overall result of this stage of the study is as follows: there is rapid development and concentration of the insurance market in Ukraine, accompanied by instability of the market as a whole, of insurance companies, and of their financial results. However, the main result of this stage of the study was that all indicators of Ukraine's insurance market have the same type of dynamics. At the same time, the duration of the periods within which the same dependence is repeated differs significantly for each indicator. The shortest period of fluctuations (4.66 quarters) was found for the number of registered insurers, the number of life insurance companies and net insurance payments; the longest (14 quarters) − for the assets of insurance companies and net insurance premiums. Given the obtained results, it could be assumed that the onset of crises on Ukraine's insurance market is a consequence of the coincidence of cyclical components in the dynamics of different indicators according to their contingency patterns. However, numerical methods failed to identify a correspondence between cyclical fluctuations in the dynamics and the onset of crisis periods on the Ukrainian insurance market. B. Mandelbrot's statement "…all periodicities are 'artifacts', not a characteristic of the process, but rather an aggregate result that depends on the process itself, the length of the sample and the economist's judgments" is extremely appropriate here (Mandelbrot, 1977).
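Cycle and fractal lengths of the kind discussed above can be read off the sample autocorrelation function. A minimal sketch on a synthetic quarterly series with a 12-quarter cycle (the series and the function name are illustrative, not the paper's data):

```python
import numpy as np

def acf(series, max_lag):
    """Sample autocorrelation function for lags 1..max_lag."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)])

# A series with a 12-point cycle shows an autocorrelation peak at lag 12,
# mirroring the 12-quarter first-order fractals reported for several indicators.
t = np.arange(96)
x = np.sin(2 * np.pi * t / 12) + 0.1 * np.random.default_rng(1).normal(size=96)
r = acf(x, 24)
print(int(np.argmax(r[6:]) + 7))  # lag of the strongest peak beyond lag 6
```

The lag at which the autocorrelation peaks is then taken as the cycle length (or the first-order fractal length for the respective indicator).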
Accordingly, it was suggested that the emergence of crises on the Ukrainian insurance market is associated primarily with the self-similarity of the dynamics and the coincidence of the moments of bifurcation of certain indicators in its development. The dynamics of the paid-up authorized capital of insurance companies (H = 0.4999) is defined as anti-persistent, while the memory of this series is rather short (4-5 quarters). For the other indicators, the dynamics is determined to be persistent, with H close to 0.7. It should be noted that such values are inherent in natural processes within complex systems (Ehanova & Kallys, 2017, p. 21), and at such values of the Hurst exponent the dynamics of the analyzed indicators should be fractal-like. Based on this result, and given the results of calculating autocorrelation in the time series, the duration of first-order fractals was determined: 12 quarters for the number of registered insurers, the number of life insurance companies, gross insurance payments, net insurance premiums and net insurance payments, and 7 quarters for the number of insurance contracts. For the indicators of insurance companies' assets, formed insurance reserves and gross insurance premiums, the length of the series did not allow determining the duration of the first-order fractal. For these same indicators there is a "long memory" lasting at least 14 quarters. Other works also note the influence of time on the results of determining quantitative patterns in the dynamics of the insurance market. For example, Kozmenko et al. (2009, p.
53) note the existence of annual lags in the dynamics of indicators in the development of the insurance markets of Germany and Ukraine: 5 years − for gross insurance premiums, the volume of per capita insurance premiums, and the volume of gross insurance premiums for non-life insurance; 4 years − for the number of insurance companies; 3 years − for the ratio of gross insurance premiums to GDP; and 2 years − for the amount of per capita insurance premiums. Comparing the dynamics of empirical data shows their high similarity within first-order fractals for almost all indicators of the insurance market. For example, Figure 2 shows the dynamics of empirical values for gross insurance payments in first-order fractals. Theoretical approximations of the dynamics, determined from empirical data for fractals of the first order, have a strictly linear shape and, at the moment of transition from one fractal to another, change their angle of inclination to the x-axis (Table 2). If we assume that market crises are realized at the moment of transition from one fractal to another and are characterized by significant transformations of the dynamics, then significant changes in trends occur only for the number of insurance contracts, in the transition from fractal 3 to fractal 4. Since the duration of the fractal for this indicator is 7 quarters, the bifurcation period corresponds to the first quarter of 2019, which was accompanied by crises in the Ukrainian insurance market.

Table 2. Theoretical approximation of lines of the insurance market dynamics by first-order fractals (Source: Author's calculations).

Therefore, potential crisis moments in the development of the market can be considered the periods when one or more of its characteristics approach the moment of change from one fractal to another. Obviously, the convergence of potential bifurcation moments for a large number of indicators is of great importance.
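The per-fractal linear approximation and the detection of a trend break at a fractal boundary can be sketched as follows; the synthetic series, the break rule and its threshold are illustrative assumptions, not the paper's data or criterion:

```python
import numpy as np

def fractal_slopes(series, fractal_len):
    """Fit a linear trend inside each consecutive fractal-length segment
    and return the slope of each segment."""
    x = np.asarray(series, dtype=float)
    slopes = []
    for start in range(0, len(x) - fractal_len + 1, fractal_len):
        seg = x[start:start + fractal_len]
        t = np.arange(fractal_len)
        slope, _ = np.polyfit(t, seg, 1)
        slopes.append(slope)
    return slopes

def bifurcations(slopes, threshold=2.0):
    """Flag fractal transitions where the trend changes sign or its
    magnitude jumps by more than `threshold` times the previous slope."""
    flags = []
    for i in range(1, len(slopes)):
        prev, cur = slopes[i - 1], slopes[i]
        if prev == 0 or np.sign(prev) != np.sign(cur) or abs(cur) > threshold * abs(prev):
            flags.append(i)
    return flags

# Example: steady growth over three 7-quarter fractals, then a break in the
# fourth, loosely mimicking the reported break in contract numbers in Q1 2019.
series = np.concatenate([np.arange(7), 7 + np.arange(7), 14 + np.arange(7), 21 - 3 * np.arange(7)])
print(bifurcations(fractal_slopes(series, 7)))  # prints [3]: the sign flip at the fractal-3 → fractal-4 boundary
```

A potential crisis moment would then be the calendar quarter at which a flagged fractal boundary falls, especially when several indicators flag the same period.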
For example, on Ukraine's insurance market such convergence occurred in the following periods: Q1-2 2017 (five indicators) and Q1 2020 (four indicators). Potential bifurcation moments will be Q1 2023 (four indicators) and Q1 2026 (five indicators). At the same time, it should be noted that crisis situations are not always realized in periods of transition from one fractal to another. The "trigger" for the emergence of real crises may be a certain "black swan" event and/or publicity. In particular, Melnyk et al. (2021, p. 26) note that in late 2022 − early 2023 a reformatting of the insurance market of Ukraine is highly probable due to changes in financial standards and a transformation similar to that of the banking sector in 2014-2015. It is also noted that during this period there will be a significant reduction in the number of insurance companies. This is confirmed by the calculations. However, in addition to the number of insurance companies, bifurcations are expected in the number of life insurance companies, gross insurance payments and net insurance premiums. These processes will be preceded by a sharp change in the number of concluded insurance contracts (Q3-4 2022).

DISCUSSION

Globalization and the development of global economic relations have determined the close cooperation of both commodity and financial markets. Accordingly, there is coordination in the dynamics of the financial markets of different countries and mutual subordination of their development. The national markets of different countries are affected by the same (or similar) factors. Crises that occur in some markets are often induced in others. All this applies to the insurance market. The development of national insurance markets has common patterns defined by the specifics of insurance services as the main product and the peculiarities of the modern economic environment.
Features of the realization of insurance services determine the relatively higher risks of insurance companies and the need for more detailed justification of the cost of goods, the structure and volume of insurance portfolios, and the methods and extent of interaction between insurance companies. The current economic environment is quite unstable, and economic agents are exposed to increasing risks, which leads to an increase in demand for insurance services. For Ukraine, these patterns of insurance market development are exacerbated by the fact that its national market has not passed the stage of formation and is characterized by intensive processes of concentration and structuring. Therefore, the general patterns of both the global insurance market and the national insurance market in Ukraine determine its upward dynamics with significant instability. Currently, the task of obtaining forecasts of potential crisis periods in insurance market development is extremely important. There are many ways to obtain forecasts about the direction (future state) of individual financial processes (financial phenomena). To make these predictions, different methods are selected in accordance with the tasks facing researchers. When forecasting the course of processes in insurance, problems arise that are associated with determining the status of individual insurance companies (groups of companies). Forecasting results relate to one or two aspects of the functioning of companies. That is why a wide range of models is used, which allow obtaining high-quality results, including forecasting the possibility of crises. However, even with such limited output data and forecasting results, it is often difficult to build models for long-term horizons, as insurance companies' incomes are usually deterministic while costs are stochastic. It is even more difficult to determine the quantitative patterns of the development of the insurance market as a whole.
In particular, fortuity has a much greater impact on market dynamics as a whole than on the state of individual insurance companies. At the same time, the processes in the insurance market remain partially determined. The insurance market is constantly in a state of transformation, subject to significant influences from other segments of the financial market. Forecasts of the future state of the insurance market are made for short time horizons and often contain significant errors. A good example is the trend models presented in this study, which are built on the basis of linear and cyclic dependences. Despite their high level of reliability, these models did not allow determining the potential moments of crisis on the Ukrainian insurance market. At the same time, the use of fractal analysis, and R/S analysis as part of it, gave more fruitful results. The choice of fractal analysis to forecast potential crisis moments in the development of the insurance market was due to the quality of other financial forecasts obtained with its help. However, it was taken into account that the existing applications of fractal analysis to financial forecasts are limited and relate primarily to price dynamics on securities markets. Therefore, it was important to check the adequacy of forecasting crises in Ukraine's insurance market using fractal analysis. Given the complex nature of processes on the insurance market, there is an urgent need for further research in the field of forecasting potential crisis moments by means of fractal analysis. In particular, new results can be obtained for fractals of different orders on different indicators according to the fractal dimension. It is also advisable to identify the causes of incomplete fractal dimensions in the dynamics of the insurance market.
To obtain forecasts of market development without identifying the moments of potential crises, it makes sense to check the existence of parametric relationships between indicators of insurance market development in fractals of different dimensions. Therefore, to obtain more complete results on forecasting potential crisis moments in the insurance market (and possibly in other financial markets), it makes sense to expand the use of fractal analysis.

CONCLUSION

The insurance market accumulates the most effective protection tools against the growing number of dangers and threats. However, the current development trends of Ukraine's insurance market show its instability and dependence on destructive external influences. In this regard, the aim of the study was to identify trends in the development of the Ukrainian insurance market and the moments of its potential crises. This study shows that the general patterns of dynamics of Ukraine's insurance market have shown its growth and concentration even in crisis conditions. At the same time, the insurance market's development is accompanied by instability and the emergence of systemic preconditions for the formation of crises. It was also noted that the dynamics of the main indicators of Ukraine's insurance market are cyclical, with cycle lengths from 4.66 to 14 quarters. However, the analysis of cyclic fluctuations of the dynamics lines did not reveal a coincidence between crisis periods in the development of the insurance market and the periodicity of the theoretical lines of dynamics. It was concluded that the formalization of parametric dependences and the construction of simple or complex trends will allow forecasting the state of Ukraine's insurance market only for short horizons.
According to the results of a randomized R/S analysis, the dynamics of all indicators of Ukraine's insurance market (except for the indicator of paid-up authorized capital) are persistent and fractal-like, and linear within each fractal of the first order. Therefore, the moments of transition from one fractal to another (within one order) are potential moments of bifurcation. For the entire group of analyzed indicators of insurance market development, such a potential moment of bifurcation was realized only once − in the first quarter of 2019, when there was a break in the dynamics of the number of concluded insurance contracts. During the same period, the crisis phenomena in the Ukrainian insurance market were most pronounced. In total, there were three potential moments of bifurcation during the analyzed period: Q1-2 2017, Q1 2019, and Q1 2020. The calculations made it possible to conclude that the potential moments of bifurcation of Ukraine's insurance market will be the first quarter of 2023 and the first quarter of 2026.
2021-10-19T16:03:52.828Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "0ac012af8b4d5ab7a741482126866ff5f10aa479", "oa_license": "CCBY", "oa_url": "https://www.businessperspectives.org/images/pdf/applications/publishing/templates/article/assets/15588/IMFI_2021_03_Babenko-Levada.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "33e7750bbe600071aaa15e6e689c3c10d4288e11", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [ "Business" ] }
231837766
pes2o/s2orc
v3-fos-license
Gut Microbiome of Two Different Honeybee Workers Subspecies in Saudi Arabia

1 Department of Biological Sciences, Faculty of Science, King Abdulaziz University, Jeddah, Saudi Arabia, 71491. 2 Department of Biology, College of Science, Tabuk University, Tabuk, Saudi Arabia, 74191. 3 Department of Genetics, Faculty of Agriculture, Ain Shams University, Cairo, Egypt, 11241. 4 Princess Al Jawhara Albrahim Centre of Excellence in Research of Hereditary Disorders (PACER-HD), King Abdulaziz University, Jeddah, Saudi Arabia.

Honeybees belong to the genus Apis, which is known for its tremendous role in pollination. Unfortunately, the honeybee population has recently been declining, with a potential risk to agricultural services and subsequently the food supply, not only locally in Saudi Arabia but also globally 1 . There is a known mutually beneficial relationship between the honeybee gut microbiome and its host. The host provides the optimum environment for bacterial growth, while the bacterial community in honeybee guts aids in the efficacy of nutrient absorption, the optimum growth and development of the host, its ability to defend against pathogens, and its adaptation to the surrounding environment 2 . The honeybee gut represents a simple model system to study the relationship between the gut microbiome and its honeybee host 3,4 . The bacterial community in adult honeybee workers is diverse and estimated to reach one billion bacterial cells in each worker's gut 5,6 . Such diversity in the bacterial community depends on the type of flower that hosts the insect, as well as many other environmental factors 7 . The gut microbiome of honeybee (Apis mellifera) workers is composed of eight to nine core species 8,9 , e.g., Bartonella apis 10 , Acetobacteraceae 11 , Parasaccharibacter 11 , Snodgrassella alvi 12 , Bifidobacterium asteroides 13 , Lactobacillus sp. 14 , Frischella perrara 15 and Gilliamella apicola 12 .
The two most common bee species that are widely distributed throughout the Kingdom of Saudi Arabia are the indigenous Apis mellifera jemenitica, which is a native subspecies, and Apis mellifera carnica, which is imported from Egypt 16 , as the honey production of domestic bees does not meet the growing demand in Saudi Arabia. Moreover, the production cost is relatively high. Exotic bee colonies have been imported over time, reaching 200,000 bee packages annually 16 . It is well known among local beekeepers that the indigenous bees A.m. jemenitica tolerate local stressful conditions much better than the exogenous race A.m. carnica, particularly during summer when the air temperature becomes extremely high. It has also been noticed that at high temperatures, indigenous bees continue to forage for pollen and collect nectar, whereas imported bees stop foraging 16 . Initial reports revealed that the subspecies of exotic honeybees have lower heat tolerance, shorter foraging durations and are more susceptible to Varroa mites when compared with indigenous bees 16 . In the present study, we compared the gut microbiome composition and diversity of adult honeybees of Apis mellifera jemenitica and Apis mellifera carnica in Saudi Arabia using high-throughput 16S rRNA gene sequencing technology.

Material and Methods

Sample collection, isolation of gut microbiota and DNA extraction

Five samples each from honeybee workers of A.m. jemenitica and A.m. carnica were collected in November 2019 from a single hive of the Beekeeper Cooperative Association at Al Baha, Saudi Arabia. The collected samples were immediately stored at −80°C. For whole-gut dissection of honeybee workers, surface disinfection was done using 1 ml aqueous ethanol (70%, v/v) for 45 sec. Dissected guts were then placed in a pre-frozen mortar, 700 µl of S1 lysis buffer (Invitrogen, Thermo Fisher Scientific, USA) was added, and the guts were transferred to a bead tube for the extraction process.
DNA of the gut samples was extracted using a genomic DNA extraction kit (Invitrogen, Thermo Fisher Scientific, USA) and stored at −20°C for further molecular analysis.

PCR amplification

PCR was run to amplify the variable regions V3-V4 of the bacterial 16S rRNA gene. The two universal primers used for PCR were 341F 5′-ACTCCTACGGGAGGCAGCAG-3′ (forward primer) and 806R 5′-GGACTACHVGGGTWTCTAAT-3′ (reverse primer). The PCR conditions were set as follows: one cycle of initial denaturation at 95°C for 5 min; 25 cycles of denaturation at 95°C for 30 sec, annealing at 56°C for 30 sec and primer extension at 72°C for 40 sec; and one cycle of final extension at 72°C for 10 min. The generated PCR products were checked for quality, and selected products were utilized in preparing Illumina DNA libraries. DNA sequencing was run on the Illumina MiSeq platform (Illumina, San Diego, CA) at the Beijing Genome Institute (BGI), China, to generate high-quality paired ends of ~300 bp.

Statistical analysis

The high-quality paired reads produced in fasta files as raw data were de-multiplexed, quality-filtered and trimmed with the Trimmomatic package (version 0.33) through the Quantitative Insights Into Microbial Ecology pipeline (QIIME2, v1.80). The obtained reads were merged into single sequence files with Fast Length Adjustment of SHort reads (FLASH, version 1.2.11). In order to assign the generated unique sequences to operational taxonomic units (OTUs), reads were tagged and clustered into OTUs at a similarity cutoff of 97% using the de novo OTU picking procedure. Usearch (version 7.0.1090) 19 was then used to remove chimeric sequences. Taxonomies were assigned against the gut Microbiome Database (HOMD RefSeq, version 13.2) through the RDP classifier (version 2.2) 17 and the Greengenes database (version 201305 18 , 16S rDNA database, http://qiime.org/home_static/dataFiles.html) with a cutoff of 70%.
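The de novo OTU picking step can be illustrated with a toy greedy clustering at the 97% similarity cutoff. Real pipelines compute alignment-based identity (e.g., via Usearch); the position-wise identity function and the short reads below are deliberate simplifications for illustration:

```python
def identity(a, b):
    """Fraction of matching positions (simplified; real pipelines use alignments)."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def denovo_otus(seqs, cutoff=0.97):
    """Greedy de novo OTU picking: each sequence joins the first OTU whose
    seed it matches at >= cutoff identity, otherwise it seeds a new OTU."""
    seeds, otus = [], []
    for s in seqs:
        for i, seed in enumerate(seeds):
            if identity(s, seed) >= cutoff:
                otus[i].append(s)
                break
        else:
            seeds.append(s)
            otus.append([s])
    return otus

reads = [
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGT",   # seed of OTU 1
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGA",   # 1 mismatch in 36 (~97.2%), joins OTU 1
    "TTTTACGTACGTACGTACGTACGTACGTACGTACGT",   # 3 mismatches (~91.7%), seeds a new OTU
]
print(len(denovo_otus(reads)))  # 2 OTUs
```

Each resulting cluster then becomes one OTU whose seed (or consensus) is classified taxonomically, as done here with the RDP classifier and the Greengenes database.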
Alpha diversity indices were measured in order to assess the intra-species variation within a given sample using Mothur (v1.31.2). Alpha diversity and rarefaction curve boxplots were constructed using the software R (v3.1.1). To investigate the inter-species variation within samples, beta diversity matrices were computed and visualized using principal coordinate analysis (PCoA) with the package 'ade4' of the software R (v3.1.1). Heat maps were generated using the package 'gplots' of the software R (v3.1.1), and sequence alignments were then searched against the Silva core set (Silva_108_core_aligned_seqs) using PyNAST 'align_seqs.py'. The obtained OTU phylogenetic tree was then plotted with the software R (v3.1.1) and visualized through QIIME2 (v1.80). Annotation of the generated OTUs was done in order to detect the relative abundance at different taxonomic levels (phylum, genus and species). Finally, Metastats, PERMANOVA and the Benjamini-Hochberg false discovery rate (FDR) correction were used to correct for multiple hypothesis testing. Linear Discriminant Analysis (LDA) Effect Size (LEfSe) was applied using the software LEfSe with the online interface Galaxy (version 1.0.0; http://huttenhower.sph.harvard.edu/galaxy/root) to discriminate the two taxonomic races, determining the most highly represented bacterial taxa within each race based on statistical significance.

Statistics of 16S rRNA sequence data

The five gut microbiome samples of A.m. carnica were identified as C1 to C5, while the five gut microbiome samples of A.m. jemenitica were identified as J1 to J5. Illumina MiSeq was used to sequence the partial 16S rRNA gene (Figure S1). The tagged sequences were assigned to a total of 171 OTUs across samples, ranging from 45 (J3) to 154 (C5) OTUs (Table S1, Table 2).
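Alpha diversity indices of the kind reported by Mothur can be sketched for a single sample; the OTU count vector below is hypothetical, not one of the C1-C5 or J1-J5 samples:

```python
import math

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over observed OTUs."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def observed_otus(counts):
    """Number of OTUs with at least one read."""
    return sum(1 for c in counts if c > 0)

# Hypothetical OTU count vector for one worker-gut sample
sample = [500, 300, 150, 50, 0, 0]
print(observed_otus(sample))          # 4
print(round(shannon(sample), 3))      # ≈ 1.142
```

Higher H′ indicates a sample whose reads are spread more evenly across more OTUs; rarefaction curves plot `observed_otus` against subsampled read depth.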
Principal coordinate analysis (PCoA) was used to display the diversity as well as the differences in OTU composition. The diversity of A.M.C subjects was higher towards the positive and negative PC1 directions, whereas that of A.M.J subjects was higher towards the positive and negative PC2 directions. Overall, the diagram shows that the mean value of the A.M.C group was localized in the positive portion of PC1 and the negative portion of PC2, whereas the A.M.J group was mainly localized in the positive portion of PC2 (Figure 2). The principal coordinate analysis (PCoA) plots were created using a Bray-Curtis distance matrix, and the samples were plotted to represent the microbial community compositional differences between samples. The plots are dimensionally scattered according to their gut microbiome compositional relationships. The results of the present study indicate that the differences in gut microbiomes between these two groups are possibly due to the different origins of the worker honeybees of the two subspecies. The stacked number of OTUs and the number of observed species for different samples as rarefaction measures are shown in Figure S2. When the rarefaction curve flattens (Figure S2a) or stops climbing (Figure S2b), the produced data are sufficient for further analysis. However, as long as the curve is still climbing, the complexity of the data in the samples is higher, since more species are being detected throughout the sequencing analysis. The two rarefaction curve measures refer to the maximum number of sequences attained for all samples, which allows the taxonomic relative abundance to be studied and the eligibility of the data to represent all species of a microbial community to be assessed. The findings from both rarefaction measures show that 54,000 is the maximum number of sequence reads that can be used further in studying taxonomic abundance (Figure S2).
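The Bray-Curtis dissimilarity underlying the PCoA can be sketched directly from its definition (toy OTU count vectors; a minimal illustration, not the 'ade4' implementation):

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two OTU count vectors:
    1 - 2*C_ij / (S_i + S_j), where C_ij sums the shared minima and
    S_i, S_j are the total counts of each sample."""
    shared = sum(min(a, b) for a, b in zip(u, v))
    total = sum(u) + sum(v)
    return 1.0 - 2.0 * shared / total

# identical communities -> 0; fully disjoint communities -> 1
d_same = bray_curtis([10, 5, 0], [10, 5, 0])
d_disjoint = bray_curtis([10, 0, 0], [0, 0, 7])
```

A full analysis would assemble these pairwise values into a distance matrix and ordinate it (e.g. by classical multidimensional scaling) to obtain the PC1/PC2 axes described above.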
structure of gut microbiomes across the two honeybee workers

Two taxonomic ranks (phylum and species) were used in the comparison of gut microbiomes between adult honeybee workers of A.M.C and A.M.J at the phylogenetic level (Figure 3). The results indicate that the phylum Firmicutes harbours 24 genera, while Proteobacteria, Actinobacteria, Bacteroidetes and Thermi harbour 23, 8, 6 and 2 genera, respectively.

differential abundance of microbes due to different origin of worker

The observed microbial taxa, along with their redundancies across different samples identified after OTU annotation, are described in Table S2. The taxa refer to phylum, class, order, family, genus and species. Eight phyla of gut bacteria were identified according to relative abundance: Actinobacteria, Bacteroidetes, Cyanobacteria, Firmicutes, Proteobacteria, TM7, Tenericutes and Thermi (Figure 4). In line with the number of genera of each phylum shown in Figure 3, the most abundant phyla were Firmicutes (57%), Proteobacteria (31%) and Actinobacteria (10%) in the A.M.C group (Figure 4). Meanwhile, Firmicutes (48%), Proteobacteria (44%) and Actinobacteria (6%) were the most abundant in the A.M.J group (Figure 4). The comparison at phylum level revealed a significant increase in Cyanobacteria in the A.M.C group (P-value = 0.031746), and a significant increase in Proteobacteria in the A.M.J group (P-value = 0.037724) (Table S3). Interestingly, Table S3 also indicates the existence of the three phyla TM7, Tenericutes and Thermi only in the A.M.C group. These results align with those of the heat map at phylum level, as Firmicutes, Proteobacteria and Actinobacteria were shown to be the most abundant phyla across samples and groups (Figure S3). In terms of species relative abundance in the gut microbiomes of the two groups A.M.C and A.M.
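Converting raw OTU counts into the percent relative abundances reported above amounts to a simple normalization (toy counts loosely echoing the A.M.C phylum percentages; illustrative only):

```python
def relative_abundance(counts_by_taxon):
    """Convert raw counts per taxon into percent relative abundance."""
    total = sum(counts_by_taxon.values())
    return {t: 100.0 * c / total for t, c in counts_by_taxon.items()}

# hypothetical counts chosen to reproduce the A.M.C phylum pattern in the text
amc = relative_abundance({"Firmicutes": 570, "Proteobacteria": 310,
                          "Actinobacteria": 100, "Other": 20})
```

The normalized values always sum to 100%, so comparisons between groups (as in Figure 4) are unaffected by differing sequencing depths.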
J, Bacteroides_fragilis, Bacteroides_ovatus, Commensalibacter_intestini, Blautia_producta, Melissococcus_plutonius, Ruminococcus_gnavus, Saccharibacter_floricola and Snodgrassella_alvi were shown to be the most abundant (Figure 5). The figure also indicates that a large proportion of the OTUs were not assigned to a particular species (93.80% for A.M.C and 86.20% for A.M.J). We have no explanation for these results except that a large number of species in honeybee workers had not been identified or classified before. The results in Table S4 indicate a significant increase of Melissococcus_plutonius in the gut microbiome of A.M.C (P-value = 0.034454), and of Snodgrassella_alvi in the A.M.J group (P-value = 0.008948). The results for the latter species, Snodgrassella_alvi, align with those presented in Figure 5c. Ruminococcus_gnavus and Saccharibacter_floricola were not detected in the A.M.J group. The heat map at species level indicates that Snodgrassella_alvi harbours the highest relative abundance across all samples (Figure S4). Linear discriminant analysis effect size (LEfSe) and its LDA scores (≥ 3) were used to identify possible biomarkers in the gut microbiota that reflect the origin of the host (Figure 6), with Betaproteobacteria (Neisseriales, Neisseriaceae, Snodgrassella sp. and Snodgrassella_alvi) enriched in A.M.J (Figure 6b).

discussion

The gut microbiome structure of honeybee workers is dependent upon the monophyletic origin of the host. This genus (Lactobacillus) produces several compounds in the honeybee gut with known antimicrobial activities, such as organic acids, hydrogen peroxide, bacteriocin, reutericyclin and reuterin, which mostly inhibit decay and protect against pathogenic bacteria as well as some fungi 25,26. Therefore, honeybees likely use lactobacilli as probiotics 27. In the present study, the dominance of lactobacilli in both A.m. carnica and A.m. jemenitica adult workers is supported by the presence of the low pH (3.9) of honey and nectar 28.
This is concluded because of the ability of lactobacilli to ferment sugar in the gut of honeybee workers and, hence, to generate an acidic environment 29, which inhibits the growth of many other bacteria. A low abundance of Lactobacillaceae was reported to be associated with the presence of pathogenic bacteria 30. The genus Bifidobacterium, gram-positive bacteria belonging to the Actinobacteria phylum, was also identified in the gut of both A.m. carnica and A.m. jemenitica adult workers. Again, it is dominant in the rectum and is a core gut bacterium of honeybee workers. Bifidobacterium strains carry large surface proteins, which have a role in adhesion or degradation of plant materials 7,31,32. Additionally, Bifidobacterium carries gene clusters that are responsible for the production and utilization of trehalose, which is a disaccharide molecule used by insects as an energy reservoir, in comparison to glycogen, which is the energy storage form in mammals 33. The family Neisseriaceae and its descendent Snodgrassella_alvi (S. alvi), gram-negative bacteria belonging to the Betaproteobacteria phylum, significantly increased in A.m. jemenitica. These bacteria participate in the oxidation of carbohydrates. However, the pathway for the uptake and glycolytic breakdown of carbohydrates does not exist in S. alvi; thus, this bacterium is located consistently within the periphery of the insect's gut lumen. This area has high oxygen concentrations, and this environment is preferable for S. alvi due to its dependence on aerobic respiration 34,35. Insects depend on the aerobic oxidation of carboxylates rather than breaking down carbohydrates, resulting in various products such as citrate, malate, acetate and lactic acid that serve as energy sources 12,27. The steady co-existence of S. alvi with other fermentative bacterial taxa in the same gastrointestinal environment can result from utilizing separate sets of resources, leading to metabolic variations and suggesting a syntrophic interaction. For example, S.
alvi can utilize some of the substrates, such as lactic acid, acetate and formate, which are produced from carbohydrate fermentation 36,37. Furthermore, S. alvi and G. apicola 38 are enriched with genes encoding biofilm formation. The two species inhabit the host's ileum, indicating that the biofilm can provide a protective layer against pathogens. The bacteria of the family Acetobacteraceae and its descendent genus Commensalibacter (also referred to as Alpha 2.1), gram-negative bacteria belonging to the phylum Proteobacteria, were identified as a core member of the gut microbiota in honeybees and bumble bees 9,31. It was observed mainly in the midgut and hindgut of honeybee workers. In our study, Commensalibacter is present in both A.m. carnica and A.m. jemenitica. However, Saccharibacter floricola (Alpha 2.2) is present only in A.m. carnica. Furthermore, Saccharibacter floricola has been isolated from pollen, suggesting that this phylotype is associated with flowers 39. The role of these phylotypes (Alpha 2.1 and Alpha 2.2) is associated with their ability to adapt to fast-growing metabolic processes, via two distinctive mechanisms. Alpha 2.1 bacteria harvest energy through a wide range of substrates linked and utilized through a flexible oxidative and biosynthetic metabolism, whereas Alpha 2.2 bacteria, which lack alternative oxidative pathways, determine metabolic processes through oxidative fermentation after harvesting glucose for rapid energy 40. The bacteria of the family Enterococcaceae and its descendent species Melissococcus plutonius, gram-positive bacteria of the phylum Firmicutes, are present in low abundance (3%) in the gut microbiome of A.m. carnica honeybee workers. This was also noted in previous reports 41. M. plutonius is known to cause European foulbrood (EFB) in the early stage of honeybee larvae, with assistance from secondary invaders (Enterococcus faecalis, Paenibacillus alvei and Bacillus pumilus). M.
plutonius was shown to have 30 different sequence types clustered under three clonal complexes (CC3, CC12 and CC13) 42,44, where CC13 is the least virulent complex 43,45. Honeybee workers transmit M. plutonius between colonies via robbing and drifting 46,47. Erban et al. 45 compared control samples distant from the EFB zone with samples from the EFB zone without clinical symptoms, and with bees from colonies in the EFB zone with clinical symptoms. The study identified a 100-fold higher prevalence of M. plutonius in colonies with EFB symptoms, while it was present in only 3 of 16 control colonies distant from the EFB zone. This suggests that M. plutonius has a lower abundance in healthy honeybee colonies, which is consistent with the results of the present study.

conclusion

The present findings indicate that the differences in the gut microbiome structures of honeybee workers of the two subspecies A.m. carnica and A.m. jemenitica are due to the varied monophyletic origins of the host. These findings support previous results suggesting that honeybee workers have a mutual co-evolving relationship with a specific group of bacteria. This group of bacteria co-exists and is maintained throughout the descending generations of the host. Inclusion of more subspecies inhabiting Saudi Arabia, along with the ones in this study, could further support our findings.

acknowledgements

This study was supported by Beekeeper
2021-02-07T03:22:07.832Z
2021-01-15T00:00:00.000
{ "year": 2021, "sha1": "8e274c5708d1f4dd01dd852513a235daa61531c8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.13005/bbra/2870", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8e274c5708d1f4dd01dd852513a235daa61531c8", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
119259763
pes2o/s2orc
v3-fos-license
A new line on the wide binary test of gravity

The relative velocity distribution of wide binary (WB) stars is sensitive to the law of gravity at the low accelerations typical of galactic outskirts. I consider the feasibility of this wide binary test using the 'line velocity' method. This involves considering only the velocity components along the direction within the sky plane orthogonal to the systemic proper motion of each WB. I apply this technique to the WB sample of Hernandez et al. (2018), carefully accounting for large-angle effects at one order beyond leading. Using Monte Carlo trials, the uncertainty in the one-dimensional velocity dispersion is $\approx 100$ m/s when using sky-projected relative velocities. Using line velocities reduces this to $\approx 30$ m/s because these are much less affected by distance uncertainties. My analysis does not support the Hernandez et al. (2018) claim of a clear departure from Newtonian dynamics beyond a radius of $\approx 10$ kAU, partly because I use $2\sigma$ outlier rejection to clean their sample first. Nonetheless, the uncertainties are small enough that existing WB data are nearly sufficient to distinguish Newtonian dynamics from Modified Newtonian Dynamics. I estimate that $\approx 1000$ WB systems will be required for this purpose if using only line velocities. In addition to a larger sample, it will also be important to control for systematics like undetected companions and moving groups. This could be done statistically. The contamination can be minimized by considering a narrow theoretically motivated range of parameters and focusing on how different theories predict different proportions of WBs in this region.

INTRODUCTION

The standard cosmological paradigm relies on the assumption that General Relativity applies accurately to all astronomical systems. On Solar System and galaxy scales, it can be well approximated by Newtonian gravity due to the non-relativistic speeds involved (Rowland 2015; de Almeida et al. 2016).
Newtonian gravity was originally designed to explain motions in the Solar System. However, it appears to break down when extrapolated to galaxies, where the gravitational field can often be estimated from rotation curves (e.g. Babcock 1939;Rubin & Ford 1970;Rogstad & Shostak 1972). These acceleration discrepancies are thought to be caused by halos of cold dark matter (CDM) surrounding each galaxy (Ostriker & Peebles 1973). Unfortunately, the nature of this CDM remains elusive. Gravitational microlensing experiments indicate that the Galactic CDM can't be made of compact objects like * Email: ibanik@astro.uni-bonn.de stellar remnants (Alcock et al. 2000;Tisserand et al. 2007). The microlensing timescale would become longer than the survey duration if the CDM was made of heavier objects like primordial black holes (e.g. Carr et al. 2016;Clesse & García-Bellido 2018). However, this idea runs into difficulties when confronted with data on gravitational lensing of quasars (Mediavilla et al. 2017) and supernovae (Zumalacárregui & Seljak 2018). Thus, CDM is generally considered to be an undiscovered weakly interacting particle beyond the well-tested standard model of particle physics (Peebles 2017, and references therein). Despite extensive searches for such particles, none have so far been detected, ruling out a large part of the available parameter space (Liu et al. 2017). Searches for the effects of dynamical friction on the extensive CDM halos have also turned up empty-handed (Angus et al. 2011;Kroupa 2015;Oehm et al. 2017;Oehm & Kroupa 2018). Major tensions between observations and theory also indicate that a revision of CDM-based models may be needed (Kroupa 2012), in particular due to the anisotropic distribution of Local Group satellite galaxies (Pawlowski 2018). Given these results so far, it is prudent to question the underlying assumption of Newtonian gravity (Zwicky 1937). 
Its successes on Solar System scales do not prove that the same law can be extrapolated to the much larger scale of galaxies. In fact, a failure of this extrapolation can naturally explain the remarkably tight correlation between the internal accelerations within galaxies and the predictions of Newtonian gravity applied to the distribution of their luminous matter (e.g. Famaey & McGaugh 2012, and references therein). This 'radial acceleration relation' (RAR) has recently been tightened significantly with near-infrared photometry from the Spitzer Space Telescope, considering only the most reliable rotation curves (see their section 3.2.2) and exploiting reduced variability in stellar mass-to-light ratios at near-infrared wavelengths (Bell & de Jong 2001; Norris et al. 2016). These improvements show that the RAR holds with very little scatter over ≈ 5 orders of magnitude in luminosity and a similar range in surface brightness. Fits to individual rotation curves show that the intrinsic scatter in the RAR must be < 13% and is consistent with 0 (Li et al. 2018). Although Rodrigues et al. (2018) claimed that some rotation curves do not satisfy the RAR, it was later shown that these cases mostly arise when the distance is particularly uncertain. For galaxies where this is known well, discrepancies with the RAR are rather mild and may be caused by small yet unmodelled effects like disk warping and gradients in the stellar mass-to-light ratio. These recent observations were predicted several decades earlier by the theory called Modified Newtonian Dynamics (MOND, Milgrom 1983). In MOND, the dynamical effects usually attributed to CDM are instead provided by an acceleration-dependent modification to gravity.
The gravitational field strength g at distance r from an isolated point mass M transitions from the Newtonian GM/r² law at short range to

g = √(GM a₀)/r at long range, r ≫ r_M ≡ √(GM/a₀). (1)

MOND (or Milgromian dynamics) introduces a₀ as a fundamental acceleration scale of nature below which the deviation from Newtonian dynamics becomes significant and the equations of motion become spacetime scale invariant (Milgrom 2009). Empirically, a₀ ≈ 1.2 × 10⁻¹⁰ m/s² to match galaxy rotation curves (McGaugh 2011). Remarkably, this is within an order of magnitude of the acceleration at which the classical energy density in a gravitational field (Peters 1981, equation 9) becomes comparable to the dark energy density u_Λ ≡ ρ_Λc² that conventionally explains the accelerating expansion of the Universe (Ostriker & Steinhardt 1995; Riess et al. 1998; Perlmutter et al. 1999). MOND could thus be a result of poorly understood quantum gravity effects (e.g. Milgrom 1999; Pazy 2013; Verlinde 2016; Smolin 2017). Regardless of its underlying microphysical explanation, it can accurately match the rotation curves of a wide variety of both spiral and elliptical galaxies across a vast range in mass, surface brightness and gas fraction (Lelli et al. 2017, and references therein). MOND does all this based solely on the distribution of luminous matter. Given that most of these rotation curves were obtained in the decades after the MOND field equation was first published (Bekenstein & Milgrom 1984), these achievements are successful a priori predictions. These predictions work due to regularities in rotation curves that are difficult to reconcile with collisionless halos of CDM whose nature is very different to baryons (Salucci & Turini 2017; Desmond 2017a,b). Because MOND is an acceleration-dependent theory, its effects could become apparent in a rather small system if this has a sufficiently low mass (Equation 1). In fact, the MOND radius r_M is only 7000 astronomical units (7 kAU) for a system with M = M_⊙.
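The scales quoted here follow directly from the constants (a sketch under standard values of G, the solar mass and a₀; the full MOND interpolating function is not reproduced):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
A0 = 1.2e-10       # Milgrom's acceleration constant, m s^-2
AU = 1.496e11      # astronomical unit, m

def mond_radius(M):
    """r_M = sqrt(GM/a0): radius where the Newtonian field falls to a0."""
    return math.sqrt(G * M / A0)

r_M_kAU = mond_radius(M_SUN) / AU / 1e3   # ~7 kAU for a solar-mass system
v_mond = (G * M_SUN * A0) ** 0.25         # deep-MOND velocity scale, ~360 m/s
```

The second quantity, (GM a₀)^(1/4), is the circular velocity at r = r_M and sets the characteristic velocity scale of the wide binary test discussed below.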
This suggests that the orbits of distant Solar System objects might be affected by MOND (Paučo & Klačka 2016), possibly accounting for certain correlations in their properties (Paučo & Klačka 2017). However, it is difficult to accurately constrain the dynamics of objects at such large distances. Such constraints could be obtained more easily around other stars if they have distant binary companions. As first suggested by Hernandez et al. (2012), the orbital motions of these wide binaries (WBs) should be faster in MOND than in Newtonian gravity. Moreover, it is likely that many such systems would form (Kouwenhoven et al. 2010; Tokovinin 2017), paving the way for the wide binary test (WBT) of gravity that I discuss in this contribution. WBs are likely to comprise at least a few percent of stellar systems given that the nearest star to the Sun is in a WB. Proxima Centauri orbits the close (18 AU) binary α Centauri A and B at a distance of 13 kAU (Kervella et al. 2017). The Proxima Centauri orbit would thus be significantly affected by MOND (Beech 2009, 2011). Given the billions of stars in our Galaxy, it almost certainly contains a vast number of systems well suited to the WBT. This is especially true given the high (74%) likelihood that our nearest WB was stable over the last 5 Gyr despite the effects of Galactic tides and stellar encounters (Feng & Jones 2018). This system was probably also stable in MOND (Banik & Zhao 2018, section 9). Proxima Centauri is far from the only WB within reach of existing observations. Data from the Gaia mission (Perryman et al. 2001) strongly suggest the presence of several thousand WBs within ≈ 150 pc (Andrews et al. 2017). The candidate systems they identified are mostly genuine, with a contamination rate of ≈ 6% (Andrews et al. 2018), estimated using the second data release of the Gaia mission (Gaia DR2, Gaia Collaboration 2018).
The WBT was considered in more detail by Pittordis & Sutherland (2018), who set up simulations of WBs in Newtonian gravity and several theories of modified gravity, including MOND. These calculations were revisited by Banik & Zhao (2018) using self-consistent MOND simulations that include the external field from the rest of the Galaxy (Section 4.1) and use an interpolating function consistent with the RAR. Their main result was that MOND enhances the orbital velocities of Solar neighbourhood WBs by ≈ 20% above Newtonian expectations, consistent with their analytic estimate (see their section 2.2). Using statistical methods they developed, they showed that ≈ 500 WB systems would be required to detect this effect if measurement errors are neglected but only sky-projected quantities are used, as these are expected to be more accurate. They also considered various systematic errors which could hamper the WBT, in particular the presence of a low-mass undetected companion to one of the stars in a WB (see their section 8.2). The WBT was first attempted by Hernandez et al. (2012) using the WB catalogue of Shaya & Olling (2011), who analyzed Hipparcos data with Bayesian methods to identify WBs within 100 pc (van Leeuwen 2007). Typical relative velocities v_rel between WB stars seemed to remain constant with increasing separation instead of following the expected Keplerian decline (Hernandez et al. 2012, figure 1). However, it was later shown that their typical velocity uncertainty of 800 m/s was too large to draw strong conclusions about the underlying law of gravity (Scarpa et al. 2017, section 1). This is because the typical velocity scale of the WBT is (GM a₀)^(1/4) = 360 m/s (Equation 1 at r = r_M). Recently, Hernandez et al.
(2018) revisited their earlier WB sample using Gaia DR2, focusing on only sky-projected relative velocities due to the time required to obtain follow-up spectroscopic redshift measurements and difficulties in correcting these for stellar convective blueshifts (Kervella et al. 2017, section 2.2). Unfortunately, the Hernandez et al. (2018) analysis suffers from a deficiency related to incorrect visualisation of how spherical co-ordinate systems work (El-Badry 2019). These perspective effects were discussed in more detail by Shaya & Olling (2011, section 3.2) and Pittordis & Sutherland (2018, section 2.4). Such effects can broadly be understood by considering a WB composed of stars A and B in sky directions n_A and n_B, respectively. If we are interested in their sky-projected relative velocity v_sky and define this as that part of v_rel within the plane orthogonal to n_A, then only the proper motion of star A is required. However, for star B, we also need to know its radial velocity because n_B ≠ n_A, causing n_B to partly lie within the plane orthogonal to n_A. In general, knowledge of both stars' radial velocities is required under other definitions of the sky plane such as Equation 4. However, the analysis of Hernandez et al. (2018) did not consider radial velocity information, implicitly assuming that n_B = n_A (El-Badry 2019). As well as correcting this deficiency, I consider how to reduce the uncertainty in v_rel. Part of this is due to uncertainty in the relative heliocentric distances to the stars in a WB, which can be difficult to constrain (Section 5). To quantify the effect this has, suppose that the typical heliocentric tangential velocity of the system is 30 km/s (Gaia Collaboration 2018). With a 1% distance uncertainty, even a perfectly measured proper motion implies a velocity uncertainty of ≈ 300 m/s. This is nearly the same as WB relative velocities of ∼ 300 m/s (Banik & Zhao 2018, figure 7).
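The ≈ 300 m/s figure is simple error propagation: with v_tan = μ d and a perfectly measured proper motion μ, the fractional distance error maps directly onto the tangential velocity (a minimal sketch; the function name is illustrative):

```python
def tangential_velocity_error(v_tan, frac_dist_err):
    """With a perfectly measured proper motion mu, v_tan = mu * d, so a
    fractional distance error translates directly into a velocity error."""
    return v_tan * frac_dist_err

# 30 km/s systemic tangential motion with a 1% distance uncertainty
sigma_v = tangential_velocity_error(30_000.0, 0.01)   # ~300 m/s
```

Since genuine WB relative velocities are themselves ∼ 300 m/s, this single systematic can swamp the signal along the proper motion direction, motivating the line velocity method.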
Thus, if v_rel is parallel to the WB systemic proper motion, even rather small distance uncertainties would make it very challenging to accurately infer v_rel. This is a serious limitation of the WBT because distances are expected to be less accurately known than proper motions (Section 5). For example, Gaia DR2 parallaxes have a zero-point offset which probably varies with magnitude and colour (Gaia Collaboration 2018; Riess et al. 2018). This might seriously complicate the WBT or weaken its statistical significance by restricting it to only those WBs that consist of similar stars. To a large extent, the distance issue can be avoided if v_rel is orthogonal to the WB systemic proper motion (Equation 9). I consider the feasibility of exploiting this using the 'line velocity method' for the WBT. The idea is to use only one component of v_rel, namely that within the sky plane and orthogonal to the systemic proper motion of the WB (Shaya & Olling 2011, section 3.2). Because the WBT is statistical in any case, line velocities can be used in much the same way as the two components of the sky-projected velocity. In the following, when discussing use of the sky-projected velocity, I mean both components thereof. After explaining the line velocity method more precisely in Section 2, I apply it to the Hernandez et al. (2018) dataset to confirm that it significantly reduces uncertainties compared to the use of sky velocities (Section 3). Even so, the use of essentially half as much data from each WB roughly doubles the number of systems required for the WBT if measurement uncertainties are neglected for both methods (Section 4). I discuss future prospects for the WBT in Section 5, where I explain why the line velocity method is likely to prove very fruitful in the long run. My conclusions are given in Section 6.
THE LINE VELOCITY METHOD

The basic idea behind the line velocity method is to focus on the directions rather than the magnitudes of the proper motion vectors of the stars in a WB. This is because converting a difference in proper motion directions into a relative velocity only requires the distance to the WB system as a whole. However, a difference in proper motion magnitudes can only be converted into a relative velocity if observers also know the relative distances to its stars. Because distances are likely to be less accurately known than proper motions (Section 5), the line velocity method uses only the most reliably known component of v_rel. In this section, I explain how to apply this method.

The sky-projected separation

The WBT is ideally performed using accurate 3-dimensional (3D) positions and velocities. Unfortunately, Gaia DR2 distance uncertainties are ≈ 80 kAU for a system 100 pc from the observatory (Banik & Zhao 2018, section 6.2). This is much larger than the 3−20 kAU range of separations recommended by that work for the WBT. Even if a slightly larger range is used, it is clear that observers do not reliably know the true 3D separation r_rel for the vast majority of WBs. An exception arises for very nearby systems like α Centauri (Kervella et al. 2016), though I expect there will be too few such systems to enable the WBT unless more distant systems are also considered. Fortunately, these systems can be utilised in a statistical sense if one uses their accurately known sky-projected separation r_sky (Pittordis & Sutherland 2018). Thus, the WBT in the short term will be based on r_sky and v_rel. It is possible that not all components of v_rel will be used, as they are not all equally well measured. This is the main issue I consider in this contribution. To understand how v_rel can be obtained from the observables of a WB, I define unit vectors n_1 and n_2 towards each of its stars.
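Building the unit vectors n_1 and n_2 from sky coordinates, and positions from them, can be sketched as follows (equatorial coordinates assumed; a toy illustration, not the paper's code):

```python
import math

def unit_vector(ra_deg, dec_deg):
    """Cartesian unit vector towards sky position (RA, Dec) in degrees."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return [math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec)]

def position(d, n_hat):
    """Position vector r = d * n_hat for a star at heliocentric distance d."""
    return [d * x for x in n_hat]

n1 = unit_vector(0.0, 0.0)     # towards (RA, Dec) = (0, 0)
r1 = position(20.0, n1)        # e.g. a star 20 pc away along that direction
```

Repeating this for the second star gives n_2 and r_2, from which the separation and relative velocity vectors defined next follow by subtraction.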
I use the convention that any vector v has length v ≡ |v|, such that the unit vector parallel to v is v̂ ≡ v/v. Given the observed heliocentric distance d_i of star i, its position is thus r_i = d_i n_i. The separation vector between the stars is r_rel ≡ r_2 − r_1 and their relative velocity is v_rel ≡ v_2 − v_1. The velocity v_i is found for each star individually from its radial velocity, proper motion, distance and sky position. To define r_sky, I need to estimate the line of sight n_sys towards the system as a whole. Given the small angular separation of the WB, I assume n_sys is directed towards the mid-point of its stars. Their sky-projected separation is thus

r_sky = r_rel − (r_rel · n_sys) n_sys. (5)

The relative velocity

I calculate the sky-projected relative velocity v_sky analogously to Equation 5. To apply the line velocity method, I need to estimate the systemic motion v_sys of the WB. Given that this is typically much larger than v_rel, I determine v_sys under the simplifying assumption of an equal-mass WB. The line velocity method involves finding the component of v_rel along the direction v̂_line, the line within the sky plane orthogonal to v_sys:

v̂_line ∝ n_sys × v_sys. (8)

It is also possible to think of v̂_line as the direction within the sky plane orthogonal to the systemic tangential velocity v_tan of the WB. Having determined v̂_line, it is simple to determine the relative velocity of the stars along this line. In the rest of this work, I use three different measures of relative velocity, corresponding to using 1, 2 or 3 of its components in the co-ordinate system defined by the orthogonal vectors v̂_line and n_sys. The simplest case is when using the full v_rel. The 2D case corresponds to using v_sky, while the 1D case involves v_line alone.

[Table 1 caption fragment: Each bin is ≈ 0.7 dex wide, similar to the bins they used. The numbers in the last column refer to how many systems remain in each bin after 2σ outlier rejection (Section 3.1).]

APPLICATION TO THE Hernandez et al. (2018) SAMPLE
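The projection and line-velocity definitions can be sketched in a few lines (a toy configuration with the line of sight along z and the systemic motion along x; illustrative only, not the paper's implementation):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def hat(a):
    n = dot(a, a) ** 0.5
    return [x / n for x in a]

def sky_project(u, n_sys):
    """Remove the line-of-sight component: u_sky = u - (u . n_sys) n_sys."""
    c = dot(u, n_sys)
    return [x - c * y for x, y in zip(u, n_sys)]

def line_velocity(v_rel, n_sys, v_sys):
    """Component of v_rel along the sky-plane direction orthogonal to the
    systemic velocity: the unit vector proportional to n_sys x v_sys."""
    return dot(v_rel, hat(cross(n_sys, v_sys)))

n_sys = [0.0, 0.0, 1.0]            # line of sight along z
v_sys = [30_000.0, 0.0, 0.0]       # systemic motion along x, m/s
v_rel = [100.0, 250.0, 400.0]      # toy relative velocity, m/s
v_sky = sky_project(v_rel, n_sys)                # -> [100.0, 250.0, 0.0]
v_line = line_velocity(v_rel, n_sys, v_sys)      # -> 250.0
```

In this geometry the 'line' direction is simply the y axis, so the 1D measure keeps only the component unaffected by errors along the systemic proper motion.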
[Footnote 1: This is based on using Gaia DR2 (Gaia Collaboration 2018) to update the astrometry of the carefully selected WB sample in Shaya & Olling (2011), who used data from the Hipparcos mission (Perryman et al. 1997).]

Quality cuts

As might be expected, not all systems in the Hernandez et al. (2018) catalogue are usable in my analysis. Its table 2 has a column labelled 'Exclusion Test' specifically for the purpose of flagging systems with a serious observational inconsistency or a large change in velocity between the Hipparcos and Gaia epochs, suggestive of another component in the system (Banik & Zhao 2018, section 8.2). In this work, I only consider systems where the 'Exclusion Test' column has a blank entry for both stars, indicating that Hernandez et al. (2018) found no good reason to reject the WB from their analysis. As explained in Section 1 and pointed out by El-Badry (2019), determining even just the sky-projected relative velocity requires radial velocity measurements. Thus, I reject systems where these are not available for either star. If it is known for one star, then I assume the same value and uncertainty for the other star and use the system in my analysis. This is because the difference in radial velocity is one component of v_rel and thus likely to be ≈ 300 m/s for a genuine WB (Banik & Zhao 2018, figure 7). The contribution of this to v_sky is smaller by a factor of the sky angle between the WB's components. Even for a WB with r_sky = 50 kAU and a downrange distance of just 20 pc, the angle it subtends on our sky is only 0.012 radians, implying an effect on v_sky of ≈ 5 m/s. Thus, the small sky angles involved mean that my results should not be much affected by ignoring the difference in radial velocities. Moreover, my main objective in this work is to quantify the typical uncertainty on v_sky and v_line (Section 3.2). The radial velocity should play only a small part in this, as long as that of the system is known reasonably well.
Upon examination of the systems which pass these selection criteria, it is evident that systems 822 and 823 in the Shaya & Olling (2011) catalogue share a star, making this a triple system.² Although this is a rather hierarchical system, I reject it from my analysis due to the additional complications which might nonetheless arise in a non-linear gravity theory (Section 4.1) and in three-body systems more generally.

¹ Machine-readable versions are available in Excel and text formats upon request to the author. Note that the n i are given in the International Celestial Reference System (Ma et al. 1998).

[Figure 1 caption fragment: ...(Table 1). As the WBs are assumed to have zero relative velocity, non-zero values arise entirely from measurement uncertainties. I also show results obtained using v sky (Equation 6), which yields a wider distribution (lower blue curve). In both cases, the observed values are shown using dashed vertical lines with the same colour (v sky yields the higher value). Notice how the observed rms v sky is consistent with zero relative velocity in all 22 systems. This is not true for v line.]

For my analysis, I bin the remaining 79 systems in r sky. The bins correspond as closely as possible to those used by Hernandez et al. (2018), who used bins of width 0.7 dex. The bins used in this work are listed in Table 1. I then apply my line velocity method to determine the root mean square (rms) v line of the systems in each bin. My results in Section 4 show that their line velocities should follow a roughly Gaussian distribution. Thus, I apply a basic outlier rejection system to remove WBs whose v line exceeds twice the rms value for the systems in its r sky bin. This reduces the estimated one-dimensional velocity dispersion σ 1D, so the process is continued iteratively until it converges and no more WBs are rejected. In this way, I am left with 65 systems for the rest of my analysis.
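The iterative 2σ outlier rejection described above can be sketched as follows. This is a minimal illustration, not the author's code; the rejection threshold is recomputed from the surviving systems until the kept set stabilises.

```python
import numpy as np

def clip_outliers(v_line, factor=2.0):
    """Iteratively reject systems whose |v_line| exceeds `factor` times
    the rms of the surviving systems, repeating until no further
    systems are rejected. Returns the kept values and the final rms."""
    v_line = np.asarray(v_line, dtype=float)
    keep = np.ones(v_line.size, dtype=bool)
    while True:
        rms = np.sqrt(np.mean(v_line[keep] ** 2))
        new_keep = keep & (np.abs(v_line) <= factor * rms)
        if np.array_equal(new_keep, keep):
            return v_line[keep], rms
        keep = new_keep
```

Because the rms can only shrink as outliers are removed, the loop is guaranteed to terminate.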
Measurement uncertainties

To better estimate the uncertainty on σ 1D, I conduct a control analysis in which I set v = v sys (Equation 7) for both stars in a WB. The idea is to determine the rms relative velocity in different r sky bins if no actual velocity dispersion exists. To account for uncertainties in distances, radial velocities and proper motions, I perform 10⁶ Monte Carlo (MC) trials where I vary these randomly according to their measurement errors, which I take to follow independent Gaussian distributions, and record the resulting rms v line.

[Figure caption fragment: ...(Table 1) and for the case where the full 3D relative velocity is considered (red points). Each probability distribution is summarised by its mode and 68.3% confidence interval (see text). For systems where only one star has a measured radial velocity, I assume the same value and uncertainty for the other star before averaging the resulting 3D velocities and assigning the mean to both stars.]

To speed up the computations, I make use of the fact that changes in the distance or proper motion have a linear effect on the velocity. I include the cross-term that arises because in general both the distance and proper motion differ from their observed values and these must be multiplied to obtain a velocity. Measurement uncertainties influence the stellar velocities and thus the systemic velocity (Equation 7), slightly affecting the direction of v̂ line (Equation 8). Because v rel ≪ v sys, I use a small angle approximation to estimate how much v̂ line should be rotated within the plane orthogonal to n sys, which I assume is unaffected by changes in the individual d i (Equation 4). I initially focus my analysis on bin 2 (Table 1), the most relevant for the WBT. The control distribution of the rms v line is shown in Figure 1 along with the rms v line of the original data. For comparison, I also show the corresponding quantities if v sky is used instead. In this case, the results are divided by √2 to allow a fair comparison.
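The Monte Carlo error propagation can be illustrated with a simplified one-star example. This is a sketch under my own assumptions (not the paper's pipeline): a tangential velocity v = 4.74 μ d km/s, with μ in arcsec/yr and d in pc, where both inputs receive independent Gaussian perturbations. The cross-term discussed in the text arises automatically because the perturbed distance and proper motion are multiplied in every trial.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_tangential_velocity(d, sigma_d, mu, sigma_mu, n_trials=100_000):
    """Monte Carlo distribution of a tangential velocity
    v = 4.74 * mu * d (km/s, for mu in arcsec/yr and d in pc),
    with independent Gaussian errors on distance and proper motion."""
    d_mc = rng.normal(d, sigma_d, n_trials)
    mu_mc = rng.normal(mu, sigma_mu, n_trials)
    return 4.74 * mu_mc * d_mc
```

For a star at 100 ± 1 pc with μ = 0.100 ± 0.001 arcsec/yr, the two error sources contribute about equally, giving a velocity uncertainty of roughly 0.67 km/s about a mean of 47.4 km/s.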
Figure 1 shows that the observed rms v sky can be adequately explained if v rel = 0 for the 22 WBs analysed therein. Consequently, the WBT is likely to prove very difficult using v sky. The prospects look much better if using v line because its observed rms value clearly requires σ 1D to have a non-zero latent value. To summarise probability distributions like those shown in Figure 1, I extract the most likely value and 68.3% confidence interval, equivalent to the ±1σ range of a Gaussian. The most likely value of any quantity x is simply the mode of its probability distribution P(x), normalised so that ∫ P(x) dx = 1. To get the confidence interval, I find the value α such that ∫ P(x) dx = 0.683 if the integral is taken over only that range of x for which P(x) > α. This range is easily determined for unimodal distributions of the sort which arise in this work. Once the appropriate value of α is found, the corresponding range of x defines the 68.3% confidence interval.

[Figure 2 caption fragment: ...(Table 1) and their uncertainties (solid circles and error bars). I also show the results of my control analyses where WBs have no relative velocity (solid squares with dashed error bars that are sometimes smaller than the marker). The rms velocity dispersion of the WBs in each bin (* markers) are shown for comparison − these do not require a Monte Carlo analysis. Results are shown using line velocities (black), sky-projected velocities (blue) and 3D velocities (red). Within each bin, the x co-ordinate is staggered by ±1/4 for clarity. The 3D results are illustrative only as some stars lack radial velocity data (Section 3.1).]

I use Figure 2 to show these summary statistics for my control analyses of all r sky bins. As anticipated by Banik & Zhao (2018), use of full 3D relative velocities leads to rather large uncertainties.
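The mode-plus-highest-density summary described above is easy to implement for a distribution tabulated on a grid. This is an illustrative sketch of the recipe in the text (my own code, assuming a uniform grid and a unimodal distribution): the level α is found by accumulating probability from the peak downwards until the requested mass is enclosed.

```python
import numpy as np

def mode_and_hdi(x, p, mass=0.683):
    """Mode and highest-density interval of a probability distribution
    tabulated on a uniform grid x: find the level alpha such that the
    region {P(x) > alpha} contains the requested probability mass."""
    x = np.asarray(x, float)
    p = np.asarray(p, float)
    dx = x[1] - x[0]
    p = p / (p.sum() * dx)          # normalise so that sum(P) dx = 1
    order = np.argsort(p)[::-1]     # descend from the peak
    cum = np.cumsum(p[order]) * dx
    inside = order[: np.searchsorted(cum, mass) + 1]
    return x[np.argmax(p)], x[inside].min(), x[inside].max()
```

For a Gaussian, this recovers the familiar ±1σ interval, since 68.3% of the mass lies within one standard deviation of the mean.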
In reality, these may be even larger because I assume that any star with a missing radial velocity has a valid measurement with the same accuracy as for its WB companion. The use of sky-projected velocities reduces uncertainties somewhat, but Figure 1 shows that these are probably still too large in one of the most important r sky bins for the WBT. Uncertainties can be reduced by another factor of ≈ 4 using the line velocity method. In this case, the rms v line would typically be ≈ 40 m/s if its latent value is always 0. Given that σ 1D must be ≈ 200 m/s (Banik & Zhao 2018, figure 7), it can be accurately measured using line velocities.

Inferred velocity dispersions

My results in Section 4 show that line velocities are expected to follow a roughly Gaussian distribution. Therefore, I repeat my MC trials with an extra Gaussian dispersion of σ 1D added to each component of v rel. I then find the proportion of MC trials in which the rms v line of this mock dataset falls within a narrow range around the observed value. This is the relative probability of the particular σ 1D value used. As discussed in Section 3.2, the result is very small for σ 1D = 0. As σ 1D is increased, the probability rises up to some maximum before decreasing again. This is because adding a very high σ 1D causes the rms v line to exceed the observed value in nearly all MC trials. Having obtained an inference on σ 1D, I determine its 68.3% confidence interval (Section 3.2) and show the results in Figure 3. This allows a comparison with the observed rms v line and the results of my control analysis (Section 3.2). For bins 2 and 3, which are most relevant to the WBT (Table 1), σ 1D is clearly detected. For comparison, I repeat my analyses using the sky-projected and 3D relative velocities, though the results need to be scaled down by factors of √2 and √3, respectively. Because radial velocities are missing for some stars, the 3D results should be considered illustrative only.
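The inference on σ 1D described above can be sketched as a simple rejection-style scan. This is a heavily simplified illustration under my own assumptions (a single bin, a single common noise level, rms matching within a fixed tolerance), not the paper's full per-system Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_1d_probabilities(sigma_grid, n_sys, noise, rms_obs,
                           n_trials=4000, tol=0.02):
    """Relative probability of each candidate sigma_1D (km/s): the
    fraction of mock samples of n_sys line velocities, drawn with the
    candidate dispersion plus Gaussian measurement noise, whose rms
    lands within `tol` of the observed rms."""
    probs = []
    for sigma in sigma_grid:
        v = rng.normal(0.0, np.hypot(sigma, noise), size=(n_trials, n_sys))
        rms = np.sqrt(np.mean(v ** 2, axis=1))
        probs.append(np.mean(np.abs(rms - rms_obs) < tol))
    return np.array(probs)
```

As in the text, the probability is small for σ 1D = 0, rises to a maximum near the latent value, and falls again once the mock rms almost always overshoots the observation.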
Figure 3 shows that using v line and v sky yield rather similar errors. This will change over time because uncertainties in distances are expected to drop slower than those in proper motions (Section 5). Moreover, the small sample size imposes a rather high floor on the uncertainties, even if perfect data were available. It will be interesting to apply the line and sky velocity methods to a larger sample of WBs. The results in Figure 3 allow for a preliminary comparison with theory. Thus, I plot my σ 1D inferences against the mean r sky for the WBs in each r sky bin (Figure 4). 1 Theoretical expectations require knowledge of the WB masses, which I hope to estimate and use in a future analysis. For now, I simply assume that all WBs have a total mass of 1.5M , the same assumption made in Banik & Zhao (2018, figure 7) because 1.5M is nearly the mode of the expected Gaia WB mass distribution (see their figure 2). Based on their figure 7, I assume that the Newtonian expectation is 155.7 m/s for r sky = 20 kAU while the MOND expectation is 195.9 m/s for conventional versions of it that include the external field effect (EFE). As these are predictions for sky-projected velocities, I scale them down by √ 2 and assume a Keplerian r sky −1/2 law to obtain results for other r sky . This is valid in Newtonian gravity and also in MOND for systems wider than their MOND radius of 8.6 kAU (Equation 1), because in the Solar neighbourhood such systems are dominated by the EFE such that MOND boosts the Newtonian forces by a fixed factor (Banik & Zhao 2018, figure 1). Thus, local WBs within their MOND radius should follow a Keplerian law with the Newtonian normalisation. In Newtonian gravity, the same normalisation should of course remain valid for larger radii. However, in MOND models with the EFE, the normalisation would asymptotically be ≈ 1.2× higher. Without the EFE (as discussed in their section 7.4), the Keplerian law would no longer apply beyond the MOND radius. 
Instead, σ 1D should become independent of r sky, reminiscent of flat galactic rotation curves. Based on Banik & Zhao (2018, figure 7), the asymptotic value should be ≈ 300/√2 m/s. These predictions are valid for WBs unaffected by tides from other stars. Given that the nearest star to the Sun is 268 kAU away (Kervella et al. 2016), systems with r sky ≳ 100 kAU (bins 4 and 5) are unsuitable for the WBT. Bearing these expectations in mind, my results in Figure 4 show that the uncertainties are likely still too large to allow the WBT, at least with the Hernandez et al. (2018) sample of WBs. Nonetheless, the expected Keplerian decline is clearly evident out to the MOND radius and is suggestive of a further decline beyond it. This implies a mild amount of tension with MOND models that lack an EFE. However, the small sample size and lack of system masses means that one should not draw strong conclusions at this stage.

TESTING GRAVITY WITH LINE VELOCITIES

In the short term, the WBT will involve r sky rather than the true 3D separation r rel (Section 2.1). Thus, I follow Pittordis & Sutherland (2018) and Banik & Zhao (2018) in defining the scaled relative velocity v ≡ v rel / √(GM/r sky) (Equation 11), where M is the total mass of the system. The sky-projected component of this is v sky while the line-projected component is v line. Because r sky measures only part of the 3D r rel, v is smaller than what it would be if it had been based on the full r rel. Thus, v < √2 in Newtonian gravity. The same limit applies to v sky and v line, though projection effects imply smaller typical values. Banik & Zhao (2018, section 2.2) derived an analytic estimate for the corresponding upper limit in MOND (Equation 12), which they confirmed using numerical simulations. Here, ν MW is the MOND enhancement to g N,MW, the Newtonian gravity exerted by the rest of the Galaxy on the Solar neighbourhood.
Although the Galaxy is a disk, the Sun is sufficiently far from the Galactic Centre that the spherically symmetric MOND interpolating function ν can be used with negligible loss of accuracy (Banik & Zhao 2018, section 9.3.1). Thus, g N,MW can be inferred from the amplitude of the Galactic rotation curve near the Sun. By combining the latest measurements of the Galactic rotation curve with an interpolating function consistent with the RAR, Banik & Zhao (2018) showed that the upper limit on v is expected to be 1.68 in MOND, ≈ 20% higher than the Newtonian value of √2. To quantify the distribution of v line, it is necessary to consider WBs with a range of properties. The semi-major axis probability distribution is carefully chosen such that the distribution of r sky matches observations (Banik & Zhao 2018, section 3.2). Similarly to that work, I assume that observational difficulties will prevent the WBT from using systems with r sky > 20 kAU. This is likely conservative as the WB catalogue of Andrews et al. (2018) maintains a low contamination rate out to 40 kAU. Increasing the upper limit on r sky would somewhat improve prospects for the WBT (Banik & Zhao 2018, figure 5). However, the improvement is not dramatic because the frequency of WBs declines ∝ r sky^(−1.6) (Lépine & Bongiorno 2007; Andrews et al. 2017) and the MOND enhancement to gravity is nearly flat beyond 20 kAU (Banik & Zhao 2018, figure 1). In addition to a range of WB orbit sizes, it is also important to consider a variety of shapes. These are parameterized by the orbital eccentricity e and its generalisation to non-Newtonian gravity theories (Pittordis & Sutherland 2018, section 4.1). I assume a linear distribution in e:

P(e) = 1 + γ (e − 1/2).

I use γ N to denote the value of γ used for Newtonian WB models while γ M is used for MOND models. If the context makes clear which gravity theory is being discussed, then I just use γ. In both cases, the allowed range of values is between −2 and 2.
The distribution of system masses is explained in Banik & Zhao (2018, section 3.3). Due to the EFE, I also consider systems covering all possible angles between the orbital pole and the direction of the Galactic Centre (see their section 3.4). The parameter space is explored using a full grid method, making the procedure deterministic. Motivated by difficulties in correcting observed redshifts for stellar convective motions (Kervella et al. 2017), radial velocities were assumed to be known, but the accuracies were assumed to be insufficient for direct use in the WBT. Restricting the WBT in this way roughly doubles the required number of systems to ≈ 300 (Banik & Zhao 2018, figure 5). This is much less than the ≈ 2000 WBs identified by Andrews et al. (2018), suggesting there is significant scope for prioritising data quality over quantity.

[Figure caption fragment: ...to v sky distributions calculated using the methods described in Banik & Zhao (2018). Different model parameters are marginalized over using a full grid method, as discussed in their section 3.]

To see how my line velocity method further inflates the number of systems needed for the WBT, I begin by obtaining the v line distribution P(v line) in the different models. In general, a WB system has v line = v sky |sin φ| for some angle φ between its sky-projected relative velocity and systemic proper motion. Thus, a particular value of v line can arise from any situation where v sky > v line. The probability of doing so depends on P(v sky) and the likelihood that |sin φ| = v line / v sky, which is needed to achieve the correct projection effect. As the distribution of φ is expected to be uniform, I only need to consider the range 0 − π/2. Using standard trigonometric results, I get that

P(v line) = (2/π) ∫_{v line}^{∞} P(v sky) / √(v sky² − v line²) dv sky.

This allows me to take advantage of the P(v sky) distributions calculated in Banik & Zhao (2018). Some representative examples of P(v line) are shown in Figure 5 for different model assumptions.
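The relation v line = v sky |sin φ| with uniform φ can also be sampled directly, turning any set of v sky draws into the corresponding v line distribution. This Monte Carlo sketch (my own illustration) is an alternative to evaluating the projection integral.

```python
import numpy as np

rng = np.random.default_rng(2)

def project_to_line(v_sky):
    """Project sky-plane relative speeds onto the line direction:
    v_line = v_sky * |sin(phi)|, with phi drawn uniformly from
    [0, pi/2] as expected for randomly oriented systems."""
    phi = rng.uniform(0.0, np.pi / 2, size=np.size(v_sky))
    return np.asarray(v_sky, float) * np.sin(phi)
```

A quick sanity check: for a fixed v sky, the mean projected value is the average of sin φ over [0, π/2], namely 2/π ≈ 0.64, and no projected value can exceed v sky itself.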
Within the context of either gravity theory, changing γ affects P ( v line ) to a much smaller extent than the difference in P ( v line ) between the different theories. These differences are especially pronounced in the high-velocity tail of the distribution. Having obtained P ( v line ) for Newtonian and MOND gravity, I use a publicly available method of comparing probability distributions to estimate the probability P detection that these models can be distinguished with accurate data from N systems (Banik & Zhao 2018, section 4). By repeating these 'detection probability' calculations for different N , I estimate how many systems are required for the WBT and the optimal range in (r sky , v line ) that astronomers should focus on. For a fixed value of γ M , I consider all possible values of γ N in order to find that which minimizes P detection . Roughly speaking, this makes the Newtonian P ( v line ) as similar as possible to the MOND P ( v line ). This mimics how future astronomers might try to fit observations of intrinsically Milgromian systems using Newtonian dynamics by adjusting its model parameters. Having obtained P detection in this way, I compare it with similar results based on using v sky in the WBT ( Figure 6). As these calculations assume no measurement errors, the line velocity method roughly doubles the N required to reach a fixed P detection . This is unsurprising given that v line is based on only one component of v rel whereas v sky is based on two components. My calculations yield an a priori estimate of the optimal parameter range for the WBT based on the proportion of systems expected to be in this range under the different gravity models. For my nominal assumption that γ M = 1.2, the best r sky range is 3−20 kAU while the optimal v line range starts at 0.94 ± 0.02 and ends at 1.68, the expected analytic limit (Equation 12) and also the maximum value which arises in my MOND models. 
Out of all WBs with r sky = 1 − 20 kAU, the MOND model predicts that 3.4 ± 0.3% should fall within this (r sky, v line) range. This is nearly triple the Newtonian expectation of 1.1 ± 0.2% for the 'best-fitting' γ N of ≈ −0.5, the value which minimizes P detection. These results are unchanged for γ M = 2 apart from the fact that the best γ N rises to ≈ 0.1. Physically motivated constraints on γ N could improve the prospects for the WBT somewhat, for example if it becomes clear that negative values should not arise. These results are rather similar to those obtained using v sky (Banik & Zhao 2018, section 5). The main difference with line velocities is that projection effects significantly reduce the proportion of systems in the optimal parameter range. This makes it more difficult to conduct the WBT, though my results in Figure 6 indicate that it should still be feasible with ≈ 1000 systems.

MOND without the external field effect

The EFE is a non-linear effect in MOND which arises directly from its governing equations (Milgrom 1986, section 2g). If a WB system orbits a galaxy with acceleration ≳ a 0, then the internal dynamics of the WB will be governed by Newtonian gravity regardless of how low the internal accelerations are. This is because the total acceleration enters the governing equations. Therefore, the EFE is not tidal in nature − it arises even if the galaxy exerts a uniform gravitational field across the WB. So far, the EFE has been directly included in my models (Banik & Zhao 2018, section 2.1). Its section 7.4 discussed the possibility of MOND models without an EFE, as arises in some modified inertia interpretations of MOND (Milgrom 2011). Despite lacking a self-consistent theory of this type, it is straightforward to repeat my calculations without the EFE as neglecting it greatly simplifies the problem (Banik & Zhao 2018, equation 13).
In Figure 7, I compare the P(v line) distribution in Newtonian gravity against MOND models with and without the EFE. MOND models without the EFE extend out to v line = 3.2 because some systems are much larger than their MOND radius, leading to a large difference compared to more conventional MOND models with the EFE. However, the differences are limited by the rapidly declining r sky distribution of WBs as this implies a similar decline in the distribution of semi-major axes (Andrews et al. 2017). Using my v sky and v line distributions for MOND without the EFE, I repeat my P detection calculations and show them in Figure 8. These models are much more easily distinguished from Newtonian dynamics (compare with Figure 6). Thus, MOND models lacking the EFE will be the first ones to become directly testable using the WBT. Neglecting the EFE slightly changes the optimal parameter range for the WBT. The best r sky range now becomes 4 − 20 kAU while the best v line range starts at 0.96 ± 0.02 and extends up to the maximum value of 3.2 reached in my simulations. The best-fitting Newtonian model (γ N ≈ 1.7) predicts that 0.8 ± 0.2% of WBs will fall in this parameter range, much smaller than the 5.5 ± 0.2% expected in MOND without the EFE. The model predictions would differ even more if accurate data is available for systems with r sky > 20 kAU because the lack of an EFE allows MOND to enhance accelerations by an unlimited amount compared to Newtonian gravity.

DISCUSSION

My results in Figure 2 show that the line velocity technique yields relative velocities with an accuracy of ≈ 30 m/s. This is very small compared to the expected 1D velocity dispersion of ≈ 150 m/s in my r sky bin 3 (Figure 4). As v line should broadly follow a Gaussian distribution (Figure 7), measurement errors would increase the width of this distribution by only ≈ (30/150)² = 4%, much less than the ≈ 20% difference between orbital velocities in Newtonian and Milgromian dynamics (Banik & Zhao 2018).
1 Future releases of Gaia data will improve the situation further. The line velocity method yields such precise results due to its significantly reduced sensitivity to distance uncertainties, which are expected to be larger and decline slower than uncertainties in proper motions. This is because proper motions arise due to the true motion of stars relative to the Sun, which is typically ∼ 30 km/s (Gaia Collaboration 2018). This is about the same as the orbital velocity of Earth around the Sun (Hornsby 1771), which underlies distance measurements via trigonometric parallax. Therefore, the annual parallax is similar to the proper motion over a year. For a fixed astrometric precision, the uncertainty in v rel thus receives similar contributions from distance and proper motion errors after ≈ 1 year of observations. As observatories such as Gaia (Perryman et al. 2001) collect data over a longer mission duration T , proper motion uncertainties are expected to fall as T −3/2 because the signal (change in n) grows linearly with T while measurement errors fall as √ T if the frequency of astrometric observations is maintained. However, distances must be inferred from the annual parallax, a cyclical change in n. Because parallax alone does not cause a long term drift in n, the distance uncertainty should decrease only as T −1/2 . Thus, in the long term, the WBT is probably best achieved using line velocities. If observers achieve better astrometric precision, this would not change the argument because it would improve both distance and proper motion measurements. The only exception arises if distance uncertainties somehow 'catch up' to those in proper motions, for example if the latter hit a noise floor. But this would need to be an unusual kind of noise floor because a minimum astrometric precision still allows proper motion measurements to improve with time while making it difficult to tighten distance constraints. 
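The mission-duration scalings argued above can be made concrete with a trivial helper (my own illustration; the exponents come from the text's signal-versus-noise argument).

```python
def astrometric_uncertainty(t_years, sigma_1yr, exponent):
    """Scaling of astrometric uncertainties with mission duration T,
    normalised to the 1-year value: exponent = -1.5 for proper motions
    (signal grows ~T, noise falls ~sqrt(T)) and -0.5 for parallax
    distances (a purely cyclical signal)."""
    return sigma_1yr * t_years ** exponent
```

After 4 years, proper motion errors are down by a factor of 8 while distance errors are down by only a factor of 2, so distance errors increasingly dominate methods that rely on them, which is the case for the line velocity method's advantage.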
The smaller uncertainties resulting from the line velocity method suggest that it can be applied to more distant WBs, where the larger uncertainties in v sky might make it unusable. Similarly, the reduced uncertainties might allow the WBT to utilise systems with fainter stars. Assuming such WBs have a lower mass, their reduced MOND radius (Equation 1) would cause their velocity distribution to differ more significantly between Newtonian and Milgromian dynamics. These considerations must be set against the simple fact that the line velocity method uses less data per WB, thus inflating the number of WBs needed to distinguish these theories (Section 4). Which method will prove more fruitful is unclear at present as this partly depends on the level of contamination (see below), which would affect line velocities more severely. The best option is to try both and check if they give consistent results. In addition to errors in velocity measurements, various other uncertainties would also affect the WBT. Some of these have previously been considered, in particular whether one of the stars in a WB has a close undetected companion as well as its more distant known companion (Banik & Zhao 2018, section 8.2). That work also looked into WBs that were previously ionised by interaction(s) with other stars (see their section 8.1). If a WB is only marginally ionised, then it will take a long time to disperse. This is quite possible if the ionisation is caused by a series of rather weak encounters, as might arise in a star cluster. The end result might be somewhat similar to a moving group of stars (Wielen 1977). This could add to a background of non-genuine WBs that makes the WBT more difficult. For the particular case of moving groups, the issue could be alleviated somewhat by focusing on systems older than e.g. 1 Gyr. The required stellar age estimates could perhaps be provided by gyrochronology, taking advantage of the increase in stellar rotation periods with age (Barnes 2003). 
Very precise ages would not be necessary for this purpose. By definition, such contamination involves systems whose v exceeds the limit for bound systems. Due to projection effects, it is possible that v sky or v line does not exceed this limit. Statistically, however, it very often will. Thus, observers can estimate the prevalence of contaminating systems by looking at how many WBs have v line > 2, a value almost never exceeded even in versions of MOND without the EFE (Section 4.1). Nonetheless, contamination would still make the WBT more difficult for the same reason that the brightness of the sky makes it harder to identify a faint astronomical object. To get a feel for how this works, suppose accurate information is available for N = 1000 systems. My results show that, in the absence of contamination, the WBT will simply be a matter of focusing on a particular range of (r sky, v line) and distinguishing between theories which predict 11 vs 34 WBs in this range. Because both numbers are ≪ N, it is reasonable to assume Poisson statistics. The feasibility of the WBT in this case is just the feasibility of distinguishing Poisson distributions with rates of 11 or 34. Whether these distributions are widely separated can be judged by adding their variances and comparing the result to the difference in modes, which correspond to mean values for Poisson distributions. In this case, the means differ by 23 while the difference between random variables following these distributions has an error of √(11 + 34) ≈ 6.7, suggesting a statistically significant exclusion of one or other theory should be possible in the vast majority of cases. This is indeed what my results show (Figure 6). Now suppose that contamination from e.g. moving groups adds an extra 1% of WBs to this parameter range and that this fact is known based on the distribution of v line above 2. The gravity theories now predict 21 vs 44 systems in the same parameter range.
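The Poisson-counting argument above reduces to a one-line significance estimate (my own illustration of the text's arithmetic, not a rigorous hypothesis test).

```python
import math

def poisson_separation(rate_a, rate_b):
    """Crude significance with which two Poisson rates can be told
    apart from a single pair of counts: the difference of the means
    divided by the standard deviation of the difference,
    sqrt(rate_a + rate_b)."""
    return abs(rate_b - rate_a) / math.sqrt(rate_a + rate_b)
```

For the contamination-free case this gives 23/√45 ≈ 3.4σ, while adding 10 contaminants to each rate leaves the difference unchanged but inflates the error, diluting the test exactly as described.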
The difference remains the same but is harder to distinguish, with the uncertainty increasing by a factor of 65/45 ≈ √ 2. To maintain the same statistical significance, roughly twice as many WBs would therefore be required. Clearly, the WBT would be very challenging if 1% of systems fell in the relevant parameter range while not being genuine WBs. Fortunately, the vast majority of contaminating systems would fall outside this range. For example, if the contamination is uniform in v line over the range 0 − 5 (corresponding to a maximum of 2.1 km/s for two Sun-like stars separated by 10 kAU), then only 14% of all contaminants would enter the v line range relevant for the WBT. Thus, the contaminants could comprise up to 7% of all catalogued WBs with r sky = (1 − 20) kAU. In reality, an even larger fraction would be tolerable because the contamination can be significantly reduced by using a narrower 'aperture' on v line . Figure 5 shows that MOND predicts almost no systems with v line > 1.35, even though the distribution extends up to 1.68. Thus, the v line aperture could be narrowed to a width of only 0.4 rather than the 0.7 assumed so far, with only a negligible loss of genuine WBs. Once the prevalence and properties of contaminants are better known, calculations including it will further optimise the best parameter range to focus on. 1 In this context, it should be mentioned that the WB catalogue of Andrews et al. (2018) has a contamination rate of ≈ 6% while extending out to double the r sky limit of 20 kAU that I assume is observationally accessible. Moreover, the number of WBs they identified greatly exceeds my estimate of how many are required for the WBT (Figure 6). This remains true even if my estimate is doubled to account for other sources of uncertainty like contamination. It must also be borne in mind that I have focused on the feasibility of the WBT using only one component of WB relative velocities. 
At least some information is available regarding the other components, further aiding the WBT.

CONCLUSIONS

If the anomalous rotation curves of galaxies are caused by a low-acceleration departure from the standard laws of gravity, then this will have significant effects on WB systems with separations ≳ 3 kAU. To conclusively perform this WBT and thereby detect or rule out such effects, accurate data is required for systems with separations up to ≈ 20 kAU (Hernandez et al. 2012; Scarpa et al. 2017; Pittordis & Sutherland 2018; Banik & Zhao 2018). This may already be within reach given that the WB catalogue of Andrews et al. (2018) extends out to 40 kAU while maintaining a low contamination rate. At its heart, the WBT requires relative velocities v rel. Since the test is statistical in nature, it could benefit from considering only the most accurately known component(s) of v rel. In particular, Banik & Zhao (2018) considered minimising the effect of radial velocity uncertainties by using only the sky-projected part of v rel. Shaya & Olling (2011, section 3.2) went a step further by suggesting that only one of the sky-projected velocity components be used. This involves projecting v rel onto the direction given by Equation 8 and using only this projected quantity in the WBT. The basic principle is to focus on the relative velocity along the direction within the sky plane orthogonal to the systemic proper motion of the WB, thereby minimising the effect of distance uncertainties. To demonstrate this line velocity method, I applied it to the WB catalogue of Hernandez et al. (2018, table 2) by conducting MC simulations where measurement errors are included but the stars in each WB have identical latent velocities equal to the mean for the stars in each system. In these control simulations, the typical 1D velocity dispersion is ≈ 100 m/s when using sky-projected relative velocities but only ≈ 30 m/s using line velocities (Figure 2).
I then performed a preliminary MC analysis of the original Hernandez et al. (2018) data, finding no evidence of a clear departure from Newtonian expectations at the MOND radii of the systems. My analysis assumed all WBs have a total mass of 1.5M and suffered from a small sample. Even so, the error bars are comparable to the size of the difference between Newtonian and Milgromian expectations. Thus, the WBT should soon become feasible. To check this, I estimated how many WBs are required to distinguish these theories using the line velocity method. The use of only one component of v rel roughly doubles the required number of systems compared to the case where the WBT fully utilises sky-projected relative velocities. Even so, the WBT should still be feasible with ≈ 1000 systems ( Figure 6). With a longer observing duration T , the line velocity method becomes more compelling because it is almost immune to distance uncertainties, which are expected to decrease as T −1/2 . The method is mainly reliant on proper motions, which should exhibit a more rapid improvement as T −3/2 . Because distance and proper motion uncertainties should be similar after ≈ 1 year of observations (Section 5), the line velocity technique should be much better after a few years. Its higher accuracy increases the number of usable systems, at least partially offsetting the reduction in how much information is used from each system. I also discuss how the WBT might be hampered by contamination from unbound systems like moving groups. The effect can be minimized by defining a narrow theoretically motivated range in v line (Equations 8 and 11) such that the WBT is best performed by quantifying the proportion of systems in this range (Banik & Zhao 2018). Using this method, it is likely that the WBT is feasible with the number of WBs reported by Andrews et al. (2018) if their estimated level of contamination is correct. 
Moreover, the WBT will benefit at least somewhat from considering other components of v_rel, even if they do have larger uncertainties. Therefore, the next few years promise to bring strong constraints on the behaviour of gravity at the low accelerations typical of galactic outskirts. This will cast a much-needed light on what fundamental new assumptions are required to explain their anomalous behaviour.
Extremal Problems in Bergman Spaces and an Extension of Ryabykh's $H^p$ Regularity Theorem

For $1<p<\infty$, we study linear extremal problems in the Bergman space $A^p$ of the unit disc. Given a functional on the dual space of $A^p$ with representing kernel $k \in A^q$, where $1/p + 1/q = 1$, we show that if $q \le q_1<\infty$ and $k \in H^{q_1}$, then the extremal function $F \in H^{(p-1)q_1}$. This result was previously known only in the case where $p$ is an even integer. We also discuss related results.

The Bergman space A^p consists of the functions analytic in the unit disc D for which the norm

  ‖f‖_{A^p} = ( ∫_D |f|^p dσ )^{1/p}

is finite, where σ is normalized area measure (so that σ(D) = 1). For 1 < p < ∞, each functional φ ∈ (A^p)^* can be uniquely represented by

  φ(f) = ∫_D f k̄ dσ

for some k ∈ A^q (called the kernel of φ), where q = p/(p − 1) is the conjugate index. In this paper we study regularity results for the extremal problem of maximizing Re φ(f) among all functions f ∈ A^p of unit norm. An important regularity result is Ryabykh's theorem, which states that if the kernel is actually in the Hardy space H^q, then the extremal function must be in the Hardy space H^p (see [14] or [6] for a proof). In [7], the following extensions of Ryabykh's theorem are shown in the case where p is an even integer:

• For q ≤ q_1 < ∞, the extremal function F ∈ H^{(p−1)q_1} if the kernel k ∈ H^{q_1} (if q_1 = q this is Ryabykh's theorem).
• If the Taylor coefficients of k satisfy a certain bound, then F ∈ H^∞.
• The map sending a kernel k ∈ H^q to its extremal function F ∈ A^p is a continuous map from H^q \ {0} into H^p.
• For q ≤ q_1 < ∞, if the extremal function F ∈ H^{(p−1)q_1}, then the kernel k ∈ H^{q_1}. (In fact, the proof in [7] shows that this result holds if 1 < q_1 < ∞.)

We show that the first two results above hold for all p such that 1 < p < ∞. We also show that a weaker form of the third result holds for 1 < p < ∞, while a weaker form of the fourth holds if 2 ≤ p < ∞. It is an open problem whether the last two results hold in their strong forms for 1 < p < ∞.
To overcome certain technical difficulties in the proof, we rely on regularity results from [12] for extremal functions with polynomial kernels. These results rely on regularity theorems for complex analogues of p-harmonic functions. Our paper also uses an inequality based on Littlewood-Paley theory that was proved in [7]. (Date: February 9, 2015. 2010 Mathematics Subject Classification: 30H10, 30H20.)

Extremal Problems and Ryabykh's Theorem

We now introduce the topic of the paper in more detail. (See [7] for a slightly more detailed introduction.) If f is an analytic function, S_n f denotes its nth Taylor polynomial at the origin. We denote Lebesgue area measure by dA, and normalized area measure by dσ, so that σ(D) = 1.

We recall some basic facts about Hardy and Bergman spaces. For proofs and further information, see [3] and [5]. Suppose that f is analytic in the unit disc. For 0 < p < ∞ and 0 < r < 1, the pth integral mean of f at radius r is

  M_p(r, f) = ( (1/2π) ∫_0^{2π} |f(re^{iθ})|^p dθ )^{1/p}.

The integral means are increasing functions of r for fixed f and p. An analytic function f is in the Hardy space H^p if sup_{0<r<1} M_p(r, f) < ∞, and for f ∈ H^p we have that f(re^{iθ}) approaches the boundary function f(e^{iθ}) in L^p(dθ) as r → 1^−. Two H^p functions whose boundary values agree on some set of positive measure are identical. The space H^p is a Banach space with norm ‖f‖_{H^p} = sup_{0<r<1} M_p(r, f), which equals the L^p(dθ) norm of the boundary function. Thus we can regard H^p as a subspace of L^p(T), where T denotes the unit circle. If 1 < p < ∞, the space H^p is reflexive. If f ∈ H^p and 1 < p < ∞, then S_n f → f in H^p as n → ∞, where S_n f is the nth partial sum of the Taylor series for f centered at the origin.

The Szegő projection S maps each function f ∈ L^1(T) to an analytic function defined by

  Sf(z) = (1/2π) ∫_0^{2π} f(e^{iθ}) / (1 − e^{−iθ} z) dθ

for |z| < 1. It fixes H^1 functions and maps L^p boundedly onto H^p for 1 < p < ∞.

Recall the correspondence between functionals φ ∈ (A^p)^* and kernels k ∈ A^q from the introduction. This correspondence is conjugate linear and does not preserve norms, but it is the case that

  ‖φ‖ ≤ ‖k‖_{A^q} ≤ C_p ‖φ‖,   (1.1)

where C_p is a constant depending only on p. It can be shown that C_p ≤ π csc(π/p) (see [2] and the proof of Theorem 6 in Section 2.4 of [5]).
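The monotonicity of the integral means is easy to check numerically. The sketch below approximates M_p(r, f) by a Riemann sum on the circle; the test function f(z) = 1/(1 − z/2) is an arbitrary choice, and for it Parseval gives M_2(r, f)² = Σ (r/2)^{2n} = 1/(1 − r²/4), which the discretization reproduces to high accuracy.

```python
import numpy as np

def integral_mean(f, p, r, n=4096):
    """p-th integral mean M_p(r, f) = ((1/2pi) ∫ |f(r e^{iθ})|^p dθ)^{1/p},
    approximated by an equispaced Riemann sum with n points."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    vals = np.abs(f(r * np.exp(1j * theta))) ** p
    return float(vals.mean() ** (1.0 / p))

f = lambda z: 1.0 / (1.0 - 0.5 * z)   # analytic on the closed unit disc
means = [integral_mean(f, 2.0, r) for r in (0.2, 0.5, 0.8, 0.99)]
# means increases with r, as the theory above states
```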
As with Hardy spaces, if f ∈ A^p for 1 < p < ∞, then S_n f → f in A^p as n → ∞. In this paper the only Bergman spaces we consider are those with 1 < p < ∞.

For a given linear functional φ ∈ (A^p)^* such that φ ≠ 0, we study the extremal problem of finding a function F ∈ A^p with norm ‖F‖_{A^p} = 1 such that

  Re φ(F) = ‖φ‖.   (1.2)

Such a function F is called an extremal function, and we say that F is an extremal function for a function k ∈ A^q if F solves problem (1.2) for the functional φ with kernel k. Note that for p = 2 the extremal function is F = k/‖k‖_{A^2}. For 1 < p < ∞ an extremal function always exists and is unique, which follows from the uniform convexity of A^p. Also, for any function F of unit A^p norm, there is some k such that F solves (1.2) for the functional φ with kernel k, and such a k is unique up to a positive scalar multiple. Furthermore, one such k is given by P(|F|^p / F), where P is the Bergman projection (see [6] and [8]).

Even though it is well known, we restate the Cauchy-Green theorem, which is an important tool in this paper.

Cauchy-Green Theorem. If Ω is a region in the plane with piecewise smooth boundary and f ∈ C^1(Ω̄), then

  ∫_{∂Ω} f(z) dz = 2i ∫_Ω (∂f/∂z̄) dA(z),

where ∂Ω denotes the boundary of Ω.

The next result is an important characterization of extremal functions in A^p for 1 < p < ∞ (see [15], p. 55); its last part, an identity valid for any function h ∈ L^1, follows from the previous parts by a standard approximation argument.

Ryabykh's theorem is a result for extremal problems in Bergman spaces that involves Hardy space regularity. It says that if the kernel for a linear functional is not only in A^q but also in H^q, then the extremal function is in H^p, and not merely in A^p.

Ryabykh's Theorem. Let 1 < p < ∞ and let 1/p + 1/q = 1. Suppose that φ ∈ (A^p)^* and φ(f) = ∫_D f k̄ dσ for some k ∈ H^q. Then the solution F to the extremal problem (1.2) belongs to H^p and satisfies the bound (1.3). Ryabykh proved that F ∈ H^p in [14]. The bound (1.3) was proved in [6] by a variant of Ryabykh's proof.
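For p = 2 the extremal problem is explicit, F = k/‖k‖_{A²}, and everything can be computed from Taylor coefficients via the standard orthogonality ∫_D z^m z̄^n dσ = δ_{mn}/(n + 1). The sketch below (the polynomial kernel is a made-up example) checks that the resulting F has unit norm and attains Re φ(F) = ‖φ‖ = ‖k‖_{A²}.

```python
import numpy as np

def bergman2_norm(coeffs):
    """A^2 norm of a polynomial sum a_n z^n from its coefficients, using
    ||z^n||^2_{A^2} = 1/(n+1) for normalized area measure."""
    a = np.asarray(coeffs, dtype=complex)
    n = np.arange(len(a))
    return float(np.sqrt(np.sum(np.abs(a) ** 2 / (n + 1))))

def pairing(f_coeffs, k_coeffs):
    """phi(f) = ∫_D f conj(k) dσ for polynomials, via the same orthogonality."""
    m = min(len(f_coeffs), len(k_coeffs))
    fc = np.asarray(f_coeffs[:m], dtype=complex)
    kc = np.asarray(k_coeffs[:m], dtype=complex)
    n = np.arange(m)
    return complex(np.sum(fc * np.conj(kc) / (n + 1)))

k = [0.0, 1.0, 2.0]            # hypothetical kernel k(z) = z + 2 z^2
nk = bergman2_norm(k)
F = [c / nk for c in k]        # p = 2 extremal function F = k / ||k||_{A^2}
```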
In [7], it is shown that if p is an even integer, then for q ≤ q_1 < ∞ the extremal function F ∈ H^{(p−1)q_1} if and only if the kernel k ∈ H^{q_1}. It is also shown that if the Taylor coefficients of k satisfy a certain bound then F ∈ H^∞, and that the map sending a kernel k ∈ H^q to its extremal function F ∈ A^p is a continuous map from H^q \ {0} into H^p. We show that some of these results hold for any p such that 1 < p < ∞ and that the others hold in weaker forms. It is still an open problem whether the weaker results can be improved so that they correspond to the results from the case when p is an even integer.

We need the following lemma for technical reasons. This follows from Corollary 2.1 in [12]. See page 944 of that paper for a justification of the fact that F′ ∈ A^r.

The next lemma is a simplified version of Lemma 1.2 from [7].

Lemma 1.2. Suppose that 1 < p_1 < ∞ and 1 < p_2, p_3 ≤ ∞, and also that …, where C depends only on p_1 and … The assumption on f_1 f_2 f′_3 is not essential, but without it the integral on the left needs to be replaced by a principal value.

In the next lemma, Lemma 1.3, the notation ‖f‖_{A^∞} means the L^∞ norm of f on the disc, which of course is equivalent to the H^∞ norm.

Proof. The first inequality in this statement is from [17], and actually holds for 0 < p ≤ ∞. To prove the second inequality for 1 ≤ p < ∞, note that if f(0) = 0 then … by Jensen's inequality. But by Fubini's theorem, the last displayed expression equals … But the integrand in the last integral is less than or equal to … But this means that the last displayed integral is bounded above by … The proof of the second inequality in the case p = ∞ is even easier, since then |f(e^{iθ})| ≤ sup_{0≤r<1} |f′(re^{iθ})| + |f(0)| for each θ.

The Norm-Equality for Polynomials

Let 1 < p < ∞ and let q be its conjugate exponent. Let k ∈ H^q and let F be the extremal function in A^p for k. We will denote by φ the functional associated with k. Define K by the formula in [6], equation (4.2).
The first result in this article corresponds to Theorem 2.1 in [7].

Theorem 2.1. Let 1 < p < ∞, let k be a polynomial that is not identically 0, and let F ∈ A^p be the extremal function for k. Then … for every polynomial h.

The proof of this theorem is very similar to the proof of Theorem 2.1 in [7]. However, the proof in [7] also works if k is any H^q function.

Consider …, where h is any polynomial. Apply the Cauchy-Green theorem and take the limit as r → 1 to transform the right-hand side into … We may apply Theorem A to reduce the last expression to … (2.2). To prepare for a reverse application of the Cauchy-Green theorem, we rewrite the integral in (2.2) as … Since F is in H^p and both k and K are in H^q, we may apply the Cauchy-Green theorem and take the limit as r → 1 to see that the above expression equals …

As in [7], taking h = 1 gives the following corollary, which we call the "norm-equality."

Corollary 2.2 (The Norm-Equality). Let 1 < p < ∞, let k be a polynomial that is not identically 0, and let F be the extremal function for k. Then …

We use the norm-equality to give the following theorem, which corresponds with Theorem 2.3 in [7]. Unfortunately, the theorem in this article is weaker, and it seems difficult to prove a statement as strong as the one in [7]. In the statement of the theorem, F_n ⇀ F means that F_n converges to F in the weak sense.

Theorem 2.3. Let {k_n} be a sequence of functions in H^q \ {0} and let k_n → k in H^q, where k is not identically zero. Let F_n be the A^p extremal function for k_n and let F be the A^p extremal function for k. Then F_n ⇀ F in H^p. Furthermore, if k and all the k_n are polynomials, then F_n → F in H^p.

Because the operator taking a kernel to its extremal function is not linear, one cannot automatically conclude that F_n → F just because the operator is bounded. It seems likely that F_n → F holds for any k_n and k in H^q such that k_n → k, and not just for polynomials, but we do not know a proof of this.

Proof.
The proof is basically identical to the corresponding proof in [7], but we will summarize it for the sake of completeness. To see that F_n ⇀ F in H^p, note that if F_n did not approach F weakly in H^p, then since Ryabykh's theorem implies that the sequence {F_n} is bounded in H^p norm, the Banach-Alaoglu theorem and the reflexivity of H^p would imply that some subsequence would converge weakly, and thus pointwise, to a function not equal to F. But k_n → k in A^q, and it is proved in [6] that this implies F_n → F in A^p, which implies F_n → F pointwise, a contradiction. If k and all the k_n are polynomials, then the fact that F_n ⇀ F together with the norm-equality implies that ‖F_n‖_{H^p} → ‖F‖_{H^p}. Since H^p is uniformly convex, it follows from F_n ⇀ F and ‖F_n‖_{H^p} → ‖F‖_{H^p} that F_n → F in H^p.

Fourier Coefficients of |F|^p

We now give some results about the Fourier coefficients of |F|^p that follow from Theorem 2.1. The first result gives information about the Fourier coefficients of |F|^p for nonpositive indices. Since |F|^p is real valued, it also indirectly gives information about the Fourier coefficients for positive indices. The next result is a bound on the Fourier coefficients of |F|^p: then, for each m ≥ 0, … The proof of the theorem is identical to the one found in [7], and thus will be omitted. An interesting observation is that this theorem implies that |F|^p is a trigonometric polynomial of degree at most N. The estimate in Theorem 3.2 can be used to obtain information about the size of |F|^p (and thus of F), as in the following corollary.

Proof. Assume first that k is a polynomial. Observe that for m ≥ 2 we have … and thus Σ_{n=m}^∞ |c_n|² ≤ …, where C is the constant implicit in the expression O(n^{−α}). Thus we have (for m ≥ 2) that … But this implies that … Here we have also used the fact that ‖F‖^{−1}_{H^∞} ≤ ‖F‖^{−1}_{A^p} = 1. Now we drop the assumption that k is a polynomial.
Let F_n be the extremal function for S_n k, and let φ_n be the corresponding functional. By Ryabykh's theorem and the fact that S_n k → k in H^q, the sequence ‖F_n‖_{H^p} is bounded. Now, the above displayed inequality holds with F_n in place of F and φ_n in place of φ, since C can be taken to be independent of n. Also, it follows from the fact that S_n k → k in A^q that φ_n → φ in (A^p)^*, and that F_n → F in A^p and thus uniformly on compact subsets. Therefore, … This proves the result.

Relations Between the Size of the Kernel and Extremal Function

In this section we show that if 1 < p < ∞ and q ≤ q_1 < ∞ and the kernel k ∈ H^{q_1}, then the extremal function F ∈ H^{(p−1)q_1}. For q_1 = q the statement reduces to Ryabykh's theorem. For p an even integer, this statement and its converse are proved in [7]. It is still an open problem to decide if the converse holds for general p, although we prove a weaker result similar to it. We first prove the following theorem.

Theorem 4.1. Let 1 < p < ∞ and let q = p/(p − 1) be its conjugate exponent. Let F ∈ A^p be the extremal function corresponding to the kernel k ∈ A^q, where k is a polynomial. Let p ≤ p_1 < ∞, and q ≤ q_1 < ∞. Define p_2 by … Then for every trigonometric polynomial h we have …, where C is some constant depending only on p, p_1, and q_1.

Note that the case p_2 = ∞ occurs if and only if q = q_1 and p = p_1. The theorem is then a trivial consequence of Ryabykh's theorem, so we need only prove the theorem if p_2 < ∞.

Proof. Let h be an analytic polynomial. In the proof of Theorem 2.1, we showed that … Apply Lemma 1.2 separately to the two parts of the integral to conclude that its absolute value is bounded by …, where C is a constant depending only on p_1 and q_1. By equation (1.1), the desired result holds for the case where h is an analytic polynomial. If h is an arbitrary trigonometric polynomial, then as in [7] the boundedness of the Szegő projection can be used to show the result holds.
For a given q_1 > q, we will apply the theorem just proven with p_1 = (p − 1)q_1 and with p′_2 chosen to equal p_1/p, where p′_2 is the conjugate exponent to p_2. This allows us to bound the H^{p_1} norm of F in terms of ‖φ‖ and ‖k‖_{H^{q_1}} only.

Theorem 4.2. Let 1 < p < ∞, and let q be its conjugate exponent. Let F ∈ A^p be the extremal function for a kernel k ∈ A^q. If for some q_1 such that q ≤ q_1 < ∞ the kernel k ∈ H^{q_1}, then F ∈ H^{p_1} for p_1 = (p − 1)q_1. In fact, …, where C depends only on p and q_1.

The proof of this theorem is identical to the proof of the corresponding theorem in [7], so we give a summary.

Proof. The case q_1 = q is Ryabykh's theorem, so we assume q_1 > q. Let p_1 = (p − 1)q_1; thus p_1 > p = (p − 1)q. Let … and p′_2 = p_1/p and 1 < p_2 < ∞. Let F_n denote the extremal function corresponding to the kernel S_n k (where we choose n large enough so that S_n k is not identically zero). Then for any trigonometric polynomial h, Theorem 4.1 implies that … Taking the supremum over all trigonometric polynomials h with ‖h‖_{L^{p_2}} ≤ 1 gives … Because ‖F_n‖_{H^{p_1}} < ∞ (since S_n k is a polynomial) we may divide both sides of the inequality by ‖F_n‖_{H^{p_1}} to obtain …, where C depends only on p and q_1. Taking the limit as n → ∞ gives the desired result.

Recall from Section 1 that if F ∈ A^p has unit norm, there is a corresponding kernel k ∈ A^q such that F is the extremal function for k, and that this kernel is uniquely determined up to a positive multiple. Thus, it makes sense to ask if the converse of Theorem 4.2 holds. That is, does F ∈ H^{(p−1)q_1} imply that k ∈ H^{q_1}? If p is an even integer and q ≤ q_1 < ∞ then by Theorem 4.3 in [7] this is the case. In fact, the proof in [7] works for any q_1 such that 1 < q_1 < ∞ (as long as p is an even integer). For general p we do not know if the result is still true. The result does hold if 2 ≤ p < ∞ and 1 < q_1 < ∞ and if F is nonvanishing, since the proof in [7] works in that case.
For general F we can prove the following weaker result for 2 ≤ p < ∞.

Theorem. Let …, and let k be a kernel such that F is the extremal function for k. Let p_1 = q_1(p − 1) and let p_2 = pq_1/(q_1 + 1). If F ∈ H^{p_1} and F′ ∈ A^{p_2}, then k ∈ H^{q_1} and …, where C_p is as in inequality (1.1).

Proof. Note first that the case p = 2 is trivial since then F and k are constant multiples of each other, so assume p ≠ 2. Let q denote the exponent conjugate to p. Let h be a polynomial and let φ be the functional in (A^p)^* corresponding to k. Then by Theorem A and the Cauchy-Green theorem, … Here we have used the fact that |F|^{p−2} F′ ∈ L^1, which follows from the fact that (p − 2)/p + 1/p_2 < 1. Now apply Hölder's inequality to the first integral using exponents q_1 and q′_1 = q_1/(q_1 − 1), and apply it to the second using exponents 2p_2/(p − 2) and p_2 and 2q′_1 to obtain that the above expression is bounded above in absolute value by … By Lemma 1.3, this is at most …

Let C equal the part of the above expression in parentheses. Then … for all polynomials h, and we can define a continuous linear functional ψ on H^{q′_1} so that … for all polynomials h. Then ψ has an associated kernel in H^{q_1} (see p. 113 of [3]). Call this kernel k̃. For h ∈ H^{q′_1} it follows that … By the Cauchy-Green theorem, …, where h is any polynomial. Define the polynomial H by … Then substituting H(z) for h(z) in equation (4.1), and using the fact that (zH)′ = h, we have … for every polynomial h. Since k ∈ A^q and k̃ ∈ H^{q_1} ⊂ A^{2q_1}, we have that the power series for k and k̃ converge in A^q and A^{2q_1} respectively. Using this fact and choosing h(z) = z^n for n ∈ N shows that the power series of k and k̃ are identical, and so k = k̃ and k ∈ H^{q_1}. Now for any polynomial h, …, where we have used inequality (1.1). But for any trigonometric polynomial h, we have …, where S denotes the Szegő projection. Note that csc(π/p) is the norm of the Szegő projection on L^p(∂D) (see [10]).
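On trigonometric polynomials, the Szegő projection used above simply discards the negative-frequency Fourier coefficients of the boundary function. A minimal FFT-based sketch (the sample function is an arbitrary choice):

```python
import numpy as np

def szego_project(samples):
    """Szegő projection of a trigonometric polynomial sampled at n equispaced
    points on the unit circle: zero out the Fourier coefficients with negative
    frequency and resynthesize the boundary values."""
    n = len(samples)
    c = np.fft.fft(np.asarray(samples, dtype=complex)) / n
    freqs = np.fft.fftfreq(n, d=1.0 / n)   # integer frequencies 0..n/2-1, -n/2..-1
    c[freqs < 0] = 0.0                     # kill the e^{-ikθ} terms, k > 0
    return np.fft.ifft(c * n)

theta = 2.0 * np.pi * np.arange(8) / 8
h = np.exp(1j * theta) + 3.0 + 2.0 * np.exp(-1j * theta)   # e^{iθ} + 3 + 2e^{-iθ}
Sh = szego_project(h)                                      # should be e^{iθ} + 3
```

Note that S fixes the analytic part (indices ≥ 0) exactly, which is the discrete analogue of S fixing H¹ functions.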
Now take the supremum over all trigonometric polynomials h with ‖h‖_{L^{q′_1}} ≤ 1 and divide both sides of the inequality by ‖k‖_{A^q}. It is interesting to note that the value of p_2 in the above theorem is less than p no matter the value of q_1.

Open Problems and a Simple Result

As we have noted, unlike in the case in which p is an even integer, we do not know how to show that if F ∈ H^{(p−1)q_1} then k ∈ H^{q_1}. However, we can show that a corresponding result holds if we replace the Hardy spaces by Bergman spaces. This result is not difficult and may be well known, but we do not know of anywhere it appears in the literature.

Theorem 5.1. Let 1 < p < ∞. Suppose k ∈ A^q and F is the A^p extremal function for k. If F ∈ A^{(p−1)q_1} for 1 < q_1 < ∞, then k ∈ A^{q_1}. If F ∈ H^∞, then k is in the Bloch space, and if F is continuous on the closed disc, then k is in the little Bloch space.

Proof. As stated above, k must be a positive scalar multiple of P(|F|^p / F) = P(|F|^{p−1} sgn F), where P is the Bergman projection. The result now follows since the Bergman projection is bounded from L^r to A^r for 1 < r < ∞, and since it maps L^∞ onto the Bloch space and the space of continuous functions on the closed disc onto the little Bloch space (see e.g. [5]).

We now mention some open problems that could motivate further study.

(1) For 1 < p < ∞, if F ∈ H^{(p−1)q_1}, is k ∈ H^{q_1}? As we have said, this is known from [7] to be true if p is an even integer, or if F is nonvanishing and 2 ≤ p < ∞.
(2) Is it the case that if k ∈ A^{q_1}, where 1 < q_1 < ∞, then F must be in A^{(p−1)q_1}? If not, can anything interesting be said about the regularity of F?
(3) If k is in the Bloch space or the little Bloch space, can anything of interest be said about the regularity of F?
(4) If k ∈ H^∞, must F ∈ BMO? If F ∈ H^∞, must k ∈ BMO?
(5) Does the generalization of Ryabykh's theorem (Theorem 4.2) hold for 1 < q_1 < q?
(6) Is the mapping from kernels to Bergman space extremal functions continuous on Hardy spaces? Is the mapping from extremal functions to kernels continuous on Hardy spaces? (Of course, there are multiple kernels with the same extremal function, but they are all positive scalar multiples of each other, so one can make sense of this question by specifying which kernel is chosen).
Elastic systems with correlated disorder: Response to tilt and application to surface growth

We study elastic systems such as interfaces or lattices pinned by correlated quenched disorder considering two different types of correlations: generalized columnar disorder and quenched defects correlated as ~ x^{-a} for large separation x. Using functional renormalization group methods, we obtain the critical exponents to two-loop order and calculate the response to a transverse field h. The correlated disorder violates the statistical tilt symmetry resulting in nonlinear response to a tilt. Elastic systems with columnar disorder exhibit a transverse Meissner effect: disorder generates the critical field h_c below which there is no response to a tilt and above which the tilt angle behaves as \theta ~ (h-h_c)^{\phi} with a universal exponent \phi<1. This describes the destruction of a weak Bose glass in type-II superconductors with columnar disorder caused by tilt of the magnetic field. For isotropic long-range correlated disorder, the linear tilt modulus vanishes at small fields leading to a power-law response \theta ~ h^{\phi} with \phi>1. The obtained results are applied to the Kardar-Parisi-Zhang equation with temporally correlated noise.

INTRODUCTION

Elastic objects in disordered media are a fruitful concept to study diverse physical systems such as domain walls in ferromagnets, 1 charge density waves in solids (CDW), 2 and vortices in type-II superconductors. 3 In all these systems, the interplay between elasticity, which tends to keep the object ordered (flat or periodic), and disorder, which induces distortions, produces a complicated energy landscape. 4,5,6 This leads to rich glassy behavior. For instance, at low temperature, weak defects in a crystal of a type-II superconductor, such as oxygen vacancies, can collectively pin the flux lines in the so-called Bragg glass state.
7 Vortex pinning prevents the dissipation of energy, and thus its understanding is of great importance for applications. It was observed in experiments that columnar defects produced in the underlying lattice of superconductors by heavy ion irradiation can significantly enhance vortex pinning. 8

Nelson and Vinokur 9 mapped the problem of flux lines pinned by columnar defects onto the quantum problem of bosons with uncorrelated quenched disorder in one dimension less. The mapping predicts a low temperature "strong" Bose-glass phase which corresponds to the localization of bosons in a random potential, provided the longitudinal applied field H is weak enough to create vortices with density smaller than the density of pins. For larger H, the Bose-glass can coexist with a resistive liquid of interstitial vortices which, it is argued, can freeze upon cooling into a collectively pinned weak Bose-glass phase. 10

At low tilts of the applied magnetic field relative to the parallel columnar defects, flux lines remain localized along the defects, so that vortices are characterized by an infinite tilt modulus. This phenomenon, which is known as the transverse Meissner effect, has been extensively studied experimentally. 11 Vortices undergo a delocalization transition to a flux liquid state at some finite critical mismatch angle ϑ_c between the applied field and the direction of defect alignment, i.e., at some finite transverse field H_c^⊥. The schematic phase diagram is shown in Fig. 1. The breakdown of the transverse Meissner effect above H_c^⊥ can be described by

  B_⊥ ∼ (H_⊥ − H_c^⊥)^φ,

where B_⊥ is the transverse magnetic induction due to the tilted flux lines. Heuristic arguments of Ref. 12 based on kink statistics predict φ = 1/2 in d = 1 + 1 dimensions and φ = 3/2 in d = 2 + 1.
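The way such an exponent is extracted from data can be sketched with a log-log fit. The snippet below uses noiseless synthetic data obeying B_⊥ ∼ (H_⊥ − H_c^⊥)^φ with made-up values H_c = 1 and φ = 1/2 (the d = 1 + 1 kink-statistics prediction mentioned above); it is an illustration of the fitting procedure, not of any measurement in the paper.

```python
import numpy as np

# Hypothetical transverse-Meissner data: B_perp ~ (H_perp - H_c)^phi above H_c
H_c, phi_true = 1.0, 0.5
H_perp = np.linspace(1.05, 2.0, 40)
B_perp = (H_perp - H_c) ** phi_true

# A log-log fit against the reduced field recovers the exponent phi
phi_fit, intercept = np.polyfit(np.log(H_perp - H_c), np.log(B_perp), 1)
```

In practice the main difficulty is that H_c itself is unknown and must be fitted or scanned along with φ; here it is assumed known for simplicity.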
However, experiments on a bulk superconductor (d = 3) with columnar disorder find φ ≈ 0.5, 13 while the strong-randomness real-space renormalization group suggests φ = 1 in d = 2, 14 which is in disagreement with the predictions based on kink statistics. Thus, further investigations are needed.

The theoretical advances for elastic objects in disordered media are achieved by developing two general methods: the Gaussian variational approximation (GVA) and the functional renormalization group (FRG). GVA relies on the replica method allowing for replica symmetry breaking. 15 It is exact in the mean field limit, i.e., in the limit of a large number of components. FRG is a perturbative renormalization group method which is able to handle an infinite number of relevant operators. 16

Simple scaling arguments show that the large-scale properties of a d-dimensional elastic system are governed by uncorrelated disorder in d < d_uc = 4. In particular, displacements grow unboundedly with distance, resulting in a roughness of interfaces or distortions of periodic structures. The problem is notably difficult due to the so-called dimensional reduction, which states that a d-dimensional disordered system at zero temperature is equivalent, to all orders in perturbation theory, to a pure system in d − 2 dimensions at finite temperature. However, metastability renders the zero-temperature perturbation theory useless: it breaks down on scales larger than the so-called Larkin length. 17 The peculiarity of the problem is that for d < d_uc there is an infinite set of relevant operators. They can be parametrized by a function which is nothing but the disorder correlator. The renormalized disorder correlator becomes a nonanalytic function beyond the Larkin scale. 16 The appearance of a nonanalyticity in the form of a cusp at the origin is related to metastability, and nicely accounts for the generation of a threshold force at the depinning transition.
18,19,20,21 It was recently shown that FRG can unambiguously be extended to higher loop order, so that the underlying nonanalytic field theory is probably renormalizable to all orders. 22,23,24 Although the two methods, GVA and FRG, are very different, they provide a fairly consistent picture, and recently a relation between them was established. 25 There is also good agreement with results of numerical simulations, not only for critical exponents 26,27,28 but also for distributions of observables 29,30 and the effective action. 31

The FRG techniques were also applied to pinning of elastic systems by columnar disorder. 32,33,34 The models studied by FRG, though they may be more directly applicable to systems such as charge density waves or domain walls, exhibit many features of the Bose-glass phase of type-II superconductors. In particular, they demonstrate the absence of a response to a weak transverse field and provide a way to compute the exponent φ. 34 However, since FRG intrinsically assumes collective pinning, it also predicts a slow algebraic decay of translational order, which is not expected in the strong Bose-glass state when each vortex is pinned by a single columnar pin. Thus, the FRG is able to handle only a weak Bose-glass phase, exhibiting both the transverse Meissner effect and the Bragg peaks.

In the present paper, we extend the FRG studies to two-loop order. We also extend to two-loop order our recent work 35 on elastic objects in the presence of long-range (LR) correlated disorder with correlations decaying with distance as a power law. This type of disorder can be induced, for example, by the presence of extended defects with random orientations. In particular, we address the question of the response to a tilting field and compare the effects produced by different types of disorder correlations. The outline of this paper is as follows.
Section II introduces the models of elastic objects in the presence of generalized columnar and LR-correlated disorder. In Sec. III, we study the model with LR correlated disorder using FRG up to two-loop order. In Sec. IV, we consider the response of elastic objects to a tilting field and discuss the relation to the quantum problem of interacting disordered bosons. In Sec. V, we revisit the problem of surface growth with temporally correlated noise using the results obtained in the previous sections.

II. MODELS WITH CORRELATED DISORDER

The configuration of an elastic object embedded in a D-dimensional space can be parametrized by an N-component displacement field u_x, where x belongs to the d-dimensional internal space. For instance, a d-dimensional domain wall corresponds to d = D − 1 and N = 1, vortices in a bulk superconductor to d = D = 3 and N = 2, and vortices confined in a slab to d = D = 2 and N = 1. In this paper, we restrict our study to the case N = 1 and elastic objects with short-range elasticity. In the presence of disorder, the equilibrium behavior of the elastic object is defined by the Hamiltonian

  H[u] = ∫ d^d x [ (c/2) (∇u_x)² + V(x, u_x) ],   (2)

where c is the elasticity and V(x, u) is a random Gaussian potential, with zero mean and variance that will be defined below. We denote everywhere below ∫_q ≡ ∫ d^d q/(2π)^d. The short-scale UV cutoff is implied at q ∼ Λ and the system size is L.

The random potential causes the interface to wander and become rough, with displacements growing with the distance x as C(x) ∼ x^{2ζ}. Here, ζ is the roughness exponent. Elastic periodic structures lose their strict translational order and exhibit a slow logarithmic growth of displacements, C(x) = A_d ln |x|. Although most results of the paper concern the statics at equilibrium, it is instructive to give a dynamic formulation of the problem.
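The roughness exponent ζ defined through C(x) ∼ x^{2ζ} can be measured directly from an interface configuration. The toy sketch below uses a thermal random walk, for which ζ = 1/2, rather than a disorder-pinned interface; it only illustrates the measurement itself, not the pinned problem studied here.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.cumsum(rng.standard_normal(200_000))   # toy 1D "interface", zeta = 1/2

def displacement_correlator(u, x):
    """C(x) = <(u_{x0+x} - u_{x0})^2>, averaged over the reference point x0."""
    d = u[x:] - u[:-x]
    return float(np.mean(d * d))

xs = np.array([10, 20, 40, 80, 160, 320])
Cs = np.array([displacement_correlator(u, x) for x in xs])
two_zeta, _ = np.polyfit(np.log(xs), np.log(Cs), 1)   # slope estimates 2*zeta
```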
The driven dynamics of the elastic object in a disordered medium at zero temperature can be described by the following overdamped equation of motion:

  η ∂_t u_{xt} = c ∇² u_{xt} + F(x, u_{xt}) + f.

Here, η is the friction coefficient, F = −∂_u V(x, u) the pinning force, and f the applied force. The system undergoes the so-called depinning transition at the critical force f_c, which separates sliding and pinned states. Upon approaching the depinning transition from the sliding state, f → f_c^+, the center-of-mass velocity vanishes as v ∼ (f − f_c)^β.

In the present work, we consider model (2) with two different types of correlated disorder, which are described in two subsequent sections.

A. Generalized columnar disorder

Real systems often contain extended defects in the form of linear dislocations, planar grain boundaries, three-dimensional cavities, etc. We consider the model with extended defects which can be viewed as a generalization of columnar disorder. The defects are ε_d-dimensional objects (hyperplanes) extending throughout the whole system along the coordinates x_∥ and randomly distributed in the transverse directions x_⊥, with the concentration taken to be well below the percolation limit. 36,37,38 The corresponding correlator of the disorder potential can be written as … The case of uncorrelated pointlike disorder corresponds to ε_d = 0 and columnar disorder to ε_d = 1.

For interfaces, one has to distinguish two universality classes: random bond (RB) disorder, described by a short-range function R(u), and random field (RF) disorder, corresponding to a function which behaves as R(u) ∼ |u| at large u. The random periodic (RP) universality class, corresponding to a periodic function R(u), describes systems such as CDW or vortices in d = 1 + 1 dimensions. 6

The standard way to average over disorder is the replica trick. Introducing n replicas of the original system we derive the replicated Hamiltonian as follows: …, where we have added a small mass m providing an infrared cutoff.
Replica indices a and b run from 1 to n, and the properties of the original disordered system can be restored in the limit n → 0. We indicate explicitly in Hamiltonian (6) that one has to distinguish the longitudinal and transverse elasticity moduli. Even if the bare elasticity tensor is isotropic, the effective elasticity may not be, owing to its renormalization by anisotropically distributed disorder.

B. Long-range correlated disorder

In the case of isotropically distributed disorder, a power-law correlation is the simplest assumption allowing for scaling behavior with new fixed points (FPs) and new critical exponents. The bulk critical behavior of systems with RB and RF disorder whose correlations decay as a power law x^{−a} was studied in Refs. 39,40,41,42. A power-law correlation of disorder in the d-dimensional space with exponent a = d − ε_d can be ascribed to ε_d-dimensional extended defects randomly distributed with random orientation. For instance, a = d corresponds to uncorrelated pointlike defects, and a = d − 1 (a = d − 2) describes infinite lines (planes) of defects with random orientation. A power-law correlation with a noninteger value a = d − d_f can be found in systems containing fractal-like structures with fractal dimension d_f. 43 Here we consider the model with LR-correlated disorder introduced in Ref. 35, which is defined by the disorder correlator V(x, u)V(x′, u′) = δ^d(x − x′) R_1(u − u′) + g(x − x′) R_2(u − u′), with g(x) ∼ x^{−a}. We fix the constant in Fourier space by taking g(q) = q^{a−d}. The first term in Eq. (7) corresponds to pointlike disorder with short-range (SR) correlations and the second term to LR-correlated disorder. A priori, we are interested in the case a < d, when the correlations decay sufficiently slowly; otherwise the disorder is simply SR correlated. Using the replica trick, we obtain the replicated Hamiltonian H_n[u] and the corresponding action S[u] given in Eq. (8). One could start with model (8), setting R_1(u) = 0. However, as was shown in Ref.
35, a nonzero R_1(u) is generated under coarse graining along the FRG flow. Note that the functions R_i(u) can themselves be SR, LR, or RP. The generalization of these universality classes to LR-correlated disorder is discussed in Ref. 35. In the case of uncorrelated disorder, system (2) exhibits the so-called statistical tilt symmetry (STS), i.e., invariance under the transformation u_x → u_x + f_x with an arbitrary function f_x. The STS ensures that the one-replica part of the replicated action, i.e., the elasticity, does not get corrected by disorder to all orders. The presence of LR-correlated disorder or extended defects destroys the STS, and thus allows for a renormalization of the elasticity. For a non-Gaussian distribution of disorder, higher-order (p > 2) cumulants would generate additional terms in the action with factors of 1/T^p and free sums over p replicas. These terms are irrelevant in the RG sense, as can be seen by power counting, and thus will be neglected from the beginning. We now study the scaling behavior of model (8), starting with simple power counting. The elastic term in the action fixes the scaling of the temperature under the transformation x → bx, u → b^ζ u, while the elasticity is allowed to scale as c → b^{−ψ} c. Since the corresponding exponent θ_T is positive near d = 4, the temperature T is formally irrelevant. The STS would fix ψ = 0; however, this is not the case here. ζ and ψ are for now undetermined, and their actual values will be fixed by the disorder correlators at the stable FP. Under the rescaling transformation, the disorder correlation functions R_1 and R_2 pick up factors b^{4−d−4ζ+2ψ} and b^{4−a−4ζ+2ψ}, respectively. Thus, in the vicinity of the Gaussian FP (R_i = 0), SR disorder becomes relevant for ζ − ψ/2 < (4 − d)/4, and LR disorder is naively relevant for ζ − ψ/2 < (4 − a)/4. A posteriori, these inequalities are satisfied at the RB and RP FPs. For RF disorder, however, power counting suggests that SR disorder is relevant for ζ − ψ < (4 − d)/2, while LR disorder is relevant for ζ − ψ < (4 − a)/2. 35 Let us consider the perturbation theory in disorder and its diagrammatic representation.
In momentum space, the quadratic part of action (8) gives rise to the free propagator ⟨u_a(q) u_b(q′)⟩_0 = (2π)^d δ^d(q + q′) T δ_ab C(q), represented graphically by a line. We will distinguish two different interactions, SR and LR, for which we adopt a split diagrammatic representation. Following the standard field-theoretic renormalization program, we compute the effective action and determine the counterterms needed to render the theory UV finite as d, a → 4. To regularize integrals, we use a generalized dimensional regularization with a double expansion in ε = 4 − d and δ = 4 − a. The effective action Γ[u] is by definition the generating functional of one-particle-irreducible vertex functions. However, it turns out to be nonanalytic in some directions, and therefore relying on an expansion in u is dangerous. To overcome these difficulties, we employ the formalism of functional diagrams introduced in Ref. 24. Since the temperature is formally irrelevant, we compute the correction to the effective action at T = 0. Analyzing the UV divergences of the functional diagrams contributing to the effective action, we find that the disorder is corrected only by the local parts of two-replica diagrams and the elasticity only by one-replica diagrams.

B. Correction to disorder and β functions

To one-loop order at T = 0, the correction to disorder is given by the local parts of the two-replica diagrams shown in Fig. 2. The corresponding expressions are given in Eq. (14), where we have included a factor of 1/c_0² in R_{i0}(u). In this section, bare parameters are denoted by the subscript "0". The one-loop integrals I_1, I_2, and Ĩ_2 diverge logarithmically for ε, δ → 0; in the corresponding expressions we have set m̄ = m/√c_0, and K_d is the area of a d-dimensional sphere divided by (2π)^d. We define the renormalized dimensionless disorder R_i accordingly. Note that to one-loop order, there is no correction due to the renormalization of elasticity (see below).
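The generic behavior of such one-loop flows can be illustrated numerically. The sketch below integrates the standard one-loop flow for a single SR force correlator Δ(u) = −R″(u) in the random-periodic class (ζ = 0), ∂_l Δ = εΔ − ∂_u²[Δ²/2 − ΔΔ(0)]; it is not the coupled R_1/R_2 system discussed in this paper, and the initial amplitude, ε, grid, and flow time are illustrative assumptions. It shows the curvature at the origin growing super-exponentially, the precursor of the cusp generated at the Larkin scale (cf. Sec. IV A).

```python
import numpy as np

# One-loop FRG flow for the rescaled force correlator Delta(u) = -R''(u),
# random-periodic class (zeta = 0), period 1:
#   d_l Delta = eps*Delta - d^2/du^2 [ Delta^2/2 - Delta*Delta(0) ]
eps = 1.0
N = 128                                   # grid points on one period
du = 1.0 / N
u = np.arange(N) * du
Delta = 0.1 * np.cos(2 * np.pi * u)       # smooth (analytic) initial correlator

def d2(f):                                # periodic second difference
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / du**2

dl, steps = 2e-5, 2500                    # explicit Euler up to l = 0.05
D2_init = d2(Delta)[0]                    # curvature at u = 0
for _ in range(steps):
    Delta = Delta + dl * (eps * Delta - d2(Delta**2 / 2.0 - Delta * Delta[0]))

# Linear scaling alone would give D2_init * exp(eps*l); the nonlinear term
# drives |Delta''(0)| toward a finite-scale blowup, i.e., cusp generation.
print(D2_init, d2(Delta)[0])
```

The effective diffusion coefficient Δ(0) − Δ(u) is nonnegative, so the explicit scheme is stable for the chosen step; the flow is stopped before the Larkin scale is reached, where the continuum solution becomes nonanalytic.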
The β functions are defined as the derivatives of R_i(u) with respect to the mass m at fixed bare disorder R_{i0}(u). It is convenient to rescale the field u by m^ζ and to write the β functions for the functions R̃_i = K_4 m^{−4ζ} R_i(u m^ζ). Dropping the tildes, the flow equations to one-loop order are given by Eqs. (20) and (21). The FPs of flow equations (20) and (21), characterizing the different universality classes, have been computed numerically in Ref. 35, and the corresponding critical exponents have been derived to first order in ε and δ. A remarkable property of the FRG flow is that the LR part of the disorder correlator, R_2(u), remains an analytic function along the flow for all universality classes. We will show below that, due to this feature, one can obtain the critical exponents to two-loop order by computing only the two-loop correction to elasticity, avoiding exhaustive two-loop calculations.

C. Correction to elasticity

The STS violation causes a renormalization of elasticity. The first-order correction to the single-replica part of the effective action is expressed by a single one-loop diagram in which the bare correlation function C(x) is given by Eq. (9). Using the short-distance expansion and identifying the terms of the form −(∇u_a(x))²/2T as a correction to elasticity, we find Eq. (24), where in the last line we have included K_4/c_0² in a redefinition of R_{20}(u). Since Eq. (24) is finite for ε, δ → 0, the elasticity does not get corrected at one-loop order. We now turn to the two-loop corrections. The three different sets of diagrams contributing to elasticity are depicted in Fig. 3. To render the poles in ε and δ finite, we introduce a renormalization-group Z factor, and the exponent ψ is given by its logarithmic mass derivative, where the subscript 0 indicates a derivative at constant bare parameters. Taking the derivative with respect to the mass, we obtain Eq. (28). To calculate ψ, we have to express the bare disorder in terms of the renormalized one, Eq. (29). Substituting Eq. (29) into Eq.
(28), we find that the leading two-loop corrections are exactly canceled by the counterterms, so that we are left with Eq. (30). The finite parts of the single-replica two-loop diagrams (25) are expected to correct the elasticity at three-loop order. Hence, we argue that the perturbation theory for this model is organized in such a way that the single-replica p-loop diagrams correct the elasticity only at (p + 1)-loop order. Since R_2(u) remains analytic along the FRG flow, the result (30) for ψ holds to two-loop order.

D. Roughness exponent to two-loop order

We now show how one can calculate the roughness exponent ζ to second order in ε and δ knowing only the exponent ψ computed to second order in Sec. III C. To that end, we do not need the whole FRG machinery at two-loop order. Let us start with the RB universality class. The roughness exponent is fixed by a stable RB FP solution of Eqs. (20) and (21) which decays exponentially fast at large u. The equations possess both the SR RB FP with R_2(u) = 0 and the LR RB FP with R_2(u) ≠ 0. The roughness exponent corresponding to the SR RB FP is known to second order in ε and reads 22,24 ζ_SRRB = 0.20829804 ε + 0.006858 ε². Despite the smallness of the two-loop correction, the estimate of the exponent in d = 1, ζ_SRRB = 0.6866, given by Eq. (31), visibly differs from the known exact result 2/3. One can improve the accuracy of ζ by using a [2/1] Padé approximant, which also involves the unknown third-order correction. Tuning the latter in order to reproduce the exact result 2/3 for ε = 3, we end up with an expression which is expected to be fairly accurate for 0 ≤ ε ≤ 3. We now focus on the LR RB FP with R_2(u) ≠ 0. We can integrate both sides of flow equation (21) over u from 0 to ∞, taking into account that for RB disorder R_2(u) decays exponentially fast. Since for RB disorder the integral ∫_0^∞ du R_2(u) is nonzero, we can determine the roughness exponent ζ_LRRB = δ/5 to first order in ε and δ. Fortunately, one can go beyond the one-loop approximation.
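The [2/1] Padé construction described above can be made concrete. This is a minimal sketch assuming the two-loop series coefficients ζ_SRRB(ε) ≈ 0.20829804 ε + 0.006858 ε² (values taken from the two-loop literature; they reproduce the quoted estimate 0.6866 at ε = 3); the free parameter of the Padé form, equivalent to an unknown third-order coefficient a_3 = −q_1 a_2, is fixed by requiring ζ(3) = 2/3.

```python
# [2/1] Pade approximant for the SR RB roughness exponent zeta(eps).
# Assumed two-loop series coefficients (give zeta(3) = 0.6866 as quoted):
a1, a2 = 0.20829804, 0.006858

# zeta_pade(eps) = (p1*eps + p2*eps**2)/(1 + q1*eps), with p1 = a1 and
# p2 = a2 + a1*q1 so the Taylor series is matched through second order.
# q1 is fixed by the exact d = 1 (eps = 3) result zeta = 2/3:
#   (3*a1 + 9*(a2 + a1*q1))/(1 + 3*q1) = 2/3  (linear in q1)
q1 = (3 * a1 + 9 * a2 - 2.0 / 3.0) / (2.0 - 9 * a1)
p1, p2 = a1, a2 + a1 * q1

def zeta_pade(eps):
    return (p1 * eps + p2 * eps**2) / (1 + q1 * eps)

print(zeta_pade(3.0))   # 2/3 by construction
print(zeta_pade(1.0))   # improved estimate for d = 3
```

By construction the approximant interpolates between the controlled small-ε expansion and the exact d = 1 value, which is why it is expected to be accurate on the whole interval 0 ≤ ε ≤ 3.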
Indeed, direct inspection of the diagrams contributing to the flow equation (21) shows that the higher orders can only be linear in even derivatives of R_2(u). The only term which is linear in R_2(u) comes from the renormalization of elasticity and can be rewritten as 2ψR_2(u) to all orders. Hence, Eq. (33) holds to all orders, and as a consequence ∫_0^∞ du R_2(u) is exactly preserved along the FRG flow, resulting in the exact identity (34). Substituting Eq. (30) into Eq. (34), we obtain the roughness exponent ζ_LRRB to second order in ε and δ. Before we proceed to compute the exponents, let us check the stability of the SR and LR RB FPs. As was shown in Ref. 35, the SR RB FP is unstable with respect to LR disorder if ζ_LRRB > ζ_SRRB. To one-loop order, this implies that the SR RB FP is stable for δ < 1.0415ε. Equating (34) and (32), we can compute the stability regions to second order in ε and δ (see Fig. 5). An alternative way to determine the crossover line relies on the requirement that the exponent ψ be a continuous function of ε and δ. It is zero in the region controlled by the SR FP, and therefore has to vanish when approaching the crossover line from the LR stability region. Since ψ is of second order in ε and δ, the ψ criterion at two-loop order gives the same stability regions as equating the roughness exponents at one-loop order. However, we can significantly improve the latter if we take into account that ψ = 0 on the crossover line. The resulting crossover line is shown in Fig. 5. We can also improve the two-loop estimate of ψ. To that end, we write down a formal expansion of ψ in ε, namely ψ = ε² f_1(δ/ε) + ε³ f_2(δ/ε) + …. The function f_1(x) is essentially the function shown in Fig. 4. We now tune the function f_2(x) so as to make ψ = 0 on the crossover line, and find Eq. (36). Using Eqs. (35) and (36), we compute the roughness exponent ζ_LRRB as a function of δ for ε = 1 and ε = 2 (see Fig. 6).
Unfortunately, the accuracy rapidly deteriorates with growing ε, so that estimating the roughness exponent for ε = 3 is very difficult; this is postponed to Sec. V. Similarly to the case of RB disorder, one can show that the roughness exponent at the LR RF FP is known exactly to the same order, and that the crossover line between the SR and LR RF FPs is exactly δ = ε.

IV. RESPONSE TO TILT

In this section, we study the response of a d-dimensional elastic object to a small tilting force tending to rotate the object in the plane (x_1, u). The tilting force can be incorporated into the Hamiltonian via a term −h ∫ d^d x ∂_{x_1} u_x. Such a force can be caused, for example, by a tilt of the applied field in superconductors or by tilted boundary conditions in the case of interfaces. For superconductors, h ∝ H_⊥ φ_0, where H_⊥ is the component of the applied magnetic field transverse to the flux lines directed along x_1 and φ_0 is the magnetic flux quantum. 12 Since we restrict our consideration to the case N = 1, our results can be applied only to flux lines confined in (1+1) dimensions. However, the methods we use here can be extended to general N, and therefore applied to vortices in (2+1) dimensions. We focus on the response of the system to a small field h, which can be measured by the average angle between the perturbed and unperturbed orientations of the object in the (x_1, u) plane: ϑ(h) := ⟨∂_{x_1} u_x⟩. In the absence of disorder, straightforward minimization of the Hamiltonian leads to the linear response ϑ(h) = h/c. To study the effect of disorder, it proves more convenient to work in the tilted frame, u_x → u_x + ϑx_1. In the corresponding Hamiltonian (39), the disorder potential becomes V(x, u_x + ϑx_1) and the field u satisfies ∂_{x_1} u_x = 0. Note that, due to the violation of the STS, the tilted system can exhibit an anisotropic effective elasticity even if the bare elasticity and disorder are isotropic.
We now show by simple power counting that a finite tilt introduces a new length scale into the problem, which can be associated with the correlation length defined through the connected two-point correlator. Indeed, upon the scaling transformation x → bx, u → b^ζ u, the arguments of the disorder term in Hamiltonian (39) scale like V(bx, b^ζ u_x + ϑbx_1). Comparing the two terms of the last argument, we find that a finite ϑ changes the character of the disorder correlator above the length scale ξ_ϑ ∼ ϑ^{−1/(1−ζ)}, which diverges for ϑ → 0 provided that ζ < 1. Below ξ_ϑ one can neglect the tilt, while above ξ_ϑ the dependence on u_x is completely washed out and the ϑ term starts to suppress the correlation of disorder along x_1. Thus, ξ_ϑ serves as the correlation length along x_1, and therefore c_1 does not get renormalized beyond this scale. In the next two sections, we investigate the difference in the response to tilt for anisotropically distributed extended defects and for isotropic LR-correlated disorder.

A. Response in the presence of columnar disorder

Here, we extend the previous one-loop FRG studies 32,34 of elastic systems in the presence of columnar disorder to two-loop order and proceed to describe the transverse Meissner physics in a quantitative way. We consider the model with ε_d-dimensional extended defects introduced in Sec. II A. We take c_i = c (i = 1, ..., ε_d), and we are also free to put c_i = 1 (i = ε_d + 1, ..., d) since these do not get corrected by disorder. Simple power counting shows that the upper critical dimension of the problem is d_uc = 4 + ε_d. We use dimensional regularization of integrals with an ε̄ = 4 − d + ε_d expansion. The FRG flow equations to two-loop order are given by Eqs. (42)-(45), where θ_T = d − 2 + 2ζ and Λ_0 is the bare cutoff. In Eq. (45), h̃ is the coefficient in front of a term ultimately generated in the effective Hamiltonian along the FRG flow. 32 The correction to h̃ is strongly UV divergent, and thus is nonuniversal.
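The tilt length quoted above follows from balancing the two terms in the rescaled disorder argument; a short power-counting sketch (amplitudes are nonuniversal and omitted):

```latex
% Arguments of the disorder term scale as V(bx,\, b^{\zeta}u_x + \vartheta\, b\, x_1).
% The two contributions to the second argument balance when
b^{\zeta} \sim \vartheta\, b
\quad\Longrightarrow\quad
\xi_{\vartheta} \sim \vartheta^{-1/(1-\zeta)} ,
```

which indeed diverges for ϑ → 0 provided ζ < 1, while for ζ ≥ 1 the tilt never dominates and no finite crossover scale appears.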
Note that the flow equation for generalized columnar disorder (42) coincides to all orders with that for pointlike disorder up to the change ε̄ → ε. Let us start with the analysis at T = 0. The flow picture closely resembles that of the depinning transition, with the tilt ϑ, the longitudinal elasticity c_∥, and the tilting field h playing the roles of velocity, friction, and driving force, respectively. For d < 4 + ε_d, the running disorder correlator R^{(4)}(u) blows up at the Larkin scale l_c. Consequently, the longitudinal elasticity diverges at zero tilt ϑ, in a way similar to the mobility divergence at the depinning transition in the quasistatic limit. Beyond the Larkin scale, l > l_c, the correlator R^{(2)}(u) develops a cusp at the origin, R′′′(0⁺) > 0. The term (46) is generated in the effective Hamiltonian, and R^{(4)}(0) changes its sign from positive to negative. The latter leads to a power-law decay of the longitudinal elasticity, c_∥ ∝ L^{−ψ}, with ψ expressed through the FP solution R*(u) of the flow equation (42). Similarly to the threshold-force generation at the depinning transition, the term (46) generates a critical tilting field h_c. We are now in a position to compute the exponent φ, which we define through ϑ ∼ (h − h_c)^φ. To that end, we renormalize the equilibrium balance equation h − h_c = c_1(L)ϑ up to the scale L = ξ_ϑ, at which the elasticity c_1 stops being renormalized. Using Eq. (41), we obtain the exact scaling relation (51), φ = (1 − ζ)/(1 − ζ + ψ). The exponents ζ, ψ, and φ computed to second order in ε̄ for the different universality classes are summarized in Table I. Note that the expansions in ε̄ are expected to be Borel nonsummable, and thus ill-behaved at high orders and large ε̄. In this light, using the exact relation (51) may be more favorable than the expansions given in the last column of Table I.
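The scaling relation for φ can be sketched by renormalizing the balance equation up to the tilt length; the closed form below is inferred from this power counting, assuming c_1(L) ∼ L^{−ψ} up to L = ξ_ϑ ∼ ϑ^{−1/(1−ζ)}, rather than quoted verbatim from the text:

```latex
h - h_c = c_1(\xi_\vartheta)\,\vartheta
\;\sim\; \xi_\vartheta^{-\psi}\,\vartheta
\;\sim\; \vartheta^{\,1 + \psi/(1-\zeta)}
\quad\Longrightarrow\quad
\vartheta \sim (h - h_c)^{\phi},
\qquad
\phi = \frac{1-\zeta}{1-\zeta+\psi} .
```

With this form, ψ > 0 (columnar disorder) gives φ < 1, the transverse Meissner behavior of Sec. IV A, while ψ < 0 (LR-correlated disorder) gives φ > 1, the thresholdless power-law response of Sec. IV C.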
Systems described by the RP universality class exhibit a slow logarithmic growth of displacements. The universal amplitude can easily be deduced from the results for uncorrelated disorder and is known to two-loop order. A finite temperature T > 0 rounds the cusp of the running disorder correlator R_l(u), so that in the boundary layer u ∼ T_l it significantly deviates from the FP solution and obeys the scaling form (54), 46 where χ = |R*′′′(0)|. However, as was pointed out in Ref. 33, the flow equations for columnar disorder have a remarkable feature in comparison with uncorrelated disorder. Indeed, substituting the boundary-layer scaling (54) into the temperature flow equation (44), we obtain Eq. (55). As follows from Eq. (55), the effective temperature T_l vanishes at a finite length scale L_loc = e^{l_loc}/Λ_0, so that localization effects set in only on scales larger than l_loc > l_c.

B. Interacting disordered bosons in (1+1) dimensions

Let us discuss the special case of flux lines in (1+1) dimensions, whose qualitative phase diagram is shown in Fig. 1. The transverse Meissner physics for the collectively pinned weak Bose glass and small tilt angles ϑ = B_⊥/B can be explored using the results obtained in the previous section for the RP universality class with ε̄ = 4 − 2 + 1 = 3. Here, we restore the dependence on the flux-line density n_0 by fixing the period of R(u) to 1/n_0. In contrast to the Bragg glass, the weak Bose glass survives in d = 2. Indeed, for uncorrelated disorder in d = 2, the temperature turns out to be marginally relevant, so that the system has a line of FPs describing a super-rough phase with anomalous growth of the two-point correlation ⟨(u_x − u_0)²⟩ = A(T) ln² x + O(ln x). 47 According to Eqs. (55) and (56), for columnar disorder the temperature vanishes at a finite, though very large, scale. Unfortunately, the large value of ε̄ makes the estimation of φ extremely unreliable. Indeed, the expansion in ε̄ shown in Table I leads to a zero value of φ.
The exact scaling relation (51), with ψ computed using the expression from Table I, gives a nonzero value of φ. The one-loop result reproduces the estimate φ = 1/2 given by heuristic random-walk arguments based on the entropy of flux lines wandering in the presence of thermal fluctuations. 12 The model of vortices wandering in a random array of columnar defects can be mapped onto a quantum problem of disordered bosons. 9 One can regard each vortex as an imaginary-time world line of a boson, so that the columnar pins parallel to the vortices become quenched pointlike disorder in the quantum problem. The transverse magnetic field H_⊥ plays the role of an imaginary vector potential h for the bosons, 48 so that the bosonic Hamiltonian (59) turns out to be non-Hermitian. Here, ψ†(x), ψ(x) are the bosonic creation and annihilation operators, and n̂(x) = ψ†(x)ψ(x) is the density operator. U(x) is a short-range repulsive interaction potential between bosons, with strength U_0 = ∫dx U(x). The disorder is described by a time-independent Gaussian random potential V(x) with zero mean and short-range correlations. We can pass to a quantum hydrodynamic formulation of model (59), expressing everything through the bosonic fields θ(x) and ϕ(x), which satisfy the canonical commutation relation. 49 For bosons with average density n_0, this gives Hamiltonian (61), where in the disorder part we have retained only the leading contributions coming from backscattering on impurities. The forward-scattering term can be eliminated by a shift of the phonon field θ(x) which does not depend on the time t, and thus this term does not contribute to the current J ∼ ∂_t θ(x). The Luttinger-liquid parameter g and the phonon velocity v_p are determined by n_0 and U_0. The imaginary-time (τ = it) action corresponding to Hamiltonian (61) for a particular realization of disorder can be derived using a canonical transformation, in which the momentum conjugate to θ(x) is given by Eq. (60).
Averaging e^{−S_V/ℏ} over disorder by means of the replica trick and keeping only the most relevant terms, we obtain the replicated action (64). The imaginary-time action (64) is identical to the Hamiltonian of a periodic elastic system with columnar disorder (6). The imaginary time plays the role of the longitudinal coordinate, τ ↔ x, which is parallel to the columnar pins. The Planck constant stands for the temperature, ℏ ↔ T, and the phonons are related to the dimensionless displacement field, θ(x, τ) = −n_0 u(x). There is the following correspondence between quantities in the vortex and boson problems. 48 The vortex tilt angle ϑ caused by the transverse field H_⊥ corresponds to the boson current J = (−i)∂H/∂h induced by the imaginary vector potential h. For h = 0, the disordered bosons undergo a superfluid-insulator transition at g = 3/2. This determines the temperature T_BG, such that g(T_BG) = 3/2, above which the vortices form a liquid (see Fig. 1). It is known that in one dimension there is no difference between bosons and fermions, and both types of particles are described by the Luttinger liquid (61). In particular, hard-core bosons can be mapped onto free fermions, which corresponds to the special value of the Luttinger parameter g(T*) = 1, defining the temperature T*. In Ref. 14, the mapping onto free fermions was used to study the transverse Meissner effect in (1+1) dimensions. Free fermions on a lattice are described by the tight-binding model, where c†, c are on-site fermion creation and annihilation operators and µ is the chemical potential; w_i is a random hopping matrix element and ǫ_i is a random pinning energy. In Ref. 14, both the random-pinning and the random-hopping models were studied, using the exact results for the Lloyd model and the strong-randomness real-space RG, respectively. It was found in both cases that J ∼ h − h_c, i.e., φ = 1, which significantly differs from the FRG prediction (58).
The difference can be attributed to the fact that the free-fermion analogy is limited to the special point g = 1 (T = T*), while the FRG prediction may be valid only at low temperatures, since it is controlled by the zero-temperature fixed point. The correspondence between the temperature and the Planck constant in the two problems suggests that the zero-temperature FRG FP may have a counterpart in the quantum problem in the form of an instanton solution. This may account for the consistency of the exponent φ computed by FRG with the one estimated using heuristic arguments based on kink statistics. High-T_c superconductor films grown by deposition often exhibit larger critical currents than their bulk counterparts due to the formation of dislocations running parallel to the crystalline axis, and thus they are natural candidates for verifying the above results. However, as was discussed in Ref. 50, the picture may be more involved, since the dislocation lines can meander or can be of relatively short length, which breaks up the Bose glass into pieces along the direction of the crystalline axis.

C. Response in the presence of long-range correlated disorder

We now consider the response to tilt in the presence of isotropic LR-correlated disorder. In contrast to the case of generalized columnar disorder, the h̃ term is not generated, owing to the analyticity of the LR part R_2(u) of the disorder correlator. Moreover, the elasticity remains finite along the FRG flow, though it grows as a power law c ∼ L^{−ψ} with ψ < 0, given by Eq. (30) and shown in Fig. 4. As a consequence, there is no threshold transverse field: the system is tilted by any finite tilting force. Renormalizing the balance equation h = c_1ϑ up to the scale ξ_ϑ given by Eq. (41), we see that the response to the tilting force h is given by a power law with the exponent φ > 1 defined by Eq. (51). The response to tilt in systems with uncorrelated, columnar, and LR-correlated disorder is shown in Fig. 7.
As one can see from the figure, the response of systems with LR-correlated disorder interpolates between those of systems with uncorrelated and columnar disorder. In particular, we argue that in the presence of LR-correlated disorder, vortices can form a new vortex-glass phase which exhibits Bragg peaks and a vanishing linear tilt modulus without a transverse Meissner effect. We will refer to this phase as the strong Bragg glass. In analogy with the Bose glass, one can attempt to map the system with linear defects of random orientation, corresponding to LR-correlated disorder with a = d − 1, onto a quantum system consisting of interacting bosons and heavy particles moving with random quenched velocities according to classical mechanics.

V. KARDAR-PARISI-ZHANG EQUATION WITH TEMPORALLY CORRELATED NOISE

In this section, we address the relevance of our results to the Kardar-Parisi-Zhang (KPZ) equation (and the closely related Burgers equation), which describes the dynamics of a stochastically growing interface. 51 The latter is characterized by a height function h(x, t), x ∈ R^{d′}, which obeys the nonlinear stochastic equation of motion ∂_t h = ν∇²h + (λ/2)(∇h)² + η(x, t). The first term in Eq. (68) represents the surface tension, while the second term describes the tendency of the surface to grow locally along its normal. The stochastic noise η(x, t) is usually assumed to be Gaussian with short-range correlations. Here, we consider noise with long-range correlations in both time and space. It is defined in Fourier space through its spectral density, 52 which has power-law singularities of the form D(k, ω) ∼ D_0 + D_θ k^{−2ρ} ω^{−2θ}. Such temporal correlations can originate from impurities which do not diffuse and which impede the growth of the interface, while the spatial correlations can be due to the presence of extended defects. Since there is no intrinsic length scale in the problem, the asymptotics of the various correlation functions are given by simple power laws.
For instance, the height-height correlation function scales as ⟨[h(x, t) − h(x′, t′)]²⟩ ∼ |x − x′|^{2χ} f(|t − t′|/|x − x′|^z), where χ is the roughness exponent and z is the dynamic exponent describing the scaling of the relaxation time with length (not to be confused with the dynamic exponent z at the depinning transition, which is not used in this paper). Medina et al. 52 studied the KPZ equation with the noise spectrum (70) using the dynamical renormalization group (DRG) approach, and here we adopt the notation introduced in their work. Let us briefly outline the results obtained in Ref. 52, restricting ourselves mainly to the case d′ = 1. The flow equations, expressed in terms of the dimensionless parameters U_0 = K_{d′}λ²D_0/ν³ and U_θ = K_{d′}λ²D_θ/ν³, are given to one-loop order by Eqs. (72)-(75) of Ref. 52. Note that the DRG calculations are uncontrolled, in the sense that there is no small parameter. For white noise (θ = 0), the KPZ equation is invariant under tilting of the surface by a small angle. This STS implies that the vertex λ does not get corrected by the noise to all orders, which results in the exact identity χ + z = 2. Besides the known SR FP with U_θ = 0, the flow equations (72)-(75) are expected to have a different LR FP with U_θ ≠ 0. It was argued that the term U_θ in the noise spectrum D(k, ω) acquires no fluctuation corrections: the scaling of U_θ is completely determined by its bare dimension, so that Eq. (74) is exact to all orders. 52 This allows one to compute the exact critical value θ_c = 1/6 (for ρ = 0) at which there is a crossover from the SR FP to the LR FP. For arbitrary ρ, the crossover to the LR FP happens for 6θ + 4ρ > 1. The term U_θ then becomes relevant and, as follows from Eq. (74), an exact scaling relation holds at the LR FP. Let us for the moment ignore the noise correction to λ in Eq. (73). This approximation, which restores the STS, is valid only for small θ and yields χ*(θ, ρ) = (1 + 4θ + 2ρ)/(3 + 2θ).
For large θ, one can expect a significant deviation of the exponents z and χ from z* and χ*. To gain insight into the problem, the authors of Ref. 52 solved the flow equations (72)-(75) numerically for finite θ and ρ = 0. They found that the physical LR FP exists only for θ < 1/4, while nothing special is physically expected at θ = 1/4. It was argued that the problem originates from infrared divergences of the integrals, and that an infinite number of additional terms D_n are generated in the noise spectral density under the DRG. Keeping track of the renormalization of all the D_n, the authors of Ref. 52 solved the truncated system of flow equations numerically and found that the critical exponents for ρ = 0 can be fitted by expression (83). We now revisit the problem in the light of what has been learned in the previous sections. Using the well-known Cole-Hopf transformation Z = exp[(λ/2ν)h], one can eliminate the nonlinear term in Eq. (68) and obtain a diffusion equation in a time-dependent random potential, ∂_t Z = ν∇²Z + (λ/2ν)η(x, t)Z. The solution of Eq. (84) can be regarded as the partition function of a directed polymer (DP) of length t in (d′+1) dimensions with ends fixed at (0, 0) and (x, t), with ν = T/(2c) and λ = 1/c. The DP is a one-dimensional (d = 1, ε = 3) elastic object with a d′ = N-dimensional target space. Thus, the time-dependent noise η(x, t) in the KPZ equation is mapped onto the quenched disorder V in the DP picture. This gives the exact relation between the dynamic exponent of the KPZ problem and the DP roughness exponent, which reads z = 1/ζ. Spatial correlations in η(x, t) correspond to correlations of the quenched disorder V in the directions transverse to the DP. As the exponent ρ varies from 0 to 1, the quenched disorder interpolates between the RB and RF universality classes. For example, based on the mapping between the DP and KPZ problems, the exponent z changes from 3/2 to 1 for d′ = 1 and white random noise (θ = 0) as ρ varies from 0 to 1.
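The Cole-Hopf step can be checked symbolically; a minimal verification in 1+1 dimensions that the linear diffusion equation ∂_t Z = ν ∂_x² Z + (λ/2ν) η Z is equivalent to the KPZ equation for h (symbol names are illustrative):

```python
import sympy as sp

x, t = sp.symbols('x t')
nu, lam = sp.symbols('nu lambda', positive=True)
h = sp.Function('h')(x, t)       # KPZ height field
eta = sp.Function('eta')(x, t)   # noise

Z = sp.exp(lam / (2 * nu) * h)   # Cole-Hopf transformation

# Residual of the linear diffusion equation for Z ...
diffusion = sp.diff(Z, t) - nu * sp.diff(Z, x, 2) - lam / (2 * nu) * eta * Z
# ... equals (lambda/2nu)*Z times the residual of the KPZ equation for h:
kpz = sp.diff(h, t) - nu * sp.diff(h, x, 2) - lam / 2 * sp.diff(h, x)**2 - eta

residual = sp.simplify(sp.expand(diffusion - lam / (2 * nu) * Z * kpz))
print(residual)  # 0: the two equations are equivalent
```

The vanishing residual confirms that the nonlinear (∇h)² term is absorbed entirely into the exponential change of variables, so Z obeys a linear equation in a multiplicative random potential.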
Since z_SR(d′ = 1, ρ = 0) = 3/2, the exponent (83) computed using the modified DRG violates the criterion of LR FP stability, and thus is ruled out. Substituting the roughness exponents computed using FRG for the RB (ρ = 0) and RF (ρ = 1) universality classes into Eq. (86), and using the relation δ = 3 + 2θ, we obtain the exact (for ρ = 0, 1 and presumably for any ρ) identity (87). To one-loop order in FRG, i.e., for ψ = 0, exponent (87) coincides with the estimate given by the DRG (79) for small θ. Though the exponent ψ has been computed in Sec. III C for the RB (ρ = 0) and RF (ρ = 1) universality classes to two-loop order in a controlled way, the large value ε = 3 of the expansion parameter describing the DP problem makes the estimation of ψ highly unreliable. Nevertheless, since ψ is zero on the crossover line between the LR and SR FPs, we can determine this line exactly for ρ = 0, 1 from the condition z_LR < z_SR = 3/2, which leads back to Eq. (77). Taking into account that ψ is nonpositive for columnar disorder, we obtain the lower and upper bounds (88) on z(θ) for ρ = 0 and θ ∈ [1/6, 1/2]. The critical exponent z computed using FRG and DRG, and measured in the numerical simulations of Ref. 53, is shown in Fig. 8. The KPZ equation with temporally correlated noise has also been studied using a self-consistent approximation (SCA). 54 The SCA equations have two strong-coupling solutions. The first one exhibits a crossover-like behavior at θ = 1/6 and corresponds to the one-loop FRG prediction. The second solution, which is considered to be the dominant one, leads to a smooth dependence of z on θ, shown in Fig. 8. Both SCA solutions are in agreement with the FRG prediction that the exponent z is a decreasing function of θ, while the modified DRG suggests that z increases with θ. However, the second SCA solution, considered to be dominant, does not satisfy the bounds (88), and thus is ruled out. Let us generalize identity (76) to the case of temporally correlated noise.
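As a small numerical cross-check, combining the one-loop result ζ_LRRB = δ/5 (Sec. III D) with the correspondence δ = 3 + 2θ and the standard DP-KPZ relation z = 1/ζ gives a one-loop z(θ) at ρ = 0, valid above the crossover θ_c = 1/6 (the function name is illustrative):

```python
# One-loop KPZ dynamic exponent for temporally correlated noise at rho = 0,
# via the directed-polymer mapping:
#   zeta_LRRB = delta/5 (one loop), delta = 3 + 2*theta, and z = 1/zeta,
# meaningful for theta >= theta_c = 1/6 (below it the SR value z = 3/2 holds).
def z_one_loop(theta):
    return 5.0 / (3.0 + 2.0 * theta)

print(z_one_loop(1.0 / 6.0))  # ~ 3/2: matches z_SR at the crossover
print(z_one_loop(0.5))        # 1.25: z decreases with theta, as FRG and SCA predict
```

The continuity at θ_c = 1/6 and the monotonic decrease of z with θ are exactly the features used in the text to rule out the modified DRG fit.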
Note that the solution of the KPZ equation h(x, t) gives the free energy of the DP (85). The free energy per unit length f(ϑ) of the DP tilted by the transverse field H_⊥ to the angle ϑ can be written as f(ϑ) = f(0) + c̄ϑ^α − H_⊥ϑ. The naive elastic approximation suggests α = 2. In order to take into account the renormalization of elasticity, we determine the exponent α from the condition that at equilibrium the response to the field H_⊥ is ϑ ∼ H_⊥^φ. This fixes α = 1 + 1/φ, with φ given by Eq. (51). Then the total free energy of the DP of length t can be written as a function of the free-end coordinate x as in Eq. (89). The last term in Eq. (89) describes the typical fluctuation of the free energy due to the disorder and is given by Eq. (71). Balancing the last two terms of Eq. (89) and using Eq. (51), we obtain the exact scaling relation (90).

VI. SUMMARY

We have studied the large-scale behavior of elastic systems, such as interfaces and lattices, pinned by correlated disorder using the functional renormalization group. We considered two types of disorder correlations: columnar disorder generalized to extended defects and LR-correlated disorder. Both types of disorder correlations can be produced in real systems, for example, by subjecting them to either static or rotating ion-beam irradiation. We have computed the critical exponents to second order in ε = 4 − d and δ = 4 − a for LR-correlated disorder, and to second order in ε̄ = 4 − d + ε_d for ε_d-dimensional extended defects. The correlation of disorder violates the statistical tilt symmetry and results in a highly nonlinear response to a tilt. In the presence of generalized columnar disorder, elastic systems exhibit a transverse Meissner effect: disorder generates a critical field h_c below which there is no response to a tilt and above which the tilt angle behaves as ϑ ∼ (h − h_c)^φ with a universal exponent φ < 1.
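The fixing of α can be spelled out from the text's own conditions: minimizing the tilted free energy and matching the response exponent gives

```latex
% Minimize f(\vartheta) = f(0) + \bar{c}\,\vartheta^{\alpha} - H_\perp \vartheta :
%   \partial_\vartheta f = 0
%   \;\Rightarrow\; \bar{c}\,\alpha\,\vartheta^{\alpha-1} = H_\perp
%   \;\Rightarrow\; \vartheta \sim H_\perp^{1/(\alpha-1)} .
% Matching to the equilibrium response \vartheta \sim H_\perp^{\phi}
% requires 1/(\alpha-1) = \phi, i.e.
\alpha = 1 + \frac{1}{\phi}.
```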
The periodic case describes a weak Bose glass, which is expected in type-II superconductors with columnar disorder at small temperatures and at high vortex densities exceeding the density of columnar pins. The weak Bose glass is pinned collectively and shares features of the Bragg glass, such as a power-law decay of translational order, and features of the strong Bose glass, such as a transverse Meissner effect. For isotropic LR-correlated disorder, the linear tilt modulus vanishes at small fields, leading to a power-law response ϑ ∼ h^φ with φ > 1. The response of systems with LR-correlated disorder interpolates between the responses of systems with uncorrelated and columnar disorder. We argued that in the presence of LR-correlated disorder vortices can form a strong Bragg glass which exhibits Bragg peaks and a vanishing linear tilt modulus without a transverse Meissner effect. The elastic one-dimensional interface, i.e., the directed polymer, in the presence of LR-correlated disorder can be mapped onto the Kardar-Parisi-Zhang equation with temporally correlated noise. Using this mapping, we have computed the critical exponents describing the surface growth and compared them with the exponents obtained using the dynamical renormalization group, the self-consistent approximation, and numerical simulations.
Impact of Plant Closures on Urban and Regional Communities: A Case Study of the South Australian Gas Industry and its Workers

Purpose: This research paper has been compiled from the articles and journals of various scholars. The elements and factors that affected the plant closure, and its impact on the regional economy, are analyzed in depth.

Design/Methodology/Approach: The economic conditions in their geographical setting are discussed in detail, together with a case study of the Limestone Coast region of South Australia. In other words, the paper presents an in-depth analysis of the pine plantations, the Plan for Accelerating Exploration (PACE) Gas Program for electricity generators and industrial users, and the role of gas on the Limestone Coast, a region that produces premium wine.

Findings: Four scenarios were planned before proceeding with the work. At the same time, the study found that different stakeholders are also involved in agricultural practices. Hence, it has focused on the overall mission of the industry.

Practical Implications: The study emphasized qualitative methods and materials in dealing with the case study of the Limestone Coast gas industry.

Originality/value: This study contributes by supporting the view that conceptual, strategic, and technical competencies have a significant relationship with competitive intelligence, and that these findings can be used by governments in regional and local economies.

Introduction

Plants provide a livelihood to a huge number of people in any region, state, or country. The closure of plants results in a loss of livelihood for those people and leads to large-scale displacement of workers, which has an impact on both developing and developed economies. The sustainability of the rural and urban economy has been an area of significance for economists, and the closure of plants affects these economies.
The reasons for closure are important for research and also raise concerns about public policy in the region (Beer, 2018). Researchers need to find out how communities respond to closures and what the effects on the labor market are. Further research is also needed on the impact of public policies and on how effective labor-market programs are in assisting displaced workers. The major aspects of the research are the analysis of regional development, the reaction of locals, policies, and market dynamics. This research focuses on the extraction process, mining, and support for locals in the particular region. The research will also examine the effects of the closure of the plant in the region, both negative and positive: changes in the economy, employment opportunities, diversity and demography, infrastructure, and services for the area concerned. The research is based on the closure of plants in an urban community situated in Australia. The stakeholders have many concerns regarding the closure and perspectives on strategies that would benefit them. This paper covers the methods used for understanding development in the society and economy of the region, locally and with respect to the country: how the supply chain will be affected, the labor requirements that need to be fulfilled, the benefits the region would receive under state-developed royalties for the public, and the corporate social responsibility that should be undertaken. All these methods will vary and help in defining their effectiveness in the region. According to Staples Theory, the profit generated depends on the inputs and outputs of resource management, and resource exports promote economic activity (Marcos-Martinez et al., 2019). In resource-rich countries, the internal driving forces of growth are weak relative to the population. The changes in the economy and demography highlight the dynamics of the process.
Resource-rich countries like Australia, New Zealand, and Canada have historically developed in all ways through food-supply management and the supply of minerals to other nations around the world. For quite a long time, Australia has been a chief exporter of various natural resources, such as coal and iron ore, which earned it the title of the world's quarry. From 2018, Australia surpassed Qatar in the export of natural gas. With the increased exports of natural gas, the domestic price rose from $4 per gigajoule in 2012 to $12 in 2018. The increase in the price of gas in the domestic market has created pressure on the nation's economy. In other words, in seeking to meet global demand, the manufacturing units have become concerned about protecting the supply for domestic purposes. Regions of resource extraction mostly struggle to retain the profits, especially during the initial phases. Therefore, it is significant to consider examples that help to overcome this challenge (Witt et al., 2018). This paper focuses on the best options available for locals in the region of extraction and on how the energy resources can be useful to the people and stakeholders related to the plant. The paper examines the best possible ways for the development and growth of the region.

Literature Review

The literature review evaluates the whole report by examining and scrutinizing the literature on the present context. The literature review section is essential in academic writing. It provides clear knowledge about the topic, drawing on extracts from various journals by different researchers. As a matter of fact, the literature review gives a clear picture of the theoretical and methodological ideas of the related context. On the other hand, it should be remembered that the literature review cannot be treated as original research work; it acts as a synopsis of the existing academic literature.
Resource extraction mostly benefits the local host areas, which gain from the resources extracted in their territory, and plant closures put these gains at risk. This is the basic issue that raises the stakes of the problem. In Australia, these local host areas are very dominant in nature, which causes the situation to change day by day. A plant closure has a great impact on the regional financial situation as well as on the geographic scenario. A case on this topic is discussed in this context, and the analysis and facts of the case are reviewed lucidly. The case occurred on the Limestone Coast in South Australia. Various policies were evaluated that acted upon the extraction and distribution of gas in regional areas and led to the formation of a new industry there. Different plans and frameworks were constructed to structure the development of the current situation.

Factors Affecting the Plant Closure

According to Beer et al. (2019), the various research works on the huge range of plant closures have enriched the debates in scholarship and academic research. They eloquently discuss the new steps and ideas which help in understanding large-scale dismissals, which are interlinked with plant closures. These closures have immense impacts on social, economic, and geographical scales in regional areas, and have prompted new, detailed debates on data-collection methodology. Further, a theoretical analysis has scrutinized the practical efforts embraced by local workers. The scenario was developed from a framework of certain decision-making procedures. The most significant point is that the whole process of examination and testing required merging different policies of different communities, which took the analysis to its ultimate depth. As observed by Markey et al.
(2019), the evolving financial condition of rural areas continues to be a keen area of interest for specialists in geography and economics. This situation forced the development of the semi-urban regions that have emerged over the last few decades. Ryser et al. (2019) gave similar opinions, describing the elements that have an impact on plant closures. Plummer et al. (2018) stated that the primary focus of the research was the changes and developments in the improving conditions of local areas. The main factors affecting these were newly developed policies and various acts that changed the market scenario via different pathways. According to Ryser et al. (2019), most analytical work in this sector tends to incline toward the resource sectors, specifically the mining areas responsible for energy extraction, which provide the main advantages to local community regions. Another factor affecting this scenario is the extent of the host area over which resource extraction is spread. Measham et al. (2016) provided instances from the current circumstances. The most vital roles were played by the mobility of the financial structure and the employment policies governing the distribution of income and the maintenance of gender equality. The account also involves demography, infrastructure quality, and services. The general discussion indicates the importance of retaining more and more of the advantages in the host regions. This fact was broadly discussed during parliamentary inquiries; the House of Representatives Standing Committee on Industry, Innovation, Science, and Resources (2018) can be treated as a good instance in this regard.
The inquiry brought together a varied set of stakeholders who put forward numerous, often contrasting, perspectives on proper mechanisms for retaining the advantages locally. These include the social responsibilities of the corporate sector, the local supply-chain industry, and the need to hire laborers from local areas. Staden and McKenzie (2019) endorsed various schemes to share and distribute advantages, such as royalties paid to areas by the state. The initiative focused on the allocation of public taxes to local communities. It may or may not arise in addition to, or separately from, the social responsibilities of the corporate sector. The impact of these features tends to vary between different contexts, which depend primarily on regional economic conditions.

Impact of Plant Closure on the Regional Economy

The analysis of the impact of plant closure mainly focuses on settler communities like Australia's, and depicts how the financial condition is centrally shaped by the resource base. Rural economies in settler communities grow out of histories that are already shaped by systems of land use, institutional infrastructures, and social and communal periods. These communal periods become progressive and self-reinforcing, with the outcome of remaining locked in and of diminished resilience to economic disturbances. Fielke and Wilson (2017) identified the part played by economic and political disturbances in the restructuring, as well as the opening of new companies and industries in these areas. The farming sector was not regulated in the early years, and this triggered a chain of connected alterations, developed through huge compression of prices, incoming debts, the consolidation of smaller assets, and the diminishing population in the regions.
The impact of plant closure on the regional economy prompted some promising initiatives for future improvement. Such initiatives generally involve various stakeholders, who explore the upcoming possibilities and thus choose the most desirable and least desirable courses of action. According to Lendel et al. (2019), the basic requirement was to consider the long-term plans of the industries among various other companies. A medium-term scenario-planning exercise, covering around a decade, was established around the moratorium on hydraulic fracturing. The writers prepared very informative background material profiling the economy, along with a documented and verified record of the industries that had already emerged in the areas. The resulting framework consisted of a variety of charts and tables with very accurate and precise representations, intended for use in engaging with the stakeholders. With inputs from communication experts, the writers synthesized the raw material into an information sheet that was printed and shared with the stakeholders. It acted as an input for the next stage of the project work in the workshops.

Changes in Geographic Scenarios: The Role of Gas in the Limestone Coast Region of South Australia

The geographic scenario of the areas faced immense changes because of the plant closures. Economic geographers have struggled to answer the queries and doubts that emerge concerning the applicability of policy interventions intended to achieve improvements in regional areas. The most debatable point is the extent to which policies for regional improvement can be achieved with ease. Plummer et al.
(2018) raised the question of whether these interventions will contribute to assistance in rural and semi-urban areas, along with wider support for the structural system which primarily shapes financial resilience in such areas. Most commonly, the economists of these areas focused on, and thus emphasized, understanding the limits of the region's capacity for change, which is needed for the development of new pathways. According to the reports of Ryser et al. (2019), the settler communities of the Australian regions and the related geographers argued that attention to primary structural needs may be required before upcoming economic plans are implemented, because of the investments missed through new policies supporting liberalism. It was further noted that Australia was the producer of the highest amounts of coal and iron ore; for this reason, the country gained a reputation as the quarry of the world. Jaganathan (2018) mentioned in a journal that Australia even surpassed Qatar as the highest exporter of natural gas in the world. Since the export of this natural gas from Australia drove up the wholesale cost of the gas used for domestic purposes, prices rose across the board. Grafton et al. (2018) commented that the demand for natural gas increased to such an extent that the industry fell short of supply in the native markets. As per the information given by Longbottom (2019), the rise in the sale prices of domestic gas placed immense strain on different parts of the economic system as well as on the production sectors; thus, a worthy and affordable constant supply of energy is needed. Stanford (2016) explained the fact in simple words, saying that manufacturing businesses were shut down because of the great cost rises which swept across the sector.
As a result of that, Kalogiannidis (2020) mentioned that, through proper business communication, business entities are able to enhance their organizational management and share knowledge.

The Role of Gas in the Limestone Coast Region of South Australia

The case of the Limestone Coast of South Australia deserves a notable mention. It brought about several changes in the concepts and ideas that were previously held and practiced about the system, and it had several effects on the economic condition of the surrounding people and the workers' communities of the rural and urban areas. The Limestone Coast case is undoubtedly a very important example that can be cited in this context to relate and narrate the facts more vividly. This area is one of the best-known places for the production of premium wine. Terra Rossa soil is present here, which increases the richness of production. It is the primary base of the Coonawarra terroir, a well-known wine-producing district. This area consists of the regional government localities of Grant, Mount Gambier, Wattle Range, Robe, Naracoorte and Lucindale, Kingston, and Tatiara. Mount Gambier is the largest town in this region. It is well known for its pale blue groundwater lake, which has made it a tourist spot. Penola is another town of the region, surrounded by the vineyards of Coonawarra. The economic base of this region is diversified. There are several agricultural industries, including a wide range of farming activities, livestock grazing, and dairy farming. Pine plantations are run by the forestry industry, and the production of processed and packaged food plays a vital role here. The figure given below is a map of the Limestone Coast of South Australia.
All the vital towns that contributed to the improvement of the economic system through their production quality and quantity are marked on it. There are several other production businesses, such as food and fiber, though they have not contributed as much. For instance, the town of Penola had a chip factory operated by a Canadian company, McCain. Unfortunately, the factory was shut down in December 2013 because the factory's input prices rose so high that the company could no longer bear the expenses. The basic reason for such decline was the withdrawal of production by the manufacturing industries of the entire region, which happened in two specific terms: as a share of gross regional product (GRP) and in real terms. On the other hand, the output of the agricultural sector grew immensely, and even the forestry departments made the highest contributions in the whole area. According to Poruschi et al. (2020), the health and social-care sectors simultaneously aided the development of the economy. The Otway Basin is a well-known area of the south-eastern region of the Limestone Coast of South Australia, famous for its huge resource of natural gas. When followed on the map given above, the exact location of the basin is clearly visible. The geographical climate and other natural endowments characterize the area. There are several conventional commercial gas production operations. The first such facility was developed at Katnook, located to the south of Penola; after that, the Ladbroke Grove field was developed there. It is thus very obvious that this region gained much importance for its onshore gas industry, which drove a huge improvement. It is known that Kimberly-Clark played a very important role in developing the South East Pipeline System (SEPS).
This system linked gas producers with other industries; the paper mill was thus supplied with energy much more easily. There were other industries too, such as a timber-milling company, a pulp and paper mill, and a commercial food-preparation firm.

Image 1: Map of the Limestone Coast of South Australia. Source: Australia, 2020.

This set of enterprises benefited greatly from the sudden rise of the natural gas industry in the area. Gradually, the supply for domestic purposes to local houses and markets became more and more favorable. Local businesses flourished and improved the financial state of the working communities. However, the Katnook Gas Plant was closed down in the last decade; several other gas plants sprang up in the meantime, so users did not suffer from the closure. With the official shutdown of the plant in 2013, the gas pipeline was connected to the interstate gas-supply infrastructure (Poruschi et al., 2020). The Limestone Coast region did not lose its productivity, so no harm came upon the economic condition of the workers' communities. The area was a mixture of rural, urban, and suburban patches, and the agricultural firms of the rural side thus carried on without being affected. A similar situation happened in other areas too. As the domestic gas price started increasing around 2012, the government of this region started a new strategy, named the Plan for Accelerating Exploration (PACE) Gas Program. This strategic program is intended to improve affordability for the companies that develop and produce gas, and would encourage companies to improve the quality of production by adopting new methods (Sandhu et al., 2018). It would also help the electricity generators, industrial users, and retailers in the province.
The PACE Gas Program awarded three grant allocations to carry out exploration on current acreage in the Otway Basin. In effect, these grants supported new activity in the commercial production of gas; the new production came to be regarded as 'conventional' gas. In the following years, Beach Energy started constructing a new gas-producing plant, called Katnook. This plant had the capacity to produce about 10 TJ/day, which could substitute for the old plant's facilities. Along with the aim of increasing domestic supplies to native companies, the PACE conditions also required the producers to offer the first right of refusal to local users.

Gaps in the Literature

The analyst has studied various articles and journals by different scholars in order to collect information about plant closures in urban and rural communities. This subject is a very common research topic among different analysts, and numerous projects have addressed it. However, there are still some gaps in the literature on this topic, which can be noticed in the writings of previous scholars. Most analysts and scholars focused primarily on what caused the plant closures in these areas. Beyond this, in order to discuss the matter, secondary data needs to be collected and examined well. For instance, it can be seen in the articles that only information about changes in economic conditions is presented, with various graphs and tables, but the impacts are not clearly stated everywhere. So, this research project is done to obtain both primary and secondary data analysis for enhanced authenticity.
Summary

Once the advantages to the local economic condition of the rural and urban communities from the extraction of resources were understood, this region became a primary area of attraction for both geography and economics experts. The possibilities for the usage of this region's resources came to be valued highly, as numerous nations aim at them; one such nation is Australia itself, as the region is a great source of income through diversified economic conditions among the workers. The local scenario presented by the Limestone Coast region offers a good example of the enhancement of the gas extraction process. It is thus considered that this industry would provide more and more employment opportunities for the growth of the financial condition, thus reducing the crisis level of the nation as a whole. The government should look into the matter of increasing the supply of gas to local users by improving the infrastructure of the industry. In fact, the provincial government authorities took the initiative to implement new policies to regulate the industries that open up; unregulated growth can even cause industries to face isolation. The local stakeholders should also attend to the matter properly, to develop the distribution of natural gas among the local rural and urban working communities.

Material and Methods

The gas industry in Australia has been surrounded by various uncertainties in the development of the region and its economy (Sandhu et al., 2018). Future prospects should be considered as alternatives, developed on the basis of how the gas industry affects other sectors in the region. The stakeholders advised the company to analyze various scenarios based on qualitative and quantitative analysis of the potential developments of the industry in the area.
Qualitative Analysis

The qualitative analysis of an industry in a region helps to determine the situation by collecting views from people: their opinions and their demands (Cabral et al., 2020). This paper's qualitative analysis is based on scenarios that can support the development of the region after the closure of the plant in South Australia.

Scenario planning: Scenario planning means making assumptions about future developments and how they will affect the process over time. Specifically, it can be explained as identifying certain realities that can happen to the industry (Przeslawski, Miller, and Meeuwig, 2016). It sounds like a simple process, but it requires considerable effort to build a set of assumptions that will fit the industry (Figure 1).

Figure 1. Scenario Planning. Source: Przeslawski, Miller, and Meeuwig, 2016.

Here the industry has closed, and assumptions should now be made on the basis of the resources available for development. Based on this background, the scenarios were drafted and introduced to the stakeholders for their input. The objective of the planning is to analyze the possible outcomes for the gas industry that can develop in the region. The dimensions taken into consideration are investment in the local gas industry and economic diversity. The investments can be plotted on a graph in different combinations:
• Increasing economic diversity vs good investment,
• Decreasing economic diversity vs good investment,
• Increasing economic diversity vs poor investment,
• Decreasing economic diversity vs poor investment.
The above four combinations will provide the required results and indicate the path that should be followed to attain good growth in the region.
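The four combinations above form a simple two-axis grid and can be enumerated mechanically. A minimal sketch (the axis labels mirror the text; everything else is illustrative):

```python
from itertools import product

# The two scenario axes described in the text.
diversity = ["increasing economic diversity", "decreasing economic diversity"]
investment = ["good investment", "poor investment"]

# Each pairing of axis values is one scenario to put before stakeholders.
scenarios = [f"{d} vs {i}" for d, i in product(diversity, investment)]

for number, scenario in enumerate(scenarios, start=1):
    print(f"Scenario {number}: {scenario}")
```

Enumerating the grid this way guarantees that no axis combination is overlooked when the scenarios are drafted for the workshop.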
Stakeholder workshops: Stakeholder workshops are an excellent way to engage the stakeholders: the ones who are directly affected by the implementation or have a direct interest in the company. In this research, the scenarios were listed and conveyed to the stakeholders for their insights on the assumptions made. The workshop ran for a long time, and all the points were briefly explained to the participants (McCabe, 2016). The primary motive of the workshop was to help the stakeholders reach an agreement regarding the scenarios designed, and to find ways to avoid unwanted problems in the implementation of the strategies. The outcomes of the workshop differed, as different stakeholders were present. Everyone had their views, and some of the suggested focus areas were:
• the gas industry,
• locally based regional development,
• manufacturing of packaged food and milk products,
• manufacturing of fibers and paper.
Representatives from forestry and agriculture did not attend the workshop. All the participants reflected their views on the assumptions and discussed the potential outcomes and ways to avoid problems arising in the implementation of the strategies.

Quantitative Analysis

Quantitative analysis is the process of collecting and evaluating measurable data, such as profits and market shares, in order to understand the performance of industries (Wen et al., 2019). Quantitative analysis helps in making better decisions about the financial performance of a company and allows it to manage its finances based on market demand.

RISE model: The RISE model, which stands for Regional Industry Structure and Development, is developed to guide the feedback process. In this paper, the model is used to characterize the assumed scenarios and explore the probable results. The RISE model here is part of the quantitative analysis, helping to determine the structure, dependencies, and links between the industry and the locals of the region (Australia, 2020).
The approach has been useful in determining the economic effects in the region of the different projects that have been carried out in the past decade. The analysis of the input and output flows of all the industries interconnects them with each other through supply chain management. Sometimes the output of one industry is the input of another, and vice versa. Thus, on the basis of the RISE model principle, the effects of the processes will be revealed over time. The questions raised on the basis of the scenarios are:
• How will the economy grow in the future, based on employment and the manufacturing process?
• What will be the role of the regional resources?
• What other factors, such as the population, can affect economic growth?
The information received from the above concerns will contribute to the development of the regional economy as well as the country's economy.
Scenario parameterization: Parameterization of a scenario helps to test any data and the expected values, along with verification of the data (Pakyuz-Charrier et al., 2018). In other words, parameterization can be defined as creating multiple trials for a single scenario, where each trial consists of a different assortment of the data parameters. For the scenario in this paper, the diversity of the energy mix has a low limit, and the gas supply and the extraction process add value to it. The expansion of the gas industry will result in the growth of the economy, as there are numerous users. Investment in development, the local industries, and mining services will be a significant element in the growth. Thus, the parameterization of the assumed scenarios uses the maximum and the minimum limits for the gas industry to provide the anticipated outcomes. The business cases for all scenarios will help all the individual industries in the region to develop along with the gas industry and the economy.
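The input-output logic described above (one industry's output serving as another's input) is conventionally captured by the Leontief inverse; a toy two-sector sketch with purely illustrative coefficients, not RISE-model data:

```python
import numpy as np

# Hypothetical technical-coefficient matrix A for a toy two-sector economy
# (0 = gas, 1 = manufacturing): A[i, j] is the dollar input needed from
# sector i to produce one dollar of sector j's output. Values are made up.
A = np.array([[0.10, 0.20],
              [0.15, 0.05]])

# Final demand for each sector's output, in $ million (also illustrative).
d = np.array([100.0, 200.0])

# Leontief model: total output x satisfies x = A @ x + d,
# i.e. x = (I - A)^(-1) d.
x = np.linalg.solve(np.eye(2) - A, d)
print(x)
```

Total required output exceeds final demand in each sector because every dollar of output also generates intermediate demand along the supply chain.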
Communication with the stakeholders: The whole process, from developing the scenarios and conveying them to the stakeholders, through taking up the analysis results, to the implementation, should be communicated to the stakeholders. The outcomes of the process, the profits, and the growth should also be shared with them. The purpose of the presentation is to anticipate the outcomes that will help in developing the economy.
Results and Discussion
All the assumptions and the discussions on the scenarios with the stakeholders have led to some outcomes that can be considered for later implementation. Considering all the methods for development and economic growth in the area, the probable results can differ.
Scenario analysis: After conducting workshops with the stakeholders, the scenario most favored by the stakeholders was the second scenario. This scenario is characterized by the expansion of the gas and other industries, which is desirable for the region as well as the economy. Though economic growth will be slow at the beginning, in the later phase the economy will grow as desired. The scenario would also be helpful in providing gas to the local market at cheaper rates (Sangha et al., 2019). During the voting on the scenarios, however, the stakeholders were concerned about whether the investments in the gas industry would actually expand the economy: as the industry is local, there would be transportation charges, which might not help in reducing gas prices nationally, even if the industry produces gas for local industries at affordable rates. Considering the history of local gas supplies being unreliable, some stakeholders instead showed interest in the first scenario. Along with the most desirable scenario, the least desirable one was marked too. According to the votes of the stakeholders, the fourth scenario was considered the least desirable, as the stakeholders did not want the economy to shrink at all.
Economic shrinkage would make the region less resilient, and economic growth would be staggered for a long period (Sandhu et al., 2018). The locals would have to face difficulties due to declining economic activity, and this would lead to less investment in gas. Considering the third scenario, the stakeholders had mixed views, as this scenario would introduce the use of current technology and innovation at a price that not all users are ready to pay. For example, in the dairy industry, gas is used not just as an energy source but also as part of the technology that helps to convert milk into powdered form. However, to continue the use of gas, the industries need to be sure that they can afford the gas prices with the addition of delivery charges and profit margins (Chapman, McLellan and Tezuka, 2016). If the gas input is expensive, the industries would have to cut labor costs, which in turn would not be of much help to the region. Going with the most desirable scenario, the second one, there are two difficulties that should be overcome: the first is the availability of more reserves in the region, and the second is making the gas more available to the locals (Leviäkangas, Paik, and Moon, 2017). The stakeholders indicated that government investment in the projects would be helpful in obtaining the desired outcomes. On the other hand, the industry could meet the additional gas supply by innovating an alternative method of distribution, such as the use of trucks for delivery in the local regions. On a larger scale, the gas industry can also be affected by attitudes toward fossil fuels in general.
Industry trends: The economy is also affected by the demands in the market and the requirements of industries to fulfill those demands. Here, the value of the limestone has increased from $2.7 billion in 2000 to $3.3 billion in 2017.
According to Thomas-Noone (2019), considering this rate of increase, the regional value would rise to $3.7 billion by 2030. The agricultural unit contributed 37% to the regional economy in 2017. If the agriculture and forestry industries continue, they would add around 4% to economic growth in the region by 2030. Along with these, the healthcare industry has also shown growth and, with continued growth, would account for 8% of economic growth by 2030.
Input and output ratio: There is a significant difference between the scenarios considered. The scenarios with growing economic diversity show an increase in the gross rate in the region, and emoluments have also increased (Lenzen et al., 2017). In the other two scenarios, where the economy is stagnant or decreasing, gross development is very low for the region, though the gross rate shows a 2% increment.
Discussion
The results obtained by considering all the scenarios are as follows: an increase in economic diversity directs toward the growth of the economy and increased employment in the region, along with increased investment in the gas industry; on the other hand, a decrease in economic diversity decreases investments, along with higher unemployment and slow economic growth in the region. The analysis of the methods has shown that the stakeholders and all the participants at the workshop were keen on the gas industry, and investing in the industry seems to be beneficial for the economy rather than only an expense (McCabe, 2016). The analyses also show that investment and technological advances, along with innovations, will help the gas industry to flourish and gain market share. With the increasing demand for natural gas in the global market, domestic users in Australia have had to bear high gas prices.
Therefore, investments in the gas industry will be a means to provide gas to the locals, though in the beginning delivery charges will apply. The growth of the gas industry and its supply chain management will help grow the regional economy. As the gas industry is re-established, the Limestone Coast will get a new chance to be recognized as a resource-extraction region in Australia. The possibility of this new portrayal arises from the involvement of the local infrastructure in the 1990s, prior to the gas plant. With all the new possibilities and the scenario assumptions, the gas plant will expand to its full potential. The state government has also been encouraging the gas industry to re-establish itself and to distribute gas locally in its surroundings. The scenario assumptions and the discussion of the outcomes will help the stakeholders to understand how they will be affected by the whole extraction process and to identify the activities beforehand, so as to get the best of the situation. Acknowledging all the outcomes of the situation, the narrative received is mostly positive, and it is important to understand that regional development is a complex process with many elements affecting it. Although the gas plant is preferred, the Limestone Coast is recognized as significant in overcoming the challenges and helping to develop the infrastructure so as to avoid lock-in of the gas industry. The economy of the region depends on the global economy and its demands. The milk factory in the region uses gas and also exports its packed products to the global market. With the pandemic, it is impossible to predict the future of the project or the gas industry in the region. Economies are now changing, and most nations are now more focused on the production industry, which has become one of the important sectors.
Manufacturing is an important part of the production process, along with innovations and technological advances. Energy is essential for the production process. As for the gas industry, the demand is already in the market, and to fulfill all the requirements the gas plant will have to work on supply management. Thus, with sufficient demand in the market and the right involvement of technology, the gas plant will grow along with the economy of the region.
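As a back-of-envelope check, the limestone-value projection quoted in the industry-trends section is roughly consistent with simple compound-growth arithmetic (the dollar figures are from the text; the constant-rate extrapolation is an assumption):

```python
# Historical limestone values quoted in the text ($ billion).
v_2000, v_2017 = 2.7, 3.3

# Implied compound annual growth rate over the 17 years 2000-2017.
cagr = (v_2017 / v_2000) ** (1 / 17) - 1

# Extrapolate the same rate forward 13 more years, to 2030.
v_2030 = v_2017 * (1 + cagr) ** 13
print(f"CAGR ~ {cagr:.2%}, projected 2030 value ~ ${v_2030:.2f}B")
```

The implied rate is roughly 1.2% per year, and carrying it to 2030 gives about $3.8-3.9 billion, in the same ballpark as the $3.7 billion cited.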
Revealing magnetic component in crystalline Fe-gluconate
Low-temperature Mössbauer spectroscopic and magnetization measurements were performed on a crystalline sample of Fe-gluconate. Fe atoms were revealed to exist in two "phases", i.e. a major (90-94%) and a minor (6-10%) one. Based on the values of the spectral parameters, the former can be regarded as ferrous and the latter as ferric ions. A subspectrum associated with the ferric "phase" shows a significant broadening below 30 K, corresponding to 7.5 kGs. A magnetic origin of the effect was confirmed by the magnetization measurements. Evidence of the effect of the magnetism on the lattice vibrations of Fe atoms in both "phases" was found. The Debye temperature, TD, associated with the vibrations of Fe2+ ions is smaller by a factor of 2 in the temperature range below 30 K than the one determined from the data measured above 30 K. Interestingly, the TD-value found for the Fe3+ ions from the data recorded below 30 K is about two times smaller than the corresponding value determined for the Fe2+ ions.
* Corresponding author: Stanislaw.Dubiel@fis.agh.edu.pl
Introduction
Ferrous gluconate (Fe-gluconate) is a salt of gluconic acid. Its chemical formula reads C12H22FeO14·xH2O with 0 ≤ x ≤ 2. Its molar mass depends on the value of x and varies between 446.14 and 482.19 g·mol−1 for x=0 (dehydrated) and x=2 (fully hydrated), respectively. The compound has applications chiefly in the medical and food-additive industries. Regarding the former, it has been used satisfactorily in the cure of hypochromic anemia and sold under various trade names, e.g. Ascofer, Fergon, Ferate, Ferralet, FE-40, Gluconal FE and Simron, to list some of them. Regarding the latter, it has been applied for coloring foods, e.g. black olives, and beverages.
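The molar-mass range quoted above follows directly from adding x water molecules (about 18.02 g/mol each) to the anhydrous salt:

```python
# Anhydrous molar mass of ferrous gluconate quoted in the text (g/mol)
# and the molar mass of water.
M_anhydrous = 446.14
M_water = 18.02

def molar_mass(x):
    """Molar mass of C12H22FeO14 . x H2O for 0 <= x <= 2."""
    return M_anhydrous + x * M_water

print(molar_mass(0), molar_mass(2))  # dehydrated vs fully hydrated
```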
Interestingly, Fe-gluconate was also used as an effective inhibitor for carbon steel [1], and gluconate-based electrolytes were successfully used to electroplate various metals [2] or alloys [3]. Iron, whose content lies between 11.8 and 12.5 percent, is present in two forms: a major ferrous (Fe2+ or Fe(II)) ion and a minor ferric (Fe3+ or Fe(III)) ion. The relative contribution of the minor fraction amounts to 10-15%, as detected by Mössbauer spectroscopy [4][5][6][7][8]. Its origin is unknown and it can be either soluble or insoluble. According to clinical studies, medicaments based on ferric iron have poorer absorption than those containing ferrous ions [9]. Consequently, their effectiveness in the treatment of anemia is less efficient. In other words, the presence of the ferric ions in Fe-gluconate is undesired from the medical viewpoint. In these circumstances any experiment aimed at the identification of the minor fraction is of importance, as it can help to produce a ferric-free compound or, at least, reduce its concentration. In a given structure of Fe-gluconate, which can be either crystalline [10] or amorphous [11], the ferric ions should be more strongly bonded than the ferrous ones; hence their lattice-dynamical properties and, in particular, the value of the Debye temperature should be higher than those of the ferrous ions. Mössbauer spectroscopy has been recognized as a relevant technique to investigate the issue. However, our recent study performed on a crystalline form of Fe-gluconate in the temperature range of 80-310 K did not show any measurable difference in the lattice-dynamical behavior of the two types of Fe ions [12]. In order to shed more light on the issue we have performed similar measurements on the same sample of this compound, but in a lower temperature range, viz. 5-119 K. The results we have obtained are presented and discussed in this paper.
Sample
Fe-gluconate, courtesy of the Chemistry and Pharmacy Cooperative ESPEFA (Krakow, Poland), which uses it for the production of the iron supplement Ascofer®, was the subject of the present study. An X-ray diffraction pattern registered at room temperature (Fig. 1) on a powdered sample gave evidence that its structure was perfectly crystalline.
Mössbauer Spectra and Analysis
Mössbauer spectra, examples of which are shown in Figs. 2 and 3, were recorded in transmission geometry by means of a standard (Wissel GmbH) spectrometer and a drive working in a sinusoidal mode. Each spectrum was recorded in 1024 channels, and the 14.4 keV gamma rays were supplied by a 57Co/Rh source whose activity enabled recording a statistically good spectrum within a 1-2 day run. The spectra were measured in the temperature interval of 5-119 K on the sample placed in a Janis cryostat. Spectral parameters obtained with the two fitting procedures are displayed as plots and also in Table 1 (procedure B).
Magnetic measurements
The measurements were performed on a powder sample using a Quantum Design SQUID magnetometer.
Abundance
The temperature dependence of the relative abundances of the three components, as received from the analysis of the spectra using procedure A, is displayed in Fig. 4a. We note that the contribution of D1, A1, lies between 60% for T < 60 K and 70% for T > 80 K. Similarly, the contribution of D2, A2, stays constant at the value of 30% up to 60 K, and at the value of 10% above 80 K. The behavior of A1 and A2 in the temperature range between 60 and 80 K is unknown. However, as can be seen in Fig. 4a, the abundance of the minor component, A3, shows some anomaly at 80 K, where it also crosses A2. Perhaps this anomalous behavior reflects some structural changes? Corresponding X-ray diffraction measurements are under way. Figure 4b illustrates a comparison between the abundances of the minor component as found with the two fitting procedures.
One can easily notice that for all temperatures the abundance found with procedure A is about 5% higher than the one determined with procedure B. An effect sometimes termed lattice softening is known to occur on a transition from a paramagnetic to a magnetic state, e.g. [13].
Center shift
The center shift is a very important spectral parameter, as its temperature dependence permits determination of the Debye temperature, TD, and hence gives some insight into the lattice dynamics. The corresponding behaviors determined from analysis A are presented in Figs. 5a and 5b for CS1, CS2 and CS3, whereas Figs. 5c and 5d show the temperature behavior of CS12 and CS, as deduced from the spectra analyzed in terms of procedure B. Lines stand to guide the eye. Note a strong anomaly for CS3 and CS at T < 20 K. As can be easily noticed, in all cases there are anomalies in the low-temperature range, and the strongest ones exist in CS3 and CS, i.e. the components related to the minor (ferric) phase. An analysis of the center-shift data in terms of the Debye model is given and discussed below.
Spectral area
The spectral area is also a useful spectral parameter, as it is related to the recoil-free fraction and hence to the lattice vibrations. However, it is the most "difficult" spectral parameter, as it is very sensitive to the geometry of the experiment and also to the electronic performance and stability of the spectrometer. In addition, to get correct information on the spectral area one should analyze spectra using an integral transmission method [14]. In the present case we applied such a method with procedure B. Consequently, only spectral areas obtained with this procedure are shown (Figs. 7a and 7b) and discussed in this paper. An anomalous behavior can be seen in both plots below 10-20 K.
Debye temperature
The Debye temperature, TD, can be determined from the Mössbauer spectra in two ways, viz. from the temperature dependence of (1) the center shift, CS(T), or (2) the recoil-free fraction, f(T).
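In practice, both routes reduce to a least-squares fit of a one-parameter Debye integral to the data. A minimal sketch of route (1), the second-order Doppler shift within the Debye model, on synthetic data (the IS and TD values below are assumed for illustration, not taken from the paper's spectra):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

# Physical constants (SI) and the mass of a 57Fe atom.
kB, c = 1.380649e-23, 2.99792458e8
m = 57 * 1.66053907e-27  # kg

def center_shift(T, IS, TD):
    """Debye-model center shift in mm/s: isomer shift IS plus the
    second-order Doppler term governed by the Debye temperature TD."""
    T = np.atleast_1d(np.asarray(T, dtype=float))
    cs = np.empty_like(T)
    for i, t in enumerate(T):
        integral = quad(lambda x: x**3 / np.expm1(x), 0.0, TD / t)[0]
        sod = -(3 * kB * t) / (2 * m * c) * (
            3 * TD / (8 * t) + 3 * (t / TD) ** 3 * integral)
        cs[i] = IS + sod * 1e3  # m/s -> mm/s
    return cs

# Synthetic "measurements" with assumed IS = 1.2 mm/s and TD = 180 K.
T_data = np.linspace(5.0, 120.0, 15)
cs_data = center_shift(T_data, 1.2, 180.0)

# Least-squares fit, as one would do with real spectra, recovers IS and TD.
(IS_fit, TD_fit), _ = curve_fit(center_shift, T_data, cs_data, p0=(1.0, 150.0))
print(IS_fit, TD_fit)
```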
Within the Debye model, CS(T) is given by:

CS(T) = IS − (3 k_B T / 2mc) [ 3T_D/8T + 3 (T/T_D)^3 ∫_0^{T_D/T} x^3 dx/(e^x − 1) ]    (1)

The first term, IS, represents the isomer shift, which hardly depends on T; m stands for the mass of the 57Fe atom, k_B is the Boltzmann constant, c is the speed of light, and x = ħω/k_BT (ω being the frequency of vibrations). Fitting experimental data to eq. (1) yields the value of T_D. Figures 8 and 9 illustrate the corresponding data and their fits to eq. (1), together with the values of the Debye temperature obtained in the high (regular) and low (anomalous) temperature ranges. One can notice that the T_D-values characteristic of the low-temperature (anomalous) range of the ferrous (major) phase are lower by a factor of 2 than those derived from the high-temperature (regular) range. In the ferric (minor) phase, the value of the Debye temperature in the non-magnetic phase is, in turn, smaller by a factor of 2 than the corresponding value found for the ferrous component in the similar temperature range. Why this is so remains unclear.
Recoil-free fraction
The temperature dependence of the recoil-free fraction, f(T), is, within the Debye model, given by the following equation [16]:

f(T) = exp{ −(6 E_R / k_B T_D) [ 1/4 + (T/T_D)^2 ∫_0^{T_D/T} x dx/(e^x − 1) ] }    (2)

where E_R is the recoil kinetic energy and k_B is the Boltzmann constant. By fitting experimental f-values to this equation the value of T_D can be found. In practice, instead of f, whose absolute value is difficult to determine, one uses the relative spectral area, f' = A(T)/A(T_o), which is proportional to f. Furthermore, it should be noted that the values of T_D as determined from CS(T) are, in general, different from those found from f(T). This follows from the fact that CS(T) contains information on the mean-square velocity of the atomic vibrations, whereas f(T) is related to the mean-square amplitude of such vibrations. Therefore, a direct comparison of T_D-values obtained from these two spectral parameters is not fully justified. However, we can compare the T_D-values obtained with the same method, i.e. in the present case the spectral area. As shown in Fig.
10, they are significantly different as determined for the major and for the minor components in the temperature interval where both "phases" are paramagnetic. The difference is twofold in favor of the minor "phase", which means that the mean-square amplitude of vibrations of the ferric (Fe3+) ions is smaller than that of the ferrous (Fe2+) ions. Noteworthy, the opposite is true as far as the mean-square velocity of the Fe-ion vibrations is concerned.
Magnetic origin of the line broadening in D3?
A broadening of a line in a Mössbauer spectrum observed on lowering the temperature indicates, in general, a transition into a magnetic state. If the magnetism is strong enough, the broadening may result in a splitting of the line into a sextet. In the present experiment the lowest temperature we achieved in the Mössbauer measurements was 5 K, and the line of the minor subspectrum, in which the broadening happened, did not change into a sextet. Possible reasons for this are a weak magnetism and/or a not low enough temperature. In order to shed more light on the issue, SQUID measurements of the temperature dependence of the magnetic susceptibility χ(T), along with magnetization M vs. H data, were performed in the temperature range of 2-300 K. Zero-field-cooled (ZFC) and field-cooled (FC) susceptibility curves given in Fig. 11a show the characteristic paramagnetic behavior. The same behavior was revealed by AC susceptibility measurements (not shown in Fig. 11a for clarity). The χ(T) curves collapse and can be described with the Curie-Weiss formula. The Curie constant, C = N g^2 μ_B^2 S(S+1) / 3k_B, where N is the Avogadro number, g the Lande factor, μ_B the Bohr magneton and S the spin value, obtained by fitting the 20-300 K data is equal to 2.8±0.1 Oe^-1 mol^-1 K.
This value is approximately equal to the 3 Oe^-1 mol^-1 K expected for a sample containing magnetic centers with S=2, or close to the 3.1 Oe^-1 mol^-1 K expected in the case of a 90% content of the major phase (S=2) and 10% of the minor one (S=5/2), as concluded from the study above. The paramagnetic Curie-Weiss temperature obtained from the fit is negative. Magnetization curves were recorded at T equal to 2 K, 5 K, 10 K, 20 K, and 40 K (see Fig. 11b) [17]. One can see that such a simple approach is justifiable only for 40 K and 20 K, while the possible short-range magnetic order at T < 20 K needs a more advanced description. Based on the results of the magnetic measurements, the analysis of the spectra in terms of procedure B was justified. Figure 12 illustrates the temperature dependence of the hyperfine field, B, where its increase below 20 K is evident. The maximum increase is 7.5 kGs. It should be added here that the values of B ≠ 0 observed at T ≥ 20 K are an artefact caused by the analysis of a doublet in terms of a sextet.
Conclusions
Several interesting conclusions can be drawn from the results obtained in the present study. Namely:
• Iron in the investigated sample of Fe-gluconate exists in two "phases": a major one with a 90-94% relative contribution at 5 K, and a minor one with a 6-10% relative contribution at 5 K.
• Iron exists as Fe2+ ions in the major "phase" and as Fe3+ ions in the minor "phase".
• The spectrum associated with the minor phase shows an anomalous broadening at temperatures below 25 K. This anomaly is reflected not only in the line width and the center shift of the corresponding subspectrum but also in the center shifts of the subspectra associated with the major "phase".
• Magnetization measurements testify to a magnetic origin of the broadening.
• The maximum value of the hyperfine field derived from the broadening equals 7.5 kGs.
• The mean-square velocity of Fe-ion vibrations in the ferric (minor) phase is higher than that in the ferrous (major) phase, as the value of the Debye temperature in the latter is about twofold greater.
• The mean-square amplitude of Fe-ion vibrations in the ferric "phase" is smaller than that in the ferrous "phase", as the value of the Debye temperature in the former is higher by a factor of 2.
• The lattice dynamics of Fe atoms in the ferrous "phase" seems to be significantly affected by the magnetism of the ferric "phase", as the Debye temperature drops twofold in the temperature range where the magnetism exists.
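As a numerical cross-check of the Curie-constant comparison made in the magnetic-measurements section, the standard CGS shortcut C ≈ 0.125 g^2 S(S+1) emu K mol^-1 Oe^-1 (with g = 2) reproduces the quoted expectations of about 3 for a pure S = 2 sample and about 3.1 for a 90/10 mixture of S = 2 and S = 5/2 centers:

```python
def curie_constant(S, g=2.0):
    """Curie constant C = N g^2 muB^2 S(S+1) / (3 kB) in emu K / (mol Oe),
    using the common CGS shortcut N muB^2 / (3 kB) ~ 0.125."""
    return 0.125 * g**2 * S * (S + 1)

C_ferrous = curie_constant(2.0)  # pure Fe2+ (S = 2) sample
C_mixed = 0.9 * curie_constant(2.0) + 0.1 * curie_constant(2.5)  # 90/10 mix
print(C_ferrous, C_mixed)
```

Both values bracket the fitted 2.8 ± 0.1, consistent with the paper's reading of a dominant ferrous phase.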
Data/Model Integration for Vertical Mixing in the Stable Arctic Boundary Layer
Sumner Barr*, Douglas O. ReVelle, C. Y.-Jim Kao and E. K. Bigg
Abstract
This is the final report of a short Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL).
Data on atmospheric trace constituents and the vertical structure of stratus clouds from a 1996 expedition to the central Arctic reveal mechanisms of vertical mixing that have not been observed in mid-latitudes. Time series of the altitude and thickness of summer arctic stratus have been observed using an elastic backscatter lidar aboard an icebreaker. With the ship moored to the pack ice during 14 data collection "stations" and the lidar staring vertically, the time series represent advected cloud fields. The lidar data reveal a significant amount of vertical undulation in the clouds, strongly suggestive of traveling waves in the buoyantly damped atmosphere that predominates in the high Arctic. Concurrent observations of trace gases associated with the natural sulfur cycle (dimethyl sulfide, SO2, NH3, H2O2) and aerosols show evidence of vertical mixing events that coincide with a characteristic signature in the cloud field that may be called "dropout" or "lift out". A segment of a cloud deck appears to be relocated from the otherwise quasicontinuous layer to another altitude a few hundred meters lower or higher. Atmospheric models have been applied to identify the mechanisms that cause the "dropout" phenomenon and connect it dynamically to the surface layer mixing.
Background and Research Objectives
Knowledge of the polar regions of the earth has long been recognized as crucial to the understanding of the global circulations of the atmosphere and ocean. The Arctic is particularly important because it is oceanic and two media, atmosphere and ocean, interact continuously across the entire region. Recent attention to contamination of the region by radionuclides and heavy metals from the Former Soviet Union (Crane, 1997), and prospects of expanded industrial development and fossil energy extraction, further focus scientific attention on the Arctic.
The acute awareness of the extreme sensitivity of the arctic ecosystems to human impacts forces us to learn as much as we can about the systematics of this fragile yet forbidding area. The remoteness of the Arctic has made it very difficult to acquire surface-based data on the atmosphere and the atmosphere-ocean interaction with any sense of continuity until very recently.
*Principal Investigator, e-mail: barr@lanl.gov
An expedition on board a Swedish icebreaker in 1991 provided excellent trace chemistry and aerosol data contributing to biogenic feedbacks on cloud properties. By extension, these data revealed climatically important energy budgets from the ice edge to the center of the ice pack. Several mysteries remained regarding atmospheric properties and structure in the important three-kilometer layer above the surface. Los Alamos National Laboratory (LANL) had the opportunity to operate a lidar remote sensing system during the 1996 return expedition with the goal of providing diagnostics on clouds and the dynamical processes that influence them (convection, internal gravity waves, longitudinal roll vortices, episodic breakdowns of the stable boundary layer) (Leck, 1995). The summertime arctic stratus clouds are scientifically fascinating because their existence and structure depend on a continuing and delicate balance of radiative (visible and IR), latent, and sensible (turbulent) heat exchange. Atmospheric models exist at Los Alamos for the quantitative computation of the radiative and turbulent transfer mechanisms that drive these processes. Kao and Smith (1996) describe the formation of multiple arctic stratus layers by a decoupling mechanism caused by absorption of solar visible radiation at the base of existing stratus. That yields a shallow thermodynamically stable layer that, in turn, suppresses turbulence and decouples the cloud base from its sea-level source of moisture. This "fossil" layer persists while up to three additional layers form and decouple by the same mechanism in the lower 500 m of the atmosphere. The lidar data set from 1996 contains over 300 hours of lidar imagery taken over a six-week period from open water at 70N to the central ice pack at 87N. It is supplemented with conventional meteorological data and trace chemical and aerosol concentrations to permit quantitative comparisons and evaluation of processes through the application of available models. The particular research objective of this work is to focus on a signature in the time series of stratus elevations that appears to be related to mixing episodes through reasonably deep layers of the arctic boundary layer. A segment of a cloud deck appears to be relocated from the otherwise quasi-continuous layer to another altitude a few hundred meters lower or higher. We have identified more than two dozen examples of this feature in the lidar data, and at least five of the cases are coincident with excursions in concentrations of trace gases or aerosols. Such a relationship is not intuitive. The stratus are located at elevations of a few hundred meters to 1.5 km above the surface, while the trace constituents are measured at about 10 meters. The correlation indicates a dynamical or turbulent connection across deep layers of the arctic atmosphere. In mid-latitudes, where convection is often a dominant mixing mechanism, mixed layers of several kilometers' depth are common, but the very stable density profiles in the Arctic suppress convection. In that environment the apparent coupling across the lower kilometer of the atmosphere is a surprising result that requires good dynamical models to understand. We have exercised two models and are encouraged by the preliminary results.
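The buoyant damping invoked above is conventionally quantified by the Brunt-Vaisala frequency, N = sqrt((g/θ)·dθ/dz); a minimal illustration with assumed (not measured) stable-profile values:

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
theta = 270.0     # assumed mean potential temperature, K
dtheta_dz = 0.01  # assumed stable potential-temperature lapse, K/m

# Brunt-Vaisala (buoyancy) frequency: a real N means a vertically displaced
# air parcel oscillates rather than keeps rising, i.e. convection is damped.
N = math.sqrt(g / theta * dtheta_dz)
period = 2 * math.pi / N  # oscillation period of a displaced parcel, s
print(N, period)
```

For these assumed values N is about 0.02 s^-1, giving buoyancy oscillations with periods of a few minutes, the kind of traveling-wave undulation the lidar time series show.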
Importance to LANL's Science and Technology Base and National R&D Needs

This work is firmly within the Laboratory's core competency in Earth and Environmental Systems and utilizes HIGRAD, an extremely high-resolution atmospheric model that has been developed within the High Performance Computing core competency. The data were collected under the Remote Sensing LDRD project, and the interpretation is consistent with the goals of the Global Environmental Systems Tactical Goal. The experimental data were collected as part of the international expedition ARCTIC-96, in which a major goal was to assess the biogenic feedback from sea-borne organisms to modification of cloud optical properties. This is an important corollary of the evaluation of human-induced climate modification. The steps include: (1) emission of organic sulfides from the sea, (2) oxidation and modification of those to sulfate particles (3) that serve as cloud condensation nuclei, (4) resulting in altered cloud droplet size distributions, and hence (5) altered scattering and absorption of visible and infrared radiation. The mixing as diagnosed by the lidar is a crucial step in determining the quantity and location of the emissions. The other important practical problem is the transport of hazardous material across the Arctic Basin. Once again, any data and analysis that can teach us more about air-sea exchange and vertical mixing in the atmosphere will contribute significantly to an improved understanding of this important problem.

Scientific Approach and Accomplishments

We have examined the data images to identify mechanisms that couple boundary-layer dynamical processes to cloud physical processes. In preliminary examination of the data it has become clear that buoyancy-driven waves in the thermally stably stratified boundary layer of the region play a dominant role by moving stratus cloud layers vertically over ranges of a few hundred meters.
In many cases the displacement is symmetrical: cloud layers simply oscillate in the vertical, and no apparent mixing takes place in the surface layers beneath the cloud. However, we have identified numerous cases where sections of the cloud layer are detached from the main body of stratus. This is illustrated in Figure 1. These events are much more likely to be associated with turbulent mixing events that reach the surface. Barr et al. (1997) offer conceptual mechanisms in which a displaced segment of stratus undergoes additional evaporation of small droplets (in a downward displacement) or condensation (upward displacement) because of the properties of the ambient atmosphere it encounters in the original displacement. The change in latent heat alters the buoyancy of the cloud segment. It may continue to rise or fall until it finds its own new buoyancy equilibrium. The observed result of this process is an almost "cookie cutter" removal of a slab of stratus to a new altitude a few hundred meters below or above the surrounding cloud. Concurrent surface-based observations often reveal rain or snow showers, enhanced turbulence, and excursions in the concentrations of trace chemical compounds or aerosols: signatures of episodic mixing in the atmospheric boundary layer of the high Arctic. It remains to quantify the physical processes through the use of models that contain the appropriate thermodynamics and hydrodynamics, with particular attention to cloud physics. We currently use three such models, as described by Kao and Smith (1996), ReVelle (1994), and Nappo (1994). A preliminary two-dimensional simulation was carried out using HIGRAD, a very high resolution hydrodynamic model. The calculation was initialized with the observed temperature and moisture profile for a particular case of practical interest, August 7, 1996.
2) a parameterized but realistic long-wave radiation model,
3) a relatively simple air-flow chemistry model (with 67 chemical substances included),
4) an atmospheric aerosol model with feedbacks to clouds, radiation, and air-flow chemistry,
5) a simple stratus cloud model,
6) a force-restore model for the detailed energy exchanges at the lower boundary of the model (this was originally developed for soil at middle latitudes and modified for high latitudes over ice/snow surfaces; the energy exchange is linked both dynamically and energetically to the surface-layer behavior),
7) a surface-layer, Monin-Obukhov, similarity-theory approach below about 10 meters,
8) an "Ekman-layer" eddy regime aloft that is linked at its interface to the surface-layer behavior,
9) use of the moisture availability parameter to specify the water loading of the lower boundary, and
10) inputs of wind speed, direction, potential temperature, water-vapor mixing ratio, etc. for initialization, forecasting the future state of the boundary layer using a variable time step (satisfying the linearized CFL stability criterion as a function of the degree of turbulence predicted).

It is on the detailed physics involved in item (7) that the current paper is focused. A similar paper is also being written by E. D. Nilsson on the ability of the model to properly predict the dynamical 12-hour repeatability of the low-level jet in the High Arctic atmospheric boundary layer. Briefly, bursting is defined as a relatively rapid variation in the air temperature and mean wind speed, etc., during periods in which relatively smooth behavior is at first expected. It is intimately connected with the near balance between the turbulent flux divergence and the radiative flux divergence in the energy conservation equation for the surface-layer potential temperature. It also directly involves the dynamical flow transition between laminar and turbulent flow in the surface layer.
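The variable time step mentioned in item (10), chosen to satisfy a linearized CFL stability criterion, can be sketched as follows; the function name, Courant number, and time-step cap are illustrative assumptions, not values taken from the model itself:

```python
def cfl_timestep(u_max, dz, courant=0.5, dt_max=60.0):
    """Variable time step satisfying a linearized CFL criterion,
    dt <= courant * dz / |u|_max, capped at dt_max.
    The Courant number and cap are illustrative placeholders."""
    if u_max <= 0.0:
        return dt_max                      # quiescent flow: use the cap
    return min(dt_max, courant * dz / u_max)
```

As turbulence strengthens (larger predicted velocities), the returned step shrinks, which matches the text's statement that the step depends on the degree of turbulence predicted.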
During periods when the radiative flux divergence is most important in providing cooling, the layers involved are predicted to be in a state of laminar flow. Conversely, when the turbulent flux divergence dominates the flow, the heated layers are predicted to be in a state of turbulent flow. This transition happens as a function of the specified bulk-layer Richardson number, Ri, that is continuously computed throughout the numerical integration. Because of the rapid variations predicted, it is not very sensitive to the precise critical value and can even successfully incorporate a hysteresis effect with two widely separated critical values, i.e.,
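The Richardson-number switch with hysteresis described above can be illustrated with a minimal sketch; the two critical values and the layer parameters below are placeholders, not those used in the model:

```python
def bulk_richardson(d_theta, dz, theta0, du):
    """Bulk-layer Richardson number, Ri = (g / theta0) * d_theta * dz / du**2."""
    g = 9.81  # m s^-2
    return (g / theta0) * d_theta * dz / du ** 2

def flow_state(ri, prev_state, ri_laminar=1.0, ri_turbulent=0.2):
    """Laminar/turbulent switch with hysteresis: two widely separated
    critical Ri values (both illustrative) so the state does not chatter."""
    if prev_state == "turbulent" and ri > ri_laminar:
        return "laminar"        # strong stability relaminarizes the layer
    if prev_state == "laminar" and ri < ri_turbulent:
        return "turbulent"      # weak stability lets turbulence erupt
    return prev_state
```

Because the laminar and turbulent thresholds differ, the flow state changes only on a large excursion of Ri, reproducing the insensitivity to the precise critical value noted in the text.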
Gravitational wave quasinormal mode from Population III massive black hole binaries in various models of population synthesis

Focusing on the remnant black holes after merging binary black holes, we show that ringdown gravitational waves of Population III binary black holes mergers can be detected with the rate of $5.9-500~{\rm events~yr^{-1}}~({\rm SFR_p}/ (10^{-2.5}~M_\odot~{\rm yr^{-1}~Mpc^{-3}})) \cdot ({\rm [f_b/(1+f_b)]/0.33})$ for various parameters and functions. This rate is estimated for the events with SNR$>8$ for the second generation gravitational wave detectors such as KAGRA. Here, ${\rm SFR_p}$ and ${\rm f_b}$ are the peak value of the Population III star formation rate and the fraction of binaries, respectively. When we consider only the events with SNR$>35$, the event rate becomes $0.046-4.21~{\rm events~yr^{-1}}~({\rm SFR_p}/ (10^{-2.5}~M_\odot~{\rm yr^{-1}~Mpc^{-3}})) \cdot ({\rm [f_b/(1+f_b)]/0.33})$. This suggests that for remnant black hole's spin $q_f>0.95$ we have the event rate with SNR$>35$ less than $0.037~{\rm events~yr^{-1}}~({\rm SFR_p}/ (10^{-2.5}~M_\odot~{\rm yr^{-1}~Mpc^{-3}})) \cdot ({\rm [f_b/(1+f_b)]/0.33})$, while it is $3-30~{\rm events~yr^{-1}}~({\rm SFR_p}/ (10^{-2.5}~M_\odot~{\rm yr^{-1}~Mpc^{-3}})) \cdot ({\rm [f_b/(1+f_b)]/0.33})$ for the third generation detectors such as Einstein Telescope. If we detect many Population III binary black holes merger, it may be possible to constrain the Population III binary evolution paths not only by the mass distribution but also by the spin distribution.

Introduction

The final part of gravitational waves (GWs) from merging binary black holes (BBHs) is called the ringdown phase. When the remnant compact object is a black hole (BH), this phase is described by quasinormal modes (QNMs) of the BH (see, e.g., Ref. [1]).
In general, the BH is expected to be described by the Kerr spacetime [2],

$$ds^2 = -\left(1 - \frac{2Mr}{\Sigma}\right) dt^2 - \frac{4Mar\sin^2\theta}{\Sigma}\, dt\, d\varphi + \frac{\Sigma}{\Delta}\, dr^2 + \Sigma\, d\theta^2 + \left(r^2 + a^2 + \frac{2Ma^2 r \sin^2\theta}{\Sigma}\right) \sin^2\theta\, d\varphi^2, \quad (1)$$

where $\Delta = r^2 - 2Mr + a^2$ and $\Sigma = r^2 + a^2\cos^2\theta$, with mass M and spin a. In Eq. (1), we used the units of c = G = 1. The detection of QNM GWs not only gives a precise estimation of the BH's mass and spin, but also tests Einstein's general relativity (see the extensive review in [3]). In our previous paper [4], using the recent population synthesis results of Population III (Pop III) massive BBHs [5,6], we discussed the event rate of QNM GWs by second-generation gravitational wave detectors.

Population III binary population synthesis calculation

To estimate the detection rate of GWs from Pop III BBH mergers, it is necessary to know how many Pop III binaries become BBHs which merge within the Hubble time. Here, we use the binary population synthesis method, a Monte Carlo simulation of binary evolutions. The Pop III binary population synthesis code [4-6] has been upgraded from the binary population synthesis code [12] for Pop III binaries. In this paper, we calculate the same models as Ref. [6] using the same methods as Ref. [6] in order to obtain the mass ratio distribution and the spin distribution. In this section, we review the calculation method and models. Note that in this paper, we do not consider the kick models and the worst model discussed in Ref. [6] for simplicity, because in these models BBHs have misaligned spins and the final spins after merger are too complex. First, we need to give the initial conditions when a binary is born. The initial conditions such as primary mass M1, mass ratio M2/M1 (where M2 is the secondary mass), separation a, and orbital eccentricity e are decided by the Monte Carlo method with initial distribution functions such as the initial mass function (IMF), the initial mass ratio function (IMRF), the initial separation function (ISF), and the initial eccentricity function (IEF).
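The metric functions of Eq. (1) are simple to evaluate numerically. A minimal sketch in the paper's units (c = G = 1), with the outer-horizon radius (the larger root of Δ = 0) added for illustration:

```python
import math

def kerr_delta(r, m, a):
    """Delta = r^2 - 2*M*r + a^2 (geometrized units, G = c = 1)."""
    return r * r - 2.0 * m * r + a * a

def kerr_sigma(r, theta, a):
    """Sigma = r^2 + a^2 * cos^2(theta)."""
    return r * r + (a * math.cos(theta)) ** 2

def outer_horizon(m, a):
    """Outer root of Delta = 0: r_+ = M + sqrt(M^2 - a^2); requires |a| <= M."""
    return m + math.sqrt(m * m - a * a)
```

For a = 0 this reduces to the Schwarzschild radius r_+ = 2M.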
For example, in our standard model, we use a flat IMF, a flat IMRF, a log-flat ISF, and an IEF with a function ∝ e. There are no observations of Pop III binaries because they were born in the early universe. Thus, we do not know the initial distribution functions of Pop III binaries from observations. For the IMF, however, recent simulations [13,14] may suggest a flat IMF, and therefore we adopt the flat IMF. For the other initial distribution functions, we adopt those of the Pop I case, where a Pop I star is a solar-like star. The above set of initial distribution functions is called our standard model of 140 cases with the optimistic core-merger criterion in this paper. Second, we calculate the evolution of each star, and if the star satisfies a condition of binary interactions, we evaluate the effects of binary interactions and change M1, M2, a, and e. As the binary interactions, we treat the Roche lobe overflow (RLOF), the common envelope (CE) phase, the tidal effect, the supernova effect, and the gravitational radiation. The RLOF is stable mass transfer, while unstable mass transfer becomes the CE phase when the donor star is a giant. Here, we need some parameters for the calculation of the RLOF and CE phases. In the case of the RLOF, we use the loss fraction β of transferred stellar matter, defined as

PTEP 2016, 103E01 T. Kinugawa et al.

$$\dot{M}_2 = (1 - \beta)\,\dot{M}_1,$$

where $\dot{M}_2$ is the mass accretion rate of the receiver star and $\dot{M}_1$ is the mass loss rate of the donor star. In our standard model, β is determined by Hurley's function [12], which has been discussed for the Pop I case. When the receiver star is in the main-sequence phase or in the He-burning phase, we assume that the accretion rate is limited by the accretion timescale $\tau_{\dot M}$ of the receiver star, which is defined in terms of the Kelvin-Helmholtz timescale

$$\tau_{\rm KH,2} = \frac{G M_2 (M_2 - M_{\rm c,2})}{R_2 L_2}.$$

Here, M2, Mc,2, L2, and R2 are the mass, the core mass, the luminosity, and the radius of the receiver star, respectively.
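The Monte Carlo draw of initial conditions in the standard model (flat IMF, flat IMRF, log-flat ISF, and an IEF ∝ e) can be sketched as below; the mass and separation ranges are illustrative assumptions, while the minimum mass ratio 10 M⊙/M1 follows the value adopted in the text:

```python
import random

def draw_binary(rng, m_min=10.0, m_max=140.0, a_min=10.0, a_max=1.0e6):
    """Draw one Pop III binary's initial conditions (mass/separation
    ranges are illustrative): flat IMF for M1, flat IMRF for the mass
    ratio above the minimum 10 Msun / M1, log-flat ISF for the
    separation a, and an IEF p(e) proportional to e."""
    m1 = rng.uniform(m_min, m_max)                  # flat IMF
    q = rng.uniform(10.0 / m1, 1.0)                 # flat IMRF, q >= 10/M1
    a = a_min * (a_max / a_min) ** rng.random()     # log-flat separation
    e = rng.random() ** 0.5                         # p(e) ∝ e via inverse CDF
    return m1, q * m1, a, e
```

The eccentricity draw uses the inverse-CDF trick: for p(e) ∝ e on [0, 1] the cumulative distribution is e², so e = sqrt(U) with U uniform.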
When the receiver star is in the He-shell burning phase, we assume that the receiver star can get all the transferred matter from the donor star, i.e., $\dot{M}_2 = \dot{M}_1$. Although we use the β function defined by Hurley et al. [12] in our standard model, we also treat the accretion rate of the receiver star described by a constant β parameter. This is because the accretion rate of a receiver star which is not a compact object is not understood well. Furthermore, in our previous study [6], we have shown that the Hurley fitting formula is consistent with β = 0 in the Pop III binary case. Thus, we also discuss the cases of β = 0.5 and β = 1. It is noted that the stability of the mass transfer changes if the mass transfer is non-conservative (β > 0). We use the criterion given in Ref. [15],

$$\zeta_{\rm L} = \frac{d \log R_{\rm L,1}}{d \log M_1},$$

where M1 and RL,1 are the mass and the Roche lobe radius of the donor star. If $\zeta_{\rm ad} = d \log R_{\rm ad,1}/d \log M_1 < \zeta_{\rm L}$, where Rad,1 is the radius of the donor star in hydrostatic equilibrium, the binary starts a dynamically unstable mass transfer such as the CE phase. When the receiver star is a compact object such as a neutron star or a BH, we always use β = 0 and the upper limit of the accretion rate is limited by the Eddington accretion rate,

$$\dot{M}_{\rm Edd} = \frac{4 \pi c R}{\kappa_{\rm T}},$$

where $\kappa_{\rm T} = 0.2(1 + X)$ cm² g⁻¹ is the Thomson scattering opacity and X (= 0.76) is the H-mass fraction for Pop III stars. At the CE phase, the companion star plunges into the envelope of the donor star and spirals in. The orbital separation after the CE phase, a_f, is calculated by the energy formalism [16],

$$\frac{G M_1 M_{\rm env,1}}{\lambda R_1} = \alpha \left( \frac{G M_{\rm c,1} M_2}{2 a_{\rm f}} - \frac{G M_1 M_2}{2 a_{\rm i}} \right),$$

where a_i, α, and λ are the orbital separation before the CE phase, the efficiency, and the binding energy parameter, respectively, and M_env,1, M_c,1, and R_1 are the envelope mass, core mass, and radius of the donor. In our standard model, we adopt αλ = 1. We also calculate the αλ = 0.01, 0.1, and 10 cases in this paper.
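Solving the energy formalism for the post-CE separation a_f yields a one-line routine. This is a sketch of the standard Webbink-style balance (envelope binding energy against the change in orbital energy) with G factored out and arbitrary test units; it may differ in detail from the exact form used in the population synthesis code:

```python
def ce_separation(m1, mc1, menv, m2, r1, a_i, alpha_lambda=1.0):
    """Post-common-envelope separation a_f from the energy formalism
    (Webbink-style sketch; G factored out, any consistent units):
    M1*Menv / (alpha*lambda*R1) = Mc1*M2/(2*a_f) - M1*M2/(2*a_i)."""
    rhs = m1 * menv / (alpha_lambda * r1) + m1 * m2 / (2.0 * a_i)
    return mc1 * m2 / (2.0 * rhs)
```

Larger αλ means the envelope is cheaper to unbind, so the orbit shrinks less; very small αλ drives a_f toward zero, which is why the αλ = 0.01 binaries discussed later tend to merge during the CE phase.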
Finally, if a binary becomes a BBH, we calculate the merger time from the gravitational radiation reaction, and check whether the BBH can merge within the Hubble time or not. We repeat these calculations and take the statistics of BBH mergers. To study the dependence of Pop III BBH properties on the initial distribution functions and binary parameters, we calculate ten models with the Pop III binary population synthesis method [5,6] in this paper. Table 1 shows the initial distribution functions and the binary parameters of each model. The columns show the model name, IMF, IEF, the CE parameter αλ, and the loss fraction β of transferred stellar matter at the RLOF in each model.

The mass ratio distributions of binary black hole remnants

The corresponding figures show the initial mass ratio distributions and the mass ratio distributions of merging BBHs. The RLOF tends to make binaries of equal mass. Thus, the BBH mass ratio distributions depend on how many binaries evolve via the RLOF. Population III stars with mass < 50 M⊙ evolve as blue giants [5,17]. Thus, in the case of an IMF in which light stars are in the majority, the binaries tend to evolve only via the RLOF, not via the CE phase. Therefore, the steeper IMFs tend to produce many equal-mass BBHs. In this calculation, since we adopt the minimum mass ratio as 10 M⊙/M1, the initial mass ratio distribution of models with an IMF in which light stars are the majority is up to

On the other hand, if we change the IEF, the mass ratio distribution does not change much. Thus, the dependence on the IEF is not so large (see Figs. 1 and 3). For the CE parameter dependence (see Figs. 1 and 4), small mass ratio binaries in the αλ = 0.01 model are much fewer than those in the other models. In the αλ = 0.01 model, all the binaries which evolve via the CE phase merge during the CE phase due to the too-small αλ. Thus, the merging BBHs in this model evolve only via the RLOF, and become of equal mass by the RLOF.
The change is not large between the models with CE parameters αλ = 0.1 and 10. As for the mass loss fraction β (see Figs. 1 and 5), when β becomes large, there are three effects. First, binaries tend not to enter the CE phase. Second, the mass accretion by RLOF becomes less effective. Third, RLOF tends to finish early. The first effect makes binaries evolve via RLOF. However, the second and third effects work against the tendency to become equal mass. Thus, the mass ratio distributions of the β = 0.5 and 1 models look similar to that of our standard model.

The spin distributions of binary black hole remnants

We calculate the spin evolution of binaries using the tidal friction. We use the initial spin distribution and the tidal friction calculations as in Refs. [5,12]. When the Pop III star becomes a BH, we calculate the BH's spin using the total angular momentum of the progenitor. If the estimated spin of the BH is more than the Thorne limit q_Thorne = 0.998 [18], we assign the non-dimensional spin parameter q = q_Thorne. We ignore the spin up by the accretion during a mass transfer after the star has become a BH, for the following reason. The spin up by the accretion is calculated from the gain in angular momentum δJ, the BH's mass M_BH, and the gain of the BH's mass δM. Since the accretion rate of the BH during RLOF is the Eddington rate, the gain of the BH's mass is δM = Ṁ_Edd t_life, where t_life is the lifetime of the Pop III star. With the Eddington accretion rate and the lifetime of the massive star, t_life ∼ 1 Myr, we have δq ∼ 0.01, and the spin up by the accretion during RLOF is negligible. On the other hand, the accretion rate during the CE phase is Ṁ ∼ 10⁻³ M⊙ yr⁻¹ [19], and the timescale of the CE phase is about the thermal timescale of a red giant, t_KH ∼ 10² yr or less. As a result, we have δq ≤ 0.1, and the spin up by the accretion during the CE phase is negligible, too.
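The order-of-magnitude argument above (δq ∼ 0.01 for Eddington-limited accretion over ∼1 Myr, and δq well below 0.1 during the CE phase) can be checked with simple arithmetic; the accretion rates and BH mass below are illustrative round numbers, and the ISCO specific-angular-momentum prefactor is dropped:

```python
def spin_up_estimate(mdot, t_acc, m_bh):
    """Order-of-magnitude spin-up by accretion, delta_q ~ delta_M / M_BH.
    A rough sketch: the ISCO angular-momentum prefactor is dropped."""
    return (mdot * t_acc) / m_bh

# RLOF: Eddington-limited rate over a ~1 Myr stellar lifetime (illustrative values).
dq_rlof = spin_up_estimate(mdot=3e-7, t_acc=1e6, m_bh=30.0)   # ~0.01
# CE phase: ~1e-3 Msun/yr over a ~1e2 yr red-giant thermal timescale.
dq_ce = spin_up_estimate(mdot=1e-3, t_acc=1e2, m_bh=30.0)     # well below 0.1
```

Both estimates land at or below the thresholds quoted in the text, which is why accretion spin-up is neglected after BH formation.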
Figures 6-15 show the spin distributions of merging BBHs and cross-section views of these spin distributions. The spins of merging Pop III BBHs can be roughly classified into three types: group 1, in which both BHs have high spins q ∼ 0.998; group 2, in which both BHs have low spins; and group 3, in which one of the pair has high spin q ∼ 0.998 and the other has low spin. When the BH progenitor evolves via the CE phase, the Pop III BH has low spin, and vice versa. If a Pop III star which is a giant evolves via the CE phase, the Pop III star loses the envelope and almost all the angular momentum due to the envelope evaporation. On the other hand, if the Pop III star evolves without the CE phase, the Pop III star can have a high angular momentum. Therefore, group 1 progenitors evolve without the CE phase and the envelopes of the progenitors remain. In group 2, both stars evolve via the CE phase and they lose their envelopes and almost all their angular momentum. In group 3, the primary evolves via the CE phase and the secondary evolves without the CE phase, or vice versa.

[Figure caption: (a) The distribution of q2 for 0 < q1 < 0.05; the q2 distribution has bimodal peaks at 0 < q2 < 0.15 and 0.95 < q2 < 0.998. (b) The distribution of q2 for 0.95 < q1 < 0.998; large values of q2 are the majority, so there is a group in which both q1 and q2 are large.]

The steeper IMF models contain more BBHs which have high spins. In particular, in the case of the Salpeter IMF about 40% of the BBHs have spins q1 > 0.95 and q2 > 0.95. As for the IEF dependence, there is no tendency like that of the mass ratio distribution (see Figs. 6, 9, and 10). The dependence on the CE parameter can be considered as follows (see Figs. 6, 11, 12, and 13). In the αλ = 0.01 model, almost all merging Pop III BBHs have high spins. About 60% of merging Pop III BBHs have q1 > 0.95 and q2 > 0.95 (i.e., group 1).
The reason for this is that the progenitors which evolve via the CE phase always merge during the CE phase due to the too-small αλ. Thus, the progenitors of merging Pop III BBHs in this model evolve only via RLOF, and they do not lose angular momentum via the CE phase. In the case of the αλ = 0.1 model, the fraction of group 2 is lower than that of our standard model, as in the αλ = 0.01 model. However, the fraction of group 1 is almost the same as that of our standard model, and the fraction of group 3 is larger than that of our standard model, unlike the αλ = 0.01 model. The reason for this is that although progenitors which enter CE phases twice merge during the CE phase due to the small αλ, progenitors which enter the CE phase only once do not merge during the CE phase, and the Pop III BBHs which cannot merge within the Hubble time in our standard model become able to merge within the Hubble time due to the small αλ. In the αλ = 10 model, the shape of the spin distribution is almost the same as that of our standard model. The difference in this model from our standard model is the small increase of the fraction of group 2, because the progenitors which merge during the CE phase in our standard model become able to survive due to the large αλ. As for the β dependence (see Figs. 6, 14, and 15), not only the stellar mass loss during RLOF but also the criterion of dynamically unstable mass transfer such as a CE phase is changed by β. In the β = 0.5 model, the fraction of group 1 is larger than that of our standard model because in this model the progenitors enter the CE phase less frequently than those of our standard model. In the β = 1 model, the fraction of group 1 is larger than that of our standard model, as in the β = 0.5 model. However, the fraction of group 1 is smaller than that of the β = 0.5 model because in this model the progenitors lose a lot of angular momentum during the RLOF due to the high β.
In this model, since the mass transfer cannot become dynamically unstable, the evolution passes via CE phases as follows. The progenitors of group 2 enter the CE phase when the primary and secondary become giants at the same time and plunge into each other. On the other hand, the progenitors of group 3 enter the CE phase when the secondary plunges into the primary envelope due to the initial eccentricity.

Remnant mass and spin

Based on Ref. [11] (see also Refs. [20,21]), we calculate the remnant mass M_f and spin q_f from given BH binary parameters, M1, M2, q1, and q2 (see Ref. [4] for a detailed discussion). The remnant mass and spin for each case are shown in Figs. 16-25. Here, we have normalized the distribution, and used binning with ΔM_f = 10 M⊙ for M_f and Δq_f = 0.1 (thick, red) and 0.02 (thin, blue) for q_f. The IMF dependence shown in Figs. 16, 17, and 18 is described below for the remnant mass and spin. When we treat a steeper IMF, we have a lower number of high-mass remnants. On the other hand, the number of high-spin remnants increases slightly in the steeper IMF cases. This is because in the steeper IMF models we have a larger number of progenitors with mass smaller than 50 M⊙. As for the IEF dependence, we find from Figs. 16, 19, and 20 that there is no strong tendency. Next, from Figs. 16, 21, 22, and 23 the CE parameter dependence can be described. In the αλ = 0.01 model, the maximum of the remnant mass becomes much smaller than that of our standard model. This is because the high-mass progenitors merge during a CE phase due to the too-small αλ. As for the remnant spin, we do not have remnant spins which are smaller than 0.55, since BBHs tend to be of equal mass. If a light BH falls into a non-spinning massive BH, the remnant BH can have a small spin (q_f < 0.6). However, in the above model many BBHs are of equal mass. In the αλ = 0.1 model, the maximum remnant mass is smaller than that of our standard model again.
In this model, the fraction of remnant spins with 0.7 < q_f < 0.8 is larger than that of our standard model, because the fraction of group 3 in this model is larger than in our standard model. As for the β dependence, we find from Figs. 16, 24, and 25 that the maximum remnant mass becomes lower for higher β, due to the mass loss during RLOF.

Event rates for ringdown gravitational waves

To estimate the event rate for ringdown gravitational waves, it is necessary to have the merger rate density of Pop III BBHs. The merger rate density R_m [Myr⁻¹ Mpc⁻³] has been derived for various models in Ref. [6], and can be approximated by a fitting formula for low redshift. This is summarized in Table 2. In practice, we have considered the fitting for R_m in terms of redshift z up to z = 2, but the above R_m is derived by using z ∝ D, where D denotes the (luminosity) distance, because we use it only up to z ∼ 0.2 in this paper. Using Ref. [22], we calculate the angle-averaged signal-to-noise ratio (SNR), assuming that a fraction r = 3% of the total mass energy is radiated in the ringdown phase. Note that for simplicity, any effect of the cosmological distance is ignored here. The symmetric mass ratio is obtained from the remnant BH's mass and spin (see Ref. [23]). We evaluate the above SNR of the QNM GWs in the expected KAGRA noise curve S_n(f) [9,10] [bKAGRA, VRSE(D) configuration] (see Ref. [4] for the detailed calculation). This noise curve is presented in Ref. [26], and we use the fitting noise curve obtained in Ref. [24], based on Ref. [26]. Then, the event rate for a given SNR is derived by using the merger rate density in Table 2. In the right column of Table 2, we present the event rate with SNR > 8. Here, the event rates [yr⁻¹] have been divided by the dependence on the star formation rate SFR_p and the binary fraction f_b. In Fig. 26, based on Ref.
[24], we show the parameter estimation in a case with SNR = 35 for the typical case [5,6] (with M_rem = 57.0904 M⊙ and α_rem = 0.686710). The (black) thick line shows the Schwarzschild limit and the ellipses are the contours of 1σ, 2σ, 3σ, 4σ, and 5σ. In general relativity, the top-left side of the thick black line is prohibited. Thus, using the events with SNR > 35, we summarize various event rates in Tables 3 and 4. Table 3 shows the total event rates [yr⁻¹] divided by the dependence on the star formation rate SFR_p and the binary fraction f_b for the ten models, and those for a remnant BH with q_f > 0.7, 0.9, and 0.95. In Table 4, we present the detection rate [yr⁻¹] divided by the dependence on the star formation rate SFR_p and the binary fraction f_b as a function of the lower limit of the solid angle of a sphere, 4πC, by which we can estimate the contribution of the ergoregion. The relation between this C and the spin parameter q was obtained in Ref. [27] (see also the recent studies in [25,27-29]). It is noted that q_f > 0.9 corresponds to C ≳ 0.97.

Discussions

In this paper, we extended our previous work [4] (the standard model in this paper) by looking at the dependence on various parameters of the Pop III binary population synthesis calculation. As shown in the right column of Table 2, the detection rate with SNR > 8 for second-generation GW detectors such as KAGRA was obtained as 5.9-500 events yr⁻¹ (SFR_p/(10⁻²·⁵ M⊙ yr⁻¹ Mpc⁻³)) · ([f_b/(1+f_b)]/0.33). If we detect many Pop III BBH mergers, it may be possible to constrain the Pop III binary evolution paths not only by the mass distribution but also by the spin distribution. In particular, as described above, the spin of a black hole depends strongly on whether the progenitor of the black hole enters the CE phase or not. Thus, we can check whether a BBH progenitor evolved via the CE phase or not by the spins of the BBH. One of the interesting outputs from the QNM GWs is whether we can confirm the ergoregion of the Kerr BH.
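The normalization of the quoted event rates by SFR_p and f_b can be captured in a small helper; this is a sketch of the scaling used throughout the tables, not code from the paper:

```python
def scaled_event_rate(rate_fiducial, sfr_p, f_b):
    """Scale a fiducial ringdown event rate [yr^-1] by the Pop III peak
    star formation rate SFR_p and the binary fraction f_b, following the
    normalization (SFR_p / 10^-2.5) * ([f_b/(1+f_b)] / 0.33)."""
    sfr_ref = 10.0 ** -2.5          # Msun yr^-1 Mpc^-3
    binary_ref = 0.33               # reference value of f_b/(1+f_b)
    return rate_fiducial * (sfr_p / sfr_ref) * ((f_b / (1.0 + f_b)) / binary_ref)
```

At the reference values (SFR_p = 10^-2.5 M⊙ yr⁻¹ Mpc⁻³ and f_b/(1+f_b) = 0.33) the function returns the fiducial rate unchanged.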
From Table 4, the event rate for the confirmation of > 50% of the ergoregion is 0.040-3.1 events yr⁻¹ (SFR_p/(10⁻²·⁵ M⊙ yr⁻¹ Mpc⁻³)) · ([f_b/(1+f_b)]/0.33) with SNR > 35. When we consider extracting the rotational energy of BHs using the Penrose process [31] or the Blandford-Znajek process [32], for example, we want to observe highly spinning remnant BHs. For remnant BHs with spin q_f > 0.95, the event rate with SNR > 35 is 0.0027-0.037 events yr⁻¹ (SFR_p/(10⁻²·⁵ M⊙ yr⁻¹ Mpc⁻³)) · ([f_b/(1+f_b)]/0.33) in the KAGRA detector from Table 3. A third-generation GW observatory, the Einstein Telescope [33], will have an improvement in sensitivity of about a factor of ten over second-generation detectors. This means that we have roughly 1000 times higher expected event rates; for such detectors the ringdown event rate becomes 3-30 events yr⁻¹ (SFR_p/(10⁻²·⁵ M⊙ yr⁻¹ Mpc⁻³)) · ([f_b/(1+f_b)]/0.33). Here, we have introduced r as the fraction of the BH mass radiated in the ringdown phase, and assumed r = 3% to calculate the SNR and the event rates in this paper. If r = 0.3%, we will still have the possibility of detecting QNM GWs from highly spinning remnant BHs. Finally, Pop III BBH mergers can be a target for space-based GW detectors such as eLISA [34] and DECIGO [35]. Study in this direction is one of our future works.
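The roughly 1000-fold gain quoted for the Einstein Telescope follows from horizon-volume scaling: a factor-of-ten improvement in broadband strain sensitivity increases the horizon distance tenfold, and for a uniform source density the surveyed volume (hence the rate) grows as the cube. A one-line check:

```python
def rate_improvement(sensitivity_factor):
    """Expected detection-rate gain from a broadband strain-sensitivity
    gain: horizon distance scales linearly with sensitivity, so the
    surveyed volume (and hence the rate, for a uniform source density)
    scales as its cube."""
    return sensitivity_factor ** 3
```

With a factor of ten in sensitivity, this reproduces the roughly 1000 times higher expected event rates stated above.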
Vegetation Structure and Carbon Stocks of Two Protected Areas within the South-Sudanian Savannas of Burkina Faso

Savannas and adjacent vegetation types like gallery forests are highly valuable ecosystems contributing to several ecosystem services, including carbon budgeting. Financial mechanisms such as REDD+ (Reduced Emissions from Deforestation and Forest Degradation) can provide an opportunity for developing countries to alleviate poverty through conservation of their forestry resources. However, for availing such opportunities, carbon stock assessments are essential. Therefore, a research study for this purpose was conducted at two protected areas (Nazinga Game Ranch and Bontioli Nature Reserve) in Burkina Faso. Similarly, analysis of various vegetation parameters was also conducted to understand the overall vegetation structure of these two protected areas. For estimating aboveground biomass, existing allometric equations for dry tropical woody vegetation types were used. Compositional structure was described by applying tree species and family importance indices. The results show that both sites collectively contain a mean carbon stock of 3.41 ± 4.98 Mg·C·ha⁻¹. Among different savanna vegetation types, gallery forests recorded the highest mean carbon stock of 9.38 ± 6.90 Mg·C·ha⁻¹. This study was an attempt at addressing the knowledge gap, particularly on carbon stocks of protected savannas; it can serve as a baseline for carbon stocks for future initiatives such as REDD+ within these areas.

Introduction

The population of Burkina Faso was recorded as 15.7 million in 2009. It is spread over an area of 274,000 km², and almost 80% of the population lives in rural areas and depends on agriculture as its main source of livelihood [1]. The population depends heavily on fuelwood as its main source of energy [2]. Moreover, livestock production and increases in population have put undue pressure on plant resources [3].
As a consequence, the vegetation structure and composition of the savanna habitats have been severely affected [4]. This degradation is further leading to challenges such as food shortages, water scarcities, income losses, resource conflicts, and environmental deterioration [5]. Poverty and the increasing need for food have resulted in agricultural expansion [6]. Burkina Faso can be divided into two main agro-ecological zones (i.e., Sahelian savanna and Sudanian savanna), categorized on the basis of isohyets and the length of the dry season [7]. The Sahelian savanna has a dry season of seven to nine months annually with annual rainfall of 600 mm. The Sudanian savanna has a dry season of four to seven months annually with annual rainfall of 750-1200 mm [8]. The Sahelian savanna can further be categorized into Northern Sahelian savanna (with annual rainfall of 600 mm and eight to nine months of dry season) and Southern Sahelian savanna (with annual rainfall of 600-750 mm and seven to eight months of dry season) [9]. Similarly, the Sudanian savanna can further be categorized into Northern Sudanian savanna (with annual rainfall of 750-1000 mm and six to seven months of dry season) and Southern Sudanian savanna (with annual rainfall of 1000-1200 mm and four to six months of dry season) [9]. Burkina Faso has a forest area of 19.6%, with an additional 17.5% categorized as "other woodlands" [10]. The increasing pressure on forestry resources has resulted in a significant annual deforestation rate of 1.0% for the period 2010-2015 [10]. In addition, climate change constitutes a serious challenge that undermines efforts towards sustainable development. Carbon sequestration can therefore serve as an essential strategy for the mitigation of climate change [11]. Forest resources, on the other hand, can be helpful in addressing climate vulnerabilities such as food insecurity [12].
Reduced Emissions from Deforestation and Forest Degradation (REDD+) is a financial scheme aimed at reducing emissions from deforestation and forest degradation, at the conservation and enhancement of forest carbon stocks, and at sustainable forest management including ecological and social targets [13]. Hence, REDD+ not only provides developing countries an opportunity to tackle climate change but also helps them alleviate poverty while conserving their forest resources [14]. Moreover, it has also been identified as one of the most economically feasible mitigation options for tackling climate change [15]. The most important issue for REDD+ initiatives, however, is the estimation and monitoring of carbon stocks, and their success therefore largely depends on the availability of scientific information on forest carbon stocks [16]. Unfortunately, sufficient work on the quantification of carbon stocks in savannas does not exist [17]. Additionally, savannas have also been a major uncertainty in the carbon accounting of Africa [18]. Information on the composition and structural characteristics of the tree species in savannas is often lacking. Trees are considered an important component of vegetation and must be persistently monitored so that forest successional processes can be managed for maintaining habitat diversity [19]. Such quantitative information can be helpful in developing appropriate conservation guidelines for the savannas. The composition and structural characteristics of the vegetation also help in understanding the magnitude of anthropogenic pressures on ecosystems. In Burkina Faso, like other countries in the world, protected areas were established to safeguard the unique biodiversity of the respective areas. Moreover, protected areas also play an essential role in carbon sequestration [20].
Burkina Faso has 14% of its total land area categorized as protected areas and has future plans to increase the number to 30% [9]. The study therefore focused on assessing general composition and vegetation structure as well as the carbon stocks (Mg·C·ha −1 ) in the dry aboveground biomass (AGB dry ) of trees of typical vegetation types in two protected areas of Burkina Faso: Nazinga Game Ranch and Bontioli Nature Reserve. The main objective of this study is to provide a benchmark for future studies and baselines for future possible initiatives (e.g., REDD+), if initiated for these areas.

Study Area

Nazinga Game Ranch was created in 1979 (Figure 1) and is spread over an area of 97,536 ha at an average altitude of 280 m above sea level (asl) [21]. According to Burkina Faso's legislation, it has been classified as a protected area, listed as a "Wildlife Reserve", and it is very well known as a tourist destination [22]. There is a single dry season running from October to May and a single rainy season from June to September. It has a mean annual rainfall of 900 mm [23]. The average annual temperature is 27.1 °C. The Nazinga Game Ranch is traversed by the Sessile River and its two tributaries (i.e., the Dawevele and Nazinga Rivers); the rivers have characteristic seasonal flows. The vegetation has the characteristics of the Southern Sudanian savanna. Typical species of the area include: shea tree (Vitellaria paradoxa C.F. Gaertn.), kodayoru tree (Terminalia laxiflora Engl. & Diels), female gardenia (Gardenia erubescens Stapf & Hutch.), lingahi tree (Afzelia africana Sm.), and African birch (Anogeissus leiocarpa (DC.) Guill. & Perr.), among others [24]. Environments 2016, 3, 25
For better management purposes, the Nazinga Game Ranch has been divided into four zones: (i) conservation zone; (ii) buffer zone; (iii) commercial hunting zone; and (iv) village hunting zone. The conservation zone consists of 9% and the buffer zone consists of 5% of the total area. The commercial hunting zone and the village hunting zone comprise the remaining 86% of the total area [21]. A few settlements are also located in the commercial hunting zone and village hunting zone. The area was once known to be one of the least populated areas in Burkina Faso, but has been subjected to increasing migrations after the Sahelian drought in the 1970s [23]. Agriculture is the mainstay for the local people and the major agricultural crops are corn (Zea mays L.), sorghum (Sorghum bicolor (L.) Moench), pearl millet (Pennisetum glaucum (L.) R. Br.), and peanut (Arachis hypogaea L.).

Bontioli Nature Reserve is also called "Forêt Classée de Bontioli" and is located in the Sudanian zone of southwestern Burkina Faso in the province of Bougouriba (Figure 2). It is a Category IV protected area, managed mainly for conservation through active management, according to the International Union for Conservation of Nature (IUCN) Protected Areas Categories. It consists of the Total Reserve and the Partial Reserve. These areas were established by the territorial government during the colonial period based on two ministerial orders: (i) Order No. 3147/SE/EF of 23 March 1957, which was related to the demarcation of the area (29,500 ha) and the establishment of the Partial Reserve; (ii) Order No. 3417/SE/EF of 29 March 1957, which was related to the demarcation of the area and classification of the Total Reserve (12,700 ha). The research study was confined to the Total Reserve only, as the Partial Reserve of Bontioli does not have consistent savanna cover due to being subjected to high pressure from human activities [25]. The vegetation of the Bontioli Nature Reserve also has the characteristics of the Southern Sudanian savanna. The rainfall varies between 900 and 1000 mm per year [25]. The rainy season ranges from May to October and the dry season spans from November to April [26]. The mean temperature has been recorded as 27.1 °C for the period of 2004-2006. The main river is the Bougouriba, which is pivotal for the hydrographical network within the Bontioli Nature Reserve [25].
The highest altitude for the Bontioli Nature Reserve has been recorded as 350 m asl and the lowest altitude as 250 m asl. The tree species include wild syringe (Burkea africana Hook.), barwood (Pterocarpus erinaceus Poir.), ordeal tree (Crossopteryx febrifuga (Afzel. ex G. Don) Benth.), and cangara tree (Combretum glutinosum Perr. ex DC.).

Sampling Design

Due to the heterogeneous and overlapping landscape matrix, a stratified sampling design was adopted for this study. The vegetation at both sites is classified as Southern Sudanian savanna [27]. The vegetation was further segregated into different types according to their physiognomy: (i) woodland savanna; (ii) tree savanna; (iii) shrub savanna; and (iv) gallery forest [28]. The tree and shrub savannas were categorized according to the Yangambi classification of 1956 [29]. Gallery forests were categorized as the narrow patches found along the fringes of semi-permanent water courses [30]. Woodland savannas were categorized on the basis of their closed canopies and discontinuous grasses [31]. Twenty plots were established at either site, with five plots per vegetation type. The plots were square-shaped and had a size of 20 m × 20 m, as suggested by [32].

AGB dry and Carbon Stock Estimation

The diameter at breast height (DBH) over bark of each tree ≥5 cm in every plot was measured with a diameter tape at 1.3 m above ground level. In the case of multi-stemmed trees, all stems with DBH above 5 cm were measured and the formula of [33] was used for the calculation of the respective total DBH. The heights of the trees were estimated using a Blume-Leiss hypsometer. The heights of trees less than two meters were measured with a measuring tape. For multi-stemmed trees such as Mitragyna inermis (Willd.) Kuntze, the tip of the tallest stem was measured. For tree AGB dry estimation, the allometric equation suggested by [34] for dry forest stands was used, which is valid for DBH within the range of 5-156 cm, where H = height (m) and ρ = wood density (g·cm −3 ). The published wood densities were used for the AGB dry estimation (Table A1). Wood densities at species or generic level were used subject to their availability [35].
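As an illustration, the tree-level AGB dry and plot-level carbon stock calculation described above can be sketched in Python. This is a minimal sketch under two labeled assumptions, since neither formula is reproduced in the text: the multi-stem combination is assumed to follow the common quadratic (basal-area-preserving) convention, and the allometric model is assumed to be the Chave et al. (2005) dry-forest equation AGB = 0.112·(ρD²H)^0.916, which matches the stated 5-156 cm validity range.

```python
import math

def total_dbh(stem_dbhs_cm):
    """Equivalent DBH of a multi-stemmed tree.

    Assumption: the common quadratic (basal-area-preserving) convention
    DBH_total = sqrt(sum(d_i^2)); the exact formula of ref. [33] is not
    reproduced in the text.
    """
    return math.sqrt(sum(d * d for d in stem_dbhs_cm))

def agb_dry_kg(dbh_cm, height_m, wood_density_g_cm3):
    """Dry aboveground biomass (kg) of one tree.

    Assumption: the Chave et al. (2005) dry-forest model
    AGB = 0.112 * (rho * D^2 * H)^0.916, valid for 5 cm <= DBH <= 156 cm,
    matching the validity range stated in the text.
    """
    if not 5 <= dbh_cm <= 156:
        raise ValueError("allometry is valid only for DBH of 5-156 cm")
    return 0.112 * (wood_density_g_cm3 * dbh_cm ** 2 * height_m) ** 0.916

def carbon_stock_mg_ha(tree_agb_kg, plot_area_m2=20 * 20):
    """Plot carbon stock (Mg C per ha): scale summed AGB to Mg per ha, then x0.5."""
    agb_mg_ha = sum(tree_agb_kg) / 1000 / (plot_area_m2 / 10_000)
    return 0.5 * agb_mg_ha
```

In this scheme, a plot's tree-level AGB dry values are summed, scaled from the 0.04-ha plots to Mg·ha −1, and halved to obtain Mg·C·ha −1, following the 0.5 conversion factor of [36].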
The AGB dry per plot was scaled up to Mg·ha −1 . The AGB dry (in Mg·ha −1 ) was converted to carbon stocks by multiplying with a carbon conversion factor of 0.5 [36].

Quadratic Mean Diameter and Density

The quadratic mean diameter for every plot was calculated as √(∑dᵢ²/n) (dᵢ is the DBH in cm of every tree and n refers to the total number of trees) [37]. The quadratic mean diameter is referred to as mean DBH throughout the document hereafter. Density was calculated as the total number of trees per plot, expressed per ha. The BA of each sampled tree was calculated from its DBH. IVIs were calculated from the species relative frequency (Rf), relative density (RDe), and relative dominance (RDo) [38]. The IVI for each species was calculated as the sum of Rf, RDe, and RDo. The FIVs were calculated from relative diversity (RDi), relative density (RDe), and relative dominance (RDo) according to [39]: RDi (%) = (number of species in family / total number of species) × 100 (7); RDe (%) = (number of trees in family / total number of trees) × 100 (8). The FIV for each family was eventually calculated as the sum of RDi, RDe, and RDo.

Statistical Analysis

To assess the normal distribution of the different variables, the Shapiro-Wilk test was used. The means ± standard deviations (SD) of the averages of the different variables per plot were calculated. As some data were not normally distributed, the Wilcoxon rank sum test was used for probing the statistical differences between two groups and the Kruskal-Wallis rank sum test for more than two groups. The post-hoc analysis for significant differences in means was done using Tukey's test. A significance level of 0.05 was used for all statistical tests. The statistical analysis was performed and graphs were produced using version 3.1.0 of R (R Foundation for Statistical Computing, Vienna, Austria) [40].

DBH and Height

No significant difference was recorded between the mean DBHs of the two sites (p > 0.05).
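The quadratic mean diameter and the importance value computation (IVI = Rf + RDe + RDo) can be sketched as follows. This is a minimal sketch over a hypothetical flat record format, not the authors' data layout; the FIV is computed analogously, with relative diversity RDi (a family's share of species) in place of Rf.

```python
import math
from collections import defaultdict

def quadratic_mean_diameter(dbhs_cm):
    """Mean DBH of a plot, computed as sqrt(sum(d_i^2) / n)."""
    return math.sqrt(sum(d * d for d in dbhs_cm) / len(dbhs_cm))

def importance_values(trees):
    """Importance Value Index per species: IVI = Rf + RDe + RDo.

    `trees` is a list of (species, plot_id, basal_area) records — a
    hypothetical flat layout. Rf: plot frequency of the species relative
    to the sum of all species frequencies; RDe: share of stems; RDo:
    share of basal area (each expressed in %).
    """
    plots = defaultdict(set)    # species -> plots of occurrence
    stems = defaultdict(int)    # species -> stem count
    ba = defaultdict(float)     # species -> summed basal area
    for species, plot, basal_area in trees:
        plots[species].add(plot)
        stems[species] += 1
        ba[species] += basal_area
    freq_sum = sum(len(p) for p in plots.values())
    n_total = sum(stems.values())
    ba_total = sum(ba.values())
    ivi = {}
    for species in stems:
        rf = 100 * len(plots[species]) / freq_sum
        rde = 100 * stems[species] / n_total
        rdo = 100 * ba[species] / ba_total
        ivi[species] = rf + rde + rdo
    return ivi
```

Because Rf, RDe, and RDo each sum to 100% across species, the IVIs sum to 300, which provides a convenient sanity check on the computation.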
The mean DBH, however, differed significantly amongst the vegetation types for both sites collectively (p < 0.05). The mean DBH of gallery forests showed significant variation from the other vegetation types for both sites collectively (p < 0.05; Table 1). The gallery forests recorded the highest mean DBH of 48.80 ± 16.45 cm for both sites collectively. The DBH classes for both sites showed a reverse J-shape. For Nazinga Game Ranch, the highest number of trees was recorded in the DBH class of 5 cm, forming 43.07% of the total (Figure 3). Together, the 5 cm and 10 cm DBH classes formed 74.61% of the total stems for Nazinga Game Ranch. Similarly, for Bontioli Nature Reserve, the 5 cm DBH class formed 36.17% of the total, and the 5 cm and 10 cm classes combined to form 68.08% of the total sampled stems (Figure 3). Similarly, no significant difference was recorded between the mean heights of the two sites (p > 0.05). A significant difference was recorded amongst the vegetation types for both sites collectively (p < 0.05; Table 1). The mean heights of gallery forests and woodland savannas differed significantly from the tree and shrub savannas (p < 0.05; Table 1). The largest value of 9.47 ± 1.38 m for mean height was recorded for the gallery forests for both sites collectively (Table 1).

Density and BA

No significant difference was recorded between the mean densities of the two sites (p > 0.05). Variation in the mean densities of all vegetation types for both sites collectively was, however, recorded (p < 0.05). The highest mean density of 305 ± 10.70 trees·ha −1 was recorded for the woodland savannas for both sites collectively (Table 1). Shrub savannas with a mean density of 27.5 ± 2.48 trees·ha −1 were significantly different from the other vegetation types (p < 0.05; Table 1).
The mean densities of woodland savannas and gallery forests were also significantly different from each other (p < 0.05; Table 1). There was no significant difference between the mean BA of the two sites either (p > 0.05). A significant difference was recorded between the vegetation types for both sites collectively (p < 0.05; Table 1). The mean BA of gallery forests was significantly different from the other vegetation types for both sites collectively (p < 0.05; Table 1). The highest mean BA of 4.67 ± 3.73 m 2 ·ha −1 was recorded for the gallery forests for both sites collectively (Table 1).

AGB dry

No significant difference was recorded between the mean AGB dry of the two sites (p > 0.05). A significant difference in mean AGB dry was recorded amongst the vegetation types for both sites collectively (p < 0.05; Table 1).
The mean AGB dry for gallery forests was significantly different from the other vegetation types for both sites collectively (p < 0.05; Table 1). The overall mean AGB dry for both sites collectively was 6.70 ± 10.02 Mg·ha −1 (Table 1). Amongst the vegetation types for both sites collectively, the highest mean AGB dry was recorded for gallery forests, 18.77 ± 13.80 Mg·ha −1 (Table 1).

Carbon Stocks

There was no significant difference between the mean carbon stocks of the two sites (p > 0.05). A significant difference, however, was recorded among the vegetation types collectively for both sites (p < 0.05; Table 1). The mean carbon stock of gallery forests was significantly different from the other vegetation types for both sites collectively (p < 0.05; Table 1). The overall mean carbon stock for both sites collectively was recorded as 3.41 ± 4.98 Mg·C·ha −1 (Table 1). Gallery forests also showed the highest mean carbon stock, 9.38 ± 6.90 Mg·C·ha −1 (Table 1).

DBH and DBH Class Distribution

The mean DBH for gallery forests in this study was not consistent with [31], who reported a mean DBH of 15 ± 3.84 cm. This difference could be attributed to their low sampling intensity. Similarly, the result of this study was also higher than the mean DBH of 15.3 ± 3.9 cm reported by [41] for the unprotected site Yale, in southern Burkina Faso. The difference could be due to the higher DBHs of gallery forests in this study. The distribution of DBH classes, representing the horizontal structure, showed a reverse J-shape for both sites in this study. The reverse J-shape is typical for tropical and sub-tropical forests [42]. The reverse J-shape is also an indication of good regeneration of the woody vegetation community [43]. The highest number of trees was recorded in the 5 cm DBH class for both sites in our study. The density decreased with increasing DBH classes. Savadogo, P. [25] also emphasized that density decreases with increasing DBH.
Stem Densities, Tree Heights, and BA

Savadogo, P., et al. [31] reported gallery forests as having the highest mean stem density, which is contrary to the result of this study. Savadogo, P. [25] reported a mean density of 331 trees·ha −1 for the Bontioli Nature Reserve, which is close to the mean density recorded for the Bontioli Nature Reserve in this study. The overall mean density in this study is also close to the mean density of 703 ± 49 trees·ha −1 reported by [41]. The height measurements were consistent with other studies [25,31,43]; however, [25] emphasized that tree heights are leveled down by anthropogenic pressures such as bushfires and wood cutting. High values of mean BA for gallery forests were also confirmed by [31,43].

AGB dry and Carbon Stocks

This study revealed that the mean AGB dry and carbon stocks of Nazinga Game Ranch and Bontioli Nature Reserve were not significantly different. This can be attributed to the mean DBHs and mean heights, which were also not significantly different between the two sites. Overall, the similarity between the two sites, as shown statistically, could be attributed to the similar environmental conditions. To the authors' knowledge, there are no AGB dry and carbon stock estimates available for these two sites; previous estimates would have helped in a comparison with the results of this study. Lewis, S.L., et al. [44] also emphasized that only very few carbon stock estimates based on field inventories are available for West Africa. Estimates for carbon stocks have been provided by [8] for all of Burkina Faso, but these data were not comparable with this study because of the national-level focus on different land uses categorized according to [45]. In this study, the highest overall mean carbon stock was recorded for gallery forests. The mean carbon stock of gallery forests was also significantly different from the other vegetation types.
This significant difference could be attributed to their mean DBH, which was also the highest amongst the vegetation types. The gallery forests were mainly comprised of Mitragyna inermis. This species was the second most abundant amongst all species recorded collectively at both sites. Mitragyna inermis was mainly found in clumps and mostly comprised multi-stemmed trees. It can be assumed that the calculation of the DBH of Mitragyna inermis through Equation (1) used in this study may have resulted in an overestimation of DBHs for gallery forests and hence in the overall mean carbon stock estimation. There was no statistical difference among the remaining vegetation types: the mean DBH of woodland savannas was not significantly different from the tree and shrub savannas either. However, the density of woodland savannas was higher than that of the other two. Sawadogo, L., et al. [46] reported AGB dry, through destructive sampling, for Anogeissus leiocarpa, Combretum glutinosum, Detarium microcarpum, Entada africana, and Piliostigma thonningii as 320.95 kg, 42.26 kg, 61.74 kg, 32.16 kg, and 29.42 kg, respectively, for the sites of Laba and Tiogo State Forests, located in the transition from the north to south Sudanian zone in Burkina Faso. The estimates of AGB dry in this study were only consistent with [46] for Entada africana (30.94 kg) in the Bontioli Nature Reserve. Estimates were not consistent in the case of Anogeissus leiocarpa (168.34 kg and 508.64 kg for Nazinga Game Ranch and Bontioli Nature Reserve, respectively); Combretum glutinosum (19.03 kg) for Bontioli Nature Reserve; Detarium microcarpum (37.87 kg) for Nazinga Game Ranch; and Piliostigma thonningii (14.21 kg and 11.61 kg for Nazinga Game Ranch and Bontioli Nature Reserve, respectively).
Inconsistencies between the AGB dry estimates of the two studies could be due to the variability of basic wood density among individuals of the same species at different geographical locations and ages [47]. Karlson, M., et al. [48] reported an AGB dry of 15.96 Mg·ha −1 for Saponé, central Burkina Faso. They included open woodlands, agroforestry parklands, small-scale tree plantations, and dense forest patches in their study. These stands are often characterized by trees of bigger sizes, which could be the reason why higher AGB dry estimates were recorded for them in comparison to this study. The mean carbon stock for both sites collectively in this study was higher than that of [49], who reported 1.10 ± 0.32 Mg·C·ha −1 for natural vegetation with high degradation in Bale province, south Sudanian zone, western Burkina Faso, where they used the same generalized allometric equation for the estimation of AGB dry given by [34], Equation (2), as was used in this study.

Floristics

The results of this study, with Combretaceae and Rubiaceae as the most abundant families for both sites collectively, are consistent with [25,31,41,50]. The most common families in this study were Combretaceae, Rubiaceae, and Fabaceae-Caesalpiniaceae, which portrays a typical taxonomic pattern of savanna-woodland mosaics in Africa and of the northern Sudanian zone in Burkina Faso [51]. Savadogo, P., et al. [31] reported the highest IVI of 214.50 for Mitragyna inermis, which is also consistent with the result for Nazinga Game Ranch in this study. The high IVI for Mitragyna inermis at Nazinga Game Ranch in this study may also suggest that the gallery forests are less affected by human disturbances [50,52]. Karlson, M., et al. [48] reported 37 species for their study site at Saponé, central Burkina Faso, which is close to the 29 species recorded collectively for both sites in this study.
Species such as Detarium microcarpum and Lannea microcarpa were amongst the rarest recorded for both sites collectively in this study. This could be attributed to the preferences of local inhabitants at both sites for these two species for the associated multiple benefits which can be derived from them [25]. The highest number of 140 individuals was recorded for Anogeissus leiocarpa at Nazinga Game Ranch against a contrasting 12 trees at Bontioli Nature Reserve. This drastic difference could be due to the proximity of this species to the settlements in Bontioli Nature Reserve. Anogeissus leiocarpa is known for its medicinal qualities and hence could be the subject of prodigious cutting in Bontioli Nature Reserve [53].

Conclusions

The highest mean AGB dry and the highest mean carbon stock were recorded for Bontioli Nature Reserve; however, statistically there was no significant difference recorded between the two investigated sites for these two variables. A significant difference was recorded between the vegetation types collectively for both sites, where the highest mean AGB dry and the highest mean carbon stock were recorded for gallery forests. The highest FIV was recorded for Combretaceae at both sites. The highest IVIs were recorded for Anogeissus leiocarpa and Mitragyna inermis at Nazinga Game Ranch and Bontioli Nature Reserve, respectively. This study contributes to addressing the knowledge gap on carbon stocks of protected savannas in West Africa. To the authors' knowledge, it was a first attempt to estimate the AGB dry and carbon stocks of different vegetation types at the two protected areas of Nazinga Game Ranch and Bontioli Nature Reserve. The results of this study can therefore serve as a benchmark for future studies and as baselines for possible future payment-for-environmental-services initiatives and REDD+ programmes, if initiated for these areas.
This study also provides insights that can be useful for areas with similar environmental settings. It is suggested that land use and land cover change analysis and carbon inventories over different time periods should be conducted at these two sites in the future, as they can also provide a good picture of deforestation and degradation at these two sites. [28], for instance, have reported losses of vegetation cover over the past 29 years as a result of agriculture expansion at Bontioli Nature Reserve through land use and land cover change analysis using remote sensing and questionnaire surveys combined. A similar study for Nazinga Game Ranch, where there are also reports of high dependency of local communities on the vegetation [22], would also be helpful for the identification of drivers responsible for deforestation and degradation.

Acknowledgments: The field visit for this study was possible because of the postgraduate scholarship by the DAAD (German Academic Exchange Service). The logistic support was provided by the West African Science Service Centre on Climate Change and Adapted Land Use (WASCAL). The research was conducted as part of the WASCAL Project, funded by the German Federal Ministry of Education and Research (BMBF). We are thankful to Kangbéni Dimobe for identification of plant species. The authors are also thankful to the research assistants Christoph Höpel and Herman Hien for their support during the data collection. Last but not least, the authors are also grateful to Kangbéni Dimobe for the GIS maps with the location of sampling plots and for his generous help in locating these plots during the field visit. All authors are also grateful to the editors and two anonymous reviewers; their valuable comments significantly increased the quality of the manuscript.

Author Contributions: Mohammad Qasim conducted the field work and contributed to the analysis of the data as well as in writing the first draft and revising the manuscript.
Stefan Porembski co-developed the study design, supervised the floristic data analysis, and contributed to revising the manuscript. Dietmar Sattler developed and supervised the biomass data analysis and contributed to writing the first draft and revising the manuscript. Katharina Stein co-developed the study design, supervised the field work on site, and contributed to revising the manuscript. Adjima Thiombiano provided floristic data and contributed to the overall literature review and to revising the manuscript. Andre Lindner developed the study design and contributed to writing the first draft and revising the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table A1. Species represents the tree species at both sites of Nazinga Game Ranch and Bontioli Nature Reserve collectively. N = number of trees for each species. Relative abundance (%) (percentage of tree species individuals relative to the total number of trees). Wood density represents the published values of wood densities at species and generic levels for all the trees at both sites.
Biological Modification of Trichothecene Mycotoxins: Acetylation and Deacetylation of Deoxynivalenols by Fusarium spp.

Attempts were made to elucidate the acetyl transformation of the novel trichothecene mycotoxins 3α,7α,15-trihydroxy-12,13-epoxytrichothec-9-en-8-one (deoxynivalenol) and its derivatives by trichothecene-producing strains of Fusarium nivale, F. roseum, and F. solani. In the peptone-supplemented Czapek-Dox medium, F. roseum converted 3α-acetoxy-7α,15-dihydroxy-12,13-epoxytrichothec-9-en-8-one (3-acetyldeoxynivalenol) to deoxynivalenol. 3-Acetyldeoxynivalenol was also deacetylated by intact mycelia of the three strains in sugar-free Czapek-Dox medium. The growing F. nivale acetylated deoxynivalenol to afford a small amount of 3-acetyldeoxynivalenol. 3α,7α,15-Triacetoxy-12,13-epoxytrichothec-9-en-8-one (deoxynivalenol triacetate) was transformed by the intact mycelium of F. solani into 7α,15-diacetoxy-3α-hydroxy-12,13-epoxytrichothec-9-en-8-one (7,15-diacetyldeoxynivalenol), which was then deacetylated to give 7α-acetoxy-3α,15-dihydroxy-12,13-epoxytrichothec-9-en-8-one (7-acetyldeoxynivalenol). It was noted that the ester at C-7 was not hydrolyzed by the fungal mycelium.

Within the past several years, a group of structurally related compounds, called trichothecenes, has been isolated from several different species of toxic fungi: Trichothecium, Cephalosporium, Myrothecium, Fusarium, and Trichoderma. The individual metabolites show evident differences in the modification of the tetracyclic 12,13-epoxytrichothec-9-ene nucleus, such as oxidation of some carbon atoms to afford ketones or alcohols, and esterification of the resultant alcohols. It was suggested that these structural differences affect the selectivity and specificity of biological activity, including mammalian toxicity, antibiotic activity, insecticidal activity, cytotoxicity, and phytotoxicity (1,3).
However, very little is known concerning the biological transformation of trichothecenes and its significance in biological activity. Horvath and Varga (4) reported evidence that the isocrotonic ester group of trichothecin and crotocin was enzymatically hydrolyzed by Penicillium chrysogenum Thom. Recently, Ellison and Kotsonis (2) reported that incubation of T-2 toxin with supernatant fractions of both human and bovine liver homogenates resulted in the conversion to HT-2 toxin. In the present paper, the authors attempted to elucidate the mode of microbial transformation of the novel trichothecenes, deoxynivalenol and its derivatives (12), by the trichothecene-producing strains of Fusarium nivale, F. roseum, and F. solani. Trichothecene determination. Physicochemical properties of trichothecenes were determined with the following apparatus: melting point, Yanagimoto melting point apparatus (model MP-S2); infrared spectrum, Hitachi model EPI-G2 double-beam infrared spectrophotometer; ultraviolet absorption spectrum, Hitachi model 124 recording spectrophotometer; proton magnetic resonance spectrum, Hitachi model R-22 high-resolution nuclear magnetic resonance spectrometer; mass spectrum, JEOL model JMS-07 mass spectrometer; thin-layer chromatography. At the end of the incubation period, the mycelia were filtered off. The whole filtrate was extracted three times with equal amounts of ethyl acetate. The extract was dried over anhydrous sodium sulfate and evaporated to dryness under vacuum. The transformation products were separated from the crude extract by column chromatography on silica gel using chloroform-acetone (3:2), recrystallized, and identified from their spectroscopic properties as shown in the following sections. For quantitative estimation of trichothecenes, the crude extracts were reacted with a trimethylsilylating reagent and gas chromatographed.
The column was a coil (1 m by 3 mm) of stainless-steel tubing packed with 3% of OV-17 on 60- to 80-mesh Chromosorb W. The operating conditions were: column temperature, 240 C; flow rate of nitrogen, 75 ml/min; hydrogen, 0.6 kg/cm2; and air, 1.2 kg/cm2. Results were expressed as percentages of the total peak heights of trichothecene derivatives. RESULTS Transformation of compound II by F. roseum in the culture broth. To determine the time course of compound II production and its transformation, F. roseum (strain 117) was surface cultured on peptone-supplemented Czapek-Dox medium at 25 C. An entire flask containing 500 ml of the culture broth was harvested at desired intervals to estimate the concentration of trichothecenes, dry weight of the fungal mat, and pH value of the filtrate. Maximal growth of the fungus was attained after 14 days, followed by a rapid decrease, and compound I accumulated in the filtrate. The production of compound II and its disappearance were coincident with the fungal growth and the accumulation of compound I, respectively (Table 1). Acetylation of compound I by mycelium of F. nivale. Compound I was converted with growing F. nivale into a compound having a higher retention time (tR, 1.6 min) on GLC and a larger Rf value on TLC than those of the substrate (Fig. 2). From its behavior on the chromatograms, the transformation product was identified as 3-acetyldeoxynivalenol: 3α-acetoxy-7α,15-dihydroxy-12,13-epoxytrichothec-9-en-8-one (compound II). This reaction occurred within a 12-h incubation period, and the rate of transformation in the filtrate was approximately 5% after 24 h. When compound I was used as a substrate for growing mycelium of F. roseum or F. solani, little if any transformation product was detected in the filtrate on TLC and GLC. Deacetylation of compound II by mycelia of Fusarium spp. When compound II was incubated with growing mycelium of F.
roseum, 2% of the substrate was deacetylated after 24 h to give deoxynivalenol in the filtrate. No other product was detected on either TLC or GLC. Ten percent of compound II was also transformed into deoxynivalenol after 24 h by the mycelium of F. nivale. On the other hand, compound II was quantitatively converted with growing F. solani into deoxynivalenol within a 12-h incubation period. Although the transformation patterns of compound II by the mycelia of Fusarium spp. were similar, the substrate was deacetylated at an extensively higher rate by the mycelium of F. solani (Fig. 3). The deacetylation product (I) was purified by repeated crystallization from ethyl acetate-petroleum ether; the mp found was 151 to 153 C. A mixed melting point with the authentic sample showed no depression, and the infrared and proton magnetic resonance spectra of the sample were identical with those of the corresponding authentic standard. Deacetylation of compound III by mycelium of F. solani. Deoxynivalenol triacetate was incubated with the mycelium of F. solani, and transformation products were periodically detected on TLC (solvent system: chloroform-methanol, 97:3) or on GLC as trimethylsilylated derivatives. DISCUSSION Deoxynivalenol, lacking a C-4 hydroxy group, is a novel mycotoxin compared to the known trichothecenes, all of which have this functional group. The toxin was isolated from barley grains naturally infected with Fusarium spp. (5). Recently, it was also isolated from infected corn by a Northern Regional Research Laboratory group (11). In the synthetic medium of F. roseum, the toxin was converted from its monoacetate (3-acetyldeoxynivalenol), accumulated by the mycelium in the phase of linear growth (Fig. 1). The monoacetate is more toxic to mice than deoxynivalenol, but the latter shows higher cytotoxicity or vomiting toxicity than the monoacetate (13).
These facts led to the suggestion that the deoxynivalenol found in field crops of both Japan (5) and northwest Ohio (11) was transformed from the monoacetate by biological and/or nonbiological hydrolysis during the growth and storage of the cereal grains. By incubating deoxynivalenol and its derivatives with F. nivale or F. solani, which produce trichothecene mycotoxins having a C-4 hydroxy group, oxidation of the trichothecene nucleus, including the conversion of deoxynivalenol into nivalenol or cleavage of the ethylene oxide ring, was not detected. However, F. nivale gave a deoxynivalenol monoacetate by acetylation of the substrate (Fig. 2). Since the acetylation is an endergonic process, the reaction might proceed more efficiently by adding a cofactor such as adenosine 5'-triphosphate or acetyl coenzyme A to the reaction system. The hydrolytic deacetylation of the monoacetate by the growing mycelia proceeded readily to give deoxynivalenol, though there was an appreciable difference in the degree of reaction among the three fungal strains. Among them, the marked reactivity of F. solani was noted. No reaction occurred in the culture filtrate of F. solani or in sugar-free Czapek-Dox solution. These results suggest the participation of an intracellular enzyme in this microbial hydrolysis. The specificity of enzymatic hydrolysis by the mycelium of F. solani for deoxynivalenol triacetate is given in Fig. 4. The intact mycelium hydrolyzed the C-3 ester at a faster rate than the C-15 ester, and the C-7 ester was not eliminated at all. The instability of the C-3 ester bond, also shown in Fig. 3, leads to the assumption that the regiospecificity of the enzyme for the C-3 ester is stronger than for the other two ester bonds. From the results given above, the transformation pathway of deoxynivalenols is shown in Fig. 5. It should be noted that the ester at C-7 was not hydrolyzed by the fungal hydrolytic enzyme.
Deacetylation of trichothecene mycotoxins by mammalian tissues is being investigated.
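The quantitative GLC estimation described above expresses each derivative as a percentage of the total peak heights. A minimal sketch of that arithmetic (the peak heights below are illustrative values, not measured data from the paper):

```python
def peak_height_percentages(peak_heights):
    """Express each GLC peak height as a percentage of the total peak heights."""
    total = sum(peak_heights.values())
    return {name: 100.0 * height / total for name, height in peak_heights.items()}

# Hypothetical peak heights (arbitrary units) for three trichothecene derivatives.
heights = {"compound I": 12.0, "compound II": 30.0, "compound III": 18.0}
print(peak_height_percentages(heights))
# → {'compound I': 20.0, 'compound II': 50.0, 'compound III': 30.0}
```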
2017-11-06T21:19:40.267Z
1975-01-01T00:00:00.000
{ "year": 1975, "sha1": "42f1f2ff4869ce0138c0073015e12053913d0419", "oa_license": null, "oa_url": "https://aem.asm.org/content/aem/29/1/54.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cf99e1ef125848a93e2390b3ecdf1313a5cfe566", "s2fieldsofstudy": [ "Chemistry", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
220392007
pes2o/s2orc
v3-fos-license
Using diffusion tensor imaging to detect cortical changes in fronto-temporal dementia subtypes Fronto-temporal dementia (FTD) is a common type of presenile dementia, characterized by a heterogeneous clinical presentation that includes three main subtypes: behavioural-variant FTD, non-fluent/agrammatic variant primary progressive aphasia and semantic variant PPA. To better understand the FTD subtypes and develop more specific treatments, correct diagnosis is essential. This study aimed to test the discrimination power of a novel set of cortical Diffusion Tensor Imaging (DTI) measures on FTD subtypes. A total of 96 subjects with FTD and 84 healthy subjects (HS) were included in the study. A "selection cohort" was used to determine the set of features (measurements) and to select the "best" machine learning classifier from a range of seven main models. The selected classifier was trained on a "training cohort" and tested on a third cohort ("test cohort"). The classifier was used to assess the classification power for binary (HS vs. FTD) and multiclass (HS and FTD subtypes) classification problems. In the binary classification, one of the new DTI features obtained the highest accuracy (85%) as a single feature, and when it was combined with other DTI features and two other common clinical measures (grey matter fraction and MMSE), the combination obtained an accuracy of 88%. The new DTI features can distinguish between HS and FTD subgroups with an accuracy of 76%. These results suggest that DTI measures could support differential diagnosis in a clinical setting and potentially improve the efficacy of new innovative drug treatments through effective patient selection, stratification and measurement of outcomes. www.nature.com/scientificreports/ The clinical presentation of FTD includes a behavioural variant (BV), semantic variant (SV) and primary progressive aphasias. A correct diagnosis is important to better understand the different subtypes and to develop more personalized treatments.
Neuropathologically, patients with FTD show relatively selective frontal and temporal lobar degeneration (FTLD) characterized by atrophy, gliosis in atrophic cortices, and protein deposition forming distinct inclusion bodies in brain cells 2. Over the last decade, the continuing advances in neuroimaging have provided new opportunities to study the pathophysiological mechanisms of neurological diseases and to help in diagnosis. Structural MRI and CT show patterns of atrophy mainly in the fronto-temporal regions. Fluorodeoxyglucose positron emission tomography (FDG-PET), functional MRI, and single-photon-emission CT likewise show disproportionate hypoperfusion and hypometabolism in these regions 3. Some studies have suggested that Tau imaging is a promising method with potential for further differentiating between Alzheimer's disease, non-Alzheimer's tauopathies, and tau-negative dementias 3,4, although results are still conflicting 5,6. Research in molecular PET imaging is very active, not only because of the specificity it allows for differentiating fronto-temporal dementia from Alzheimer's disease, but also because of its potential for further differentiating among frontotemporal lobar degeneration syndromes. However, the promising tau tracers require further development of novel compounds to detect different tau isoforms 7. Detection of proteins using cerebrospinal fluid (CSF) biomarkers, instead of imaging methods, has the potential to aid differential diagnosis between AD and FTLD, although it is an invasive method that still needs further investigation 8. An alternative to protein quantification is to further investigate the anatomy. While FTD is characterized by cortical atrophy, atrophy is a relatively gross effect in neuropathological terms. Previous studies have suggested that the cellular organization in the cerebral cortex could be used as a potential biomarker of cortical damage in dementia 9,10.
For example, histological studies 9,10 showed that changes in cortical architecture, caused by neurodegenerative processes and protein deposition, produced alterations in cortical geometrical properties, including disruption of minicolumnar cellular organisation. Minicolumn degeneration varies between brain regions, reflecting the typical pattern of tau tangle accumulation 11. These differences between brain regions suggest that microstructural changes in cortical grey matter could be sensitive for differentiating between dementia variants. Some of these cytoarchitectural changes have been found to be correlated with measurements from analysis of neuroimaging data based on Diffusion Tensor Imaging (DTI) in the cortical grey matter 12. DTI can show widespread white-matter degeneration in frontotemporal dementia, exceeding that seen in Alzheimer's disease 13, but until now relatively little attention has been paid to the use of DTI to examine diffusion properties in grey matter structures. The sensitivity of DTI to changes in microstructural properties suggests that DTI may be a useful modality to detect correlates of, or perhaps even precursors of, macroscopic atrophy. In this study, we aimed to test some novel Diffusion Tensor Imaging (DTI) measures that had been previously validated in an ex-vivo comparison with post-mortem histology 12. In the current study those measures were applied to in vivo scans of FTD patients, based on the hypothesis that they may reflect cytoarchitectural changes in the cortex of FTD patients compared with a control group. We also looked for differences in the pattern of cortical diffusivity changes between FTD subtypes. Machine learning has been used previously to try to improve dementia diagnosis 14,15. Therefore, we investigated the use of a machine learning approach to test the discrimination power of these new DTI measures. Method Participants.
A total of 96 subjects with probable FTD and 84 healthy subjects (HS) were included in the study. The frontotemporal lobar degeneration neuroimaging initiative (FTLDNI) dataset was used to select subjects' scans for the "selection cohort" and "test cohort" (Table 1). FTLDNI was founded through the National Institute of Aging and started in 2010. The primary aims of FTLDNI are to identify neuroimaging modalities and methods of analysis for tracking frontotemporal lobar degeneration (FTLD) and to compare the value of neuroimaging with other biomarkers in diagnostic roles. The Principal Investigator of FTLDNI is Dr. Howard. In order to avoid potential bias due to differences in acquisition parameters for b0 and DWI images, only the subjects with a comparable acquisition protocol were selected. A balanced cohort of 30 FTD patients (10 bvFTD, 10 svPPA and 10 nfvPPA) and 30 HS was included in the "selection cohort". The remaining subjects, 42 early FTD patients (15 bvFTD, 18 svPPA and 9 nfvPPA) and 24 HS, were included in the "test cohort". The group of scans acquired in the Neuroimaging Laboratory of Santa Lucia Foundation in Rome was used as a "training cohort" (Table 1, inserted between "selection" and "test") and included 24 FTD patients (5 bvFTD, 13 svPPA, 6 nfvPPA) and 30 HS. All subjects underwent an extensive clinical and neuropsychological evaluation and an MRI scan. The diagnosis of FTD was made according to the current criteria 16,17. Patients with vascular, psychiatric or other neurological disorders were excluded. MRI data acquisition and pre-processing.
For the Selection Cohort and the Test Cohort, MR images were acquired on a 3 T Siemens Trio Tim system equipped with a 12-channel head coil at the UCSF Neuroscience Imaging Center, including the following acquisitions: (1) T1 MPRAGE (TR/TE = 2,300/2.9 ms, matrix = 240 × 256 × 160, isotropic voxels 1 mm³, slice thickness = 1 mm); (2) diffusion sequences (TR/TE = 8,200/86 ms, b factor = 2,000 s/mm², isotropic voxels 2.2 mm³); this sequence collects one image with no diffusion weighting (b0) and 64 images with the diffusion gradient applied in 64 non-collinear directions. The segmented masks obtained were used to estimate the volumes of cortical and subcortical grey matter, total white matter, brain stem, corpus callosum, left and right hippocampus, left and right thalamus, left and right caudate, left and right putamen, left and right pallidum, left and right amygdala and left and right accumbens. To account for head size, all volumes were normalised for total intracranial volume and expressed as fractions. All DTI images were processed using the FMRIB Software Library (FSL version 5.0.9, FMRIB, Oxford, UK, https://www.fmrib.ox.ac.uk/fsl/). Data were corrected for eddy currents and head movement, and the diffusion tensor model at each voxel was fitted using DTIFIT. To control for the effect of head movement 18 in DTI maps, a displacement index was calculated using an in-house script. This index measured the absolute displacement of the head from one volume to the next and was calculated as the average of the absolute values of the differentiated realignment estimates obtained from eddy correction. This value was used as a covariate in the GLM multivariate analysis. Cortical Diffusivity analysis. Cortical Diffusivity analysis was performed using a novel in-house software tool. The tool generates cortical profiles, i.e.
lines within the cortex in the vertical direction based on the columnar organisation of the cortex. Values for the diffusion-tensor-derived metrics were averaged along the cortical profiles, within the cortical grey matter 12. The metrics calculated were MD, FA and three measures relating to the principal diffusion component 12, namely: the angle between the cortical profile and the principal diffusion direction (AngleR); the principal diffusion component projected onto the plane perpendicular to the cortical profile (PerpPD, ×10⁻³ mm²/sec); and the principal diffusion component projected onto the cortical profile (ParlPD, ×10⁻³ mm²/sec). All of the cortical values were averaged to reduce the influence of noise in the DTI scans, effectively smoothing the data and ensuring only directionality with some local coherence would dominate, guarding against the influence of random deflections from the radial direction. Previous work has found that measures of the cyto- and myelo-architecture are relatively stable within a cortical subregion 19, indicating that it is valid to find an average value for that region. The whole-brain DTI maps were used to extract a single value for each cortical region segmented using the recon-all pipeline of the FreeSurfer v6.0 software package (https://surfer.nmr.mgh.harvard.edu/).
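The three profile-based measures can be illustrated with a small geometric sketch. The in-house tool is not public, so the vector arithmetic below (in particular, taking the absolute cosine because a diffusion eigenvector's sign is arbitrary) is an assumption about the definitions given above, not the authors' implementation:

```python
import numpy as np

def cortical_profile_metrics(profile_dir, principal_dir, principal_diffusivity):
    """Sketch of AngleR, PerpPD and ParlPD for one voxel.

    profile_dir: vector along the cortical profile (radial direction).
    principal_dir: principal eigenvector of the diffusion tensor.
    principal_diffusivity: its eigenvalue (x10^-3 mm^2/sec).
    """
    p = profile_dir / np.linalg.norm(profile_dir)
    d = principal_dir / np.linalg.norm(principal_dir)
    cos_theta = abs(np.dot(p, d))  # eigenvector sign is arbitrary
    angle_r = np.degrees(np.arccos(np.clip(cos_theta, 0.0, 1.0)))
    parl_pd = principal_diffusivity * cos_theta                    # component along the profile
    perp_pd = principal_diffusivity * np.sqrt(1.0 - cos_theta**2)  # component in the perpendicular plane
    return angle_r, perp_pd, parl_pd
```

For a principal direction lying in the cortical plane (perpendicular to the profile), AngleR is 90°, all of the diffusivity falls in PerpPD and none in ParlPD; for a radially oriented direction the split reverses.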
The cortical regions segmented (for each hemisphere) were: banks of the superior temporal sulcus, caudal anterior cingulate, caudal middle frontal, cuneus, entorhinal, fusiform, inferior parietal, inferior temporal, isthmus cingulate, lateral occipital, lateral orbitofrontal, lingual, medial orbitofrontal, middle temporal, parahippocampal, paracentral, pars opercularis, pars orbitalis, pars triangularis, pericalcarine, postcentral, posterior cingulate, precentral, precuneus, rostral anterior cingulate, rostral middle frontal, superior frontal, superior parietal, superior temporal, supramarginal, frontal pole, temporal pole, transverse temporal, and insula. Design and statistical analysis. In the first part of the study, we compared the cortical diffusion measurements of patient and control groups in all cohorts, separately and together. In the second part of the study, we tested the discrimination power of our new diffusion measures for classifying participants into two groups (patients and healthy subjects) and into FTD subtypes (semantic variant, svPPA; behavioural variant, bvFTD; non-fluent/agrammatic variant primary progressive aphasia, nfvPPA) using a machine learning algorithm. Statistical analyses were performed using IBM SPSS Statistics version 25 (SPSS, Chicago, IL). The multivariate General Linear Model of SPSS was used to assess the between-group differences in cortical diffusion measures and GM_fr in our cohorts, using diagnosis as a fixed factor and head movement 20, scanner and age as covariates. T-tests were also used to investigate age, education, MMSE and CDR between groups. To calculate statistical differences in gender, Chi-square analysis was used. One-way ANOVA was used to compare regional values between FTD subtypes. All statistically significant results reported remained significant after false discovery rate correction (FDR < 0.05) 21. Feature selection, classifiers and classification accuracy.
To investigate the classification power of the DTI cortical measures to distinguish between patient and control groups and between the control group and FTD subgroups (bvFTD, svPPA and nfvPPA), several steps were required: (i) feature selection; (ii) identification of the best classification model from a set of plausible models using a "selection cohort"; (iii) training of the chosen classifier using the features selected on a training sample (training cohort); (iv) application of the classifier to an independent set (test cohort) that represented unseen data and provided an unbiased test of accuracy (Fig. 1). In the binary classification all whole-brain measures were used (AngleR, PerpPD, ParlPD, MD, GM_fr and MMSE), while in the multiclass classification the large number of initial features was reduced to improve classification performance, removing irrelevant or redundant variables using principal component analysis (PCA) (SPSS Factor analysis) as a filter method on the "selection cohort". Many machine learning approaches have been trialed to classify subjects with dementia from elderly control subjects using a wide range of biomarkers [22][23][24]. In this study, a tenfold cross-validation scheme was used within the selection cohort to select the best classifier (evaluated on one fold and trained on the remainder) from a range of seven commonly used supervised classification models: K-Nearest Neighbours (KNN), Support Vector Machine (SVM), ElasticNet (EN), Logistic Regression (LR), Random Forest classifier (RF), Gaussian Naive Bayes (GNB) and Linear Discriminant Analysis (LDA). The best classifier was selected based on the majority vote from 1,000 runs of the cross-validation scheme, each using the same "best" features as calculated by principal component analysis in the selection cohort. The classifiers were used to assess the classification power for both binary (HS vs. FTD) and multiclass (HS vs. bvFTD vs. svPPA vs. nfvPPA) classification problems.
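The model-selection step above (repeated tenfold cross-validation with a majority vote over runs) can be sketched in Python with scikit-learn. The paper's analyses were done in MATLAB; here the data are synthetic, the run count is reduced, and the PCA filter and the ElasticNet model (a regression model in scikit-learn) are omitted for brevity:

```python
from collections import Counter

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the 60-subject selection cohort with 6 whole-brain features.
X, y = make_classification(n_samples=60, n_features=6, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=50, random_state=0),
    "GNB": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
}

# Each run reshuffles the tenfold split and votes for its most accurate model
# (the paper used 1,000 runs; 10 keep the sketch fast).
votes = Counter()
for run in range(10):
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=run)
    mean_acc = {name: cross_val_score(m, X, y, cv=cv).mean() for name, m in models.items()}
    votes[max(mean_acc, key=mean_acc.get)] += 1

best_classifier = votes.most_common(1)[0][0]
print(best_classifier, dict(votes))
```

The winning model would then be retrained on the training cohort and evaluated once on the held-out test cohort, mirroring steps (iii) and (iv).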
The "best" model (the one with the highest accuracy) and the features selected using the selection cohort were trained on the training cohort. The final results reported are based on the performance in the test cohort. In the binary classification all the features were used together and one at a time. All classification analyses were implemented in MATLAB 2018 (The MathWorks). The accuracy (ACC), sensitivity (SENS), specificity (SPEC), positive predictive value (PPV) and negative predictive value (NPV) were used to measure the discrimination performance. To perform a more comprehensive classification among HS and the three clinical FTD subtypes, a multiclass classification was performed. This required a sub-regional analysis of all 68 brain regions. In order to avoid over-testing, only the cortical diffusivity measure that had obtained the highest accuracy in the binary classification of global, whole-brain data was selected. This measure was then extracted from each brain region, giving 68 regional values for the multiclass classification. PCA analysis was applied to reduce the number of regional features in the selection cohort. The regional features selected, in addition to the whole-brain value, were used together in the classification. The accuracy (ACC), sensitivity (SENS), specificity (SPEC), positive predictive value (PPV) and false discovery rate (FDR) were estimated to investigate the classification performance. Finally, to investigate whether the selected regional measures used as features in the multiclass classification were consistent with the pattern of cortical damage commonly described in the literature for each subgroup, a further one-way analysis of variance (ANOVA) was used to compare group differences in those regional values. Results Participants. Table 1 summarizes the principal demographic and clinical characteristics of all subjects who fulfilled the inclusion criteria and thus entered the study.
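The binary discrimination indices named above follow directly from the four cells of a 2×2 confusion matrix; a minimal sketch (the counts are illustrative, not the study's results):

```python
def binary_discrimination_indices(tp, fp, fn, tn):
    """ACC, SENS, SPEC, PPV and NPV from the cells of a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    acc = (tp + tn) / total   # overall accuracy
    sens = tp / (tp + fn)     # sensitivity (true positive rate)
    spec = tn / (tn + fp)     # specificity (true negative rate)
    ppv = tp / (tp + fp)      # positive predictive value
    npv = tn / (tn + fn)      # negative predictive value
    return acc, sens, spec, ppv, npv

# Hypothetical counts: 35 FTD correctly flagged, 3 HS misflagged, and so on.
print(binary_discrimination_indices(tp=35, fp=3, fn=7, tn=21))
```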
In the selection cohort, no significant difference was observed between groups for age, years of formal education and gender. As expected, the t-test revealed between-group differences in MMSE scores (t(58) = 5.979; p < 0.0001) and CDR (t(58) = −6.460; p < 0.0001). In the training cohort, no significant difference was observed between groups for age, years of formal education and gender. The t-test revealed lower MMSE (t(52) = 6.620; p < 0.0001) and higher CDR scores (t(52) = −8.195; p < 0.0001) in the FTD group. In the test cohort, no significant difference was observed between groups for age, years of formal education and gender, but the FTD group showed significantly lower MMSE (t(64) = 5.016; p < 0.0001) and higher CDR scores (t(52) = −6.865; p < 0.0001). Cortical diffusion and brain volumetric measurements. Multivariate GLMs were used to test for main effects of diagnostic group, with cortical measures (MD, AngleR, PerpPD, ParlPD and GM_fr) as dependent variables, diagnostic group as the between-subjects factor (independent variable), and age and head movement as covariates. In the Selection cohort the multivariate GLM showed significant effects of diagnostic group on cortical measures (F(5,55) = 11.899; p < 0.0001). Age and head movement were not significantly associated with cortical measures and did not show interactions with diagnostic group. In the Training cohort the multivariate GLM showed significant effects of diagnostic group on cortical measures (F(5,49) = 15.369; p < 0.0001). Age and head movement were not significantly associated with cortical measures and did not show interactions with diagnostic group. In another multivariate GLM, we compared all healthy subjects and FTD patients of all cohorts, using cortical measures (MD, AngleR, PerpPD, ParlPD and GM_fr) as dependent variables, diagnosis as the independent variable, and age, movement and scanner as covariates.
Results showed significant effects of diagnostic group (F(5,175) = 40.912; p < 0.0001). The healthy subject groups from the three different cohorts were compared, using cortical measures (MD, AngleR, PerpPD, ParlPD and GM_fr) as dependent variables, cohort group as the independent variable, and age, movement and scanner as covariates. The analysis revealed a significant effect of scanner model. Feature selection and classifiers. Comparing the different classification models in the binary classification, our analysis of the selection cohort revealed that KNN was the best classifier (selected as the best in 96.6% of runs). We used KNN in both classification tasks (binary and multiclass classification). Concerning the binary diagnostic classification (HS vs. FTD), all the whole-brain features (MD, AngleR, PerpPD, ParlPD, GM_fr and MMSE) were used, together and one at a time, in the training cohort by the KNN classifier to train models, which were subsequently applied to the test cohort. The discrimination indices calculated in the test cohort were used to quantify the classification accuracy in that (test) cohort and are summarized in Table 2. The model with all features selected by PCA had the highest classification accuracy (88%). When using the features independently, AngleR was the single feature with the highest accuracy (85%). Therefore, in order to avoid over-testing of a dataset of limited size, this best feature was used as the key measure in the multiclass classification (see Fig. 1 for the analysis pathway). To perform the multiclass classification (HS vs. bvFTD vs. svPPA vs. nfvPPA), we carried out a PCA on the regional AngleR values in the selection cohort. The whole-brain AngleR value was also used as an additional feature. Table 3 shows a list of the 12 anatomical features selected (from a total of 68 regional features plus the single whole-brain feature) to perform the classification with the best classifier (KNN).
The results on the test cohort revealed a classification accuracy of 76%. The confusion matrices, PPV, FDR, TP and FN percentages are shown in Table 4. The ANOVA post-hoc comparison results are summarized in Table 5 and Fig. 2. Compared with the HS group, all the other groups showed significant differences, mainly in frontal and temporal cortical regions. Discussion In the present study, we used a new set of whole-brain DTI measures, related to cortical microstructure, and a machine learning approach to distinguish normally aged healthy subjects from subjects with FTD in two independent cohorts. We also tested the differential diagnostic performance of our DTI measures in classifying the different FTD subtypes on the basis of a set of regional cortical values. The main findings of this work are: (i) using six features (AngleR, PerpPD, ParlPD, MD, GM_fr and MMSE), the model was able to classify HS and FTD subjects with an accuracy of 88%; (ii) using one of the new cortical DTI measures (AngleR), it was possible to classify HS and FTD subjects with an accuracy of 85%; (iii) using a set of AngleR values from 12 cortical regions, it was possible to obtain a differential diagnosis for all participants (HS, svPPA, bvFTD, nfvPPA) with an accuracy of 76%. As shown in Table 2, the best HS vs. FTD classifications were obtained using the novel cortical diffusion measures (AngleR, PerpPD and ParlPD), MD, GM_fr and the MMSE score, but a good classification was obtained also using just the AngleR value. This cortical diffusion measure, with the selected classifier (KNN), obtained the best performance as a single feature, compared with the other cortical diffusion measures (PerpPD, ParlPD and MD) and with GM_fr. We compared the performance of AngleR with GM_fr (widely used as an index of severity of GM atrophy) to test the relative merits of our DTI cortical measure. Indeed, GM atrophy is well established as one of the main criteria for the diagnosis of neurodegenerative disorders.
As shown in previous studies using histology [9][10][11], minicolumnar cytoarchitectural changes can be relatively independent of grey matter volumetric changes, especially in the early stages of neurodegenerative disorders. This independence is a possible explanation for why AngleR performs better than GM_fr, as the DTI measure might be sensitive to GM microstructural changes at an earlier stage than volumetric changes. AngleR appeared to be sensitive to changes in neurodegeneration with a good accuracy. Therefore, AngleR and other cortical diffusion measures could be useful additions to the set of measures that are being tested to aid differential diagnosis and the early diagnosis of FTD. Concerning the differential diagnosis of FTD subtypes, Table 3 shows the performance of the classifier using a set of features, selected by PCA, based on a number of AngleR values from different cortical areas. More specifically, we used the whole-brain AngleR value plus 11 out of 68 regional AngleR values. Considering the small number of subjects in our cohorts, we decided not to 'over-interrogate' the data, instead focusing on the single feature that gave the best whole-brain classification power, AngleR (Table 3: accuracies for multiclass classification, HS vs. svPPA vs. bvFTD vs. nfvPPA, in the test cohort). In a larger study it could be possible to explore the sub-regional classification performance of other cortical diffusivity measures (e.g. PerpPD and ParlPD). The performance of the classifier showed that, using the selected set of features with the KNN classifier, an accuracy of 76% could be obtained for the differential diagnosis of the subjects into four different groups (HS, svPPA, bvFTD, nfvPPA).
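Per-class indices of the kind reported in Table 4 can be read off a four-class confusion matrix. The counts below are illustrative only, not the study's results:

```python
import numpy as np

labels = ["HS", "svPPA", "bvFTD", "nfvPPA"]
# Rows = true class, columns = predicted class; hypothetical counts.
cm = np.array([
    [20,  1,  2,  1],
    [ 2, 13,  1,  1],
    [ 2,  1, 10,  1],
    [ 1,  1,  1,  5],
])

overall_accuracy = np.trace(cm) / cm.sum()
for i, name in enumerate(labels):
    tp = cm[i, i]
    sens = tp / cm[i, :].sum()  # TP / (TP + FN) for this class
    ppv = tp / cm[:, i].sum()   # TP / (TP + FP) for this class
    fdr = 1.0 - ppv             # per-class false discovery rate
    print(f"{name}: SENS={sens:.2f} PPV={ppv:.2f} FDR={fdr:.2f}")
print(f"overall accuracy = {overall_accuracy:.2f}")
```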
The classifier obtained a sensitivity of 84%, revealing a relatively high power to distinguish healthy subjects from FTD patients, and is therefore encouraging if viewed in the light of the search for diagnostic screening power. However, the screening or diagnostic power of a test depends on threshold selection on the basis of a combination of sensitivity and specificity. The confusion matrices (Table 4) describe the discrimination ability of the combination of whole-brain and regional AngleR values in classifying HS and subjects with an FTD subtype. The sensitivity (SENS) for each patient group shows that the selected features were able to classify svPPA patients more accurately (76%) than patients with bvFTD (71%) and nfvPPA (60%). This difference could, in part, be due to the smaller number of samples in the Training and Test cohorts with an nfvPPA diagnosis. The cortical regions used in the multiclass classification correspond to those usually associated with FTD subtypes. To better understand the role of each regional value in the classification, we used the ANOVA post-hoc comparisons to identify the key regions for each group (Table 5). For the svPPA subtype our post-hoc comparisons showed that the main regions distinguishing svPPA and other groups were the left fusiform and entorhinal cortex, right temporal pole and right inferior temporal cortex. The left fusiform is one of the key regions involved in semantic tasks and can be particularly involved in semantic variant degeneration, similar to the right temporal pole 25, another brain region considered an important hub for semantic tasks 26. In the svPPA group we also found higher values of AngleR in the right inferior temporal 27 and the entorhinal cortex 28. The bvFTD group was characterized mainly by two cortical regions, the left caudal anterior cingulate cortex and right lingual gyrus.
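Per-class sensitivities like those quoted above (76%, 71%, 60%) are read off a confusion matrix row-wise as TP / (TP + FN). A minimal sketch follows; the counts below are invented for illustration (chosen so the recalls land near the quoted figures) and are not the paper's actual Table 4:

```python
import numpy as np

# Rows = true class, columns = predicted class (illustrative counts only).
labels = ["HS", "svPPA", "bvFTD", "nfvPPA"]
cm = np.array([
    [20,  1,  2,  1],   # HS
    [ 1, 13,  2,  1],   # svPPA
    [ 2,  2, 12,  1],   # bvFTD
    [ 1,  1,  2,  6],   # nfvPPA
])

# Sensitivity (recall) per class: diagonal count / all actual members of the class.
sensitivity = np.diag(cm) / cm.sum(axis=1)
for name, s in zip(labels, sensitivity):
    print(f"{name}: {s:.0%}")

# Overall accuracy: correctly classified (trace) / total subjects.
accuracy = np.trace(cm) / cm.sum()
print(f"accuracy: {accuracy:.0%}")
```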
As shown in previous studies, the left caudal anterior cingulate cortex is particularly involved in social-emotional functions 29,30 and is more damaged in bvFTD compared to other FTD subtypes 31. The right lingual cortex has an important role in emotional processes such as visual identification of facial expressions 32 and could be part of the neural correlates for apathy 33. The nfvPPA group was classified mainly on the basis of the AngleR values in the left pars opercularis. This region includes Broca's Area for motor language function and has a central role in distinguishing nfvPPA from other groups, consistent with previous studies 34,35. Other key regions used to classify the FTD subgroups were the right caudal and rostral middle frontal cortices. As shown by previous studies, these regions are important for executive functions 36 and are usually involved in FTD progression 37. Finally, in line with the recent literature on motor dysfunction in FTD patients 38, bilateral precentral cortex changes were found in all patient groups. The main limitation of the present study is the modest sample size of all cohorts. The small sample size could have an effect on feature selection and the classification power. Future research on a larger cohort will help to further advance and support the findings. Additional measures such as assessment of tau protein quantification using CSF or PET markers could also be useful. In conclusion, we suggest that cortical diffusion measures are promising non-invasive neuroimaging features that could help to support the diagnosis of FTD and FTD subtypes. With further validation as FTD subtype biomarkers, these cortical measurements could help to identify the characteristics of vulnerable brain regions to be targeted for new drug treatments.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2020-07-08T15:07:18.070Z
2020-07-08T00:00:00.000
{ "year": 2020, "sha1": "389cd9b282f275ec6f4b660366b0a0422d4e0248", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-68118-8.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e060d21de81065d25c9eb8994007d8b20ced8858", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234016621
pes2o/s2orc
v3-fos-license
Comparative Assessment of Vulnerability to Drought and Flood in the Lower Teesta River Basin: A SWOT Analysis

People are repeatedly confronted by natural catastrophes, such as drought and flood, in almost every year in the lower Teesta Basin area. Since the construction of two barrages on the Teesta River, at Gozaldoba and Dalia, drought and flood have occurred almost every year. The intensity and frequency of these calamities are also increasing at an alarming rate, which has caused serious damage to the livelihoods and economy of this area. The objective of this paper is to find out the drought- and flood-induced vulnerability in the study area through a Strength Weakness Opportunities Threat (SWOT) analysis. By this we can summarize the current state of a space and devise a plan for the future, one that employs the existing strengths, redresses existing weaknesses, exploits opportunities and defends against threats. The study is conducted in Charkharibari village of Tepakharibari union of Dimla upazila in Nilphamary district and Jigabari village of Tepamadhupur union of Kaunia upazila in Rangpur district, taking the locational advantage, flood and drought proneness, topographic nature and population diversity under consideration. (Original Research Article: Al-Hussain et al.; AJGR, 4(1): 20-33, 2021; Article no.AJGR.64294.) The study population is finite and the sample size was determined by using Kothari's formula. The sample size for Charkharibari and Jigabari is 200 and 85 respectively; samples were drawn through a Simple Random Sampling (SRS) procedure. The relevant data and information used in this study have been collected from both primary and secondary sources. Primary data have been collected through a questionnaire survey and a couple of Focus Group Discussions (FGDs), with a view to collecting quantitative as well as qualitative data.
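Kothari's finite-population sample-size formula mentioned above can be sketched as follows. The confidence level (z = 1.96), assumed proportion (p = 0.5) and margin of error (e = 0.05) below are common textbook defaults and the population of 500 is illustrative; the paper does not report which values the authors used:

```python
import math

def kothari_sample_size(N, z=1.96, p=0.5, e=0.05):
    """Sample size for a finite population N (Kothari's formula):
    n = z^2 * p * q * N / (e^2 * (N - 1) + z^2 * p * q), with q = 1 - p.
    """
    q = 1 - p
    n = (z**2 * p * q * N) / (e**2 * (N - 1) + z**2 * p * q)
    return math.ceil(n)

# Illustrative: a population of 500 households at 95% confidence, 5% margin of error.
print(kothari_sample_size(500))
```

The sample shrinks as the margin of error is relaxed, which is presumably how village-specific sizes such as 200 and 85 arise from different population counts and tolerances.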
Results from the Strength Weakness Opportunities Threat (SWOT) analysis show that, of the two villages, the severity of drought and flood is higher in Charkharibari village. Comparing strengths, Jigabari is ahead; comparing weaknesses, opportunities and threats, Charkharibari is ahead. Based on the analysis and the findings, it is evident that Charkharibari is more vulnerable than Jigabari. But there are more opportunities for Charkharibari than for Jigabari. However, it is evident that proper dissemination of information regarding early warning, and assistance from government as well as non-government organizations, can significantly improve the coping capacity of people.

INTRODUCTION

The Teesta River originates in the Himalayas and flows through the Indian states of Sikkim and West Bengal before entering Bangladesh, where it flows into the Brahmaputra [1]. Flowing through the length of Sikkim, the Teesta River is considered to be the lifeline of the state. The Teesta valley in Sikkim is rich in biodiversity, and the river provides livelihoods for the residents along its entire length of 393 km (245 miles) (Mullick et al. 2011). The Teesta River originates from the Pahunri (or Teesta Kangse) glacier above 7,068 meters (23,189 ft.), and flows southward through gorges and rapids in the Sikkim Himalaya [2]. It is fed by rivulets arising in the Thangu, Yumthang and Donkha mountain ranges. The river then flows past the town of Rangpo, where the Rangpo River joins, and where it forms the border between Sikkim and West Bengal up to Teesta Bazaar. Just before the Teesta Bridge, where the roads from Kalimpong and Darjeeling join, the river is met by its main tributary, the Rangeet River [3]. At this point, it changes course southwards, flowing into West Bengal. The river hits the plains at Sevoke, 22 kilometers (14 mi) northeast of Siliguri, where it is spanned by the Coronation Bridge linking the northeast states to the rest of India [4].
The river then merges with the Brahmaputra after it bifurcates the city of Jalpaiguri, flows just touching Cooch Behar district at Mekhliganj, and moves to Fulchori in Bangladesh [5].

Drought in the Teesta River Basin

The Teesta has been drying up at different points during the dry season, threatening the Boro cultivation in six northern districts. The once mighty Teesta is now bereft of water following the construction of a barrage upstream at the Gojoldoba point in Jalpaiguri in the Indian state of West Bengal. The farmers in Nilphamary, Lalmonirhat, Gaibandha, Rangpur, Dinajpur and Bogra are worried over the bleak prospect of getting the required quantum of water from the Teesta for the irrigation of Boro fields [6]. The construction of the barrage on this river across the border to divert its flow of water has badly affected the efficacy of the Teesta Barrage Project. According to the Water Development Board, Bangladesh got only about two per cent of the required quantum of water from the border last year [7]. The release of such a low quantum of water was adversely affecting navigation, irrigation, fishery and ecology of the lower Teesta riparian area of Bangladesh [8]. On the other hand, there should be 10,000 cusecs of water to bring an estimated 111,000 hectares under the Rabi crop program, but only 1,000 to 1,200 cusecs are now available upstream of the Teesta Barrage. The Indian authorities are reportedly withdrawing all the water from the rivers Teesta and Mohananda through their Gojoldoba and Mohananda Barrages upstream. It is found that the average lowest discharge of the Teesta was above 4,000 cubic meters/sec before the construction of the two barrages, one at Doani in Bangladesh and the other at Gojoldoba in West Bengal. But after the construction of the two barrages, the lowest discharge drastically reduced to 529 cum/sec in 2000, and just five years later, in 2005, it came down to just 8 cum/sec [9].
Hence, no further explanation is required as to what is going to happen to the fate of the Teesta in the near future. On the other hand, in the Indian part, the mean annual discharge of the Teesta at Anderson Bridge was about 580 cum/sec a decade back, and it declines to 90 cum/sec in the lean months. The peak discharge may be as much as 4,000-5,000 cum/sec. It was estimated that the peak discharge of the river at Jalpaiguri during the devastating flood of 1968 was 19,800 cum/sec [8]. The sediment load in the river increases with high monsoon discharge. It was observed that 72 per cent of the suspended load is transported between July and August, when the bulk of the discharge flows through the river. These things altogether create a drought phenomenon in the Teesta River Basin every year [8]. Records show that 19 drought periods occurred in Bangladesh between 1960 and 1991, and about 12 of them were in the Teesta River Basin area. This means a drought occurred every 1.6 years. In the period between 1985 and 1998 the temperatures in Bangladesh increased by 1 degree Celsius in the month of May and 0.5 degrees Celsius in the month of November. This change in temperature is relatively high compared to the IPCC projection of 0.2 degrees Celsius per decade [10]. Despite this increased warming in Bangladesh, extreme lower temperatures have been observed, e.g. the lowest winter temperature (5°C) in 38 years was recorded in 2007. In the last 3-4 decades, as climate change began to be observed in the North Western region of Bangladesh, the situation has progressively got worse [11,12]. Surface water has disappeared from ponds and canals, and even major rivers have reduced water volume. Deep tube wells and shallow machine wells used for irrigation, and the tube wells used for domestic needs, have been deepened with time as the ground water level continues to go down.
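The quoted figure of a drought "every 1.6 years" follows from dividing the observation window by the number of recorded events; a sketch, assuming the 31-year span 1960-1991:

```python
events = 19                    # drought periods recorded in Bangladesh, 1960-1991
span_years = 1991 - 1960       # length of the observation window
recurrence = span_years / events
print(f"average recurrence interval: {recurrence:.1f} years")
```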
Whereas the North Western region had become a food surplus area after the introduction of deep well water for irrigation and the development of the Teesta Barrage Project (TBP), such gains are being lost due to inadequate water [13].

Flood in the Teesta River Basin

The Teesta, which once used to hold water throughout the year, now dries up just after the monsoon. Numerous chars and shoals have emerged on the riverbed. The discharge capacity of the Teesta has been drastically reduced due to the withdrawal of water and the discharge of heavy silts from the upper catchments. A series of dams and barrages erected over the vibrant river are virtually causing its death. The shrinkage of the river has been causing heavy erosion almost throughout the year, displacing and making destitute hundreds of people every year. It seems certain that the dynamic equilibrium of the river will be impaired with the construction of a series of dams, and the sediment load will be trapped within the reservoirs, reducing their capacity. This, in turn, could compel dam managers to release water during heavy rainfall, causing sudden flash floods downstream [8]. The most common water-related natural hazard in a deltaic floodplain such as the Teesta River Basin is flood. Flooding in the Teesta River Basin is the result of a complex series of factors. These include a huge inflow of water from upstream catchment areas coinciding with heavy monsoon rainfall in the country, a low floodplain gradient, and congested drainage channels. Different combinations of these various factors give rise to different types of flooding [14]. Three main types of natural floods occur in the Teesta River Basin: flash floods, river floods, and rainwater floods. Flash floods take place suddenly and last for a few hours to a couple of days. Run-off during exceptionally heavy rainfall occurring in neighboring upland areas is responsible for flash floods.
Such floods occur as waters from the hilly upstream rush to the plains with high velocity, mauling standing crops and destroying physical infrastructure [15]. Rainwater floods are caused by heavy rainfall occurring over the Teesta River Flood Plain (TRF). Rainwater flooding is characteristic of meander floodplains, major floodplain basins, and old piedmont and estuarine floodplains. Heavy pre-monsoon rainfall (April-May) causes local run-off to accumulate in floodplain depressions. Later (June-August), local rainwater is increasingly accumulated on the land by the rising water levels in adjoining rivers. Thus, the extent and depth of rainwater flooding vary within the rainy season and from year to year [15]. Normal river floods generally occur during the monsoon. A monsoon is traditionally a seasonal reversing wind accompanied by corresponding changes in precipitation [10], but the term is now used to describe seasonal changes in atmospheric circulation and precipitation associated with the asymmetric heating of land and sea. River floods result from snow-melt in the high Himalayas and heavy monsoon rainfall over the Himalayas, the Assam Hills, and the Tripura Hills outside Bangladesh. River floods extend beyond the active floodplains and damage crops in parts of the adjoining meander floodplains, mainly alongside distributary channels [16,17]. The timing of the flood (whether early or late) and sometimes the duration of flooding are as important determinants of crop damage as is the absolute height reached by a particular flood. Filled channel deposits reduce the drainage capacity of minor rivers, road and railway bridges and culverts, as well as irrigation and drainage canals [15].

Vulnerability to Flood and Drought

Vulnerability is defined as the susceptibility to harm. In fact, it is the inability of a system to withstand the perturbations of external stressors.
Similarly, social vulnerability includes the susceptibility of social groups or society to potential losses from extreme events, and the ability to absorb and withstand impacts. Natural hazards have differential impacts on different groups in society, and a disaster can only take place when losses exceed the capacity of the population to resist and recover. Besides, it also depends on where people reside and what sort of resources they have to cope with [18]. In this regard, floods and droughts are recurrent external stressors in the lower Teesta river basin. Therefore, recurrent flood and drought significantly reduce the inhabitants' ability to withstand the adverse impacts of floods and droughts in the lower Teesta river basin area of Bangladesh.

OBJECTIVES

The objective of this research is to conduct a comparative vulnerability study of two flood- and drought-prone villages of the lower Teesta river basin of Bangladesh through SWOT analysis.

RISK ASSESSMENT FROM SWOT ANALYSIS

SWOT stands for strengths, weaknesses, opportunities and threats. It is a way of summarizing the current state of a space and helping to devise a plan for the future, one that employs the existing strengths, redresses existing weaknesses, exploits opportunities and defends against threats. The cross-tabulated SWOT matrix poses one guiding question per cell:

- Strengths x Opportunities: How do I use these strengths to take advantage of these opportunities?
- Weaknesses x Opportunities: How do I overcome the weaknesses that prevent me from taking advantage of these opportunities?
- Strengths x Threats: How do I use my strengths to reduce the impact of threats?
- Weaknesses x Threats: How do I address the weaknesses that will make these threats a reality?

SELECTION AND LOCATION OF THE STUDY AREA

The study area has been selected purposively, taking the locational advantage, flood and drought proneness, topographic nature and population diversity under consideration.
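The four cross-tabulated SWOT strategy questions form a simple 2x2 structure that can be represented as a lookup keyed by (internal factor, external factor) pairs; a sketch of the matrix structure only:

```python
# (internal factor, external factor) -> guiding strategy question
swot_matrix = {
    ("strengths", "opportunities"):
        "How do I use these strengths to take advantage of these opportunities?",
    ("weaknesses", "opportunities"):
        "How do I overcome the weaknesses that prevent me from taking "
        "advantage of these opportunities?",
    ("strengths", "threats"):
        "How do I use my strengths to reduce the impact of threats?",
    ("weaknesses", "threats"):
        "How do I address the weaknesses that will make these threats a reality?",
}

for (internal, external), question in swot_matrix.items():
    print(f"{internal} x {external}: {question}")
```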
The study is conducted in Charkharibari village of Tepakharibari union of Dimla upazila in Nilphamary district and Jigabari village of Tepamadhupur union of Kaunia upazila in Rangpur district (Fig. 2). Charkharibari village is situated on the right bank of the upstream Teesta River; it lies between 26°12'50'' and 26°14'30'' north latitude and between 88°59'10'' and 89°1'40'' east longitude (Map 1). Jigabari village is situated on the left bank of the upstream Teesta River; it lies between 25°42'30'' and 25°43'40'' north latitude and between 89°28' and 89°29'10'' east longitude (Map 1). Both districts are in the lower Teesta Basin area. Physically these two districts are located in two natural divisions: plain land and low land.

SWOT ANALYSIS OF CHARKHARIBARI VILLAGE

The SWOT analysis and summary of Charkharibari village is presented in Table 2.

Analysis Summary of Charkharibari Village

Charkharibari is situated on a charland (island) located on the right bank of the Teesta river in Bangladesh. From the SWOT matrix (Table 2), there are a huge number of weaknesses of this area, and they are listed as follows. The location of this village is unstable. Though this locality is situated on a more than 40-year-old charland, the charland is still eroding in different places.

(Fig. 3. Location map of study area)

There is no road network in this area. The only communication system with the inland is by boat, which is also very time consuming due to the drying up of the river bed. There is no permanent educational institution here. There is only one primary school in this village, but right now it has no permanent structure. As a result, the literacy rate is very low. There is also no permanent health center in this village. River bank erosion is very active in this village. Presently, it is eroding in 6 different places along the bank line. Almost 85% of the area of this village has no flood protection dam, and there is no flood shelter center.
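The village coordinates above are given in degrees-minutes-seconds; converting them to decimal degrees (e.g. for plotting the study area) is a small arithmetic step. A sketch, using the southern corner of Charkharibari as the example:

```python
def dms_to_decimal(degrees, minutes, seconds=0.0):
    """Convert a degrees/minutes/seconds coordinate to decimal degrees."""
    return degrees + minutes / 60 + seconds / 3600

# Charkharibari's southern corner: 26 deg 12' 50'' N, 88 deg 59' 10'' E
lat = dms_to_decimal(26, 12, 50)
lon = dms_to_decimal(88, 59, 10)
print(f"{lat:.4f} N, {lon:.4f} E")
```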
There is no electrical grid connection in this village, and as a result there is no industrial development. There is also no modern high-efficiency irrigation system, such as deep tube wells, in this village. The scarcity of drinking water is severe. Sand cover is severe in this village. Agricultural production is totally uncertain. Most of the agricultural land of this area is single cropped. As a result, there is no permanent village market or hut. Several Non-Governmental Organizations (NGOs) have already closed their micro-credit or other development programs in this village. There are several opportunities, such as the following. This village can be suitable for import-export and other border-related market business. A quite large unemployed workforce is ready. Planned stone collecting can create a huge workplace for the local inhabitants as well as for businessmen. If the "India-Bangladesh Combined Teesta Dam Project" could be completed earlier, then the village would be saved from riverbank erosion and flood. A fishery-based market or industry can be grown here. There are several threats for this settlement. These are as follows: the sand cover rate has increased, and it completely destroys the agricultural land. The rate of riverbank erosion has accelerated due to unplanned stone collection. The number of flash floods has increased; a similar finding was reported by Hossain et al. [21]. The intensity and duration of flash floods have also increased at an alarming rate. During the summer season the temperature threshold has increased, resulting in severe heat stress, while during the winter season the temperature threshold has decreased, resulting in severe cold waves. The intensity and duration of droughts have increased. This is the Strength Weakness Opportunities Threat (SWOT) analysis summary of Charkharibari.

SWOT ANALYSIS OF JIGABARI VILLAGE

The SWOT analysis and summary of Jigabari village is presented in Table 3.
Analysis Summary of Jigabari Village

Jigabari is located on the left bank inland of the Teesta River in Bangladesh. From the Strength Weakness Opportunities Threat (SWOT) matrix (Table 3) of this territory we can see that the major strengths of this locality are the following. The area has a flood protection dam. There are a number of educational institutions in this area. It has an impressive literacy rate of more than 70%. There is a semi-permanent health care center in this community. This village is directly connected with the nearby city and other areas by road network. This locality is connected with the national electrical grid. Several NGOs are working in this area on micro-credit and other programs. The agricultural land is very fertile and more than 50% of it is triple cropped. There are a number of fresh water ponds available here. This area is also beset by several problems, which are as follows. The flood protection dam is severely damaged and has a number of cracks and leakages. During floods, water enters the village through these cracks. Almost 50% of the land of this village is low lying; as a result, flood water remains stagnant in those areas. There is no pucca road network in this settlement. During floods all these kacha roads become inaccessible. The ground water table is declining at an accelerating rate. As a result, the fresh water ponds dry up during the pre-summer and summer. The drinking water problem gets worse in those periods. The number of landless families is around 35-40%. There is a severe level of arsenic contamination in the ground water. Except for agriculture, almost no other type of economic activity is found, and the fishing opportunity is very limited. During critical climatic stress periods the villagers don't get enough relief. There are several opportunities for this place, and the climate and geomorphology are suitable for agricultural production throughout the year. Seasonal vegetables can be grown as side products.
Connectivity with the nearby city is good, so agro-products can be sold directly in the city. In the high ground, "Indian Bay Leafs - Tejpata" can be cultivated, which is highly profitable. Agro-based industry can be established. By taking effective measures, 50% of the double cropped land can be converted into triple cropped. There are several threats for this village. These are as follows: several parts of the village are getting more waterlogged. Health-related problems are relatively high. The number of flash floods has increased. The intensity and duration of flash floods have also increased at an alarming rate. During the summer season the temperature threshold has increased, resulting in severe heat stress, while in the winter season the temperature threshold has decreased, resulting in severe cold waves. The intensity and duration of droughts have increased. This is the Strength Weakness Opportunities Threat (SWOT) analysis summary of Jigabari.

Table 2 (Charkharibari), External Factors, Threats (-):
1. Increased rate of sand cover in the agricultural field.
2. Rate of riverbank erosion is accelerated due to unplanned stone collection.
3. Frequency of flash floods has increased.
4. Intensity of floods has increased.
5. Intensity and duration of droughts have increased.
6. In summer, heat stress has increased.
7. In winter, cold waves have increased.

Table 3 (Jigabari)

Internal Factors, Strengths (+):
1. It is a highly agriculture-dependent and productive area.
2. Literacy rate is more than 70%.
3. A flood protection dam is in the settlement.
4. Connectivity with the city is good.
5. This community is connected to the national electric grid.
6. There is a semi-permanent medical center.
7. The location of this village is inland.
8. Several NGOs are working here.
9. Almost 50% of the agricultural land is very fertile (triple cropped).
10. There are a number of fresh water ponds in the locality.

Internal Factors, Weaknesses (-):
1. The flood protection dam is highly damaged and has a number of cracks.
2. Almost 50% of the land of the vicinity is low lying.
3. Flood water remains stagnant in several parts of the area.
4. There is no pucca road network.
5. There is no permanent medical facility in the village.
6. The fresh water ponds are dying.
7. During droughts, there is a severe drinking water problem.
8. During floods, locals get very little or no relief.
9. The number of landless families is around 35-40%.
10. Arsenic contamination is found in the ground water.
11. Except for agriculture, there are almost no other economic activities available.
12. This community doesn't have any permanent market place.
13. Limited fishing opportunity is available.

External Factors, Opportunities (+):
1. Climate and geomorphology are suitable for agricultural production throughout the year.
2. By taking effective measures, 50% of the double cropped land could be converted to triple cropped.
3. Connectivity with the city is good, so agro-products can be sold directly to the city.
4. In the high grounds, "Indian Bay Leafs - Tejpata" can be cultivated, which is highly profitable.
5. Seasonal vegetables can be grown.
6. Agro-based industry can be established.

External Factors, Threats (-):
1. The village is getting more and more waterlogged day by day.
2. The level of underground water is falling rapidly.
3. Intensity of floods has increased.
4. Intensity and duration of droughts have increased.
5. In summer, heat stress has increased.
6. In winter, cold waves have increased.
7. Health-related problems are relatively high.

SUMMARY OF COMPARATIVE VULNERABILITY ANALYSIS

Vulnerability to flood and drought is analyzed through SWOT analysis for the Charkharibari and Jigabari villages. It is found that Charkharibari is more vulnerable than Jigabari village. On the contrary, Charkharibari has more opportunities than Jigabari village. Based on the SWOT analysis, a concept strategy has been prepared in the following section of the study.
SWOT MATRIX ANALYSIS FOR THE CONCEPT STRATEGY

The Strength Weakness Opportunities Threat (SWOT) analysis is useful to decide the next step of making the concept strategy accurately. After applying this technique, every factor in the SWOT can be divided into four categories by using a cross-tabulated table: strength and opportunity, strength and threat, weakness and opportunity, and weakness and threat. Each category will produce different plans based on a combination of conditions and problems. The positive and negative aspects should be combined so that the positive aspects overcome the negative aspects that exist, in synergy. The concept strategy of the two villages can be seen in Tables 5 and 6.

Comparison of the two villages (Charkharibari vs. Jigabari):
- Agricultural productivity: substantial agricultural activity resulting in very little agro-income (Charkharibari); a deep irrigation system provides triple crops in a year (Jigabari).
- 6. Transportation system: waterlogged, with little to no formal transportation system (Charkharibari); holds a far better interconnected formal transportation system (Jigabari).
- 7. Fresh water source: scarcity of the resource (Charkharibari); adequate resource available (Jigabari).
- 9. Administrative, institutional and organizational capabilities: very little activity, such as a lack of educational institutions, electricity connection, tap water supply, healthcare facilities, and highly efficient irrigation systems (Charkharibari); wide-spread activities with far-reaching capabilities (Jigabari).
- 10. Major threats: sand cover on the arable land is very severe, and river bank erosion is also a major threat in this village (Charkharibari); arsenic contamination in the ground water is very acute here (Jigabari).
- 11. Natural resources: plenty of excavatable rock and boulders, which can be used for construction purposes (Charkharibari); no such resource is available (Jigabari).
- 12. Future possibilities: plenty of fishing opportunity, and after a certain period this locality will be protected from flood; plenty of non-agro workplaces can be created, such as stone collecting, border-related import-export business and boating (Charkharibari); agro-based industries can be established here (Jigabari).

OPPORTUNITY:
- Due to an international boundary, there is plenty of possibility for border-related export-import business.
- After completing the dam, there is a possibility of an improved communication system.
- As the youth rate is very high, there is a ready workforce available.
- After completing the dam, there is a possibility of reduced flooding and riverbank erosion.
- Planned stone collection can create a strong income source.
- After completing the dam, there is a possibility of improved educational facilities.
- Fishing opportunity can create another huge income source.
- After completing the dam, there is a possibility of improved health facilities.
- After completing the dam, the village can be agriculturally productive because the remaining land is very fertile.
- Solar-powered irrigation can be helpful to solve the water scarcity problem.

THREAT:
- Due to an international boundary, there is a threat of smuggling.
- Sand cover is a large threat to agro-production.
- Due to high unemployment, there is a threat of a high crime rate.
- Increased heat stress in summer, coupled with water scarcity, could worsen the situation.
- Unplanned stone collection can accelerate the rate of riverbank erosion.
- A high riverbank erosion rate would slow the dam construction.
- Illegal fishing equipment can lead to the extinction of several local varieties.
- Increased intensity of climatic stresses will hamper the possible future economic sectors.
- Poor condition of the dam will lead to more flooding in the near future.

OPPORTUNITY:
- Agro-based market and industries could be established if the village's inter-connectivity gets improved.
- By cheap sand filling from the nearby Teesta river, the flood-water stagnant area can be reduced.
- Because of the high literacy rate, empowerment efforts and providing knowledge on disaster management would be easy.
- About 35-40% of landless family members could be an instant workforce.
- Due to national grid electrical availability, the urbanization rate in this village will be faster.
- Agro-based market and industries would diversify the income sector.
- NGOs working in this village can improve health facilities.
- Low-lying areas can be used for fish cultivation.

THREAT:
- Using a high-efficiency irrigation system such as deep tube wells would lead to future underground water scarcity.
- Increased heat stress in summer, coupled with water scarcity, could worsen the situation.
- Due to the high literacy rate, there is a threat of a high civil-society migration rate.
- Increased intensity of climatic stresses would hamper the possible future economic sectors.
- Unequal distribution of agricultural land would cause severe future economic discrimination.
- A high ground water declination rate would slow down the agro-production rate.
- As the ground water is declining rapidly, in the future this village could turn into a non-agro-productive area.
- Overall agro-production cost would increase.

CONCLUSION

Based on the analysis and findings of this research, the following conclusions are drawn. In the lower Teesta Basin area, people are repeatedly confronted by natural catastrophes, such as drought and flood, almost every year. In terms of vulnerability, Charkharibari is more vulnerable than Jigabari, but there are more opportunities for Charkharibari than for Jigabari. So, drought- and flood-induced critical periods are more acute in Charkharibari village. Besides, the coping capacity against these critical periods is highly influenced by income, occupation, education, and the frequency and duration of hazards. People's demands vary during and after these critical periods [22]. However, it is evident that proper dissemination of information regarding early warning, and assistance from governmental as well as non-governmental organizations, can significantly improve the coping capacity and reduce the vulnerability of the inhabitants of Charkharibari village.
CONSENT AND ETHICAL APPROVAL The authors confirm that the ethical policies have been adhered to. As per international or university standard guidelines, ethical approval and participant consent have been collected and preserved by the authors.
2021-04-17T12:28:19.088Z
2021-01-27T00:00:00.000
{ "year": 2021, "sha1": "a4328be63a6f3ab0111a3023b26ba788a3be9f52", "oa_license": "CCBY", "oa_url": "https://journalajgr.com/index.php/AJGR/article/download/52/103", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a4328be63a6f3ab0111a3023b26ba788a3be9f52", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Geography" ] }
44470415
pes2o/s2orc
v3-fos-license
A novel biological reconstruction of tibial bone defects arising after resection of tumors The purpose of this paper is to describe a biologic reconstruction strategy for defects after resection of malignant tibia tumors. Limb-sparing surgery was used for 4 patients with malignant tibia tumors. All patients were male, with an average age of 39.5 years (range: 34–46 years). Mean length of the resected tibia segment was 135 mm (range: 120–150 mm). The defects were primarily reconstructed with bone cement and a locked plate until completion of the medical treatment of the tumor. The bone transport was performed over the locked plate, and the docking site was grafted at the final stage. Mean follow-up period was 49.75 months (range: 22–71 months). Mean distraction index was 11.48 days/cm (range: 11.30–11.75 days/cm), and mean external fixation time was 167 days (range: 152–187 days). According to Paley, functional results were excellent in 2 cases and good in the other 2 cases. Radiological results were excellent in all cases. Two major and 2 minor complications were observed. In this method, stable internal fixation and active usage of the extremity are provided until biological reconstruction, and possible wound problems can be completely eliminated during the medical treatment of the tumor. Despite improvements in limb-sparing surgical treatment modalities, isolation of the surgical site with proper wound closure, making a stable reconstruction of the extremity, and providing a satisfactorily functional extremity remain concerns of orthopedic surgeons involved in musculoskeletal oncology. [1] Whichever method is preferred, the priorities of the treatment should be resection of the tumor, provision of satisfactory soft tissue coverage, and achievement of a stable fixation which allows the patient to use the extremity functionally. 
[2,3] Stability and functional usage of the extremity until the definitive treatment of the defect could provide a huge advantage for patients, such as avoidance of disuse atrophy and of possible wound problems during the long period of chemoradiotherapy. For definitive biological reconstruction, bone transport using external fixators is a common and very successful method. [4] Though resection of the tumor and reconstruction of the extremity are different procedures, there should be a synergism between them. Protecting the integrity of the skin is necessary for the prevention of wound problems that may occur during prolonged chemoradiotherapy. Permitting full weight-bearing usage of the extremity throughout the treatment period is necessary to ensure the highest level of emotional well-being and the success of the final reconstruction procedure. [5,6] While the conventional bone transport method has a high success rate for extremity reconstruction, there are frequent and severe complications associated with it, including pin-tract infections, soft tissue contracture, fibrosis, and refracture. [7,8] In order to minimize the external fixation period, intramedullary nails are commonly used along with external fixators. Using this combination of devices decreases the rates of complications and refractures. [9] However, most malignant tumors which lead to massive bone defects are located in the junctions of long bones in the extremities. [12] In this study, we present a novel biological reconstruction of bone defects arising after resection of tibial sarcomas using locked plates, bone cement, and external fixators. With this technique, patients are encouraged to use their extremities functionally at every stage of the treatment. 
Case reports Between 2006 and 2011, 4 patients with malignant tumors of the tibia and fibula (malignant fibrous histiocytoma, osteosarcoma, synovial sarcoma, and Ewing's sarcoma) were treated at our institution. The primary tumor was located in the proximal tibial metaphysis in 1 case, in the mid-diaphysis in 1 case, in the distal tibia in 1 case, and in the distal fibula in the remaining case. The tibiotalar joint had to be resected in 2 cases and the distal third of the fibula in 1. All patients were male, with an average age of 39.5 years (range: 34-46 years). Mean length of the resected tibial segment was 135 mm (range: 120-150 mm); 150 mm of fibula was resected in 1 patient. Before surgical intervention, 3 patients were treated with chemotherapy and 1 with radiotherapy, and all received chemotherapy after resection. Mean period between the initial diagnosis and the resection was 112.25 days (range: 67-147 days), and mean period between the bone transport and the biological reconstruction after resection was 191.75 days (range: 127-361 days) (Table 1). All patients were treated in 3 stages. In the first stage, the tumor was resected, the existing defect was reconstructed with bone cement, and the defective bone and cement combination was fixed by locked plates. The second stage of treatment was initiated after a second course of medical treatment, with the white blood cell count returning to normal levels. In this stage, the cement was removed, and bone transport was performed by external fixator. In the last stage, the transported fragment was fixed with the existing locked plate, and the docking site grafting was performed. Longitudinal incisions were used in accordance with the localization and size of the mass (Figures 1a-c). 
Locking plates were inserted through a separate lateral incision in 3 cases (Figures 1d, e) and a medial incision in 1. In 2 cases, distal femoral lateral anatomic plates were used, and in the other 2 cases, proximal tibial lateral anatomic plates were preferred. In 2 cases, the fixation was between 2 tibial fragments, while it was between the tibia and talus in the other 2 cases (Figures 1f, g). The cement at the site of the defect replicated the shape of the original bone (Figures 1d, f, g), and teicoplanin was added to the mixture. The plate was affixed to the cement by 2-4 screws (Figures 1d, f). If necessary, anatomical layers were closed with an aspirative drain. For easier wound closure, the diameter of the extremity was decreased as needed (Figure 1e), and rotational flaps were used. As soon as wound healing was achieved, all patients were allowed full weight-bearing with protective orthoses. Medical treatment of the tumor was continued after wound healing (Table 1). In the second stage, the cement was removed through the previous incision scar, and the wound was reclosed according to the anatomical layers. A hybrid external fixator bridging the defect was applied to the extremity. The device was made up of 3 parts, 2 of which were used to fix the proximal and distal segments, while a mobile third part was used to fix the segment to be transported. A lengthening osteotomy was performed in the longer fragment (Figures 2a, b). In 2 cases, the foot was fixed with the fixator, and the knee was fixed in another. Lengthening at a rate of 1 mm/day was initiated after a latent period of 7-15 days. Lengthening was continued until the defect was eliminated. The fixator was removed, the transported fragment was fixed 
with the existing plate, and the docking site was grafted from the iliac crest in the final stage (Figures 2c, d). All patients were allowed full weight-bearing immediately after wound healing, with orthoses until solid osseous healing. Follow-up was conducted by clinical evaluation and laboratory tests, including C-reactive protein, sedimentation rate, and white blood cell count. Functional and radiologic evaluations were made according to Paley. [8] Patients were encouraged to perform muscle strengthening exercises. In order to prevent joint contractures, night splints were used during bone transport. Daily pin-site care was performed for the external fixator, and oral antibiotics were used when necessary. After wound healing, all patients were allowed to bathe. They were supported with a daily administration of 0.25 mg alfacalcidol and 1 g calcium until the end of the consolidation phase. Solid osseous union was achieved in all cases. Functional results were good in 2 patients and excellent in the other 2, and radiological results were excellent in all patients according to Paley. Mean follow-up period after the treatment was 49.75 months (range: 22-71 months). Mean distraction index was 11.48 days/cm (range: 11.30-11.75 days/cm), mean external fixation time was 167 days (range: 152-187 days), and the external fixator index was 12.37 days/cm (range: 12.00-12.66 days/cm). During the course of treatment, 2 major and 2 minor complications were observed. Minor complications were grade I and II pin-tract infections, treated with local pin-site care and oral antibiotics. One of the major complications observed was exposure of the plate after grafting of the docking site. For this patient, the plate had to be removed, and further fixation was provided by a circular external fixator (Figures 3a-f). The other major complication was foot equinus, which required percutaneous achilloplasty (Tables 1, 2). 
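As a quick arithmetic cross-check on the reported indices (the units are assumed here to be days/cm, the customary form of the Paley external fixator index; the printed values appear to have lost their decimal points), the external fixator index should roughly equal the fixation time divided by the distance transported:

```python
# Hypothetical back-of-envelope check using the mean values from the case
# series above: fixator index = external fixation time / distance transported.
resected_mm = 135        # mean resected tibial segment (mm)
fixation_days = 167      # mean external fixation time (days)
index = fixation_days / (resected_mm / 10.0)   # days per cm
print(round(index, 2))   # ≈ 12.37, consistent with the reported fixator index
```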
Discussion Resection of a malignant tumor within an extremity leads to significantly large soft tissue and bone defects. It usually requires an extended period to achieve a tumor-free extremity, to preserve the functional status, and to achieve a satisfactory reconstruction. [13,14] Many reconstructive procedures have been described in the literature. In most cases, arthroplasty or biological reconstruction methods are preferred. [17,18] Compliance of the patient is a necessity for optimal results, while maintaining the patient's psychosocial status. Even if these can be achieved, reconstruction of the extremity is still a major concern for orthopedic surgeons treating musculoskeletal tumors. [5] In cases where the ankle joint must be removed, arthroplasty has many complications such as infection, talar collapse, and dysfunction. [19,20] Arthrodesis using intramedullary nails and allografts has a high rate of nonunion and allograft fracture. [21] Microsurgical techniques such as vascularized fibula transfer and free tissue transfer can be performed in only a few institutions. [24] In addition to radiotherapy or chemotherapy, patients are at risk for wound problems and infection, which have a negative effect on vascular autograft and allograft healing rates. In our treatment, the risk is lower compared to the other treatment protocols mentioned above. After resection, the defect was treated with a combination of cement and locking plate, which is a relatively simple application, permitting good compliance with follow-up. The Masquelet technique described in the literature is similar to the first phase of our technique. [25] However, the Masquelet technique is primarily intended for the treatment of traumatic defects. 
[26] Studies on the treatment of large defects due to malignant tumors are lacking in the literature. In our patients, the defects were too large to be treated with graft chips. Furthermore, radiotherapy and chemotherapy, which are necessary for an effective long-term treatment of the tumor, can be a significant burden on the integration of the applied bone graft with the host bone. [27-30] As the external fixation period increases, complications are observed with greater frequency and seriousness. [31] In recent years, external fixators have been used in combination with intramedullary nails to decrease the external fixation period. [9] However, our patients were not good candidates for intramedullary nailing due to the risk of dispersing malignant cells throughout the body, as well as the risk of insufficient fixation of small metaphyseal fragments. Our patients had an increased tendency to infection because of the medical treatment of the tumor. [32] It is of great importance to be able to continue medical treatment and prevent infection. The method we describe has the advantage of acute wound closure after resection, thus isolating the resection site from the exterior. The implanted cement is an important factor in preventing infections, as it includes teicoplanin. As a result, deep infection was not observed in any of our patients, even in immunosuppressive phases. The technique we describe is a new biologic reconstruction method for defects after resection of tumors which combines the advantages of using locked plates, bone cement, and external fixators, with minimal disadvantages. Fig. 1. The tumor (a-c) was resected in the first stage. Longitudinal incisions were used (d, e) in accordance with the localization of the mass. The site of the defect was temporarily reconstructed with bone cement. The cement was shaped like the original bone (f, g). Locking plates were inserted through a separate lateral incision (e). A distal femoral lateral anatomic plate was used (d, f, g) in this case. 
Fig. 2. In the second session, an external fixator bridging the defect was applied to the extremity. The device was made up of 3 parts, 2 of which were used to fix the proximal and distal segments, whilst a mobile third part was used to fix the segment to be transported (a). In this case, the foot was fixed with the fixator. A lengthening osteotomy was performed in the longer fragment (b). Lengthening was continued until the defect was eliminated. The fixator was removed, the transported fragment was fixed with the existing plate, and the docking site was grafted from the iliac crest in the final stage (c, d). Fig. 3. A major complication was observed in the first case of our series: plate exposure (a, b) was seen after grafting of the docking site (c). In this case, the plate was extracted and a bifocal treatment was applied with a circular external fixator (d). White arrows indicate the reosteotomy line; solid bony union was achieved at the end of treatment (e, f). Table 1. Characteristics of patients and treatments. Table 2. Complications and their treatment.
2018-04-03T05:48:51.338Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "e7f3e9ffb624b3d30d95009c4022fffff4c3c747", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3944/aott.2015.13.0063", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e7f3e9ffb624b3d30d95009c4022fffff4c3c747", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
221642477
pes2o/s2orc
v3-fos-license
Pervasive duplication of tumor suppressors in Afrotherians during the evolution of large bodies and reduced cancer risk The risk of developing cancer is correlated with body size and lifespan within species. Between species, however, there is no correlation between cancer and either body size or lifespan, indicating that large, long-lived species have evolved enhanced cancer protection mechanisms. Elephants and their relatives (Proboscideans) are a particularly interesting lineage for the exploration of mechanisms underlying the evolution of augmented cancer resistance because they evolved large bodies recently within a clade of smaller-bodied species (Afrotherians). Here, we explore the contribution of gene duplication to body size and cancer risk in Afrotherians. Unexpectedly, we found that tumor suppressor duplication was pervasive in Afrotherian genomes, rather than restricted to Proboscideans. Proboscideans, however, have duplicates in unique pathways that may underlie some aspects of their remarkable anti-cancer cell biology. These data suggest that duplication of tumor suppressor genes facilitated the evolution of increased body size by compensating for decreasing intrinsic cancer risk. Introduction Among the constraints on the evolution of large bodies and long lifespans in animals is an increased risk of developing cancer. If all cells in all organisms have a similar risk of malignant transformation and equivalent cancer suppression mechanisms, then organisms with many cells should have a higher prevalence of cancer than organisms with fewer cells, particularly because large and small animals have similar cell sizes (Savage et al., 2007). 
Consistent with this expectation there is a strong positive correlation between body size and cancer incidence within species; for example, cancer incidence increases with increasing adult height in humans (Million Women Study collaborators et al., 2011; Nunney, 2018) and with increasing body size in dogs, cats, and cattle (Dobson, 2013; Dorn et al., 1968; Lucena et al., 2011). There is no correlation, however, between body size and cancer risk between species; this lack of correlation is often referred to as 'Peto's Paradox' (Caulin and Maley, 2011; Leroi et al., 2003; Peto et al., 1975). Indeed, cancer prevalence is relatively stable at ~5% across species with diverse body sizes ranging from the minuscule 51 g grass mouse to the gargantuan 4800 kg African elephant (Abegglen et al., 2015; Boddy et al., 2020; Tollis et al., 2020). The ultimate resolution to Peto's Paradox is trivial: large-bodied and long-lived species evolved enhanced cancer protection mechanisms. Identifying and characterizing the mechanisms that underlie the evolution of augmented cancer protection, however, has proven difficult (Ashur-Fabian et al., 2004; Seluanov et al., 2008; Gorbunova et al., 2012; Tian et al., 2013; Sulak et al., 2016). One of the challenges for discovering how animals evolved enhanced cancer protection mechanisms is identifying lineages in which large-bodied species are nested within species with small body sizes. Afrotherian mammals are generally small-bodied, but also include the largest extant land mammals. For example, maximum adult weights are ~70 g in golden moles, ~120 g in tenrecs, ~170 g in elephant shrews, ~3 kg in hyraxes, and ~60 kg in aardvarks (Tacutu et al., 2013). In contrast, while extant hyraxes are relatively small, the extinct Titanohyrax is estimated to have weighed ~1300 kg (Schwartz et al., 1995). 
The largest living Afrotheria are also dwarfed by the size of their recently extinct relatives: extant sea cows such as manatees are large bodied (~322–480 kg) but are relatively small compared to the extinct Steller's sea cow, which is estimated to have weighed ~8000–10,000 kg (Scheffer, 1972). Similarly, African Savannah (4800 kg) and Asian elephants (3200 kg) are large, but are dwarfed by the truly gigantic extinct Proboscideans such as Deinotherium (~12,000 kg), Mammut borsoni (16,000 kg), and the straight-tusked elephant (~14,000 kg) (Larramendi, 2015). Remarkably, these large-bodied Afrotherian lineages are nested deeply within small-bodied species (Figure 1; O'Leary et al., 2013a; Springer et al., 2013; O'Leary et al., 2013b; Puttick and Thomas, 2015), indicating that gigantism independently evolved in hyraxes, sea cows, and elephants (Paenungulata). Thus, Paenungulates are an excellent model system in which to explore the mechanisms that underlie the evolution of large body sizes and augmented cancer resistance. Box 1. Eutherian phylogenetic relationships. Eutheria (eu- 'good' or 'right' and thē ríon 'beast', hence 'true beasts') is one of three living (extant) mammalian lineages (Monotremes, Marsupials, and Eutherians) that diverged in the early-late Cretaceous. Eutheria was named in 1872 by Theodore Gill and refined by Thomas Henry Huxley in 1880. Living Eutherians comprise 18 orders, divided into two major clades (Figure 1A): Atlantogenata, including the superorders Xenarthra (armadillos, anteaters, and sloths) and Afrotheria (Proboscidea, Sirenia, Hyracoidea, Tubulidentata, Afroinsectivora, Cingulata, and Pilosa), and Boreoeutheria, including the superorders Laurasiatheria (Insectivora, Artiodactyla, Pholidota, and Carnivora) and Euarchontoglires (Lagomorpha, Rodentia, Scandentia, Dermoptera, and Primates). 
In our analyses, we have focused on identifying gene duplications in Afrotherian and Xenarthran genomes (Figure 1B), using the Xenarthrans Hoffmann's two-toed sloth (Choloepus hoffmanni) and nine-banded armadillo (Dasypus novemcinctus) as out-groups to the Afrotherians. eLife digest: From the gigantic blue whale to the minuscule bumblebee bat, animals come in all shapes and sizes. Any species can develop cancer, but some are more at risk than others. In theory, if every cell has the same probability of becoming cancerous, then bigger animals should get cancer more often, since they have more cells than smaller ones. Amongst the same species, this relationship is true: taller people and bigger dogs have a greater cancer risk than their smaller counterparts. Yet this correlation does not hold when comparing between species: remarkably large creatures, like elephants and whales, are not more likely to have cancer than any other animal. But how have these gigantic animals evolved to be at lower risk for the disease? To investigate, Vazquez and Lynch compared the cancer risk and the genetic information of a diverse group of closely related animals with different body sizes. This included elephants, woolly mammoths, and mastodons as well as their small relatives, the manatees, armadillos, and marmot-sized hyraxes. Examining these species' genomes revealed that, during evolution, elephants had acquired extra copies of 'tumour suppressor genes' which can sense and repair the genetic and cellular damage that turns healthy cells into tumours. This allowed the species to evolve large bodies while lowering their risk of cancer. Further studies could investigate whether other gigantic animals evolved similar ways to shield themselves from cancer; these could also examine precisely how having additional copies of cancer-protecting genes helps reduce cancer risk, potentially paving the way for new approaches to treat or prevent the disease. 
This approach allows us to use phylogenetic methods to polarize gene duplication events and identify genes that duplicated in the Afrotherian stem-lineage. Many mechanisms have been suggested to resolve Peto's paradox, including a decrease in the copy number of oncogenes, an increase in the copy number of tumor suppressor genes (Caulin and Maley, 2011; Leroi et al., 2003; Nunney, 1999), reduced metabolic rates, reduced retroviral activity and load (Katzourakis et al., 2014), selection for 'cheater' tumors that parasitize the growth of other tumors (Nagy et al., 2007), greater sensitivity of cells to DNA damage (Abegglen et al., 2015; Sulak et al., 2016), and enhanced recognition of neoantigens by T cells, among many others. Among the most parsimonious routes to enhanced cancer resistance may be an increased copy number of tumor suppressors. For example, transgenic mice with additional copies of TP53 have reduced cancer rates and extended lifespans (García-Cao et al., 2002), suggesting that changes in the copy number of tumor suppressors can affect cancer rates. Indeed, candidate gene studies have found that elephant genomes encode duplicate tumor suppressors such as TP53 and LIF (Abegglen et al., 2015; Sulak et al., 2016; Vazquez et al., 2018) as well as other genes with putative tumor suppressive functions (Doherty and de Magalhães, 2016). These studies, however, focused on a priori candidate genes; thus it is unclear whether duplication of tumor suppressor genes is a general phenomenon in the elephant lineage or reflects an ascertainment bias. Here we trace the evolution of body mass, cancer risk, and gene copy number variation across Afrotherian genomes, including multiple living and extinct Proboscideans (Figure 1), to investigate whether duplications of tumor suppressors coincided with the evolution of large body sizes. 
Our estimates of the evolution of body mass across Afrotheria show that large body masses evolved in a stepwise manner, similar to previous studies (O'Leary et al., 2013a; Springer et al., 2013; O'Leary et al., 2013b; Puttick and Thomas, 2015) and coincident with dramatic reductions in intrinsic cancer risk. To explore whether duplication of tumor suppressors occurred coincident with the evolution of large body sizes, we used a genome-wide Reciprocal Best BLAT Hit (RBBH) strategy to identify gene duplications and used maximum likelihood to infer the lineages in which those duplications occurred. Unexpectedly, we found that duplication of tumor suppressor genes was common in Afrotherians, both large and small. Gene duplications in the Proboscidean lineage, however, were uniquely enriched in pathways that may explain some of the unique cancer protection mechanisms observed in elephant cells. These data suggest that duplication of tumor suppressor genes is pervasive in Afrotherians and preceded the evolution of species with exceptionally large body sizes. Results Step-wise evolution of body size in Afrotherians Similar to previous studies of Afrotherian body size (Puttick and Thomas, 2015; Elliot and Mooers, 2014), we found that the body mass of the Afrotherian ancestor was inferred to be small (0.26 kg, 95% CI: 0.31–3.01 kg) and that substantial accelerations in the rate of body mass evolution occurred coincident with a 67.36× increase in body mass in the stem-lineage of Pseudoungulata (17.33 kg); a 1.45× increase in body mass in the stem-lineage of Paenungulata (25.08 kg); an 11.82× increase in body mass in the stem-lineage of Tethytheria (296.56 kg); a 1.39× increase in body mass in the stem-lineage of Proboscidea (412.5 kg); and a 2.69× increase in body mass in the stem-lineage of Elephantimorpha (4114.39 kg), which is the last common ancestor of elephants and mastodons based on the fossil record (Figure 2A,B). 
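The RBBH duplicate-calling step mentioned above can be sketched in miniature. This is an assumed simplification of the study's genome-wide pipeline, not its actual code: the gene names, scores, and the (query, target, score) tuple format are illustrative, and real alignment scores would come from BLAT output.

```python
# Minimal sketch of reciprocal-best-hit duplicate calling (an assumed
# simplification of the paper's genome-wide RBBH strategy; the data below
# are illustrative placeholders).
def best_hits(hits):
    """hits: iterable of (query, target, score); return query -> best target."""
    best = {}
    for query, target, score in hits:
        if query not in best or score > best[query][1]:
            best[query] = (target, score)
    return {q: t for q, (t, _) in best.items()}

# Alignment hits between a reference gene set and a target genome.
forward = [("TP53", "locus_A", 950), ("TP53", "locus_B", 900), ("LIF", "locus_C", 800)]
reverse = [("locus_A", "TP53", 940), ("locus_B", "TP53", 890), ("locus_C", "LIF", 790)]

# Two or more target loci whose best reverse hit is the same gene
# mark that gene as a duplication candidate.
dup_counts = {}
for locus, gene in best_hits(reverse).items():
    dup_counts[gene] = dup_counts.get(gene, 0) + 1
print(dup_counts)  # {'TP53': 2, 'LIF': 1} -> TP53 flagged as duplicated
```

In the full analysis, each candidate would additionally be polarized onto the phylogeny by maximum likelihood, as described in the text.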
The ancestral Hyracoidea was inferred to be relatively small (2.86–118.18 kg), and rate accelerations were coincident with independent body mass increases in large hyraxes such as Titanohyrax andrewsi (429.34 kg, a 67.36× increase) (Figure 2A,B). While the body mass of the ancestral Sirenian was inferred to be large (61.7–955.51 kg), a rate acceleration occurred coincident with a 10.59× increase in body mass in Steller's sea cow (Figure 2A,B). Rate accelerations also occurred coincident with dramatic reductions in body mass (a 36.6× decrease) in the stem-lineage of the dwarf elephants Elephas (Palaeoloxodon) antiquus falconeri and Elephas cypriotes (Figure 2A,B). These data indicate that gigantism in Afrotherians evolved step-wise, from small to medium bodies in the Pseudoungulata stem-lineage, medium to large bodies in the Tethytherian stem-lineage and extinct hyraxes, and from large to exceptionally large bodies independently in the Proboscidean stem-lineage and Steller's sea cow (Figure 2A,B). Step-wise reduction of intrinsic cancer risk in large, long-lived Afrotherians In order to account for a relatively stable cancer rate across species (Abegglen et al., 2015; Boddy et al., 2020; Tollis et al., 2020), intrinsic cancer risk must also evolve with changes in body size and lifespan across species. We used empirical body size and lifespan data from extant species, and empirical body size and estimated lifespan data from extinct species, to estimate intrinsic cancer risk (K) with the simplified multistage cancer risk model K ≈ Dt^6, where D is the maximum body size and t is the maximum lifespan (Peto et al., 1975; Peto, 2015; Armitage, 1985; Armitage and Doll, 2004). As expected, intrinsic cancer risk in Afrotheria also varies with changes in body size and longevity (Figure 2A,B), with a 6.41-log2 decrease in the stem-lineage of Xenarthra, followed by a 13.37-log2 decrease in Pseudoungulata, and a 1.49-log2 decrease in Aardvarks (Figure 2A). 
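The simplified multistage model K ≈ Dt^6 and the log2 comparisons derived from it can be turned into a small numeric sketch. The species values below are illustrative placeholders, not the paper's ancestral-state estimates:

```python
import math

# Relative intrinsic cancer risk under the simplified multistage model
# K ~ D * t^6 (D = maximum body size, t = maximum lifespan).
def intrinsic_risk(mass_kg, lifespan_yr):
    return mass_kg * lifespan_yr ** 6

def log2_change(k_from, k_to):
    """log2 fold-change in intrinsic risk between two nodes of the tree."""
    return math.log2(k_to / k_from)

# Illustrative comparison: a small ancestor vs. an elephant-sized descendant
# (placeholder masses and lifespans, for scale only).
k_ancestor = intrinsic_risk(0.26, 5.0)      # ~0.26 kg ancestral Afrotherian
k_elephant = intrinsic_risk(4800.0, 65.0)   # African elephant-scale values
print(round(log2_change(k_ancestor, k_elephant), 2))
```

The same sign convention recovers the lineage-specific "log2 decreases" in the text: a descendant that keeps cancer prevalence stable despite a large positive log2 increase in K must have evolved a compensating reduction in per-cell risk.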
In contrast to the Paenungulate stem-lineage, there is a 7.84-log2 decrease in cancer risk in Tethytheria, a 0.67-log2 decrease in the Manatee, a 3.14-log2 decrease in Elephantimorpha, and a 1.05-log2 decrease in Proboscidea. Relatively minor decreases occurred within Proboscidea, including a 0.83-log2 decrease in Elephantidae and a 0.57-log2 decrease in the American Mastodon. Within the Elephantidae, Elephantina and Loxodontini have a 0.06-log2 decrease in cancer susceptibility, while susceptibility is relatively stable in Mammoths. The three extant Proboscideans, the Asian Elephant, the African Savannah Elephant, and the African Forest Elephant, meanwhile, have similar decreases in body size, with slight increases in cancer susceptibility (Figure 2A,B). Pervasive duplication of tumor suppressor genes in Afrotheria Our hypothesis was that genes which duplicated coincident with the evolution of increased body mass (IBM) and reduced intrinsic cancer risk (RICR) would be uniquely enriched in tumor suppressor pathways compared to genes that duplicated in other lineages. Therefore, we identified duplicated genes in each Afrotherian lineage (Table 1 and Figure 3A) and tested if they were enriched in Reactome pathways related to cancer biology (Figure 3B, Table 2). No pathways related to cancer biology were enriched in the Pseudoungulata (67.36-fold IBM, 13.37-log2 RICR), but few genes were inferred to be duplicated in this lineage, reducing power to detect enriched pathways. Consistent with our hypothesis, 18.18% of the pathways that were enriched in the Paenungulate stem-lineage (1.45-fold IBM, 1.17-log2 RICR), 63% of the pathways that were enriched in the Tethytherian stem-lineage (11.82-fold IBM, 7.84-log2 RICR), and 38.81% of the pathways that were enriched in the Proboscidean stem-lineage (1.06-fold IBM, 3.14-log2 RICR) were related to tumor suppression (Figure 3B, Table 2). 
Similarly, 21.28% and 38.00% of the pathways that were enriched in the manatee (1.11-fold IBM, 0.89-log2 RICR) and the aardvark (67.36-fold IBM, 1.49-log2 RICR), respectively, were related to tumor suppression. In contrast, only 2.86% of the pathways that were enriched in the hyrax (1.6-fold IBM, 1.49-log2 RICR) were related to tumor suppression (Figure 3B, Table 2). Unexpectedly, however, lineages without major increases in body size or lifespan, or decreases in intrinsic cancer risk, were also enriched for tumor suppressor pathways. For example, 13.85%, 37.04%, and 22.00% of the pathways that were enriched in the stem-lineages of Afroinsectivora and Afrosoricida, and in E. telfairi, respectively, were related to cancer biology (Figure 3B, Table 2). Our observation that gene duplicates in most lineages are enriched in cancer pathways suggests either that duplication of genes in cancer pathways is common in Afrotherians, or that there may be a systematic bias in the pathway enrichment analyses. For example, random gene sets may be generally enriched in pathway terms related to cancer biology. To explore this latter possibility, we generated 5000 randomly sampled gene sets of between 10 and 5000 genes, and tested for enriched Reactome pathways using ORA. We found that no cancer pathways were enriched (median hypergeometric p-value ≥ 0.05) among gene sets greater than 157 genes; in smaller gene sets, however, 12–18% of enriched pathways were classified as cancer pathways. Without considering p-value thresholds, the percentage of enriched cancer pathways approaches ~15% (213/1381) in simulated sets. Thus, for larger gene sets, we used a simulated threshold of ~15% to determine if pathways related to cancer biology were enriched more than one would expect from sampling bias (Table 2). 
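The random-gene-set control described above reduces to repeated hypergeometric over-representation tests. A self-contained sketch of one such test follows; the background size, pathway size, and gene-set size are illustrative placeholders, not the study's data:

```python
import random
from math import comb

def hypergeom_sf(k, M, n, N):
    """One-sided ORA p-value: P(overlap >= k) for a pathway of n genes in a
    background of M genes, when drawing a gene set of size N."""
    return sum(comb(n, i) * comb(M - n, N - i)
               for i in range(k, min(n, N) + 1)) / comb(M, N)

random.seed(0)
BACKGROUND = 20000                                     # background gene count
pathway = set(random.sample(range(BACKGROUND), 100))   # a 100-gene pathway
gene_set = set(random.sample(range(BACKGROUND), 500))  # one random gene set
overlap = len(pathway & gene_set)
p = hypergeom_sf(overlap, BACKGROUND, len(pathway), len(gene_set))
print(overlap, round(p, 3))
```

Repeating this over thousands of random gene sets of varying sizes, as the study does, yields the background rate of "cancer pathway" enrichment against which the observed lineage-specific enrichments are compared.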
We directly compared our simulated and observed enrichment results by lineage and gene set size, and found that Afrosoricida, the Cape golden mole, tenrec, Elephantidae, elephant shrew, Asian elephant, African Savannah elephant, African Forest elephant, Columbian mammoth, aardvark, Paenungulata, Proboscidea, Tethytheria, and manatee had enriched cancer pathway percentages above background with respect to their gene set sizes, that is, above the enrichments expected from random sampling of small gene sets (Table 2). Thus, we conclude that duplication of genes in cancer pathways is common in many Afrotherians, but that the inference of enriched cancer pathway duplication is not different from background in some lineages, particularly in ancestral nodes with a small number of estimated duplicates. Tumor suppressor pathways enriched exclusively within Proboscideans While duplication of cancer-associated genes is common in Afrotheria, the 157 genes that duplicated in the Proboscidean stem-lineage (Figure 3A) were uniquely enriched in 12 pathways related to cancer biology (Figure 3B). Among these uniquely enriched pathways (Figure 3C) were pathways related to the cell cycle, including 'G0 and Early G1', 'G2/M Checkpoints', and 'Phosphorylation of the APC/C'; pathways related to DNA damage repair, including 'Global Genome Nucleotide Excision Repair (GG-NER)', 'HDR through Single Strand Annealing (SSA)', 'Gap-filling DNA repair synthesis and ligation in GG-NER', 'Recognition of DNA damage by PCNA-containing replication complex', and 'DNA Damage Recognition in GG-NER'; pathways related to telomere biology, including 'Extension of Telomeres' and 'Telomere Maintenance'; pathways related to the apoptosome, including 'Activation of caspases through apoptosome-mediated cleavage'; and pathways related to 'mTORC1-mediated signaling' and 'mTOR signaling', which play important roles in the biology of aging. 
Thus, duplication of genes with tumor suppressor functions is pervasive in Afrotherians, but genes in some pathways related to cancer biology and tumor suppression are uniquely duplicated in large-bodied (long-lived) Proboscideans (Figure 4A,B).

Coordinated duplication of TP53-related genes in Proboscidea

Prior studies found that the 'master' tumor suppressor TP53 duplicated multiple times in elephants (Abegglen et al., 2015; Sulak et al., 2016), motivating us to further study duplication of genes involved in TP53-related pathways in Proboscidea. We traced the evolution of genes in the TP53 pathway that appeared in one or more Reactome pathway enrichments for genes duplicated recently in the African elephant, which has the most complete genome among Proboscideans and for which several RNA-Seq data sets are available. We found that the initial duplication of TP53 in Tethytheria, where body size expanded, was preceded by the duplication of GTF2F1 and STK11 in Paenungulata and was coincident with the duplication of BRD7. These three genes are involved in regulating the transcription of TP53 (Liang and Mills, 2013; Launonen, 2005; Drost et al., 2010; Burrows et al., 2010), and their duplication prior to that of TP53 may have facilitated re-functionalization of TP53 retroduplicates. Interestingly, STK11 is also a tumor suppressor that mediates tumor suppression via p21-induced senescence (Launonen, 2005). The other duplicated genes in the pathway are downstream of TP53; these genes duplicated either coincident with TP53, as in the case of SIAH1, or subsequently in Proboscidea, Elephantidae, or extant elephants (Figure 4). These genes are expressed in RNA-Seq data (Figure 4D), suggesting that they are functional. While transcript abundance estimates inferred from RNA-Seq data can suggest that genes are functional, recent non-functional duplicates can still be transcribed.
Therefore, we inferred whether each duplicate shown in Figure 4C,D encodes a putatively functional protein by manual curation, specifically checking for premature stop codons and overall sequence conservation. Most genes in Figure 4C,D, such as STK11, CD14, SOD1, and BRD7, were well conserved and lacked premature stop codons. We also found that the STK11, CD14, and BRD7 genes in the manatee were well conserved, suggesting that extant manatees may also have enhanced tumor suppression and an augmented stress response. However, some of the duplicate genes in the manatee genome have premature stop codons, suggesting they are not translated into functional proteins, including the additional copies of MAPRE1, BUB3, and COX20, as well as at least one of the duplicate copies of CNOT11, HMGB2, MAD2L1, LIF, and TP53. For TP53, we have previously shown that duplicate copies containing premature stop codons may still serve a functional role in regulating their progenitor's function. Thus, some of the genes with premature stop codons, such as the duplicate COX20 and MAD2L1, which are expressed in RNA-Seq data, may encode functional lncRNA transcripts or truncated proteins.

[Figure 4 caption: Duplication status could not be determined for some genes in Proboscideans because of assembly gaps in ancient genomes (indicated with skull and crossbones); these genes appear to be independently duplicated in extant species (African Forest, African Savannah, and Asian elephants) because they are missing from ancient genomes, biasing ancestral reconstructions of duplication status. (D) Gene expression levels of genes from panel C that have two or more expressed duplicates. Source data 1: Data set used for manual coding of gene potential associated with Figure 4C,D.]
Some copies, including those of CASP9 and PRDX1, contained partial RBHB hits with no premature stop codons; however, they also lacked the totality of the coding sequence and thus may represent cases of pseudogenization, subfunctionalization, or neofunctionalization.

Discussion

Among the evolutionary, developmental, and life history constraints on the evolution of large bodies and long lifespans is an increased risk of developing cancer. While body size and lifespan are correlated with cancer risk within species, there is no correlation between species, because large and long-lived organisms have evolved enhanced cancer suppression mechanisms. While this ultimate evolutionary explanation is straightforward (Peto, 2015), determining the mechanisms that underlie the evolution of enhanced cancer protection is challenging because many mechanisms with relatively small effects likely contribute to the evolution of reduced cancer risk.

Correlated evolution of large bodies and reduced cancer risk

The hundred- to hundred-million-fold reductions in intrinsic cancer risk associated with the evolution of large body sizes in some Afrotherian lineages, in particular Elephantimorphs such as elephants and mastodons, suggest that these lineages must have also evolved remarkable mechanisms to suppress cancer. While our initial hypothesis was that large-bodied lineages would be uniquely enriched in duplicate tumor suppressor genes compared to other smaller-bodied lineages, we unexpectedly found that the duplication of genes in tumor suppressor pathways occurred at various points throughout the evolution of Afrotheria, regardless of body size. These data suggest that this abundance of tumor suppressors may have contributed to the evolution of large bodies and reduced cancer risk, but that these processes were not necessarily coincident.
Interestingly, pervasive duplication of tumor suppressors may also have contributed to the repeated evolution of large bodies in hyraxes and sea cows, because at least some of the genetic changes that underlie the evolution of reduced cancer risk were common in this group. It remains to be determined whether our observation of pervasive duplication of tumor suppressors also holds in other multicellular lineages. Using a similar reciprocal best BLAST/BLAT approach focused on estimating the copy number of known tumor suppressors in mammalian genomes, for example, Caulin et al., 2015 found no correlation between the copy number of tumor suppressors and either body mass or longevity, whereas Tollis et al., 2020 found a correlation between copy number and longevity (but not body size) (Tollis et al., 2020; Caulin et al., 2015). These opposing conclusions may result from differences in the number of genes (81 vs 548) and genomes (8 vs 63) analyzed, highlighting the need for genome-wide analyses of many species that vary in body size and longevity.

There's no such thing as a free lunch: Trade-offs and constraints on tumor suppressor copy number

While we observed that duplication of genes in cancer-related pathways (including genes with known tumor suppressor functions) is pervasive in Afrotheria, the number of duplicate tumor suppressor genes was relatively small, which may reflect a trade-off between the protective effects of increased tumor suppressor number on cancer risk and the potentially deleterious consequences of increased tumor suppressor copy number.
Overexpression of TP53 in mice, for example, is protective against cancer but associated with progeria, premature reproductive senescence, and early death; however, transgenic mice with a duplication of the TP53 locus that includes its native regulatory elements are healthy and experience normal aging, while also demonstrating an enhanced response to cellular stress and lower rates of cancer (García-Cao et al., 2002; Tyner et al., 2002). These data suggest that duplication of tumor suppressors can contribute to augmented cancer resistance if the duplication includes sufficient regulatory architecture to direct spatially and temporally appropriate gene expression. Thus, it is interesting that duplication of genes that regulate TP53 function, such as STK11, SIAH1, and BRD7, preceded the retroduplication of TP53 in the Proboscidean stem-lineage, which may have mitigated toxicity arising from dosage imbalances. Similar co-duplication events may have alleviated the negative pleiotropy of tumor suppressor gene duplications, enabling their persistence and allowing for subsequent co-option during the evolution of cancer resistance.

Caveats and limitations

Our genome-wide results suggest that duplication of tumor suppressors is pervasive in Afrotherians and may have enabled the evolution of larger body sizes in multiple lineages by lowering intrinsic cancer risk either prior to or coincident with increasing body size. However, our study has several inherent limitations. For example, we have shown that genome quality plays an important role in our ability to identify duplicate genes, and several species have poor quality genomes (and thus were excluded from further analyses). While several efforts have been established with the goal of generating high-quality (chromosome-length) reference genomes for mammals, such as DNAZoo, The Zoonomia Project, the Vertebrate Genomes Project, and Genome 10K, Atlantogenatans represent a minority of available genome projects.
And while a few high-quality Atlantogenatan genomes are available, they lack reference gene and transcriptome annotations, as well as genome browser graphical user interfaces that allow easy access to genome data for the broader community, limiting their usefulness. Similarly, without comprehensive gene expression data we cannot be certain that duplicate genes are actually expressed, and thus functional. Our results on genome quality suggest several research priorities for these less well-studied species, including generating chromosome-length reference genomes and genome annotations, and incorporating these species into existing genome browsers (such as the UCSC Genome Browser). We also assume that gene duplicates either maintain ancestral tumor suppressor functions and increase cancer resistance through dosage effects, or provide redundancy against loss-of-function mutations, thereby increasing the robustness of tumor suppression. Many processes, such as developmental systems drift, neofunctionalization, and sub-functionalization, can cause divergence in gene functions and invalidate the assumption of conservation of gene function (Rastogi and Liberles, 2005; Qian and Zhang, 2014; Stoltzfus, 1999), leading to inaccurate inferences of gene and pathway functions, a common problem in comparative genomic studies that use pathway and gene ontologies to categorize gene function. In addition, we assume that most duplicate genes are functional, but it is likely that some of the duplicates we identified are non-functional pseudogenes. Differentiating between functional and non-functional genes using comparative genomics can be challenging. For example, non-functional pseudogenes often accumulate non-synonymous amino acid substitutions and premature stop codons, but these same changes can also occur in functional genes.
For example, we have found that the elephant genome encodes TP53 retrogenes (TP53RTGs), all of which contain premature stop codons suggesting they are pseudogenes; yet these TP53RTGs are expressed, encode functional separation-of-function mutants of the ancestral TP53 gene, and contribute to the enhanced DNA damage sensitivity of elephant cells. Similarly, we have characterized a duplicate LIF gene in elephants (LIF6) that lacks the start codon and exon 1 of the parent LIF gene. LIF6 is expressed, encodes a functional protein with translation initiated at an alternative downstream start site, and also contributes to the enhanced DNA damage sensitivity of elephant cells. In addition, duplicate genes that lack coding potential, such as PTENP1, can also be expressed and, while not translated, function as long intergenic non-coding RNAs (in this case acting as a sponge for microRNAs that target the parent PTEN transcript). In each case, classifying duplicates into putatively functional and non-functional categories based on sequence characteristics would misclassify TP53RTGs, LIF6, and PTENP1. Thus, duplicates with the sequence features of pseudogenes may nevertheless retain function; conversely, because we did not exclude putative pseudogenes, some of the genes we include in downstream analyses may be non-functional. Further experimental studies are needed to determine which duplicates are expressed and functional. The focus of this study, motivated by our previous identification of TP53 and LIF duplicates, was on the role that gene duplication in general may have played in the resolution of Peto's paradox in large-bodied Afrotherians, particularly Proboscidea. Duplication of tumor suppressor genes, however, is unlikely to be the sole mechanism responsible for the evolution of large body sizes, long lifespans, and reduced cancer risk. The evolution of regulatory elements, coding genes, genes with non-canonical tumor suppressor functions, and immune cell recognition of cancerous cells are also likely important for reducing the risk of cancer.
Conclusions: All Afrotherians are equal, but some are more equal than others

While we found that duplication of tumor suppressor genes is common in Afrotheria, genes that duplicated in the Proboscidean stem-lineage (Figure 3A,B) were uniquely enriched in functions and pathways that may be related to the evolution of unique anti-cancer cellular phenotypes in the elephant lineage (Figure 3C). Elephant cells, for example, cannot be experimentally immortalized (Fukuda et al., 2016; Gomes et al., 2011), rapidly repair DNA damage (Sulak et al., 2016; Hart and Setlow, 1974; Francis et al., 1981), are extremely resistant to oxidative stress (Gomes et al., 2011), and yet are also extremely sensitive to DNA damage (Abegglen et al., 2015; Sulak et al., 2016; Vazquez et al., 2018). Several pathways related to DNA damage repair, in particular nucleotide excision repair (NER), were uniquely enriched among genes that duplicated in the Proboscidean stem-lineage, suggesting a connection between duplication of genes involved in NER and rapid DNA damage repair (Hart and Setlow, 1974; Francis et al., 1981). Similarly, we identified a duplicate SOD1 gene in Proboscideans that may confer the resistance of elephant cells to oxidative stress (Gomes et al., 2011). Pathways related to the cell cycle were also enriched among genes that duplicated in Proboscideans, and cell cycle dynamics differ in elephants compared to other species: population doubling (PD) times for African and Asian elephant cells are 13-16 days, while PD times are 21-28 days in other Afrotherians (Gomes et al., 2011). Finally, the role of 'mTOR signaling' in the biology of aging is well known. Collectively, these data suggest that gene duplications in Proboscideans may underlie some of the cellular phenotypes that contribute to their cancer resistance.
Ancestral body size reconstruction

We first assembled a time-calibrated supertree of Eutherian mammals by combining the time-calibrated molecular phylogeny of Bininda-Emonds et al., 2007; Bininda-Emonds et al., 2008 with the time-calibrated total-evidence Afrotherian phylogeny from Puttick and Thomas, 2015. While the Bininda-Emonds et al. phylogeny includes 1679 species, only 34 are Afrotherian, and no fossil data are included. The inclusion of fossil data from extinct species is essential to ensure that ancestral state reconstructions of body mass are not biased by only including extant species; reconstructions can be inaccurate, for example, if lineages convergently evolved large body masses from a small-bodied ancestor. In contrast, the total-evidence Afrotherian phylogeny of Puttick and Thomas, 2015 includes 77 extant species and fossil data from 39 extinct species. Therefore, we replaced the Afrotherian clade in the Bininda-Emonds et al., 2008 phylogeny with the Afrotherian phylogeny of Puttick and Thomas, 2015 using Mesquite. Next, we jointly estimated rates of body mass evolution and reconstructed ancestral states using a generalization of the Brownian motion model that relaxes assumptions of neutrality and gradualism by considering increments to evolving characters to be drawn from a heavy-tailed stable distribution (the 'Stable Model'), implemented in StableTraits (Elliot and Mooers, 2014). The stable model allows for large jumps in traits and has previously been shown to outperform other models of body mass evolution, including standard Brownian motion models, Ornstein-Uhlenbeck models, early burst maximum likelihood models, and heterogeneous multi-rate models (Elliot and Mooers, 2014).

Reciprocal Best Hit BLAT

We developed a reciprocal best hit BLAT (RBHB) pipeline to identify putative homologs and estimate gene copy number across species.
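A minimal sketch of the reciprocal-best-hit idea, detailed step by step next. A shared k-mer count stands in for a real BLAT alignment score, and the genomes are toy dictionaries of id -> sequence; all names here are illustrative:

```python
def kmers(seq, k=4):
    """Set of overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def score(a, b, k=4):
    """Toy similarity score (shared k-mer count) standing in for a BLAT alignment score."""
    return len(kmers(a, k) & kmers(b, k))

def reciprocal_best_hits(gene_id, genome_a, genome_b):
    """Ids of sequences in genome_b whose own best hit back in genome_a is gene_id."""
    query = genome_a[gene_id]
    # Step 1: all candidate hits for the query in the target genome
    hits = [bid for bid in genome_b if score(query, genome_b[bid]) > 0]
    # Steps 2-3: keep a hit only if its best reciprocal match is the original gene
    return [bid for bid in hits
            if max(genome_a, key=lambda aid: score(genome_b[bid], genome_a[aid])) == gene_id]
```

The number of reciprocal best hits for a gene is its raw copy-number call, which the ECNC statistic below then corrects for fragmentation.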
The Reciprocal Best Hit (RBH) search strategy is conceptually straightforward: (1) given a gene of interest G_A in a query genome A, one searches a target genome B for all possible matches to G_A; (2) for each of these hits, one then performs the reciprocal search in the original query genome to identify the highest-scoring hit; (3) a hit in genome B is defined as a homolog of gene G_A if and only if the original gene G_A is the top reciprocal search hit in genome A. We selected BLAT (Kent, 2002) as our algorithm of choice, as it is sensitive to highly similar (>90% identity) sequences, thus identifying the highest-confidence homologs while minimizing many-to-one mapping problems when searching for multiple genes. RBH performs similarly to other, more complex methods of orthology prediction and is particularly good at identifying incomplete genes that may be fragmented in low-quality or poorly assembled regions of the genome (Altenhoff and Dessimoz, 2009; Salichos and Rokas, 2011).

Estimated copy number by coverage

In low-quality genomes, many genes are fragmented across multiple scaffolds, which leads BLA(S)T-like methods to call multiple hits when in reality there is only one gene. To compensate for this, we developed a novel statistic, Estimated Copy Number by Coverage (ECNC), which averages the number of times each nucleotide of a query sequence is hit in a target genome over the total number of query nucleotides found at least once in that genome (Figure 3-figure supplement 1). This allows us to correct for genes that have been fragmented across incomplete genomes, while accounting for sequences of the human query that are missing from the target genome. Mathematically, this can be written as

ECNC = (sum over n from 1 to l of C_n) / (sum over n from 1 to l of bool(C_n)),

where n is a given nucleotide in the query, l is the total length of the query, C_n is the number of reciprocal best hits covering n, and bool(C_n) is 1 if C_n > 0 and 0 if C_n = 0.
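The ECNC statistic is a direct computation once a per-nucleotide count of reciprocal-best-hit coverage over the query is available; a minimal sketch:

```python
def ecnc(coverage):
    """Estimated Copy Number by Coverage.

    `coverage[n]` is C_n, the number of reciprocal best hits covering query
    nucleotide n. The statistic averages total hits over only those query
    nucleotides found at least once in the target genome."""
    covered = [c for c in coverage if c > 0]
    return sum(covered) / len(covered) if covered else 0.0
```

A query whose first half is hit twice and second half once gives ECNC = 1.5, flagging a partial duplication, while a single gene fragmented across several scaffolds still yields ECNC near 1.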
RecSearch pipeline

We created a custom Python pipeline for automating RBHB searches between a single reference genome and multiple target genomes using a list of query sequences from the reference genome. For the query sequences in our search, we used the hg38 UniProt proteome (The UniProt Consortium, 2017), a comprehensive set of protein sequences curated from a combination of predicted and validated protein sequences generated by the UniProt Consortium. Next, we excluded genes from downstream analyses for which the assignment of homology was uncertain, including uncharacterized ORFs (991 genes), LOC genes (63 genes), HLA genes (402 genes), replication-dependent histones (72 genes), odorant receptors (499 genes), ribosomal proteins (410 genes), zinc finger transcription factors (1983 genes), viral and repetitive-element-associated proteins (82 genes), and 'Uncharacterized', 'Putative', or 'Fragment' proteins (30,724 genes), leaving a final set of 37,582 query protein isoforms, corresponding to 18,011 genes. We then searched for all copies of the 18,011 query genes in publicly available Afrotherian genomes (Dobson, 2013; Dudchenko et al., 2017; Palkopoulou et al., 2015; Palkopoulou et al., 2018; Foote et al., 2015). A summary of gene duplications in each species is available in Supplementary file 1.

Duplicate gene inclusion criteria

In order to condense transcript-level hits into single gene loci, and to resolve many-to-one genome mappings, we removed exons where transcripts from different genes overlapped, and merged overlapping transcripts of the same gene into a single gene locus call. The resulting gene-level copy number table was then combined with the maximum ECNC values observed for each gene in order to call gene duplications.
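The transcript-to-locus condensation and the duplication call (using the cut-offs stated below: copy number of two or more and maximum ECNC of 1.5 or greater) can be sketched as follows; the interval coordinates are illustrative:

```python
def merge_transcripts(intervals):
    """Merge overlapping (start, end) transcript hits of one gene into locus calls;
    the number of merged loci is the gene-level copy number."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous locus: extend it instead of opening a new one
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def is_duplicated(copy_number, max_ecnc, cn_min=2, ecnc_min=1.5):
    """Call a gene duplicated only when both cut-offs are passed."""
    return copy_number >= cn_min and max_ecnc >= ecnc_min
```

Requiring both conditions guards against fragmented single-copy genes (high raw hit count, low ECNC) being miscalled as duplicates.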
We called a gene duplicated if its copy number was two or more, and if the maximum ECNC value across all of the gene's transcripts was 1.5 or greater; previous studies have shown that incomplete duplications can encode functional genes (Sulak et al., 2016; Vazquez et al., 2018), so partial gene duplications were included provided they passed additional inclusion criteria (see below). The ECNC cut-off of 1.5 was selected empirically, as this value minimized the number of false positives seen in a test set of genes and genomes. The results of our initial search are summarized in Figure 3A. Overall, we identified 13,880 genes across all species, or 77.1% of our starting query genes.

Genome quality assessment using CEGMA

In order to determine the effect of genome quality on our results, we used the gVolante webserver and CEGMA to assess the quality and completeness of each genome (Nishimura et al., 2017; Parra et al., 2009). CEGMA was run using the default settings for mammals ('Cut-off length for sequence statistics and composition' = 1; 'CEGMA max intron length' = 100,000; 'CEGMA gene flanks' = 10,000; 'Selected reference gene set' = CVG). For each genome, we generated a correlation matrix using the aforementioned genome quality scores and either the mean copy number or the mean ECNC of all hits in the genome. We observed that the percentage of duplicated genes in non-Pseudoungulatan genomes was higher (12.94-23.66%) than in Pseudoungulatan genomes (3.26-7.80%). Mean copy number, mean ECNC, and mean CN (the lesser of copy number and ECNC per gene) correlated moderately or strongly with genome quality metrics, such as LD50, the number of scaffolds, and the number of contigs longer than either 100K or 1M (Figure 3-figure supplement 2). The Afrosoricidians showed the greatest correlation between poor genome quality and high gene duplication rates, including larger numbers of private duplications.
The correlations between genome quality metrics and the number of gene duplications were particularly high for the Cape golden mole (Chrysochloris asiatica: chrAsi1) and the Cape elephant shrew (Elephantulus edwardii: eleEdw1); therefore we excluded these species from downstream pathway enrichment analyses.

Determining functionality of duplicates via gene expression

In order to ascertain the functional status of duplicated genes, we generated de novo transcriptomes using publicly available RNA-sequencing data for the African savanna elephant, West Indian manatee, and nine-banded armadillo (Supplementary file 2). We mapped reads to the highest quality genome available for each species, and assembled transcripts using HISAT2 and StringTie (Kim et al., 2015; Pertea et al., 2015; Pertea et al., 2016). We found that many of our identified duplicates had transcripts mapping to them with a Transcripts Per Million (TPM) score above 2, suggesting that many of these duplications are functional. RNA-sequencing data were not available for the Cape golden mole, Cape elephant shrew, rock hyrax, aardvark, or lesser hedgehog tenrec.

Reconstruction of ancestral copy numbers

We encoded the copy number of each gene for each species as a discrete trait ranging from 0 (one gene copy) to 31 (for 32+ gene copies) and used IQ-TREE to select the best-fitting model of character evolution (Minh et al., 2020; Hoang et al., 2018; Kalyaanamoorthy et al., 2017; Wang et al., 2018; Schrempf et al., 2019), which was inferred to be a Jukes-Cantor type model for morphological data (MK) with equal character state frequencies (FQ) and rate heterogeneity across sites approximated by a class of invariable sites (I) plus a discrete Gamma model with four rate categories (G4).
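The copy-number-to-character encoding described above (state 0 for a single copy, capped at state 31 for 32 or more copies) can be sketched as a one-liner; the cap mirrors the 32-state alphabet handed to IQ-TREE:

```python
def encode_copy_number(copies):
    """Map a gene's copy number to a discrete character state:
    1 copy -> state 0, 2 copies -> state 1, ..., 32+ copies -> state 31."""
    if copies < 1:
        raise ValueError("a present gene has at least one copy")
    return min(copies - 1, 31)
```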
Next, we inferred gene duplication and loss events with the empirical Bayesian ancestral state reconstruction (ASR) method implemented in IQ-TREE (Minh et al., 2020; Hoang et al., 2018; Kalyaanamoorthy et al., 2017; Wang et al., 2018; Schrempf et al., 2019), the best-fitting model of character evolution (MK+FQ+I+G4) (Soubrier et al., 2012; Yang et al., 1995), and the unrooted species tree for Atlantogenata. We considered ancestral state reconstructions to be reliable if they had a Bayesian Posterior Probability (BPP) ≥ 0.80; less reliable reconstructions were excluded from pathway analyses. We note that there may be 'ghost' duplication events, that is, genes that duplicated in, for example, the Tethytherian stem-lineage, were maintained in the Steller's sea cow genome, and were lost in the manatee genome. Such genes will be reconstructed as Proboscidean-specific duplication events because we cannot determine copy number in extinct species that lack genomes.

Pathway enrichment analysis

To determine if gene duplications were enriched in particular biological pathways, we used the WEB-based Gene SeT AnaLysis Toolkit (WebGestalt) (Liao et al., 2019) to perform Over-Representation Analysis (ORA) using the Reactome database (Jassal et al., 2020). Gene duplicates in each lineage were used as the foreground gene set, and the initial query set was used as the background gene set. WebGestalt uses a hypergeometric test for statistical significance of pathway over-representation, which we refined using two methods: a False Discovery Rate (FDR)-based approach and an empirical p-value approach (Chen et al., 2013). The Benjamini-Hochberg FDR multiple-testing correction was generated by WebGestalt. In order to correct p-values based on an empirical distribution, we modified the approach used by Chen et al. in Enrichr (Chen et al., 2013) to generate a 'combined score' for each pathway based on the hypergeometric p-value from WebGestalt and a correction for the expected rank of each pathway.
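A sketch of such a combined score, assuming the rank statistics (a pathway's expected rank and its standard deviation over random runs) are supplied from a simulation table like the one the paper builds; the function signature is illustrative:

```python
import math

def combined_score(p_value, observed_rank, expected_rank, rank_sd):
    """Enrichr-style combined score c = log(p) * z: the hypergeometric p-value
    weighted by how unusually early the pathway ranks relative to random
    foreground sets."""
    z = (observed_rank - expected_rank) / rank_sd
    return math.log(p_value) * z
```

A pathway that is both individually significant (small p, so log(p) is strongly negative) and ranks much earlier than expected (negative z) receives a large positive score.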
In order to generate the table of expected ranks and variances for this approach, we randomly sampled foreground sets of 10-5000 genes from our background set 5000 times and used WebGestalt ORA to obtain a list of enriched terms and p-values for each run; we then compiled a table of Reactome terms with their expected frequencies and standard deviations. These data were used to calculate a z-score for terms in an ORA run, and the combined score was calculated using the formula c = log(p) · z.

Estimating the evolution of cancer risk

The dramatic increase in body mass and lifespan in some Afrotherian lineages, and the relatively constant rate of cancer across species of diverse body sizes (Abegglen et al., 2015), indicate that those lineages must have also evolved reduced cancer risk. To infer the magnitude of these reductions we estimated differences in intrinsic cancer risk across extant and ancestral Afrotherians. Following Peto, 2015, we estimate the intrinsic cancer risk (K) as the product of the risk associated with body mass and that associated with lifespan. In order to determine K across species and at ancestral nodes (see below), we first estimated ancestral lifespans at each node. We used Phylogenetic Generalized Least-Squares regression (PGLS) (Felsenstein, 1985; Martins and Hansen, 1997), with a Brownian covariance matrix as implemented in the R package ape (Paradis and Schliep, 2019), to estimate ancestral lifespans across Atlantogenata from our estimates of body size at each node. Using the lifespans inferred with PGLS and the fitted model (Supplementary file 3), we calculated K1 at all nodes, and then estimated the fold change in cancer susceptibility between ancestral and descendant nodes (Figure 2).
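Under Peto's multistage approximation used below (K ≈ D·t^6, with D the body mass term and t the lifespan), the fold-change calculation reduces to a few lines; the inputs here are illustrative, not the paper's PGLS estimates:

```python
import math

def intrinsic_cancer_risk(mass, lifespan):
    """Simplified multistage model: risk scales linearly with cell number (mass)
    and with the sixth power of lifespan."""
    return mass * lifespan ** 6

def log2_risk_change(anc_mass, anc_lifespan, desc_mass, desc_lifespan):
    """log2(K2/K1): change in intrinsic cancer risk from ancestor to descendant."""
    return math.log2(intrinsic_cancer_risk(desc_mass, desc_lifespan)
                     / intrinsic_cancer_risk(anc_mass, anc_lifespan))
```

Doubling body mass alone adds one unit of log2 risk, while doubling lifespan alone adds six, which is why lineages that became both large and long-lived must have offset enormous increases in intrinsic risk.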
To calculate K1 at each node, we used a simplified multistage cancer-risk model for body size D and lifespan t: K ≈ D·t^6 (Peto et al., 1975; Peto, 2015; Armitage, 1985; Armitage and Doll, 2004). The fold change in cancer risk between a node and its ancestor was then defined as log2(K2/K1).

Manual verification of duplicate genes

We manually verified the coding potential of the 16 genes shown in Figure 4 by first identifying the reciprocal best (DNA sequence) BLAT hits in the elephant and manatee genomes, which allowed us to determine conservation and the presence of premature stop codons in each open reading frame (ORF). We translated the ORF of each hit into an amino acid sequence and grouped the hits for each gene into one FASTA file along with the UniProt protein sequences of the human, dog, cat, and cow orthologs. Using a pipeline hosted at NGPhylogeny.fr (Lemoine et al., 2019), the homologs were aligned using MAFFT (Katoh and Standley, 2013); the aligned sequences were cleaned using BMGE (Criscuolo and Gribaldo, 2010). Finally, we used FastME (Lefort et al., 2015) to infer a gene tree for each duplicate. Alignments were then visually inspected for conservation and the presence of premature stop codons.

Additional files

Supplementary files:
- Source data 1. All necessary data sets and scripts to reproduce the results presented in this manuscript.
- Supplementary file 2. RNA-Seq data sets used in this study, along with key biological and genome information.
- Supplementary file 3. Summary of the PGLS model used to estimate lifespan.
- Transparent reporting form

Data availability

All data generated or analysed during this study are included in the manuscript and supporting files. The following previously published datasets were used:
Study of the magnetic state of K$_{0.8}$Fe$_{1.6}$Se$_2$ using the five-orbital Hubbard model in the Hartree-Fock approximation

Motivated by the recent discovery of Fe-based superconductors close to an antiferromagnetic insulator in the experimental phase diagram, here the five-orbital Hubbard model (without lattice distortions) is studied using the real-space Hartree-Fock approximation, employing a $10\times 10$ Fe cluster with Fe vacancies in a $\sqrt{5}\times\sqrt{5}$ pattern. Varying the Hubbard and Hund couplings, and at electronic density $n$=6.0, the phase diagram contains an insulating state with the same spin pattern as observed experimentally, involving 2$\times$2 ferromagnetic plaquettes coupled with one another antiferromagnetically. The presence of local ferromagnetic tendencies is in qualitative agreement with Lanczos results for the three-orbital model also reported here. The magnetic moment ~3$\mu_B$/Fe is in good agreement with experiments. Several other phases are also stabilized in the phase diagram, in agreement with recent calculations using phenomenological models.

Introduction. Among the most recent exciting developments in the field of Fe-based superconductors 1 is the discovery of superconductivity (SC) with Tc ~ 30 K in the heavily electron-doped 122 iron-chalcogenides K0.8Fe2-xSe2 and (Tl,K)Fe2-xSe2 compounds. 2 These materials contain ordered Fe vacancies in the FeSe layers, increasing the complexity of these systems. Recent neutron scattering results for the parent compound K0.8Fe1.6Se2, 3 with the Fe vacancies arranged in a √5×√5 pattern, revealed an unexpected magnetic and insulating state involving 2×2 Fe plaquettes that have their four Fe spins ferromagnetically ordered, and with these plaquettes coupled to each other antiferromagnetically. 4 The ordered magnetic moment is 3.31 µB/Fe, the largest among all Fe pnictide and chalcogenide superconductors, and the magnetic transition occurs at a high temperature TN ≈ 559 K.
Angle-resolved photoemission experiments for (Tl,K)Fe1.78Se2 have revealed a Fermi surface (FS) with only electron-like pockets at the (π, 0) and (0, π) points and a nodeless superconducting gap at those pockets.5 The superconducting phase in these compounds cannot be explained by the nesting between hole and electron pockets.5,6 Moreover, the resistivity of these materials displays a behavior corresponding to an insulator in a robust range of the Fe concentration x,7 suggesting that SC may arise from the doping of a Mott insulator, as in the cuprates. All these results certainly have challenged prevailing ideas for the origin of SC in these materials that were originally based on a nested FS picture and a metallic parent state. Several theoretical efforts have recently addressed the exotic magnetic state that appears in the presence of vacancies. Band structure calculations described this state as an antiferromagnetic insulator with a gap ∼ 0.4-0.6 eV.8,9 Several model Hamiltonian calculations have also been presented and, in particular, two recent publications are important to compare our results against. Yu et al.10 analyzed this problem using a phenomenological J1-J2 spin model (see also Ref. 8) with nearest-neighbor (NN) and next-NN superexchange couplings, studied via classical Monte Carlo. In this analysis the couplings inside the 2×2 plaquettes and those between plaquettes were allowed to be different, and also to take positive or negative values. Five antiferromagnetic phases, including the phase found experimentally3 in K0.8Fe1.6Se2, which was dubbed "AF1", were found varying the J1 and J2 couplings.11 From a different perspective that relies on a two-orbital (dxz and dyz) spin-fermion model for pnictides, and with tetramer lattice distortion incorporated, Yin et al.
12 studied the regime of electronic density n = 1 (one electron per Fe), where they also reported the presence of an AF1 state, found competing with a "C"-type state with wavevector (π, 0). In the present publication, a more fundamental five-orbital Hubbard model, without lattice distortions, is investigated. Our main result is that increasing the Hubbard coupling U and the Hund coupling J, a robust region of stability of the AF1 state is found. Our effort allows us to display the regions of dominance of the many competing states in terms of U and J/U, facilitating a discussion on possible phase transitions among these states by varying experimental parameters. A sketch of the AF1 state and its two main competitors, the C and AF4 states, is in Fig. 1. Our results agree qualitatively in several respects with the phenomenological studies of Refs. 10,12, particularly if a combination of results of these investigations is made. Finally, also note that a recent study13 of the three-orbital Hubbard model14 using mean-field techniques15 has also reported the existence of an AF1 state but with orbital order (OO). The relation with our results will also be discussed below. Models and methods. In this manuscript, the standard multiorbital Hubbard model will be used. This model has been extensively described in several previous publications, by our group and others. More specifically, the model used is the five-orbital Hubbard model defined explicitly in Ref. 15 with the hopping amplitudes introduced by Graser et al.16 By construction, this model has a FS that is in close agreement with band structure calculations and angle-resolved photoemission results for the pnictides without vacancies. The presence of the realistic AF1 state in our results, as shown below, suggests that the same set of hopping amplitudes can be used in a system with Fe vacancies. The electronic density will be n = 6.0, i.e.
6 electrons per Fe, for all the five-orbital model results presented below. The couplings are the on-site Hubbard repulsion U at the same orbital and the on-site Hund coupling J. The on-site inter-orbital repulsion U′ satisfies U′ = U − 2J. The computational method that is employed to extract information from this five-orbital model relies on the study of a 10×10 cluster, as sketched in Fig. 1(a), using periodic boundary conditions. In this cluster, several vacancies and 2×2 building blocks fit comfortably inside, giving us confidence that the main local tendencies to magnetic order are not dramatically affected by size effects. With regards to the actual many-body technique used to study the 10×10 cluster, here the real-space Hartree-Fock (HF) approximation was employed. The method is a straightforward generalization of that used recently by our group in Ref. 17 in the study of charge stripe tendencies for the two-orbital model. This HF real-space approach was preferred over a momentum-space procedure in order to allow the system to select spontaneously the state that minimizes the HF energy, at least for the finite cluster employed here. In practice, the many fermionic expectation values that appear in the HF formalism must be found iteratively by energy minimization. At the beginning of the iterative process, both random initial conditions as well as initial ordered states favoring the many phases that are anticipated to be in competition were employed. After each of the computer runs using different initial conditions reached convergence, at fixed U and J/U, a mere comparison of energies allowed us to find the ground state for those particular couplings. In our setup, typical running times for one set of couplings U-J/U required approximately 20 hours of CPU time to reach convergence.
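The self-consistency loop just described (diagonalize the mean-field Hamiltonian, update the local densities, and compare converged energies of runs started from different initial conditions) can be illustrated with a minimal sketch for the much simpler one-orbital Hubbard model on a small chain. This toy code is not the five-orbital calculation of the paper; the lattice size, couplings, and mixing parameter are illustrative assumptions.

```python
import numpy as np

def hf_hubbard_chain(L=8, t=1.0, U=4.0, n_elec=8, init="neel",
                     seed=0, mix=0.5, tol=1e-8, max_iter=2000):
    """Real-space Hartree-Fock for a one-orbital Hubbard chain with PBC.

    Mean-field decoupling of the interaction:
    U n_up n_dn -> U(<n_dn> n_up + <n_up> n_dn - <n_up><n_dn>).
    """
    # Tight-binding hopping matrix with periodic boundary conditions.
    H0 = np.zeros((L, L))
    for i in range(L):
        H0[i, (i + 1) % L] = H0[(i + 1) % L, i] = -t
    # Initial guesses for the local densities <n_up>, <n_dn>.
    if init == "random":
        rng = np.random.default_rng(seed)
        n_up, n_dn = rng.random(L), rng.random(L)
    else:  # staggered (Neel-like) ordered initial condition
        n_up = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(L)])
        n_dn = 1.0 - n_up
    N_up, N_dn = n_elec // 2, n_elec - n_elec // 2
    for _ in range(max_iter):
        # Diagonalize the HF Hamiltonian of each spin sector.
        e_up, v_up = np.linalg.eigh(H0 + U * np.diag(n_dn))
        e_dn, v_dn = np.linalg.eigh(H0 + U * np.diag(n_up))
        # Fill the lowest-energy single-particle states.
        new_up = (np.abs(v_up[:, :N_up]) ** 2).sum(axis=1)
        new_dn = (np.abs(v_dn[:, :N_dn]) ** 2).sum(axis=1)
        if max(np.abs(new_up - n_up).max(),
               np.abs(new_dn - n_dn).max()) < tol:
            n_up, n_dn = new_up, new_dn
            break
        # Linear mixing of old and new densities for stability.
        n_up = (1 - mix) * n_up + mix * new_up
        n_dn = (1 - mix) * n_dn + mix * new_dn
    # HF total energy with the standard double-counting correction.
    energy = e_up[:N_up].sum() + e_dn[:N_dn].sum() - U * np.dot(n_up, n_dn)
    return energy, n_up - n_dn  # energy and local magnetization profile

# As in the text: converge runs started from different initial conditions
# and keep the one with the lowest HF energy.
e_neel, m_neel = hf_hubbard_chain(init="neel")
e_rand, m_rand = hf_hubbard_chain(init="random")
e_ground = min(e_neel, e_rand)
```

At half filling and this U/t the lowest-energy self-consistent solution is the staggered antiferromagnet, so the ordered start converges to (at least) as low an energy as the random start, mirroring the energy-comparison procedure used in the paper.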
18 Dozens of computer cluster nodes have been used to complete our analysis in a parallel manner. Results. The main results arising from the computational minimization process just described are summarized in the phase diagram shown in Fig. 2. Since the hopping parameters of Ref. 16 are already in eV units, our Hubbard coupling U is also displayed in the same units. The notation for the many competing phases used here is that of Refs. 8,10,12 to facilitate comparisons. The main result of the present work is that our phase diagram displays a robust region where the magnetic order unveiled by neutron diffraction,3 see Fig. 1(a), is found to be stable. The ratio J/U needed for the AF1 phase to be the ground state is in good agreement with previous estimations for the same model, although obtained in the absence of vacancies, based on the comparison of Hubbard model results against neutron and photoemission data.15 The ratio J/U is surprisingly similar between the pnictides and the chalcogenides. With regards to the actual value of U in eV, the range unveiled in previous investigations that focused on the "1111" and "122" families of pnictides was approximately 1.5 eV (see Fig. 13 of Ref.
15). The increase to 2.5 eV in the present investigation is not surprising in view of the more insulating characteristics of materials such as K0.8Fe1.6Se2, and suggests that merely adding vacancies to the intermediate-U state of the pnictides (without vacancies) is not sufficient to stabilize the AF1 state, but an increase in U is also needed. Finally, with regards to OO, none is observed in the AF1 state in the range of U shown in Fig. 2, i.e. for U ≤ 6 eV. In this range, the electronic density of all the orbitals (dxz and dyz in particular) is independent of the site location in the cluster analyzed. However, upon further increasing U to 8 eV and beyond, the same OO pattern found in the three-orbital model13 appears in our calculations (not shown explicitly), with the populations of the dxz and dyz orbitals now being different at all sites. It seems that with five orbitals the AF1 state manifests itself both with and without OO, depending on U, while for three orbitals the intermediate phase with AF1 magnetic order and without OO is not present. Together with the realistic AF1 phase, Fig. 2 reveals several other states, and two of them are prominent. Keeping the ratio J/U constant but reducing U, the previously described C-type state (Fig. 1(c)) was found to be stable. This is reasonable since without Fe vacancies this state is the dominant spin order in the intermediate range of couplings, where the ground state is both metallic and magnetic.15 In K0.8Fe1.6Se2, as the bandwidth is increased by, e.g., increasing the pressure, a transition from the AF1 to the C-state could be experimentally observed. In these regards, our conclusions agree with Ref.
12 that the C-state is the main competitor of the AF1 state. However, note that other states reported in Ref. 10 are also present in our phase diagram. For instance, the AF4 state (Fig. 1(b)) is stable in a large region of parameter space at small values of J/U. Thus, overall our results support a combination of the main conclusions of Refs. 10,12. The density-of-states (DOS) for the AF1 phase is shown in Fig. 3 for representative couplings. The presence of a gap at the chemical potential indicates an insulating state, in agreement with experiments.3 This is not surprising considering that the transport of charge from each 2×2 building block to a NN block may be suppressed due to the effective antiferromagnetic coupling between blocks, at least at large U and J. In other words, using a tilted square lattice made out of 2×2 superspin blocks, the state is actually a staggered antiferromagnet that is known to have low conductance. On the other hand, it is interesting to observe that the AF1 gap is only weakly dependent on U, suggesting that not only the increase in U is responsible for the insulating behavior but there must be other geometrical reasons that may contribute to the gap through quantum interference. This is reminiscent of results reported years ago for the insulating CE phase of half-doped manganites, a state that is stabilized in the phase diagram even in the absence of electron-phonon coupling due to the peculiar geometry of the zigzag chains involved in the CE state and the multi-orbital nature of the problem, which induces a band-insulating behavior.21 Thus, in agreement with recent independent observations,12 our results suggest that the insulator stabilized in the presence of Fe vacancies may have a dual Mott and band-insulating character. Note also that the competing C-state only has a pseudogap (Fig. 3), and thus it may be a bad metal.
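A broadened DOS of the kind shown in Fig. 3 can be obtained from a set of converged single-particle HF energies by summing Lorentzians centered at each eigenvalue. The sketch below uses an invented two-band toy spectrum with a gap; the broadening η, grid, and levels are illustrative assumptions, not the paper's data.

```python
import numpy as np

def lorentzian_dos(eigenvalues, omegas, eta=0.05):
    """N(w) = (1/pi) * sum_k eta / ((w - e_k)^2 + eta^2)."""
    diff = omegas[:, None] - np.asarray(eigenvalues)[None, :]
    return (eta / np.pi / (diff ** 2 + eta ** 2)).sum(axis=1)

# Toy two-band spectrum with a gap around the chemical potential (w = 0).
levels = np.concatenate([np.linspace(-2.0, -0.5, 40),
                         np.linspace(0.5, 2.0, 40)])
w = np.linspace(-10.0, 10.0, 4001)
dos = lorentzian_dos(levels, w)
```

Since each Lorentzian integrates to one, the DOS integrates to the number of levels, and a suppressed weight at w = 0 relative to the band region signals a gap (or, for a partial suppression, a pseudogap).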
With regards to the strength of the FM tendencies in each of the 2×2 building blocks of the AF1 state, examples of the values of the magnetic moment m (in Bohr magnetons, assuming g = 2, and at J/U = 0.25) are m = 3.87 (U = 3.0), m = 3.93 (U = 4.0), and m = 3.95 (U =…), in good agreement with the neutron diffraction result,3 m = 3.3. Thus, the Fe spins in the AF1 superblocks are near the saturation value 4.0 µB at n = 6.0. Note that the competing C-phase also has a surprisingly large moment, m = 3.5, at U = 2.0 and J/U = 0.25. Results for the three-orbital Hubbard model. The results reported thus far have been obtained under the HF approximation. Better unbiased approximations for this model are not currently available. However, at least consistency checks of the present results can be carried out using the Lanczos technique restricted to the 2×2 cluster of irons that forms the AF1 state. For our problem, an additional simplification from five to three orbitals (dxz, dyz, and dxy) is needed to reduce the Hilbert space to a reasonable size; thus, here the model introduced by Daghofer et al.14 was used. The present Lanczos study is equivalent to a 12-site one-orbital Hubbard model, which can be done comfortably with present-day computers even with the open boundary conditions (OBC).

FIG. 1: (Color online) (a) Sketch of the AF1 state found to be stable in a region of the U-J/U phase diagram (see Fig. 2) in our HF approximation to the five-orbital Hubbard model, in agreement with neutron diffraction.3 (b) A competing state dubbed AF4 (stable at smaller J/U's in Fig. 2). (c) The C competing state. For (b) and (c), a subset of the 10×10 cluster used is shown.
FIG. 2: (Color online) Phase diagram of the five-orbital Hubbard model with √5×√5 Fe vacancies studied via the real-space HF approximation to a 10×10 cluster, employing the procedure for convergence described in the text. With increasing U, clear tendencies toward magnetic states are developed. The realistic AF1 state found in neutron scattering experiments3 appears here above J/U = 0.15 and for U larger than 2.5 eV. The notation for the most important states is explained in Fig. 1 and for the rest in Refs. 8,10,12. The region with low-intensity yellow circles at small U is non-magnetic.19

FIG. 3: (Color online) Density of states of the AF1 and C phases sketched in Figs. 1(a,c), at the U's indicated, J/U = 0.25, and using a 10×10 cluster. The gap at the chemical potential suggests that the AF1 state (U = 3 and 5) is an insulator, although with a mild U dependence in the value of this gap. On the other hand, the C state appears to have only a pseudogap at the Fermi level.20
Linking Competitive Strategies with Human Resource Information System: A Comparative Analysis of Bangladeshi Organization Understanding how human resource information system (HRIS) is linked with competitive strategies (CSs) has become an important research topic in the field of strategic human resource management (SHRM) and information systems (IS). This study intends to find a relationship between HRIS and CSs and the resulting competitive advantages gained from the relationship that impact the organization's overall performance. A semi-structured questionnaire survey based on the face-to-face interview method was conducted among human resource (HR) executives of selected Bangladeshi business organizations to collect data and find results. The results show that HRIS implementation has a significant influence on CSs. Again, HRIS contributes to leveraging benefits from these strategies. The statistical findings reveal that HRIS pay-off (36%) is positively correlated (37%) with CSs to a lower-medium extent, but this correlation insignificantly affects business performance in this horizon. Finally, a framework is developed, showing how to leverage HRIS pay-off based on the findings and the literature. BACKGROUND The organization is a cohesive organism that learns to adopt or realize better ways of doing things, primarily in response to its setting. The question, then, is what the organization ought to do to keep up or to optimize its situation. Ought it to become specialized in its financial condition, information systems (IS), or human resources (HR)? To answer these questions, we must first see what different researchers concluded. Coff (1994) argued that HR is a vital source of sustainable competitive advantage (CA), attributable to causal ambiguity and systemic information, which make it irreproducible.
Ben Moussa and El Arbi (2020) mentioned that if management trusts their staff, provides them with challenging assignments, and engages them affectively, they reciprocally respond with high motivation, high commitment, high creativity, and high performance. What does that mean to us? It means that the sources of CA leveraged from making sustainable competitive strategies (CSs) have shifted from financial resources to technology resources and later to human resources. In other words, success does not rely totally on the scale of the budget or the technologies supporting the merchandise. It mostly depends on employees' attitudes, competencies, and skills; their ability to earn commitment and trust, communicate aspirations, and build advanced relationships. IS is strategically valuable for recognizing strategic intent (Arvidsson et al., 2014). We all recognize that one of the prominent sources of CA is HR; then what will we need to do to attain CA through them? The solution lies in CSs and HR practices shifted to the human resource information system (HRIS). HRIS is a collection of computer hardware, software, and databases to record, store, manage, manipulate, and retrieve data as and when necessary for the HR functions (Irum & Yadav, 2019). By working in new ways over extensive integrated systems and by adopting new practices, human resource management (HRM) operations can be considered forward-thinking and cutting-edge (Barrett & Oborn, 2013). According to Robert Kaplan and Marvin Bower, the HR scorecard demonstrates how improved measurements play a vital role in linking HR initiatives to business strategies and to significant increases in shareholder value (as cited in Becker et al., 2001). From an IS perspective, organizations striving to leverage a strategic alignment between information technology (IT) and business areas often underestimate the role of HRM in creating business value (Oehlhorn et al., 2020).
RATIONALE There is an extreme need for organizations to manage changes during the technological age to compete and maintain their interests. Strategic human resource management (SHRM) is the crucial factor in efficiently managing these changes (Bhattacharyya & Atre, 2020; Kovach et al., 2002; Noutsa Fobang et al., 2019). On the other hand, meaningful use of data is fundamental to tackling workplace reshaping due to technological advancements (Kovach et al., 2002). Boateng (2007) stated that IT drives HR transformation from HRM into SHRM. This strategic role mainly enhances HR competencies and thus marks the success of HR professionals and practitioners. It is obvious that the correct management of organizational personnel using IS is essential for achieving efficiency and effectiveness in day-to-day operations. If personnel are appropriately managed and organized, overall performance will improve markedly, helping accomplish short-term and long-term goals; otherwise, performance will be poor. Without technologically knowledgeable personnel, such as analysts, these systems cannot produce fruitful results in gaining CA. Before developing a linkage between CSs and HRIS practices, there should be a principle. This principle provides a basis for predicting, studying, refining, and modifying every strategy and approach in specific circumstances. Mahmood and Nurul Absar (2015) revealed some changes, or the evolution, of new HRM practices in Bangladesh's private and public sector organizations. Kovach et al. (2002) stated, "this evolution has resulted in firms being able to leverage HRIS for administrative and strategic competitive advantage" (p. 44). Iqbal et al. (2018) showed that practices of electronic HRM (E-HRM) significantly influence labor productivity.
Despite browsing many studies directly concerning HRIS and the appropriate use of HR within the context of organizations in Bangladesh, research in this realm is scarce. Most HRM studies in Bangladesh have tried to show how HRM can improve the organization's performance and employee satisfaction. The role of IS in the organization's HR management and development is highly neglected in Bangladesh. On the other hand, a proper linkage of CSs with HRIS is not seen. Grant and Newell (2013) cited in their study that "the effective management of human resources can make significant contributions to organizational performance (OP) and that human-resource-related issues are central to the creation of sustainable competitive advantage" (p. 187). Contemporary organizational developments, such as the growth of knowledge-based and networked organizations, suggest that the strategic importance of HR-related issues becomes more significant (Lawler III et al., 2004). Thus, conducting a study linking CSs with HRIS is vital to HR personnel, IS specialists, academicians, and other relevant stakeholders. Though Rahman et al. (2018) conducted a study on E-HRM implementation in government organizations in Bangladesh, no similar study linking CSs with HRIS is found concerning this aspect in the Bangladeshi context. Therefore, conducting this study is justified. OBJECTIVE AND ORGANIZATION Primarily, the current study aims to find out and show the linkage of HRIS with the organization's CSs. It further reveals the strategic importance of HRIS in HR development, especially for Bangladeshi organizations. Again, this study attempts to answer the following questions:
RQ-1: Do CSs have an impact on an organization's performance?
RQ-2: Does the organization's compatibility have a significant impact on business growth?
RQ-3: Does HRIS contribute to an organization's financial performance?
RQ-4: Is HRIS linked positively or negatively with the organization's CSs?
RQ-5: Is there any supported relationship between business growth and HRIS pay-off?
The organization of the study follows a sequential order for showing the linkage and the impact of HRIS on an organization's CSs. The previous parts of the study cover the background, the rationale, and the objective and research questions. The forthcoming part discusses the methodology followed in this study. Then the fifth part covers the literature review. The sixth part shows the survey results describing interviewee information, CS pattern, CA, HRIS impact, and respondents' opinions, followed by the findings and evaluation in the seventh part. Next, leveraging HRIS pay-off for CA is described through a model following the results outlined. The final part is the summary discussion and conclusion, involving the recommendations, implications, limitations, and plan for future research. METHODOLOGY This study is conducted based on both primary and secondary data. A questionnaire survey was adopted to generate the primary data, which is a direct research design. The researchers designed a semi-structured questionnaire and used it to collect the primary data. After the interviewee's information, the questionnaire includes four parts: CS pattern, CA, HRIS, and open-ended questions. More than 30 questions are used in the questionnaire. The researchers then gathered the data by having the questionnaire filled in by HR executives of selected organizations in Bangladesh and got insights through a face-to-face interview technique. The researchers used both qualitative and quantitative data, also known as a mixed approach, and adopted a descriptive method for data analysis. The researchers analyzed the feedback received from eight organizations.
They visited thirteen organizations, of which eight responded, so the overall response rate was about 62 percent. The collected data were organized and tabulated in such a way that insights could be found. In preparing the tabulation, statistical tools such as the mean, standard deviation, regression, and ANOVA were applied to test the hypotheses. Secondary sources of data were also considered. The secondary data sources were various articles from related journals, books, book chapters, conference papers, theses, organizational reports, online sources, and some empirical judgments and assumptions. The researchers used the SPSS computer package (version 20.0) and Microsoft Excel 2010 to analyze the collected data. Finally, based on the findings and an analysis of the existing literature, a framework is developed. The study is conducted through a systematic process. The procedure is shown in figure 1. LITERATURE SYNTHESIS Many studies on IS, HRIS, SHRM, HRM practices, CSs, and OP are found in the existing literature. Some literature relevant to this study is mentioned in this part, where the researchers outline different definitions of terms associated with this study and the results found in previous studies. Strategy involves a company's competitive moves and business approaches that its managers employ to enhance business growth, attract and satisfy customers, tackle competitors successfully, conduct operations, and attain targeted performance (Thompson et al., 2008). Smith and Kelly (1997) believed that future economic and strategic advantage depends upon the organizations that effectively attract, develop, and retain various clusters of the most influential and brightest human talent within the marketplace. CSs are a series of decisions that provide a business with a CA over its rivals (Schuler & Jackson, 1987).
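The statistical treatment mentioned in the Methodology (mean, standard deviation, correlation, simple regression, and ANOVA) can be sketched on hypothetical survey scores. The numbers below are illustrative only, not the study's data; the group split used for the ANOVA is likewise an invented example.

```python
import numpy as np

# Hypothetical Likert-style scores from eight organizations (illustrative only).
cs_score = np.array([3.0, 4.0, 5.0, 2.0, 4.0, 3.0, 5.0, 4.0])  # CS score
payoff = np.array([1.6, 1.9, 2.5, 1.2, 1.8, 1.5, 2.6, 2.0])    # HRIS pay-off

mean, sd = payoff.mean(), payoff.std(ddof=1)        # descriptive statistics
r = np.corrcoef(cs_score, payoff)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(cs_score, payoff, 1)  # simple linear regression

def one_way_anova(*groups):
    """F-statistic of a one-way ANOVA across several groups of scores."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_between, df_within = len(groups) - 1, len(all_x) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Compare pay-off between two hypothetical groups of organizations.
F = one_way_anova(payoff[:4], payoff[4:])
```

With these made-up scores the correlation is strong while the between-group F-statistic is small, the same qualitative pattern as the study's finding of a positive HRIS-CS correlation whose effect on performance is statistically insignificant.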
A business organization attains sustainable CA when considerable numbers of buyers prefer its product offering over the competitors' and when the basis for this preference is durable (Thompson, 2008). Again, Schuler and Jackson (1987) recognized CSs for cost reduction, innovation, and quality improvement. They also initiated various sorts of employee behavior and strategies of HRM for every CS. Porter's (1985) classification of generic CSs introduced the concept for the first time. He argued that an organization might achieve superior performance in a competitive industry by pursuing a generic strategy of overall cost leadership, differentiation, or a focus approach to industry competition. Irene and Frank (2004) argued that the strategic value of skillful, enthusiastic, and adaptable personnel is enhanced through competitive pressures from the environment. Boxall (1998) showed how HR strategy builds and defends superior CA and concluded that HR yields this advantage. Effective management of HR is thus the prime source of CA. Progressively, the delivery, support, and management of HR depend upon technology, specifically HRIS (Johnson & Gueutal, 2010). Beyond understanding business requirements, HR professionals need to raise their strategic value and the importance of HRM practices by improving their competencies in the three areas of designing the organization, managing change, and measuring performance (Boudreau & Ramstad, 2005; Cascio, Kates, 2006; Lawler III et al., 2004; Ulrich & Beatty, 2001). All those responsible for strategy formulation, implementation, and analysis, including HR professionals and HRIS analysts, ought to play their roles together. HRIS analysts are IT experts in the field of HRM. Organizations in developed countries adopt HRIS as an essential factor for achieving strategic purposes (Noutsa et al., 2017), though developing countries are also in the race.
Through the increasing use of computerized systems within the industrial operations of developed economies, organizations' HR functions became heavily exposed to IS, especially throughout the 1980s. HRIS is primarily seen as a subfunction of management information systems (MIS) that supports an organization's HR functions. The organization's success can mostly rely on the coordinated, strategic management and integration of its HR and IT (Kovach et al., 2002). Achieving this strategic coordination needs those who are responsible for developing, implementing, operating, and maintaining an HRIS to possess extensive knowledge of the organization's HR programs, the link between HR programs and strategic planning, and also the potential of information and communication technology (ICT) (Rampton et al., 1999). HRM is the means to grow the functional activities that directly contribute to profitability and determine success in achieving CA. HRIS plays a crucial role in supporting the whole operations of the HR division. HRIS is not just computer hardware and HR-related software. Although HRIS includes hardware and software, it also includes people, organizations, policies, procedures, and data (Kavanagh et al., 2014) to acquire, store, manipulate, analyze, retrieve, and distribute human information resources. These are not merely technology but the art of managing humans and human resources. HRIS is the database, software, and computer systems that organizations use to take care of their HR in payroll, time off, worker records, benefits management, and more. These systems sit at the intersection of HR and IT through HR software (Rietsema, 2021). Celia et al. (1995) defined HRIS as an implemented system that acquires data and generates information about the personnel to assist in planning, forecasting, developing, and controlling them.
Several recent studies identified that while the HR function has become more strategic in its orientation, it is not yet a full strategic partner in many organizations (Dye, 2006; Lawler & Boudreau, 2009; Weis & Finn, 2005). Whereas investments in IT are still made for both efficiency and effectiveness, the strategic information system (SIS) era is premised on management proactively seeking out opportunities for CA through IT, with approaches to IS strategy formulation accommodating the necessity both for alignment of IS/IT investments with company strategy and for assessing the disruptive impact of technology, as well as the choices for its use in shaping business strategy (Peppard & Ward, 2004). Using HRIS aims to provide accurate information for making HR-related decisions and to reduce HR executives' manual work. HRIS has the potential to transform HR into a more efficient and strategic function by allowing it to move beyond simple administrative activities to strategic applications. During HRIS implementation, when converting a manual system to an automated one or switching from an old system to a new one, it is unfortunately easy for data integrity to be compromised (Rietsema, 2021). HRIS mainly supplements HR functions from HR planning to performance management (Irum & Yadav, 2019). There is a relationship between HRIS functions and HRM functionalities. More specifically, performance development, knowledge management (KM), and records and compliance, as dimensions of HRIS, have a relationship with HR functionalities and an effect on them (Obeidat, 2012). According to Karake (1995), the strategic posture of an organization leads to a proactive attitude among its managers, who seek to satisfy stakeholders' interests, thus enabling the organization to achieve CA and more business opportunities.
Managers have a relatively positive view of the impact of the HRIS on organizational effectiveness, with the greatest degree of confidence placed on the impact of HRIS on time management and HR functions. The results confirm that a well-implemented and managed HRIS enables readily available information to be translated into more information sharing, knowledge transfer, and management. Consequently, the HRIS can enhance the speed and quality of decision-making and realize the HR strategy, thereby enhancing organizational effectiveness (Kumar & Parumasur, 2013). The necessity to integrate HRM with IS has become a sine qua non as modern organizations realize that their people and information resources are part and parcel of their survival. That is why HRIS is used extensively in all organizations irrespective of size, tenure of establishment, and operational complexity (Bhuiyan & Rahman, 2014). HR professionals consider HRIS to support strategic HR tasks and perceive it as an enabling technology. Large organizations are most likely to experience extensive HRIS usage in support of strategic HR tasks. Again, there was an insignificant difference, in proportion to the organization's size, regarding HRIS usage for committed management support and support for building trade union relations (Sadiq et al., 2012). Robin (1992) reported that HR personnel gave much attention to personnel strategy development and policies to promote organizational goals. Nevertheless, HRIS facilitates strategic value by designing and implementing internally consistent policies and practices that confirm HR's contribution to accomplishing business goals (Troshani et al., 2011). Later, it was found that, through strategic integration, HRIS leads to improved managerial performance and changes how organizations are managed (Katou & Budhwar, 2006; Pablos, 2004; Troshani et al., 2011).
Thus, organizations can derive value through HRIS tools that assist with decision-making regarding essential HR functions (Farndale et al., 2010; Troshani et al., 2011). As a result, organizational effectiveness is triggered by the strength of an HRIS that can account for individual personnel behaviors (Bowen & Ostroff, 2004). Wynen and Kleizen (2017) studied US public sector organizations and found a negative linear relationship between employee turnover and OP. The study of García-Sánchez et al. (2015) revealed that HRIS has a strong relationship with and positive impact on varied SHRM decisions, while OP depends on these decisions. The empirical results of Awan and Sarwar (2015) showed that HRIS and SHRM play a vital role in increasing the performance of banks. Different SHRM activities in banks, such as business process re-engineering, healthy union relations, management-worker relations, training and development, and decision-making processes, have a strong relationship with HRIS. HRIS brings additional advantages to bankers and gives organizations a new, more elevated profile. It has become more vital to SHRM tasks. Celia et al. (1995) researched the link between HRIS and an organization's culture. They found that the higher the cultural relevance, the riskier the implementation because of the increased potential for cultural incompatibility. Oghojafor et al. (2014) found no significant relationship between competitive strategy and OP for the differentiation approach, whereas a highly significant effect was found for other strategy dimensions. A study on HRIS adoption among 500 listed Singaporean organizations was conducted by Teo et al. (2001). It shows that most adopting organizations (60.3%) use HRIS for traditional purposes, and 7.9% use it for strategic objectives.
Another similar study, conducted by Lin (1997) on the adoption of HRIS in Taiwan, found that across the different levels of HRIS, MIS is the area most demanded by Taiwanese organizations for HRIS implementation (Andersen, 2001). Arifin and Tajudeen (2020) examined the effect of the human resource management information system (HRMIS) on Malaysian armed forces (MAF) personnel. They identified seven factors that affect MAF personnel's use: the quality of the system, information, service, ICT infrastructure, security, commander support, and training. Additionally, García-Sánchez et al. (2015) identified the KM process as a mediator between top management support for ICTs and OP. If HRIS is evaluated positively, higher job satisfaction and lower turnover intention can be found (Maier et al., 2013). Maamari and Osta (2021) showed that successful implementation of HRIS strongly influences employees' job satisfaction. L'Écuyer et al. (2019) confirmed that the HRIS capabilities of small- and medium-sized enterprises (SMEs) affect HR performance by aligning strategies with the high-performance work system (HPWS). Mazhar et al. (2020) found that HPWS practices in various commercial banks in Pakistan influence OP positively. Ben Moussa and El Arbi (2020) studied the HR departments of Tunisian companies to show the impact of HRIS. They found that the more individuals become engaged in organizational activities, the more positive and effective the outcomes generated in terms of their innovation capability. Mulat (2015) conducted a related study in which 31% of participants responded unfavorably to the statement that their organizations' HRIS eliminates unsuitable candidates early and emphasizes promising ones, while 46.1% of respondents believed that their HRIS performs comprehensive tracking and reporting of candidates efficiently.
Nearly 50% of participants took a neutral position on whether their HRIS leverages employees' talent in the right place and at the right time. 35.4% of executives believed that their HRIS recruiting scheme was well utilized and met their expectations, while 37% disagreed. Davarpanah and Mohamed (2020) found that user-perceived benefits from HRIS arise in situations of user satisfaction and situational normality. From the discussion, it is evident that HRIS has a pay-off; however, this pay-off differs across sectors. Again, CSs and effective HRM are crucial for organizational success. The existing literature shows that the application of HRIS in different contexts is one of the most researched issues across countries. All studies convey the message that the central role of HRIS is the efficient management of an organization's HR functions. Yet no studies were found that explicitly link HRIS with CSs and show their impact on the performance of Bangladeshi organizations. Based on the research questions outlined in this study and the literature review, the following five hypotheses are formulated.

H1: CS has a positive impact on an organization's performance.
H2: Organization compatibility has a significant impact on business growth.
H3: There is a significantly positive correlation between an organization's financial performance and HRIS contribution.
H4: HRIS has a positive impact on business growth.
H5: There is a significant relationship between HRIS contribution and business growth.

SITE VISIT SURVEY RESULTS

The researchers obtained respondents' information through face-to-face site visit interviews. The analysis of the collected data is discussed in this section.

Interviewee Information

The researchers interviewed eight organizations' HR and/or IT managers who deal with HRIS-related jobs.
Apart from one system analyst, almost all interviewees held senior executive or executive positions in the HR divisions of their respective organizations. They hold various academic degrees, mostly with a major in HRM or IT, and most have an HRM career background. However, except in a few organizations, people in Bangladesh can enter the human resource development (HRD) discipline from any educational background. Most of the respondents had no specific career preference, and even those who aimed at a particular career path eventually ended up elsewhere. The interviewees had been serving their organizations for between 2 and 28 years. Most respondents claimed that a person with little or no IT knowledge can cope with the IT-enabled system; on the other hand, retaining IT-knowledgeable people is tricky. Again, some executives think the work environment is challenging. Recognition for their work and the high esteem of the field are the motivations for working in the HR division; most believe there is no other aspect that makes it especially rewarding. Some respondents were reluctant to share their contact details in this survey, while others were willing to do so.

Competitive Strategy Pattern

In the telecommunications sector, T1 emphasizes expertise and distinct resource strengths that rivals cannot imitate, while T2 aims to provide its products at low cost, offer differentiation-based products, exercise a market niche, and practice knowledge and resource strengths. For hospitals, the only interviewed organization, H1, hesitated to disclose its CS; thus, the researchers could not determine that organization's specific CS. In the banking sector, B1 and B2 aim to provide their products at low cost, offer differentiation-based products, exercise a market niche, and practice expertise and resource strengths. B3 strives to follow the industry's low-cost provider strategy.
B4 follows a strategy of creating a differentiation-based advantage and developing expertise and resource strengths.

Competitive Advantage

The researchers assessed the overall customer satisfaction profile on 6 different bases (product quality, reliability, fulfillment of customers' needs, overall satisfaction, continuous purchasing intention, and recommendation to other customers) on a 7-point Likert scale, where 1 is the lowest and 7 the highest score. This measures how satisfied customers are with the services provided by the organizations in each sector. T2's customer satisfaction level is comparatively better than T1's. H1 scores a very satisfactory remark; B2 is relatively better than B1; B3's score could not be measured due to insufficient data; and B4 obtains the highest score (strongly satisfactory) in its sector. C1's score is found to be very satisfactory. The bases for customer satisfaction range from satisfactory to very satisfactory, and the cross-sectional overall scores across sectors are very satisfactory. Regarding the customer satisfaction ratings of the surveyed organizations, 50% of customers are very much satisfied, 25% are satisfied, 12.5% are strongly satisfied, and 12.5% are dissatisfied with their respective organization's services. The CS of T1 is found to be weak; the CSs of T2, B1, B2, B4, and C1 are very strong; H1's is weak; and B3's is strong. The scores and remarks are based on the overall CS scores leveraging the low-cost provider, differentiation, focused low-cost, and focused differentiation strategies. The average CS score is placed on the right side of the table, ranging from strong to very strong. Overall, the organization-wide CSs are strong: 62.5% of the surveyed organizations' CSs are found to be very strong, 25% strong, and 12.5% weak.
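The percentage shares quoted here (like the customer-satisfaction shares above) are simple proportions over the eight surveyed organizations. A minimal sketch, using a hypothetical rating assignment that reproduces the reported 62.5/25/12.5 split:

```python
from collections import Counter

# Hypothetical per-organization CS ratings, consistent with the reported shares
# (the study does not publish the exact per-organization assignment used here):
ratings = ["very strong"] * 5 + ["strong"] * 2 + ["weak"]  # 8 surveyed organizations

# Share of each rating label, in percent of the surveyed organizations.
shares = {label: count / len(ratings) * 100 for label, count in Counter(ratings).items()}
print(shares)  # {'very strong': 62.5, 'strong': 25.0, 'weak': 12.5}
```

The same count-over-eight arithmetic underlies every "x of 8 organizations" figure reported in this section.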
Most of the organizations have positive financial growth, specifically in net profit growth rate over the previous year, except for one organization. These rates are 29.66%, 69.12%, 465.48%, 34.7%, and -19.2% for B3, B2, B1, T1, and B4 respectively. Regarding increases or decreases in HR headcount, most organizations were not interested in sharing figures, considering them confidential information. In terms of HR growth, B4 has a 7.41% increase, whereas B3 and H1 increased by 9.32% and 13% respectively; T1, however, reduced its headcount by 10% in the specified year. Almost no organization had evaluated the pay-off gained from HRIS; T1 had estimated an overall IT pay-off of 50%. The site visit survey shows that the majority of organizations have a solid financial condition (see table 1). Except for a few, most organizations' annual reports showed positive growth in performance compared to previous years. The researchers found 37.5% of organizations each to be apparently good and very good, and 25% best, at achieving CA. The aggregate CSs of T1, T2, and B2 are rated good, H1 and B3 very good, and B4 and C1 best (see table 2). These ratings are based on the customer satisfaction profile, the organization's competitiveness, business growth, and financial condition. All in all, the overall CSs are very good.

Human Resource Information System

All interviewed organizations were found to rely on technology, partially or fully, for conducting HR activities; thus, to a greater or lesser extent, they all practice HRIS. Most organizations rely on technology for daily work to a small or large degree, including HR planning, recruitment, selection, training, compensation, performance management, and transformational activities.
Most interviewees think that by applying technology to HR activities, they benefit from ease of administration, report generation, frequent reviews, improved productivity, analysis, attendance management, reduced errors, and less time spent performing jobs. Professionals in all organizations think that HRIS has a positive impact on accomplishing CSs, and that HRIS contributes positively to the organization's low-cost provider strategy. The survey found that 50% of interviewees think HRIS contributes to the differentiation strategy, 37.5% stated it makes no contribution, and 12.5% did not answer. 75% of interviewees think that HRIS contributes positively to the focused low-cost strategy, while 25% think the opposite. 62.5% of interviewees agreed that HRIS contributes positively to the focused differentiation strategy, while 12.5% disagreed and 25% think it is not feasible with HRIS. Whatever the perceptions on the individual dimensions, all interviewees agreed that HRIS has a positive impact on overall organizational goal achievement.

Open-Ended Questions

The survey results show that different organizations adopt different HRIS applications from various vendors. The HRIS applications and vendors used in the surveyed organizations include Oracle, Workday, ERP, BCSHRS, HR Matrix, SAP ERP, HRMS, and HRM System; other organizations rely on in-house developed HR software. The organizations installed these applications many years ago. The organization-wise picture of HRIS use is shown in table 3.
The most common modules found in the survey are goal setting, recruitment/selection/e-recruitment/hire or fire, payroll/salary processing, performance management, compensation management/performance appraisal sub-system, promotion management, effort log, reporting, analytics, attendance system, overtime management, absenteeism management, leave management, training information system, personnel management system, employee information, time management, document management system, annual confidential report (ACR), central authentication, and more. Beyond these common modules, some HR professionals mentioned material management, financial accounting, and cost management as modules in their HRIS applications. The survey found that job duties in the HR division vary from organization to organization: what one organization considers entry-level duties another regards as mid-level duties, and the mid-level duties of some organizations are regarded as top-level responsibilities in others. The job responsibilities found at the various levels are enumerated here.

§ Entry-Level: performing clerical work, report generation, documentation, overtime entry, disciplinary actions, house allocation, storing employee information/data entry/working with the system, working with e-recruitment, initiating hiring/firing in the system, transferring/posting, preparing payroll reports, and identity cards.
§ Mid-Level: monitoring technical sites; system modification, recommendation, supervision, and promotional actions; conducting the recruitment and selection process; working with the personnel management system (PMS); training and development; tax calculation; implementing new projects or long-term goals; delegating tasks; evaluating performance; heading departments/sections/sub-sections; making routine decisions; approving loans/advances; and separation benefits.

§ Top-Level: making long-term strategic decisions, target fixation, action planning, decision-making authority, authorization, distributing action plans among divisions, posting, launching new projects, creating new sections, monitoring overall HR activities, initiating training and development, and strengthening sanctions.

No organization agreed to share its strategic data. Some interviewees agreed to provide HRIS data only with the consent of higher authorities; therefore, it was not easy to access confidential information. Determining HRIS's contribution to an organization's net profit was also difficult, though one respondent mentioned that it contributes to their organization's profitability by 100%, and another mentioned 20%. No respondent accepted the statement that HRIS is lagging and has no potential for growth; they think the field is growing day by day. From the respondents' point of view, the challenges may differ from organization to organization and individual to individual. Although the obstacles can be critical, many organizations are converting paperwork to a paperless platform. Lack of system know-how and experience, recruiting a qualified workforce and ensuring their retention, system integration failures/ERP downtime, updating software, changing people's mindsets, managing resistance to change, and workforce absorption are some of the difficulties found. However, B1's respondent mentioned that they face no critical challenges nowadays.
A few respondents specified the unique characteristics and competencies of people who succeed in HRIS. Attention to detail, quick learning, patience, a sharing attitude, teamwork, sufficient working knowledge of computers and system development, and proficiency in English are all mentioned as characteristics and competencies for HRIS professionals. Again, hard work and IT skills, combined functional expertise, experience with software and HR, business and systems knowledge, and administration experience paired with solid IT expertise help the most in this field. IT expertise, business acumen, system development know-how, and a sound knowledge of MS Office are the core technical skills. In addition, query languages, C, C++, databases, Java, C#, Oracle, Visual Studio, operational ERP experience, an overview of system workflows, and operational process integration are recommended skill sets for advanced-level applications. These skills are identified as beneficial for both functional and technical employees in this field. Some respondents considered revealing salaries to conflict with organizational privacy. However, some specified that entry-level salaries range from BDT 15,000 to 55,000, mid-level salaries from BDT 35,000 to 120,000, and top-level salaries from BDT 60,000 to 300,000. Salary mainly depends on organization type, size, culture, salary structure, and the experience and expertise of the HR people. Some respondents said that they enjoy the work environment, learning opportunities, job security, promotion, and a more immediate effect of their activities on their careers than other job fields offer. The HRIS field is still evolving in Bangladesh; from the interviews, it appears that the future will be a paperless platform based on HRIS.
A countrywide online recruitment system, a one-machine-one-employee concept, no fixed physical work environment but rather the facility to work from anywhere, and HR analytics for building organizational value may be seen in the coming days of HR development in Bangladesh. The respondents concluded that web-based HRIS in practice and a fully integrated system facilitate internal stakeholders. Much manual work can still be streamlined, and process simplification from other ends is required. Again, reduced working time, improved work efficiency, easy access to HR-related information, fast preparation and supply of reports to top management, easy administration and corresponding decision-making, radical time savings, increased employee productivity, achievement of success, security and correctness, timely transactions, high communication, and contributions from more young people are required to get the best out of HRIS.

FINDINGS REGARDING HRIS IMPACT ON ORGANIZATIONAL PERFORMANCE

In the organization site visit survey, the interviewees expressed their opinions regarding HRIS and CSs. Figure 2 illustrates the impact of HRIS on achieving CSs in the surveyed organizations. The data provided by T1 indicate a bad impact on overall CSs, the low-cost strategy, focused differentiation, and organizational goal achievement, and a very bad impact on the differentiation strategy and the focused low-cost provider strategy; on average, a destructive impact of HRIS on CS was found. T2 reports a good effect on overall CS, the low-cost strategy, and the focused differentiation strategy, and a very good impact on differentiation and organizational goal achievement; overall, it has a very good effect. H1 reports a very good effect on all the strategic approaches, including the overall impact. According to B1, HRIS has the best effect on overall CS, the low-cost provider strategy, the focused low-cost provider strategy, and organizational goal achievement, and overall achieves the best impact.
In the case of B2, there is the best impact on overall CS and the low-cost provider strategy, a very good impact on the focused low-cost provider strategy and organizational goal achievement, a good impact on the focused differentiation strategy, and a moderate impact on the differentiation strategy. B3 achieves a very good impact on organizational goal achievement; a moderate impact on the overall strategy, the low-cost provider strategy, and the focused differentiation strategy; a bad impact on the differentiation strategy and the focused low-cost provider strategy; and, on average, a moderate impact on CSs. B4 scores a moderate effect of HRIS on all criteria of the organization's CSs. C1 reports a good impact on overall CSs, the focused differentiation strategy, and organizational goal achievement, and a moderate impact on the low-cost provider strategy, the differentiation strategy, and the focused low-cost strategy; overall, HRIS has a good effect on its CSs. On average, the impact of HRIS on an organization's CSs is described as attractive, satisfactory, and likely, each at 22.2%, while overwhelming and bad each account for 11.1% (see table 4). No organization referred to other departments or organizations in this regard. From table 5, the researchers conclude that H1 is accepted because the P-value is less than 0.05; thus, it is confirmed that CS has an impact on the organization's performance. The sig. value in table 6 is 0.47, which is greater than 0.05, indicating that H2 cannot be accepted: an organization's compatibility has no significant impact on business growth. The Pearson correlation here is 0.37, indicating a weak positive correlation between an organization's compatibility and business growth: if an organization's compatibility grows, the business might grow as well, but this growth may not be significant.
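The acceptance logic applied in these tests, and in the regression results that follow, is standard bivariate statistics: a hypothesis is accepted when its p-value falls below α = 0.05, the Pearson coefficient measures the strength of a linear association, and R² gives the share of variance explained. A minimal standard-library sketch of those calculations (the sample data below are hypothetical, not the study's confidential figures; only the quoted significance values come from the text):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient r between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

def ols(xs, ys):
    """Ordinary least squares fit y = b0 + b1*x; returns (b0, b1, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b0 = my - b1 * mx
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b0, b1, 1 - ss_res / ss_tot

def decide(p_value, alpha=0.05):
    """Accept the hypothesis only when p < alpha; otherwise it cannot be accepted."""
    return "accept" if p_value < alpha else "cannot accept"

# Significance values reported in the text:
print(decide(0.47))    # H2 -> cannot accept
print(decide(0.425))   # H3 -> cannot accept
print(decide(0.118))   # H5 -> cannot accept
```

With the reported coefficients (an intercept of 7.64 and a slope of 0.67), the predicted business growth at a given HRIS pay-off x is simply 7.64 + 0.67x, and the reported R² of 0.357 says that roughly 36% of the variation in growth lies along that fitted line.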
From figure 3, the researchers conclude that an organization's competitiveness is linear, as the score rises gradually to some extent, although these interactions are not significant. Table 7 shows a positive correlation between an organization's financial condition and HRIS contribution, meaning that a higher contribution of HRIS accompanies stronger financial performance; this correlation is strong, as the score is greater than 75%. On the other hand, the sig. value is 0.425, which is greater than 0.05, indicating that H3 cannot be accepted: there is no statistically significant correlation between an organization's financial performance and the contribution of HRIS. As the sig. value in table 8 is less than 0.05, the researchers conclude with a 95% confidence level that H4 is accepted, indicating a strong positive correlation between HRIS contribution and business growth. Table 9 shows that HRIS explains 35.7% of business growth, with other factors explaining the remaining 64.3%. In table 10, the value of β tells us that when HRIS is not implemented or does not contribute to net profit, the business growth rate is 7.64; the coefficient 0.67 then indicates how much additional business growth is achieved when HRIS is implemented and delivers its pay-off. On the other hand, the P-value of the ANOVA (sig. 0.118 > 0.05) shows no statistically significant relationship between HRIS pay-off and overall business growth; thus, H5 is rejected, meaning there is no significant relationship between HRIS contribution and business growth.

FRAMEWORK FOR HRIS PAY-OFF LINKED WITH COMPETITIVE STRATEGIES

Research shows that implementing HRIS enhances an organization's growth through long-run productivity and gains. Along with increasing workers' efficiency, HRIS is also transforming HR functions.
Researchers highlight five keys to leveraging HR technology to assist business leaders in perceiving and managing this transformation, along with an outline of challenges to be addressed and a list of five trends to watch. Currently, this technology is moving quickly to internet-based systems that deliver knowledge and services such as employee self-service (ESS), online recruiting, web-based coaching, online applicant testing, and online benefits management. A variety of organizations now support internet portals that offer HR-related services from a single portal. Leveraging this sort of technology requires leaders who specialize in the underlying HR processes supported by HRIS. In figure 4, the HRIS pay-off is leveraged in combination with the company's overall CSs. HRD is accountable for developing the HR strategy once the broad CSs are created. This strategy is developed by involving the required workforce, with crucial business leadership positions filled by identifying suitable internal employees. Succession planning needs training, potential, performance, and rewards profiles as well as the necessary skills and competencies. Succession planning may also be informed by the leadership effectiveness survey, and this survey also plays a role in organizational alignment, which in turn contributes to streamlined succession planning. Organizational alignment is affected by employee satisfaction and the leadership effectiveness survey, and is linked to the HR strategy; it contributes to the successful execution and implementation of that strategy. External HR workforce planning requires an HR recruitment portal for hiring the people needed to conduct the business profitably. This recruitment portal is linked with induction and exit management and with compensation planning. For sustainable organizational alignment, organizational goals need to be set and tracked while taking transfer management into account.
Goal setting and tracking affect appraisal management, and appraisals also take transfer management into account. Based on the assessment and multi-rater feedback, HR personnel set performance bonuses. The payroll system generates paycheques combining performance bonuses and compensation planning. Based on goal setting and tracking, succession planning, the leadership effectiveness survey, and multi-rater feedback, training and development decisions are made. Payroll, training and development, compensation planning, appraisal, online work, employee self-service, transfer management, and induction and exit management data are linked and stored in the HRIS database. These data are updated whenever changes are made in a module or sub-system of the HRIS. After a certain period, HRIS performance is evaluated and the HRIS pay-off is found. This pay-off contributes to overall OP, primarily through financial performance, business growth, environmental and social growth, and other channels.

DISCUSSION AND CONCLUSION

This study finds a correlation (37%) between HRIS and CSs that does not significantly affect business performance in Bangladeshi organizations. There is a positive correlation between organization compatibility and business growth. A strong positive correlation between HRIS contribution and business growth is found, and HRIS has a 36% pay-off positively correlated with CSs to a low-to-medium extent. From the regression function, it is inferred that, at the obtained HRIS pay-off, the business might grow by 32%. One vital factor that can set an organization apart from its competitors is its HR. The quality of an organization's personnel, and their enthusiasm and satisfaction with their jobs and the organization, all strongly affect its productivity, reputation, and customer service level. The essential function of HR today is to ensure the efficient and effective use of human talent to accomplish an organization's goals and objectives.
Formulating an appropriate CA through employee programs requires analyzing the organization's business strategy and HR practices following HRIS implementation. The organization needs to develop a complete HRIS model that supports long-term planning, builds core competencies, and establishes sensing capabilities. Insufficient ICT infrastructure, a lack of commitment and involvement from top management and staff, resistance from staff, concern about access to information by unauthorized persons, a lack of IT specialists, and the difficulty of computerizing large amounts of paperwork all hinder the perceived benefit of HRIS; these factors are the main restrictions on implementing HRIS in Bangladesh. Finally, organizational CSs combined with HRIS will result in high employee satisfaction, high performance, longer tenure, willingness to accept change, and better overall performance. It can be concluded that HRIS is an excellent tool for the HR division and the organization as a whole, but there are still issues to address, and the functions that HRIS has not yet absorbed need attention.

Recommendation

HRIS is considered a valuable resource in HR and strategic decision-making (Kovach et al., 2002). However, implementing HRIS does not guarantee a positive pay-off. There is evidence that many corporate strategies fail because they do not address salient people-related issues. There is also a risk that large investments in HRIS will not improve HR professionals' satisfaction or the performance of strategic HR tasks; this might result from low technology acceptance among intended users, inappropriate technology decisions, or other factors. Until more is understood, investments in these innovations ought to proceed with caution (Boateng, 2007). Organizational change issues also need to be considered when implementing IS in small organizations (Levy & Powell, 2000).
It is necessary to follow a strategic vision in organizational practices based on the organization's capabilities (Devece et al., 2019). Employee engagement is positively related to flexible work arrangements (Ugargol & Patrick, 2018). The future is unpredictable; however, organizations should build the capability to sense change. To link CSs with HRIS in Bangladesh, organizations should do several things. With these details in mind, the following recommendations are made, specifically for organizations in developing countries like Bangladesh:

• Knowledgeable HRIS analysts should be appointed, and need-based training programs should be arranged from time to time
• HR strategy should target achieving CA through the workers and exploiting practices that support this strategy
• HR executives need to revise their existing HRIS recruitment and selection procedures to provide more functionality
• Organizations need to implement an HRIS that links employee performance to corporate business goals and priorities
• Organizations should evaluate more fairly and, when necessary, amend their current HRIS compensation and benefits schemes to stop turnover
• Organizations should continue to develop their capacity for HRIS at both the strategic and the tactical levels

Implication

Positive HR programs translate into a positive impact on the organization. Many organizations that emphasize HR have made a difference in organizational performance. These organizations acknowledge the importance of their workers in making a difference and providing the essential ingredient of their CA. The results of the current study have implications for HRIS analysts, HRM executives, scholars, and other relevant stakeholders, especially in Bangladesh and other developing nations.

Limitation

This study covers a thorough analysis of HRIS, which is central to achieving CA. No work is free of limitations, and this study is no exception.
Because organizations consider CSs and HR data to be confidential resources, they are not easy to collect; thus, the actual scenario cannot be revealed with fully appropriate data. Again, there are few implementations of HRIS in Bangladesh, so limited sources are available. Consequently, the data, collected only from selected top-ranked organizations, cannot fully reveal the actual scenario.

Future Study

While this research has produced realistic results on HRIS, it provides a platform for future research in the area, which ought to consider several issues. First, a detailed analysis is needed to explore the role of HRIS in CSs, especially with a larger sample size and a higher response rate, interviewing many real on-site users of HRIS, owners, vendors, trainers, and other relevant stakeholders, so that a deeper analysis can be generalized. Second, HRIS represents a significant investment decision for organizations of all sizes; researchers can conduct further studies to develop a proper roadmap that guides investors in making strategic decisions in this field. Third, an in-depth survey of HRIS usage in support of an organization's performance needs further examination. Thus, future researchers can produce excellent, real-time research results that minimize the existing limitations and identify possible HRIS implications for CSs.

FUNDING AGENCY

Publisher has waived the Open Access publishing fee.
Confronting Standard Models of Proto-Planetary Disks With New Mid-Infrared Sizes from the Keck Interferometer

We present near- and mid-infrared interferometric observations made with the Keck Interferometer Nuller and near-contemporaneous spectro-photometry from the IRTF of 11 well known young stellar objects, several observed for the first time in these spectral and spatial resolution regimes. With AU-level spatial resolution, we first establish characteristic sizes of the infrared emission using a simple geometrical model consisting of a hot inner rim and mid-infrared disk emission. We find a high degree of correlation between the stellar luminosity and the mid-infrared disk sizes after using near-infrared data to remove the contribution from the inner rim. We then use a semi-analytical physical model to also find that the very widely used "star + inner dust rim + flared disk" class of models strongly fails to reproduce the SED and spatially-resolved mid-infrared data simultaneously; specifically, a more compact source of mid-infrared emission is required than results from the standard flared disk model. We explore the viability of a modification to the model whereby a second dust rim containing smaller dust grains is added, and find that the two-rim model leads to significantly improved fits in most cases. This complexity is largely missed when carrying out SED modelling alone, although detailed silicate feature fitting by McClure et al. 2013 recently came to a similar conclusion. As has been suggested recently by Menu et al. 2015, the difficulty in predicting mid-infrared sizes from the SED alone might hint at "transition disk"-like gaps in the inner AU; however, the relatively high correlation found in our mid-infrared disk size vs. stellar luminosity relation favors layered disk morphologies and points to missing disk model ingredients instead.
Introduction

The gas and dust disks around young stars play an important role in the formation and evolution of stars and planetary systems. A protostellar object grows as it accretes matter from its circumstellar disk. At the same time, the physical conditions in the disks constitute the initial conditions for planet formation (Williams & Cieza 2011). It is therefore important to know the disk structure and composition as a function of stellocentric radius and vertical height, the density and temperature profiles of each disk component, and how these properties evolve with time, in order to improve our theoretical understanding of the planet formation processes (Bodenheimer & Lin 2002; Blum & Wurm 2008). Direct observational constraints are however difficult to obtain, due to angular resolution limitations inherent to standard imaging techniques, as we now illustrate. Generally speaking, mid-infrared (MIR) wavelengths probe disk emission from "intermediate" radial locations, between the innermost disk regions bright in the near-infrared (NIR) and the outer disk emitting at (sub-)mm wavelengths and also visible in scattered light images (see e.g. Figure 1 in Dullemond & Monnier 2010). For an A0 star, for example, van Boekel et al. (2005) place 90% of the MIR disk emission between 0.5 − 30 AU. Therefore, this wavelength regime is interesting as it probes the spatial scales where planets form and reside. At typical distances to star forming regions however (d > 100 pc), these spatial scales (≲ 300 mas) are hardly resolved using conventional telescopes. For this reason progress has relied mostly on interpreting spectral energy distributions (SEDs), which have inherent degeneracies (most notably between disk temperature and dust properties) and therefore necessarily rely on disk models for which even the most basic aspects pertaining to the innermost regions have not been solidly established.
Long baseline interferometers operating at MIR wavelengths can spatially resolve the relevant spatial scales, and provide much needed new model constraints. Previous surveys have focused on establishing the characteristic MIR sizes of a relatively small number of T Tauri and Herbig Ae/Be objects, including results at lower spatial resolution using specialized interferometric techniques on single large telescopes (Hinz et al. 2001; Liu et al. 2005, 2007; Monnier et al. 2009). First steps have also been taken in exploring the dust mineralogy, showing that the distribution of dust species is not homogeneous in the disk, and comparing with parametrized disk models (Fedele et al. 2008; Schegerer et al. 2009). Modelling the MIR emission in detail however is notoriously complicated, because it contains contributions from several disk regions, as well as fundamental uncertainties about whether or not the relevant disk regions are completely or partially shadowed. This is in contrast with the modelling of the NIR emission, which is almost completely dominated by a single disk component, namely the inner dust rim (there are also smaller contributions from inner gas and an outer dust envelope; see Dullemond & Monnier 2010, and references therein). Indeed, a handful of single-object studies using specific detailed disk models have provided valuable insights, but also illustrate the difficulty of the problem (Kraus et al. 2008; Schegerer et al. 2008; di Folco et al. 2009; Ratzka et al. 2009; Benisty et al. 2010; Ragland et al. 2012; Schegerer et al. 2013; Gabányi et al. 2013). Most recently, Menu et al. (2015) present the results of a survey of 41 Herbig Ae/Be objects with the MIDI instrument at the Very Large Telescope Interferometer.
They find intrinsic morphological disk diversity or evolutionary diversity, and evidence for flat disks (group II) having gaps, with implications for the evolutionary sequence and possible role of planet formation in producing the observed types of disks (flat with or without gaps, and flared/gapped, i.e. transitional). In this paper we present new spatially resolved observations using the Keck Nuller Interferometer (KIN) of the NIR and MIR brightness for 11 well known young stellar objects (YSOs), as well as near-contemporaneous spectro-photometric data obtained at the NASA Infrared Telescope Facility (IRTF). We do not attempt to constrain the parameters of a specific detailed physical model, because the amount of data available would not permit us to resolve the many model parameter degeneracies, and would result in a very limited gain in knowledge, especially considering that those detailed physical models are themselves still largely unproven. Rather, our approach is to use simple and general model prescriptions that still reflect the most salient physical processes, in order to establish the basic features of the infrared brightness, test current paradigms, and suggest directions to improve the models.

The Sample

Our sample consists of 11 targets selected to have strong infrared excess flux over the stellar photospheres. They represent four different YSO types: 3 T Tauri, 4 Herbig Ae, 3 Herbig Be, and 1 FU Ori object. Their basic properties, and the parameters needed for the modeling performed in the sections that follow, are shown in Table 1. All the targets are well known young circumstellar disk objects, and the disk properties adopted, also inputs to the modeling, are listed in Table 2.

Observations and Data Reduction

Observations were made using the Keck Interferometer (Colavita et al. 2013) in its nuller mode (Colavita et al. 2009), and at the NASA Infrared Telescope Facility (IRTF) over the time period 2009-2010; see the observing log in Table 3.
Keck Nulling Interferometry

The Keck Interferometer Nuller (KIN, Colavita et al. 2009) operates in N-band (8.0 − 13.0 µm, dispersed over 10 spectral pixels) and combines the light from the two Keck telescopes as an interferometer with a physical baseline length B ∼ 85 m. The KIN produces a dark fringe through the phase center ("nulling"). The adjacent bright fringe (through which flux is transmitted) projects onto the sky at an angular separation λ/2B = 10 mas, or 1.4 AU at the median distance to the stars in our sample (140 pc), and for λ = 8.5 µm (the effective wavelength of the KIN bandpass). Thus, the instrument is sensitive to MIR circumstellar emission as close to the central star as these spatial scales (i.e. the "inner working angle"). For further descriptions of the KIN observables, see Serabyn et al. (2012) or Mennesson et al. (2014). The KIN also uses a standard Michelson interferometer operating in K-band (2.0 − 2.4 µm, dispersed over 5 spectral pixels) as a fringe tracker, in order to stabilize the MIR nulls in the presence of optical path fluctuations induced by the turbulent Earth's atmosphere. In this paper we also use these NIR interferometric data, in order to probe circumstellar emission from hotter disk regions located closer to the central star. For the physical baseline length, the fringe spacing at 2.2 µm is 5.3 mas, or 0.8 AU at the median distance to our sample. The MIR nulls and NIR visibility data provided by the KI pipeline were calibrated using their Calib package. Following standard practice, in order to measure the instrument's transfer function and account for it in the data calibration process, observations of targets of interest were interleaved with observations of calibrator stars of known angular diameters (see Table 3).
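The quoted angular and physical scales follow from small-angle arithmetic. A minimal sketch reproducing the numbers above (function names are illustrative, not from the paper):

```python
import math

def bright_fringe_offset_mas(wavelength_m, baseline_m):
    """Angular offset of the first bright fringe from the null, lambda/(2B), in mas."""
    rad = wavelength_m / (2.0 * baseline_m)
    return rad * (180.0 / math.pi) * 3600.0 * 1000.0  # radians -> mas

def mas_to_au(theta_mas, distance_pc):
    """Small-angle conversion: 1 mas at 1 pc subtends 0.001 AU."""
    return theta_mas * distance_pc * 1.0e-3

# lambda = 8.5 um (effective KIN wavelength), B = 85 m, d = 140 pc (median sample distance)
offset = bright_fringe_offset_mas(8.5e-6, 85.0)  # ~10 mas
span = mas_to_au(offset, 140.0)                  # ~1.4 AU
```

The same arithmetic with λ = 2.2 µm and B = 85 m gives the 5.3 mas K-band fringe spacing quoted in the text.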
For ease of comparison of the MIR and NIR data, the calibrated nulls (n) were converted to visibilities using the relation V = (1 − n)/(1 + n), an appropriate approximation given that the MIR emission from our sources appears essentially unresolved to the 4 m baseline of the KIN cross-combiner (see Colavita et al. 2009). A salient aspect of the MIR spatially resolved measurements presented in this paper is that, due to the nulling mode, the precision of the calibrated MIR visibilities is substantially higher than can be achieved with standard MIR interferometers from the ground (Colavita et al. 2009, 2010). Our typical uncertainties are σ_n = 0.005 − 0.01, depending on observing conditions and on the spatial extent of the object in the NIR fringe tracking channel, which corresponds to MIR visibility uncertainties of 1 − 2% for an unresolved object.

IRTF Spectrophotometry

For most of the KIN objects and epochs, we also obtained new NIR and MIR spectrophotometric data at the IRTF. Best attempts were made to schedule the IRTF observations as near-contemporaneously with the KIN observations as possible, in practice resulting in time lags ranging from a few days to two months, one month being typical (see Table 3). This is important because temporal variations in the star/disk flux ratios are known to be common among YSOs (Sitko et al. 2008), and accurate relative fluxes are needed as input to the modelling of the interferometric visibilities. Within the time interval between the KIN and spectrophotometric data, we assume that the disk morphology and star/disk flux ratios remain constant. We obtained NIR spectra using the SpeX spectrograph. The spectra were recorded using the echelle grating in both short-wavelength mode (SXD, 0.8 − 2.4 µm) and long-wavelength mode (LXD, 2.3 − 5.4 µm) using a 0.8 arcsec slit.
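The null-to-visibility conversion stated above can be sketched directly; the first-order error propagation shown here is our own derivative of the same relation (dV/dn = −2/(1+n)²), not a formula given in the text:

```python
def null_to_visibility(n):
    """Convert a calibrated null depth n to an equivalent visibility, V = (1 - n)/(1 + n)."""
    return (1.0 - n) / (1.0 + n)

def visibility_sigma(n, sigma_n):
    """First-order error propagation: |dV/dn| = 2/(1 + n)^2."""
    return 2.0 * sigma_n / (1.0 + n) ** 2

v = null_to_visibility(0.05)           # a hypothetical 5% null depth
sig = visibility_sigma(0.0, 0.005)     # sigma_n = 0.005 on an unresolved source (n ~ 0)
```

For n ≈ 0 the propagated visibility uncertainty is ≈ 2σ_n, consistent with the 1 − 2% quoted above for σ_n = 0.005 − 0.01.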
The spectra were corrected for telluric extinction and flux calibrated against a number of A0 V calibrator stars, using the Spextool data reduction package (Cushing et al. 2004; Vacca et al. 2003). In addition to the 0.8 arcsec-slit spectra, for all but v1295 Aql and v1057 Cyg we also recorded data with the SpeX prism disperser and a wide 3.0 arcsec slit, which allows us to retrieve the absolute flux levels when the sky transparency is good and the seeing is 1 arcsec or better. This condition was met for DG Tau, RY Tau, MWC 480, and AB Aur, and confirmed using the BASS data, obtained within a month (DG Tau and RY Tau) or 2 days (MWC 480 and AB Aur) of the SpeX observations. For v1295 Aql and v1057 Cyg we normalized the SpeX levels using the BASS observations alone, which were obtained within a week of the SpeX observations. For MWC 275, the seeing was 1.4 arcsec, but the prism and BASS yielded identical scaling factors for the SXD+LXD spectra. MIR spectra were obtained with The Aerospace Corporation's Broad-band Array Spectrograph System (BASS). BASS uses a cold beamsplitter to separate the light into two separate wavelength regimes. The short-wavelength beam includes light from 2.9 − 6 µm, while the long-wavelength beam covers 6 − 13.5 µm. Each beam is dispersed onto a 58-element Blocked Impurity Band (BIB) linear array, thus allowing for simultaneous coverage of the spectrum from 2.9 − 13.5 µm. The spectral resolution R = λ/∆λ is wavelength-dependent, ranging from about 30 to 125 over each of the two wavelength regions (Hackwell et al. 1990). In some cases where the wide-slit SpeX prism observations were not available, BASS spectrophotometry that overlapped the SpeX data was used to provide absolute flux levels of the SpeX spectra. In order to construct complete SEDs for each object, additional infrared photometry from 2MASS, Spitzer and the literature has been included as needed in order to fill in wavelength gaps in either the SpeX or BASS data.
The UBVRI data are primarily from the EXPORT project (Oudmaijer et al. 2001) or from the survey of HAeBe stars published by de Winter et al. (2001).

Stellar Photosphere

In order to study the disk emission, it is necessary to estimate the stellar contribution to the observed SEDs. It is reasonable to assume that shorter wavelength fluxes are dominated by the stellar photosphere, because the circumstellar disks are much cooler. Therefore, we fit a stellar model to the UBVRI SED data, and extrapolate the modeled stellar spectra to the longer wavelengths at which the KIN operates. We use Kurucz models for the stellar photospheres (Kurucz 1979). The stellar metallicity is assumed to be solar, and stellar masses and distances are fixed to the values listed in Table 1. The parameters we fit are: the stellar effective temperature (T_*), radius (R_*), and reddening coefficient (including circumstellar material). The best-fit results are shown in Table 4. Our values are consistent with previous SED-based results in the literature. When modelling the disk emission, as described in the following sections, the stellar contributions to the SED are fixed to these best-fit results.

Model and Fitting Procedure

We begin by using a geometric disk model in order to establish the emission size scales. The objects are represented as a linear combination of the three components expected to dominate the emission: the star, the inner dust rim, and the extended disk behind it. The star is modelled as an unresolved point source, which is appropriate given their angular diameters (all smaller than 0.2 mas) and the angular resolution of the KIN (5 mas fringe spacing at even the shortest 2.2 µm wavelengths in these observations). The inner dust rim is represented by a ring of linear radius R_rim, infinitely thin in the radial direction, and emitting as a blackbody at temperature T_rim.
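The photosphere fit described above (Stellar Photosphere) can be sketched as follows. This is a simplified stand-in: we substitute a blackbody for the Kurucz atmosphere, omit the reddening parameter, and fit synthetic rather than real UBVRI data; the stellar parameters (8000 K, 2 R_sun, 140 pc) are assumed for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants
PC, RSUN = 3.086e16, 6.957e8               # meters

def photosphere_flux(lam_m, t_eff, r_star_m, d_m):
    """Blackbody stand-in for a Kurucz photosphere: F_lambda = pi * B_lambda(T) * (R/d)^2."""
    b = (2.0 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * KB * t_eff))
    return np.pi * b * (r_star_m / d_m) ** 2

# Synthetic UBVRI-like fluxes for an assumed star: T = 8000 K, R = 2 Rsun, d = 140 pc.
lam = np.array([365.0, 445.0, 551.0, 658.0, 806.0]) * 1e-9
d = 140.0 * PC
flux = photosphere_flux(lam, 8000.0, 2.0 * RSUN, d)

# Fit T and R (in Rsun) in log-flux space for numerical robustness; distance held fixed.
popt, _ = curve_fit(
    lambda l, t, r: np.log10(photosphere_flux(l, t, r * RSUN, d)),
    lam, np.log10(flux), p0=[6000.0, 1.0])
t_fit, r_fit = popt
```

The fit recovers the input parameters because the short-wavelength SED shape constrains the temperature while the normalization constrains (R/d)².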
The emission from the extended disk is represented by a two-dimensional Gaussian brightness with a central clearing of radius equal to the inner rim radius; we quantify the size scale of this component by its half-width at half-maximum (HWHM_Disk, see Figure 1). The inclination and position angle of both the rim and extended disk are assumed to be the same as those observed via millimeter interferometry of the outer disk (given in Table 2). The fitting process is divided into two steps, as follows. First, the temperature and size of the inner rim (T_rim, R_rim) are determined from the NIR SED and K-band visibilities, ignoring the extended disk component since it contributes negligible flux at NIR wavelengths (in practice, we limit the SED fits to the 1-5 µm wavelength region in order to best realize this assumption). The inner rim temperature is obtained by fitting the NIR SED (its shape constrains this parameter very well). The rim radius is then obtained by numerically solving the equation for the K-band visibilities:

V = [F_* + F_rim J_0(2π b ρ)] / F,

where V is the observed visibility amplitude at each of the 5 wavelength bins sampled within the K-band, F the total flux, F_* the stellar flux, F_rim the rim flux, ρ = R_rim/d the angular radius of the rim, and b the projected baseline in units of the wavelength (b = √(u² + v²)/λ), taking into account the inclination and orientation of the rim on the sky. The fractional fluxes are obtained by SED decomposition using the stellar fit described above. Therefore the only unknown is the angular radius of the rim, ρ. The Bessel function (J_0) in the equation above is not bijective; here we consider only numerical solutions in the main lobe of the visibility function, i.e. we adopt the smallest rim size consistent with the data. Next, we determine the characteristic size of the extended disk by fitting to the N-band visibilities. This time the star is ignored because it contributes negligible flux in N-band.
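The main-lobe root-finding step for the rim radius can be sketched as below. The visibility relation used is the point-source-plus-thin-ring form implied by the term-by-term description in the text (V = f_* + f_rim J_0(2πbρ), with fractional fluxes from SED decomposition); the flux fractions, baseline projection, and rim size here are hypothetical round-trip values, not results from the paper.

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

def solve_rim_radius(v_obs, f_star_frac, f_rim_frac, baseline_m, lam_m):
    """Solve v_obs = f_star + f_rim * J0(2*pi*b*rho) for the rim angular radius rho (radians).

    Only the smallest root is kept, i.e. the main lobe of the visibility function,
    matching the 'smallest rim size consistent with the data' convention.
    """
    b = baseline_m / lam_m                       # spatial frequency, cycles per radian
    f = lambda rho: f_star_frac + f_rim_frac * j0(2.0 * np.pi * b * rho) - v_obs
    rho_max = 2.4048 / (2.0 * np.pi * b)         # first zero of J0 bounds the main lobe
    return brentq(f, 1e-12, rho_max, xtol=1e-15)

# Round-trip check with assumed values: 85 m baseline at 2.2 um,
# f_star = 0.3, f_rim = 0.5, and a rim of angular radius ~1.4 mas.
rho_true = 6.93e-9  # radians
v_obs = 0.3 + 0.5 * j0(2.0 * np.pi * (85.0 / 2.2e-6) * rho_true)
rho = solve_rim_radius(v_obs, 0.3, 0.5, 85.0, 2.2e-6)
```

Restricting the bracket to the first zero of J_0 is what enforces the main-lobe (smallest-size) solution.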
Therefore, the spatial model consists of the inner rim (barely resolved at MIR wavelengths; see Table 5) and the extended disk component. Similarly to the previous step, the fractional fluxes in each component at the 10 N-band wavelength bins are obtained via SED decomposition, using the parameters for the blackbody ring representing the inner rim from the previous step. Therefore, the HWHM of the truncated Gaussian brightness representing the extended disk is the only free parameter. In practice, the fitting is performed by generating an image of this model, and the visibilities are extracted via Fourier transformation. Figure 2 shows the SED data, visibility data, and fitted sizes (i.e. radii given by R_rim in the NIR or R_rim + HWHM_Disk in the MIR) as a function of wavelength within each of those bandpasses. Table 5 shows the best-fit parameters for each object, where the rim and extended disk radii have been averaged over the spectral bins in the NIR and MIR bandpasses respectively (for the propagation of errors, we assume that the NIR spectral bins are uncorrelated, and that the MIR spectral bins are fully correlated, following Mennesson et al. 2014). We note that the uncertainties in the characteristic sizes in Table 5 do not include systematic uncertainties due to uncertainties in (a) the fractional fluxes derived via SED decomposition (for reference, a ∼ 10% effect given our photometric errors and values of the J_0 term in Eq. 1 typical of our sample), or (b) distance (a 25% effect given the same level of distance uncertainties for our sample).

NIR and MIR Characteristic Sizes

We obtain best-fit values for the rim temperatures and radii that are in agreement with expected dust sublimation values, as was previously found (see e.g. Dullemond & Monnier 2010, and references therein). The MIR characteristic sizes range from 1.2 AU to 6.7 AU, with a median precision of 3%.
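The two error-propagation conventions used above (uncorrelated NIR bins vs. fully correlated MIR bins) can be sketched as follows; the bin values are hypothetical. For uncorrelated bins the averaged error shrinks as 1/√N, while for fully correlated bins it does not shrink at all:

```python
import numpy as np

def average_size(sizes, sigmas, correlated):
    """Average per-bin size estimates with either fully correlated or uncorrelated errors."""
    sizes, sigmas = np.asarray(sizes), np.asarray(sigmas)
    mean = sizes.mean()
    if correlated:
        err = sigmas.mean()                             # fully correlated: no sqrt(N) gain
    else:
        err = np.sqrt((sigmas**2).sum()) / len(sigmas)  # uncorrelated: shrinks as 1/sqrt(N)
    return mean, err

# 5 NIR bins treated as uncorrelated, 10 MIR bins treated as fully correlated (illustrative numbers).
m_nir, e_nir = average_size([0.50, 0.52, 0.48, 0.51, 0.49], [0.02] * 5, correlated=False)
m_mir, e_mir = average_size([3.1] * 10, [0.1] * 10, correlated=True)
```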
We note that (as can be seen in Figure 2) for AB Aur, as well as for RY Tau and MWC 758 at some of the MIR wavelengths, there is no Gaussian HWHM solution. This is because for those cases the coherent MIR flux (MIR visibility times the total flux, solid orange line in the SED panels) is lower than the rim flux, and therefore there is no mathematical solution for the Gaussian component, given that, as noted above, the rims are nearly unresolved at MIR wavelengths. In other words, our procedure for this simple geometrical model places too much MIR coherent flux in the rim. In Section 4.3 we consider more physical models which allow for a more extended MIR brightness for these sources.

The MIR Size - Stellar Luminosity Relation

Studying how the characteristic sizes relate to the stellar properties can reveal clues about the dominant emission processes at play in a given wavelength regime. In Figure 3 we explore how the MIR characteristic sizes measured above (R_rim + HWHM_Disk) relate to the stellar luminosity (L_*). The index number in the plot identifies each object as in Table 5 (AB Aur is missing, because the geometrical model has no MIR size solution for this object, as discussed above). The dashed lines represent the equilibrium location of gray dust at the indicated temperatures, following the definition of Monnier & Millan-Gabet (2002). We confirm earlier findings that the MIR sizes generally scale with stellar luminosity (Monnier et al. 2009; Menu et al. 2015). However, we find a better correlation than found by these previous authors. Formally, we find a correlation of 0.9 with a low p-value (0.001), indicating that the null hypothesis (no correlation) is rejected. Alternatively, a bootstrap analysis gives a 5σ significance to the measured slope of the MIR size vs. L_* diagram (slope = 0.19 ± 0.04).
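A bootstrap significance estimate of the kind described above can be sketched as follows; the log-size vs. log-luminosity points are synthetic (generated with a slope of 0.19 plus noise), not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic log10(L*) vs. log10(size) pairs, built with an assumed slope of 0.19.
log_l = np.linspace(-0.5, 3.0, 10)
log_r = 0.19 * log_l + 0.1 + rng.normal(0.0, 0.05, log_l.size)

def bootstrap_slope(x, y, n_boot=2000):
    """Resample (x, y) pairs with replacement and refit a straight line each time."""
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, x.size, x.size)
        slopes[i] = np.polyfit(x[idx], y[idx], 1)[0]
    return slopes.mean(), slopes.std()

slope, slope_err = bootstrap_slope(log_l, log_r)
significance = slope / slope_err  # a "k-sigma" detection of a nonzero slope
```

The spread of the resampled slopes serves as the slope uncertainty, and slope/σ_slope gives the significance quoted in the text.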
Most likely, the reason for the higher level of correlation is that our choice of "MIR size" effectively removes the rim emission, so that the remaining MIR size correlates better with stellar luminosity. Our two-step procedure is indeed very different from e.g. the one-component Gaussian model of Monnier et al. (2009) or the half-light at half-radius measure of a T-power law disk of Menu et al. (2015). The lower scatter in our relation may also be, at least in part, the result of using the known inclination of the (outer) disk for each object (Table 2), rather than uniformly assuming a face-on geometry.

Semi-analytical Model: Flared Disk with Inner Dust Rim

We now turn our attention to determining how the new MIR interferometer data compare with predictions from a physical model that encapsulates current paradigms, namely a flared disk including a "puffed-up" inner dust rim (see e.g. Dullemond & Monnier 2010, and references therein). We use our own semi-analytical implementation of this model, so that we can modify it, which we will show may be necessary. Our semi-analytical model follows Eisner et al. (2004), but with an inner rim following D'Alessio et al. (2004) and Isella & Natta (2005). As in the previous section, the inclination and position angle of the rim and flared disk are assumed to be the same, and we use the values inferred from millimeter interferometry of the outer disk (Table 2). For simplicity, dust grains in the rim are assumed to be a single species, namely amorphous olivine MgFeSiO4, commonly found in circumstellar disks (Dorschner et al. 1995; Sargent et al. 2009) with close to cosmic Mg-to-Fe ratio (e.g. Snow & Witt 1995). The optical constants for this species are from Jaeger et al. (1994) and Dorschner et al. (1995). The opacities are computed using Mie theory. Since the rim is hot, only large grains can survive; here we assume a single size of 1.3 µm (Tannirkulam et al. 2008b).
With the rim grain properties fixed as discussed above, the rim radius is determined by the sublimation temperature (D'Alessio et al. 2004; Isella & Natta 2005). Thus, the model for the rim component has two free parameters: a scale parameter related to the angular size of the projected rim surface, used to match the NIR fluxes, and the dust sublimation temperature (T_rim). Dust grains in the flaring disk component behind the dust rim are assumed to be silicates with optical properties as in Laor & Draine (1993). Our calculations showed that the dust grain size upper cutoff is not important for our results in the MIR wavelength range; therefore we used a standard MRN distribution (Mathis et al. 1977) with grain sizes following a power law with index -3.5, and minimum/maximum sizes of 0.005/0.25 µm respectively. The mass and outer radius of the flared disk component are fixed to the values in Table 2. We note that the outer radius has a negligible effect on the predicted SED or the interferometric data at MIR and shorter wavelengths, because the outer disk regions contribute little NIR or MIR emission. The surface density distribution is assumed to follow a power law with index -1.5 (Chiang & Goldreich 1997). The only free parameter is the flaring index ξ, which determines how much stellar emission the extended disk can intercept (i.e. the larger the flaring index, the hotter the extended disk is). Validation of our semi-analytical implementation of the flared disk model against benchmark radiative transfer codes is presented in Appendix A. We note that for the purposes of this exercise, we do not use the NIR interferometer data. This is because Tannirkulam et al. (2008b) showed that in order to explain the shape of the NIR visibility curves past the first lobe, a relatively smooth NIR brightness was required (i.e. inconsistent with the abrupt edge in the NIR brightness that results from models devoid of emission inside the inner dust rim).
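For the MRN distribution quoted above (n(a) ∝ a^−3.5 between 0.005 and 0.25 µm), the mass carried by grains scales analytically as M(&lt;a) ∝ a^0.5, since dM ∝ a³ n(a) da ∝ a^−0.5 da. A small sketch of that integral (the 0.1 µm split point is an arbitrary illustrative choice):

```python
def mrn_mass_fraction_above(a_split, a_min=0.005, a_max=0.25, q=-3.5):
    """Fraction of total grain mass above size a_split for an n(a) ~ a^q distribution.

    Mass per size bin goes as a^3 * a^q = a^(q+3); integrating gives a^(q+4) / (q+4).
    For the MRN index q = -3.5 this is the familiar a^0.5 scaling. Sizes in microns.
    """
    p = q + 4.0
    return (a_max**p - a_split**p) / (a_max**p - a_min**p)

frac = mrn_mass_fraction_above(0.1)  # mass fraction in grains between 0.1 and 0.25 um
```

The point of the a^0.5 scaling is that most of the mass sits in the largest grains even though the smallest grains dominate by number and surface area.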
They argued that the most likely origin of the extra NIR emission is hot gas interior to the dust sublimation radius, a component clearly not included in the model just described. Finally, we note that for 4 of the 11 objects (DG Tau, MWC 1080, v1057 Cyg, and v1685 Cyg) our model has no hope of reproducing the detailed SED, because for those objects no silicate emission feature is observed. Possible reasons are: (a) the disks contain only carbon grains (a radical possibility), or (b) the MIR excess arises in an optically thick envelope of large grains, or (c) large gaps exist in the disk region normally responsible for silicate emission (Maaskant et al. 2013). Indeed these 4 objects are known to be very active and/or embedded, such that our model clearly does not apply, and the tailored models that would be required are outside the scope of this paper. However, we choose to keep those four objects in the rest of our analysis, because it is still valuable to examine how the model fares in reproducing not the details but the general features of the data, namely the infrared excess and MIR visibility levels. A schematic sketch of the model and parameters is shown in Figure 1.

One-Rim Flared Disk Model Fitting to SEDs Only

We first tune the model to fit the SEDs only, in order to evaluate how the predicted MIR visibilities compare with the data. The best-fit results are shown in Table 6 and Figure 4. In addition to the best-fit parameters, Table 6 includes the fractional MIR flux in each of the two disk components (f_rim^MIR and f_disk^MIR), relative to the total MIR flux (star + rim + disk). We include no formal parameter errors, because our intent is not to determine precise parameter values, but to evaluate the validity of the main features of the model. The table also includes the reduced-χ² values for the best-fit model compared to the SED and V² data; i.e.
χ²_red = χ²/(N − p), where p = 3 is the number of free parameters for the 1-rim model, and the number of data points N is 10 for the visibility data, and of order 1000 (depending on the object) for the SED data. For the 7 objects with observed silicate emission features, the SEDs are well reproduced. The NIR excess ("bump") is due mostly to the rim, as expected (and this validates the assumption made for the simple geometric model of the previous section). Most of the MIR flux arises in the surface layer of the disk, and reproduces the observed 10 µm silicate peak well in most cases. What about the predicted MIR visibilities? For 3 of the objects (RY Tau, MWC 758, and AB Aur), which are the 3 most spatially resolved in the MIR, the visibility data are well reproduced. For all the other objects, the SED-best-fit model predicts MIR visibilities which are significantly lower than is observed; i.e. the data require a much more "compact" MIR brightness. We conclude that, in general, the disk model when tuned to fit only the SEDs produces inadequate visibility predictions. This is an important observation, given that these models are in wide usage in the field, but the most common situation is the lack of spatially resolved data.

One-Rim Flared Disk Model Fitting to SEDs and Visibilities

We now use the same model to fit both the SED and MIR visibility data simultaneously. Since the SEDs have many more data points, we increase the weights of the interferometer data accordingly (by the ratio of the number of data points). The results are shown in Figure 5 and Table 6, and can be summarized as follows: (1) For the 7 objects with observed silicate emission. (1a) A solution that fits well both the SED and MIR visibility data now exists for SU Aur, at a small cost in reduced agreement with the SED data. It can be seen in Table 6 that this is achieved by increasing the MIR flux contribution from the rim, resulting in a more compact source of MIR emission.
(1b) We also note that for v1295 Aql, the fit to the visibility data is significantly improved, but at the expense of no longer fitting the SED well at all; i.e. in this case, forcing more rim MIR emission results in greatly overshooting the NIR bump. (1c) In summary, a total of 4 of the 7 objects with observed silicate emission are well fit by the model (the same 3 as in Section 4.3.1 plus SU Aur); for the other 3 the general feature remains that the MIR visibilities are lower than observed and a more compact MIR brightness is required. (2) For DG Tau, which does not exhibit silicate features in the SED, we note that this model can match the SED and MIR visibility levels relatively well (except at the longer KIN wavelengths), perhaps indicating that the general features of the model have some applicability to this object. Interestingly, for the other 3 objects with no silicate feature in the SED (v1685 Cyg, MWC 1080 and v1057 Cyg) the general feature remains that the MIR visibilities are lower than observed and a more compact MIR brightness is required.

Two-Rim Flared Disk Model Fitting to SEDs and Visibilities

As shown above, the "one-rim + disk" model tends to underestimate the observed MIR visibilities, indicating that a significant fraction of the flux originates in a more compact source than predicted by this model. In fact, the required size scale for the MIR brightness is comparable to that of the inner rim, but this component alone cannot explain the observations because the relatively large dust grains required to survive direct exposure to the stellar radiation are not able to produce the required MIR flux. Rather than attempting to tune this model, we explore here the viability of a more radical modification to the disk structure, motivated in part by the SED-modelling work of McClure et al. (2013).
The precise location and shape of the inner dust rim is determined by processes such as the settling of larger grains to the disk mid-plane and the dependence of the dust sublimation temperature on the local gas density, dust grain size, and chemical composition, collectively leading to curved walls, which McClure et al. (2013) successfully model using a two-layer approximation. Here we implement the two-layer approximation using two distinct inner rims of different heights, but otherwise each modelled as in Section 4.3 (see Figure 1; compare with Figure 1 of McClure et al. 2013). The second rim is located behind the first rim (further from the star), is taller than the first rim, and is therefore still partially directly heated by the star. Thus smaller dust grains can survive in the second rim, which leads to the required compact MIR emission, compared to that arising in the extended disk behind it. The emission in the region between the two rims is difficult to predict due to possible rim-shadowing effects; thus for simplicity we assume no emission. For the smaller dust grains of the second rim we adopt a size of 0.25 µm. We assume the dust composition of the two rims to be the same (described in Section 4.3). The 2-rim model therefore has two additional degrees of freedom: the scale parameter and temperature (T_rim2) of the second rim. The results are shown in Figure 6 and Table 7 (in the calculation of χ²_red, the number of model free parameters is now p = 5). As expected, rim 2 (located at 1 to a few AU) is cooler and contributes mainly to the MIR flux. In order to assess the relative quality of the 1- and 2-rim models, while taking into account the increased degrees of freedom of the 2-rim model, we use the Akaike Information Criterion (AIC): AIC = 2p + χ²_red, where p is the number of model free parameters. The AIC still favors models with lower χ²_red, but penalizes the increased degrees of freedom.
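The model comparison described above reduces to a few lines; the χ²_red values below are hypothetical, chosen to show the break-even point. With p = 5 for the 2-rim model and p = 3 for the 1-rim model, the 2-rim model must improve χ²_red by more than 4 to be favored:

```python
def aic(chi2_red, n_params):
    """Akaike Information Criterion in the form used here: AIC = 2p + reduced chi-squared."""
    return 2 * n_params + chi2_red

def delta_aic(chi2_red_2rim, chi2_red_1rim, p2=5, p1=3):
    """Delta(AIC) = AIC_2rim - AIC_1rim; a negative value favors the 2-rim model."""
    return aic(chi2_red_2rim, p2) - aic(chi2_red_1rim, p1)

d = delta_aic(2.0, 7.0)  # chi2_red improves by 5 (> 4), so the 2-rim model wins: d = -1
```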
Table 7 shows ∆AIC = AIC_2rim − AIC_1rim; a negative value favors the 2-rim model, which formally happens for 7 of the 11 objects when considering the fits to the SEDs, and for 5 of the 11 objects when considering the fits to the MIR visibilities. We summarize the results as follows. For two of the objects, the 2-rim model still does not provide good fits, either because the MIR visibilities are not well fit (MWC 480) or because the SED is not well fit (v1295 Aql). Both objects have high f^MIR_rim2 and very low values of the disk flaring index (much lower than ξ = 2/7 for hydrostatic equilibrium), such that the flared disk has been essentially replaced by the second rim. For v1295 Aql, it may be that our model fails because, contrary to our assumption, the disk inclination is high (values in the literature range from 0−65 deg; Eisner et al. (2004); Isella et al. (2006)). Another possibility for this object is that the model is valid, but the dust properties in rim 2 need to be modified, given that, as mentioned above, this component dominates the MIR emission and has the correct size scale, but fails mainly in that it significantly overpredicts the NIR fluxes. For all other cases the 2-rim model leads to improved results. For MWC 275, the MIR visibilities could not be fit at all by the 1-rim model, but the 2-rim model enables a good simultaneous fit to the SED and MIR visibilities. The same is true for MWC 758, but with a more modest χ²_V² improvement. For three other objects (SU Aur, RY Tau and AB Aur) the 2-rim model maintains a similar fit to the MIR visibilities, but enables much improved fits to the SEDs, especially in the ∼5−12 µm spectral region.

Additional comments on specific objects

SU Aur: The disk mass is log(M_D/M_⊙) = −5.1^{+1.4}_{−0.8} (Akeson et al. 2002), relatively low compared to classic T Tauri stars. Our 1-rim and 2-rim model solutions have the lowest disk flaring index; the disk's near-flatness may be related to its low mass.
RY Tau: Formally, the 2-rim model is preferred. However, the second rim is located at 1.6 AU with a temperature of 1050 K, both similar to the typical size scale and temperature of the extended disk component in the one-rim disk model. This essentially indicates a degeneracy between the two models.

MWC 758: In this case the second rim and the extended disk contribute comparable MIR fluxes. Here again we obtain relatively low flaring indexes, in agreement with Beskrovnaya et al. (1999). The second rim is located at 6.8 AU from the central star, much further than the 0.54 AU rim location in the 1-rim model. In other words, formally the 2-rim model replaces the inner ∼6.8 AU of the extended disk with a narrow ring structure.

DG Tau: As noted above this is a very active object, with silicate emission that is variable on timescales of weeks (Woodward et al. 2004; Bary et al. 2009) and sometimes appears in absorption (Sitko et al. 2008), perhaps indicating that a large amount of cool dust is lifted up above the disk surface and is causing self-absorption over the emission region (Tambovtseva & Grinin 2008). At the epochs of our observations, we do not detect the 10 µm silicate feature, while the coherent flux (orange solid line in Figure 2) suggests an absorption feature. Since the KIN resolves the disk partially or fully, the coherent flux must come from regions smaller than the disk, implying that the lifted dust causing the absorption is located at ≤ 1 AU, where the dynamical timescale is consistent with the observed variation timescale of the silicate feature.

The spectral shape of the MIR visibilities

We note that the MIR visibility data for our objects display a variety of spectral shapes: most are concave-up, but some are monotonically increasing (MWC 480, v1295 Aql) and MWC 275 is the only one with a concave-down shape (perhaps signaling a unique characteristic of this object).
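The variety of spectral shapes can be explored with a minimal two-component toy model: an unresolved compact source (V = 1, e.g. an inner rim) plus an extended Gaussian disk whose size grows with wavelength. The baseline, disk size, wavelength scaling and flux fraction below are illustrative assumptions, not fitted values:

```python
import math

def gaussian_visibility(theta_fwhm_mas, baseline_m, wavelength_um):
    """Visibility of a circular Gaussian brightness of given FWHM (milliarcsec)."""
    mas_to_rad = math.pi / (180.0 * 3600.0 * 1000.0)
    theta = theta_fwhm_mas * mas_to_rad
    x = math.pi * theta * baseline_m / (wavelength_um * 1e-6)
    return math.exp(-x * x / (4.0 * math.log(2.0)))

def total_visibility(f_compact, wavelength_um, baseline_m=85.0,
                     disk_fwhm_mas_at_10um=30.0):
    """Flux-weighted visibility of an unresolved compact source (V = 1)
    plus an extended Gaussian disk whose size grows linearly with wavelength
    (an assumed scaling, standing in for a radial temperature gradient)."""
    disk_fwhm = disk_fwhm_mas_at_10um * (wavelength_um / 10.0)
    v_disk = gaussian_visibility(disk_fwhm, baseline_m, wavelength_um)
    return f_compact * 1.0 + (1.0 - f_compact) * v_disk

# When the disk is nearly resolved, V tends to the fractional compact flux:
print(round(total_visibility(0.4, 10.0), 3))
```

Evaluating `total_visibility` across the N band then shows the competition described in the text: the compact term pushes V toward the (wavelength-dependent) fractional compact flux, while the growing disk size and the decreasing angular resolution pull the extended term in opposite directions.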
Our model is too simple to reproduce the shapes exactly; here we provide qualitative arguments for how such differing spectral shapes can arise in a multi-component model for the emission. Consider a model where the MIR emission arises in two components, as in the 2-rim model considered above, where rim 2 is a compact source of MIR emission relative to the extended disk behind it. For the compact component, the visibilities will increase with wavelength as the angular resolution decreases at longer wavelengths. For the extended component, with a radial temperature profile such that the disk temperatures are lower at larger radii, the characteristic MIR size increases with wavelength and therefore the visibilities decrease; an effect that competes with the visibility increase due to lower angular resolution at longer wavelengths. The resulting shape will depend on the balance of these competing effects for the specific case of each object, as follows. In the limiting case that the compact component is completely unresolved and the extended component is completely resolved, the MIR visibilities are equal to the fractional flux in the compact component, and a concave-up spectral shape will result if the MIR flux in the compact component is less peaked than that of the extended component; and vice versa for the concave-down spectral shape. In another limiting case, the compact component dominates the MIR fluxes, and the MIR visibilities increase monotonically with wavelength as a result of the lower angular resolution. If on the other hand the extended component dominates the flux, either a concave-up or concave-down shape can result, depending on which of the effects described above dominates.

Summary and Conclusions

We have measured the infrared visibilities and near-simultaneous SEDs of 11 young stellar objects, several of them spatially resolved at MIR wavelengths and long baselines for the first time.
We use a simple geometrical model to provide basic information about the infrared brightness, namely the NIR and MIR size scales, independent of the details of specific physical models. Further insight into the disk structure can be gained by studying how the characteristic sizes relate to the properties of the central star. The KIN MIR sizes (measured as R_rim + HWHM_Disk; Section 4.2.2) appear better correlated with stellar luminosity than found by previous authors, although direct comparisons are complicated by the different models assumed. We test current disk paradigms using physical disk models in the form of a semi-analytical dust rim + disk model, and find that in several notable cases the model fails to reproduce the measured MIR visibilities and the SEDs simultaneously, with the data requiring relatively compact MIR emission (1−7 AU). We explore the possibility that the MIR brightness is better modeled by taking into consideration the proposed layered morphology of the curved inner rim, which naturally leads to a series of inner rims which contain different dust populations (grain sizes) (McClure et al. 2013) and therefore contribute MIR emission on different size scales. We find that when implemented as a 2-rim approximation, the fits to the SEDs and MIR visibilities are significantly improved in most cases. We leave to future work extensions to the model which may alleviate the shortcomings of our 1- or 2-rim models, such as an exploration of the effects of varying the dust species or the inclusion of viscous heating processes. Instead of dust radial and scale height variations (layered disks), the 2-rim model could be mimicking structures due to forming planets (rings and gaps; Menu et al. (2015)). However, the relatively high correlation found in our mid-infrared size vs. stellar luminosity relation favors layered disk morphologies, because of the higher stochasticity expected to be associated with early planet formation processes.
The detailed disk structure and brightness is likely to be complex, and to vary from object to object, emphasizing the need for theoretical progress driven by new observations. Spatially resolved MIR observations are a sensitive way to probe the disk vertical structure and time evolution, and our results highlight the fact that conventional smooth disk models developed to fit SEDs alone almost always fail to reproduce the MIR spatial scales. This is an important consideration in view of active on-going efforts to model circumstellar disks and the planet formation process within them. Improved baseline coverage and ultimately model-independent images of the inner disk at MIR wavelengths from the next-generation VLTI/MATISSE instrument (Lopez et al. 2014) or the proposed Planet Formation Imager (PFI; Monnier et al. 2014) will be invaluable in offering a direct view of the ∼1−10 AU planet formation region, much like the transformative knowledge gains now being delivered by ALMA observations of the cooler, more distant regions of pre-planetary disks (e.g. the images of the HL Tau disk in ALMA Partnership et al. 2015).

A. Validation of the semi-analytical flared disk model

In order to verify our semi-analytical implementation of the flared disk model, we compare its predictions with the numerical radiative transfer benchmark models of Pinte et al. (2009). The model parameters used for the benchmark comparison are shown in Table 8. The parameters in our semi-analytical model are also set to best approximate the benchmark model; namely, the flaring index is set to 0.125 in order to match the disk scale height, and we suppress the dust inner rim emission, since this component is absent in the benchmark models. Figure 7 summarizes the comparisons between our semi-analytical model and the benchmark results for the case of the TORUS code in Pinte et al. (2009). The left panel shows the SED comparison.
As can be seen, the semi-analytical model produces lower fluxes than TORUS from 2−20 µm, perhaps due to scattering effects not included in our model (Dullemond et al. 2001). The middle and right panels compare the spatial flux distribution, i.e. λF_λ × R as a function of radius (T. Harries, priv. comm.; the right panel zooms in on the inner disk on a linear scale). The results are very similar, although the semi-analytical model produces more centrally peaked emission. These differences are not surprising given the very different detailed implementations of the model, and do not affect the conclusions in this paper. We conclude that for the purposes of this paper our semi-analytical implementation of the flared disk model has been validated.

Figure caption: The middle panels show the NIR and MIR interferometer data (visibility modulus) for each of the NIR (green) and MIR (red) bandpasses. The right panels show the best-fit characteristic radii as a function of wavelength in each of the bandpasses, i.e. at the NIR (green) wavelengths they are the best-fit radii of the ring representing the inner dust rim (R_rim), and at the MIR (red) wavelengths they are the best-fit R_rim + HWHM_Disk, where HWHM_Disk is the half-width at half-maximum of the Gaussian brightness representing the extended disk (see text, Section 4.2.1). The arrow symbols represent the upper and lower 1σ range. In some cases the extended disk sizes are missing because no suitable solution exists (see text, Section 4.2.2).

Figure 4 caption: The models are shown as green lines, as follows: in the SED panels, the dotted line is the star, the short-dashed line is the rim, the triple-dotted-dashed line is the surface layer, the long-dashed line is the interior layer, and the solid line is the total flux. The 4 objects in the bottom panels are the ones for which no silicate feature is observed in the SEDs, and are shown here for illustrative purposes and to evaluate how well the models reproduce the SED and visibility levels only.
Figure caption: As Figure 4, with one addition: the dotted-dashed line represents the emission from the second rim. The 4 objects in the bottom panels are the ones for which no silicate feature is observed in the SEDs, and are shown here for illustrative purposes and to evaluate how well the models reproduce the SED and visibility levels only.

Note. - For v1295 Aql the disk inclination is very uncertain and we adopt a face-on geometry based on indications of low projected rotational velocity (Acke & van den Ancker 2004; Pogodin et al. 2005) and interferometer data (Eisner et al. 2004). A PIONIER paper in preparation also indicates a low inclination for v1295 Aql.

References: (a) Corder et al. (2005)

Table 8. Benchmark model parameters:
Disk density: ρ(r, z) = ρ_0(r) e^{−z²/2h(r)²}
Disk scale height: h(r) = (10 AU)(r/100 AU)^1.125
Disk surface density: Σ(r) = Σ_0 (r/100 AU)^{−1.5}
Total disk mass: 3 × 10^{−5} M_⊙
Disk inner radius: 0.1 AU
Disk outer radius: 400 AU
Dust grain size: 1 µm
Dust grain density: 3.5 g/cm³
Dust grain material: silicates
Note. - Parameters are the same as in the benchmark paper Pinte et al. (2009).
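The benchmark disk structure tabulated above can be evaluated directly. This is a plain transcription of the tabulated formulas, with the surface-density normalization Σ_0 left free (it is fixed by the total disk mass, which we do not reproduce here):

```python
import math

def scale_height_au(r_au):
    """Benchmark disk scale height: h(r) = 10 AU * (r / 100 AU)^1.125."""
    return 10.0 * (r_au / 100.0) ** 1.125

def surface_density_ratio(r_au):
    """Sigma(r) / Sigma_0 = (r / 100 AU)^-1.5 (normalization left free)."""
    return (r_au / 100.0) ** -1.5

def density_shape(r_au, z_au):
    """Vertical Gaussian shape of rho(r, z) relative to the midplane rho_0(r)."""
    h = scale_height_au(r_au)
    return math.exp(-z_au ** 2 / (2.0 * h ** 2))

print(scale_height_au(100.0))      # 10 AU at r = 100 AU, by construction
print(density_shape(100.0, 10.0))  # one scale height above the midplane
```

The flaring index of 1.125 here is the exponent of h(r); the semi-analytical model's flaring index of 0.125 quoted in the appendix refers to the same profile expressed as h/r ∝ r^0.125.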
2016-04-22T17:39:49.000Z
2016-04-22T00:00:00.000
{ "year": 2016, "sha1": "7926159f58a6d9c85fe2da90bfcc22ae6ad73626", "oa_license": null, "oa_url": "https://ore.exeter.ac.uk/repository/bitstream/10871/30943/3/Millan-Gabet_2016_ApJ_826_120.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1954da6cabe62d79f46eb5d53bfab1c7985d92cc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
251517319
pes2o/s2orc
v3-fos-license
Realist evaluation of the impact, viability and transferability of an alcohol harm reduction support programme based on mental health recovery: the Vitae study protocol

Introduction: Addiction is considered a chronic disease associated with a high rate of relapse as a consequence of the addictive condition. Most current therapeutic work focuses on the notion of relapse prevention or avoidance and the control of its determinants. Since only a small portion of patients can access alcohol addiction treatment, it is crucial to find a way to offer new support towards safe consumption, reduction or cessation. The harm reduction (HR) approach and the mental health recovery perspective offer another way to support the patient with alcohol addiction. Vitae is a realist evaluation of the impact, viability and transferability of the IACA! programme, an HR programme based on the principle of psychosocial recovery for people with alcohol use disorders.

Methods and analysis: The Vitae study adheres to the theory-driven evaluation framework, where the realist evaluation method and contribution analysis are used to explore the effects, mechanisms and influence of context on the outcomes and to develop and adjust an intervention theory. This study is a 12-month, multi-case, longitudinal descriptive pilot study using mixed methods. It is multi-centred and carried out in 10 addiction treatment or prevention centres. In this study, outcomes are related to the evolution of alcohol use and the beneficiaries' trajectory in terms of psychosocial recovery during the 12 months after the start of IACA!. The target number of participants is 100 beneficiaries and 23 professionals.

Ethics and dissemination: This research was approved by the Committee for the Protection of Persons Ouest V (n°: 21/008-3HPS) and was reported to the French National Agency for the Safety of Health Products. All participants will provide consent prior to participation.
The results will be reported in international peer-reviewed journals and presented at scientific and public conferences.

Trial registration numbers: NCT04927455; ID-RCB 2020-A03371-38.

Strengths and limitations of this study
⇒ Consistent with bottom-up approaches, our study is a realist evaluation based on a natural experiment.
⇒ Mobilising mixed methods, this study will evaluate the impact, viability and transferability of a complex harm reduction intervention (IACA!).
⇒ This study will mobilise multiple modes of data collection: interviews with four samples, observations and questionnaires.
⇒ We anticipate a potential risk of attrition during the study due to structural and circumstantial situations.
⇒ The Vitae study will not use any kind of biological or medical information and will rely on declarative data.

INTRODUCTION

Scientific context and issues

In 2016, an estimated 80 000 people died of alcohol-attributable cancer, and about 1.9 million years of life were lost due to premature mortality or disability in the European Union (EU). 1 Alcohol use is a well-known risk factor for disease and injury. 2 3 A large contribution to this burden is alcohol use disorders (AUDs) (defined as alcohol dependence (AD) and harmful use of alcohol; see International Classification of Diseases, 10th revision) and AD. 4 In France, in 2015, more than 27 000 new cancer cases (almost 8% of all new cancer cases) were estimated to be attributable to alcohol, whereas the worldwide estimate was 5.8% in 2012. 5 Heavy drinking was responsible for 4.4% of all new cancer cases 6 and was the second leading cause of so-called preventable cancers. 7 A recent review also showed that, worldwide, alcohol use can explain up to 27% of the socioeconomic inequalities in mortality. 8 Subjects with alcohol addiction (or AUD) are known to experience a range of social harms because of their own excess drinking, including family disruption, employment
problems, criminal convictions and financial problems. 9 Assessments of these problems are scarcer, but social cost studies give some hints of the alcohol-attributable consequences in selected countries. 10 11 Addiction is considered a chronic disease 12 13 associated with a high rate of relapse as a consequence of the addictive condition. In this perspective, treatment, whatever the addiction, aims to obtain and maintain abstinence, or at least a significant reduction in use or a controlled consumption, by avoiding situations presenting a risk of relapse and through the management of craving. Most current therapeutic work focuses on the notion of relapse prevention or avoidance and the control of its determinants. [13][14][15] Since only a small portion of patients can access alcohol addiction treatment, it is of paramount importance to find a way to offer new support towards safe consumption, reduction or cessation. The harm reduction (HR) approach and the mental health recovery perspective offer another way to support the patient with alcohol addiction. HR refers to interventions that aim to reduce the adverse health and socioeconomic consequences of substance use without focusing on abstinence, reduced use or addiction management. 16 The HR approach is based on:
► Suspension of moral judgement on use.
► The implementation of a proximity approach, based on reaching people who use alcohol 'where they are' (going to them or through outreach, implemented through mobile teams, street work or even intervention in a festive environment) and, on the other hand, on the unconditional reception of people 'where they are' with their current consumption (ie, without any requirement for a commitment to stop drug use or to a care or integration approach).
► The participation, from a community health perspective, of people who use drugs in the development and implementation of interventions, and the recognition of their experiential knowledge (knowledge of products and their effects, use practices, consumption scenes, lifestyles and peer group codes, and the ability to define and relay low-risk practices).
In some respects, this concept is very similar to that of mental health recovery, 17 which articulates cure and care, autonomy and dependence, vulnerability and capacity. It is a non-medical process of getting better clinically, socially and functionally. It aims at seeking and supporting the person's resources to build solutions. This process focuses on the positive transformations that the person experiences when recovering and the environmental factors that facilitate or hinder them. 18 Even though this is not their primary objective, HR and mental health recovery are likely to influence the severity of addiction and relapse. Since 2013 the organisation Santé! (Marseille, PACA region, France) has developed a risk and harm reduction programme (IACA!) based on the principle of psychosocial recovery used in the 'Housing First' programme 19 for people with AUD. This programme aims to reintegrate the person with problem alcohol use into a path of care, by removing the psychological contributors to medical and social isolation (shame, guilt, feeling of failure), stabilising alcohol use (sometimes including access to alcohol) and providing security and support for psychosocial recovery. The IACA! intervention has already shown its effects on alcohol consumption in the centre where it was implemented and is now being extended to new sites. In order to assess the conditions under which such an intervention can be deployed in other centres and whether its initial effect is generalisable, we developed the Vitae study. This pilot study is a realist evaluation of the impact, viability and transferability of the IACA! programme.
This pilot study will be used to collect data prior to the implementation of a fully controlled effectiveness trial.

METHODS

This protocol is consistent with the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) 2013 statement: defining standard protocol items for clinical trials.

Aim, design and setting of the study

Aim of the study

The IACA! intervention proposes actions likely to mitigate factors that are predictive of relapse (feelings of dissatisfaction, anxiety, stress management, family and social support, etc), thus facilitating spontaneous cessation while promoting the well-being of individuals. The IACA! intervention has already shown its effects on alcohol consumption in the centre where it was tested. The question now is to confirm the results observed over the last 2 years and to explain them in a perspective of scaling up. As the IACA! intervention was only tested in one centre, operating on an associative model and not on a care model, the question arises as to its transferability. For this reason, we decided to conduct a pilot study 20 prior to an effectiveness trial. The aims of the present study are:
► To evaluate the transferability of IACA!, in terms of results, to various centres that care for people with problems related to excessive alcohol use (10 different centres, addiction treatment centres and/or psychosocial support centres, in the Nouvelle-Aquitaine and PACA regions; see online supplemental table 1).
► To assess the conditions of transferability, including viability, of IACA! in these 10 centres.
► To evaluate the feasibility of a multi-centred controlled efficacy trial.

Theoretical framework

Transferability is the extent to which the measured effectiveness of an applicable intervention could be achieved in another setting.
21 It depends on multiple factors such as population and stakeholder characteristics, contextual factors, modalities of intervention delivery, and the modalities and conditions of implementation. 22 When studying transferability, an analysis of viable validity is also essential. 23 As defined by Chen, viability evaluation 'assesses the extent to which an intervention program is viable in the real world. More specifically, it evaluates whether the intervention:
► Can recruit and/or retain ordinary clients,
► Can be adequately implemented by ordinary implementers,
► Is suitable for ordinary implementing organizations to coordinate intervention-related activities,
► Is affordable,
► Is evaluable, and
► Enables ordinary clients and other stakeholders to view and experience how well it solves the problem'. 23
The Vitae study adheres to the theory-driven evaluation framework [24][25][26][27] where the realist evaluation method and contribution analysis 28 29 are used to explore the effects, mechanisms and influence of context on the outcomes and to develop and adjust an intervention theory. This case-study method will help to set out the contribution 'story': in light of the multiple factors influencing the result, does the intervention contribute to an observed result and in what way? 28 This method is intended to provide 'an in-depth view of how things work'. 24 In realist evaluation, developed by Pawson and Tilley, 30 the effectiveness of the intervention depends on the underlying mechanisms at play within a given context. Realist evaluation is about identifying context-mechanism-outcome configurations (CMOs). The aim is to understand how and under what circumstances an intervention works. A middle-range theory (ie, a theory aimed at describing the interactions between outcomes, mechanisms and contexts) is set out to highlight the mutual influences of intervention and context.
31 32 Hence, the evaluation is about identifying middle-range theories. Hypothesised and validated by empirical investigations, these CMO configurations help to understand how an intervention brings about change, bearing in mind context and target group. 31 32 The recurrence of CMOs is observed in successive case studies or in mixed protocols, such as realist trials. 32 Indeed, to take context into account, realist evaluators observe in successive cases what Lawson (quoted by Pawson in 2006 33 ) calls demi-regularities of CMOs (ie, regular although not necessarily permanent occurrences of an outcome when an intervention triggers one or more mechanisms in a given context). 32 Studying these recurrences in different contexts allows the isolation of key elements that are replicable in a family of contexts. This gives rise to middle-range theories that become stronger as progress is made through the cases. 'These middle-range theories, in certain conditions, predict possible intervention outcomes in contexts different from the one in which the intervention was tested'. 32

Applied to our case

As the realist principle is suitable for studying non-linear interactions in complex systems, we adopted this approach. The intervention under investigation applies to an operational programme, and it is therefore important to identify its key functions, 34 35 that is, its interventional or contextual components underpinning its effectiveness. Whereas viability and transferability are usually studied with scales that list attributes and criteria in order to rate or ease the transferability of an intervention, 21 36 37 we chose to mobilise realist evaluation. Indeed, studying transferability and viability through the theory-driven lens will generate a dynamic and precise analysis of the IACA! intervention because 'theory-based evaluation is demonstrating its capacity to help readers understand how and why a programme works or fails to work.
Knowing only outcomes, even if we know them with irreproachable validity, does not tell us enough to inform programme improvement or policy revision. Evaluation needs to get inside the black box and to do so systematically'. 26 In this study, each institution deploying the IACA! programme, with its own context, will constitute a case. For each case, the intervention will be studied to identify the mechanisms at play in the given context along with the variation in outcomes. CMO configurations will be identified through an analysis of each case. A cross-case analysis will highlight recurrent CMO configurations and thus identify key features for possible replication. In our study, outcomes are related to the evolution of alcohol use at 12 months after the start of IACA! and the beneficiaries' trajectory during these 12 months in terms of psychosocial recovery. Drawing on the literature and on the experience of professionals delivering the intervention, we will first set out initial middle-range theories, 30 33 which we will test in each case (ie, centre) by collecting qualitative and quantitative data. 32 The mechanisms will be identified qualitatively according to the definition of Ridde et al: 'a mechanism is an element of reasoning and reaction of an agent with regard to an intervention productive of an outcome in a given context'. 38 39 It 'characterizes and punctuates the process of change and hence, the production of outcomes'. 40 Contextual elements will be included among all the elements collected qualitatively that satisfy the following definition: elements located in time and space that may affect the intervention and the outcomes produced, whether they relate to the centres, the professionals, the beneficiaries or the operational setting. In a realist approach, interventional elements are part of the context.
Therefore, we can distinguish between Ci (for contextual factors linked to the intervention) and Ce (for contextual factors not linked to the intervention, ie, external factors).

IACA! intervention and its implementation

IACA! intervention

Created in 2013 in Marseille by an addictology professional and a social support professional, the association Santé! in the PACA region has developed a risk and harm reduction approach for people who consume alcohol, based, among other things, on the principle of psychosocial recovery as used in the 'Housing First' programme. 19 The intervention, called IACA!, aims to reintegrate the person into a healthcare pathway by removing the barriers that cause medical and social isolation (shame, guilt, feelings of failure), stabilising the person's use and ensuring their safety, and supporting their psychosocial recovery. As shown in figure 1 and depending on the person's needs, the intervention aims to:
1. Provide advice, reassurance, listening, appeasement.
2. Secure and/or reorganise consumption in order to avoid periods of withdrawal syndrome (vulnerability factors).
3. Activate rights to maintain/obtain appropriate and satisfactory social integration.
4. Provide psychological support.
5. Adapt, build and coordinate a health path (to avoid break-up or non-recourse).
6. Promote social links.
7. Consolidate long-term alcohol consumption strategies.
8. IF REQUESTED: accompaniment for a cessation experiment.
This support is organised in four sequences:
1st phase (reception/building the alliance): unburden people in relation to their issues (lifting shame); value their strategies without judging their consumption; inform about and define the IACA! support, in a break with traditional support.
2nd phase (securing): with the person, identify the situations that reinforce consumption and act on them: securing consumption to avoid risk situations (stress, periods of lack, dehydration, etc); avoiding peaks in consumption; ensuring basic needs such as food, hydration, safety, sleep, etc.
3rd phase (stabilisation, in parallel with or following phase 2): support a project and reconstruction objectives over several months; stabilise consumption; re-engage the person in a care pathway adapted to their needs and projects; tackle social, family and professional isolation; and secure the environment by identifying a set of professionals needed to solve the main difficulties identified.
4th phase (progressive reduction of support): monitoring with regard to sustainability and autonomy; checking that the support is satisfactory.
The initial results of this programme over 1 year were promising: of the 17 people who received the intervention, all had a social or health benefit, and 13 of these benefits were associated with stabilisation (n=4), reduction (n=7) or cessation (n=2) of alcohol use after 1 year. Thus, in addition to the positive results in terms of psychosocial recovery, and even if the goal is not the cessation of alcohol consumption, the programme is potentially promising since it sometimes leads to the cessation of consumption and secures/reduces consumption for half of the people (back to occasional consumption). The programme therefore initially provides what is recommended in any attempt to quit, which could explain this spontaneous reduction or cessation.

Implementation in 10 new centres

The 10 centres will be supported by Santé! in the implementation of IACA! according to the following procedures:
► Training of 10 pairs of professionals (2 per centre) in charge of accompanying beneficiaries in the centres.
► Anchoring an alcohol HR support practice: support for the implementation and adaptation of the IACA! method within each centre.
► Adaptation and improvement: changes to the IACA! method and its tools.

Study design
This study is a 12-month, multi-case, longitudinal descriptive pilot study using mixed methods (quantitative and qualitative). It is multi-centred and national, and carried out in 10 addiction treatment or prevention centres (4 in the PACA region and 6 in the Nouvelle-Aquitaine region). These sites, all in the health and social sector, are heterogeneous (see online supplemental table 1) in their aims, organisation and target populations. Among the 10 centres there are 5 CSAPAs (addiction treatment, support and prevention centres providing information, medical, psychological and social evaluations of requests and needs, and orientation), 1 CAARUD (reception and accompaniment centre for harm reduction for drug users), 4 CHRS (accommodation and social rehabilitation centres) and 1 IML (intermediation rental programme). The CSAPAs have a target population which is less vulnerable than that of the other centres. Indeed, most of the CSAPAs receive users who, although they may be followed up by care, whether specialised in addictology or not, generally have more problematic and less 'controlled' uses than the general population. They also often live in more precarious social situations.

Characteristics of participants
To validate the implementation of IACA! and highlight the conditions of transferability of this programme, we will collect data from three types of population:
► Individuals receiving support from the IACA! intervention (called beneficiaries).
► Professionals implementing the IACA! intervention, that is, the pairs in charge of accompanying the beneficiaries in the centres as well as the persons in charge of these centres.
► Professionals from Santé! supporting the deployment of the IACA! intervention.
The beneficiaries are all persons integrating the programme in the project's partner sites and who consume alcohol.
The professionals will be specialised educators, social workers, nurses, social and solidarity economy advisors, etc. The inclusion criteria will be as follows:
► For the beneficiaries: being over 18 years old, willing to participate, having started the IACA! programme 15 days beforehand or less, and being followed up by one of the 10 centres in the study. Beneficiaries will be excluded if they have a severe somatic or psychiatric pathology that is incompatible with a good understanding of the assessment tools; if they have difficulty understanding and/or writing French; if they are unreachable by telephone; if they are participating in another research project with an ongoing exclusion period; if they are placed under court protection; or if they are pregnant.
► For professionals from centres implementing IACA!: having been trained in IACA!, willing to participate, and working in the centres participating in the implementation of IACA!.
► For the professionals in charge of the centres: having participated in the implementation of the IACA! method in their centres, and willing to participate.
► For the Santé! professionals: participating or having recently participated in the implementation of IACA!.

Data collection
In order to collect information from multiple complementary sources we will use quantitative and qualitative data collection methodologies.

Quantitative data
The aim is to collect longitudinal data concerning the effects of IACA! on quality of life, mental health recovery and alcohol consumption. All participants who meet the eligibility criteria will be offered participation in the study. The centres' professionals will inform patients being treated with IACA! of the existence of the Vitae study and the possibility of participating in it.
A meeting will then be organised between the patients and the research team, in order to offer them the opportunity to participate in this research and to inform them of:
► The purpose of the study.
► The computerised processing of data on the participant that will be collected in the course of this research, and his/her rights of access to, opposition to and rectification of these data.
The baseline M0 will then be scheduled (a maximum of 15 days after starting the IACA! programme). Online supplemental table 2 shows the different data that will be collected on 100 patients (10 per centre), prospectively, by trained clinical research staff. During the baseline inclusion (M0), participants will be interviewed using:
► The Addiction Severity Index (ASI).
► The Treatment Service Review (TSR).
► The Mini International Neuropsychiatric Interview (MINI).
► The Empowerment Scale.
At each follow-up, participants will be assessed with a follow-up ASI, TSR interview, craving assessment and Empowerment Scale.

Addiction Severity Index
The ASI is a semi-structured interview designed to assess impairments that commonly occur due to substance-related disorders. 41 A modified and validated 45 min French version of the ASI will be used to take into account tobacco and addictive behaviours. 42 The ASI explores six areas that may be affected by addiction: medical status, employment/support status, substance and behavioural addiction, family and social relationships, legal status and psychological status. These data are used to generate composite scores (CSs) for each domain, thereby reflecting the severity of the subject's condition. CSs range from 0 to 1, with worsening severity as the scores move closer to one. [42][43][44] The ASI will be used at inclusion and then every 3 months during the 12-month intervention period.
Mini International Neuropsychiatric Interview
The MINI is a structured diagnostic interview providing a standardised assessment of 18 major psychiatric disorders defined according to Axis I DSM-IV (anxiety disorders, mood disorders, psychotic disorders, addictive disorders, eating disorders) and the diagnosis of antisocial personality disorder. 45 46 A 30 min version of the MINI adapted for DSM-5 criteria will be used.

Craving evaluation scale
The craving evaluation scale developed by the University of Bordeaux Addiction Team in the SANPSY Laboratory will be used. It is a 5 min hetero-evaluation of craving for all substances and addictive behaviours manifested now or in the past. This tool explores the frequency of craving, corresponding to the number of days craving was reported over the last 30 days, as well as the mean and maximum intensity on a scale ranging from 0 (no craving) to 10 (extreme craving).

Treatment Service Review
The TSR, 6th version, is an inventory of the medical, psychosocial and psycho-educational contacts of the subject over the last 30 days. 47 48 This instrument allows a quantitative evaluation of the effective medico-psychosocial management of a subject. It was validated in French, and is now integrated into the ASI evaluation as it was developed by the same group that developed the ASI.

Empowerment Scale
The Empowerment Scale measures personal empowerment by examining the concepts of hope, social acceptance and quality of life. 49 50 It is a 28-item scale, with each item rated on four points ranging from 'Strongly Disagree' to 'Strongly Agree'. The total empowerment score is a quantitative variable ranging from 28 to 112. This scale can be divided into sub-dimensions measuring self-efficacy and self-esteem, power and powerlessness, community activism and autonomy, optimism and control over the future, and righteous anger. Online supplemental table 2 shows the different data that will be collected.
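As a concrete illustration of the Empowerment Scale scoring described above, the total score can be computed by summing the 28 item ratings (1 = 'Strongly Disagree' to 4 = 'Strongly Agree'), giving the quoted range of 28 to 112. This is a minimal sketch only; it ignores any reverse-scored items or sub-dimension scoring rules that the validated instrument may specify.

```python
def empowerment_total(item_ratings):
    """Total Empowerment Scale score: sum of 28 items, each rated 1-4."""
    if len(item_ratings) != 28:
        raise ValueError("The Empowerment Scale has exactly 28 items")
    if any(r not in (1, 2, 3, 4) for r in item_ratings):
        raise ValueError("Each item must be rated on the 4-point scale (1-4)")
    # Bounded by 28 (all 'Strongly Disagree') and 112 (all 'Strongly Agree').
    return sum(item_ratings)
```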
Qualitative data
Online supplemental table 3 shows the different data that will be collected. We will identify: skills field, functioning principles, contextual conditions of success, delivering conditions of success, mechanisms and contextual elements (including techniques). The data collected will help to elaborate the principles of initial middle-range theories (to establish how the intervention works in context), and the mechanisms hypothesised as key functions of IACA!. We will monitor these different data in each centre implementing IACA! to verify their integrity in the target centres and to verify the initial theories (contribution analysis). To perform this collection, we will cross two qualitative investigation methods: non-structured interviews and observations.

Non-directive interviews with the centres' professionals (20 interviews)
This investigation will be performed in all centres implementing IACA!, approximately 9 months after the beginning of implementation. A total of 20 interviews will therefore take place over the study period. From these professionals, the data collection will be focused on the data described in online supplemental table 3.

Non-directive interviews with the Santé! professionals (three interviews)
We will interview the Santé! professionals supporting the implementation of IACA! in the 10 investigated centres, approximately 6 months after the beginning of implementation. From these professionals, the data collection will be focused on the data described in online supplemental table 1.

Observations (10 observations)
In addition to interviews with professionals, one observation per centre will be conducted, making a total of 10 observations. The objective is to collect the physical contextual elements, specific to each centre, presented as being potentially key. These observations will be based on an observation grid.
These investigations will be performed after 6 months of implementation.

Non-directive interviews with beneficiaries (100 interviews)
We will perform this qualitative investigation on the beneficiaries included in the IACA! programme (10 per centre). A total of 100 interviews will be conducted. This qualitative investigation will be performed between 9 and 12 months after beginning the IACA! programme. The data collected will be focused on the data described in online supplemental table 3 (ie, mechanisms, contextual conditions of success, delivering conditions of success). To avoid social desirability bias, we will conduct unstructured interviews, in which open-ended questions will be asked of the professionals and beneficiaries. The interview grids and observation log will be designed and pretested during exploratory interviews and observation sessions at the beginning of the study.

Patient and public involvement
The Vitae study does not include any patient or public involvement in terms of setting research priorities, defining research questions or outcomes, providing input into the study design or disseminating the results. The research participants are called on to answer questionnaires or interviews.

Data analysis
Quantitative data
Quantitative evaluations repeated every 3 months will serve to identify the impact of this intervention on the main judgement criterion (ie, the evolution of the severity of alcohol use at 12 months after the start of IACA!) and to describe the subjects and their evolution over 12 months. A descriptive analysis will be performed to describe the severity of the subjects' alcohol use after 12 months of intervention. This evolution of the severity of alcohol use corresponds to the delta of composite scores between M12 and M0. The variables alcohol consumption, alcohol craving and severity of addiction will be described over the 12 months of the intervention in relation to the initial assessment. They will also be compared between centres.
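To make the quantitative analysis concrete, the sketch below computes the primary outcome, the delta of ASI composite scores between M12 and M0 (CSs range from 0 to 1, higher meaning more severe, so a negative delta indicates improvement), together with the Bonferroni adjustment mentioned later in this section for the post-hoc comparisons. This is illustrative only: the domain names, values and p-values are hypothetical, and the actual analyses will be run in JMP.

```python
def composite_score_deltas(cs_m0, cs_m12):
    """Per-domain ASI composite-score change (M12 - M0); negative = improvement."""
    deltas = {}
    for domain in sorted(cs_m0.keys() & cs_m12.keys()):
        for cs in (cs_m0[domain], cs_m12[domain]):
            if not 0.0 <= cs <= 1.0:
                raise ValueError(f"composite score out of [0, 1] for {domain}: {cs}")
        deltas[domain] = cs_m12[domain] - cs_m0[domain]
    return deltas

def bonferroni_adjust(p_values):
    """Bonferroni-adjusted p-values: each raw p multiplied by m, capped at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical example: severity decreases in both domains (negative deltas).
baseline = {"alcohol": 0.62, "psychological": 0.48}
month12 = {"alcohol": 0.35, "psychological": 0.40}
deltas = composite_score_deltas(baseline, month12)

# Hypothetical raw p-values from three post-hoc comparisons; only comparisons
# whose adjusted p-value stays below 0.05 survive the correction.
adjusted = bonferroni_adjust([0.004, 0.020, 0.300])
```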
Qualitative variables will be described according to their frequency and percentage. Quantitative variables will be described according to their means and SD. Second, to determine the factors impacted by the intervention, we will perform repeated analyses of variance to determine whether the variables have changed during the intervention. For the variables showing a change, we will use a comparison test on repeated measures controlling for sociodemographic variables: age, gender, work in the last 3 years, presence or absence of current mood and anxiety disorders, and the centre in which the intervention was carried out (applying the Bonferroni correction). All statistical analyses will be performed with the JMP software (Pro V.15.2.0, SAS Institute).

Qualitative data
A content analysis by case and inter-case (centres) will be conducted. Content analysis encodes, classifies and ranks the communication in order to examine its patterns, trends or distinguishing features, in our case the recurrence of C-M configurations. The NVivo software will be used for this, allowing us to conduct a thematic analysis of the three data sources. The analysis performed by centre, by validating or allowing CMO adjustments, will have to answer four questions:
Question 1: in what contextual and delivery conditions does IACA! seem to produce an impact on patients? By impact we mean the targeted goals presented within the intervention section.
Question 2: to what extent is IACA! feasible and acceptable in the routines of professionals in the different centres?
Question 3: what elements considered as key are actually adaptable (and therefore are non-key)?
Question 4: what elements are mandatory to help to implement IACA!? What elements should be included in a transfer scheme?
The answers to these questions will allow us to highlight the hypothetical key functions (CMO configurations) defined with Santé!
for each centre by identifying (a) the degree of integrity of the key functions in each centre and (b) the degree of adaptation in each centre. We will produce monographs providing a specific description of all key functions in each centre. The timeline (figure 2) presents the key steps of the Vitae study.

QUAN/QUAL analysis
We will then conduct a QUAN/QUAL 51 analysis in each centre in order to compare the results observed on patients in terms of psychosocial recovery and consumption (collected by quantitative questionnaire) with the implementation or completeness of the IACA! intervention, the contextual conditions, the principles of operation and support, and the professional skills needed in the transfer scheme.

ETHICS AND DISSEMINATION
Ethics approval and consent to participate
The Vitae project will be carried out with full respect of current relevant legislation (eg, the Charter of Fundamental Rights of the EU) and international conventions (eg, the Declaration of Helsinki). It follows the relevant French legislation on interventional research protocols involving the human person (Jardé law, category 3 research on prospective data 52 ). The protocol (version 1.2) was approved in March 2021 by the Comité de Protection des Personnes (Committee for the Protection of Persons) Ouest V, n°: 21/008-3HPS, and was reported to the Agence nationale de sécurité du médicament et des produits de santé (ANSM), that is, the French National Agency for the Safety of Medicines and Health Products. All participants who meet the eligibility criteria will be offered participation in the study. Professionals at the centres will inform patients being treated with IACA! of the existence of the Vitae study and the possibility of participating in it. A meeting will then be organised between the patients and the SANPSY research team, in order to offer them the opportunity to participate in this research and to inform them of:
► The purpose of the study.
► The computerised processing of data concerning the participant that will be collected during the course of this research, and his/her rights of access, opposition and rectification to these data.
For patients under a protective measure (ie, curatorship, tutorship, etc), the legal representative will also be informed by the Vitae team:
► Of the purpose of the study.
► Of the computerised processing of data concerning the participant that will be collected during this research, and his/her rights of access, opposition and rectification to these data.
If the person agrees to participate, he or she gives oral consent (as specified by the Jardé law and accepted by the ethics committee 52 ) and his or her non-opposition is documented in the participant's medical record or file. The participant may, at any time, object to the use of his or her data in the context of the research. This information will also be given to the legal representative if the patient is under guardianship.

Dissemination plan
The results will be disseminated on various academic and non-academic platforms. They will be reported in international peer-reviewed journals and presented at international and national conferences. A public report will describe all the steps of the study, the results and the recommendations. Finally, a general feedback session will be held in order to present the final results of the study to all the participants and funders.

DISCUSSION
Despite a high prevalence of addiction in the general population, the worldwide proportion of individuals with addictions who access addiction treatment is estimated to be less than 25% overall, and under 10% for alcohol and tobacco, including in France. 53 54 A recent meta-analysis identified an average dropout rate of 30% for psychosocial substance use disorder treatment and a 26% dropout rate for programmes targeting alcohol.
55 The low rate of access to alcohol addiction treatment and the high level of drop-out after relapse could be explained by barriers such as the stigma associated with addiction or the desire to try to cope alone. In addition, many patients do not have access to treatment, or drop out from treatment, due to the prerequisite of a period of inpatient detoxification. 53 56 57 This study will contribute to scaling up a potentially effective intervention for the management of tens of thousands of patients currently in a therapeutic impasse. Our study will face some challenges and limitations. First, it will start during the COVID-19 crisis, which is impacting the follow-up and involvement of people with AUD and of the professionals. We therefore anticipate a significant risk of attrition during the study due to the turnover of staff and the discontinued monitoring of the beneficiaries while the intervention is being delivered. Second, all our results are declarative, and the Vitae study will not use any kind of biological or medical information. Although declarative data could lead to underestimation, the use of a hetero-administered questionnaire on substance consumption should reduce this under-declaration. 58 From a public health point of view, this study will explain and pinpoint the precise impact of IACA! and identify the conditions for this impact. It will allow us to define the key functions and how they work in different contexts or how they could be adapted, and eventually to define a guideline to disseminate IACA! to other centres and adapt it. From a research viewpoint, our proposed methodology is consistent with the bottom-up approaches advocated in health promotion, starting with a real-world response to a pressing problem. 23 Transferability and viability studies are still underused in France, even though their pertinence has been highlighted in the international literature.
Here, we propose an application of these international recommendations relative to the transferability and evaluation of complex health interventions. Mobilising realist evaluation to analyse the transferability and the viability of an intervention is quite innovative, and will produce thorough and precise knowledge on this programme. This pilot study will evaluate the feasibility and the pertinence of a multi-centred controlled efficacy trial. It will use the feedback from the teams conducting the evaluation and the interviews with centre managers or directors. These elements will allow us to establish: the size of the sample needed to conduct a trial; the integrity and relevance of the evaluation protocol and of the data collection tools used in this trial; and the randomisation, recruitment and consent procedures. Transferability of complex health interventions is a major public health topic and remains a highly valuable research field. This study, focusing on an innovative intervention for people with AUD implemented in very different contexts, will provide valuable information not only for implementation science but also for the HR field. The results of this study will contribute to informing public decision-making in terms of support for people with AUD. In addition, it will contribute to the preparation of a large-scale trial and, ultimately, to the scaling up of an effective intervention for the management of people with psychosocial problems related to excessive alcohol use.

Contributors JM-F and NS drafted this article and all authors revised the manuscript. The project design was developed by LC and MA. JM-F, NS, SM and FS were involved in implementing the project and in developing the evaluation design, under the supervision of LC and MA. HB and EL were in charge of the design and the implementation of the IACA! intervention. All authors read and approved the final manuscript.
Funding This research has received funding from two nationally recognised research agencies, the INCa and the IRESP. This funding was obtained via two national competitive peer-reviewed grant application processes, respectively named '2019 Call for projects-Population health intervention research: Addressing all dimensions of cancer control' (No. CAMBON-2020-004) and '2019 Call for projects: Tackle the addictions to psychoactive substances' (N°CAMBON IRESP-19-ADDICTIONS-05).

Competing interests None declared.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Patient consent for publication Consent obtained directly from patient(s).

Provenance and peer review Not commissioned; peer-reviewed for ethical and funding approval prior to submission.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial.
See: http://creativecommons.org/licenses/by-nc/4.0/.
Controlling structure and interfacial interaction of monolayer TaSe2 on bilayer graphene

Tunability of interfacial effects between two-dimensional (2D) crystals is crucial not only for understanding the intrinsic properties of each system, but also for designing electronic devices based on ultra-thin heterostructures. A prerequisite of such heterostructure engineering is the availability of 2D crystals with different degrees of interfacial interactions. In this work, we report a controlled epitaxial growth of monolayer TaSe2 with different structural phases, 1H and 1T, on a bilayer graphene (BLG) substrate using molecular beam epitaxy, and its impact on the electronic properties of the heterostructures using angle-resolved photoemission spectroscopy. 1H-TaSe2 exhibits significant charge transfer and band hybridization at the interface, whereas 1T-TaSe2 shows weak interactions with the substrate. The distinct interfacial interactions are attributed to the dual effects of the differences in the work functions as well as the relative interlayer distance between the TaSe2 films and the BLG substrate. The method demonstrated here provides a viable route towards interface engineering in a variety of transition-metal dichalcogenides that can be applied to future nano-devices with designed electronic properties.
Introduction
The exotic properties of atomically thin two-dimensional (2D) crystals, first revealed in graphene, have led to a tremendous expansion in 2D materials research [1][2][3][4]. In particular, controllable atomic layer-by-layer growth using chemical vapor deposition and molecular beam epitaxy (MBE) has allowed us to address fundamental issues in the 2D limit and to search for artificial interfaces with designed functionalities [2,3,5-8]. Transition-metal dichalcogenides (TMDCs) provide a fertile platform to realize a number of exotic properties with various constituent atoms and crystal structures [1-3,5], e.g., 1H (trigonal prismatic coordination) and 1T (octahedral coordination), which differ in the coordination of the six chalcogen atoms surrounding a metal atom. One caveat, and simultaneously an advantage, of 2D crystals is that the intrinsic physical properties of epitaxially grown monolayer (ML) TMDC films can be modified by strong interactions with a substrate [9][10][11][12][13]. Bilayer graphene (BLG) on a SiC(0001) substrate has been ubiquitously used for the epitaxial growth of layered 2D materials when studying the intrinsic characteristics of van der Waals (vdW) materials in the 2D limit, due to the relative chemical inertness of BLG [14][15][16][17][18]. The weak interactions between BLG and epitaxial vdW materials can preserve the intrinsic properties of the overlaid 2D materials. Indeed, the formation of novel ground states has been demonstrated in TMDCs by using a BLG substrate, e.g., the indirect-to-direct band gap transition in 2H-MoSe2 [14], the exciton condensed states in ML 1T-ZrTe2 [15], the quantum spin Hall state in ML 1T'-WTe2 [16], and the metal-to-insulator transition in 1T-IrTe2 [17].
Among the family of TMDCs, MBE-grown MX2 (M = Nb, Ta; X = S, Se) on a BLG substrate has been intensively studied, and the growth recipes are well established [19], making them a great platform to study exotic quantum phenomena in the ML regime. Examples include charge density waves (CDW) and Ising superconductivity in 1H-MX2 [20][21][22], exotic orbital textures with Mott insulating states and quantum spin liquid behavior in 1T-MX2 [23][24][25], and heavy fermionic behaviors in 1T/1H-MX2 heterostructures [24][25][26][27][28]. One critical aspect to consider, but often neglected, is that the BLG substrate may give a significant charge transfer to the overlaid MX2 films due to a substantial difference in work functions between MX2 and BLG, which may strongly affect the intrinsic properties of ML MX2 [29][30][31]. Considering that the ground states of atomically thin TMDC films can be easily modified by the amount of extra charge doping [11,15,32], it is crucial to carefully study the effect of the BLG substrate on overlaid ML MX2 films.

Here, we report the electronic structure of epitaxially grown ML TaSe2 films on a BLG substrate using angle-resolved photoemission spectroscopy (ARPES). The interfacial interactions have been modified through the selective growth of structural phases (1T and 1H) of ML TaSe2 using MBE. Strong interactions between ML 1H-TaSe2 and BLG are evidenced by kinked band structures and significant charge transfer from BLG to TaSe2, while weakly interacting ML 1T-TaSe2 on BLG does not exhibit any charge transfer or band hybridization. The former deviates from previous works that found a quasi-freestanding nature of MBE-grown ML TMDCs on BLG [14][15][16][17][18]. Scanning tunneling microscopy (STM) measurements and first-principles calculations reveal differences in the atomic height and the modified work functions in the ML limit of the two phases of TaSe2, resulting in different electronic responses at the interface.
Results
Figure 1a presents the schematics for the controlled growth of ML TaSe2 on a BLG substrate using MBE. It is well known that 1H- and 1T-TaSe2 films can be selectively synthesized on BLG by controlling the substrate temperature (T_growth) during the growth; low and high T_growth are suitable for the formation of 1H-TaSe2 and 1T-TaSe2, respectively [19]. Figure 1b and d show the ARPES spectra of MBE-grown ML TaSe2 depending on T_growth. The ARPES intensity maps demonstrate that the ML TaSe2 film grown at high T_growth (= 750 °C) shows an insulating band structure (Fig. 1b), while the film grown at low T_growth (= 450 °C) shows metallic behavior (Fig. 1d). These results are consistent with the Mott insulating state induced by the Star-of-David (SoD) CDW transition in ML 1T-TaSe2 and with the metallic nature of ML 1H-TaSe2, respectively [19,22,23]. On the other hand, the ML TaSe2 film grown at an intermediate T_growth (= 600 °C) exhibits the mixed band structures of ML 1H- and 1T-TaSe2 (Fig. 1c).

The selective fabrication of ML TaSe2 films by controlling T_growth is also confirmed by core-level measurements, since the change of crystal structures generates different crystal fields in TaSe2 [23,33,34]. Figures 1e and f represent core-level spectra for Ta 4f and Se 3d, respectively. The peak shapes and positions of Ta 4f and Se 3d obtained at high T_growth = 750 °C (light blue) and low T_growth = 450 °C (dark blue) are in agreement with those of 1T- and 1H-TaSe2, respectively, as reported [35,36]. On the other hand, for the moderate T_growth = 600 °C, not only do multiple peaks appear in both Ta 4f and Se 3d, but they also have the same positions as the core peaks from 1T- and 1H-TaSe2, indicating the coexistence of 1H- and 1T-TaSe2 islands. The ARPES and core-level measurements demonstrate the importance of delicate control of T_growth to tune the structural phases of ML TaSe2 on a BLG substrate [19,23].
To investigate the effect of the BLG substrate on ML TaSe2, the BLG π bands have been measured with and without overlaid TaSe2 [37][38][39][40]. Figure 2a shows an ARPES intensity map of the BLG π bands without TaSe2, taken at the K_G point perpendicular to the Γ_G-K_G direction of the Brillouin zone (BZ) of BLG. The obtained as-grown BLG π bands are intrinsically electron-doped due to the presence of the SiC substrate [41]. The Dirac energy (E_D), defined here as the middle of the conduction band minimum and the valence band maximum, is located at ~0.3 eV below the Fermi energy (E_F), as extracted from the 2nd derivative ARPES spectrum (red lines) shown in Fig. 2d.

Figure 2b and e present the BLG π bands taken from fully covered ML 1T-TaSe2 films. Compared to the as-grown BLG on a SiC substrate (Fig. 2a), there are two non-dispersive states with weak spectral intensity located at 0.3 eV and 0.9 eV below E_F, which originate from ML 1T-TaSe2 due to the SoD CDW transition [42]. Although these additional bands cross the BLG π bands, the BLG π band dispersion is hardly changed. Moreover, we found a small amount of charge transfer from BLG to ML 1T-TaSe2, i.e., a slight shift of E_D from 0.30 eV to 0.24 eV below E_F (Fig. 2e), indicating weak interactions between ML 1T-TaSe2 and BLG.

On the other hand, remarkable changes are observed in the BLG π bands when ML 1H-TaSe2 is grown on a BLG substrate. As shown in Figs. 2c and f, the ARPES intensity maps do not show the valence band maximum and E_D of the BLG π bands. Straight lines extended over the upper π band give E_D at 0.135 eV above E_F. This result provides direct evidence of significant charge transfer from BLG to the overlaid ML 1H-TaSe2 [38,39]. Moreover, the BLG π bands show kinked structures at the crossing points with the Ta 5d bands of 1H-TaSe2 located at 0.1 eV and 0.38 eV below E_F [38], as denoted by the orange and red dashed circles and arrows (Fig. 2f).
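The E_D bookkeeping used in this comparison can be sketched numerically (energies in eV, referenced to E_F = 0, negative meaning below E_F). The quoted E_D positions come from the measurements described in the text; the helper functions themselves are illustrative only.

```python
def dirac_energy(cbm, vbm):
    """E_D as defined in the text: midpoint of the conduction band minimum
    (CBM) and valence band maximum (VBM), relative to E_F (eV)."""
    return 0.5 * (cbm + vbm)

def dirac_shift(e_d_bare, e_d_with_film):
    """Shift of E_D when a film is added; a positive shift (E_D moving toward
    or above E_F) means BLG lost electrons to the overlaid film."""
    return e_d_with_film - e_d_bare

# Quoted values (eV, relative to E_F):
ED_BARE = -0.30      # as-grown BLG on SiC (electron-doped by the substrate)
ED_WITH_1T = -0.24   # under ML 1T-TaSe2: small shift -> weak charge transfer
ED_WITH_1H = +0.135  # under ML 1H-TaSe2: E_D above E_F -> significant transfer

for label, e_d in [("1T", ED_WITH_1T), ("1H", ED_WITH_1H)]:
    # Shifts of about +0.060 eV (1T) and +0.435 eV (1H): both positive, but
    # roughly an order of magnitude apart, consistent with weak vs strong
    # electron transfer from BLG to the film.
    print(f"{label}-TaSe2: E_D shift = {dirac_shift(ED_BARE, e_d):+.3f} eV")
```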
The charge transfer and the kinked structure are clearly resolved when the BLG π bands are taken along the K G -M G -K G direction of the BZ of BLG. Figure 3a shows ARPES intensity maps of the BLG π bands for 0.5 ML 1T-TaSe 2 on a BLG substrate, i.e., 50% partial coverage of the substrate by 1T-TaSe 2 . The coverage of the ML TaSe 2 films was determined by comparing the reflection high-energy electron diffraction (RHEED) intensity ratio between BLG and TaSe 2 peaks. As obtained in Fig. 2b and e, the BLG π bands do not show any kinked structure at the crossing points with the ML 1T-TaSe 2 bands, and there are just two branches of BLG π bands due to the presence of two layers of graphene [43]. We did not find any additional split of the BLG π band (Fig. 3a and d), indicating negligible interactions. On the other hand, the 0.5 ML 1H-TaSe 2 sample exhibits three branches of BLG π bands, as denoted by yellow arrows in Fig. 3b and e. These multiple branches stem from the partial coverage (0.5 ML) of the 1H-TaSe 2 film on the BLG substrate: owing to the finite spot size of the photon beam, the ARPES measurements simultaneously catch BLG π bands from both as-grown BLG/SiC(0001) and 1H-TaSe 2 on BLG/SiC(0001) [18,23,33]. Indeed, for nearly full coverage of 1H-TaSe 2 on a BLG substrate (Fig. 3c and f), the BLG π bands are reduced to two branches, which are shifted toward E F because of charge transfer from BLG to ML 1H-TaSe 2 . Concomitantly, there is a discontinuity in the upper π band of BLG at 1.5 eV below E F , as denoted by black dashed circles in Fig. 3b, c and e-f. Such changes, e.g., charge transfer and kinked structures, indicate that there exist strong interactions between ML 1H-TaSe 2 and a BLG substrate [37-41].
BLG π bands at the M G point reveal further intriguing evidence of the charge transfer between ML 1H-TaSe 2 and a BLG substrate. We found that the split between the upper and lower BLG π bands shows different splitting energies (∆E) depending on the crystal structure of the overlaid ML TaSe 2 . The split of the lower two branches in Fig. 3e has ∆E = 0.38 eV, which is comparable to that of the as-grown BLG π bands on a SiC substrate [43] and of ML 1T-TaSe 2 on BLG (Fig. 3d). On the other hand, the upper two branches of the BLG π bands in Fig. 3e have ∆E = 0.50 eV, which corresponds to the hole-doped ML 1H-TaSe 2 on BLG (Fig. 3f). The enhanced ∆E may originate from the inequivalent charge distribution in the upper and lower BLG layers [43,44]. While the lower graphene layer takes electrons from the SiC substrate, the upper layer transfers electrons to ML 1H-TaSe 2 [44], as evidenced in the ARPES results (Figs. 2 and 3). The resulting asymmetry of the charge density between the BLG layers induces a field at the respective interfaces, resulting in the enhancement of ∆E [44].

Discussion

The selective interactions of ML TaSe 2 films on BLG are non-trivial, because it would be reasonable to expect a similar amount of charge transfer in both structural phases of TaSe 2 , considering the work function difference between BLG (4.3 eV) and bulk TaSe 2 (5.1 eV for 1T and 5.5 eV for 2H) [45-47]. However, the work function can be modified when TaSe 2 is thinned down to a ML [46-50]. The calculated work function of 1H-TaSe 2 is hardly changed from bulk (5.5 eV) to ML (5.45 eV), whereas the work function of 1T-TaSe 2 is significantly reduced from bulk (5.10 eV) to ML (4.66 eV) (Fig. 4a). The difference in the charge transfer between TaSe 2 and BLG is thus due to the distinct behavior of the work function in the 2D limit of the 1T and 1H phases of TaSe 2 .
In addition, the interlayer distance between TaSe 2 and BLG can also play a crucial role in the electronic properties at the interface, since the Schottky barrier is modified as a function of the distance between vdW layers [51-55]. Our STM measurements reveal that MBE-grown ML 1T- and 1H-TaSe 2 on a BLG substrate show different heights of 1.02 nm and 0.85 nm, respectively (Fig. 4b, c). In general, the height estimated from STM topography reflects atomic positions in real space as well as contributions from the electronic structure. The height difference of 1.7 Å in the STM data thus implies either that the vdW gap between ML 1T-TaSe 2 and BLG is wider by ~1.7 Å, or that 1H-TaSe 2 has a much lower density of states (DOS) so that the tip must move towards the 1H-TaSe 2 film (compared to 1T-TaSe 2 ) to maintain the same tunneling condition at a given sample bias voltage (V b ) [56]. Since the DOS taken at V b = −1 V is larger in ML 1H-TaSe 2 than in ML 1T-TaSe 2 [25,57,58], however, the obtained STM heights provide evidence of a shorter vdW gap between ML 1H-TaSe 2 and BLG compared to that of ML 1T-TaSe 2 . Hence, our findings suggest that the strong (weak) interactions between ML 1H (1T)-TaSe 2 and a BLG substrate originate from the dual effects of the significant (small) work function difference and the shorter (longer) interlayer distance.
Conclusions

In summary, we have investigated the electronic structure of ML TaSe 2 on BLG when the structural phase of TaSe 2 is selectively grown in a controlled way. The presence of ML 1H-TaSe 2 on BLG results in strong interactions, evidenced by the energy shift due to hole doping in the BLG band structure and by the kinked structure at the band crossing points between ML 1H-TaSe 2 and BLG. On the other hand, the presence of ML 1T-TaSe 2 on BLG shows nearly negligible effects on the BLG band structure, indicating weak interactions. The distinct response of ML 1H- and 1T-TaSe 2 on BLG originates from the reduced interfacial distance and the strongly reduced work function of 1H-TaSe 2 in the ML limit. Our findings provide an exceptional example of strong interactions between a BLG substrate and an epitaxially-grown TMDC material, which paves the way for discovering and manipulating novel electronic phases in 2D vdW materials and their heterostructures.

Thin film growth and in-situ ARPES measurement

The BLG substrate was prepared by flash annealing of 6H-SiC(0001) at 1300 ˚C for 60 cycles. The ML 1H- and 1T-TaSe 2 films were grown by molecular beam epitaxy (MBE) on epitaxial bilayer graphene on 6H-SiC(0001). The base pressure of the MBE chamber was 3 × 10 -10 Torr. High-purity Ta (99.99%) and Se (99.999%) were evaporated from an e-beam evaporator and a standard Knudsen effusion cell, respectively. The flux ratio was fixed at Ta:Se = 1:10. The MBE-grown ML TaSe 2 films were transferred directly into the ARPES analysis chamber for measurement at the HERS endstation of Beamline 10.0.1, Advanced Light Source, Lawrence Berkeley National Laboratory. ARPES data were taken using a Scienta R4000 analyzer at a base pressure of 3 × 10 −11 Torr. The photon energies were set at 50 eV for s-polarization and 63 eV for p-polarization, with energy and angular resolutions of 10-20 meV and 0.1°, respectively. The spot size of the photon beam on the sample was ~100 µm × 100
µm. Se capping layers of ~100 nm were deposited onto the ML TaSe 2 films at room temperature to prevent contamination during transport through air to the ultrahigh vacuum (UHV) scanning tunneling microscopy (STM) chamber. The Se capping layers were removed by annealing the sample at 200 ˚C overnight in UHV before the STM measurements.

STM measurement

STM measurements were performed using a commercial Omicron LT-STM/AFM under UHV conditions at T = 5 K with tungsten tips. STM topography was obtained in constant-current mode. STM tips were calibrated on an Au(111) surface by measuring the Au(111) Shockley surface state before all STS measurements. STS was performed under open feedback conditions by lock-in detection of the alternating-current tunnel current, with a small bias modulation at 401 Hz added to the tunneling bias. WSxM software was used to process the STM images.

Density functional theory calculation

Work function calculations were conducted using the density functional theory method with the Quantum ESPRESSO package [59]. We employed the generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof (PBE) functionals [60]. A plane-wave kinetic energy cutoff of 100 Ry (1360 eV) and a 12 × 12 × 1 Monkhorst-Pack mesh were employed [61]. A vacuum gap of 20 Å was introduced at the side of the slab for all systems to calculate the work function ( φ = V vac − E F ). All work function values were extracted from the plane-averaged electrostatic potential.
Fig. 1 Selective fabrications of 1T- and 1H-TaSe 2 on a BLG substrate. a Schematics of (top) side- and top-view atomic structures of TaSe 2 and (bottom) the T growth -dependent TaSe 2 film synthesis on a BLG substrate. b-d ARPES intensity maps of ML TaSe 2 films at three different T growth : formation of b ML 1T-TaSe 2 at T growth = 750 ˚C, c mixed structures of ML TaSe 2 at T growth = 600 ˚C, and d ML 1H-TaSe 2 at T growth = 450 ˚C. The p- and s-polarized ARPES intensity maps were taken with 63 eV and 50 eV photons, respectively, at 10 K. e-f Core-level photoemission spectra from e Ta 4f and f Se 3d levels of ML TaSe 2 films. All the data were taken at 10 K.

Fig. 2 ARPES spectra of BLG π bands with and without overlaid ML TaSe 2 films. a-c ARPES intensity maps of a as-grown BLG on SiC(0001), and BLG π bands covered with ML b 1T- and c 1H-TaSe 2 on a BLG substrate, respectively, taken at the K point of the BLG BZ (K G ) perpendicular to the Ŵ -K direction using p-polarized photons at 10 K. d-f Second derivatives of the zoomed-in ARPES intensity maps (denoted by the red-dashed rectangle in panel a) for d as-grown BLG on SiC(0001), and BLG π bands covered with ML e 1T- and f 1H-TaSe 2 on a BLG substrate, respectively. Two non-dispersive bands with broad and weak spectral intensity at ~0.3 eV and ~0.9 eV below E F in b originate from ML 1T-TaSe 2 . The red curves in panels d and e are energy distribution curves of the second derivative maps taken at k y = 0.0 Å −1 . The yellow dashed lines and arrows indicate E D . The orange and red dashed circles and arrows in panel f represent kinked structures of BLG π bands. M G (the M point of the BLG BZ) and K G in the inset indicate the high symmetry points of BLG.
Fig. 3 Comparison of the effect of the crystal structure of ML TaSe 2 films on the BLG substrate. a-c ARPES data of BLG π bands taken along the K G -M G -K G direction of the BZ of BLG. d-f The second derivatives of the ARPES data in panels a-c. All ARPES data were taken using p-polarized photons at 10 K to better visualize the BLG π bands. The black dashed circles denote the kinked structures of BLG π bands. Yellow arrows represent the split of BLG π bands at the M G point. Green and orange arrows and dashed lines indicate the splitting size of the BLG π bands (∆E).

Fig. 4 Thickness-dependent work function and STM step height of ML TaSe 2 films on BLG. a The calculated work function of few-layer 1T-TaSe 2 (red) and 1H-TaSe 2 (blue). b STM topographic image with islands of both ML 1T-TaSe 2 (light purple) and 1H-TaSe 2 (deep purple) on a BLG/SiC(0001) substrate (scanned at sample bias V b = −1 V and tunnelling current I t = 5 pA at 5 K). c An STM height profile taken along the red arrow shown in panel b.
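A minimal sketch of the work-function extraction described in the DFT methods (φ = V vac − E F , with V vac read off the plane-averaged electrostatic potential deep in the vacuum gap). The step-like potential profile and the Fermi level below are illustrative stand-ins for actual Quantum ESPRESSO output; only the ML 1T-TaSe 2 value of 4.66 eV is taken from the text:

```python
import numpy as np

def work_function(z, v_planar, e_fermi, vacuum_window):
    """phi = V_vac - E_F, where V_vac is the mean plane-averaged
    electrostatic potential over a window deep in the vacuum region."""
    lo, hi = vacuum_window
    in_vacuum = (z >= lo) & (z <= hi)
    v_vac = v_planar[in_vacuum].mean()
    return v_vac - e_fermi

# Illustrative slab profile: potential flattens to 0 eV in the vacuum;
# E_F sits 4.66 eV below the vacuum level (the quoted ML 1T-TaSe2 value).
z = np.linspace(0.0, 30.0, 601)      # Angstrom along the slab normal
v = np.where(z < 12.0, -15.0, 0.0)   # crude slab/vacuum step potential
phi = work_function(z, v, e_fermi=-4.66, vacuum_window=(20.0, 28.0))
print(round(phi, 2))   # 4.66
```

Averaging over a window well inside the 20 Å vacuum gap, rather than taking a single grid point, is the usual way to avoid residual oscillations of the potential near the slab surface.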
Association of indigo with zeolites for improved colour stabilization

The durability of an organic colour and its resistance against external chemical agents and exposure to light can be significantly enhanced by hybridizing the natural dye with a mineral. In search of stable natural pigments, the present work focuses on the association of indigo blue with several zeolitic matrices (LTA zeolite, mordenite, MFI zeolite). The manufacturing of the hybrid pigment is tested under varying oxidising conditions, using Raman and UV-visible spectrometric techniques. Blending indigo with MFI is shown to yield the most stable composite of all our artificial indigo pigments. In the absence of defects and substituted cations such as aluminum in the framework of the MFI zeolite matrix, we show that matching the pore size with the dimensions of the guest indigo molecule is the key factor. The evidence for the high colour stability of indigo@MFI opens a new path for modeling the stability of indigo in various alumino-silicate substrates, such as in the historical Maya Blue pigment.

INTRODUCTION

The zeolite powder (grain size ~2µm) and indigo (1%wt.) were finely hand-ground and mixed in a mortar. The resulting powder was pressed into a pellet to enhance contact between the two components. The pellets were placed in an oven and heated for 5 hours in air at 250°C. After the heating phase, the pellets were re-ground and washed with acetone to remove unreacted indigo. Colour stability of the samples was tested in nitric acid conditions at room temperature. Hybrid pigments were stirred in a HNO 3 solution for different durations and different concentrations (Table 1, Table 2). Colorimetric measurements were performed on a RUBY spectrocolorimeter (STIL) at the Centre de Recherche et de Restauration des Musées de France, Paris, equipped with a backscattering geometry. 25 Data were collected on the powder samples placed on a slide glass, using a 4mm spot.
CIELAB La*b* (L = lightness; a* = from green to red; b* = from blue to yellow) colour space coordinates were calculated using the D65 illuminant. 26 The Raman spectra were recorded on a LABRAM Jobin-Yvon spectrometer. The Infra-red spectra were recorded on a FT-IR NEXUS microscope at the ID21 beamline of the ESRF, Grenoble, using the conventional thermal source. Samples were mixed with KBr and pressed into a pellet. Spectra were recorded between 400 and 4000 cm -1 .

Data processing

Principal Component Analysis (PCA) was applied to the experimental Raman spectra in order to visualize the progressive transformation of the indigo dye occurring under different nitric acid conditions. Prior to PCA, the Raman spectra were submitted to data pre-treatment: spectra were smoothed using a 5-point Savitzky-Golay process, followed by a first derivative to reduce baseline-offset effects (Origin v5.3 software) and normalisation. The spectral domain was reduced to 1200-1750cm -1 . PCA is a procedure employed to reduce the dimensionality of a dataset and to reveal the variance among multivariate data.

Colour stability of indigo@zeolite complexes

The raw inorganic matrices do not show any reflectance band in the 400-800nm range. Consequently, the resulting reflectance spectra and colorimetric coordinates only depend on the electronic state of the organic guest molecule. UV-Visible reflectance spectra for the three indigo@zeolite systems before heating, after heating, and after the oxidising test (10 minutes in concentrated nitric acid) are shown in Fig. 2. The band around 660nm is attributed to powdered indigo. The heating phase does not significantly affect the indigo@LTA reflectance spectrum (Fig. 2d). Despite supplementary absorption bands in the 400-500nm range, the reflectance maximum of the heated indigo@MOR sample is still found around 660nm (Fig. 2e). The indigo@MFI sample presents different features, with the maximum reflectance blue-shifting from 660nm to 615nm after the heating process (Fig.
2f). This shift is attributed to the diffusion of indigo monomers inside the zeolite channels. 12 No band is observed anymore for the indigo@LTA sample after the oxidising test (Fig. 2g). A 415nm band persists for indigo@MOR (Fig. 2h), which corresponds to the value found for diluted yellow isatin in benzene (404nm) and acetonitrile (414nm). 28 The oxidising test performed on the indigo@MFI system does not provoke any radical change, unlike the first two samples; only a shift of the maximum reflectance band from 615nm to 590nm is noted (Fig. 2i). The results of the oxidising test carried out on the three indigo@zeolite samples and their associated colorimetric coordinates in the La*b* system are given in Table 1. The projection of the colorimetric measurements in the a*b* space is presented in Fig. 3 (Table 1). The two other systems, based on LTA zeolite and mordenite, fail, and hence will not be considered any longer. The possible chemical evolution of the organic dye will be investigated on samples using only the silicalite matrix.

Raman spectrum of the indigo@silicalite hybrid

The Raman spectrum of the indigo@silicalite (MFI) hybrid is shown in Fig. 4. In the latter case, although indigo is considered as a monomer, 38,39 the Raman features are strongly dependent on the interactions involved. In our case, silicalite can be considered as a "solid solvent" for indigo, and this enables us to obtain for the first time spectroscopic information on single indigo molecules without any solvent contribution or organic-inorganic interaction dependence.

Indigo@silicalite hybrid under oxidising conditions

We extended the oxidising tests on the indigo@silicalite system by varying the nitric acid concentration and the duration of the test. The experimental conditions and the resulting colorimetric coordinates can be found in Table 2. The corresponding projection on the a*b* space is presented in Fig.
5. In order to better understand the progressive transformation of the indigo molecule into these two new forms, the experimental Raman spectra were analyzed using Principal Component Analysis. This data reduction method has been previously used as a spectral searching algorithm for pigment identification. [40][41][42] Comparing the relative positions of the ox-2h and ox-25h groups, complete transformation into the B form is achieved after 2 hours using concentrated nitric acid.

DISCUSSION

Among the three indigo@zeolite systems tested in this study, only the indigo@MFI hybrid presents conclusive colour stability under oxidising conditions. The presence of aluminum atoms in the zeolite framework does not constitute a key factor for obtaining a stable pigment. LTA zeolite and mordenite (MOR) are Al-rich, and possible bonding between the organic molecule and the inorganic matrix could be expected to form, as found in some lacquer pigments. 6 Such bonds, if they ever form, are not efficient enough to produce a durable compound and to prevent the destruction of the indigo molecules under oxidising conditions. This result is confirmed by the synthesis of a stable hybrid using the high-silica MFI zeolite (silicalite). The channel cross section or cage dimensions with respect to the size of the indigo molecule are of prime importance: the breaking of the central alkene function in the presence of nitric acid requires room for a perpendicular bridging intermediate (Fig. 8). 43 Depending on the oxidising test (acid concentration and duration), the colour of the indigo@MFI system varies from blue to more violet. More precise tests enable us to follow the chemical evolution of the organic dye associated with this colour change. Formation of the A form occurs under soft oxidising conditions (Figs. 5 and 7). Under specific conditions, 24 indigo is able to form an intermediate oxidised form called dehydroindigo (Fig. 1).
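The spectral pre-treatment and PCA described in the Data processing section can be sketched as below. The synthetic spectra stand in for the experimental Raman data; the Savitzky-Golay polynomial order is an assumption (the text specifies only a 5-point window), and scikit-learn's PCA replaces the original Origin workflow:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def pretreat(wavenumbers, spectra, lo=1200.0, hi=1750.0):
    """5-point Savitzky-Golay smoothed first derivative, restriction to
    the lo-hi cm^-1 window, then vector normalisation of each spectrum."""
    deriv = savgol_filter(spectra, window_length=5, polyorder=2,
                          deriv=1, axis=1)
    window = (wavenumbers >= lo) & (wavenumbers <= hi)
    cut = deriv[:, window]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)

# Synthetic stand-in dataset: 12 noisy spectra with a band near 1570 cm^-1.
rng = np.random.default_rng(0)
wn = np.linspace(1000.0, 1800.0, 801)   # cm^-1
spectra = (np.exp(-((wn - 1570.0) / 20.0) ** 2)[None, :]
           + 0.05 * rng.normal(size=(12, wn.size)))

scores = PCA(n_components=2).fit_transform(pretreat(wn, spectra))
print(scores.shape)   # (12, 2)
```

Plotting the first two score columns against each other reproduces the kind of group separation (e.g. ox-2h versus ox-25h clusters) discussed above.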
In order to check the possibility of the transformation of indigo into dehydroindigo in the MFI zeolite, dehydroindigo was synthesized as described in the experimental section. The Raman spectrum of freshly synthesized dehydroindigo is presented in Fig. 9. New bands appear in the spectrum of the B form (Fig. 10). The presence of the nitro group (-NO 2 ) in some benzene derivatives gives two bands in the 1510-1580cm -1 and 1325-1365cm -1 ranges. 44 These new bands could correspond respectively to the asymmetric and symmetric stretching modes of the -NO 2 group in substitution on the benzene rings of the indigo molecule. Formation of this nitro-compound is associated with a colour change from blue to violet. The presence of an electron-attracting group such as NO 2 on the indigo molecule would be consistent with the 615nm to 590nm absorption shift and the associated colour change upon forming the B form (Fig. 5). The colour modification is progressive when using nitric acid diluted three times (Fig. 7), and the violet colour is only effective after a 25-hour treatment. On the contrary, only a 10-minute treatment is needed to obtain the violet hue when using concentrated nitric acid.

CONCLUDING REMARKS

In this study, indigo molecules are compounded with three zeolites: LTA zeolite, mordenite, and MFI zeolite (silicalite). Figure caption: spectra of the three indigo@zeolite systems before heating (a-c), after the heating phase (d-f), and after the oxidising test (10 minutes in concentrated HNO 3 , 14mol/l; g-i). Suffixes -nh and -ox respectively refer to "non-heated" samples and samples after the oxidising test.
Searches for Other Higgs Bosons at LEP

Recent LEP searches for Higgs bosons in models other than the Minimal Standard Model and the Minimal Supersymmetric Standard Model are reviewed. Limits are presented for Higgs bosons decaying into diphotons or invisible particles, and for charged Higgs bosons.

Introduction

The Minimal Standard Model (MSM), incorporating a single Higgs doublet, has one neutral Higgs boson with all decay rates specified by the theory. The dominant production mechanism at LEP is the Bjorken process e + e − → h 0 Z 0 ; a 95.2 GeV lower mass limit on the MSM h 0 is obtained by combining the results from the four LEP experiments analyzing data up to E cm = 189 GeV [1]. In models employing more than one Higgs doublet or triplet field, a rich spectrum of Higgs bosons occurs, possibly with large numbers of unknown decay parameters. For instance, the Minimal Supersymmetric Standard Model (MSSM) is a two Higgs doublet model (2HDM) obtained with a particular choice of the couplings of the Higgs fields. This model has five Higgs particles in the form of three neutrals (one CP-odd) and a singly-charged pair. More generally, there are four ways to couple the 2HDM fields to fermions and bosons (some authors classify these as model types I, I', II, and II'). In this brief review, I present results from LEP searches for Higgs bosons decaying in the context of models other than the MSM and MSSM.

Photonically Decaying Higgs Bosons

In the MSM, the Higgs boson can decay into a pair of photons by means of a W loop. For a Higgs boson of mass 80 GeV, the diphoton branching fraction is 0.001, hence this mode is not visible at LEP. However, in non-minimal models, when the topology of the theory reduces the Higgs boson coupling strength to fermions, the diphoton mode can become large. An example of this is the "fermiophobic" Higgs boson [2] arising in the Type-I 2HDM.
In this model, all the fermionic couplings to one of the Higgs neutrals carry a factor cos α/ sin β, so that an appropriate choice of α turns off the fermionic couplings. In this theory, the lightest neutral boson is produced in e + e − collisions at MSM strength. Very different theories can give rise to enhanced H 0 → γγ rates, so it is important to present cross section limits in addition to model-specific Higgs boson mass limits. The list of theories having enhanced diphoton rates includes the 2HDM, the Higgs triplet model, top-quark condensate models, models with extra dimensions, models with anomalous couplings, the hypercharge axion, etc. Figure 1 shows the 95% CL upper limits on the production cross sections σ(e + e − → XY) × B(X → γγ) × B(Y) obtained by OPAL [3] with data from E cm up to 189 GeV; here, X can be a fermiophobic Higgs boson and Y can be a scalar or vector particle. Factoring out the SM Higgs boson production cross section, OPAL obtains upper limits on the diphoton branching ratio (Figure 2); similar results have been contributed to this conference by ALEPH, DELPHI, and L3 [4].

Invisibly Decaying Higgs Bosons

In non-minimal models, the Higgs boson could decay into undetected particles such as a pair of SUSY particles. The Bjorken production mechanism allows searching for this mode by tagging the recoil Z 0 . Backgrounds to this search arise primarily from 4-fermion and WW processes. The search results may be interpreted by assuming that the invisibly-decaying Higgs boson is produced at the MSM rate modified by the factor ξ. All the LEP experiments have presented search limits to this conference [5]. The L3 plot of candidate events is shown in Figure 3. The ALEPH limits on ξ are shown in Figure 4.

Charged Higgs Bosons

Charged Higgs bosons can be pair-produced at LEP (e + e − → H + H − ). Models giving rise to singly-charged Higgs bosons are the 2HDM (including the MSSM), triplet models, and models with other extended Higgs sectors.
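As a hedged illustration of where such 95% CL limits on σ × BR come from (a textbook counting-experiment recipe, not the collaborations' actual statistical procedure): with zero events observed and negligible background, the classical Poisson upper limit on the expected signal is ≈3.0 events, and dividing by efficiency times integrated luminosity gives the cross-section limit. The efficiency and luminosity below are illustrative numbers, not OPAL's:

```python
import math

def poisson_upper_limit(n_obs: int, cl: float = 0.95) -> float:
    """Classical Poisson upper limit: smallest mean mu such that
    P(N <= n_obs | mu) <= 1 - cl, found by bisection."""
    def p_le(mu: float) -> float:
        return sum(math.exp(-mu) * mu ** k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_le(mid) > 1.0 - cl:   # too probable -> the limit is larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n95 = poisson_upper_limit(0)            # ln(20) ~ 3.0 signal events at 95% CL
eff, lumi = 0.40, 170.0                 # illustrative efficiency, pb^-1
sigma_x_br_limit = n95 / (eff * lumi)   # 95% CL limit on sigma x BR, in pb
print(round(n95, 2))   # 3.0
```

Real analyses fold in backgrounds, systematic uncertainties, and multi-channel combinations, which is what the LEP Higgs Working Group combination handles.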
In the 2HDM, the pair-production rate is specified; however, the H ± decay couplings are not. The searches currently assume that H ± → cs ("hadron mode") and H ± → τ ν τ ("lepton mode") are the dominant decays, and BR(H ± → τ ν τ ) is therefore treated as a parameter of the theory. Akeroyd [7] suggests that H ± → W * A 0 is an important search channel for triplet models (and for the 2HDM if H ± is very massive). All the LEP collaborations have submitted search updates to this conference [6]. The search results for E cm up to 189 GeV are summarized in Table 1, where the mass limit shown is the lowest value of the 95% CL lower bound for any value of BR(H ± → τ ν τ ). The exclusion regions for the various modes in the DELPHI analysis are shown in Figure 5. Also, for this conference, the LEP Higgs Working Group has combined the 189 GeV results from the four experiments, obtaining a lower mass bound of 77.3 GeV [1].
Cultural Nationalism or Escapist Idealism: Okot p'Bitek's Song of Lawino and Song of Ocol

This article examines Okot p'Bitek's poems, Song of Lawino and Song of Ocol, as case studies in perspectives on nationalism and identity in contemporary African society. The assessment in this article is based on Fanonian theory, which falls within the broader context of postcolonial literary criticism and analysis and provides a historical context for cultural nationalism. Okot's legacy has hinged on his participation in the reification of Africa's cultural pride, but popular memory of that event has subsumed the diverse views of its similar advocates into a rigid national narrative that overlooks the dynamism of societies. In contrast, this article argues that poetry in Africa, and the African mode in poetry that it posits, present a broader and more inclusive ideal. Okot defines the African nation based on shared history and heritage, but acknowledges the consequences of that history, such as the presence of two linguistic traditions, African and English. He expresses, in what appears to be a quest for cultural revolution, the need to look forward and not only backwards for unity. This view of Okot p'Bitek's poetry has long been marginalised, when mentioned at all, in both history and literary criticism. Yet this is a progressive cultural dimension that needs to be explored.

ISSN: 2581-5059 www.mjbas.com

…"imagination of itself", but Africa had lost that unity of imagination under the influence of colonialism and its empire. Only when the people of the land all believed in belonging to a greater national community (an "imagined community", to use Benedict Anderson's concept), sharing an identity and heritage, could the nation truly exist. Through the new body of writing, the leaders of the Negritude movement sought to create an aesthetic in order to regenerate the national spirit culturally.
Debates centred on how to define and delineate Africa's cultural distinctiveness, and the beliefs of prominent figures in the movement diverged as it evolved, threatening this autonomy; nonetheless the idea remained a potent force. Okot's poetry seems to suggest that this cultural revival "has given to some of us a new arrogance": it developed the "heart" of the nation and created a sense of pride among the people. Although the literature on cultural nationalism is vast, little attention has been paid to the role of cultural nationalism in the formation of nations in the twentieth century. Analysts have chosen to concentrate on the apparently more significant political nationalist movements, their mass mobilising strategies against the state, and their attempts to build a representative citizen-state. It is true that cultural nationalism is usually little more than a small-scale coterie of historical scholars and artists, concerned to revitalise the community by invoking memories of the nation as an ancient and unique civilisation. But periodically it has expanded into a major ideological movement that, challenging both established political nationalist movements and the existing state, has sought to regenerate the nation on communitarian principles. Geertz observes that "[t]he concept of culture has its impact on the concept of man" (52). Molara Ogundipe-Leslie claims, "…Song of Lawino is one of the most critically neglected works in African literature" (7). She further argues that despite the lyricism and imagery which brought this work to the attention of the world, Song of Lawino is essentially untrue, "the mission-educated man's vision of Africa" (7). It is, however, precisely in its "untruth" that Okot p'Bitek's representation of Africa in Song of Lawino and Song of Ocol is truest.
One discovers that Lawino has no voice of her own, being Ocol's projection of his own repressed oedipal fixation, against which he reacts intensely both in the fictionalised Lawino's presentation of him and in his own song. This distorted view of the mission-educated man, Ocol, shows clearly the dilemma of modern Africa, for only in the distortion can the complex nature of the dilemma be discovered. In the ensuing Fanonian analysis, then, Lawino and Ocol emerge as symbols of the cultural binaries that define contemporary African society. Just below the surface of matrimonial discord lurks this cultural source of conflict and self-hatred. The overall thrust of Song of Lawino is to decry Ocol's rejection of Lawino. But, while claiming to be Ocol's legitimate wife, Lawino presents herself through images symbolic of the womb and motherhood and refers, further, to Ocol's being like a child, thus asserting the collectivity, which is the assertive voice of mother Africa. The primary relationship of the two poems thus found to be misrepresented by the characters, the authenticity of the characters themselves comes into question, particularly the authenticity of Lawino as an oral villager as she so assertively demands the reinstatement of the misrepresented relationship. Ogundipe-Leslie claims Lawino to be, in fact, an "impossible and unlikely image of the rural woman" (7). Yet it is this 'honest deception,' to borrow Leech and Short's expression, that makes Song of Lawino interesting, and indeed many literary works thrive on sincere lies. Okot seems to suggest, and admittedly accurately so, that African civilisation has always been associated with backwardness and ignorance. Those who profess it, presumably the pre-literate, are perceived to be unlearned, unintelligent and uneducated. They are, in this light, held incapable of sustaining intellectual or academically demanding debates. Okot's choice of a village woman here is deliberately significant.
He is able to relay his message forcefully without appearing to be doing so. Lawino is the antithesis of this misleading thesis. She is largely emotional, sentimental as it were, as expected in oral poetry; but her sentiments are not devoid of intellectualism. She dissects the Western civilisation with the finesse of a refined academic. She may exhibit tactlessness in certain instances, but her focus is largely on the ultimate prize. By using Lawino, a presumably illiterate woman, Okot disabuses us of the notion that civilisation is an exclusively European affair. And to this extent, his depiction of cultural nationalism is as reasonable as it is justifiable. Okot p'Bitek's portrayal of cultural nationalism is replete with controversies and inconsistencies. Lawino, the self-proclaimed advocate of African culture, fails to follow the basic rule for the Acholi woman: to obey and to respect her husband. It is she who misrepresents Black pride and cultural nationalism because she occasionally admits to having no qualms with Westernisation. In doing so, she misrepresents her own identity as well, and one is bound to accuse her of some inauthenticity. Critics tend to agree that she, rather than Ocol, is the insecure, unhappy, or psychologically afflicted one. And while Ogundipe-Leslie asserts that, "The figure of Lawino is a displacement from the mind of a male, Westernized writer …" (7), indicating authorial misrepresentation at the root of the "impossible and unlikely image of the rural woman" (7), p'Bitek has not misrepresented her. One sees here in Lawino a classical case of selflessness, which is representative of the cultural ideal that Africa signified. The artificial glamour of the West as symbolised by the "iron roof" does not seem to bother her. She is contented with her grass-thatched house. She is the mother who must accept less than the best from her son, because he has established his own household. And she is not about to let him forget her sacrifice.
But her sacrifice is one which began much earlier. "When Ocol was wooing me/" she says, "My breasts were erect" (Lawino 50). Here Lawino is referring to African culture in its unadulterated form. Before the intrusion of foreign value systems and the subsequent adulteration, African culture in its pristine form was alluring, attractive and irresistible. It was idyllic. But through its contact with the West, it has lost its virginity, its irresistible allure. This interpretation is further supported in that she compares Ocol's former longing for her with a child's longing for his mother. By addressing Ocol as her "brother," she presents him as her agemate. Yet the ambiguity of the invitation to enter her "mother's house" is overwhelming. While, literally interpreted, the house is the house belonging to her mother, it becomes the epicentre of cultural nationalism. This invitation into the mother's house appears then as an attempt to revive lost cultural intimacy. For, further, in metaphoric terms, the house is a womb image; thus the invitation to enter is an invitation to resuscitate Ocol's connection to Africa and its cultural heritage. H. O. Anyumba mentions "a certain vicarious relationship between Ocol and Lawino which contradicts an overt polarity." He further claims that, "This relationship is extended to include mother and mother-in-law, clansmen, the ancestral shrine, which are the various connections linking the two as individuals and also to society at large" (Anyumba 32). Yet one can see in these disparate elements something much more than common joining functions for the husband and wife, Ocol and Lawino. These elements compose the psychological womb from which Ocol sprang and are thus true extensions of his lineage. This extension justifies the interpretation that Lawino represents Africa's culture and Ocol is an apostate.
So when Lawino says, "He cares little/About his relatives either" (Lawino 95), she is complaining not so much about his neglecting and rejecting her as about his rebelling against his people and what defines his clan, the Black people and specifically Africa. This interpretation is supported by Lawino herself as she continues: Of his own mother, Ocol says, "She smokes some nauseating tobacco / And spits all over the place / And she keeps bed bugs / In her loin cloth" (Lawino 95). Images of sexuality, particularly castration, pervade the cultural persuasions posited in part ten and the subsequent parts of Lawino's song. Lawino's appeal to Ocol's masculinity goes beyond the narrow confines of gender-based insinuations. Okot, like Wole Soyinka, appears to be appealing to a defeminised, asexual dimension of manhood. Ocol is unable to project the spiritual fortitude of "the man" in Armah's The Beautyful Ones Are Not Yet Born because his exposure to the West has killed the man in him and indeed "all young men" whose testicles have been "smashed with large books" (Lawino 120). In so saying, she does, in effect, the very thing she described in the immediately preceding passage: she has lifted her breast to Ocol and asked, "Did you suck this?" (Lawino 102). In so doing she condemns him for rejecting her. It is in this act above all others that she shows her true relationship to Ocol. Ironically, Lawino uses this display of the motherly breast primarily as leverage to call Ocol back to their "marriage" bed and, by extension, original cultural intimacy. To ease generally the interpretive strategy of equating a son and a husband, one should note that Lawino asks, perhaps unwittingly, "What is so sweet in your husband?/What so bitter in other people's sons?" (Lawino 101). This parallel structure appears at first to be merely a faulty comparison of categories which are not mutually exclusive. A husband is, after all, the son of "other people."
Yet, if the comparison is to be accepted as valid, the husband she refers to must needs be the son of the woman whose "husband" she has called him. Thus through parallel structure, Lawino reiterates her desire for the natural intimacy of a "husband," less disturbed by the West. Lawino, however, could hardly be more explicit in her acceptance of Africa's naturalness, especially when she speaks of death: "Mother Death / She says to her little ones / Come!" (Lawino 105). And the little ones follow her because what she offers is appealing. She is, thus, longing for unquestionable cultural loyalty, which is only attainable in childhood. Yet it is this very childhood innocence that she abhors. Thus Lawino seems to claim that Ocol can find nothing more appealing than her; in fact she portrays herself as being as compelling as death. But perhaps the most significant expression of Lawino's mother relationship to Ocol is her threatening and warning Ocol of the danger of castration, a natural role of the mother. She warns him of the result of angering his mother: "Your vitality will go." Ironically, in all these depictions and threats of castration, one sees Lawino once again using her natural and acceptable actions whose intention is to restore normal marital sexuality between her and Ocol. She is evidently tired of Clementina's intrusion into their lives. The modern, westernised "second-wife" whom Lawino ridicules as looking like "a guinea fowl" (Lawino 37) and whom she despises as having "a fruitless womb", perhaps because of abortions (Lawino 39), has blinded Ocol to her very existence. The castrating Eurocentric colonial education acquires a metaphorical meaning here. Castration speaks of an inability to resist its counter-productive effects; Lawino clings to the past, an indication that she is still longing for a return to pre-colonial history. Ocol, for instance, "roams the country like a wild goat" and wakes up before dawn like someone who "is going to hoe the new cotton field" (Lawino 106).
By and large, Lawino's sentiments reflect Fanon's observations in "On National Culture" in The Wretched of the Earth. Fanon, in this essay, sets out to define how a national culture can emerge among the formerly and neo-colonised nations of Africa. Rather than depending on an orientalised, fetishised understanding of pre-colonial history, as Lawino seems to be doing, Fanon argues that a national culture should be built on the material resistance of a people against colonial domination. In this essay, Fanon makes reference to what he calls the "colonised intellectual," which is a befitting title for Lawino's husband Ocol. For Fanon, colonisers attempt to write the precolonial history of a colonised people as one of "barbarism, degradation, and bestiality" in order to justify the supremacy of Western civilisation. To upset the supremacy of the colonial society, writes Fanon, the colonised intellectual feels the need to return to their so-called "barbaric" culture, to prove its existence and its value in relation to the West. Yet, Fanon notes, "the men who set out to embody it realised that every culture is first and foremost national." An attempt among colonised intellectuals to "return" to the nation's precolonial culture is then ultimately an unfruitful pursuit, according to Fanon. Rather than a culture, the intellectual emphasises traditions, costumes, and clichés, which romanticise history in a similar way as the colonist would. The desire to reconsider the nation's pre-colonial history, even if it results in orientalised clichés, still marks an important turn according to Fanon, since by rejecting the normalised eurocentrism of colonial thought, these intellectuals provide a "radical condemnation" of the larger colonial enterprise. Fanon contends that this radical condemnation attains its full meaning when we consider that the "final aim of colonisation was to convince the indigenous population that it would save them from darkness."
A tenacious refusal by traditional African diehards, escapist idealists like Lawino, so to say, to abandon national traditions in the face of colonial rule, avers Fanon, is a demonstration of nationhood, but one that holds on to a fixed idea of the nation as something of the past, a corpse. Lawino still believes, for instance, that a married woman has to submit to communal wishes, so that when her husband dies, she, her children and her husband's property are automatically inherited by her husband's brother, and that fatness is a sign of opulence. One critic claims: "Okot very appropriately used a woman as protagonist of this long lament as in Africa the role of the woman, and above all of mother, is greatly respected and thus, people are more likely to turn a sympathetic ear to her cry" (74). One finds that, despite the surface claims of the characters themselves to be, in fact, husband and wife, the text provides overwhelming clues to suggest a metaphorical interpretation. One also finds that, while Lawino unswervingly calls Ocol to safeguard the "pumpkin," Ocol remains unyielding. In fact, he repudiates Lawino for holding on to traditions, customs and beliefs that are not in sync with modern times. One can, thus, proceed in this interpretive strategy, confident in the sound textual support for the view of Lawino as a romanticised version of African culture and Ocol as a realistic representation of the continent today. Every struggle for liberation is a struggle for cultural freedom. Thus, resistance to colonial/neo-colonial domination is, in many ways, resistance to the culture of the coloniser. From the writings of literary artists to those of politicians, there has been a concerted effort in Africa to recover and promote cultures that were annihilated by colonialism or simply to find a cultural framework that is suitable to the African context.
Writing about cultural nationalism in the aftermath of colonisation, Abiola Irele notes that négritude appears as the culmination of this cultural self-assertion. Just as during colonialism Africans were drawn into the cultural world of the European but kept in a secondary position, so in the neoliberal global order Africans continue to occupy a secondary position in an economic system whose balancing scales are tipped in favour of the West. Cultural nationalism may be defined as the manifestation of the nationalist "sentiment" in cultural indices, which places emphasis on cultural symbols, ideas, beliefs and other artifacts and motifs shared by a group (Adedeji 432). Boyd Shafer defines nationalism as a sentiment which unifies a group of people who have a real or imagined common historical experience and a common aspiration to live together as a separate and distinct people in the future (Shafer qtd. in Adedeji 432). Adedeji further argues that in the African context, cultural nationalism arises out of the unique cultural history of the people of Africa, the colonial onslaught on the continent and the conscious attempt by certain individuals or groups to seek ways and means of satisfying their nationalistic aspirations through a programme that resurrects the African past (Adedeji 432). Okot may not be advocating a return to an idyllic past, but his work bears out as a vicious attack on neo-imperialism in Africa. Cultural nationalism tends to have specific objectives. Hutchinson (Ndlovu-Gatsheni 946) is of the view that "cultural nationalism is a movement of moral regeneration which seeks to re-unite the different aspects of the nation - the traditional and modern, agriculture and industry, science and religion - by returning to the creative life-principle of the nation".
Cultural nationalists therefore conceive of the nation as a product of history and culture, and thus they seek to inspire 'love' of community, educating members of the community on their common national heritage of splendour and suffering, engaging in naming rituals, celebrating cultural uniqueness, and rejecting foreign practices (Ndlovu-Gatsheni 947). While négritude as a cultural movement sought to resist colonialism and assert African values, Okot's cultural nationalism resists neo-imperialism, which manifests itself through unfair trade relations and the commercialisation of war in Africa. In the first step towards the imagined and actual sovereignty of the nation, both politicians and writers in Africa attempted to appropriate the past for their own present needs. This was necessary because, as Frantz Fanon argued, "colonialism is not satisfied with snaring the people in its net or of draining the colonised brain of any form of substance. With a kind of perverted logic, it turns its attention to the past of the colonised people and distorts it, disfigures it, and destroys it" (149). Clearly many in Africa thought that this had happened, that English rule had deprived the Africans of their own history. Consequently, political and cultural nationalists like Okot p'Bitek attempted to re-assert black pride, developing alternate interpretations of history, seeking to reclaim the past and counter the influence of English domination. For example, in Song of Lawino, Okot argues that African history should be read not as a series of failed rebellions against English rule over several centuries, but rather the opposite. He reshapes the narrative to one in which "Africa has won all along the line" because "no other people in the world has held so staunchly to its inner vision; none other has, with such fiery patience, repelled the hostility of circumstances, and in the end reshaped them after the desire of her heart" (1912, pp.45-6).
In rewriting history to escape "the cliché version of the nationalist myth," Okot and other participants in the Negritude movement created a "more appealing myth," or myths, as each had a slightly different narrative but all rejected the imperial one (Garvin, 2005, p.116). In doing so they adhered to the concept that "freedom in the future is predicated on the liberation of the past" (Richards, 1991, p.121). They regained history from the pens of the coloniser. This involved not only returning to the source in literature and history, but also selecting those sources and ideals relevant to the present and future of the nation, making it "by and large… a modernising force" (Castle, 2011, p.293). Okot p'Bitek's pragmatic acceptance of the inroads made by the English language in Africa, his emphasis on the interpenetration of the two linguistic traditions, his forward-looking views of poetry, and his relatively inclusive conception of the boundaries of African literature marked him out from many of his more polemic contemporaries. Literature in Africa transcends the narrow "battle of two civilisations" view of the revitalisation to embrace a pluralistic and aspirational vision of transnational literature, and the centenary of revolutionary approaches provides an apt time for its recognition and reassessment. Even in the primordial dismemberment of the original schema of things, Lawino apparently holds fast and furiously to her traditions. The contemporary arrangement of the African world has made treachery so glaringly permissible, yet cultural loyalists do not seem to recognise this dangerous trend. Okot appears to suggest, and largely so, that uncritical worship of cultural nationalism has blurred the vision of post-independence citizens. To surmount this visual blur, one has to carry oneself through this collective blindness to a higher place of being. In so doing, one rises to a new level of consciousness.
Superficially, Ocol's sentiments in Song of Ocol may seem like hubristic overreaction; yet they reveal pertinent issues about cultural nationalism in Africa. Although retracing cultural steps has significantly buttressed efforts towards the restoration of Africa's lost glory, there are indications that celebration, and indeed worship, of cultural value systems has blinded Africans to the political realities of the modern-day society. A mere return to Africa's cultural ancestry is not the solution to Africa's challenges. It is not enough to romanticise the past. Certain traditional practices are no longer sustainable. It is unrealistic to assume that the introduction of new ways of doing things is the reason Africans have challenges. Buck-passing is as escapist as it is delusionary. Africans must take practical steps to restore human dignity. The "diviner" must not be allowed to "plead with dread malaria" when a child's blood is "boiling with heavy malarial parasites raging through his veins" and all "pillars of fear [such as] witches, wizards, evil-eyes, sellers of fetish bundles, bones and claws and dealers in poisons [should be] put in a lake steamer, taken to the deepest part and cast into the void" (Ocol 130-132). At the end of the day, the colonised intellectual, as Fanon argues, has to realise that a national culture is not a historical reality waiting to be uncovered in a return to pre-colonial history and tradition, but is already existing in the present national reality. National struggle and national culture are inextricably intertwined in Fanonian analysis. To struggle for national liberation is to struggle for the terrain that allows for the growth of a culture, and not for the "stupid village anthem of backwards ever, forwards never" (Ocol 132). Any claim to the contrary is retrogressive, as it is bound to "cause misery and death" (Ocol 136). Africans can no longer remain "closed to progress" (Ocol 139).
For true liberation to be realised, Africa has to re-examine herself. A mere return to the past cannot resolve the complex issues bedeviling the continent; such a return is as escapist as it is insincere. Some of the issues that Africa is grappling with existed even before colonialism. The postcolonial leadership has to be interrogated. The continent is "stuck in the stagnant mud of superstitions"; she is an "idle giant, basking in the sun, sleeping, snoring, twitching in dreams." It is "diseased with chronic illness, choking with black ignorance and chained to the rock of poverty" (Ocol 128). For Fanon, colonisers attempt to write the precolonial history of a colonised people as one of "barbarism, degradation, and bestiality" in order to justify the supremacy of Western civilisation. The condemnation of "glorifiers of the past" and Negritude scholars such as Aimé Césaire and Léopold Senghor may appear harsh and unjustified, but this combativeness is necessary for foregrounding the urgency and the need for practical approaches in dealing with challenges in contemporary African countries (Ocol 132). To upset the supremacy of the colonial society, writes Fanon, the colonised intellectual feels the need to return to their so-called "barbaric" culture, to prove its existence and its value in relation to the West. Fanon suggests colonised intellectuals often fall into the trap of trying to prove the existence of a common African or "Negro" culture. This is a dead end, according to Fanon, because it was originally the colonists who essentialised all peoples in Africa as "Negro," without considering distinct national cultures and histories. This points to what Fanon sees as one of the limitations of the Négritude movement. In articulating a continental identity based on the colonial category of the "Negro," Fanon argues, "the men who set out to embody it realised that every culture is first and foremost national."
An attempt among colonised intellectuals to "return" to the nation's precolonial culture is then, Fanon avers, ultimately an unfruitful pursuit, as the "Old Homestead" is "all in ruins" (Ocol 127). Rather than a culture, the intellectual emphasises traditions, costumes, and clichés ("husks," "sacred trees" and "ancestral shrines") which romanticise history in a similar way as the colonist would, so that Africa has, metaphorically and ironically, become "this rich granary of taboos, customs and traditions" (Ocol 129). The desire to reconsider the nation's pre-colonial history, even if it results in orientalised clichés, still marks an important turn according to Fanon, since by rejecting the normalised eurocentrism of colonial thought, these intellectuals provide a "radical condemnation" of the larger colonial enterprise. This radical condemnation attains its full meaning when we consider that the "final aim of colonisation," according to Fanon, "was to convince the indigenous population that it would save them from darkness." Okot's Song of Ocol does not entertain the persistent refusal among African cultural diehards to abandon national traditions in the face of colonial rule, a refusal which, according to Fanon, is a demonstration of nationhood, but one that holds on to a fixed idea of the nation as something of the past, a corpse. A decisive turn in the development of the colonised intellectual is when they stop addressing the oppressor in their work and begin addressing their own people. This often produces what Fanon calls "combat literature," a writing that calls upon the people to undertake the struggle against the colonial oppressor. This change is reflected in all modes of artistic expression among the colonised nation, and Okot's poems fall within this literary purview. Okot's poems, therefore, are attempts to delink Africans from the image imposed on them by the coloniser.
Whereas the common trope of postcolonial African literature is "an old Negro," Okot's poems present new artistic energy and dynamism that resists and undermines the common racist trope. For Okot, like Fanon, national culture is then intimately tied to the struggle for the nation itself, the act of living and engaging with the present reality that gives birth to the range of cultural productions. This might be best summarised in Fanon's idea of replacing the "concept" with the "muscle." Fanon is suggesting that the actual practice and exercise of decolonisation, rather than decolonisation as an academic pursuit, is what forms the basis of a national culture. Like Fanon, Okot is careful to point out that building a national culture is not an end in itself, but a "stage" towards a larger international solidarity. The struggle for national culture induces a break from the inferior status that was imposed on the nation by the process of colonisation, which in turn produces a "national consciousness." This national consciousness, born of struggle undertaken by the people, represents the highest form of national culture, according to Fanon. Through this process, the liberated nation emerges as an equal player on the international stage, where an international consciousness can discover and advance a set of universalising values, as suggested by the obliteration of "tribal boundaries" (Ocol 127). In the absence of this cultural vision, literary productions will always paint a pessimistic picture of Africa: "What proud poem / Can we write …"

Conclusion

This study demonstrates that Okot p'Bitek's poems, though largely perceived as espousing cultural nationalism, depart significantly from the militancy and escapist idealism of Senghorianism. Okot p'Bitek absolves fellow African poets of any blame. What they create is determined by the raw materials they gather from real-life experiences.
It is important to note that Okot deliberately alludes to historical events and personalities in Africa to lend credence to his cultural propositions. He implicitly demonstrates his heavy leaning towards Fanonianism. Like Fanon, Okot projects a sense of unified political consciousness onto the peasantry in their struggle to overthrow colonial systems of power. Peasant militancy in Fanon's analysis becomes the exact justification for his theory, yet does not necessarily exist in the material sense. Homi Bhabha wrote that Fanon's dedication to a national consciousness can be read as a "deeply troubling" demand for cultural homogeneity and the collapse of difference. Bhabha, however, suggests Fanon's vision is one of strategy, and any focus on the homogeneity of the nation should not be interpreted as "narrow-minded nationalism," but as an attempt to break the imposed binaries of colonialism. This resonates with Fanon's argument that national cultural identity was basically a strategic step towards overcoming the assimilation of colonialism and building an international consciousness where the binaries of colonised and coloniser were dissolved. "On National Culture" is also a notable reflection on Fanon's complex history with the Négritude movement. Aimé Césaire, Fanon's teacher and an important source of intellectual inspiration throughout his career, was the co-founder of the movement. While Okot's thinking in the two poems appears intersected with figures associated with Négritude, including a commitment to rid humanism of its racist elements and a general dedication to Pan-Africanism in various forms, it is critical of their idealist escapism, especially considering its historical context. Okot's poems initially appear to be inspired by the movement, which often revolved around the presumption that a unified African Negro culture existed.
Négritude intended to enliven black culture with qualities indigenous to African history, but made no mention of a material struggle or a nationalist dimension. Meanwhile, throughout the essay, Fanon stressed the cultural differences between African nations and the particular struggles black populations were facing, which required material resistance on a national level. He criticises those who call for black cultural unity yet oppose Black nations' bids for independence from neocolonial tendencies.
Rapid crystallization during recycling of basaltic andesite tephra: timescales determined by reheating experiments

Nicholas Deardorff & Katharine Cashman

Microcrystalline inclusions within microlite-poor matrix are surprisingly common in low intensity eruptions around the world, yet their origin is poorly understood. Inclusions are commonly interpreted as evidence of crystallization along conduit margins. Alternatively, these clasts may be recycled from low level eruptions where they recrystallize by heating within the vent. We conducted a series of experiments heating basaltic andesite lapilli from temperatures below the glass transition (~690 °C) to above inferred eruption temperatures (>1150 °C) for durations of 2 to >60 minutes. At 690 °C < T < 800 °C, crystallization is evident after heating for ~20 minutes; at T > 800 °C, crystallization occurs in <5 minutes. At T ≥ 900 °C, all samples recrystallize extensively in 2–10 minutes, with pyroxenes, Fe-oxides, and plagioclase. Experimental crystallization textures closely resemble those observed in natural microcrystalline inclusions. Comparison of inclusion textures in lapilli from the active submarine volcano NW Rota-1, Mariana arc and subaerial volcano Stromboli suggest that characteristic signatures of clast recycling are different in the two environments. Specifically, chlorine assimilation provides key evidence of recycling in submarine samples, while bands of oxides bordering microcrystalline inclusions are unique to subaerial environments. Correct identification of recycling at basaltic vents will improve (lower) estimates of mass eruption rate and help to refine interpretations of eruption dynamics.
Tephra produced by explosive eruptions provides important information about magma ascent, vesiculation, fragmentation, and deposition. Mafic pyroclasts from strombolian to violent strombolian eruptions are characterized by a wide range in both vesicularity and groundmass crystallinity. In particular, mafic pyroclasts are often classified as either microlite-poor (sideromelane) or microlite-rich (tachylite)1. Both sideromelane and tachylite are often found within the same depositional layers, and even within a single clast.
However, the origin of these clast types, and the ascent and eruption conditions implied by the variable proportions of different clast types, is not well understood. Correct identification of microcrystalline textures has significant implications, as these textures are often used to interpret eruption dynamics. The presence of both sideromelane and tachylite is a common feature of tephra deposits from cinder cone eruptions (e.g., Cinder Cone, CA1; Stromboli, Italy2,3; Mt. Etna, Italy4; Parícutin, Mexico5; Newberry Volcano, OR6; Mt. Vesuvius, Italy7). Sideromelane clasts are generally assumed to represent primary (deeper) magma that ascends rapidly and erupts. Tachylite clasts have been interpreted as slow-moving magma incorporated from along conduit walls3,4 or as magma stored temporarily in shallow dikes and sills5,8. Both scenarios call upon tachylite-forming magma to have sufficiently long residence times within the upper crust to allow magma degassing and crystallization prior to eruption. An alternative explanation is additional residence time in the vent through recycling of previously erupted clasts9,10, a mechanism that should be enhanced when mass eruption rates (MERs) are sufficiently low that clasts are not transported far from the vent. An extreme example comes from mild submarine eruptions observed at NW Rota-1 (Mariana arc), where eruption plumes are suppressed by the overlying water column9. Abundant pyroclast recycling has also been described at Stromboli volcano (Italy)11. In both cases, the extent of vent clogging affects the subsequent eruption intensity12 and grain size distribution9. Importantly, clogged vents may promote reheating of the recycled clasts. Textural evidence of recycling may include pyroclasts with additional groundmass crystallization, precipitation of sublimates on external surfaces, and changes in color, luster and external morphologies10,13.
Recycled clasts may also form microcrystalline inclusions within more juvenile, less microcrystalline matrix that display varying degrees of deformation, mingling, and banding 9 (Fig. 1). A common feature of the latter is an oxide-rich layer surrounding the included clast. In contrast, recycling in the submarine environment may preserve geochemical evidence of seawater assimilation, specifically as chlorine enrichment within microcrystalline (recycled) inclusions 9 . The common component in recycled tephra, whether entrained in juvenile melt or not, is increased groundmass crystallization. However, the time required for such crystallization is not well constrained. Here we constrain time scales of crystallization and recreate recycling microcrystalline textures by pyroclast reheating experiments. Identification of common recycling textures and how they vary with time, temperature, and oxidation conditions may allow us to determine signatures of recycling in natural samples. Crystallization is usually studied as a cooling-driven phenomenon. However, heating glass above the glass transition temperature (but below the liquidus) can also cause crystallization 13,14 . The limited experimental data on heating-induced crystallization show that the crystallization kinetics are interface-controlled and depend on oxidation state as well as temperature, and that environments that promote such crystallization include overtopping lava flows in pahoehoe fields 14 and intra-crater pyroclast accumulation 13 . Our experiments complement those of D'Oriano et al. 13 by covering a wider temperature range (< 690 °C-1170 °C). [Fig. 1 caption: Both thin sections show microlite-poor (tan, sideromelane) matrix glass with microcrystalline (dark brown) inclusions that we interpret to be recycled clasts 9 . Inclusions can have sharp or diffuse boundaries and display textures that suggest mingling with surrounding matrix.] Scientific Reports | 7:46364 | DOI: 10.1038/srep46364.
In this study we (1) establish textural criteria to recognize heating-induced crystallization in both subaerial and submarine environments and (2) constrain time scales of clast recycling. The implications of incorrectly identifying pyroclast recycling include potential overestimation of MER by misinterpreting recycled clasts as juvenile, and misinterpretation of eruption dynamics and crystallization history through microlite textures (e.g. crystallization via long residence time along conduit walls vs. recycling). Methods We conducted reheating experiments on natural basaltic andesite (~55 wt% SiO 2 ) lapilli from Parícutin, Mexico to test the effect of reheating on microcrystallinity in tephra. We chose to use low crystallinity (sideromelane) Parícutin lapilli because the eruptions and deposits have been well studied 5,15 and are of similar composition to NW Rota-1 (~52-53 wt% SiO 2 ). We also attempted heating experiments on NW Rota-1 samples; however, heating these clasts caused the glass to 'inflate', producing a brittle, bubbly, popcorn-like texture that we were unable to polish or analyze. The 'inflation' of NW Rota-1 glass may be due to moderately hydrous glass (H 2 O: 0.3-1.1 wt%, determined through Fourier transform infrared spectroscopy). No inflation was observed in any of the heated Parícutin clasts. Additionally, microcrystalline inclusions are prevalent in NW Rota-1 samples, found in most sideromelane clasts examined, making them less desirable as experimental specimens. Microcrystalline inclusions have been observed in a few dark and dense (tachylite) Parícutin clasts (Ref. 16; clast types described in ref. 5) but were not present in any of the tan (sideromelane) clasts used in the reheating experiments. Clasts 4-8 mm in diameter were split, with one half saved as a control and one half heated in a one-atmosphere Deltec vertical tube furnace at atmospheric oxygen fugacity (fO 2 ).
To constrain crystallization kinetics, we heated samples over different time intervals from room temperature to experimental temperatures ranging from below the glass transition temperature (T g ) of basalt at ~690 °C 17 to 1170 °C, which we infer to be above the eruption temperature and to approach the liquidus (~1178 °C; calculated in MELTS 18,19 ). The tube furnace was brought up to temperature before each sample was inserted. 1D modeling using Fourier's heat flow equation shows that the center of a solid 4 mm diameter sphere will equilibrate with the furnace temperature within approximately 30 seconds at all experimental temperatures considered in this study. These calculations provide a conservative estimate, as all of the pyroclasts were vesicular. Upon completion of the experiment the samples were promptly removed to cool in air at room temperature. During removal from the furnace, clast temperature fell below the glass transition temperature within seconds. Heating times reported indicate the length of time the clast was in the furnace and above T g . Each clast was heated isothermally for 2 to 64 minutes at T = 620-1000 °C or 5-30 minutes at T ≥ 1100 °C. Pyroclasts were impregnated with epoxy, cut, and polished for analysis via back-scattered electron (BSE) imaging using a FEI Quanta 200 SEM at the University of Oregon. Images of both the control and the experimental samples permitted assessment of groundmass crystallization caused by heating. BSE images were analyzed with ImageJ software for total heating-induced crystallization by measuring the matrix area containing newly crystallized microlites. These measurements should be considered minimum estimates, as areas with incipient crystallization (see below) were not included. Experimental run durations, temperatures, and area percent of heating-induced crystallization are listed in supplemental Table 1. Results Heating-induced crystallization textures are illustrated in Fig. 2a-d.
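The furnace-equilibration estimate can be checked with the classic series solution for transient conduction into a sphere held at a fixed surface temperature. A minimal sketch follows, where the thermal diffusivity value (~1×10⁻⁷ m²/s) is an assumed placeholder for basaltic glass, not a value stated in the text:

```python
import math

def center_theta(fo, n_terms=50):
    """Dimensionless unaccomplished temperature at the center of a sphere
    after a step change in surface temperature (classic Fourier series solution)."""
    return sum(2 * (-1) ** (n + 1) * math.exp(-n**2 * math.pi**2 * fo)
               for n in range(1, n_terms + 1))

def equilibration_time_s(radius_m, diffusivity_m2_s, tol=0.01, dt=0.1):
    """Time for the sphere center to come within `tol` (fractional) of the
    furnace temperature, found by stepping the Fourier number forward."""
    t = dt
    while center_theta(diffusivity_m2_s * t / radius_m**2) > tol:
        t += dt
    return t

# Assumed parameters: 4 mm diameter (2 mm radius) clast, diffusivity ~1e-7 m^2/s
print(equilibration_time_s(0.002, 1e-7))  # on the order of tens of seconds
```

With these assumed values the center equilibrates in roughly 20 s, the same order as the ~30 s quoted above; the solid-sphere assumption is the conservative case since vesicular clasts contain less mass to heat.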
The earliest stage of crystallization involves incipient crystallization, observed as areas of phase separation (groundmass microlites are just beginning to form, but are not quite recognizable) and the presence of numerous sub-micron-sized crystals (Fig. 2a). As temperature and/or time increase, the percentage of newly crystallized matrix increases, transforming from glass to incipiently crystallized areas. A patchy crystallization phase follows (Fig. 2b), consisting primarily of localized dendritic growth of pyroxene microlites and expansion of phase separation and nucleation until little or no matrix glass remains (extensive crystallization). Dendritic growth is most prevalent in localized areas between plagioclase microphenocrysts; areas that presumably were the first to nucleate (Fig. 2b). Extensive crystallization, at T ≥ ~800 °C after > 60 min and at T > 890 °C after 5 minutes, is characterized by dendritic growth of pyroxenes and oxides and little remaining matrix glass (Fig. 2c). At 900 ≤ T < 1100 °C, three-phase extensive crystallization is observed, where virtually all original glass has transformed to crystals (Fig. 2c). At T > 1100 °C, crystallization of new phases drops abruptly and is dominated by oxides (Fig. 2d), which replace pyroxenes as the dominant crystal phase. High temperature experiments (> 1150 °C) produced numerous oxides growing in clusters, on microphenocrysts with resorption textures, and on the external surfaces of the pyroclasts and surfaces of vesicles (likely as sublimates 13 ). The high temperature experiments lower the viscosity of the matrix glass (log 10 viscosity in Pa s decreases by ~4 units from 690 to 1150 °C 20 ) sufficiently to allow flow, which causes vesicles to collapse, new bubbles to form, and clasts to reshape into fluidal morphologies. The collapsed vesicles form prominent linear oxide strands from oxides formed on vesicle walls (Fig. 2d). Experimental results are summarized in Fig.
3, which shows phase types as a function of temperature and time. Most notable is the temperature dependence of phase appearance, with pyroxene + oxide crystallization at > 690 °C followed by plagioclase crystallization at ~900 °C. The apparent absence of oxides in experimental runs of T = ~800 °C and run lengths < 60 min is most likely the result of their small size and the poor atomic number contrast in those BSE images. Plagioclase was observed only at temperatures between 900 °C and 1000 °C, and has characteristic elongate lath and swallowtail morphologies (Fig. 2c). At runs of T = ~1000 °C and ≤ 10 min, plagioclase was not clearly apparent but may have been present within dark areas between pyroxene microlites. Although clusters of plagioclase microlites are present in high temperature runs (≥ 1100 °C), the lath and swallowtail rapid growth morphologies observed in experiments between 900 °C and ~1000 °C are not observed. Therefore, we cannot conclusively determine the presence of new plagioclase growth at these temperatures. There is an increase, and then decrease, in total crystallinity as a function of temperature and time, as illustrated in the Time-Temperature-Transformation (TTT) diagram of Fig. 4a. [Fig. 2 caption, continued: (b,f) Dendritic growth and patchy crystallization between microphenocrysts and on the surfaces of pre-existing crystals. Extensive crystallization with no unaffected matrix glass is found in (c) and with very little unaffected glass in (g). The area around the vesicle in (c) is altered but lacks extensive crystal growth, likely due to element loss by diffusion. (d) High temperature experiment: glass has begun to flow, collapsing vesicles. Oxides, growing on vesicle and crystal surfaces, form linear features after vesicle collapse. A similar texture is observed in (h), where oxides lie along a boundary between microlite-rich and microlite-poor areas.]
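The area-percent crystallization measurements (made in ImageJ on BSE images) amount to thresholding and pixel counting. A minimal pure-Python sketch of the same idea follows; the synthetic image, threshold value, and the assumption that newly crystallized microlites appear brighter in BSE are all illustrative:

```python
def crystallized_area_percent(bse_image, threshold):
    """Fraction of pixels at or above `threshold` brightness, in percent.
    Mimics a simple threshold-and-measure workflow (illustrative only)."""
    pixels = [p for row in bse_image for p in row]
    above = sum(1 for p in pixels if p >= threshold)
    return 100.0 * above / len(pixels)

# Synthetic 4x4 grayscale 'image': 8 of 16 pixels are at or above 128
img = [[200, 50, 210, 40],
       [60, 220, 30, 190],
       [180, 20, 230, 10],
       [70, 240, 90, 150]]
print(crystallized_area_percent(img, 128))  # → 50.0
```

A real workflow would additionally mask out vesicles and pre-existing phenocrysts, which is why the measurements in the text are described as minimum estimates.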
Temperature appears to have the greatest impact on crystallization, illustrated by measurements that show < 45% of matrix has crystallized at T ≤ 890 °C, versus > 80% new crystallization at 890 °C ≤ T ≤ 990 °C for experimental runs longer than two minutes, and > 93% at 990 °C ≤ T ≤ 1132 °C. At T ≥ 1150 °C the extent of new crystallization drops precipitously to 3-4% as only oxides are newly crystallized. TTT diagrams are used in industrial glass research to constrain critical cooling rates needed to avoid mineral formation. The 'zone' of crystallization is indicated by a C-shaped curve (solid black lines in Fig. 4a,b) to the left of which only amorphous glass will form and to the right of which nucleation of mineral phases will occur. These curves are typically determined experimentally, as the location of the curve is dependent on composition and experimental conditions e.g. [21][22][23] . The average placement of the TTT curve on the temperature axis can be estimated as T n = 0.5(T g + T l ) e.g. 22 , where T l is the liquidus temperature and T n is the 'nose' of the TTT diagram. For our experimental conditions, T n = 934 °C if T l = 1178 °C and T g = 690 °C; we approximate the placement of T n on the time axis using the results of our experiments, which show that the minimum dwelling time (t n ) for the onset of crystallization for basaltic andesite tephra is ≤ 120 seconds. T n anchors the TTT curve in Fig. 4a; within the (outer) TTT curve we have added dashed lines to indicate inferred boundaries separating the crystallization fields observed in our experiments. For comparison, in Fig. 4b we have plotted the experimental data of D'Oriano 13 and Burkhard 14 , using their sample descriptions. We use a T n of ~1000 °C (after D'Oriano et al. 10 ), but have fewer constraints on placement of the TTT curve on the time axis because of the long experimental durations (≥ 40 min for D'Oriano; ≥ 22 hrs for Burkhard). 
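The placement of T n on the temperature axis is simple arithmetic from T g and the liquidus; for example:

```python
def ttt_nose_temperature(t_glass_c, t_liquidus_c):
    """'Nose' of the TTT curve estimated as the midpoint of the glass
    transition and liquidus temperatures: T_n = 0.5 * (T_g + T_l)."""
    return 0.5 * (t_glass_c + t_liquidus_c)

# Values from this study: T_g ~690 °C (basalt), liquidus ~1178 °C (MELTS)
print(ttt_nose_temperature(690, 1178))  # → 934.0
```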
It should be noted that TTT curves will likely shift due to compositional differences (basaltic andesite, this study; alkali-rich basalts 13 ; Kilauea basalt 14 ). For example, incipient crystallization was observed at longer timescales in both D'Oriano's and Burkhard's experiments (thin Xs) than in those of this study, but at similar temperatures, suggesting the TTT curve of basalt is shifted to the right, with a greater t n than basaltic andesite. More experiments over a greater range of compositions and timescales are needed to explore the exact shapes of TTT heating curves and minimum dwelling times during recycling. Discussion The extent of primary crystallization in basaltic systems is controlled by composition, temperature, pressure, and time, and is driven by cooling and/or decompression. Here we have demonstrated a less intuitive process: crystallization by heating glassy samples (often referred to as devitrification), in contrast to traditional experimental work that treats crystallization as a cooling-driven phenomenon. We further suggest that heating-induced crystallization has been under-appreciated in volcanic environments. Tephra, when rapidly quenched upon interaction with air (or water), is often glassy (meaning it was liquid at the time of eruption). Quenching prior to complete crystallization means that tephra is in a metastable state and requires only an increase in temperature above T g to activate diffusion and initiate (re)crystallization. Cooling of a basaltic andesite melt at 1 atm would initially induce crystallization of plagioclase, followed by orthopyroxene, clinopyroxene, and finally iron oxides (from MELTS 18,19 ). In our reheating experiments we see the reverse order, with pyroxenes and oxides the first to crystallize, followed by plagioclase. The mineral sequence and temperatures of crystallization are similar to those found by Burkhard 14 for Kilauea basalt and D'Oriano et al.
13 for alkali basalt, which covered more limited temperature ranges (700-750 °C and 1,000-1,130 °C), but over variable fO 2 . These studies suggest the results are due to diffusion, with the first mineral phases to appear having the fastest element diffusion rates, while delayed nucleation of plagioclase requires higher temperatures because of relatively slower diffusion. It is important to note that plagioclase crystallization could also be inhibited if quenched and recycled clasts preserve elevated amounts of H 2 O (e.g. refs [24][25][26]). Although olivine-hosted melt inclusions from Parícutin volcano record pre-eruption water contents of 1.3-4.2 wt% 5 , we expect the water contents of matrix glass to be very low (~0.1 wt% H 2 O), consistent with the absence of vesiculation upon heating (in contrast to NW Rota-1 samples). Thus, although plagioclase suppression has been observed at H 2 O ≤ 0.5 wt% 26 and could have contributed to suppression of plagioclase at T < 900 °C (and subsequent crystallization if further degassed), we believe variable diffusion rates are most likely to control the crystallization sequence. With increasing temperature, diffusion rates increase, effective supersaturation decreases, and the phase assemblage approaches that of the erupted material. The continued crystallization of oxides in the high temperature experiments is explained by the high (atmospheric) fO 2 of our experiments, a condition that should also apply to recycling of pyroclasts in subaerial environments. Our reheating experiments produced a range of crystal textures, from localized nucleation of only one or two mineral phases to extensive crystallization with three mineral phases. Many of the experimental crystal textures observed are similar to those found in other experiments 13,14 and are comparable to textures found in microcrystalline inclusions from both submarine and subaerial volcanoes (Fig. 2e-h). Incipient crystallization textures (Fig.
2a) were very common in our experimental pyroclasts but were observed in only a couple of NW Rota-1 thin sections (Fig. 2e). Crystal nucleation and additional dendritic growth on pre-existing crystal faces (Fig. 2b) are particularly common, occurring in all experimental runs from T g to > 1000 °C, and are present in nearly all microcrystalline inclusions observed in natural samples (Fig. 2f). Extensive crystallization is also common in NW Rota-1 microcrystalline inclusions, which often show three crystal phases (pyroxenes, plagioclase, Fe-sulfides). However, the natural microcrystalline inclusions tend to have unaffected matrix glass between microlites (Fig. 2g); clean glass in extensively crystallized areas of the experimental charges was rare except in very short duration experiments (≤ 10 min at T ≈ 890 to > 1000 °C). This suggests that the recrystallization of microcrystalline inclusions in natural samples likely occurred over very short timescales (< 10 min), consistent with the pulsating form of most low-level eruptive activity. The highest temperature experimental samples show extensive oxide crystallization (Fig. 2d), as also seen in the high fO 2 (atmospheric) experiments of D'Oriano et al. 13 . Although we do not see extensive oxide crystallization in the submarine tephra samples from NW Rota-1, we do see these features in subaerial samples from Stromboli volcano (Fig. 2h). Here a thick strand of oxides extending from an oxide border between glassy and crystalline regions resembles the linear strands of oxides formed after vesicle collapse in our reheating experiments (Fig. 2d). The absence of oxides in the submarine samples can thus be explained by the different fO 2 in the submarine environment. While oxidizing conditions are not present in the submarine environment, enrichment of chlorine in microcrystalline inclusions in NW Rota-1 submarine tephra provides evidence of seawater entrainment by recycled clasts 9 .
From this we suggest that chlorine assimilation provides a recycling signature in the submarine environment through interaction with seawater 9 , whereas oxide strands may be a characteristic signature of subaerial recycling. Conclusions and Implications. The experiments in this study and those of D'Oriano et al. 10,13 have shown that pyroclasts falling back into a volcanic vent can experience additional crystallization if heated to temperatures above T g . This suggests that in some cases, pyroclasts that exhibit both microlite-poor and microlite-rich textures may record recycling and reheating of previously ejected clasts. This provides an alternative explanation for these textures and, correspondingly, different implications for eruption conditions. Our experiments also show that the onset of crystallization is rapid and temperature dependent, occurring within 20 minutes at T > ~690 °C (T g ) and within < 5 min at T > 800 °C. Moreover, crystallization is extensive when clasts are heated to ≥ 900 °C. At very high temperatures (≥ 1150 °C) recycled clasts may also show evidence of deformation, such as flow banding and mingling, as well as vesicle collapse and formation of oxide trails and coatings; flow banding and mingling in areas indicating recycling thus suggest temperatures ≥ 1150 °C, the temperature at which vesicle collapse and glass flow occurred experimentally under gravity. However, high temperature experiments, near or above the eruption temperature, did not yield the extent of crystallization observed in natural samples. Mingling of microcrystalline inclusions with the surrounding matrix may occur at slightly lower temperatures (< 1100 °C) due to differential stresses during mixing and churning within the vent. Recycling of pyroclasts is most likely to occur at volcanoes characterized by mild explosivity and the inability to completely expel all pyroclasts from the vent.
Pyroclasts falling back into the vent have the potential to be reheated and recycled, inducing additional degassing and microcrystallization and altering the primary textures used to infer magma ascent, vesiculation, fragmentation, and deposition. The actual extent of pyroclast recycling in the subaerial and submarine environments is unknown, but is likely greatly under-reported. We note, however, that textures consistent with recycling (including high crystallinities and oxide trains or coatings) have been observed at Mt. Etna, Italy 4,10 ; Stromboli, Italy (this study 2,3 ); Mt. Vesuvius, Italy 10 ; Lava Butte, Oregon Cascades 27 ; Parícutin, Mexico 16 ; Gaua, Vanuatu 10 ; and Llaima, Chile 28 . From this we suggest that recycling may be quite common at volcanoes of basaltic to basaltic andesite composition. Moreover, the higher confining pressure of submarine explosive eruptions should further promote recycling, as suggested by analysis of tephra from the 2006 eruptions of NW Rota-1, Mariana arc, where recycled material comprises up to 15% of the total volume of magma erupted during a single event 9 . However, due to limited sampling and very few observations of submarine eruptions, the extent of submarine recycling cannot yet be determined. More extensive study is required to determine the frequency of recycling at individual volcanoes and at low-MER volcanoes around the world. As mafic volcanoes are the most abundant on Earth, it is important that we can identify signatures of recycling, along with 'primary' textures, in order to correctly interpret eruption dynamics and depositional characteristics. Incorrectly identifying recycling textures could lead to overestimation of MERs and misinterpretation of eruption dynamics and crystallization history.
Expression of single-chain variable fragments fused with the Fc-region of rabbit IgG in Leishmania tarentolae Background In recent years the generation of antibodies by recombinant methods, such as phage display technology, has increased the speed by which antibodies can be obtained. However, in some cases when recombinant antibodies have to be validated, expression in E. coli can be problematic. This primarily occurs when codon usage or protein folding of specific antibody fragments is incompatible with the E. coli translation and folding machinery, for instance when recombinant antibody formats that include the Fc-region are needed. In such cases other expression systems can be used, including the protozoan parasite Leishmania tarentolae (L. tarentolae). This novel host for recombinant protein expression has recently shown promising properties for the expression of single-chain antibody fragments. We have utilised the L. tarentolae T7-TR system to achieve expression and secretion of two scFvs fused to the Fc-region of rabbit immunoglobulin G (IgG). Results Based on the commercial vector pLEXSY_IE-blecherry4 (Jena Bioscience; Cat. No. EGE-255), we generated a vector containing the fragment crystallisable (Fc) region of rabbit IgG, allowing in-frame insertion of single-chain antibody fragments (scFvs) via NcoI/NotI cloning (pMJ_LEXSY-rFc). For the expression of rabbit Fc-fusion scFvs (scFv-rFc) we cloned two scFvs, binding to human vimentin (LOB7 scFv) and murine laminin (A10 scFv) respectively, into the modified vector. The LOB7-rFc and A10-rFc fusions expressed at levels up to 2.95 mg/L in L. tarentolae T7-TR. Both scFv-rFcs were purified from the culture supernatants using protein A affinity chromatography. Additionally, we expressed three different scFvs without the rFc regions using a similar expression cassette, obtaining yields up to 1.00 mg/L.
Conclusions To our knowledge, this is the first time that antibody fragments with an intact immunoglobulin Fc-region have been produced in L. tarentolae. Using the plasmid pMJ_LEXSY-rFc, L. tarentolae T7-TR can be applied as an efficient tool for expression of rFc-fusion antibody fragments, allowing easy purification from the growth medium. This system provides an alternative in cases where antibody constructs express poorly in standard prokaryotic systems. Furthermore, in cases where bivalent Fc-fused antibody constructs are needed, using L. tarentolae for expression provides an efficient alternative to mammalian expression. Background Antibodies are applied in both basic research and diagnostics, and represent an increasingly important class of therapeutics. Monoclonal antibodies are the largest and fastest-growing class of protein pharmaceuticals [1]. In the discovery and development of these antibodies, antibody fragments such as the antigen binding fragment (Fab), the single-chain variable fragment (scFv), and the single variable domains (V H and V L , collectively sdAb) are often employed [2]. The present recombinant antibody discovery platforms, such as ribosome and phage display [3], enable easy screening and selection of antibody fragments against virtually any antigen [4]. Based on the initial screening or selection, a number of candidate antibodies are obtained [3]. Often, these recombinant antibody candidates can be expressed in E. coli. However, although the field of recombinant protein expression in E. coli is developed and expanded [5,6], the codon usage and folding dynamics of some recombinant antibody clones are incompatible with the bacterial expression machinery [7,8]. In addition, for further evaluation of an antibody fragment it can be necessary to test additional formats, including the Fc-fusion format; such formats are inherently unsuitable for (but not outright incompatible with) prokaryotic expression [7,9].
Modifying a Fab, scFv, or sdAb by fusing it to the Fc-region will produce a bivalent antibody format similar to the canonical antibody [10,11]. The bivalent format increases the apparent affinity due to avidity, provided that multiple epitopes are available. A further benefit of the Fc-fusion, which can potentially be imparted to some antibody fragments, is a decrease in their propensity to aggregate [1,12]. At the same time, the molecular sizes of the Fc-fused antibodies increase from approximately 12, 25, or 50 kDa (sdAb, scFv, and Fab respectively) to approximately 75, 100, or 150 kDa. An increase of molecular size in this range will greatly increase the serum half-life of a recombinant antibody, by putting it beyond the cut-off for renal clearance. For example, native IgG1 of approximately 150 kDa has a serum half-life of around 21 days, whereas the serum half-lives of sdAb and scFv are in the area of 0.05 and 0.1 days respectively [13]. The longer serum half-life of native IgG and of some of the larger recombinant formats is only partly attributed to their molecular size. It is furthermore a consequence of the interactions of the Fc-region with the neonatal Fc receptor (FcRn). The interaction with FcRn salvages the antibodies from endosomes and returns them to circulation, rather than letting them enter the lysosomal degradation pathway [13,14]. On the other hand, increasing size in general reduces the ability of the antibody fusion to penetrate tissue. The aspects of and need for prolonging the half-life of popular small antibody formats are reviewed in Kontermann 2009 [15]. Besides extending the serum half-life of potential protein therapeutics, the Fc-region also confers other useful properties with regard to purification and immunochemistry. In protein purification, the Fc-region allows binding to protein A and protein G, hence supporting effective one-step purification by affinity chromatography [16,17].
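The size jump from fragment to Fc-fusion follows directly from the bivalent (homodimeric) architecture. A small sketch using the approximate masses above; the ~25 kDa per-chain Fc contribution is an inferred assumption chosen to be consistent with the quoted totals:

```python
# Approximate fragment masses (kDa) taken from the text
FRAGMENT_KDA = {"sdAb": 12, "scFv": 25, "Fab": 50}
FC_CHAIN_KDA = 25  # hinge + CH2-CH3 per chain (assumed, consistent with totals)

def fc_fusion_mass_kda(fragment):
    """Mass of the bivalent Fc-fusion: two chains, each carrying one
    fragment plus one half of the Fc homodimer."""
    return 2 * (FRAGMENT_KDA[fragment] + FC_CHAIN_KDA)

print({f: fc_fusion_mass_kda(f) for f in FRAGMENT_KDA})
# → {'sdAb': 74, 'scFv': 100, 'Fab': 150}, close to the ~75/100/150 kDa quoted
```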
With respect to immunochemistry, the presence of the Fc-region facilitates detection using many common secondary antibodies [18]. Other obvious tags for detection and purification, such as the His-tag and c-Myc tag, are also present in our vector. One should nonetheless also consider protein L for purification of those antibodies holding a kappa light chain. We thus present a vector construct which allows for versatile strategies of purification and immunochemistry. The current method of choice for experimental-scale expression of full-length antibody, and for formats including the Fc-region, is transient expression in mammalian hosts, such as Chinese hamster ovary (CHO) or human embryonic kidney (HEK) cells [19,20]. In such systems, high expression levels can be obtained, reaching triple-digit mg/L levels. The drawbacks of mammalian cell expression systems are the need for dedicated labs and equipment, the labour-intensive handling, and economic considerations such as the price of reagents, culture medium, and labware [21]. Expression of recombinant proteins is one of the most central disciplines in molecular biology and medical research, and innovative systems with unique advantages are thus constantly being explored [21]. Among emerging systems is the one based on L. tarentolae, a protozoan parasite infecting the gecko Tarentola annularis. The unicellular eukaryote L. tarentolae is an extensively studied model for the disease leishmaniasis [22]. Their unique transcriptional and translational machinery has enabled the generation of novel expression systems [23,24] based on the use of exogenous RNA polymerases [25]. Properties like non-laborious handling, post-translational modifications similar to those of mammalian systems, episomal vector maintenance, and effective secretion of recombinant proteins are some of the strengths of the L. tarentolae systems.
Recent studies aimed at addressing the importance of the signal peptide cleavage site have shown that expression of scFv antibodies (3.83 mg/L culture medium) can be obtained using this system [26]. In order to validate the system for expression of Fc-fused antibody fragments, we have created a vector allowing expression and secretion of scFvs fused with the Fc-region of rabbit IgG (rFc) using L. tarentolae T7-TR (Jena Bioscience). The rFc is a convenient choice when the fusion proteins are to be used in immunochemical analysis of cells (ICC) and tissues (IHC) of human or mouse origin. This is essential for research performed in laboratories in which cells and tissues of human and mouse origin are used. Using the rFc region decreases the interaction, if any, between the endogenous Fc-receptors and the rabbit Fc part. The use of this rFc construct thus reduces the need to block the endogenous Fc receptors of human and mouse tissue [27]. In summary, we here report a system based on L. tarentolae T7-TR for the production of rFc-fused scFv antibodies. The resulting recombinant antibodies are ideal for IHC or ICC on human or rodent material, have a higher apparent affinity for multivalent antigens due to avidity, and are compatible with convenient purification methods. The expression host is characterised by non-laborious handling requirements, eukaryotic post-translational modification of expressed proteins with near-mammalian N-glycan structures, and effective secretion of recombinant protein to the culture medium [28]. Results and discussion Construction of the vector pMJ-LEXSY-rFc for episomal expression-secretion of scFv-rFc The commercial episomal expression system based on L. tarentolae T7-TR (Jena Bioscience) is appealing as the target protein can be isolated directly from the culture supernatant, enabling convenient one-step affinity purification. In addition, expression in L.
tarentolae facilitates post-translational modifications of proteins from higher mammals [28]. However, the vector pLEXSY_IE-blecherry4 does not allow for cloning via NcoI/NotI, a routinely used restriction enzyme combination for antibody fragments [29], as this would remove the secretory signal peptide and the polyhistidine stretch (His-tag). To render NcoI/NotI cloning possible, we replaced the existing expression cassette with a cassette from a modified version of the pF4SPImsapX1.4sat vector using BglII and MluI (own unpublished work). The expression cassette from this vector comprises an NcoI/NotI cloning site downstream from the signal peptide of L. mexicana secreted acid phosphatase 1 (LMSAP1) and a C-terminal His-tag for purification followed by a c-Myc-tag for detection. Replacement of the expression cassette was done without changing the untranslated regions of the parent vector, pLEXSY_IE-blecherry4. The new vector was named pMJ-LEXSY. Moreover, for production of Fc-fusion antibodies, we integrated the rabbit IgG Fc-encoding region (hinge region and CH2-CH3) into the pMJ-LEXSY vector, whereby the pMJ-LEXSY-rFc vector was generated (Figure 1). The rFc-encoding region was inserted into the NotI site. The insert was prepared using sticky PCR at one end of the insert, resulting in deletion of the plus-strand 3′ NotI site proximal to the tags upon insertion of the rFc-encoding sequence [30]. Thus the unique NotI restriction site was retained, allowing NcoI/NotI insertion of antibody fragments. Expression and purification Expressions (80-100 mL in 250 mL flasks) of the two scFv-rFc constructs were carried out for 68-73 hours to explore the level of expression at small scale. In addition, we explored the capacity of the expression system to produce classical scFvs using the vector pMJ-LEXSY. Three different scFvs (A10, LOB7, and Y4A) were expressed together with A10-rFc and LOB7-rFc in L. tarentolae T7-TR (Figure 2).
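The NcoI/NotI cloning scheme can be sanity-checked in silico with a trivial site search. The recognition sequences (CCATGG for NcoI, GCGGCCGC for NotI) are the standard ones; the example insert sequence and helper function are hypothetical:

```python
NCOI_SITE, NOTI_SITE = "CCATGG", "GCGGCCGC"

def clonable_ncoi_noti(insert_seq):
    """True if the insert carries an NcoI site upstream of a NotI site,
    the orientation needed for in-frame insertion into the vector."""
    s = insert_seq.upper()
    return (NCOI_SITE in s and NOTI_SITE in s
            and s.find(NCOI_SITE) < s.rfind(NOTI_SITE))

# Made-up scFv insert flanked by the two sites
print(clonable_ncoi_noti("ccatggCCGAAGTTCAACTGGTTGAAgcggccgc"))  # → True
```

A full check would also verify reading frame and the absence of internal sites, which real restriction-mapping tools handle.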
The antibody A10 is an scFv derived from the Tomlinson I library [31]. It binds to murine laminin from Engelbreth-Holm-Swarm tumour cells. The antibody LOB7 is an scFv derived from the Tomlinson J library [31]. This antibody binds to human vimentin. The Y4A antibody is an scFv with a lambda light chain derived from the YAMO library [32]. This antibody recognises C5a anaphylatoxin. These antibodies were chosen as model antibody fragments, as we had significant experience in expressing them as scFvs in E. coli. Expression and subsequent purification by affinity chromatography established that recombinant antibody fragments could be obtained from all supernatants (Figure 2). Purifications were carried out using Protein A chromatography for the rFc constructs and Ni-NTA protein purification for the scFvs. The expression levels observed for the various clones ranged from 0.3 mg/L to 1 mg/L for the scFvs and from 0.6 mg/L to 2.95 mg/L for the scFv-rFcs. Recently, Jäger et al. have shown that yields up to 600 mg/L of scFv-Fc fusion proteins can be obtained by optimised transient expression in HEK 293 cells [19]. Although the yields in our work are significantly lower, we still consider expression in L. tarentolae beneficial due to the few requirements for implementing this system in the laboratory and the non-laborious handling of L. tarentolae. When the purified proteins were analysed by SDS-PAGE, two bands were detected. This was also confirmed by western blot analysis (Figure 2), in which LOB7-rFc was detected with an HRP-conjugated anti-rabbit antibody. Additionally, the western blot analysis confirmed that the rFc-conjugated antibody assembles into a bivalent structure under non-reducing conditions. The same trend was seen for the A10-rFc antibody (Additional file 1). The apparent heterogeneity of size can be difficult to see for the non-reduced band in the western blot.
This is because the size difference between the two bands is small relative to the total mass of the assembled bivalent antibody, resulting in poorer separation. This apparent heterogeneity of size was seen for all constructs: scFvs (28 kDa) and scFv-rFcs (55 kDa) (Figure 2). To investigate this issue, we exploited the fact that the pMJ-LEXSY-rFc vector (Figure 1) enables the removal of the His-tag and the c-Myc-tag by TEV protease digestion. By removing the C-terminal tags with TEV protease, it was possible to assess whether the source of size heterogeneity lay in the tag region. SDS-PAGE analysis of LOB7-rFc digested with TEV protease showed that one band appeared on the gel, as opposed to two bands for the non-digested sample, confirming that the apparent heterogeneity of size was due to unpredicted modifications of the tag region (Additional file 2: Figure S1). Furthermore, western blot analysis and mass spectrometry of the Y4A-scFv strongly suggest that the heterogeneity of size arises from a truncation of the c-Myc tag in the lower molecular-weight protein (Additional file 2: Figures S2-S4). The truncation of the c-Myc tag has, however, no obvious influence on the intended application of the scFv-rFcs in IHC and ICC using secondary antibodies targeting the rFc region. Functionality To assess the binding activity of the scFv-rFcs, ELISAs were performed (Figure 3) with A10-rFc and LOB7-rFc against their respective antigens (mouse laminin and human vimentin). The antigens were coated on ELISA plates, targeted by the scFv-rFcs, and detected using polyclonal swine anti-rabbit immunoglobulin HRP. The rFc constructs show specific binding to their cognate antigens. Furthermore, the scFv-rFcs were detectable using HRP-conjugated anti-rabbit immunoglobulin. The detection with the secondary antibody shows that the rFc regions were functionally intact, in line with the fact that the antibodies can be purified using Protein A affinity chromatography [33].
On this basis we conclude that our system is suitable for the production of rFc-fused antibody fragments with uncompromised folding. (Figure 1 legend, partial: ...a His-tag (yellow), and a c-Myc-tag (pink) encoding region. The bleomycin resistance gene is fused with a cherry gene, hence designated BleCherry. This combination allows for selection of recombinants with bleomycin and subsequent screening of the best-expressing clones by monitoring the cherry fluorescence. The PciI restriction site is used for linearization and is placed in a telomere region (LTE + RTE), which upon transformation stabilises the linear episome. Finally, the vector holds a tandem T7 transcription terminator (2xT7), a bacterial origin of replication (pBR322 ori), and the Bla ampicillin resistance marker.) To further establish the capacity of this system for production of antibodies intended for immunochemistry, LOB7-rFc was used for immunocytochemical staining of fixed and permeabilised human adult skin fibroblasts (ASF-2) [34]. A commercial mouse monoclonal antibody (V9) directed against vimentin was included as a benchmark for detection fidelity (Figure 4). Both antibodies target vimentin, a cytoplasmic intermediate filament. As can be seen, LOB7-rFc and the commercial V9 antibody produce similar labelling patterns when binding to vimentin. Conclusions Antibody fragments fused to the Fc-region of IgG have previously been produced in several organisms [35-39], but to our knowledge this is the first time such constructs have been expressed in L. tarentolae. The L. tarentolae system is capable of performing both N-glycosylation and O-glycosylation. Breitling et al. reported the production of biologically active, biantennary, homogeneously N-glycosylated hEPO in L. tarentolae [23]. The N-glycosylations performed in L. tarentolae resemble those of higher mammals more closely than those performed in yeast and insect cells. Moreover, studies by Klatt et al. [40] have shown that ssAlpha expressed in L.
tarentolae is O-glycosylated at the same sites as in mammalian cells and furthermore displays an increased resistance to degradation compared to ssAlpha expressed in E. coli [40]. At the same time, L. tarentolae expression cultures can be handled under standard laboratory conditions, similar to the conditions used for propagation of E. coli cultures. For the production of antibody fragments resulting from high-throughput screening, the system we describe here presents substantial benefits: Firstly, the system can provide a rescue avenue for antibody fragments that express poorly in E. coli. Secondly, the system can provide recombinant antibodies in a bivalent format incorporating a native Fc-region; a format ideally suited for use in immunostaining of cells and tissues. Vector construction All restriction enzymes, polymerases, and chemicals used for cloning and PCR were purchased from Fermentas (Thermo Scientific Molecular Biology). Oligonucleotides used are listed in Table 1, and were purchased from Sigma Aldrich. To make pLEXSY_IE-blecherry4 applicable for NcoI/NotI cloning of scFv antibodies for secretion, an expression cassette from the vector pF4NAF was cloned into pLEXSY_IE-blecherry4 via BglII and MluI. The pF4NAF expression cassette had been constructed previously using components of two vectors: pIT2 [31] and pF4SPImsapX1.4sat (Jena Bioscience; Cat. No. EGE-211). In this previous work, the vector pF4SPImsapX1.4sat (an early version of pLEXSY-sat2, Jena Bioscience; Cat. No. EGE-234) was modified by deleting the NcoI site using primer Z and primer X, by PCR amplification of the region between BglII and NotI. The PCR product, now deleted for NcoI, was subsequently digested with BglII and NotI and re-inserted into pF4SPImsapX1.4sat. Next, the expression cassette from the vector pIT2 was PCR amplified using primer A and primer B, followed by digestion of the PCR product with KasI (5′ digestion) and Bsp120I (3′ digestion).
Bsp120I produces overhangs compatible with NotI. The PCR-amplified expression cassette of pIT2 was then inserted into pF4SPImsapX1.4sat via KasI and NotI, hereby creating the pF4NAF cassette. In the ligation of the Bsp120I overhang to the NotI overhang, the recognition sequence for both was destroyed. This cassette was inserted into pLEXSY_IE-blecherry4 via BglII and MluI, resulting in pMJ-LEXSY. The ligated DNA was used to transform electrocompetent XL1-Blue, and the cells were plated on TYE agar plates containing 100 μg/mL ampicillin (Sigma Aldrich). The plates were incubated overnight at 30°C and colonies were picked for sequence analysis. To construct the rFc-fusion vector pMJ-LEXSY-rFc, the Fc-encoding region of rabbit IgG (hinge region and CH2-CH3) was amplified by two rounds of PCR amplification using the vector pFUSE-rIgG-Fc2 as template [18]. (Figure 2 legend: Antibodies expressed and purified from T7-TR. Coomassie, from the left: molecular weight marker (SeeBlue Plus2; Invitrogen), A10 scFv, LOB7 scFv, Y4A scFv, A10 scFv-rFc, and LOB7 scFv-rFc. Western blot, from the left: LOB7-rFc reduced (r) and LOB7-rFc non-reduced (n-r). All antibodies seen in the Coomassie stain were separated by SDS under reducing conditions.) The two PCR amplifications were performed applying primer pairs c/d and c/e, respectively. The rFc-encoding region was prepared for cloning using sticky-end PCR as earlier described [30] and inserted into the NotI site of pMJ-LEXSY. The ligations were electroporated into XL1-Blue and plated on agar plates containing 100 μg/mL ampicillin. Colony PCR was used to identify positive clones, and the correct orientation of DNA inserts was verified by sequencing (Eurofins MWG). Cultivation and transformation XL1-Blue Before transformation, XL1-Blue cells were made electrocompetent roughly as described for Pseudomonas putida in [41], but without adding sucrose to the storage medium. Batches of competent E. coli were frozen in liquid nitrogen and stored at −80°C.
Electroporation was carried out in 2 mm pre-chilled cuvettes holding 20-30 ng of vector and 50 μL of cells. The cells were pulsed at 2500 V using an Electroporator 2510 (Eppendorf) and plated on TYE agar plates with 100 μg/mL ampicillin. For plasmid propagation, XL1-Blue were grown at 30°C and 200 rpm in baffled Erlenmeyer flasks containing PDM medium [42] with 100 μg/mL ampicillin. For transformation, 10 μg of linearized DNA in 50 μL of water was incubated with 350 μL of densely suspended Leishmania tarentolae T7-TR (OD 600 > 2). Electroporation was carried out in pre-chilled 2 mm cuvettes. Cells were pulsed in a GENEPULSER Xcell (BIORAD) at 450 V and 450 μF, obtaining pulse times in the range of 5-6.5 ms. The cells were kept on ice for exactly 10 minutes after electroporation. Immediately thereafter, the cells were transferred to a T25 flask and grown for 20 hours in 10 mL of non-selective LEXSY BHI medium. Cells were subsequently gently dispensed onto selective LEXSY BHI agar plates and grown for 10 days at 26°C in the dark. The plates for clonal selection contained 100 μg/mL Zeocin (Invitrogen). All colonies visible after 10 days were picked from the plates, and each clone was then cultivated in selective LEXSY BHI medium for one day in a single well of a 24-well plate. This cultivation was performed in 1 mL selective LEXSY BHI medium with 100 μg/mL Zeocin (Invitrogen). The cell density and condition were assessed under the microscope, and clones with low growth were cultivated for a further 1-2 days before they were transferred to a larger volume. Clones exhibiting acceptable motility, cell shape, and growth were transferred to 5 mL selective LEXSY BHI medium and grown to OD 1.4. Assessment of expression levels for each clone was conducted in 96-well plates by inducing expression with 100 μg/mL tetracycline. The cherry fluorescence of each clone was measured at 584 nm excitation/612 nm emission in a POLARstar OPTIMA fluorimeter (BMG Labtech).
Expression and purification Clones displaying the highest level of cherry fluorescence were chosen for further work. Expression was carried out in 100 mL LEXSY BHI medium containing 100 μg/mL tetracycline (Sigma Aldrich) and 100 μg/mL Zeocin (Invitrogen). Transformed L. tarentolae T7-TR (OD 600 1.4-2) were inoculated 1:10 into 100 mL cultures. The culturing was performed in the dark at 26°C for 72 hours in 250 mL baffled Erlenmeyer flasks with agitation (120 rpm). The cultures were centrifuged at 2700 × g for 30 min to pellet the cells, and the proteins present in the supernatant were precipitated using 30% (m/v) ammonium sulphate. The precipitated proteins were then pelleted at 5250 × g for 45 min and re-suspended in 0.5 × PBS. Finally, affinity purification using a Protein A HP SpinTrap column (GE Healthcare) was used for recovery of the scFv-rFc antibodies. The scFv antibodies were purified using a Maxwell 16 instrument in combination with a Maxwell 16 Polyhistidine Protein Purification Kit (Promega). Protein concentrations were estimated by absorption at 280 nm using a NanoDrop 1000 instrument (Thermo Scientific), applying the protein-specific molecular weights and molar extinction coefficients. A repeated and up-scaled expression of LOB7-rFc was performed as described above, but this time as five 80 mL cultures incubated for 72 hours in 250 mL baffled Erlenmeyer flasks. Supernatants from all cultures were pooled before the proteins were precipitated. The ammonium sulphate-precipitated proteins were then resuspended in 20 mM sodium phosphate and purified using a 1 mL HiTrap Protein A HP column. Protein A purification was carried out as outlined by the manufacturer (GE Healthcare). Functionality ELISA Maxisorp 96-well flat-bottom plates (Nunc) were coated with 30 μL of the relevant antigens at 20 μg/mL (in 2% BSA-PBS) for both vimentin and laminin. The protein was adsorbed to the plates during storage at 4°C overnight.
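The A280-based protein quantification described above (NanoDrop reading interpreted via protein-specific molar extinction coefficients and molecular weights) is a direct application of the Beer-Lambert law. A minimal sketch of the arithmetic; the example extinction coefficient and molecular weight are illustrative assumptions, not values from this work:

```python
def protein_conc_mg_per_ml(a280, molar_ext_coeff, mol_weight_da, path_cm=1.0):
    """Estimate protein concentration from absorbance at 280 nm.

    Beer-Lambert: A = epsilon * c * l, so the molar concentration is
    c = A / (epsilon * l) in mol/L; multiplying by the molecular weight
    (Da, i.e. g/mol) converts to g/L, which equals mg/mL.
    """
    molar_conc = a280 / (molar_ext_coeff * path_cm)  # mol/L
    return molar_conc * mol_weight_da                # mg/mL

# Hypothetical values for a ~55 kDa scFv-rFc monomer (not from the article):
estimate = protein_conc_mg_per_ml(a280=0.8, molar_ext_coeff=80000, mol_weight_da=55000)
# estimate ≈ 0.55 mg/mL
```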
Each well was washed three times with 200 μL PBS using a multi-channel pipette before the plates were blocked with 300 μL 2% BSA in PBS (BSA-PBS) for 1 hour at room temperature under gentle agitation. The washing step was then repeated, followed by the addition of serial dilutions of scFv-rFcs in 100 μL 2% BSA-PBS. The antibodies were incubated in the antigen-coated wells for 1.5 hours at room temperature, and subsequently the plates were washed three times with PBS, each time for 5 min. Detection of bound scFv-rFcs was performed by incubating the plates with polyclonal swine anti-rabbit immunoglobulins (Dako) at a 1:2000 dilution in 2% BSA-PBS for 1 hour. Finally, the plates were washed three times with PBS, each time for 5 minutes. The amount of bound antibody was visualised by the addition of 80 μL TMB single solution (Sigma Aldrich), and the colour reaction was terminated after 7 minutes by the addition of 50 μL 1 M H2SO4 to each well. Absorbance was measured at 450 nm, corrected at 655 nm, in a microplate reader (Model 550; Bio-Rad). Eukaryotic cell handling and Immunocytochemistry ASF-2 cells were grown in DMEM (Lonza) with 10% fetal bovine serum (Thermo Scientific) and 100 U/mL penicillin and streptomycin (Lonza) at 37°C, 5% CO2 and 95% humidity. ASF-2 cells of passage 10 were detached from a tissue culture flask by trypsination with Trypsin-EDTA (Lonza) and spun down at 600 × g for 6 minutes. Cells were then resuspended in growth medium, and 15,000 cells/well were seeded in an ibiTreat μ-Slide VI 0.4 (Ibidi) and grown overnight. The next day the cells were rinsed with PBS, fixed in 4% PFA for 15 min, permeabilised with 0.025% Triton X-100 (Sigma-Aldrich) for 10 min, and blocked with 2% BSA in PBS (BSA-PBS) for 1 hour at room temperature. The cells were incubated with 5 μg of purified LOB7 in 2% BSA-PBS or 100 μL of a 1:100 dilution of V9 antibody (Sigma Aldrich) in 2% BSA-PBS per well for 1 hour at room temperature.
Visualisation of LOB7 was accomplished by incubation with a 1:100 dilution of Goat-anti-Rabbit Alexa Fluor 488 (Invitrogen, USA). V9 was visualised with a 1:100 dilution of Goat-anti-Mouse Alexa Fluor 546 (Invitrogen). Cell nuclei were stained with Vectashield Mounting Medium with DAPI (Vector Labs). Fluorescent images were obtained with a Leica DMI3000 B inverted microscope (Leica Microsystems). Additional files Additional file 1: SDS-PAGE showing reduced and non-reduced rFc constructs. Additional file 2: Figure S1. SDS-PAGE analysis of TEV protease-digested LOB7-rFc. To assess whether the heterogeneity correlated with modifications of the C-terminal tag region, we digested LOB7-rFc with TEV protease. (1) Non-digested LOB7-rFc showing two bands; (2) LOB7-rFc digested with TEV protease showing one band. Therefore, the size heterogeneity resides in the C-terminal tag region. Figure S2. Western blot analysis of Y4A-scFv. (A) Western blot analysis of 0.018 μg and 0.09 μg Y4A-scFv using an anti-His antibody. Two bands appeared after exposure for 1 min and 10 sec, respectively. (B) Western blot analysis of 0.45 μg Y4A-scFv using an anti-c-Myc antibody. The films were exposed for 1 min and 10 sec, displaying only one band. (C) Coomassie stain of the Y4A-scFv. Figure S3. Verification of degradation of the c-Myc tag by mass spectrometry (MS). The bands detected by SDS-PAGE were subjected to in-gel digestion using Lys-C, and the peptides were subsequently analysed by MALDI-TOF mass spectrometry. Ions in the range m/z 1500-2500 are shown. An ion of m/z 2221.28 was detected in the upper band, whereas an ion of m/z 1693.91 was detected in the lower band. The mass difference (~500 Da) correlates with the mass difference observed by SDS-PAGE. Figure S4. MSMS analysis of the heterogeneous antibody. To evaluate the identity of the ions detected, we subjected them to MSMS analysis.
(A) The analysis of the ion detected in the upper band produced fragment ions corresponding to the peptide Leu239-Lys258, encompassing the His-tag and three amino acid residues of the c-Myc tag. The C-terminal Lys258 indicates that this peptide was generated by Lys-C cleavage. (B) The ion of m/z 1693.91 was found to represent Leu239-Gly253. It is thus likely that the C-terminal Gly253 represents the C-terminus of the mature protein excised from the gel (lower band).
Systematic Review and Critical Appraisal of Cauda Equina Syndrome Management During Pregnancy Cauda equina syndrome during pregnancy represents a rare entity, with data regarding optimal treatment being very scarce in the pertinent literature. Given the scarcity of current evidence on the topic, this study conducts a systematic review and analysis of existing literature concerning cauda equina syndrome (CES) management in pregnant women. A comprehensive search was performed across multiple databases, yielding 26 level IV peer-reviewed articles that met the inclusion criteria. These studies collectively encompassed 30 pregnant patients with CES, with a mean age of 31.2 years and an average gestational age of 26 weeks. Disc herniation emerged as the primary cause in 73% of cases. Regarding surgical interventions, the prone position was utilised in 70% of cases, with 73% receiving general anaesthesia. Notably, third-trimester spinal surgeries exhibited a higher complete recovery rate compared to earlier trimesters. Minimally invasive spinal surgery demonstrated superior outcomes in terms of complete recovery and reduced risk of persistent post-operative symptoms when compared to open approaches. Moreover, patients undergoing caesarean section (CS) after spinal surgery reported higher rates of symptom resolution and lower symptom persistence compared to those with CS before spinal surgery or vaginal delivery post-spinal surgery. Despite these findings, the overall evidence base remains limited, precluding definitive conclusions. Consequently, the study underscores the importance of multidisciplinary team discussions to formulate optimal treatment strategies for pregnant individuals presenting with CES. This highlights a critical need for further research to expand the knowledge base and improve the guidance available for managing CES in pregnant populations.
Introduction and Background Cauda equina syndrome (CES) is a spinal surgical emergency, most often attributable to compression of the cauda equina roots and disruption of neural signal transmission. It is characterised by a variable constellation of symptoms and signs, including severe low back pain (LBP), radiculopathy, reduced reflexes, saddle anaesthesia, urinary and/or bowel incontinence, and sexual dysfunction [1,2]. A high index of suspicion is essential in diagnosing CES during pregnancy, as many of the common symptoms of pregnancy itself, including urinary dysfunction and back pain, may mimic spinal conditions [3]. CES is rare, with an incidence of 1-3 per 100,000 people [4], accounting for 1-2% of those undergoing surgery for lumbar disc herniation (LDH) [4], and is even rarer in pregnancy. It is postulated that hormonal changes, in particular serum relaxin, a hormone that regulates collagen and softens the ligaments of the pelvis in preparation for parturition, may predispose to disc herniation during this time [5]. LDH is the most common cause of CES, with other causes including spinal lesions or tumours, lumbar spinal stenosis, spinal infections, lower back trauma, spinal arteriovenous malformations, spinal haemorrhage, spinal ischaemic insults, and post-operative spinal surgery complications [6]. Opting for surgical intervention during pregnancy must always balance the risks and benefits of treatment for both the mother and foetus. Hence, clinical decision-making must be meticulous and ideally made by a multidisciplinary team (MDT) of anaesthetists, neonatologists, obstetricians, and surgeons [7]. Patient positioning, anaesthesia, foetal monitoring, plans for urgent delivery, and maternal blood pressure monitoring may each have a detrimental effect on the outcome and must be carefully weighed [8].
There is limited evidence available on the optimal management options for CES in pregnancy. Most of the evidence derives from case reports and series, and no randomised controlled data are available, resulting in contradictory recommendations. With the above in mind, the overarching goal of this study was to systematically review the current evidence for the management of CES in pregnancy and analyse the available literature. To our knowledge, this is the first systematic review focusing on the management of CES during pregnancy. This article was previously presented as an oral presentation at the 2023 British Association of Spinal Surgeons (BASS) annual scientific meeting on April 20, 2023. Review Materials and methods The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed [9]. A literature search was conducted using five databases (Medline, Embase, PubMed, Cochrane Library, and Science Direct) from 1946 to February 2022. The inclusion of studies from such an extended timeframe is justified by the very limited availability of data on the topic. The search strategy used a combination of keywords and Medical Subject Headings, as shown in Table 1. TABLE 1: Search Strategy All studies that evaluated any intervention for treating CES during pregnancy were eligible for inclusion. In light of the limited clinical evidence, case reports and case series were included. The methodology used was in line with the recommendations of the Cochrane Back Review Group [10].
Inclusion and Exclusion Criteria We included studies reporting CES during pregnancy, including symptoms, radiographic findings, management details, and outcomes related to surgical intervention and foetal outcomes. We excluded articles in languages other than English, those focused on the prevention of CES rather than its treatment, and those in which interventions were started before pregnancy but symptoms were measured during pregnancy. Furthermore, we excluded articles with participants whose symptom onset occurred during labour, as well as conference abstracts and reviews (due to data duplication). Identification of Studies Studies identified by the search strategy were assessed for inclusion. Duplicates were removed, and the remaining studies were screened using their titles, abstracts, and full texts. The process was performed independently by two researchers, and in cases of disagreement, the studies were discussed with the supervising researcher to reach a consensus. Quality Assessment A quality assessment was carried out using the Joanna Briggs Institute Critical Appraisal Checklist for Case Reports [11]. This consists of eight questions accounting for clear documentation of the patient's demographics, history, presentation of the clinical condition, the diagnostic tests used and their results, the intervention used, adverse events, and the clinical outcome. Data Extraction A narrative synthesis of the findings from the included studies was conducted due to the heterogeneity of the studies and their results, including differences in design, outcome, and measure of effect. Hence, the information was pooled into subgroups for analysis. The type of surgical intervention, types of anaesthetics, and patient positioning during surgery were included. All studies had clinical and functional outcome measures evaluated at least once following intervention during pregnancy.
Primary Outcomes Primary outcomes included pain intensity, neurological deficits, and sensory changes or loss of motor function. Participants' Demographics From the included 26 studies, data were extracted from 30 pregnant women and used in this review. The participants' ages ranged from 20 to 41 years, with a mean of 31.2 years (SD=5.2). The mean age of patients in their first trimester (n=2) was 38 years (SD=1.4). The second trimester (n=15) had a mean age of 31 years (SD=5.2), and the third trimester (n=13) participants were 30.3 years old (SD=4.9). The included patients' gestational ages ranged from 11 to 36 weeks, with a mean of 26 weeks (SD=7.3). The mean gestational age for each trimester was: first trimester, 11.5 weeks (SD=0.7); second trimester, 22 weeks (SD=3.9); and third trimester, 33 weeks (SD=2.5). The proportion of included patients with single pregnancies was 90% (n=27), twin gestation accounted for 6.7% (n=2), and one patient had a triplet pregnancy. The proportion of patients who had been pregnant only once was 40% (n=12), and those pregnant between two and five times accounted for 36.7% (n=11); only one patient had more than five previous pregnancies, while no information was obtained for six patients. Significant past medical history was obtained from 14 patients; 10 had chronic lower back pain, of which two had confirmed degenerative spine disease. Three patients had a high BMI, with chronic lower back pain in two of these patients. One patient had previous spine surgery, and one was diabetic. As stated above, the most common cause of CES in pregnancy was disc herniation (73.3%, n=22). There were three cases each of spinal canal stenosis and epidural venous engorgement. One case each of cavernoma and hemangioblastoma was identified (Figure 2). Single-spinal-level pathology was identified in 73.3% (n=22), with the L5-S1 level (46.6%, n=14) being the most common culprit.
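As a consistency check, the overall means quoted above can be recovered as size-weighted means of the per-trimester subgroup means. A small sketch using only the subgroup sizes and means reported in this paragraph:

```python
def pooled_mean(counts, means):
    """Weighted (pooled) mean of subgroup means, weighted by subgroup size."""
    return sum(n * m for n, m in zip(counts, means)) / sum(counts)

# Per-trimester subgroup sizes (n=2, 15, 13) with their mean ages and
# mean gestational ages, as reported in the review:
mean_age = pooled_mean([2, 15, 13], [38.0, 31.0, 30.3])        # ≈ 31.2 years
mean_gestation = pooled_mean([2, 15, 13], [11.5, 22.0, 33.0])  # ≈ 26 weeks
```

Both pooled values reproduce the overall cohort figures reported in the text, confirming the subgroup breakdown is internally consistent.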
Intervention MDT discussions involving at least an obstetrician, neonatologist, anaesthetist, and spinal surgeon were undertaken in 73.3% (n=22) of the cases. No information about MDT involvement was given for 26.7% (n=8). Spinal surgical intervention was undertaken to relieve the CES symptoms in 96.7% (n=29) of patients. The only case not selected for spinal surgery was diagnosed with epidural venous engorgement secondary to increased intra-abdominal pressure, with symptoms improving following a caesarean section (CS). Most spinal surgeries were via the open approach (73.3%, n=22), and the others were through minimally invasive/endoscopic approaches (23.3%, n=7). Decompressive surgery was carried out within 48 hours in 83.3% (n=25) of cases, while 16.7% (n=5) had surgery outside the 48-hour window. Most of the patients (70%, n=21) were operated on in a prone position, while two patients were positioned in the left and right lateral positions, respectively. General anaesthesia (GA) was provided in 73.3% (n=22), while spinal anaesthesia was used in 13.4% (n=4). The mean length of surgery was 153 minutes (SD=59), ranging from 60 to 240 minutes. In 53.3% (n=16) of patients, it was decided to proceed with a CS after spinal surgery, while two cases had vaginal delivery after spinal surgery. Six patients had CS before their spinal surgery, and no patient had a vaginal delivery prior to spinal intervention (Figure 3). The mode of delivery was not specified in six of the cases. The average time to delivery after spinal surgery was 8.7 weeks (SD=8.6), ranging from 0 to 28 weeks.
Foetal Monitoring Intra-operative foetal monitoring was used in 23.3% (n=7), and two studies reported monitoring the foetus in both the pre- and post-operative periods. In two cases, the foetus was assessed only post-operatively, and in one case, only pre-operatively. No information on foetal monitoring was provided in 17 studies. In terms of monitoring modality, foetal Doppler ultrasound was used for foetal assessment in 13.3% (n=4) of the cases. Foetal heart rate (FHR) monitoring was used in three cases, while others performed either foetal ultrasound imaging (6.7%, n=2) or a cardiotocogram (3.3%, n=1). Follow-Up and Bias There was adequate follow-up (greater than six weeks) in 19/30 patients (63%). The included studies were all either case reports or case series, which increased their risk of bias and limited their methodological quality. A summary assessing the risk of bias for individual studies was constructed to depict the measurable influences (Figures 4-5). Measurement of Exposure and Outcome Sufficient information was provided concerning the intervention used and the maternal and infant outcomes in 22/30 (73%). Seven of 30 patients (23%) had some missing data, and 1/30 (4%) had insufficient data.
Outcome All studies in this review included information on the outcome of managing CES in pregnancy (Figure 6 and Table 2). The outcomes were analysed based on the trimester of spinal surgery. Among the cohort, two patients had spinal surgery in their first trimester. Their surgeries were performed within 48 hours of symptom onset, and both had persisting symptoms at follow-up. In cases operated on in the second trimester, 80% (n=12) were operated on within 48 hours of symptom onset; the others were operated on later. They reported complete resolution of symptoms in 33.3% (n=5) and persistence of weakness, hypoesthesia, paraesthesia, or bowel or bladder symptoms in 66.7% (n=10). Among the patients operated on during the third trimester, 85% (n=11) had their operation within 48 hours, while 15% (n=2) were delayed (>48 hours). Five cases (41.7%) reported full resolution of symptoms, whereas symptoms persisting at follow-up were seen in 58.3% (n=7). Comparing the surgical approaches, minimally invasive surgery (MIS) was associated with a better chance of full resolution of symptoms (50% vs. 23%) and a lower risk of persistent symptoms (50% vs. 77%) compared to the open approach; however, the proportion of surgeries performed within 48 hours was similar for open and MIS, at 80% in both arms. Patients who had their spinal surgery within 48 hours did better in terms of complete resolution of symptoms (32% vs. 20%) and had fewer persistent symptoms (68% vs. 80%) than those in the late group (>48 hours). Outcome of infant Participants who had CS after spinal surgery reported a higher rate of full resolution of symptoms (44% vs. 34% vs. 0%) and fewer chances of developing persistent symptoms (56% vs. 66% vs. 100%) compared to those who had CS before spinal surgery and vaginal delivery after spinal surgery, respectively. One patient who had a vaginal delivery after spinal surgery re-herniated four days post-delivery and required re-operation.
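The per-trimester outcome proportions above reduce to simple percentage calculations over the reported counts. A brief sketch, using the denominators as reported in the text:

```python
def pct(part, whole):
    """Proportion as a percentage, rounded to one decimal place."""
    return round(100.0 * part / whole, 1)

# Full-resolution counts over the denominators quoted in the text:
second_trimester = pct(5, 15)  # 33.3% full resolution in the second trimester
third_trimester = pct(5, 12)   # 41.7% full resolution in the third trimester
```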
Overall, 79% (n=21) of infants were healthy, with the first and third trimesters recording 100% healthy infants and the second trimester 66.7% (n=10); one patient miscarried after spinal surgery, and no information on foetal outcome was provided in four cases. Patients placed in the right and left lateral positions during spinal surgery had a 100% successful birth rate with no mother- or infant-related complications. For 20% of the patients placed prone, no information on the foetal outcome was available, but 75% (n=15) had healthy infants and healthy mothers, with only one miscarriage, during the second trimester of pregnancy. There was no maternal mortality recorded in any of the studies.

Discussion

The demographics, management, and outcomes of 26 case reports or case series involving 30 patients were collated and compared through a systematic review. Overall, patients operated on during the third trimester, within 48 hours of presentation, who underwent MIS and delivery by CS post-operatively had higher rates of complete resolution of symptoms at follow-up compared with their counterparts. There were no statistically significant differences in maternal or foetal outcomes based on the patient's positioning during spinal surgery. Current evidence corroborates the theory that non-obstetric surgery is safe in experienced hands for both mother and foetus [7,12,13].

Clinical Implications and Repercussions

The management of CES during pregnancy should ideally be planned by an experienced multidisciplinary team (MDT), comprising at least an obstetrician, neonatologist, anaesthetist, and spinal surgeon. In terms of imaging modality, MRI is not contraindicated in pregnancy and can be safely used to investigate CES [5,14-16].
Controversies exist about the timing of surgical decompression, but several studies and meta-analyses report better prognoses for patients who undergo surgery within 48 hours of symptom onset [17,18]. In our review, the maternal and foetal outcomes were better when spinal surgery was performed within 48 hours.

There is a paucity of literature with regard to ideal positioning. It is well established that spinal surgery in gestational patients can be performed in both prone and lateral positions [13]. A prone patient allows better access; however, the lateral position may prevent abdominal compression [13]. The prone position is not recommended beyond 12 weeks of gestation [13,19] without the use of a Relton-Hall laminectomy frame, as it can cause abdominal compression, inciting preterm labour [14,20,21].

Pregnancy is not a contraindication to general or regional anaesthesia [5,14,15,22]. Regional anaesthesia is recommended for shorter operations [23]. General anaesthesia (GA) should be used cautiously in the first trimester due to the increased risk of spontaneous abortion [24]. The American College of Obstetricians and Gynaecologists (ACOG) recommends regional anaesthesia for surgery in pregnancy when possible [25].
The options for surgical management of CES range from traditional open procedures to minimally invasive spinal surgery techniques, depending on aetiology. Spine surgeons consider microdiscectomy the "gold standard" technique for lumbar discectomy, as it offers shorter hospital stays and fewer complications [26]. In our review, owing to the small numbers and heterogeneity of cases, we could not establish a reliable superiority of one approach over the other, as the surgical outcome is governed by other factors, such as the timing of surgery, whether the CES is complete or incomplete, and the degree of decompression, which were not fully accounted for by the reviewed articles. In this review, bipolar electrocautery was used to manage epidural venous engorgement causing CES. In the analysed cohort, the only case of a spinal tumour, a cavernoma, underwent a successful CS before tumour resection and haematoma evacuation at 26 weeks' gestation. However, the infant died within five days of birth due to ileus and pulmonary insufficiency.

The ACOG recommends FHR measurement using Doppler ultrasound before and after any surgical intervention, regardless of gestational age, with the addition of contraction monitoring in the viable foetus [25]. The Royal College of Obstetricians and Gynaecologists (RCOG) recommends that continuous monitoring is not needed when a woman is healthy and has no significant history of obstetric complications, but it is recommended if any indication of foetal compromise is present [27]. The ACOG further recommends that an MDT determine the use of intraoperative foetal monitoring based on each patient and the surgery to be performed [25].
There is controversy regarding the optimal delivery route for patients who do not undergo a CS during spinal decompression surgery. Some physicians recommend a CS to prevent further spinal complications, such as recurrence due to increased intra-abdominal pressure [28-30]. However, among women with vaginal deliveries, there is no report of an increased rate of persistent neurological symptoms [29]. Brown and Brookfield postulated that labour induction before treatment for LDH can cause increased neurological injury due to increased epidural venous pressure during labour [20]. In our review, women who had CS after spinal surgery had a higher rate of symptom resolution and fewer persistent symptoms than the CS-before-spinal-surgery and vaginal-birth-after-spinal-surgery groups. Higher-quality evidence, MDT-based decisions, and individualised approaches should guide the delivery route. The included studies are summarised in Table 3.

Limitations

This systematic review incorporates level IV evidence derived from a small number of studies and a limited cohort of patients (26 studies, 30 patients). The studies had poor methodological quality, a high rate of bias, a lack of randomisation, and incomplete outcomes. Selection bias was a significant confounder, often preventing reliable conclusions about patient positioning, mode of delivery, and open vs. minimally invasive spinal surgery.

The reports included women at various stages of pregnancy, with varying symptoms and interventions, using variable quantitative and qualitative data to measure the interventions' effects, so the data are not directly comparable. Follow-up duration was not reported in 11 studies, causing a high risk of attrition bias. Finally, limited reporting of patients' baseline function prevented pre- and post-treatment comparisons.
Conclusions

This study offers the first systematic analysis of the literature regarding the management of CES during pregnancy. The management of pregnant women with confirmed CES should involve a multidisciplinary team to devise the most effective treatment strategy. The lack of universal guidelines for managing these patients often results in delayed diagnosis and treatment, increasing the risk of chronic neurological complications. Early identification of the pathology is crucial to achieving better outcomes. Due to the limited available literature, it is possible to offer recommendations but not to draw definitive conclusions about the optimal management of CES in pregnant women.

FIGURE 1: PRISMA flowchart illustrating the included and excluded studies. PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

FIGURE 4: Risk of bias graph: review authors' judgements about each risk of bias item presented as percentages across all included studies

FIGURE 5: Risk of bias summary showing review authors' judgements about each risk of bias item for each included study
Geochemistry of the Cheremkhovo and Lower Prisayan Formations from the Jurassic Irkutsk Coal-Bearing Basin: Evidence for Provenance and Climate Change in Pliensbachian–Toarcian

The Cheremkhovo formation (Pliensbachian) is the primary coal-bearing formation of the Irkutsk basin, Eastern Siberia. Still, few geochemical studies of the Jurassic sediments of the Irkutsk coal-bearing basin have been conducted, and there are no data on the geochemistry of the coal-bearing formation itself. This study presents geochemical data for 68 samples from the Cheremkhovo formation and the overlying Lower Prisayan formation. The age of the former has been estimated by U-Pb dating of zircon from a tonstein (altered volcanic ash) layer as Pliensbachian, whereas the age of the latter is estimated as Pliensbachian–Toarcian according to regional stratigraphy. Major oxide and trace element concentrations were obtained using X-ray fluorescence spectrometry. Geochemical indicators showed diversity between the two studied formations. The indicators used show the change in climate conditions, from warm and humid in the Cheremkhovo formation to hot and arid during the deposition of the lower Prisayan formation. The provenance of the Irkutsk coal-bearing basin was mainly influenced by the source composition, not recycling, and sediments were mainly derived from felsic to intermediate igneous rocks with a mixture of other rock types.

Introduction

The Irkutsk coal-bearing basin is located in the southern part of the Siberian Craton (Figure 1). A large fuel and energy complex has been created in the region; about 15 million tons of coal are mined annually. On the Russian scale, it provides 6% of coal production [1]. On the global scale, the Toarcian is marked by an ocean anoxic event (Toarcian-OAE) [19], which in turn likely resulted from Karoo-Ferrar flood basalt volcanism [20,21]. To see the probable changes in the volcanic-related environmental conditions, we provide a detailed geochemical study of sediments of the Cheremkhovo Fm. and Lower Prisayan Fm., i.e., those formations which, on the one hand, were formed before, at, and after the Toarcian-OAE, but, on the other hand, were not yet studied.

Geological Setting

The Jurassic stratigraphy of the Irkutsk basin is compiled in Figure 2, using data from geological surveys [22,23] and Skoblo et al. [3]. The base of the Jurassic sediments is represented by the 30 m- to 240 m-thick Cheremkhovo Fm. It is overlain by the 135 m- to 350 m-thick Prisayan Fm. The Jurassic section is completed by the up to 145 m-thick Kuda Fm. The alluvial, alluvial-deluvial, floodplain, and lacustrine-boggy facies are distinguished in the basin sediments [7,8].

The Lower Cheremkhovo Formation

The lower Cheremkhovo Fm. is mostly coarse-grained and composed of sandstones, gravelstones, conglomerates, and thin layers of siltstones, mudstones, and coal. The subformation thickness is from 100-130 m to 30 m, up to full thinning [24]. The lower Cheremkhovo Fm. records the transitions from widespread coarse-grained facies to clay-brecciated and sand facies [3,7,8,25]. Facies depend on sedimentation conditions, the buried morphology, and provenance. The clay-brecciated facies is not more than 20 m thick [23]. The average thickness of the sand facies is 10-16 m, up to 30 m [23]. Both of these facies are limited within the Irkutsk basin. The widespread coarse-grained facies building up the basal part of the Jurassic section extends over almost the entire area of the basin. Samples were collected in order to cover the widespread coarse-grained facies of the lower Cheremkhovo Fm.

The lower Cheremkhovo Fm. is exposed in the area of Zalari town (53°34′09.14″ N, 102°32′37.74″ E) (Figure 3). The base of the section unconformably overlies the Cambrian rocks. The section consists of well-sorted pebble to boulder conglomerates. The conglomerates are interlayered with beds of well-sorted sandstones, varying from fine-grained to gravel. The composition of the pebbles is dominated by quartzites and quartz, with granitoids. Pebbles are characterized by a good degree of roundness. The polymict sandstones consist of badly sorted minerals (quartz, calcite, plagioclase, and feldspar) and rock pieces (quartzite, gneisses, and granite). Clast roundness is from medium to good. The matrix is of clay and iron oxide particles intermixed with the clay. The siltstone composition is close to that of sandstone.

The lower Cheremkhovo Fm. is also exposed in the area of Kutulik village (53°22′02.14″ N, 102°47′02.30″ E) (Figure 4). The deposits are composed of sandstones and siltstones. The polymict and feldspar-quartz sandstones consist of poorly sorted mineral grains (quartz, feldspar, and plagioclase) and rock pieces (quartzite, felsic volcanic rocks, gneisses, and granite). Mineral grains and rock pieces are characterized by varying degrees of roundness, from poor to good. The matrix is composed of clay and iron oxide particles intermixed with the clay. The siltstone composition is close to that of sandstone.

The Upper Cheremkhovo Formation

The upper Cheremkhovo Fm.
is composed of siltstones, mudstones, industrial coal, and sandstones. All coal deposits known in the Irkutsk basin are associated with the upper Cheremkhovo Fm. The Irkutsk basin coals are mostly of lower rank; metamorphism increases from the northwest to the southeast [23]. The second-highest stage of metamorphism characterizes coals of the southeastern part of the basin, but there are no coal deposits there that would be economic to mine.

The upper Cheremkhovo Fm. samples were collected at the Cheremkhovo, Golovinsk, Bozoy, and Azeisk coal deposits. The sampled sections of the Cheremkhovo (53°15′15.27″ N, 102°57′09.06″ E) and Golovinsk (53°26′51.21″ N, 102°53′58.48″ E) coal deposits are almost the same (Figure 5). The deposits are composed of coals, mudstones, siltstones, and sandstones. The arkose and feldspar-quartz sandstones consist of badly sorted minerals (quartz, feldspar, and plagioclase) and rock fragments (volcanic rocks, gneisses and granite, and quartzite). Mineral grains and rock pieces are differentiated by roundness, from poor to good. The matrix is composed of clay. The siltstone composition is close to that of sandstone.

The Bozoy section (52°52′01.16″ N, 105°02′28.37″ E) is represented by coals, mudstones, siltstones, and sandstones (Figure 6). The feldspar-quartz sandstones are composed of badly sorted minerals (quartz, feldspar, and plagioclase) and rock pieces (volcanic rocks, gneisses, granite, and quartzite). The fragments are differentiated by varying degrees of roundness, from poor to good. The matrix is composed of iron oxide particles intermixed with the clay. The siltstone composition is close to that of sandstone.

The Lower Prisayan Formation

According to Skoblo et al. [3], the lower Prisayan Fm. is Pliensbachian-Toarcian. The lower Prisayan Fm. is composed of sandstones, siltstones, and gravelstones with thin inter-beds of mudstones, conglomerates, and coals. The lower Prisayan Fm. was sampled along the Angara river's starboard side below the village of Ust-Baley (52°37′34.51″ N, 103°58′08.27″ E; Figure 7). The section is represented by sandstones with beds of pebbles and siltstones. The composition of the pebbles is dominated by felsic effusive rocks, with a few granitoids and metamorphic rocks. Pebbles are characterized by a good degree of roundness. The quartz-feldspar and feldspar-quartz sandstones consist of badly sorted minerals (quartz, feldspar, plagioclase, and mica) and rock pieces (felsic volcanic rocks, granite, gneisses, and quartzite). Clasts differ by varying roundness, from poor to good. The matrix is composed of clay and of iron oxide particles intermixed with clay. The siltstone composition is close to that of sandstone. At different stratigraphic levels of the Irkutsk basin, especially in the Prisayan Fm., beds or nodules with a high calcite concentration occur.

Materials and Methods

The studied material comes from natural outcrops and sections of the Cheremkhovo, Azeisk, Golovinsk, and Bozoy deposits. The sampling locations are shown in Table S1. The petrographic analysis was carried out on sedimentary rocks and conglomerate pebbles. Sample preparation and the analytical procedures were performed at the Center for Geodynamics and Geochronology of the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences (Irkutsk). The samples were crushed, split, and pulverized to a powder.
Major element oxide (SiO2, Al2O3, Fe2O3, TiO2, MnO, MgO, CaO, K2O, Na2O, P2O5) and trace element (Ni, Cu, Ga, Pb, V, Cr, Co, Ba, La, Ce, Nd, Sm, Ta, Sc, Cs, As, Br, Nb, Zr, Y, Sr, Rb, Th, U) concentrations were analyzed by wavelength-dispersive X-ray fluorescence (XRF) with an S8 TIGER (Bruker AXS GmbH, Germany) X-ray spectrometer using SPECTRAplus software, following the procedures described in [26,27]. This method was chosen because of the low cost and high speed of the total analytical procedure compared to other multielement analytical methods.

Results

This study presents geochemical data on the Jurassic deposits of the Cheremkhovo and lower Prisayan formations from the Irkutsk coal-bearing basin. Major element oxide and trace element data for the samples are given in Table S1. The Irkutsk basin's sediments reveal a wide range of major element oxide contents, with SiO2 and Al2O3 being the dominant constituents. The concentration of SiO2 is from 40.93 to 88.24% in the Cheremkhovo Fm. and from 53.12 to 75.57% in the lower Prisayan Fm. The concentration of Al2O3 is from 3.92 to 23.13% in the Cheremkhovo Fm., and from 12.91 to 20.92% in the lower Prisayan Fm. The concentration of Fe2O3 ranges from 0.69 to 19.32% in the Cheremkhovo Fm., and from 1.75 to 13.99% in the lower Prisayan Fm. In sporadic samples the concentration of CaO is very high, more than 6.60%. We associate such CaO concentrations with nodules that occur at different stratigraphic levels of the Irkutsk basin section. Other major element oxides, such as K2O, TiO2, MgO, Na2O, and P2O5, are present in low concentrations (≤4.05%). In order to geochemically characterize the studied deposits of the Irkutsk coal-bearing basin, we excluded carbonate-rich (CaO ≥ 10%), metasomatized, or metamorphosed sediments [28].
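The screening step described above, dropping carbonate-rich samples before the geochemical characterisation, amounts to a simple filter on the CaO column. A minimal sketch (the sample IDs and values below are illustrative, not data from Table S1):

```python
# Exclude carbonate-rich samples (CaO >= 10 wt%) before further treatment,
# as nodule-bearing samples would distort the silicate-fraction indices.
# All values below are illustrative placeholders.

samples = [
    {"id": "A-1", "SiO2": 68.4, "CaO": 1.2},
    {"id": "A-2", "SiO2": 45.1, "CaO": 18.7},  # carbonate nodule-bearing
    {"id": "A-3", "SiO2": 75.6, "CaO": 0.8},
]

CAO_CUTOFF = 10.0  # wt%, as in the text
screened = [s for s in samples if s["CaO"] < CAO_CUTOFF]
print([s["id"] for s in screened])  # the carbonate-rich sample is dropped
```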
Chemical Classification

The samples from the Cheremkhovo and lower Prisayan formations have different SiO2/Al2O3 ratios, and geochemical classification places them in the graywacke, arkose, and litharenite fields. The Cheremkhovo Fm. samples fall in the graywacke, arkose, and litharenite fields [29]. The lower Prisayan Fm. samples fall in the graywacke field. The low SiO2 contents and SiO2/Al2O3 ratios demonstrate the immaturity of the Irkutsk basin sediments and suggest short transport distances between the major source regions and the sedimentary basin. The samples exhibit differences in Na2O/K2O ratios, especially in the lower Prisayan Fm. A group of the lower Cheremkhovo Fm. samples (six points) has very low log(Na2O/K2O) values, from −1.44 to 1.26, and was not included in the classification diagram of Pettijohn [29] (Figure 8).

Weathering and Paleoclimate

The rate of chemical weathering of rocks and the erosion rate of weathering profiles are controlled by climate as well as by source rock composition and tectonics. It is well known that chemical weathering strongly affects the major element geochemistry and mineralogy of siliciclastic sediments [30-32]. Several chemical indices have been proposed to quantify the intensity of weathering [28,30-36]. The chemical index of alteration (CIA) proposed by Nesbitt and Young [30] is widely used to check the degree of chemical weathering in rocks and as a marker of palaeoclimate. The CIA can be calculated as: CIA = Al2O3/(Al2O3 + CaO* + Na2O + K2O) × 100 [30]. The plagioclase index of alteration (PIA) is a CIA modification [32]. It can be calculated as: PIA = 100 × (Al2O3 − K2O)/(Al2O3 + CaO* + Na2O − K2O). CaO* represents the quantity of CaO incorporated in the silicate fraction, and can be determined using the method of McLennan et al. [31], where CaO* = CaO − (10/3 × P2O5). The CIA and PIA values of the studied samples are shown in Figure 9.
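As a worked illustration, the CIA and PIA defined above can be computed from wt% oxide data. Both indices are conventionally evaluated in molar proportions, so this sketch first divides each oxide by its molar mass; the sample composition is illustrative, not a value from Table S1:

```python
# Compute CIA (Nesbitt & Young) and PIA from major oxide data given in wt%.
# Oxides are converted to molar proportions; CaO* is the silicate-fraction
# CaO, corrected for apatite as CaO* = CaO - (10/3) * P2O5 (molar, after
# McLennan et al.). The sample values are illustrative.

MOLAR_MASS = {  # g/mol
    "Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98,
    "K2O": 94.20, "P2O5": 141.94,
}

def molar(wt: dict) -> dict:
    """Convert wt% oxide values to molar proportions."""
    return {ox: wt[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}

def cao_star(m: dict) -> float:
    return m["CaO"] - (10 / 3) * m["P2O5"]

def cia(wt: dict) -> float:
    m = molar(wt)
    return 100 * m["Al2O3"] / (m["Al2O3"] + cao_star(m) + m["Na2O"] + m["K2O"])

def pia(wt: dict) -> float:
    m = molar(wt)
    return 100 * (m["Al2O3"] - m["K2O"]) / (
        m["Al2O3"] + cao_star(m) + m["Na2O"] - m["K2O"])

sample = {"Al2O3": 18.5, "CaO": 1.1, "Na2O": 1.0, "K2O": 2.4, "P2O5": 0.1}
print(round(cia(sample), 1), round(pia(sample), 1))  # -> 75.5 82.4
```

For a moderately weathered composition like this one, PIA exceeds CIA, since the K2O correction removes the damping effect of potassium retained in unweathered K-feldspar.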
In the group of the lower Cheremkhovo Fm. samples, the PIA and CIA values do not match well. This group also has very low log(Na2O/K2O) values in the classification diagram (Figure 8).

Th/U ratios rise with increasing weathering. The Th/U ratios of the lower Cheremkhovo Fm. samples ranged from 0.6 to 4.0 (average value 2.1), from 1.9 to 4.5 (average value 3.0) for the upper Cheremkhovo Fm., and from 1.9 to 4.0 (average value 2.8) for the lower Prisayan Fm. (Figure 10). The plot of Th versus Th/U shows that samples from the Irkutsk basin did not suffer strong weathering. Low Th/U ratios may characterize coal-bearing samples, in which much of the U is bonded to organic matter [37,38]. Using Th/U ratios to assess the weathering degree of the Irkutsk basin samples therefore seems problematic, but such values in the Irkutsk basin sediments may indicate a simple cycling history.

Variations in Rb/Sr and Sr/Cu ratios are applied to represent palaeoclimatic conditions. The Sr/Cu ratios of the lower Cheremkhovo Fm. samples ranged from 2.6 to 5.2 (average value 4.2), and from 9.2 to 15.3 (average value 12.9) for the lower Prisayan Fm. samples (Figure 11). Rb/Sr ratios demonstrated minor diversity, with all samples having values from 0.1 to 2.7 (Figure 11). The smallest Rb/Sr values are seen in the lower Prisayan Fm. (average value 0.3). Sr/Cu ratios rise under drier conditions [40-42]. Rb/Sr ratios decrease under drier conditions and rise under cold conditions, whereas low ratios represent warm conditions [43,44]. Sr/Cu ratios increase and Rb/Sr ratios decrease from the upper Cheremkhovo Fm. to the upper part of the lower Prisayan Fm. section (Figure 11). This can reflect changing climate conditions from warm and humid to hot and arid.

Figure 11. Sr/Cu versus Rb/Sr plot. The trends of Sr/Cu and Rb/Sr ratios are from [43,44].

The geochemical data have shown the effect of climate fluctuations within the deposition of the Irkutsk coal-bearing basin deposits. In the Cheremkhovo Fm. a warm, humid climate prevailed, and we see a high grade of chemical weathering of the source rocks. The indicators used record changes in climate conditions during the deposition of the lower Prisayan Fm., when conditions changed from warm and humid to hot and arid.

Source Composition and Provenance

The geochemical characteristics of sedimentary rocks can preserve provenance information, despite the destruction of primary structures and the alteration of minerals by sedimentary processes [39,45-47]. The Th/Sc ratio does not change considerably during sedimentary recycling [39], whereas the Zr/Sc ratio increases significantly. Thus, Th/Sc ratios can be applied in tracing sedimentary provenance, and high Zr/Sc ratios can be considered indicators of zircon enrichment.
On a Th/Sc-Zr/Sc diagram (Figure 12), the Cheremkhovo Fm. and lower Prisayan Fm. samples reflect a positive correlation between Th/Sc and Zr/Sc. The average Zr/Sc and Th/Sc ratios of the Cheremkhovo and lower Prisayan formations are 22.21 and 0.95, respectively. This pattern suggests that the provenance of these rocks was influenced by the source composition, not by sediment recycling [39].

Minerals 2021, 11, x FOR PEER REVIEW

Figure 12. The Zr/Sc versus Th/Sc plot (after [39]).
Major element contents record sediment recycling processes and the changing proportions of sedimentary and first-cycle source rocks, which are shown by the index of compositional variability (ICV) [48]. The ICV can be calculated as: ICV = (Fe2O3 + K2O + Na2O + CaO + MgO + MnO + TiO2)/Al2O3 [48]. The data demonstrate an increase in the ICV values to immature values (ICV > 1, by [46]) (Figure 13). Samples with ICV > 1 were deposited in tectonically active settings. On the other hand, samples with ICV < 1 are mature and were deposited in a tectonically quiescent environment [48]. [50], and is similar to that of the average Earth's crust [51] (Figure 14). The high Rb concentrations (>40 ppm) indicate that the studied samples were derived from an acidic-intermediate igneous source (Figure 14).

Discussion

The Cheremkhovo Fm. is considered to belong to either the Pliensbachian [3], Pliensbachian-Toarcian [2,4] or Pliensbachian-Aalenian [5] intervals by biostratigraphy. The upper Cheremkhovo Fm.
marks the lacustrine-swamp sedimentation in the wide territory of the Irkutsk coal-bearing basin. There is evidence of the presence of tuffaceous interlayers in the Cheremkhovo Fm. sediments [17,23,52]. An ash interlayer (tonstein) is present in the industrial coal seam of the Azeisk deposit in the northwest part of the basin. The tonsteins were formed due to the transformation of felsic pyroclastic material [17]. The age of the coal-bearing deposits of the Cheremkhovo Fm., represented at the Azeisk coal deposit, was established as 187.44 +0.45/−1.60 Ma using the LA-ICP-MS U-Pb method on accessory zircons from tonstein [6]. The age of the upper Cheremkhovo Fm. places the upper limit of the tectonic quiescence period before the intensification of tectonic processes in the southern mobile framing of the Siberian craton. In the context of regional stratigraphy, the obtained age of 187.44 Ma is consistent with the valid stratigraphic scheme [2] and the research of Skoblo et al. [3] that supplements it. The coarsening of the basin's sediments is upward through the Prisayan and Kuda Fm. According to the consistent scheme [3], the Prisayan Fm. and Kuda Fm. are Pliensbachian-Aalenian and Aalenian-Bajocian, respectively. From the lower Prisayan Fm. there are no industrial coal seams in the Irkutsk basin. The lower Prisayan Fm. is Pliensbachian-Toarcian [3]. Petrographic studies of the Irkutsk basin rocks are fully consistent with the data of previous studies [3,7,8]. The geochemical studies generally confirm this but give more detailed information. The available data indicate three major provenance areas of sediments: the Siberian Craton, the Caledonian complexes bordering the craton, and the Transbaikalia region [12,18]. The evolution of the relief near the southern part of the Siberian Craton depended on the subduction of the Mongol-Okhotsk oceanic slab [53]. Data in the Irkutsk basin confirmed the coarsening of sediments upward through the Prisayan and Kuda Fm.
This fact, together with an increase in 143Nd/144Nd ratios and the presence of Jurassic-age detrital zircons, was associated with the closure of the Mongol-Okhotsk Ocean [12,54]. To constrain the provenance area for the Cheremkhovo Fm., isotope-geochemistry and geochronological studies are required. An understanding of coal deposits demands wider perspectives on the processes in sedimentary settings. Coal-depositional environments significantly influence coal's characteristics [55]. Different geochemical parameters have been used as indicators for the depositional environment during or shortly after coal accumulation. The climate change in the Toarcian led to an almost complete stop in the processes of coal accumulation in the Irkutsk basin.

Conclusions

This study presents geochemical data on Jurassic deposits of the Cheremkhovo and lower Prisayan formations from the Irkutsk coal-bearing basin. Based on the chemical composition of siliciclastic rocks, the Cheremkhovo and the lower Prisayan formations are classified as graywacke, arkose, and litharenite. CIA and PIA values and Rb/Sr and Sr/Cu ratios showed the event of climate change within the sedimentation of the Irkutsk coal-bearing basin deposits. The Cheremkhovo Fm. was deposited under a prevailing warm, humid climate, and we see a high grade of chemical weathering of the source rocks. The indicators used revealed the change in climate from warm and humid to hot and arid during the deposition of the lower Prisayan Fm. The provenance of the Irkutsk coal-bearing basin was influenced by the source composition, not recycling. The studied sediments were deposited due to the destruction of mainly felsic to intermediate igneous rocks.
2021-05-04T22:06:39.192Z
2021-03-30T00:00:00.000
{ "year": 2021, "sha1": "fba9edefeb5dfac4fc1436225a6995a9f8ade45c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-163X/11/4/357/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "36fc27703a83b32eb4eb52e4ab39cd4d4f7e3629", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
221477072
pes2o/s2orc
v3-fos-license
Developments of Smart Drug-Delivery Systems Based on Magnetic Molecularly Imprinted Polymers for Targeted Cancer Therapy: A Short Review

Cancer therapy is still a huge challenge, as chemotherapy in particular shows several drawbacks like low specificity to tumor cells, rapid elimination of drugs, high toxicity and lack of aqueous solubility. The combination of molecular imprinting technology with magnetic nanoparticles provides a new class of smart hybrids, i.e., magnetic molecularly imprinted polymers (MMIPs), to overcome limitations in current cancer therapy. The application of these complexes is gaining more interest in therapy due to their favorable properties, namely, the ability to be guided and to generate slight hyperthermia with an appropriate external magnetic field, alongside the high selectivity and loading capacity of imprinted polymers toward a template molecule. In cancer therapy, using MMIPs as smart drug-delivery robots can be a promising alternative to conventional, directly administered chemotherapy, aiming to enhance drug accumulation/penetration into tumors while causing fewer side effects on other organs. Overview: In this review, we state the necessity of further studies to translate the anticancer drug-delivery systems into clinical applications with high efficiency. This work relates to the latest state of MMIPs as smart drug-delivery systems aiming to be used in chemotherapy. The application of computational modeling toward selecting the optimum imprinting interaction partners is stated. The preparation methods employed in these works are summarized, and their attainment in drug-loading capacity, release behavior and cytotoxicity toward cancer cells in in vitro and in vivo studies is stated. As an essential issue toward the development of a body-friendly system, the biocompatibility and toxicity of the developed drug-delivery systems are discussed. We conclude with the promising perspectives in this emerging field.
Areas covered: Publications from the last ten years (till June 2020) on magnetic molecularly imprinted polymeric nanoparticles for application as smart drug-delivery systems in chemotherapy.

Hurdles in Chemotherapy

Cancer is one of the most difficult-to-manage diseases, causing a vast amount of mortality around the world, with more than 10 million new patients each year. The three main approaches in cancer therapy are surgical removal, irradiation and chemotherapy. The cancer type and development stage determine the comparative value of these approaches. However, the most applied strategy implemented for localized and metastatic cancer treatment is chemotherapy, which is carried out alone or combined with other treatment approaches [1]. Conventional direct administration of chemotherapeutic agents shows several serious hurdles, including low or no specificity to tumor cells and consequently low discrimination between cancer cells and healthy cells, rapid elimination of drugs from the body, substantial multidrug resistance, lack of aqueous solubility, poor oral bioavailability, narrow therapeutic windows, restricted cellular penetration and low therapeutic indices [1][2][3]. Direct systemic administration leads to extreme fluctuation in the drug plasmatic concentration, causing high toxicity, poor specificity and massive side effects on healthy cells. These drawbacks, as the leading causes of the dramatic decrease in the therapeutic value of conventional chemotherapeutic agents, should be addressed with novel strategies [4] concerning the use of tumor-targeted delivery systems capable of promoting specific drug accumulation at the pathologic site. Unfortunately, the drastic adverse effects imposed by chemotherapeutic agents on healthy organs are one of the main reasons for the vast mortality rate of cancer patients.
On the other hand, the relatively weak bio-accessibility and penetration of these drugs into tumor tissues create the need for higher doses, leading to increased toxicity and the incidence of multiple drug resistance [5]. Multidrug resistance (MDR) is one of the fundamental obstacles preventing several chemotherapeutic agents like 5-fluorouracil (5-FU) from acting efficiently in therapy [6]. Efficiency in drug delivery means the safe transport of the drug to target sites without significant degradation of the drug and without harming the body [7]. An ideal drug-delivery system (DDS) can ensure the release of the therapeutic agent at the right site and in the right dosage for the required period to maximize its efficacy by accumulation at the site of action, reaching the therapeutic concentration level within the therapeutic window while minimizing the side effects on healthy tissues [3,8]. Furthermore, this delivery system should necessarily be biocompatible and biodegradable to be able to enter the body without specific toxicity, immunogenicity or accumulation in organs other than the tumor [4]. Keeping the main focus on anticancer drugs, in the next subsections we first give a brief overview of the systems developed as DDSs, their features and obstacles, followed by a description of the development of DDSs based on the recently developed molecularly imprinted multifunctional polymers (MIPs), alone as well as combined with the highly interesting magnetic materials. Section 2 highlights the latest studies of this versatile combination as a targeted DDS in the delivery of chemotherapy agents to the tumor location and the enhancement of their therapeutic efficacy, stating the current computational pre-screening methods, preparation methods and in vitro/in vivo outcomes from different aspects, with an emphasis on the safety regulation of the obtained systems.
We conclude with an outlook on the role of these systems in anticancer drug-delivery technology, suggesting further study aspects.

Nano-Size Delivery Systems

Over the years, numerous studies have been performed to develop nanosize DDSs with a broad range of different materials and anticancer drugs, alongside the nanotechnology that has emerged as a powerful tool for drug delivery. Altogether, these show massive potential in terms of pharmacological enhancement and control over drug performance in chemotherapy [9,10]. The use of nanotechnology for cancer treatment is an active area of biomedical research [7]. Nanomaterials have had a strong influence on developing promising DDSs [11]. A wide range of materials, including natural polymers (biopolymers) [12][13][14] and (semi)synthetic polymers [13,15] in the form of polymeric nanoparticles (NPs), micelles, vesicles or dendrimers [16], as well as lipids (liposomes) [17] and inorganic materials [13], have been employed to develop drug-delivery complexes with high biologic efficacy. Liposomal nanodelivery of chemotherapeutics, as the first generation of nanosize DDSs, indeed became the most successful DDS in chemotherapy, judging by the number of encapsulated anticancer drugs such as daunorubicin, vincristine, irinotecan and doxorubicin that have entered several stages of clinical trials [18] and by the several formulations approved by the US Food and Drug Administration (FDA), such as Myocet®, Daunoxome®, Doxil®/Caelyx® and the most recently approved liposomal DDS, Onivyde® (liposomal irinotecan), approved as a second-line treatment for metastatic pancreatic cancer [19][20][21]. However, considerable shortcomings of liposomes include the low capacity to encapsulate lipophilic drugs, manufacturing processes involving organic solvents, frequent leakage and instability in biologic fluids and aqueous solutions [22,23]. Polymeric NPs are also extensively employed as biomaterials because of their favorable characteristics.
About 30% of all nanomedicines approved by the FDA from the mid-1990s to 2016 belong to polymeric NPs, due to their high synthetic versatility and ease of modification [24], showing much-reduced adverse effects compared to bare drugs. Copaxone® (a random copolymer composed of l-glutamic acid, l-alanine, l-lysine and l-tyrosine [25], used in multiple sclerosis) and Neulasta® (PEGylated GCSF protein for the treatment of neutropenia in chemotherapy) are two polymeric NP formulations that ranked among the top 10 best-selling drugs in the US in 2013 [26,27], as is the FDA-approved polymeric NP for cancer therapy, Eligard® (Tolmar) (leuprolide acetate and the polymer PLGH (poly(dl-lactide-co-glycolide)), used in prostate cancer) [21]. All these DDSs based on polymeric NPs were introduced to decrease traditional drug administration challenges but come with their own obstacles and constraints, emphasizing the importance of further research [21,28]. There is no consensus about the actual therapeutic efficacy of the developed NPs toward cancer therapy because of the many different kinds of NP treatment techniques that are used. It can be said that, despite encouraging and remarkable results with polymeric NPs toward cancer therapy, there has been limited clinical advancement [7,29]. It is hard to conclude whether they are equal to or better than conventional treatments with regard to "treating" cancer [7]. The lack of a therapeutically acceptable drug-loading capacity and initial fast premature drug release lead to suboptimal activity at the targeted site [4]. Dose-dumping-induced toxicity, inconsistent release patterns [2] and changes in the physicochemical properties of the NPs in the systemic circulation, such as in particle size and aggregation behavior [28], are some of the persistent limits of these formulations and raise the necessity of further studies to address these issues. As analyzed by Wilhelm et al.
[30] in the literature from the past ten years on NP-based drug carriers, only 0.7% of the administered NP dose was delivered to a solid tumor. This low delivery efficiency negatively affects the translation of nanotechnology to clinical applications [10]. Most of these systems reach the site of action passively using the enhanced permeability and retention (EPR) effect offered by the vascular permeability and lack of lymphatic drainage around tumors, which facilitate the passive extravasation and accumulation of NPs within cancer cells [31]. Hence, many research groups focus on active tumor-targeting by optimization, surface modification and triggered drug release of NPs to escape immune clearance, avoid nonspecific cell uptake, stick to the target tissues and interact with the desired cells [22,32,33], which could address the tumor tissue directly and enhance chemotherapeutic efficacy [34]. However, despite the significant findings and the potential to impact drug clinical features, only marginal progress has been achieved in their therapeutic efficacies toward clinical application. Therefore, the current focus in developing nanomedicines of high therapeutic index lies in tailoring the fundamental physicochemical properties of NPs, most importantly selectivity, stability, surface properties and size [22]. Among various DDSs, one of the newly developed viable strategies for this aim could be molecular imprinting technology (MIT), to generate new nanoscale and larger tailor-made pharmacological complexes [4].

Molecularly Imprinted Technology toward Drug-Delivery Systems (DDSs)

MIT is a step further into the design of polymeric NPs. This technology has become an established strategy, but it is still considered a burgeoning method toward biomedical applications. MIT allows for producing smart materials in nano and larger sizes with active sites that match the size and functionality of the target compound, the so-called template, within a polymeric matrix.
Generally, the copolymerization of a liquid mixture containing porogenic solvent(s), functional monomers, template molecules and crosslinkers, with a careful design, leads to the development of molecularly imprinted polymers (MIPs). Functional monomers are responsible for creating intermolecular interactions with the template molecule through either covalent or non-covalent bonds, whereas crosslinkers form the polymer scaffold around the template [35]. The obtained MIPs possess tailored cavities resembling the original template in terms of size, shape and orientation [36]. Regarding drug delivery, due to the intermolecular interactions like hydrogen bonds, dipole-dipole and ionic interactions between the template molecule and polymer functional groups, these cavities are capable of enhancing the NPs' loading capacity, improving drug stability and solubility and adjusting the drug release kinetics [4,37,38]. An intelligent or smart drug release is the anticipated release of a therapeutic agent on demand. For this aim, these MIPs can react to external stimulation, making changes in their structure or in the strength of interactions between the polymer functional groups and the template captured in the cavities. This feature is highly suitable for DDSs as it allows the drugs to be released only upon a particular change in the environment (Figure 1) [35], such as heat, pH changes, light, electric or magnetic fields, enzymes, reduction and ultrasound waves [8,29]. The combination of stimuli-sensitivity and imprinting technology potentially leads to a high loading capacity of the template by imprinting, while the response to external stimuli modulates the affinity of the polymeric network for the template molecule, providing the regulatory or switching capability of the loading/release processes [37].
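The stimulus-modulated affinity described above can be pictured with a toy release model: a first-order release law whose rate constant depends on the environment (here, pH). This is only an illustrative sketch; the rate constants are invented, and the cited studies do not report this specific kinetic form.

```python
from math import exp

def released_fraction(k_per_h, t_h):
    """First-order release: fraction of cargo released after t hours (toy model)."""
    return 1.0 - exp(-k_per_h * t_h)

# Toy rate constants: the premise is that an acidic, tumor-like environment
# weakens the template-polymer interactions, so release is faster at pH 5
# than at physiological pH 7.4. Values are hypothetical.
K_BY_PH = {5.0: 0.10, 7.4: 0.02}   # 1/h

for ph, k in K_BY_PH.items():
    print(ph, round(released_fraction(k, 24.0), 2))
```

After 24 h the toy model releases most of the cargo at pH 5 but only a minor fraction at pH 7.4, mirroring the pH-switchable behavior the text describes qualitatively.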
The main advantages of MIPs in comparison with biomolecules such as antibodies and biologic receptors are their relatively high stability over various conditions and low cost [37]. MIPs have a stable spatial structure and a long-lasting shelf life that can be up to several years at room temperature [38], with exceptional physical robustness and stability against tough conditions, including highly acidic and basic pH, temperature fluctuation, organic solvents and mechanical and thermal stress [36,39,40]. In addition, compared to non-imprinted polymeric NPs, the chief advantages of MIPs are their high selectivity and affinity for the target molecule, leading to higher loading capacity and potentially lower dose-dumping and immature burst release of the cargo [38]. They have been implemented in the development of biosensors [41], antibody mimics [42], catalysis [43], molecular recognition [44], drug delivery [45,46], diagnostics [47] and other biomedical applications [36]. As drug-delivery carriers, they have a favorable specific binding tendency and loading, stability under different harsh settings, flexibility and antibody-like recognition [48]. Noticeable progress has been made in this area, and many nanoMIP-based DDSs with largely improved sustained-release drug-delivery ability compared to their control polymers have been developed for different kinds of drugs intended for use in various diseases [49]. The constraints concerning MIPs still need to be addressed, such as slow binding kinetics, aqueous compatibility, permeability for drug extraction and heterogeneity of binding-site distributions [8,37]. However, there is a high chance that anticancer drugs transported by these carriers can easily cross the cytoplasmic and nuclear membranes, providing in situ delivery at intact concentration and, consequently, higher efficiency in the elimination of tumor cells than when administered alone by conventional chemotherapy [50].
Anticancer drugs such as doxorubicin, 5-fluorouracil and paclitaxel were often utilized as templates of MIPs to achieve controlled/sustained release of these drugs as well as better bioavailability, protection of the drug from fast degradation, diminished adverse effects and an efficient localized effect for potential chemotherapy of various cancers [51][52][53][54][55]. Bai et al. reported high drug loading (17.1%) and encapsulation efficiency (85.5%), as well as a desirable pH-dependent release (much faster release at pH 5 than at pH 7) with a very slow and controlled release in a paclitaxel-imprinted system [51]. Similar outcomes were also reported by other groups [52,54]. The concept of MIT has a long history, going back to the early 1930s. However, the preparation of organic polymers with molecular recognition as we know it today was first reported only in 1972, when the two independent laboratories of Wulff and Klotz reported the preparation of organic polymers with a preselected ligand. Template molecules that were present during polymerization, or their derivatives, were better recognized by the resultant structures [37,56,57]. Later on, a magnetically assisted DDS (MADDS) was introduced by Widder and coworkers in 1978, applying inorganic magnetic material in the structure of MIPs [58]. The combination of a magnetic core covered by a thin MIP shell leads to the generation of smart hybrid structures, namely magnetic MIPs (MMIPs), that provide the possibility of high drug loading and low off-target drug release followed by remote guidance, rapid distribution and local accumulation of the obtained MMIPs by using an external magnetic field [59]. Due to the good biocompatibility and chemical, thermal and mechanical stability, high sorption capacity, high selectivity, reusability, low cost and facile preparation methods of the available magnetic materials, especially magnetic NPs (MNPs), the design of MMIPs as DDSs has recently become favorable [60,61].
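The two figures of merit quoted above for Bai et al.'s paclitaxel system, drug loading (17.1%) and encapsulation efficiency (85.5%), follow the standard definitions sketched below. The masses used here are hypothetical, chosen only to illustrate the arithmetic, and are not the batch quantities of the cited study.

```python
def encapsulation_efficiency(drug_loaded_mg, drug_added_mg):
    """EE% = mass of drug actually entrapped / mass of drug initially added."""
    return 100.0 * drug_loaded_mg / drug_added_mg

def drug_loading(drug_loaded_mg, carrier_mg):
    """DL% = mass of entrapped drug / total mass of the loaded particles."""
    return 100.0 * drug_loaded_mg / (drug_loaded_mg + carrier_mg)

# Hypothetical batch: 25 mg of drug added to 100 mg of imprinted polymer,
# of which 21.4 mg ends up entrapped.
ee = encapsulation_efficiency(21.4, 25.0)   # 85.6%
dl = drug_loading(21.4, 100.0)              # ~17.6%
print(round(ee, 1), round(dl, 1))
```

Definitions of DL% vary slightly across the literature (some papers divide by the carrier mass alone rather than the total particle mass), so reported percentages are only comparable when the same convention is used.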
Due to the high surface-to-volume ratio of MNPs, compared to bulk MIPs, the number of imprinting positions at the polymer surface is increased, leading to MMIPs with more accessible imprinted positions, rapid mass transfer and, hence, fewer permeability issues, as well as strong anti-interference ability [62]. One of the main obstacles to nanocarriers' efficacy in drug delivery could be the lack of knowledge about the precise bio-distribution, location and subsequent therapeutic effects, as most studies have not examined the targeting efficiency of NPs in real time in vivo [28]. In this regard, among the actively targeted MIP-based systems [53,54,[63][64][65][66], MMIPs are appealing [53,64,65] because of the ease of active remote guidance to the site of interest in the body by using an external magnetic field. Regarding tumor chemotherapy, this feature largely enhances the drug concentration in the tumor tissues at much lower cost and potentially improves the therapeutic efficacy while narrowing the adverse toxicity to healthy cells through local accumulation at the tumor [49].

Magnetic Molecularly Imprinted Polymers, Promising Hybrid Nano-DDSs

Besides ligand-mediated targeting, physical targeting can be achieved by adding specific physical properties to the DDSs. One of the most interesting features could be the magnetic force, which can accumulate magnetic materials in a specific region, in this context the tumor location, by use of a magnetic field [22]. Nowadays, MNPs find vast applications in medicine, analytical chemistry and biotechnology. The most commonly used MNPs include metal or metal oxide NPs [67].
Iron oxide-based MNPs (Fe2O3, Fe3O4), especially the only clinically approved MNP, the superparamagnetic iron oxide NP (SPION), are extensively investigated in nanomedicine for their biocompatibility, stability, eco-friendliness, low toxicity, contrast-agent properties, ability to generate heat when submitted to an alternating magnetic field (hyperthermia) and intrinsic magnetic properties, i.e., superparamagnetism, which allows them to exhibit magnetic properties only in the presence of an applied magnetic field [9,68]. Considering the property of superparamagnetism, they are broadly investigated in different clinical applications [67], especially as imaging agents. Feridex®/Endorem® and GastroMARK™/Umirem® (AMAG Pharmaceuticals) are SPION NPs coated with dextran and silicone, respectively, which, due to their superparamagnetic character, were approved as imaging agents [21]. Magnetic drug targeting (MDT) involves enriching SPIONs in the area of interest via a strong external magnetic field and, consequently, potentially enables more specific and efficient treatment. MDT of drug-loaded SPIONs is indeed closer to application in patients [22]. Successful employment of SPIONs for cancer treatment was demonstrated by complete tumor remission without significant side effects following the administration of mitoxantrone-SPIONs (with only 5-10% of the conventional chemotherapeutic dose) through the tumor-supplying vessel in rabbits and the application of a strong external magnetic field over the tumor location. The distribution profile after MDT displayed 66.3% of the particles localized in the tumor region with magnetic targeting, compared to less than 1% of drug and NPs reaching the tumor region during conventional intravenous application [69].
Furthermore, applying a thin imprinted polymer shell on the surface of the MNPs enhances the physicochemical properties intended for MDT, by improving the binding kinetics and providing a high surface-to-volume ratio, increased binding capacity, uniform spherical shape and monodispersity in the aqueous blood circulation [62]. Due to the presence of MNPs in their structure, they can induce so-called magnetic hyperthermia (local heat enhancement) when submitted to an external magnetic field (with subsequent release of the drug when the MMIP is loaded). This feature is exclusively efficient for the destruction of cancer cells, which cannot survive in the temperature range of 40-48 °C, unlike healthy cells, which can endure such temperatures with insignificant or no injury [9,70,71]. It has been well known for over three decades that tumor cells have a significantly higher sensitivity to moderate hyperthermia at "fever-range" temperatures (41-45 °C) than normal cells, as usually the consequences of hyperthermia on healthy cells show up only at temperatures >50 °C, with coagulation [71]. Nanotherm™ (MagForce) consists of aminosilane-coated SPIONs and is designed for tumor therapy (glioblastoma) using local tissue hyperthermia [72,73]. Nanotherm™ is already marketed in Europe for the thermotherapy of glioblastoma and is in late-stage clinical trials in the US, with FDA approval pending [74,75]. Here, the magnetic fluid is injected directly into the tumor, and then an alternating-magnetic-field applicator that changes its polarity up to 100,000 times per second is used to selectively heat the particles, resulting in local heating of the tumor environment (temperatures reach 40-45 °C), leading to cell death [72] and an increase in overall survival of up to 12 months [76].

State of the Art

In the last decade, the preparation and application of different types of magnetic nanocarriers for the delivery of chemotherapy agents were well studied and fairly reviewed [9].
However, the employment of MMIPs as carriers of anticancer agents is emerging. Only a few publications have reported the development of MMIPs aimed at the smart delivery of anticancer drugs locally to the tumor [3,50,54,61,65,68,[77][78][79][80][81]; these shed light on the development of a novel generation of multifunctional hybrid DDSs and future perspectives in this field. 5-fluorouracil (5-FU), doxorubicin (DOX), carbazole derivatives (CAB1, CAB2), epirubicin (EPI) and azidothymidine (AZT) were chosen as anticancer drug templates for this purpose (Figure 2). The development methods employed are summarized, and their achievements so far in in vitro and in vivo studies are stated in the following. 5-FU, DOX and EPI, with a broad antineoplastic spectrum, are used to treat many different types of cancer, such as colon or rectum, breast, liver, bladder and brain [50,61,79,80]. The main drawbacks of direct chemotherapy with these agents can be listed as severe depression of hematopoiesis (anemia, leucopenia, thrombopenia), infections and, regarding EPI, cardiotoxicity [50], 5-FU resistance and rapid clearance from the body [6] and DOX cumulative cardiotoxicity and nephrotoxicity [82]. Overall, these severe side effects have had a deep impact on the mortality rate of cancer patients [83].

Magnetic Molecularly Imprinted Polymer (MMIP) Preparation Methods

The development of MMIPs usually includes the synthesis of MNPs, followed by surface protection and functionalization of these MNPs, template attachment and decoration of the pre-polymerization complex with monomers and crosslinker(s), and polymerization. Template removal is performed afterwards to obtain the empty imprinted cavities for analytical applications such as recognition and isolation [59]. In the case of a DDS, the last step may not be necessary. The interaction of the functional monomer and the template in this system is governed by equilibrium.
The functional monomers normally must be added in excess, relative to the number of moles of template, to favor the formation of the complex; this leads to several configurations of the template-functional monomer complex with a range of affinity constants. The crosslinker controls the morphology of the polymer matrix, stabilizes the imprinted binding sites and gives the polymer matrix the mechanical stability needed to retain its molecular selectivity via the imprinted cavities [38]. Ethylene glycol dimethacrylate (EGDMA) and trimethylolpropane trimethacrylate (TRIM) are the most common crosslinkers; they affect the physical characteristics of the polymers but have little effect on the specific interactions between the template and functional monomers [4,38]. TRIM, as a crosslinker, yields polymers with more rigidity, better structural order and more effective binding sites than EGDMA [38]. It should be noted that in all the following studies, the magnetic non-imprinted polymers (MNIPs) were prepared by the same synthesis procedure, but without adding the template molecule. The preparation of magnetic Fe3O4 NPs is commonly performed by chemical co-precipitation of the Fe2+ and Fe3+ ions of FeCl2·4H2O and FeCl3·6H2O at 80 °C in the presence of ammonia (sodium hydroxide solution, 2.0 mol·L−1), giving a high yield and a high sorption capacity of the resulting particles [67,84]. These MNPs can aggregate and degrade gradually by oxidation, which reduces their magnetization capacity [84]. Therefore, their long-term stability and surface functionalization are fundamental issues for these iron-based MNPs [85]. In addition, a hydrophobic surface with a large surface-area-to-volume ratio is unfavorable for use in biologically accepted fluids [59,86].
Hence, for biomedical applications, it is desirable to modify MNPs with a surface coating layer to ensure their stability in suspension, protection against oxidation and in vivo biocompatibility [84]. The well-developed methods for this purpose can be divided into two main groups: organic and inorganic coatings. Among inorganic coating materials, silica and gold (Au) are frequently used. Dextran, alginate, starch, chitosan, silanes, glycosaminoglycan, sulfonated styrene-divinylbenzene, polyethylene glycol (PEG), polyvinyl alcohol, poly(methyl methacrylate), polyacrylic acid and dendrimer shells are broadly used organic coating materials for MNPs [59]. The choice of the coating material depends strongly on the other interaction partners in the imprinting process as well as on the final application of the designed system, since each coating confers different properties. Coating with materials bearing functional groups offers the possibility to immobilize other materials at the surface prior to the imprinting process, leading to a stable pre-polymerization complex and higher imprinting efficiency. The polymerizable vinyl groups provided by silane coupling agents are a good and widely used example, as this feature strongly directs the molecular imprinting polymerization to occur selectively at the surface of the MNPs [59,61,87]. The Griffete group utilized this feature in their imprinting process to obtain DOX-loaded MMIPs for drug delivery. They functionalized the MNPs by simply growing a thin polymer layer of acrylic acid (AA) monomers on their surface, forming a molecular monolayer with polymerizable vinyl end groups [68,77]. The process is a simple complexation reaction of AA with the unsaturated iron ions of the MNP surface. Subsequently, in the presence of functional monomers and crosslinkers, the AA monolayer directs the polymerization to occur selectively at the surface of the MNPs [68,77,87].
This simple approach was rapidly adopted in other studies as a starting point to increase drug loading and imprinting yield [78,88].

Choice of Suitable Monomers with the Aid of Computational Modeling

As most studies on MIT show, limitations in the preparation of imprinted systems arise from the choice of the bond type and the monomers used to attach the drug template to these designed systems. Optimal template-monomer-crosslinker-solvent interactions strongly affect the success of the imprinting process [89]. The selection of the best functional monomers is usually made by testing different formulations with various monomers, or by a trial-and-error procedure, which is time-consuming and expensive [89,90]. Nowadays, computational simulation is suggested for the rational design of imprinting systems: monomer preselection is optimized based on the potential monomer-template complex conformation, by comparing the binding energies between the template and functional monomers and by using molecular docking platforms to predict the several modes of interaction between the reaction counterparts [91]. Indeed, molecular modeling and computational approaches can facilitate MIP development [92]. Wu et al. [93] were among the pioneers in employing a computational approach to study the nature of recognition in MIPs. They reported a correlation between the binding energy of the template-functional monomer complex and the capacity factor, via the production of high-affinity binding sites in the obtained polymer. There are several computational chemistry approaches, based on molecular mechanics (MM) and molecular dynamics (MD), that aim to perform predictive analyses of the intermolecular behavior of such complexes as well as their interaction with the environment, utilizing quantum methods (such as ab initio quantum mechanical calculations and semi-empirical methods) [89,91,93], solubility parameters [94] and geometric parameters [89].
Several groups have developed strategies for the rational design of MIPs through molecular modeling [40,50,61,80,89,[95][96][97][98][99]. Nevertheless, there is no consensus on the most effective computational model for predicting and determining the properties of the designed MIP [90]. These approaches are implemented in several widely used software packages under different methods [90,100], and a comprehensive classification and application of these methods in the rational design of imprinted polymeric nanocarriers is still greatly needed. We briefly introduce the methods and parameters implemented in the studies of interest to this review in the following.

Cohesion Parameters

A first insight into the computational modeling of the intended complex can be gained by investigating the solubility and miscibility of the components. The compatibility and miscibility of functional monomers can be determined prior to the experimental step by measuring solubility parameters, as well as the cohesive energy density (CED) of the components [61]. In this regard, Talavat and Güner chose optimal monomers to synthesize pH-responsive 5-FU-loaded MMIPs based on the chemical affinity profiles of the Hansen method, a thermodynamic computational calculation method, using the "Hansen solubility parameter calculation program" (HSPCP), which contains data for the polymers and solvents [61]. The Hansen solubility parameter (δ) comprises the subparameters δd, δp and δH, where δd represents the dispersive component, δp the polar component and δH the hydrogen-bonding component. The Hansen parameter thus covers all the molecular interactions in a mole of material: dispersion forces, polar (dipole-dipole) interactions and specific interactions such as hydrogen bonding [94]. On this basis, a good interaction partner for a given polymer should have a solubility parameter close to that of the polymer [94].
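As a concrete illustration of how these cohesion parameters are used for monomer preselection, the sketch below computes the total Hansen parameter and the Hansen distance Ra between a template and candidate monomers, choosing the closest match. All (δd, δp, δH) values here are hypothetical placeholders, not data from the cited studies.

```python
import math

def hansen_distance(a, b):
    """Hansen distance Ra between two (δd, δp, δH) sets (MPa^0.5):
    Ra^2 = 4(δd1-δd2)^2 + (δp1-δp2)^2 + (δH1-δH2)^2."""
    dd, dp, dh = (a[i] - b[i] for i in range(3))
    return math.sqrt(4 * dd**2 + dp**2 + dh**2)

def total_parameter(p):
    """Total Hansen solubility parameter: δ = sqrt(δd^2 + δp^2 + δH^2)."""
    return math.sqrt(sum(x**2 for x in p))

# Hypothetical (δd, δp, δH) values for a drug template and candidate monomers
template = (18.0, 10.0, 12.0)
monomers = {"MAA": (16.0, 7.0, 10.5), "4-VP": (18.5, 6.5, 6.0)}

# The monomer with the smallest Ra is the best chemical-affinity match
best = min(monomers, key=lambda m: hansen_distance(template, monomers[m]))
for name, p in monomers.items():
    print(name, round(hansen_distance(template, p), 2))
print("closest match:", best)
```

The same pairwise distance can be computed against candidate solvents to anticipate solvent effects on the pre-polymerization complex.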
CED, on the other hand, is a quantitative measure of how strongly the monomers interact with one another. The stronger the monomers' interactions, the higher the packing density and, subsequently, the more rigid the polymer chains. Chain stiffness hinders molecular movement, reducing the creation of the temporary voids that penetrating molecules need in order to jump between sites [101]. For MMIPs, however, this issue is less important because of the thin imprinted polymer layer around the MNP. An ideal imprinting formulation should have a balance of solubility parameters and CED parameters between the monomers and the template. These can be considered the basis for selecting the template-monomer complexes with the highest chemical affinity. Because the polymerization occurs in solution, the possible effects of the solvents intended for the process should be considered as well. In molecular imprinting procedures especially, organic solvents are mostly used; they greatly affect the physical features of the final complex, including not only its porosity, surface area and swelling behavior, but also the stability of the pre-polymerization complex, which in turn decisively determines the selectivity of the resulting imprinted cavities [102]. It is noteworthy that any residues of these organic solvents in the final complex should be quantified against the maximum acceptable amounts stated by the regulatory authorities for the product safety profile, as discussed in the related section below.

Interaction Mode and Energy Calculation

Calculating the thermodynamic features, namely the geometric optimization as well as the binding types and energies of each virtual monomer-template pair, has become a routine computational approach prior to MIP synthesis, in order to predict the most stable complex with the lowest binding energy profile [80,[89][90][91][92][93]99,103,104].
This is carried out alongside atoms-in-molecules analysis of the molecular electron density distribution of the complex, to understand the nature of the bonds in deeper detail [89]. However, not all of these studies took the effect of solvents into account [90][91][92]97,99,102,103]. Ab initio computational methods can provide more reliable predictions because they take the polymerization solvent(s) into account in the design of MIPs, since solvents can change the energy and stability of the template-monomer complexes [91,97]. In other words, different solvents can produce different template-monomer complex stabilization energies. This effect can be calculated by the Hartree-Fock method in order to select the most stabilizing system [97]. This ab initio method shows the relative stability through the total energies and relative energies of the complexes [89,96]. Herein, to obtain the best selectivity at the molecular level in the pre-polymerization solution with the lowest binding energies, Dramou and coworkers assessed the possible influence of the solvent (DMSO) on the conformation of EPI-loaded MMIPs on the basis of ab initio calculations of the binding energy of the interaction partners in the pre-polymerization mixture. However, ab initio calculation is relatively time-consuming, making it difficult to screen suitable monomers and solvents. MD simulations have been suggested as a fast method to search for optimal imprinting conditions, especially for the screening of functional monomers. This approach is based on classical mechanical force fields that describe non-covalent interactions: hydrogen bonding, van der Waals forces, dipole-dipole interactions and electrostatic interactions [99]. Accordingly, Dramou et al. used MD simulations, in a fast and reagent-free way developed by Li et al. [99], to select the most suitable monomers.
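The monomer-screening logic described above can be sketched as a simple ranking by binding energy, ΔE_bind = E(complex) − [E(template) + E(monomer)], where the most negative ΔE_bind identifies the most stable pair. All energies below are hypothetical placeholders in kJ/mol, not values from the cited studies; in practice they come from ab initio, semi-empirical or MD calculations.

```python
# Hypothetical total energy of the drug template (kJ/mol)
E_template = -1500.0

# Candidate monomers: (E_monomer, E_complex), all placeholder values
candidates = {
    "MAA":  (-800.0, -2345.0),
    "MAM":  (-780.0, -2320.0),
    "4-VP": (-900.0, -2425.0),
}

def binding_energy(e_monomer, e_complex, e_template=E_template):
    """ΔE_bind = E(complex) - [E(template) + E(monomer)]; more negative = more stable."""
    return e_complex - (e_template + e_monomer)

# Rank monomers from most to least stabilizing complex
ranked = sorted(candidates, key=lambda m: binding_energy(*candidates[m]))
for m in ranked:
    print(m, binding_energy(*candidates[m]))
```

Running the same ranking with energies recomputed in the presence of a solvent model would reproduce the solvent-dependence comparison made by the Hartree-Fock studies cited above.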
Employing the Merck molecular force field (MMFF94X) [50,104], they determined the mode and energy of the interactions between DMSO, the monomers and the template molecule, showing the generation of hydrogen bonds and van der Waals interactions, and predicted the final conformations using molecular docking [50]. Piletska et al. also employed MD simulations as the basis for selecting the most energetically favorable structures in the design and development of MMIPs for the controlled delivery of curcumin [95], emphasizing the suitability of this modeling strategy for further studies on magnetic imprinted systems.

Imprinting Strategy, Advantages of MNP Surface Imprinting in Drug Delivery

Among the different imprinting approaches, such as bulk imprinting, emulsion polymerization, precipitation polymerization, iniferter polymerization and surface imprinting [62], the favorable grafting of a thin MIP film on the surface of MNPs is possible through surface imprinting (the 2D imprinting technique), an easy and straightforward method for fabricating core-shell MIP NPs [49]. This process results in MIPs with imprinted binding sites near or at their surfaces, possessing the features of both components (the high selectivity of the MIP toward the template and its high loading capacity, combined with the electrochemical and magnetic properties of the core NPs) in a single functional hybrid structure (MMIP) [59,62,105], facilitating its distribution and preventing the nanomaterial from being cleared by metabolic burden before reaching the site of action [69]. This enables accumulation at the tumor location via an external magnetic field, vanquishing cancer cells more efficiently with a smaller drug dosage and, most probably, significantly fewer side effects on healthy tissues. Surface imprinting polymerization exhibits high binding capacities with exceptional selectivity, especially when compared to other imprinting strategies.
The main reason is the high surface-to-volume ratio of the MNPs. By comparison, bulk imprinting, which produces polydisperse, irregularly shaped particles whose potential imprinted sites are diffusion-limited, inaccessible or destroyed, generally yields particles with low binding capacities and selectivity. Furthermore, in precipitation polymerization, the highly diluted monomer solutions employed can negatively affect the template-monomer interaction and thus the sensitivity and selectivity. Even emulsion imprinting has limitations compared to MIPs grafted onto MNPs, owing to potential platform disruption by the added stabilizers/surfactants and to residues of these additives that remain even after extensive washing steps [62,106]. In these molecular imprinting methods, a high level of crosslinking is used to ensure template binding specificity, and the resulting rigid polymeric network thus hinders the penetration and accessibility of the solvents to the template embedded in the polymer matrix. Extraction does not totally remove the template, which may lead to a suboptimal release rate [104]. By contrast, MMIPs developed by surface imprinting take a step forward: the template positions are easily reachable thanks to the thin, accessible MIP layer, giving fast binding kinetics, rapid mass transfer with little or no diffusional limitation, reduced permanent entrapment of templates and strong anti-interference ability [62,107].

Optimizations in the Imprinting Process toward Enhancing the Physicochemical Features of MMIPs

With the aid of the aforementioned computational analyses, Dramou and coworkers selected methacrylic acid (MAA) and methacrylamide (MAM) as the functional monomers, along with EGDMA as the crosslinker and EPI as the anticancer template, in the presence of a dispersant (polyvinylpyrrolidone (PVP)) in mixed DMSO-H2O media, as illustrated in Figure 3 [50].
The surface of the obtained MMIPs was modified using a large amount of oleic acid as a top coat above the imprinted system, giving it an amphiphilic character that makes it compatible with water as well as with other solvents [50,108]. They reported good reproducibility and repeatability for their designed EPI-loaded MMIPs. We describe the loading capacity and release behavior in the related sections. The surface modification of MMIPs with oleic acid addresses the fact that most MMIPs are developed in organic solvents and therefore often fail to retain their selectivity in aqueous solvent systems and biologic fluids, because hydrogen bonding and electrostatic interactions are weaker in aqueous media than in organic solvents [50,108,109]. Thanks to the oleic acid on the surface of the MMIPs, the hydrogen bonding between the template and the polymeric matrix is protected from rapid destruction by water [108].

Figure 3 caption (excerpt): "… This pre-polymerization solution was then transferred into a three-necked flask, followed by 0.1 g of 2,2′-azobis(isobutyronitrile) (AIBN). Five hours later, 5 mL of oleic acid was added to the flask. The reaction was kept at 60 °C for 12 h" [reworked following [50]] (reprinted with permission of the Royal Society of Chemistry, 2020).

Furthermore, Parisi and coworkers reported a synthetic strategy based on photo-polymerization with 360-nm light at 4 °C that allows the preparation of magnetic imprinted nanospheres loaded with carbazole derivatives (CAB1, CAB2) at low temperatures, which is essential to avoid any possible drug degradation [3]. The MMIPs were obtained by precipitation polymerization as follows.
The pre-polymerization mixture was formed by dissolving 1 mmol of the template and 8 mmol of the functional monomer MAA in a mixed solvent of acetonitrile (20 mL) and toluene (20 mL), followed by the addition of 0.5 g of MNPs in the presence of EGDMA and AIBN, with photo-polymerization under 360-nm light at 4 °C for 24 h. Supporting data in the literature also show that photoinitiated polymerization at low temperatures decreases the kinetic energy of the pre-polymerization complex, which increases its stability and yields more binding capacity and specificity than polymerization with thermal initiation, which mostly requires temperatures above 40 °C [38]. Another group reported the preparation of a novel MMIP with dopamine (DA) as the monomer in two parallel studies for the controlled and sustained release of DOX and 5-FU at the tumor site in a breast tumor-induced mouse model [65,79]. This straightforward imprinting process may help future studies reduce the energy needed in the imprinting process. In this report, imprinting was achieved by dispersing 0.5 g of MNPs in Tris buffer (150 mL, 10 mM, pH 8.5) and adding the template, followed by 0.5 g of DA and 12 h of mechanical stirring, without any external energy source (UV light or heating), at room temperature. The concept is based on the facile self-polymerization of DA to form a polydopamine (PDA) coating. The low toxicity and biocompatibility of PDA make it a suitable candidate shell material for MNPs [110], alongside other advantages such as low mass-transfer resistance, its ability to host reactions of large active groups on the surface, and its formation on all kinds of material surfaces through covalent and non-covalent interactions [79].

Stimuli-Sensitive MMIPs Triggered Release

As mentioned before, with a proper design, anticancer drugs can be released from their carriers upon a particular stimulus [8,29,35].
The stimuli-sensitivity modulates the affinity of the polymeric network for the template molecule, providing a switching capability for the loading/release processes [37]. In this regard, to achieve controlled release of the drug from MMIPs, advantage is taken of the chemistry of the tumor environment by equipping the DDS with a pH/thermo-sensitive trigger [78]. Suitable hyperthermia will directly eradicate tumor cells without damaging nearby healthy cells, because cancer cells reach higher temperatures on heating and tolerate heat less well [71]. A proper magnetic field can cause hyperthermia, generated by the MNP core in the vicinity of the MMIPs without global heat dissipation, in so-called hotspots [68]. Therefore, through careful selection of thermo-responsive monomers, such as 2-(dimethylamino)ethyl methacrylate, MAA and N-isopropylacrylamide (NIPAM) [111][112][113], we obtain a multifunctional DDS that is guided and accumulated by the external magnetic field at the tumor location and destroys the cancer cells by magnetic hyperthermia as well as by the thermo-triggered release of the anticancer drug from the imprinted polymer [68]. Taking thermo-sensitivity into account, Li and coworkers reported the development of a thermo-sensitive MMIP, based on Fe3O4-carbon NPs and NIPAM as the thermo-responsive monomer, for the selective adsorption and controlled release of 5-FU from an aqueous solution [80]. Poly-NIPAM (PNIPAM) has a reversible solubility in aqueous solution at around 32 °C. Therefore, the ability of the resulting MMIP to capture and release template molecules can be adjusted by temperature [111]. Li et al. obtained the multi-core MMIPs in a two-step synthesis. They first formed Fe3O4-carbon NPs (C-MNPs) by the reaction of ferrocene (Fe(C5H5)2) in the presence of H2O2, followed by silanization of the obtained C-MNP surface with 3-(trimethoxysilyl)propyl methacrylate (MPS).
The second step was the 5-FU-NIPAM imprinting process at the surface of these functionalized MNPs [80]. The rationale for the carbon layer around the MNPs was not explained by the authors, but it seems the carbon film was chosen as the support material owing to its good acid-base, thermal and mechanical stability, as well as the rich bonding sites on its surface once modified with silane groups [114]. An interesting shift in the solubility transition of these MMIPs, to around 39.3 °C compared to around 32 °C for pure PNIPAM, shows the effect of grafting this polymer onto a rigid substrate, which restricts polymer chain movement, as well as of incorporating a hydrophobic crosslinker into the system [115]. The PNIPAM shell can swell below this temperature (around 39 °C), giving the template access to the imprinted cavities for drug loading; at higher temperatures it shrinks and becomes more hydrophobic in an aqueous environment, causing a deformation of the imprinted cavities and drug release. The hydrodynamic diameter of these MMIPs decreased from 282 to 214 nm as the temperature increased from 20 to 65 °C [80]. Besides conformational changes of the polymeric network, thermo-triggered drug release from MMIPs through destabilization and disruption of the hydrogen bonds between the drug and the polymer has also been studied [68,77], and will be discussed in the section on drug release. Furthermore, almost all tumor tissues have a lower pH (pH = 5.8) than healthy tissues. Hence, pH-responsive polymeric structures have been one of the most prevalent approaches for cancer treatment [116]. pH-responsive polymers are polyelectrolytes with weak acidic or basic groups that either protonate or deprotonate with a change in the pH of their environment [117]. Therefore, the anticancer drug can be selectively released from a pH-sensitive polymeric DDS only around the tumor lesion, where the environment is acidic [118,119].
The pH responsivity comes from polymers with ionizable moieties that undergo a non-covalent transition: the basic moieties include amines, pyridines, morpholines and piperazines, and the acidic groups include carboxylic, sulfonic, phosphoric and boronic acids, all of which can be protonated or deprotonated at different pH values [116,117]. The (meth)acrylate, (meth)acrylamide and vinylic polymers are frequently used owing to the presence of such groups in their structure [117]. A pH-responsive system based on thermodynamic computational calculations for the preparation of 5-FU-loaded MMIPs was reported by Talavat et al. [61]. Following the calculations, 4-vinylpyridine (4-VP) and AA were chosen as the optimal monomers to generate pH-sensitive polymers on the surface of vinyl-modified MNPs. Briefly, 0.5 mL of functional monomer and 0.5 mg of 5-FU were added to 100 mL of solvent mixture (acetonitrile/methanol, 80:20, v/v), 0.3 g of vinyl-modified MNPs, EGDMA and AIBN were dispersed in this mixture, and polymerization was performed at the reflux temperature of 75 °C for 8 h [61]. Similarly, Hassanpour and coworkers evaluated the pH-sensitivity of MAA and itaconic acid (ITA) as pH-responsive monomers in the preparation of pH-sensitive AZT-loaded MMIPs for use in breast cancer therapy [81]. For this purpose, 1 mmol of AZT as the template and 2 mmol of the functional monomer were dissolved in the smallest possible volume of acetonitrile and mixed with a dispersion of vinyl-modified MNPs in acetonitrile. The polymerization was then performed in the presence of EGDMA as the crosslinker and AIBN as the initiator at 60 °C for 24 h. Natural polymers, such as gelatin, chitosan, alginate, hyaluronic acid and dextran, can also display pH-responsive behavior. Natural polymers are appealing because of their desirable biocompatibility.
However, they may not provide sufficient mechanical strength and may contain pathogens or evoke immune/inflammatory responses [120]. For this reason, synthetic pH-responsive polymers have been produced from polypeptides such as poly(l-glutamic acid) (PLGA), poly(histidine) (PHIS) and poly(aspartic acid) (PASA). These polymers are biocompatible and degradable, like natural polymers [117]. Multiresponsive polymers, which respond to several stimuli such as temperature, pH, biomaterials, redox, light, electrical field and magnetic field, have also recently been widely utilized [121,122]. Among them, dual thermo- and pH-responsive polymers are the most studied. This class of polymers is prepared by combining a thermo-responsive block, such as PNIPAM, poly(N-vinylcaprolactam) or poly(N,N-dimethylacrylamide), with a pH-responsive block, such as poly-AA, poly-MAA or poly(N,N-dimethylaminoethyl methacrylate) (PDEEMA) [121]. In general, PNIPAM is chosen as the thermo-responsive block because its lower critical solution temperature in water (32 °C) is near body temperature [117,121]. In this manner, Kaamyabi and coworkers developed dual pH/thermo-sensitive MMIPs by polymerization of NIPAM on a functionalized Fe3O4 substrate, resulting in multi-core MMIPs for the controlled delivery of DOX to the tumor location [78]. The functionalized MNPs (100 mg) were suspended in a water-ethanol solution (1:5, v:v, 30 mL), followed by the dropwise addition of the NIPAM-DOX complex in water/ethanol (20 mL, 20/80) (prepared by stirring at room temperature for 12 h) and AIBN as the initiator. The polymerization was performed by stirring this mixture overnight at 70 °C. An EGDMA solution was then added, and the reaction mixture was allowed to mix for six more hours [78].
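The protonation/deprotonation switching that drives these pH-responsive blocks can be quantified with the Henderson-Hasselbalch relation: for a weak acid group, the fraction deprotonated is f = 1 / (1 + 10^(pKa − pH)). The sketch below contrasts an acidic tumor interstitium with normal tissue pH; the pKa used is an assumed round value for a poly(carboxylic acid) block such as poly-MAA, not one reported by the cited studies.

```python
def fraction_deprotonated(pH, pKa):
    """Degree of ionization of a weak acid group (Henderson-Hasselbalch):
    f = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Assumed effective pKa for a poly(carboxylic acid) unit (illustrative only)
pKa = 5.5

# Tumor interstitium (pH ~5.8) vs. normal tissue (pH ~7.4)
for pH in (5.8, 7.4):
    print(pH, round(fraction_deprotonated(pH, pKa), 2))
```

The lower ionization at tumor pH changes the charge state of the polymer network, which is the physical basis for the selective release behavior described above.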
Cyclodextrins as Comonomers

Novel advances in MIT have resulted in the appearance of synthetically engineered MIPs incorporating cyclodextrins (CDs) in the imprinted polymeric framework, with improved performance [123][124][125]. CDs are cyclic oligosaccharides built from d-glucopyranose units, consisting of 6 (α-CD), 7 (β-CD) or 8 (γ-CD) d-glucose monomers linked by glycosidic bonds [126]. The glucose units of a CD, in a non-twisted chair arrangement, form its narrow, half-tapered cavity structure [127]. These highly versatile oligosaccharides have multifunctional properties that are mostly exploited to improve drugs' solubility, stability, dissolution rate and bioavailability [128]. CDs possess mostly hydrophilic groups on the outer surface and hydrophobic ones on the inner surface of their cavities. Therefore, hydrophobic drugs can enter these lipophilic cavities entirely or partially, and a CD-drug complex is formed by host-guest non-covalent binding. Hence, CDs can potentially overcome the solubility problem of hydrophobic, poorly soluble drugs in aqueous media by encapsulating such guest compounds in their cavities, when the guests match the cavities in terms of polarity, size, shape and properties [126,129]. The formation of a stable supramolecular complex between CDs and larger guest structures is also possible through the guests' hydrophobic groups, which can bind into the CD cavities [126,129]. Taking advantage of this feature, CD derivatives, especially β-CD, have recently gained interest as functional monomers in MIT. CD-MIPs are generally composed of CDs/derivatives and other functional monomers as binary functional monomers [130]. Most of the studies incorporating CDs into MIP formulations are aimed at in vitro compound recognition, absorption and separation [131,132].
β-CD-MIPs have been successfully employed to recognize, isolate and absorb several biologic compounds, such as peptides, steroids, cholesterols and antibiotics, and chemical compounds such as pesticides and phthalates [130,133]. CDs have also been broadly and successfully studied for drug delivery. Nevertheless, their combination with MIT to generate a DDS is relatively new, and only a few studies have investigated the insertion of CDs into MIP formulations for the purpose of drug delivery and release control [48,134,135]. Herein, Sedghi and coworkers took advantage of a thermo-sensitive MMIP system in combination with acryl-functionalized β-CD and curcumin (CUR), a potential herbal chemotherapy agent, to enhance the solubility, stability and bioactivity of CUR, as well as its drug loading and sustained release, via the β-CD cavities [135]. Silica protection of the MNPs, based on the modified Stöber hydrolysis reaction [136], followed by surface silanization with MPS, provided a suitable base for the imprinting of the CUR/vinyl-modified β-CD complex with NIPAM as the monomer. The most significant point of their report is the enhanced adsorption of CUR into this MMIP system due to the presence of β-CD and its host-guest interactions [135].

Particle Size and Loading Capacity

At the nanoscale, NPs are defined as particles in the size range from 1 to 100 nm [40,49]. However, in the field of molecular imprinting, nanoMIPs typically refer to MIPs with diameters of up to several hundred nanometers [1,137,138]. One of the most important barriers preventing designed MIPs from showing their favorable efficacy is the fact that only molecules with a diameter of ≤100 nm can leak from the tumor blood vessels and accumulate within tumor tissues. Larger NPs show restricted diffusion into the extracellular space, which limits their efficacy by preventing them from quickly reaching cancer cells [2].
The recently developed MMIP DDSs are spherical nanomaterials whose average diameters can be classified into three general groups: (1) sub-100 nm [65,68,[77][78][79], (2) 100-500 nm [61,80,135] and (3) over 500 nm [50,81] (see Table 1). Comparatively large NPs (>100 nm) tend to accumulate strongly in the liver and spleen, resulting in nonspecific clearance by the reticuloendothelial system (RES) and preventing the EPR effect from driving tumor accumulation, even when an external magnetic field is applied to concentrate the particles at the tumor site. Smaller, sub-100-nm NPs, such as 30-nm micelles, have been reported to penetrate poorly permeable tumors, resulting in a higher antitumor efficiency in animal models [139]. Although research on non-imprinted polymeric DDSs has shown improved drug delivery, with increased efficacy and decreased side effects, they mostly suffer from a low drug-loading capacity and an initial fast premature release of the encapsulated drug [140]. This leads to suboptimal activity at the targeted site and elevated side effects [4], potential toxicity from dose dumping, and inconsistent release [2]. The tailor-made affinity between the template and the polymer functional groups introduced by MIT leads to a higher loading capacity than that of non-imprinted systems. The published studies reviewed by Bodoki et al. showed the sustained release from MIPs and the potential zero-order drug release that can be achieved over long periods. These findings represent a clear advantage over non-imprinted polymeric drug delivery, as MIPs are capable of providing a higher loading capacity, more control over drug release behavior and protection of the active ingredient from enzymatic degradation during its transit through the body [4]. Therefore, less dose-dumping toxicity, fewer adverse effects on healthy tissues and more efficacy due to the prolonged circulation time are expected [49].
Previously, the drug-loading (adsorption) kinetics and isotherms of the obtained products have mostly been investigated by incubating the washed MMIP/MNIP with aqueous template solutions (template rebinding) at different time intervals until equilibrium (the soaking procedure); applying different conditions during this procedure can reveal the dose dependency (amount of template), pH dependency and thermo-sensitivity of the adsorption pattern [3,50,77,80,135]. The soaking method seems favorable for drug-delivery purposes, as it determines the maximum and optimum loading capacity, considering drug potency and release rate for further in vivo applications. Regarding sub-100-nm MMIPs, the experimental data of Li et al. [80] revealed an equilibrium adsorption capacity (Q) of MMIPs about 1.5 times higher than that of MNIPs at a steady temperature of 25 °C, suggesting the favorable binding ability of the imprinted system over the non-imprinted polymer matrix. The maximum adsorption capacities of MMIPs and MNIPs at 25 °C were reported as 96.53 and 59.5 mg/g, respectively. As the temperature was elevated, the Q of MMIPs became lower due to the shrinking of the polymer, which becomes more hydrophobic. Interestingly, because of the non-specific binding sites in MNIPs, a smaller change in Q was observed for MNIPs when the temperature was raised [80]. Supporting data were reported by other studies, indicating the higher binding capacity of MMIPs compared to MNIPs due to the presence of selective imprinted cavities with a high affinity toward the template molecule [50,77,81]. Parisi et al. performed binding experiments by incubating washed MMIPs and MNIPs with a CAB1 standard solution (0.1 mM) for 24 h and reported loading capacities, as percentages of bound CAB1, of 52% for the imprinted and 38% for the non-imprinted nanospheres [3]. 
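The equilibrium adsorption capacity Q quoted above is conventionally obtained from a simple mass balance on the soaking experiment, Q = (C0 − Ce)·V/m. A minimal sketch follows; the concentrations, volume and mass are hypothetical values chosen only to yield Q of the same order as the reported figures, not data from the cited study:

```python
def adsorption_capacity(c0_mg_per_l, ce_mg_per_l, volume_l, mass_g):
    """Equilibrium adsorption capacity Q (mg of drug per g of polymer)
    from the initial (C0) and equilibrium (Ce) solution concentrations,
    solution volume V and polymer mass m: Q = (C0 - Ce) * V / m."""
    return (c0_mg_per_l - ce_mg_per_l) * volume_l / mass_g

# Hypothetical soaking data for an MMIP vs. MNIP comparison:
q_mmip = adsorption_capacity(100.0, 3.5, 0.01, 0.01)   # -> 96.5 mg/g
q_mnip = adsorption_capacity(100.0, 40.5, 0.01, 0.01)  # -> 59.5 mg/g
print(q_mmip, q_mnip, q_mmip / q_mnip)
```

The ratio Q(MMIP)/Q(MNIP) is the quantity used above to argue for the favorable binding of the imprinted system.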
To study the effect of pH on the drug-loading capacity of MMIPs, Hassanpour et al. incubated the particles with AZT solutions (25 ppm) over a pH range of 3 to 11 for 4 h. Their results indicated that the adsorption percentage of AZT on the MMIP reached its maximum value (60%) at pH 5 (related to the protonation of AZT functional groups and the deprotonation of the carboxyl groups of the imprinted cavities, respectively) and then slightly decreased with a further increase of pH. In contrast, MNIPs showed a much lower loading capacity of around 19% at pH 5 and an even smaller capacity at other pH values [81]. Dramou et al. also reported the same pH-dependent adsorption behavior for EPI in MMIPs, with a maximum at pH 5.8 [50], which may arise from the counteraction between the decrease and the increase of the hydrogen-bonding interaction between the template and the polymeric matrix [50,81]. Toxicity and Degradability Studies Although the development of nanocarriers intended for drug delivery has become more robust, there are still some critical knowledge gaps in terms of their safety profile. The safety of a complex depends on the safety of each material involved and on the unpredictable effects when they act as one unit [4]. NPs can cause toxicity and immunogenicity depending on their size, shape, chemical composition and surface charge. Vital organs like the spleen accumulate particles larger than 100 nm, while the pores in the liver are about 100 nm and can cause the aggregation of smaller materials [7]. As a result, the number of nano-DDSs approved for chemotherapy is scant, and they are mainly based on liposomes, which are biocompatible and biodegradable, with a bilayer structure analogous to that of the cell membrane [4]. In recent years, MIPs have increasingly been reported for the delivery and controlled release of anticancer agents [25,35,54,61,65,66,96,[141][142][143]. 
However, most of the MIPs studied for demonstrating their relevance as DDSs use formulations (type and molar ratios of functional monomers, crosslinkers and solvents) originally tested for analytical applications [144], and only a few in vivo animal-model assessments of MIP DDSs have been performed [4]. Most of the presently developed MIPs are non-biodegradable, which may be dangerous due to their bioaccumulation in blood vessels, cells, tissues and organs after administration [49]. Biocompatible materials are compatible with a living system or tissue without eliciting local or systemic reactions [145]. The materials of MMIPs intended for drug delivery need to be selected very carefully, with nontoxicity, biocompatibility and biodegradability as the most critical requirements. The biocompatibility of NPs is a vital aspect of their medical applications. Thus, NPs should integrate with biological systems without immune-response stimulation or toxic accumulation [146]. As an example, Asadi and coworkers introduced a multi core-shell MMIP loaded with 5-FU, based on biodegradable materials, for targeted, sustained and controlled release of the drug in in vitro analyses on Michigan Cancer Foundation-7 (MCF7) cells. Here, tannic acid, a biodegradable polyphenol, was used to fabricate the crosslinker [54]. Tannic acid is a natural crosslinker owing to the presence of hydroxyl and carboxyl groups that can interact with biopolymers. The biodegradability of the obtained imprinted system was demonstrated under various body-like conditions. The data revealed that, due to the acidic nature of this crosslinker, the degradation of the particles is faster at higher pH, similar to the kidney and intestine environment. The liver and the spleen, as organs of the mononuclear phagocytic system (MPS), catch most NPs in the bloodstream. 
It is noted that iron oxide NPs are captured within the MPS via endocytosis into Kupffer cells of the liver sinusoid and macrophages of the splenic red pulp, where they undergo degradation within the lysosomes of these cells. The degraded iron is finally eliminated from the body or recycled via normal iron metabolic pathways, with no or low signs of in vivo hematotoxicity and blood chemistry effects [147,148]. Indeed, the biocompatibility and degradation of iron oxide NPs have been well monitored in the past few years [144,[147][148][149], but only a small number of papers deal with the toxicity of imprinted polymers [64,144,[150][151][152]. Recently, biodegradable MIPs have been developed by using biodegradable crosslinkers or monomers in the imprinting systems [52,53,64,137]. Both (semi)synthetic polymers and biopolymers are available for use in NP DDSs. Typical functional monomers used in the imprinting process are carboxylic acids (AA, MAA, vinylbenzoic acid), sulfonic acids (2-acrylamido-2-methylpropane sulfonic acid) and heteroaromatic bases (vinylpyridine, vinylimidazole) [57]. However, biopolymers such as chitosan, albumin, alginate, dextran or collagen have attracted much attention for pharmaceutical applications, as they are inexpensive, biocompatible, chemically modifiable and biodegradable, and allow simple control of the size and surface properties of the resulting system [153]. Chitosan is an excellent candidate among biopolymers, thanks to its remarkable properties and functional groups. It is widely used in the encapsulation or coating of various types of NPs [154] and as a functional [155] and supporting matrix [156] in the imprinting of MIPs and MMIPs, showing strong potential in many fields, such as curbing environmental pollution, protein separation and identification, chiral-compound separation and medicine [157][158][159]. 
However, concerns associated with their relatively fast release profile, their purity and homogeneity and, more importantly, the toxicity of natural compounds pose a greater challenge to their use as medicines [4,32,154]. Consequently, many natural NPs do not clear the clinical-trial phases [154]. The monomers implemented in the preparation of MMIPs for drug delivery can be subdivided into two general categories: natural elements like DA and oleic acid, and (semi)synthetic monomers, whether vinyl monomers, including AA, MAA, MPS, PVP and 4-VP, or acrylamide (AAM) derivatives like NIPAM and methacrylamide (MAM). Generally, polymers with hydrolyzable backbones, like vinyl polymers containing easily oxidizable functional groups, are susceptible to hydrolysis and enzymatic biodegradation [160]. The ability of artificial or natural biodegradable polymers to be cleaved into biocompatible byproducts through chemical or enzyme-catalyzed hydrolysis makes it feasible to optimize the safe removal of these structures from the body [54]. Polyacrylamide (PAAM) is widely used in biomedical applications and is not itself significantly toxic [161]. However, the neurotoxicity, reproductive toxicity and carcinogenicity of its monomer, AAM, in animal species have been documented [162,163]. AAM monomer residues are probably an impurity in most PAAM preparations, ranging from <1 ppm to 600 ppm, with higher levels of AAM monomer present in the solid form of the polymer [161]. Darnell and coworkers conducted a study to assess the biocompatibility of extremely tough alginate/PAAM hydrogels. They reported a statistically significant reduction in cell metabolic activity with PAAM gels, suggesting latent AAM monomer as a potential source of such reductions. However, subsequent histology of the tissue surrounding the gels showed an absence of immune cells, suggesting that in vivo exposure to latent AAM monomer is minimal. 
They therefore concluded that this compound has minimal effects on cells in vitro and in vivo [163]. The use of this polymer and its derivatives in foods, drugs and devices is regulated by the FDA, with restrictions on the amount of PAAM that can be used; the AAM residue in either the polymer or the final product is restricted and should be monitored closely [161]. The most studied thermo-responsive polymer used in the preparation of thermo-sensitive MIPs and MMIPs is PNIPAM, but its potential toxicity calls for close monitoring and the use of alternatives when possible. Studies have demonstrated the excellent thermo-sensitivity of polymers based on oligo(ethylene glycol) methacrylates (OEGMAs), with in vitro or in vivo biocompatibility already validated by the FDA [164,165]. Sousa-Herves et al. developed multi-responsive DOX-loaded nanogels focusing on pH and redox sensitivity by using monomethyl oligo(ethylene glycol) acrylate (OEGA) and pH-responsive 2-(5,5-dimethyl-1,3-dioxan-2-yloxy)ethyl acrylate (DMDEA) as monomers and redox-responsive bis(2-methacryloyl)oxyethyl disulfide (BMADS) as the crosslinker. They reported that approximately 95% of DOX was released after 8 h at pH 5 in the presence of a reducing agent, while only 20% of the free drug was released over the same incubation time at pH 8 [166]. In addition, Cazares-Cortes et al. prepared dual pH-thermo-responsive magnetic nanogels based on OEGMAs and MAA comonomer and demonstrated the desirable thermo-sensitivity of the obtained nanogels under magnetically induced hyperthermia [68,167]. Under AMF, the MNPs inside the nanogels act as nanoscale hot spots. This heat was sufficient to cause the shrinkage of the nanogels and subsequent drug release (32% DOX release after four hours at 50 °C), while negligible release was observed when these nanogel solutions were stored at 4 °C (less than 20% after one month) [167]. 
Moreover, photopolymers are known to potentially release residual monomers, photoinitiators and similar decomposition products into the environment [168]: when water permeates into the matrix, the leachable unreacted monomers diffuse out [145]. Although hydrophilic monomers were identified in higher proportions in aqueous extraction media than hydrophobic ones [169], it is accepted that some unreacted methacrylate groups cannot be leached into aqueous media because they are covalently bound to one end of the polymer chain [145]. The imprinted polymer layer grafted onto the magnetic core is relatively thin, and the chance of unreacted monomers remaining in the polymer matrix is expected to be small. Furthermore, most of the imprinted systems are developed and evaluated in non-polar organic solvents such as toluene, chloroform, dichloromethane or acetonitrile, as they depend on potent hydrogen-bonding and electrostatic interactions formed in these organic solvents [38,50]. The solvent brings all the interaction partners into one phase during polymerization and creates pores in the polymer matrix for further access to the embedded templates in the imprinted system. However, the presence of these organic solvents may cause cellular damage. Therefore, in drug-delivery processes, it is vital to prepare MIPs in such a way that they are compatible with biological systems [38]. One of the issues facing MMIPs is their lower selectivity when tested in aqueous solvent systems. MIPs usually perform optimal recognition of the template in the same solvent as the one used in their preparation [50]. This can be due to the considerable weakness of hydrogen bonding and electrostatic interactions in water. However, strategies are being developed to make the imprinting process efficient enough in polar media. 
There are reports on the preparation of efficient MIPs in rather polar solvents (e.g., acetonitrile/water, ethanol or methanol/water), since strong template-monomer interactions have been observed [78,107,170]. Metal coordination and hydrophobic interactions have also been suggested to enhance template-functional monomer interactions [171,172]. In addition, one of the advantages of MMIPs is the thin imprinted polymeric matrix around the MNPs, which places the imprinted cavities at or near the surface. Therefore, because less penetration is needed for the extraction of the template, less porogenic solvents can be used in the imprinting process. In-Vitro Drug Release Behavior and Cytotoxicity, In-Vivo Experimental Studies From a DDS perspective, the first step of almost every in vitro NP evaluation is to determine the release behavior, which is mostly carried out using dissolution methods with sampling at specific intervals [3,50,95]. Although studies have analyzed the effect of different temperatures and pH on drug release behavior, several further parameters affect the release rate and behavior of a drug from its carrier in vivo: tissue homeostasis, the protein corona around the particle (the unspecific adsorption of proteins on the MMIP surface, which must be avoided because it could interfere with the interactions with the template [3]) and the immune response (which can be prevented by using biocompatible materials), to name a few. Therefore, if the dissolution platform resembles the biological environment as closely as possible, the in vitro release behavior is expected to be close to the biological condition; this suggests using phosphate-buffered saline or simulated body fluid (SBF), which is similar to human blood plasma in ionic composition, at pH 7.4 and 37 ± 0.5 °C [61,65,78]. Due to the acidic environment of cancer cells (pH 5.8), it is also worth investigating the release behavior under acidic conditions [78]. 
In general, however, high physical adsorption on the surface and non-specific bonds cause a rapid release in the initial phase, observed in most release curves [3,65,79,135], followed by a sustained release process that continues for a prolonged period and corresponds to drug release from the imprinted binding sites. To avoid complications in other healthy organs, the MMIP can be directed by an externally placed magnet to the target tumor site in the body, thanks to the presence of the magnetic core and the compatibility with an aqueous environment [50]. The in vitro release profiles of CAB1 from MMIPs and MNIPs in SBF at 37 °C were reported as follows [3]. About 19% of the total loaded CAB1 was released from the imprinted matrix during the first hour, while the MNIPs released about 49% within this period. The MNIPs released the drug completely within 6 h, while for the MMIPs the release was not complete even after 48 h. This shows a sustained release from the imprinted polymers, which can reduce the massive adverse effects of off-target release toxicities. To demonstrate the effect of the alternating magnetic field (AMF) on the drug release profile, the Griffete group compared the release rate from thermo-sensitive DOX MMIPs placed under AMF with that of the same particles left at 37 °C for the same period. The release rates were reported as 60% and 10-15%, respectively. The authors suggested that the thermo-triggered drug release from MMIPs proceeds by destabilization and disruption of the hydrogen bonds between the drug and the polymer. They also reported a high drug release from thermo-sensitive DOX-loaded MNIPs subjected to the same conditions as the imprinted particles: the release rates from MNIPs placed under AMF compared to those left at 37 °C were described as 73% and 98%, respectively, showing an AMF-unspecific, non-sustained drug release pattern compared to MMIPs. 
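The biphasic profiles described above (an initial burst from surface-adsorbed drug followed by sustained release from the imprinted sites) are often summarized by a simple burst-plus-first-order model. The sketch below is purely illustrative; the parameters are hypothetical and not fitted to any of the cited data sets:

```python
import math

def released_fraction(t_h, burst=0.19, k_per_h=0.05):
    """Cumulative fraction of drug released after t_h hours:
    an instantaneous burst plus first-order release of the remainder,
    f(t) = burst + (1 - burst) * (1 - exp(-k * t))."""
    return burst + (1.0 - burst) * (1.0 - math.exp(-k_per_h * t_h))

print(released_fraction(1))   # burst-dominated early release
print(released_fraction(48))  # still below 1.0: release not yet complete
```

Fitting such a model to the cumulative release curve separates the burst amplitude (non-specific surface adsorption) from the slow rate constant of release from the imprinted cavities.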
Taken together, these results highlight the significance of the magnetic core as nano hot spots that trigger the release of the drug out of the cavities only upon a suitable AMF [77]. Furthermore, Cazares-Cortes et al. compared the release behavior of DOX out of MMIPs with that of non-imprinted magnetic nanogels (MagNanoGels) that physically encapsulated the drug. According to their results, the amount of DOX released from the nanogels was two-fold higher than from the MMIPs under AMF (16.7 µM compared to 7 µM), but in terms of percentage the MMIPs released more than half of the DOX trapped in their matrix. Additionally, MMIPs demonstrated a low passive release (10%) without AMF at 37 °C and pH 7.5, compared to 24% for MagNanoGels under similar conditions, emphasizing the effect of imprinting on improved on-demand control over drug release [68]. Following the design of the dual pH-thermo-responsive DOX-loaded MMIP described before, the release profile was investigated in SBF at different pH values. The drug release rate increased significantly upon raising the temperature from 37 to 40 °C. The authors reported a large difference between the release rates at pH 7.4 and pH 5.8 after 144 h (12% compared to 70%, respectively). DOX was almost completely released at the acidic pH of cancer cells, while only 12% was released at the pH of healthy tissues. Therefore, this imprinted system is promising for delivering more DOX to carcinogenic cells while reducing damage to healthy cells. This release pattern could be attributed to the fact that the hydrogen bonds responsible for DOX loading become weak at acidic pH, leading to a faster release [78]. To illustrate the thermo-responsivity of the developed MMIPs, the in vitro drug release experiments of Li et al. [80] on their 5-FU-loaded MMIPs are summarized here. The release amount and release rate for MNIPs were much higher than those of MMIPs at 25 °C within 100 min. 
Nearly 70% of the 5-FU loaded by MMIPs was released, whereas 84% of the 5-FU loaded by MNIPs was released at 25 °C, which is attributed to the more specific adsorption of the drug in the imprinted cavities. The authors reported an elevated drug release (90.75%) from MMIPs when the temperature rises to 45 °C, due to the shrinkage of the polymer matrix as it becomes hydrophobic. The release amount from MNIPs at this temperature was not reported by the study [80]. Cytotoxicity studies against cancer cells give more specific biological perspectives on the aim of a drug-loaded MMIP intended for use as a cytotoxic agent. Parisi and coworkers tested their developed MMIPs on HeLa and MCF-7 cancer cells to investigate the potential application of the magnetic imprinted nanospheres as a drug carrier in targeted cancer therapy. They reported sharp retardation of HeLa cell growth after 2 and 3 days of treatment with MMIP-CAB1, compared to the relative control (DMSO). The effect was even more evident in MCF-7 cells, which, compared to DMSO-treated cells, showed dramatic growth retardation already after one day of MMIP-CAB1 treatment, reaching almost complete growth inhibition on day 3 [3]. Drug release was also investigated in the PC-3 cancer cell line, where MMIP NPs were efficiently captured by the cancer cells [77]. The results demonstrated explicit intracellular drug localization, captured inside intracellular endosome-like compartments. Surprisingly, this MMIP internalization did not induce cancer cell death, explained by the fact that, when bound to the MIP (and thus the NPs), the drug is inactive. By contrast, after athermal AMF application (steady cell-environment temperature of 37 °C), cancer cell viability was affected, being reduced to 60% after a 1.30-h treatment, corresponding to the cytotoxicity rate of the free drug after 2 h of incubation. 
These cellular experiments support the AMF-induced drug release and demonstrate the possibility of initiating chemotherapy via an athermal magnetic hyperthermia strategy through the nano hot spots provided by the magnetic cores. This remote magnetic activation is particularly promising for limiting the adverse effects of chemotherapy on bystander tissues [77]. Similar results were reported for DOX MMIPs prepared with the same study design [68]. Hassanpour et al. performed pH-related cell-based studies with three readouts — cell viability, cell toxicity and caspase-3 assay percentage — of free AZT, AZT-loaded MIPs and MMIPs on MCF-7 and MCF-10 cell lines. The results showed that, due to the acidic intracellular medium of cancer cells (pH 5), the drug was released from its imprinted carrier only inside these cells. In normal healthy cells or the blood circulatory system (pH 7.4), the carriers released only about 14% of their loaded drug. This ability could notably elevate the drug concentration in cancer cells and subsequently decrease the dose-dependent side effects. The induced cell cytotoxicity in MCF-7 cells for MMIP, MIP and free AZT was reported as 91%, 71% and 11%, respectively; the cytotoxicity induced by the MMIP was reported to be about 49 times that of free AZT [81]. These findings clearly show the potentially controllable and effective role of MMIPs as smart carriers in chemotherapy. The development of MMIP NPs as a DDS for cancer treatment is at an early stage. Therefore, studies performing in vivo experiments with MMIPs for chemotherapy are still scarce. Hashemi-Moghaddam et al. studied their 5-FU-loaded MMIP NPs in female tumor-bearing BALB/c mice with tumor diameters of 80 mm to 100 mm generated from murine mammary adenocarcinoma cells (MMAC; derived from the M05 cell line) and compared treatment with free 5-FU, 5-FU-imprinted polymer (IP) and 5-FU-IP in the presence of a magnetic field. 
They also evaluated drug distribution by analyzing the concentrations of free 5-FU or 5-FU-IP in the tumor, liver and kidney tissues through high-performance liquid chromatography (HPLC) to address potential side effects related to the systemic distribution of each treatment. The HPLC results in these tissues show a high 5-FU-IP accumulation in the kidney, due to its proximity to the magnetic source, but still considerable accumulation of the DDS inside the tumor compared to the liver, whereas there was no significant difference in free-drug accumulation between the different tissues [65]. A critical physical parameter indicating treatment effectiveness was the relative tumor volume percentage (Figure 4), which was lower in the group treated with 5-FU-IP in the presence of the magnetic field, resulting in better control of tumor growth than treatment with free 5-FU and an increase in animal life span. Indeed, by targeting 5-FU to the tumor site and enhancing its local uptake via a magnetic field, unwanted side effects are reduced and the therapeutic efficiency is enhanced. The tumor growth inhibition ratio (IR%) in the 5-FU-IP-treated group with the magnetic field increased from 47% to 71% between the 6th and 30th day after treatment. In the free 5-FU-treated group, these values went from 15% to 48% between the third and twelfth day and thereafter began to decrease, indicating only transient control of tumor growth in this group [65]. Very similar outcomes were reported by the same group for DOX-loaded MMIP NPs with the same experimental design [79]. Prospects and Conclusions As reviewed above, MMIPs are gaining more interest for the preparation of efficient targeted DDSs, especially for cancer treatment, and it can be predicted that more imprinted magnetically assisted DDSs will emerge. 
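The tumor growth inhibition ratio (IR%) quoted above is commonly computed from the mean tumor volumes of the treated and control groups. A minimal sketch follows, assuming the standard definition IR% = (1 − V_treated/V_control) × 100; the volumes used are hypothetical, not the values measured in the cited study:

```python
def inhibition_ratio(v_treated, v_control):
    """Tumor growth inhibition ratio in percent:
    IR% = (1 - V_treated / V_control) * 100."""
    return (1.0 - v_treated / v_control) * 100.0

# Hypothetical mean tumor volumes (arbitrary units) at one time point:
ir = inhibition_ratio(29.0, 100.0)
print(ir)  # -> 71.0, i.e. 71% growth inhibition versus control
```

Tracking IR% over time, as the authors did between day 6 and day 30, distinguishes sustained tumor control (IR% rising) from transient control (IR% decreasing again).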
MMIPs are suitable for in vivo applications due to their superparamagnetic properties, which means they show no magnetization after removal of the magnetic field [59,173]. Their capability to be guided and to induce hyperthermia under an appropriate external magnetic field introduces promising avenues for cancer treatment, using MMIPs as smart drug-delivery robots, a potential alternative to conventional, systemically administered chemotherapy [59]. To date, these systems have generally been investigated in terms of their preparation method, stabilization, physicochemical properties, selectivity toward templates, loading capacity, in vitro cytotoxicity and comparatively simple in vivo tests. More important issues regarding their safety and side effects, such as the specific interactions of these systems with human organs, tissues, cells or biomolecules, the effect of MMIPs on human metabolism and the wider application of these systems for drug delivery, await further in-depth study, which should be pursued soon. The basis for creating strong, selective molecular imprinting in MIPs lies in the formation of stable template-functional monomer adducts in the pre-polymerization reaction mixture. Hence, the choice of functional monomers that form such stable complexes with the template is vital [98]. Computational modeling has proved itself a powerful tool for the rational selection of functional monomers and the design of MMIPs prior to the experiment, preventing waste of time and resources and increasing the imprinting efficiency. MD simulations have been suggested as a fast method to search for optimal imprinting conditions, especially for the screening of functional monomers. Nowadays, the demand for safe, body-compatible and degradable DDSs motivates the use of such materials in the preparation of delivery systems. 
As a result, the use of biopolymers like chitosan in the preparation of imprinted systems is gaining attention, and further studies are suggested to solve the related issues and achieve a controllable, efficient system for drug delivery in cancer therapy. Most published studies have implemented thermally initiated free-radical polymerization to synthesize MIPs and MMIPs [61,107,174]. However, photoinitiated free-radical polymerization offers several advantages over the thermally initiated variant [175]. Photoinitiated polymerization is a green method that allows the polymerization to be carried out under milder conditions: under air, upon blue-light exposure, at low light intensity, with no need to heat the system and at low pressure. In addition, because the light can easily be turned on or off, spatial and temporal control of the initiation step can be achieved; such control is not possible by heating. Thus, the development of new initiating systems able to initiate polymerization under such conditions is at the center of numerous research efforts [176], due to the low-temperature conditions, the facilitation of temporal and spatial control and, especially for practical applications, solvent-free formulation, wavelength flexibility and high curing speed [175]. Nevertheless, as mentioned before, photopolymers may release residual monomers, photoinitiators and similar decomposition products into the environment [168]. One of the most studied toxic products is tetramethylsuccinonitrile (TMSN), which is released from AIBN as the main product of its decomposition and is built into the polymerized plastic product. Animal experiments in rodents have revealed that TMSN can act as a potent convulsant, leading to the death of animals by asphyxia [177]. 
Therefore, the choice of a suitable initiator and close monitoring of its residues in the final product, as given in the Code of Federal Regulations (CFR), FDA or other relevant regulations, are crucial. To date, advances in MIT have led to the development of novel, synthetically engineered MIP materials incorporating α/β/γ-CDs within an imprinted polymeric framework, which has improved the performance of MIPs [123][124][125] thanks to their hydrophilic exterior, hydrophobic inner space and formation of non-covalent complexes with guest molecules. Hydrophobic drugs can enter the lipophilic CD cavity with their whole structure or partially. CDs have been widely studied for drug delivery as functional monomers combined with other functional monomers in binary functional-monomer systems [72]. Although the combination of CDs with MIT to generate a DDS is relatively new [49,134,135], it is a highly promising step forward. In one of the reviewed papers, the prepared MMIP containing β-CD as a monomer showed imprinting ability on the surface of MNPs, high affinity and adsorption capacity of the imprinted film toward the template and adsorption equilibrium in a short time. These findings show the opportunity for further studies combining CDs and MNPs for DDSs [135]. Furthermore, most NPs in the systemic circulation are recognized by the RES and accumulate in the liver and spleen, leading to toxicity in other organs [28], suggesting the need for a stealth approach to overcome this biological barrier. Poly(ethylene glycol) (PEG) is an FDA-approved polymer that has become the most widely used "stealth" polymer in drug delivery. 
Due to its flexible and hydrophilic nature, PEG on NP surfaces can form a dynamic hydration barrier, which prevents plasma protein binding (also termed opsonization) on the surface of the particles and the consequent clearance by the MPS [178], suggesting the use of such an approach on the surface of future MMIPs intended for drug delivery. Although there is still a great need for further studies on the application of anticancer-agent-loaded MMIPs for cancer treatment, the reports presented here have shown promising potential for developing an effective system with high drug loading and controllable guidance with AMF to the local tumor site. A desirable controlled drug release with pH-thermo-sensitive triggers, as well as considerable cytotoxicity toward cancer cells compared to the free drugs and a good ability to retard tumor growth, was reported by these studies. Altogether, this highlights the need for further detailed studies to develop biocompatible and biodegradable imprinted systems as DDSs for in vivo investigation of the specific interactions of these systems with animal and human organs, tissues, cells or biomolecules and their possible effects on human metabolism. First, however, important existing limitations, such as their behavior in aqueous media, binding kinetics and slow leaching of the template from the polymeric matrix, need to be addressed for the application of MMIPs.
Exciton interference in hexagonal boron nitride

In this letter we report a thorough analysis of the exciton dispersion in bulk hexagonal boron nitride. We solve the ab initio GW Bethe-Salpeter equation at finite $\mathbf{q}\parallel \Gamma K$, and we compare our results with recent high-accuracy electron energy loss data. Simulations reproduce the measured dispersion and the variation of the peak intensity. We focus on the evolution of the intensity, and we demonstrate that the excitonic peak is formed by the superposition of two groups of transitions that we call $KM$ and $MK'$ from the k-points involved in the transitions. These two groups contribute to the peak intensity with opposite signs, each damping the contributions of the other. The variations in number and amplitude of these transitions determine the changes in intensity of the peak. Our results contribute to the understanding of electronic excitations in this system along the $\Gamma K$ direction, which is the relevant direction for spectroscopic measurements. They also unveil the non-trivial relation between valley physics and excitonic dispersion in h-BN, opening the possibility to tune excitonic effects by playing with the interference between transitions. Furthermore, this study introduces analysis tools and a methodology that are completely general. They suggest a way to regroup independent-particle transitions which could permit a deeper understanding of excitonic properties in any system. In this letter we report a thorough analysis of the exciton dispersion in bulk hexagonal boron nitride. We solve the ab initio GW Bethe-Salpeter equation at finite q ∥ ΓK, which is relevant for spectroscopic measurements. Simulations reproduce the dispersion and the intensity of recent high-accuracy electron energy loss data. We demonstrate that the excitonic peak comes from the interference of two groups of transitions involving the points K and K′ of the Brillouin zone.
The number and the amplitude of these transitions determine variations in the peak intensity. Our results contribute to the understanding of electronic excitations in this system, unveiling a nontrivial relation between valley physics and excitonic properties. Furthermore, the methodology introduced in this study to regroup independent-particle transitions is completely general and can be applied successfully to the investigation of excitonic properties in any system.

Hexagonal boron nitride (h-BN) is a layered crystal homostructural to graphite. It displays peculiar optoelectronic properties, measured notably with luminescence [1][2][3][4], X-rays [5,6] or angle-resolved electron energy loss spectroscopy (EELS) [7,8]. Several studies have been carried out on its excitonic properties; however, some fundamental aspects are still controversial. For instance, established theoretical calculations predict h-BN to be an indirect-gap insulator [6,9], and this seems to be confirmed by recent photoluminescence data [3], but this conclusion contrasts with the experimental finding of strong luminescence in h-BN crystals [1], which is not compatible with a phonon-assisted excitation picture. In this context, high-accuracy EELS measurements have been performed very recently [7] at momenta 0.1 Å⁻¹ ≤ q ≤ 1.1 Å⁻¹ parallel to the ΓK direction of the first Brillouin zone. The authors report an excitonic peak dispersing by approximately 0.2 eV, reaching its highest intensity and minimum excitation energy at about 0.7 Å⁻¹ and almost disappearing at 1.1 Å⁻¹. Finite-momentum EELS gives access to the energy- and momentum-dependent loss function L(q, ω) = −Im[1/ε(q, ω)] (1), which carries information about the dielectric function ε(q, ω) of the probed material. Peaks of L(q, ω) can be put in relation to inter-band excitations (∝ Im[ε(q, ω)]) and plasmon resonances (|ε| ≈ 0).
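The distinction just drawn — loss-function peaks tracking Im[ε] for interband excitations versus |ε| ≈ 0 for plasmon resonances — can be illustrated with a toy model. The following is a minimal sketch, not part of the paper's calculations: a single Lorentz oscillator with hypothetical parameters stands in for ε(q, ω), and the loss function is taken as −Im[1/ε].

```python
import numpy as np

# Toy dielectric function: one Lorentz oscillator. All parameter values
# (w0, wp, gamma, in eV) are hypothetical and chosen only for illustration.
def lorentz_eps(w, w0=5.5, wp=2.0, gamma=0.2):
    return 1.0 + wp**2 / (w0**2 - w**2 - 1j * gamma * w)

w = np.linspace(3.0, 9.0, 4000)        # energy grid (eV)
eps = lorentz_eps(w)
loss = -np.imag(1.0 / eps)             # L = -Im[1/eps] = Im[eps] / |eps|^2

w_loss = w[np.argmax(loss)]            # position of the loss-function peak
w_abs = w[np.argmax(eps.imag)]         # position of the Im[eps] (absorption) peak
```

In this toy model the loss peak sits slightly above the absorption peak, shifted toward the zero of Re[ε]; when |ε| stays well away from zero, the two spectra share the same features, which is the interband regime invoked in the analysis of h-BN.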
So far, measurements have been reproduced, interpreted and even anticipated by ab initio simulations based on the Bethe-Salpeter equation (BSE) formalism [12], which includes explicitly the electron-hole interaction (the exciton). A very general behaviour, observed in the recent EELS measurements [7] as well, is a sizeable variation of the intensity of L(q, ω) as a function of the exchanged momentum q, notably the enhancement or the attenuation of excitonic peaks along their dispersion [6,13,14]. In this letter we devise accurate numerical methods based on the BSE for the analysis of excitonic features. We apply them to the investigation of the loss function of bulk h-BN in the same energy and momentum conditions as in [7], confirming the excitonic nature of the peak and clarifying the origin of its enhancement at 0.7 Å⁻¹ and its dramatic attenuation at higher momentum. Our analysis provides a deeper insight into the electronic excitations of h-BN and unveils non-trivial valley physics, indicating possible ways to tune the exciton intensity. More importantly, the approach introduced here is of general applicability. We believe that this approach constitutes a helpful way to understand and control excitonic properties in any system. We are convinced that the outcome of our analysis provides the key ingredients to explain similar effects observed in other materials [6,13,14]. Furthermore, it provides a general methodology to identify how and where the electronic structure has to be modified to achieve the desired exciton intensity.

Numerical analysis methods

EELS and non-resonant inelastic X-ray scattering give access to the loss function L(q, ω) with complementary degrees of accuracy in the q range [8]. Theory-wise, L(q, ω) can be calculated accurately from the dielectric function ε(q, ω), obtained as a solution of the BSE.
This can be cast in the form of an eigenvalue problem whose Hamiltonian is most often written in a basis of independent-particle (IP) transitions of index t = (v, k) → (c, k + q) between occupied and empty states of an underlying IP model, e.g. the Kohn-Sham system. Here (v, k) indicates the initial state and (c, k + q) the final state, where q is the exchanged momentum lying inside the first Brillouin zone. Within this framework, and including only resonant transitions, the imaginary part of the dielectric function takes the form

Im[ε(q, ω)] ∝ Σ_λ I_λ(q) η / [(ω − E_λ(q))² + η²],   (2)

where E_λ(q) is the energy of the λth exciton [15] and η a positive infinitesimal quantity. The spectral intensity

I_λ(q) = |Σ_t A_t^λ(q) ρ̃_t(q)|² = |Σ_t M_t^λ(q)|²   (3)

is the modulus squared of a linear combination of IP-transition matrix elements ρ̃_t(q) = ⟨vk|e^(−iq·r)|c k+q⟩ weighted by the exciton wave-function components A_t^λ(q); here M_t^λ(q) = A_t^λ(q) ρ̃_t(q). The exciton λ is called "bright" when I_λ(q) is sizeably high, and conversely it is called "dark" when I_λ(q) ≈ 0. This can happen if either ρ̃_t(q) or A_t^λ(q) or both are negligible for all t, or when IP-transitions interfere destructively, leading to a vanishing sum in expression (3). Thus it is sensible to introduce the normalised cumulant weight [13,16]

J_λ(q, E) = |Σ_{t : E_t ≤ E} M_t^λ(q)|² / I_λ(q),   (4)

which allows for a visualization of the building-up of the exciton spectral weight as a function of the IP-transition energy E. This function is non-negative, tends asymptotically to 1 and in general is not monotonic. The normalized cumulant weight (4) gives a piece of information relying on the energy of the IP-transitions, though a more detailed analysis can be achieved by a careful study of the single M_t^λ(q) amplitudes themselves. In particular, one can use the phase of M_t^λ(q) to split IP-transitions into groups depending on their sign in the sum (3). This allows for a deconvolution of the exciton (which includes all IP-transitions) into competing groups of IP-transitions, the intensity of the total peak resulting from the interplay of these contributions.
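The intensity of expression (3) and the cumulant weight (4) can be sketched numerically. The following is a minimal NumPy illustration, not the paper's actual BSE data: synthetic transition energies and real amplitudes M_t (phases collapsed to ±1, an assumption) stand in for the BSE output, with a low-energy positive-sign group and a higher-energy negative-sign group mimicking the two interfering families of transitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic IP transitions: energies E_t (eV) and amplitudes M_t = A_t * rho_t.
# Two opposite-sign groups, hypothetical stand-ins for the KM / MK' families:
# transitions below 6.8 eV enter with positive sign, the rest with negative sign.
E_t = np.sort(rng.uniform(5.5, 12.0, 500))
M_t = np.where(E_t < 6.8, 1.0, -0.35) * rng.uniform(0.2, 1.0, 500)

I = np.abs(M_t.sum()) ** 2                 # exciton intensity, cf. expression (3)

def cumulant_weight(energy):
    """Normalized cumulant weight, cf. (4): coherent partial sum over
    transitions with E_t <= energy, normalized by the full intensity."""
    return np.abs(M_t[E_t <= energy].sum()) ** 2 / I

grid = np.linspace(5.5, 12.0, 100)
J = np.array([cumulant_weight(e) for e in grid])
```

With opposite-sign groups, J can strongly overshoot 1 before the canceling group has been summed over and only then relax to its asymptotic value of 1, which is precisely the kind of non-monotonic behaviour discussed for the high-momentum excitons.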
Results

In Figure 1 we report the simulated loss function for exchanged momenta q ∥ ΓK at intervals of q0 = K/12 ≈ 0.14 Å⁻¹ (see Appendices). In the inset, circles depict the calculated dispersion of the peak compared to the experimental data (red bullets) extracted from [7]. Aside from a blue-shift of about 0.47 eV, which comes from a well-known underestimation of the gap within the G0W0 approximation in this material [17], the calculated spectra and their dispersion are in very good agreement with the measurements. In particular, our simulations reproduce the fact that the lowest-energy excitation is at q = 5q0 ≈ 0.7 Å⁻¹, where the peak attains its highest intensity, and that at approximately q = 8q0 ≈ 1.1 Å⁻¹ the peak is strongly suppressed (cf. Figure 1 in [7]). At higher q, the intensity of the loss function abruptly increases again, reproducing the strong exciton expected at q = K and already analysed elsewhere in the literature [5,6,8]. In this energy range, it turns out that |ε(q, ω)| does not vanish; consequently, equation (1) allows us to attribute an interband character to the excitation and to put features of the loss function in direct relation to peaks of Im[ε]. This appears clearly from Figure 2, where we show that Im[ε] (dashed curve) and L(q, ω) (solid curve) at q = 5q0 and q = 8q0 present the same spectral features. We also mark the energies of the first six excitons, that is E_λ(q) for λ ≤ 6, for both momenta with coloured circles whose size is proportional to log[I_λ(q)]. The scale of the loss function in the two panels is the same, and similarly for the scale of Im[ε]. Additional information about the dispersion of the first six excitons can be found in Appendix B. Furthermore, for q = 5q0 we also report the corresponding spectrum of Im[ε] without electron-hole interaction, i.e. taking into account only independent-particle (IP) transitions between GW levels (dot-dashed curve).
This spectrum appears flat at 5.5 eV, where the BSE calculation predicts a relatively sharp peak. This comparison confirms the hypothesis, already advanced in [7], that the peak has an excitonic nature. In the following we will focus on the reasons for the variations of intensity of the first peak, in particular at momenta q = 5q0 and q = 8q0, where the intensity is at its highest and its lowest. Based on the observations made above, we will perform our analysis on Im[ε] instead of working with the more cumbersome loss function.

Exciton analysis at q = 5q0 ≈ 0.7 Å⁻¹

The excitonic peak at q = 5q0 has a binding energy of 0.33 eV, that is, the energy difference with respect to the lowest IP-transition at the same q (including G0W0 corrections). In Figure 3 the normalised cumulant weight defined in (4) is reported versus the energy E of the IP-transitions. We observe that J_{λ=1}(q, E) is a monotonic function of E; it rises steeply up to E ≈ 6.8 eV, from where its derivative decreases mildly. Finally, it reaches its asymptotic value of 1 at about E ≈ 12 eV (not shown). What this tells us is that IP-transitions sum up constructively at all energies, with the most important contributions coming from transitions of energy E < 6.8 eV. Indeed, these few transitions (0.4% of the total) account for almost 42% of the spectral weight, as J_1(5q0, 6.8) = 0.42 attests. Still, to get closer to the full spectral weight, one has to include higher-energy transitions. At E = 9.5 eV, 85% of the spectral weight is accounted for by a still relatively small number of transitions (less than 10% of the total). We can now gain a deeper insight into the way IP-transitions combine in forming the exciton by looking at the terms of the sum (3).
Let us divide the latter group of transitions (E ≤ 9.5 eV) into three categories: those transitions t for which both the real and imaginary parts of the amplitude M_t^λ(q) are positive, those for which both are negative, and transitions where they have opposite signs. The latter category turns out to be composed of transitions with amplitude M_t^λ(q) ≈ 0, so they do not contribute significantly to the exciton intensity and we can safely neglect them in the analysis. The other two groups enter the sum of Eq. (3) with different signs. Almost two thirds of the transitions belong to the positive-amplitude group; their maps (the amplitude of each transition (v, k) → (c, k + q), reported as a function of the valence state k or conduction state k + q for k-points in the ΓKM plane) are shown in panels (a) and (b) of Figure 4. On the other hand, little more than one third of the transitions belong to the negative-amplitude group, and they have higher energy but in general lower intensity. Their maps are reported in panels (c) and (d) of Figure 4. The analysis suggests the following interpretation. Two groups of transitions participate in the formation of the bright exciton (λ = 1) observed in Ref. [7]. One group (let us call it the KM group) is composed mostly of low-energy transitions going from points close to K to points close to M (and similarly H → L in the AHL plane, not shown). The lowest-energy transitions of this group also have the largest amplitudes M_t^λ(q), and they sum constructively in the steep part of the cumulant (E < 6.8 eV). At higher energy, a second group of IP-transitions (call it the MK′ group), from points in the vicinity of M to points in the vicinity of K′ (and L → H′), enters the sum with a negative amplitude, hence partially canceling the contribution of the KM group. This explains why the derivative of the cumulant decreases from 6.8 eV on, but remains positive because of the larger number and higher amplitude of the dominating KM group. The origin of the peak being established, we can now draw the connection with the single-particle band structure.
In the inset of Figure 3 we report the GW band structure along the relevant path K–M–K′, in good agreement with previous calculations [5,6,9,18] and experiments [20]. The KM and MK′ groups of transitions are sketched with coloured arrows, red and blue respectively. At this q, the KM transitions are basically the indirect transitions between the top of the valence band and the bottom of the conduction band. The fact that the valence-band top is close to, but does not coincide with, K is consistent with the fact that the lowest excitation is found at q < 6q0. The strength of the peak is explained by the fact that the KM transitions take place between regions of the band structure where the bands are particularly flat (van Hove singularities). Also, the convex curvature of the band structure explains why the MK′ transitions start contributing at higher energy and have lower amplitude.

Exciton analysis at q = 8q0 ≈ 1.1 Å⁻¹

Let us switch now to q = 8q0. At this momentum, the spectral weight is dramatically reduced and is moved from λ = 1 to a group of higher-energy excitons, among which λ = 2 and λ = 5 have the highest (although still very weak) intensity. In the λ = 2 case, the normalized cumulant weight reported in Figure 5 does not grow monotonically: it first explodes for E ≤ 6.8 eV, where it attains values of ∼50, then reaches its maximum between 7 and 7.8 eV, and finally decreases to reach the asymptotic limit at about 12 eV. We can perform the same analysis as before for IP-transitions in this energy range. These results can be rationalized as follows. Most of the IP-transitions entering I_{λ=2}(q) up to 6.8 eV are of the KM group, and they sum constructively. But at higher energy the MK′ transitions, which contribute with the opposite sign, start having comparable importance. This induces a halt in the increasing trend (6.8 < E < 7.8 eV), and eventually they dominate, bending the cumulant down towards its asymptotic limit.
The result is that the two groups of transitions almost cancel each other, leading to a very weak intensity. It is worth recalling here that the λ = 2 exciton is almost degenerate with another exciton of non-negligible intensity (λ = 5). Not surprisingly, carrying out a similar analysis on the latter leads to basically the same results (see Appendix C).

Conclusions

We computed the loss function of bulk h-BN by solving the ab initio GW Bethe-Salpeter equation at finite q along the ΓK direction, which is relevant for spectroscopic studies. We observe an excitonic peak dispersing by about 0.45 eV, displaying a strong intensity at q ≈ 0.7 Å⁻¹, where the excitation energy is lowest, and almost disappearing at q ≈ 1.1 Å⁻¹. These findings are in very good agreement with recent electron energy loss experiments [7]. The associated dielectric function displays similar characteristics. We show that the peak intensity is determined by the interference of two groups of transitions contributing to the peak formation with opposite signs. Our investigation allows us to unveil a non-trivial connection between the exciton dispersion, its intensity and the electronic structure in the vicinity of the K(H) and K′(H′) points in bulk h-BN, eventually suggesting ways to control excitonic properties by changing the electronic structure in the vicinity of the K and K′ valleys. It is worth stressing that, with the help of the methodology we devised, it is possible to use spectroscopic methods to probe electronic excitations at the two valleys at the same time. This is of paramount importance, for instance, in the valleytronics of layered systems [21]. Furthermore, the methodology presented in this work is of general applicability and could be extended to studies of excitonic properties in any system.
The splitting of relevant IP-transitions into appropriately defined groups can simplify the interpretation of excitonic properties, help the analysis and possibly disclose some non-trivial mechanism. We believe that the strategy adopted here can also be employed successfully in other cases, in bulk as well as in 2D materials. This helps the interpretation of measured data (as in our application to h-BN) but, most importantly, it can suggest where and how to change the electronic structure whenever control of the excitonic intensity is required. The authors thank Dr. R. Schuster for the clarifications regarding the experimental data [7]. The research leading to these results has received funding from the European Union H2020 Programme under Grant Agreement No. 696656 GrapheneCore1. We acknowledge funding from the French National Research Agency through Project No. ANR-14-CE08-0018.

APPENDICES

Appendix A: Computational details

The simulated h-BN has lattice parameters a = 2.5 Å and c/a = 2.6 [22]. The Kohn-Sham system and the GW corrections have been computed with the ABINIT simulation package (a plane-wave code [23]). Norm-conserving Troullier-Martins pseudopotentials have been used for both atomic species. DFT energies and wave functions have been obtained within the local density approximation (LDA) to the exchange-correlation potential, using a plane-wave cutoff energy of 30 Ha and sampling the Brillouin zone with an 8 × 8 × 4 Γ-centred k-point grid. The GW quasiparticle corrections have been obtained within the perturbative GW approach. They have been computed on all points of a 6 × 6 × 4 Γ-centred grid; a cutoff energy of 30 Ha defines the matrix dimension and the wave-function basis for the exchange part of the self-energy. The correlation part has been computed including 600 bands and applying the same wave-function basis as before.
To model the dielectric function, the contour-deformation method has been used, computing the dielectric function up to 60 eV, summing over 600 bands and with a matrix dimension of 6.8 Ha. The quasiparticle corrections have been subsequently interpolated onto a denser 36 × 36 × 4 k-point grid, on which the BSE calculation has been carried out. The macroscopic dielectric function ε(q, ω) has been calculated at the GW-BSE level in the Tamm-Dancoff approximation using the code EXC [24]. We included six valence bands and three conduction bands; 360 eV is the cutoff energy for both the matrix dimension and the wave-function basis. The static dielectric matrix entering the BSE kernel has been computed within the random phase approximation with local fields, including 350 bands and with cutoff energies of 120 eV and 200 eV for the matrix dimension and the wave-function basis, respectively. With these parameters, the energies of the first peaks of ε(q, ω) are converged within 0.01 eV and their intensities are converged within 5%. All reported spectra have been convoluted with a Gaussian of σ = 0.05 eV in order to reduce the noise due to the discrete k-point sampling and to simulate the experimental broadening.

Appendix B: Dispersion of the first six excitons

In Figure 7 we report the dispersion of the first six excitons along the line ΓK with coloured circles whose size is proportional to log[I_λ(q)], so larger circles correspond to bright excitations. The points have been obtained within the GW-BSE framework and shifted by 0.47 eV to higher energies. As expected [9,18,19], at q = Γ the first two excitons are degenerate and basically dark, whereas all the peak intensity is concentrated on the degenerate excitons with λ = 3 and 4 (the two are superimposed in the plot, so that only λ = 3 is visible). As soon as one moves away from Γ, the degeneracy is lifted [19] and the first bright peak coincides with the lowest-energy exciton (λ = 1).
This is valid up to q ≈ 6q0 (halfway along the ΓK line), where the peak intensity moves to λ = 2 as a consequence of a band crossing. The intensity of the excitations is successively reduced at 8q0 and 9q0, where several excitons are concentrated in a narrow energy range. Finally, as q approaches K, the exciton λ = 2 steps up again, concentrating most of the intensity. We also report in the same figure the dispersion of the loss function as measured [7] (red crosses) and as computed in this work (purple squares). One can see that the position of the peak of L(q, ω) follows closely the dispersion of the first bright excitation of Im[ε] (lowest-energy larger circles).

Appendix C: Analysis of the λ = 5 exciton at q = 8q0

At q = 8q0, the intensity of the peak is very low. This low intensity is basically shared by two excitons, λ = 2 (analysed in the main text) and λ = 5, with an energy around 50 meV higher. The analysis with the cumulant weight, reported in Figure 8, has a surprising shape. After a first increase around 6 eV, the cumulant decreases abruptly and vanishes at 6.5 eV. Then it oscillates around the value 0 until one starts including IP-transitions of energy E ≥ 8.0 eV. Remembering that the cumulant weight is defined as a modulus squared, one realizes immediately that what is observed is again a phenomenon of interference (as in the λ = 2 case), but the dominating group changes during the analysis. At the very beginning (E ≤ 6.1 eV) the KM group constructs the peak, but immediately after, the MK′ transitions cancel this contribution and lead the cumulant weight back to 0. From this point, the two contributions mutually cancel, and it is only at E > 8 eV that the MK′ group prevails and the cumulant starts growing monotonically to its asymptotic limit.

FIG. 7: (Color online) Dispersion of the excitonic energies E_λ(q) for λ ≤ 6. The size of the circles is proportional to the logarithm of the peak intensity.
A purple line connects the lowest peaks of the loss function L(q, ω). Red crosses are the peaks of the experimental loss function extracted from [7]. All calculated points have been blue-shifted by 0.47 eV.
EVALUATION OF STATISTICAL ILLITERACY IN LATIN AMERICAN CLINICIANS AND OF THE EFFICACY OF A 10-HOUR COURSE

Conclusions

Statistical illiteracy is highly prevalent among Latin American clinicians. Short-term educational interventions are effective but their benefits quickly fade away if they are not periodically reinforced. Medical boards and medical schools need to periodically teach and evaluate statistical proficiency to ameliorate these issues.

Introduction

All medics require statistical interpretation skills to keep up to date with the scientific advances and evidence-based recommendations of their specific field. However, statistical illiteracy among clinicians is a highly prevalent problem with far-reaching consequences. The few available studies that report statistical literacy improvements after educational interventions do not report for how long these benefits last. We measured for the first time statistical proficiency among Latin American clinicians with different levels of training and evaluated the efficacy of a 10-hour course at multiple timepoints.

Methods

Using an online questionnaire, we evaluated self-perceived statistical proficiency, scientific literature reading habits and statistical literacy (using an adaptation of the Quick Risk Test) across multiple levels of medical training. Separately, we evaluated statistical proficiency among Internal Medicine residents at a tertiary centre in Mexico City immediately before, immediately after, and one and two months after a 10-hour statistics course, using the same adaptation (allowing for "I don't know" answers) of the Quick Risk Test. Scores across multiple time points were compared using Friedman's test.

Results

Data from 392 clinicians from 9 Latin American countries were analyzed. Most clinicians (85%) failed our adaptation of the Quick Risk Test (mean score = 2.6/10, IQR: 1.4).
The 10-hour course significantly improved the scores of the Internal Medicine residents (n=16) from 3.8/10, IQR: 1.8 to 8.3/10, IQR: 1.4 (p<0.01). However, scores dropped after one and two months to 7.7/10, IQR: 1.6 and 6.1/10, IQR: 2.2, respectively.

This preprint (which was not certified by peer review) is made available under a CC-BY 4.0 International license; the copyright holder is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.

Most medical schools and medical boards recognise the importance of statistical skills for practising clinicians [1]. However, evidence shows that even experienced clinicians struggle with assimilating the differences and implications of fundamental statistical concepts such as odds ratio versus absolute risk and sensitivity versus positive post-test probability [2]. Moreover, essential concepts such as absolute risk changes, number needed to treat/screen, intention-to-treat analysis and Bayesian probability are often overlooked when making clinical decisions and when explaining the implications of tests and treatments to patients [3,4]. The implications of statistical illiteracy among clinicians are frequent and range from generating individual ethical problems [5-7] to misinformed health-policy decisions [8]. Moreover, improving health statistics among medical doctors has been put forward as one of the seven goals for improving health during this century [9]. Importantly, evidence also suggests that cheap, easy-to-implement and short-term interventions can improve statistical skills among clinicians [10]. In their 2018 study, Jenny, Keller and Gigerenzer [11] demonstrated that a 90-minute training session in medical statistical literacy improved the performance (from 50% to 90%) of 82% of the participants on a multiple-choice statistics test. However, it was not evaluated how quickly these improvements fade away after the educational intervention.
In this study, we estimated statistical literacy among Latin American clinicians and evaluated the efficacy of a 10-hour statistics course across multiple timepoints.

Since scores were not normally distributed, we compared them using Friedman's test, using the R function "friedman.test" from the R package "stats" version 3.6.2. Normality was evaluated using Shapiro-Wilk tests, using the function "shapiro.test" in the same R package.

Survey responses and statistical literacy results among Latin American clinicians

A total of 403 responses were collected; however, 11 were discarded due to having incomplete data. In total, 392 respondents from 9 different countries and 53 different medical schools were included in the analysis. Most respondents (82%) were Mexican. Scores were not significantly different across different levels of medical training. Table 1 summarizes their answers in the survey and the overall performance in the test. Table 2

FUNDING, DATA SHARING AND CONFLICT OF INTEREST DISCLOSURES

This study did not receive funding. The authors declare they do not have conflicts of interest to disclose. Data for research purposes will be shared upon request to the corresponding author.
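The score comparison described in the Methods (Friedman's test after Shapiro-Wilk normality checks, done in R with "friedman.test" and "shapiro.test") can be mirrored in Python with SciPy. This is an illustrative sketch only: the scores below are synthetic stand-ins that merely mimic the reported means, since the real data are available from the authors on request.

```python
import numpy as np
from scipy.stats import friedmanchisquare, shapiro

# Synthetic stand-ins for the n=16 residents' scores at the four timepoints
# (values only mimic the reported means of 3.8, 8.3, 7.7 and 6.1 out of 10).
rng = np.random.default_rng(42)
pre    = np.clip(rng.normal(3.8, 0.9, 16), 0, 10)
post   = np.clip(rng.normal(8.3, 0.7, 16), 0, 10)
month1 = np.clip(rng.normal(7.7, 0.8, 16), 0, 10)
month2 = np.clip(rng.normal(6.1, 1.1, 16), 0, 10)

# Normality screen per timepoint (the R analysis used shapiro.test)
shapiro_p = [shapiro(x).pvalue for x in (pre, post, month1, month2)]

# Non-parametric repeated-measures comparison (the R analysis used friedman.test):
# each resident is a block, the four timepoints are the repeated treatments.
stat, p = friedmanchisquare(pre, post, month1, month2)
```

Friedman's test is the natural choice here because the same 16 residents are measured at every timepoint and the scores are not normally distributed, so a parametric repeated-measures ANOVA would not be appropriate.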
Dr Adrian Soto-Mota is the guarantor of the integrity of this work.

medRxiv preprint, this version posted May 14, 2021; doi: https://doi.org/10.1101/2021.05.08.21256882
Sustainable Materials Used as Lightweight Aggregate (An Overview)

Lightweight aggregates (LWA) are building materials with a lower bulk density than standard construction aggregates. In recent years, the contribution of industry to the circular economy has become a serious concern. Among industries, the mining sector is confronted with significant problems relating to the management of a huge quantity of generated waste. The major contemporary task is to address a number of interconnected challenges, including waste management and recycling, conservation of scarce natural resources, reduction of energy use, and reduction of greenhouse gas emissions. Natural aggregates are consumed by the construction materials industry in the range of 8 to 12 billion tons per year. According to reports, the construction materials sector consumes the most energy and scarce natural resources (rocks, aggregates, and water) while also emitting greenhouse gases. In general, using waste material as lightweight aggregate decreases the concrete's overall weight. The materials used as lightweight aggregate in concrete are discussed in this study. According to research, utilizing waste as a lightweight aggregate not only improves the characteristics of concrete but also provides a sustainable approach to minimizing global waste.

Introduction

Resources and energy have recently surfaced as two critical problems for the building industry's long-term viability (1,2). China is one of the largest waste-generating countries, having generated approximately 1.55 billion tons in 2018, according to the Chinese National Bureau of Statistics (3). Industrial solid waste has a significant environmental impact; reusing waste generated by demolition and other activities, with a focus on minimizing energy use, is the best way to develop green buildings.
Construction material products might be employed on a large scale in the future if a feasible production or reuse system could be developed (4). Novel approaches characterized in recent years include CO2 curing for recycled concrete aggregates (5), geopolymers from recovered waste glass (6)(7), treatments for regenerated cellulosic fibers using microwaves and enzymes (8), and pelletization of incineration bottom ash (9). Researchers all around the world are always looking for new and inventive methods to create the next generation of building materials that are both environmentally friendly and long-lasting (4). In today's construction sector, concrete is the most widely utilized building material. Concrete's strong mechanical and physical properties, when correctly planned and produced, are among its most notable advantages: it is a very cost-effective and easy-to-use material, and structural concrete components can be molded into a range of shapes and designs (10). The widespread use of standard weight aggregates such as granite and gravel in concrete construction has drastically reduced natural stone deposits, resulting in irreparable environmental damage (11). After water, aggregate is the second most consumed raw resource by humans. Its industry is the principal source of raw materials for infrastructure and building construction, as well as industry and environmental protection, and so plays a critical role. Although artificial lightweight aggregates consume more energy to produce than natural lightweight aggregates, their low density, high porosity, inert nature, and reasonable mechanical strength enable them to be used in a wide range of applications, including lightweight concrete, horticulture, geotechnical engineering, masonry, pavements, water treatment and green roofs (12).
Although the lightweight aggregate sector has been "stagnant" for several decades, rising environmental concerns and the significant benefits of lightweight aggregate should boost production in the future; as a result, the market for lightweight and thermally insulating concrete is projected to expand (13). LWAs (lightweight aggregates) are a type of porous low-density aggregate that has been frequently utilized in concrete to minimize building weight. Their use has increased in recent years, especially in China, Europe and America. Lightweight aggregate has also been used for internal curing of concrete as a result of its high effectiveness (14). These broad uses consume large amounts of nonrenewable natural resources all around the world. As a result, scholars have taken a keen interest in the use of various aggregates and the concrete that results. Although overuse has resulted in resource shortages in certain nations or areas, LWAs are currently primarily collected from natural sources (15). As a result, using artificial LWAs is an essential approach to address this problem (4). The microstructure of lightweight aggregate (LWA) influences properties such as density, water absorption, and aggregate strength, and their proportional effect on concrete. The microstructure of aggregate is influenced by the raw materials used and the hardening methods used (16). Sintering and cold bonding are the two most common methods for creating artificial LWAs. Sintering is a method of altering the structure and density of materials, without melting them to the point of liquefaction, using heat or pressure (17). Fly ash (FA) (18) (19), perlite (20), ceramist (21), and some waste materials (21) (22) are often used to sinter LWAs for the production of low-density concrete (4).
This study aims to examine the materials that can be used as lightweight coarse aggregate, the majority of which are natural materials that have no other use, wastes, or materials resulting from waste treatment, and which, if exploited and used in lightweight concrete on a large scale, could produce positive results in terms of reducing global pollution. Concrete Sustainability Areas with a lot of urban expansion tend to run out of aggregate sources after a lengthy period of concrete manufacturing. For the purpose of reducing damage to buildings and extending their productive lives, the engineering staff must examine the entire life cycle of the buildings. Even after the service life of the buildings, the possibility of recycling the largest part of the demolition waste should be taken into account so that it can be reused again (23) (24). In accordance with the requirements established by the United Nations Commission on Environment and Development, it is important to meet the needs of the present without compromising the ability of future generations to meet their own needs (25). In recent decades, the increased consumption of natural resources has been a major contributor to global warming and to raising sea levels above their previous natural levels, in addition to threatening many types of living organisms with mass extinction (26). Figure 1. How concrete can be made to be more long-lasting (27). Each year, 60 percent of the raw materials extracted are used for construction and infrastructure. Building construction accounts for about a quarter of all global extraction (28). The initial period of building construction is the most energy-consuming period, as approximately 67% of the total energy of the structure during its entire life is consumed during the construction period (29). Furthermore, most structures, with the exception of monuments, must be demolished at some time.
Concrete accounts for more than 75% of all building materials by weight, so it is no wonder that it is the most frequent demolition waste (30). Since one of the most prevalent types of waste is building demolition waste, reducing it and seeking to reuse it is an important matter that enhances the environment, preserves public health, and is a good starting point for following the requirements of sustainable buildings (29). Many solutions have been developed for using recyclable materials in concrete production. As for building materials in general, much development is aimed at reducing the energy used during the operating period (31). For the purpose of advancing sustainable development, it is necessary to move towards renewable resources and away from non-renewable resources in order to shift from generating waste to consuming all kinds of waste through recycling (32). Materials As Riley (33) stated more than sixty years ago, lightweight aggregate is created when two circumstances occur at the same time: i) The development of a viscous phase occurs at temperatures around the melting point (but never reaching it, since only some phases become fused). ii) The release of gas (usually CO2, CO, H2O, H2, O2, SO2 or Cl2) as a result of the breakdown of a variety of organic and mineral substances (such as calcite, dolomite, phyllosilicates, metallic sulfides, chlorides or ferrous minerals, among others). The gas is imprisoned inside the viscous matrix, and pore development is thus stimulated only if these two criteria are satisfied simultaneously. In contrast to the size of the initial pellet, this process is generally followed by a considerable volume increase (bloating or expansion). Natural and manufactured lightweight aggregates are the two forms of lightweight aggregate. Pumice The word pumice comes from a Latin word referring to highly porous stones.
The pumice stone is one of the natural stones produced by the rapid cooling of lava, which gives it a spongy internal structure with many pores. The small bubbles trapped during the lava flow are the main reason for the formation of this spongy structure. The pumice stone has a whitish-gray exterior, and its external color is greatly affected by its chemical composition. As a result of this high porosity, pumice acquires unique qualities such as good thermal insulation and acceptable strength despite its high porosity, which makes it a good material for use as a lightweight aggregate (34). Attapulgite Attapulgite clay (also known as attapulgite) is a crystalline hydrated magnesium aluminum silicate with a unique chain structure that gives it unique colloidal and dynamic properties. It is the most common type of fuller's earth, which is a collection of absorbent clays (35). In Iraq, research on attapulgite, a native clay mineral, began with the manufacture of mineral admixture from raw materials gathered in the Tar Al-Najaf region. Al-Aride investigated the feasibility of utilizing native clays from the southwest of Iraq as a coarse aggregate in 2014. The study was divided into two parts: part one was for making coarse aggregate (LWA) and determining the appropriate burning temperature, and part two was for producing lightweight aggregate concrete (LWAC) from the manufactured aggregate. At a treatment burning temperature of 800 °C, the bulk density of the attapulgite lightweight aggregate (ALWA) was 808 kg/m3, and the dry specific gravity was 1.45 at 1100 °C (35). Diatomite Diatoms are one of the most ancient types of algae, with about 25,000 different species. After their death and decomposition, they built up a thick layer over time; after being buried and compressed, chalky rocks are generated, which are distinguished by their cellular structure, suitable for use as an internal curing aggregate (37).
Volcanic slag Open-pit mining has transformed natural vesicular glassy lava rock into an industrial product known as volcanic slag. For over three centuries, volcanic slag has been successfully used in over 70 different applications all over the globe. Crushed and screened into the necessary sizes, this material's open structure and outstanding drainage characteristics make it a flexible product for landscaping and under-soil drainage applications (38). During volcanic eruptions, lava is ejected to great heights; this lava contains many air bubbles, and during its flight it cools and solidifies, resulting in black rocks with high porosity. This type of rock provides good properties such as good sound and thermal insulation with a low amount of shrinkage. Volcanic slags are lightweight accumulations of volcanic tailings. Volcanic slag contains many cavities in its structure, which makes it highly porous and light in weight (39). Artificial lightweight aggregate 3.2.1. Oil Palm Shells (OPS) Malaysia is the main palm oil-producing country, as more than half of the world's palm oil production comes from Malaysia. However, the industry is a major source of pollution, as 2.6 tons of this solid waste is produced annually, and this number is increasing with the growing global demand for palm oil (40). In 1985, Abdullah of Malaysia was the first to explore the use of OPS as a lightweight aggregate in the production of lightweight concrete (41). Several studies have shown that it is possible to use the leftover husks from the palm oil industry as a lightweight coarse aggregate (42)(43)(44)(45)(46)(47), as their cellular structure enables them to be used as a lightweight aggregate (47). Ceramic Waste Garbage and waste generated as a result of the vast increase in population are among the challenges facing the world. Many of the items that are discarded are recyclable and should be recycled.
There are few attempts to recycle waste and use it in new buildings; one of these materials is ceramic waste (48). Recently, there has been a lot of research trying to find acceptable ways to use ceramic as a lightweight aggregate in concrete in order to curb the increase in waste and reduce its impact on the environment. Campos and Paulon (48) claim that the use of ceramic as a lightweight aggregate in concrete is highly beneficial due to its porosity, in addition to its good thermal insulation and light weight. Given this background, it is important to investigate the material's potential reuses, one of which is the use of ceramic waste as coarse aggregate in the manufacture of structural concrete. In this scenario, structural elements made with these alternative concrete mixes must fulfill the required design requirements, focusing on building safety in both environmental and fire conditions. Studies have shown that the use of ceramic as a lightweight aggregate sometimes reduces the strength of concrete, but the reduction is acceptable for use (49). Sewage sludge The final by-product of the wastewater treatment industry is municipal sewage sludge (SS). With the fast growth of urbanization, the amount of SS produced worldwide is rapidly increasing. Due to the presence of organic and inorganic chemicals, as well as a variety of bacteria in the sludge, landfilling such a huge volume of SS would result in significant land use and pollution concerns (51). Much research has been done on the recycling of SS in the construction industry. The amount of SS in the above-mentioned construction materials, however, is limited to less than 30% (52). The manufacture of lightweight aggregates (LWAs) is a more appealing alternative method for repurposing SS (53). There has been a lot of research into the manufacturing of LWAs using SS.
According to earlier research, the quality of the LWAs created using SS had characteristics equivalent to those made with clay (51). Waste glass powder Waste glass (WG) is a type of solid waste, and its repurposing has sparked a lot of interest due to the ever-increasing landfill strain. Previous research has shown that adding glass lowers the sintering temperature, resulting in an improved sintering reaction during LWA manufacturing, since glass contains a certain quantity of Na2O, which leads to a lower melting point. Furthermore, at high temperatures, Na2O can lower the viscosity of the liquid phase, which is advantageous in the manufacturing of sintered goods (55). Drilling shale cuttings The treatment of drilling waste is one of the most difficult to perform due to the high cost and the need for high energy, in addition to the lack of the required technologies (56). Since this type of waste is one of the most deposited materials during drilling, efforts began to find appropriate ways to reuse it (57). In this instance, the cuttings were stabilized and consolidated to create a geotechnically sound filler (58). Studies have shown that it can be used as a lightweight aggregate in concrete mixes. Incineration fly ash and reaction ashes Large incinerators produce ash deposits in incinerator screens. These ashes are usually of three types: the first is light fly ash, the second is heavy ash deposited at the bottom, and the third is a mixture of fly ash and bottom ash (60). Among the incineration wastes there are highly toxic substances that exceed toxic leaching limits, such as fly ashes and reaction ashes. Industrial wastes that have been shown to be dangerous must be handled and controlled with great care.
Most hazardous chemicals have been demonstrated in the literature to be stabilized by the high-temperature solid solution technique, and the resulting consolidated matter may be safely handled or repurposed as building material (61). Heat treatment, according to Wunsch et al., renders heavy metals in incineration wastes inert against leaching (62). According to Sakai and Hiraoka, thermal treatments carried out on incineration waste reduce the effect of harmful substances and make the toxic substances stable to some extent (63). Mangialardi researched the possibility of using this waste as lightweight aggregate in concrete (64). He discovered that it can be used as aggregate in concrete, as it improves the mechanical properties of concrete (65). Figure 11. Sintered incineration fly ash and reaction ash aggregate appearance (65). Incineration of solid waste The waste hierarchy should be utilized to offer sustainable waste management and resource efficiency solutions in order to transition to a low-carbon economy, given the rising volume of residual waste. This entails creating long-term solutions for repurposing various types of waste by transforming them into secondary resources. Due to increased demand and limited disposal capacity, municipal solid waste (MSW) incineration in waste-to-energy facilities is predicted to expand across the world (66). Incineration decreases the weight and volume of MSW by around 75% and 90%, respectively, but it still creates substantial volumes of ash, notably incinerator bottom ash (IBA) and air pollution control (APC) fly ash. IBA is the most important MSWI waste, accounting for 85-95 percent of the solid residue left after combustion. It is a non-hazardous waste that may be used as a secondary raw material after being weathered for 2-3 months to immobilize heavy metals (67).
Heavy metals that are integrated or incorporated into the neo-formed crystalline or vitreous phases have been demonstrated to leach considerably less when sintered at high temperatures (68). Sintering IBA to generate lightweight aggregates (LWA) at temperatures around 1100 °C results in a significant reduction in the leaching potential, which further decreases as the processing temperature rises (69). Red clays During open-pit mining, enormous volumes of diverse interlayers made up of limestone, marls, clays, and flintstone are drilled and blasted in order to reach sedimentary phosphate ores. Because they are generally dumped in waste rock piles, they are also known as mine waste. The utilization of phosphate mining waste rocks (particularly red clays) to reduce mine footprints and environmental consequences has become a major concern. This is also true of the rising demand for construction materials, which has the unintended consequence of disrupting fragile ecosystems. Many potentially environmentally friendly solutions, particularly those linked to phosphate mining, have been developed through numerous studies. These various wastes can be utilized as raw materials in the construction sector to make bricks and membrane filters (70), geopolymers (71), lightweight aggregates (72), road construction materials (73) and natural stone products (74). Thermostone Thermostones are cellular concrete blocks (also known as autoclaved aerated concrete). In multistory structures, these blocks are increasingly being utilized as a brick alternative to decrease overall weight while maintaining thermal comfort. Thermostone is often made from lime, sand, and cement, along with an aluminum powder and water admixture. The key reason for adding aluminum powder is to create the porous bubble structure of the Thermostone, which is the consequence of the chemical interaction of silica, hydrated lime, and aluminum powder (75).
It is also feasible to recycle the waste generated by these concrete blocks to create a lightweight aggregate (76). Conclusions The use of lightweight concrete is of great benefit because it significantly reduces the weight of structural members. The use of lightweight aggregates derived from waste reduces global pollution problems and is considered one of the aspects of sustainability. Most of the materials discussed are either wastes, unused materials, or by-products of other industries, so they are widely available in most countries. Wide use of this technology in Iraq would greatly reduce pollution problems, since Iraq relies heavily on natural aggregate quarries in construction.
2022-01-11T20:05:50.041Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "e810f8efbee45ad4921581efeac5b3480507fd18", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1755-1315/961/1/012027", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e810f8efbee45ad4921581efeac5b3480507fd18", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
52290596
pes2o/s2orc
v3-fos-license
Compliant Manipulation of Free-Floating Objects Compliant motions allow alignment of workpieces using naturally occurring interaction forces. However, free-floating objects do not have a fixed base to absorb the reaction forces caused by the interactions. Consequently, if the interaction forces are too high, objects can gain momentum and move away after contact. This paper proposes an approach based on direct force control for compliant manipulation of free-floating objects. The objective of the controller is to minimize the interaction forces while maintaining the contact. The proposed approach achieves this by maintaining small constant force along the motion direction and an apparent reduction of manipulator inertia along remaining Degrees of Freedom (DOF). Simulation results emphasize the importance of relative inertia of the robotic manipulator with respect to the free-floating object. The experiments were performed with KUKA LWR4+ manipulator arm and a two-dimensional micro-gravity emulator (object floating on an air bed), which was developed in-house. It was verified that the proposed control law is capable of controlling the interaction forces and aligning the tools without pushing the object away. We conclude that direct force control works better with a free-floating object than implicit force control algorithms, such as impedance control. I. INTRODUCTION Manipulation has emerged as one of the major fields in robotics research in the past few decades [1]. Typical examples of manipulation tasks are assembly of objects and alignment of workpieces. These tasks involve interaction with the object, thus requiring compliance. Compliance can also mitigate position uncertainties of objects being aligned. In addition to compliance, control of interaction forces between objects is required for successful completion of these tasks. 
Therefore, the controller plays an important role in manipulation operations, and its goal is to successfully control the motion as well as the contact forces. In space, underwater, or aerial applications, the manipulator and the target are free-floating, such that a fixed base does not absorb reaction forces (Fig. 1). If the interaction forces are too high, the object can gain momentum and move away after contact. Additionally, the motion of the object after contact depends on the relative inertia of the end-effector with respect to the object. Hence the objective of a controller is to minimize the interaction forces along with minimizing relative inertia while maintaining the contact. Maintaining contact is important, as contact breaking results in an impact force every time the bodies come in contact during the manipulation task. Impedance control is one of the most popular methods for compliant manipulation, in which the controller imposes the mechanical impedance of an equivalent mass-spring-damper system with adjustable parameters on the end-effector. However, it is an implicit way of force control, i.e. it indirectly regulates the contact forces by generating an appropriate motion that ends up in a desired dynamic relationship between the robot and the environment [2]. Moreover, if the object is free-floating, there is no control over its position. Hence it is difficult to maintain a desired value of force with impedance control, especially in the context of a free-floating object. In this paper, we propose an approach which maintains a constant force along the motion direction and an apparent reduction of manipulator inertia, similar to a virtual tool, along the remaining DOF. The proposed approach is based on direct force control for compliant manipulation while minimizing the contact force. Simulation results are presented, emphasizing the importance of relative inertia between the manipulator and free-floating object when motion is constrained along one dimension.
The approach is studied experimentally with an inexpensive two-dimensional micro-gravity emulator setup developed in-house and a 7-DOF manipulator arm. The object floating on an air bed is used to verify the proposed control algorithm for maintaining minimum interaction force and to achieve alignment of tools. Experiments demonstrate that the direct force control works better with free-floating objects than indirect force control. Section II reviews related work for compliant manipulation of free-floating objects. Section III discusses simulation and corresponding results. In Section IV we explain the proposed method used for compliant manipulation of free-floating objects. Next, experiments with a KUKA LWR4+ robot arm and their results are presented in Section V. Finally, the results are discussed and future work is outlined in Section VI. II. RELATED WORK Compliant motions have many advantages in manipulation tasks, such as assembly or alignment of tools. Compliant manipulation can be achieved by controlling interaction forces passively or actively. In passive interaction control, the inherent compliance of the robot is exploited, e.g., structural compliance of links, joints, and end-effector. On the other hand, active interaction control ensures the compliance by a purposely designed control system. In practical robot systems some combination of active and passive interaction control is often employed [3], [4]. A popular approach for compliant manipulation is impedance control. In impedance control the end-effector deviation from prescribed trajectory due to environment gives rise to contact forces [2]. Variations of impedance control and other simplified control strategies have also been used, such as admittance control, stiffness control, damping control [5], and compliance control [6]. Additionally, direct force control approaches such as hybrid force-motion control have also been used for controlling end-effector [7]- [9]. 
These approaches regulate the contact force to a desired value with an explicit force feedback loop. Most of the work reported for compliant manipulation using impedance or force control of a manipulator arm is related to ground-based robots [10]. Little work has been done for compliant manipulation of free-floating objects. Some research is available for space manipulators, i.e. manipulators in a free-floating environment. Approaches based on compliance control of space manipulators for on-orbit interaction are, for instance, the impedance control of a free-flying space robot proposed by Yoshida, et al. [11]- [13], the joint compliance control by Nishida, et al. [14], and impedance control of dexterous space manipulators by Colbaugh, et al. [15]. As these space manipulators are free-floating, these control approaches are similar to our approach. However, none of the approaches use compliant motions for alignment of workpieces. III. SIMULATIONS Control of interaction forces is central for compliant manipulation. The goal of the simulations is to understand the influence of inertia, stiffness and damping on interaction forces. Fig. 2 shows the schematic model of a robotic manipulator arm and a free-floating object (on the air bed) with 3-DOF. However, the simulations were performed for 1-DOF in motion. The robotic arm is fixed on the table and the end-effector is controlled to have the desired compliance characteristics. The system of differential equations for the manipulator, the object, the modeled contact force, and the relative acceleration between manipulator and object in one dimension can be written as

m_ri ẍ_i + b_ri ẋ_i + k_ri x_i = F_c,
m_t ẍ_t + F_f = −F_c,
F_c = k_c ∆y + b_c ∆ẏ,
∆ÿ = ẍ_t − ẍ_i,   (1)

where m_ri is mass, b_ri is damping, and k_ri is stiffness. In the simulations, the stiffness and damping of the manipulator correspond to the passive compliance of the arm. x_i is the position of the manipulator end-effector and x_t the position of the free-floating object. The mass of the free-floating object is m_t, and F_f is the force of friction (taken as zero in the case of the air bed).
F c is contact force exerted on manipulator, which is modeled as a continuous function of penetration and rate of penetration of one rigid body into another. The penetration and rate of penetration are the relative position, ∆y = x t −x i , and relative velocity, ∆ẏ, respectively, defined between the manipulator hand and a contact point on the target surface. The parameters k c and b c are the stiffness and damping of actual contact surfaces, respectively. Fig. 3 shows the simulation results for contact force F c ; the three sub-figures are generated by solving (1) with varying mass ratio, stiffness and damping parameters. Parameters for simulation are listed in Table I. At the start of simulation, the target is at rest and the robot has an approach velocity. It is assumed that the simulation starts in contact, i.e. impact phase is not considered here. As the simulation proceeds, the robot and target achieve same velocity while maintaining contact. The objective of the simulation with varying mass ratio between manipulator and object was to study the effect of relative inertia on interaction force (Fig. 3a). The target is difficult to move if the inertia of the target with respect to the manipulator is high. Also in this case the final velocity of the target and manipulator is lower which reduces the chase length and duration. As seen from Fig. 3a, high relative inertia results in higher impact and hence contact break. Consequently, lower relative inertia results in damped oscillation in transient behavior, and manipulator continues to be in contact with target. The objective of simulation with varying stiffness and damping of the manipulator was to study the effect of compliance on interaction force ( Fig. 3b and 3c). The higher the stiffness, the higher the contact force between manipulator and object during transient phase ( Fig. 3b). High stiffness also results in breaking contact, as seen from negative value of contact force. 
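The transient behavior described above can be sketched numerically. The following is a minimal pure-Python sketch of the 1-DOF contact model, not the paper's simulator: the parameter values are illustrative (they are not Table I's), the end-effector is left unactuated, and the arm's passive stiffness/damping is omitted so that only the spring-damper contact dynamics appear.

```python
def simulate_contact(m_r=1.0, m_t=1.0, k_c=1000.0, b_c=50.0,
                     v0=0.1, dt=1e-4, steps=20000):
    """Explicit-Euler sketch of a manipulator (mass m_r, approach speed v0)
    contacting a free-floating target (mass m_t) through a spring-damper
    contact model; friction is zero, as on the air bed."""
    x_i, v_i = 0.0, v0      # manipulator end-effector
    x_t, v_t = 0.0, 0.0     # free-floating target, initially at rest
    for _ in range(steps):
        pen = x_i - x_t                            # penetration depth
        # Kelvin-Voigt contact force, clamped so the contact cannot pull
        f_c = max(0.0, k_c * pen + b_c * (v_i - v_t))
        v_i -= f_c / m_r * dt                      # reaction decelerates arm
        v_t += f_c / m_t * dt                      # same force pushes target
        x_i += v_i * dt
        x_t += v_t * dt
    return v_i, v_t
```

With equal masses and heavy contact damping the two bodies end up sharing the approach momentum; raising m_r relative to m_t qualitatively mirrors the harder impacts and contact breaking of Fig. 3a.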
Similarly, a higher value of damping leads to faster decay in the oscillations of the contact forces, as shown in Fig. 3c. It can be concluded from these observations that a low manipulator-to-object mass ratio, low stiffness, and high damping are desirable parameters for manipulation of a free-floating object. As described before, we have assumed that the stiffness and damping are structural properties of the manipulator and form its passive compliance. Stiffness and damping can, however, be actively controlled by an impedance controller. Nevertheless, we are not using an impedance controller due to the contact breaking problem discussed in the experiments section. In the following section, the proposed method focuses on reducing the apparent inertia of the manipulator with respect to the free-floating object. We focus on solving the most difficult case shown in Fig. 3a, where the mass ratio between manipulator and target (m_ri/m_t) is high and thus the target may float away if too much force is applied. IV. METHOD The proposed approach uses force control for maintaining a constant minimum force along the motion direction and apparent reduction of manipulator inertia along the remaining DOF. A number of DOFs are utilized for maintaining contact and the rest are compliant with the measurable contact force. A proportional-integral (PI) controller is used as the constant force controller. For apparent reduction in translational inertia along an axis, the measured contact force along this axis is scaled up and applied as additional actuator force. Similarly, for apparent reduction in rotational inertia around an axis, the measured contact torque around this axis is scaled up and applied as additional actuator torque. Using the scaled contact wrench as the actuator wrench emulates the behavior of the target being much heavier than the robotic manipulator. As force control is not feasible before the contact phase, impedance control is used for the motion in free space. A.
Control Law A manipulator with an open kinematic chain structure with n joints is considered. The dynamics of the end-effector in an operational or end-effector space (a set x of m independent configuration parameters) are given by

M_ro(x) ẍ + C_o(x, ẋ) + G_o(x) = F + F_c,   (2)

where C_o(x, ẋ) is the m×1 Coriolis/centrifugal term, and G_o(x) is the m×1 gravity term [8]. M_ro(x) becomes the true inertia matrix when m = n and the manipulator is at a non-singular configuration. F is the generalized wrench at the end-effector and F_c is the contact force. The control force F in (2) can be decomposed to provide a decoupled control structure

F = M̂_ro(x) M_d⁻¹ F* + Ĉ_o(x, ẋ) + Ĝ_o(x) − F_c,   (3)

where the circumflex (ˆ) denotes estimates of the quantities from (2); F* is the control or command vector for the decoupled control, and M_d is the desired inertia matrix. With perfect nonlinear compensation and dynamic decoupling (i.e., exact estimates), the closed-loop dynamics reduce to M_d ẍ = F*. The control vector F* in (3) can be decomposed along orthogonal DOFs for maintaining contact and apparent inertia reduction. The motion direction gives the direction of force control (chosen by the matrix W), and the remaining orthogonal DOF are the directions for reduced apparent inertia (chosen by the matrix I − W, I being the identity matrix). The control vector is given by

F* = W [K_pFB (F_ref − F_c) + K_iFB ∫ (F_ref − F_c) dt] + (I − W) K_RI F_c,   (4)

where K_pFB and K_iFB are the proportional and integral gain matrices for the PI negative feedback controller used for force control in the motion direction; F_c is the measured contact wrench, consisting of contact force and torque; F_ref is the desired interaction wrench and K_RI is the positive feedback gain matrix for apparent inertia reduction. This composition of F* results in decoupled second-order equations in both the force and reduced apparent inertia directions, where ẍ_ri is the acceleration in the reduced apparent inertia directions and ẍ_f is the acceleration in the force control direction. In this method a number of the robot's DOFs are regarded as force-controlled, whereas the rest are controlled for apparent inertia reduction.
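The inertia-reduction term can be read off directly: along a compliant axis the measured contact force F_c is scaled by a gain and fed back as additional actuator force, so the end-effector accelerates as if it had a smaller mass. A one-line illustration (the gain name and numbers are ours, not the paper's, and the derivation assumes ideal force feedback):

```python
def apparent_inertia(m, k_ri):
    """Along a compliant axis, m*a = F_c + k_ri*F_c, so the effective
    inertia seen by the contact force is m / (1 + k_ri)."""
    return m / (1.0 + k_ri)
```

For example, a gain of 4 makes a 10 kg end-effector respond like a 2 kg one, improving the manipulator-to-target mass ratio identified as critical in the simulations.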
This approach provides a dynamically decoupled control system with feedback linearization, and there is more flexibility to choose different control subsystems in the controller design. Instead of matching the inertias through M_d, we reduce the inertia of the manipulator to achieve the desired behaviour.

B. Reduced Degrees of Freedom Case

The proposed approach can be explained with respect to the experimental setup used in this paper for the reduced-DOF case (Fig. 5c). The target is assumed to be free-floating on an air bed, hence it has two degrees of translational freedom (Y and Z in the Tool Coordinate Frame, TCF) and one degree of rotational freedom (around X in the TCF). All axis descriptions in the following text refer to the TCF. The chosen motion direction, i.e. the force-control direction, is along the Z-axis; the apparent reduction of translational inertia is along the Y-axis; and the apparent reduction of rotational inertia is around the X-axis. The values of the matrices from (4) for the reduced-DOF case, with the DOFs ordered as (rotation about X, translation along Y, translation along Z), are

W = diag(0, 0, 1),  I − W = diag(1, 1, 0),  K_pFB = diag(0, 0, k_zp),  K_iFB = diag(0, 0, k_zi),  K_RI = diag(k_xp, k_yp, 0).   (7)

By substituting the matrix values from (7) in (4), a 2-DOF form of the proposed controller is deduced as

F* = k_xp τ_x x̄ + k_yp f_y ȳ + [ k_zp (f_ref_z − f_z) + k_zi ∫ (f_ref_z − f_z) dt ] z̄,   for F_c ≠ 0,   (8)

where F_c = 0 corresponds to the free-space motion before contact, which is handled by an impedance controller to reduce the impact. When contact occurs, F_c ≠ 0 and we use the proposed controller: the constant-force PI controller with apparent inertia reduction. The proportional and integral gains of the force controller are k_zp and k_zi, respectively; k_xp and k_yp are the reaction force gains. τ_x, f_y, and f_z are the contact torque around the X-axis and the contact forces along the Y and Z axes, respectively. f_ref_z is the desired contact force along the Z-axis, and x̄, ȳ, and z̄ are unit vectors along the X, Y, and Z axes. The block diagram of the proposed controller is given in Fig. 4.

V. EXPERIMENTS AND RESULTS

The experimental scenario of this paper is the alignment of workpieces (two similar funnels) using compliant motions, as shown in Fig. 5c.

A.
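The contact-phase branch of the 2-DOF controller in (8), PI force control along Z plus positive contact-wrench feedback on the compliant axes, can be sketched as follows. The class, method, and variable names are ours, not from the paper's implementation; the gain and reference values are the ones reported in the experiments:

```python
class ContactController:
    """Sketch of the contact-phase controller (8): PI force control along
    the Z-axis plus apparent-inertia-reduction feedback along Y and around X."""

    def __init__(self, k_zp, k_zi, k_yp, k_xp, f_ref_z, dt):
        self.k_zp, self.k_zi = k_zp, k_zi  # PI gains, force direction (Z)
        self.k_yp, self.k_xp = k_yp, k_xp  # reaction force/torque gains
        self.f_ref_z = f_ref_z             # desired contact force along Z
        self.dt = dt                       # controller period (200 Hz -> 5 ms)
        self.err_int = 0.0                 # integral of the force error

    def update(self, tau_x, f_y, f_z):
        """Map the measured contact wrench to the commanded wrench."""
        err = self.f_ref_z - f_z
        self.err_int += err * self.dt
        cmd_f_z = self.k_zp * err + self.k_zi * self.err_int  # PI force control
        cmd_f_y = self.k_yp * f_y                             # scaled contact force
        cmd_tau_x = self.k_xp * tau_x                         # scaled contact torque
        return cmd_tau_x, cmd_f_y, cmd_f_z

# Gains and reference from the experiments (f_ref_z = 0.8 N, k_yp = 0.2, k_xp = 0.5);
# the PI gains and dt are illustrative assumptions.
ctrl = ContactController(k_zp=1.0, k_zi=0.5, k_yp=0.2, k_xp=0.5, f_ref_z=0.8, dt=0.005)
cmd_tau_x, cmd_f_y, cmd_f_z = ctrl.update(tau_x=0.1, f_y=1.0, f_z=0.8)
```

At the reference force (f_z = f_ref_z) the PI term is zero and only the inertia-reduction terms remain, which is the behavior that keeps the compliant axes tracking the contact wrench.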
Hardware Setup

The hardware setup consists of an arm manipulator and a free-floating target.

1) Manipulator: The robot used was a 7-DOF KUKA LWR 4+ manipulator with an ATI Mini45 force/torque sensor attached between the flange and the end-effector. One of the funnel-shaped guides was attached to the end-effector and the other funnel was mounted on a free-floating object platform (Fig. 5a). To implement the controller on the robot, we used KUKA's Fast Research Interface (FRI) [16] with a control frequency of 200 Hz. The control law for the Cartesian impedance control of the KUKA LWR4+ through FRI is

τ_cmd = J^T [ k_c (x_cmd − x_msr) − d_c ẋ + F_cmd ] + f_dyn(q, q̇, q̈),

where J is the Jacobian. The control law represents a virtual spring k_c (x_cmd − x_msr). The stiffness of the virtual spring, k_c, the damping factor, d_c, the desired Cartesian position, x_cmd, and the superposed force/torque term, F_cmd, can be set dynamically. The term f_dyn(q, q̇, q̈) is the dynamic model of the robot and compensates for gravity, Coriolis, and centrifugal torques. We implemented our controller through F_cmd by setting k_c = 0 and d_c = 0. To negate the effects of gravity, linear motion along the X-axis and rotational motion around the Y and Z axes were constrained by setting the stiffness of the internal impedance controller for these axes to its maximal value. For the other axes, the proposed controller was applied via F_cmd.

2) Free-Floating Target: A two-dimensional microgravity emulator that floats a target object on an air bed was developed in-house (Fig. 5a). This system can be used to emulate the planar (3-DOF) version of a 6-DOF space robot. The free-floating platform was made of an aluminium block (25 × 25 × 5 cm) with a built-in air channel. This air channel creates an air bed a few microns thick under the target. The air pressure is adjustable with an air pressure regulator and an air valve in the range of 2-8 bar. In the experiments, a pressure of 2 bar was used to make the whole setup float on a glass surface.
The floating target weighed 12 kg. Two primary limiting factors in this kind of setup are the residual friction and the rigidity of the air hose. Experiments were performed to estimate the coefficient of friction using a video camera with a spatial resolution of 1280 × 720 and a temporal resolution of 50 frames per second. An impulse force was applied to the free-floating target, putting it in motion. The deceleration of the free-floating platform was calculated using initial and final velocity estimates from the video. The deceleration was below what the resolution of the camera setup can measure, which gives an upper bound of 0.01 for the friction coefficient of the free-floating target.

B. Design of Experiments

Experiments were performed using the experimental setup shown in Fig. 5b. The goal of these experiments was to implement and compare three methods of manipulator control: (i) force control with apparent inertia reduction (the proposed approach), (ii) force control, and (iii) impedance control. The main focus was to study two properties: first, whether or not the manipulator is able to maintain contact with the free-floating object during manipulation; second, the effect of apparent inertia reduction on the minimization of the interaction forces. At every update interval, the desired position of the end-effector was calculated to make the end-effector follow a straight line along the Z-axis, in the YZ plane. While following this trajectory, the funnel on the manipulator interacts with the target funnel, which helps guide the manipulator funnel towards alignment. In methods (i) and (ii), the impedance controller was used only during the chase phase before contact. As soon as contact was detected, the controller switched to the proposed force control with apparent inertia reduction in method (i) and to force control in method (ii), both described in (8). In method (iii), the impedance controller is used throughout the experiment.
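The friction estimate described above amounts to computing the deceleration from two velocity estimates and dividing by the gravitational acceleration. A short sketch (the velocity values and time interval are illustrative, not the measured data):

```python
G = 9.81  # gravitational acceleration, m/s^2

def friction_coefficient(v_initial, v_final, dt):
    """Coulomb friction coefficient of a sliding free-floating platform,
    from initial/final velocity estimates (m/s) over an interval dt (s):
    mu = a / g, with a the magnitude of the deceleration."""
    decel = (v_initial - v_final) / dt
    return decel / G

# Illustrative values: a platform slowing from 0.20 to 0.18 m/s over 2 s.
mu = friction_coefficient(0.20, 0.18, 2.0)
assert mu < 0.01  # consistent with the reported upper bound
```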
The parameters for the free-space impedance control and the contact-phase force control are given in Table II. An initial set of experiments was performed to find the minimum value of the reference (desired) contact force f_ref_z along the Z-axis, with apparent reduction of translational inertia along the Y-axis and apparent reduction of rotational inertia around the X-axis. It was found that the end-effector funnel did not slide on the target funnel when f_ref_z was below 1 N. This can be attributed to the Coulomb friction between the funnels and the internal friction in the joints of the manipulator arm.

C. Compared Methods

1) Method (i), Force Control with Apparent Inertia Reduction: Two cases were considered for this method: (i) apparent reduction of both translational and rotational inertia (using k_xp and k_yp), and (ii) apparent reduction of translational inertia only (using k_yp). This was done to understand the effect of reducing the apparent inertia along a subset of the (I − W) directions in (4). In both cases the interaction forces were minimized by reducing the apparent inertia according to (8). Experiments were performed to derive the values of k_xp and k_yp empirically. The theoretical lower limit of k_xp and k_yp is 0. The theoretical upper limit of k_xp is the value at which the applied controller torque around the X-axis overcomes the manipulator inertia around the X-axis; similarly, the theoretical upper limit of k_yp is the value at which the applied controller force along the Y-axis overcomes the manipulator inertia along the Y-axis.

2) Method (ii), Force Control: Force control is a special case of the proposed controller with k_xp = 0 and k_yp = 0 in (8). As mentioned in the previous subsection, the end-effector funnel did not slide on the target funnel for values of f_ref_z below 1 N. With force control only, a higher interaction force was required to achieve the alignment. As f_ref_z was increased in magnitude, the sliding resulted in alignment of the tools.
However, this also resulted in the free-floating object moving faster, posing a challenge as the manipulator has to chase the target.

3) Method (iii), Impedance Control: The impedance controller was implemented as

F = K_e (x_setpoint − x_current) − B_e ẋ,

where K_e and B_e are the stiffness and damping matrices, x_current is the current pose of the tool tip, x_setpoint is the desired pose, and ẋ is the velocity of the end-effector. The actuator force is calculated by the impedance controller. To achieve the goal of minimizing the interaction forces, a low manipulator velocity was used together with high compliance. The motion is along the Z-axis; translation along the Y-axis and rotation around the X-axis are fully compliant.

D. Results

1) Maintaining contact during manipulation: Fig. 6 shows the force/torque sensor data for the interaction wrench for each control method discussed. The parameters of the methods are shown in Table III. The experiment started around 8 s with the manipulator in impedance control mode. The time of contact varied (observed as a rise in contact force), as it depends on the initial placement of the free-floating object. The control mode switched at contact for methods (i) and (ii), whereas for method (iii) there was no switching of control mode. In the direct force control approaches, Figs. 6a, 6b, and 6c, the end-effector funnel slid on the free-floating object's funnel, resulting in alignment while both were in motion. In the impedance control approach, Fig. 6d, the contact kept breaking. These contact breaks can be seen as small impacts, and they can be attributed to the indirect force control, where the forces are governed by the difference between the current and desired positions of the manipulator, which tends to fluctuate. This change in forces resulted in a contact bounce problem, that is, the end-effector kept losing contact with the target. The direct force controller worked better for the free-floating target since the contact force could be maintained at the constant minimum force required for the alignment task.
Based on this and the simulations in Sec. III, it can be concluded that the impedance controller is not suitable for alignment with a free-floating target because of the contact breaking.

2) Minimization of Interaction Forces: Since the impedance controller was unable to maintain contact, the interaction forces were evaluated only for methods (i) and (ii), for which the end-effector funnel slid on the free-floating object's funnel, resulting in alignment while both were in motion. The reference forces were chosen as the smallest values that maintained sliding and contact. The forces and torques for method (i) are shown in Fig. 7. It can be observed that the contact force along the Z-axis was maintained at f_ref_z = 0.8 N (Fig. 7a), the applied force along the Y-axis was 0.2 times the contact force (Fig. 7b), and the applied torque around the X-axis was 0.5 times the contact torque (Fig. 7c). For method (ii), a higher reference force of f_ref_z = 1.5 N was needed. The measured forces are shown in Fig. 8. It can be seen that the constant-force controller tries to maintain the contact force at f_ref_z = 1.5 N. By comparing Fig. 8 to Fig. 7a, it can be concluded that the proposed inertia reduction allows smaller contact forces to perform the alignment. Hence direct force control with apparent inertia reduction works better for minimizing the interaction forces during manipulation. When both translational and rotational inertia were reduced, it was found experimentally that k_xp should be above 0.5 (sliding did not occur at lower values) and below 2 (contact broke at higher values). For k_yp the lower and upper limits were 0.2 and 1.5, respectively. However, in some applications it can be preferable to reduce the inertia only along a subset of the DOFs. To study the effect of this option, Fig. 9 shows the case where only the translational inertia was reduced. It was found that the minimum contact force required for sliding and alignment remained at f_ref_z = 0.8 N.
However, in the absence of rotational inertia reduction (k_xp = 0), the translational inertia reduction factor needed to be increased from k_yp = 0.2 to k_yp = 1. It can also be observed that the applied (controller output) force along the Y-axis is proportional to the contact force (k_yp = 1), and that the ripples in the Y-axis force are reflected in the Z-axis force because of the coupling of forces due to the slope of the funnel. The higher value of k_yp also results in an amplification of the ripples, which is not desirable for smooth control.

VI. CONCLUSIONS AND FUTURE WORK

This paper presented a controller for performing compliant manipulation of free-floating objects. The objective of the controller was to minimize the interaction forces while maintaining contact. The proposed approach achieved this by maintaining a constant minimum force along the motion direction and by an apparent reduction of the manipulator inertia along the remaining DOFs. The experiments were performed with a KUKA LWR4+ manipulator arm and a two-dimensional microgravity emulator, verifying the applicability of the proposed approach. The experiments showed that an approach based on direct force control is superior to indirect force control for compliant manipulation of a free-floating object. With impedance control, the end-effector keeps losing contact with the target. The direct force controller works better for the free-floating target since the contact force can be set to the minimum force required for the alignment task. Furthermore, taking a cue from the simulations, the apparent inertia of the robotic manipulator was reduced by using the measured contact wrench as an additional actuator wrench. This reduction in apparent inertia allowed alignment of the tools with lower interaction forces.
The proposed control law was verified experimentally. The proposed approach would also be applicable to the manipulation of a free-floating target by a free-flying manipulator, but the interaction of the manipulator base control and the reaction forces should be studied further.
Gastroprotective Effect of Ethanol Stem Bark Extract of Pepolo (Bischofia javanica Blume) Against Aspirin-Induced Ulcers in Wistar Rats

Prolonged use of nonsteroidal anti-inflammatory drugs (NSAIDs) can trigger ulcers in the gastric mucosa. Pepolo stem bark (Bischofia javanica Blume) has been used empirically to treat gastric ulcers. This plant contains alkaloids, flavonoids, tannins, saponins, and triterpenoids with gastroprotective potential. This study aims to determine the gastroprotective effect of pepolo stem bark extract (PSBE) against aspirin-induced gastric ulcers in Wistar rats. Aspirin (150 mg/kg BW) was administered orally for eight days to all groups except the normal control. Four hours after each induction, the normal and negative control groups received 0.5% Na CMC, while the positive group received omeprazole (3.6 mg/kg BW). The remaining low-, middle-, and high-dose groups received PSBE (100, 200, and 300 mg/kg BW). On the 9th day, the rats were dissected and the stomach organs were examined. The parameters scored were the severity of the peptic ulcers, the ulceration index, and the percentage protection ratio. The scoring data were analyzed using the non-parametric Kruskal-Wallis test followed by the post hoc Mann-Whitney test. The administration of PSBE significantly reduced the ulcer index and increased the percentage of ulcer inhibition compared with the negative control (p<0.05), showing that PSBE could be a promising gastroprotective herbal medicine.

1. Introduction

Gastric ulcer is a digestive disorder that occurs when the inner lining of the stomach is inflamed or swollen.1 Inflammation of the stomach can arise when the gastric mucosal protection system can no longer protect the stomach from continuous exposure to various harmful substances, exacerbating chronic inflammation and oxidative stress.2 Inflammation that damages the stomach and causes a loss of part of the stomach wall can lead to peptic ulcers.
Ulcers occur when the wound forms a tear ≥ 5 mm in diameter, extending from the submucosa to the muscular layer of the gastric wall. Although the core mechanism is still being researched, gastric ulcers are thought to be caused by an imbalance between protective and destructive factors.3 Gastric ulcer disease affects a vast number of people all over the world. Health profile data for Central Sulawesi in 2020 show that gastritis is the second of the ten most common diseases in Central Sulawesi, with a total of 100,525 cases.4 Risk factors for gastritis are nonsteroidal anti-inflammatory drugs (NSAIDs), Helicobacter pylori infection, alcohol consumption, smoking, stress, an irregular diet, and consuming too much spicy and acidic food.5 Nonsteroidal anti-inflammatory drugs (NSAIDs) are a group of drugs widely used for rheumatoid arthritis, osteoarthritis, and pain relief. These drugs can damage the gastric mucosa through inhibition of the COX-1 enzyme and gastroprotective prostaglandins, membrane permeabilization, and the production of additional proinflammatory mediators.6 Pepolo stem bark (Bischofia javanica Blume) is one of the plants used by the community as a traditional medicine to treat gastric ulcer disease.7 This plant has been found to contain several significant phytochemical components and has been used in traditional medicine to treat various ailments. Phytochemical studies have identified flavonoids and tannins as major secondary metabolites of pepolo stem bark extract.8,9 Flavonoid compounds act as cytoprotectants (increasing mucus), antioxidants, immunoregulators (decreasing proinflammatory cytokines and increasing anti-inflammatory cytokines), and antisecretory agents (decreasing H+).10 The gastroprotective action of tannins is based on their ability to promote tissue repair, their antioxidant activity, and their ability to interact with other molecules.
This tannin-protein complex layer protects the stomach from chemical and mechanical injury or irritation in gastric ulcers.11 To the best of our knowledge, no data are available on the gastroprotective activity of pepolo stem bark extract. We were therefore interested in studying the gastroprotective effect of pepolo stem bark extract in aspirin-induced male rats.

Procedure

2.3.1. Animal Test

The experiment was carried out on a total of 24 healthy rats, aged between 2 and 4 months and weighing between 200 and 250 g. The study was approved by the Ethics Committee for Medical and Health Research, Faculty of Medicine, University of Tadulako (Number: 1436/UN 28.1.30/Kl/2021). Rats were housed individually in polypropylene cages and maintained under standard conditions (12 h light and 12 h dark cycle; 25°C and 45-55% relative humidity). They were given a standard pellet diet and water ad libitum throughout the study. The animals were acclimatized to the laboratory environment for seven days before being used in the study.

Preparation of the Ethanol Extract

The stem bark of pepolo was obtained in Sedoa Village, North Lore District, Poso Regency, Central Sulawesi. The stem bark was collected with the chessboard technique, then washed thoroughly and drained. It was cut into small pieces of approximately 2.5 × 3.5 cm to speed up drying. The samples were dried in an oven overnight at 50°C. Once dry, the simplicia was ground until smooth using a blender, sieved, and stored in tightly sealed plastic containers for use in the research. A total of 1.1 kg of dried stem bark powder was extracted by reflux using 6 L of 96% ethanol. The powder was extracted for 3 × 4 hours at the boiling temperature of the solvent. The liquid extract obtained was collected and then evaporated with a rotary vacuum evaporator.
Phytochemical Screening Procedures

Following standard procedures, the ethanol extract was tested for secondary metabolites such as alkaloids, flavonoids, saponins, tannins, and steroids/triterpenoids.12

Experimental Animals

The experimental method followed Yasin, H. et al. 2020 with minor changes.2 Stomach ulcers were induced with aspirin 150 mg/kg BW. Omeprazole 3.6 mg/kg BW was used as the standard antiulcer medicine, since it is an irreversible and selective proton pump inhibitor. Rats were distributed randomly into six groups (4 rats each). Treatment was as follows:

Group I: normal control; the rats were given 0.5% CMC Na suspension orally.
Group II: negative control; administered aspirin suspension orally. After 4 hours, the rats were given 0.5% CMC Na suspension orally.
Group III: positive control; administered aspirin suspension orally. After 4 hours, the rats were given omeprazole suspension orally.
Groups IV, V, and VI: test dose groups; administered aspirin suspension orally. After 4 hours, the rats were given pepolo stem bark extract suspension orally at doses of 100, 200, and 300 mg/kg BW, respectively.

Treatment was given for eight days. On the 9th day, the test animals, which had previously been fasted for 12 hours, were terminated. After the animals were sacrificed, the stomachs were removed and the gastric ulcer lesions were measured.

Macroscopic Observations

On the ninth day, the rats were dissected and the stomach organs were taken. The gastric organs were opened along the lesser curvature, washed with 0.9% NaCl, and then stretched to facilitate measurement of the lesion lengths. The ulcer severity score was determined by examining the inner surface of the stomach and measuring the lesions formed using a vernier caliper. Lesions were scored based on their length for analysis.
The scores given were as follows: normal stomach = 1; reddish/red stomach = 1.5; bleeding spots or ulcers up to 0.5 mm in diameter = 2; ulcers 0.5-1.5 mm in diameter/length = 3; ulcers 1.6-4 mm in diameter/length = 4; ulcers >4 mm in diameter = 5; perforation 2-7 mm in diameter = 6; perforation 8-13 mm in diameter = 7; perforation >13 mm in diameter = 8. Following Adefisayo, M.A., 2018, the number and severity of ulcers are summarized by the ulcer index:13

Ulcer Index (UI) = (total ulcer score) / (number of animals with ulceration)

The level of healing is assessed by the percentage of ulcer inhibition:

% inhibition of ulcers = 100% − [(UI of treatment group) / (UI of negative control group) × 100%]

2.3.6. Data Analysis

The research data were analyzed using SPSS software, version 26. The non-parametric Kruskal-Wallis test was used, followed by post hoc Mann-Whitney analysis. The significance level in this study was p < 0.05 (95% confidence level).

3. Results

3.1. Extraction and phytochemical screening of pepolo stem bark extract

This study used the reflux method. From a total of 1.1 kg of simplicia powder, 24.51 g of extract was obtained, a yield of 2.2%; the extract was blackish-red. Preliminary phytochemical screening of the ethanol extract of pepolo indicated several important groups of natural products. A change of colour or the formation of a precipitate was observed when the test reagent was added to the prepared sample. The identification results showed the presence of alkaloids, flavonoids, saponins, tannins, and triterpenoids.

Gastroprotective effect

Macroscopic examination of the gastric mucosa was performed by assigning a score based on the predetermined severity scale for peptic ulcers. The score data obtained were then used to calculate the ulcer index.
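The two formulas above can be expressed directly in code; a sketch using the paper's definitions. The score lists below are illustrative, not the study data, and treating any score above 1 (the normal-stomach score) as "ulcerated" is our assumption:

```python
def ulcer_index(scores):
    """Ulcer index: total ulcer score divided by the number of animals
    showing ulceration (score > 1 taken as ulcerated - an assumption,
    since the normal-stomach score is 1)."""
    ulcerated = [s for s in scores if s > 1]
    if not ulcerated:
        return 0.0
    return sum(scores) / len(ulcerated)

def percent_inhibition(ui_treatment, ui_negative_control):
    """% inhibition of ulcers = 100% - (UI_treatment / UI_NC) * 100%."""
    return 100.0 - (ui_treatment / ui_negative_control) * 100.0

# Illustrative example: a negative-control group and a treated group of 4 rats.
ui_nc = ulcer_index([6, 6, 5, 6])  # all four ulcerated -> 23 / 4 = 5.75
ui_tx = ulcer_index([2, 3, 1, 1])  # two of four ulcerated -> 7 / 2 = 3.5
inhib = percent_inhibition(ui_tx, ui_nc)
```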
The ulcer index was calculated by dividing the total score by the number of rats that experienced ulceration. The results of the assessment of damage to the gastric mucosa can be seen in Table 1. The cure rate was assessed based on the percentage of ulcer inhibition. The percentage of ulcer inhibition for each test group can be seen in Figure 1, and the macroscopic images of the rat stomachs are shown in Figure 2.

Discussion

The sample used in this study was pepolo stem bark extract (Bischofia javanica Blume). The ethanol extract was obtained using the reflux method. This method was chosen because the sample, pepolo bark, has a hard texture and is heat resistant.14 The resulting pepolo stem bark extract was subjected to phytochemical screening to identify its compounds. The results showed that pepolo stem bark extract contains flavonoids, saponins, alkaloids, triterpenoids, and tannins. These results are in line with previous research showing that the ethanol extract of pepolo stem bark contains tannin and flavonoid compounds with antioxidant activity.15 On this basis, a gastroprotective test was conducted in aspirin-induced male rats to establish whether the extract affects the rat stomach and to determine the dose that gives a healing effect on ulceration. The comparator, or positive control, used in this study was omeprazole, a treatment with proven efficacy in protecting the gastric mucosa; the positive control also serves to demonstrate that the method used is valid. Proton pump inhibitors are used to treat the gastric acid hypersecretion that occurs in gastric and duodenal ulcers, and are used in eradication therapy for Helicobacter pylori in combination with antibiotics.
Omeprazole was the first member of this drug class; it binds to the K+/H+-ATPase system of parietal cells (the proton pump) to inhibit the secretion of hydrogen ions into the gastric cavity. Proton pump inhibitors (PPIs) were chosen as the positive control because they effectively reduce ulcers in patients taking aspirin or other NSAIDs.16 Even with continued NSAID treatment, once-daily administration of a proton pump inhibitor can promote ulcer healing.17 A higher ulcer index indicates greater gastric damage, and a higher percentage of ulcer inhibition indicates a greater ability to heal and to reduce the rate of gastric damage. Based on the calculations (Table 1 and Figure 1), the ulcer indices of the negative control, normal control, positive control, and the 100, 200, and 300 mg/kg BW ethanol extract groups were 5.75, 0, 0, 3.25, 0, and 0, respectively. The corresponding healing rates, based on the percentage of ulcer inhibition, were 0% for the negative control; 100% for the normal control, the positive control, and the 200 and 300 mg/kg BW extract doses; and 24.69% for the 100 mg/kg BW dose. The healing rate increased with increasing dose. Based on Table 1, it can be concluded that all of the ethanol extract treatment groups showed a decrease in the mean ulcer index of the rat stomach compared with the negative control. These results indicate that pepolo stem bark extract promotes the healing of peptic ulcers. The positive control group (omeprazole) had a significantly lower ulcer score (p < 0.05) than the negative control group, confirming that the methods used in this study were correct. Likewise, there were significant differences (p < 0.05) between the normal control and negative control groups.
This can be explained by the fact that the normal group was neither induced nor treated, so the stomach was physically free of ulcers, while the negative control group (Na CMC) was induced with aspirin without treatment, so the stomach developed ulcers. Gastric lesions were measured with the aid of a vernier caliper. Figure 2 shows the macroscopic appearance of the lesions in the gastric organs of each treatment group. The normal group (1a) shows no gastric lesions. The negative control (1b) shows many gastric lesions, of larger size, tending towards perforation of the gastric organs. Meanwhile, a preventive effect on the number and severity of ulcers was seen in rats given pepolo stem bark extract. Statistical analysis showed that the ulcer severity in the 100 mg/kg BW pepolo stem bark extract group differed significantly (p < 0.05) from the 200 and 300 mg/kg BW groups, whereas the 200 mg/kg BW and 300 mg/kg BW doses did not differ significantly (p > 0.05). Figure 1 shows the percentage ulcer healing rate for each treatment. The percentage ulcer healing rate was used to determine how much the test compound reduced the severity of peptic ulcers in rats compared with the negative control (0.5% Na CMC). The results show that pepolo stem bark extract at doses of 100, 200, and 300 mg/kg BW has gastroprotective activity, with an optimal dose of 200 mg/kg BW. The gastroprotective effect is thought to be due to the secondary metabolites found in the phytochemical screening of the pepolo stem bark extract, namely alkaloids, flavonoids, saponins, tannins, and triterpenoids.
Alkaloids act by reducing gastric acid secretion, increasing mucus and alkaline secretion, and increasing gastric mucosal blood flow, aiding the healing and prevention of gastric ulcers caused by irritant agents.18 Flavonoids protect the gastric mucosa against ulcerogenic agents through free radical scavenging, increased mucus production, and antisecretory mechanisms.19 Tannin compounds are also known to have gastroprotective effects, inhibiting gastric secretion and providing local protection of the gastric mucosa. The effect of saponins is probably due to their antioxidant activity.20 The potent gastroprotective and curative effects of triterpenoids are probably due to their antioxidant and antisecretory actions, increased gastric mucus production, and induction of PGE2 levels.21

5. Conclusions

This study provides a rationale for using pepolo stem bark to develop a new drug for the treatment and prevention of peptic ulcer disease. It establishes that pepolo stem bark extract has gastroprotective activity, decreasing the ulcer index and increasing the protection ratio, with an optimal dose of 200 mg/kg BW. Further research is required to elucidate the fundamental mechanism of the gastroprotective potential of pepolo stem bark.
PES Pathogens in Severe Community-Acquired Pneumonia Worldwide, there is growing concern about the burden of pneumonia. Severe community-acquired pneumonia (CAP) is frequently complicated by pulmonary and extra-pulmonary complications, including sepsis, septic shock, acute respiratory distress syndrome, and acute cardiac events, resulting in significantly increased intensive care admission rates and mortality rates. Streptococcus pneumoniae (Pneumococcus) remains the most common causative pathogen in CAP. However, several bacteria and respiratory viruses are responsible, and approximately 6% of cases are due to the so-called PES (Pseudomonas aeruginosa, extended-spectrum β-lactamase Enterobacteriaceae, and methicillin-resistant Staphylococcus aureus) pathogens. Of these, P. aeruginosa and methicillin-resistant Staphylococcus aureus are the most frequently reported and require different antibiotic therapy to that for typical CAP. It is therefore important to recognize the risk factors for these pathogens to improve the outcomes in patients with CAP. Severe Community-Acquired Pneumonia: What Is the Current Definition? Currently, there is no consensus for the definition of severe community-acquired pneumonia (CAP) largely because it includes such a heterogeneous patient group. The most widely accepted definition is based on the 2007 Infectious Diseases Society of America/American Thoracic Society consensus (ATS/IDSA) guidelines for the management of CAP in adults [1]. According to the ATS/IDSA guidelines, severe CAP is defined by the presence of two major criteria: the need for invasive mechanical ventilation (IMV) due to severe acute respiratory failure and/or the presence of septic shock (Table 1). Several minor criteria requiring high intensity monitoring and treatment have also been proposed [1]. Table 1. The ATS/IDSA severity criteria for community-acquired pneumonia. Adapted from reference [1]. 
Major criteria: invasive mechanical ventilation; septic shock. Admission to Intensive Care Units: What Is the Real Impact? During recent decades, the number of patients with pneumonia requiring management in intensive care has grown globally. The aging population [2] and the increasing number of immunocompromised patients [3][4][5] (e.g., due to solid organ and hematopoietic stem cell transplantation, human immunodeficiency virus, or biological or immunosuppressive therapies) probably explain much of this change. The percentage of admissions to intensive care units (ICUs) that are attributable to elderly patients ranges from 9% to 19% in Europe [6][7][8][9][10][11] and from 20% to 30% in America [12]. A recent French study of ICU admission trends among elderly patients (≥75 years) between 2006 and 2015 indicated that 3% of all hospitalizations (3,856,785 cases) were for an acute respiratory infection (ARI) (98,381 cases) and that 15% of these required ICU admission (15,267 cases). The authors noted that the annual number of ICU hospitalizations increased steadily from 740 to 2034 during the study, with a 2.7-fold increase in ICU admissions for respiratory infection (p = 0.002). There was an overall increase in the number of ICU admissions for all age groups but with the greatest increases in ICU resources seen for patients aged 85-89 years (3.3-fold) and ≥90 years (5.8-fold). Interestingly, the increased ICU admission rate was not associated with significant changes in the ICU mortality rate for patients with ARI, with rates of 19.7% ± 3.0%, 24.0% ± 3.6%, and 25.0% ± 4.0% for those aged 75-79, 80-84, and 85-89 years, respectively. Indeed, there was a significant drop in ICU mortality from 40.9% in 2006 to 22.3% in 2015 (p = 0.03) for patients aged ≥90 years [13]. Finally, they reported that hospitalizations for CAP and acute exacerbations of chronic obstructive pulmonary disease increased significantly for all age groups during the study period.
Outcomes of Patients with Severe CAP: What Are the Main Determinants? Despite improvements in management, severe CAP is associated with significant mortality. It is known that patients with respiratory failure and IMV, sepsis, septic shock, and decompensated comorbidities are at greater risk of death [14][15][16][17][18]. In the study by Jain et al. [19], in which 2320 cases of pneumonia were analyzed, approximately 21% of patients with CAP required ICU admission and 6% required IMV [19]. More recently, a Spanish study [14] of 154 severe CAP cases found a higher 30-day mortality rate (33%) in patients receiving IMV compared with non-intubated patients (18%). Patients receiving IMV did not present higher severity scores at hospital admission according to APACHE-II, PSI, or CURB-65 scores, but the use of IMV independently predicted 30-day mortality. The authors concluded that, based on these results, the PSI, CURB-65, and APACHE-II scores were less suitable than IMV for reliably identifying patients with severe CAP at higher risk of mortality. IMV, septic shock, worsening hypoxemia, and increased serum potassium were independently associated with increased mortality. Interestingly, a recently published systematic review and meta-analysis [20] that included nine studies on the use of noninvasive ventilation (NIV) in acute hypoxemic respiratory failure showed a protective effect against intubation and mortality with the use of NIV in patients with acute pulmonary edema, CAP, or immunosuppression (Figure 1). Cillóniz et al. [15] investigated acute respiratory distress syndrome (ARDS) in mechanically ventilated patients with severe CAP. Of the 5334 participants, 930 (17%) were admitted to the ICU, 462 (52%) were not ventilated, 137 (15%) received NIV, and 295 (33%) received IMV; 125 cases (29%) met the Berlin ARDS criteria [22]. ARDS affected 2% of all patients hospitalized with CAP and 13% of patients admitted to the ICU.
According to the severity of ARDS, the 30-day mortality rates were 32%, 33%, and 60% for patients with mild, moderate, and severe ARDS, respectively. Sepsis is another important complication of severe CAP [17]. The global incidence of hospital-treated sepsis has been estimated at 31.5 million for sepsis and 19.5 million for severe sepsis, with a potential 5.3 million annual deaths in high-income countries [23]. Sepsis is associated with prolonged ICU stays and high mortality rates (20%-30%), with those rates increasing to approximately 45% when shock is present [21]. In 2016, a Spanish study investigated the predictors of severe sepsis among 4070 hospitalized patients with CAP. Of these, 37% presented with severe sepsis (1529 patients), of which 67% were ≥65 years and 63% had PSI risk class IV-V. The 30-day mortality of septic patients (7%) was significantly higher than that in non-septic patients (3%; p < 0.001). Predictors of severe sepsis were older age, alcohol abuse, renal disease, and chronic obstructive pulmonary disease, whereas prior antibiotic treatment was a protective factor [17]. Georges et al. [24] investigated the prognosis of patients admitted to the ICU with CAP after the implementation of new care strategies, including sepsis bundles derived from the Surviving Sepsis Campaign [25], an initial empirical antimicrobial regimen with a third-generation cephalosporin and levofloxacin, and the use of NIV following extubation. Comparing the pre-implementation period (1995-2000) with the implementation period (2005-2010), mortality decreased from 43.6% to 30.9% (p < 0.02). Consistent with these results, in a matched case-control study by Gattarello et al. [26] (80 cases and 80 controls) comparing 2000-2002 and 2008-2013, there was a 15% decrease in mortality among patients with pneumococcal pneumonia admitted to the ICU.
Early antibiotic administration and combination antibiotic therapy were both independently associated with better outcomes. Pathogens Beyond the Core Microorganisms of CAP: Should We Be Worried About Them? The most frequent pathogens outside the core microorganisms of CAP are methicillin-resistant Staphylococcus aureus (MRSA), Pseudomonas aeruginosa, Acinetobacter baumannii, and various Enterobacteriaceae [27,28]. Since antibiotic therapy for these pathogens is different from the usual empirical therapy for CAP, it is important to recognize their main risk factors to ensure early diagnosis and appropriate treatment. In 2012, an international expert proposal for interim standard definitions for acquired resistance was published to allow data comparison and to improve comprehension of the real problem of antimicrobial resistance globally. Magiorakos et al. proposed the following definitions: multidrug resistance (MDR, or resistance to at least one agent in ≥3 antibiotic groups), extensive drug resistance (XDR, or resistance to at least one agent in all but ≤2 antibiotic groups), and pan drug resistance (PDR, or resistance to all antibiotic groups) [29] (Table 2). In the same year, Aliberti et al. [30] analyzed 935 patients. Among them, 473 (51%) had at least one risk factor for an MDR pathogen on admission. The authors proposed a score that included the following variables: chronic renal failure (5 points), prior hospitalization (4 points), nursing home residence (3 points), and other variables (0.5 points each for cerebrovascular disease, diabetes, chronic obstructive pulmonary disease, immunosuppression, home wound care, prior antimicrobial therapy, and home infusion therapy). The prevalence of resistant pathogens was 38% in patients with a score of ≥3 points, compared with 8% in patients with a score of ≤0.5. The overall prevalence of PES pathogens in this study was lower than in others.
The thresholds 0-0.5 points and 3-12.5 points were associated with low and high risks of MDR, respectively. Table 2. Definitions of the various categories of drug resistance. Adapted from Reference [29]. Multidrug resistance (MDR): non-susceptibility to at least one agent in three or more antimicrobial categories. Extensive drug resistance (XDR): non-susceptibility to at least one agent in all but two or fewer antimicrobial categories. Pan drug resistance (PDR): non-susceptibility to all agents in all antimicrobial categories. In 2013, a study of MDR pathogens in two independent European cohorts of hospitalized patients with CAP was published. MDR pathogens were identified in 3.3% of patients in the Spanish cohort and in 7.6% of patients in the Italian cohort, with MRSA being the most common. In both cohorts, there was a significantly higher prevalence of MDR bacteria among patients in the ICU compared with patients treated on the ward [27]. In 2015, Prina et al. [31] proposed the PES score, based on the three most frequent MDR pathogens in CAP (i.e., P. aeruginosa, extended-spectrum β-lactamase-positive Enterobacteriaceae, and methicillin-resistant S. aureus). The following elements were included: 1 point each for age 40-65 years and male sex; 2 points each for age >65 years, previous antibiotic use, chronic respiratory disorder, and impaired consciousness; 3 points for chronic renal failure; and minus 1 point if fever was present initially. The thresholds ≤1 point, 2-4 points, and ≥5 points indicated low, medium, and high risks of MDR, respectively (Table 3). Table 3. PES score. Adapted from Reference [31]. Also in 2015, a score was developed by Falcone et al.
[28] (the ARUC score) based on the following: 1 point for healthcare-associated pneumonia (HCAP) criteria (defined by the presence of at least one of hospitalization in the previous 3 months, dialysis, intravenous chemotherapy in the past 30 days, admission to an acute care hospital for at least 2 days, surgery in the past 90 days, or residence in a nursing home or long-term care facility); 0.5 points for bilateral pulmonary infiltrations and pleural effusion; and 1.5 points for a PaO2/FiO2 <300. Patients were then stratified as low (<0.5 points) or high (≥3 points) risk for MDR pathogens. The authors analyzed 300 patients with an etiologic diagnosis of CAP or HCAP, of which 99 (11%) presented MDR pathogens; only 12% of these required an ICU admission. In 2016, Webb et al. [32] proposed the DRIP (drug resistance in pneumonia) score based on major and minor risk factors. Major risk factors (2 points) included prior antibiotics, residence in a long-term care facility, tube feeding, and prior infection with a drug-resistant pathogen (within 1 year), and minor risk factors (1 point) included hospitalization within the previous 60 days, chronic pulmonary disease, poor functional status, gastric acid suppression, wound care, and MRSA colonization (within 1 year). A threshold of ≥4 points identified patients at high risk of pneumonia due to a drug-resistant pathogen. In 2017, Ishida et al. [33] evaluated the risk factors for antimicrobial-resistant pathogens in immunocompetent patients with pneumonia and validated the role of PES pathogens in this subgroup of patients. Among the 1559 patients with CAP, an etiological diagnosis was reached in 45% of patients and PES pathogens were identified in 7%. Patients with PES pathogens showed a trend toward initial treatment failure, readmission within 30 days, and prolonged hospital stays.
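The point-based scores described above reduce to a simple tally of clinical variables against a threshold. As an illustration, here is a minimal sketch of the PES score of Prina et al. [31] as summarized in the text; the function and parameter names are our own, not from the original study:

```python
def pes_score(age, male, prior_antibiotics, chronic_respiratory_disorder,
              impaired_consciousness, chronic_renal_failure, fever_at_onset):
    """Tally the PES score as described by Prina et al. [31]."""
    score = 0
    if 40 <= age <= 65:
        score += 1          # 1 point for age 40-65 years
    elif age > 65:
        score += 2          # 2 points for age >65 years
    if male:
        score += 1          # 1 point for male sex
    if prior_antibiotics:
        score += 2          # 2 points for previous antibiotic use
    if chronic_respiratory_disorder:
        score += 2
    if impaired_consciousness:
        score += 2
    if chronic_renal_failure:
        score += 3
    if fever_at_onset:
        score -= 1          # minus 1 point if fever was present initially
    return score


def pes_risk(score):
    """Map a PES score to the low/medium/high risk bands (<=1, 2-4, >=5)."""
    if score <= 1:
        return "low"
    if score <= 4:
        return "medium"
    return "high"
```

For example, a 70-year-old man with previous antibiotic use and no other risk factors scores 2 + 1 + 2 = 5 points, placing him in the high-risk band for which PES coverage is discussed later in the text.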
Risk factors associated with infection by PES pathogens were female sex, admission within 90 days, poor performance status, and enteric feeding. The authors concluded that the concept of PES pathogens provided an appropriate description of drug-resistant pathogens associated with pneumonia in immunocompetent patients. Recently, the European Antimicrobial Resistance Surveillance Network (EARS-Net) published its data regarding the burden of infections caused by antibiotic-resistant bacteria in countries of the EU and European Economic Area in 2015 [34]. The EARS-Net estimated that there were 671,689 infections with antibiotic-resistant bacteria, of which 64% (426,277) were health care associated. These infections accounted for an estimated 33,110 attributable deaths and 874,541 disability-adjusted life-years. Infants (aged <1 year) and the elderly (≥65 years) had higher burdens, as did Italy and Greece. Moreover 80% of the total disability-adjusted life-years per 100,000 were caused by infection with four pathogens: third-generation cephalosporin-resistant Escherichia coli, MRSA, third-generation cephalosporin-resistant Klebsiella pneumoniae, and carbapenem-resistant Pseudomonas aeruginosa. Risk Scores for Specific Pathogens (MRSA and P. aeruginosa) In 2013, Shorr et al. [35] analyzed 5975 patients admitted with bacterial pneumonia. MRSA was identified in 14% of patients. The authors proposed a new score that included eight variables: recent hospitalization or ICU admission (2 points), age < 30 or > 79 years (1 point), prior IV antibiotic exposure (1 point), dementia (1 point), cerebrovascular disease (1 point), female with diabetes (1 point), or recent exposure to a nursing home/long term acute care facility/skilled nursing facility (1 point). The prevalence of MRSA was < 10% in low risk patients (0-1 points), approximately 22% in medium risk patients (2 to 5 points), and >30% in high risk patients (6 or more points). In 2018, Restrepo et al. 
[36] published data of a multinational point prevalence study enrolling 3193 patients in 54 countries with confirmed diagnoses of CAP and who underwent microbiological testing at admission. The authors reported that the prevalence of P. aeruginosa was 4.2% and the prevalence of antibiotic-resistant P. aeruginosa was 2.0%. The authors identified the following risk factors for P. aeruginosa infection: prior Pseudomonas infection, tracheostomy, bronchiectasis, invasive respiratory and/or vasopressor support, and severe COPD. Conversely, risk factors for antibiotic-resistant P. aeruginosa infection were a past medical history of Pseudomonas infection or tracheostomy. According to the recommendations provided by the authors, in the absence of specific risk factors, an empiric antibiotic therapy with a β-lactam plus a respiratory fluoroquinolone or a macrolide is effective against most pathogens responsible for CAP. Empiric antipseudomonal therapy should be limited to patients with a past medical history of Pseudomonas infection and/or chronic lung diseases, independently of disease severity. In conclusion, the continuous increase of drug resistance among virulent pathogens represents a major challenge for clinicians and health care providers. Of course, the identification of risk factors is a key element for the management of pathogens beyond those typical in CAP. We believe that the concept of PES pathogens provides an accurate description of drug resistance in immunocompetent patients with CAP. Empiric Antibiotic Therapy in Severe CAP Caused by PES Pathogens: Is There Something New? A recent study by Maruyama et al. [37] proposed an antibiotic strategy for pneumonia based on the risk factors for PES pathogens, independent of the site of pneumonia acquisition.
Risk factors for PES pathogens were antibiotic therapy in the past 180 days, poor functional status (Barthel Index <50 or performance status ≥3), hospitalization for >2 days in the past 90 days, occurrence of pneumonia ≥5 days after admission to an acute hospital, requirement for hemodialysis, and immunosuppression. The authors prospectively applied the therapeutic algorithm to a multicenter cohort of 1089 patients, of whom 656 had CAP, 238 had HCAP, 140 had hospital-acquired pneumonia, and 55 had ventilator-associated pneumonia. Patients with 0-1 risk factors for PES pathogens were treated with standard therapy (a β-lactam plus a macrolide), whereas patients with ≥2 risk factors for PES pathogens were treated with an appropriate therapy for hospital-acquired pneumonia (a two- or three-drug regimen combining an antipseudomonal β-lactam with a quinolone or aminoglycoside plus optional linezolid or vancomycin). Approximately 83% of patients were treated according to the proposed algorithm, and 4% received inappropriate therapy. The authors concluded that using an algorithm based on the risk factors for PES pathogens and disease severity rather than the site of pneumonia acquisition simplified treatment, improved the accuracy of empiric therapy, and reduced mortality, avoiding the overuse of broad-spectrum antibiotics in some patients. Although this algorithm may be promising, it has only been validated in Japan. It will, therefore, be necessary to validate the algorithm in other countries, health care systems, and clinical settings. Currently, the empiric antibiotic therapy for severe CAP remains based on international guidelines that recommend using a macrolide or a respiratory fluoroquinolone in combination with a β-lactam [1,38]. The coverage for PES pathogens should only be given if risk factors are present. Unfortunately, the superiority of a β-lactam plus a macrolide compared to a β-lactam plus a fluoroquinolone in the treatment of severe CAP remains unconfirmed.
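The risk-factor-driven strategy described above is, operationally, a count-and-threshold rule: tally the PES risk factors present and escalate the regimen at two or more. A minimal sketch, assuming each risk factor is represented as a boolean (the function name and regimen strings are illustrative, not from the original study):

```python
def select_regimen(risk_factors):
    """Count-and-threshold treatment rule in the spirit of Maruyama et al. [37].

    `risk_factors` is an iterable of booleans, one per PES risk factor
    (recent antibiotics, poor functional status, recent hospitalization,
    late-onset pneumonia, hemodialysis, immunosuppression).
    """
    count = sum(bool(factor) for factor in risk_factors)
    if count <= 1:
        # 0-1 risk factors: standard CAP therapy
        return "standard therapy (beta-lactam plus macrolide)"
    # >=2 risk factors: regimen appropriate for hospital-acquired pneumonia
    return ("HAP-type therapy (antipseudomonal beta-lactam plus quinolone "
            "or aminoglycoside, with optional linezolid or vancomycin)")
```

The point of expressing it this way is that the rule depends only on the number of risk factors and disease severity, not on where the pneumonia was acquired, which is what the authors argue simplifies treatment selection.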
A recent meta-analysis [39] of patients with severe CAP showed that patients receiving a β-lactam plus a macrolide were discharged from the hospital about 3 days earlier than patients treated with a β-lactam and fluoroquinolone. The overall mortality also differed significantly between the groups, with rates of 19% for β-lactam plus macrolide therapy and 27% for β-lactam plus fluoroquinolone therapy. However, the length of ICU stay did not differ between the groups. More recently, a Spanish study [40] investigated the effect on mortality of a combined β-lactam/macrolide therapy for CAP according to the etiology and the inflammatory status (measured by the levels of C-reactive protein). The study included 1715 CAP patients with known etiology; the authors found that the combination of a β-lactam with a macrolide was associated with a lower mortality in patients with pneumococcal CAP and in patients with a high systemic inflammatory response. However, two randomized controlled trials (RCTs) showed that combination therapy with a β-lactam and a macrolide did not significantly reduce the mortality of non-ICU CAP patients [41,42]. We recommend following the current international guidelines for empiric therapy in cases of severe CAP [1] and using the PES score [31] to identify patients at risk for PES pathogens. Empiric antibiotics that cover PES pathogens should be used in patients with a PES score ≥5 points (Figure 2). Are There Any New Antibiotics for PES Pathogens in Severe CAP? The research and development of new antibiotics is scientifically and economically challenging, yet it remains an essential goal given current global needs and the spread of antimicrobial resistance. In the European Union alone, it has been calculated that approximately 25,000 deaths are caused by antibiotic-resistant microorganisms each year, with the global burden estimated to be 700,000 deaths per year [43].
The production of new antibiotics has declined in recent decades, however, and between 2010 and 2018, only eight new antibiotics were registered. Ceftobiprole is a broad-spectrum cephalosporin that has activity against many pathogens that cause pneumonia, including gram-positive pathogens, such as penicillin-resistant Streptococcus pneumoniae (PRSP) and MRSA, and various gram-negative pathogens (including Pseudomonas species). Ceftobiprole blocks the transpeptidase activity of the penicillin binding proteins in gram-positive and gram-negative pathogens. This causes the synthesis of peptidoglycan to decrease and the bacterium to die by osmotic lysis or by autolysis. The bactericidal activity of ceftobiprole is also time dependent [44]. Results from randomized, double-blind, phase III clinical trials have demonstrated that ceftobiprole monotherapy is noninferior to ceftriaxone both as monotherapy and in combination with linezolid when treating severe CAP [45]. In 2008, the US Food and Drug Administration (FDA) declined to approve ceftobiprole. However, in 2015, it was designated by the FDA as an infectious disease product for the treatment of pulmonary and skin infections [46]. In contrast, in 2013, ceftobiprole was approved by the European Medicines Agency (EMA) for the treatment of CAP and hospital-acquired pneumonia, excluding ventilator-associated pneumonia, at a recommended intravenous dose of 500 mg every 8-12 h in adults [47]. Ceftaroline is a fifth-generation extended-spectrum cephalosporin that binds to penicillin binding proteins and prevents bacterial cell wall synthesis. Its antimicrobial activity is directed against gram-positive organisms, including S. pneumoniae, Streptococcus pyogenes, and S. aureus (including MRSA, vancomycin-resistant S. aureus, and hetero-resistant vancomycin-intermediate S. aureus), as well as many common gram-negative organisms, such as Haemophilus influenzae and Moraxella catarrhalis.
Phase III clinical trials (FOCUS 1 and 2) have found that ceftaroline is noninferior to ceftriaxone for the treatment of CAP, with cure rates exceeding 82% [48,49]. Ceftaroline is usually well-tolerated, and in clinical trials, only 3% of subjects discontinued therapy due to adverse effects. The most common adverse effects were rash, diarrhea, headache, hypokalemia, insomnia, and phlebitis. The recommended adult dosage is 600 mg/12 h intravenously over 1 h for 5-7 days. Ceftaroline was approved in 2010 for the treatment of acute bacterial skin and skin structure infections (ABSSSIs) and community-acquired bacterial pneumonia by the FDA [50]. In 2012, it was approved by the EMA for the treatment of complicated skin and soft tissue infections and community-acquired pneumonia [51]. Omadacycline is a semi-synthetic aminomethylcycline derived from minocycline. It has shown a broad spectrum of antimicrobial activity against aerobic and anaerobic gram-positive bacteria (S. pneumoniae, S. aureus, MRSA), gram-negative bacteria (Haemophilus influenzae, Klebsiella pneumoniae), and atypical bacteria (Chlamydophila pneumoniae, Legionella pneumophila, and Mycoplasma pneumoniae). Moreover, it has demonstrated activity against MDR bacteria. Omadacycline binds to the 30S ribosomal subunit and blocks bacterial protein synthesis [52]. In a phase III clinical trial (OPTIC study) [53], omadacycline was noninferior to moxifloxacin for the treatment of bacterial CAP. Patients were randomized to IV omadacycline 100 mg every 12 h for two doses followed by 100 mg/day or moxifloxacin 400 mg/day for 3 days, both intravenously, with the option to switch to oral therapy or continue for a total of 7-14 days. In the intention-to-treat population, omadacycline performed similarly to moxifloxacin at the early clinical response evaluation (81.1% and 82.7%, respectively).
At the posttreatment evaluation, the efficacies of omadacycline versus moxifloxacin were similar in both the intention-to-treat (87.6% vs. 85.1%) and the clinically evaluable populations (92.9% and 90.4%, respectively). In 2018, omadacycline was approved by the EMA and FDA for community-acquired bacterial pneumonia and acute bacterial skin and skin structure infections [52,54]. Lefamulin is a novel semisynthetic pleuromutilin that inhibits bacterial protein synthesis. It binds to the peptidyl transferase center of the 50S bacterial ribosome, preventing the binding of transfer RNA for peptide transfer. Lefamulin expresses antimicrobial activity against typical (S. pneumoniae, H. influenzae) and intracellular pathogens (Mycoplasma pneumoniae, Legionella pneumophila, and Chlamydophila pneumoniae) associated with CAP but also has activity against MRSA and vancomycin-resistant Enterococci. In a phase III clinical trial (LEAP 1) [55], the efficacy and safety of lefamulin (intravenous and oral) were compared with those of moxifloxacin (with or without linezolid). The study included 551 adult patients with bacterial CAP, of whom 276 patients were randomized to receive lefamulin 150 mg every 12 h and 275 were randomized to receive moxifloxacin 400 mg every 24 h (with or without linezolid depending on the clinical suspicion of infection by MRSA). Lefamulin was noninferior to moxifloxacin for the primary Food and Drug Administration (FDA) efficacy outcome of early clinical response (87.3% for lefamulin vs. 90.2% for moxifloxacin +/− linezolid; a −2.9% difference, 95% CI −8.5 to 2.8). The new antibiotic also met the European Medicines Agency non-inferiority endpoint of investigator assessment of clinical response (IACR) at a test-of-cure visit 5-10 days after therapy (86.9% for lefamulin vs. 89.4% for moxifloxacin +/− linezolid; a −2.5% difference, 95% CI −8.4 to 3.4).
In a second phase III clinical trial (LEAP 2) [56], the efficacy and safety of lefamulin (5 days oral) were compared with those for moxifloxacin (7 days oral) among 738 adults with moderately severe bacterial CAP. Lefamulin met the FDA primary endpoint of non-inferiority (10.0% margin) for an early clinical response at 72-120 h following therapy in the intention-to-treat population. Early clinical response was 90.8% after 5 days of treatment with lefamulin and 90.8% after 7 days of treatment with moxifloxacin (treatment difference, 0.1; 95% CI −4.4 to 4.5). Lefamulin also met the EMA primary endpoint of non-inferiority (10.0% margin) based on an IACR at 5-10 days following drug dosing in the modified intention-to-treat and clinically evaluable at test-of-cure populations. The IACR rates for the modified intention-to-treat population were 87.5% for lefamulin and 89.1% for moxifloxacin (treatment difference −1.6; 95% CI −6.3 to 3.1), whereas for the clinically evaluable at test-of-cure population, they were 89.7% for lefamulin and 93.6% for moxifloxacin (treatment difference −3.9; 95% CI −8.2 to 0.5). Lefamulin is currently undergoing FDA and EMA review for the treatment of CAP, both in intravenous and oral formulations. Conclusions It is crucial that we identify patients with severe CAP at risk of being infected with PES pathogens. Specific risk factors, the local ecology, and resistance patterns should always be considered when determining the most appropriate empirical antibiotic therapy. Our recommendation is to follow the current international guidelines for empiric therapy in severe CAP and to use the PES score to categorize patients at risk of infection with PES pathogens, reserving the coverage for PES pathogens for patients at high risk (i.e., PES score ≥5 points).
Of course, clinicians will need to become aware of the pharmacological characteristics and microbial activities of the new antibiotics approved for the management of CAP, especially their broad spectrum of coverage.
Technology of Acid Soil Improvement with Biochar: A Review This paper comprehensively analyzes and summarizes the main research progress of biochar in improving acid soil in China and abroad. In this paper, the distribution, causes of formation, and harms of acid soil are introduced, the differences between biochar improvers and traditional improvers are compared, the structural and functional basis of biochar is expounded, and the improvement of the physical and chemical properties of acid soil is analyzed. Finally, in view of the current situation of China's agricultural development, the paper highlights issues that require attention in future biochar research, in order to provide a reference for the application and industrial development of biochar. Introduction Since the beginning of the 21st century, the impact of the food, environmental, and energy crises on humanity has become increasingly serious, and governments, experts, and other parties concerned have been searching for ways to cope with such crises, but with little effect. In this context, with its unique structure, physical and chemical characteristics, abundant sources of raw materials, and extensive application value, biochar has become known to the public as "black gold", been recognized by the academic circle, and become one of the hot research areas. With regard to agricultural production, we must be soberly aware of the importance of the agroecological environment.
The "soil, crop, and environment" are interdependent and inseparable from each other. Especially in China, it is an indisputable fact that a higher degree of soil acidification is affecting crop yields. Biochar technology conforms to modern agricultural development concepts such as "low carbon, environmental protection, and sustainable development", so it will exert an important effect in the control of soil acidification, the promotion of a virtuous cycle of the agricultural environment, and sustainable development [1]. Causes for soil acidification There are natural causes and human causes for soil acidification. Natural acidification is universal and inevitable in nature. Human factors have accelerated this process to a great extent. Natural acidification mainly refers to the phenomenon in which a large number of salt-based ions are leached from the soil and the exchangeable hydrogen and aluminum ions increase in large quantities in some regions due to heavy precipitation; this process of acidification is relatively slow. Human factors mainly include acid deposition, improper application of nitrogen fertilizer, continuous cropping, and the plantation of acid-producing crops. Acid deposition refers to the phenomenon in which the sulfur and nitrogen compounds produced from the combustion of fossil fuels and the emission of automobile exhaust fall onto the ground after processes such as diffusion, precipitation, or the action of gravity. When acidic fertilizers such as ammonium chloride are applied, the ammonium ions are oxidized and then absorbed and taken away by crops, while the content of hydrogen ions and aluminum ions increases, leading to a decrease in soil pH. Continuous cropping and the plantation of acid-producing crops are also important causes of soil acidification [6]. Hazards of soil acidification 2.3.1. Massive leaching loss of salt-based ions.
Soil acidification leads to accelerated leaching loss of salt-based ions, and the amount of such leaching loss is affected by the pH value of the soil. If the pH value of the soil decreases, the positive charge in the soil increases and the negative charge decreases, and the adsorption of calcium, magnesium, potassium, and other nutrient ions decreases significantly. The strength with which such ions are bound to the soil is proportional to the pH value, so that such ions are more likely to be leached with water. Experiments confirm that some minerals in the soil will be weathered after leaching by acid rain and then release salt-based ions [8], indicating that long-term leaching by acid rain will lead to the loss of the soil nutrient pool, resulting in the impoverishment of soil nutrients. Release of metallic ions. Due to soil acidification, a large number of aluminum ions will be released from the aluminum minerals in the soil, leading to aluminum poisoning of plants. Studies have shown that under the influence of aluminum toxicity, the growth of the plant root is inhibited, and the absorption and transport function of the root and the enzyme activity of the root decrease rapidly. An excessively high concentration of aluminum in plants will also inhibit cell mitosis and DNA synthesis, affect enzyme activity, destroy the structure of the cell membrane, and inhibit nutrient absorption [9]. At low pH values, the solubility and activity of heavy metal ions such as manganese, copper, and zinc in the soil will increase. When their concentration exceeds a certain limit, heavy metal poisoning will occur in crops, affecting the growth and development of crops and posing a potential threat to agricultural production and the ecological environment. The release of heavy metals not only affects the growth of crops but also enters the food chain through absorption and enrichment by plants.
However, most heavy metals cannot be metabolized or eliminated normally in animals, so heavy metal poisoning will occur if humans and animals ingest more than a certain amount [10].

Decreased microbial activity.

There are a large number of beneficial microorganisms in the soil, which play an important role in the growth of crops. Most beneficial microorganisms, however, thrive in neutral environments with a pH value ranging from 6.5 to 7.5. Research shows that as the soil pH value decreases, the species and activity of such microorganisms decline, which further affects nitrogen fixation and the mineralization of organic matter in the soil, seriously affects nutrient conversion, and results in lower crop yields [11]. In general, when the microbial diversity of the soil is high, it is difficult for pathogens to breed; the ratio of bacteria to fungi decreases as continuous cropping years are extended, so the soil fertility evolves from "bacterial" to "fungal". The underlying mechanism is that some microbial populations, especially plant-pathogenic fungi, accumulate in large quantities during continuous cropping, leading to serious plant diseases and insect pests that cannot be controlled and have a severe impact on crop production [9].

Decreased nutrient availability.

The adsorption of potassium, calcium, magnesium and other cations by soil colloids weakens as soil acidification intensifies. As a result, aluminum ions in the soil solution constantly crowd out salt-based ions, resulting in a decrease in salt-based saturation and cation exchange capacity. The diversity and quantity of soil microorganisms are also affected by soil acidification. In highly acidic soil, almost all microorganisms are affected by aluminum toxicity, and the low soil pH inhibits the activity and quantity of soil microorganisms.
Because soil microorganisms are directly involved in the decomposition of organic matter in the soil, the balance of carbon, nitrogen, phosphorus and sulfur in the circulation of these elements is indirectly affected whenever microbial activity is affected. The absorption of calcium and magnesium ions by plant roots is also inhibited by competing cations present at higher concentrations in acid soil [12]. The contents of calcium and magnesium in the leaves likewise decrease significantly as hydrogen ions in the soil increase [13].

Traditional restoration methods

Aiming at the phenomenon of soil acidification, experts and scholars all over the world are trying to find effective soil improvers. Currently, traditional acid-soil improvers fall into three major categories: lime, industrial by-products and organic materials [14]. Studies show that all three categories have a certain effect on the improvement of acid soil, and can increase the content of salt-based ions in the soil and reduce its aluminum toxicity and acidity. However, such improvers serve the single purpose of improvement and need to be applied frequently, with certain side effects.

Lime improver

Limestone mines are widely distributed in China, and lime is characterized by a simple production process and low price; therefore, lime improvers are often used in agricultural production. Lime-like substances mainly include limestone, quicklime, and slaked lime. At present, the most commonly used method of soil improvement is the application of lime. Lime is rich in elements such as calcium and magnesium, and can increase the content of exchangeable calcium and magnesium in the soil after application. Owing to the strong flocculation of calcium ions and the formation of hydroxides with some amorphous aluminum and iron in the soil, the application of lime can reduce the forces acting between soil particles and improve the soil structure [15].
The application of such an improver can easily reduce the acidity of the topsoil and increase the concentration of exchangeable calcium ions in it; it can quickly and effectively neutralize hydrogen ions in the soil, increase the soil pH value, and neutralize potential acids in the soil; it can improve the soil structure, reduce the toxic effect of heavy metals on crops, and improve the quality and yield of crops; it can also reduce the content of exchangeable aluminum in the soil solution and supplement the calcium element that is lacking in acid soil [16]. However, there are also some deficiencies in the application of lime to acid soil. Long-term application of lime accelerates the leaching of potassium and magnesium ions, and the cessation of lime application leads to a stronger re-acidification process. In addition, applying lime in large amounts or over the long term not only leads to soil hardening and the formation of "lime hardening fields", but also causes an imbalance of calcium, potassium and magnesium in the soil, resulting in reduced production. The application of lime may also cause the precipitation of hydrated oxides of magnesium and aluminum, reducing the concentration of magnesium ions in the soil solution and their availability to plants [17]. Therefore, the improvement of acidified soil with lime is not satisfactory.

Industrial and mining by-products

Research shows that dolomite, fly ash, phosphogypsum, alkali slag and other minerals, as well as silt, pulping wastewater, and other substances, contain alkaline components that can be used to neutralize acid components in the soil and improve soil acidity or alkalinity. Therefore, such industrial and mining by-products are often used as raw materials for improving acid soil [18]. Dolomite is mainly composed of crystalline calcium carbonate and magnesium carbonate.
The contents of exchangeable aluminum and manganese in the soil decrease significantly after the application of dolomite; with an increased amount of dolomite powder, the contents of available phosphorus and available potassium in the soil show an increasing trend, and re-acidification is unlikely to occur. Characterized by low density and high porosity, fly ash can improve the soil structure and promote crop metabolism. With its content of alkaline substances such as calcium oxide and magnesium oxide, it can neutralize hydrogen ions in the soil, improve the nitrate reductase activity of crops, enhance the nitrification of organic matter, promote protein synthesis, and improve the yield and quality of crops. Phosphogypsum, as an improver for highly acidic soil, can promote crop growth and improve the balance of nitrogen, phosphorus, potassium and calcium in crops, which is very important for achieving high yield and high quality; it can play a dual role of soil improvement and fertilization [19]. This kind of improver has a certain improvement effect on acid soil, and most such improvers are sourced from industrial by-products, which are relatively cheap. However, most of them contain a certain amount of toxic metals; for example, phosphogypsum and phosphate ore powder contain small amounts of lead, cadmium, mercury, arsenic and chromium [20]. Despite the small content, such improvers can pollute the environment.

Organic materials

In addition to the above soil improvers, some organic materials such as crop stalks, green manure, and plant ash are also used in the improvement of acid soil.
Since ancient times, people have purposefully applied organic materials to the soil to fertilize and improve it. Microorganisms in the soil can decompose organic matter and produce a variety of organic complexes that enrich soil nutrients, raising the availability of soil nutrients while enhancing the buffering power of the soil against acid and alkali. The application of organic matter can enhance the agglomeration of soil particles, improve the pore structure of the soil, and strengthen its aggregate structure [21]. Such organic materials contain a large amount of nutrients, so they can improve soil fertility, increase the types and activities of soil microorganisms, change the population distribution density, reduce the content of exchangeable aluminum in the soil, and reduce the toxic effect on crops. The application of organic materials can increase the soil pH value, but the duration of the effect is limited, requiring multiple applications. Moreover, the organic materials release a large amount of CO2 in the process of mineralization, and thus fail to play the role of soil carbon sequestration. The ammonium ions produced during mineralization also undergo nitrification, which can even lead to a renewed decrease in soil pH [22].

Biochar improver

The pyrolysis of biomass materials under anaerobic or anoxic conditions generates CO2, flammable gas, volatile oil, tar substances, and a solid substance rich in carbon, which is commonly known as biochar [23]. The biomass materials used to prepare biochar may be cheap household garbage and wastes. The conversion of agricultural wastes into biochar through pyrolysis can reduce the environmental pollution caused by such wastes and replace non-renewable energy with renewable energy. Therefore, in recent years, biochar has attracted extensive attention from academia, enterprises and government departments.
When biochar is used as a soil improver, a number of beneficial effects are conferred on the soil. Biochar can improve the physical properties of the soil, enhance its water retention capacity, promote the development of microbial populations, enhance microbial activity, reduce nutrient leaching, promote nutrient cycling, and increase the content of organic carbon in the soil, so as to promote plant growth. The surface of biochar carries negative charges and has a high cation exchange capacity, which can improve the soil's ability to hold and retain calcium, potassium, magnesium, NH4+ and other nutrient ions, and improve soil fertility [24]. Another important feature of biochar is that it is alkaline and has a high pH value, so it can be used for the improvement of acid soil. Owing to its stability, biochar can overcome the deficiencies of directly applied lime improvers and avoid re-acidification. In addition, the raw materials of biochar are sourced from biomass, greatly reducing the risk of heavy metals. Moreover, the application of biochar can better inhibit soil nitrification without repeated application.

Composition and functionality basis of biochar

The function of biochar mainly depends on its physical and chemical properties, which in turn depend on the materials and conditions used in its preparation, such as temperature, oxygen content, and time. Therefore, different raw materials and preparation conditions lead to great differences in the properties of the biochar obtained. Nevertheless, biochar also has certain commonalities, and the utilization of such commonalities is the current hotspot and focus of biochar research. Biochar contains a certain amount of alkaline substances, so it is generally alkaline.
Research shows that organic functional groups such as -COO- and -O- on the surface of biochar and the carbonates in biochar are its main forms of alkali [25]. The contribution of carbonates to the alkalinity of biochar increases with the preparation temperature, while the contribution of organic functional groups shows the opposite trend. Mainly composed of aromatic hydrocarbons and elemental carbon or carbon with a graphite structure, biochar is stable in nature, with a carbon content of more than 60%. It is not only highly aromatic, but also has strong resistance to decomposition and high thermal stability, so it is not easily decomposed by microorganisms. Biochar ash is mainly composed of oxides or salts of mineral elements such as potassium, sodium, calcium and magnesium, which are alkaline when dissolved in water. Therefore, biochar can increase the soil pH value and improve the acidity or alkalinity of the soil [26], and some of the mineral elements (nitrogen, phosphorus, potassium, etc.) contained in biochar are important soil nutrients. In addition, owing to the high organic carbon content of biochar [27] and its abundance of organic functional groups (carboxyl, hydroxyl, aldehyde, etc.), biochar can also increase the content of organic matter in the soil. With its high porosity, biochar applied to the soil can reduce the soil bulk density, improve the water, gas and heat conditions of the soil, and facilitate the growth of soil microorganisms and crops as well as the degradation of organic pollutants in the soil [24]. The large specific surface area of biochar can increase the soil's capacity to hold water and fertilizer, and reduce the availability of heavy metals and organic pollutants in the soil. Based on such fundamental properties, biochar has adsorption capacity, antioxidant capacity, and strong resistance to biological decomposition, so it can be widely used in industry, agriculture, energy, the environment and other fields.
Modified biochar

To further improve the performance of biochar, researchers have modified it. Modification techniques for biochar include changing the properties of the biochar under different conditions, or adding elements or compounds inside the structure or on the surface of the biochar. The introduction of such elements or compounds leads to specific behaviors of the biochar or enhances its efficacy. Biochar modified in this way to have specific functions is known as modified biochar [28]. There are five modification methods for biochar: chemical modification, physical modification, surface covering by immersion in mineral or inorganic adsorbents, biological modification, and magnetic modification. In chemical modification, acid, alkali, hydrogen peroxide, and other activators are used to modify the biochar so as to change its surface chemical structure, giving it more functional groups and microporous structures, a larger specific surface area, and greater cation exchange capacity, and thereby enhancing its adsorption capacity for heavy metals, nutrient elements and organic pollutants. Ding et al. [29] modified biochar with sodium hydroxide solution to remove a variety of metal ions, such as Pb2+, Cu2+, Cd2+, Zn2+ and Ni2+, from aqueous solution; after such modification, the specific surface area, surface oxygen-containing functional groups, cation exchange capacity, and thermal stability of the biochar were improved, and the adsorption capacity of the modified biochar for all of these ions was 2.6-5.8 times that of the raw charcoal.
In physical modification, the performance of biochar is enhanced by improving its pore structure, increasing the number of micropores and mesopores, increasing its specific surface area, introducing oxygen-containing functional groups, and improving its variable surface chemical properties; compared with chemical modification, physical modification is cleaner and easier to control. In traditional biochar pyrolysis, energy is first converted into heat, which is then transferred from the biomass surface to the interior along a temperature gradient. In microwave-assisted pyrolysis [30], by contrast, electromagnetic energy is converted directly into heat at the molecular level, and the heat is transferred from the inside of the biomass to the surface. During pyrolysis, the larger molecules volatilize first. As the pyrolysis temperature rises, the size of the volatile molecules gradually decreases; the volatilization of smaller molecules increases the micropores of the biochar and decreases the number of basic groups. Biochar compositing is a method of modifying biochar by coating or impregnating different metal oxides, clay minerals and carbonaceous materials on its surface during different pyrolysis periods. In [31], manganese oxide loading modification is adopted, and the adsorption capacity of the modified biochar for Cd2+ is 81.10 mg/g, while that of the raw biochar is 32.74 mg/g, which is attributable to the well-developed pore structure and large specific surface area after modification. The magnetic modification of biochar is similar to the loading of carbon material onto the surface of biochar. By loading magnetic material onto the biochar, it is possible not only to improve its adsorption capacity but also to magnetize it, so that the biochar can be recycled.
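As a quick sanity check on the figures above, the enhancement factor implied by the reported Cd2+ capacities can be computed directly (a minimal sketch; the two capacities are taken from the text, and the ratio is simple arithmetic rather than anything from the cited study):

```python
# Enhancement factor of Mn-oxide-modified biochar for Cd2+ adsorption,
# using the capacities reported in the text (mg/g).
q_modified = 81.10  # modified biochar capacity
q_raw = 32.74       # raw biochar capacity

factor = q_modified / q_raw
print(f"enhancement factor: {factor:.2f}x")  # ~2.48x
```

Note that this single-metal factor sits just below the 2.6-5.8x range reported for the NaOH-modified biochar of Ding et al. [29], illustrating that different modification routes yield different gains.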
In biological modification, anaerobic bacteria are used to convert organic matter into biogas and digestate, and biochar is then obtained from the digestion residues. Digested biochar has a higher specific surface area and pH value, stronger anion exchange capacity, and other excellent properties. Arán et al. [32] made bagasse into biochar by means of anaerobic digestion; although less methane was produced in anaerobic digestion, and the biochar yield from anaerobically digested bagasse was similar to that from undigested bagasse, the biochar made from digested bagasse had a higher pH value and specific surface area, larger ion exchange capacity, stronger hydrophobicity, and more negative surface charges.

Improvement of physical properties of soil by biochar

5.1.1. Improvement of the water-holding capacity of soil.

The water-holding capacity of soil mainly depends on the pore content, size distribution and continuity of the soil. Biochar has a porous structure, so its pore distribution, particle size, mechanical strength, and connectivity affect the pore structure of the soil; when applied to soil, biochar can increase the soil porosity and promote microbial activity. After the application of biochar, owing to its large specific surface area and its loose, porous character, the number of soil pores and the content and stability of soil aggregates can be improved, and the improvement of the soil pore structure in turn affects the water-holding capacity of the soil. Studies show that as the added amount of biochar increases, the soil water content at first increases gradually because of the increase in pores; but when the added amount continues to increase, the water content decreases, which is mainly related to the water repellency of the biochar surface [33].

Reduction of soil bulk density.

The agricultural performance of a soil varies with its bulk density.
Soil with low bulk density and high organic matter content is more conducive to the release and retention of soil nutrients and the reduction of soil hardening, and is beneficial to seed germination and the saving of planting costs. Zhang Meng et al. [34] applied 25 g/kg biochar to silty soil, and the soil bulk density decreased significantly from 1.52 g/cm3 to 1.33 g/cm3. There are three possible reasons for the decrease of soil bulk density after the application of biochar. The first is the "dilution effect": the bulk density of biochar is smaller than that of soil, so the total bulk density of the soil decreases after the addition of biochar [12]. The second is the effect on the charge of soil colloids. Solutions or suspensions of organic compounds cause the clay particles to move relative to one another by changing their charges, thereby increasing the permeability coefficient of the clay, the cracks between the clay particles, and the secondary porosity of the soil. This is the mechanism by which soil compactness is reduced after organic matter is added to soil, and the ash content of biochar may play the same role [27]. The last is that biochar added to the soil increases the friction between soil particles, thus reducing the compactness of the soil and decreasing its bulk density.

Promotion of carbon cycling.

Nowadays, biochar has become a research hotspot worldwide, and its application potential in soil improvement has become widely recognized. With its highly stable carbon structure, biochar can be retained in soil for hundreds to thousands of years, so it can not only improve the organic matter in soil over the long term, but also alleviate the greenhouse effect. As a carbon source, biochar can increase the total carbon content of the soil, giving microorganisms more carbon sources.
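The contribution of the "dilution effect" alone can be checked with a simple volume-additive mixing calculation (a minimal sketch; the biochar bulk density of 0.30 g/cm3 is an assumed illustrative value, not a figure from the cited study):

```python
# Predicted soil bulk density from the "dilution effect" alone,
# assuming the volumes of soil and biochar are additive on mixing.
rho_soil = 1.52   # g/cm3, initial bulk density reported in the text
rho_char = 0.30   # g/cm3, ASSUMED biochar bulk density (illustrative)
dose = 25.0       # g of biochar per kg of soil, as in the text

m_soil = 1000.0                                  # g of soil (1 kg basis)
m_char = dose                                    # g of biochar added
v_total = m_soil / rho_soil + m_char / rho_char  # cm3, volume-additive mix
rho_mix = (m_soil + m_char) / v_total            # g/cm3 of the mixture
print(f"dilution-effect prediction: {rho_mix:.2f} g/cm3")
```

Under this assumption the dilution effect alone predicts roughly 1.38 g/cm3, a smaller drop than the measured 1.33 g/cm3, which is consistent with the text's point that the colloid-charge and particle-friction mechanisms also contribute.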
Studies show that the soluble organic carbon in biochar can be utilized by microorganisms, and that biochar, as a potent carbon source, can provide microorganisms with a continuous supply of soluble organic carbon. Adding biochar to soil is a process in which greenhouse gases in the atmosphere are fixed into the soil directly or indirectly, which not only improves the soil structure, but also increases the nutrient utilization rate and reduces greenhouse gas emissions [35]. Furthermore, because it decomposes very slowly, biochar can serve as a sustainable method of carbon sequestration, or be used as a material for carbon emission reduction. The carbon content of biochar is very high, so biochar added to soil can increase the soil's organic carbon content, reduce the mineralized amount of organic carbon, and improve the stability of organic carbon. As a renewable resource, biochar can become a green soil improver substituting for fossil materials and improve soil properties.

Increase of the pH value of acid soil.

The chemical properties and nutrient composition of biochar are the main factors affecting soil improvement and increases in crop yield. During the preparation of biochar, the material becomes alkaline as organic matter is continuously pyrolyzed and ash is continuously generated, and the pH value of the biochar increases with the pyrolysis temperature and the duration of pyrolysis. Biochar is mainly composed of an aromatic carbon structure and is generally alkaline; it has large porosity and specific surface area, and contains a certain amount of nutrient elements.
The pH value of biochar ranges from 7.5 to 7.8 [36]. Most of the substances contained in biochar are alkaline, and it has been found that such alkaline substances can easily enter and change soil components, so that applying biochar to acid soil can increase the soil pH value and reduce soil acidity [37]. Experiments confirm that, in increasing the pH value of red soil, the improvement effect ranks sawdust biochar > sludge biochar > wheat straw biochar. This ranking is related to the alkalinity of the biochar itself: owing to its high content of alkaline substances, sawdust biochar achieves the most significant increase in soil pH [38], and within a certain range the acidity-reducing effect is positively correlated with the applied amount of biochar [39,40]. When studying the effect of different wheat straw biochars on red soil orchard land, a foreign laboratory found that, compared with the case without biochar, applying biochar at 40 t/hm2 significantly reduced the bulk density of the orchard soil and increased the soil pH value by 0.88 units; biochar made at a carbonizing temperature of 700ºC achieved the most significant remediation effect on acid soil [41].

Reduction of exchangeable aluminum in acid soil.

The essence of soil acidification is a process in which hydrogen ions increase, aluminum ions are hydrolyzed, and salt-base cations decrease. Aluminum toxicity is the most important factor restricting crop growth in acid soil. When the concentration of soluble aluminum ions in the soil solution exceeds a certain limit, it has a toxic effect on crops. Exchangeable aluminum is the main active form of aluminum in acidic soil. Adding biochar to the soil can reduce its content and significantly mitigate its toxic effect on plants [42].
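The acid-generating hydrolysis of aluminum described above proceeds stepwise; a standard textbook formulation (general aquatic chemistry, not taken from the cited studies) is:

```latex
\begin{align*}
\mathrm{Al^{3+} + H_2O} &\rightleftharpoons \mathrm{Al(OH)^{2+} + H^+} \\
\mathrm{Al(OH)^{2+} + H_2O} &\rightleftharpoons \mathrm{Al(OH)_2^{+} + H^+} \\
\mathrm{Al(OH)_2^{+} + H_2O} &\rightleftharpoons \mathrm{Al(OH)_3\,(s) + H^+}
\end{align*}
```

Each step releases a proton, which is why aluminum hydrolysis drives acidification; conversely, raising the soil pH (for example with biochar) pushes these equilibria toward insoluble Al(OH)3 and lowers the activity of exchangeable aluminum.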
Biochar contains a certain amount of salt-base cations such as calcium, magnesium and potassium. When biochar is applied to soil, these salt-base cations exchange with the exchangeable aluminum in the soil, reducing its content. The increase in exchangeable salt-based cations raises the content of salt-based nutrients in the soil and improves its fertility, with a particularly significant increase in exchangeable calcium and potassium [43]. The increase in calcium and magnesium content in the soil can also effectively alleviate the toxic effect of aluminum on plants, because calcium and magnesium compete with aluminum ions for adsorption sites on the surface of plant roots and reduce the amount of aluminum ions on the root surface. Experiments confirm that the effect of biochar on exchangeable aluminum in acid soil is realized mainly by changing the soil pH value [44]. As the soil pH increases, exchangeable aluminum is hydrolyzed into hydroxyl aluminum and partly forms aluminum hydroxide or oxide precipitates. The surface of biochar is rich in oxygen-containing functional groups; these organic functional groups can form stable chelates with aluminum, transforming exchangeable aluminum in the soil into organically complexed aluminum with low activity.

Inhibition of soil nitrification.

Nitrification is a process in which ammonia in soil is oxidized to nitrate nitrogen under the action of microorganisms; it consists of two consecutive processes carried out by ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB). Under suitable environmental conditions, ammonium in soil can undergo rapid nitrification. Nitrate, the nitrification product, is not easily adsorbed by soil colloids, so nitrification readily poses a serious threat to the environment. Some scholars also believe that nitrification in soil releases protons, resulting in a decrease of pH value [45].
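The two-step process and its proton release can be written out explicitly (standard nitrification stoichiometry, not taken from the cited references):

```latex
\begin{align*}
\text{AOB:}\quad & \mathrm{NH_4^+ + \tfrac{3}{2}\,O_2 \longrightarrow NO_2^- + H_2O + 2\,H^+} \\
\text{NOB:}\quad & \mathrm{NO_2^- + \tfrac{1}{2}\,O_2 \longrightarrow NO_3^-} \\
\text{Overall:}\quad & \mathrm{NH_4^+ + 2\,O_2 \longrightarrow NO_3^- + H_2O + 2\,H^+}
\end{align*}
```

The two protons released per ammonium ion in the AOB step account for the acidification attributed to nitrification, which is why suppressing AOB activity also slows soil acidification.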
Studies show that biochar can improve the acid resistance of soil by inhibiting soil nitrification [46]. Owing to its large specific surface area, biochar can adsorb NH4+ from the soil, reducing the available NH4+ and thus inhibiting nitrification [47]. In addition, biochar has a significant inhibitory effect on AOB, and the amount of AOB is positively correlated with the content of nitrate nitrogen but unrelated to the added amount of biochar. It has been further demonstrated that biochar can suppress AOB abundance and thus reduce nitrification in soil [48]. Biochar plays a dual role in inhibiting soil nitrification. On the one hand, the application of biochar reduces the nitrification capacity and rate by inhibiting the activity of AOB and NOB, and reduces the release of protons during nitrification. On the other hand, the protonation of carboxyl groups on the surface of biochar can improve the acid resistance of the soil and its pH buffer capacity [49]. Yong et al. [50] prepared biochar with the phenolic substances removed and ran a control test against untreated biochar to observe the effect on AOB abundance. They found that the untreated biochar could reduce AOB abundance by three orders of magnitude, proving that the inhibition of nitrification by biochar is caused by the phenolic substances it contains, and that biochar containing phenolic substances can reduce the diversity of AOB in soil.

Increase of nitrogen and phosphorus in soil.

Nitrogen, as the most important limiting factor of crop yield, has attracted much attention in agriculture. To increase crop yields, a large amount of nitrogen fertilizer is used in agricultural production. However, excessive application of nitrogen fertilizer gives rise to a series of environmental problems. In addition, leaching loss of nitrogen is an important reason for the limited utilization rate of fertilizer.
How to reduce the leaching loss of nitrogen and increase the utilization rate of nitrogen fertilizer has therefore attracted attention [51]. In addition to nitrogen, phosphorus in the soil is characterized by easy fixation and poor mobility. Especially in acid soil, iron and aluminum are highly active and readily form insoluble iron and aluminum phosphates with phosphorus, and even occluded phosphorus of low availability. Hence acid soils often have the problem that the total phosphorus content is not low, but the phosphorus availability is [52]. Biochar increases the soil's ability to hold nitrogen mainly in the following ways. The first is to increase the soil's adsorption and holding capacity for ammonium ions: owing to the abundant oxygen-containing functional groups on its surface, its many negative charges, its porous structure, and its large specific surface area, biochar has a strong adsorption capacity for ammonium and so reduces nitrogen loss [53]. The second is that the increase in the soil's capacity to hold ammonium reduces the rate of soil nitrification. The third is that biochar increases the water retention capacity of the soil and reduces the leaching loss of nitrate nitrogen. The fourth is that the porous structure of biochar provides a habitat for microorganisms, which is conducive to the development of nitrogen-fixing microbial communities and enhances the biological nitrogen fixation capacity of the soil [54]. Biochar can increase the soil's ability to retain nitrogen and improve the effectiveness of soil nitrogen. This will not only reduce the amount of fertilizer applied, but also reduce the pollution of the water environment by nitrogen leaching, and cut emissions of N2O and other greenhouse gases. In addition, biochar can also fix phosphorus at soil exchange sites, thus reducing the formation of Fe-P and Al-P.
Also, by changing the soil pH value, metal ions in the soil solution can be adsorbed and fixed so as to reduce the precipitation of phosphorus [55]. The increase of alkaline metal oxides in the soil and the decrease of soluble aluminum in the soil solution are considered the most important reasons why biochar increases soluble phosphorus in soil [56]. In addition, biochar itself is high in phosphorus, which can be used by crops to improve the phosphorus level of the soil and increase crop yield.

Improvement of microbial activity in soil.

Biochar retains the micropore structure of its raw materials, providing a good "refuge" for microbial habitation and reproduction, reducing the competition for survival among microorganisms, and supplying them with different sources of carbon, energy and mineral nutrients, so that they can survive and reproduce vigorously. Mahmood et al. [57] found that after tree ash was applied to the soil, bacterial activity increased along with a change in bacterial community structure, and that biochar can promote the growth of bacteria. In addition, the application of biochar to soil can also change the community structure and activity of soil fungi, and increase the number and activity of soil microorganisms as a whole. With its microporous character, biochar also has good connectivity, on the basis of which it can preserve water and air for a long time and provide good environmental conditions for the growth and reproduction of microorganisms.

Conclusions and prospects

At present, soil acidification, arising for various reasons, is becoming more evident across the whole world, so the research, development and use of soil improvers is particularly important. Compared with other traditional restoration methods, biochar improvers achieve a significant acid-reducing effect.
The main reason is that biochar itself is alkaline, so the acidity of soil can be reduced after its application, and the degree of improvement is positively correlated with the amount of biochar applied. Biochar has significant effects on the chemical properties of soil, such as inhibiting soil nitrification, increasing nitrogen and phosphorus in soil, reducing exchangeable aluminum in soil, and alleviating aluminum toxicity. In addition, biochar improvers can improve the physical properties of soil to a certain extent, such as the water holding capacity of soil, soil bulk density, and carbon cycling. However, we should not lose sight of the existing problems. At present, the application of biochar in soil is not universal, and comprehensive consideration must be given to the relevant characteristics of the raw carbonizing materials and to how such characteristics can make up for the specific defects of a soil. Obviously, further research is needed to fully realize the potential of biochar as a high-quality soil improver [58]. At present, the research prospects of biochar and the problems to be solved are as follows. First, the raw materials of biochar are of many kinds and widely distributed, and there are few studies on the characteristics, environmental effects and influencing factors of biochar under different preparation conditions. Most studies are limited to short-term, small-scale farmland trials and laboratory simulations, with large errors compared with actual applications, resulting in a lack of systematic and comprehensive understanding [59]. Second, in the actual soil environment, many factors lead to soil acidification, while most scientific experiments study only the single-cause restoration mechanism and effect of biochar, so the reliability of their conclusions is limited. If multiple factors coexist, it is still unknown whether the restoration mechanism and effect of biochar will change [60].
Third, current studies on soil improvement with biochar focus on the response of the external environment, and there are few studies on changes in the biochar itself; for example, whether the physical and chemical properties of biochar as an improver change during the restoration and remediation of acid soil environments [61]. In the future, more research on this aspect is needed.
2021-05-07T00:04:11.880Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "d4d8f1471b671e8969b0b30f06128418b78f5d0b", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/692/4/042098", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ae706cf5580d51f91a9e83936a2aca890d49347e", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
232171707
pes2o/s2orc
v3-fos-license
Identification of gene products involved in plant colonization by Pantoea sp. YR343 using a diguanylate cyclase expressed in the presence of plants

Microbial colonization of plant roots is a highly complex process that requires the coordination and regulation of many gene networks, yet the functions of many of these gene products remain poorly understood. Pantoea sp. YR343, a gamma-proteobacterium isolated from the rhizosphere of Populus deltoides, forms robust biofilms along the root surfaces of Populus and possesses plant growth-promoting characteristics. The mechanisms governing biofilm formation along plant roots by bacteria, including Pantoea sp. YR343, are not fully understood and many genes involved in this process have yet to be discovered. In this work, we identified three diguanylate cyclases in the plant-associated microbe Pantoea sp. YR343 that are expressed in the presence of plant roots. One of these diguanylate cyclases, DGC2884, localizes to discrete sites in the cells, and its overexpression results in reduced motility and increased EPS production and biofilm formation. We then performed a genetic screen by expressing this diguanylate cyclase from an inducible promoter in order to identify candidate downstream effectors of c-di-GMP signaling which may be involved in root colonization by Pantoea sp. YR343. Further, we demonstrate the importance of other domains in DGC2884 to its activity, which in combination with the genes identified by transposon mutagenesis, may yield insights into the activity and regulation of homologous enzymes in medically and agriculturally relevant microbes.

quorum-sensing as a mechanism in the rhizosphere for influencing changes in gene expression that can lead to root colonization and biofilm formation (6-9). Indeed, genome analyses showed that acyl-homoserine lactone (AHL)-based signaling systems are prevalent in the microbiome of Populus deltoides (10).
Additionally, plant colonization involves the second messenger signaling molecule, cyclic diguanylate monophosphate (c-di-GMP), which is known to affect motility, virulence, exopolysaccharide (EPS) production, and biofilm formation in many bacterial species (11-15). The levels of c-di-GMP within cells are regulated by two different enzymes: diguanylate cyclases, which catalyze the production of c-di-GMP from two molecules of guanosine triphosphate (GTP), and phosphodiesterases, which degrade c-di-GMP to guanosine

cells were grown in M9 minimal media with 0.4% glucose, we found that twelve diguanylate cyclase reporters showed an average fluorescence intensity below 2.00 (weak or no expression), making them suitable candidates for further study in terms of expression in biofilms, pellicles, and during root colonization (Table 1). To test for expression during biofilm formation, the cells were grown statically in M9 minimal medium with 0.4% glucose for 72 hours in 12-well dishes containing a vinyl coverslip as described in Materials and Methods. These data show that eleven diguanylate cyclases showed increased expression under these conditions, with DGC2884 and DGC2242 showing the highest levels (Table 1 and Fig 1). Interestingly, we found that each of the strains showed an increase in expression during biofilm formation based on GFP fluorescence, but images showed that GFP levels driven from the DGC2884 promoter were not uniform within the biofilm (Fig 1). Instead, we found that GFP was highly expressed in specific patches throughout the biofilm, but expressed at low or undetectable levels in other regions. This expression pattern was also observed in some of the other promoter constructs and is reflected, in part, by the higher Table 1. We also tested for expression during pellicle formation and found that most strains only exhibited a modest increase in expression (Table 1).
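The balance between the two enzyme classes described above can be captured in a minimal kinetic sketch. The rate constants here are hypothetical, not measured values from this study; they only show how a steady-state c-di-GMP level of k_syn/k_deg emerges from opposing synthesis and degradation:

```python
# Toy model of c-di-GMP turnover: zero-order synthesis by diguanylate
# cyclases (k_syn, concentration/min) and first-order degradation by
# phosphodiesterases (k_deg, 1/min), integrated with forward Euler.

def simulate(k_syn=0.5, k_deg=0.1, c0=0.0, dt=0.01, t_end=200.0):
    c, t = c0, 0.0
    while t < t_end:
        c += (k_syn - k_deg * c) * dt
        t += dt
    return c

print(f"simulated level: {simulate():.3f}  (analytic steady state: {0.5 / 0.1})")
```

In this picture, overexpressing a diguanylate cyclase corresponds to raising k_syn, which raises the steady-state c-di-GMP level proportionally.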
Next, we tested the activity of these 12 promoters during root colonization of T. aestivum and P. trichocarpa. Bacteria associated with roots were examined for the presence or absence of fluorescence, since quantification of expression levels was difficult due to plant autofluorescence (Table 1). After one week of growth post-inoculation, we found that DGC2884, DGC3006, and DGC3134 were expressed on T. aestivum and P. trichocarpa roots (Fig 1 and Table 1). We cannot exclude the possibility that the eight untested diguanylate cyclases may also be expressed during

carrying an empty vector (Fig 2). Growth curves were compared in both minimal and rich media (Fig S2). Notably, expression of wild type DGC2884, but not any of the variants, resulted in

(YR343 (pSRK (Km)-DGC2884)) resulted in red, wrinkly colony formation (Fig 2A). In contrast,

suggesting that expression of DGC2884 in the absence of enzymatic activity may still retain some function (Fig 2A). We observed that Congo Red binding by strains expressing the DGC2884ΔTM variant was less than that of the DGC2884-expressing strain and we no longer observed wrinkly colony morphology, supporting the hypothesis that the TM domain of DGC2884 is critical to its function (Fig 2A). Since increased levels of c-di-GMP are typically associated with decreased motility (11, 43, 44), we next tested whether overexpression of these diguanylate cyclases affected motility using a swim plate agar assay. As expected, overexpression of DGC2884 resulted in impaired motility compared to the control strain, which was partially restored in the DGC2884 AADEF variant (Fig 2B). We found that, in comparison to strains overexpressing DGC2884, expression of DGC2884ΔTM resulted in partial restoration of motility behavior reminiscent of that observed for strains expressing the DGC2884 AADEF mutant (Fig 2B).
Together, these data suggest that a fully functional DGC2884 is required to modulate motility. Next, we examined whether overexpression of these diguanylate cyclases influenced biofilm formation (Fig 2C). While each of these strains showed formation of biofilms on vinyl coverslips, the most robust biofilms were formed during expression of the wild type DGC2884,

domain. We also tested the effect of overexpression of each diguanylate cyclase on pellicle formation and calculated the percentage of cells in pellicles and found that overexpression of DGC2884 resulted in significantly increased pellicle formation when compared to the empty vector control (p < 0.005, t-test) (Fig 2D). While expression of DGC2884 AADEF and DGC2884ΔTM also resulted in more pellicle formation than the control (significantly more by DGC2884ΔTM, p < 0.05, t-test), they produced significantly less pellicle than that of wild type cells expressing the full-length DGC2884 (p < 0.05, t-test) (Fig 2D).

that the AADEF mutation indeed affected enzyme activity (Fig 2E, 2F). We also found that expressing DGC2884ΔTM resulted in little to no activity (Fig 2E, 2F). To verify that the genes encoding these diguanylate cyclases were expressed in these cells, we examined transcript levels using RT-PCR (Fig S3). Taken together, results from each of these assays confirm that both

To gain further insight into the function of DGC2884, we performed a simple Protter analysis using the amino acid sequence of DGC2884 (46) and found that the sequence for DGC2884 is predicted to have two transmembrane domains at its N-terminus that make up a CHASE8 domain, followed by the GGDEF domain (Fig 3A). We next examined localization of wild type DGC2884 and DGC2884ΔTM in a wild type background by expressing it fused to either a 3HA or 13Myc tag (Fig 3B).
These data show that DGC2884 was found to primarily localize in discrete foci at the cell pole or towards the mid-cell. In the absence of the N-terminal transmembrane domain, however, DGC2884 no longer localized as discrete foci, but rather the localization pattern became more diffuse with fewer visible foci (Fig 3B and Table 2). To verify that the tag did not alter the expression or function of these enzymes, we performed a motility assay (Fig 3C) and western blot (Fig S5)

Identification of c-di-GMP responsive genes using transposon mutagenesis

Overexpression of DGC2884 resulted in a number of phenotypes (shown in Fig 2)

Behavioral defects observed in selected mutants

Using the list of genes found in the genetic screen (Table 3)

examining EPS production (by observing phenotypes on media with Congo Red) (Fig 4A, 4B). Next, we used the cured transposon mutants to observe pellicle formation (Fig 4C), and measure biofilm production with a crystal violet assay (Fig 4D). Compared to the wild type control, each mutant had a different growth phenotype on media with Congo Red, some of which were more noticeable on one media type over the other (Fig 4A, 4B). These phenotypes were further influenced based on whether the mutant expressed DGC2884 (pSRK (Gm)-DGC2884) or an empty vector (pSRK (Gm)). We next examined the effects of these mutations on pellicle formation and found that the UDP::Tn5, FliR::Tn5, and GlpF::Tn5 mutants produced significantly less pellicle than the wild type strain (Fig 4C). We also examined biofilms attached to vinyl coverslips and found that while some mutants appear to produce more biofilm, such as FliR::Tn5 and GlpF::Tn5, there were no statistically significant differences measured by quantifying Crystal Violet staining. Interestingly, we did find that the UDP::Tn5 and Ndk::Tn5 mutants produced significantly more biofilm than the wild type strain in this assay (Fig 4C).
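The pellicle and biofilm comparisons above rely on two-sample t-tests. A minimal stdlib sketch of the underlying computation (Welch's t statistic) is shown below; the replicate values are invented for illustration and are not the paper's measurements:

```python
# Welch's two-sample t statistic and degrees of freedom, stdlib only.
from statistics import mean, variance

def welch_t(a, b):
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical % of cells in pellicles for four replicates per strain.
empty_vector = [12.1, 11.4, 13.0, 12.5]
dgc2884_over = [28.7, 30.2, 27.5, 29.9]

t, df = welch_t(dgc2884_over, empty_vector)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A p-value would then be read from a t distribution with df degrees of freedom (e.g. via scipy.stats); with data this well separated, t is large and the difference would be called significant.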
Ndk::Tn5 mutants showed a slight, though significant, increase in colonization (Fig 5A; statistically significant differences with p < 0.005, t-test). Comparisons of growth rates between transposon mutants and the wild type strain showed no significant differences for most strains, except for growth with UDP::Tn5 (Fig S4); however, based on growth curves, the maximum OD

YfiN has been shown to modulate production of Psl polysaccharides, whose operon possesses genes also found to regulate amylovoran biosynthesis in Erwinia amylovora (33, 34, 55)

insights into the roles of multiple diguanylate cyclases in coordinating these behaviors.

Bacterial strains and growth conditions.

We sequenced each plasmid from the transposon outwards using the following primers, tpnRL17-1 and tpnRL13-1 (79). All resulting sequences were analyzed using BlastX from NCBI in order to identify the region of DNA flanking each transposon. Individual transposon mutants were grown three to four times sequentially on rich media without selection in order to remove the pSRK (Gm)-DGC2884 plasmid. Removal of the plasmid was verified by growth on kanamycin at 50 µg/mL, but not on gentamycin at 10 µg/mL.

Construction of fluorescent strains. We generated fluorescent strains that were also resistant to

Microimaging, Thornwood, NY). Images were processed using Zen2012 software (Zeiss). Cell fluorescence intensity measurements were performed using Fiji ImageJ for assays with promoter-reporter fusions for DGCs and for the Vc2 Spinach aptamer following the protocol described by Kellenberger, et al (45). Briefly, images were initially collected using the same parameters and then collectively processed so that brightness and contrast was adjusted and normalized across the
2021-03-11T14:09:59.239Z
2021-03-04T00:00:00.000
{ "year": 2021, "sha1": "837d47c611a6ff67421b3f10cfcca597b47029af", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/03/04/2021.03.03.433726.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "837d47c611a6ff67421b3f10cfcca597b47029af", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
199057340
pes2o/s2orc
v3-fos-license
Two Pantoea agglomerans type III effectors can transform nonpathogenic and phytopathogenic bacteria into host‐specific gall‐forming pathogens

Summary Pantoea agglomerans (Pa), a widespread commensal bacterium, has evolved into a host‐specific gall‐forming pathogen on gypsophila and beet by acquiring a plasmid harbouring a type III secretion system (T3SS) and effectors (T3Es). Pantoea agglomerans pv. gypsophilae (Pag) elicits galls on gypsophila and a hypersensitive response on beet, whereas P. agglomerans pv. betae (Pab) elicits galls on beet and gypsophila. HsvG and HsvB are two paralogous T3Es present in both pathovars and act as host‐specific transcription activators on gypsophila and beet, respectively. PthG and PseB are major T3Es that contribute to gall development of Pag and Pab, respectively. To establish the minimal combinations of T3Es that are sufficient to elicit gall symptoms, strains of the nonpathogenic bacteria Pseudomonas fluorescens 55, Pa 3‐1, Pa 98 and Escherichia coli, transformed with pHIR11 harbouring a T3SS, and the phytopathogenic bacteria Erwinia amylovora, Dickeya solani and Xanthomonas campestris pv. campestris were transformed with the T3Es hsvG, hsvB, pthG and pseB, either individually or in pairs, and used to infect gypsophila and beet. Strikingly, all the tested nonpathogenic and phytopathogenic bacterial strains harbouring hsvG and pthG incited galls on gypsophila, whereas strains harbouring hsvB and pseB, with the exception of E. coli, incited galls on beet.
Keywords: effectors, gall formation, host specificity, host-specific transcription activators, Pantoea agglomerans, type III secretion system.

Pantoea agglomerans (Pa), a widespread commensal Gram-negative bacterium, is distributed in many diverse habitats and commonly associated with plants as an epiphyte and/or endophyte (Kobayashi and Palumbo, 2000; Lindow and Brandl, 2003). Pantoea agglomerans pv. gypsophilae (Pag) (formerly known as Erwinia herbicola pv. gypsophilae) and P. agglomerans pv. betae (Pab) are two related tumorigenic pathovars. Pag elicits gall formation on gypsophila and a hypersensitive response (HR) on beet, whereas Pab incites galls on beet and gypsophila (Burr et al., 1991; Cooksey, 1986). The virulence of both pathovars relies on the presence of a pathogenicity plasmid (pPATH) containing a pathogenicity island (PAI), which is distributed among genetically diverse populations of P. agglomerans (reviewed in Barash and Manulis-Sasson, 2009).
The pathogenicity plasmids of Pag and Pab may vary in size, and their curing results in a loss of pathogenicity (Weinthal et al., 2007). Studies on the pPATH Pag of strain Pag 824-1 revealed a plasmid size of approximately 135 kb accommodating a PAI of nearly 75 kb (Barash and Manulis-Sasson, 2009). The PAI harbours an intact hrp/hrc (hypersensitive response and pathogenicity/hypersensitive response and conserved) gene cluster. It contains a functional type III secretion system (T3SS), type III effectors (T3Es), multiple and diverse insertion sequences, which presumably were involved in the evolution of the pPATH Pag (Guo et al., 2002), and a cluster of genes encoding biosynthetic enzymes of the plant hormones auxin and cytokinins (Barash and Manulis-Sasson, 2009). The structure of the PAI, its plasmid location and the observation that all the identified T3Es in Pab 4188 and Pag 824-1 were plasmid-borne (Nissan et al., 2018) strongly suggest a recent evolution of pathogenesis. A draft genome of Pab 4188 and Pag 824-1 combined with a machine-learning approach, followed by a translocation assay of T3Es into beet roots and pathogenicity assays, was recently employed to reveal the inventories of T3Es in the two pathovars (Nissan et al., 2018). Eight Pab 4188 functional plasmid-borne T3Es could trigger galls on beet and gypsophila, whereas nine plasmid-borne T3Es of Pag 824-1 could trigger galls on gypsophila and HR on beet (Nissan et al., 2018). In contrast to the small repertoire of T3Es in Pab or Pag, pathovars of other phytopathogenic bacteria, including Pseudomonas syringae pv. tomato (DC3000), Xanthomonas euvesicatoria or Ralstonia solanacearum, harbour considerably larger pools of about 30 or more T3Es (Genin and Denny, 2012; Kvitko et al., 2009; Teper et al., 2016).
The T3Es of Pab or Pag can be divided into three groups. (a) HsvB and HsvG are paralogous T3Es that mimic host-specific transcriptional activators on beet and gypsophila, respectively, and determine pathovar specificity (Nissan et al., 2006, 2012; Valinsky et al., 1998). Both are present in each pathovar and are functional only in the corresponding host; HsvG is required for gypsophila infection and HsvB for beet infection. Replacement of the HsvG promoter with a stronger promoter, equivalent to hrpJp, caused an increase in gall size of up to three times, suggesting that HsvG, and presumably HsvB, may interfere with the plant hormone balance leading to gall development (Nissan et al., 2005). (b) PthG and PseB are exclusively present as active T3Es in Pag and Pab, respectively (Nissan et al., 2018). PthG supports disease development in gypsophila and triggers HR on multiple beet species (Ezra et al., 2000, 2004). In Pab, PthG is truncated and nonfunctional, allowing Pab to infect beet. Similarly, Pag mutated in the PthG gene infects beet as well as gypsophila (Ezra et al., 2000). PseB is a novel T3E from Pab that is exclusively present in this pathovar, with an as yet unknown function (Nissan et al., 2018). (c) The remaining T3Es are common to other Gram-negative phytopathogenic bacteria and may contribute to gall development by diverse mechanisms. To the best of our knowledge, HsvB, HsvG, PseB and PthG have not been reported as functional T3Es in any other pathogenic bacteria and presumably evolved through pathoadaptive evolution (Sokurenko et al., 1999). In contrast, the remaining effectors are shared with other phytopathogenic bacteria and presumably have been acquired by horizontal gene transfer (HGT). The contribution of each T3E to virulence was quantitatively assessed by comparing the fresh weight of galls incited by the wild-type strain with that incited by its mutant (Nissan et al., 2018).
The highest contribution was provided by HsvG on gypsophila and HsvB on beet, as a mutation in the corresponding T3Es caused a >95% reduction in gall formation in gypsophila and beet, respectively. Mutants in either PthG in Pag or PseB in Pab also caused a significant reduction in gall size, but considerably lower than that of the former two effectors (Nissan et al., 2018). This study was undertaken to determine the minimal combination of T3Es in Pag or Pab that is sufficient to elicit galls on either gypsophila or beet. The adopted strategy was to convert nonpathogenic Gram-negative bacteria into gall-forming pathogens on either gypsophila or beet by transformation with T3Es taken from the two Pa pathovars. Initially, the nonpathogenic bacteria were provided with the capability to translocate T3Es into plant cells via transformation of pHIR11, a cosmid harbouring a plant-adapted T3SS (Huang et al., 1988), followed by transformation of T3Es from Pag or Pab. The T3Es HsvG, HsvB, PthG and PseB were selected for this study because they presumably evolved by pathoadaptive evolution and apparently were instrumental in the emergence of Pag and Pab as new pathogens. The present communication demonstrates that transformation of HsvG and PthG or HsvB and PseB converts nonpathogenic bacteria into host-specific gall-forming pathogens on gypsophila and beet, respectively, with the exception of E. coli strains, which could support gall development only on gypsophila. Moreover, transformation of each of the above two T3E pairs into three major T3SS-dependent phytopathogenic bacteria allowed them to expand their host range and incite galls on gypsophila or beet in a host-specific manner without modifying their own characteristic symptoms on the natural hosts. The bacterial strains, plasmids and a cosmid used in this study are described in Table S1.
Wild-type strains of Pag and Pab, as well as the other phytopathogenic bacteria employed in this study, were grown on Luria-Bertani (LB) agar at 28 °C, whereas E. coli strains were cultured on the same medium at 37 °C. Antibiotics were used at the following concentrations (µg/mL): ampicillin (Amp), 150; kanamycin (Km), 50; rifampicin (Rif), 150; spectinomycin (Spec), 50; tetracycline (Tc), 15. Pathogenicity tests on cuttings of Gypsophila paniculata 'Golan' (Danziger Ltd, Bet Dagan, Israel) were performed essentially according to Lichter et al. (1995) as described by Nissan et al. (2018). After removal of an approximately 2 mm section from the bottom of the stem, the cuttings (ten for each treatment) were inoculated by dipping into a bacterial suspension of 10^6 cells/mL for 30 min and placed in vermiculite-filled trays for symptom visualization. The glasshouse temperature was maintained at 22-25 °C and high humidity was generated by computer-controlled mist sprinklers that were activated every 20 min for 10 s. Pathogenicity was scored 10-15 days after inoculation. The degree of virulence was determined by removing the galls from the infected cuttings and measuring their fresh weight. Pathogenicity tests on table beet cubes were performed according to Ezra et al. (2000). Whole matured beets (Beta vulgaris 'Egyptian Red Beet') were soaked in 1% hypochlorite for 10 min, followed by two washes in sterile water. They were then cut into cubes of approximately 0.5 × 0.7 × 0.7 cm under sterile conditions and placed on sterile 1.5% water agar in a Petri dish. Inoculation was carried out with a culture grown overnight on LB agar by puncturing the top of the cube and inserting the bacteria with a sterile toothpick (five cubes per treatment). Virulence was scored following incubation of the cubes for 5 days at 28 °C. Pathogenicity experiments were conducted in a quarantine greenhouse.
Pathogenicity of the phytopathogenic bacteria on their natural hosts, namely, Erwinia amylovora (Ea) on pear blossom clusters, Dickeya solani (Ds) on potato tubers and Xanthomonas campestris pv. campestris (Xcc) on cabbage seedlings, was tested according to Kleitman et al. (2005), Schaad et al. (2001) and Tsror (Lahkim) et al. (2013), respectively. Isolation of DNA from Pa or E. coli strains, cloning, ligation, transformation and other DNA manipulations were performed according to standard procedures (Ausubel et al., 1995) or as recommended by the supplier. The cloning vectors used in this study are listed in Table S1. Transfer of T3Es cloned in E. coli DH5α into nonpathogenic or pathogenic bacterial strains was performed by triparental mating with the E. coli helper plasmid pRK2073 (Spt r) essentially as described elsewhere (Ditta et al., 1980; Manulis et al., 1998). The recipient and the helper bacteria were mixed on LB agar plates, incubated at 28 °C overnight and then plated on LB agar with appropriate antibiotics. Curing of transconjugants of the desired bacterial strain was carried out by subculturing in the absence of antibiotic selection, as previously described. The nonpathogenic bacterial strain Pseudomonas fluorescens 55 (Pf) harbouring the cosmid pHIR11, which encodes a functional T3SS from P. syringae (Huang et al., 1988), as well as the nonpathogenic strains of Pa, Pa 3-1 and Pa 98, and two E. coli strains (Table S1) were employed for transformation into gall-forming pathogens. The E. coli strains included a Shiga toxin mutant of enterohemorrhagic E. coli (EHEC) designated TUV93-0 and E. coli DH5α (Table S1). To endow the nonpathogenic bacterial strains with the capability of translocating T3Es into plant cells, they were initially transformed with pHIR11 to obtain Pa 3-1 (pHIR11), Pa 98 (pHIR11), EHEC TUV93-0 (pHIR11) and E. coli DH5α (pHIR11). Results presented in Table 1 and Fig.
1 indicate that nonpathogenic bacterial strains harbouring hsvG and pthG elicited galls on gypsophila, whereas nonpathogenic bacterial strains harbouring hsvB and pseB elicited galls on beet, with the exception of the E. coli strains. Bacterial strains containing pthG generally elicited HR on beet (Ezra et al., 2000). EHEC TUV93-0 (pHIR11) triggered HR on beet, whereas no symptoms could be observed with E. coli DH5α (pHIR11) (Table 1). Interestingly, the HR response could not be observed with EHEC TUV93-0 lacking pHIR11, suggesting that the HR might be caused by a translocated T3E of EHEC TUV93-0 that is absent in E. coli DH5α. The inability of the E. coli strains harbouring hsvB and pseB to incite galls on beet is not yet understood and may only be hypothesized. EHEC TUV93-0 (a derivative of E. coli O157:H7) is a Shiga toxin mutant of a human and animal pathogen that survives well on plants (Wright et al., 2013) and harbours T3Es for virulence (Tobe et al., 2006). In contrast, E. coli DH5α is a genetically engineered bacterial strain used to facilitate cloning and lacks any T3Es (Taylor et al., 1993). The HR of beet to EHEC TUV93-0 (pHIR11) could prevent gall formation, as previously described for PthG of Pag (Ezra et al., 2000). Additionally, a minimal degree of endophytic bacterial growth might be considered a prerequisite for translocation of T3Es into a plant's cells. The two E. coli strains most likely differ in their degree of endophytic growth; while EHEC TUV93-0 is adapted for plant colonization, E. coli DH5α is not. Nevertheless, the nutrients released from wounded gypsophila cuttings could be sufficient for translocation of T3Es and formation of small galls (Fig. 1). The fresh weights of galls on gypsophila inoculated with the reconstructed pathogenic strains Pf, Pa 3-1 or Pa 98 were generally smaller by up to 30% than those produced by the wild type (Pag 824-1).
Average gall size for Pag 824-1 was 215 ± 20 mg, and for Pf, Pa 3-1 or Pa 98 containing hsvG and pthG it was 156 to 165 ± 20 mg. The latter observation might suggest that the remaining T3Es present in Pag 824-1 contribute to maximal gall size. The ability of the two pairs of T3Es to transform nonpathogenic bacteria into host-specific gall-forming pathogens on beet or gypsophila prompted us to examine whether these plasmid-cloned effectors can also transform T3SS-dependent phytopathogenic bacteria into host-specific pathogens on the same two hosts. To resolve this question, three phytopathogenic bacteria, namely, Ea, Ds and Xcc (Table S1), were transformed by triparental mating with HsvG, HsvB, PthG and PseB in various combinations, as described above for the nonpathogenic strains. Pathogenicity tests performed under quarantine conditions indicated that strains of the three tested bacteria harbouring the HsvG-PthG pair and the HsvB-PseB pair incited gall formation on gypsophila and beet, respectively (Table 2 and Fig. 1). Pathogenicity tests with the transformed pathogens listed in Table 2 were also conducted on their natural hosts, namely pear for Ea, potato for Ds and cabbage for Xcc, as indicated above. The characteristic symptoms for each of these pathogens on their natural hosts were preserved, and no galls appeared (results not shown). These results indicate that plasmid-cloned T3Es can be maintained by bacterial pathogens without modifying their activity on their compatible hosts. A previous attempt to define a minimal repertoire of T3Es required for plant disease development was carried out with P. syringae pv. tomato (Pst) strain DC3000, in which eight out of 28 T3Es were sufficient to confer near wild-type bacterial growth and disease symptoms in Nicotiana benthamiana plants (Cunnac et al., 2011).
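The "smaller by up to 30%" statement can be checked directly from the reported means (values taken from the text above):

```python
# Percent reduction in gall fresh weight relative to wild-type Pag 824-1.
wild_type = 215.0                 # mg, reported mean for Pag 824-1
reconstructed = (156.0, 165.0)    # mg, reported range for Pf, Pa 3-1 or Pa 98

for g in reconstructed:
    reduction = 100.0 * (wild_type - g) / wild_type
    print(f"{g:.0f} mg -> {reduction:.1f}% smaller than wild type")
```

The reductions come out at roughly 23-27%, consistent with the "up to 30%" wording.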
The significant difference in the numbers of minimal effectors between Pag or Pab and Pst DC3000 could be assigned to the evolutionary stage of the two pathogens as well as to the nature of their T3Es. The chromosomal location of the T3SS and its effectors in Pst DC3000, as well as the substantial number of T3Es that have accumulated during the co-evolutionary arms race between host and pathogen (Stavrinides et al., 2008), are indicative of a long evolutionary period. In contrast, as described earlier, Pag and Pab are newly evolved pathogens that harbour only a plasmid-borne T3SS and a considerably smaller number of plasmid-borne T3Es. Therefore, the number of minimal indispensable effectors accumulated during the evolution of Pst DC3000 is expected to be significantly higher than in Pantoea. The transformation of nonpathogenic Pa into a gall-forming pathogen is essentially dependent on the emergence of HsvG and HsvB as a unique new class of host-specific transcriptional activators (Nissan et al., 2006, 2012). T3Es acting as host transcriptional activators provide an effective strategy to manipulate plant gene expression (Buttner, 2016).

Fig. 1 Pathogenicity tests on gypsophila cuttings and beet roots. Gypsophila cuttings: 1, water control; 2, Pantoea agglomerans pv. gypsophilae 824-1; 3, Pseudomonas fluorescens 55 (Pf) (hsvG + pthG); 4, P. agglomerans (Pa) 3-1 (hsvG + pthG); 5, Erwinia amylovora (hsvG + pthG); 6, Escherichia coli (EHEC TUV93-0) (hsvG + pthG); 7, E. coli DH5α (hsvG + pthG). Beet roots: 8, water control; 9, P. agglomerans pv. betae 4188 (arrows point to gall); 10, Pf (hsvG + pthG) showing hypersensitive response symptom (indicated by arrows); 11, Pf (hsvB + pseB) (gall); 12, Pa 3-1 (hsvB + pseB) (gall); 13, E. amylovora (hsvB + pseB) (gall). Photos of gypsophila cuttings were taken 14 days after inoculation and of beet roots 5 days after inoculation.
Type III effector proteins, which can be imported directly into the nucleus and bind either to DNA or to components of the plant transcription machinery, have been previously exemplified by the transcription activator-like (TAL) effectors, which can efficiently modify cellular processes (Boch et al., 2014; Buttner, 2016). HsvG and HsvB are structurally different from TAL effectors (Buttner, 2016; Canonne and Rivas, 2012). They can be localized to the plant cell nucleus of host and non-host plants, and they harbour nuclear localization signals that are required for Pa pathogenicity, as well as a helix-turn-helix domain containing a DNA-binding motif and possibly responsible for additional functions (Nissan et al., 2006, 2012; Weinthal et al., 2011). The activation domain of HsvG has two nearly direct repeats (71 and 74 amino acids), whereas that of HsvB has only one repeat. Exchanging the activation domains of HsvG and HsvB resulted in a switch in host specificity (Nissan et al., 2006). A candidate target gene of HsvG in gypsophila is HSVGT, which encodes a predicted acidic protein harbouring characteristic conserved motifs of eukaryotic transcription factors (Nissan et al., 2012). HSVGT is transcriptionally induced in planta by Pag and is dependent on intact hsvG. It was confirmed as a direct target of HsvG by gel-shift assays showing that HsvG binds to the HSVGT promoter. It is possible that the HsvG-mediated activation of the putative transcription factor HSVGT results in the activation of additional plant genes that lead to gall development. Further studies on the interactions between HsvG and HsvB and the transcriptomes of their specific host plants should elucidate the mechanisms of hyperplasia and hypertrophy leading to gall formation. The dominant contribution of HsvG and HsvB as novel transcriptional activators that also determine pathovar specificity makes them indispensable for the emergence of gall-forming Pantoea. The role of the two additional T3Es (i.e.
PthG and PseB) that were also presumably evolved by pathoadaptation remains to be clarified, as does the question of whether they can be replaced by any of the remaining T3Es.

ACKNOWLEDGEMENTS

This research was partially supported by the Israel Science Foundation under grant number 488/19.

SUPPORTING INFORMATION

Additional supporting information may be found in the online version of this article at the publisher's web site.
Effect of zinc process water recycling on galena flotation from a complex sulphide ore

The effects of utilizing recycled tailings pond water from the flotation plant at the Mining Company of Guemassa (MCG) on galena recovery and selectivity towards chalcopyrite (R Pb-Cu), sphalerite (R Pb-Zn), and pyrrhotite (R Pb-Fe) were studied at bench scale. The results showed that recycling the tailings pond water in the lead circuit without addition of fresh water gave a good flotation performance in terms of lead recovery (R Pb) (75%) and selectivity towards the other metals: R Pb-Cu (54%), R Pb-Zn (60%), and R Pb-Fe (65%). This allows the water to be recycled at least four times. However, increasing the d 80 from 100 μm to the 160 μm currently used at the MCG plant had a negative effect on the lead flotation performance.

Introduction

In countries like Morocco that have a semi-arid climate, maximizing the recycling of process water, which could be both economically and environmentally beneficial, is a major challenge. Water recycling is an important aspect of sustainable management of the environment and water resources (Zeman, Rich, and Rose, 2006; Orona et al., 2007; Hochstrat, Wintgens, and Melin, 2008; Mudd, 2008). Especially as regards froth flotation, the mining industry is one of the most water-intensive industries, and this encourages greater use of recycled water in place of fresh water (McIntyre, 2006; van der Bruggen, 2010; Liu, Moran, and Fink, 2013; Molina et al., 2013). Recycling tailings pond water will clearly have a positive impact on the economics of industrial processes, because it reduces water cost and at the same time facilitates the recovery of unconsumed reagents retained in the tailings (Nedved and Jansz, 2006; Slatter et al., 2009; Liu, Moran, and Fink, 2013; Molina et al., 2013).
However, due to the accumulation of impurities in the pulp, suspended solids, the occurrence of adverse side reactions, bacterial oxidation of sulphide minerals, and decreased pH, recycling the water has an effect on its quality and disrupts flotation performance (Rao and Finch, 1989; Levay, Smart, and Skinner, 2001; N'gandu, 2001; Seke and Pistorius, 2006; Slatter et al., 2009; Muzenda, 2010; Ikumapayi et al., 2012; Jing Xu et al., 2012; Deng, Liu, and Xu, 2013; Molina et al., 2013; Wang and Peng, 2014; Boujounoui et al., 2015, 2018, 2019; Wang et al., 2015). Some impurities present in the recycled water cause uncontrolled variations in the redox potential of the pulp, which has an adverse effect on the chemistry of the reagents and the flotation performance (Chadwick, 2007). These impurities also induce undesirable variations in the pulp properties, leading to alterations to the surface of the minerals and their floatability (Biçak et al., 2012; Dávila-Pulido et al., 2015). The Mining Company of Guemassa (MCG) concentrator, located 30 km southwest of Marrakech (Morocco), uses selective flotation to successively produce concentrates of galena (using Aerophine 3418A at pH 11.3), chalcopyrite (using Aerophine 3418A at pH 8.9), and sphalerite (using potassium amyl xanthate at a pH of about 12). The process water used consists mainly of fresh water from the mine site and the Lalla Takerkoust dam, which is located a few kilometres away from the plant. Owing to the complexity of ore processing, the only possible way to maintain the flotation plant performance is to re-use part of the zinc process water in the zinc circuit, the lead process water in the lead circuit, and the copper process water in the copper circuit. Production of zinc, lead, and copper concentrates at the MCG flotation plant in 2014 was 72 970, 13 812, and 16 755 t respectively (Managem Annual Report, 2014).
Previous work on sulphide ore flotation at MCG (Boujounoui et al., 2015, 2018) showed the need to control Cu2+, Zn2+, Mg2+, Ca2+, SO₄2-, and potassium amyl xanthate concentrations in the process water to maintain acceptable galena recovery in the presence of chalcopyrite, sphalerite, and pyrrhotite. These results, considering the scarcity of water in and around Marrakech, make process water recycling an alternative way of overcoming the problems of water management at the MCG plant. Some mining plants recycle up to 80% of their water (Atmaca and Kuyumcu, 2003), although the recycle rate does not exceed 34% for MCG. Three water sources were used to supply the 4 900 m3 required daily for this production: 25% from the dam at Lalla Takerkouste, located a few kilometres from the plant; 50% from mine dewatering and groundwater; and 25% from tailings pond water (TPW) recycled in the zinc circuit (Figures 1 and 2) (Boujounoui, 2017). The aim of this study was to assess the effects of using recycled TPW on galena recovery in the MCG flotation plant and the selectivity towards chalcopyrite, sphalerite, and pyrrhotite. Tests were carried out according to the results of Boujounoui et al. (2018), who used a synthetic solution to simulate the industrial process water at MCG. These results defined a water quality limit that should not be exceeded (Table IV). Flotation tests were performed using mixtures of fresh water and TPW produced by the flotation plant. Further flotation experiments were performed on the optimal water mixture obtained, increasing the d 80 particle size to 160 μm, the size currently used in the lead circuit at the plant.

648 NOVEMBER 2022 VOLUME 122 The Journal of the Southern African Institute of Mining and Metallurgy
Climatology of the site

According to the Agency of Basin Haouz Tensift (ABHT), the climatic data collected at the meteorological station at the Lalla Takerkoust dam from 1962 to 2009, particularly the data on pluviometry, temperature, and evaporation, highlight the need to recycle industrial water at the MCG plant. The Lalla Takerkoust dam is mainly filled by snowmelt from the High Atlas Mountains of Morocco. Snowfall correlates positively with rainfall in the area, and therefore the use of dam water and underground water in the flotation process at the MCG plant has to be carefully managed to preserve water resources in the area. The climatological data (Figure 3) reveal the following.

➤ Generally, rainfall is low and irregular (about 250 mm/a). Inter-annual rainfall is also irregular, with a maximum of 424 mm in 1970 and a minimum of 106 mm in 1982. The mean monthly rainfall variation over the same period shows two distinct seasons: a rainy season (November to April) and a dry season (May to October), with average total rainfalls of 187 and 67 mm respectively.

➤ The variation in the average monthly temperature recorded from 1985 to 2008 shows three distinct periods: a very hot period (June to September), a temperate period (October to May), and a relatively cold period (December to February). The temperature can reach 48°C in August and fall below zero in December.

➤ The trend in evaporation correlates with the temperature: the highest evaporation rates are linked to the hottest season of the year, and consequently both the water in the Lalla Takerkoust dam and the rainfall during the hot season are drastically affected by evaporation.

The industrial process water used in these tests consisted of tailings pond water (TPW) mixed with fresh water (Table I). TPW and the flotation reagents (sodium cyanide, Aerophine 3418A, and methyl isobutyl carbinol) were provided by MCG.
Solid sample preparation

A representative sample of 128 kg was taken from the feed belt to the primary ball mill at the MCG flotation plant and crushed down to 2 mm using a laboratory roll crusher. The sample was then divided into 1 kg batch samples for the flotation experiments. These batch samples were stored in vacuum-sealed bags to prevent the sulphide minerals from oxidizing. Prior to each flotation test, a 500 g sample was milled in 250 ml of process water using a Denver carbon steel ball mill with an internal volume of 9.5 l for 6 to 10 minutes, depending on whether the d 80 target grain size was 160 μm or 100 μm.

Water sample preparation

Four-litre samples of industrial water were prepared by mixing TPW with fresh water in proportions of 100, 90, 75, 65, 50, 40, 25, 15, and 0% TPW. Each test was repeated three times. The quality of the different mixtures was calculated from the individual analyses of the fresh water and TPW given in Tables I and II.

Flotation experiments

Flotation tests on galena were carried out in a Denver flotation cell of 1.5 l capacity. The solids concentration was about 27% by weight, using mixtures of TPW and fresh water in different proportions. The natural pH was about 7; NaOH was used in all tests to adjust the pH to 11.3. Sodium cyanide (NaCN) was used as a depressant for sphalerite, chalcopyrite, and pyrrhotite in all tests at a dosage of 350 g/t. Aerophine 3418A (diisobutyl phosphinate, 40 g/t) and methyl isobutyl carbinol (MIBC, 40 g/t) were used as galena collector and frother respectively. The impeller rotation speed was a constant 1000 r/min. The level of the pulp was constantly adjusted by the addition of water of the required quality.
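As a rough illustration of the mixture-quality calculation described under "Water sample preparation" (this is a sketch, not code from the paper): the composition of each fresh-water/TPW blend is simply the volume-weighted average of the two individual analyses. The ion concentrations below are hypothetical placeholders, not the values of Tables I and II.

```python
def mixture_quality(fresh: dict, tpw: dict, tpw_fraction: float) -> dict:
    """Linear blend of two water analyses (mg/L) for a given TPW volume fraction."""
    assert 0.0 <= tpw_fraction <= 1.0
    return {ion: (1 - tpw_fraction) * fresh[ion] + tpw_fraction * tpw[ion]
            for ion in fresh}

# Hypothetical analyses (mg/L) -- NOT the measured values from Tables I and II:
fresh = {"Cu2+": 0.1, "Zn2+": 0.5, "SO4_2-": 200.0}
tpw   = {"Cu2+": 5.0, "Zn2+": 13.0, "SO4_2-": 4130.0}

q40 = mixture_quality(fresh, tpw, 0.40)  # composition of a 40% TPW mixture
```

Each blend ratio in the test series (0-100% TPW) can then be checked against the reference water-quality limits before running a flotation test.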
The flotation time was 10 minutes for each test, and the concentrates were recovered by automatic scraping every 30 seconds. All concentrates and tails were filtered, dried, weighed, and then analysed for Cu, Pb, Zn, and Fe by atomic absorption spectroscopy (AGILENT 280FS) in the laboratory at the Reminex Center (Morocco). Metal recoveries to the concentrates were calculated from the following equation:

R = 100 × (C × t c) / (A × t f)

where R (%) is the metal recovery, t c (%) is the metal grade of the concentrate, t f (%) is the metal grade of the feed, C is the concentrate weight, and A is the feed weight. The proportions of iron combined with chalcopyrite were taken into account in the calculations of iron sulphide recoveries. Lead selectivity was calculated as the difference between the lead recovery and the recoveries of the other metals.

Results and discussion

The optimal water quality for galena recovery was obtained with a synthetic water simulating industrial TPW (Boujounoui et al., 2018). These results indicate that fresh water usage could be reduced in the lead circuit at the MCG plant by substitution with water from the tailings pond. Prior to considering these results as a reference (the limit of water quality not to be exceeded) for the lead flotation circuit, in which the process water contained 5 mg/L Cu2+, 13 mg/L Zn2+, 1390 mg/L Ca2+, 140 mg/L Mg2+, 4130 mg/L SO₄2-, and 13 mg/L PAX, three validation tests were performed using these optimal operating conditions. The results given in Table III verified the mathematical model proposed for lead recovery by Boujounoui et al. (2018) and showed that as long as the process water quality is close to the reference, the lead flotation performance is maintained.

Flotation tests using tailings pond water

Bench-scale flotation tests were conducted on galena with various proportions of TPW, from zero to 100%, to optimize its proportion in the lead circuit.
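The recovery and selectivity calculations described above can be sketched as follows. The grades and weights in the example are invented for illustration; they are not measured values from the study.

```python
def recovery(t_c: float, t_f: float, conc_weight: float, feed_weight: float) -> float:
    """Metal recovery R (%) = 100 * (C * t_c) / (A * t_f),
    with t_c/t_f the concentrate/feed grades (%) and C/A the
    concentrate/feed weights (same mass units)."""
    return 100.0 * (conc_weight * t_c) / (feed_weight * t_f)

def selectivity(r_pb: float, r_other: float) -> float:
    """Lead selectivity = lead recovery minus the other metal's recovery."""
    return r_pb - r_other

# Hypothetical bench-scale numbers (500 g feed charge):
r_pb = recovery(t_c=40.0, t_f=4.0, conc_weight=37.5, feed_weight=500.0)   # 75.0 %
r_cu = recovery(t_c=2.0,  t_f=0.5, conc_weight=26.25, feed_weight=500.0)  # 21.0 %
r_pb_cu = selectivity(r_pb, r_cu)                                         # 54.0 %
```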
These results would help in assessing the recycling ratio that can be used without affecting the lead flotation performance. Table II presents the water qualities used and their relationship with the reference water quality obtained by Boujounoui et al. (2018). It can be deduced from the data that 100% TPW is below the maximum limits for all constituents and could be successfully used as process water.

Table IV: Effect of TPW proportions in the flotation process water on metal recoveries (pH = 11.3, 350 g/t NaCN, 40 g/t Aerophine 3418A, 40 g/t MIBC, flotation time 10 min, d 80 = 100 μm)

The results presented in Table IV show that the variation in water quality had no significant effect on the recovery of Pb and its selectivity over Zn and Fe. However, the selectivity towards copper was adversely affected, owing to chalcopyrite activation by copper ions (Deng et al., 2014) and their interactions with calcium and PAX (Boujounoui et al., 2018). The best lead recovery, 82%, was obtained at 15% TPW, and the best selectivities towards copper (54%), zinc (61%), and iron (70%) were obtained at 100%, 90%, and 75% TPW respectively. Nonetheless, given the objective of the best flotation performance combined with maximum process water recyclability, the optimal proportion of TPW that can be recycled to the lead circuit is 100%. This proportion allowed 75% of the Pb to be recovered, with galena retaining good selectivity over the other minerals: R Pb-Cu (54%), R Pb-Zn (60%), and R Pb-Fe (65%). This result confirmed the robustness of the mathematical model used, which states that the closer the process water is to the reference quality (the limit not to be exceeded), the better the performance. Based on the robustness of this model, four successive recycling stages using 100% TPW were performed, focusing on the evolution of the process water quality after each stage, without determining the flotation performance at each stage.
The results given in Table V show that most of the process water could be recycled; good performance was maintained for galena flotation, except for the selectivity over chalcopyrite, which remained relatively constant. After four recycling stages with 100% TPW, the water quality was still within the quality limits required for the lead circuit (the water quality reference). This means that tailings pond water can be recycled at least four times, as long as chalcopyrite activation is controlled.

Lead flotation using industrial grain size

Because a d 80 grain size of 160 μm is currently used in the lead circuit at the MCG flotation plant, further tests were carried out to assess the effect of increasing the particle size from 100 to 160 μm on the lead flotation performance. Experiments were carried out under MCG plant operating conditions using 100, 90, and 40% TPW. The results presented in Table VI and Figure 4 show that the flotation performance in terms of galena recovery and selectivity was not affected by the process water quality. However, comparison with the results in Table IV shows that the increase in ore grain size adversely affected the lead flotation performance. The lead recovery decreased from 75% to 68%, and its selectivity over Cu, Zn, and Fe decreased from 54% to 36%, 60% to 48%, and 65% to 46% respectively. These results confirmed those of Boujounoui et al. (2015), in which particle size adversely affected galena selectivity.

Conclusion

Our study on the effects of recirculating MCG tailings pond water on galena recovery and its selectivity towards Cu, Zn, and Fe revealed the following.

➤ The variation in water quality had no significant effect on the lead recovery and its selectivity over zinc and iron, but adversely affected the Pb-Cu selectivity.

➤ Recycling tailings pond water without mixing with any fresh water is possible. At least four recycling stages can be used if a d 80 particle size of 100 μm is adopted.
Supplementary work needs to be performed to compare the cost savings gained by the reduced use of fresh water with the increase in energy consumption on an industrial scale from reducing the particle size from 160 to 100 μm. Environmental concerns and water scarcity in the region of the MCG plant must also be considered.
Are Iron Tailings Suitable for Constructing the Soil Profile Configuration of Reclaimed Farmland? A Soil Quality Evaluation Based on Chronosequences Iron tailings used as soil substitute materials to construct reclaimed farmland soil can effectively realize the large-scale resource utilization of iron tailings and reduce environmental risks. It is vital to understand the mechanisms affecting reclaimed soil quality and determine the appropriate pattern for reclamation with iron tailings. Thus, a soil quality index (SQI) was developed to evaluate the soil quality of reclaimed farmland with iron tailings in a semi-arid region. Soil samples were collected from two reclamation measures (20 cm subsoil + 20 cm iron tailings + 30 cm topsoil and 20 cm subsoil + 20 cm iron tailings + 50 cm topsoil) with reclamation years of 3 (R3), 5 (R5), and 10 (R10) at three soil depths (0–10, 10–20, and 20–30 cm) to measure 13 soil physicochemical properties in western Liaoning, China. Adjacent normal farmland (NF) acted as a reference. Results indicated that iron tailings were suitable for constructing the soil profile configuration of reclaimed farmland. SQI of reclaimed soil increased with the reclamation year, but it has not reached the NF level after 3 years, while it was better than NF after 5 years. The nutrient content of reclaimed soil increased with the reclamation year, but it still did not reach the NF level after 10 years. SQI of R10 (with 50 cm topsoil) was also better than NF but slightly lower than R5 (with 30 cm topsoil). For the semi-arid region with sticky soil texture, the topsoil thickness of reclamation was not the thicker the better, and 30 cm topsoil covered on iron tailings in western Liaoning could achieve a better reclamation effect than 50 cm. 
Introduction

The exploitation of mineral resources supports rapid economic development while also causing substantial ecological and environmental problems, which has become one of the key challenges facing the sustainable development of the contemporary world [1-3]. Globally, surface mining destroys regional farmland, forest, and landscape by creating huge overburden dumps and wastelands [4]. These overburden materials mostly consist of large boulders, loose rock fragments, and tailings, devoid of organic matter and nutrients, which are left unmanaged, create environmental pollution, and are commonly known as "mine spoil" [5,6]. Solid wastes such as mine spoil, if not properly treated and reused, cause serious pollution of the environment and health risks [7,8]. Solid waste is a misplaced resource, and the level of recycling of resources is one of the important signs of social progress and a pathway toward green and sustainable development [9-11]. The report of the 19th National Congress of the Communist Party of China pointed out that we should 'establish and improve the economic system for the development of a green low-carbon cycle', and the circular economy is also called the 4R economy: Reduce, Reuse, Recycle, and Recover. To evaluate soil quality, it is necessary to select the most representative indicators according to the research objectives, considering factors such as the cost and difficulty of the test methods. A Minimum Dataset (MDS) can use the fewest indicators to monitor and reflect changes in soil quality caused by changes in soil management measures, and it has been widely used to evaluate soil quality [31]. Therefore, our study evaluates the soil quality of farmland reclaimed with iron tailings in western Liaoning by using an SQI based on the MDS.
This research contributes to the existing literature by confirming, through a soil quality evaluation based on a chronosequence, the feasibility of recycling solid wastes such as iron tailings for constructing the soil profile configuration of reclaimed farmland, and by revealing the reconstruction mechanisms of farmland reclaimed with iron tailings and the optimal reconstruction profile configuration. The specific objectives of this study were to (1) develop an SQI evaluation process and analyze the characteristics of reclaimed soil quality indicators at different profile configurations and reclamation years; (2) evaluate the SQI of reconstructed farmland based on the MDS and determine the changing mechanisms and key impact indicators of the reclaimed soil quality; and (3) explore the optimal soil profile configuration of reclaimed farmland in western Liaoning and enhance our understanding of the farmland reclamation process with iron tailings to guide reclamation technology improvement and the management of soil after reclamation.

Study Area

Our study was conducted on farmland reclaimed with iron tailings at a surface iron mine (Jianping Shengde Rixin Mining Co., Ltd.) in Jianping County, Chaoyang City, western Liaoning Province, China, at 41°45′ N, 119°37′ E (Figure 1). This is a region rich in iron ore resources, and mining wastes such as iron tailings and waste rocks occupy large amounts of land. The region is characterized by a semi-arid monsoonal climate with a mean annual temperature of 7.6 °C, mean annual precipitation of 467 mm, and a mean annual effective evaporation of approximately 1853 mm. According to the soil classification system of China, the soil in this region belongs to Hapli-Ustic Argosols [32]. The original farmland has a thin tillage layer (about 15 cm), a sticky soil texture, and poor soil moisture regimes, which leads to the farmland being mostly medium- and low-yield fields.
Aridity is the primary limiting factor of regional farmland quality. Mining companies must reclaim mining wasteland in accordance with Chinese regulations, and owing to the lack of reclamation soil sources in western Liaoning, Jianping Shengde Rixin Mining Co., Ltd. combined waste iron tailings and mining stripping soil to reclaim mining wastelands as farmland. In the early stage of reclamation, owing to the lack of systematic and scientific theoretical guidance, the specific reclamation schemes and soil profile configurations differed between periods. At the beginning of reclamation, in line with the relevant standards and reclamation practice, it was assumed that the thicker the reclamation soil, the better the effect. Subsoil with a thickness of 20 cm was filled in the lowest layer, 20 cm of iron tailings was filled in the middle layer as a soil moisture retention layer, and tillage soil stripped during mining was spread 50 cm thick as the topsoil to form the reconstructed soil profile. In recent years, owing to the lack of soil sources, and considering the characteristics of regional farmland and crop growth conditions, the topsoil thickness was changed to 30 cm. On the one hand, the main purpose of filling reclamation with iron tailings is to construct a water retention layer that addresses the water shortage limiting agricultural development in semi-arid areas. On the other hand, the iron tailings and topsoil are mixed by rotary tillage to improve the regional sticky soil texture.

Soil Sampling and Analysis

Through field investigation, the soil profile configurations of two typical iron tailings reclamation farmlands were identified: (1) 20 cm subsoil + 20 cm iron tailings + 30 cm topsoil and (2) 20 cm subsoil + 20 cm iron tailings + 50 cm topsoil. Soil samples were collected at three soil depths (0-10, 10-20, and 20-30 cm).

Figure 1. Location of the study area. Note: the color pentagrams represent the distribution of soil sample plots of reclaimed farmland. NF represents the adjacent normal farmland as a reference; R3 represents farmland reclaimed for 3 years with a soil profile configuration of 20 cm subsoil + 20 cm iron tailings + 30 cm topsoil; R5 represents farmland reclaimed for 5 years with a soil profile configuration of 20 cm subsoil + 20 cm iron tailings + 30 cm topsoil; R10 represents farmland reclaimed for 10 years with a soil profile configuration of 20 cm subsoil + 20 cm iron tailings + 50 cm topsoil. The photos of the map were adopted from www.google.com/maps (accessed on 27 May 2022).

According to the research results of Bünemann et al. [37], combined with experience and the specific situation of the study area, the total dataset (TDS) of soil quality was established, comprising 13 indicators (Table 1). In our study, principal component analysis (PCA) combined with the Norm value and Pearson correlation analysis was used to select, from the TDS, the soil indicators that best reflect the soil quality characteristics and have significant effects on the evaluation results as the minimum dataset (MDS) [38]. The Norm value is the length of the vector norm of an indicator in the multidimensional space composed of the principal components; the longer the length, the greater the comprehensive load of the indicator on all principal components and the stronger its ability to explain comprehensive information [39]. The formula is as follows (Equation (1)):

N ik = sqrt( Σ k (u ik² × λ k) )  (1)

where N ik is the comprehensive load of indicator i on the first k principal components with eigenvalue ≥ 1, u ik is the loading of indicator i on principal component k, and λ k is the eigenvalue of principal component k.
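The Norm-value calculation of Equation (1) can be sketched as below. The loadings and eigenvalues are invented for illustration; in practice they would come from the PCA of the 13-indicator dataset.

```python
import math

def norm_value(loadings: list, eigenvalues: list) -> float:
    """Norm value N_i = sqrt(sum_k u_ik^2 * lambda_k) over the retained
    principal components (eigenvalue >= 1)."""
    assert len(loadings) == len(eigenvalues)
    return math.sqrt(sum(u * u * lam for u, lam in zip(loadings, eigenvalues)))

# Hypothetical PCA results: three retained components and one indicator's loadings.
eigenvalues  = [4.2, 2.1, 1.3]     # eigenvalues of the retained components (>= 1)
loadings_soc = [0.85, 0.10, 0.05]  # loadings of a hypothetical indicator (e.g. SOC)

n_soc = norm_value(loadings_soc, eigenvalues)
```

Indicators within 10% of the maximum Norm value in a component group would then be compared by Pearson correlation, keeping the one with the higher Norm value when two are significantly correlated.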
The factors with high eigenvalues and the soil variables with high factor loadings were assumed to be the indicators that can best represent farmland soil [40]; hence, principal components were retained when their eigenvalue was >1, and indicators were retained when their loading value was within 10% of the maximum loading value [41,42]. If a single component contained more than one soil attribute, the multivariate correlation coefficient was used to determine whether a variable was redundant. For variables with a significant correlation, the variable with the higher Norm value was selected for soil quality evaluation and the rest were excluded. If the highly weighted variables were not correlated, each variable was used for soil quality evaluation [26].

Evaluation Model of Soil Quality Index (SQI)

The SQI is a comprehensive reflection of soil function obtained by calculating the weight and score of each soil quality evaluation indicator; the greater the value, the better the soil quality [37]. According to the positive or negative correlation between each soil quality evaluation indicator and soil quality, a membership function between the evaluation indicator and soil quality was established, and the membership degree of each indicator was calculated by Equations (2) and (3) [43-45]:

S i = (x ij − x i-min) / (x i-max − x i-min)  (2)

S i = (x i-max − x ij) / (x i-max − x i-min)  (3)

where S i is the standard value of the soil variable, x ij is the measured value of soil quality indicator i in year j, x i-max is the maximum value of indicator i, and x i-min is the minimum value of indicator i; Equation (2) applies to indicators positively correlated with soil quality and Equation (3) to negatively correlated indicators. Then the contribution of each factor was calculated from its factor loading in the principal component analysis, and the weights were determined by Equation (4):

w i = C i / C  (4)

where w i is the weight of soil quality indicator i, C i is the common factor variance of indicator i, and C is the sum of the common factor variances. Finally, the comprehensive soil quality index was calculated by Equation (5) through the weighted comprehensive method and addition multiplication [46]:

SQI = Σ (w i × S i)  (5)
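The membership scoring, weighting, and weighted summation steps described above can be sketched as follows. All indicator values, ranges, and communalities here are hypothetical placeholders, not the study's measurements.

```python
def membership(x: float, x_min: float, x_max: float, positive: bool = True) -> float:
    """Min-max membership score: ascending for 'more is better' indicators
    (Equation-(2) style), descending for 'less is better' (Equation-(3) style)."""
    s = (x - x_min) / (x_max - x_min)
    return s if positive else 1.0 - s

def sqi(scores: list, communalities: list) -> float:
    """Weights from communalities (w_i = C_i / sum C), then SQI = sum w_i * S_i."""
    total = sum(communalities)
    weights = [c / total for c in communalities]
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical two-indicator example: SOC (positive) and BD (negative):
s_soc = membership(12.0, 4.0, 20.0, positive=True)   # -> 0.5
s_bd  = membership(1.45, 1.2, 1.7, positive=False)   # -> 0.5
value = sqi([s_soc, s_bd], [0.8, 0.6])               # -> 0.5
```

With both membership scores at 0.5, any weighting returns 0.5, which makes the example easy to check by hand; real SQI values would reflect the full MDS.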
Data Analysis

SPSS (Statistical Package for the Social Sciences, release 25.0) was used to perform the statistical analysis of the data. All variables followed normal distributions (tested with the Kolmogorov-Smirnov test at a p-value of 0.05). One-way ANOVA (analysis of variance) was carried out to compare the means of the soil characteristics of normal farmland and the reclaimed soil chronosequence sites. Differences between individual means were tested using DMRT (Duncan's multiple range test) at the p < 0.05 significance level. PCA-loaded variables were subjected to Pearson correlation analysis. Origin 2021 was used for drawing.

Feature of Soil Quality Evaluation Indicators

The results of the soil physical and chemical analyses indicated that the reclaimed soil quality improved with increasing reclamation years (Figures 2-5). BD is a sensitive indicator of soil compaction; soil with a low BD is loose, which is beneficial to water storage, and vice versa. The BD of reclaimed farmland was higher than that of NF in all reclamation years. At depths of 0-10 cm, 10-20 cm, and 20-30 cm, the BD of R3 was significantly increased by 26.37%, 18.26%, and 20.13% (p < 0.05), that of R5 by 19.71%, 16.48%, and 17.13% (p < 0.05), and that of R10 by 7.86%, 11.33%, and 11.54% (p < 0.05), respectively, compared with NF (Figure 2). In addition, BD increased with soil depth. At different soil depths, the differences in R3 and R5 were not significant (p > 0.05), but in R10 the difference was significant (p < 0.05). With the increase in reclamation years, BD at all levels decreased significantly. SWC was highest in NF; in reclaimed farmland it was significantly lower than in NF, and the differences between different reclamation years were significant (p > 0.05). With increasing reclamation years, SWC showed a continuous growth trend at each soil depth.
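The one-way ANOVA step can be sketched in pure Python (the study itself used SPSS, and Duncan's multiple range test is omitted here): the F statistic is the ratio of the between-group mean square to the within-group mean square. The bulk-density values below are invented for illustration.

```python
def one_way_anova_f(groups: list) -> float:
    """F = (between-group mean square) / (within-group mean square)."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical BD replicates (g/cm^3) for NF, R3, and R5:
f_stat = one_way_anova_f([[1.30, 1.32, 1.28],
                          [1.64, 1.66, 1.62],
                          [1.55, 1.57, 1.53]])
```

A large F relative to the critical value of the F(df_between, df_within) distribution indicates that at least one group mean differs; a post-hoc test such as Duncan's is then needed to decide which pairs differ.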
The SWC of R10 at 0-10 cm, 10-20 cm, and 20-30 cm increased by 57.53%, 21.37%, and 20.75%, respectively, compared with R3. There was no significant difference in SWC among soil depths within the same reclamation year. Clay content of NF was the highest among all sample plots, and the clay of reclaimed farmland in R3, R5, and R10 differed significantly from that of NF (p < 0.05) (Figure 3). There was no significant difference in clay among reclamation years (p > 0.05), and clay increased slightly with the increase in reclamation year. However, there were significant differences among soil depths within the same reclamation year (p < 0.05). Silt was highest in NF and lowest in R5, and it differed significantly among the sample plots (p < 0.05), while it showed no significant difference among soil depths within the same reclamation year (p > 0.05). Sand was lowest in NF and highest in R5, and it differed significantly among the sample plots (p < 0.05), with no significant difference among soil depths within the same reclamation year (p > 0.05).
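The significance tests reported throughout this section follow the workflow described in the Data Analysis subsection. A SciPy sketch on synthetic BD values is shown below; the numbers are illustrative only, and Duncan's multiple range test itself has no SciPy implementation, so only the normality check and the one-way ANOVA are demonstrated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical BD measurements (g/cm^3) for four plots (NF, R3, R5, R10);
# illustrative numbers only, not the study's data.
plots = {name: rng.normal(mu, 0.05, size=9)
         for name, mu in [("NF", 1.25), ("R3", 1.55), ("R5", 1.48), ("R10", 1.37)]}

# Normality check, as in the paper (Kolmogorov-Smirnov at p = 0.05),
# applied to standardized values against the standard normal CDF.
for name, x in plots.items():
    d, p_ks = stats.kstest((x - x.mean()) / x.std(ddof=1), "norm")
    print(f"{name}: KS p = {p_ks:.3f}")

# One-way ANOVA comparing mean BD across the four plots
f, p = stats.f_oneway(*plots.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.2e}")

# For the pairwise step (DMRT in the paper), a common substitute is
# Tukey's HSD: statsmodels' pairwise_tukeyhsd, or scipy.stats.tukey_hsd
# in recent SciPy versions.
```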
[Figure 2 (BD and SWC): Different capital letters indicate significant differences (p < 0.05) among different reclamation years in the same soil layer, and different lowercase letters indicate significant differences (p < 0.05) among different soil layers in the same reclamation year. See Table 1 for abbreviations and units.]
There were significant differences in pH between sample plots and between soil depths (p < 0.05) (Figure 4). pH decreased with the increase in reclamation year at the 0-10 cm and 10-20 cm depths, from 7.62 and 7.61 in R3 to 7.41 and 7.55 in R10, respectively. OM of NF was the highest, exceeding 10.0 g/kg at the 0-10 cm and 10-20 cm depths. With the increase in reclamation year, OM increased significantly (p < 0.05) but remained below 8.0 g/kg. Within the same reclamation year, OM decreased significantly with increasing soil depth (p < 0.05).
TN of NF was significantly higher than the other three reclaimed farmlands (Figure 5).
At 0-10 cm, TN of R3, R5, and R10 decreased by 73.39%, 56.88%, and 46.79%, respectively, compared with NF. TN increased significantly with the increase in reclamation year (p < 0.05), but there was no significant difference among soil depths within the same reclamation year (p > 0.05). TP was highest in NF and lowest in R3, at 0.37 g/kg and 0.27 g/kg at 0-10 cm, respectively. TK of each reclaimed farmland was higher than that of NF, and it showed a significant downward trend with the increase in reclamation year. AN of NF was significantly higher than that of each reclaimed farmland (p < 0.05); at 0-10 cm, 10-20 cm, and 20-30 cm it was 66.03, 48.56, and 43.21 mg/kg, respectively, whereas in reclaimed farmland it was always below 40 mg/kg, although AN showed a significant upward trend with the increase in reclamation year. AP of NF was significantly higher than that of reclaimed farmland; there was no significant difference between reclamation years, but the 0-10 cm depth differed significantly from the 10-20 cm and 20-30 cm depths (p < 0.05). AK of NF was significantly lower than that of reclaimed farmland and was highest in R3, at 138.75, 94.22, and 108.98 mg/kg at 0-10 cm, 10-20 cm, and 20-30 cm, respectively. With the increase in reclamation year, AK decreased significantly. In addition, AK differed significantly among soil depths within the same reclamation year (p < 0.05). In general, nitrogen and phosphorus increased with the reclamation year but remained lower than in NF; potassium was significantly higher than in NF after reclamation but decreased with the reclamation year.

Construction of MDS for Soil Quality Evaluation

In the results of the PCA, the eigenvalues of the first three components were greater than 1 and their cumulative contribution rate reached 90.96%, indicating that a minimum dataset can replace the whole dataset for soil quality evaluation (Table 2).
The variance of the first principal component (PC-1) was 60.25%, in which TN had the maximum loading value. The loadings of TN, BD, OM, and AN were all within 10% of the maximum loading value, and TN was highly correlated with the other three variables (Figure 6), with coefficients of 0.914, 0.993, and 0.975, respectively (p < 0.01); therefore, only TN in PC-1 was selected for the MDS. The variance of PC-2 was 19.83%. pH had the maximum loading value, and the loadings of TP, AP, and clay were within 10% of the maximum. According to Figure 6, the correlation coefficients of pH with TP and AP were −0.85 (p < 0.01) and −0.60 (p < 0.05), respectively, while the correlations of pH with clay, and of clay with TP and AP, were all very low; TP and AP were highly correlated (0.74, p < 0.01). According to the Norm values, AP and clay were selected for the MDS. The variance of PC-3 was 10.88%; silt and sand were within 10% of the maximum loading value, but their correlation was very strong (−0.98, p < 0.01), so sand was selected for the MDS based on the Norm value. The MDS for the soil quality evaluation of farmland constructed with iron tailings in this semi-arid region thus consists of TN, AP, clay, and sand. Note: the variable corresponding to each bold value was selected for its relatively high score. Variable loading coefficients (eigenvalues) of the first three factors were extracted using the 13 soil attributes, together with their eigenvalues and the individual and cumulative percentages of total variance explained by each factor. Factor loadings are considered highly weighted when within 10% of the absolute value of the highest factor loading in each factor. Bold-underlined soil attributes correspond to the indicators included in the MDS. See Table 1 for abbreviations.
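The screening rules used above (eigenvalue > 1, loadings within 10% of the maximum, Norm values) can be sketched as below on synthetic data. The Norm value here follows the definition common in the MDS literature, Norm_i = sqrt(sum_k u_ik^2 * lambda_k) over the retained components, and the correlation check among highly weighted variables is simplified to picking the highest Norm value:

```python
import numpy as np

def pca_loadings(X):
    """Eigen-decompose the correlation matrix; return eigenvalues
    (descending) and factor loadings (eigenvectors * sqrt(eigenvalue))."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    return vals, vecs * np.sqrt(np.clip(vals, 0.0, None))

def screen_mds(X, names):
    vals, load = pca_loadings(X)
    keep = vals > 1.0                          # Kaiser criterion: eigenvalue > 1
    L, lam = load[:, keep], vals[keep]
    norm = np.sqrt((L**2 * lam).sum(axis=1))   # Norm value over retained PCs
    candidates = []
    for k in range(L.shape[1]):
        a = np.abs(L[:, k])
        high = a >= 0.9 * a.max()              # within 10% of the max loading
        # simplified rule: keep the highly weighted variable with the
        # largest Norm value (the paper additionally checks correlations)
        idx = max(np.flatnonzero(high), key=lambda i: norm[i])
        candidates.append(names[idx])
    return candidates

# Synthetic 6-variable data (illustrative only, not the study's 13 attributes)
rng = np.random.default_rng(1)
base = rng.normal(size=(60, 2))
X = np.column_stack([base[:, 0], base[:, 0] + 0.1 * rng.normal(size=60),
                     base[:, 1], base[:, 1] + 0.1 * rng.normal(size=60),
                     rng.normal(size=60), rng.normal(size=60)])
print(screen_mds(X, ["TN", "OM", "pH", "AP", "clay", "sand"]))
```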
Empirically, the main purpose of using iron tailings as the matrix to fill reclaimed farmland in areas with a sticky soil texture was to improve the soil texture, with the downside of a weak capacity to hold fertilizer. Clay and sand are basic elements reflecting soil texture, while TN and AP are basic elements sustaining crop growth. Accordingly, TN, AP, clay, and sand are suitable and essential for evaluating the soil quality of farmland reclaimed with iron tailings.

Soil Quality Evaluation Based on MDS

When PCA was performed again on the four SQI evaluation indicators selected for the MDS, each PC explained a certain proportion (%) of the variation in the dataset (Table 3). TN made the highest contribution to the SQI, with a communality of 0.833, followed by sand with 0.824; the communalities of clay and AP were 0.559 and 0.393, respectively. The weights of TN, AP, clay, and sand were 0.319, 0.151, 0.214, and 0.316, respectively.
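The weights quoted above follow directly from Equation (4): each indicator's communality divided by the sum of the communalities. A quick arithmetic check against the reported values:

```python
# Communalities (common factor variances) reported in the text
communalities = {"TN": 0.833, "sand": 0.824, "clay": 0.559, "AP": 0.393}
total = sum(communalities.values())                     # sum C = 2.609
weights = {k: round(v / total, 3) for k, v in communalities.items()}
print(weights)  # TN 0.319, sand 0.316, clay 0.214, AP 0.151, as reported
```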
We calculated the score of each indicator and summed the weighted scores to obtain the SQI of each sample plot at the 0-10 cm, 10-20 cm, and 20-30 cm depths (Figure 7). SQI was significantly higher (p < 0.05) at 0-10 cm (0.454-0.636) than at 10-20 cm (0.383-0.528) and 20-30 cm (0.262-0.504) in each sample plot. SQI of NF was 0.542, 0.528, and 0.262 at 0-10 cm, 10-20 cm, and 20-30 cm, respectively. Among the three reclamation farmlands of different years, the SQI of R5 was the highest at 0-10 cm (0.636), followed by R10 (0.597). SQI of R3 was the lowest at both 0-10 cm (0.454) and 10-20 cm (0.383), decreasing by 16.24% and 27.46%, respectively, compared with NF, but increasing by 24.05% at 20-30 cm. SQI of R5 was significantly improved (p < 0.05): at 0-10 cm and 20-30 cm it was significantly higher than that of normal farmland, by 17.34% and 92.37%, respectively, and it recovered to the normal-farmland level at 10-20 cm. SQI at 0-10 cm and 20-30 cm of R10 was also significantly better than that of NF, by 10.15% and 86.26%, respectively, but slightly lower than that of R5. There were significant differences between R5 and R3 but not between R5 and R10.

Applicability Verification of Soil Quality Evaluation Method Based on MDS

Generally, soil quality can be evaluated with high accuracy through the TDS of soil quality evaluation indicators. However, with numerous indicators, the experimental analysis is complicated and time-consuming. The indicator dataset can be simplified through a series of statistical analyses, but this leads to a decrease in evaluation accuracy. Therefore, it is necessary to verify the applicability of the MDS of evaluation indicators in a specific region or for a specific soil. The common factor variance of each indicator of the TDS was obtained by PCA, and the weight of each indicator of the TDS was then obtained (Table 3).
The same method was then used to analyze the soil quality of the TDS. The SQI based on the MDS (MDS-SQI) and the SQI based on the TDS (TDS-SQI) were used in a regression analysis to verify the accuracy of the comprehensive soil quality value based on the MDS (Figure 8). MDS-SQI and TDS-SQI satisfied a linear regression relationship (p < 0.01), with a correlation coefficient of 0.840. The regression equation is y = 0.840x + 0.147 (n = 36, R² = 0.712, p < 0.01), where y represents TDS-SQI and x represents MDS-SQI. This analysis shows that the MDS can replace the TDS and that the quality evaluation of farmland soil reclaimed with iron tailings through the MDS indicator system has high accuracy.
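The validation step amounts to an ordinary least-squares fit of TDS-SQI on MDS-SQI; a sketch with synthetic paired scores (hypothetical values, not the study's 36 samples):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mds_sqi = rng.uniform(0.25, 0.65, size=36)                  # hypothetical MDS-based scores
tds_sqi = 0.84 * mds_sqi + 0.15 + rng.normal(0, 0.03, 36)   # noisy TDS-based scores

fit = stats.linregress(mds_sqi, tds_sqi)   # y = slope * x + intercept
print(f"y = {fit.slope:.3f}x + {fit.intercept:.3f}, "
      f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.2e}")
# A slope near 1, a high R^2, and p < 0.01 indicate that the reduced
# indicator set reproduces the full-dataset index, as reported in the text.
```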
Chronosequence Evolution of Soil Quality in Reclaimed Farmland

Soil is an interconnected system, and the reconstruction process involves a series of chain reactions that take a certain amount of time [1]. Soil texture was improved after reclamation with iron tailings. Compared with NF, the most notable features of the reclaimed soil in each reclamation year were high sand and low clay contents. Sand had the largest value in each reclaimed soil, and clay was significantly lower than in NF (Figure 7). This is because the soil in the study area is cinnamon soil with a high clay content. The iron tailings were filled under the topsoil and were mixed into the topsoil by tillage during crop planting. The iron tailings were mostly irregular granules [47] with a specific surface area larger than that of clay particles, which increased the sand content. Meanwhile, the large amount of iron tailings reduced the clay content and regulated the soil mechanical composition, which fully demonstrates that reclaiming farmland with iron tailings can effectively improve the soil texture in western Liaoning; these results are consistent with those of Yang [48].
The reclaimed soil quality was poor in the early stage of reclamation; the SQI of R3 was significantly lower than that of NF because TN and AP of R3 were significantly lower than those of NF (Figure 5). Soil nitrogen and phosphorus are essential elements for plant growth, and the phosphorus content affects soil fertility and physical and chemical characteristics such as SWC, pH, and OM [49]. Although TN of the reclaimed soil showed an overall upward trend with the reclamation year, it was always lower than that of NF; TN of R10 was only 53% of NF, so the nitrogen supply capacity remained markedly poor. The variation trend of AP was consistent with that of TN, rising with the reclamation year but remaining below NF; AP of R10 was only 70% of NF, so the phosphorus supply capacity was likewise poor. This is consistent with Li et al. [50], Duo and Hu [51], and Li et al. [52], who found that TN and AP trend upward with the reclamation year but always remain lower than in normal farmland. The texture of iron tailings is sandy, and their water and fertilizer conservation abilities are poor; the nutrient content of the reclaimed soil was low, and the recovery time was long. The accumulation of soil nutrients is a long-term process that needs to be promoted by changing the irrigation mode, rational planting, and following the principle of applying fertilizer in small amounts and multiple applications. After 5 years of reclamation, SQI was significantly higher than that of NF, indicating that reclamation with iron tailings has a great influence on the comprehensive quality of the soil, and the reclaimed soil after 5 years can reach or even exceed the comprehensive quality of normal farmland in the region. The results of Mukhopadhyay et al. [6] also showed that the quality of reclaimed soil improved with the increase in reclamation year. Cao et al. [53] reported that reclaimed soil was largely restored after 12 years of reconstruction, although the recovery was not complete.
Effect of Profile Configuration on Reclaimed Soil Quality

Reclaimed soil quality generally recovers better with the increase in reclamation year [6], yet the SQI of R10 was slightly lower than that of R5. Comparing their profile configurations, R5 had 30 cm of topsoil covering the iron tailings, while R10 had 50 cm. Field investigation showed that local farmers plow the farmland to a depth of 30-40 cm with a rotary tillage machine in spring. Therefore, with 30 cm of topsoil, iron tailings are mixed into the topsoil during plowing, which effectively increases the sand content and improves the sticky soil texture; the amount of iron tailings mixed into the topsoil increases with the years of cultivation, so the soil texture becomes progressively better (Figure 9). With 50 cm of topsoil, however, it is difficult for the iron tailings to be mixed into the topsoil by plowing, so their effect in improving the sticky soil texture is poor. In addition, the sticky soil texture in the region leads to poor soil water retention capacity, and aridity is the primary limiting factor of regional farmland quality [48]. In the reclamation process, the role of filling iron tailings in the middle layer is to use their pore structure to build a water-retention layer and improve the water-holding capacity of the reclaimed soil. The results of Jia [54] showed that, in farmlands with a soil texture similar to that of our study area, the infiltration depth is 20 cm when the rainfall is less than 20 mm, 40 cm when the rainfall is 20-50 mm, and below 40 cm when the rainfall exceeds 50 mm. The mean annual precipitation in Jianping County is 467 mm, with the highest monthly rainfall in July, about 137 mm. Since the maximum individual rainfall is generally less than 50 mm, the infiltration depth of an individual rainfall event in the study area is estimated not to exceed 40 cm.
Therefore, soil moisture migration might make it difficult for water to reach the iron-tailings layer under 50 cm of topsoil, which makes the water-retention effect of the iron-tailings layer poor, whereas with 30 cm of topsoil the soil moisture could reach the iron-tailings layer and be effectively retained, thereby promoting nutrient cycling in the soil (Figure 9). The amount of iron tailings mixed into the topsoil increases with the years of cultivation, which makes the soil moisture retention ability stronger and the nutrient cycling ability better. As shown in Figure 2b, SWC after 5 years of reclamation was significantly higher than that after 3 years. Our result differs from the previous view that, in land reclamation, the thicker the reclaimed soil, the better [55].
For semi-arid regions, the topsoil thickness used when reclaiming with iron tailings is not a case of "the thicker, the better"; rather, a thickness suited to regional conditions is best. In western Liaoning, based on regional soil characteristics and tillage practice, when iron tailings are used as mine soil to reconstruct farmland, 30 cm of topsoil covering the iron tailings can achieve a good reclamation effect, and with the increase in reclamation year it is conducive to improving the quality of the reclaimed soil. This reclamation pattern consumes large amounts of iron tailings and forms a soil water-retaining layer, which meets the requirements of regional crop cultivation and produces a good reclamation effect. Moreover, reducing the coverage thickness of the topsoil can effectively save soil resources and reclamation costs [56].

The Variation Regulation of Soil Quality in Vertical Profiles with Different Reclamation Years

The MDS-SQI of the reclaimed farmland was significantly higher (p < 0.05) at 0-10 cm (0.454-0.636) than at 10-20 cm (0.383-0.514) and 20-30 cm (0.325-0.504) (Figure 10). The MDS-SQI observed for NF was 0.542, 0.528, and 0.262 at 0-10 cm, 10-20 cm, and 20-30 cm, respectively. This shows that the quality of the topsoil in reclaimed soil was higher than that of the subsoil, consistent with a similar study on reclaimed sites [57]. For all soil layers, the overall trend of SQI followed the order R3 < NF < R10 < R5. After 5 years of reclamation with iron tailings, the comprehensive soil quality at each depth could reach or even exceed the level of regional normal farmland.
Key Indicator Identification and Policy Implications

Using a series of soil quality indicators, the effect of filling reclamation with iron tailings on soil quality was studied through the SQI method based on the MDS, and the applicability of this method in the region was verified. The MDS was screened by PCA combined with the Norm value; the Norm value was introduced to account for the loading of each indicator on all principal components and thus avoid losing information from the other principal components [58].
A summary of relevant research on soil quality evaluation based on the MDS by Bünemann et al. [37] showed that bulk density, pH, organic matter, sand percentage, silt percentage, total nitrogen, available phosphorus, and soil water content are used with high frequency. The three indicators (sand, TN, AP) determined in our study were consistent with most of these results. In addition, clay was selected for the MDS in our study, indicating that, besides sand, TN, and AP, the effect of clay content on soil quality in the study area was also significant, which is determined by the regional natural conditions and soil characteristics. The study area is a typical semi-arid region, the soil texture is sticky, and filling reclamation with iron tailings changes the soil texture. Therefore, the four MDS indicators selected in our study are the key indicators affecting the quality of reclaimed soil in western Liaoning. Traditional filling reclamation materials such as fly ash and gangue usually contain harmful heavy metal elements such as Cd, Pb, and Hg, and the reclaimed soil poses a potential ecological hazard [26]. In addition, if these heavy metals diffuse with runoff, they will also pollute a wider range of soil [51].
The results of our previous study found that the contents of heavy metals such as Cd, Cr, Cu, Zn, Pb, Ni, Hg, and As did not exceed the risk intervention values for pollution of agricultural land in China [59] and were even lower than the heavy metal contents of the regional native soil, showing no toxicity and constituting no source of pollution for the soil and crops [48]. In addition, our previous column leaching tests also showed that the heavy metal content of the leachate was very low and met the standard [59]; in particular, the contents of Cr, Pb, and Cd were so low as to be almost undetectable. This again proves that it is feasible to use iron tailings as a reclamation material for mine wasteland, which can increase the farmland area, reduce tailings accumulation, and save the cost of tailings pond management. In general, our study evaluated the reclaimed soil quality after the implementation of different iron-tailings farmland reconstruction techniques in order to identify the most effective reclamation techniques. The results can also provide valuable policy implications for improving the treatment of waste iron tailings, guiding the formulation of land reclamation technical standards, and promoting ecological restoration of mining areas. For example, in the future formulation of relevant technical standards for land reclamation in China, the appropriate thickness of topsoil cover should be reasonably determined according to the actual situation of the reclamation areas. In addition, the research results play a positive guiding role in the formulation of technical policies for the "harmless", "reduction", and "resource utilization" of solid waste in China.
Furthermore, through wide promotion and application of this technology, using iron tailings to reclaim historical legacy mines in the region can effectively solve the problems of limited reclamation resources and the shortage of restoration funds faced by local governments in the ecological restoration of historical legacy mines, and improve the comprehensive utilization value of abandoned lands. Our research results are of great significance for promoting the ecological restoration of mines, realizing the sustainable development of ecological civilization construction, and supporting the "UN Decade on Ecosystem Restoration" action.
Conclusions
Iron tailings were confirmed to be suitable as soil substitutes for constructing the soil profile configuration of reclaimed farmland. The comprehensive quality of the reclaimed soil improved with reclamation year, but it had not reached the level of regional normal farmland after 3 years of reclamation. The soil quality after 5 years of reclamation was better than that of normal farmland. The SQI of R10 was also better than that of NF, but slightly lower than that of R5. The quality of the topsoil was better than that of the subsoil in the same reclaimed farmland. The thickness of the topsoil affected the reclaimed soil quality: the soil quality under 30 cm of topsoil cover after 5 years of reclamation was better than that under 50 cm of topsoil cover after 10 years of reclamation. For semi-arid regions with a sticky soil texture, thicker reclaimed topsoil is not necessarily better. Covering with 30 cm of topsoil after iron tailings filling in western Liaoning could achieve a better reclamation effect; the topsoil texture was improved, and the reclamation cost was effectively reduced. Our study mainly analyzes the effect of measures already completed by regional mines to reclaim farmland with iron tailings.
The study was not based on a strict and systematic experimental design; rather, targeted research was conducted according to the actual reclamation process with iron tailings. Based on the results of our study, we expect to establish a test site for systematic research in the study area in the future, to provide a basis for improving the theory of ecological restoration and the technology of waste resource utilization in mining areas.
On $3$-graphs with no four vertices spanning exactly two edges
Let $D_2$ denote the $3$-uniform hypergraph with $4$ vertices and $2$ edges. Answering a question of Alon and Shapira, we prove an induced removal lemma for $D_2$ having polynomial bounds. We also prove an Erd\H{o}s-Hajnal-type result: every induced $D_2$-free hypergraph on $n$ vertices contains a clique or an independent set of size $n^{c}$ for some absolute constant $c>0$. In the case of both problems, $D_2$ is the only nontrivial $k$-uniform hypergraph with $k\geq 3$ which admits a polynomial bound.
Introduction
The famous triangle removal lemma of Ruzsa and Szemerédi [26] started a new chapter in combinatorics. It states that if an n-vertex graph G contains at most δ(ε)n^3 triangles, then G can be made triangle-free by deleting at most εn^2 edges. A similar statement holds when the triangle is replaced with an arbitrary graph F. Alon, Fischer, Krivelevich and Szegedy [2] proved an analogous result for the much more challenging setting of induced subgraphs. This result, known as the induced removal lemma, states that if G contains at most δ_F(ε)n^{v(F)} induced copies of a graph F, then G can be made induced F-free by adding/deleting at most εn^2 edges. A generalization to arbitrary hereditary graph properties was later obtained by Alon and Shapira [5]. For more on graph removal lemmas, we refer the reader to [12]. One of the major developments in extremal combinatorics in the last twenty years was the establishment of a hypergraph version of Szemerédi's regularity lemma, which made it possible to extend the results mentioned in the previous paragraph to k-uniform hypergraphs. A hypergraph analogue of the graph removal lemma was proved by Gowers [20,21] and independently by Nagle, Rödl, Schacht and Skokan [23,25].
An analogue of the induced removal lemma, and more generally the Alon-Shapira theorem, was then obtained by Avart, Rödl and Schacht [9] for 3-uniform hypergraphs and by Rödl and Schacht [24] in the general case. As an example, for a k-uniform hypergraph F , the induced F -removal lemma states that if a k-uniform hypergraph H contains at most δ F (ε)n v(F ) induced copies of F , then H can be made induced F -free by adding/deleting at most εn k edges. The proofs of all of the above results rely on the regularity lemma of Szemerédi [29] or generalizations thereof. Consequently, the bounds on δ(ε) supplied by these proofs are quite poor. Even in the case of the triangle removal lemma, the best known bound, due to Fox [17], is 1/δ ≤ tower(O(log 1/ε)), where tower(x) is a tower of x exponents. Still, in some cases better bounds are known. This raised the natural question of characterizing the cases where the removal with the exception of F = C 5 . Let G ∼ G(n, 1/2) be the random graph, where each edge is present independently with probability 1/2, and let H be the k-uniform hypergraph on V (G), whose edges are the k-element cliques of G. Then H has only polylogarithmic sized homogeneous sets. Note that if k = 3, then H contains no induced copy of C 5 . Also, if H contains an induced copy of a k-uniform hypergraph F with k + 1 vertices, then |E(F )| ∈ {0, 1, 2, k + 1}. But then, if the complement of H contains F , then |E(F )| ∈ {0, k −1, k, k +1}. Therefore, if neither H nor the complement of H avoids F , then F is the complete or empty k-uniform hypergraph on k + 1 vertices, or k = 3 and F = D 2 . In the case F is the complete or empty hypergraph, the claim is well known, see e.g. [15]. Some further families of hypergraphs with polynomial homogeneous sets are studied in [10,22,27]. Our paper is organized as follows. In the next subsections, we introduce our notation, and outline our proof. 
Then, in the next section, we prove several lemmas which will form the backbone of the proofs of our main theorems. We then prove Theorem 1.1 in Section 3 and Theorem 1.2 in Section 4. Notation and preliminaries For a graph G and a set X ⊆ V (G), the density of X is d(X) = e(X) , where e(X) is the number of edges of G contained in X. For a pair of disjoint sets X, Y ⊆ V (G), the density of (X, Y ) is We also use similar definitions for hypergraphs. For a 3-uniform hypergraph H and , where e(X, X, Y ) is the number of edges which have two vertices in X and one in Y . We say that (X, is 0-homogeneous then we simply say that (X, Y ) is homogeneous. Recall that a graph G is called a cograph if either |V (G)| = 1 or G can be obtained from two smaller vertex-disjoint cographs G 1 , G 2 by placing a complete or empty bipartite graph between V (G 1 ) and V (G 2 ). It is well-known that cographs are perfect, implying that every cograph on n vertices contains a clique or an independent set of size at least √ n. It is also well-known [28] that a graph G is a cograph if and only if it is induced P 4 -free, where P 4 is the path with 4 vertices. Alon and Fox [3] proved a polynomial removal lemma for cographs: ). For every ζ > 0 there is δ = δ cograph (ζ) = poly(ζ) > 0 such that if an n-vertex graph G contains at most δn 4 induced copies of P 4 , then G can be turned into a cograph by adding/deleting at most ζn 2 edges. Outline of the proofs Let us give a rough outline of the proofs of our main theorems. In order to prove Theorems 1.1 and 1.2, we establish certain structural results about 3-graphs with no (or few) induced copies of D 2 . The first crucial observation is that if H is D 2 -free, then the link graph of every vertex is a cograph. 
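The cograph facts used above (a graph is a cograph if and only if it has no induced P4, and cographs arise from single vertices by disjoint unions and complete joins) are easy to sanity-check by brute force on small graphs. A sketch; the graphs below are toy examples, not objects from the proof:

```python
from itertools import combinations, permutations

def has_induced_p4(adj):
    """Brute-force check: does the graph (dict vertex -> neighbour set)
    contain an induced path on 4 vertices (P4)?"""
    for quad in combinations(list(adj), 4):
        present = {frozenset(e) for e in combinations(quad, 2) if e[1] in adj[e[0]]}
        for a, b, c, d in permutations(quad):
            if present == {frozenset((a, b)), frozenset((b, c)), frozenset((c, d))}:
                return True
    return False

def disjoint_union(g1, g2):
    # assumes g1 and g2 use disjoint vertex labels
    return {**g1, **g2}

def complete_join(g1, g2):
    # keep all edges inside each part and add every edge between the parts
    return {**{v: n | set(g2) for v, n in g1.items()},
            **{v: n | set(g1) for v, n in g2.items()}}

single = lambda v: {v: set()}

# A cograph built bottom-up from single vertices by unions and joins
cog = disjoint_union(
    complete_join(disjoint_union(single(0), single(1)),
                  disjoint_union(single(2), single(3))),
    complete_join(single(4), single(5)))

# P4 itself, for contrast
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

Every graph produced by `disjoint_union`/`complete_join` from single vertices passes the P4-free check, while `p4` fails it, matching the characterization quoted above.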
The bulk of the work is then put into proving the following: if |V (H)| = n and H contains (only) δn 4 induced copies of D 2 , then there is a partition V (H) = X ∪ Y ∪ S such that |S| ≤ ξn (which we view as a set of 'leftovers'), |X |, |Y| > ξn, and (X , Y) is ε-homogeneous, provided that δ ≤ (ξε) c for some absolute constant c > 0. This is essentially done in Lemma 2.6. By recursively applying this partition result, we arrive to a partition V (H) = X 1 ∪ · · · ∪ X k ∪ S ′ and a cograph C on vertex set [k] with the following properties: Controlling the densities of pairs (X i , X j ) and using the fact that the hypergraph has few induced D 2 's then allows us to also control the densities of triples (X i , X j , X k ). In order to prove Theorem 1.1, we do some cleaning up with the help of this partition. To prove Theorem 1.2, we observe that we can choose ξ, ε = n −Ω (1) , and show that a large clique or independent set in C corresponds to a large clique or independent set in H. The Main Lemmas In this section, we prove some lemmas regarding the structure of hypergraphs with no (or few) induced copies of D 2 . For a 3-uniform hypergraph H and v ∈ V (H), denote by L(v) the link graph of v. Proof. Let v ∈ V (H), and suppose by contradiction that a, b, c, d is an induced path in L(v). In order for v, a, b, c not to span an induced copy of D 2 , we must have {a, b, c} ∈ E(H). Similarly, {b, c, d} ∈ E(H), and also, {a, b, d}, {a, c, d} / ∈ E(H). But now a, b, c, d span an induced copy of D 2 , a contradiction. , then one of the following holds: Proof. We will only prove the assertion in the case that 80 then follows by considering the complement of H and switching the roles of X and Y . We assume Items 2,3 do not hold, and show that then Item 1 must hold. Let us first show that d(Y, Y, X) ≥ 1 − , z ∈ Z uniformly at random and independently. 
Let A be the event that the following two items hold: By assumption, (a) holds with probability larger than γ 2 holds with probability at least 1−5· γ 2 80 = 1− γ 2 16 . Hence, P[A] ≥ γ 2 16 . Therefore, the number of 4-tuples (x, {y, y ′ }, z) satisfying A is at least γ 2 16 |X| |Y | 2 |Z| ≥ γ 2 64 |X||Y | 2 |Z|. We will now show that if A happens then H[{v, x, y, y ′ , z}] contains an induced copy of D 2 . This would imply that either H contains at least γ 2 128 |X||Y | 2 |Z| induced copies of D 2 , or H contains at least γ 2 128 · 1 n · |X||Y | 2 |Z| induced copies of D 2 which contain v. In either case, we will obtain a contradiction to the assumption that Items 2 and 3 do not hold. So suppose that A happens. Note that Now consider several cases. If {x, y, z} / ∈ E(H) then v, x, y, z span an induced copy of D 2 . So we may assume that {x, y, z} ∈ E(H), and similarly {x, y ′ , z} ∈ E(H). If {v, y, y ′ } / ∈ E(H) then v, x, y, y ′ span an induced copy of D 2 , so we may assume that {v, y, y ′ } ∈ E(H). Finally, if {y, y ′ , z} ∈ E(H), then v, y, y ′ , z span an induced copy of D 2 , and if {y, y ′ , z} / ∈ E(H), then x, y, y ′ , z span an induced copy of D 2 , proving our claim. This means that when sampling {x, Next, in the following lemma, we show that cographs have partitions in which all pairs of parts, except for pairs which form a matching, are homogeneous. We will also guarantee some additional structure on the non-homogeneous pairs. Lemma 2.4. Let 1 ≤ m < n be integers, let 0 < β < 1, and let G be a cograph on n vertices. Then there is a partition 2 forming a matching, such that the following holds: For every and one of the following holds: Proof. The proof is by induction on n. The base case n = 2 is trivial. Given G, define sets A 1 , A 2 , . . . as follows. Suppose that the process stopped after k steps, and consider the sets is either complete or empty. 
Let I + be the set of indices 1 ≤ i ≤ k − 1 for which this bipartite graph is complete, and I − the set of 1 ≤ i ≤ k − 1 for which this bipartite graph is empty. Order the elements of X ∪ Y = A 1 ∪ · · · ∪ A k−1 such that the elements of A i come before the elements of A j for every 1 ≤ i < j ≤ k − 1, and partition X ∪ Y into r intervals I 1 , . . . , I r according to this order, with |I i | = ⌊|X ∪ Y |/r⌋ or |I i | = ⌈|X ∪ Y |/r⌉ for every i, and with |I 1 | ≥ · · · ≥ |I r |. Observe that for 1 ≤ i < j ≤ r, the bipartite graph ( is complete, and the bipartite graph (Y ∩ I i , X ∩ I j ) is empty. Suppose without loss of generality that |X ∩ I 1 | ≥ |I 1 |/2; the case that |Y ∩ I 1 | ≥ |I 1 |/2 is symmetric. Let s be the largest integer satisfying |Y ∩ I s | > β|I s |; if no such s exists then set s = 0. Set X 1 = X ∩ (I 1 ∪ · · · ∪ I s−1 ), Similarly, as |Y ∩ I s | > β|I s |, it must be that I s ⊆ Y , and hence X ∩ I s = ∅. So in both cases, |Y ∩ I 1 |, |X ∩ I s | ≤ 2βm. It follows that after adding the above sets to S, we have |S| < 5βm. For each of the sets X 1 , X 2 , Y 1 , if it has size less than βm then put its elements into S. After this step, we have |S| < 8βm. Note that the bipartite graph (Y 1 , X 2 ) is empty. Also, the bipartite graph ( For each i = 1, 2, we do the following. If |Z i | < βm then place Z i into S. Following this step, |S| < 10βm. If |Z i | > m then apply the induction hypothesis to G[Z i ] (with the same parameters m and β) to obtain a partition where the second inequality holds by the general inequality ⌈x⌉ + ⌈y⌉ ≤ ⌈x + y⌉ + 1. Add all elements of S 1 ∪ S 2 into S. We now have |S| ≤ (2⌈ n m ⌉ − 3) · 10βm, as required. Our partition of V (G) \ S consists of the parts U i,1 , . . . , U i,t i (for i = 1, 2 for which |Z i | > m); of Z i for i = 1, 2 with βm ≤ |Z i | ≤ m; and of those sets among X 1 , Y 1 , X 2 which were not placed into S. 
The matching M consists of M_1 ∪ M_2 and the edge {X_1, Y_1} (unless one of X_1, Y_1 was placed into S). It is easy to see that all requirements in Items 1-4 are satisfied. In the following lemma (Lemma 2.5) we consider a graph G with a weight function w : V(G) → [0, 1], writing w(S) = Σ_{v∈S} w(v) for S ⊆ V(G); we assume that w(V(G)) = 1 and that w(v) ≤ 1 − β for every v ∈ V(G). Then there is a partition V(G) = I ∪ J ∪ L such that w(I), w(J) ≥ β/2, w(L) < β, and the bipartite graph between I and J is complete or empty. Proof. Define sets A_1, A_2, . . . as follows. For each i ≥ 0, if w(A_1 ∪ · · · ∪ A_i) ≥ β then stop. Otherwise let A ∪ B be a partition of V(G) \ (A_1 ∪ · · · ∪ A_i) such that the bipartite graph (A, B) is complete or empty and w(A) ≤ w(B); set A_{i+1} = A. Such a partition (A, B) exists because if w(A_1 ∪ · · · ∪ A_i) < β, then there are at least two vertices outside of A_1 ∪ · · · ∪ A_i (since w(v) ≤ 1 − β for every v ∈ V(G)). This process has to stop at some point. Suppose that the process stopped after k steps, and consider the sets A_1, . . . , A_k. By definition, we have w(A_1 ∪ · · · ∪ A_{k−1}) < β and w(A_1 ∪ · · · ∪ A_k) ≥ β. For each 1 ≤ i ≤ k, the bipartite graph between A_i and V(G) \ (A_1 ∪ · · · ∪ A_i) is either complete or empty. We consider two cases. If w(A_k) ≥ β, then I = A_k, J = V(G) \ (A_1 ∪ · · · ∪ A_k), L = A_1 ∪ · · · ∪ A_{k−1} satisfy the assertion of the lemma. Note that w(J) ≥ w(I) by construction. Suppose now that w(A_k) < β. Then w(A_1 ∪ · · · ∪ A_k) < 2β. Let I^+ be the set of 1 ≤ i ≤ k for which the bipartite graph between A_i and V(G) \ (A_1 ∪ · · · ∪ A_i) is complete, and let I^− be the set of 1 ≤ i ≤ k for which this bipartite graph is empty. Without loss of generality, w(⋃_{i∈I^+} A_i) ≥ β/2. Also, the bipartite graph between I and J is complete. The following is the main lemma of the paper, and will be used in the proofs of Theorems 1.1 and 1.2. As the proof of Lemma 2.6 is somewhat technical, let us give a rough outline. First, we apply Theorem 1.4 to L(v) to turn it into a cograph G using few edge additions/deletions.
This is possible because L(v) contains few induced P 4 's because of Lemma 2.1 and Items 2-3 in Lemma 2.6. Then we apply Lemma 2.4 to the cograph G to obtain the partition S ∪ V 1 ∪ · · · ∪ V t . We consider the reduced graph on [t]; in this graph, {i, j} is an edge if (V i , V j ) is a complete bipartite graph, and a non-edge if it is an empty bipartite graph. For pairs {i, j} ∈ M, we use Item 4 in Lemma 2.4 to define the adjacency. We group 1, . . . , t into groups I 1 , . . . , I r such that if i, j are in the same group then they have the same relation to all other vertices in the reduced graph. Each group I a forms a clique or an independent set in the reduced graph. Consequently, no single group I a can span almost all of the vertices of G, because otherwise G (and hence also L(v)) would have density very close to 0 or 1, contradicting Item 1 in the lemma. If i, j belong to different groups, then there is some k such that i, j relate differently to k, and then we can use Lemma 2.3 to deduce that (V i , V j ) is γ-homogeneous for an appropriate γ. (This argument does not apply if {i, j} ∈ M, but there are very few such pairs, and so their contribution is negligible.). Then, using Lemma 2.2, we can deduce that almost all triples (V i , V j , V k ) are ε-homogeneous. This already gives us a lot of structure on the hypergraph H. As a final step, we group V 1 , . . . , V t into just two groups, which will correspond to the sets X , Y in the statement of Lemma 2.6, by applying Lemma 2.5 to the reduced graph (or, more precisely, to the graph on I 1 , . . . , I r derived from the reduced graph). The full proof follows. Proof of Lemma 2.6. Set α := ξ 2 ε 800 , β := ξ 240 , γ := ε 2 32 , ζ := α 2 β 4 γ 2 160 , δ := min Here, δ cograph is from Theorem 1.4. We claim that L(v) has at most 4δ(n − 1) 4 induced copies of P 4 . Indeed, for each copy X of P 4 , it holds by Lemma 2.1 that X ∪ {v} contains an induced copy of D 2 . 
Hence, either X spans an induced copy of D 2 , or there is an induced copy of D 2 consisting of v and 3 vertices from X. If at least half of the copies X of P 4 are of the first kind, then we get 2δ(n − 1) 4 ≥ δn 4 induced copies of D 2 in H, a contradiction. And if at least half are of the second kind, then we get at least 2δ(n − 1) 3 ≥ δn 3 induced copies of D 2 containing v, again giving a contradiction. By our choice of δ via Theorem 1.4, we get that L(v) can be turned into a cograph G by adding/deleting at most ζ(n − 1) 2 edges. Apply Lemma 2.4 to G with m := ⌈α(n − 1)⌉ and β as above, to obtain a partition V (G) = S ∪ V 1 ∪ · · · ∪ V t and a matching M ⊆ Then ∼ is an equivalence relation. Let I 1 , . . . , I r be the equivalence classes of ∼; so [t] = I 1 ∪ · · · ∪ I r . Note that for every 1 ≤ a < b ≤ r, the bipartite graph between I a and I b in K is complete or empty. Let F be the corresponding reduced graph on [r]; that is, {a, b} ∈ E(F ) if the bipartite graph between I a and I b is complete and {a, b} / ∈ E(F ) if this bipartite graph is empty. Let us define sets U i ⊆ V i , i ∈ [t], as follows. If there is j such that {i, j} ∈ M, then take U i to be the set V ′ i from Item 4 in Lemma 2.4, and otherwise take U i = V i . Observe that by the definition of K and F , for all 1 ≤ a = b ≤ r and i ∈ I a , j ∈ I b we have that ( Note also that by Items 2 and 4 in Lemma 2.4, and as m ≥ α(n − 1), we have for every i ∈ [t]. Claim 2.7. F is a cograph. Proof. Suppose, for the sake of contradiction, that (a, b, c, d) is an induced path in F for some a, b, c, d ∈ [r]. Fix i ∈ I a , j ∈ I b , k ∈ I c , ℓ ∈ I d . We saw above that the bipartite graphs (U i , U j ), (U j , U k ), (U k , U ℓ ) are complete in G and the bipartite graphs (U i , U k ), (U j , U ℓ ), (U i , U ℓ ) are empty in G. It follows that G contains an induced path on 4 vertices (every quadruple in U i × U j × U k × U ℓ spans such a path), in contradiction to G being a cograph. Proof. 
Suppose, for the sake of contradiction, that |W a | ≥ (1 − ξ 3 )(n − 1). Observe that I a is either a clique or an independent set in K. Let us assume that it is a clique (the other case can be handled symmetrically). This means that {i, j} ∈ E(K) for all i, j ∈ I a . By the definition of K, we have that (V i , V j ) is complete in G for all {i, j} ∈ Ia 2 \ M. By Item 2 in Lemma 2.4 we have that Moreover, the number of non-edges in G which touch It follows that the total number of non-edges in G is at most ( ξ 3 + 2α)(n − 1) 2 . Since G and L(v) differ on at most ζ(n − 1) 2 edges, the total number of non-edges in L(v) is at most ( ξ 3 + 2α + ζ)(n − 1) 2 < ξ n−1 2 . Hence, d(L(v)) > 1 − ξ, contradicting condition 1. We will apply Lemma 2.5 to the cograph F . Define the weight function w on where in the last inequality we used our choice of β. Apply Lemma 2.5 to the cograph F with parameter ξ 4 to obtain a partition V (F ) = [r] = I ∪ J ∪ L as in that lemma. Set X = a∈I W a , Y = a∈J W a and S = a∈L W a . Note that |S| ≤ ξn/4 and |X |, |Y| ≥ ξ Place the elements of S ∪ {v} into S. After this step, we have |S| ≤ ξ 4 n + 20βn + 1 ≤ ξ 2 n. As guaranteed by Lemma 2.5, the bipartite graph between I and J (in F ) is either complete or empty; suppose without loss of generality that it is complete (the other case can be handled symmetrically). Set I = a∈I I a and J = a∈J I a . In other words, I is the set of i ∈ [t] such that V i ⊆ X , and similarly J is the set of i ∈ [t] such that V i ⊆ Y. Note that the bipartite graph between I and J in the graph K is complete. Proof. By Claim 2.9, we have d( . Now apply Lemma 2.2 with X = V i , Y = V j , Z = V k and parameter ε 2 . If Item 2 in Lemma 2.2 holds then H contains at least (ε/2) 2 16 |V i | 2 |V j ||V k | ≥ ε 2 64 (αβ(n − 1)) 4 ≥ δn 4 induced copies of D 2 , a contradiction. Hence Item 1 in Lemma 2.2 holds, giving d( By symmetry, we have the following: Claim 2.11. Let j, k ∈ I, i ∈ J with j = k and {i, j}, {i, k} / ∈ M. 
Then d(V i , V j , V k ) ≥ 1 − ε 2 . By Claims 2.9 and 2.10, the number of triples {x, y 1 , y 2 } with x ∈ X and y 1 , y 2 ∈ Y which are non-edges of H, is at most where in the last inequality we used (3). It follows that Here, the last inequality uses our choice of α. Similarly, by using Claims 2.9 and 2.11, one establishes that d(X , X , Y) ≥ 1 − ε. Hence, (X , Y) is ε-homogeneous, as required. Proof of Theorem 1.1 We start with the following corollary of Lemma 2.6. Let us assume, without loss of generality, that d( Define the family of cohypergraphs as follows. The 3-uniform hypergraph H is a cohypergraph if |V (H)| = 1, or V (H) has a partition X ∪ Y with X, Y = ∅ such that (X, Y ) is homogeneous and H[X], H[Y ] are cohypergraphs. The next lemma easily follows from the definitions. Proof. We prove this by induction on |V (H)|. The base case |V (H)| = 1 is trivial, so assume |V (H)| ≥ 2. Then there exists a partition X ∪ Y with X, Y = ∅ such that (X, Y ) is homogeneous and H[X], H[Y ] are cohypergraphs, and therefore D 2 -free. But then H is also D 2 -free. We remark that the converse is not true, that is, not every D 2 -free hypergraph is a cohypergraph. E.g. linear 3-graphs are D 2 -free but not necessarily cohypergraphs. However, we prove that if a hypergraph contains few induced copies of D 2 , it can be made a cohypergraph by changing few edges. Proof of Theorem 1.1. Let δ ′ denote the δ(ε) given by Lemma 3.1. We show that in Theorem 1.1, one can take δ = δ ′ ε 4 . We decompose H by repeatedly applying Lemma 3.1, as follows. It is convenient to describe the decomposition using a tree, where each node corresponds to a subset of V (H). The root is V (H). At each step, if there is a leaf X with |X| ≥ εn, then apply Lemma 3.1 to H[X], noting that H[X] contains less than δ ′ |X| 4 induced copies of D 2 by our choice of δ. Lemma 3.1 gives a partition X = X 1 ∪ X 2 such that (X 1 , X 2 ) is ε-homogeneous. Now add X 1 , X 2 as the children of X. 
When this process stops, every leaf is of size less than εn. For each non-leaf X, make the pair (X_1, X_2) homogeneous by adding/deleting at most ε · ((|X_1| choose 2)|X_2| + (|X_2| choose 2)|X_1|) edges. This requires at most ε(n choose 3) edge changes in total. Next, for each leaf X, delete every edge of H[X]. This requires at most ε²n³/6 additional edge changes. So the total number of edge-changes is at most ε(n choose 3) + ε²n³/6 ≤ εn³. After these edge-changes, it is easy to see that the resulting hypergraph is a cohypergraph, so it is also induced D_2-free by Lemma 3.2. This completes the proof. Proof of Theorem 1.2 We start with some preliminary lemmas. The following lemma shows that if the density of a hypergraph is bounded away from 0 and 1, then there is a vertex whose link graph also has density bounded away from 0 and 1. Applying Lemma 2.6 completes the proof. We will need the following well-known simple probabilistic lemma, see [7, Exercise 3 in Chapter 3]. For completeness, we include a proof. Lemma 4.3. Let H be a 3-uniform hypergraph on n vertices with density d (i.e., with d·(n choose 3) edges). Then H contains an independent set of size at least min(n/2, √(3/(4d))). Proof. If d < 3/n², then H contains at most n/2 edges, so by removing a vertex from each edge we get an independent set of size at least n/2. Suppose that d ≥ 3/n²; then p = √(3/d) · (1/n) satisfies p ≤ 1. Sample a subset X ⊆ V(H) by choosing each vertex with probability p independently. Delete one vertex from each edge contained in X to obtain an independent set. The expected size of X is np, and the expected number of edges contained in X is p³ d (n choose 3) ≤ p³dn³/6. Hence, there is a choice of X for which the resulting independent set has size at least np − p³dn³/6 = np/2 = √(3/(4d)). Let us give an outline of the proof of Theorem 1.2. The main idea is to apply Lemma 4.2 with ε = n^{−c}, for some sufficiently small constant c > 0. This way we get a partition of the vertex-set into X , Y and a "leftover set" S such that (X , Y) is ε-homogeneous and S is small.
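The deletion-method calculation in the proof of Lemma 4.3 above can be checked numerically: with the choice p = √(3/d)/n, the bound np − p³dn³/6 collapses to np/2 = √(3/(4d)), independently of n. A small sketch, assuming d is the edge density of the 3-uniform hypergraph (so it has at most d·n³/6 edges); the values of n and d are arbitrary test inputs:

```python
import math

def deletion_bound(n, d, p):
    """Expected independent-set size from the deletion method:
    keep each vertex with probability p, then delete one vertex from
    each of the (at most p^3 * d * n^3 / 6 expected) surviving edges."""
    return n * p - p**3 * d * n**3 / 6

n, d = 1000, 0.01
p = math.sqrt(3 / d) / n     # the choice made in the proof; needs d >= 3/n^2 so p <= 1
best = deletion_bound(n, d, p)

# The bound equals np/2 = sqrt(3/(4d)), with no dependence on n
assert abs(best - math.sqrt(3 / (4 * d))) < 1e-9
assert abs(best - n * p / 2) < 1e-9
```

This choice of p is picked for the clean closed form rather than exact optimality; any p of order 1/(√d·n) gives an independent set of size Θ(1/√d).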
We continue decomposing the hypergraph by applying Lemma 4.2 to the hypergraphs induced by X , Y and so on, until all parts are sufficiently small. By choosing the parameters appropriately, we can make sure that the union of all leftover sets S accumulated throughout the process is small. We then consider an auxiliary graph on the set of parts obtained in the process, in which two parts are adjacent if their density is close to 1 and non-adjacent if their density is close to 0. This graph is a cograph. Using the fact that every cograph G has a clique or independent set of size √|V(G)|, we obtain a set A of parts such that either all pairs X, Y ∈ A have density close to 1, or all such pairs have density close to 0. We then use Lemma 2.2 to deduce that all triples X, Y, Z ∈ A have density close to 1 or all have density close to 0. This way we obtain a set, namely ⋃_{X∈A} X, which has density at least 1 − n^{−c} or at most n^{−c}, where c > 0 is an appropriate small constant. Finally, using Lemma 4.3, we get the required polynomial-size clique or independent set. We would like to draw the reader's attention to the following subtlety in the outline above. Suppose that at some step of the process, we found an ε-homogeneous pair (X_1, Y_1), and then went on to decompose X_1 and Y_1 further. Suppose that X_2 ⊆ X_1 and Y_2 ⊆ Y_1 are parts found later in the process. The point is that we want the pair (X_2, Y_2) to also be nearly-homogeneous (i.e. ε′-homogeneous for some small ε′). To deduce this from the ε-homogeneity of (X_1, Y_1), we need ε to be much smaller than |X_2|/|X_1| and |Y_2|/|Y_1|. This will indeed be true because the sets X , Y given by Lemma 2.6 are guaranteed to occupy at least a ξ/10-fraction of the entire vertex-set, and ε will be chosen much smaller than ξ. This is the reason that in Lemma 2.6 we need two parameters, one governing the sizes of X and Y, and one the homogeneity. Proof of Theorem 1.2.
We may and will assume that n is large enough where needed. Let η = n^{−1/(100C_0)}, and define the following parameters: Then we can choose C such that C_0 < C < 10C_0 and also holds. Note that ξγn > n^{1/2}. Let H be a D_2-free 3-uniform hypergraph on n vertices. We will prove that H contains a homogeneous set of size Ω(√(1/η)) = Ω(n^{1/(200C_0)}). We may assume that every subset U ⊆ V(H) of size |U| ≥ γn satisfies β ≤ d(H[U]) ≤ 1 − β; otherwise we are done by Lemma 4.3. We decompose H by repeatedly applying Lemma 4.2, as follows. At each step, we will have a partition P_i of a subset of V(H). Set P_0 = {V(H)}. We also maintain a cograph G, whose vertex set will be P_i (initially G is the 1-vertex graph). For i = 1, . . . , t, do as follows: if for every U ∈ P_{i−1} it holds that |U| ≤ γn, then stop. Otherwise, let U_i ∈ P_{i−1} with |U_i| > γn. Apply Lemma 4.2 to H[U_i] to obtain a partition U_i = X_i ∪ Y_i ∪ S_i such that |X_i|, |Y_i| ≥ ξ|U_i|/10 ≥ ξγn/10, |S_i| ≤ ξ|U_i| ≤ ξn, and It is easy to see that G is a cograph throughout the process. We say that step i is good if |X_i| ≤ γ|U_i| or |Y_i| ≤ γ|U_i|. Otherwise, step i is bad. Proof. For i ≥ 0, define q_i := Σ_{U∈P_i} (|U|/n)². Evidently, 0 ≤ q_i ≤ 1 for every i. Note that for i ≥ 1, we have q_{i−1} − q_i = (|U_i|/n)² − (|X_i|/n)² − (|Y_i|/n)² > 0, where U_i, X_i, Y_i are as above. Also, if step i is bad then q_{i−1} − q_i ≥ (|U_i|/n)² · (1 − γ² − (1 − γ)²) = (|U_i|/n)² · 2(γ − γ²) ≥ (|U_i|/n)² · γ ≥ γ³. So we see that if the number of bad steps were larger than 1/γ³, then we would have q_i < 0 for some i, a contradiction. Suppose that the process ran for s steps, where 1 ≤ s ≤ t. Let S be the union of the sets S_i, 1 ≤ i ≤ s. Note that |S| ≤ t · ξn = n/2. We now show that at the end of the process, there are many parts of size at most γn. Proof. Suppose first that s < t, namely that the process stopped before step t. Then by definition, we must have |U| ≤ γn for every U ∈ P_s.
Also, |⋃_{U∈P_s} U| = n − |S| ≥ n/2. It follows that |P_s| ≥ 1/(2γ), completing the proof in this case. Suppose now that s = t. By Claim 4.4, the number of bad steps is at most 1/γ³. Hence, the number of good steps is at least t − 1/γ³ ≥ 1/(2γ). Observe that if step i is good, then after this step, we have #{U ∈ P_i : |U| ≤ γn} ≥ #{U ∈ P_{i−1} : |U| ≤ γn} + 1. It follows that there are at least 1/(2γ) sets U ∈ P_t satisfying |U| ≤ γn, as required.
Concluding remarks
In this paper we considered 3-uniform hypergraphs which have no 4 vertices spanning exactly 2 edges. More generally, for integers 3 ≤ k < ℓ and a set S ⊆ {1, . . . , (ℓ choose k)}, one can study k-uniform hypergraphs in which no ℓ vertices span exactly s edges for some s ∈ S. What can be said about the Erdős-Hajnal properties of such hypergraphs? Do polynomial Erdős-Hajnal bounds hold in the case ℓ = k + 1, S = {2, 3, . . . , k − 1} for every k?
RESEARCH Implementation of a Renal Replacement Therapy Simulation to Strengthen Essential Pharmacist Skills Objective. To assess third-year pharmacy students' knowledge and application of renal pharmacotherapy using a renal replacement therapy (RRT) simulation. Methods. A simulation was developed that involved three stations related to RRT: peritoneal dialysis, continuous renal replacement therapy (CRRT), and hemodialysis. Stations involved demonstration of each modality, literature searches for drug information questions related to renal dosing with written recommendations, and utilization of an electronic medical record (EMR) to develop a verbal Situation, Background, Assessment, Recommendation (SBAR) for a patient with chronic kidney disease (CKD). Pre- and post-simulation assessments of therapeutic knowledge of RRT were used. Results. All 174 students completed the pre- and post-simulation assessments over the course of two years. Student performance indicated significant improvement in overall knowledge based on the assessments, with significant overall differences on questions relating to indications for RRT, the type of RRT indicated for hemodynamic instability, and agents used to maintain circuit patency. Overall inter-class differences were also identified at baseline, specifically for the questions regarding indications for RRT and agents used to maintain circuit patency. Both classes showed significant improvement in overall knowledge based on the post-simulation assessments. Debrief sessions and course evaluations indicated student satisfaction with the simulation experience. Students reported that the experience met the simulation objectives. Conclusion. Participation in an RRT simulation allowed pharmacy students to apply knowledge and skills learned didactically related to renal pharmacotherapy.
INTRODUCTION The National Kidney Foundation has established clinical practice guidelines for the evaluation and management of patients with kidney disease.1,2 Despite this, renal pharmacotherapy concepts are challenging to teach and master in the Doctor of Pharmacy (PharmD) curriculum and across clinical practice. Possible reasons for this include, but are not limited to, varying methods of calculating renal clearance and variations in patient-specific parameters that may affect renal clearance. In addition, the approaches used in treating patients with acute versus chronic kidney disease (CKD) and renal replacement therapy (RRT) further add to the complexity of drug dosing and administration. As a result, both students and pharmacists may find it difficult to master the concept of appropriate drug dosing for patients with renal dysfunction. The Accreditation Council for Pharmacy Education (ACPE) Standards 2016 and the Center for Advancement of Pharmacy Education (CAPE) Educational Outcomes 2013 emphasize the importance of foundational knowledge, patient-centered care for special populations, literature evaluation, and problem-solving within the PharmD curriculum, which can be mapped to approaches used to teach renal pharmacotherapy.3,4 Similarly, the American Society of Health-System Pharmacists also identifies medication dosing in renal dysfunction as one of the 20 pharmaceutical care competencies for pharmacists.5 For these reasons, and because renal function is often affected by various comorbidities, the approach to renal pharmacotherapy instruction must be considered to accommodate various learner levels and adequately integrate multiple concepts in the didactic curriculum.
Low- and high-fidelity simulation activities provide students an opportunity to apply their knowledge and practice their problem-solving and communication skills in a protected environment.[15][16][17] Pierce and colleagues described improved student performance and enhanced perceptions from a flipped classroom model followed by patient cases to teach a renal pharmacotherapy module.14 Strohfeld and colleagues showed that case studies related to renal therapeutics followed by written pharmaceutical care plans achieved desired learning outcomes.16 Lastly, Benedict and colleagues described the use of an online patient case software following lectures related to critically ill patients with CKD and found improved knowledge depicted by pre- and post-activity assessments and test scores.17 The RRT simulation activity created at the University of South Florida (USF) College of Pharmacy was intended to integrate and assess retention of foundational concepts learned in the pharmacotherapeutics course regarding CKD and RRT and to provide students with the opportunity to further enhance their drug information and communication skills. The objective of this study was to assess students' knowledge before and after the RRT simulation in two different cohorts of third-year pharmacy (P3) students.
METHODS An RRT simulation-based learning activity was developed alongside the traditional teaching methods of the pharmacotherapeutics renal module in the second-year (P2) curriculum at the University of South Florida College of Pharmacy. Renal lecture topics included renal medicinal chemistry and pharmacology, acid and base disorders, fluids and electrolytes, acute renal failure (ARF), and chronic kidney disease. Total contact time, including case discussions, for the renal module was 10 hours in the fall semester. Other topics related to renal pharmacotherapy, such as drug dosing in renal insufficiency and drug dosing in dialysis, were addressed in the clinical pharmacokinetics and pharmacodynamics course for three lecture hours in the P2 spring semester. All lectures were taught or overseen by the same two core faculty members to ensure continuity between the content. The RRT simulation then occurred the following fall semester of the P3 year in the pharmaceutical skills course, which is intended to allow for vertical and horizontal integration of topics through active learning. This simulation allowed for higher levels of Bloom's taxonomy through application of pharmacotherapeutic knowledge and delved into more advanced clinical topics, including analysis and evaluation of scenarios involving RRT, associated clinical considerations, and proper dosage adjustments.18 Additionally, the simulation emphasized drug information and literature evaluation skills, pharmaceutical calculations, and verbal patient presentation skills.
Planning of the simulation occurred over three months and was completed by three core faculty members. The faculty met each week for two hours to discuss simulation logistics, assessment measures, and knowledge content. A clinical pharmacy specialist and critical care dialysis nurse developed and gave a lecture prior to the simulation activity. They also worked with the core faculty members to ensure that the content of the simulation was practical and clinically applicable. Faculty met with the personnel at the simulation center one month before the activity to run through simulation logistics. The station schedule was staggered, and three groups began the simulation each round but worked independently. Students could use their iPads for access to electronic medical records (EMRs) that contained the simulation patients' charts and access to drug information resources. EMRs used in the simulation included Epic (Epic Systems Corporation, Verona, WI) and Allscripts (Allscripts Healthcare Solutions, Inc., Chicago, IL) for the inpatient and outpatient settings, respectively. Patients' charts were developed through collaboration with USF Health Informational System (IS). A week prior to the simulation, students received a 2-hour didactic component covering pharmacist-related issues with continuous renal replacement therapy (CRRT), peritoneal dialysis (PD), and hemodialysis (HD). A clinical pharmacy specialist and dialysis nurse from a local public hospital prepared and taught the material. Three core faculty members reviewed the lecture materials prior to the live presentation to ensure that specific topics were addressed. Specific teaching points were emphasized at each simulation station. Students were instructed to review their materials from the pharmacotherapeutics renal module prior to participating in the simulation.
The simulation was held at the Center for Advanced Medical Learning and Simulation (CAMLS), a state-of-the-art health care simulation and training center with high-fidelity manikins and rooms that resemble inpatient settings (eg, emergency room and critical care units), outpatient settings (eg, ambulatory care clinics), and inpatient and outpatient pharmacy settings. On the day of the simulation, students attended a 20-minute orientation session prior to the start of the activity to complete the pre-assessment, review expectations, schedules, and station layout, and answer any questions. Students worked in groups of five to six students, which were randomly assigned by the Office of Student Affairs at the start of each academic year. Every group had a designated group leader for each station, and students were instructed to rotate roles between stations. Each of the three stations was 20 minutes in duration. At the end of the simulation, there was a 20-minute debriefing session for students to complete the post-simulation assessment and reflect on the simulation. Upon completion of the assessment, faculty reiterated key teaching points and students provided verbal feedback to the faculty regarding the simulation. The first station focused on concepts related to peritoneal dialysis. Three manikins depicting a patient on peritoneal dialysis were set up. A clinical pharmacist served as a facilitator and discussed how peritoneal dialysis works, infection risks, and drug administration during dialysis. Students then received a corresponding scenario of a pharmacy consult to dose peritoneal vancomycin for a patient who had developed peritonitis. Workstations on wheels were also set up next to each manikin, and students were required to verify a peritoneal dialysis order set using Epic.
The second station involved a discussion of how CRRT works, including demonstration of a CRRT bag and mechanical set-up, overview of hemodynamic parameters, and review of anticoagulants used in CRRT. The facilitator at this station was a critical care dialysis nurse. Students were required to access a patient record in Epic, renally adjust the patient's antibiotic regimen while on CRRT, and provide references to support the dosage adjustments. The third station required students to access a patient chart through Allscripts and evaluate the medication list of a patient with end-stage renal disease on hemodialysis to determine whether the medication regimen was appropriate based on presentation and laboratory examinations. After preparing recommendations as a group, one student verbally presented the patient case with recommendations to a preceptor in Situation, Background, Assessment, Recommendation (SBAR) format. A rubric, which has been consistently used during class, was utilized to evaluate the verbal presentations and provide feedback (Table 3).
A worksheet was used to evaluate accuracy of clinical recommendations and associated references in the first two stations. The SBAR rubric was used for the third station (Table 3). Since there is no validated assessment tool related to renal replacement therapies for pharmacy students, a pre- and post-simulation assessment composed of five questions was developed. The faculty who taught the renal content created the questions, which were peer reviewed by other faculty members. Questions matched the simulation learning objectives and assessed the teaching points that were reinforced throughout the simulation (Table 1). The pre- and post-simulation assessments were administered through Canvas (Salt Lake City, UT) directly before and after the simulation. Students received credit for the simulation and for completing the assessments. Assessment questions remained the same pre- and post-simulation to ensure that knowledge obtained was from participation in the simulation. The primary outcome was the difference in overall scores before and after the simulation. Secondary outcomes included the difference in overall scores for individual questions, overall differences between Cohort 1 (class of 2016) and Cohort 2 (class of 2017) prior to the simulation, and overall differences of each class before and after the simulation.
Table 1. Renal Replacement Simulation Learning Objectives by Station
Station 1: Formulate a pharmacotherapeutic plan for a patient on peritoneal dialysis with peritonitis, including doses and monitoring parameters; describe the components of a peritoneal dialysis order set in an EMR system.
Station 2: Gather, interpret, evaluate, and summarize drug information from primary, secondary, and tertiary literature and communicate the results in a verbal or written manner appropriate to the requesting party; describe the dialysate solution composition.
Station 3: Assess confidence level using EMRs during a simulation; verbally communicate a clear and concise evidence-based, patient-specific therapeutic plan in the form of an SBAR for a hemodialysis patient.
Pre- and post-simulation assessments were matched per student. The paired t-test was used to assess the change in overall performance on the assessment. The difference in overall scores for individual questions was assessed using the Cochran Q test. The overall differences between the two classes prior to the simulation were assessed using Student's t-test. The differences of each class before and after the simulation were assessed using the paired t-test. P values <.05 were considered statistically significant. Data were analyzed using the Real Statistics Resource Pack software, Version 4.1 (Real Statistics Using Excel, San Antonio, TX).19 Student perceptions of the simulation were garnered during the debrief session and from course evaluations. The study was determined to be exempt as non-human subject research by the Institutional Review Board at the University of South Florida. RESULTS There were 174 students who completed the simulation over the course of two years. Sixty-three students from Cohort 1 and 111 students from Cohort 2 completed the simulation, which is representative of the full class size (Table 2). For the primary outcome related to the difference in overall scores before and after the simulation, the average pre-simulation score was 55%, while the average post-simulation score was 74%. This difference was statistically significant (p<.001). For secondary outcomes, a significant overall difference was observed for questions related to indication for RRT, type of dialysis for hemodynamic instability, and agents to maintain circuit patency (p<.001 for each question). Regarding inter-class differences at baseline, the average score was 45% for Cohort 1 and 60.4% for Cohort 2 (p<.001). Similarly, significant inter-class differences were observed for questions related to indication for RRT and agents to maintain circuit patency (p<.001). In addition, there was a significant change in pre- and
post-simulation assessment scores for each cohort. For Cohort 1, assessment scores increased from 45% to 72% (p<.001). For Cohort 2, assessment scores increased from 60% to 74% (p<.001). Students expressed satisfaction with the RRT simulation during the debrief sessions and course evaluations. Generally, students appreciated the opportunity to revisit and apply renal concepts later in the curriculum to review key concepts and enhance their clinical competence in this area. Furthermore, students enjoyed learning directly from the clinical pharmacy specialist and critical care nurse. DISCUSSION Although pharmacists are not directly involved with initiation and implementation of specific renal replacement modalities, they play an integral role across a patient's spectrum of care.20 Opportunities arise in the primary care setting for patients with CKD regarding medication management of CKD and other comorbid conditions in order to prevent disease progression.21 In acute care settings, pharmacists must be vigilant in caring for patients on RRT for optimal drug dosing and any acute changes requiring dosing adjustments. Such clinical pharmacy services lead to improved patient care, although clear outcomes attributed directly to pharmacist involvement have not been quantified in the literature.22 For these reasons, it is imperative for students to appreciate the breadth of clinical management for these patients. This simulation allowed for reinforcement of content related to CKD and RRT, which was taught in the previous years of the curriculum. It also offered students an opportunity to apply their knowledge through patient cases and associated discussions with content area experts from an affiliated teaching hospital. The objectives of this activity aligned with several ACPE standards.3 To date, pharmacy education literature is limited in depicting live simulations related to CKD and renal replacement modalities.
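The core comparison in the Methods — each student's pre-simulation score tested against that same student's post-simulation score with a paired t-test — can be sketched in a few lines of Python. The score arrays below are illustrative placeholders, not the study's data, and the study itself used the Real Statistics Excel add-in rather than scipy:

```python
# Sketch of the paired pre/post comparison described in the Methods.
# Scores are hypothetical per-student totals out of 5 questions.
from scipy import stats

pre = [3, 2, 3, 2, 3]   # each student's pre-simulation score
post = [4, 4, 5, 3, 4]  # the same students' post-simulation scores

# Paired t-test: each student serves as their own control, so the test
# operates on the within-student differences (post - pre).
result = stats.ttest_rel(post, pre)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

With these placeholder scores the within-student improvement is consistent enough that p falls below the .05 threshold the authors used.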
Results from the pre- and post-simulation assessments demonstrate an overall increase in knowledge after the simulation. Further evaluation of each question showed a statistically significant improvement of knowledge on questions 1, 2, and 4, which are related to indications for RRT, patient-specific selection of RRT modalities, and appropriate anticoagulation for each modality, respectively. This is representative of the key discussion points reiterated during the simulation, which were not often emphasized in other cases and activities. For example, students have encountered multiple renal drug dose adjustments in cases and drug information questions within other courses. However, the content of these specific three questions did not often arise elsewhere in the curriculum. For this reason, results can be directly attributed to the RRT simulation. Since students completed the pre- and post-simulation knowledge assessments individually and on the same day as the simulation, the authors do not anticipate confounders to the results. There was no demographic baseline statistical analysis conducted to examine the differences between Cohorts 1 and 2, which is a limitation of this study. The secondary outcome related to inter-class differences depicted an increase in baseline knowledge prior to the simulation. Originally, for Cohort 1, the didactic lecture component was combined with the simulation experience on the same day. Students provided feedback that they would have benefitted had the lecture been separated from the simulation activity to allow time to study the material. For this reason, faculty separated the components by one week to allow more time for students to review the materials prior to participating in the simulation. This may explain the difference in baseline knowledge between the two cohorts.
One of the strengths of this simulation is its vertical and horizontal alignment, which enhanced student knowledge from the renal module as well as integration of other skills such as utilization of drug information resources and pharmaceutical calculations. Another strength of the activity is the use of simulation itself, which encompassed the spectrum of renal replacement modalities. Students had been introduced to these concepts in previous years, but the activity allowed for demonstration and application of specific, real-life scenarios related to RRT and encouraged critical thinking skills to evaluate resources related to drug dosing. Inclusion of the clinical pharmacy specialist and critical care dialysis nurse added to the depth of content delivery, allowing students to ask specific questions and network with content area specialists. Furthermore, involvement of the nurse specialist reinforced the importance of interprofessional practice, especially in the care of patients requiring high-acuity care. The assessment questions were not validated, which may limit generalizability of the results. Another limitation of the activity was time constraints for students to complete the tasks at each station. Student feedback included the need for more time to complete a thorough SBAR that addressed all aspects of the patient's CKD. Additionally, students expressed a desire to complete the simulation individually, rather than in their academic groups, to strengthen their abilities. Although this would be beneficial to encourage individual learning, it poses the challenge of having all students complete the simulation during the allotted class time.
Future considerations involve increasing the time in each station for students to complete the activities and use the expertise of the specialists during the simulation. Additionally, the simulation may be expanded to include an interprofessional component, especially with medical students (to include hemodynamic parameters and diuresis) and nursing students (to include IV access sites, etc.). Such a simulation would also be ideal to depict a transitions-of-care model that follows a patient with acute or chronic kidney disease throughout the continuum of care. CONCLUSION Simulation can be used effectively to reinforce and teach renal pharmacotherapy content learned didactically. This simulation encompassed various aspects of RRT modalities that often have limited coverage within pharmacy curricula. Curricular alignment between the pharmacotherapeutics and pharmaceutical skills courses allowed opportunities for critical thinking and application of clinical concepts, which aligns with both the ACPE Standards 2016 and the 2013 CAPE Outcomes.3,4
Table 1. Renal Replacement Simulation Learning Objectives.
Table 2. Knowledge Assessment Before and After Participation in RRT Simulation Session. I = incorrect response; C = correct response; PC a = 1 out of 3 correct responses; PC b = 2 out of 3 correct responses; PC c = 1 out of 2 correct responses; AEIOU = acidosis, electrolytes, intoxications, overload, uremia; CRRT = continuous renal replacement therapy; HD = hemodialysis. American Journal of Pharmaceutical Education 2019; 83 (2) Article 6519.
Table 3. Situation, Background, Assessment, Recommendation (SBAR) Rubric. Evaluators were also asked to provide comments for each of the areas.
2019-03-17T13:07:34.284Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "243114273ad2fe2e624660141af6ab20e93901dc", "oa_license": null, "oa_url": "https://www.ajpe.org/content/ajpe/83/2/6519.full.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "14df2f689a16388171bbbac011372c169df8ef4d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
263310418
pes2o/s2orc
v3-fos-license
Sp1149 II: Spectroscopy of HII Regions Near the Critical Curve of MACS J1149 and Cluster Lens Models Galaxy-cluster gravitational lenses enable the study of faint galaxies even at large lookback times and, recently, time-delay constraints on the Hubble constant. There have been few tests, however, of lens model predictions adjacent to the critical curve (<8") where the magnification is greatest. In a companion paper, we use the GLAFIC lens model to constrain the Balmer L-sigma relation for HII regions in a galaxy at redshift z=1.49 strongly lensed by the MACS J1149 galaxy cluster. Here we perform a detailed comparison between the predictions of ten cluster lens models, which employ multiple modeling assumptions, and our measurements of 11 magnified giant HII regions. We find that the models predict magnifications an average factor of 6.2 smaller, a 2-sigma tension, than those inferred from the HII regions under the assumption that they follow the low-redshift L-sigma relation. To evaluate the possibility that the lens model magnifications are strongly biased, we next consider the flux ratios among knots in three images of Sp1149, and find that these are consistent with model predictions. Moreover, while the mass-sheet degeneracy could in principle account for a factor of ~6 discrepancy in magnification, the value of H0 inferred from SN Refsdal's time delay would become implausibly small. We conclude that the lens models are not likely to be highly biased, and that instead the HII regions in Sp1149 are substantially more luminous than the low-redshift Balmer L-sigma relation predicts.
INTRODUCTION Gravitational lensing by galaxy clusters provides a uniquely powerful tool for both finding and studying intrinsically faint galaxies and stars that existed when the Universe was a fraction of its present age. The magnifying power of galaxy-cluster lenses amplifies the flux and increases the angular sizes of background galaxies, enabling measurements that would otherwise not be possible (e.g., Swinbank et al. 2009; Wuyts et al. 2014; Wang et al. 2017; Curti et al. 2020; Williams et al. 2023). Refsdal (1964) showed that, in principle, the time delays between multiple images of the same strongly-lensed supernova (SN) could be used to measure the Hubble constant (H0). This method of measuring H0 using time delays provides an independent means to address the "Hubble tension" between measurements of H0 from Type Ia supernovae (SNe Ia) in the local Universe (Riess et al. 2022) and from early-Universe observations of the cosmic microwave background (CMB; Planck Collaboration et al. 2020). If the Hubble tension represents a true difference, reconciling the inferred values may require revision to the standard Λ-cold-dark-matter (ΛCDM) cosmological model. Uncertainties associated with cluster lens models limit our ability to study magnified populations of galaxies, as well as to measure H0 from time delays. Models use systems of multiple images of lensed background sources as constraints, yet also require assumptions about the connection between the distribution of luminous matter and dark matter. For well-constrained galaxy clusters such as the Hubble Frontier Fields (HFF; Lotz et al. 2017), the latest modeling techniques predict magnifications that are consistent among different models up to a magnification factor of µ ≈ 40-50 (Bouwens et al.
2022). These modeling techniques have been tested on simulated images of galaxy clusters, and most models can reliably predict the magnification due to lensing within an accuracy of 30% for magnifications up to µ ≈ 10 (Meneghetti et al. 2017). Uncertainties become more pronounced in regions near the critical curve where µ ≳ 60, where different lens modeling techniques can predict magnifications ranging from ∼40 to ∼100 at the same positions (Bouwens et al. 2022). Identifying the most accurate lens modeling assumptions would improve our ability to use gravitational lenses as tools to study the galaxies that existed in earlier epochs of the Universe, as well as to measure H0 from time-delay cosmography. SN Refsdal, the first known multiply-imaged SN, was discovered in 2014 (Kelly et al. 2015; Treu et al. 2016) in an Einstein-cross configuration around an elliptical galaxy in the Hubble Frontier Fields (HFF; Lotz et al. 2017) galaxy cluster MACS J1149.5+2223 (redshift z = 0.544, hereinafter referred to as MACS J1149), and reappeared ∼8′′ away in 2015 (Kelly et al. 2016). SN Refsdal provided the first opportunity to use a cluster-scale gravitational lens to infer the value of H0. The systematic uncertainties associated with cluster-scale lens models differ from those of the galaxy-scale models that have been used in previous measurements of H0 from multiply-imaged quasars (Grillo et al. 2020). The time delay between the 2014 and 2015 appearances of SN Refsdal was measured within 1.5% (Kelly et al. 2023). Given a perfect lens model, this would yield an equally precise constraint on H0. In the case of SN Refsdal, the greatest contribution to the uncertainty in H0 is the uncertainty associated with the MACS J1149 cluster mass models (Kelly et al. 2023; Grillo et al.
2020). The value of H0 derived from the time delays of SN Refsdal is H0 = 64.8 +4.4/−4.3 km s−1 Mpc−1 using the full set of pre-reappearance lens models, and 66.6 +4.1/−3.3 km s−1 Mpc−1 using the two models that best reproduce the H0-independent observables (Kelly et al. 2023). After the discovery of SN Refsdal, an individual, highly magnified (µ ≈ 600) blue supergiant star was discovered in the same host galaxy as SN Refsdal in the MACS J1149 field (Kelly et al. 2018). Known as "Icarus," the star was the first individual star discovered at a cosmological distance. An image of Icarus is always detectable in sufficiently deep images. The light curve of Icarus can place constraints on the abundance of primordial black holes (Oguri et al. 2018) and the initial mass function (IMF) of stars responsible for intracluster light, but inferences depend on the ability of lens models of the MACS J1149 cluster to predict the magnification of Icarus. The MACS J1149 cluster lens has been modeled using several different methods. So-called simply parameterized models have been constructed by Sharon & Johnson (2015), Keeton (2010), Grillo et al. (2016), the GLAFIC team (Oguri 2010; Kawamata et al. 2016), and the Clusters As Telescopes (CATS) team (Jauzac et al.
2016). The simply parameterized method assigns dark-matter halos to individual cluster galaxies and to the cluster, and uses halos with simple, physically motivated profiles, each described by a small number of parameters. These models use the positions of multiply-imaged galaxies as constraints, and assign the mass of each galaxy halo using a proxy such as the stellar mass. Williams & Liesenborgs (2019) instead make no assumptions about the connection between luminous and dark matter, and use only the positions of multiply-imaged galaxies as constraints. They apply a "free-form" approach which involves a large number of components that are not associated with any cluster galaxies, using only the strong-lensing image positions as input. Bradač et al. (2009) created a free-form model which uses both strong- and weak-lensing image positions. A hybrid model was created by Diego et al. (2015), which uses a free-form approach to model the overall cluster halo and a parametric approach for the individual cluster members. Zitrin et al. (2013) constructed a "light-traces-mass" (LTM) model, which reconstructs the mass distribution of the cluster by smoothing and rescaling the surface brightnesses of the individual cluster members. Zitrin (2021) also constructed a parametric model using a Navarro, Frenk, & White (NFW; Navarro et al. 1996) density profile. We designate the two Zitrin models as "Zitrin-LTM" and "Zitrin-NFW." The strongly lensed host galaxy of SN Refsdal and Icarus presents an opportunity to constrain the cluster lens's magnification at positions near the critical curve of the MACS J1149 cluster, which has not previously been possible for cluster lens models. Known as Sp1149, the host galaxy is a triply-imaged and highly magnified face-on spiral galaxy at z = 1.49 (Smith et al. 2009; Di Teodoro et al. 2018). The three images of this galaxy are some of the largest images ever observed of a spiral at z > 1 (see Fig.
1). The high magnification and relative lack of image distortion make it possible to study the spatial structure of the galaxy in detail. In 2011, Yuan et al. (2011) acquired integral field unit (IFU) spectroscopy of the largest image of Sp1149 with the OH-Suppressing Infra-Red Spectrograph (OSIRIS; Larkin et al. 2006) on the Keck II 10 m telescope. The Hα map measured from these observations revealed more than ten resolved H II regions located less than 10′′ from the critical curve (see Fig. 2). The H II regions in Sp1149 should, in principle, allow a direct measurement of the magnification due to the MACS J1149 cluster by utilizing the empirical relationship between the Balmer luminosities and velocity dispersions of H II galaxies and giant H II regions (the L − σ relation; Melnick et al. 1988). The L − σ relation for H II galaxies has been shown to be consistent with the luminosity distances expected for standard cosmological parameters to z ≈ 4, and observations of H II galaxies have been used to measure H0 and constrain the dark energy equation of state (e.g., Chávez et al. 2016; Fernández Arenas et al. 2018; Tsiapi et al. 2021). Terlevich et al. (2016) used the L − σ relation to estimate the intrinsic Hβ luminosity of a single compact H II galaxy at z = 3.12 that is gravitationally lensed by the HFF cluster Abell S0163, and inferred a magnification of 23 ± 11, in agreement with the value of ∼17 predicted by a simply parameterized model presented by Caminha et al.
(2016). Due to their compact size and intrinsic faintness, there are few constraints on the L − σ relation for giant H II regions at redshifts beyond z ≈ 1. If the L − σ relation measured at low redshift is an accurate description of the giant H II regions in Sp1149 at z ≈ 1.5, the Balmer luminosities of the H II regions can be used as standardizable candles, and the magnification due to lensing at their positions can be constrained. Previously, direct measurements of a galaxy cluster's magnification have only been made using SNe Ia at offsets of more than 20′′ from the critical curves, where the magnifications are ≲2 (Nordin et al. 2014; Patel et al. 2014; Rodney et al. 2015; Rubin et al. 2018). In Paper I of this series, Williams et al. (2023, submitted), we used a combination of archival OSIRIS IFU data and newly acquired spectroscopy from the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE) to measure the Hα luminosities and intrinsic velocity dispersions of 11 H II regions in Sp1149. After correcting for magnification using the GLAFIC mass model (v3) of MACS J1149 (Oguri 2010; Kawamata et al. 2016), we found that the H II regions in Sp1149 were 6.4 +2.9/−2.0 times more luminous than expected from the locally calibrated L − σ relation. However, if we instead assume that the L − σ relation for giant H II regions calibrated using low-redshift galaxies accurately describes those in Sp1149, then this result would suggest that the GLAFIC model underpredicts the magnification at the positions of the H II regions by a factor of ∼6.
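The standardizable-candle logic described above reduces to simple arithmetic: the L − σ relation predicts an intrinsic Balmer luminosity from the measured velocity dispersion, and the ratio of the apparent (lensed) luminosity to that prediction is the inferred magnification. A minimal sketch follows; the slope and intercept are illustrative placeholders, not the calibration fitted in this series:

```python
import math

# Hypothetical L-sigma calibration (placeholders, NOT the paper's fit):
# log10 L_Balmer = ALPHA * log10(sigma) + BETA, with L in erg/s, sigma in km/s.
ALPHA, BETA = 4.7, 33.0

def intrinsic_luminosity(sigma_kms):
    """Balmer luminosity predicted by the (assumed) L-sigma relation."""
    return 10 ** (ALPHA * math.log10(sigma_kms) + BETA)

def inferred_magnification(apparent_luminosity, sigma_kms):
    """Lensing magnification if the region obeys the low-z relation.

    apparent_luminosity is computed from the observed (lensed) flux and
    the luminosity distance, i.e. before any lensing correction.
    """
    return apparent_luminosity / intrinsic_luminosity(sigma_kms)

# Example: a region whose apparent luminosity is 8x the L-sigma prediction
# implies a magnification of ~8 at its position.
sigma = 30.0  # km/s, illustrative velocity dispersion
mu = inferred_magnification(8 * intrinsic_luminosity(sigma), sigma)
print(f"mu = {mu:.1f}")
```

The same ratio underlies the factor-of-∼6 discrepancy quoted above: luminosities corrected with the GLAFIC magnifications still exceed the relation's prediction by that factor.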
In this work, we make the assumption that the L − σ relation in Sp1149 is identical to that in low-redshift galaxies, and infer the magnification due to lensing at the positions of 11 H II regions in Sp1149. The H II regions are adjacent to the critical curve of the MACS J1149 cluster (∼2′′-8′′), where magnifications are expected to reach up to ∼20. We compare our magnification measurements with the predicted magnifications from ten different lens models of MACS J1149. Magnification depends on the second derivative of the gravitational potential, so these measurements test a different aspect of the cluster models than the relative time delays from SN Refsdal, which depend on the difference in the potential between the multiple appearances.

In Section 2 we describe the OSIRIS and MOSFIRE observations and data reduction. Section 3 details our method for inferring the magnification due to lensing at the position of each H II region. We compare our measurements to the magnifications predicted by the models in Section 4 and discuss the implications of our results in Section 5.

OBSERVATIONS AND DATA REDUCTION

Rest-frame optical IFU spectroscopy of the largest image of Sp1149 was obtained by Yuan et al. (2011) with the OSIRIS instrument on the Keck II 10 m telescope (Larkin et al. 2006). The observations were taken using the Hn3 filter (15,940-16,760 Å; resolution R ≡ λ/∆λ ≈ 3400), capturing the Hα emission line at the redshift of Sp1149. Adaptive optics (AO) provided a corrected spatial resolution of 0.1′′, corresponding to ∼300 pc in the source plane for a typical magnification of µ = 8. The total exposure time was 4.75 hr. Data-reduction details are described by Yuan et al.
(2011). We extracted the one-dimensional (1D) spectra of 11 H II regions in Sp1149, inside circular apertures with radius r = 500 pc in the source plane, given the GLAFIC model magnification predictions. Figure 2 shows the Hα emission-line intensity map of Sp1149 and the H II region extraction apertures.

To measure the Balmer decrement and infer the extinction due to dust at the positions of the H II regions, we acquired multislit spectroscopy using MOSFIRE on the Keck II 10 m telescope (McLean et al. 2012) over two half nights, February 25 and 26, 2020 UTC. We observed each H II region in the J and H bands to detect the Hβ and Hα emission lines, respectively, and total exposure times ranged from 8 min to 36 min on six different slit masks.

The spectra were reduced with the MOSFIRE Data Reduction Pipeline (DRP; Konidaris et al. 2019). We used observations of telluric standard stars to correct for telluric absorption and acquired spectra of field stars on each slit mask to measure the absolute flux calibration for each mask. See Paper I for a detailed description of the MOSFIRE observations and data reduction.

MAGNIFICATION MEASUREMENTS

To infer magnification values using the Balmer L − σ relation, we require a calibration of the L − σ relation that uses aperture sizes and spectral resolution comparable to those of our OSIRIS measurements of the Hα luminosities and velocity dispersions of the H II regions in Sp1149. Using archival IFU spectroscopy of nine nearby spiral galaxies taken with the Multi Unit Spectroscopic Explorer (MUSE; Bacon et al. 2010) on the Very Large Telescope (VLT), we extract the spectra of 347 H II regions at z ≈ 0 using the same physical aperture sizes, given the GLAFIC model predictions, that we used to extract the H II regions in Sp1149 from the OSIRIS data.
We employ Markov-Chain Monte Carlo (MCMC) sampling with the pymc3 package (Salvatier et al.) to fit the MOSFIRE observations of the Hβ and Hα emission lines for each H II region to constrain the extinction due to dust, and the OSIRIS observations of the Hα emission lines to infer their intrinsic velocity dispersions and, in combination with our constraints on extinction, intrinsic Hα luminosities.

To measure the magnification at the position of each H II region, we use the posteriors from Paper I to compute the observed (magnified) Hα luminosity, L_obs(Hα). We compute the expected Hα luminosity, L_exp(Hα), of each H II region using the posteriors for the velocity dispersion from Paper I and applying our calibration for the local L − σ relation (Eq. 1). The posteriors on the slope and intercept of the low-redshift relation are used to propagate the uncertainties associated with the calibration. The observed magnification, µ_obs, at the position of each H II region is given by

log(µ_obs) = log L_obs(Hα) − log L_exp(Hα). (2)

We list our measurements of the magnifications at the positions of 11 H II regions in Sp1149 in Table 1, and show these as a function of distance from the critical curve of MACS J1149 at z = 1.49 in Figure 3.

Table 1. Model-predicted magnifications at the position of each H II region, and our measured magnifications. The ratio µ_obs/µ_mod is the weighted average of the ratios of the measured to model-predicted magnifications for the 11 H II regions. Reported uncertainties are 1σ. These measurements are shown in Figure 3.
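The point-estimate version of the inference in Eq. (2) can be sketched numerically. This is a minimal illustration only: the calibration intercept `a` and slope `b` of the local L − σ relation and the toy input values are placeholders, not the paper's fitted numbers, and the actual analysis propagates full MCMC posteriors rather than point estimates.

```python
import numpy as np

# Hypothetical calibration of the local L-sigma relation (Eq. 1 form):
# log10 L_exp(Halpha) = a + b * log10(sigma). a and b are placeholders.
a, b = 33.0, 4.5

def inferred_magnification(log_L_obs, log_sigma):
    """Eq. (2): log10(mu_obs) = log10 L_obs(Halpha) - log10 L_exp(Halpha)."""
    log_L_exp = a + b * log_sigma          # expected (unlensed) luminosity
    return 10.0 ** (log_L_obs - log_L_exp)

# Toy H II region: sigma = 20 km/s, observed luminosity 10^40 erg/s
mu = inferred_magnification(log_L_obs=40.0, log_sigma=np.log10(20.0))
```

Because the relation is in log space, the inferred magnification is simply the ratio of the observed to the expected luminosity.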
COMPARISON WITH THE MODELS

We next compare our observed magnifications of the 11 H II regions in Sp1149 with the magnifications predicted by ten different models of the MACS J1149 cluster. Each lens modeling team provided ∼100 model magnification maps corresponding to different MCMC realizations. We use the median value at the position of each H II region as the predicted magnification, adopting the 16th and 84th percentile values to compute the 1σ uncertainties. The model magnification values are listed in Table 1, and the model predictions together with our constraints are plotted in Figure 3.

To evaluate the level of agreement between our measurements and the model predictions, we calculate the tension between each measurement and prediction. We take the difference between the measurement and the prediction, and divide by the 1σ uncertainty in the difference. A positive tension indicates that the value we measure is greater than the model's prediction.

In Figure 4, we plot the tension for each model and H II region, as well as the median tension for each model. Almost all of our magnification measurements are ∼1-3σ greater than the models' predictions. The median statistical tension among the set of 11 H II regions for each model is 1.8-2.6σ.

We next compute the average factor by which each model underpredicts the magnification compared to our measurements by computing µ_obs/µ_model for each H II region and assigning a weight to each measurement based on the combined 1σ uncertainties of the observed magnification and the model prediction. As Figure 5 shows, the 10 available models of MACS J1149 underpredict the magnification by a factor of ∼5-8. The average underprediction factor among the models is 6.2, and the median factor among the models is 6.6.

DISCUSSION

Using the Balmer L − σ relation, we measured the magnification due to gravitational lensing by the MACS J1149 cluster of 11 giant H II regions in the spiral galaxy Sp1149 (z = 1.49). We have compared our measurements to the magnifications predicted by 10 different cluster mass models and found that all of the models predict magnifications that are smaller than our inferred values at the positions of the H II regions by factors of ∼5-8. The tension between our measurements and the model predictions is 1.8-2.6σ.

The models that are in the least tension with our measurements (∼1.8σ) are the simply parameterized Sharon model and the Zitrin light-traces-mass model, which underpredict the magnification in comparison to our constraints by an average factor of 4.1 ± 1.4 and 4.8 ± 1.7, respectively. The Williams free-form model underpredicts the magnification by the largest average factor among the models, with ⟨µ_obs/µ_model⟩ = 8.3 ± 2.9.

Here we have calculated the magnification under the assumption that the L − σ relation for giant H II regions that we have calibrated at low redshift in matching apertures and spectral resolution applies to the H II regions in SN Refsdal's host galaxy at z = 1.49. Previous studies have shown that the L − σ relation for H II galaxies does not evolve strongly with redshift out to at least z ≈ 4, and it has been used to calculate H0 and the dark energy equation of state (e.g., Chávez et al. 2016; Fernández Arenas et al. 2018; Tsiapi et al. 2021). If we instead assume that the magnifications predicted by the models are accurate, then our results indicate that H II regions are substantially more luminous in Sp1149 than predicted by the low-redshift L − σ relation (3σ tension), and would imply a physical difference between the two populations of H II regions (see Paper I).
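The two comparison statistics used above (the per-region tension, and the uncertainty-weighted average of µ_obs/µ_model) can be sketched as follows. The inverse-variance weighting on the ratio is one plausible reading of "assigning a weight ... based on the combined 1σ uncertainties"; the numbers are toy values, not the measurements in Table 1.

```python
import numpy as np

def tension(mu_obs, sig_obs, mu_mod, sig_mod):
    # Difference in units of the combined 1-sigma uncertainty; positive
    # means the measured magnification exceeds the model prediction.
    return (mu_obs - mu_mod) / np.hypot(sig_obs, sig_mod)

def weighted_mean_ratio(mu_obs, sig_obs, mu_mod, sig_mod):
    # Inverse-variance weighted average of mu_obs / mu_mod, propagating
    # both fractional uncertainties (one plausible weighting choice).
    r = mu_obs / mu_mod
    sig_r = r * np.hypot(sig_obs / mu_obs, sig_mod / mu_mod)
    w = 1.0 / sig_r**2
    return np.sum(w * r) / np.sum(w)

# Toy values for three H II regions (not the paper's measurements)
mu_obs = np.array([60.0, 45.0, 80.0]); sig_obs = np.array([20.0, 15.0, 30.0])
mu_mod = np.array([10.0, 8.0, 12.0]);  sig_mod = np.array([2.0, 1.5, 3.0])
t = tension(mu_obs, sig_obs, mu_mod, sig_mod)
avg_ratio = weighted_mean_ratio(mu_obs, sig_obs, mu_mod, sig_mod)
```

In this toy case every region sits 2-3σ above its model prediction and the weighted ratio is close to 6, mirroring the qualitative pattern reported for the real measurements.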
If our results correspond to a systematic bias of the lens models of the MACS J1149 cluster, the magnification corrections applied to background galaxies that are lensed by MACS J1149 would cause us to overestimate their intrinsic luminosities and emission-line fluxes. Physical properties of these galaxies, such as star-formation rate and stellar mass, would also be overestimated. The value of H0 inferred from time-delay measurements is sensitive to the details of cluster models, so a systematic bias in the lens models used to infer H0 from SN Refsdal could affect the interpretation of that measurement. For instance, increasing the magnification by a factor of 6 would require a mass sheet of κ = 0.6 under the mass-sheet degeneracy (µ ∝ (1 − κ)^−2; Oguri & Kawano 2003), which would reduce the derived value of H0 by a factor of 1 − κ = 0.4. The value of H0 inferred from the time delays of SN Refsdal is 64.8 +4.4 −4.3 km s−1 Mpc−1, so this interpretation would imply an implausible H0 ≈ 26 km s−1 Mpc−1.

To test the likelihood of the magnification of Image 1.1 of Sp1149 being a factor of ∼3 times higher than the models predict, we compare the observed flux ratios of bright knots in Image 1.1 and Image 1.3 to the model-predicted magnification ratios at their positions. As shown in Figure 1, Image 1.3 is substantially farther from the critical curve than mirrored Images 1.1 and 1.2. We identify five bright knots in Sp1149 and measure their flux densities in both Image 1.1 and Image 1.3 from the HST F606W image of MACS J1149 (see Figure 7). We find that, for all five knots, the observed flux ratios agree with the predicted magnification ratios from a majority of the models (see Figure 6). In other galaxy-cluster fields, the magnifications of SNe Ia have been measured at similar offsets from the critical curve of ∼20′′, and obtained approximate agreement (Nordin et al. 2014; Patel et al. 2014; Rodney et al. 2015; Rubin et al.
2018). Additionally, a factor of 6 bias in the predicted magnifications in Image 1.1 would imply an improbably small value of H0 ≈ 26 km s−1 Mpc−1. Consequently, we conclude that the discrepancy we identify between the predicted and measured Hα luminosities of the giant H II regions in Sp1149 is most likely not due to a systematic problem in the lens models. Instead, the Balmer L − σ relation of the H II regions in Sp1149 is likely offset to higher luminosities by a factor of ∼6. Observations of the luminosities and velocity dispersions of H II regions in magnified galaxies are needed to confirm this conclusion.

The Balmer L − σ relation is a well-established empirical correlation, but the physical origin of this relation is not yet understood. While the L − σ relation is constant with redshift for H II galaxies out to at least z ≈ 4, our results suggest that it may not apply for individual H II regions at z ≳ 1.

ACKNOWLEDGMENTS

The lens models used in this work were downloaded from the HST Frontier Fields Data Access Page. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology,

Figure 1. False-color image of a portion of the MACS J1149 lensing cluster from HST imaging of the field. The three images of the face-on spiral galaxy Sp1149 are labeled, and the critical curve of the cluster (GLAFIC model) is shown as a white line. The appearances of SN Refsdal in an Einstein cross configuration are labeled S1-S4. Images 1.1 and 1.3 are shown in the right panels, with the H II regions identified in white.

Figure 2. Emission-line intensity map of Hα in Image 1.1 of Sp1149, with the 11 H II regions we use to infer the magnification labeled in white. Labels are the same as Figure 1. This map was created from archival OSIRIS IFU observations of Sp1149 (PI: Kewley).

Figure 3.
Magnification measurements for each of the observed H II regions (large black points) overlaid on each of the models' predictions for the magnification at their positions (colored points). Error bars correspond to 1σ uncertainties.

Figure 4. The statistical tension between our measurements of the magnification compared with each model's prediction for the magnification at the position of each H II region. A positive value for the tension indicates that our measured magnification is greater than the model's prediction. The black lines indicate the median tension for each model. We find that all ten models underpredict the magnification compared to the measurements by a median value among the H II regions of 1.8-2.5σ.

Figure 5. The factor by which each model underpredicts the magnification of the H II regions in Sp1149. The average underprediction factor for all the models is 6.2.

Figure 6. The flux ratios of five bright knots in Image 1.1 and Image 1.3 of Sp1149, measured from the F606W HST imaging of the MACS J1149 cluster field. We compare these flux ratios to the predicted magnification ratios at their positions and find that the measured ratios agree with the predictions from the models within the 1σ uncertainties.

Figure 7. HST F606W close-up images of Image 1.1 and Image 1.3 of Sp1149, with five bright knots identified in each image. We measure the flux density of each knot in both images and compare their flux ratios to the magnification ratios predicted by each model. The white regions show the apertures used to measure the flux of each knot.

Table 1. Magnification Measurements
2023-10-02T06:42:07.695Z
2023-09-28T00:00:00.000
{ "year": 2023, "sha1": "9ce92196e4ab18b8ac5ad14abf29831e5ab0e7c9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9ce92196e4ab18b8ac5ad14abf29831e5ab0e7c9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
267285953
pes2o/s2orc
v3-fos-license
Sternoclavicular Septic Arthritis and Surgical Intervention: A Case Report

Management of septic arthritis is an area of controversy, especially in rare locations such as the sternoclavicular joint. In this case report, we present a case of a septic sternoclavicular joint that was resistant to medical treatment and deteriorated during the treatment course. Although medical treatment has proven effective based on previous literature, some cases will still not benefit from it. In this case, our patient responded significantly to surgical treatment in terms of upper limb function, faster infection eradication and rehabilitation, and shorter hospitalization and antibiotic duration.

Introduction

Septic arthritis is a common and urgent condition in clinical practice with high morbidity and mortality [1]. Several pathogens cause septic arthritis, usually bacterial, such as Staphylococcus aureus and Streptococcus species, which account for approximately 90% of the cases [2]. Infrequently, fungi, parasites, viruses, or other atypical pathogens may be encountered [3]. Typically, septic arthritis affects a single major joint, such as the knee or hip [4]. Uncommonly, septic arthritis can be present in the sternoclavicular joint (SCJ) [1]. However, it is relatively rare in individuals without underlying medical conditions and accounts for less than 2% of the cases [1,4]. Several risk factors have been associated with an increased susceptibility to developing septic arthritis of the SCJ [5]. These risk factors include diabetes mellitus, rheumatoid arthritis, immunosuppression, intravenous drug use, traumatic events, and underlying arthropathies [5,6]. Septic arthritis of the SCJ has been recognized as a potentially life-threatening condition due to its close anatomical proximity to critical vascular structures in the chest [7]. Patients frequently report sudden onset of chest or shoulder pain, accompanied by or without fever [8]. Warmth and redness around the afflicted joint, as well as other
signs of joint inflammation, may occur [9]. In addition, other cases may present with less common manifestations, such as abscess formation and subcutaneous emphysema [10]. Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are utilized for the evaluation of infection severity [6]. Joint aspiration or biopsy should be performed if an infection of the SCJ is suspected [8]. Controversy persists on the most effective way to treat SCJ infections [6,8,11]. In addition to long-term antibiotics, most authors believe that surgery is often necessary [8,11].

Case Presentation

In this report, we present the case of a 70-year-old female who presented to our hospital with a three-day history of pain located in her right SCJ. Past medical history was significant for diabetes mellitus. She denied having experienced any recent trauma. However, she had a history of a positive urinary tract infection (UTI) a month prior to her presentation. The patient was afebrile and vitally stable. Local examination revealed only mild tenderness and swelling over the SCJ. Laboratory results revealed an elevated white blood cell count of 13.29 × 10⁹/L, an erythrocyte sedimentation rate of 107, and a C-reactive protein level of 99. A CT without contrast showed soft tissue changes suggestive of a collection measuring approximately 3.6 cm in craniocaudal dimension and likely extending into the right SCJ. There was no definite bone destruction. Septic arthritis of the right SCJ was suspected, and an MRI was recommended for further evaluation (Figure 1). The MRI revealed right SCJ septic arthritis with phlegmon/collection anterior to the joint, in addition to another predominant inflammatory phlegmon deep to the sternal insertion of the right pectoralis major muscle. A diagnosis of SCJ septic arthritis was made, and the patient underwent surgical irrigation and debridement through a direct anterior clavicle approach. The pus drainage was measured to be 20 to 40 ml. Antibiotics
were administered to the patient after a tissue culture was obtained. Cefazolin and vancomycin were administered intravenously for a total of 14 days, then clindamycin was continued orally for a total of 14 days.

SCJ: sternoclavicular joint

The initial culture came back positive for Streptococcus agalactiae (Group B), and biopsy findings were necrotic tissues with visible inflammatory cells, confirming the diagnosis of septic arthritis. After completion of the antibiotic course, postoperatively on Day 1, a recurrence was evident clinically with a return of symptoms, and a recollection was observed on repeat imaging. The decision was made, after patient counseling, for surgical intervention in the form of medial clavicle resection and thorough irrigation and debridement. The same incision was opened, and a large amount of pus was evacuated. The incision was extended medially and laterally over the clavicle to reach the necrotic tissue. A large amount was evacuated from the anterior chest under the pectoralis muscle. The SCJ was dislocated, and a small amount of pus was found. The joint was clearly damaged and necrotic. The medial clavicle was excised until reaching healthy bone and sent for histopathology, as shown in Figure 2. Following that, irrigation and debridement were carried out. Another course of intravenous antibiotics was prescribed. A postoperative X-ray indicated that the patient showed significant improvement since Day 1, regained her right upper limb function daily, and scored well on the Disability of Arm, Shoulder, and Hand (DASH) questionnaire (Figure 3). Two months postoperatively, the patient's wound had healed, and she regained full range of motion of the right shoulder with no complaints (Figure 4).
Discussion

Streptococci have been acknowledged as a common etiological factor in the development of septic arthritis, contributing to approximately 20% of all cases, being the second most common cause after Staphylococcus aureus, which accounts for 50-60% of cases [12]. Group B streptococcus (GBS), also known as Streptococcus agalactiae, has recently been recognized as a well-known cause of septic arthritis, accounting for 5-10% of all cases [12]. There are several risk factors identified in the literature that make patients more likely to be predisposed to GBS septic arthritis. In our case, both her advanced age (over 60 years of age) and gender (female) have been linked to the development of GBS [13]. Early identification and management are crucial in SCJ septic arthritis due to its close anatomical proximity to critical vascular structures [7] and the infection's tendency to spread beyond the joint due to the capsule's inability to distend [14]. As a result, patients are prone to experience serious complications such as emphysema, osteomyelitis, mediastinitis, and abscess formation [6]. Determining the optimal management strategy can be challenging in cases of SCJ septic arthritis. To this day, a consensus has yet to be reached regarding the most effective treatment modality, which thus remains a topic of debate [6,8,11]. The fundamental principles of management include the eradication of infection, preservation of joint function, and reduction of pain [6]. Treatment options range from antibiotic therapy to radical surgery, including SCJ resection requiring muscle flap coverage [15]. Antibiotic therapy alone or in conjunction with isolated aspiration or needle lavage has been proposed, but there is insufficient evidence to support its efficacy [8]. Jang et al.'s study examined the outcomes of medical management in patients with S.
aureus sternoclavicular septic arthritis [16]. Patients were enrolled in the study, and they were all managed medically or with limited surgery (incision-drainage and debridement) [16]. The average duration of the antibiotic course was 35 days [16]. The study found that all cases were successfully treated, with no recurrence or deterioration [16]. According to Jang et al., in selected patients without significant complications, medical treatment alone or in combination with limited surgery represents a successful management strategy [16]. On the other hand, Abu Arab et al. investigated the role of surgery in 14 patients with SCJ septic arthritis who had failed medical treatment with antibiotics [17]. All surgically treated patients had excellent outcomes, with no restrictions in shoulder movements or recurrence of the infection [17]. In contrast to Jang et al. [16], the study recommended surgical intervention as the most effective treatment following the failure of the antibiotic trial [17]. Additionally, in a retrospective study, Song et al.
evaluated minimally invasive interventions such as antibiotic administration, surgical drainage, and debridement in six patients [18]. The study reported a failure rate of 83% [18]. Reflecting on our case, we attempted medical management using simple irrigation and debridement in addition to IV antibiotics. Unfortunately, the infection persisted, necessitating a more aggressive intervention with medial clavicle resection. Our patient recovered well from the surgery with no complications and regained full joint function. Prompt intervention and infection eradication are essential for preventing prolonged hospitalization and serious complications resulting from chronic infection [6]. Additionally, reports of adverse outcomes in patients managed conservatively, including recurrence or persistence of infection, eventually leading to surgical therapy, have been documented [17,19]. Studies reporting the success of medical management were exclusive to S. aureus infection. A case report by Cydylo et al. described a case in which the patient responded to oral trimethoprim-sulfamethoxazole; this is the first reported case of sternoclavicular septic arthritis treated with outpatient oral antibiotics. However, the patient refused aspiration and was treated empirically [20]. Furthermore, Jang et al.'s study was limited to S. aureus infections, whereas in our case the patient's culture was positive for GBS [16].

Many studies have suggested that medical management demonstrated success in achieving desired clinical outcomes in selective cases, such as patients presenting with a local infection with no complications. However, in situations where medical treatment fails or serious complications occur, surgical intervention is a highly effective approach that can reduce the risk of complications and decrease morbidity and mortality rates.
Conclusions

In this case, which showed rapid deterioration despite limited surgery and medical treatment, surgical intervention in the form of medial clavicle resection proved to be very effective in shortening patient hospitalization, enabling the patient to recover faster and resume daily activities. Although previous literature has shown medical treatment to be an effective option for sternoclavicular joint septic arthritis, surgical intervention remains a valid option for cases that are refractory to medical treatment. Further research is needed to establish the effectiveness of medial clavicle resection in terms of hospitalization and the recovery of physical activity.

FIGURE 2: Intraoperative picture of medial clavicle length of resection as marked by Kocher forceps.

FIGURE 4: Two months postoperative surgical wound showing good healing.
2024-01-28T16:17:31.280Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "ddd8e21d693147994df27a1f1ba79abe520f8e15", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/222902/20240126-23236-joo29j.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5eef96816914a75a2aa08acc7cbe696a825f3278", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
4024169
pes2o/s2orc
v3-fos-license
Seed banks of native forbs, but not exotic grasses, increase during extreme drought

Extreme droughts such as the one that affected California in 2012-2015 have been linked to severe ecological consequences in perennial-dominated communities such as forests. In annual communities, drought impacts are difficult to assess because many species persist through facultative multiyear seed dormancy, which leads to the development of seed banks. Impacts of extreme drought on the abundance and composition of the seed banks of whole communities are little known. In 80 heterogeneous grassland plots where cover is dominated by ~15 species of exotic annual grasses and diversity is dominated by ~70 species of native annual forbs, we grew out seeds from soil cores collected early in the California drought (2012) and later in the multiyear drought (2014), and analyzed drought-associated changes in the seed bank. Over the course of the study we identified more than 22,000 seedlings to species. We found that seeds of exotic annual grasses declined sharply in abundance during the drought while seeds of native annual forbs increased, a pattern that resembled but was even stronger than the changes in aboveground cover of these groups. Consistent with the expectation that low specific leaf area (SLA) is an indicator of drought tolerance, we found that the community-weighted mean SLA of annual forbs declined both in the seed bank and in the aboveground community, as low-SLA forbs increased disproportionately. In this system, seed dormancy reinforces the indirect benefits of extreme drought to the native forb community.

INTRODUCTION

Climate change is projected to increase both the frequency and severity of extreme events, including drought (Easterling et al. 2000, IPCC 2014, Swain et al. 2016). From 2012 to 2015, California experienced one of the most extreme droughts in the last 1,200 yr (Griffin and Anchukaitis 2014, Robeson 2015), causing widespread tree die-offs (Young et al.
2017) and lower agricultural output (Howitt et al. 2014). The effects of severe drought on annual plant communities, however, are less clear because many annual plants produce seeds with multiyear dormancy, leading to the formation of substantial and potentially long-lasting seed banks (Baskin and Baskin 2014). Previous research on drought impacts on annual communities has focused almost exclusively on aboveground life stages of plants and has not directly measured abundances in seed banks. This research has identified some consistent community-level effects of drought: experimentally imposed drought led to the loss of shallow-rooted species (Kimball et al. 2016), while long-term observational data have similarly revealed the disproportionate loss of high specific leaf area (SLA; leaf area per unit dry mass) species under aridification (Harrison et al. 2010, 2015). Without taking seed banks into account, however, conclusions about drought impacts on community composition and diversity remain tentative, because observed loss of a species aboveground may not signal a loss of the species from the community. In this study, we quantify changes in annual plant seed banks during an extreme drought event and compare these to aboveground cover estimates to provide a fuller picture of a diverse annual plant community's response to drought.

Seed banks spread germination out over time to reduce the likelihood of large population declines during unfavorable periods (Baskin and Baskin 2014). This strategy is particularly beneficial in variable environments where lower climatic predictability leads to higher variability in mean growth rates (Ellner 1987). By keeping a portion of their seed dormant, seed banking species incur less of a cost during climatically bad years, such as a drought (Cohen 1966, Philippi 1993).
Although seed banks strongly affect both the restoration potential and the resilience of a community (Hopfensperger 2007), we know very little about how communities dominated by seed banking species respond to severe drought events or even to climate change in general (Ooi 2012). An increase in drought frequency and severity could increase the probability of failed germination or of seedling mortality (Ooi 2012), while higher soil temperatures have been shown to increase germination in some species and lower seed viability in other species (Ooi et al. 2009), all of which could limit aboveground recovery of these systems after a disturbance. Species with stronger facultative dormancy are thought to be generally less drought-tolerant once they germinate (Brown and Venable 1986), exhibiting lower water use efficiency and higher relative growth rates (Huxman et al. 2008, Huang et al. 2016). These species also typically have higher SLA, a trait associated with wetter climates (Westoby et al.; Kimball et al. 2012), while low-SLA species with less persistent seed banks are more reliant on drought-tolerant traits such as deeper roots and higher water use efficiency (Farooq et al. 2009). Therefore, high-SLA species that are disappearing aboveground may be remaining dormant belowground. Species without adaptive dormancy or drought-tolerant traits are likely to be highly sensitive to variability in climate and intense droughts.

Diversity in California annual grasslands is dominated by native annual forbs, many of which are known for undergoing multiyear seed dormancy. Many of these forbs have bet-hedging strategies, germinating only a portion of their seeds each year, and often delay their germination until the onset of cooler rains that are indicative of more reliable winter rainfall (Levine et al. 2008, Mayfield et al. 2014). The floral diversity in this region, however, is threatened by exotic annual grasses, which dominate the landscape in cover and biomass.
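The bet-hedging logic of Cohen's (1966) model invoked above can be sketched with a toy simulation: a fixed germination fraction g trades expected yield in good years against survival of the dormant seed bank through bad years, and an intermediate g maximizes the long-run (log) growth rate. All parameter values here are illustrative, not estimates for any of the study species.

```python
import numpy as np

rng = np.random.default_rng(0)

def long_run_growth(g, p_good=0.7, y_good=10.0, y_bad=0.0, s=0.9, years=20000):
    """Mean log growth rate of a seed bank under a Cohen (1966)-style model.

    Each year a fraction g of seeds germinates; germinants yield y_good seeds
    in good years and y_bad in bad years, while dormant seeds survive with
    probability s. Parameter values are illustrative only.
    """
    good = rng.random(years) < p_good
    yields = np.where(good, y_good, y_bad)
    return np.mean(np.log(g * yields + (1 - g) * s))

# Intermediate germination fractions hedge against total-failure years:
rates = {g: long_run_growth(g) for g in (0.2, 0.5, 0.99)}
```

With these toy parameters, germinating nearly all seed every year (g = 0.99) is punished severely by the occasional failed year, while a moderate germination fraction does best, which is the qualitative behavior attributed to the bet-hedging forbs in the text.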
This dominance is due in part to their extremely high annual seed production (as high as 60,000 seeds/m²), their high relative growth rates that make them competitively superior to their native counterparts, and their buildup of thatch that also limits germination in native annuals (Bartolome 1979). Instead of relying on persistent seed banks, these exotic annual grasses have shorter seed longevity and readily germinate a larger proportion of their annual seed each year (Thompson and Grime 1979, Jain 1982). We analyzed drought-induced changes in the seed bank by growing out seeds from two sets of soil cores collected in fall 2012 and fall 2014 in a northern California grassland. A previous study at the same site found that the extreme 2012-2014 drought caused aboveground cover of exotic annual grasses, but not native forbs, to decline even more than expected based on community responses to normal interannual variability (Copeland et al. 2016). Long-term work at this site also showed that dry years cause high-SLA forbs to decline in aboveground cover and diversity relative to low-SLA forbs (Harrison et al. 2015, 2017). Thus, based simply on changes in aboveground abundance and seed input, we predicted that (1) abundance of exotic annual grass seeds in the seed bank would decline more during the severe drought than the abundance of native annual forb seeds, and (2) high-SLA native annual forbs would decline more in the seed bank than low-SLA forbs, leading to lower community weighted mean SLA of native annual forbs in the seed bank in 2014 than in 2012. In addition, higher drought-induced dormancy in forbs than in grasses should tend to strengthen prediction (1), while higher drought-induced dormancy in high-SLA than in low-SLA forbs should tend to weaken prediction (2); to evaluate these possibilities, we compared changes in seed bank composition to the corresponding changes in aboveground cover of these functional groups.
Collection site and greenhouse study

This study took place in a heterogeneous, annual-dominated grassland at the University of California McLaughlin Natural Reserve (https://naturalreserves.ucdavis.edu/mclaughlin-reserve) in the Inner North Coast Range (38°52′ N, 122°26′ W). The site has a Mediterranean climate with cool wet winters and dry hot summers; pre-drought annual winter precipitation averaged 46.4 cm and winter mean temperatures averaged 7.9°C (Flint and Flint 2014). During the recent extreme drought (2012-2014), winter precipitation at our site averaged 26.6 cm and winter mean temperature averaged 9.3°C. Annual plants in this community germinate in fall (September-December) shortly after rains begin, are present as seedlings during winter (December-February), and flower in spring (March-May) or summer (June-September). Our study used a set of 80 vegetation-monitoring plots that were chosen haphazardly and are widely dispersed across the reserve; 42 are on fertile soils derived from volcanic and sedimentary rocks and are dominated by exotic annual grasses, while 38 are on infertile soils derived from serpentine rock and have substantially higher native diversity (a mean of 17 species per 5 m² on serpentine soils vs. a mean of 9 per 5 m² on non-serpentine soils). Each plot consists of five permanently marked 1-m² subplots along a 40-m transect where visual estimates of species cover ("aboveground data") were recorded twice annually, in April and June, to capture peak cover for both early- and late-flowering species. See Harrison (1999), Elmendorf and Harrison (2009), Fernandez-Going et al. (2012), and Harrison et al. (2017) for further details and previous analyses of aboveground data from these plots. All vegetation surveys used in the present study were done using a 1-m² sampling frame and were carried out by the same trained and experienced person with a minimum estimate threshold of 0.1%.
Here we chose to focus our analysis on two functional groups: exotic annual grasses, which form >90% of the cover and are well known to be the dominant competitors in Californian grasslands (Eviner 2016), and native annual forbs, which comprise 44.5% of the species diversity in our sites and are the focus of considerable ecological and conservation interest (Eviner 2016). In 2012, early in the drought, and again in 2014, late in the drought, we collected five soil cores per plot (one from each subplot) and aggregated the cores into one sample per plot, giving us a total of 80 samples in each year. Soil cores were 5 cm in diameter and were taken from the top 10 cm of soil. Samples were collected in September of each year in question, after seeds from the previous growing season had set but before germination for the next growing season began. We sifted the samples to remove rocks and large vegetation fragments. After the samples were homogenized, we mixed a 1-kg subsample with equal parts sand to improve drainage, due to the high clay content of soils from the site. We then spread out each sample in half flats (10.875 inches wide × 10.875 inches long × 2.25 inches deep) and placed the flats in a shade house in the UC Davis Greenhouse Complex, where they were open to the natural background temperature variation. Flats were watered daily throughout the growing season, stirred before drying down for the summer, then watered again for another growing season. Every seedling that emerged was identified to species, recorded, and discarded ("belowground data"). In total, we recorded just over 11,000 seedlings during each year of the study from a total of 126 annual species.

Specific leaf area

We focused on SLA because of its known correlation with both water use efficiency (WUE) and relative growth rate (RGR; Reich et al. 1999, Wright et al. 2004). Specific leaf area (SLA) was measured in 2010 on 10 individuals per species (Spasojevic et al.
2012) following standard protocols (Cornelissen et al. 2003). To determine the average SLA of a community, we calculated community weighted mean SLA for each plot, which weights the SLA contribution of a particular species to the mean by its relative abundance in the community (Garnier et al. 2004).

Data analysis

To test for stronger declines in seed bank abundances of grasses than of forbs (prediction 1), we used generalized linear mixed effects models with the number of seeds summed by functional group as the response variable; year, functional group (native forb or exotic grass), and their interaction as the predictor variables; and a random slope for functional group nested within each plot. Seed counts were modeled using a negative binomial regression model because the count data were overdispersed. To compare the aboveground changes to the seed bank results for prediction 1, we conducted similar analyses on cover data. We used a linear mixed effects model on square root-transformed cover values, also summed by functional group, with year, functional group, and their interaction as the predictor variables, and a random slope for functional group within each plot. For both analyses, we conducted multiple comparison tests using the glht function in the multcomp library (Hothorn et al. 2008) to compare seed bank abundances and cover across years. We adjusted P-values using Benjamini-Hochberg corrections to account for multiple comparisons (Benjamini and Hochberg 1995). To test for declines in community weighted SLA of forbs (prediction 2), we used a linear mixed effects model on log-transformed community weighted SLA data with year, community type (seed bank or aboveground), and their interaction as the predictor variables, and a random slope for community type within each plot. We then analyzed changes in belowground abundance of species with high vs. low SLA to test whether these changes were driven by low- or high-SLA species.
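Two small pieces of these analyses can be stated concretely: the community weighted mean is simply an abundance-weighted average of a trait, and the Benjamini-Hochberg adjustment is a rank-based step-up correction of P-values. The authors worked in R (glht from the multcomp package); the sketch below re-implements only these two small steps in Python, with all data values invented for illustration.

```python
import numpy as np

def community_weighted_mean(trait_values, abundances):
    """Abundance-weighted mean of a trait (e.g., SLA) across the species in a plot."""
    w = np.asarray(abundances, dtype=float)
    return float(np.sum(np.asarray(trait_values, dtype=float) * w) / w.sum())

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg step-up adjusted P-values (controls the false discovery rate)."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)  # indices of p sorted ascending
    adjusted = np.empty(m)
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity of adjusted values
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, p[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical plot: two species with SLA values 10 and 20 and abundances 1 and 3
cwm = community_weighted_mean([10.0, 20.0], [1.0, 3.0])  # (10*1 + 20*3) / 4 = 17.5
adj = benjamini_hochberg([0.01, 0.04, 0.03, 0.005])
```

The study's actual models (negative binomial GLMMs with random slopes) are beyond this sketch; only the trait-weighting and multiple-comparison steps are shown.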
Species with below-median SLA for a given plot were classified as low-SLA species, while those with above-median SLA for a given plot were classified as high-SLA species. We then ran a negative binomial regression model on summed seed bank abundance at the plot level with year, high vs. low SLA, and their interaction as predictor variables, and a random slope for high vs. low SLA species nested within each plot. For aboveground data, we used a linear mixed effects model on square root-transformed cover data, also summed by SLA group, again with year, high vs. low SLA, and their interaction as predictor variables, and a random slope for high vs. low SLA species nested within each plot. We then conducted multiple comparison tests using the glht function in the multcomp library (Hothorn et al. 2008) to compare SLA changes across years. We adjusted P-values using Benjamini-Hochberg corrections to account for multiple comparisons (Benjamini and Hochberg 1995). Due to the different species composition of grasslands on serpentine and non-serpentine soils, we also ran all models including soil type as a predictor, as well as its interactions with the other predictors. Although there were significant quantitative differences in abundance, cover, and community weighted SLA changes between soil types, the directional change of all our variables did not vary by soil type, and thus inclusion of soil type did not qualitatively change the results (Appendix S1: Tables S1, S2 and Figs. S1-S3). All data analyses were done in R version 3.3.1 (R Core Team 2016).

Seed bank abundance

In agreement with Prediction 1, we found that belowground seed abundance of exotic annual grasses significantly declined from 2012 to 2014 (Z = −7.76, P < 0.001; Fig. 1a), while native annual forbs significantly increased over the course of the drought (Z = 9.61, P < 0.001; Fig. 1a).
To determine whether the observed community-level changes were driven by many or only a few species, we tabulated the direction of change for each species. We found that the trends were generally consistent across species. In the seed bank, 11 of 15 grass species declined in seed bank abundance, while only 4 grass species increased in abundance (Fig. 2; Appendix S1: Table S3). In contrast, 65 of 81 native annual forb species increased in abundance in the seed bank, while only 14 species declined in abundance and 2 species displayed no change (Fig. 2; Appendix S1: Table S3).

Specific leaf area

In partial agreement with Prediction 2, community weighted mean SLA of the forb community decreased significantly in the seed bank from 2012 to 2014 (Z = −1.99, P = 0.05; Fig. 4). However, rather than being driven by a decrease in high-SLA forbs as predicted, this pattern was driven by a large increase in seeds of low-SLA forbs. Although both forb species with below-median SLA and forbs with above-median SLA for a given plot significantly increased in summed abundance (Table 1), below-median SLA species increased by 263% while above-median SLA species increased by 119%.

Aboveground comparisons

Aboveground, we found that grass cover significantly decreased (Z = −6.95, P < 0.001; Fig. 1b) while forb cover significantly increased (Z = 2.63, P = 0.01; Fig. 1b). These changes were smaller in magnitude than the corresponding changes in the seed bank (grasses: −39% aboveground vs. −52% in the seed bank; forbs: +14% aboveground vs. +201% in the seed bank). As we did for seed bank abundance, we compared these functional group-level contrasts to trends in cover of individual species. Aboveground, 11 of 14 grass species declined in cover, while only 3 grass species increased (Fig. 3; Appendix S1: Table S3). In contrast, 52 of 88 native forb species increased in cover, while 35 decreased and 1 species displayed no change (Fig. 3; Appendix S1: Table S3).
Community weighted mean SLA also significantly decreased aboveground (Z = −2.03, P = 0.05; Fig. 4). Similar to the change in the seed bank, this change was driven by an increase in low-SLA forbs; summed cover of species with below-median SLA significantly increased by 13%, while summed cover of those with above-median SLA decreased non-significantly (Table 1). Community weighted mean SLA was higher for the seed bank than for the aboveground community in both years (2012: 20% higher, P < 0.001; 2014: 21% higher, P < 0.001).

FIG. 2. Mean change in seed bank abundance per species (15 exotic annual grass species and 81 native annual forb species). See Appendix S1: Table S3 for a list of species and their associated changes in abundance.

DISCUSSION

Together, our results reveal that grass and forb abundance changed in the same direction both above- and belowground in response to drought, but that the magnitude of each functional group's response was stronger belowground. Moreover, these changes were well predicted by both the drought tolerances and the seed dormancy tendencies of the different functional groups. Exotic annual grasses suffered the strongest negative effects, with a 52% decline in belowground seed bank abundance between 2012 and 2014. This belowground decline mirrored their 39% decline in aboveground cover, a decline that significantly exceeded the drought response predicted from normal interannual variability (Copeland et al. 2016). Our work supports other studies on annual grasses that found a large decrease in grass seed banks during an experimentally imposed drought (Hild et al. 2001), as well as negative aboveground responses to decreased rainfall, including increased senescence and mortality (Clary et al. 2004), reduced competitive effects (Sheley and James 2014), and decreased densities (Salo 2004).
While the observed decline in grass abundances may have resulted from decreased germination, survival (either in the seed or post-germination stage), growth, and/or seed production throughout the drought, the even stronger decrease in seed abundance attests to a low capacity for population-level buffering through facultative seed dormancy, also in accord with work in other arid, annual grass-invaded systems (Forcella and Gill 1986, Harel et al. 2011). In contrast to grasses, native forbs significantly increased in both seed bank abundance and aboveground cover during the drought. The 201% increase in native forb seed bank abundance was considerably larger than the 14% increase in cover aboveground, suggesting that the drought induced much higher levels of seed dormancy in these species, especially the high-SLA forbs, which did not increase in abundance aboveground. Our results also suggest that the enlarged native annual forb seed bank during the drought was mainly driven by the low-SLA, drought-tolerant species, which increased in abundance belowground and in cover aboveground, in contrast to the drought-intolerant high-SLA species, which displayed a smaller yet still sizeable increase in abundance belowground and a non-significant decrease aboveground. Similar patterns of increased dormancy during dry years have also been observed in desert annual plants (Pake and Venable 1996, Venable 2007, Angert et al. 2009). The evident benefit of the drought to the native annual forb community both below- and aboveground is consistent with theoretical (Levine and Rees 2004) and previous empirical evidence (Suttle et al. 2007, Dudney et al. 2017) pointing to the positive effects of reduced competition from exotic annual grasses.

FIG. 3. Mean change in percent cover per species (14 exotic annual grass species and 88 native annual forb species). See Appendix S1: Table S3 for a list of species and their associated changes in cover.
However, some other studies have found the direct negative effects of droughts on native annual diversity to outweigh the positive indirect ones (Tilman and Haddi 1992, Pfeifer-Meister et al. 2016). One possible explanation of this discrepancy is that our site underwent a longer-term trend toward drier winters during the 12 yr preceding the drought, which was associated with trends toward lower forb diversity and a lesser prevalence of drought-intolerant species (Harrison et al. 2015). Thus, the communities we studied may have already been disproportionately poor in drought-intolerant species by 2012. Our findings also support other work showing that mesic-adapted species maintain larger proportions of their populations in a dormant state in the seed bank than species that are better equipped to tolerate drought stress once germinated (Brown and Venable 1986). The seed bank, acting as a reservoir for the less drought-tolerant forbs, had a significantly higher weighted SLA across years compared to the aboveground communities. We have already seen similar trends in interannual variability in our system, where local species richness is higher in wet years, with only a nested, smaller subset of species appearing in drier, hotter years (Elmendorf and Harrison 2009, Fernandez-Going et al. 2012). Similarly, extensive work in desert annual systems has found that stress-tolerant plants with higher water use efficiency and lower relative growth rates have more buffered population dynamics and higher germination fractions, while species with lower water use efficiency and higher relative growth rates had more variable survival and fecundity and much lower germination fractions, indicating a higher tendency for seed banking in these species (Pake and Venable 1996, Angert et al. 2007, Venable 2007, Huxman et al. 2008, Huang et al. 2016).
The exotic annual grasses that currently dominate Californian grasslands are found in disturbed environments in their native range in the Mediterranean basin (Jackson 1985). They evolved to be highly ruderal and flexible, with high relative growth rates, high reproductive effort, and rapid germination allowing them to increase rapidly in wet years (Jackson 1985, Salo 2004). Such a high-risk, high-reward strategy becomes less advantageous as the probability of a wet winter decreases (Ellner 1987). Since their introduction to California in the mid- to late-1800s, there have been no years nearly as dry as 2014 (Griffin and Anchukaitis 2014). The cumulative 3-yr (2012-2014) Palmer Drought Severity Index of −14.55 was the worst drought on record, even more extreme than longer (4-9 yr) droughts (Griffin and Anchukaitis 2014), indicating that the grasses had not previously been exposed to a drought of this severity. Given such an extreme response in the annual grass community, there would likely be a lag before grass populations recover, even with a return to wetter, more favorable conditions. Since these grasses play critical ecosystem roles, including as competitors with native plants (Barger et al. 2003), forage for livestock (Huntsinger et al. 2007), cover for wildlife (Schiffman 2007), food resources for granivores (Schiffman 2007), and fuel for wildfires (D'Antonio and Vitousek 1992), prolonged droughts could have many cascading ecosystem consequences mediated by declines in annual grass abundance. While these results may give a positive outlook for native annual forb populations under drought, it is likely that perennial bunchgrasses, which have been similarly affected by exotic annual grass invasion in California, will not be as resilient to increased drought. Perennial bunchgrasses are better adapted to wetter climates and occur in higher abundances along the coast of California and in areas with higher summer rainfall and lower variation in temperature and rainfall (Clary 2012).
These bunchgrasses also tend to be competition- and disturbance-intolerant (Dyer and Rice 1999, Maron and Marler 2008) and to lack a persistent seed bank (Hild et al. 2001, Gibson 2009). The best recovery targets for California's grasslands may therefore be increased populations of native forbs, rather than native bunchgrasses. Overall, our results highlight the dramatic negative effect of severe droughts on annual grass dominance in this system, and an unexpected neutral-to-positive response in competitively inferior native forbs. Underlying this response was facultative seed dormancy in the drought-intolerant, competitively inferior natives, combined with a release from grass competition that benefited the aboveground success of drought-tolerant, low-SLA native annual forbs. The drought-intolerant native forbs appear resilient to a single extreme drought event; however, it is possible that more frequent, severe, or prolonged future droughts could eventually exceed the adaptive capacity of native species to survive through seed dormancy.
Spatial Distribution of Regenerated Woody Plants in an Alnus hirsuta (Turcz.) var. sibirica Stand in Japan

The role of N₂ fixation in structuring plant communities and influencing ecosystem function is potentially large. In a previous study, we investigated nodule biomass and activity, and calculated the amount of N₂ fixation, in a naturally established 18-year-old alder (Alnus hirsuta (Turcz.) var. sibirica) stand following disturbance by road construction in Takayama, central Japan. In this study, to estimate the facilitative effects of alder on the spatial distribution of regenerated tree species, we examined the distribution pattern of the regenerated tree species in this naturally established 18-year-old alder stand. The distribution pattern of alder and the regenerated woody species was analyzed in terms of spatial point processes, and the regenerated species tended to be distributed near the alder site. In particular, bird-dispersed tree species (endozoochory species) with relatively high shade tolerance showed a significant attraction to alder. These results suggest that alder is used as a roost tree and plays the role of a mother tree for these regenerated species at the degraded site. It was also suggested that the endozoochory species, which occupy 13 of 26 regenerated species in this stand, might regenerate faster than other species in this alder stand.
Introduction

Given the virtually ubiquitous limitation of plant growth by N supply (Vitousek & Howarth, 1991), the role of N₂ fixation in structuring plant communities and influencing ecosystem function is potentially large (Chapin et al., 1994; Thomas & Bowman, 1998). Nitrogen accretion accelerates due to N₂ fixation during community development, facilitating invasion by later successional species and accelerating the rate at which succession proceeds (Thomas & Bowman, 1998). Interspecific facilitation by plants may be more important in structuring plant communities and ecosystem function than previously thought (Hunter & Aarssen, 1988; Callaway & Walker, 1997). For example, alpine Trifolium species have high rates of symbiotic N₂ fixation, which influenced the abundance and growth of nearby plant species (Thomas & Bowman, 1998). The abundance of some species was positively associated with the presence of Trifolium, though other species were less abundant. These results suggest that N₂-fixing species may exert both facilitative and inhibitive effects on the abundance and growth of plant species growing near them and, in the process, substantially influence the spatial heterogeneity in community structure and primary production.
Species of actinorhizal Alnus, which fix N through the metabolic activity of the filamentous bacterial symbiont Frankia, play an important role in the N cycle of temperate forest ecosystems (Tjepkema et al., 1986; Tobita et al., 2013b). Interest in these Frankia-Alnus systems has increased as their value in the revegetation of deteriorated wildlife habitats and the rehabilitation of N-deficient disturbed areas has become apparent (Sharma, 1988; Baker & Schwintzer, 1990; Zitzer & Dawson, 1992; Tobita et al., 1993; Chapin et al., 1994; Enoki et al., 1997). In mixed Alnus-conifer young-growth stands, Alnus species appear to provide much more productive understory vegetation and wildlife habitat than similar-aged pure conifer stands (Hanley et al., 2006). Alnus species often regenerate naturally at sites disturbed by road construction, natural soil slides, and so on (Tobita et al., 2010). If there are facilitative effects on plant abundance by the N₂-fixing Alnus species, the distribution pattern of regenerated plants may also be influenced by the distribution of alder.
Dispersal mechanisms promoting seed arrival from distant sources are key in primary succession (Finegan, 1984; Walker & Chapin, 1987). As plants are sessile, both the initial spatial pattern of offspring (Houle, 1992) and the spatial population structure (Armesto et al., 1991) are determined by the location of parent plants and their seed dispersal ability (Nanami et al., 1999). In addition, stands established on degraded soils will have no mother trees of other tree species at an early stage of stand development. Large trees of other species, serving as roost trees for birds, act as foci for the deposition and recruitment of bird-dispersed (endozoochory) plants (Hatton, 1989; Maltez-Mouro et al., 2007). In the temperate zones, many fleshy-fruited plants rely on migrating birds to disperse their seeds (Johnson et al., 1985; Nakanishi, 1996). If Alnus species perform as roost trees for birds and play a role as mother trees, endozoochory tree species might be distributed around the Alnus trees, as shown in Maltez-Mouro et al. (2007). Alnus hirsuta var. sibirica is a deciduous early successional tree species that is widely distributed in the northern districts and highlands of Japan. We investigated nodule biomass and the amount of N₂ fixation in a naturally established 18-year-old stand of A. hirsuta var. sibirica in areas degraded by road construction in Takayama, central Japan (Tobita et al., 2010, 2013a). We found that the horizontal distribution of nodules in A. hirsuta var. sibirica varied among tree sizes; in particular, for trees with smaller dbh, there was a concentration of nodule density near the stem (Tobita et al., 2010). In addition, the N₂ fixation rate in this A. hirsuta var. sibirica stand was estimated at 56.4 kg·ha⁻¹·yr⁻¹, which corresponded to 66.4% of the N content in leaf litter in a year (Tobita et al., 2013a). These results suggested that the N₂ fixation of A.
hirsuta var. sibirica contributed to rapid N accumulation in the soil. In this study, we tried to clarify the effects of the Alnus species on the spatial distribution of regenerated tree species in this naturally established 18-year-old stand of A. hirsuta var. sibirica. Of course, many processes, such as the germination and survival of seeds and seedlings, growth of saplings, competition with herbaceous species, light environment, soil water content, and litter as a physical obstruction, contribute to the present distribution of each species (Hatton, 1989). In this study, we consider the potential impact of the presence of the Alnus species as one of the many factors determining the spatial distribution of the regenerated woody plants.

Study Site

The study site was at an altitude of approximately 1100 m on Mt. Norikura in the eastern part of Takayama city, Gifu prefecture, central Japan (36°9′N, 137°15′E). A study plot of 30 × 35 m was set up in an alder (Alnus hirsuta Turcz. var. sibirica (Fischer) C.K. Schn.)
stand (for details, see Tobita et al., 2010). Alder regenerated naturally after the disturbance when a road was built through the site in 1975. Alders are a deciduous, early successional species, widely distributed in the northern districts and highlands of Japan. These trees are used to improve the growth of mixed conifer plantations, produce logs, and revegetate degraded soil. In our study site, all the canopy trees were alders. The tree height was about 15 m, and the canopy of this stand was almost closed (Hasegawa & Takeda, 2001; Tobita et al., 2010). The mean (±SD) stem diameter at breast height (dbh) of trees was 12.4 (±3.8) cm in April 1995, and the frequency distribution of dbh was unimodal, indicating that this stand comprised trees of similar age (Tobita et al., 2010). Because several trees died during the study, the stand density varied from 1114 ha⁻¹ in April 1995 to 1038 ha⁻¹ in May 1996. Although the site floor was densely covered in herbaceous plants, regenerating specimens of several species of trees and shrubs were also present. The neighboring forest stand was used for coppicing, and has been dominated by Pinus thunbergii, Quercus mongolica, Betula platyphylla var. japonica, Prunus grayana, Lindera obtusiloba, and Euptelea polyandra.

Regenerated Woody Plants

From June to November 1996, naturally regenerated woody species in the study site were mapped to analyze the pattern of spatial distribution, and the height and position of each stem base were measured. All regenerated species were divided into three groups by seed-dispersal type: bolochory species (dispersed by gravity), anemochory species (dispersed by wind), and endozoochory species (dispersed by birds).
Data Analysis of Spatial Distribution of Regenerated Woody Plants

The alder population, including trees that died before April 1995, was reconstructed using data on detectable fallen trees, dieback trees, and stumps (Tobita et al., 2010). Subsequently, we constructed five populations of alder: 1) live trees in April 1995 plus trees dead before April 1995; 2) live trees in April 1995; 3) live trees in April 1996; 4) live trees in April 1997; and 5) trees dead before April 1996. To analyze the spatial interactions with the regenerated species, two populations of alder were used: live trees in April 1996 and trees dead before April 1996. We recognize that the number of trees dead before April 1995 would be underestimated, because we counted only those that were detectable, such as fallen trees, dieback trees, or stumps.

The spatial pattern of the regenerated woody species (all species and the seven major species, for which a relatively large number of individuals emerged) and of the alder populations was analyzed using the function L(t), a transformation of Ripley's K(t) (Ripley, 1977), as suggested by Besag (1977). The function K(t) is defined such that λK(t) is the expected number of plants within distance t of an arbitrarily chosen plant, where λ is the plant density. The unbiased estimate of K(t) is

K̂(t) = n⁻² |A| Σᵢ Σⱼ≠ᵢ wᵢⱼ⁻¹ Iₜ(uᵢⱼ),

where n is the number of plants in a plot A; |A| denotes the plot area; uᵢⱼ is the distance between the ith and jth plants; wᵢⱼ is the proportion of the circumference of a circle, centered at the ith plant with radius uᵢⱼ, that lies within A; Iₜ(u) equals 1 if u ≤ t and 0 otherwise; and the summation is over all pairs of plants not more than t apart (Ripley, 1977; Diggle, 1983; Nanami et al., 1999). L(t) is defined as follows:

L̂(t) = √(K̂(t)/π) − t.

A value of L(t) = 0 indicates that the spatial pattern at distance t is random; values of L(t) > 0 indicate a clumped pattern, and values of L(t) < 0 indicate a regular pattern. Departures were tested with Monte Carlo simulations (Besag, 1977; Nanami et al., 1999). The null hypothesis is complete spatial randomness for the univariate spatial pattern, while spatial independence was assumed as the null hypothesis for bivariate spatial interactions between two groups. Ninety-five per cent confidence envelopes
were defined as the highest and lowest values of L̂(t) for each spatial scale found in 950 analyses of random point distributions; 99 per cent confidence envelopes require 990 simulations. Earlier uses of both functions and of Monte Carlo simulations are discussed elsewhere (Peterson & Squiers, 1995; Nanami et al., 1999).

Spatial Distribution Pattern of Alnus hirsuta var. sibirica

The alder populations showed significant clumped distribution (Figure 1): live and dead trees in April 1995 at 0.5 m (P < 0.01), live trees in 1996 at 9.5-10.0 m (P < 0.05) and 0.5 m (P < 0.05), and live trees in 1997 at 0.5-1.0 m (P < 0.05). The magnitude of departure from randomness at large distances declined from 1995 to 1997.

Species Composition

In this stand, 23 woody species with 389 individuals had regenerated (Table 1), and the density of all regenerated plants was 0.37 plants m⁻². These regenerated species included 14 tree species with 229 individuals and 9 shrub species with 160 individuals. The regenerated plants comprised 13 endozoochory species with 239 individuals, 7 anemochory species with 106 individuals, and 3 bolochory species with 44 individuals. Among tree species, Prunus incisa, Cornus controversa, Juglans mandshurica, and Acer rufinerve were the major species, while Weigela hortensis, Sambucus racemosa, and Aralia elata were the major shrub species. There were no alder seedlings on the forest floor. The height frequency of all regenerated woody species showed a mode at 0-0.5 m (Table 1). Except for Salix bakko, almost all individuals of each tree species were less than 4 m in height, while the height of the shrub species was less than 3 m, though the proportion of taller individuals exceeded that of the tree species.
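As a concrete illustration of the point-pattern statistic used in the Data Analysis section, the following Python sketch computes a naive version of L̂(t) = √(K̂(t)/π) − t. It deliberately omits the edge-correction weights wᵢⱼ (setting them all to 1), so it is a simplified toy estimator, not the edge-corrected version used in the study; the point coordinates and plot area are invented for illustration.

```python
import numpy as np

def ripley_L(points, t_values, area):
    """Naive L(t) = sqrt(K(t)/pi) - t with no edge correction (all w_ij = 1)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # pairwise Euclidean distance matrix
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
    L = []
    for t in t_values:
        # ordered pairs i != j with u_ij <= t (subtract the n zero-distance diagonal entries)
        pairs = (d <= t).sum() - n
        K = area * pairs / (n * n)
        L.append(np.sqrt(K / np.pi) - t)
    return np.array(L)

# Toy example: two plants 1 m apart in a 10 m^2 plot
L_vals = ripley_L([(0.0, 0.0), (1.0, 0.0)], [0.5, 2.0], area=10.0)
```

For t = 0.5 no pair is within range, so K̂ = 0 and L̂(0.5) = −0.5 (a regular-pattern signal); judging significance would require the Monte Carlo envelopes built from repeated random point sets, as in the study.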
Spatial Interaction between Regenerated Woody Species and Alnus hirsuta var. sibirica

The population of all regenerated tree species showed a significant attraction to alder (live trees in 1996) at 0.5–2 m (P < 0.01) and at 2.5 m (P < 0.05) (Figure 4(A)). When analyzed by seed dispersal type, the populations of bolochory and anemochory species showed no significant departure from independence of the alder population (live trees in 1996) (Figure 4(B) and Figure 4(D)). The population of endozoochory species, which included 13 species, showed a significant attraction to alder (live trees in 1996) at 0.5–1.5 m (P < 0.01) and at 2 m (P < 0.05) (Figure 4(C)).

Discussion

Regenerated woody species tended to be distributed near Alnus hirsuta var. sibirica. In particular, endozoochory species with relatively high shade tolerance showed a significant attraction to alder. These results suggest that alder is used for roost trees and plays a role as mother trees for these regenerated endozoochory tree species. They also suggest that endozoochory species, which accounted for 13 of the 23 regenerated species in this stand, may regenerate faster than species with other seed dispersal types in naturally regenerated stands of Alnus species after soil disturbance.
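The bivariate attraction and repulsion analyses above (Figure 4) rest on the bivariate analogue $L_{12}(t)$ of the univariate $L(t)$. A minimal, edge-correction-free Python sketch (illustrative only, not the exact estimator used in the paper):

```python
import numpy as np

def L12_function(pts_a, pts_b, t_values, area):
    """Naive bivariate L12(t) between two populations (e.g. regenerated
    seedlings vs. live alders). K12(t) is estimated as
    (A / (n_a * n_b)) * count of cross-pairs with distance <= t;
    edge correction is omitted. Under spatial independence L12 ~ 0;
    L12 > 0 suggests attraction, L12 < 0 repulsion."""
    a = np.asarray(pts_a, dtype=float)
    b = np.asarray(pts_b, dtype=float)
    t = np.asarray(t_values, dtype=float)
    # cross distances between every a-point and every b-point
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
    k12 = np.array([area / (len(a) * len(b)) * np.sum(d <= tt) for tt in t])
    return np.sqrt(k12 / np.pi) - t
```

Significance would again come from simulation envelopes; for bivariate interactions these are commonly generated by randomly displacing one population (e.g., toroidal shifts) while holding the other fixed, which is an assumption here rather than a procedure stated in the text.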
The population of all regenerated tree species showed a significant attraction to alder (Figure 4(A)), but the spatial interactions with the Alnus population differed among the regenerated woody species (Figure 5). As predicted, some endozoochory species, such as Prunus incisa and Sambucus racemosa, showed significant attraction to alder (Figure 5(E) and Figure 5(F)). Viburnum tinus, which has bird-dispersed seeds, was clumped at a site without mature conspecific trees in the upper canopy (Maltez-Mouro et al., 2007). Those authors argued that one reason for the aggregated pattern of Viburnum tinus was that its seeds were dispersed under trees of other species, which birds used as roost trees. In the case of herbaceous species with bird-dispersed seeds, a strong aggregation pattern was found under large trees, which would act as roost trees for birds and as foci for the deposition and recruitment of endozoochory plants (Hatton, 1989). The number of birds in plantations of Alnus species usually exceeds that in Eucalyptus and Pinus plantations, because the number of soil animals on which birds feed increases with soil fertilization (Carlson & Dawson, 1985). In this study, we have no data on bird numbers or seed fall, but we observed birds gathering in the crowns of this alder stand during the daytime. In this study plot, it is suggested that Alnus hirsuta var. sibirica was used as roost trees and mother trees for endozoochory species.
However, two of the four endozoochory species, Cornus controversa and Aralia elata, showed spatial distributions independent of alder (Figure 5(D) and Figure 5(G)). Cornus controversa showed attraction when the spatial interaction was analyzed including dead alder trees (data not shown). These results indicate that the attractive distribution pattern between Cornus controversa and alder became unclear as the number of dead alder trees increased. Conversely, Aralia elata was attractively distributed relative to the population of dead alder trees (Figure 6(C)). These results indicate that, for Aralia elata to regenerate in this alder stand, improved light conditions may be necessary in addition to seed supply, because Aralia elata is a strongly light-demanding species (Tobita et al., 1993).

Weigela hortensis, an anemochory species, showed a significant attraction to the population of dead alder trees (Figure 6(B)) and a significant repulsion from the population of live alder trees (Figure 5(C)). Thomas & Borman (1998) reported similar results, namely that some anemochory species showed repulsion from a leguminous species, Trifolium. However, Acer rufinerve, which is also an anemochory species, showed a significant repulsion from the population of dead alder trees (Figure 6(A)) and an independent distribution with respect to the live alder population (Figure 5(B)). Although Acer rufinerve is an intolerant understory species (Masaki et al., 1992), these results suggest that Weigela hortensis may be more light demanding than Acer rufinerve (Katsuta et al., 1998). However, Weigela hortensis showed vegetative reproduction in several cases, which might affect the analytical results for the spatial distribution of this species.
The distribution pattern of bolochory species depends on the behavior of mammal species as well as on the location of the mother trees. For example, the location of the mother trees alone could not explain the distribution pattern of Quercus serrata seedlings regenerated under a plantation of Pinus thunbergii (Tobita et al., 1993). Although Juglans mandshurica, a bolochory species, showed no significant departure from independence of the alder population (Figure 5(A)), this species was observed to grow close to the alders. These observations suggest secondary dispersal by mammals.

The horizontal spatial pattern of individuals in a plant community may reflect many factors (Maltez-Mouro et al., 2007) and has been interpreted in terms of a wide range of processes, including mortality due to herbivores or pathogens, gap disturbance, competition, microhabitat variability, and limited dispersal range from adults (Hatton, 1989). The presence of canopy trees, whether N₂-fixing or not, will promote changes in the physical properties of the forest floor (Maltez-Mouro et al., 2007). In this study, we cannot determine the factors mainly affecting the distribution pattern of the regenerated plants in this alder stand. However, the results suggest that the presence of alder might promote the regeneration of woody species as one of many factors determining their spatial distribution. They also suggest that the N₂ fixation ability of the alder (Tobita et al., 2013a) may not only help improve soil fertility but also promote regeneration through the alders being used as roost and mother trees for the regenerated species at degraded sites.

This study demonstrated the results for only one stand of A. hirsuta var. sibirica at one point in time. In future research, it will be necessary to confirm the species composition and successional development after several decades in this alder stand, and also to verify the facilitative effects on the distribution patterns of regenerated plants in several stands at different successional stages and with different dominant Alnus species.

Conclusion

Regenerated woody species, in particular endozoochory species with relatively high shade tolerance, tended to show a significant attraction to Alnus hirsuta var. sibirica. These results suggest that alder is used for roost trees and plays a role as mother trees for these regenerated endozoochory tree species in naturally established stands of Alnus species after soil disturbance. This study also suggests that the symbiotic N₂ fixation of A. hirsuta var. sibirica might affect the distribution pattern of regenerated tree species as well as improving soil fertility.

Figure 1. $L(t)$ values for the population of Alnus hirsuta var. sibirica: (A) live trees in April 1995 and dead trees before April 1995; (B) live trees in April 1996; (C) live trees in April 1997. The solid line shows the actual $L(t)$ values of extant plants; dashed and dotted lines show the 95% and 99% confidence envelopes derived from 1000 simulations of random point processes in the study plot, respectively. Values outside the envelopes indicate significant departures from randomness.

Figure 3. $L(t)$ values for the population of each of the seven major regenerated woody species.

Table 1. Number and height of regenerated woody species in this Alnus hirsuta var. sibirica stand. Species are shown in order by life form: tree species and shrub species. The seed dispersal type is shown as bolochory (B), endozoochory (E), and anemochory (A).
Surface Micro-Patterned Biofunctionalized Hydrogel for Direct Nucleic Acid Hybridization Detection

The present research is focused on the development of a biofunctionalized hydrogel with a surface diffractive micropattern as a label-free biosensing platform. The biosensors described in this paper were fabricated by holographic recording of polyethylene terephthalate (PET) surface micro-structures, which were then transferred into a hydrogel material. Acrylamide-based hydrogels were obtained by free radical polymerization, and propargyl acrylate was added as a comonomer, which allowed for covalent immobilization of thiolated oligonucleotide probes into the hydrogel network via thiol-yne photoclick chemistry. The comonomer was shown to contribute significantly to the immobilization of the probes, based on fluorescence imaging. Two different immobilization approaches were demonstrated: during or after hydrogel synthesis. The second approach showed a better loading capacity of the bioreceptor groups. Diffraction efficiency measurements of hydrogel gratings at 532 nm showed a selective response, reaching a limit of detection of 2.47 µM for the complementary DNA strand. The label-free biosensor as designed could contribute significantly to direct and accurate analysis in medical diagnosis, as it is cheap, easy to fabricate, and works without the need for further reagents.

Introduction

Nowadays, interest in developing affordable and mass-producible clinical diagnostic devices is increasing, to improve accessibility to healthcare worldwide. Fast, self-monitoring tests that allow on-site detection are of global interest to avoid hospital crowding and the spread of contagious diseases. Indeed, the development of portable devices for point-of-care testing (POCT), which allow fast analyte detection with an easily interpretable readout, is crucial for the future [1].
POCT is presently available for a variety of analyses, for example, pregnancy tests, infectious disease tests (such as for respiratory infections and sexually transmitted diseases), glucose tests, and several other applications [2–6]. Among the various types of sensors, optical biosensors present great advantages over conventional analytical techniques because they enable direct, real-time, and label-free detection of many biological and chemical substances [7–9]. Their advantages include high sensitivity, small size, light weight, cost-effectiveness, and the ability to provide multiplexed or distributed sensing. In this context, holographic biosensors offer an appealing approach for label-free optical biosensing. Holographic sensors are gratings, recorded with holographic techniques in functionalized polymers, capable of quantifying the concentration of a target analyte [10]. As a transducer, a holographic pattern is recorded in the sensitive polymer structure, which consists of a 3D periodic structure with alternating strips of differing refractive index (RI), and thus it diffracts light. Upon interaction with the target analyte, the polymer matrix, which is permeable to the analyte, changes its physical and chemical characteristics, such as lattice spacing and/or refractive index.

The morphological characterization of hydrogels was carried out using scanning electron microscopy (SEM, Gemini SEM 500 system, Zeiss, Oxford Instruments, Oxford, UK). Hydrogels were completely swollen in distilled water and frozen at −20 °C. Then, they were lyophilized overnight (Telstar Lyoquest freeze-drier, Azbil Telstar Technologies, S.L.U., Terrasa, Spain) to yield completely dry aerogel samples. Finally, dry samples were prepared by sputter coating with an Au layer of about 15 nm (BAL-TEC SCD 005 sputter coater, Leica Microsystems, Wetzlar, Germany).
Fourier transform infrared (FT-IR) spectroscopy of lyophilized hydrogels was performed using a Tensor 27 FT-IR spectrophotometer (Bruker, MA, USA). UV-Visible spectra of hydrogels immersed in H2O were collected on an Agilent 8453 spectrophotometer (Santa Clara, CA, USA). For this analysis, hydrogels were polymerized inside an Eppendorf tube and, after washing, placed inside a 1 × 1 cm cuvette filled with H2O.

Swelling behavior studies were carried out with lyophilized hydrogel samples. Samples with a size of approximately 1 cm³ were immersed in PBS-T (10 mL) at room temperature. The weight of the swollen hydrogels was recorded at different times until they were totally swollen (reaching a constant weight). Buffer excess on the surface of the hydrogel was removed with filter paper before weighing. The swelling degree was calculated using Equation (1):

Swelling degree (%) = (Wt − W0)/W0 × 100, (1)

where Wt is the weight of the hydrogel after being immersed in the buffer for time t and W0 is the weight of the lyophilized hydrogel before immersion.

Hydrogel Synthesis

Acrylamide/propargyl acrylate (AM/PA) and acrylamide (AM) hydrogels were prepared by free radical polymerization (FRP) with either photochemical or thermal activation (Scheme S1). Different hydrogel compositions were optimized: AM(25)/PA, AM(8)/PA, AM(25), and AM(8). The AM(25)/PA hydrogel was prepared by mixing 25% (w/v) of AM monomer, 0.05% (w/v) of MBA crosslinker, and 15 µL of PA co-monomer in 1 mL of distilled water. The AM(8)/PA hydrogel was prepared by mixing 8% (w/v) of AM monomer, 0.25% (w/v) of MBA crosslinker, and 15 µL of PA co-monomer in 1 mL of distilled water. The control hydrogel AM(25) was prepared by mixing 25% (w/v) of AM monomer and 0.05% (w/v) of MBA crosslinker, while the control hydrogel AM(8) was prepared by mixing 8% (w/v) of AM monomer and 0.25% (w/v) of MBA crosslinker.
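The gravimetric swelling calculation of Equation (1) in the swelling study above is a one-line computation; a small Python helper, assuming the standard definition (the display form of Equation (1) was lost in this excerpt):

```python
def swelling_degree(w_t, w_0):
    """Swelling degree (%) per Equation (1), assumed as the standard
    gravimetric definition: SD = (Wt - W0) / W0 * 100, with Wt the
    swollen weight at time t and W0 the lyophilized (dry) weight."""
    return (w_t - w_0) / w_0 * 100.0
```

For example, a hydrogel whose swollen weight is five times its dry weight has a swelling degree of 400%, consistent with the "over 400%" values reported later for the optimized compositions.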
For the synthesis of hydrogels by thermal activation, potassium persulfate (KPS) at 1% (v/v) was added to the solution as a thermal initiator, and the reaction mixtures were placed in an oven at 60 °C for 90 min. For the synthesis of hydrogels by photochemical activation, the photoinitiator 2,2-dimethoxy-2-phenylacetophenone (DMPA) at 1% (w/v) was added to the reaction mixture, and the hydrogels were polymerized by irradiating at 365 nm in a UV photoreactor (13 mW/cm²) for 10 min. Once polymerized, the hydrogels were washed by immersion in distilled water for at least 2 h, refreshing the water three times, to ensure that non-polymerized monomers were eliminated. The obtained hydrogels were stored completely swollen in distilled water at 4 °C.

Probe Immobilization and Hybridization Assay

For potential biosensing applications, AM/PA hydrogels and their control systems, AM hydrogels, were covalently functionalized with a thiol-modified oligonucleotide probe, and the hybridization capacity was tested with a fluorescence-labeled target. All probes used are listed in Table S1. Bioreceptor immobilization was studied either during or after hydrogel synthesis. In the first approach, after homogenization of the monomers and crosslinker in water, 1 µM of Probe 1 and 1% (w/v) of DMPA photoinitiator in water were added to the mixture, and the solution was irradiated at 365 nm (13 mW/cm²) for 10 min. In this strategy, polymerization and bioreceptor immobilization were carried out simultaneously in one step. In the second approach, the already thermally synthesized hydrogels were cut into squares (0.5 × 0.5 cm) and immersed in 100 µL of 1 µM Probe 1 and 1% (w/v) DMPA photoinitiator in THF:Ac-TCEP 1:1. Then, the hydrogels were irradiated at 365 nm (13 mW/cm²) for 30 min. In both approaches, after the immobilization step, the hydrogels were placed on an oscillator plate and washed overnight with PBS-T.
For the hybridization assays, Probe 1-functionalized hydrogels of 0.5 × 0.5 cm were placed in a transparent ELISA (enzyme-linked immunosorbent assay) plate and equilibrated in 250 µL of SSC1x for 24 h. Then, the SSC1x was discarded, and the hydrogels were incubated with 50 µL of the Cy5-labeled complementary strand, Target 2, in SSC1x at growing concentrations (0; 0.2; 0.4; 0.8; 1; 1.5; and 2 µM) for one hour at 37 °C. Fluorescence signals were collected immediately after the hybridization and after overnight washing with SSC1x. Control hydrogels with an immobilized non-complementary sequence (Probe 2) were also hybridized as described.

Surface Micropattern Fabrication

Surface microstructures made of polyethylene terephthalate (PET) were fabricated using the direct laser interference patterning (DLIP) technique [32]. The DLIP system was equipped with a frequency-quadrupled Q-switched laser head (TECH-263 Advanced, Laserexport Co., Ltd., Moscow, Russia) with a maximum pulse energy of 50 µJ, operating at a wavelength of 263 nm and with a pulse duration shorter than 3 ns. A fluence of 0.09 J/cm² was used to obtain PET masters with a period of approximately 4 µm. The structural features of the original PET master were characterized with a 3D optical profilometer (Sensofar, PLu neon, Terrasa, Spain).

Hydrogel surface micropatterns were fabricated from the original PET master using the replica molding (REM) technique (Scheme 1). Firstly, the original PET micropattern was copied onto PDMS: the PDMS solution was poured onto the PET surface, a vacuum was applied for 10 min to aid solution-pattern adhesion, and it was then placed in an oven at 60 °C for 2 h. Secondly, the PDMS negative pattern was transferred onto the hydrogel surface.
Initially, pre-polymeric solutions with the monomers and crosslinkers of hydrogels AM(25)/PA, AM(25), AM(8)/PA, and AM(8) were stirred for 20 min until homogenization. Then, KPS was added, and the solution was sonicated for 2 min. The solutions were poured onto the different PDMS micropatterned surfaces, a vacuum was applied for 10 min, and they were then placed in an oven at 60 °C for 1.5 h. Once polymerized, the hydrogels were peeled off and washed by immersion in distilled water for at least 2 h, refreshing the water three times, to ensure that non-polymerized monomers were eliminated. The micropatterned hydrogels were stored completely swollen in distilled water at 4 °C.

Scheme 1. Micropatterning process steps for hydrogel surface structure manufacturing.

The micropatterns obtained on the PDMS and on the swollen hydrogel surface were observed with optical microscopy (OM, Leica Microsystems, MZ APO, Wetzlar, Germany). Surface pattern characterization was also carried out with an optical set-up, as shown in Figure 1. From the bottom, a continuous green laser beam (532 nm, 100 mW) is attenuated and orthogonally directed to the sample holder using a mirror. The sample holder is a 3D-printed platform provided with a pinhole and patterned lanes that allow the x-y movement of a 96-well ELISA plate, so the laser beam can be unequivocally directed toward every well. Then, movable silicon photodiodes are placed after the sample holder to record the intensity of the different laser beams (incident or diffracted). A concave spherical lens (f = 30 mm) was placed on top of the 96-well plate to focus the diffracted beams produced by the hydrogel micropatterns.

The diffraction efficiency (DE%) of the micropatterns was calculated with Equation (2):

DE% = I1/(I0 + I1) × 100, (2)

where I0 is the intensity of the zero diffraction order and I1 is the intensity of the first diffracted order.

Label-Free Hybridization Assay

Bioreceptor immobilization in micropatterned hydrogels was carried out in two steps. Firstly, a thermally polymerized micropatterned hydrogel (AM(25)/PA) was functionalized with 5 µM of Probe 1.
For that, micropatterned hydrogels were cut into squares (0.5 × 0.5 cm) and treated with 100 µL of a 5 µM solution of Probe 1 and 1% (w/v) of DMPA photoinitiator in THF:Ac-TCEP 1:1. Then, the hydrogels were irradiated at 365 nm (13 mW/cm²) for 30 min. The functionalized micropatterned hydrogels were washed overnight with PBS-T to eliminate the non-covalently attached probes.

For the label-free hybridization assays, the probe-functionalized micropatterned hydrogels were placed in separate wells of a transparent ELISA plate and equilibrated in 250 µL of SSC1x. The day after, the SSC1x buffer solution was replaced with a fresh one, and the initial diffraction efficiencies (DEi) of the hydrogels were obtained using the optical set-up (Figure 1) and Equation (2). The hybridization assay was performed by incubating the hydrogels with growing concentrations of Target 1 (0; 2; 5; 10; and 25 µM) in 50 µL of SSC1x for one hour at 37 °C. The hybridization experiment was also carried out with the AM(25)/PA hydrogel functionalized with a non-complementary, thiol-bearing oligonucleotide sequence (Probe 2), hybridized at 10 and 25 µM of Target 1, as a negative control. Then, the hydrogels were washed overnight with SSC1x to ensure that all non-specifically bound targets were removed. The final diffraction efficiencies of the hydrogels (DEf) were obtained using the optical set-up (Figure 1) and Equation (2). The relative diffraction efficiency was used to characterize the response of the hydrogel to the target concentration, as described in Equation (3):

RDE = (DEf − DEi)/DEi × 100, (3)

where RDE is the relative diffraction efficiency, DEi is the initial diffraction efficiency (after the equilibration step with SSC1x), and DEf is the final diffraction efficiency (after the incubation and washing steps), both for the first diffraction order. All experiments were repeated three times.

Optimized Hydrogel Compositions

First, the hydrogel composition was optimized from both a physical and a chemical point of view.
Polyacrylamide hydrogels are among the most widely used materials in the synthesis of holographic and photonic hydrogels due, among other things, to their excellent optical properties [11]. AM was chosen as the main monomer for the synthesis of the hydrogel networks, and MBA as one of the most common crosslinkers for polyacrylamide. The PA co-monomer was incorporated to introduce the alkyne moiety, which was necessary for the subsequent covalent attachment of the thiolated probe through thiol-yne photo-click coupling chemistry [33]. Apart from reaching adequate physical and optical properties, such as good porosity, transparency, and low optical background, the chemical formulation was adapted to increase the immobilization density of the biorecognition Probe 1. For that, different ratios of monomer (AM), co-monomer (PA), and crosslinker (MBA) were assayed. All the assayed compositions are shown in Table S2.

As expected, all the hydrogels were transparent, with almost zero absorbance at the working wavelength of our system (532 nm); Figure S1 shows the UV-Visible spectra of all hydrogels. However, not all the synthesized hydrogels showed the consistency required for part of our purposes: the fabrication of surface relief diffraction gratings using replica molding. The requirements for hydrogels to potentially yield suitable gratings include the following: they must adopt the shape of the container used for the polymerization, and they need to be manipulable, easy to cut, not brittle, and able to keep their macroscopic form after washing and swelling. The consistency of the different hydrogels polymerized with thermal activation is indicated in Table S2, and Figure S2 shows photographs of hydrogels with different consistencies. AM(25)/PA and AM(8)/PA showed the best consistency and potential to be used as surface relief gratings for DNA hybridization, so they, and their counterpart controls without PA, were selected for further optimization.
The selected compositions are shown in Table 1, and photographs of the hydrogels are shown in Figure S3. As the activation process for polymerization can affect the final properties of the hydrogel (i.e., porosity, swelling, etc.), the polymerization was carried out following two different activation processes: thermal and photochemical. The morphology of the optimized hydrogel compositions containing PA was comparatively observed for thermal and photochemical activation, as poor homogeneity has previously been reported in hydrogels polymerized with UV light [34–36]. For that, lyophilized hydrogels were analyzed with SEM (Figures 2 and S4). As can be observed in the SEM micrographs, thermal activation provided higher homogeneity and porosity in the hydrogel network for both the AM(25)/PA and AM(8)/PA compositions, although both activation procedures resulted in adequate porosity levels.

As the hydrogels obtained with thermal activation showed the best homogeneity based on the SEM, swelling behavior studies of these hydrogels were carried out to test their buffer absorption capacity. The swelling studies (Figure S5, Supplementary Materials) show how the chemical composition affects the hydrogel water uptake. AM(8) hydrogels demonstrated a higher swelling degree than AM(25) hydrogels, probably because the larger quantity of monomer used in the AM(25) hydrogels counteracts the higher crosslinking degree present in the AM(8) hydrogels. Equally, the propargyl acrylate co-monomer contributed to the polymer swelling capacity: PA reduces the buffer absorption in AM(25)/PA and AM(8)/PA hydrogels in comparison to the AM(25) and AM(8) reference systems, probably due to the higher hydrophobicity of the alkyne moiety. However, in both compositions, the swelling capacity was over 400%. Thus, the optimized compositions were tested for subsequent bioreceptor immobilization and surface micropatterning.

Probe Immobilization and Hybridization Assay

AM/PA hydrogels and their corresponding controls (without PA) were covalently functionalized with a thiol-bearing oligonucleotide probe for potential biosensing applications. The oligonucleotide probe acts as the specific biorecognition element for its complementary sequence (target).
In the hydrogel formulation, the propargyl acrylate (PA) co-monomer provides a C-C triple bond that was expected to enhance the binding of thiol-probes in comparison to the control system [37]. Incorporation of the thiolated probes was carried out using the thiol-yne photoclick coupling reaction under UV irradiation at 365 nm (Scheme S2). Previous work by our group in microarray format had demonstrated that these irradiation conditions did not affect the probes' stability or their bioavailability to hybridize with the complementary strands [38].

Firstly, the thermally polymerized AM(25)/PA and AM(8)/PA hydrogels were biofunctionalized, as thermal activation yielded hydrogels with higher homogeneity and porosity and, in addition, a high swelling degree. Hydrogels were functionalized with Probe 1, complementary to the target, and, additionally, with Probe 2, a thiolated, non-complementary sequence. In addition, the hydrogels without PA, AM(25) and AM(8), were also submitted to functionalization with Probe 1 to assess the role of PA in the probe immobilization process. The immobilization was carried out in 1:1 THF:Ac-TCEP, with TCEP added to facilitate the reduction of disulfide bonds established between the thiolated probes. After probe immobilization, a fluorescence-labeled target sequence was used for hybridization assays to verify the successful incorporation of the thiol probe and its bioavailability for specific hybridization. Therefore, the thermally activated, probe-biofunctionalized hydrogels AM(25)/PA, AM(25), AM(8)/PA, and AM(8) were hybridized with increasing concentrations of the Cy5-labeled target sequence (Target 2) for 1 h at 37 °C, and the fluorescence was registered after washing overnight (Figure 3a,b). As a control, the fluorescence signal was also registered after hybridization in several cross-section pieces of the hydrogels AM(8)/PA and AM(25)/PA to demonstrate that Target 2 could reach the probe within 1 h (Figure S7).
Figure 3a,b show that significantly higher fluorescence signals (4-fold to 5-fold) were observed for AM(25)/PA and AM(8)/PA hydrogels compared to their control systems AM(25) and AM(8) when they were functionalized with Probe 1, complementary to the target. As expected, the introduction of the PA co-monomer allowed much more effective probe immobilization, thanks to the thiol-yne coupling chemistry, increasing the probe loading in the hydrogels. Therefore, the immobilization strategy was successful for both AM(25)/PA and AM(8)/PA hydrogels. Moreover, a higher fluorescence signal was measured for the AM(25)/PA hydrogel in comparison to the AM(8)/PA hydrogel. In addition, almost no fluorescence was observed when AM(25)/PA and AM(8)/PA hydrogels were functionalized with Probe 2, bearing the non-complementary sequence, which demonstrated that specific hybridization was taking place and that non-specific binding was negligible inside the hydrogel supports. As polymerization could also be activated photochemically, using the same wavelength needed for the thiol-yne coupling reaction, a second strategy was assessed for hydrogel biofunctionalization: a one-step process consisting of the immobilization of the thiolated probe during hydrogel polymerization. In this strategy, the thiol-yne photoclick coupling reaction and the acrylamide polymerization, using DMPA as a photoinitiator, were triggered with UV irradiation at the same time. Therefore, pre-polymeric solutions of AM(25)/PA and AM(25) hydrogels were mixed with 1 µM of the complementary Probe 1 and DMPA, and then irradiated at 365 nm for 30 min. Additionally, a control experiment was carried out with the AM(25)/PA hydrogel and the non-complementary Probe 2. Once the hydrogels were washed and equilibrated with SSC1x, hybridization assays with the Cy5-labeled target sequence (Target 2) at increasing concentrations, as above, were carried out, and fluorescence was registered after washing (Figure 3c). 
In this case, the highest fluorescence signal was also observed for AM(25)/PA hydrogels functionalized with Probe 1. However, the high fluorescence observed in the hybridization curve of the AM(25) hydrogel showed that the thiolated probe was immobilized even without the PA co-monomer. This is due to the thiol-acrylate coupling reaction, which follows the same principle as the thiol-yne photocoupling reaction [39]. Figure S6 shows the IR spectrum of a lyophilized AM(25) hydrogel, with a spectral profile compatible with the presence of residual unreacted acrylamide groups. However, even in this case, the presence of PA increased the hydrogel probe immobilization capability. As before, AM(25)/PA hydrogels biofunctionalized with Probe 2 did not show a significant fluorescence signal after hybridization, which reveals that non-specific binding is also avoided with the one-pot functionalization strategy. Comparing the two strategies for AM(25)/PA hydrogels functionalized with Probe 1, complementary to Target 2, the ones biofunctionalized after polymerization (Figure 3a) showed two-fold the fluorescence signal of the ones biofunctionalized during the polymerization (Figure 3c). Probably, in the case of biofunctionalization after the polymer synthesis, a larger number of bioreceptors is introduced and, in addition, these probes are more accessible to the target. Thus, thermally polymerized AM(25)/PA hydrogels biofunctionalized after their synthesis showed the best performance for the detection of the complementary target using fluorescence. 
(Figures S8, S9, S10 and S11). 
Surface Micropattern Fabrication and Characterization

For the surface micropatterning of hydrogels, PET masters were used to obtain a negative in PDMS, which was in turn replicated with the above optimized hydrogel compositions. The fabricated PET master was characterized using confocal microscopy (Figure 4a). The profile obtained from the confocal images shows that the gratings have a period of 4 µm and a depth of 2.1 µm. The PDMS negative copy was characterized with optical microscopy where, as expected, a period of 4 µm was observed, which confirmed the correct replication of the PET master (Figure 4b). In addition, the original PET master and its PDMS copies were irradiated with a continuous green laser at 532 nm using the optical set-up described in the Materials and Methods section (Figure 1), and the diffraction efficiency (DE%) was calculated using Equation (2). Both fabricated microstructures showed good diffraction efficiency. Hydrogel surface micropatterning was realized, during the polymerization, for the optimized compositions using replica molding. The thermally activated curing process for acrylamide/propargyl acrylate hydrogels took 1 h 30 min, presumably sufficient time for obtaining a good copy of the original PET microstructure. For the AM(25)/PA and AM(25) compositions, a good copy of the microstructure was obtained during the thermal curing. Figure 4c shows the optical microscopy image of the AM(25)/PA hydrogel grating, which correctly replicated the pattern. It should be noted that a larger period is observed in the hydrogel compared to the PDMS master, as the former is swollen in water. The diffraction of the thermally polymerized AM(25)/PA hydrogel was also evaluated after its irradiation with a continuous green laser at 532 nm using the optical set-up of Figure 1. Figure 4d shows the diffraction pattern of the AM(25)/PA hydrogel. 
Zero, first, and second diffractive orders are present and distinguishable, so the grating could be very useful for label-free biosensing based on diffractive measurements. The diffraction efficiency (DE%) was calculated for the first diffraction order using Equation (2), resulting in 4.6 ± 0.5, 9.8 ± 0.5, and 1.1 ± 0.2 for the PET, PDMS, and AM(25)/PA gratings, respectively. Lower values were observed in comparison with the PET and PDMS masters, which was expected, as the hydrogel has a watery nature while PET and PDMS are plastics. Replication of the microstructure using thermal activation was not possible for the AM(8)/PA and AM(8) compositions. This was attributed to the amount of monomer used, which was too low to achieve the right viscosity for the replication process. On the other hand, trials of grating replica molding using photochemical polymerization proved unsuccessful, since the polymerization proceeded too fast to permit the correct molding. 
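Equations (2) and (3) are not reproduced in this excerpt. A common definition of diffraction efficiency, assumed here purely for illustration, is the power in the first diffracted order relative to the incident power, with the relative change in DE tracking the binding event:

```python
def diffraction_efficiency(first_order_mw: float, incident_mw: float) -> float:
    """DE% as first-order diffracted power over incident power
    (an assumed form of the paper's Equation (2))."""
    return 100.0 * first_order_mw / incident_mw


def relative_de_change(de_initial: float, de_final: float) -> float:
    """RDE% as the relative drop in DE between two measurements
    (an assumed form of the paper's Equation (3))."""
    return 100.0 * (de_initial - de_final) / de_initial


# e.g. 0.11 mW diffracted out of a 10 mW beam gives a DE of about 1.1%,
# comparable in magnitude to the value reported for the AM(25)/PA grating
de = diffraction_efficiency(0.11, 10.0)
```

The photodiode powers, and the exact normalization (incident vs. transmitted power), are assumptions; the paper's own equations should be consulted for the precise form.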
On the other hand, by varying UV photoreactor parameters individually for each hydrogel composition, such as UV light power and irradiation time, hydrogel surface micropatterns were successfully obtained for all the optimized hydrogel compositions. However, the peeling-off of the hydrogel surface pattern copied from the PDMS, using photochemical activation, was cumbersome, and thus 20 µL of glycerol was added to promote the detachment. For the AM(25)/PA, AM(25), and AM(8)/PA hydrogels, micropatterned replicas were obtained using 15 min of UV irradiation and 10 mW/cm² of light power, whereas for the AM(8) hydrogel, 10 min of UV irradiation and 0.6 mW/cm² of light power were used. Although it was possible to replicate the grating using both thermal and photochemical activation, better reproducibility of the surface micropattern copies was obtained for the AM(25)/PA hydrogel composition during thermal curing. Thus, the AM(25)/PA hydrogel composition showed the best results in terms of micropattern fabrication and biorecognition properties. 
Consequently, it was chosen for further label-free biosensing studies.

Label-Free Biorecognition

To evaluate the potential label-free sensing of the surface relief gratings of the probe-functionalized hydrogels, a hybridization assay was performed using unlabeled probes. Firstly, surface microstructures were obtained for the AM(25)/PA hydrogels during the thermal curing as, according to the previous results, this hydrogel composition and these reaction conditions produced the hydrogel with the best properties for the selective detection of targets with fluorescent sensing and, in addition, yielded micropatterned hydrogels able to correctly diffract the light. Therefore, the same conditions were expected to produce hydrogels with the best properties for the label-free detection of targets. After the hydrogel synthesis, the AM(25)/PA hydrogel was functionalized with 5 µM of Probe 1. The functionalized hydrogel patterns were placed in a Petri dish and washed overnight with SSC1x buffer. The day after, they were cut into squares (0.5 × 0.5 cm) and positioned in separate wells of a transparent ELISA plate with 250 µL of SSC1x. The size of the hydrogel was chosen to fit perfectly within the ELISA wells and thus avoid the crushing of their walls and their free flotation. Diffraction efficiencies (DE%) of the functionalized hydrogel patterns were registered using the optical set-up (Figure 1) at controlled conditions (RH 45 ± 5% and 24 ± 1 °C). Ambient conditions were reached with domestic air conditioning and humidifier systems. Figure S12 shows that the signals were stable for at least 30 min. Therefore, the signal was not affected by the incidence of the focused laser beam, and slight delays in the reading time would not affect the obtained results. After that, the hybridization assay was performed in triplicate. Hydrogels were incubated with increasing concentrations of Target 1 (0, 5, 10, and 25 µM) in 50 µL of SSC1x for 1 h at 37 °C. 
After overnight washing with SSC1x, the DE was registered at 532 nm and the RDE was calculated according to Equation (3) to assess the direct detection of the complementary DNA sequence (Target 1) (Figure 5). As a control experiment, the AM(25)/PA hydrogel was also functionalized with a non-complementary DNA sequence (Probe 2), and hybridization assays were performed with Target 1 at 25 µM following exactly the same procedure. A gradual decrease in the DE% with increasing concentration of the unlabeled target was observed for the AM(25)/PA hydrogel functionalized with Probe 1, while for the control system, with the non-complementary sequence Probe 2 immobilized, no tendency was observed. The DE (%) data obtained with Probe 1 were best fitted using a Hill 1 correlation curve, with a correlation coefficient of R² = 0.991. The RDE (%) data obtained with Probe 1 were also best fitted using a Hill 1 correlation curve, with a correlation coefficient of R² = 0.997. The limit of detection (LOD) of 2.47 µM was calculated from the RDE (%) curve as the concentration associated with the mean signal of ten blank measurements plus three times their standard deviation. Thus, it was possible to detect the analyte in the range from 2.47 to 10 µM using the micropatterned hydrogels as an optical transducer. 
Therefore, the label-free biosensing assay using unlabeled probes, performed for AM(25)/PA hydrogels with the surface micropattern, showed excellent preliminary results. The LOD for DNA in our system is higher than that of most of the hydrogel-based systems described in the literature [40]. However, most of those approaches are based on labels and/or elaborate DNA architectures. DNA hybridization with hydrogels has also been explored for actuators and other purposes [41], but these studies give little consideration to analytical performance. Baba and co-workers have reported the use of diffraction gratings for the label-free detection of DNA with a very low LOD, but the DNA was amplified during the analysis [42]. Our results are very promising, but the diffraction efficiency calculated for the obtained hydrogel surface micropattern is not high. Hence, further improvements in the micropattern fabrication can be realized to increase the initial DE% and, accordingly, the sensitivity of the analyte detection. These improvements involve the fabrication of thinner surface relief gratings as well as replication with lower-period PET masters. Although the fabrication of these gratings can be challenging, technologies such as two-photon polymerization can be used for fabricating 2D/3D microstructures with high accuracy [43,44]. 
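The LOD procedure described above (blank mean plus three standard deviations, mapped through the fitted Hill 1 curve) can be sketched in plain Python. The Hill parameters and blank replicates below are hypothetical placeholders, not the paper's data; in practice the curve would first be fitted to the measured RDE values with a least-squares routine:

```python
import statistics


def hill1(x: float, start: float, end: float, k: float, n: float) -> float:
    """Hill 1 response curve: start + (end - start) * x^n / (k^n + x^n)."""
    return start + (end - start) * x**n / (k**n + x**n)


def hill1_inverse(y: float, start: float, end: float, k: float, n: float) -> float:
    """Concentration that produces response y on the fitted curve."""
    frac = (y - start) / (end - start)
    return k * (frac / (1.0 - frac)) ** (1.0 / n)


# Hypothetical fitted RDE(%) curve and ten blank replicates (illustrative only):
start, end, k, n = 0.0, 60.0, 8.0, 1.5
blanks = [0.9, 1.2, 0.8, 1.1, 1.0, 0.7, 1.3, 0.9, 1.0, 1.1]

# LOD = concentration whose response equals mean(blanks) + 3 * SD(blanks)
y_lod = statistics.mean(blanks) + 3 * statistics.stdev(blanks)
lod_uM = hill1_inverse(y_lod, start, end, k, n)
print(f"LOD = {lod_uM:.2f} uM")
```

With the paper's actual fitted parameters and blank signals, the same inversion yields the reported 2.47 µM.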
In addition, quicker data acquisition and automation of the hydrogel SRGs will allow increasing the number of replicates and lowering the experimental error. Despite those facts, it was possible to directly detect the analyte with good selectivity and sensitivity, which is notable given that this is the first time that surface micro-patterned hydrogels have been used to directly detect hybridization events.

Conclusions and Future Outlook

Optical biosensors are emerging for point-of-care testing (POCT) as they present advantages such as increased sensitivity and suitability for integration into a compact device for out-of-the-lab use. Overall, line-like periodic microstructures were successfully fabricated on a bioresponsive hydrogel surface and used as transducers for converting the analyte-bioreceptor binding into a measurable optical signal. The planned approach for the covalent immobilization of the bioreceptor probes had notable outcomes. Furthermore, different bioreceptors with thiol terminal groups could be used, depending on the analyte to be detected. Accordingly, the developed biosensor can sense multiple analytes. Results obtained from the label-free biorecognition assay showed a direct correlation between the measured diffraction efficiency and the target concentration. The label-free biosensor as designed could significantly contribute to direct and accurate analysis in medical diagnosis, being cheap, easy to fabricate, and working without the need for further reagents. To fully achieve this, further aspects should be considered, such as the minimization of biofouling of the hydrogels when they are immersed in real fluids. This can be achieved by tuning the composition of the hydrogels, for instance, using polyacrylamide copolymers or zwitterionic moieties [45].

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/bios13030312/s1. Table S1. 
Nucleotide sequences of the probes and targets used. Scheme S1. Schematic representation of the hydrogel synthesis by free-radical polymerization (FRP). AM: acrylamide, MBA: N,N'-methylenebis(acrylamide), PA: propargyl acrylate, initiator = DMPA: 2,2-dimethoxy-2-phenylacetophenone. Scheme S2. Thiol probe immobilization by thiol-ene and thiol-yne click reactions of (AM/PA) hydrogels under UV light. AM: acrylamide, PA: propargyl acrylate, initiator = DMPA: 2,2-dimethoxy-2-phenylacetophenone. Table S2. Hydrogel compositions. Figure S1. UV-visible spectra of hydrogels with different compositions (a) without PA and (b) with PA. Figure S2 Figure S6. ATR-FTIR spectrum of the AM(25) hydrogel. Figure S7. Fluorescence signals obtained for probe-functionalized (a) AM(8)/PA and (b) AM(25)/PA hydrogels after hybridization with Target 2 for 1 h at 37 °C (λex = 633 nm, λem = 670 nm). Firstly, hydrogels were functionalized during the synthesis, using the first strategy (one-pot, photochemical), with 1 µM of the thiolated Probe 1. After overnight washing with PBS-T, they were hybridized with 1 µM of fluorescent-labeled Target 2. Hydrogels were cut into three pieces, and the central piece was flipped prior to analysis to observe the signals of the cross-section profile. Fluorescence signals were collected after hybridization. The experiment was carried out in triplicate (three rows of the images). The fluorescence signal is visible in all three pieces for both hydrogels. Figure S8. Fluorescence signals obtained for the probe-functionalized AM(25)/PA hydrogel after hybridization with Target 2 (λex = 633 nm, λem = 670 nm). Firstly, hydrogels were functionalized during the synthesis, using the first strategy (one-pot, photochemical), with 1 µM of the thiolated probes: Probe 1 and, as a control, Probe 2. After overnight washing with PBS-T, they were hybridized with 1 µM of fluorescent-labeled Target 2. 
Fluorescence signals were collected after hybridization and 2 h of washing, and after overnight washing with SSC1x. The fluorescence signal remained only in the case of Probe 1, complementary to the target. Figure S9. Fluorescence signals obtained for AM(25) and AM(25)/PA hydrogels through a hybridization assay with Target 2 (λex = 633 nm, λem = 670 nm). Firstly, hydrogels were biofunctionalized with the thiolated probes (Probe 1 and Probe 2) at 1 µM after the polymerization. In the first bar chart, fluorescence signals were registered just after the hybridization assay with 0.5 µM of Target 2. In the second bar chart, the fluorescence was registered after overnight washing with SSC1x in order to wash away all the non-specific binding. Figure S10. Fluorescence signals obtained for the AM(8)/PA hydrogel through a hybridization assay with Target 2 (λex = 633 nm, λem = 670 nm). Firstly, hydrogels were functionalized with the thiolated probes (Probe 1 and the control Probe 2) at 1 µM during the synthesis, using the one-pot synthesis strategy. After overnight washing with PBS-T, they were hybridized with 1 µM of Target 2. Fluorescence signals after hybridization were collected after overnight washing with SSC1x. The experiment was conducted in triplicate. Figure S11. Fluorescence signals obtained for AM(8) and AM(8)/PA hydrogels through a hybridization assay with Target 2 (λex = 633 nm, λem = 670 nm). Firstly, hydrogels were functionalized with the thiolated probes (Probe 1 and, as a control, Probe 2) at 1 µM after the synthesis, using the two-step strategy. In the first bar chart, fluorescence signals were registered just after the hybridization assay with 1 µM of Target 2. In the second bar chart, the fluorescence was registered after overnight washing with SSC1x in order to wash away all the non-covalent probe binding. Figure S12. 
Stability of the signals measured with the optical set-up overnight: intensities of the zero and first diffraction orders generated by the AM(25)/PA hydrogel, immersed in SSC1x within the wells of the plate, were registered with the photodiodes after illumination with the laser beam (λ = 532 nm).

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.
Non-neoplastic indications and outcomes of the proximal and distal femur megaprosthesis: a critical review

Purpose Megaprosthesis or endoprosthetic replacement of the proximal and distal femur is a well-established modality for the treatment of tumors. The indications for megaprosthesis have been expanded to the treatment of some non-neoplastic conditions of the knee and hip, such as the severe bone loss associated with failed arthroplasty, comminuted fractures in the elderly with poor bone quality, and resistant non-union. The aim of this study is to find out whether megaprosthesis of the knee and hip is successful in the treatment of non-neoplastic conditions. The study comprises a review of the indications, complications, and outcomes of megaprosthesis of the proximal and distal femur in non-neoplastic conditions of the knee and hip joints. Methods We extensively reviewed the literature on non-neoplastic indications for megaprosthesis of the proximal and distal femur after performing a detailed search of the PubMed database using the Medical Subject Headings (MeSH) terms 'proximal femur replacement' or 'distal femur replacement' and 'hip or knee megaprosthesis.' The data obtained after the structured search were entered into a Microsoft Excel spreadsheet. The frequency distribution of the demographic data, indications, complications, and outcomes was calculated. Results We included ten studies (seven on proximal femur replacement and three on distal femur replacement) of 245 proximal femur and 54 distal femur megaprostheses for the treatment of non-neoplastic conditions. Bone loss in failed arthroplasty, due either to periprosthetic fracture or to deep infection, was the most common indication for megaprosthesis. Dislocation was the most common complication after proximal femur megaprosthesis, and infection was the leading cause of complications after distal femur megaprosthesis. 
Conclusion Megaprosthesis for the treatment of non-neoplastic conditions around the distal and proximal femur is a viable option for limb salvage, with an acceptable long-term outcome. Although the complications and survival rates of megaprosthesis in non-neoplastic conditions are inferior to those of primary arthroplasty of the hip and knee, they are comparable to or better than those of megaprosthetic replacement in neoplastic conditions. Proximal femoral megaprosthesis has higher dislocation and revision rates than distal femoral megaprosthesis. However, proximal femoral megaprosthesis has lower rates of infection, periprosthetic fracture, and soft tissue complications compared with distal femoral megaprosthetic replacement. Both are associated with aseptic loosening, with no statistically significant difference between the two.

Summary

Megaprosthesis or endoprosthetic replacement of the proximal and distal femur is a well-established modality for the treatment of tumors. The indications for megaprosthesis have been expanded to the treatment of some non-neoplastic conditions of the knee and hip, such as the severe bone loss associated with failed arthroplasty, comminuted fractures in the elderly with poor bone quality, and resistant non-union. Very few systematic reviews are available on proximal or distal femoral replacement for the treatment of non-neoplastic conditions. This study reviews the indications, complications, and outcomes of megaprosthesis of the proximal and distal femur in non-neoplastic conditions of the knee and hip joints. We included ten studies (seven on proximal femur replacement and three on distal femur replacement) of 245 proximal femur and 54 distal femur megaprostheses for the treatment of non-neoplastic conditions. Bone loss in failed arthroplasty, due either to periprosthetic fracture or to deep infection, was the most common indication for megaprosthesis. 
Dislocation was the most common complication after proximal femur megaprosthesis, and infection was the leading cause of complications after distal femur megaprosthesis. Proximal and distal femur megaprostheses can be used as a salvage procedure in non-neoplastic conditions with massive bone loss. Megaprosthesis for the treatment of non-neoplastic conditions around the distal and proximal femur is a viable option for limb salvage, with an acceptable long-term outcome. Although the complications and survival rates after megaprosthesis in non-neoplastic conditions are inferior to those of primary arthroplasty of the hip and knee, they are comparable to or better than those of megaprosthetic replacement in neoplastic conditions. Proximal femoral megaprosthesis has higher dislocation and revision rates than distal femoral megaprosthesis. However, proximal femoral megaprosthesis has lower rates of infection, periprosthetic fracture, and soft tissue complications compared with distal femoral megaprosthetic replacement. The two procedures show no statistically significant difference in aseptic loosening of the prosthesis. Dislocation in proximal femur megaprosthesis and infection in distal femur megaprosthesis are the major significant complications.

Introduction

Megaprosthesis or endoprosthetic replacement has been the standard of care in orthopaedic oncology for many decades [11]. Severe bone stock deficiency in the proximal or distal femur, as seen in septic or aseptic failed hip or knee arthroplasty and in osteoporotic fractures in the elderly with severe comminution or failed fracture fixation, precludes the use of conventional prostheses. The treatment options available in such a situation are structural allograft-prosthesis composite, impaction allografting, long cemented or press-fit revision stems, resection arthroplasty, and megaprosthesis [1,21]. 
There are many limitations associated with the use of allografts for reconstruction of bone loss, which has increased the use of megaprosthesis in tumor surgery [5,6]. The encouraging results of megaprosthesis for tumor salvage in the proximal and distal femur have broadened the indications for megaprosthesis to the treatment of non-neoplastic conditions with extensive bone loss in the proximal or distal femur [8,19]. Very few systematic reviews are available on proximal or distal femoral replacement for the treatment of non-neoplastic conditions [20,24]. Two recent systematic reviews on megaprosthesis for the treatment of non-neoplastic conditions of the proximal and distal femur found overall midterm survival rates of 76% and 83% for proximal and distal femoral prostheses, respectively [14,15]. The main aim of this study is to review the literature and analyze the demography, indications, complications, and outcomes of proximal and distal femur megaprosthesis for the treatment of non-neoplastic conditions. We also attempted to compare the complications and outcomes of proximal and distal femoral megaprosthesis.

Literature search

We searched the PubMed database for literature on megaprosthesis of the proximal or distal femur for the treatment of non-neoplastic conditions, to access the most relevant studies, on 10 July 2019. The keywords used in the PubMed search were 'proximal femoral replacement' or 'distal femur replacement' and 'hip or knee megaprosthesis.'

Eligibility criteria

The inclusion criterion was articles that described the use of a proximal or distal femur megaprosthesis for the treatment of non-neoplastic conditions. Case reports and reports on the use of megaprosthesis for the treatment of tumors were excluded.

Data collection

The authors screened the abstracts of possibly relevant articles and studied the full text of those articles meeting the inclusion criterion. 
Articles on proximal femur and distal femur megaprosthesis were reviewed separately. Data were extracted on type of study, number of patients, age, indications, complications, and follow up. Complications were classified according to the system reported by Henderson et al. [12], as previously modified for use in non-neoplastic conditions [14]: soft-tissue complications (type 1), aseptic loosening (type 2), structural complications or periprosthetic fracture (type 3), and peri-megaprosthetic infection (type 4). Data on revision and survival rates were also recorded when available. The data were then registered in a Microsoft Excel sheet, and the frequency distributions and means were calculated.

Search results

A total of 2682 articles were identified in the initial PubMed search. Of these, 2173 articles had full texts available, which were further screened. All the relevant articles on non-neoplastic conditions were then screened, and those with full text that met the inclusion criterion were selected for this study. A total of ten studies (seven on proximal femur megaprosthesis and three on distal femur megaprosthesis) fulfilled the eligibility criterion. The search strategy is illustrated in Fig. 1. Of the ten studies, four on proximal femur megaprosthesis and two on distal femur megaprosthesis were prospective. We analyzed data on 245 proximal femur megaprostheses (in 243 patients) and 54 distal femur megaprostheses (in 54 patients). These studies had sample sizes ranging from 8 to 79 (Tables 1 and 3).

Proximal femur megaprosthesis (n = 245)

The mean age of the patients was 68.7 years, and the mean follow-up duration was 44.64 months (Table 1). The indications for surgery were periprosthetic infection (28.9%), periprosthetic fracture (28.1%), and massive bone loss due to arthroplasty and complex fractures or failed internal fixation (22.8%) (Table 2).
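The tallying described above (complication counts per prosthesis site, entered in a spreadsheet) amounts to grouped frequency counts. A minimal pandas sketch with invented example records, not the actual extracted data, illustrates the computation:

```python
import pandas as pd

# Invented example records; the real data were extracted from the ten
# included studies (study type, n, age, indication, complication, follow up).
records = pd.DataFrame({
    "site": ["proximal"] * 5 + ["distal"] * 3,
    "complication": ["dislocation", "dislocation", "periprosthetic fracture",
                     "aseptic loosening", "none",
                     "infection", "infection", "none"],
})

# Frequency distribution of complications per site, as a percentage of cases
freq = (records.groupby("site")["complication"]
               .value_counts(normalize=True)
               .mul(100)
               .round(1))
print(freq)
```

With these toy records, dislocation accounts for 40.0% of the proximal cases and infection for 66.7% of the distal cases; the same grouping applied to the full extracted dataset yields the percentages reported in Tables 2 and 4.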
Dislocation of the prosthesis was the most common complication (14.6%, n = 32), followed by periprosthetic fracture and aseptic loosening in 7.5% and 6.9% of cases, respectively (Table 2). The Harris hip score improved from a preoperative mean of 38.9 to a mean of 72.6 at the last follow up. Of the seven study reports, only four [3,20,25,26] discussed revision and implant survival. Revision was required in 32 cases, and the mean implant survival was 80% at 5 years (Table 2).

Distal femur megaprosthesis (n = 54)

The mean age of the patients was 75.49 years, and the mean follow-up duration was 43.05 months (Table 3). The most common indication for distal femur megaprosthesis was substantial bone loss after failed knee arthroplasty, in 55.5% of cases (Tables 2 and 4). Periprosthetic infection was the most common complication (18.5% of cases) (Tables 2 and 4). The Knee Society Score improved from a preoperative median of 20 to a postoperative median of 80 [27]. Of the three studies, only Vertesich et al. [27] mentioned revision and implant survival. Revision was required in three cases, and the implant survival was 74.8% at 1 year and 40.9% at 10 years (Table 4).

Comparison of complications of proximal femur and distal femur megaprosthesis

Proximal femoral megaprosthesis has higher dislocation rates and a greater requirement for revision compared to distal femoral megaprosthesis. However, proximal femoral megaprosthesis is associated with lower rates of infection, periprosthetic fractures, and soft tissue complications compared to distal femoral megaprosthetic replacement. The difference between the two procedures in aseptic loosening of the prosthesis is statistically insignificant (Fig. 2).
Discussion

Megaprosthesis of the proximal or distal femur is a viable option for limb reconstruction in non-neoplastic conditions such as failed hip or knee arthroplasty, periprosthetic fractures, osteoporotic fractures with severe comminution, or resistant non-union in elderly patients [2,29]. Megaprosthesis in such cases should be considered a limb salvage option in carefully selected patients when other surgical options are not feasible [14]. In this review, we analyzed the complications and outcomes of proximal and distal femur megaprosthesis. Failed hip arthroplasty with extensive bone loss (due to infection, fracture, or aseptic loosening) was the most common (83.6%) non-neoplastic indication for proximal femur megaprosthesis (Table 2). Failed total knee arthroplasty (55.5%) was the most common non-neoplastic indication for distal femur megaprosthesis (Table 4). Dislocation of the hip prosthesis was the most common complication observed in this review, seen in 14.6% of proximal femur megaprostheses (but in none of the distal femur megaprostheses; Table 4). Our results are in agreement with the systematic review by Korim et al. [14], which reported a 15.7% rate of hip dislocation at a mean follow up of 45 months. The cause of instability is multifactorial, including the inability to achieve a secure repair of the residual soft tissues around the metal prosthesis [9] and compromised abductors around the hip due to multiple previous reconstructive procedures. The monobloc implants used previously were less versatile and often led to dislocation, but with the new generation of megaprostheses providing more secure soft tissue reattachment and the ability to reapproximate the retained proximal host bone to the prosthesis, the rate of dislocation has decreased [21].
Periprosthetic infection was seen in 6.9% of proximal femur megaprostheses (hip) and in 18.5% of distal femur megaprostheses (knee), in agreement with previous findings of a mean rate of 7.6% for proximal femoral prostheses [13,14] and 15% for distal femoral prostheses [15]. A recent systematic review reported a mean peri-megaprosthetic infection rate of 10% following tumor resection [22]. By comparison, the overall infection rate in hip and knee arthroplasty is as low as 1% [16,17]. Periprosthetic infection is common and remains the most challenging complication after megaprosthesis because of poor-quality soft tissue due to multiple previous surgeries, poor overall health status, and long operating times [16,18,22]. These factors result in a poor functional outcome and failed limb salvage. Aseptic loosening was seen in 7.7% and 9.9% of proximal and distal femur megaprostheses, respectively. Aseptic loosening of megaprostheses in the treatment of non-neoplastic diseases has previously been reported at rates ranging from 0% to 9.5% [25,29], and these reports are consistent with aseptic loosening after tumor prostheses [4]. Periprosthetic fracture was seen in 3.2% and 11.1% of cases of proximal and distal femur megaprosthesis, respectively. The mean age of the distal femur cohort was 75.49 years compared to 68.7 years in the proximal femur group; poorer bone quality may be the reason for the higher rate of periprosthetic fracture in this group [26]. Soft tissue complications were seen in 1.2% and 11.1% of cases of proximal and distal femur megaprosthesis, respectively (Table 4). In a retrospective review of 2174 patients, Henderson et al. [12] detected an overall rate of soft-tissue complications (i.e., including dislocation) of 5.2% with primary proximal femoral prostheses. We found that revision was required in 13.06% and 5.5% of cases of proximal and distal femur megaprosthesis, respectively.
In a systematic review by Korim [9], reoperation rates ranged from 13.3% to 40% for proximal femur megaprosthesis. We found significant improvement in the Harris hip score after proximal femur megaprosthesis and significant improvement in the Knee Society Score after distal femur megaprosthesis. The mean 5-year survival of proximal femur megaprosthesis was 80%, which is comparable to that reported for neoplastic indications (78-90%) [7,8,19]. The main limitations of this review were the heterogeneity and small sample sizes of the included studies. Data on patients lost to follow up were unavailable in most of the studies. Details of the complications and their outcomes could not be assessed thoroughly, as none of the articles except that of Grammatopoulos et al. [10] reported the complications for each indication, and we were able to analyze only three studies on distal femur megaprosthesis.

Conclusion

Proximal and distal femur megaprosthesis can be used as a salvage procedure for non-neoplastic conditions with massive bone loss. Proximal femoral megaprosthesis has higher dislocation rates and a greater requirement for revision compared to distal femoral megaprosthesis. However, proximal femoral megaprosthesis is associated with lower rates of infection, periprosthetic fractures, and soft tissue complications compared to distal femoral megaprosthetic replacement. The difference between the two procedures in aseptic loosening of the prosthesis is statistically insignificant. Dislocation in proximal femur megaprosthesis and infection in distal femur megaprosthesis are the major complications.
The Absolute Flux Distribution of LDS749B

Observations from the Space Telescope Imaging Spectrograph define the flux of the DBQ4 star LDS749B from 0.12-1.0 µm with an uncertainty of ~1% relative to the three pure hydrogen WD primary HST standards. With T eff = 13575 K, log g = 8.05, and a trace of carbon at <1×10^-6 of solar, a He model atmosphere fits the measured STIS fluxes within the observational noise, except in a few spectral lines with uncertain physics of the line broadening theory. Upper limits to the atmospheric hydrogen and oxygen fractions by number are 1×10^-7 and 7×10^-10, respectively. The excellent agreement of the model flux distribution with the observations lends confidence to the accuracy of the modeled IR fluxes beyond the limits of the STIS spectrophotometry. The estimated precision of ~1% in the predicted IR absolute fluxes at 30 µm should be better than the model predictions for Vega and should be comparable to the absolute accuracy of the three primary WD models.

Introduction

The DBQ4 star LDS749B (WD2129+00) has long been considered for a flux standard (e.g., Bohlin et al. 1990). To establish the flux on the Hubble Space Telescope (HST) white dwarf (WD) flux scale, STIS spectrophotometry was obtained in 2001-2002. The virtues of LDS749B as a flux standard include an equatorial declination and a significantly cooler flux distribution than the 33000-61000 K primary DA standards GD71, GD153, and G191B2B. Full STIS wavelength coverage is provided from 0.115-1.02 µm, and the peak in the SED is near 1900Å. At V = 14.674 (Landolt & Uomoto 2007), LDS749B is among the faintest HST standards and is suitable for use with larger ground-based telescopes and with the more sensitive HST instrumentation, such as the ACS/SBC and COS. The bulk of the STIS data was obtained as part of the FASTEX (Faint Astronomical Sources Extension) program. Finding charts appear in Turnshek et al.
(1990) and in Landolt & Uomoto (2007); but there is a large proper motion of 0.416 and 0.034 arcsec/yr in right ascension and declination, respectively. The absolute flux calibration of HST instrumentation is based on models of the three pure hydrogen WD stars GD71, GD153, and G191B2B (Bohlin 2000; Bohlin, Dickenson, & Calzetti 2001; Bohlin 2003). In particular, the NLTE model fluxes produced by the Tlusty code (Hubeny & Lanz 1995) determine the shape of the flux distributions using the known physics of the hydrogen atom and of stellar atmospheres. If there are no errors in the basic physics used to determine the stellar temperatures and gravities from the Balmer line profiles, then the uncertainty of 3000 K for the effective temperature of G191B2B means that the relative flux should be correct to better than 2.5% from 0.13 to 1 µm and to better than 1% from 0.35 to 1 µm. A model that matches the observations serves as a noise-free surrogate for the observational flux distribution and provides a reliable extrapolation beyond the limits of the observations for use as a calibration standard for JWST, Spitzer, and other IR instrumentation. Currently, the best IR absolute flux distributions are found in a series of papers from the epic and pioneering work of M. Cohen and collaborators, i.e., the Cohen-Walker-Witteborn (CWW) network of absolute flux standards (e.g., Cohen, Wheaton, & Megeath 2003; Cohen 2007). The CWW IR standard star fluxes are all ultimately based on models for Vega and Sirius (Cohen et al. 1992). More recently, Bohlin & Gilliland (2004) observed Vega and published fluxes on the HST/STIS WD flux scale. A small revision in the STIS calibration resulted in excellent agreement of the STIS flux distribution with a custom-made Kurucz model with T eff = 9400 K (Bohlin 2007), which is the same T eff used for the Cohen et al. (1992) Vega model.
The model presented here for LDS749B and archived in the CALSPEC database should have better precision than the Kurucz T eff = 9400 K model for Vega, especially beyond ∼12 µm, where Vega's dust disk becomes important (Engelke, Price, & Kraemer 2006). Vega is also a pole-on rapid rotator, which may also cause IR deviations from the flux for a single-temperature model. Our modeled flux distribution for LDS749B should have an accuracy comparable to the pure hydrogen model flux distributions for the primary WD standards GD71, GD153, and G191B2B.

The Model

A helium model atmosphere flux distribution for LDS749B is calculated with the LTE code of Koester (e.g., Castanheira et al. 2006) for T eff = 13575 K and log g = 8.05. At such a cool temperature, the differences between LTE and NLTE in the continuum flux distributions should be <0.1% from the far-UV to the IR. For example, for a pure hydrogen DA, the difference between the continua of a hot 40,000 K LTE/NLTE pair of models is 1% between 0.1-2.5 µm. The same maximum difference at 20,000 K is only 0.3%. Napiwotzki (1997) did not discuss pure He models but concludes that NLTE effects tend to become smaller with lower effective temperature. For cool DA WDs, Koester et al. (1998) show that the only NLTE effect that approaches 1% is a deeper line core of Hα. The matter densities in helium-rich white dwarfs are significantly higher, leading to a higher ratio of collisional versus radiative transitions between atomic levels. The larger importance of collisions increases the tendency towards LTE occupation numbers because of the robust Maxwell distribution of particle velocities. The T eff = 13575 K is higher than the T eff = 13000 K published for LDS749B (alias G26−10) in Castanheira et al. (2006), because only UV spectra of lower precision (IUE heritage) were used in that analysis. Voss et al.
(2007) found T eff = 14440 K with large uncertainty, because only line profiles in the optical range were used and log g had to be assumed. A trace of carbon at 10^-6 of the solar C/He ratio is included, i.e., the C/He number ratio is 3.715×10^-9. The model mass is 0.614 M⊙ and the stellar radius is 0.01224 R⊙, which corresponds to a distance of 41 pc for the measured STIS flux. The line broadening theory for the He lines combines van der Waals, Stark, and Doppler broadening to make a Voigt profile. However, the Stark broadening uses a simple Lorentz profile with width and shift determined from the broadening data in Griem (1964), instead of the elaborate calculations of Beauchamp et al. (1997). The Griem method is computationally much faster, and data are available for more lines than are calculated by Beauchamp et al. The fit of the higher series He lines is much improved if the neutral-neutral interaction is decreased in comparison to the original formalism of the Hummer-Mihalas occupation probabilities. A similar effect was noticed by Koester et al. (2005), and our model uses the same value of the quenching parameter that Koester et al. derived (f = 0.005). The model wavelengths are all on a vacuum scale.

STIS Spectrophotometry

The sensitivities of the five STIS low dispersion spectrophotometric modes have been carefully tracked since the STIS commissioning in 1997. After correcting for changing sensitivity with time (Stys, Bohlin, & Goudfrooij 2004) and for charge transfer efficiency (CTE) losses in the three STIS CCD spectral modes (Bohlin & Goudfrooij 2003; Goudfrooij et al. 2006), STIS internal repeatability is often better than 0.5% (Bohlin 2003). Thus, HST/STIS observations of LDS749B provide absolute spectrophotometry with a precision that is superior to ground-based flux measurements, which require problematic corrections for atmospheric extinction.
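The combined broadening described for the He lines above, a Gaussian (Doppler) core convolved with a Lorentzian (Stark plus van der Waals) component, is exactly a Voigt profile. A minimal sketch using SciPy's `voigt_profile`, with purely illustrative widths rather than values from the Koester code:

```python
import numpy as np
from scipy.special import voigt_profile

sigma = 0.5   # Gaussian (Doppler) standard deviation in Å; illustrative only
gamma = 0.3   # Lorentzian (Stark + van der Waals) HWHM in Å; illustrative only

# Offset from line center in Å; a wide grid is needed to capture the
# slowly decaying Lorentzian wings of the profile
dlam = np.linspace(-50.0, 50.0, 20001)
phi = voigt_profile(dlam, sigma, gamma)   # normalized Voigt line profile

# The profile is a probability density in wavelength offset, so its
# numerical integral over a wide enough grid is close to unity
area = np.sum(phi) * (dlam[1] - dlam[0])
print(round(area, 3))
```

Multiplying such a normalized profile by an equivalent width and subtracting it from the continuum gives a model absorption line; the Griem versus Beauchamp treatments differ only in how the Lorentzian width and shift are computed.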
Observations with a resolution R = 1000-1500 in four STIS modes from 1150-1710Å (G140L), 1590-3170Å (G230L), 2900-5690Å (G430L), and 5300-10200Å (G750L) were obtained in 2001-2002. Earlier observations of LDS749B in 1997 to test the time-tagged mode were unsuccessful. Two observations in the CCD G230LB mode overlap the wavelength coverage of the MAMA G230L but are too noisy to include in the final combined absolute flux measurement from the other four modes. Table 1 summarizes the individual observations used for the final combined average, along with the unused G230LB data for completeness. Figure 1 shows the ratios of the three individual G230L and the two G230LB observations to the model fluxes, which are normalized to the STIS flux in the 5300-5600Å range. The excellent repeatability of STIS spectrophotometry over broad bands is illustrated, and the global average ratio over the 1750-3000Å band pass is written in each panel. This ratio is unity to within 0.3%, even for the shorter CCD G230LB exposures, despite their almost 3× higher noise level and CTE corrections. The other CCD modes, G430L and G750L, also require CTE corrections. Repeatability for all the STIS spectral modes is comparable, i.e., the global ratio rarely deviates from unity by more than 0.6%. The observations in each of the four spectral modes are averaged, and the four segments are combined. This composite standard star spectrum extends from 1150-10226Å and can be obtained at http://www.stsci.edu/hst/observatory/cdbs/calspec.html/ along with the remainder of the HST standard star library (Bohlin, Dickenson, & Calzetti 2001). This binary fits table, named lds749b_stis_001.fits, has 3666 wavelength points and seven columns. An ascii file of the flux distribution in Table 2 is available via the electronic version of this paper.
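The per-panel "global average ratio" quoted above is simply a band-limited mean of observed-to-model flux over 1750-3000Å. A sketch with placeholder arrays; the real spectra would come from the STIS data products and the model grid:

```python
import numpy as np

# Placeholder wavelength grid (Å) and fluxes standing in for a G230L
# observation and the model; both arrays are invented for illustration.
wave = np.linspace(1600.0, 3200.0, 1601)
flux_model = 1e-13 * (2000.0 / wave) ** 2                    # fake smooth SED
flux_obs = flux_model * (1.0 + 0.002 * np.sin(wave / 5.0))   # fake ±0.2% ripple

# Global average ratio over the 1750-3000 Å band pass
band = (wave >= 1750.0) & (wave <= 3000.0)
global_ratio = np.mean(flux_obs[band] / flux_model[band])
print(round(global_ratio, 4))
```

Because the ripple averages out over the broad band, the ratio lands very close to unity, which is the repeatability statistic written in each panel of Figure 1.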
Table 2 contains the wavelength in Å and the flux in erg cm^-2 s^-1 Å^-1 in the first two columns, while columns 3-4 are the Poisson and systematic uncertainty estimates in flux units, respectively. Column 5 is the FWHM of the resolution in Å. The fits version has two more columns than the ascii version: column 6 is a data-quality flag, where one is good and zero may be poor quality, and the seventh column is the exposure time in seconds. The fluxes at the shortest wavelengths, below 1160Å, are unreliable because of the steepness of the sensitivity drop-off.

The Continuum

To compare the model and observations, a convenient method of removing the slope of the spectral energy distribution (SED) is to divide both fluxes by the same theoretical model continuum. Small differences between the observations and the model, either in the lines or in the actual continuum, are easily illustrated in such plots. The theoretical continuum contains only continuum opacities, with an extrapolation across the He i opacity edges at 2601, 3122, and 3422Å in order to avoid discontinuities. Figure 2 shows an overview of the comparison of the STIS fluxes with the model after division of both SEDs by this same smooth, line- and edge-free continuum. The mean continuum level of the data between the absorption lines agrees with the model within ∼1% almost everywhere. The most significant deviation of the data from the model is in the broad 1400-1550Å region, where each of the three spectra comprising the G140L average has 350,000 photoelectron events in this 150Å band. The background level is <0.1% of the net signal, so neither counting statistics nor background subtraction error could cause the observed ∼1.5% average disparity. Of the five low dispersion modes, G140L shows the worst photometric repeatability of individual spectra in broad bands, σ ∼0.6%.
The three individual spectra comprising the G140L average do show occasional 2-3σ broadband dips within their 550Å coverage region; but such a large excursion as 1.5% in their average is extremely unlikely at any particular wavelength. However, the probability is much greater that such a large excursion could occur in some 150Å band. Individual G140L spectra of the monitoring standard GRW+70°5824 often show a broad region differing by 1-2% from the average. The cause of such excursions could be flat field errors, temporal instabilities in the flat field, or other detector effects that might make the flat field inapplicable to a narrow spectral trace.

Uncertainties in T eff and log g

The uncertainty in the model T eff is determined by the uncertainty in the slope of the UV flux distribution. For a constant log g model that is cooler or hotter by 50 K and normalized to the measured 5300-5600Å flux, the differences from the data increase from 1% near 2000Å to 2% at the shorter wavelengths. Such a large change in the modeled continuum level in Figure 2 (red line) is inconsistent with the STIS flux (black line). This 50 K uncertainty of the model T eff is an internal uncertainty relative to the temperatures of the primary WD standards GD71, GD153, and G191B2B. If a re-analysis of the Balmer lines in these primary DA standards produces a systematic shift in the temperature scale, this shift would be reflected in a revised T eff for LDS749B that is independent of the 50 K internal uncertainty. A 50 K temperature difference causes a <0.5% flux change in the IR longward of 1 µm. To estimate the uncertainty in log g, models are computed at the 13575 K baseline temperature but with an increment in log g. Positive and negative increments produce nearly mirror-image changes in the flux distribution. For a decrease of 0.7 in log g, the flux decreases by a nearly uniform 1.5% below 3600Å after normalizing to unity in the 5300-5600Å range.
Increasing the T eff by the full 50 K uncertainty to 13625 K can compensate for this flux decrease below ∼2000Å. However, the +50 K increase compensates little in the 2500-3600Å range, leaving a disparity of ∼1%. Because this 2500-3600Å range includes some of the best S/N STIS data, a 1% disparity establishes the uncertainty of 0.7 dex in log g as barely compatible with the STIS flux distribution. The IR uncertainty corresponding to this limiting case of T eff = 13625 K and log g = 7.35 is ∼1% longward of 1 µm, because the fractional percent changes in the IR from the higher temperature and from the lower log g are both in the same direction.

Interstellar Reddening

Another source of error in the model T eff is interstellar reddening. The standard galactic reddening curve has a strong broad feature around 2200Å; and a tiny limit to the extinction E(B−V) is set by the precise agreement of STIS with the model in this region of Figure 2. For the upper temperature limit of 13575 + 50 = 13625 K, an E(B−V) = 0.002 brings the reddened model into satisfactory agreement with STIS. However, for a temperature increment of 100 K and E(B−V) = 0.004, the model is ∼1% high at 1300Å and ∼1% low at 2200Å. Thus, for standard galactic reddening, E(B−V) must be less than 0.004, and T eff is less than 13675 K. In this case of T eff and E(B−V) at these allowed limits, the IR flux beyond 1 µm is still the same as for the unreddened baseline T eff = 13575 K within 0.5%. However, Bohlin (2007) presented arguments for reddening with a weak 2200Å bump for other lines of sight with tiny amounts of extinction. Reddening curves measured in the SMC (e.g., Witt & Gordon 2000) are missing the 2200Å feature and can cause larger uncertainty in T eff. Additional evidence for extinction curves more like those in the Magellanic clouds is presented by Clayton et al. (2000) for the local warm intercloud medium, where the reddening is low.
Changes in the shape of the flux distribution after reddening with the SMC curve of Witt and Gordon are similar to the change in shape with T eff. For example, reddening a model with T eff = 14130 K by SMC extinction of 0.015 is required to make an equally unacceptable fit as for 13675 K and galactic extinction of E(B−V) = 0.004. In this extreme limiting case of SMC extinction, the IR flux beyond 1 µm is still the same within ∼1% as for T eff = 13575 K and E(B−V) = 0. Despite small uncertainties in the interstellar reddening and the consequent uncertainty in T eff, our modeling technique still predicts the continuum IR fluxes to 1% from 1 µm to 30 µm. Discounting the most pathological case of SMC reddening, the worst far-IR uncertainty is from the combined 50 K temperature and 0.7 log g uncertainties, because the changes in the slope from the visual band normalization region into the IR due to higher temperature and lower log g are both in the same direction. In the absence of modeling errors or other physical complications like IR excesses from dust rings, the measured fluxes of LDS749B relative to the three primary WDs should be the same as predicted by the relative fluxes of the respective models to a precision of 1% in the IR.

Hydrogen

An upper limit on the equivalent width for Hα of ∼0.1Å constrains the fraction by number of hydrogen in the atmosphere of LDS749B to <1×10^-6 of helium. However, a stricter limit of <1×10^-7 is provided by the weak Lyα line. Because interstellar absorption at Lyα could be significant, zero hydrogen is consistent with the observations and is adopted for the final best model for LDS749B. After normalization in the V band, the continuum of a model with 1×10^-7 hydrogen composition and the baseline T eff = 13575 K and log g = 8.05 agrees with the zero hydrogen baseline continuum to ∼0.5% from Lyα to 30 µm. (Greenstein & Trimble 1967).
The model is smoothed with a triangular profile of FWHM corresponding to a resolution R = 1500 for the MAMA spectra shortward of 3065Å and to R = 1000 for the CCD spectra longward of 3065Å. In general, the model underestimates the line strengths, even with the quenching of the neutral-neutral interactions at f = 0.005. There is a suggestion of some systematic asymmetry, with stronger absorption on the short wavelength side of the line profile. This asymmetry could be in the STIS line spread function (LSF); or perhaps a more exact treatment of the Stark line broadening theory would reproduce the observed asymmetries.

Carbon

With a C/He ratio of <1×10^-6 solar, i.e., a C/He number ratio of 3.715×10^-9, the modeled C i and C ii lines reproduce the observations within the observational noise, as shown in Figure 4. In particular, the agreement of the modeled C i(1329)/C ii(1335) line ratio with the observed ratio means that the carbon ionization ratio corresponds to the photospheric temperature of the star. With this small amount of carbon, the spectral classification of LDS749B should more properly be DBQ (Wesemael et al. 1993).

Oxygen

The oxygen triplet at 1302.17, 1304.87, and 1306.04Å constrains the fraction of oxygen in the LDS749B atmosphere. This triplet absorption feature extends over 4Å, or about seven STIS pixels, but no obvious absorption feature appears above the noise level. After binning the STIS data by seven pixels, the rms noise in the 1300Å region is 0.8%. The corresponding 3σ upper limit to the equivalent width is 0.10Å, which implies an upper limit to the atmospheric oxygen fraction by number of 7×10^-10 of helium.

Conclusion

In the absence of any interstellar reddening, a helium model with T eff = 13575 ± 50 K, log g = 8.05 ± 0.7, and a trace of carbon at <1×10^-6 of solar fits the measured STIS flux distribution for LDS749B.
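The 3σ equivalent-width limit quoted above for the O i triplet follows from simple arithmetic: a feature about 4Å wide against a continuum whose binned rms noise is 0.8%. A back-of-envelope check; the 4Å width is taken from the text, and the paper's exact calculation may differ in detail:

```python
# Back-of-envelope check of the 3-sigma equivalent-width upper limit for
# the O I 1302-1306 Å triplet; numbers are taken from the text above.
feature_width = 4.0   # Å, extent of the triplet (about seven STIS pixels)
rms_noise = 0.008     # fractional rms of the binned continuum (0.8%)

# A 3-sigma fractional depth sustained over the full feature width
# corresponds to an equivalent width of 3 * rms * width
ew_limit = 3.0 * rms_noise * feature_width
print(round(ew_limit, 2))   # ≈ 0.10 Å, consistent with the quoted limit
```

Dividing this equivalent-width limit by the predicted line strength per unit abundance from the model grid then yields the 7×10^-10 upper limit on the O/He number fraction.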
The noise-free, absolute flux distribution from the model after normalization to the observed broadband visual flux is preferred for most purposes. This normalized model SED is a high fidelity far-UV to far-IR calibration source; and the flux distribution is available via Table 3 in the electronic version of the Journal. Both the observed flux distribution and the modeled fluxes are also available from the CALSPEC database.

[Figure caption: The model flux is smoothed to the approximate STIS resolution of R = 1500 for the MAMA spectra (shortward of 3065Å) and to R = 1000 for the CCD spectra (longward of 3065Å). A thin dashed line marks the unity level.]
Synthesis and characterization of hydroxyl-terminated polybutadiene modified low temperature adaptive self-matting waterborne polyurethane

Hydroxyl-terminated polybutadiene (HTPB) is a flexible telechelic compound whose main chain contains slightly cross-linked, activated carbon–carbon double bonds and whose chain ends carry hydroxyl groups. In this paper, HTPB was therefore used as a hydroxyl-terminated diol prepolymer, and the sulfonate AAS and the carboxylic acid DMPA were used as hydrophilic chain extenders to prepare a low-temperature adaptive self-matting waterborne polyurethane (WPU). Because the non-polar butene chain in the HTPB prepolymer cannot form hydrogen bonds with the urethane groups, and because its solubility parameter differs greatly from that of the hard segment formed by the urethane groups, the Tg gap between the soft and hard segments of the WPU increases by nearly 10 °C, with more obvious microphase separation. At the same time, by adjusting the HTPB content, WPU emulsions with different particle sizes can be obtained, thereby yielding WPU emulsions with good extinction properties and mechanical properties. The results show that the HTPB-based WPU, with a certain degree of microphase separation and roughness obtained by introducing a large number of non-polar carbon chains, has good extinction ability, and its 60° glossiness can be as low as 0.4 GU. Meanwhile, the introduction of HTPB improves the mechanical properties and low temperature flexibility of the WPU. The Tg,s (the glass transition temperature of the soft segment) of the WPU modified by the HTPB block decreased by 5.82 °C, and the ΔTg increased by 21.04 °C, indicating that the degree of microphase separation increased. At −50 °C, the elongation at break and tensile strength of the WPU modified by HTPB can still reach 785.2% and 76.7 MPa, which are 1.82 times and 2.91 times those of a WPU with only PTMG as the soft segment, respectively.
The self-matting WPU coating prepared in this paper can meet the requirements of severe cold weather and has potential application prospects in the field of finishing.

Introduction

With the change in people's aesthetic concepts, low gloss coatings are becoming more and more popular in surface decoration and architectural design. 1 Compared with high gloss coatings, low gloss coatings are suitable for hiding minor scratches and defects and reducing dust and fingerprint aggregation. [2][3][4][5][6] They can also reduce visual distraction and help people focus, for example by reducing surface glare in schools and hospitals. Therefore, matt coatings are widely used on the surfaces of wood and furniture, leather accessories, automotive parts, aircraft shells, electronic equipment shells, school and hospital walls, and so on. 2,[7][8][9][10][11] With the global emphasis on environmental protection, volatile organic compound (VOC) emission standards are becoming more and more stringent. [12][13][14][15][16] Waterborne polyurethane (WPU) is a polyurethane resin with water as the dispersion medium. 11 WPU not only has the advantages of traditional solvent-based polyurethane, such as excellent wear resistance and mechanical properties, but is also environmentally friendly, safe, and reliable. 10,12,17 Therefore, it is widely used in coatings, adhesives, leather finishing agents, and inks. 9,18,19 The principle of obtaining a low gloss coating is that a rough surface has microscopic protrusions and depressions. When light strikes it, the light is reflected in different directions at different reflection angles, forming a diffuse reflection and giving the impression of low gloss. For WPU resins, the variation in refractive index is negligibly small and has a limited effect on gloss. Therefore, the method of reducing the gloss of different types of coatings depends largely on the control of surface morphology.
The fabricated rough morphology can scatter the incident light in multiple directions to achieve the extinction effect. 5 Traditional matting agents (silica, diatomite, etc.) are incompatible with WPU emulsions and cannot be completely dispersed, forming a rough surface during film formation. The disadvantage of an external matting agent is that the matting agent particles are not fixed firmly and easily detach from the surface. The rough structure disappears over time, and the extinction effect decreases. In addition, the added matting agent reduces the stability of the emulsion and the bending resistance of the coating. 3,[20][21][22][23] Self-matting WPU does not contain any additional matting agent, but instead generates emulsion particles with a similar effect to a matting agent during the synthesis process. When the solvent evaporates during film formation, these emulsion particles accumulate with each other to form a microscopically rough surface, achieving a matting effect. Therefore, self-matting WPU has better emulsion stability. At the same time, it eliminates the defects caused by the use of matting agents, and also improves the mechanical properties and water resistance of the WPU matte coating. It has the advantages of environmental protection and low cost. 3,[20][21][22][23] The particle size of WPU can be easily adjusted through formula adjustment. 7 In order to obtain a stable WPU emulsion, hydrophilic groups (carboxylic acid, sulfonic acid, amine) are usually added to the main chain or side chain of the polymer. 8 Li et al. 21 prepared WPU using an amino sulfonate chain extender (A95) and hydrazine hydrate as both chain extender and emulsifier. The WPU emulsion particles obtained formed regular microspheres. The 60° gloss of the coating film was as low as 1.5 GU. Yong et al.
1 synthesized a new solvent-free WPU dispersion using AAS and DMPA as hydrophilic chain extenders. AAS can provide better hydrolysis stability and thermal stability of the coating than DMPA. It avoids the excessive use of DMPA in the process of improving the glossiness and water resistance of the WPU coating. At the same time, AAS is instrumental in the formation of many regular spherical particles in the WPU emulsion. Moreover, increasing the content of the sulfonic acid hydrophilic chain extender or the initial molecular weight of the soft segment is beneficial to the improvement of thermal stability. Cao et al. 20 introduced trimethylolpropane (TMP) and an AAS salt into the polyurethane prepolymer as post chain extenders. TMP has the function of improving the microphase separation and reducing the loss factor. By increasing the microphase separation between the hard segment and the soft segment of polyurethane, it can form relief structures that increase the roughness of the coating. Yong et al. 2 precisely adjusted the surface roughness of the film by changing the weight ratio of the hard/soft acrylic monomers, and the gloss of the resulting WPU acrylate (WPUA) hybrid emulsion coating can be as low as 3 GU. Some researchers are also working on more complex technologies. Bauer et al. 24 used a dual ultraviolet lamp device (consisting of a 172 nm excimer lamp and a mercury arc lamp) to irradiate an acrylate formulation to form a micro-textured morphology, resulting in a low gloss or matte effect of the film. However, research on self-matting WPU mainly focuses on improving the extinction ability at present, and research on the mechanical properties and water resistance of WPU, especially the mechanical properties at low temperature, is rare. HTPB is one of the liquid rubbers, which has a long non-polar carbon chain.
Because of its low glass transition temperature, high elasticity, high water resistance and hydrophobicity, HTPB is widely used in adhesives, foams, rocket propellants and other fields. 25,26 The low surface energy and low-temperature flexibility of HTPB-based polyurethane elastomers have been extensively studied; 27,28 however, there are few studies on HTPB-based WPU resins. Ding et al. 3 proposed a method to exfoliate organic montmorillonite into single-layer nanosheets in HTPB, which were then reacted with isocyanate to synthesize a multifunctional matte WPU coating. The nanosheets promoted the crosslinking and cyclization of HTPB at high temperature, making the surface morphology of the coating rougher. In the matte range, the 60° gloss is reduced to 4.6 GU. In this paper, HTPB was introduced into the main chain together with PTMG as the soft segment. At the same time, DMPA and AAS containing hydrophilic groups were introduced as chain extenders and emulsifiers, and then EDA was used to extend the chain. A low gloss WPU with low temperature adaptability was thus successfully prepared. By adjusting the HTPB content to tune the particle size of the emulsion, the extinction performance is greatly improved. At the same time, after HTPB is introduced into the WPU segments, the longer non-polar carbon chain in the HTPB structure causes obvious microphase separation between the soft and hard segment regions, 14 thereby roughening the WPU film surface to achieve the extinction effect. The extinction principle is shown in Fig. 1. In addition, the strongly hydrophobic olefin bonds in HTPB can improve the water resistance of WPU. Due to the excellent elasticity and flexibility of HTPB, the mechanical properties and low temperature flexibility of WPU can be well improved, so that the self-matting WPU coating can meet the requirements of cold weather.

Preparation of WPU
A certain amount of PTMG and HTPB was placed in a three-neck flask and dehydrated in vacuum at 120 °C for 2 h.
Aer cooling to 80°C, NPG and IPDI were added by stirring for 2 h. The mixture was cooled to 70°C and DMPA diluted with NMP was added to it for 1 h. Two drops of neodecanoic acid bismuth(3 + ) salt were added to the reaction system and the reaction was carried out for 3.5 h at 70°C. The viscosity was adjusted with an appropriate amount of acetone. Aer the temperature of the reactor was maintained under 40°C, AAS and EDA diluted with water were slowly added dropwise to the ask under vigorous stirring. Aer cooling to 29°C, TEA was added to neutralize the carboxyl group for 5 min. Transfer the above prepolymer to a plastic beaker and set the speed to 1500 rpm to emulsify for 10 minutes. Aer standing overnight, acetone was removed by vacuum distillation. The emulsion was aged at 60°C for 2-3 days to obtain the low-glossed WPU emulsion. The above reaction procedure is presented in Scheme 1. The formulations used are shown in Table 1 (WPU-B0% means the amount of HTPB added is 0%, and the meaning of WPU-B10-50% is the same). FTIR-ATR. The infrared absorption of waterborne polyurethane was tested by the 8700 Fourier transform infrared spectrometer (ATR-FTIR, Nicolet, USA). The sample was scanned 48 times at a resolution of 4 cm −1 over the frequency range of 4000-400 cm −1 . 2.3.2 Particle size and zeta potential test. Aer emulsication, the particle size of WPU emulsion was carried out via a Zetasizer Nano ZS90 laser particle size tester from Malvern (Malvern, UK). The emulsion was diluted with deionized water to a mass fraction of 0.01% and measured at room temperature. 2.3.3 Gloss test of self-matting WPU leather coatings. 10 g WPU emulsion and 0.1 g wetting agent (BYK381) were mixed for 10 min, coated on the PVC leather using a 40 mm wire rod, and subsequently placed in an oven at 120°C. Aer drying for 2 min, the leather samples were taken out for gloss test. 
Scheme 1 Synthesis route of self-matting waterborne polyurethane.

According to GB/T 9754-2007 "Paints and varnishes - determination of 20°, 60° and 85° specular gloss of paint films without metallic pigments", the 60° gloss was determined using a Ref 101N photometer from Sheen (Essex, UK). The average value of three tests was used as the final result.
2.3.4 Stability test of WPU emulsion. The stability of the WPU emulsion was determined according to GB/T 6753.3-1986 "test method for storage stability of coatings". Emulsions of the same mass were centrifuged in a high-speed centrifuge (HC-3018) at 3000 rpm for 15 min and the state of the emulsion was observed. If there is no precipitation, the emulsion can be considered to have a storage stability period of 6 months or more.
2.3.5 Water contact angle of self-matting WPU leather coatings. The static water contact angle of the leather coating samples was measured at 25 °C with an OCA20 contact angle tester (Dataphysics, Germany).
2.3.6 Thermogravimetry (TG). In a N2 atmosphere, the thermal decomposition of the film was tested with a TGA/DSC1 thermogravimetric analyzer (Mettler Toledo, Switzerland) from 30 to 600 °C at a heating rate of 10 °C min−1. Weight loss was recorded as a function of temperature.
2.3.7 Differential scanning calorimetry (DSC). Under the protection of N2, the glass transition temperature (Tg) of the film was obtained using a DSC1 differential scanning calorimeter from Mettler Toledo. The test range was −150 to 150 °C and the heating rate was 10 K min−1.
2.3.8 SEM. An SU8020 scanning electron microscope (TESCAN MIRA LMS) was used to observe the surface morphology of the polyurethane coating film on the leather. The operating voltage was 3 kV, and the sample was sputter-coated with gold before the test to improve conductivity.
2.3.9 Tensile properties.
According to GB/T 528-1998 "Determination of tensile stress-strain properties of vulcanized rubber or thermoplastic rubber", the waterborne polyurethane film was cut into dumbbell-shaped standard splines of 2 mm × 12 mm. The tensile properties of the standard splines were tested with an AGS-J electronic universal testing machine. The test temperatures were 25 °C and −50 °C, and the tensile rate was 100 mm min−1.

Structural characterization of HTPB-based WPU
The ATR-FTIR spectra of the raw materials and the WPU film are shown in Fig. 2. Fig. 2 shows that the three raw materials have their own exclusive infrared characteristic absorption peaks. The peaks at 3470 cm−1 and 1112 cm−1 were caused by the stretching vibrations of -OH and C-O-C in PTMG, respectively. The broad absorption peaks at 3200-3600 cm−1 were attributed to the -OH of HTPB. The weak absorption peak at 726 cm−1 corresponded to the carbon-carbon double bond in the cis-1,4 conformation of HTPB, while strong bending vibration absorption peaks of the carbon-carbon double bonds in the trans-1,4 conformation and 1,2-vinyl conformation of HTPB appeared at 967 cm−1 and 909 cm−1, respectively. The stretching vibration of the -NCO of IPDI was located at 2264 cm−1. The peak at 1716 cm−1 was attributed to the carbonyl absorption of the carbamate (urethane) bond. The peaks at 3332 cm−1 and 1548 cm−1 were caused by the stretching vibration and bending vibration of N-H, respectively. There was no characteristic -NCO absorption peak of isocyanate at 2264 cm−1, indicating that the isocyanate had been fully involved in the reaction. At the same time, the absorption peaks of the carbon-carbon double bonds of HTPB can be seen at 726 cm−1, 967 cm−1 and 909 cm−1, which shows that HTPB has been successfully inserted into the WPU main chain. 29,30 In addition, the absorption peaks at 2854-2941 cm−1 were attributed to the C-H stretching vibrations of -CH3 and -CH2. The IR spectra demonstrate that the waterborne polyurethane was synthesized.
3.2 Stability and particle size analysis of self-matting WPU dispersions
3.2.1 Effect of HTPB addition on the particle size of WPU. The changes in the particle size of WPU with different HTPB contents are shown in Fig. 3. The WPU emulsion without HTPB modification had a particle size of 3158 nm. After HTPB was added, the particle size of the WPU emulsion decreased slightly and then increased gradually. When the amount of HTPB added was small, the hydrophobic segments in the emulsion agglomerated and were wrapped in the interior of the particle by hydrophilic groups. At this stage, the hydrophobic properties were not dominant. Therefore, the particle size did not increase when the HTPB content was below 20%. There is an ether oxygen bond between the monomer units in the molecular structure of PTMG, whereas the molecular structure of HTPB contains double bonds but no ether oxygen bonds. The ether oxygen bonds in the product synthesized from PTMG can form intermolecular hydrogen bonds. When a small part of the PTMG was replaced by HTPB, the number of ether oxygen bonds was reduced. Therefore, the intermolecular forces were weakened to a certain extent, and the physical crosslinking points within and between the macromolecular chains were reduced. At the low emulsification speed used, strong intermolecular forces are normally difficult to break; since a small amount of HTPB weakens these forces, the polymer is easily sheared and dispersed when emulsified in water, and the particle size of the emulsion is slightly reduced. When the addition of HTPB continued to increase, the average particle size of the WPU emulsions was significantly affected by the hydrophilicity of WPU. The block ratio of the sample with an HTPB content of 50% is 2.5 : 1, and the average particle size of the emulsion reached a maximum of 6866 nm. The HTPB structure is composed of an all-carbon chain, which is not hydrophilic.
Therefore, as the HTPB content increased, the hydrophilicity of the whole system decreased, even though the content of the hydrophilic chain extenders (DMPA, AAS) in the system remained constant. Consequently, the water dispersibility of the emulsion became worse, which led to an increase in the average particle size of the emulsion. In addition, the carbon-carbon double bonds in HTPB molecules exist in three forms, namely the cis-1,4 structure, the trans-1,4 structure and the 1,2-vinyl structure. These irregular structures prevent the chains from packing closely, so they form a relatively large spatial structure. 31 These two factors led to the increase of the average particle size of the emulsion when the HTPB content increased.
3.2.2 Appearance and stability of WPU dispersions. The appearance, centrifugal stability and zeta potential of WPU emulsions with different HTPB contents are shown in Table 2. The emulsions looked opaque and milky white. When the HTPB content was 0-30%, centrifugation produced only a few re-dispersible precipitates, and the absolute value of the zeta potential was greater than 30 mV, so the storage stability of the emulsion was good. When the HTPB content reached 40% or more, a large amount of precipitation occurred after centrifugation, indicating that the emulsion could not be stored stably. The reason for the change in emulsion appearance and stability was that the hydrophobic molecular structure of HTPB made the hydrophilicity of WPU worse.

Room and low-temperature mechanical tensile properties of self-matting WPU films
The stress-strain curves of WPU with different HTPB contents at room temperature are shown in Fig. 4. The tensile strength and elongation at break of WPU films with different HTPB contents are shown in Table 3. It can be seen that with the increase of HTPB content, the tensile strength and elongation at break of the WPU films first increased and then decreased.
The tensile strength of the WPU-B0% spline without HTPB was 20.6 MPa and the elongation at break was 1473.6%. When the HTPB content was 20%, the elongation at break of WPU-B20% was up to 1768.8%, and the tensile strength was 22.3 MPa. WPU-B40% had the highest tensile strength of 26.6 MPa and an elongation at break of 1354.9%. The stress-strain curves of the WPU films with different HTPB contents at −50 °C are shown in Fig. 5. It can be seen that at −50 °C the tensile strength of the WPU-B0% spline without HTPB was 26.4 MPa and the elongation at break was 430.4%. After the introduction of HTPB, the tensile strength and elongation at break of the WPU splines at low temperature were greatly improved. The elongation at break of WPU-B20% at −50 °C was 785.2% and the tensile strength was 76.7 MPa. The increase of HTPB content increased the overall soft segment content and improved the flexibility of the molecular chains. The significant improvement of the mechanical properties after adding HTPB was mainly attributed to the structure of HTPB. Due to the difference in polarity, the soft and hard segments in WPU aggregate separately to produce microphase separation. The main chain of HTPB is composed of long non-polar carbon chains. The introduction of HTPB weakened the hydrogen bonding between the soft and hard segments, which intensified the microphase separation between them. The weakening of the interaction between soft and hard segments allowed the hard segments to distribute freely in the soft segments, where they act as physical crosslinking points and can improve the tensile strength of WPU. The increase of HTPB content also increased the overall soft segment content, thereby improving the flexibility of the molecular chains and increasing the elongation under stress. Therefore, an appropriate increase of microphase separation was beneficial to improving the tensile strength and elongation at break simultaneously.
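The low-temperature gains of WPU-B20% over the PTMG-only control can be checked directly from the tensile values reported above; a minimal arithmetic sketch:

```python
# Improvement factors at -50 °C for HTPB-modified WPU (WPU-B20%) relative to
# the PTMG-only control (WPU-B0%), using the values reported in this section.

elong_b0, elong_b20 = 430.4, 785.2      # elongation at break at -50 °C (%)
strength_b0, strength_b20 = 26.4, 76.7  # tensile strength at -50 °C (MPa)

elong_factor = elong_b20 / elong_b0
strength_factor = strength_b20 / strength_b0

print(f"elongation improvement: {elong_factor:.2f}x")  # 1.82x
print(f"strength improvement: {strength_factor:.2f}x")  # 2.91x
```

The two ratios reproduce the 1.82× and 2.91× improvement factors quoted in the abstract.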
However, when the HTPB content was 50%, the further increase of microphase separation led to a sharp decline in the interaction between the soft and hard segments of the WPU-B50% film, which showed a discontinuous state. This resulted in a decrease in the tensile strength and elongation at break of the WPU-B50% film under stress.

Thermal analysis of WPU
3.4.1 Microphase separation. The DSC curves of WPU films with different HTPB contents are shown in Fig. 6. The Tg data of WPU films with different HTPB contents are listed in Table 4. Fig. 6 shows that all WPU samples had two glass transition temperatures, corresponding to the glass transition temperatures of the soft and hard segments of WPU, respectively. This showed that there was a certain degree of microphase separation in the whole system.

Fig. 5 The stress-strain curves of WPU films with different HTPB contents at −50 °C.

It can be seen from Table 4 that with the increase of HTPB content, the glass transition temperature of the soft segment (Tg,s) decreased and the glass transition temperature of the hard segment (Tg,h) increased gradually, indicating that the degree of microphase separation between soft and hard segments increased. The main chain of HTPB is composed of non-polar carbon chains and cannot form hydrogen bonds with the hard segments. Therefore, with the increase of HTPB content, the restriction effect of the hard segments on the soft segments was weakened, and the soft segments could move more easily when the temperature changed. Tg,s moved in the low temperature direction, which improved the low temperature flexibility of WPU. 32 At the same time, the interaction between the hard segments was enhanced by hydrogen bonding, so the cohesive energy between the hard segments increased. This is conducive to the formation of short-range ordered structures, causing Tg,h to move to higher temperature.
As Tg,s decreased and Tg,h increased, the difference ΔTg increased, indicating that the degree of microphase separation between hard and soft segments gradually increased. This further explains that the changes in gloss, surface morphology, transmittance and mechanical properties of the films were caused by the increase of microphase separation.
3.4.2 Thermal stability. It can be seen from Fig. 7 that the thermal decomposition of WPU films with different HTPB contents mainly occurred in the range of 200-500 °C, and the residual mass of the samples remained basically constant after 500 °C. The thermal decomposition of the WPU-B0% film without HTPB modification was divided into three stages, corresponding to the decomposition of the DMPA-TEA salt (200-270 °C), the decomposition of the hard segment carbamate bonds (270-350 °C) and the decomposition of the soft segment polyether (350-500 °C). For WPU modified by HTPB, in addition to the above three thermal decomposition stages, there was a decomposition stage between 450-500 °C, which was the thermal decomposition of HTPB. With the increase of HTPB content, the decomposition peak of HTPB between 450-500 °C gradually became more obvious. The decomposition temperatures and carbon residues of the WPU films at 5% and 50% weight loss are listed in Table 5. There are many C=C bonds in the main chain of the HTPB molecular structure, and the bond energy of the C=C bond (615 kJ mol−1) is much higher than that of the C-O bond (351 kJ mol−1) and the C-C bond (348 kJ mol−1) in the PTMG molecular chain. The greater the bond energy, the greater the energy released when the chemical bond is formed, meaning that the chemical bond is more stable. Therefore, HTPB itself has good thermal stability, and the increase of HTPB content can improve the overall thermal stability.
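Since ΔTg = Tg,h − Tg,s, the two DSC shifts quoted in the abstract (Tg,s down by 5.82 °C, ΔTg up by 21.04 °C) fix the implied hard-segment shift; a quick consistency sketch using only those reported numbers:

```python
# From the abstract: for the HTPB-modified WPU, Tg,s decreased by 5.82 °C and
# the soft/hard gap dTg = Tg,h - Tg,s increased by 21.04 °C.
# The implied hard-segment shift follows from: d(dTg) = dTg_h - dTg_s.

delta_Tg_s = -5.82  # shift of the soft-segment Tg (°C)
delta_gap = 21.04   # increase of Tg,h - Tg,s (°C)

delta_Tg_h = delta_gap + delta_Tg_s  # implied shift of the hard-segment Tg
print(f"implied Tg,h shift: +{delta_Tg_h:.2f} °C")  # +15.22 °C
```

Both shifts move ΔTg in the same direction, consistent with the increased microphase separation described above.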
Moreover, due to the small C-H bond energy of the HTPB allyl groups, allylic hydrogen is easily attacked by oxygen to form hydroperoxides that decompose into free radicals. Further crosslinking reactions then occur between the free radicals, resulting in a certain degree of crosslinking in the WPU system. This crosslinking makes the overall structure more compact, which limits and hinders the movement of the WPU molecular chain segments to a certain extent and improves the heat resistance.

Gloss analysis of self-matting WPU leather coatings
The WPU leather coatings with different HTPB contents and their 60° gloss as a function of HTPB content are shown in Fig. 8. The extinction effect was excellent when the addition amount of HTPB was 0-20%. The gloss of the WPU film was less than 1 GU, with a minimum of 0.4 GU. Since the addition of non-polar HTPB increases the microphase separation between the soft and hard segments in WPU, the soft and hard segments aggregated during the film formation process, changing the surface structure of the WPU film and resulting in changes in gloss. When the amount of HTPB was further increased, the microphase separation became too large and destroyed the original surface structure, and thus the glossiness adversely increased to 3.3 GU. The SEM images of WPU films with different HTPB additions are shown in Fig. 9. With the increase of HTPB content, the surface roughness of the WPU film increased gradually and then tended to become smooth. The change in surface morphology of the WPU film was essentially determined by the degree of microphase separation in the system. The introduction of non-polar HTPB promoted the microphase separation between the soft and hard segments in WPU, resulting in the aggregation of the soft and hard segments in the WPU emulsion during the film forming process. After the emulsion was completely dried, two incompatible phases appeared on the surface of the film, forming rough structures of different degrees.
When the degree of microphase separation was too small, it was not conducive to the formation of microspheres, as in WPU-B0% and WPU-B10%. Proper microphase separation produced more regular microsphere bulges, as in WPU-B20% and WPU-B30%. When the amount of HTPB continued to increase, the degree of microphase separation became too large, so that the hard segments were completely wrapped in the soft segments and bulges could not form. Moreover, due to the large particle size, WPU could not completely coat the leather surface, resulting in increased gloss.

Water contact angle of self-matting WPU leather coatings
The contact angle is a manifestation of the wetting phenomenon, which can directly reflect the hydrophilicity of the film surface and further support the water resistance of the film. The change in contact angle of WPU films with different HTPB contents is shown in Fig. 10. When HTPB was not introduced into the soft segment, the contact angle of WPU-B0% was 75.85°. With the increase of HTPB content, the contact angle first increased and then decreased. This phenomenon indicates that the introduction of HTPB decreased the surface tension of the WPU film and enhanced the surface hydrophobicity of the WPU film. Due to the large number of strongly hydrophobic structures in the HTPB molecular chain, more hydrophobic structures in the film more effectively hinder the entry of water molecules. However, as shown in Fig. 9(f), when the addition amount of HTPB increased to 50%, the waterborne polyurethane emulsion could not completely coat the leather surface due to the large particle size. Since the uncoated part of the leather had not been hydrophobically modified, the contact angle value was slightly reduced.

Conclusion
In this paper, HTPB and PTMG were introduced as soft segments, and a combination of sulfonate and carboxyl hydrophilic chain extenders was used to prepare a self-matting WPU with excellent low temperature performance.
Aer HTPB was introduced into the WPU segment, the longer non-polar carbon chain in the HTPB structure would cause obvious microphase separation in the so and hard segment regions, resulting in a rough surface of the WPU lm to achieve the effect of extinction. When the addition amount of HTPB was 20%, the particle size of the emulsion was about 2542 nm, and the gloss was as low as 0.4 GU. It had excellent extinction effect without additional matting agent. HTPB has excellent elasticity and exibility, which can improve the mechanical properties and low temperature exibility of WPU. The elongation at break at room temperature was up to 1768.8%, and the tensile strength was 22.3 MPa. The elongation at break could still be maintained at 785.2% at −50°C, and the tensile strength was 76.7 MPa. So that the WPU coating meets the requirements of cold weather. At the same time, the self-matting WPU coating has excellent water resistance and good thermal stability. The prepared selfmatting WPU has potential application prospects in the eld of low gloss leather nishing. Conflicts of interest There are no conicts to declare.
Single-Molecule Analysis Reveals the Kinetics and Physiological Relevance of MutL-ssDNA Binding

DNA binding by MutL homologs (MLH/PMS) during mismatch repair (MMR) has been investigated in biochemical and genetic studies. Bulk studies with MutL and its yeast homolog Mlh1-Pms1 have suggested an integral role for a single-stranded DNA (ssDNA) binding activity during MMR. We have developed single-molecule Förster resonance energy transfer (smFRET) and single-molecule DNA flow-extension assays to examine the MutL interaction with ssDNA in real time. The smFRET assay allowed us to observe MutL-ssDNA association and dissociation. We determined that MutL-ssDNA binding required ATP and was greatest at ionic strengths below 25 mM (KD = 29 nM), while it dramatically decreased above 100 mM (KD > 2 µM). Single-molecule DNA flow-extension analysis suggests that multiple MutL proteins may bind ssDNA at low ionic strength, but this activity does not enhance stability at elevated ionic strengths. These studies are consistent with the conclusion that a stable MutL-ssDNA interaction is unlikely to occur at physiological salt, eliminating a number of MMR models. However, the activity may reflect some related dynamic DNA transaction process during MMR.

Introduction
MutL homologs (MLH/PMS) are key components of mismatch repair (MMR). Mismatch recognition by MutS homologs (MSH) results in long-lived ATP-bound sliding clamps that recruit MLH/PMS, which in turn stimulate the DNA transaction activities of several downstream effectors. In Escherichia coli (E. coli), these downstream effectors include MutH and UvrD. For example, E. coli MutL stimulates the MutH endonuclease activity on a hemimethylated d(GATC) site, which directs excision repair to the newly replicated strand, and enhances the UvrD helicase activity required for the strand excision process [1,2].
MutL has been suggested to bind ssDNA in the presence of ATP, an activity that may play an important role in its interaction with downstream effectors such as UvrD [3,4,5]. Biochemical and structural studies suggest that the C-terminal region of E. coli MutL forms a stable homodimer (LC20) [6,7], while the N-terminal domain (LN40) contains a GHKL ATPase site [8] that dimerizes upon binding to ATP [9,10]. Together, the resulting structure of ATP-bound MutL appears to form a cavity, via a flexible linker, that contains a positively charged cleft [9]. This ATP-dependent MLH/PMS conformational change appears to be modulated by the ATP binding and hydrolysis cycle even in the absence of DNA [11]. It is the positively charged cavity formed by ATP-bound MutL that appears to contain the ssDNA-binding domain [9,11]. However, the properties and roles of MutL-ssDNA binding in MMR are poorly understood. Studies that appear to support a role for MutL-ssDNA binding in MMR include: 1) MutL appears to bind unmethylated ssDNA better than methylated ssDNA or unmethylated/methylated dsDNA [3]; 2) ssDNA stimulates the MutL ATPase [9]; and 3) the MutL(R266E) mutant protein, which displays a weak ssDNA-binding affinity and lacks ssDNA-stimulated ATPase activity, genetically behaves like a mutL null mutant [5]. While these studies appear to correlate MutL-ssDNA binding with MMR, all of the ssDNA binding studies in vitro were performed at non-physiological ionic strengths [6,12]. Moreover, there are reports that suggest MutL does not bind DNA at physiological salt and that DNA binding is not required for MMR [13,14]. We have developed single-molecule assays that examine the lengthening of random-coiled ssDNA that results from MutL binding. These assays, single-molecule Förster resonance energy transfer (smFRET) [15] and a single-molecule flow-extension assay [16,17], have allowed us to study the kinetics of the MutL-ssDNA interaction in real time.
Our studies examined the interaction between MutL and ssDNA in the absence of other MMR proteins. Together, the single-molecule analysis detailed both the heterogeneity and the physiological relevance of the MutL-ssDNA interaction.

E. coli MutL binds and stretches ssDNA
The ssDNA binding activity of MutL was examined using a partial duplex DNA that consisted of 15 bp double-stranded DNA (dsDNA) with a 33-deoxythymidine nucleotide (dT33) 5′-overhang. An acceptor Cy5 at the ssDNA/dsDNA junction and a donor Cy3 on the end of the 5′-overhang were used as a FRET pair. A 5′-biotin anchored the DNA substrate to a quartz slide coated with PEG-biotin using a streptavidin linker (Fig. 1A). Injection of MutL (50 nM) in 25 mM NaCl resulted in an abrupt decrease of the acceptor signal at 3 s that was maintained for 13 s (Fig. 1B). This pattern and the anticorrelated signals between the donor and the acceptor were repetitive for 85 s (Fig. 1B). To identify the resulting FRET states, we applied hidden Markov modeling (HMM) analysis, which determines states with distinct FRET efficiencies (Data Analysis in Materials and Methods). The HMM analysis discerned two FRET states resulting from ssDNA binding by MutL, which are presented in a transition density plot (Fig. 1C) [18]. The transition density plot represents the transition distribution between two distinct FRET states, from 0.44 to 0.26 and from 0.26 to 0.44, along each axis (FRET before transition versus FRET after transition). It indicates that the random-coiled ssDNA tail that was not bound by MutL displayed a constant FRET efficiency of 0.44, whereas the lower FRET efficiency of 0.26 appears to be the result of ssDNA binding by MutL. The lower FRET value cycled with the FRET efficiency observed in the absence of MutL (0.41 ± 0.15; mean ± s.d.; Fig. 1D).
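The two-state analysis above can be illustrated with a deliberately simplified sketch. This is not the HMM used in the paper: it computes a proximity-ratio FRET efficiency, E = I_A / (I_A + I_D), and assigns each frame to the bound or unbound state with a simple threshold midway between the reported 0.44 (unbound) and 0.26 (bound) levels. The intensity trace and function names are illustrative assumptions.

```python
# Simplified per-frame FRET analysis sketch (NOT the paper's HMM):
# proximity-ratio efficiency E = I_A / (I_A + I_D), then a two-state
# threshold assignment between the unbound (0.44) and bound (0.26) levels.

def fret_efficiency(donor, acceptor):
    """Proximity-ratio FRET efficiency from donor/acceptor intensities."""
    return acceptor / (donor + acceptor)

def assign_states(efficiencies, threshold=(0.44 + 0.26) / 2):
    """Label each frame 'bound' (low FRET, stretched ssDNA) or 'unbound'."""
    return ["bound" if e < threshold else "unbound" for e in efficiencies]

# synthetic donor/acceptor trace: unbound (E ~ 0.44), bound (E ~ 0.26), unbound
donor = [560, 560, 740, 740, 560]
acceptor = [440, 440, 260, 260, 440]

E = [fret_efficiency(d, a) for d, a in zip(donor, acceptor)]
print(assign_states(E))  # ['unbound', 'unbound', 'bound', 'bound', 'unbound']
```

A real analysis would use background-corrected intensities and probabilistic state inference (the HMM of ref. [18]) rather than a fixed threshold, but the mapping from low FRET to MutL-stretched ssDNA is the same.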
These results indicate a time-dependent increase in the distance between the donor and the acceptor when MutL binds to ssDNA, providing a convenient assay for mutational and kinetic analysis (Fig. 1A). The association (ton) and dissociation (toff) dwell times of the FRET states may be garnered by examining individual traces of FRET efficiency (Fig. 1E). A histogram derived from a population of dwell times was fitted to a single exponential, yielding the off-rate (koff = 1/ton = 0.25 ± 0.04 s−1, mean ± s.e.m.) and on-rate (kon = 1/toff = 0.46 ± 0.03 s−1, mean ± s.e.m.) for 50 nM MutL in 25 mM NaCl (Fig. 1E). The single-exponential property of the dwell-time distribution indicates that the kinetics of MutL-ssDNA binding can be described by a single rate constant. To assess whether the FRET change is due to specific ssDNA binding by MutL, we performed smFRET studies with the MutL(R266E) mutant protein. The mutant protein substitutes a negatively charged glutamic acid for a positively charged arginine residue within the opening formed by the linked LC20 and LN40 peptides that contains the putative DNA binding region [9,19]. We found that the FRET efficiency with MutL(R266E) was 0.40 ± 0.16 (mean ± s.d.), which was nearly identical to that observed in the absence of MutL (Fig. 2A). These results are consistent with the conclusion that binding by MutL stretches the ssDNA and that MutL(R266E) is defective in this process. These studies suggest that the MutL(R266) residue plays a role in ssDNA binding. We confirmed that the length change of the ssDNA was generated by a specific MutL-ssDNA interaction, since we observed no change in FRET values in the presence of UvrD helicase, which moves unidirectionally along ssDNA from 3′ to 5′ to unwind duplex DNA (data not shown). To further explore the specificity of MutL-ssDNA binding, we investigated the dependence of the kinetic rates on the ssDNA length.
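The rate extraction above can be sketched in a few lines. For a single-exponential dwell-time distribution, the maximum-likelihood rate estimate is simply 1/mean(dwell times); the paper fits a dwell-time histogram instead, but the two converge for well-sampled data. The dwell times below are illustrative values, not measurements from the paper.

```python
# Sketch: single-exponential dwell-time kinetics from an smFRET trace.
# koff = 1 / <t_on> (mean bound-state dwell), kon = 1 / <t_off> (mean unbound
# dwell). Dwell times are illustrative, chosen near the reported rates.

from statistics import mean

bound_dwells = [3.6, 4.2, 4.4, 3.8]    # t_on: MutL-bound (low-FRET) dwells, s
unbound_dwells = [2.0, 2.4, 2.2, 2.1]  # t_off: unbound (high-FRET) dwells, s

k_off = 1 / mean(bound_dwells)   # dissociation rate = 1 / <t_on>
k_on = 1 / mean(unbound_dwells)  # association rate = 1 / <t_off>

print(f"k_off = {k_off:.2f} 1/s, k_on = {k_on:.2f} 1/s")
# k_off = 0.25 1/s, k_on = 0.46 1/s
```

With these illustrative dwells the estimates land on the reported 0.25 s−1 and 0.46 s−1 for 50 nM MutL in 25 mM NaCl.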
We constructed a partial duplex DNA (15 bp dsDNA) with a 44-deoxythymidine [dT(33+11)] 5′-overhang that bears digoxigenin. The donor Cy3 was attached to the 11th dT from the 5′ end, which keeps the distance between Cy3 and Cy5 similar to that of the dT33 substrate (Fig. 2B inset). The 5′ end of the ssDNA was blocked by anti-digoxigenin antibody, which prevents MutL from binding to the ssDNA end and from dissociating from the end (Fig. 2B cartoon). We also prepared a partial duplex with an end-blocked 18 nt ssDNA tail, with Cy3 conjugated to the 3rd nucleotide from the end of the 18 nt ssDNA [dT(15+3)] (Fig. 2B cartoon). We found that the on-rate (k_on) of the end-blocked dT(33+11) (0.67±0.07 s⁻¹) was greater than that of the unblocked dT33 (0.46±0.03 s⁻¹) and the end-blocked dT(15+3) (0.18±0.04 s⁻¹) (Fig. 2B). This result suggests that a longer ssDNA tail increases the rate of MutL association with the ssDNA. In contrast, the off-rates (k_off) of the end-blocked dT(33+11) (0.33±0.06 s⁻¹), the end-free dT33 (0.25±0.04 s⁻¹), and the end-blocked dT(15+3) (0.30±0.06 s⁻¹) DNA substrates were not significantly different (Fig. 2B). The errors in the kinetic rates represent s.e.m. Together these results clearly suggest that the change in FRET values results from MutL-ssDNA binding. Moreover, MutL does not require a ssDNA end for binding, although interaction with the ssDNA/dsDNA junction cannot be ruled out from the smFRET studies.

ATP dependence of MutL binding to ssDNA

We further characterized the kinetics of MutL-ssDNA binding (Fig. 1E). The k_on was found to be proportional to the concentration of MutL, while the k_off was independent of MutL concentration (Fig. 3A). We determined the dissociation constant (K_D) as the intercept of k_on and k_off from a titration of MutL in 25 mM NaCl (K_D = 29±9 nM, mean ± s.e.m.; Fig. 3A) [18]. ATP processing by MutL is essential for interactions with MutH and UvrD [4,13].
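The K_D determination above can be made concrete: since the pseudo-first-order on-rate is k_on = k_a·[MutL] while k_off is flat, K_D is the concentration where the two lines intersect, i.e. K_D = k_off/k_a. The titration numbers below are illustrative toy data chosen to be consistent with the reported values, not the paper's dataset.

```python
def kd_from_titration(concs_nM, k_on_values, k_off):
    """K_D as the MutL concentration where k_on = k_a * [MutL] crosses
    the concentration-independent k_off. The association rate constant
    k_a is a no-intercept least-squares slope of k_on vs concentration."""
    k_a = sum(c * k for c, k in zip(concs_nM, k_on_values)) / sum(c * c for c in concs_nM)
    return k_off / k_a  # same units as concs_nM

# Hypothetical titration consistent with the reported numbers
# (k_on = 0.46 s^-1 at 50 nM, k_off = 0.25 s^-1 give K_D near 27 nM,
# within the quoted 29 +/- 9 nM).
concs = [25, 50, 100, 200]
k_on = [0.0092 * c for c in concs]  # perfectly linear toy data, slope in s^-1 nM^-1
kd = kd_from_titration(concs, k_on, k_off=0.25)
```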
In the absence of ATP we did not observe any significant changes in FRET efficiency in the presence of MutL. Moreover, the k_off did not vary with ATP concentration. However, the k_on increased with increasing ATP concentration and saturated at ~500 μM ATP (Fig. 3B). To investigate the nucleotide dependence of MutL-ssDNA binding, we also performed smFRET studies in the presence of ADP. We observed approximately 3-fold more MutL-ssDNA binding events in the presence of ATP compared to ADP (Fig. 3C). The k_on in the presence of ADP also decreased significantly (0.14±0.04 s⁻¹), while the k_off in the presence of ATP (0.25±0.04 s⁻¹) was not significantly different from that with ADP (0.20±0.08 s⁻¹) (Fig. 3C). In addition, smFRET studies with the MutL(D58A) substitution mutant, which does not bind ATP [5], displayed no significant changes in FRET efficiency (Figure S1). These results suggest that ssDNA binding requires the MutL ATP/ADP binding functions, although ADP is clearly less effective than ATP as an allosteric effector [11].

ssDNA binding by MutL is absent at physiological ionic strength

Ionic contacts play an important role in DNA-protein interactions, since negatively charged DNA phosphates specifically contact positively charged peptide residues within binding sites [20]. We examined the ionic-strength dependence of MutL-ssDNA binding using the smFRET system. The t_on (1/k_off) dwell time of the reduced FRET efficiency induced by MutL binding decreased with increasing salt concentration and was completely absent above 100 mM NaCl (Fig. 4A). The calculated k_off increased more than 3-fold from 25 to 100 mM NaCl (Fig. 4B). In contrast, the k_on was not significantly affected over the same salt range (Fig. 4B). Furthermore, at 110 mM NaCl the change in FRET signals became exceedingly difficult to measure accurately, and any altered FRET efficiency induced by MutL binding disappeared entirely at 120 mM NaCl (Fig. 4B).
Since the dwell time of MutL (50 nM) at ionic strengths above 100 mM is ~50 ms (k_off > 20 s⁻¹) while the k_on remains relatively constant (0.41 s⁻¹), we estimate the K_D to be greater than 2 μM at physiological ionic strength. These results are consistent with the conclusion that MutL does not bind ssDNA at physiological ionic strength, in agreement with previous reports [13].

Flow-extension single-molecule analysis of MutL-ssDNA binding

It is possible that an interaction among multiple MutL proteins could alter and/or stabilize the ssDNA binding activity [12]. We developed a flow-extension single-molecule assay capable of examining the lengthening of ssDNA induced by the binding of multiple MutL proteins (Fig. 5A-C). One end of a 3′-biotin 5.3 kb ssDNA was linked to a PEG-biotin surface via streptavidin. The opposite end, containing digoxigenin, was attached to a 2.8 μm diameter super-paramagnetic bead coated with anti-digoxigenin antibody (Fig. 5A). A force was applied to the bead in the flow chamber, with the net force given by a magnetic force perpendicular to the surface and a laminar flow parallel to the surface, which ultimately stretches the ssDNA. Upon addition of MutL the ssDNA extension reached ~1 μm by ~100 s (Fig. 5D). Washing free MutL protein from the flow chamber at 400 s resulted in a slow shortening of the extended ssDNA (Fig. 5D). These results demonstrate the association and dissociation of MutL from the ssDNA (Fig. 5B and 5C). We did not observe any change in the length of the ssDNA in the absence of ATP, as expected from the smFRET experiments. A length change of nearly 1 μm strongly suggests that multiple MutL proteins bind to the ssDNA. To confirm this notion, we examined the MutL concentration dependence of ssDNA extension (Fig. 5D and 5E). We found that the maximum ssDNA extension was dependent on MutL concentration (S_0.5 = 24 nM) and saturated at a length of approximately 1 μm, equivalent to 30% of the length of the fully stretched ssDNA.
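The high-salt K_D lower bound quoted above follows from the pseudo-first-order relation K_D = (k_off/k_on)·[MutL]. A minimal sketch of that arithmetic, using the stated k_off > 20 s⁻¹, k_on ≈ 0.41 s⁻¹ and 50 nM MutL:

```python
def kd_lower_bound_uM(k_off, k_on, conc_nM):
    """K_D = (k_off / k_on) * [MutL] for pseudo-first-order binding,
    converted from nM to micromolar."""
    return (k_off / k_on) * conc_nM / 1000.0

# Using the stated values at >100 mM NaCl; k_off = 20 s^-1 is the lower
# bound, so the resulting K_D is itself a lower bound.
kd_min = kd_lower_bound_uM(k_off=20.0, k_on=0.41, conc_nM=50.0)
```

This gives roughly 2.4 μM, i.e. the "greater than 2 μM" estimate in the text.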
The rate of extension was observed to be linearly proportional to the concentration of MutL (Fig. 5F). These results parallel the observation that the kinetic rate of association (k_on) is proportional to MutL concentration (Fig. 5B), and suggest that ssDNA length extension is controlled by the rate of MutL association. We performed MutL-ssDNA extension studies over a range of ionic conditions (Fig. 5G), using 200 nM MutL, which appeared saturating for ssDNA binding. Between 0 and 25 mM NaCl the ssDNA extension was maximal (946±77 nm at 25 mM, mean ± s.e.m.; Fig. 5H). At 100 mM NaCl the extension of the ssDNA decreased to 297±17 nm (mean ± s.e.m.), and at salt concentrations above 100 mM (120 mM and 150 mM) no significant change in the length of the ssDNA was observed (Fig. 5G). These observations are consistent with the smFRET studies and suggest that multiple MutL proteins do not stabilize ssDNA binding, and that polymerization of MutL on ssDNA is unlikely to occur at physiological salt conditions [12].

Discussion

We have developed two single-molecule assays and demonstrated ATP-dependent MutL-ssDNA binding at ionic strengths below 100 mM. Our work represents the first single-molecule analysis of MutL-ssDNA interactions in real time. Very recently, Gorman et al. reported that the human MutL homolog (Mlh1/Pms1) moves on dsDNA with a mean diffusion coefficient of 0.143±0.29 μm²/s at 150 mM NaCl [21]. However, E. coli MutL bound to ssDNA does not seem to diffuse along the ssDNA, which is supported by our observation of identical off-rates for the end-free and end-blocked ssDNA. In our smFRET assay, Cy3 binding by MutL might cause a low apparent FRET due to enhancement of the Cy3 intensity. It is known that binding or proximity of unlabeled proteins to a single fluorophore can induce an intensity enhancement of the fluorophore [22,23,24]. To test this, we investigated the Cy3 intensity in the presence of MutL with the 33 nt ssDNA substrate lacking the Cy5 at the junction.
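The concentration dependence of the maximal extension (S_0.5 = 24 nM, saturation near 1 μm) can be sketched with a simple hyperbolic saturation curve. The functional form and the L_max value are illustrative assumptions; only S_0.5 is taken from the text.

```python
def max_extension_nm(conc_nM, l_max_nm=1000.0, s_half_nM=24.0):
    """Hyperbolic saturation of ssDNA extension with MutL concentration:
    L(c) = L_max * c / (S_0.5 + c). L_max and the hyperbolic form are
    assumptions for illustration; S_0.5 = 24 nM is the reported value."""
    return l_max_nm * conc_nM / (s_half_nM + conc_nM)

ext_half = max_extension_nm(24.0)    # at c = S_0.5: half-maximal extension
ext_sat = max_extension_nm(2400.0)   # far above S_0.5: approaches L_max
```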
We found no intensity changes. These results confirm that the FRET change we observed arose from a change in the distance between the donor and the acceptor owing to ssDNA binding by MutL. The results presented here are consistent with previous studies that have demonstrated ssDNA binding by MutL [4,5,6,9]. At low ionic strength, MutL-ssDNA binding is controlled by a protein-concentration-dependent first-order on-rate (k_on). Increasing ionic strength increases the off-rate (k_off) more than 3-fold, presumably escalating it such that binding is not observed above 100 mM. These results suggest a salt-masking effect, in which stable MutL-ssDNA contact(s) are either eliminated or substantially reduced by increasing ionic strength [20]. We conclude that there are unlikely to be stable or long-lived interactions between MutL and ssDNA at physiological salt, as suggested by Acharya et al. [13]. We found that MutL can stretch ssDNA, which allowed us to observe individual MutL binding events on ssDNA in the smFRET assay. In addition, MutL polymerization on a long ssDNA (5.3 kb) could be explored using the flow-extension assay. However, no length change was observed for dsDNA in our single-molecule assays. The extension mechanism associated with MutL-ssDNA binding is unclear. Interestingly, other biochemical studies showed that MutL and MutL(D58A) bound to 92/93 bp partial duplex DNA in the absence of ATP [4,5], yet MutL bound ssDNA only in the presence of ATP [4,5]. Taken together, we speculate that the FRET changes observed in our studies are likely to result from DNA interaction(s) by two distinct sites within the MutL homodimer that stretch the random-coiled ssDNA tail when the N-terminal domains form an ATP-induced dimeric structure. A binding association that is eliminated by ionic strength may be indicative of alternative MutL-ssDNA interaction(s), or of altered interactions in the presence of additional MMR proteins.
Genetic studies have demonstrated that the MutL(R266) residue, implicated in ssDNA binding, is required for MMR [5]. The MutL(R266) residue is located in a cleft formed by the ATP-binding-controlled homodimerization of the N-terminal LN40 domain. Moreover, the cleft is located in a hole formed by the connection of the C-terminal LC20 homodimer interaction domains via a flexible linker. Thus, the ATP-dependent dimerization of the N-terminal LN40 domain would appear to form a cavity containing the MutL(R266) residue, much like the cavity formed by MSH protein clamps on mismatched DNA [25,26,27]. Interestingly, MLH/PMS ssDNA-dependent ATPase activity has only been measured at or below 90 mM NaCl [5,10,12], potentially suggesting an inverse correlation with salt concentration. We regard it as possible that such a dynamic ATP-dependent conformational transition may allow transient interaction(s) during the excision reaction that ultimately positions a displaced ssDNA strand in the MutL cavity [13]. Alternatively, interactions with downstream effectors such as UvrD or one of the four exonucleases required for MMR may enhance MutL-ssDNA binding by altering local ionic conditions. However, it appears clear that polymerization of MutL along ssDNA as a mechanism in MMR is unlikely to occur under physiologically relevant conditions [12].

Materials and Methods

Protein purification of E. coli wild-type MutL, MutL(D58A), and MutL(R266E)

Hexahistidine-tagged E. coli wtMutL and mutant MutL (D58A, R266E) cloned in pET15b-TEV were overexpressed in the E. coli strain BL21(DE3). Harvested cells were resuspended in 100 mL of lysis buffer (20 mM Tris-HCl, pH 8.0, and 0.5 mM β-mercaptoethanol). Cells were lysed by adding 1 mg/mL lysozyme, followed by sonication. Lysates were centrifuged and the supernatant was applied to a 5 mL HisTrap HP column (GE Healthcare) pre-equilibrated with binding buffer (20 mM Tris-HCl, pH 8.0, 0.5 mM β-mercaptoethanol, and 500 mM NaCl).
After the column was washed with binding buffer containing 15 mM imidazole, proteins were eluted with binding buffer containing 300 mM imidazole. The wtMutL and mutant MutL from the Ni-column were directly injected onto a desalting G-25 column pre-equilibrated with desalting buffer (20 mM Tris-HCl, pH 8.0, 0.5 mM β-mercaptoethanol, and 125 mM NaCl). The proteins were then applied to a MonoQ HR 10/10 column (GE Healthcare) pre-equilibrated with buffer A (20 mM Tris-HCl, pH 8.0, 1 mM EDTA, 1 mM DTT, 5 mM MgCl2) and eluted with a linear gradient from 25% to 100% of buffer B (20 mM Tris-HCl, pH 8.0, 1 mM EDTA, 1 mM DTT, 5 mM MgCl2, and 500 mM NaCl). The wtMutL and mutant MutL eluted at 190-220 mM NaCl. The dimeric proteins were purified on a Superdex 200 HR 10/30 column (GE Healthcare) with buffer C (20 mM Tris-HCl, pH 8.0, 0.5 mM β-mercaptoethanol, and 125 mM NaCl) to greater than 95% purity. The protein concentration was kept below 0.5 mg/ml to avoid self-aggregation. More detailed information on strains, plasmids, and protein purification is described in Refs. [5,9].

Single-molecule FRET

Experiment setup. To construct partial duplex DNA substrates, PAGE- or HPLC-purified oligodeoxynucleotides modified with biotin, Cy3, Cy5, and digoxigenin were purchased from IDT (Coralville, USA): Cy3-dT33 oligo (Cy3-5′-dT33-CGA CGG CAG CGA GGC-3′), Dig-dT(33+11) oligo (Dig-5′-dT11-Cy3-dT33-CGA CGG CAG CGA GGC-3′), Dig-dT(15+3) oligo (Dig-5′-dT3-Cy3-dT15-CGA CGG CAG CGA GGC-3′), and biotin-Cy5 oligo (biotin-5′-GCC TCG CTG CCG TCG-3′-Cy5).
Partial duplex substrates consisting of a 15 bp duplex with 33 nt, 15 nt, 44 nt (dig), or 18 nt (dig) 5′-overhangs were prepared by annealing a pair of biotin-Cy5 and Cy3 oligos (Cy3-dT33, Cy3-dT15, Dig-dT(33+11), or Dig-dT(15+3)) at a molar ratio of 1:1.1 in annealing buffer (10 mM Tris-HCl, pH 8.0, 100 mM NaCl, 1 mM EDTA) at a final concentration of 4 μM. The oligo solution was incubated at 95 °C for 5 min and then slowly cooled to room temperature over 3 h. The annealed DNA substrates were stored at 4 °C. Quartz glass was functionalized with PEG-biotin and PEG (1:40 mass ratio, Laysan Bio), while the cover glass was functionalized only with PEG, to minimize nonspecific binding of the DNA substrates or proteins [28]. Streptavidin in PBS (4 μM in 125 μl, Sigma) was spread on the surface of the quartz glass and incubated for 30 min. The quartz glass was washed with double-distilled water and dried with a nitrogen gas jet. A flow chamber with a channel of 25 mm × 3 mm × 0.1 mm, formed with double-sided tape (Biolabs), was constructed from the streptavidin-coated quartz glass and the PEG-only cover glass. To immobilize the DNA substrates, 10 pM DNA in blocking buffer (20 mM Tris-HCl, pH 7.5, 2 mM EDTA, 50 mM NaCl, 0.0025% Tween 20 (v/v), 0.1 mg/ml BSA) was incubated in the flow chamber for 5 min. Free DNA was removed by extensive washing with blocking buffer. Reaction buffer consisted of 20 mM Tris-HCl, pH 7.5, 25-150 mM NaCl, 0.1 mM EDTA, 3 mM MgCl2, 0.5 mM ATP, and 1 mM DTT. Proteins in reaction buffer were injected into the chamber to measure binding to the DNA substrates. To increase the photostability of the dyes, 2 mM trolox, 0.8% (w/v) D-glucose, 165 U/ml glucose oxidase, and 2,170 U/ml catalase were added to the reaction buffer as an oxygen-scavenging system [29].
Emission signals from the donor, excited with a 532 nm DPSS laser (Cobalt, 100 mW), and from the acceptor, excited by energy transfer, were collected and recorded with an EM-CCD (Andor iXon EM+ 897) using lab-developed imaging software at a 50 ms time resolution. To image the fluorescent signals, we used a wide-field total internal reflection fluorescence (TIRF) microscope with a water-immersion objective (60×, NA = 1.2, Olympus), in which the total internal reflection of the incident beam was induced by a prism.

Data analysis. The data were analyzed using IDL and MATLAB scripts obtained from the Ha group at the University of Illinois (http://bio.physics.illinois.edu). After correcting the donor (I_D) and acceptor (I_A) intensities for cross-talk between the channels as well as for background, FRET efficiencies were calculated as the ratio I_A/(I_D + I_A). Each single-molecule trace was processed using hidden Markov modeling (HMM) with maximum evidence to identify multiple states without user bias [30]; the software is available at http://vbfret.sourceforge.net. Dwell times in the states determined by the HMM analysis were used to calculate the kinetic on-rate (k_on) and off-rate (k_off) of the binding reaction. Histograms of the dwell times (binding time t_on, unbinding time t_off) in each state were fitted with the exponential functions exp(−k_off·t) or exp(−k_on·t) for a single-rate reaction, where k_off = 1/t_on is the off-rate and k_on = 1/t_off is the on-rate.
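The intensity-to-efficiency step above can be written compactly. The background and crosstalk parameters below are generic placeholders for the setup-specific corrections mentioned in the text, and the intensity values are toy numbers chosen to reproduce the two HMM states reported in the Results.

```python
def fret_efficiency(i_donor, i_acceptor, bg_d=0.0, bg_a=0.0, crosstalk=0.0):
    """FRET efficiency E = I_A / (I_D + I_A) after background subtraction
    and removal of donor leakage into the acceptor channel. The crosstalk
    fraction is an assumed, setup-specific parameter."""
    d = i_donor - bg_d
    a = (i_acceptor - bg_a) - crosstalk * d
    return a / (d + a)

# Toy intensities around the two HMM states reported in the paper.
e_free = fret_efficiency(560, 440)   # unbound ssDNA tail, E = 0.44
e_bound = fret_efficiency(740, 260)  # MutL-bound, stretched tail, E = 0.26
```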
Flow-extension assay

For the flow-extension experiments, we constructed a 5.3 kb ssDNA as follows: 1) λ phage DNA (New England Biolabs) was digested with BsrGI (New England Biolabs); 2) the resulting 5,208 bp left-arm fragment, containing a 4 nt BsrGI 5′-overhang and the λ DNA left cohesive end, was isolated from a 0.7% agarose gel (QIAquick Gel Extraction Kit, QIAGEN); 3) the 12 nt λ-tail was annealed and ligated to a 3′-biotin oligo (5′-AGG TCG CCG CCC AGT TAC AGA TTT ATG GTG ACG ATA CAA ACT ATA GAG TGA (dT)43-3′-biotin); 4) the 4 nt BsrGI-tail was annealed and ligated to a 5′-digoxigenin (Dig) oligo (Dig-5′-(dT)12-TGA TGA ATT CTA ATG-3′) and a complementary linker oligo (5′-GTA CCA TTA GAA TTC ATC A-3′); and 5) the 5,333 nt ssDNA bearing both 3′-biotin and 5′-digoxigenin was obtained by heating the constructed dsDNA in a 2 mM NaOH solution at 99 °C for 5 min, then quenching it in 4 °C blocking buffer to prevent reannealing. The cover glass was functionalized with PEG-biotin and PEG (1:100 mass ratio, Laysan Bio). A flow chamber was assembled similarly to that for the FRET studies and placed on the stage of an inverted optical microscope (IX51, Olympus). The 5.3 kb ssDNA (0.5 pM) in blocking buffer was incubated in the flow chamber for 10 min, and unattached DNA was removed by extensive washing as described for the FRET studies. A super-paramagnetic bead (2.8 μm in diameter, Invitrogen) coated with anti-digoxigenin Fab (Roche) was linked to the Dig-end of the ssDNA by flowing the beads into the chamber in blocking buffer. Prior to the addition of MutL, free beads were stringently removed by extensive washing [17]. A drag force parallel to the bottom surface was applied to a tethered bead by a laminar flow produced by a syringe pump (Harvard Apparatus).
A magnetic force directed upward from the surface, generated by a rare-earth magnet (NdFeB), was also applied to avoid nonspecific interaction(s) between the bead and the surface. The hydrodynamic and magnetic forces were calculated by measuring the mean-square displacement ⟨Δr²⟩ in the direction transverse to the stretching force, for which the bead position was measured at 50 Hz with a 100× objective in a bright-field optical microscope. The force (F) was determined as F = k_B·T·l/⟨Δr²⟩, where k_B is the Boltzmann constant, T is the absolute temperature, and l is the length of the DNA [31]. Our studies were performed under 2.5±0.4 pN, resulting from the vector sum of the magnetic force (1.1±0.4 pN) and the hydrodynamic force (2.2±0.2 pN) due to the laminar flow. The error of the force represents s.e.m. The beads were imaged through a 10× objective (NA = 0.40, Olympus); we observed more than 150 beads in a field of view. The diffraction patterns of the beads were recorded with a high-resolution CCD (RETIGA 2000R, QImaging) using MetaVue (Molecular Devices) imaging software. Bead positions recorded with a 500 ms time resolution were determined using 2D Gaussian fitting with 10 nm accuracy [32]. The data were analyzed with DiaTrack 3.0 (Semasopht) and OriginPro 8 (OriginLab).
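The equipartition force calibration F = k_B·T·l/⟨Δr²⟩ is a one-line computation once the transverse fluctuations are measured. The MSD and tether-length values below are illustrative numbers chosen only to land in the few-pN regime used here, not measurements from the paper.

```python
def tether_force_pN(msd_nm2, dna_length_nm, temp_K=298.0):
    """Force on a tethered bead from transverse fluctuations via
    equipartition: F = k_B * T * L / <dr^2>, returned in piconewtons."""
    k_B = 1.380649e-23           # J/K (SI exact value)
    msd_m2 = msd_nm2 * 1e-18     # nm^2 -> m^2
    length_m = dna_length_nm * 1e-9
    return k_B * temp_K * length_m / msd_m2 * 1e12

# Illustrative only: a ~1.6 um tether with ~2700 nm^2 transverse MSD.
f = tether_force_pN(msd_nm2=2700.0, dna_length_nm=1600.0)
```

With these assumed inputs the force comes out near the 2.5 pN operating point reported in the text; note how tightly the result depends on resolving the nanometer-scale MSD, which motivates the 10 nm Gaussian-fitting accuracy.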
Tachyon Kink on non-BPS Dp-brane in the General Background

This paper is devoted to the study of the tachyon kink on the worldvolume of a non-BPS Dp-brane that is embedded in a general background, including the NS-NS two-form B and also a general Ramond-Ramond field. We will explicitly show that the dynamics of the kink is described by the equations of motion that arise from the DBI and WZ action for a D(p-1)-brane.

Introduction and Summary

The study of the open string tachyon brought significant progress in the understanding of the nonperturbative aspects of string theory 1. Among the many results obtained in the past there is a very interesting observation that some aspects of tachyon condensation can be correctly captured by an effective field theory description, where the tachyon effective action (2.1), describing the dynamics of the tachyon field on a non-BPS Dp-brane of type IIA and IIB theory, was proposed in [6,7,8,9] 2. One of the well-known solutions of the tachyon effective field theory is the kink solution, which is supposed to describe a BPS D(p-1)-brane [15,16,17,18,19,20,21,22]. A very nice analysis of the kink solution was performed in the paper [15], where it was shown that the energy density of the kink in the effective field theory is localised on a codimension-one surface, as in the case of a BPS D(p-1)-brane. It was then also shown that the worldvolume theory of the kink solution is given by the Dirac-Born-Infeld (DBI) action of a BPS D(p-1)-brane. This result demonstrates that the tachyon effective action reproduces the low energy effective action on the worldvolume of the soliton. In our recent paper [30] we extended this analysis to spatially dependent tachyon condensation on an unstable Dp-brane moving in a nontrivial background with a diagonal metric 3.
We have shown that this form of tachyon condensation leads to the emergence of a D(p-1)-brane, where the scalar modes that propagate on the kink worldvolume are solutions of the equations of motion that arise from the DBI action for a D(p-1)-brane that moves in the given background and is localised at the core of the kink. The purpose of this paper is to extend this analysis to a general background, including the NS-NS two-form field and the Ramond-Ramond forms as well. We will study this problem in two ways. In the first we consider a non-BPS Dp-brane action where the worldvolume diffeomorphism invariance is not fixed at all. The analysis of the equations of motion in this approach is straightforward and in some sense demonstrates the efficiency of studying the Dp-brane dynamics without imposing any gauge fixing conditions 4. More precisely, we will show that spatially dependent tachyon condensation leads to the emergence of a D(p-1)-brane whose dynamics is governed by the equations of motion that arise from the DBI and WZ action for a D(p-1)-brane. We will also show that the mode that characterises the core of the kink does not depend on the worldvolume coordinates of the kink and that all its values are equivalent. This result is consistent with the fact that we do not presume any relation between the worldvolume coordinates and the target space ones, so that all positions of the kink on the worldvolume of the unstable Dp-brane are equivalent. In the second approach we use the diffeomorphism invariance: we presume that the worldvolume coordinate that parametrises the spatially dependent tachyon condensation is equal to one spatial coordinate in the target spacetime. We will then demonstrate that the dynamics of the kink solution is again governed by the equations of motion of a D(p-1)-brane, even though the analysis of these equations is more involved.
We will also show that the mode that describes the location of the kink on the worldvolume of the non-BPS Dp-brane has a physical meaning as the embedding coordinate in the spatial direction that coincides with the chosen worldvolume direction, and that this mode obeys the equation of motion that arises from the DBI and WZ action for a D(p-1)-brane moving in the given background. These results explicitly demonstrate that the tachyon-like DBI action, together with the WZ term, correctly describes the emergence of a BPS D(p-1)-brane. We also hope that this analysis can be extended to other situations where the effective field theory description of tachyon condensation could be useful. For example, we would like to apply this analysis to the supersymmetric version of a non-BPS Dp-brane in a general background, again following [15]. It would also be nice to find solutions of the tachyon equation of motion that describe D-branes of codimension larger than one. In other words, we would like to see whether we can describe the emergence of a D(p-2)-brane, which, by definition, is unstable, so that a tachyon should be present on the worldvolume of the kink. The rest of this paper is organised as follows. In the next section (2) we analyse the equations of motion for a non-BPS Dp-brane in a curved background without assuming any static gauge. We will see that the modes living on the worldvolume of the kink solve the equations of motion that arise from the DBI and WZ action for a BPS D(p-1)-brane. We will also calculate the stress energy tensor and show that it is equal to the stress energy tensor of a D(p-1)-brane. In section (3) we study the same problem with the gauge partially fixed, and again show that the dynamics of the kink is governed by the DBI and WZ action for a BPS D(p-1)-brane.
Non-BPS Dp-brane in general background

As in our previous paper we begin with the Dirac-Born-Infeld-like tachyon effective action in a general background [6,7,8,9],

S = -\int d^{p+1}\xi \, V(T) \sqrt{-\det A_{\mu\nu}}, \qquad A_{\mu\nu} = g_{MN}\partial_\mu X^M \partial_\nu X^N + b_{MN}\partial_\mu X^M \partial_\nu X^N + F_{\mu\nu} + \partial_\mu T \partial_\nu T, (2.1)

where A_μ, μ,ν = 0,...,p and X^M, M,N = 0,...,9 are the gauge and transverse scalar fields on the worldvolume of the non-BPS Dp-brane, and T is the tachyon field. V(T) is the tachyon potential, which is symmetric under T → −T, has a maximum at T = 0 equal to the tension τ_p of the non-BPS Dp-brane, and has its minima at T = ±∞ where it vanishes. Since we consider a non-BPS Dp-brane in a background with a nontrivial Ramond-Ramond field, we should also include the Wess-Zumino (WZ) term for the non-BPS Dp-brane, which is supposed to have the form [25]

S_{WZ} = \int_\Sigma V(T)\, dT \wedge C \wedge e^{F}, (2.2)

where Σ denotes the worldvolume of the non-BPS Dp-brane and C collects all RR n-form gauge potentials (pulled back to the worldvolume). The form of the WZ term (2.2) was determined from the requirement that the Ramond-Ramond charge of the tachyon kink be equal to the charge of a D(p-1)-brane 6. Using (2.1) and (2.2) we now obtain the equations of motion for T, X^M and A_μ. The equation of motion for the tachyon takes the form (2.4), with source current J_T = δ_T S_WZ derived from varying the Wess-Zumino term. For the scalar modes we obtain (2.5), and for the gauge field (2.6), where J^μ = δS_WZ/δA_μ. To simplify the notation it is convenient to introduce the symmetric and antisymmetric parts of the matrix (A^{-1}). Now we derive the explicit form of the currents that arise from the WZ term (2.2). To do this we write (2.2) in components, where ǫ^{μ_1...μ_{p+1}} is the Levi-Civita tensor (with no metric factors) and q = (2p+1−1−2n). (We closely follow the analysis of the currents for a BPS Dp-brane performed in [31].) The explicit variation of (2.8) then yields the currents (2.10), (2.11) and (2.12). Now we try to find a solution of the equations of motion (2.4), (2.5) and (2.6) that can be interpreted as a lower-dimensional D(p-1)-brane moving in the given background.
Without loss of generality we choose one particular worldvolume coordinate, say ξ^p ≡ x, and consider the following ansatz for the tachyon (2.13), where, as in [15], we presume that f(u) satisfies the properties (2.14) but is otherwise an arbitrary function of its argument u; a is a constant that we shall take to ∞ in the end. In this limit we have T = ∞ for x > t(ξ) and T = −∞ for x < t(ξ). Note also that t(ξ) in (2.13) is a function of ξ^α, α = 0, ..., p−1. Let us also presume the following ansatz for the massless fields (2.15), where again ξ ≡ (ξ^0, ..., ξ^{p−1}). Before we proceed further we would like to stress the main goal of this analysis: we wish to show that the dynamics of the kink is governed by the action (2.16)-(2.17). In other words, we will show that the modes given in (2.15) that propagate on the worldvolume of the kink obey the equations of motion derived from (2.16), which take the form (2.18). In the same way we obtain the equations of motion for A_α (2.20), with the corresponding current J̃. Let us now return to the ansatz (2.13) and (2.15) and calculate the matrix A_μν (2.22). As a next step we determine the inverse matrix (A^{-1}) up to corrections of order O(1/a) in the limit of large a. Using the relation A_μν(A^{-1})^{νρ} = δ_μ^ρ and the form of the matrix A given in (2.22), we easily determine the relation (2.27). With the help of (2.27) we then obtain (2.28), where we have used the fact that the only field that depends on x is the tachyon T. Using (2.28) we obtain the DBI part of the tachyon equation of motion (2.4). Now we consider the DBI part of the equation of motion for X^K (2.5). With the ansatz (2.13) and (2.15), the first two lines there take the form (2.31), while the expression on the third line in (2.5) takes a simpler form, where we have used the fact that the X^K are functions of the ξ^α only.
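Following Sen's construction [15], the kink ansatz and the assumed properties of the profile function f(u) referred to above can be written as:

```latex
T(\xi^{\alpha}, x) \;=\; f\!\big(a\,(x - t(\xi))\big), \qquad
f(-u) = -f(u), \quad f'(u) > 0 \;\;\forall u, \quad f(\pm\infty) = \pm\infty ,
```

so that in the limit a → ∞ one has T = ∞ for x > t(ξ) and T = −∞ for x < t(ξ), reproducing the behaviour stated in the text. The precise composition of a with the profile is a reconstruction from the stated limits, not a quotation of the lost displayed equation.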
In the same way as in (2.31) we can show that the remaining terms simplify analogously. Collecting all these results, the DBI part of the equation of motion for X^K takes the form (2.34). Now let us consider the equation of motion for the gauge field. For A_x we obtain (2.35), where we have used the antisymmetry of (ã^{-1})^{αβ}_A, so that (ã^{-1})^{αβ}_A ∂_α∂_β t = 0. On the other hand, the equations of motion for A_α take the form (2.36). As a next step we evaluate the currents given in (2.10), (2.11) and (2.12) for the ansatz (2.13) and (2.15). To begin with, we determine the components of the pullbacks of the various fields. It is easy to see that, due to the fact that the massless worldvolume modes do not depend on x and that A_x = 0, the only nonzero components of F_μν are F̃_αβ. For C^(n) the situation is the same: any component whose subscript contains x vanishes, since the pullback involves no x-derivatives of the massless fields. Now we begin with the gauge current J^μ. Firstly, J^x is given by (2.39). On the other hand, the current J^α is given by (2.40), using the fact that (F)^{n−1}_{...x} and C_{...x...} are equal to zero. If we now combine (2.36) with (2.40) we obtain (2.41). Let us now analyse the behaviour of the term a f′V in the limit a → ∞. Since by definition f′(u) is finite for all u, it remains to study the properties of the expression aV. We see that for x ≠ t(ξ) the expression aV goes to zero in the limit a → ∞. On the other hand, for x = t(ξ) the potential V(0) = τ_p, and hence in order to obey the equation of motion for A_α we find that the expression in the bracket in (2.41) should vanish. In fact, this expression is the same as the equation of motion for A_α given in (2.20). On the other hand, using (2.35) and (2.39), the equation of motion for A_x takes a form that clearly holds given that all modes obey the equation of motion (2.20). Now we analyse the current J^K. Looking at its form (2.12) it is clear that the expressions on the first and second lines are nonzero only for μ_{p+1} = x.
On the other hand, the expression on the third line can be nonzero both for μ_{p+1} = x and for μ_{p+1} ≠ x. Finally, the expression on the last line in (2.12) reduces to a similar form. If we now combine all these results together we obtain the final form of the current J^K (2.47). Following the discussion given below (2.41), we see that the expression in the bracket in (2.47) should be equal to zero. On the other hand, this equation is exactly the equation of motion for the embedding modes that live on the worldvolume of the D(p-1)-brane, given in (2.18). Finally we come to the analysis of the tachyon current J_T. It is not hard to see that the tachyon current is equal to zero. Firstly, the contribution to J_T for which μ_{p+1} = x vanishes thanks to the fact that the massless modes do not depend on x. On the other hand, for μ_{p+1} ≠ x all contributions to J_T vanish, since then there certainly exists an F or C with a lower index containing x, and as we argued above these terms are equal to zero. Hence we get J_T = 0, and the tachyon equation of motion (2.4) reduces accordingly. Since for a general background all the massless fields depend on ξ, the only way to satisfy this equation at x = t(ξ), where V(0) = τ_p, is to demand that ∂_α t = 0. In other words, we obtain a set of tachyon kink solutions labelled by a constant t that determines the position of the core of the kink on the worldvolume of the unstable Dp-brane. We consider this a natural result for a non-BPS Dp-brane where no gauge-fixing procedure was imposed: in this case the position of the Dp-brane in the target spacetime is not specified, and consequently all kink solutions on its worldvolume are equivalent. In summary, we have shown that spatially dependent tachyon condensation on the worldvolume of an unstable Dp-brane in a general background leads to the emergence of a lower-dimensional D(p-1)-brane, where the massless modes that propagate on the worldvolume of the kink obey the equations of motion that arise from the DBI and WZ action for a D(p-1)-brane.
Stress energy tensor

Further support for the interpretation of the tachyon kink as a lower dimensional D(p-1)-brane can be derived from the analysis of the stress energy tensor of the non-BPS Dp-brane. In order to find its form recall that we can write the action (2.1) as

From (2.51) we can easily determine the components of the stress energy tensor T_{MN}(x) of an unstable D-brane using the fact that T_{MN}(x) is defined as the variation of S_p with respect to g_{MN}(x) (2.52)

Now from (2.13) and (2.15) we know that all massless modes are x-independent. Hence (2.52) is equal to

where τ_{p-1} is the tension of a BPS D(p-1)-brane. In other words the stress energy tensor evaluated on the ansatz (2.13) and (2.15) corresponds to the stress energy tensor of a D(p-1)-brane.

In the same way we can study other currents that express the coupling of the non-BPS Dp-brane to closed string massless fields. For example, let us consider the current J_C^{M_1...M_N} corresponding to the variation of S_{WZ} with respect to C_{M_1...M_N}(x), where n = (p−N)/2. It is clear that the nonzero components correspond to μ_{p+1} = x (since in the opposite case there would be a derivative ∂_x X that vanishes for (2.15)) and we get

where μ_{p−1} = T_{p−1} is the Ramond-Ramond charge of the D(p-1)-brane, and hence (2.56) is the appropriate current for a D(p-1)-brane.

Partial gauge fixing

In order to find a solution of the tachyon effective action where the mode t that determines the location of the core of the kink can be interpreted as an additional embedding coordinate, we should partially fix the gauge. In other words, when we choose the spatial worldvolume coordinate on which the tachyon depends, we will also presume that this coordinate coincides with one arbitrary spatial coordinate in the target spacetime.
Since both the worldvolume theory and the spacetime theory are diffeomorphism invariant we can, without loss of generality, choose the worldvolume direction on which the tachyon depends to be ξ^p and the spacetime direction to be X^9. Then we demand that

Let us now consider the following ansatz for the tachyon

where f(u) could be the same function as was defined in the previous section. We also presume the following ansatz for the massless modes

As in the previous section we obtain that det A is equal to

det A = a^2 f′^2 det(ã_{αβ}) + O(1/a) (3.5)

and the inverse matrix (A^{-1}), when it is expressed as a function of (ã^{-1}) and ∂t, takes the form

where the relations in (3.6) hold up to corrections of order 1/a^2. Now using the form of the matrix A (3.4) and the equation (A^{-1})^{μν} A_{νρ} = δ^μ_ρ we easily determine the following exact relation (3.7)

Then with the help of (3.7) we can write the second term in (2.4) as

Following [15] we can now argue that due to the explicit factor of a^2 f′^2 in the denominator the leading contribution from individual terms in this expression is now of order a, and hence we can use the approximate results for det A and (A^{-1}) given in (3.5) and (3.6) to analyse the DBI part of the equation of motion for the tachyon (2.4) (3.9)

We should now interpret the result given above more carefully. Firstly, as we know from the previous section, the tachyon potential V is equal to zero for x − t(ξ) ≠ 0 while for x − t(ξ) = 0 we get V(0) = τ_p in the limit a → ∞. Moreover, we will show in the next subsection that the tachyon current J_T is equal to J_T = −V J̃_9 when it is evaluated on the ansatz (3.2) and (3.3). Note that J̃_9 is the gauge-fixed version of the current (2.19). The main point is that the tachyon equation of motion is obeyed for x − t(ξ) ≠ 0, while for x = t(ξ) we should demand that the expression in the bracket in (3.9) together with −J̃_9 should be equal to zero.
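The expansion (3.5) can be illustrated on a finite-dimensional toy model. With the kink ansatz, ∂_μ T ∂_ν T = a²f′² v_μ v_ν where v = (1, −∂_α t) is the gradient of x − t(ξ), so A_{μν} = g_{μν} + a²f′² v_μ v_ν with g_{μν} the O(1) part. The matrix determinant lemma then gives det A = det g + a²f′² det ã, where ã_{αβ} is the pullback of g onto the surface x = t(ξ). The sketch below checks this on an arbitrary fixed 4×4 matrix; all numerical values are illustrative assumptions, not taken from the text.

```python
def det(M):
    # determinant via Laplace expansion along the first row (fine for small matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1.0) ** j * M[0][j] * det(minor)
    return total

p = 3
# arbitrary O(1) part of A_{mu nu} in the ordering (x, xi^1..xi^p); the
# non-symmetric entries stand in for the F-contribution
g = [[1.3,  0.2,  0.1, -0.3],
     [-0.1, 2.0,  0.3,  0.1],
     [0.2, -0.3,  1.5,  0.4],
     [0.1,  0.2, -0.4,  1.8]]
dt = [0.5, -0.3, 0.2]              # stand-in for partial_alpha t
v = [1.0] + [-s for s in dt]       # gradient of x - t(xi) over (x, xi^alpha)

def det_A(a, fp=0.7):
    lam = (a * fp) ** 2            # a^2 f'^2
    A = [[g[i][j] + lam * v[i] * v[j] for j in range(p + 1)] for i in range(p + 1)]
    return det(A), lam

# pullback tilde-a_{alpha beta} = e_alpha^mu g_{mu nu} e_beta^nu, e_alpha = (dt_alpha, delta)
E = [dt] + [[1.0 if i == j else 0.0 for j in range(p)] for i in range(p)]
ta = [[sum(E[m][i] * g[m][n] * E[n][j] for m in range(p + 1) for n in range(p + 1))
       for j in range(p)] for i in range(p)]

dA, lam = det_A(1000.0)
print(dA / (lam * det(ta)))        # approaches 1 as a grows
```

The subleading piece is det g, of order one, which is why the relative deviation of det A from a²f′² det ã falls off like 1/a².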
If we now use the fact that

we can write the expression in the bracket in (3.9) with −J̃_9 in the form

where we have introduced the notation

It is important to stress that in (3.11) we first perform the derivative with respect to x and then we replace x with t(ξ). Then the presence of the following expressions in (3.9) suggests that t(ξ) should be interpreted as an embedding coordinate.

To see this more clearly let us compare (3.11) with the equation of motion (2.18) for K = 9 and observe that the expression on the third line in (2.18) can be written as

where on the second line the derivative with respect to ξ^α treats x as an independent variable, so that we first perform the derivative with respect to ξ^α and then we replace x with Y. We see that this prescription coincides with the expressions on the second line in (3.11). In the same way we can proceed with the expression on the fourth line in (2.18) (3.15)

and this again coincides with the expressions on the fourth line in (3.11). In summary, the location of the tachyon kink in the x^9 direction is completely determined by the field t(ξ) that obeys the equation of motion (2.18) for K = 9.

Now we come to the analysis of the equation of motion for X^K, K = 0, . . . , 8. For the ansatz (3.2) and (3.3) the first term in (2.5) takes the form

On the other hand the expression on the second line in (2.5) can be written as

where we have used the notation (3.12). Finally we will analyse the expression on the third and the fourth line in (2.5) that can be written as

After some lengthy calculations we obtain that (3.18) for the ansatz (3.2) and (3.3) takes the form

Finally, using (3.16), (3.17) and (3.19) we get

using the result, which will be proven in the next subsection, that the current J_K is equal to a V f′ J̃_K, where J̃_K is given in (2.19). As we know from the previous section the expression a f′ V goes to zero in the limit a → ∞ when x ≠ t(ξ).
On the other hand for x = t(ξ) the potential V(0) = τ_p for arbitrary a, and hence in order to obey the equation of motion for X^K (2.5) we get that the expression in the bracket {. . .} should vanish for x = t(ξ). However this is precisely the equation of motion (2.18), and hence we again obtain the result that the scalar modes X^K should solve the equations of motion that arise from the action for a BPS D(p-1)-brane. Since we believe it is very important to find the correct interpretation of equation (3.20), we would like to stress again that in the expression in the bracket in (3.20) we first perform a derivative with respect to ξ^α and then we replace x with t(ξ) in the limit a → ∞. This fact implies that t(ξ) is a scalar mode that parametrises the location of the D(p-1)-brane in the x^9 direction.

To complete the discussion of the equations of motion for X^K we should also analyse the equation of motion for X^9. If we proceed in the same way as for X^K above, we obtain that the equation of motion for X^9 takes the form

where we have again used the result from the next subsection that J_9 = a f′ V J̃_9. We see that the expression in the bracket (. . .) in (3.21) coincides with the equation of motion (2.18) for K = 9. This is a nice result since we should obtain ten independent equations for the scalar modes, and we see that the equations of motion for T and for X^9 imply one equation of motion for the mode t.

Finally we come to the analysis of the equation of motion for A_μ given in (2.6). For μ = α the DBI part of the equation of motion (2.6) takes the form (3.23)

As usual we demand that the expression in the bracket (. . .) in (3.23) should be equal to zero for x = t(ξ). Then the vanishing of this expression is equivalent to

which is the equation of motion for the gauge field given in (2.20).
Finally, the DBI part of the equation of motion (2.6) for μ = x and for the ansatz (3.2), (3.3) takes the form

using (2.32) and then the antisymmetry of the matrix (ã^{-1})^{αβ}_A. Now with the help of the current J^x given in (3.39) and with (3.25) the equation of motion (2.6) for μ = x takes the form

where we have included the expression a V f′ ∂_β t ∂_α t ∂_x (e^{−Φ} (ã^{-1})^{αβ}_A √(− det ã)) that vanishes thanks to the antisymmetry of (ã^{-1})^{αβ}_A, but whose presence is crucial for the interpretation of t as an embedding coordinate. Following the arguments given above we obtain that the expression in the bracket {. . .} should be equal to zero for x = t(ξ) in the limit a → ∞. We see that this holds since, as we have argued above, the massless modes obey (2.20).

In summary, we have shown that the dynamics of the tachyon kink is governed by the equations of motion that arise from the DBI and WZ action for a D(p-1)-brane that is localised at the point x = t(ξ). To really conclude this section we should now evaluate the currents J_M, J^μ and J_T.

Analysis of currents

In this subsection we will analyse the currents (2.10), (2.11) and (2.12) for the ansatz given in (3.2) and (3.3). We will see that this analysis is much more difficult than in the case when we did not impose any gauge fixing conditions. We start with the gauge current (2.10) where μ_1 = α_1. In this case we get (3.28)

Now we come to one important point. As we know from the previous section the factor a V f′ vanishes for x ≠ t(ξ) for a → ∞. At the same time we argued that we should regard t(ξ) as an embedding coordinate. On the other hand F_{αβ} contains an embedding of B that is equal to

and we also have

Now we would like to argue that whenever some term in any current contains a factor ∂_{α_x} t we can replace all F_{αβ} and all C_{α_{2n+1}...α_p} with F̃_{αβ} and C̃_{α_{2n+1}...α_p}, where

and where Y^M was introduced in (3.12).
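The displayed definitions (3.31) did not survive extraction. A plausible reconstruction, consistent with the statement that F_{αβ} contains the embedding of B and with Y^M = (X^K(ξ), t(ξ)) from (3.12), is that the tilded quantities are simply the pullbacks evaluated on Y^M (a hedged guess, not a quotation):

```latex
% hedged reconstruction of (3.31): pullbacks with x replaced by t(\xi)
\tilde{F}_{\alpha\beta} = \partial_{\alpha} A_{\beta} - \partial_{\beta} A_{\alpha}
    + b_{MN}\,\partial_{\alpha} Y^{M} \partial_{\beta} Y^{N}, \qquad
\tilde{C}_{\alpha_{2n+1}\dots\alpha_{p}} = C_{M_{2n+1}\dots M_{p}}\,
    \partial_{\alpha_{2n+1}} Y^{M_{2n+1}} \cdots \partial_{\alpha_{p}} Y^{M_{p}} .
```

On this reading the replacement F → F̃, C → C̃ is exactly what promotes t(ξ) to an embedding coordinate of the D(p-1)-brane.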
To see that this replacement is correct note that the additional terms in the expressions (we mean expressions with the overall multiplicative factor ∂_{α_x} t) that arise when we replace F with F̃ and C with C̃ contain a derivative of t in the form ∂_{α_y} t. Now thanks to the existence of the factor ǫ^{α_1...α_p x} it is clear that these terms vanish after multiplication with ∂_{α_x} t, since

Now we proceed to the analysis of the expression on the fourth line in (3.27)

using the fact that

Now it is easy to see that (3.33) together with (3.28) gives (3.35)

To complete the discussion of the current we should analyse the expression on the last line in (3.27) (3.36)

Now we see that (3.36) is precisely the expression that is needed to replace C_{α_{2n+1}...α_p} with C̃_{α_{2n+1}...α_p} in (3.27). Finally, if we combine (3.35) with (3.36) we obtain the following form of the current J^{α_1}

where J̃^{α_1} is the gauge field current for the D(p-1)-brane given in (2.21). Note also that the term on the second line in (3.37) is exactly the right one in order to interpret t as an embedding coordinate, since in the expression on the first line in (3.37) the partial derivative ∂_{α_2} treats x as an independent variable. We will also see that similar additional terms appear in all other currents as well.

Finally, we will analyse the gauge current for μ_1 = x

where we have used the antisymmetry of ǫ^{x α_1 ...α_p} under the exchange of α_1 and α_p so that

Thanks to the presence of the term ∂_{α_1} t we can, following the discussion given above, everywhere replace F with F̃ and C with C̃. For the same reason we can add to (3.38) the expression

a V f′ Σ_{n≥0} [2n/(n! 2^n q!)] ǫ^{α_1...α_p x} ∂_x (F̃)^{n−1}_{α_3...α_{2n}} C̃_{α_{2n+1}...α_p} ∂_{α_2} t ∂_{α_1} t

that formally vanishes; however, with this term the current (3.38) can be written as (3.39)

Let us now proceed to the analysis of the tachyon current (2.11) for the ansatz (3.2) and (3.3) (3.40)

Now we split the calculation into two parts, the first one when μ_{p+1} = x and the second one when μ_{p+1} ≠ x.
In the first case we get (3.42)

In the same way we can proceed with the expression ∂_x C_{I_{2n+1}...I_p}. Then the expression (3.41) takes the form

where we have included the tilde components defined in (3.31). We have also used the fact that we can write b_{9I} ∂_{α_2} X^I = b_{9M} ∂_{α_2} Y^M, and in the same way we can extend the embedding C_{9 I_{2n+2} ...}

Let us now consider the case when μ_{p+1} ≠ x in (3.40). In this case we get

using the fact that F_{x α_1} = F_{α_1 x} = 0. We will again argue that the terms written on the third and the fourth line in (3.43) are important for the interpretation of t as an embedding coordinate. In fact, following the discussion performed in the previous section it is easy to see that

where the second term vanishes after multiplying this derivative with ǫ^{α_1 α_2 ...}. In the same way we can show that the derivative ∂_{α_2} F_{α_3 α_4} takes the form

If we multiply the expression given above with ǫ^{α_2 α_3 α_4 ...} we obtain that the first and the last term vanish, as can be seen from the following examples

If we now combine (3.43) with (3.44) we obtain that the tachyon current has a natural interpretation as the current for the scalar mode t(ξ) that parametrises the location of the D(p-1)-brane in the x^9 direction

J_T = −V J̃_9 (3.48)

with J̃_9 given in (2.21). Finally we will analyse the currents J_K given in (2.12). Let us start with the first term in (2.12)

In the previous expressions we have included the terms with tildes for the same reasons as were argued in the case of the gauge field current. We can also simplify the expression above using the fact that

ǫ^{α_1 α_2 α_3 ...α_{2n} ...} ( (F)^{n−1}_{α_3...α_{2n}} + 2(n − 1) b_{9I} ∂_{α_3} t ∂_{α_4} X^I (F)^{n−2}_{α_5...α_{2n}} ) = ǫ^{α_1 α_2 α_3 ...α_{2n} ...} (F̃)^{n−1}_{α_3...α_{2n}} .

As a final point we should determine the form of the current J_9. In fact, since in the analysis performed above there is nothing special about the index K, it is clear that the result obtained there can be applied for K = 9 as well, and we get

J_9 = a f′ V J̃_9. (3.61)
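As an overall quantitative check of identifying the kink with a D(p-1)-brane, in Sen's construction the localised stress energy reproduces the lower-brane tension through the a-independent integral τ_{p−1} = ∫ dT V(T) over the kink profile. With the sech potential V(T) = τ_p / cosh(T/√2) (an assumption; this excerpt does not fix V) the integral equals √2 π τ_p, the expected ratio between the BPS D(p-1) and non-BPS Dp tensions. A numerical sketch:

```python
import math

tau_p = 1.0  # non-BPS Dp tension, set to 1 for illustration

def V(T):
    # sech potential with V(0) = tau_p; switch to the asymptotic form in the
    # far tails to avoid math.cosh overflow for very large arguments
    u = abs(T) / math.sqrt(2.0)
    return tau_p * (2.0 * math.exp(-u) if u > 700.0 else 1.0 / math.cosh(u))

def kink_integral(a, fp=1.0, L=60.0, N=100000):
    # trapezoid rule for the integral over the kink direction of a f'(au) V(f(au))
    # with the profile f(u) = fp * u; substituting T = fp * a * u shows the
    # result should be independent of a (a delta-function-like localisation)
    h = 2.0 * L / N
    s = 0.0
    for i in range(N + 1):
        u = -L + i * h
        w = 0.5 if i in (0, N) else 1.0
        s += w * a * fp * V(a * fp * u)
    return s * h

exact = math.sqrt(2.0) * math.pi * tau_p  # closed form of the integral of V(T) dT
print(kink_integral(1.0), kink_integral(50.0), exact)
```

The same a-independence is what makes the stress-energy tensor (2.53) reduce to that of a D(p-1)-brane regardless of the regulator a.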